\section{Introduction} \noindent The vertical dynamics of a free-falling ball on a moving racket is considered. The racket is supposed to move periodically in the vertical direction according to a regular periodic function $f(t)$ and the ball is reflected according to the law of elastic bouncing when hitting the racket. The only force acting on the ball is gravity $g$. Moreover, the mass of the racket is assumed to be large with respect to the mass of the ball, so that the impacts do not affect the motion of the racket. \noindent This model has inspired many authors since it is a simple model exhibiting complex dynamics, depending on the properties of the function $f$. The first results were given by Pustyl'nikov in \cite{pust}, who studied the possibility of having motions with velocity tending to infinity, for $\df$ large enough. On the other hand, KAM theory implies that if the $C^k$ norm, for $k$ large, of $\df$ is small then all motions are bounded. Along these lines, some recent results are given in \cite{ma_xu,maro4,maro6}. Bounded motions can be regular (periodic and quasiperiodic, see \cite{maro3}) or chaotic (see \cite{maro5,maro2,ruiztorres}). Moreover, the non-periodic case is studied in \cite{kunzeortega2,ortegakunze}, the case of different potentials is considered in \cite{dolgo} and recent results on ergodic properties are presented in \cite{studolgo}. In this paper we are concerned with $(p,q)$-periodic motions, understood as $p$-periodic motions with $q$ bounces in each period. Here $p,q$ are supposed to be positive coprime integers. In \cite{maro3} it is proved that if $p/q$ is sufficiently large, then there exists at least one $(p,q)$-periodic motion. This result comes from an application of Aubry-Mather theory as presented in \cite{bangert}. Actually, the bouncing motions correspond to the orbits of an exact symplectic twist map of the cylinder.
The orbits of such maps can be found as critical points of an action functional, and the $(p,q)$-periodic orbits found in \cite{maro3} correspond to minima. Here we first note that a refined version of Aubry-Mather theory (see \cite{katok_hass}) gives, for each pair of coprime $p,q$ such that $p/q$ is large, the existence of another $(p,q)$-periodic orbit that is not a minimum, since it is found via a minimax argument. This gives the existence of two different $(p,q)$-periodic motions for fixed values of $p,q$, with $p/q$ large. We are first interested in the stability in the sense of Lyapunov of such periodic motions. This is related to the structure of the $(p,q)$-periodic orbits of the corresponding exact symplectic twist map, for fixed $p,q$. It follows from Aubry-Mather theory that the $(p,q)$-periodic orbits that are minima can be ordered, and if there are two with no other in between, then they are connected by heteroclinic orbits. In this case they are unstable. On the other hand, $(p,q)$-periodic orbits can form an invariant curve. In this case they are all minima, but their stability cannot be determined as before since we are in a degenerate scenario. However, if we restrict to the real analytic case, a topological argument (see \cite{ortega_fp,ortega_book}) can be used to deduce instability. More precisely, we will use the fact that for a real analytic area and orientation preserving embedding of the plane that is not the identity, every stable fixed point is isolated. This is where the hypothesis of $f$ being real analytic comes into play. Concerning the structure of the set of $(p,q)$-periodic motions, we prove that in the real analytic case they can only be either isolated or degenerate, in the sense that the corresponding orbits form an invariant curve that is the graph of a real analytic function. As before, in the isolated case at least one is unstable, and in the degenerate case they are all minima and unstable.
Note that this result differs from Aubry-Mather theory since we are not requiring the orbits to be minima of a functional. To prove this result we need the $q$-th iterate of the map to be twist. For $q=1$ this is true for every real analytic $f$, while for the general case $q>1$ we need $\norm{\ddf}$ to be small. The paper is organized as follows. In Section \ref{sec:theory} we recall some known facts about exact symplectic twist maps together with the results for the analytic case. In Section \ref{sec:tennis} we introduce the bouncing ball map and describe its main properties. Finally, the results on the existence of two $(p,q)$-periodic motions, the instability and the structure of the set are given in Section \ref{sec:per}. \section{Some results on periodic orbits of exact symplectic twist maps}\label{sec:theory} Let us denote by $\Sigma=\RR\times(a,b)$ with $-\infty\leq a<b\leq +\infty$ a possibly unbounded strip of $\RR^2$. We will deal with $C^k$ ($k\geq 1$) or real analytic embeddings $\tilde{S}:\Sigma \rightarrow \RR^2$ such that \begin{equation}\label{def_cyl} \tilde{S}\circ\sigma=\sigma\circ \tilde{S} \end{equation} where $\sigma:\RR^2\rightarrow\RR^2$ and $\sigma(x,y)=(x+1,y)$. By this latter property, $\tilde{S}$ can be seen as the lift of an embedding $S:\Sigma\rightarrow\AA$ where $\AA = \TT\times \RR$ with $\TT = \RR/\ZZ$ and $\Sigma$ is now understood as the corresponding strip of the cylinder. We denote $\tilde{S}(x,y)=(\bar{x},\bar{y})$ and the corresponding orbit by $(x_n,y_n)_{n\in\ZZ}$. We say that $\tilde{S}$ is exact symplectic if there exists a $C^1$ function $V:\Sigma\rightarrow \RR$ such that $V\circ\sigma=V$ and \[ \bar{y} d \bar{x} -y dx = dV(x,y) \quad\mbox{in }\Sigma. \] Moreover, by the (positive) twist condition we understand \[ \frac{\partial \bar{x}}{\partial y} >0 \quad\mbox{in }\Sigma. \] A negative twist condition would give analogous results.
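\begin{remark} As a simple illustration of these definitions, consider the lift $\tilde{S}_0(x,y)=(x+y,y)$ of the so-called integrable twist map. Then \[ \bar{y}d\bar{x}-ydx = y(dx+dy)-ydx = ydy = d\left(\frac{y^2}{2}\right), \] so that $\tilde{S}_0$ is exact symplectic with $V(x,y)=\frac{y^2}{2}$, and it satisfies the positive twist condition since $\partial\bar{x}/\partial y = 1$. \end{remark}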
The exact symplectic condition implies that $\tilde{S}$ preserves the two-form $dy\wedge dx$ so that it is area and orientation preserving. An equivalent characterization is the existence of a generating function, i.e. a $C^2$ function $h:\Omega\subset \RR^2\rightarrow \RR$ such that $h(x+1,\bar{x}+1) = h(x,\bar{x})$ and $h_{12}(x,\bar{x}) <0$ in $\Omega$ and for $(x,y)\in\Sigma$ we have $\tilde{S}(x,y) = (\bar{x},\bar{y})$ if and only if \[ \left\{ \begin{split} h_1(x,\bar{x})&=-y \\ h_2(x,\bar{x})&=\bar{y}. \end{split} \right. \] Moreover, $\tilde{S}$ preserves the ends of the cylinder if, uniformly in $x$, \[ \bar{y}(x,y) \rightarrow \pm\infty \quad\mbox{as }y\rightarrow \pm\infty \] and twists each end infinitely if, uniformly in $x$, \[ \bar{x}(x,y)-x \rightarrow \pm\infty \quad\mbox{as }y\rightarrow \pm\infty. \] Finally, we will say that an embedding of the cylinder $S:\Sigma\rightarrow\AA$ satisfies any of these properties if so does its corresponding lift. These maps enjoy several properties and many interesting orbits are proved to exist. Here we recall some results concerning periodic orbits. We start with the following \begin{definition} Fix two coprime integers $p,q$ with $q\neq 0$. An orbit $(x_n,y_n)_{n\in\ZZ}$ of $\tilde{S}$ is said to be $(p,q)$-periodic if $(x_{n+q},y_{n+q})=(x_n+p,y_n)$ for every $n\in\ZZ$. Moreover, we say that it is stable (in the sense of Lyapunov) if for every $\varepsilon>0$ there exists $\delta>0$ such that for every $(\hat{x}_0,\hat{y}_0)$ satisfying $|(x_0,y_0)-(\hat{x}_0,\hat{y}_0)|<\delta$ we have $|\tilde{S}^n(\hat{x}_0,\hat{y}_0)-({x}_n,{y}_n)|<\varepsilon$ for every $n\in\ZZ$. \end{definition} \begin{remark}\label{rem_periodic} Note that $(p,q)$-periodic orbits correspond to fixed points of the map $\sigma^{-p}\circ \tilde{S}^q$. This follows from the fact that $\tilde{S}$ is a diffeomorphism defined on the cylinder.
Each point of the orbit is a fixed point of $\sigma^{-p}\circ\tilde{S}^q$ and a fixed point of $\sigma^{-p}\circ\tilde{S}^q$ is the initial condition of a $(p,q)$-periodic orbit. Note that different fixed points may correspond to the same orbit but not vice versa. Moreover, if an orbit is $(p,q)$-periodic then it cannot also be $(p',q')$-periodic unless $p/q=p'/q'$. Actually, let $\xi=(x,y)$ and suppose that $\xi=\sigma^{-p}\circ\tilde{S}^q(\xi)=\sigma^{-p'}\circ \tilde{S}^{q'}(\xi)$. Then $\sigma^{p}(\xi)=\tilde{S}^q(\xi)$ and $\sigma^{p'}(\xi)=\tilde{S}^{q'}(\xi)$ from which $\tilde{S}^{qq'}(\xi)=\sigma^{pq'}(\xi)=\sigma^{p'q}(\xi)$ and $pq'=p'q$. Finally, the stability of a $(p,q)$-periodic orbit corresponds to the stability in the sense of Lyapunov of the corresponding fixed point of the map $\sigma^{-p}\circ\tilde{S}^q$. \end{remark} A particular class of periodic orbits are the so-called Birkhoff periodic orbits. \begin{definition} Fix two coprime integers $p,q$ with $q\neq 0$. An orbit $(x_n,y_n)_{n\in\ZZ}$ of $\tilde{S}$ is said to be a Birkhoff $(p,q)$-periodic orbit if there exists a sequence $(s_n,u_n)_{n\in\ZZ}$ such that \begin{itemize} \item $(s_0,u_0)=(x_0,y_0)$ \item $s_{n+1}>s_n$ \item $(s_{n+q},u_{n+q})=(s_{n}+1,u_{n})$ \item $(s_{n+p},u_{n+p})=\tilde{S}(s_{n},u_{n})$ \end{itemize} \end{definition} \begin{remark} Note that a Birkhoff $(p,q)$-periodic orbit is a $(p,q)$-periodic orbit since \[ (x_{n+q},y_{n+q})=(s_{np+qp},u_{np+qp})=(s_{np}+p,u_{np})=\tilde{S}^n(s_0,u_0)+(p,0)=(x_{n}+p,y_{n}). \] \end{remark} The existence of Birkhoff $(p,q)$-periodic orbits comes from Aubry-Mather theory. Here we give a related result \cite[Theorem 9.3.7]{katok_hass}. \begin{theorem}\label{birk_orbits} Let $S:\AA\rightarrow\AA$ be an exact symplectic twist diffeomorphism that preserves and twists the ends infinitely and let $p,q$ be two coprime integers. Then there exist at least two Birkhoff $(p,q)$-periodic orbits for $S$.
\end{theorem} \begin{remark} The Theorem is proved via variational techniques. The periodic orbits correspond to critical points of an action defined in terms of the generating function. One of these points is a minimum and the other is a minimax if the critical points are isolated. \end{remark} In the analytic case, something more can be said about the topology of these orbits. \begin{proposition}\label{unstable} Let $\tilde{S}:\Sigma\rightarrow\RR^2$ be a real analytic exact symplectic twist embedding satisfying condition \eqref{def_cyl} and admitting a $(p,q)$-periodic orbit. Then there exists at least one $(p,q)$-periodic orbit that is unstable. \end{proposition} \begin{proof} The proof is essentially given in \cite{maroTMNA,ortega_fp,ortega_book}. We give here a sketch. It is enough to prove that there exists at least one unstable fixed point of the area and orientation preserving one-to-one real analytic map $\sigma^{-p}\circ \tilde{S}^q$. Let us first note that since $\tilde{S}$ is twist, the map $\sigma^{-p}\circ \tilde{S}^q$ is not the identity. Actually, it is known (see for example \cite{herman}) that if $\tilde{S}$ is twist, then the image of a vertical line under $\tilde{S}^q$ is a positive path, i.e. a curve such that the angle between the tangent vector and the vertical is always positive. This implies that $\tilde{S}^q$ cannot be a horizontal translation.\\ By hypothesis the set of fixed points of $\sigma^{-p}\circ \tilde{S}^q$ is not empty, so that applying \cite[Chapter 4.9, Theorem 15]{ortega_book} we deduce that every stable fixed point is an isolated fixed point.\\ Hence, if there exists some non-isolated fixed point, it must be unstable. Finally, suppose that we only have isolated fixed points that are all stable. From \cite[Chapter 4.5, Theorem 12]{ortega_book} they must all have index $1$. On the other hand, the Euler characteristic of the cylinder is zero, and by the Poincar\'e-Hopf index formula we have a contradiction.
Hence, there must exist at least one fixed point that is unstable. \end{proof} \begin{corollary} Under the conditions of Theorem \ref{birk_orbits}, if $S$ is real analytic then there exists at least one unstable $(p,q)$-periodic orbit. \end{corollary} In the analytic case, the twist condition gives information on the structure of the set of $(p,q)$-periodic orbits. Actually, the following result has been proved in \cite{maroTMNA,ortega_pb} \begin{theorem}\label{maro_teo} Consider a $C^1$-embedding $\tilde{S}:\Sigma\rightarrow\RR^2$ satisfying property \eqref{def_cyl} and suppose it is exact symplectic and twist. Fix a positive integer $p$ and assume that for every $x\in\RR$ there exists $y\in (a,b)$ such that \begin{equation}\label{maro_cond} \bar{x}(x,a)<x+p<\bar{x}(x,y). \end{equation} Then the map $\sigma^{-p}\circ\tilde{S}$ has at least two fixed points in $[0,1)\times (a,b)$. Moreover, if $\tilde{S}$ is real analytic then the set of fixed points is finite or the graph of a real analytic $1$-periodic function. In the first case the index of such fixed points is either $-1$, $0$, or $1$ and at least one is unstable. In the second case, all the fixed points are unstable. \end{theorem} \begin{remark} Aubry-Mather theory gives a description, for fixed $p,q$, of those $(p,q)$-periodic orbits that are global minimizers. They can be ordered, and if two of them are neighbouring, in the sense that there is no other minimal $(p,q)$-periodic orbit in between, then there are heteroclinic connections between them (\cite{bangert,katok_hass}). In this case, the $(p,q)$-periodic orbits are unstable. On the other hand, they can form an invariant curve. We stress that in the analytic case Theorem \ref{maro_teo} gives the description of the set of all $(p,q)$-periodic orbits, not only those that are action minimizing. \end{remark} \section{The bouncing ball map and its properties}\label{sec:tennis} Consider the motion of a bouncing ball on a vertically moving racket.
We assume that the impacts do not affect the racket, whose vertical position is described by a $1$-periodic $C^k$, $k\geq 2$, or real analytic function $f:\RR\rightarrow\RR$. Let us start by deriving the equations of motion, following \cite{maro5}. In an inertial frame, denote by $(\to,w)$ the time of impact and the corresponding velocity just after the bounce, and by $(\Pto,\bar w)$ the corresponding values at the subsequent bounce. From the free falling condition we have \begin{equation}\label{timeeq} f(t) + w(\Pto-\to) - \frac{g}{2}(\Pto -\to)^2 = f(\Pto) \,, \end{equation} where $g$ stands for the standard acceleration due to gravity. Noting that the velocity just before the impact at time $\Pto$ is $w-g(\Pto-\to)$, using the elastic impact condition and recalling that the racket is not affected by the ball, we obtain \begin{equation} \label{veleq} \bar{w}+w-g(\Pto-\to) = 2\dot{f}(\Pto)\,, \end{equation} where $\dot{}$ stands for the derivative with respect to time. From conditions (\ref{timeeq}-\ref{veleq}) we can define a bouncing motion given an initial condition $(t,w)$ in the following way. If $w\leq\df(t)$ then we set $\bar{t}=t$ and $\bar{w}=w$. If $w>\df(t)$, we claim that we can choose $\bar{t}$ to be the smallest solution $\bar{t}> t$ of \eqref{timeeq}. Bolzano's theorem gives the existence of a solution of \eqref{timeeq} considering \[ F_t(\bar{t})=f(t)-f(\Pto) + w(\Pto-\to) - \frac{g}{2}(\Pto -\to)^2 \] and noting that $F_t(\bar{t})<0$ for $\Pto-\to$ large and $F_t(\bar{t})>0$ as $\Pto-\to\rightarrow 0^+$. Moreover, the infimum $\bar{t}$ of all these solutions is strictly larger than $t$ since if there exists a sequence $\bar{t}_n\rightarrow t$ satisfying \eqref{timeeq} then \[ w - \frac{g}{2}(\Pto_n -\to) = (f(\Pto_n)- f(t)) /(\Pto_n-\to), \] which contradicts $w>\df(t)$ using the mean value theorem. For this value of $\bar{t}$, condition \eqref{veleq} gives the updated velocity $\bar{w}$.
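The construction above translates directly into a numerical scheme: starting just after $t$, step forward until $F_t$ changes sign and then bisect to locate the first impact time. The following minimal sketch is only an illustration of the argument, assuming for concreteness $f(t)=0.05\sin(2\pi t)$ and $g=9.8$ (both values are not part of the analysis):

```python
import math

g = 9.8                                  # gravity (assumed value)
eps = 0.05                               # racket amplitude (assumed example)

def f(t):                                # racket height f(t)
    return eps * math.sin(2 * math.pi * t)

def df(t):                               # racket velocity f'(t)
    return 2 * math.pi * eps * math.cos(2 * math.pi * t)

def next_impact(t, w, step=1e-3, tol=1e-12):
    """Return the next impact time tbar (smallest solution of the impact
    equation) and the outgoing velocity wbar from the reflection law."""
    def F(tb):                           # height of the ball over the racket
        return f(t) + w * (tb - t) - 0.5 * g * (tb - t) ** 2 - f(tb)
    a = t + step
    while F(a) > 0:                      # F > 0 just after t, F < 0 for tbar large
        a += step
    lo, hi = a - step, a                 # bracket of the first sign change
    while hi - lo > tol:                 # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) > 0 else (lo, mid)
    tbar = 0.5 * (lo + hi)
    wbar = 2 * df(tbar) + g * (tbar - t) - w   # wbar + w - g(tbar-t) = 2 f'(tbar)
    return tbar, wbar
```

Note that for $f\equiv 0$ equations \eqref{timeeq}-\eqref{veleq} reduce to $\bar{t}=t+2w/g$ and $\bar{w}=w$, the bouncing on a racket at rest.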
For $\Pto-\to>0$, we introduce the notation \[ f[\to,\Pto]=\frac{f(\Pto)-f(\to)}{\Pto-\to}, \] and write \begin{equation} \label{w1} \Pto = \to + \frac 2g w -\frac 2g f[\to,\Pto]\,, \end{equation} that also gives \begin{equation} \label{w2} \bar{w}= w -2f [\to,\Pto] + 2\dot{f}(\Pto). \end{equation} Now we change to the moving frame attached to the racket, where the velocity after the impact is expressed as $v=w-\dot{f}(t)$, and we get the equations \begin{equation}\label{eq:unb} \left\{ \begin{split} \Pto = {} & \to + \frac 2g \vo-\frac 2g f[\to,\Pto]+\frac 2g \df(\to) \\ \Pvo = {} & \vo - 2f[\to,\Pto] + \df (\Pto)+\df(\to). \end{split} \right. \end{equation} By the periodicity of the function $f$, the coordinate $t$ can be seen as an angle. Hence, equations \eqref{eq:unb} formally define a map \[ \begin{array}{rcl} \tilde\Psi: \RR^2 & \longrightarrow & \RR^2 \\ (\to,\vo) & \longmapsto & (\Pto, \Pvo), \end{array} \] satisfying $\tilde\Psi\circ\sigma=\sigma\circ\tilde\Psi$ and the associated map of the cylinder $\Psi:\AA\rightarrow\AA$. This is the formulation considered by Kunze and Ortega \cite{kunzeortega2}. Another approach was considered by Pustyl'nikov in \cite{pust} and leads to a map that is equivalent to \eqref{eq:unb}, see \cite{maro3}. Noting that $w>\df(t)$ if and only if $v>0$, we can define a bouncing motion as before and denote it as a sequence $(t_n,v_n)_{n\in\ZZ^+}$ with $\ZZ^+=\{n\in\ZZ \::\: n\geq 0\}$ such that $(t_n,v_n)\in \TT\times [0,+\infty)$ for every $n\in\ZZ^+$. The map $\Psi$ and its lift $\tilde{\Psi}$ are so far only defined formally. In the following lemma we state that they are well defined and have some regularity. Let us introduce the notation $\RR_{v_*}=\{v\in\RR \: :\: v>v_* \}$, $\AA_{v_*} = \TT\times \RR_{v_*}$ and $\RR^2_{v_*}=\RR\times\RR_{v_*}$. We will denote the $\sup$ norm by $\norm{\cdot}$ and recall that $f\in C^k(\TT)$, $k\geq 2$, or real analytic.
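Numerically, the implicit system \eqref{eq:unb} can be solved by a fixed-point iteration on $\Pto$, which is a contraction for $v$ large since $|\partial_{\Pto}f[\to,\Pto]|$ is then small. The sketch below again assumes $f(t)=0.05\sin(2\pi t)$ and $g=9.8$ for concreteness, and illustrates the equivariance $\tilde\Psi\circ\sigma=\sigma\circ\tilde\Psi$:

```python
import math

g = 9.8                                  # gravity (assumed value)
eps = 0.05                               # racket amplitude (assumed example)

def f(t):  return eps * math.sin(2 * math.pi * t)
def df(t): return 2 * math.pi * eps * math.cos(2 * math.pi * t)

def Psi(t, v, iters=60):
    """One step of the bouncing ball map (eq:unb) in the moving-frame
    coordinates (t, v), solving the implicit equation for tbar by
    fixed-point iteration started from the free-flight guess."""
    tb = t + 2 * v / g                   # guess ignoring the racket motion
    for _ in range(iters):
        slope = (f(tb) - f(t)) / (tb - t)        # f[t, tbar]
        tb = t + (2 / g) * (v - slope + df(t))
    slope = (f(tb) - f(t)) / (tb - t)
    vb = v - 2 * slope + df(tb) + df(t)
    return tb, vb
```

Since $f$ is $1$-periodic, $\Psi(t+1,v)=\Psi(t,v)+(1,0)$ up to round-off, which is exactly property \eqref{def_cyl} for this map; for $v$ below the threshold the fixed-point iteration may fail to select the first impact, consistently with the restriction to $\AA_{v_*}$ below.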
\begin{lemma} \label{well_def} There exists $v_*>4\norm{\df}$ such that the map $\Psi:\AA_{v_*}\rightarrow \AA$ is a $C^{k-1}$ embedding. If $f$ is real analytic, then $\Psi$ is a real analytic embedding. \end{lemma} \begin{proof} The proof is essentially given in \cite{maro5}. We give here a sketch. To prove that the map is well defined and $C^{k-1}$ we denote $v_{**}=4\norm{\df}$ and apply the implicit function theorem to the $C^{k-1}$ function $F :\{(\to,\vo,\Pto,\Pvo)\in \AA_{v_{**}} \times \RR^2 \: :\:\to\neq\Pto \} \rightarrow \RR^2$ given by \begin{equation*} F(\to,\vo,\Pto,\Pvo):= \left( \begin{split} & \Pto - \to - \frac 2g \vo + \frac 2g f[\to,\Pto]-\frac 2g \df (\to) \\ & \Pvo - \vo + 2f[\to,\Pto] - \df(\Pto)-\df(\to) \end{split} \right). \end{equation*} This gives the existence of a $C^{k-1}$ map $\Psi$ defined in $\AA_{v_{**}}$ such that $F(\to,\vo,\Psi(\to,\vo) )=0$. If $f$ is real analytic, we get that $\Psi$ is real analytic applying the analytic version of the implicit function theorem. \\ One can easily check that $\Psi$ is a local diffeomorphism since \[ \det \Dif_{\to,\vo} \Psi (t,v) =-\frac{\det (\Dif_{\to,\vo} F(\to,\vo,\Psi(\to,\vo) ))}{ \det (\Dif_{\Pto,\Pvo} F(\to,\vo,\Psi(\to,\vo) ))} \neq 0 \quad\mbox{on }\AA_{v_{**}}. \] To prove that $\Psi$ is a global embedding we need to prove that it is injective in $\AA_{v_*}$ for some $v_*$ possibly larger than $v_{**}$. This can be done as in \cite{maro5}. \end{proof} \begin{remark} Note that we cannot guarantee that if $(\to_0,\vo_0) \in \AA_{v_*}$ then $\Psi(\to_0,\vo_0) \in \AA_{v_*}$. This is reasonable, since the ball can slow down, decreasing its velocity at every bounce. However, a bouncing motion is defined for $v\geq 0$. \end{remark} \begin{remark} From the physical point of view, the condition $\Psi^n(\to_0,\vo_0)\in\AA_{v_*}$ for every $n$ implies that we can only hit the ball when it is falling. To prove it, suppose that $\to_0 =0$ and let us see what happens at the first iterate.
The time at which the ball reaches its maximum height is $t^{max}=\frac{\vo_0}{g}$. On the other hand, the first impact time $\Pto$ satisfies \[ \Pto \geq \frac{2}{g}\vo_0 - \frac{4}{g}\norm{\df}= t^{max}\left( 2-\frac{4}{\vo_0}\norm{\df} \right) > t^{max}, \] where the last inequality comes from $\vo_0\in\RR_{v_*}$ and $v_*>4\norm{\df}$. \end{remark} The map $\tilde\Psi$ is exact symplectic if we pass to the time-energy variables $(\to,\Eo)$ defined by \[ (\to,\Eo) = \left(\to,\frac{1}{2}\vo^2\right), \] obtaining the conjugated map \[ \Phi :\AA_{\Eo_*} \longrightarrow \AA, \qquad \Eo_*=\frac{1}{2}v_*^2 \] defined by \begin{equation}\label{eq:unbe} \left\{ \begin{split} \Pto = & \to + \frac 2g \sqrt{2\Eo}-\frac 2g f[\to,\Pto]+\frac 2g \df(\to) \\ \PEo = & \frac{1}{2}\left( \sqrt{2\Eo} - 2f[\to,\Pto] + \df (\Pto)+\df(\to) \right)^2, \end{split} \right. \end{equation} that by Lemma \ref{well_def} is a $C^{k-1}$-embedding, or real analytic if $f$ is real analytic. More precisely, we have the following \begin{lemma} \label{lemma:exact} The map $\Phi$ is exact symplectic and twist in $\AA_{e_*}$. Moreover, $\Phi$ preserves and twists infinitely the upper end. \end{lemma} The map $\Phi$ is not defined in the whole cylinder. However, it is possible to extend it to the whole cylinder preserving its properties. More precisely: \begin{lemma} \label{extension} There exists a $C^{k-2}$ exact symplectic and twist diffeomorphism $\bar{\Phi}:\AA\rightarrow\AA$ such that $\bar{\Phi} \equiv \Phi$ on $\AA_{e_*}$ and $\bar{\Phi}\equiv\Phi_0$ on $\AA\setminus\AA_{\frac{e_*}{2}}$ where $\Phi_0$ is the integrable twist map $\Phi_0(t,e)=(t+e,e)$. Moreover, $\bar{\Phi}$ preserves the ends of the cylinder and twists them infinitely. If $f$ is real analytic, then the extension $\bar{\Phi}$ is $C^\infty$.
\end{lemma} Due to Lemma \ref{well_def} and the fact that the maps $\Phi$ and $\Psi$ are conjugated, we can consider the lift $\tilde{\Phi}:\RR^2_{e_*}\rightarrow\RR^2$ and give the following \begin{definition} A complete bouncing motion $(t_n,e_n)_{n\in\ZZ}$ is a complete orbit of the map $\tilde{\Phi}$. \end{definition} In the following section we will study the existence and properties of periodic complete bouncing motions as orbits of the map $\tilde{\Phi}$ defined in \eqref{eq:unbe}. \section{Periodic bouncing motions}\label{sec:per} The existence of periodic complete bouncing motions follows from an application of Aubry-Mather theory. In this section we prove it and, in the analytic case, we give some results on the structure of such motions and their stability. We say that a complete bouncing motion is $(p,q)$-periodic if in time $p$ the ball makes $q$ bounces before repeating the motion; more precisely: \begin{definition} Given two coprime integers $p,q\in\ZZ^+$, a complete bouncing motion $(t_n,e_n)_{n\in\ZZ}$ is $(p,q)$-periodic if the corresponding orbit of $\tilde{\Phi}$ is $(p,q)$-periodic. Moreover, we say that it is stable if the corresponding orbit is stable. \end{definition} The existence of two $(p,q)$-periodic complete bouncing motions comes from an application of Theorem \ref{birk_orbits}. \begin{theorem}\label{pre_bouncing} For every $f\in C^3$ there exists $\alpha>0$ such that for every positive coprime $p,q$ satisfying $p/q>\alpha$ there exist two different $(p,q)$-periodic complete bouncing motions. Moreover, if $f$ is real analytic, then at least one of the $(p,q)$-periodic complete bouncing motions is unstable. \end{theorem} \begin{proof} By Lemma \ref{lemma:exact}, the map $\Phi$ defined in \eqref{eq:unbe} is a $C^2$ exact symplectic twist embedding in $\AA_{e_*}$ for some large $e_*$ depending on $\norm{\df}$. Moreover, $\Phi$ preserves and twists infinitely the upper end.
Its extension $\bar{\Phi}$ coming from Lemma \ref{extension} satisfies the hypotheses of Theorem \ref{birk_orbits} and admits, for every coprime $p,q$, two Birkhoff $(p,q)$-periodic orbits. Consider the Birkhoff $(p,q)$-periodic orbits for $p,q$ positive and $p/q$ large enough such that \begin{equation}\label{choice_pq} \frac{p}{q}-1-\frac{4}{g}\norm\df>\frac{2}{g}\sqrt{2e_*}. \end{equation} Since Birkhoff periodic orbits are cyclically ordered, from \cite[Lemma 9.1]{gole} we have that they satisfy the estimate $t_{n+1}-t_n>p/q - 1$. On the other hand, from \eqref{eq:unbe}, \[ t_{n+1}-t_n\leq\frac{2}{g}\sqrt{2e_n}+\frac{4}{g}\norm\df \] so that necessarily \[ \frac{2}{g}\sqrt{2e_n}+\frac{4}{g}\norm\df>\frac{p}{q}-1. \] By the choice of $p/q$ in \eqref{choice_pq} we have that $e_n>e_*$ for every $n\in\ZZ$, so that these Birkhoff periodic orbits are all contained in $\AA_{e_*}$ and hence are orbits of the original map $\Phi$. If $f$ is real analytic, the instability result follows from Proposition \ref{unstable}. \end{proof} Theorem \ref{pre_bouncing} gives the existence of $(p,q)$-periodic bouncing motions but does not give information on the topological structure of the set of $(p,q)$-periodic bouncing motions for fixed values of $(p,q)$. This is a complicated issue and some results come from Aubry-Mather theory. However, here we will see which results can be obtained using Theorem \ref{maro_teo}. To state this result we give the following \begin{definition} We say that the set of $(p,q)$-periodic complete bouncing motions is (analytically) degenerate if there exists a real analytic curve $(t(s),e(s))$ such that $(t(s+1),e(s+1))=(t(s)+1,e(s))$ for every $s\in\RR$, the function $t(s)$ is bijective for $s\in[0,1)$ and $(t_n,e_n)_{n\in\ZZ}$ is a $(p,q)$-periodic complete bouncing motion if and only if there exist $n_0,s_0$ such that $(t_{n_0},e_{n_0})=(t(s_0),e(s_0))$. \end{definition} The following result is a fairly direct consequence of Lemma \ref{lemma:exact}.
\begin{proposition} If $f$ is real analytic, then there exists $\alpha>0$ such that for every $p>\alpha$ the set of $(p,1)$-periodic complete bouncing motions is either finite or degenerate. In the first case at least one $(p,1)$-periodic complete bouncing motion is unstable. In the degenerate case, all $(p,1)$-periodic complete bouncing motions are unstable. \end{proposition} \begin{proof} By Lemma \ref{lemma:exact} the map $\tilde{\Phi}$ is exact symplectic and twist on $\RR^2_{e_*}$. Moreover, let us choose $a$ such that $e_*<\sqrt{2a}$ and $p$ such that \[ \frac{gp}{2}-2\norm{\dot{f}}>\sqrt{2a}. \] Let us start with the following estimates for the lift $\tilde{\Phi}$ that can be easily proved by induction on $n$ from \eqref{eq:unbe}: \begin{equation}\label{stim_t_e} |\sqrt{2e_n}-\sqrt{2e}|\leq 4n\norm{\dot{f}}, \qquad \left|t_n-t-\frac{2}{g}n\sqrt{2e}\right|\leq 4n^2\frac{\norm{\dot{f}}}{g}. \end{equation} These give \[ \bar{t}(t,a)-t\leq \frac{2}{g}\sqrt{2a}+4\frac{\norm{\dot{f}}}{g}<p. \] On the other hand, Lemma \ref{lemma:exact} also gives that $\tilde{\Phi}$ twists the upper end infinitely, i.e. $\lim_{e\rightarrow +\infty}\bar{t}(t,e)-t=+\infty$ uniformly in $t$. Hence, condition \eqref{maro_cond} is satisfied in the strip $\Sigma=\RR\times(a,+\infty)$ for every such $p$. The conclusion comes from the application of Theorem \ref{maro_teo} and the fact that $(p,1)$-periodic complete bouncing motions correspond to the fixed points of the map $\sigma^{-p}\circ\tilde{\Phi}$. \end{proof} This result does not trivially extend to $(p,q)$-periodic motions for $q>1$ since $\Phi^q$ needs to be exact symplectic and twist.
The twist condition is in general not preserved by composition, while exactness is, as shown in the following result, inspired by \cite{bosc_ort}. \begin{lemma}\label{lemma_isot_exact} For every $q>0$ there exists $e_\#\geq e_*$ such that for every $p>0$ the map $\sigma^{-p}\circ\tilde{\Phi}^q:\RR^2_{e_\#}\rightarrow\RR^2$ is exact symplectic. \end{lemma} \begin{proof} Since the map $\tilde{\Phi}$ is defined in $\RR^2_{e_*}$, the image $\tilde{\Phi}(\RR^2_{e_*})$ need not be contained in $\RR^2_{e_*}$, so that the iterate might not be defined. From \eqref{eq:unb} we have that $|\bar{v}-v|\leq 4\norm{\dot{f}}$, from which the map $\tilde{\Psi}^q$ is well defined in $\RR^2_{v_\#}$ with $v_\#=v_* + 4 q \norm{\dot{f}}$. Hence, passing to the variables $(t,e)$, the map $\tilde{\Phi}^q$ is defined in $\RR^2_{e_\#}$ with $e_\#=\frac{1}{2}v_\#^2$. Since $\Phi$ is exact symplectic, there exists $V:\RR^2_{e_\#}\rightarrow\RR$ such that, defining $\lambda=e dt$, we have $\tilde{\Phi}^*\lambda-\lambda=dV$. Hence, denoting $V_1=V+V\circ\tilde{\Phi}+\dots+V\circ\tilde{\Phi}^{q-1}$ it holds that $V_1\circ\sigma=V_1$ on $\RR^2_{e_\#}$ and \begin{align*} dV_1 &= dV+\tilde{\Phi}^*dV+\dots+(\tilde{\Phi}^{q-1})^*dV \\ & = \tilde{\Phi}^*\lambda-\lambda + (\tilde{\Phi}^2)^*\lambda-\tilde{\Phi}^*\lambda +\dots+(\tilde{\Phi}^q)^*\lambda-(\tilde{\Phi}^{q-1})^*\lambda \\ &= (\tilde{\Phi}^q)^*\lambda-\lambda \end{align*} from which $\tilde{\Phi}^q$ is exact symplectic. Finally, we conclude noting that by the definition of $\sigma^{-p}$, $(\sigma^{-p}\circ\tilde{\Phi}^q)^*\lambda = (\tilde{\Phi}^q)^*\lambda$. \end{proof} Concerning the twist condition, the following technical result holds. \begin{lemma}\label{twist_q} Let $f$ be $C^2$. For every $q\geq 1$ there exist $\epsilon_q>0$, $e^q>e_\#$, such that if $\norm{\ddf}<\epsilon_q$ then \[ \frac{\partial t_q}{\partial e}=\frac{2q}{g\sqrt{2e}}(1+\tilde{f}_q(t,e)) \] where $|\tilde{f}_q(t,e)|< 1/2$ on $\RR^2_{e^q}$.
\end{lemma} \begin{proof} To simplify the computation, let us perform the change of variables $y=\sqrt{2e}+\df(t)$ so that \eqref{eq:unbe} becomes \begin{equation}\label{eq:unby} \left\{ \begin{split} \Pto = & \to + \frac 2g y-\frac 2g f[\to,\Pto] \\ \bar{y} = & y - 2f[\to,\Pto] + 2\df (\Pto). \end{split} \right. \end{equation} Since $\partial t_q/\partial e=(\partial t_q/\partial y) (\partial y/\partial e)$ it is enough to prove that for every $q\geq 1$ there exist $\epsilon_q>0$ and $y^q$ large, such that if $\norm{\ddf}<\epsilon_q$ then \begin{equation}\label{new_th} \frac{\partial t_q}{\partial y}=\frac{2q}{g}(1+\tilde{f}_q(t,y)) \end{equation} where $|\tilde{f}_q(t,y)|< 1/2$ on $\RR^2_{y^q}$. Let us start with some estimates that hold for every $q\geq 1$. It comes from \eqref{eq:unby} that \begin{equation}\label{y_q} y_q=y+2\sum_{i=1}^{q}\df(t_i)-2\sum_{i=1}^{q}f[t_{i-1},t_{i}] \end{equation} so that \[ |y_q-y|\leq 4q\norm\df. \] Using it, \begin{equation}\label{t_q} |t_q-t_{q-1}|\geq \frac{2}{g}y-\frac{2}{g}(4q+1)\norm\df, \end{equation} from which there exist $y^q$ large enough and $C_q$ such that \begin{equation}\label{fqq-11} \left|\partial_{t_q}f[t_{q-1},t_q]\right|=\left|\frac{\dot{f}(t_q)-f[t_{q-1},t_q]}{t_q-t_{q-1}}\right|\leq \frac{g\norm{\dot{f}}}{y-(4q+2)\norm{\dot{f}}}<\frac{C_q}{y} \qquad \mbox{on } \RR^2_{y^q} \end{equation} and analogously \begin{equation}\label{fqq-12} \left|\partial_{t_{q-1}}f[t_{q-1},t_q]\right|<\frac{C_{q-1}}{y} \qquad \mbox{on } \RR^2_{y^{q-1}}. \end{equation} Now we can start the proof by induction on $q\geq 1$. For $q=1$ we have, differentiating the first equation in \eqref{eq:unby}, \begin{equation}\label{t_1_y} \frac{\partial t_1}{\partial y}\left(1+\frac{2}{g}\partial_{t_{1}}f[t,t_1]\right)=\frac{2}{g} \end{equation} from which, using \eqref{fqq-11}, we get the initial step, taking a suitably larger value of $y^1$.\\ For the inductive step, let us suppose \eqref{new_th} to be true for $i=1,\dots,q-1$.
By implicit differentiation, \begin{equation}\label{t_q_e} \frac{\partial t_q}{\partial y}\left(1+\frac{2}{g}\partial_{t_{q}}f[t_{q-1},t_q]\right)=\frac{2}{g}\frac{\partial y_{q-1}}{\partial y}+\frac{\partial t_{q-1}}{\partial y}\left(1-\frac{2}{g}\partial_{t_{q-1}}f[t_{q-1},t_q]\right). \end{equation} From \eqref{y_q} and the inductive hypothesis we have \begin{align*} \frac{\partial y_{q-1}}{\partial y}&=1+2\sum_{i=1}^{q-1}\left(\ddf(t_i) \frac{\partial t_i}{\partial y}-\partial_{t_{i}}f[t_{i-1},t_{i}]\frac{\partial t_i}{\partial y} -\partial_{t_{i-1}}f[t_{i-1},t_{i}]\frac{\partial t_{i-1}}{\partial y}\right)\\ &=1+2\sum_{i=1}^{q-1}\left(\left(\ddf(t_i)-\partial_{t_{i}}f[t_{i-1},t_{i}]\right)\frac{2i}{g}(1+\tilde{f}_i) -\partial_{t_{i-1}}f[t_{i-1},t_{i}]\frac{2(i-1)}{g}(1+\tilde{f}_{i-1})\right). \end{align*} Since by (\ref{fqq-11}-\ref{fqq-12}) for every $i$ $|\partial_{t_{i}}f[t_{i-1},t_{i}]|$ tends to zero uniformly as $y \rightarrow +\infty$ and $|\tilde{f}_i|<1/2$ for $y$ large, we can find new constants $C_{q-1}$ and $y^{q-1}$ such that on $\RR^2_{y^{q-1}}$, \begin{equation}\label{stim_y} \frac{\partial y_{q-1}}{\partial y} = 1 + \bar{f}_{q-1}(t,y) \qquad \mbox{with } |\bar{f}_{q-1}|\leq C_{q-1}\left(\norm\ddf+\frac{1}{y}\right). \end{equation} Using it and the inductive hypothesis in \eqref{t_q_e} we get \begin{align}\label{final} \frac{\partial t_q}{\partial y}\left(1+\frac{2}{g}\partial_{t_{q}}f[t_{q-1},t_q]\right)&=\frac{2}{g}(1 + \bar{f}_{q-1}(t,y))+\frac{2(q-1)}{g}(1+\tilde{f}_{q-1}(t,y))\left(1-\frac{2}{g}\partial_{t_{q-1}}f[t_{q-1},t_q]\right)\\ &=\frac{2q}{g}\left(1+ \tilde{f}_q(t,y)\right) \end{align} where \[ \tilde{f}_q(t,y)=\frac{1}{q}\bar{f}_{q-1}(t,y)+\frac{q-1}{q}\tilde{f}_{q-1}(t,y)-\frac{2(q-1)}{gq}\partial_{t_{q-1}}f[t_{q-1},t_q](1+\tilde{f}_{q-1}(t,y)).
\] Now (\ref{fqq-11}-\ref{fqq-12}) and \eqref{stim_y} imply \[ |\tilde{f}_q|<\frac{C_{q-1}}{q}\norm\ddf+\frac{q-1}{2q}+\frac{C'_{q-1}}{y} \] so that we can find $\epsilon_q$ and $y^q$ such that if $\norm\ddf<\epsilon_q$ then $|\tilde{f}_q|<\frac{1}{2}-\frac{1}{2q}$ on $\RR^2_{y^{q}}$. Plugging this into \eqref{final} and using again (\ref{fqq-11}-\ref{fqq-12}) we get the thesis, after possibly increasing $y^q$. \end{proof} This is used to prove the following result. \begin{proposition} Suppose that $f$ is real analytic. For every $q>0$ there exist $\alpha>0$ and $\epsilon_q>0$ such that if $p>\alpha$ and $\norm{\ddf}<\epsilon_q$ then the set of $(p,q)$-periodic complete bouncing motions is either finite or degenerate. In the first case at least one $(p,q)$-periodic complete bouncing motion is unstable. In the degenerate case, all $(p,q)$-periodic complete bouncing motions are unstable. \end{proposition} \begin{proof} We would like to apply Theorem \ref{maro_teo} to the map $\tilde{\Phi}^q$, noting that $(p,q)$-periodic bouncing motions correspond to fixed points of the map $\sigma^{-p}\circ\tilde{\Phi}^q$. Let us fix $q>0$. In Lemma \ref{lemma_isot_exact} we proved that $\tilde{\Phi}^q$ is exact symplectic in $\RR^2_{e_\#}$ for some $e_\#$ depending on $q$. Moreover, by Lemma \ref{twist_q}, there exist $\epsilon_q$ and $e^q>e_\#$ such that if $\norm{\ddf}<\epsilon_q$ then $\tilde{\Phi}^q$ is also twist on $\RR^2_{e^q}$. Now choose $p>0$ such that \[ \frac{gp}{2q}-2q\norm{\dot{f}}>e^q. \] Hence, there exist $b>a$ such that \[ e^q<\sqrt{2a}<\frac{gp}{2q}-2q\norm{\dot{f}}<\frac{gp}{2q}+2q\norm{\dot{f}}<\sqrt{2b}. \] This choice of $a,b$ gives condition \eqref{maro_cond} on the strip $\Sigma=\RR\times (a,b)$. Indeed, from \eqref{stim_t_e}, \begin{align*} t_q(t,b)-t & \geq\frac{2}{g}q\sqrt{2b}-4q^2\frac{\norm{\dot{f}}}{g}>p \\ t_q(t,a)-t & \leq\frac{2}{g}q\sqrt{2a}+4q^2\frac{\norm{\dot{f}}}{g}<p. \end{align*} This concludes the proof. \end{proof}
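As a sanity check on the twist estimate \eqref{new_th}, the following numerical sketch (not part of the paper) iterates the map \eqref{eq:unby} for a concrete racket motion and compares a finite-difference value of $\partial t_q/\partial y$ with $2q/g$. The choice $f(t)=\epsilon\sin t$, the value of $g$, and all numerical parameters are arbitrary illustrative assumptions.

```python
import math

g = 9.8
eps = 0.01
f = lambda t: eps * math.sin(t)        # racket motion: illustrative choice
fdot = lambda t: eps * math.cos(t)

def divdiff(a, b):
    # divided difference f[a, b] = (f(b) - f(a)) / (b - a)
    return (f(b) - f(a)) / (b - a)

def step(t, y):
    # solve the implicit first equation of the map by fixed-point iteration
    tb = t + 2 * y / g
    for _ in range(50):
        tb = t + 2 * y / g - 2 * divdiff(t, tb) / g
    yb = y - 2 * divdiff(t, tb) + 2 * fdot(tb)
    return tb, yb

def t_q(t, y, q):
    for _ in range(q):
        t, y = step(t, y)
    return t

q, y0, h = 3, 50.0, 1e-5
deriv = (t_q(0.0, y0 + h, q) - t_q(0.0, y0 - h, q)) / (2 * h)
print(deriv, 2 * q / g)
```

For small $\epsilon$ and large $y$ the relative deviation between the two printed values is of the order predicted by the lemma.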
\subsection{Deligne's category $\mathop{\underline{\smash{\mathrm{Rep}}}}(GL_t)$} From the perspective of the Killing-Cartan-Weyl classification of simple Lie algebras and their representation theory in terms of highest weights, root systems, Weyl groups and associated combinatorics, it is not so easy to understand the extreme uniformity in the representation theory that exists among different Lie groups. With possible application to a universal Chern-Simons type knot invariant in mind, P. Vogel \cite{Vog1999} tried to define a universal Lie algebra $\mathfrak{g}(\alpha:\beta:\gamma)$, depending on three {\em Vogel parameters} that determine a point $(\alpha:\beta:\gamma)$ in the {\em Vogel plane}, in which all simple Lie algebras find their place. The dimension of the Lie algebra $\mathfrak{g}(\alpha:\beta:\gamma)$ is given by a universal rational expression \begin{equation*} \dim \mathfrak{g}(\alpha:\beta:\gamma)\, = \, \frac{(\alpha-2t)(\beta-2t)(\gamma-2t)}{\alpha\beta\gamma},\qquad t=\alpha+\beta+\gamma , \end{equation*} and similar universal rational formulas can be given for the dimensions of irreducible constituents of $S^2\mathfrak{g}, S^3\mathfrak{g}$ and $S^4\mathfrak{g}$. Although the current status of Vogel's suggestions is unclear to us, these ideas have led to many interesting developments, such as the discovery of $E_{7\frac{1}{2}}$ by Landsberg and Manivel, \cite{LM2002}, \cite{LM2004}, \cite{LM2006}, \cite{LM2006a}. In order to interpolate within the classical $A,B,C,D$ series of Lie algebras, Deligne has defined $\otimes$-categories \[ \mathop{\underline{\smash{\mathrm{Rep}}}}(GL_t),\;\; \mathop{\underline{\smash{\mathrm{Rep}}}}(O_t), \] where $t$ is a parameter that can take on any complex value. (The category $\mathop{\underline{\smash{\mathrm{Rep}}}}(Sp_{2t})$ is usually not discussed as it can be expressed easily in terms of the category $\mathop{\underline{\smash{\mathrm{Rep}}}}(O_T)$ with $T=-2t$.)
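One can check symbolically that Vogel's dimension formula reproduces the classical dimensions. The parameter assignments below ($(-2:2:n)$ for $\mathfrak{sl}_n$, $(-2:4:n-4)$ for $\mathfrak{so}_n$) follow one common normalization; other conventions differ by an overall scaling, which the formula, being homogeneous of degree zero, does not see.

```python
import sympy as sp

a, b, c, n = sp.symbols('alpha beta gamma n')
t = a + b + c
dim = (a - 2*t) * (b - 2*t) * (c - 2*t) / (a * b * c)

# sl_n at (alpha:beta:gamma) = (-2:2:n)
dim_sl = sp.simplify(dim.subs({a: -2, b: 2, c: n}))
# so_n at (alpha:beta:gamma) = (-2:4:n-4)
dim_so = sp.simplify(dim.subs({a: -2, b: 4, c: n - 4}))

print(dim_sl)   # n**2 - 1, the dimension of sl_n
print(dim_so)   # n*(n - 1)/2, the dimension of so_n
```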
If $n$ is an integer, there are natural surjective functors \[\mathop{\underline{\smash{\mathrm{Rep}}}}(GL_n) \to \mathop{\mathrm{Rep}}(GL_n).\] In the tannakian setup one would attempt to reconstruct a group $G$ from its $\otimes$-category of representations $\mathop{\mathrm{Rep}}(G)$ using a fibre functor to the $\otimes$-category $Vect$ of vector spaces, but Deligne's category has no fibre functor and is not tannakian, or, in general, even abelian. (However, when $t$ is not an integer, the category \emph{is} abelian semisimple.) According to the axioms, in an arbitrary rigid $\otimes$-category $\mathcal{R}$ there exist a unit object~${\bf 1}$ and canonical evaluation and coevaluation morphisms \[ \epsilon: V \otimes V^* \to {\bf 1},\qquad \delta: {\bf 1} \to V \otimes V^*\] so that we can assign to any object a dimension by setting \[ \dim V =\epsilon \circ \delta \in \mathop{\mathrm{End}}({\bf 1}) \cong \mathbb{C}. \] A simple diagrammatic description of $\mathop{\underline{\smash{\mathrm{Rep}}}}(GL_t)$ can be found in \cite{CW2012}. One first constructs a skeletal category ${\mathop{\underline{\smash{\mathrm{Rep}}}}\,}_0(GL_t)$, whose objects are words in the alphabet $\{\bullet, \circ\}$. The letter $\bullet$ corresponds to the fundamental representation $V$ of $GL_t$, $\circ$ to its dual $V^*$. A~$\otimes$-structure is induced by concatenation of words. The space of morphisms between two such words is the $\mathbb{C}$-span of a set of admissible graphs, whose vertices are the circles and dots of the two words. An admissible graph is a perfect matching on the letters of the two words: each letter is contained in exactly one edge, an edge within a single word connects two letters of different colors, and an edge between the two words connects two letters of the same color.
$$\vcenter{ \xymatrix{ \bullet \ar@/_2ex/@{-}[rr] & \bullet \ar@{-}[ld] & \circ & \circ \ar@{-}[d] \\ \bullet \ar@{-}[rrd] & \circ \ar@/^/@{-}[r] \ar@/_/@{-}[r]& \bullet & \circ \ar@{-}[lld] \\ & \circ & \bullet & \\ } } = t \cdot \left( \vcenter{ \xymatrix{\bullet \ar@/_2ex/@{-}[rr] & \bullet \ar@{-}[ddr] & \circ & \circ \ar@{-}[ddll] \\ & & \\ & \circ & \bullet & \\ }} \right) $$ The composition of two morphisms is the juxtaposition of the two graphs, followed by the elimination of loops, each eliminated loop contributing a factor $t$.\\ Deligne's category is now obtained by first forming the additive hull by formally introducing direct sums and then passing to the Karoubian hull, i.e.\ forming a category of pairs $(W,e)$, consisting of an object together with an idempotent: \[\mathop{\underline{\smash{\mathrm{Rep}}}} (GL_t) =({\mathop{\underline{\smash{\mathrm{Rep}}}}\,}_0(GL_t)^{\text{add}})^\text{Karoubi}. \] \bf Example. \rm Consider the word $\bullet \bullet$ and the morphisms $\mathrm{Id}$ and $\mathrm{Swap}$ with the obvious meaning. One then can put \[ S^2V=(\bullet \bullet, s), \;\;\wedge^2 V=(\bullet\bullet, a),\] where \[ s=\frac{1}{2}(\mathrm{Id}+\mathrm{Swap}),\;\;a=\frac{1}{2}(\mathrm{Id}-\mathrm{Swap})\] so that in $\mathop{\underline{\smash{\mathrm{Rep}}}}(GL_t)$ one has: \[ V \otimes V=(\bullet \bullet ,\mathrm{Id})=S^2V \oplus \wedge^2V,\] which upon taking dimensions is the identity \[ t^2 = \frac{t(t+1)}{2}+\frac{t(t-1)}{2} .\] \subsection{`Spaces of sections' as objects in Deligne's category and the beta integral.} As above, we assume that $n$ is a natural number. Write $t=N+1$ and let $V_t=V$ be the fundamental object of $\mathop{\underline{\smash{\mathrm{Rep}}}}(GL_t)$ so that $\dim V_t=t$.
We do not define the projective space $\P =\P^N$, but we can pretend that, in the sense of Deligne, the space of global sections is \[ H(\mathcal{O}_{\P}(n)) :=\mathop{\mathrm{Sym}^n}(V_t^*) \in \mathop{\underline{\smash{\mathrm{Rep}}}}(GL_t) .\] Its dimension is then, as expected, \begin{equation} \chi(\mathcal{O}_{\P}(n)) :=\dim H (\mathcal{O}_{\P}(n))={N+n \choose n} \label{chi-interpret}, \end{equation} (interpreted in the obvious way as a polynomial in $N$ if $N\not\in\mathbb{Z}$), so that e.g. \[\chi(\mathcal{O}_{\P^{1/2}}(2))=\frac{15}{8}.\] The Poincar\'e series is \[P(y):=\sum_{n=0}^{\infty} \chi(\mathcal{O}_{\P}(n)) y^n =\frac{1}{(1-y)^{N+1}},\] consistent with the idea that $\dim V_t = N+1$. \medskip Returning to the question posed at the beginning, `is there a way to extend the interpolation of $\chi$ individually to the Chern and the Todd ingredients?', we reason as follows. If $X$ is a smooth projective $n$-dimensional variety, and $E$ a vector bundle on $X$, then the Euler characteristic \[\chi(X,E):=\sum_{i=0}^n (-1)^i\dim H^i(X,E)\] can be expressed in terms of characteristic numbers \[\chi(X,E)=\int_X \mathop{\mathrm{ch}}(E) \cdot \mathop{\mathrm{td}}(X) . \] Here the integral on the right-hand side is usually interpreted as resulting from evaluating the cap product with the fundamental class $[X]$ on the cohomology algebra $H^*(X)$, and the Chern character and Todd class are defined in terms of the Chern roots $x_i$ of $E$ and $y_i$ of $TX$: \[\mathop{\mathrm{ch}}(E)=\sum_{i=1}^r e^{x_i}\,, \qquad \mathop{\mathrm{td}}(X)=\prod_{i=1}^n \frac{y_i}{1-e^{-y_i}} .\] The cohomology ring of an $n$-dimensional projective space is a truncated polynomial ring: \[H^*(\P^N)=\mathbb{Z}[\xi]/(\xi^{N+1})\,, \qquad\xi=c_1(\mathcal{O}(1)),\] and it is not directly clear how to make sense of this if $N$ is not an integer.
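A quick symbolic check (ours, not from the text) that the interpolated Euler characteristics have the stated generating function, with $N$ kept symbolic:

```python
import sympy as sp

N, y = sp.symbols('N y')
M = 6  # number of series terms to compare

def chi(n):
    # chi(O_P(n)) = binomial(N + n, n), a polynomial in N for each fixed n
    return sp.expand_func(sp.binomial(N + n, n))

P = sum(chi(n) * y**n for n in range(M))
target = sp.series((1 - y)**(-(N + 1)), y, 0, M).removeO()
assert sp.expand(P - sp.expand(target)) == 0
print("P(y) matches 1/(1-y)^(N+1) up to order", M)
```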
Our tactic will be to drop the relation \[\xi^{N+1}=0\] altogether, thinking instead of $\mathbb{Z}[\xi]$ as a Verma module over the $sl_2$ of the Lefschetz theory, and replacing the cap product with integration. As we will be integrating meromorphic functions in $\xi$, the polynomial ring is too small, and we put \[ \hat{H}(\P) :=\mathbb{Z}[[\xi]] \supset \mathbb{Z}[\xi] .\] One has \[ \mathop{\mathrm{ch}}(\mathcal{O}(n))=e^{n\xi}\,, \qquad \mathop{\mathrm{td}}(\P)=\left(\frac{\xi}{1-e^{-\xi}}\right)^{N+1}, \] so Hirzebruch-Riemann-Roch reads \[\chi(\mathcal{O}(n))=\left[e^{n\xi} \left(\frac{\xi}{1-e^{-\xi}}\right)^{N+1}\right]_N\] where $[...]_N$ denotes the coefficient of $\xi^N$ in a series. This can be expressed analytically as a residue integral along a small circle around the origin: \begin{equation*} \chi(\mathcal{O}(n))=\frac{1}{2\pi i}\oint e^{n \xi}\left(\frac{\xi}{1-e^{-\xi}}\right)^{N+1}\frac{d\xi}{\xi^{N+1}} . \end{equation*} As it stands, it cannot be extended to non-integer $N$ since the factor $(1- e^{-\xi})^{-N-1}$ is not single-valued on the circle. The usual way to adapt it is to consider, for $n \ge 0$, the integral along the path going from $- \infty - i \varepsilon$ to $ - i \varepsilon$, making a half--turn round the origin and going back, and choosing the standard branch of the logarithm. Because of the change in the argument this integral is equal to \begin{equation*} J(N,n) = \frac{e^{2 \pi i (N+1)}-1}{2 \pi i} \int_{-\infty}^0 \frac{e^{n \xi}}{(1-e^{-\xi})^{N+1}} d \xi , \end{equation*} or, after the substitution $s=e^{\xi}$, \begin{equation*} J(N,n) = \frac{e^{2 \pi i (N+1)}-1}{2 \pi i} \int_0^1 s^{n-1} (1-1/s)^{-N-1} ds = \frac{\sin \pi (N+1)}{ \pi } \int_0^1 s^{n+N} (1-s)^{-N-1} ds .
\end{equation*} Using Euler's formulas \begin{equation} \Gamma(x)\Gamma(1-x) =\frac{\pi}{\sin \pi x} \,, \label{gamma-one-minus-argument} \end{equation} \begin{equation} \int_0^1 s^{\alpha-1}(1-s)^{\beta-1} ds = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)} \,, \label{beta-integral} \end{equation} and \begin{equation*} \frac{\Gamma(N+n+1)}{\Gamma(n+1) \Gamma(N+1)} = {N+n \choose n} \,, \end{equation*} we arrive at a version of RRH `with integrals': \medskip \noindent \bf Proposition 1. \rm Let $n \in \mathbb{N}$. Assume $\mathop{\mathrm{Re}} N < 0, \, N \notin \mathbb{Z}$. Interpret the Euler characteristic of~$\P ^N$ via formula \eqref{chi-interpret}. Then \begin{equation*} \label{little-propo} \chi_\P(\O (n)) = \frac{e^{2 \pi i (N+1)}-1}{2 \pi i} \int_{-\infty}^0 \frac{e^{n \xi}}{(1-e^{-\xi})^{N+1}} d \xi . \end{equation*} \qed \bigskip \medskip \subsection{The grassmannian and the Selberg integral.} For $\P^N$, we ended up with the beta function, a one-dimensional integral, as the cohomology ring is generated by a single class~$\xi$. In the cases where the cohomology ring is generated by $k$ elements, for example the grassmannian $G(k,N+k)$, we would like to see a $k$-dimensional integral appear in a natural way. For $N \in \mathbb{N}$ the cohomology ring of the grassmannian $\mathbb{G}:=G(k,N+k)$ is given by \[H^*(G(k,N+k))=\mathbb{C}[s_1,s_2,\ldots,s_k]/(q_{N+1},q_{N+2}, \dots, q_{N+k}),\] where the $s_i$ are the Chern classes of the universal rank $k$ sub-bundle and $q_i=c_i(Q)$ are formally the Chern classes of the universal quotient bundle $Q$ (so that the generating series of $q$'s is inverse to that of $s$'s). In the same vein as before, we set: \begin{equation} \hat{H}^*(\mathbb{G}):=\mathbb{C}[[s_1,s_2,\ldots,s_k]]=\mathbb{C}[[x_1,x_2,\ldots,x_k]]^{S_k} \label{drop-rel} \end{equation} by dropping the relations. 
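Proposition 1 can be verified numerically. The following sketch (our code) checks the equivalent real form $\chi = \frac{\sin \pi (N+1)}{\pi}\int_0^1 s^{n+N}(1-s)^{-N-1}\,ds$ derived above; the sample value $N=-1/2$ is an arbitrary choice with $\mathop{\mathrm{Re}} N < 0$.

```python
import mpmath as mp

mp.mp.dps = 25

def chi(N, n):
    # binomial(N + n, n) computed via the gamma function
    return mp.gamma(N + n + 1) / (mp.gamma(n + 1) * mp.gamma(N + 1))

def chi_integral(N, n):
    # sin(pi(N+1))/pi * int_0^1 s^(n+N) (1-s)^(-N-1) ds
    I = mp.quad(lambda s: s**(n + N) * (1 - s)**(-N - 1), [0, 1])
    return mp.sin(mp.pi * (N + 1)) / mp.pi * I

N = mp.mpf('-0.5')
for n in range(4):
    print(n, chi(N, n), chi_integral(N, n))
```

The tanh-sinh quadrature used by `mp.quad` handles the integrable endpoint singularity at $s=1$ without special treatment.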
A $\mathbb{C}$-basis of this ring is given by the Schur polynomials \[\sigma_{\lambda} :=\frac{\det(x_i^{\lambda_j+k-j})}{\det(x_i^{k-j})}\] where $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_k)$ is an arbitrary Young diagram with at most $k$ rows. There is a Satake--type map for the extended cohomology: \[ \mathrm{Sat}: \hat{H}(\mathbb{G}) \to \wedge^k \hat{H}(\P) \] obtained from the Young diagram by `wedging its rows': \[\sigma_{\lambda} \mapsto \xi^{\lambda_1+k-1} \wedge \xi^{\lambda_2+k-2}\wedge \ldots \wedge \xi^{\lambda_k}. \] We are therefore seeking an expression for the values of the Hilbert polynomial of $G(k,N+k)$ in terms of a $k$--dimensional integral of the beta type involving $k$--wedging. Euler's beta integral \eqref{beta-integral} has several generalizations. Selberg introduced \cite{Selberg1944} an integral (see also \cite{FW2008}) over the $k$-dimensional cube \begin{equation*} S(\alpha, \beta,\gamma, k):=\int_0^1\ldots\int_0^1 (s_1 s_2\ldots s_k)^{\alpha-1}((1-s_1)(1-s_2)\ldots(1-s_k))^{\beta-1}\Delta(s)^{2\gamma} ds_1ds_2\ldots ds_k \end{equation*} where \[ \Delta(s)=\Delta(s_1,s_2,\ldots,s_k)=\prod_{i <j} (s_i-s_j) ,\] and showed that it admits meromorphic continuation, which we will also denote by $S$. \medskip \noindent {\bf Proposition 2.} For $k \in \mathbb{N},\, n \in \mathbb{Z}_+$, let $\chi(\mathcal{O}_{\mathbb{G}}(n))$ denote the result of interpolating the polynomial function $\chi(\mathcal{O}_{G (k,k+N)}(n))$ of the argument $N \in \mathbb{N}$ to $\mathbb{C}$. One has \begin{equation*} \chi(\mathcal{O}_{\mathbb{G}}(n))= \frac{(-1)^{k(k-1)/2}}{k!}\left( \frac{\sin \pi(N+1)}{\pi}\right)^k S(n+N+1,-N-k+1,1,k) . \end{equation*} \rm {\sc Proof.} The shortest (but not the most transparent) way to see this is to use the expressions for the LHS and the RHS in terms of products of gamma factors found by Littlewood and Selberg respectively.
By Selberg, \begin{equation} S(\alpha,\beta,\gamma,k)=\prod_{i=0}^{k-1} \frac{\Gamma(\alpha+i\gamma)\Gamma(\beta+i\gamma)\Gamma(1+(i+1)\gamma)} {\Gamma(\alpha+\beta+(k+i-1)\gamma) \Gamma(1+\gamma)} \label{Selberg-formula}. \end{equation} By Littlewood \cite{Lit1942}, for $N \in \mathbb{Z}_{>0}$ one has \begin{equation*} \chi(\mathcal{O}_{G(k,k+N)}(n)) =\frac{{N+n \choose n} {N+n+1 \choose n+1} \ldots {N+n+(k-1) \choose n+(k-1)}}{ {N \choose 0} {N+1 \choose 1} \ldots {N+(k-1) \choose (k-1)}}, \end{equation*} where there are $k$ factors at the top and the bottom. Rearranging the terms in the RHS of \eqref{Selberg-formula} and using \eqref{gamma-one-minus-argument}, we bring the $\Gamma$-factors that involve $\beta$ to the denominator in order to form the binomial coefficients at the expense of the sine factor. \qed \bigskip As an example, for $k=2$ and $N=-1/2$, we get the Hilbert series \[ \sum_{n=0}^\infty \chi(\mathcal{O}_{G(2,3/2)}(n)) \, y^n = 1+6\, \frac{y}{16} +60\left(\frac{y}{16}\right)^2 + 700 \left(\frac{y}{16} \right)^3+8820 \left(\frac{y}{16} \right)^4 +\ldots \] which is no longer algebraic, but can be expressed in terms of elliptic functions. More generally, one can consider a Selberg--type integral with an arbitrary symmetric function rather than the discriminant in the numerator and use separation of variables together with the Jacobi--Trudi formula to obtain similar gamma-function expressions interpolating between the Euler characteristics of more general vector bundles on grassmannians (or the dimensions of highest weight representations of $GL_{N+k}$). \subsection{Towards a gamma conjecture in non--integral dimensions.} \label{gamma-phenomena} The by now standard predictions of mirror symmetry relate the RRH formalism on a Fano variety $F$ to the monodromy of its regularized quantum differential equation.
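Proposition 2 above can likewise be confirmed numerically, comparing the Selberg product formula with Littlewood's binomial expression; here for $k=2$ and $N=-1/2$ (our choice of test values), which reproduces the coefficients of the Hilbert series in the example.

```python
import mpmath as mp

mp.mp.dps = 25

def selberg(a, b, g, k):
    # Selberg's closed product formula for S(a, b, g, k)
    p = mp.mpf(1)
    for i in range(k):
        p *= (mp.gamma(a + i*g) * mp.gamma(b + i*g) * mp.gamma(1 + (i + 1)*g)
              / (mp.gamma(a + b + (k + i - 1)*g) * mp.gamma(1 + g)))
    return p

def chi_selberg(k, N, n):
    # the formula of Proposition 2
    pref = (-1)**(k*(k - 1)//2) / mp.factorial(k)
    return pref * (mp.sin(mp.pi*(N + 1)) / mp.pi)**k * selberg(n + N + 1, -N - k + 1, 1, k)

def chi_littlewood(k, N, n):
    # Littlewood's quotient of binomial coefficients
    num = den = mp.mpf(1)
    for i in range(k):
        num *= mp.binomial(N + n + i, n + i)
        den *= mp.binomial(N + i, i)
    return num / den

N = mp.mpf('-0.5')
print([mp.nstr(chi_selberg(2, N, n), 10) for n in range(4)])
```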
It is expected that this differential equation arises from the Gauss--Manin connection in the middle cohomology of level hypersurfaces of a regular function $f$ defined on some quasiprojective variety (typically a Laurent polynomial on $\mathbb{G}_\mathrm{m}^{\, d}$), called in this case a Landau--Ginzburg model of $F$. By stationary phase, the monodromy of the Gauss--Manin connection in a pencil translates into the asymptotic behavior of oscillatory integrals of the generic form $I (z) = \int \exp (izf)\, d\mu (\mathbb{G}_\mathrm{m}^{\, d})$, which satisfy the quantum differential equation of $F$, this time without the word `regularized'. The asymptotics are given by Laplace integrals computed at the critical points, and the critical values of $f$ are the exponents occurring in the oscillatory integrals $I_i(z)$ that have `pure' asymptotic behavior in sectors. One wants to express these pure asymptotics in terms of the Frobenius basis of solutions $\{ \Psi_i (z) \}$ around $z = 0$. The gamma conjecture \cite{GGI2016} predicts that such an expression for the highest--growth asymptotic (arising from the critical value next to infinity) will give the `gamma--half' of the Todd genus and therefore effectively encode the Hilbert polynomial of $F$ with respect to the anticanonical bundle. At first sight, none of this seems capable of surviving in non--integer dimensions. Yet, to return to the example of $G(2,N+2)$, define the numbers $c_j$ and $d_j$ by the expansions \begin{equation*} \Gamma_\P^{(0)} (\varepsilon) = \Gamma (1+\varepsilon)^{N+2} = \sum_{j=0}^\infty d_j \varepsilon^j , \end{equation*} \begin{equation*} \Gamma_\P^{(1)} (\varepsilon) = \Gamma (1+\varepsilon)^{N+2} e^{2 \pi i \varepsilon} = \sum_{j=0}^\infty c_j \varepsilon^j. 
\end{equation*} Put \begin{equation*} F(\varepsilon,z) = \sum_{l=0}^{\infty} \frac{z^{l+\varepsilon}}{\Gamma(1+l+\varepsilon)^{N+2}} \end{equation*} and \begin{equation*} \Psi (\varepsilon,z) = \Gamma_\P (\varepsilon) F (\varepsilon, z) = \sum_{k=0}^\infty \Psi_k (z) \varepsilon^k. \end{equation*} \medskip \noindent \bf Claim \rm (rudimentary gamma conjecture). For fixed $N > 2$ and $i, \, j$ in a box of at least some moderate size, one should have \begin{equation*} \label{claim-grass} \lim_{z \to - \infty} \frac{\Psi_i (z) \Psi'_j (z) - \Psi_j (z) \Psi'_i (z)}{\Psi_1 (z) \Psi'_0 (z) - \Psi_0 (z) \Psi'_1 (z)} = \frac{c_i d_j - c_j d_i}{c_1 d_0 - c_0 d_1} . \end{equation*} \bigskip \bigskip \noindent The LHS and RHS mimic, in the setup of formula \eqref{drop-rel}, the $\sigma_{[j-1,i]}$-coefficients in the expansion of the `principal asymptotic class' and the gamma class of the usual grassmannian: in the case when $N \in \mathbb{N}$ and $0 \le i,j \le N$ one would use the identification of $2$--Wronskians of a fundamental matrix of solutions to a higher Bessel equation with homology classes of $G(2,N+2)$. Preliminary considerations together with numerical evidence suggest that the claim has a good chance of being true, as do its versions for $G(k,N+k)$ with $k > 2$. \bigskip \bigskip \bigskip The first--named author is grateful to Yuri Manin and Vasily Pestun for stimulating discussions. We thank Hartmut Monien for pointing us to \cite{FW2008}. \bigskip \nocite{MV2017} \nocite{GM2014} \nocite{Bra2013} \nocite{BS2013} \nocite{Etingof1999} \nocite{Etingof2014} \nocite{Etingof2016} \nocite{EGNO2015} \nocite{FW2008} \nocite{Man2006} \nocite{Opd1999} \nocite{Man1985} \nocite{Lit1942} \nocite{Lit1943} \nocite{BD2016} \nocite{LM2002} \nocite{LM2004} \nocite{LM2006} \nocite{LM2006a} \nocite{GW2011} \nocite{Del2002}
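The constants $c_j$, $d_j$ on the RHS of the claim are straightforward to compute; the sketch below (our code, not the authors'; the sample value $N=3.5$ is an arbitrary non-integer choice) obtains them by numerical Taylor expansion of the defining series.

```python
import mpmath as mp

mp.mp.dps = 30
N, M = mp.mpf('3.5'), 6

# Taylor coefficients of Gamma(1+eps)^(N+2) and Gamma(1+eps)^(N+2) e^(2 pi i eps)
d = mp.taylor(lambda e: mp.gamma(1 + e)**(N + 2), 0, M)
c = mp.taylor(lambda e: mp.gamma(1 + e)**(N + 2) * mp.exp(2j * mp.pi * e), 0, M)

def rhs(i, j):
    # (c_i d_j - c_j d_i) / (c_1 d_0 - c_0 d_1)
    return (c[i]*d[j] - c[j]*d[i]) / (c[1]*d[0] - c[0]*d[1])

print(rhs(2, 0))
```

Since $c(\varepsilon)=d(\varepsilon)e^{2\pi i\varepsilon}$ and $c_0=d_0=1$, one checks directly that $d_1=-(N+2)\gamma$ and $\mathrm{rhs}(2,0)=d_1+i\pi$.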
\section{Introduction} \label{sec:intro} Mobile ad-hoc networking has presented many challenges to the research community, especially in designing suitable, efficient, and well-performing protocols. The practical analysis and validation of such protocols often depends on synthetic data, generated by some mobility model. The model has the goal of simulating real life scenarios~\cite{camp02wcmc} that can be used to tune networking protocols and to evaluate their performance. A lot of work has been done in designing realistic mobility models. Until a few years ago, the model of choice in academic research was the random way point mobility model (RWP)~\cite{rwp}, simple and very efficient to use in simulations. Recently, with the aim of understanding human mobility~\cite{toronto, hui05, hui06, milan07, UCAM-CL-TR-617}, many researchers have performed real-life experiments by distributing wireless devices to people. From the data gathered during the experiments, they have observed the typical distribution of metrics such as inter-contact time (the time interval between two successive contacts of the same pair of people) and contact duration. Inter-contact time, which corresponds to how often people see each other, characterizes the opportunities of packet forwarding between nodes. Contact duration, which limits the duration of each meeting between people in mobile networks, limits the amount of data that can be transferred. In~\cite{hui05, hui06}, the authors show that the distribution of inter-contact time is a power law. Later, in~\cite{milan07}, it has been observed that the distribution of inter-contact time is best described as a power law in a first interval on the time scale (12 hours, in the experiments under analysis), then truncated by an exponential cut-off. Conversely, \cite{cai07mobicom} proves that RWP yields an exponential inter-contact time distribution.
Therefore, it has been established clearly that models like RWP are not good to simulate human mobility, raising the need for new, more realistic mobility models for mobile ad-hoc networking. In this paper we present small world in motion (SWIM), a simple mobility model that generates small worlds. The model is very simple to implement and very efficient in simulations. The mobility pattern of the nodes is based on a simple intuition on human mobility: People go more often to places not very far from their home and where they can meet a lot of other people. By implementing this simple rule, SWIM is able to give rise to social behavior among nodes, which we believe to be the base of human mobility in real life. We validate our model using real traces and compare the distributions of inter-contact time, contact duration, and number of contacts between nodes, showing that the synthetic data we generate match real data traces very well. Furthermore, we show that SWIM can predict well the performance of forwarding protocols. We compare the performance of two forwarding protocols---epidemic forwarding~\cite{vahdat00epidemic} and (a simplified version of) delegation forwarding~\cite{dfw08}---on both real traces and synthetic traces generated with SWIM. The performance of the two protocols on the synthetic traces accurately approximates their performance on real traces, supporting the claim that SWIM is an excellent model for human mobility.
The rest of the paper is organized as follows: Section~\ref{sec:relatedwork} briefly reports on current work in the field; in Section~\ref{sec:solution} we present the details of SWIM and we prove theoretically that the distribution of inter-contact time in SWIM has an exponential tail, as recently observed in real life experiments; Section~\ref{sec:experiments} compares synthetic data traces to real traces and shows that the distribution of inter-contact time has a head that decays as a power law, again like in real experiments; in Section~\ref{sec:forwarding} we show our experimental results on the behavior of two forwarding protocols on both synthetic and real traces; lastly, Section~\ref{sec:conclusions} presents some concluding remarks. \section{Related work} \label{sec:relatedwork} The mobility model recently presented in~\cite{levy} generates movement traces using a model which is similar to a random walk, except that the flight lengths and the pause times in destinations are generated based on Levy Walks, that is, with a power-law distribution. In the past, Levy Walks have been shown to approximate well the movements of animals. The model produces inter-contact time distributions similar to real world traces. However, since every node moves independently, the model does not capture any social behavior between nodes. In~\cite{musolesi07}, the authors present a mobility model based on social network theory, which takes as input a social network, and discuss the community patterns and group distributions in geographical terms. They validate their synthetic data with real traces and show a good matching between them. The work in \cite{LCA-CONF-2008-049} presents a new mobility model for clustered networks. Moreover, a closed-form expression for the stationary distribution of node position is given. The model captures the phenomenon of emerging clusters, observed in real partitioned networks, and the correlation between the spatial speed distribution and the cluster formation.
In~\cite{workingDay}, the authors present a mobility model that simulates the everyday life of people who go to their work-places in the morning, spend their day at work and go back to their homes in the evening. Each one of these scenarios is a simulation per se. The synthetic data they generate match well the distribution of inter-contact time and contact durations of real traces. In a very recent work, Barab\'asi et al.~\cite{barabasi08} study the trajectories of a very large (100,000) number of anonymized mobile phone users whose position is tracked for a six-month period. They observe that human trajectories show a high degree of temporal and spatial regularity, each individual being characterized by a time independent characteristic travel distance and a significant probability to return to a few highly frequented locations. They also show that the probability density function of individual travel distances is heavy tailed, and that it differs between groups of users while being similar within each group. Furthermore, they also plot the frequency of visiting different locations and show that it is well approximated by a power law. All these observations are in contrast with the random trajectories predicted by Levy flight and random walk models, and support the intuition behind SWIM. \section{Small World in Motion} \label{sec:solution} We believe that a good mobility model should \begin{enumerate} \item be simple; and \item predict well the performance of networking protocols on real mobile networks. \end{enumerate} We can't overestimate the importance of having a \emph{simple} model. A simple model is easier to understand, can be useful to distill the fundamental ingredients of ``human'' mobility, can be easier to implement, easier to tune (just one or few parameters), and can be useful to support theoretical work. We are also looking for a model that generates traces with the same statistical properties that real traces have.
Statistical distributions of inter-contact time and number of contacts, among others, are useful to characterize the behavior of a mobile network. A model that generates traces with statistical properties that are far from those of real traces is probably useless. Lastly, and most importantly, a model should be accurate in predicting the performance of network protocols on real networks. If a protocol performs well (or badly) in the model, it should also perform well (or badly) in a real network. As accurately as possible. None of the mobility models in the literature meets all of these properties. The random way-point mobility model is simple, but its traces do not look real at all (and it has a few other problems). Some of the other models we reviewed in the related work section can indeed produce traces that look real, at least with respect to some of the possible metrics, but are far from being simple. And, as far as we know, no model has been shown to predict real world performance of protocols accurately. Here, we propose \emph{small world in motion} (SWIM), a very simple mobility model that meets all of the above requirements. Our model is based on a couple of simple rules that are enough to make the typical properties of real traces emerge, just naturally. We will also show that this model can predict the performance of networking protocols on real mobile networks extremely well. \subsection{The intuition} When deciding where to move, humans usually trade off. The best supermarket or the most popular restaurant that are also not far from where they live, for example. It is unlikely (though not impossible) that we go to a place that is far from home, or that is not so popular, or interesting. Not only that, usually there are just a few places where a person spends a long period of time (for example home and work office or school), whereas there are lots of places where she stays less, like for example post office, bank, cafeteria, etc.
These are the basic intuitions SWIM is built upon. Of course, the trade-offs humans face in their everyday life are usually much more complicated, and there are plenty of unknown factors that influence mobility. However, we will see that simple rules---trading off proximity and popularity, and the distribution of waiting time---are enough to get a mobility model with a number of desirable properties and an excellent capability of predicting the performance of forwarding protocols. \subsection{The model in details} More in detail, to each node is assigned a so-called \emph{home}, which is a randomly and uniformly chosen point over the network area. Then, the node itself assigns to each possible destination a \emph{weight} that grows with the popularity of the place and decreases with the distance from home. The weight represents the probability for the node to choose that place as its next destination. At the beginning, no node has been anywhere. Therefore, nodes do not know how popular destinations are. The number of other nodes seen in each destination is zero and this information is updated each time a node reaches a destination. Since the domain is continuous, we divide the network area into many small contiguous cells that represent possible destinations. Each cell is a square whose size depends on the transmitting range of the nodes. Once a node reaches a cell, it should be able to communicate with every other node that is in the same cell at the same time. Hence, the size of the cell is such that its diagonal is equal to the transmitting radius of the nodes. Based on this, each node can easily build a \emph{map} of the network area, and can also calculate the weight for each cell in the map.
This information will be used to determine the next destination: The node chooses its destination cell randomly and proportionally to its weight, whereas the exact destination point (recall that the network area is continuous) is taken uniformly at random over the cell's area. Note that, according to our experiments, it is not really necessary that the node has a \emph{full} map of the domain. It can remember just the most popular cells it has visited and assume that everywhere else there is nobody (until, by chance, it chooses one of these places as destination and learns that they are indeed popular). The general properties of SWIM hold as well. Once a node has chosen its next destination, it starts moving towards it following a straight line and with a speed that is proportional to the distance between the starting point and the destination. To keep things simple, in the simulator the node chooses as its speed value exactly the distance between these two points. The speed remains constant till the node reaches the destination. In particular, that means that nodes finish each leg of their movements in constant time. This can seem quite an oversimplification; however, it is useful and also not far from reality. Useful to simplify the model; not far from reality since we are used to moving slowly (maybe walking) when the destination is nearby, faster when it is farther, and extremely fast (maybe by car) when the destination is far-off. More specifically, let $A$ be one of the nodes and $h_A$ its home. Let also $C$ be one of the possible destination cells. We will denote with $\textit{seen}(C)$ the number of nodes that node~$A$ encountered in $C$ the last time it reached $C$. As we already mentioned, this number is $0$ at the beginning of the simulation and it is updated each time node~$A$ reaches a destination in cell~$C$.
Since $h_A$ is a point, whereas $C$ is a cell, when calculating the distance of $C$ from its home $h_A$, node~$A$ refers to the center of the cell's area. In our case, the cell being a square, its center is the midpoint of its diagonal. The weight that node~$A$ assigns to cell $C$ is as follows: \begin{equation} \label{eq:weight} w(C) = \alpha\cdot\textit{distance}(h_A, C) + (1-\alpha)\cdot\textit{seen}(C), \end{equation} where $\textit{distance}(h_A, C)$ is a function that decays as a power law as the distance between node~$A$ and cell~$C$ increases. In the above equation $\alpha$ is a constant in $[0,1]$. Since the weight that a node assigns to a place represents the probability that the node chooses it as its next destination, the value of $\alpha$ has a strong effect on the node's decisions---the larger $\alpha$, the more the node tends to go to places near its home; the smaller $\alpha$, the more it tends to go to ``popular'' places. Even though it goes beyond the scope of this paper, we believe it would be interesting to explore the consequences of using different values of $\alpha$. We do think that both small and large values of $\alpha$ give rise to a clustering effect of the nodes. In the first case, the clustering effect is based on the neighborhood locality of the nodes, and is of a more social type: nodes that ``live'' near each other tend to frequent the same places, and therefore tend to be ``friends''. In the second case, instead, the clustering effect arises as a consequence of the popularity of the places. When reaching a destination, the node decides how long to remain there. One of the key observations is that in real life a person usually stays for a long time only in a few places, whereas there are many places where they spend a short period of time. Therefore, the distribution of the waiting time should follow a power law.
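The destination choice based on the weight rule of Equation~\ref{eq:weight} can be sketched as follows (a minimal sketch; \texttt{distance} and \texttt{seen} are placeholders for the functions described above, and the names are ours):

```python
import random

def weight(cell, home, alpha, distance, seen):
    """w(C) = alpha * distance(home, C) + (1 - alpha) * seen(C)."""
    return alpha * distance(home, cell) + (1 - alpha) * seen[cell]

def choose_destination(cells, home, alpha, distance, seen, rng=random):
    """Pick a destination cell with probability proportional to its weight."""
    w = [weight(c, home, alpha, distance, seen) for c in cells]
    return rng.choices(cells, weights=w, k=1)[0]
```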
A pure power law, however, is in contrast with the experimental evidence that inter-contact time has an exponential cut-off, and with the intuition that, in many practical scenarios, we won't spend more than a few hours standing at the same place (our goal is to model day-time mobility). So, SWIM uses an upper-bounded power law distribution for the waiting time, that is, a truncated power law. Experimentally, this seems to be the correct choice. \subsection{Power law and exponential decay dichotomy} In a recent work~\cite{milan07}, it is observed that the distribution of inter-contact time in real life experiments shows a so-called dichotomy: first a power law until a certain point in time, then an exponential cut-off. In~\cite{cai07mobicom}, the authors suggest that the exponential cut-off is due to the bounded domain where nodes move. In SWIM, the inter-contact time distribution shows exactly the same dichotomy. More than that, our experiments show that, if the model is properly tuned, the distribution is strikingly similar to that of real life experiments. We show here, with a mathematically rigorous proof, that the distribution of inter-contact time of nodes in SWIM has an exponential tail. Later, we will see experimentally that the same distribution has indeed a head distributed as a power law. Note that the proof has to cope with a difficulty due to the social nature of SWIM---every decision taken in SWIM by a node depends \emph{not} only on its own previous decisions, but also on other nodes' decisions: where a node goes now strongly affects where it will choose to go in the future, and it will also affect where other nodes will choose to go in the future. So, in SWIM there are no renewal intervals, decisions influence future decisions of other nodes, and nodes never ``forget'' their past. In the following, we will consider two nodes $A$ and $B$. Let $A(t)$, $t\ge0$, be the position of node~$A$ at time~$t$. Similarly, $B(t)$ is the position of node~$B$ at time~$t$.
We assume that at time~$0$ the two nodes are leaving visibility after meeting. That is, $||A(0)-B(0)||=r$, $||A(t)-B(t)||<r$ for $t\to 0^-$, and $||A(t)-B(t)||>r$ for $t\to 0^+$. Here, $||\cdot||$ is the Euclidean distance in the square. The inter-contact time of nodes $A$ and $B$ is defined as: \begin{equation*} T_I=\inf_{t>0} \{t:||A(t)-B(t)||\le r\}. \end{equation*} \begin{assumption} \label{ass:lower} For all nodes~$A$ and for all cells~$C$, the distance function $\textit{distance}(A,C)$ returns at least $\mu>0$. \end{assumption} \begin{theorem} If $\alpha>0$ and under Assumption~\ref{ass:lower}, \emph{the tail} of the inter-contact time distribution between nodes~$A$ and~$B$ in SWIM has an exponential decay. \end{theorem} \begin{IEEEproof} To prove the presence of the exponential cut-off, we will show that there exists a constant $c>0$ such that \begin{equation*} \mathbb{P}\{T_I>t\}\le e^{-ct} \end{equation*} for all sufficiently large $t$. Let $t_i=i\lambda$, $i=1,2,\dotsc$, be a sequence of times. The constant $\lambda$ is large enough that each node has to make a way point decision in the interval between $t_i$ and $t_{i+1}$, and that each node has enough time to finish a leg. Recall that this is possible since the waiting time at way points is bounded above and since nodes complete each leg of movement in constant time. The idea is to take snapshots of nodes $A$ and $B$ and see whether they see each other at each snapshot. However, in the following, we also need that at least one of the two nodes is not moving at each snapshot. So, let \begin{equation*} \begin{split} \delta_i=\min\{ & \delta\ge 0 : \text{either $A$ or $B$ is}\\ & \text{at a way point at time $t_i+\delta$}\}. \end{split} \end{equation*} Clearly, $t_i+\delta_i<t_{i+1}$, for all $i=1,2,\dotsc$. We take the sequence of snapshots $\{t_i+\delta_i\}_{i>0}$. Let $\epsilon_i=\{||A(t_i+\delta_i)-B(t_i+\delta_i)||>r\}$ be the event that nodes $A$ and $B$ are not in visibility range at time $t_i+\delta_i$.
We have that \begin{equation*} \mathbb{P}\{T_I>t\}\le \mathbb{P}\left\{\bigcap_{i=1}^{\lfloor t/\lambda\rfloor -1} \epsilon_i\right\}=\prod_{i=1}^{\lfloor t/\lambda\rfloor -1} \mathbb{P}\{ \epsilon_i| \epsilon_{i-1}\cdots\epsilon_1\}. \end{equation*} Consider $\mathbb{P}\{ \epsilon_i| \epsilon_{i-1}\cdots\epsilon_1\}$. At time~$t_i+\delta_i$, at least one of the two nodes is at a way point, by definition of $\delta_i$. Say node~$A$, without loss of generality. Assume that node~$B$ is in cell $C$ (either moving or at a way point). During its last way point decision, node~$A$ has chosen cell $C$ as its next way point with probability at least $\alpha\mu>0$, thanks to Assumption~\ref{ass:lower}. If this is the case, the two nodes~$A$ and~$B$ are now in visibility. Note that the decision has been made after the previous snapshot, and that it is not independent of previous decisions taken by node~$A$, and it is not even independent of previous decisions taken by node~$B$ (due to the social nature of decisions in SWIM). Nonetheless, with probability at least $\alpha\mu$ the two nodes are now in visibility. Therefore, \begin{equation*} \mathbb{P}\{ \epsilon_i| \epsilon_{i-1}\cdots\epsilon_1\}\le 1-\alpha\mu. \end{equation*} So, \begin{equation*} \begin{split} \mathbb{P}\{T_I>t\} & \le \mathbb{P}\left\{\bigcap_{i=1}^{\lfloor t/\lambda\rfloor -1} \epsilon_i\right\}=\prod_{i=1}^{\lfloor t/\lambda\rfloor -1} \mathbb{P}\{ \epsilon_i| \epsilon_{i-1}\cdots\epsilon_1\}\\ & \le (1-\alpha\mu)^{\lfloor t/\lambda\rfloor -1}\le e^{-ct}, \end{split} \end{equation*} for sufficiently large $t$ and a suitable constant $c>0$ depending only on $\alpha\mu$ and $\lambda$.
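For concreteness, the last step can be justified as follows (a routine estimate; the explicit constant is ours): since $\lfloor t/\lambda\rfloor-1\ge t/\lambda-2$,
\begin{equation*}
(1-\alpha\mu)^{\lfloor t/\lambda\rfloor -1}
= e^{-(\lfloor t/\lambda\rfloor -1)\ln\frac{1}{1-\alpha\mu}}
\le e^{-\left(\frac{t}{\lambda}-2\right)\ln\frac{1}{1-\alpha\mu}}
\le e^{-ct}
\quad\text{with } c=\frac{1}{2\lambda}\ln\frac{1}{1-\alpha\mu},
\end{equation*}
where the last inequality holds as soon as $t\ge 4\lambda$.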
\end{IEEEproof} \section{Real traces} \begin{table*} \begin{center} \begin{tabular}{|c|c|c|c|} \hline Experimental data set \T \B & Cambridge~05 & Cambridge~06 & Infocom~05\\ \hline Device \T & iMote & iMote & iMote\\ Network type & Bluetooth & Bluetooth & Bluetooth\\ Duration (days)& 5 & 11 & 3\\ Granularity (sec)& 120 & 600 & 120\\ Number of devices & 12 & 54 (36 mobile) & 41\\ Number of internal contacts & 4,229 & 10,873 & 22,459\\ Average contacts/pair/day & 6.4 & 0.345 & 4.6\\[1mm] \hline \end{tabular} \caption{The three experimental data sets} \label{tab:realtraces} \end{center} \end{table*} In order to show the accuracy of SWIM in simulating real life scenarios, we compare SWIM with three traces gathered during experiments done with real devices carried by people. We will refer to these traces as \emph{Infocom~05}, \emph{Cambridge~05} and \emph{Cambridge~06}. Characteristics of these data sets, such as inter-contact and contact distributions, have been observed in several previous works~\cite{hui05, leguay06,hui06}. \begin{itemize} \item In \emph{Cambridge~05}~\cite{cambridge05} the authors used Intel iMotes to collect the data. The iMotes were distributed to students of the University of Cambridge and were programmed to log contacts of all visible mobile devices. The number of devices used for this experiment is 12. This data set covers 5 days. \item In \emph{Cambridge~06}~\cite{upmcCambridgeData} the authors repeated the experiment using more devices. Also, a number of stationary nodes were deployed in various locations around the city of Cambridge, UK. The data of the stationary iMotes will not be used in this paper. The number of mobile devices used is 36 (plus 18 stationary devices). This data set covers 11 days. \item In \emph{Infocom~05}~\cite{cambridgeInfocomData} the same devices as in \emph{Cambridge} were distributed to students attending the Infocom 2005 student workshop. The number of devices is 41.
This experiment covers approximately 3 days. \end{itemize} Further details on the real traces we use in this paper are shown in Table~\ref{tab:realtraces}. \section{SWIM vs Real traces} \label{sec:experiments} \subsection{The simulation environment} In order to evaluate SWIM, we built a discrete-event simulator of the model. The simulator takes as input \begin{itemize} \item $n$: the number of nodes in the network; \item $r$: the transmitting radius of the nodes; \item the simulation time in seconds; \item the coefficient $\alpha$ that appears in Equation~\ref{eq:weight}; \item the distribution of the waiting time at destination. \end{itemize} The output of the simulator is a text file containing a record for each occurrence of a main event. The main events of the system and the related outputs are: \begin{itemize} \item \emph{Meet} event: when two nodes come within range of each other. The output line contains the ids of the two nodes involved and the time of occurrence. \item \emph{Depart} event: when two nodes that were in range of each other are no longer so. The output line contains the ids of the two nodes involved and the time of occurrence. \item \emph{Start} event: when a node leaves its current location and starts moving towards its destination. The output line contains the id of the location, the id of the node and the time of occurrence. \item \emph{Finish} event: when a node reaches its destination. The output line contains the id of the destination, the id of the node and the time of occurrence. \end{itemize} In the output, we do not really need information on the geographical position of the nodes when the event occurs. However, it is straightforward to extend the format of the output file to include this information. In this form, the output file contains enough information to correctly compute inter-contact intervals, number of contacts, duration of contacts, and to implement state-of-the-art forwarding protocols.
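For instance, inter-contact intervals can be recovered from the \emph{Meet} and \emph{Depart} records alone; the sketch below assumes a simplified record format (kind, id1, id2, time), not the simulator's actual file syntax:

```python
def inter_contact_times(events):
    """Inter-contact interval of a pair: from a Depart of that pair to its
    next Meet. `events` holds (kind, a, b, t) tuples, kind in {"MEET", "DEPART"}."""
    last_depart = {}
    intervals = []
    for kind, a, b, t in sorted(events, key=lambda e: e[3]):
        pair = (min(a, b), max(a, b))  # records may list the ids in any order
        if kind == "DEPART":
            last_depart[pair] = t
        elif kind == "MEET" and pair in last_depart:
            intervals.append(t - last_depart.pop(pair))
    return intervals
```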
During the simulation, the simulator keeps a vector $\textit{seen}(C)$ updated for each node. Note that the nodes do not necessarily agree on the popularity of each cell. As mentioned earlier, it is not necessary to keep the whole vector in memory, and discarding all but the most popular cells does not change the qualitative behavior of the mobile system. In any case, the three scenarios Infocom~05, Cambridge~05, and Cambridge~06 are not large enough to cause any real memory problem. Vector~$\textit{seen}(C)$ is updated at each \emph{Finish} and \emph{Start} event, and is not changed during movements. \subsection{The experimental results} In this section we present some experimental results in order to show that SWIM is a simple and good way to generate synthetic traces with the same statistical properties as real life mobile scenarios. \begin{figure}[!ht] \centering \subfigure[Distribution of the inter-contact time in Infocom~05 and in SWIM]{ \centering \includegraphics[width=.4\textwidth]{graphics/Infocom/InterContacts} \label{fig:ICT infocom}} \qquad \subfigure[Distribution of the contact duration for each pair of nodes in Infocom~05 and in SWIM]{ \centering \includegraphics[width=.4\textwidth]{graphics/Infocom/Contacts} \label{fig:CONT infocom}} \qquad \subfigure[Distribution of the number of contacts for each pair of nodes in Infocom~05 and in SWIM]{ \centering \includegraphics[width=.4\textwidth]{graphics/Infocom/ContactsNumber} \label{fig:CONT-NR infocom}} \caption{SWIM and Infocom~05} \label{fig:infocom} \end{figure} \begin{figure}[t] \centering \subfigure[Distribution of the inter-contact time in Cambridge~05 and in SWIM]{ \centering \includegraphics[width=.4\textwidth]{graphics/Cambridge05/InterContacts} \label{fig:ICT cambridge05}} \qquad \subfigure[Distribution of the contact duration for each pair of nodes in Cambridge~05 and in SWIM]{ \centering \includegraphics[width=.4\textwidth]{graphics/Cambridge05/Contacts} \label{fig:CONT cambridge05}} \qquad \subfigure[Distribution of the
number of contacts for each pair of nodes in Cambridge~05 and in SWIM]{ \centering \includegraphics[width=.4\textwidth]{graphics/Cambridge05/ContactsNumber} \label{fig:CONT-NR cambridge05}} \caption{SWIM and Cambridge~05} \label{fig:cambridge05} \end{figure} \begin{figure}[t] \centering \subfigure[Distribution of the inter-contact time in Cambridge~06 and in SWIM]{ \centering \includegraphics[width=.4\textwidth]{graphics/Cambridge06/InterContacts} \label{fig:ICT cambridge}} \qquad \subfigure[Distribution of the contact duration for each pair of nodes in Cambridge~06 and in SWIM]{ \centering \includegraphics[width=.4\textwidth]{graphics/Cambridge06/Contacts} \label{fig:CONT cambridge}} \qquad \subfigure[Distribution of the number of contacts for each pair of nodes in Cambridge~06 and in SWIM]{ \centering \includegraphics[width=.4\textwidth]{graphics/Cambridge06/ContactsNumber} \label{fig:CONT-NR cambridge}} \caption{SWIM and Cambridge~06} \label{fig:cambridge06} \end{figure} The idea is to tune the few parameters used by SWIM in order to simulate Infocom~05, Cambridge~05, and Cambridge~06. For each of the experiments we consider the following metrics: the CCDF of inter-contact time, the contact duration distribution per pair of nodes, and the number of contacts per pair of nodes. The inter-contact time distribution is important in mobile networking since it characterizes the frequency with which information can be transferred between people in real life. It has been widely studied for real traces in a large number of previous papers~\cite{hui05, hui06, leguay06, cai07mobicom, milan07, musolesi07, cai08mobihoc}. The contact duration distribution per pair of nodes and the number of contacts per pair of nodes are also important. Indeed, they represent a way to measure the relationships between people.
As it was also discussed in~\cite{hui07community, hui07socio, hui08mobihoc}, it is natural to think that if a couple of people spend more time together and meet each other frequently, they are familiar with each other. Familiarity is important in detecting communities, which may help improve significantly the design and performance of forwarding protocols in mobile environments such as DTNs~\cite{hui08mobihoc}. Let us now present the experimental results obtained with SWIM when simulating each of the real data set scenarios. Since the scenarios we consider use iMotes, we model our network nodes according to iMote properties (outdoor range $30~\textrm{m}$). We initially distribute the nodes over a network area of size $300\times300~\textrm{m}^2$. In the following, we assume for the sake of simplicity that the network area is a square of side 1, and that the node transmission range is 0.1. In all three experiments we use a power law with slope $a=1.45$ in order to generate the waiting time values of nodes arriving at a destination, with an upper bound of 4 hours. We use as $\textit{seen}(C)$ function the fraction of the nodes seen in cell~$C$, and as $\textit{distance}(x,C)$ the following: \begin{equation*} \textit{distance}(x,C)=\frac{1}{\left(1+k||x-y||\right)^2}, \end{equation*} where $x$ is the position of the home of the current node, and $y$ is the position of the center of cell~$C$. Positions are coordinates in the square of side 1. Constant $k$ is a scaling factor, set to $0.05$, which accounts for the small size of the experiment area. Note that function $\textit{distance}(x,C)$ decays as a power law. We came up with this choice after a large set of experiments, and the choice is heavily influenced by scaling factors. We start with Infocom~05. The number of nodes $n$ and the simulation time are the same as in the real data set, hence 41 and 3 days respectively.
Since the area of the real experiment was quite small (a large hotel), we deem that $300\times300~\textrm{m}^2$ is a good approximation of the real scenario. In Infocom~05, there were many parallel sessions. Typically, in such a case one chooses to follow the sessions that are most interesting to them. Hence, people with the same interests are more likely to meet each other. In this experiment, the parameter $\alpha$ for which the output best fits the real traces is $\alpha=0.75$. The results of this experiment are shown in Figure~\ref{fig:infocom}. We continue with the Cambridge scenarios. The number of nodes and the simulation time are the same as in the real data set, hence 12 and 5 days respectively. In the Cambridge data set, the iMotes were distributed to two groups of students, mainly undergraduate years~1 and~2, and also to some PhD and Master students. Obviously, students of the same year are more likely to see each other often. In this case, the parameter $\alpha$ which best fits the real traces is $\alpha=0.95$. This choice proves to be fine for both Cambridge~05 and Cambridge~06. The results of these experiments are shown in Figures~\ref{fig:cambridge05} and~\ref{fig:cambridge06}. In all three experiments, SWIM proves to be an excellent way to generate synthetic traces that approximate real traces. It is particularly interesting that the same choice of parameters yields good results for all the metrics under consideration at the same time.
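For reference, the two ingredient functions used in these experiments can be sketched as follows (the lower cutoff of one second in the waiting-time sampler is our own assumption; the slope $1.45$, the $4$-hour cap and $k=0.05$ are the values given above):

```python
import math
import random

def waiting_time(slope=1.45, lower=1.0, upper=4 * 3600.0, rng=random):
    """Inverse-CDF sample from a power law with exponent `slope`,
    truncated to [lower, upper] seconds."""
    a = 1.0 - slope
    u = rng.random()
    return (lower**a + u * (upper**a - lower**a)) ** (1.0 / a)

def distance_weight(x, y, k=0.05):
    """distance(x, C) term: 1 / (1 + k * ||x - y||)**2, unit-square coordinates."""
    return 1.0 / (1.0 + k * math.dist(x, y)) ** 2
```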
\section{Comparative performance of forwarding protocols} \label{sec:forwarding} \begin{figure*} \centering \subfigure{ \centering \includegraphics[width=.31\textwidth]{graphics/Infocom/PerformanceInfocom} \label{fig:perf infocom}} \subfigure{ \centering \includegraphics[width=.31\textwidth]{graphics/Cambridge05/PerformanceCambridge05} \label{fig:perf cambridge05}} \subfigure{ \centering \includegraphics[width=.31\textwidth]{graphics/Cambridge06/PerformanceCambridge06} \label{fig:perf cambridge 06}} \caption{Performance of both forwarding protocols on real traces and on SWIM traces. EFw denotes Epidemic Forwarding, while DFwd denotes Delegation Forwarding.} \label{fig:performance} \label{fig:forwarding} \end{figure*} In this section we show further experimental results of SWIM, related to the evaluation of two simple forwarding protocols for DTNs: Epidemic Forwarding~\cite{vahdat00epidemic} and a simplified version of Delegation Forwarding~\cite{dfw08} in which each node has a random constant as its quality. Of course, this simplified version of Delegation Forwarding is not very interesting and certainly not particularly efficient. However, we use it just as a worst-case benchmark against Epidemic Forwarding, with the understanding that our goal is to validate the quality of SWIM, and not the quality of the forwarding protocol. In the following experiments, we use for each scenario the same tuning used in the previous section. That is, the parameters input to SWIM are not ``optimized'' for each of the forwarding protocols; they are simply the same ones that were used to fit the real traces with synthetic traces. For the evaluation of the two forwarding protocols we use the same assumptions and the same way of generating traffic to be routed as in~\cite{dfw08}. For each trace and forwarding protocol a set of messages is generated with sources and destinations chosen uniformly at random, and generation times forming a Poisson process averaging one message per 4 seconds.
The nodes are assumed to have infinite buffers and carry all message replicas they receive until the end of the simulation. The metrics we are concerned with are: \emph{cost}, which is the number of replicas per generated message; \emph{success rate}, which is the fraction of generated messages for which at least one replica is delivered; and \emph{average delay}, which is the average duration per delivered message from its generation time to the first arrival of one of its replicas. As in \cite{dfw08}, we isolated 3-hour periods of each data trace (real and synthetic) for our study. Each simulation therefore runs for 3 hours. To avoid end-effects, no messages were generated in the last hour of each trace. In the two forwarding protocols, upon contact between nodes $A$ and $B$, node~$A$ decides which messages from its message queue to forward in the following way: \begin{trivlist} \item \textbf{Epidemic Forwarding:} Node $A$ forwards message~$m$ to node $B$ unless $B$ already has a replica of $m$. This protocol achieves the best possible performance, so it yields an upper bound on the success rate and a lower bound on the average delay. However, it also has a high cost. \item \textbf{(Simplified) Delegation Forwarding:} Each node is initially given a quality, distributed uniformly in $(0,1]$. Each message is given a rate which, at every instant, corresponds to the quality of the best-quality node that the message has seen so far. When generated, a message inherits its rate from the node that generates it (that is, the sender of that message). Node $A$ forwards message $m$ to node $B$ if the quality of node $B$ is greater than the rate of the copy of $m$ that $A$ holds. If $m$ is forwarded to $B$, both nodes $A$ and $B$ update the rate of their copies of $m$ to the quality of $B$. \end{trivlist} Figure~\ref{fig:forwarding} shows how the two forwarding protocols perform on both real and synthetic traces, generated with SWIM.
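The per-contact decision of the simplified Delegation Forwarding can be sketched as follows (the function name is ours):

```python
def delegation_forward(quality_b, rate_m):
    """Decide whether A forwards its copy of m (current rate `rate_m`)
    to B (quality `quality_b`); on forwarding, both copies adopt B's
    quality as their new rate."""
    if quality_b > rate_m:
        return True, quality_b
    return False, rate_m
```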
As the figure shows, the results are excellent---SWIM predicts very accurately the performance of both protocols. Most importantly, this is not due to a customized tuning optimized for these forwarding protocols: the traces are exactly those that SWIM generated with the tuning of the previous section. This can be important methodologically: to tune SWIM on a particular scenario, one can concentrate on a few well known and important statistical properties, like inter-contact time, number of contacts, and duration of contacts. Then, one can be reasonably confident that the model is properly tuned and usable to get meaningful estimates of the performance of a forwarding protocol. \section{Conclusions} \label{sec:conclusions} In this paper we present SWIM, a new mobility model for ad hoc networking. SWIM is simple, generates traces that look real, and provides an accurate estimation of the performance of forwarding protocols in real mobile networks. SWIM can be used to improve our understanding of human mobility, it can support theoretical work, and it can be very useful to evaluate the performance of networking protocols in scenarios that scale up to very large mobile systems, for which we don't have real traces. \IEEEtriggeratref{7} \bibliographystyle{ieeetr}
\section{Introduction} \label{SecIntro} Let $\mathcal{G}=(V,E)$ be a connected undirected graph, with $V$ at most countable and each vertex $x\in V$ of finite degree. We do not allow self-loops; however, the edges might be multiple. Given an edge $e\in E$, we will denote by $e_{+}$ and $e_{-}$ its end-vertices, even though $e$ is non-oriented and one can interchange $e_{+}$ and $e_{-}$. Each edge $e\in E$ is endowed with a conductance $W_{e}>0$. There may be a killing measure $\kappa=(\kappa_{x})_{x\in V}$ on vertices. We consider $(X_{t})_{t\ge0}$ the \textit{Markov jump process} on $V$ which, being in $x\in V$, jumps along an adjacent edge $e$ with rate $W_{e}$. Moreover, if $\kappa_{x}\neq 0$, the process is killed at $x$ with rate $\kappa_{x}$ (the process is not defined after that time). $\zeta$ will denote the time up to which $X_{t}$ is defined. If $\zeta<+\infty$, then either the process has been killed by the killing measure $\kappa$ (and $\kappa \not\equiv 0$) or it has gone off to infinity in finite time (and $V$ is infinite). We will assume that the process $X$ is transient, which means, if $V$ is finite, that $\kappa\not\equiv 0$. $\mathbb{P}_{x}$ will denote the law of $X$ started from $x$. Let $(G(x,y))_{x,y\in V}$ be the Green function of $X_{t}$: \begin{displaymath} G(x,y)=G(y,x)=\mathbb{E}_{x}\left[\int_{0}^{\zeta} 1_{\{X_{t}=y\}} dt\right]. \end{displaymath} Let $\mathcal{E}$ be the Dirichlet form defined on functions $f$ on $V$ with finite support: \begin{eqnarray}\label{Dirichlet-form} \mathcal{E}(f,f)=\sum_{x\in V}\kappa_{x} f(x)^{2}+ \sum_{e\in E}W_e(f(e_{+})-f(e_{-}))^{2}. \end{eqnarray} $P_{\varphi}$ will be the law of $(\varphi_{x})_{x\in V}$ the centred \textit{Gaussian free field} (GFF) on $V$ with covariance $E_{\varphi}[\varphi_{x}\varphi_{y}]=G(x,y)$.
In case $V$ is finite, the density of $P_{\varphi}$ is \begin{displaymath} \dfrac{1}{(2\pi)^{\frac{\vert V\vert}{2}}\sqrt{\det G}} \exp\left(-\dfrac{1}{2}\mathcal{E}(f,f)\right)\prod_{x\in V} df_{x}. \end{displaymath} Given $U$ a finite subset of $V$, and $f$ a function on $U$, $P^{U,f}_{\varphi}$ will denote the law of the GFF $\varphi$ conditioned to equal $f$ on $U$. $(\ell_{x}(t))_{x\in V, t\in [0,\zeta]}$ will denote the family of local times of $X$: \begin{displaymath} \ell_{x}(t)=\int_{0}^{t}1_{\{X_{s}=x\}} ds. \end{displaymath} For all $x\in V$, $u>0$, let \begin{displaymath} \tau_{u}^{x}=\inf\lbrace t\geq 0; \ell_{x}(t)>u\rbrace. \end{displaymath} Recall the generalized second Ray-Knight theorem on discrete graphs by Eisenbaum, Kaspi, Marcus, Rosen and Shi \cite{ekmrs} (see also \cite{MarcusRosen2006MarkovGaussianLocTime,Sznitman2012LectureIso}): \begin{2ndRK} For any $u>0$ and $x_{0}\in V$, \begin{center} $\left(\ell_{x}(\tau_{u}^{x_{0}})+\dfrac{1}{2}\varphi_{x}^{2}\right)_{x\in V}$ under $\mathbb{P}_{x_{0}}(\cdot \vert \tau_{u}^{x_{0}}<\zeta)\otimes P^{\lbrace x_{0}\rbrace,0}_{\varphi}$ \end{center} has the same law as \begin{center} $\left(\dfrac{1}{2}\varphi_{x}^{2}\right)_{x\in V}$ under $P^{\lbrace x_{0}\rbrace,\sqrt{2u}}_{\varphi}$. \end{center} \end{2ndRK} Sabot and Tarrès showed in \cite{SabotTarres2015RK} that the so-called ``magnetized'' reverse Vertex-Reinforced Jump Process provides an inversion of the generalized second Ray-Knight theorem, in the sense that it enables to retrieve the law of $(\ell_x(\tau_u^{x_0}), \varphi^2_x)_{x\in V}$ conditioned on $\left(\ell_x(\tau_u^{x_0})+\frac{1}{2}\varphi^2_x\right)_{x\in V}$. The jump rates of that latter process can be interpreted as the two-point functions of the Ising model associated to the time-evolving weights. 
However, in \cite{SabotTarres2015RK} the link with the Ising model is only implicit, and a natural question is whether the Ray-Knight inversion can be described in a simpler form if we enlarge the state space of the dynamics, and in particular include the ``hidden'' spin variables. The answer is positive, and goes through an extension of the Ray-Knight isomorphism introduced by Lupu \cite{Lupu2014LoopsGFF}, which couples the sign of the GFF to the path of the Markov chain. The Ray-Knight inversion will turn out to take a rather simple form in Theorem \ref{thm-Poisson} of the present paper, where it will be defined not only through the spin variables but also through random currents associated to the field via an extra Poisson Point Process. The paper is organised as follows. In Section \ref{sec:srk} we recall some background on loop soup isomorphisms and on related couplings, and state and prove a signed version of the generalized second Ray-Knight theorem. We begin in Section \ref{sec:lejan} with a statement of Le Jan's isomorphism, which couples the square of the Gaussian Free Field to the loop soups, and recall how the generalized second Ray-Knight theorem can be seen as its corollary: for more details see \cite{lejan4}. In Subsection \ref{sec:lupu} we state Lupu's isomorphism, which extends Le Jan's isomorphism and couples the sign of the GFF to the loop soups, using a cable graph extension of the GFF and of the Markov chain. Lupu's isomorphism yields an interesting realisation of the well-known FK-Ising coupling, and provides as well a ``Current+Bernoulli=FK'' coupling lemma \cite{lupu-werner}, which occurs in the relationship between the discrete and cable graph versions. We briefly recall those couplings in Sections \ref{fkising} and \ref{randomcurrent}, as they are implicit in this paper. In Section \ref{sec:glupu} we state and prove the generalized second Ray-Knight ``version'' of Lupu's isomorphism, which we aim to invert.
Section \ref{sec:inversion} is devoted to the statements of the inversions of those isomorphisms. We state in Section \ref{sec_Poisson} a signed version of the inversion of the generalized second Ray-Knight theorem through an extra Poisson Point Process, namely Theorem \ref{thm-Poisson}. In Section \ref{sec_dicr_time} we provide a discrete-time description of the process, whereas in Section \ref{sec_jump} we give an alternative version of that process through jump rates, which can be seen as an annealed version of the first one. We deduce a signed inversion of Le Jan's isomorphism for loop soups in Section \ref{sec:lejaninv}, and an inversion of the coupling of random currents with FK-Ising in Section \ref{sec:coupinv}. Finally, Section \ref{sec:proof} is devoted to the proof of Theorem \ref{thm-Poisson}: Section \ref{sec:pfinite} deals with the case of a finite graph without killing measure, and Section \ref{sec:pgen} deduces the proof in the general case. \section{Le Jan's and Lupu's isomorphisms} \label{sec:srk} \subsection{Loop soups and Le Jan's isomorphism} \label{sec:lejan} The \textit{loop measure} associated to the Markov jump process $(X_{t})_{0\leq t<\zeta}$ is defined as follows. Let $\mathbb{P}^{t}_{x,y}$ be the bridge probability measure from $x$ to $y$ in time $t$ (conditioned on $t<\zeta$). Let $p_{t}(x,y)$ be the transition probabilities of $(X_{t})_{0\leq t<\zeta}$. Let $\mu_{\rm loop}$ be the measure on time-parametrised nearest-neighbour based loops (i.e. loops with a starting site) \begin{displaymath} \mu_{\rm loop}=\sum_{x\in V}\int_{t>0}\mathbb{P}^{t}_{x,x} p_{t}(x,x) \dfrac{dt}{t}. 
\end{displaymath} The loops will be considered here up to rotation of the parametrisation (with the corresponding pushforward measure induced by $\mu_{\rm loop}$), that is to say a loop $(\gamma(t))_{0\leq t\leq t_{\gamma}}$ will be the same as $(\gamma(T+t))_{0\leq t\leq t_{\gamma}-T}\circ (\gamma(T+t-t_{\gamma}))_{t_{\gamma}-T\leq t\leq t_{\gamma}}$, where $\circ$ denotes the concatenation of paths. A \textit{loop soup} of intensity $\alpha>0$, denoted $\mathcal{L}_{\alpha}$, is a Poisson random measure of intensity $\alpha \mu_{\rm loop}$. We see it as a random collection of loops in $\mathcal{G}$. Observe that a.s. above each vertex $x\in V$, $\mathcal{L}_{\alpha}$ contains infinitely many trivial ``loops'' reduced to the vertex $x$. With positive probability there are also non-trivial loops that visit several vertices. Let $L_{.}(\mathcal{L}_{\alpha})$ be the \textit{occupation field} of $\mathcal{L}_{\alpha}$ on $V$, i.e., for all $x\in V$, \begin{displaymath} L_x(\mathcal{L}_{\alpha})= \sum_{(\gamma(t))_{0\leq t\leq t_{\gamma}}\in\mathcal{L}_{\alpha}} \int_{0}^{t_{\gamma}}1_{\{\gamma(t)=x\}} dt. \end{displaymath} In \cite{LeJan2011Loops} Le Jan shows that for transient Markov jump processes, $L_x(\mathcal{L}_{\alpha})<+\infty$ for all $x\in V$ a.s. For $\alpha=\frac{1}{2}$ he identifies the law of $L_.(\mathcal{L}_{\alpha})$: \begin{IsoLeJan} $L_.(\mathcal{L}_{1/2})=\left(L_x(\mathcal{L}_{1/2})\right)_{x\in V}$ has the same law as $\dfrac{1}{2}\varphi^2=\left(\dfrac{1}{2}\varphi_{x}^{2}\right)_{x\in V}$ under $P_{\varphi}$. \end{IsoLeJan} Let us briefly recall how Le Jan's isomorphism enables one to retrieve the generalized second Ray-Knight theorem stated in Section \ref{SecIntro}: for more details, see for instance \cite{lejan4}. We assume that $\kappa$ is supported by $x_0$: the general case can be dealt with by an argument similar to the proof of Proposition \ref{PropKillingCase}.
Let $D=V\setminus\{x_0\}$, and note that the isomorphism in particular implies that $L_.(\mathcal{L}_{1/2})$ conditionally on $L_{x_0}(\mathcal{L}_{1/2})=u$ has the same law as $\varphi^2/2$ conditionally on $\varphi_{x_0}^2/2=u$. On the one hand, given the classical energy decomposition, we have $\varphi=\varphi^D+\varphi_{x_0}$, with $\varphi^D$ the GFF associated to the restriction of $\mathcal{E}$ to $D$, where $\varphi^D$ and $\varphi_{x_0}$ are independent. Now $\varphi^2/2$ conditionally on $\varphi_{x_0}^2/2=u$ has the law of $(\varphi^D+\eta\sqrt{2u})^2/2$, where $\eta$ is the sign of $\varphi_{x_0}$, which is independent of $\varphi^D$. But $\varphi^D$ is symmetric, so that the latter also has the law of $(\varphi^D+\sqrt{2u})^2/2$. On the other hand, the loop soup $\mathcal{L}_{1/2}$ can be decomposed into the two independent loop soups $\mathcal{L}_{1/2}^D$ contained in $D$ and $\mathcal{L}_{1/2}^{(x_0)}$ hitting $x_0$. Now $L_.(\mathcal{L}_{1/2}^D)$ has the law of $(\varphi^D)^2/2$ and $L_.(\mathcal{L}_{1/2}^{(x_0)})$ conditionally on $L_{x_0}(\mathcal{L}_{1/2}^{(x_0)})=u$ has the law of the occupation field of the Markov chain $\ell(\tau_{u}^{x_{0}})$ under $\mathbb{P}_{x_{0}}(\cdot \vert \tau_{u}^{x_{0}}<\zeta)$, which enables us to conclude. \subsection{Lupu's isomorphism} \label{sec:lupu} As in \cite{Lupu2014LoopsGFF}, we consider the \textit{metric graph} $\tilde{\mathcal{G}}$ associated to $\mathcal{G}$. Each edge $e$ is replaced by a continuous line of length $\frac{1}{2}W_{e}^{-1}$. The GFF $\varphi$ on $\mathcal{G}$ with law $P_\varphi$ can be extended to a GFF $\tilde{\varphi}$ on $\tilde{\mathcal{G}}$ as follows. Given $e\in E$, one considers inside $e$ a conditionally independent Brownian bridge, actually a bridge of a $\sqrt{2} \times$ \textit{standard Brownian motion}, of length $\frac{1}{2}W_{e}^{-1}$, with end-values $\varphi_{e_{-}}$ and $\varphi_{e_{+}}$. 
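The probability that such a bridge avoids $0$ admits a closed form via the reflection principle (it is the key ingredient of Lemma \ref{lem:fki} below): a bridge of a $\sqrt{2}\times$ standard Brownian motion of duration $\frac{1}{2}W_{e}^{-1}$ between positive end-values $a$ and $b$ stays positive with probability $1-e^{-2W_{e}ab}$. The following Monte Carlo sketch illustrates this identity; the variable names ($a$, $b$, $w$) and the sampling scheme are ours, not the paper's.

```python
import math
import random

def bridge_avoids_zero_prob(a, b, w, n_steps=50, n_samples=2000, seed=1):
    """Monte Carlo estimate of the probability that a bridge of a
    sqrt(2)-speed Brownian motion of duration 1/(2*w), from a to b,
    stays positive."""
    rng = random.Random(seed)
    T = 1.0 / (2.0 * w)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_samples):
        # Brownian motion with variance rate 2 on the grid, then bridged.
        bm = [0.0]
        for _ in range(n_steps):
            bm.append(bm[-1] + rng.gauss(0.0, math.sqrt(2.0 * dt)))
        x = [a + (b - a) * k / n_steps + bm[k] - bm[-1] * k / n_steps
             for k in range(n_steps + 1)]
        surv = 1.0
        for u, v in zip(x, x[1:]):
            if u <= 0.0 or v <= 0.0:
                surv = 0.0
                break
            # reflection principle: crossing probability exp(-u*v/dt)
            # for a variance-rate-2 bridge over a sub-interval of length dt
            surv *= 1.0 - math.exp(-u * v / dt)
        total += surv
    return total / n_samples

# With a = b = 1 and w = 1/2, the closed form 1 - exp(-2*w*a*b) gives 1 - 1/e.
est = bridge_avoids_zero_prob(1.0, 1.0, 0.5)
assert abs(est - (1.0 - math.exp(-1.0))) < 0.05
```

Conditionally on the grid values, the crossing events in the sub-intervals are independent bridges, so multiplying the exact conditional survival probabilities gives an unbiased estimator, with no discretisation bias.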
This provides a continuous field on the metric graph which satisfies the spatial Markov property. Similarly one can define a standard Brownian motion $(B^{\tilde{\mathcal{G}}}_{t})_{0\le t\le \tilde{\zeta}}$ on $\tilde{\mathcal{G}}$, whose trace on $\mathcal{G}$, indexed by the local times at $V$, has the same law as the Markov process $(X_t)_{t\ge0}$ on $V$, with jump rate $W_{e}$ across an adjacent edge $e$, up to time $\zeta$, as explained in Section 2 of \cite{Lupu2014LoopsGFF}. One can associate a measure on time-parametrized continuous loops $\tilde{\mu}$, and let $\tilde{\mathcal{L}}_{\frac{1}{2}}$ be the Poisson Point Process of loops of intensity $\tilde{\mu}/2$: the loop soup $\mathcal{L}_{\frac{1}{2}}$ on the discrete graph can be obtained from $\tilde{\mathcal{L}}_{\frac{1}{2}}$ by taking the print of the latter on $V$. Lupu introduced in \cite{Lupu2014LoopsGFF} an isomorphism linking the GFF $\tilde{\varphi}$ and the loop soup $\tilde{\mathcal{L}}_{\frac{1}{2}}$ on $\tilde{\mathcal{G}}$. \begin{theorem}[Lupu's Isomorphism, \cite{Lupu2014LoopsGFF}] \label{thm:Lupu} There is a coupling between the Poisson ensemble of loops $\tilde{\mathcal{L}}_{\frac{1}{2}}$ and $(\tilde{\varphi}_y)_{y\in\tilde{\mathcal{G}}}$ defined above, such that the two following constraints hold: \begin{itemize} \item For all $y\in\tilde{\mathcal{G}}$, $L_y(\tilde{{\mathcal{L}}}_{\frac{1}{2}})=\frac{1}{2}\tilde{\varphi}_y^2$; \item The clusters of loops of $\tilde{\mathcal{L}}_{\frac{1}{2}}$ are exactly the sign clusters of $(\tilde{\varphi}_y)_{y\in\tilde{\mathcal{G}}}$. \end{itemize} Conditionally on $(|\tilde{\varphi}_y|)_{y\in\tilde{\mathcal{G}}}$, the sign of $\tilde{\varphi}$ on each of its connected components is distributed independently and uniformly in $\{-1,+1\}$.
\end{theorem} Lupu's isomorphism and the idea of using metric graphs were applied in \cite{Lupu2015ConvCLE} to show that on the discrete half-plane $\mathbb{Z}\times\mathbb{N}$, the scaling limits of outermost boundaries of clusters of loops in loop soups are the Conformal Loop Ensembles $\hbox{CLE}$. Let $\mathcal{O}(\tilde{\varphi})$ (resp. $\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})$) be the set of edges $e\in E$ such that $\tilde{\varphi}$ (resp. $\tilde{\mathcal{L}}_{\frac{1}{2}}$) does not touch $0$ on $e$, in other words such that all the edge $e$ remains in the same sign cluster of $\tilde{\varphi}$ (resp. $\tilde{\mathcal{L}}_{\frac{1}{2}}$). Let $\mathcal{O}(\mathcal{L}_{\frac{1}{2}})$ be the set of edges $e\in E$ that are crossed (i.e. visited consecutively) by the trace of the loops $\mathcal{L}_{\frac{1}{2}}$ on $V$. In order to translate Lupu's isomorphism back onto the initial graph $\mathcal{G}$, one needs to describe on one hand the distribution of $\mathcal{O}(\tilde{\varphi})$ conditionally on the values of $\varphi$, and on the other hand the distribution of $\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})$ conditionally on $\mathcal{L}_{\frac{1}{2}}$ and the cluster of loops $\mathcal{O}(\mathcal{L}_{\frac{1}{2}})$ on the discrete graph $G$. These two distributions are described respectively in Subsections \ref{fkising} and \ref{randomcurrent}, and provide realisations of the FK-Ising coupling and the ``Current+Bernoulli=FK'' coupling lemma \cite{lupu-werner}. 
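In all the statements below, ``clusters'' refers to the connected components of $V$ induced by a set of open edges. As a concrete reference point, here is a minimal union-find sketch of that computation (the function names are ours, not the paper's):

```python
def clusters(vertices, open_edges):
    """Connected components induced by a set of open edges (union-find)."""
    parent = {x: x for x in vertices}

    def find(x):
        # Follow parent pointers to the root, with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for x, y in open_edges:
        parent[find(x)] = find(y)

    comps = {}
    for x in vertices:
        comps.setdefault(find(x), set()).add(x)
    return list(comps.values())

# Example: a path 0-1-2-3 with only the edges (0,1) and (2,3) open
# splits into the two clusters {0,1} and {2,3}.
cs = clusters([0, 1, 2, 3], [(0, 1), (2, 3)])
assert sorted(sorted(c) for c in cs) == [[0, 1], [2, 3]]
```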
\subsection{The FK-Ising distribution of $\mathcal{O}(\tilde{\varphi})$ conditionally on $|\varphi|$} \label{fkising} \begin{lemma} \label{lem:fki} Conditionally on $(\varphi_{x})_{x\in V}$, $(1_{e\in \mathcal{O}(\tilde{\varphi})})_{e\in E}$ is a family of independent random variables and \begin{displaymath} \mathbb{P}\left(e\not\in \mathcal{O}(\tilde{\varphi})\vert \varphi\right)= \left\lbrace \begin{array}{ll} 1 & \text{if}~ \varphi_{e_{-}}\varphi_{e_{+}}<0,\\ \exp\left(-2W_{e}\varphi_{e_{-}}\varphi_{e_{+}}\right) & \text{if}~ \varphi_{e_{-}}\varphi_{e_{+}}>0. \end{array} \right. \end{displaymath} \end{lemma} \begin{proof} Conditionally on $(\varphi_{x})_{x\in V}$, the restrictions of $\tilde{\varphi}$ to the edges are independent Brownian bridges, so that $(1_{e\in \mathcal{O}(\tilde{\varphi})})_{e\in E}$ are independent random variables, and it follows from the reflection principle that, if $\varphi_{e_{-}}\varphi_{e_{+}}>0$, then $$\mathbb{P}\left(e\not\in \mathcal{O}(\tilde{\varphi})\vert \varphi\right)=\dfrac{\exp\left(-\frac{1}{2}W_{e}(\varphi_{e_{-}}+\varphi_{e_{+}})^{2}\right)} {\exp\left(-\frac{1}{2}W_{e}(\varphi_{e_{-}}-\varphi_{e_{+}})^{2}\right)}=\exp\left(-2W_{e}\varphi_{e_{-}}\varphi_{e_{+}}\right).$$ \end{proof} Let us now recall how the conditional probability in Lemma \ref{lem:fki} yields a realisation of the FK-Ising coupling. Assume $V$ is finite. Let $(J_{e})_{e\in E}$ be a family of positive weights. An \textit{Ising model} on $V$ with interaction constants $(J_{e})_{e\in E}$ is a probability on configurations of spins $({\sigma}_{x})_{x\in V}\in \{+1,-1\}^V$ such that \begin{displaymath} \mathbb{P}^{\rm Isg}_{J}((\sigma_x)_{x\in V})= \dfrac{1}{\mathcal{Z}^{\rm Isg}_{J}}\exp\left(\sum_{e\in E} J_{e}\sigma_{e_{+}}\sigma_{e_{-}}\right).
\end{displaymath} An \textit{FK-Ising random cluster model} with weights $(1-e^{-2J_{e}})_{e\in E}$ is a random configuration of open (value $1$) and closed (value $0$) edges such that \begin{displaymath} \mathbb{P}^{\rm FK-Isg}_{J}((\omega_{e})_{e\in E})= \dfrac{1}{\mathcal{Z}^{\rm FK-Isg}_{J}} 2^{\sharp~\text{clusters}} \prod_{e\in E}(1-e^{-2J_{e}})^{\omega_{e}}(e^{-2J_{e}})^{1-\omega_{e}}, \end{displaymath} where ``$\sharp~\text{clusters}$'' denotes the number of clusters created by open edges. The well-known FK-Ising and Ising coupling reads as follows. \begin{proposition}[FK-Ising and Ising coupling] \label{FK-Ising} Given an FK-Ising model, sample on each cluster an independent uniformly distributed spin. The spins are then distributed according to the Ising model. Conversely, given a spin configuration $\hat{\sigma}$ following the Ising distribution, declare each edge $e$ such that $\hat{\sigma}_{e_{-}}\hat{\sigma}_{e_{+}}<0$ closed, and each edge $e$ such that $\hat{\sigma}_{e_{-}}\hat{\sigma}_{e_{+}}>0$ open with probability $1-e^{-2J_{e}}$. Then the open edges are distributed according to the FK-Ising model. The two couplings between FK-Ising and Ising are the same. \end{proposition} Consider the GFF $\varphi$ on $\mathcal{G}$ distributed according to $P_{\varphi}$. Let $J_{e}(\vert\varphi\vert)$ be the random interaction constants \begin{displaymath} J_{e}(\vert\varphi\vert)=W_{e}\vert\varphi_{e_{-}}\varphi_{e_{+}}\vert. \end{displaymath} Conditionally on $\vert\varphi\vert$, $(\operatorname{sign}(\varphi_{x}))_{x\in V}$ follows an Ising distribution with interaction constants $(J_{e}(\vert\varphi\vert))_{e\in E}$: indeed, the Dirichlet form (\ref{Dirichlet-form}) can be written as \begin{displaymath} \mathcal{E}(\varphi,\varphi)=\sum_{x\in V}\kappa_{x} \varphi(x)^{2}+ \sum_{x\in V}(\varphi(x))^2(\sum_{y\sim x} W_{x,y})- 2\sum_{e\in E}J_e(\vert\varphi\vert) \operatorname{sign}(\varphi(e_{+}))\operatorname{sign}(\varphi(e_{-})).
\end{displaymath} Similarly, when $\varphi\sim P_{\varphi}^{\{x_0\},\sqrt{2u}}$ has boundary condition $\sqrt{2u}\ge 0$ at $x_0$, then $(\operatorname{sign}(\varphi_{x}))_{x\in V}$ has an Ising distribution with interaction $(J_{e}(\vert\varphi\vert))_{e\in E}$ and conditioned on $\sigma_{x_0}=+1$. Now, conditionally on $\varphi$, $\mathcal{O}(\tilde{\varphi})$ has FK-Ising distribution with weights $(1-e^{-2J_{e}(\vert\varphi\vert)})_{e\in E}$. Indeed, the probability for $e\in\mathcal{O}(\tilde{\varphi})$ conditionally on $\varphi$ is $1-e^{-2J_{e}(\vert\varphi\vert)}$, by Lemma \ref{lem:fki}, as in Proposition \ref{FK-Ising}. Note that, given that $\mathcal{O}(\tilde{\varphi})$ has FK-Ising distribution, the fact that the sign of $\tilde{\varphi}$ on its connected components is distributed independently and uniformly in $\{-1,1\}$ can be seen either as a consequence of Proposition \ref{FK-Ising}, or from Theorem \ref{thm:Lupu}. Given $\varphi=(\varphi_x)_{x\in V}$ on the discrete graph $\mathcal{G}$, we introduce in Definition \ref{def_FK-Ising} a random set of edges $\mathcal{O}(\varphi)$ which has the distribution of $\mathcal{O}(\tilde{\varphi})$ conditionally on $\varphi=(\varphi_x)_{x\in V}$. \begin{definition}\label{def_FK-Ising} We let $\mathcal{O}(\varphi)$ be a random set of edges which has the distribution of $\mathcal{O}(\tilde{\varphi})$ conditionally on $\varphi=(\varphi_x)_{x\in V}$ given by Lemma \ref{lem:fki}. \end{definition} \subsection{Distribution of $\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})$ conditionally on $\mathcal{L}_{\frac{1}{2}}$ } \label{randomcurrent} The distribution of $\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})$ conditionally on $\mathcal{L}_{\frac{1}{2}}$ can be retrieved from Corollary 3.6 in \cite{Lupu2014LoopsGFF}, which reads as follows.
\begin{lemma}[Corollary 3.6 in \cite{Lupu2014LoopsGFF}] \label{36} Conditionally on $\mathcal{L}_{\frac{1}{2}}$, the events $\lbrace e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})\rbrace$, for $e\in E\setminus\mathcal{O}(\mathcal{L}_{\frac{1}{2}})$, are independent and have probability \begin{equation} \label{cp} \exp\left(-2 W_{e} \sqrt{L_{e_{+}}(\mathcal{L}_{\frac{1}{2}})L_{e_{-}}(\mathcal{L}_{\frac{1}{2}})}\right). \end{equation} \end{lemma} This result gives rise, together with Theorem \ref{thm:Lupu}, to the following discrete version of Lupu's isomorphism, which is stated without any recourse to the cable graph induced by $\mathcal{G}$. \begin{definition} \label{def:out} Let $(\omega_{e})_{e\in E}\in\lbrace 0,1\rbrace^{E}$ be a percolation defined as follows: conditionally on $\mathcal{L}_{\frac{1}{2}}$, the random variables $(\omega_{e})_{e\in E}$ are independent, and $\omega_{e}$ equals $0$ with conditional probability given by \eqref{cp}. Let $\mathcal{O}_{+}(\mathcal{L}_{\frac{1}{2}})$ be the set of edges \begin{displaymath} \mathcal{O}_{+}(\mathcal{L}_{\frac{1}{2}})=\mathcal{O}(\mathcal{L}_{\frac{1}{2}}) \cup \lbrace e\in E\vert \omega_{e}=1\rbrace. \end{displaymath} \end{definition} \begin{proposition}[Discrete version of Lupu's isomorphism, Theorem 1 bis in \cite{Lupu2014LoopsGFF}] \label{PropIsoLupuLoops} Given a loop soup $\mathcal{L}_{\frac{1}{2}}$, let $\mathcal{O}_{+}(\mathcal{L}_{\frac{1}{2}})$ be as in Definition \ref{def:out}. Let $(\sigma_{x})_{x\in V}\in\lbrace -1,+1\rbrace^{V}$ be random spins taking constant values on clusters induced by $\mathcal{O}_{+}(\mathcal{L}_{\frac{1}{2}})$ ($\sigma_{e_{-}}=\sigma_{e_{+}}$ if $e\in \mathcal{O}_{+}(\mathcal{L}_{\frac{1}{2}})$) and such that the values on each cluster, conditional on $\mathcal{L}_{\frac{1}{2}}$ and $\mathcal{O}_{+}(\mathcal{L}_{\frac{1}{2}})$, are independent and uniformly distributed.
Then \begin{displaymath} \left(\sigma_{x}\sqrt{2 L_{x}(\mathcal{L}_{\frac{1}{2}})}\right)_{x\in V} \end{displaymath} is a Gaussian free field distributed according to $P_{\varphi}$. \end{proposition} Proposition \ref{PropIsoLupuLoops} induces the following coupling between FK-Ising and random currents. If $V$ is finite, a \textit{random current model} on $\mathcal{G}$ with weights $(J_{e})_{e\in E}$ is a random assignment to each edge $e$ of a non-negative integer $\hat{n}_{e}$ such that for all $x\in V$, \begin{displaymath} \sum_{e~\text{adjacent to}~x}\hat{n}_{e} \end{displaymath} is even, which is called the \textit{parity condition}. The probability of a configuration $(n_{e})_{e\in E}$ satisfying the parity condition is \begin{displaymath} \mathbb{P}^{\rm RC}_{J}(\forall e\in E, \hat{n}_{e}=n_{e})= \dfrac{1}{\mathcal{Z}^{\rm RC}_{J}}\prod_{e\in E}\dfrac{(J_{e})^{n_{e}}}{n_{e}!}, \end{displaymath} where actually $\mathcal{Z}^{\rm Isg}_{J}=2^{\vert V\vert}\mathcal{Z}^{\rm RC}_{J}$, as one sees by expanding the exponentials in $\mathcal{Z}^{\rm Isg}_{J}$ and summing over the spins. Let \begin{displaymath} \mathcal{O}(\hat{n})=\lbrace e\in E\vert \hat{n}_{e}>0\rbrace. \end{displaymath} The open edges in $\mathcal{O}(\hat{n})$ induce clusters on the graph $\mathcal{G}$. Given a loop soup $\mathcal{L}_{\alpha}$, we denote by $N_{e}(\mathcal{L}_{\alpha})$ the number of times the loops in $\mathcal{L}_{\alpha}$ cross the nonoriented edge $e\in E$. The transience of the Markov jump process $X$ implies that $N_{e}(\mathcal{L}_{\alpha})$ is a.s. finite for all $e\in E$. If $\alpha=\frac{1}{2}$, we have the following identity (see for instance \cite{Werner2015}): \begin{LoopsRC} Assume $V$ is finite and consider the loop soup $\mathcal{L}_{\frac{1}{2}}$. Conditionally on the occupation field $(L_{x}(\mathcal{L}_{\frac{1}{2}}))_{x\in V}$, $(N_{e}(\mathcal{L}_{\frac{1}{2}}))_{e\in E}$ is distributed as a random current model with weights $\left(2W_{e}\sqrt{L_{e_{-}}(\mathcal{L}_{\frac{1}{2}})L_{e_{+}} (\mathcal{L}_{\frac{1}{2}})}\right)_{e\in E}$.
If $\varphi$ is the GFF on $\mathcal{G}$ given by Le Jan's or Lupu's isomorphism, then these weights are $(J_{e}(\vert\varphi\vert))$. \end{LoopsRC} Conditionally on the occupation field $(L_{x}(\mathcal{L}_{\frac{1}{2}}))_{x\in V}$, $\mathcal{O}(\mathcal{L}_{\frac{1}{2}})$ is the set of edges occupied by a random current and $\mathcal{O}_{+}(\mathcal{L}_{\frac{1}{2}})$ is the set of edges occupied by an FK-Ising configuration. Lemma \ref{lem:fki} and Proposition \ref{PropIsoLupuLoops} imply the following coupling, as noted by Lupu and Werner in \cite{lupu-werner}. \begin{proposition}[Random current and FK-Ising coupling, \cite{lupu-werner}] \label{RCFKIsing} Assume $V$ is finite. Let $\hat{n}$ be a random current on $\mathcal{G}$ with weights $(J_{e})_{e\in E}$. Let $(\omega_{e})_{e\in E}\in\lbrace 0,1\rbrace^{E}$ be an independent percolation, each edge being opened (value $1$) independently with probability $1-e^{-J_{e}}$. Then \begin{displaymath} \mathcal{O}(\hat{n})\cup\lbrace e\in E\vert \omega_{e}=1\rbrace \end{displaymath} is distributed like the open edges in an FK-Ising model with weights $(1-e^{-2 J_{e}})_{e\in E}$. \end{proposition} \subsection{Generalized second Ray-Knight ``version'' of Lupu's isomorphism}\label{sec:glupu} We are now in a position to state the coupled version of the generalized second Ray-Knight theorem. \begin{theorem} \label{Lupu} Let $x_{0}\in V$. Let $(\varphi_{x}^{(0)})_{x\in V}$ be distributed according to $P_{\varphi}^{\lbrace x_{0}\rbrace,0}$, and define $\mathcal{O}(\varphi^{(0)})$ as in Definition \ref{def_FK-Ising}. Let $X$ be an independent Markov jump process started from $x_{0}$. Fix $u>0$.
If $\tau_{u}^{x_{0}}<\zeta$, we let $\mathcal{O}_{u}$ be the random subset of $E$ which contains $\mathcal{O}(\varphi^{(0)})$, the edges used by the path $(X_{t})_{0\leq t\leq \tau_{u}^{x_{0}}}$, and additional edges $e$ opened conditionally independently with probability \begin{displaymath} 1-e^{W_{e}\vert\varphi_{e_{-}}^{(0)}\varphi_{e_{+}}^{(0)}\vert - W_{e}\sqrt{(\varphi_{e_{-}}^{(0)2}+2\ell_{e_{-}}(\tau_{u}^{x_{0}})) (\varphi_{e_{+}}^{(0)2}+2\ell_{e_{+}}(\tau_{u}^{x_{0}}))}}. \end{displaymath} We let $\sigma\in\lbrace -1,+1\rbrace^{V}$ be random spins sampled uniformly independently on each cluster induced by $\mathcal{O}_{u}$, pinned at $x_0$, i.e. $\sigma_{x_0}=1$, and define \begin{displaymath} \varphi_{x}^{(u)}:=\sigma_{x}\sqrt{\varphi_{x}^{(0)2}+2\ell_{x}(\tau_{u}^{x_{0}})}. \end{displaymath} Then, conditionally on $\tau_{u}^{x_{0}}<\zeta$, $\varphi^{(u)}$ has distribution $P_{\varphi}^{\lbrace x_{0}\rbrace,\sqrt{2u}}$, and, conditionally on $\varphi^{(u)}$, $\mathcal{O}_{u}$ has the distribution of $\mathcal{O}(\varphi^{(u)})$. \end{theorem} \begin{remark} One consequence of that coupling is that the path $(X_{s})_{s\le \tau_{u}^{x_{0}}}$ stays in the positive connected component of ${x_0}$ for $\varphi^{(u)}$. This yields a coupling between the range of the Markov chain and the sign component of $x_{0}$ inside a GFF distributed according to $P_{\varphi}^{\lbrace x_{0}\rbrace,\sqrt{2u}}$. \end{remark} \noindent{\it Proof of Theorem \ref{Lupu}:~} The proof is based on \cite{Lupu2014LoopsGFF}. Let $D=V\setminus\{x_0\}$, and let $\tilde{\mathcal{L}}_{\frac{1}{2}}$ be the loop soup of intensity $1/2$ on the cable graph $\tilde{\mathcal{G}}$, which we decompose into $\tilde{\mathcal{L}}_{\frac{1}{2}}^{(x_0)}$ (resp. $\tilde{\mathcal{L}}_{\frac{1}{2}}^{D}$) the loop soup hitting (resp. not hitting) $x_0$, which are independent. We let $\mathcal{L}_{\frac{1}{2}}$ and $\mathcal{L}_{\frac{1}{2}}^{(x_0)}$ (resp. $\mathcal{L}_{\frac{1}{2}}^{D}$) be the prints of these loop soups on $V$ (resp.
on $D=V\setminus\{x_0\}$). We condition on $L_{x_0}(\mathcal{L}_{\frac{1}{2}})=u$. Theorem \ref{thm:Lupu} implies (recall also Definition \ref{def_FK-Ising}) that we can couple $\tilde{\mathcal{L}}_{\frac{1}{2}}^{D}$ with $\varphi^{(0)}$ so that $L_x(\mathcal{L}_{\frac{1}{2}}^{D})=\varphi_x^{(0)2}/2$ for all $x\in V$, and $\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}}^{D})=\mathcal{O}(\varphi^{(0)})$. Define $\varphi^{(u)}=(\varphi^{(u)}_x)_{x\in V}$ from $\tilde{\mathcal{L}}_{\frac{1}{2}}$ by, for all $x\in V$, \begin{equation*} \label{abs} |\varphi_x^{(u)}|=\sqrt{2L_x(\mathcal{L}_{\frac{1}{2}})} \end{equation*} and $\varphi_x^{(u)}=\sigma_x|\varphi_x^{(u)}|$, where $\sigma\in\{-1,+1\}^V$ are random spins sampled uniformly independently on each cluster induced by $\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})$, pinned at $x_0$, i.e. $\sigma_{x_0}=1$. Then, by Theorem \ref{thm:Lupu}, $\varphi^{(u)}$ has distribution $P_{\varphi}^{\lbrace x_{0}\rbrace,\sqrt{2u}}$. For all $x\in V$, we have $$L_x(\tilde{\mathcal{L}}_{\frac{1}{2}})=\frac{\varphi_x^{(0)2}}{2}+L_x(\mathcal{L}_{\frac{1}{2}}^{(x_0)}).$$ On the other hand, conditionally on $L_.(\mathcal{L}_{\frac{1}{2}})$, \begin{align*} &\mathbb{P}(e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})\,|\, e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}}^D)\cup\mathcal{O}(\mathcal{L}_{\frac{1}{2}})) =\frac{\mathbb{P}(e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}}))}{\mathbb{P}(e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}}^D)\cup\mathcal{O}(\mathcal{L}_{\frac{1}{2}}))}= \frac{\mathbb{P}(e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})\,|\, e\not\in\mathcal{O}(\mathcal{L}_{\frac{1}{2}}))}{\mathbb{P}(e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}}^D)\,|\, e\not\in\mathcal{O}(\mathcal{L}_{\frac{1}{2}}))}\\ &=\frac{\mathbb{P}(e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})\,|\,
e\not\in\mathcal{O}(\mathcal{L}_{\frac{1}{2}}))}{\mathbb{P}(e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}}^D)\,|\, e\not\in\mathcal{O}(\mathcal{L}_{\frac{1}{2}}^D))} =\exp\left(-2W_e\sqrt{L_{e_-}(\mathcal{L}_{\frac{1}{2}})L_{e_+}(\mathcal{L}_{\frac{1}{2}})} +2W_e\sqrt{L_{e_-}(\mathcal{L}_{\frac{1}{2}}^D)L_{e_+}(\mathcal{L}_{\frac{1}{2}}^D)}\right), \end{align*} where we use in the third equality that the event $e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}}^D)$ is measurable with respect to the $\sigma$-field generated by $\tilde{\mathcal{L}}_{\frac{1}{2}}^D$, which is independent of $\tilde{\mathcal{L}}_{\frac{1}{2}}^{(x_0)}$, and where we use Lemma \ref{36} in the fourth equality, for $\tilde{\mathcal{L}}_{\frac{1}{2}}$ and for $\tilde{\mathcal{L}}_{\frac{1}{2}}^D$. We conclude the proof by observing that, conditionally on $L_{x_{0}}(\mathcal{L}_{\frac{1}{2}}^{(x_0)})=u$, the occupation field of $\mathcal{L}_{\frac{1}{2}}^{(x_0)}$ has the law of the occupation field $\ell(\tau_{u}^{x_{0}})$ of the Markov chain under $\mathbb{P}_{x_{0}}(\cdot \vert \tau_{u}^{x_{0}}<\zeta)$. {\hfill $\Box$} \section{Inversion of the signed isomorphism} \label{sec:inversion} In \cite{SabotTarres2015RK}, Sabot and Tarrès give a new proof of the generalized second Ray-Knight theorem together with a construction that inverts the coupling between the square of a GFF conditioned by its value at a vertex $x_{0}$ and the excursions of the jump process $X$ from and to $x_{0}$. In this paper we are interested in inverting the coupling of Theorem \ref{Lupu} with the signed GFF: more precisely, we want to describe the law of $(X_t)_{0\le t\le \tau_u^{x_0}}$ conditionally on $\varphi^{(u)}$. We present in Section \ref{sec_Poisson} an inversion involving an extra Poisson process. We provide in Section \ref{sec_dicr_time} a discrete-time description of the process and in Section \ref{sec_jump} an alternative description via jump rates.
Sections \ref{sec:lejaninv} and \ref{sec:coupinv} are respectively dedicated to a signed inversion of Le Jan's isomorphism for loop soups, and to an inversion of the coupling of random current with FK-Ising. \subsection{A description via an extra Poisson point process}\label{sec_Poisson} Let $(\check \varphi_x)_{x\in V}$ be a real function on $V$ such that $\check\varphi_{x_0}=+\sqrt{2u}$ for some $u>0$. Set $$ \check \Phi_x=\vert\check\varphi_x\vert, \;\;\sigma_x=\operatorname{sign}(\check\varphi_x). $$ We define a self-interacting process $(\check X_t, (\check n_e(t))_{e\in E})$ living on $V\times {\mathbb{N}}^E$ as follows. The process $\check X$ starts at $\check X(0)=x_0$. For $t\ge 0$, we set $$ \check\Phi_x(t)=\sqrt{(\check\Phi_x)^2-2\check\ell_x(t)},\;\;\forall x\in V,\;\;\;\hbox{ and }\; J_e(\check\Phi(t))=W_e \check\Phi_{e-}(t)\check\Phi_{e+}(t), \;\; \forall e\in E, $$ where $\check\ell_x(t)=\int_0^t{{\mathbbm 1}}_{\{\check X_s=x\}}ds$ is the local time of the process $\check X$ up to time $t$. Let $(N_e(u))_{u\ge 0}$, $e\in E$, be independent Poisson Point Processes on ${\mathbb R}_+$ with intensity 1. We set $$ \check n_e(t)= \begin{cases} N_e(2J_e(\check\Phi(t))), &\hbox{ if } \sigma_{e-}\sigma_{e+}=+1, \\ 0, &\hbox{ if } \sigma_{e-}\sigma_{e+}=-1. \end{cases} $$ We also denote by $\check {\mathcal C}(t)\subset E$ the configuration of edges $e$ such that $\check n_e(t)>0$. As time increases, the interaction parameters $J_{e}(\check\Phi(t))$ decrease for the edges neighboring $\check X_t$, and at some random times $\check n_e(t)$ may drop by 1.
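Between two drops the evolution is deterministic: while $\check X$ sits at a vertex $x$, only $\check\ell_{x}$ grows, so for an edge $e$ adjacent to $x$ the level $2J_{e}(\check\Phi(t))$ equals $2W_{e}\check\Phi_{y}\sqrt{\check\Phi_{x}^{2}-2s}$, with $s$ the additional local time spent at $x$, and the time at which it reaches a given point of $N_{e}$ can be solved in closed form. A sketch of this elementary computation, with our own variable names:

```python
import math

def time_to_level(a, b, w, p):
    """Extra local time s spent at the current vertex until 2*J_e falls to
    the level p, where 2*J_e(s) = 2*w*b*sqrt(a**2 - 2*s); here a is the
    current value at the occupied vertex, b the (frozen) value at the other
    extremity, w the conductance.  A sketch; the names are ours."""
    r = p / (2.0 * w * b)
    assert r <= a, "the level p must lie below the current value of 2*J_e"
    return (a * a - r * r) / 2.0

# Sanity check: plugging the answer back in recovers the level p.
a, b, w, p = 2.0, 1.5, 0.8, 3.0
s = time_to_level(a, b, w, p)
assert abs(2 * w * b * math.sqrt(a * a - 2 * s) - p) < 1e-12
```

In a simulation one would take for $p$ the largest point of $N_{e}$ strictly below the current level $2J_{e}(\check\Phi(t))$, which yields the next candidate drop time for that edge.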
The process $(\check X_t)_{t\ge 0}$ is defined as the process that jumps only at the times when one of the $\check n_e(t)$ drops by 1, as follows: \begin{itemize} \item if $\check n_e(t)$ decreases by 1 at time $t$, but does not create a new cluster in $\check {\mathcal C}(t)$, then $\check X_t$ crosses the edge $e$ with probability ${1/2}$ or does not move with probability ${1/2}$, \item if $\check n_e(t)$ decreases by 1 at time $t$, and does create a new cluster in $\check {\mathcal C}(t)$, then $\check X_t$ moves to (or stays at) the unique extremity of $e$ which is in the cluster of the origin $x_0$ in the new configuration. \end{itemize} We set $$ \check T:=\inf\{t\ge 0,\;\; \exists x\in V, \hbox{ s. t. } \check\Phi_x(t)=0\}; $$ clearly, the process is well-defined up to time $\check T$. \begin{proposition} For all $0\le t\le \check T$, $\check X_t$ is in the connected component of $x_0$ of the configuration $\check {\mathcal C}(t)$. If $V$ is finite, the process ends at $x_0$, i.e. $\check X_{\check T}=x_0$. \end{proposition} \begin{theorem} \label{thm-Poisson} Assume that $V$ is finite. With the notation of Theorem \ref{Lupu}, conditioned on $\varphi^{(u)}=\check\varphi$, $(X_{t})_{t\le \tau_{u}^{x_{0}}}$ has the law of $(\check X_{\check T-t})_{0\le t\le \check T}$. Moreover, conditioned on $\varphi^{(u)}=\check\varphi$, $(\varphi^{(0)},\mathcal{O}(\varphi^{(0)}))$ has the law of $((\sigma'_{x}\check\Phi_{x}(\check T))_{x\in V}, \check{\mathcal C}(\check T))$, where $(\sigma'_x)_{x\in V}\in \lbrace -1,+1\rbrace^{V}$ are random spins sampled uniformly independently on each cluster induced by $\check{\mathcal C}(\check T)$, with the condition that $\sigma'_{x_0}=+1$. If $V$ is infinite, then $P_{\varphi}^{\lbrace x_{0}\rbrace, \sqrt{2u}}$-a.s., $\check X_t$ (with the initial condition $\check\varphi=\varphi^{(u)}$) ends at $x_0$, i.e. $\check T<+\infty$ and $\check X_{\check T}=x_0$. All previous conclusions for the finite case still hold.
\end{theorem} \subsection{Discrete time description of the process} \label{sec_dicr_time} We give a discrete-time description of the process $(\check X_t, (\check n_e(t))_{e\in E})$ that appears in the previous section. Let $t_{0}=0$ and $0<t_{1}<\dots<t_{j}$ be the stopping times when one of the stacks $\check n_e(t)$ decreases by $1$, where $t_{j}$ is the time when one of the stacks is completely depleted. It is elementary to check the following: \begin{proposition} \label{PropDiscrTime} The discrete time process $(\check X_{t_{i}}, (\check n_e(t_{i}))_{e\in E})_{0\leq i\leq j}$ is a stopped Markov process. The transition from time $i-1$ to $i$ is the following: \begin{itemize} \item first choose an edge $e$ adjacent to the vertex $\check X_{t_{i-1}}$ with probability proportional to $\check n_e(t_{i-1})$, \item decrease the stack $\check n_e(t_{i-1})$ by 1, \item if decreasing $\check n_e(t_{i-1})$ by 1 does not create a new cluster in $\check {\mathcal C}(t_{i-1})$, then $\check X_{t_{i-1}}$ crosses the edge $e$ with probability ${1/2}$ or does not move with probability ${1/2}$, \item if decreasing $\check n_e(t_{i-1})$ by 1 does create a new cluster in $\check {\mathcal C}(t_{i-1})$, then $\check X_{t_{i-1}}$ moves to (or stays at) the unique extremity of $e$ which is in the cluster of the origin $x_0$ in the new configuration. \end{itemize} \end{proposition} \subsection{An alternative description via jump rates}\label{sec_jump} We provide an alternative description of the process $(\check X_t, \check {\mathcal C}(t))$ that appears in Section \ref{sec_Poisson}.
\begin{proposition}\label{prop-jump} The process $(\check X_t, \check{\mathcal C}(t))$ defined in Section \ref{sec_Poisson} can be alternatively described by its jump rates: conditionally on its past at time $t$, if $\check X_t=x$, $y\sim x$ and $\lbrace x,y\rbrace\in \check{\mathcal{C}}(t)$, then \begin{itemize} \item[(1)] $\check X$ jumps to $y$ without modification of $\check{\mathcal C}(t)$ at rate \begin{displaymath} W_{x,y}\dfrac{\check\Phi_{y}(t)}{\check\Phi_{x}(t)}, \end{displaymath} \item[(2)] the edge $\lbrace x,y\rbrace$ is closed in $\check{\mathcal C}(t)$ at rate \begin{displaymath} 2W_{x,y}\dfrac{\check\Phi_{y}(t)}{\check\Phi_{x}(t)} \left(e^{2W_{x,y}\check\Phi_{x}(t)\check\Phi_{y}(t)}-1\right)^{-1} \end{displaymath} and, conditionally on that last event: if $y$ is connected to $x$ in the configuration $\check {\mathcal C}(t)\setminus\{x,y\}$, then $\check X$ simultaneously jumps to $y$ with probability $1/2$ and stays at $x$ with probability $1/2$; otherwise $\check X_t$ moves to (or stays at) the unique extremity of $\{x,y\}$ which is in the cluster of the origin $x_0$ in the new configuration. \end{itemize} \end{proposition} \begin{remark} It is clear from this description that the joint process $(\check X_t, \check {\mathcal C}(t), \check \Phi(t))$ is a Markov process, well defined up to the time $$ \check T:=\inf\{t\ge 0:\;\; \exists x\in V, \hbox{ s.t. } \check\Phi_x(t)=0\}. $$ \end{remark} \begin{remark} One can also retrieve the process in Section \ref{sec_Poisson} from the representation in Proposition \ref{prop-jump} as follows. Consider the representation of Proposition \ref{prop-jump} on the graph where each edge $e$ is replaced by a large number $N$ of parallel edges with conductance $W_e/N$. Consider now $\check n^{(N)}_{x,y}(t)$, the number of parallel edges that are open in the configuration $\check {\mathcal C}(t)$ between $x$ and $y$.
Then, when $N\to\infty$, $(\check n^{(N)}(t))_{t\ge0}$ converges in law to $(\check n(t))_{t\ge0}$, defined in Section \ref{sec_Poisson}. \end{remark} \noindent {\it Proof of Proposition \ref{prop-jump}:~} Assume $\check X_t=x$, fix $y\sim x$ and let $e=\{x,y\}$. Recall that $\{x,y\}\in\check{\mathcal C}(t)$ iff $\check n_e(t)\ge1$. Let us first prove (1), writing $J_e(t)$ for $J_e(\check\Phi(t))$: \begin{align*} &\mathbb{P}\left(\check X\text{ jumps to $y$ on time interval $[t,t+\Delta t]$ without modification of }\check{\mathcal C}(t)\,|\,\{x,y\}\in\check{\mathcal C}(t)\right)\\ &=\frac{1}{2}\mathbb{P}\left(\check n_e(t)-\check n_e(t+\Delta t)=1,\,\check n_e(t+\Delta t)\ge1\,|\,\check n_e(t)\ge1\right)\\ &=\frac{1}{2}(2J_e(t)-2J_e(t+\Delta t))+o(\Delta t)=W_{x,y}\dfrac{\check\Phi_{y}(t)}{\check\Phi_{x}(t)}\Delta t+o(\Delta t). \end{align*} Similarly, (2) follows from the following computation: \begin{align*} &\mathbb{P}\left(\{x,y\}\text{ closed in }\check{\mathcal C}(t+\Delta t)\,|\,\{x,y\}\in\check{\mathcal C}(t)\right) =\mathbb{P}\left(\check n_e(t+\Delta t)=0\,|\,\check n_e(t)\ge1\right)\\ &=\frac{\mathbb{P}\left(\check n_e(t)=1,\,\check n_e(t+\Delta t)=0\right)}{\mathbb{P}\left(\check n_e(t)\ge1\right)} =\frac{2e^{-2J_e(t)}}{1-e^{-2J_e(t)}}(J_e(t)-J_e(t+\Delta t))+o(\Delta t). \end{align*} {\hfill $\Box$} We easily deduce from Proposition \ref{prop-jump} and Theorem \ref{thm-Poisson} the following alternative inversion of the coupling in Theorem \ref{Lupu}. \begin{theorem}\label{thm-jump-rates} With the notation of Theorem \ref{Lupu}, conditionally on $(\varphi^{(u)},\mathcal{O}_{u})$, $(X_{t})_{t\le \tau_{u}^{x_{0}}}$ has the law of the self-interacting process $(\check X_{\check T-t})_{0\le t\le \check T}$ defined by the jump rates of Proposition \ref{prop-jump}, starting with $$ \check \Phi_x=\sqrt{(\varphi_{x}^{(0)})^2+2\ell_{x}(\tau_{u}^{x_{0}})} \hbox{ and } \check{\mathcal C}(0)=\mathcal{O}_{u}.
$$ Moreover $(\varphi^{(0)},\mathcal{O}(\varphi^{(0)}))$ has the same law as $(\sigma'\check\Phi(\check T), \check{\mathcal C}(\check T))$, where $(\sigma'_x)_{x\in V}$ is a configuration of signs obtained by picking a sign at random independently on each connected component of $\check{\mathcal C}(\check T)$, with the condition that the component of $x_0$ has a $+$ sign. \end{theorem} \subsection{A signed version of Le Jan's isomorphism for loop soup} \label{sec:lejaninv} Let us first recall how the loops in $\mathcal{L}_{\alpha}$ are connected to the excursions of the jump process $X$. \begin{proposition}[From excursions to loops] \label{PropPD} Let $\alpha>0$ and $x_{0}\in V$. $L_{x_{0}}(\mathcal{L}_{\alpha})$ is distributed according to a Gamma law $\Gamma(\alpha, G(x_{0},x_{0}))$, where $G$ is the Green's function. Let $u>0$, and consider the path $(X_{t})_{0\leq t\leq \tau_{u}^{x_{0}}}$ conditioned on $\tau_{u}^{x_{0}}<\zeta$. Let $(Y_{j})_{j\geq 1}$ be an independent Poisson-Dirichlet partition $PD(0,\alpha)$ of $[0,1]$. Let $S_{0}=0$ and \begin{displaymath} S_{j}=\sum_{i=1}^{j}Y_{i}. \end{displaymath} Let \begin{displaymath} \tau_{j}= \tau_{u S_{j}}^{x_{0}}. \end{displaymath} Consider the family of paths \begin{displaymath} \left((X_{\tau_{j-1}+t})_{0\leq t\leq \tau_{j}-\tau_{j-1}}\right)_{j\geq 1}. \end{displaymath} It is a countable family of loops rooted in $x_{0}$. It has the same law as the family of all the loops in $\mathcal{L}_{\alpha}$ that visit $x_{0}$, conditioned on $L_{x_0}(\mathcal{L}_{\alpha})=u$. \end{proposition} Next we describe how to invert the discrete version of Lupu's isomorphism (Proposition \ref{PropIsoLupuLoops}) for the loop-soup, in the same way as in Theorem \ref{thm-Poisson}. Let $(\check \varphi_x)_{x\in V}$ be a real function on $V$ such that $\check\varphi_{x_0}=+\sqrt{2u}$ for some $u>0$. Set $$ \check \Phi_x=\vert\check\varphi_x\vert, \;\;\sigma_x=\operatorname{sign}(\check\varphi_x).
$$ Let $(x_{i})_{1\leq i\leq\vert V\vert}$ be an enumeration of $V$ (which may be infinite). We define by induction the self-interacting processes $((\check X_{i,t})_{1\leq i\leq\vert V\vert}, (\check n_e(t))_{e\in E})$. $\check{T}_{i}$ will denote the end-time for $\check X_{i,t}$, and $\check{T}^{+}_{i}=\sum_{1\leq j\leq i}\check{T}_{j}$. By definition, $\check{T}^{+}_{0}=0$. $L_{x}(t)$ will denote \begin{displaymath} L_{x}(t):=\sum_{1\leq i\leq\vert V\vert} \check{\ell}_{x}(i,0\vee(t-\check{T}^{+}_{i-1})), \end{displaymath} where $\check{\ell}_{x}(i,t)$ are the occupation times for $\check X_{i,t}$. For $t\ge 0$, we set $$ \check\Phi_x(t)=\sqrt{(\check\Phi_x)^2-2L_x(t)},\;\;\forall x\in V,\;\;\;\hbox{ and }\; J_e(\check\Phi(t))=W_e \check\Phi_{e-}(t)\check\Phi_{e+}(t), \;\; \forall e\in E. $$ The end-times $\check{T}_{i}$ are defined by induction as \begin{displaymath} \check{T}_{i}=\inf\lbrace t\geq 0\vert \check{\Phi}_{\check{X}_{i,t}}(t+\check{T}^{+}_{i-1})=0\rbrace. \end{displaymath} Let $(N_e(u))_{u\ge 0}$ be independent Poisson point processes on ${\mathbb R}_+$ with intensity 1, one for each edge $e\in E$. We set $$ \check n_e(t)= \begin{cases} N_e(2J_e(\check\Phi(t))), &\hbox{ if } \sigma_{e-}\sigma_{e+}=+1, \\ 0, &\hbox{ if } \sigma_{e-}\sigma_{e+}=-1. \end{cases} $$ We also denote by $\check {\mathcal C}(t)\subset E$ the configuration of edges such that $\check n_e(t)>0$. $\check X_{i,t}$ starts at $x_{i}$.
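In an implementation, the sign-dependent initialization of the stacks $\check n_e$ above (a Poisson variable of parameter $2J_e$ on edges where the field has constant sign, and $0$ otherwise) can be sketched as follows; the graph encoding and helper names are illustrative assumptions, not taken from the paper:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplication method; adequate for the small parameters used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def init_stacks(edges, W, phi, rng):
    """Sample the initial stacks: n_e ~ Poisson(2 J_e) when the field phi has
    the same sign at both endpoints of e, and n_e = 0 otherwise, where
    J_e = W_e * |phi_{e-}| * |phi_{e+}|."""
    stacks = {}
    for e in edges:
        x, y = e
        if phi[x] * phi[y] > 0:          # same sign: edge open with Poisson weight
            lam = 2.0 * W[e] * abs(phi[x]) * abs(phi[y])
            stacks[e] = poisson(lam, rng)
        else:                            # opposite signs: edge stays closed
            stacks[e] = 0
    return stacks
```

An edge whose endpoints carry opposite signs is closed with probability one, which reproduces the constraint $\check n_e(t)=0$ when $\sigma_{e-}\sigma_{e+}=-1$.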
For $t\in[\check{T}^{+}_{i-1},\check{T}^{+}_{i}]$, \begin{itemize} \item if $\check n_e(t)$ decreases by 1 at time $t$, but does not create a new cluster in $\check {\mathcal C}(t)$, then $\check X_{i,t-\check{T}^{+}_{i-1}}$ crosses the edge $e$ with probability ${1/2}$ or does not move with probability ${1/2}$, \item if $\check n_e(t)$ decreases by 1 at time $t$, and does create a new cluster in $\check {\mathcal C}(t)$, then $\check X_{i,t-\check{T}^{+}_{i-1}}$ moves with probability 1 to the unique extremity of $e$ which is in the cluster of the origin $x_i$ in the new configuration (possibly not moving at all). \end{itemize} By induction, using Theorem \ref{thm-Poisson}, we deduce the following: \begin{theorem} \label{ThmPoissonLoopSoup} Let $\varphi$ be a GFF on $\mathcal{G}$ with the law $P_{\varphi}$. If one sets $\check{\varphi}=\varphi$ in the preceding construction, then for all $i\in \lbrace 1,\dots,\vert V\vert\rbrace$, $\check{T}_{i}<+\infty$, $\check{X}_{i,\check{T}_{i}} = x_{i}$ and the path $(\check{X}_{i,t})_{t\leq\check{T}_{i}}$ has the same law as a concatenation in $x_{i}$ of all the loops in a loop-soup $\mathcal{L}_{1/2}$ that visit $x_{i}$, but none of the $x_{1},\dots,x_{i-1}$. To retrieve the loops out of each path $(\check{X}_{i,t})_{t\leq\check{T}_{i}}$, one has to partition it according to a Poisson-Dirichlet partition as in Proposition \ref{PropPD}. The coupling between the GFF $\varphi$ and the loop-soup obtained from $((\check X_{i,t})_{1\leq i\leq\vert V\vert}, (\check n_e(t))_{e\in E})$ is the same as in Proposition \ref{PropIsoLupuLoops}. \end{theorem} \subsection{Inverting the coupling of random current with FK-Ising} \label{sec:coupinv} By combining Theorem \ref{ThmPoissonLoopSoup} and the discrete-time description of Section \ref{sec_dicr_time}, and by conditioning on the occupation field of the loop-soup, one deduces an inversion of the coupling of Proposition \ref{RCFKIsing} between the random current and FK-Ising.
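Both the jump rules above and their discrete-time analogue below hinge on a single combinatorial test: when a stack reaches $0$ and the edge closes, is a new cluster created? A minimal connectivity-based sketch of this test (an illustration under assumed data structures, not code from the paper):

```python
from collections import deque

def creates_new_cluster(stacks, e):
    """Return True if decreasing the stack of edge e by 1 closes e
    (stack hits 0) and disconnects its two endpoints in the graph of
    edges with positive stack."""
    if stacks[e] > 1:
        return False            # edge stays open, clusters unchanged
    x, y = e
    open_edges = [f for f, n in stacks.items() if n > 0 and f != e]
    # breadth-first search from x among the remaining open edges
    adj = {}
    for a, b in open_edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, queue = {x}, deque([x])
    while queue:
        v = queue.popleft()
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return y not in seen
```

For instance, on a triangle with all stacks equal to 1, closing one edge never splits the cluster, whereas on a path of two open edges it does.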
We consider the graph $\mathcal{G}=(V,E)$ and assume that the edges are endowed with weights $(J_{e})_{e\in E}$. Let $(x_{i})_{1\le i\le \vert V\vert}$ be an enumeration of $V$. Let $\check{\mathcal{C}}(0)$ be a subset of open edges of $E$. Let $(\check{n}_{e}(0))_{e\in E}$ be a family of random integers such that $\check{n}_{e}(0)=0$ if $e\not\in\check{\mathcal{C}}(0)$, and $(\check{n}_{e}(0)-1)_{e\in\check{\mathcal{C}}(0)}$ are independent Poisson random variables with mean $\mathbb{E}[\check{n}_{e}(0)-1]=2J_{e}$. We will consider a family of discrete-time self-interacting processes $((\check X_{i,j})_{1\leq i\leq \vert V\vert}, (\check{n}_{e}(j))_{e\in E})$. $\check X_{i,j}$ starts at $j=0$ at $x_{i}$ and is defined up to an integer time $\check{T}_{i}$. Let $\check{T}_{i}^{+}=\sum_{1\leq k\leq i}\check{T}_{k}$, with $\check{T}_{0}^{+}=0$. The end-times $\check{T}_{i}$ are defined by induction as \begin{displaymath} \check{T}_{i}= \inf\Big\lbrace j\geq 0\Big\vert \sum_{e~\text{edge adjacent to}~\check X_{i,j}} \check{n}_{e}(j+\check{T}_{i-1}^{+})=0\Big\rbrace. \end{displaymath} For $j\geq 1$, $\check{\mathcal{C}}(j)$ will denote \begin{displaymath} \check{\mathcal{C}}(j)=\lbrace e\in E\vert \check{n}_{e}(j)\geq 1\rbrace, \end{displaymath} which is consistent with the notation $\check{\mathcal{C}}(0)$. The evolution is as follows. For $j\in \lbrace \check{T}_{i-1}^{+}+1,\dots, \check{T}_{i}^{+}\rbrace$, the transition from time $j-1$ to time $j$ is given by: \begin{itemize} \item first choose an edge $e$ adjacent to the vertex $\check{X}_{i,j-1-\check{T}_{i-1}^{+}}$ with probability proportional to $\check{n}_{e}(j-1)$, \item decrease the stack $\check{n}_{e}(j-1)$ by 1, \item if decreasing $\check{n}_{e}(j-1)$ by 1 does not create a new cluster in $\check{\mathcal{C}}(j-1)$, then $\check{X}_{i,\cdot}$ crosses $e$ with probability $1/2$ and does not move with probability $1/2$.
\item if decreasing $\check{n}_{e}(j-1)$ by 1 does create a new cluster in $\check{\mathcal{C}}(j-1)$, then $\check{X}_{i,\cdot}$ moves with probability 1 to the unique extremity of $e$ which is in the cluster of the origin $x_{i}$ in the new configuration (possibly not moving at all). \end{itemize} Denote by $\hat{n}_{e}$ the number of times the edge $e$ has been crossed, in both directions, by all the walks $((\check{X}_{i,j})_{0\le j\le \check{T}_{i}})_{1\le i\le\vert V\vert}$. \begin{proposition} A.s., for all $i\in\lbrace 1,\dots,\vert V\vert\rbrace$, $\check{T}_{i}<+\infty$ and $\check{X}_{i,\check{T}_{i}}=x_{i}$. If the initial configuration of open edges $\check{\mathcal{C}}(0)$ is random and follows an FK-Ising distribution with weights $(1-e^{-2 J_{e}})_{e\in E}$, then the family of integers $(\hat{n}_{e})_{e\in E}$ is distributed like a random current with weights $(J_{e})_{e\in E}$. Moreover, the coupling between the random current and the FK-Ising obtained this way is the same as the one given by Proposition \ref{RCFKIsing}. \end{proposition} \section{Proof of Theorem \ref{thm-Poisson}} \label{sec:proof} \subsection{Case of finite graph without killing measure} \label{sec:pfinite} Here we will assume that $V$ is finite and that the killing measure vanishes, $\kappa\equiv 0$. In order to prove Theorem \ref{thm-Poisson}, we first enlarge the state space of the process $(X_t)_{t\ge 0}$. We define a process $(X_t,(n_e(t)))_{t\ge 0}$ living on the space $V\times {\mathbb N}^E$ as follows. Let $\varphi^{(0)}\sim P_{\varphi}^{\{x_0\},0}$ be a GFF pinned at $x_0$. Let $\sigma_x=\hbox{sign}(\varphi^{(0)}_x)$ be the signs of the GFF with the convention that $\sigma_{x_0}=+1$. The process $(X_t)_{t\ge 0}$ is as usual the Markov jump process starting at $x_0$ with jump rates $(W_e)$. We set \begin{equation} \label{Phi-J} \Phi_x=\vert\varphi^{(0)}_x\vert, \;\; \Phi_x(t)=\sqrt{\Phi_x^2+2\ell_x(t)}, \;\;\;\forall x\in V, \;\;\; J_e(\Phi(t))=W_e \Phi_{e-}(t)\Phi_{e+}(t), \;\;\; \forall e\in E.
\end{equation} The initial values $(n_e(0))$ are chosen independently on each edge with distribution $$ n_e(0)\sim \begin{cases} 0,& \hbox{ if $\sigma_{e-}\sigma_{e+}=-1$} \\ \mathcal{P}(2J_e(\Phi)),& \hbox{ if $\sigma_{e-}\sigma_{e+}=+1$} \end{cases} $$ where ${\mathcal{P}}(2J_e(\Phi))$ is a Poisson random variable with parameter $2J_e(\Phi)$. Let $((N_e(u))_{u\ge 0})_{e\in E}$ be independent Poisson point processes on ${\mathbb R}_+$ with intensity 1. We define the process $(n_e(t))$ by $$ n_e(t)=n_e(0)+N_e(J_e(\Phi(t)))-N_e(J_e(\Phi))+K_e(t), $$ where $K_e(t)$ is the number of crossings of the edge $e$ by the Markov jump process $X$ before time $t$. \begin{remark} Note that compared to the process defined in Section \ref{sec_Poisson}, the speed of the Poisson process is related to $J_e(\Phi(t))$ and not $2J_e(\Phi(t))$. \end{remark} We will use the following notation $$ {\mathcal C}(t)=\{e\in E, \;\; n_e(t)>0\}. $$ Recall that $\tau_u^{x_0}=\inf\{t\ge 0, \; \ell_{x_0}(t)=u\}$ for $u>0$. To simplify notation, we will write $\tau_u$ for $\tau_u^{x_0}$ in the sequel. We define $\varphi^{(u)}$ by $$ \varphi^{(u)}_x=\sigma_x\Phi_x(\tau_u), \;\;\; \forall x\in V, $$ where $(\sigma_x)_{x\in V}\in \lbrace -1,+1\rbrace^{V}$ are random spins sampled uniformly independently on each cluster induced by ${\mathcal C}(\tau_u)$, with the condition that $\sigma_{x_0}=+1$. \begin{lemma} \label{end-distrib} The random vector $(\varphi^{(0)}, {\mathcal C}(0), \varphi^{(u)}, {\mathcal C}(\tau_u^{x_0}))$ thus defined has the same distribution as $(\varphi^{(0)}, {\mathcal{O}}(\varphi^{(0)}), \varphi^{(u)}, {\mathcal{O}}_u)$ defined in Theorem \ref{Lupu}. \end{lemma} \begin{proof} It is clear from the construction that ${\mathcal C}(0)$ has the same law as ${\mathcal{O}}(\varphi^{(0)})$ (cf Definition \ref{def_FK-Ising}), the FK-Ising configuration coupled with the signs of $\varphi^{(0)}$ as in Proposition \ref{FK-Ising}.
Indeed, for each edge $e\in E$ such that $\varphi^{(0)}_{e-}\varphi^{(0)}_{e+}>0$, the probability that $n_e(0)>0$ is $1-e^{-2J_e(\Phi)}$. Moreover, conditionally on ${\mathcal C}(0)={\mathcal{O}}(\varphi^{(0)})$, ${\mathcal C}(\tau_u^{x_0})$ has the same law as ${\mathcal{O}}_u$ defined in Theorem \ref{Lupu}. Indeed, ${\mathcal C}(\tau_u^{x_0})$ is the union of the set ${\mathcal C}(0)$, the set of edges crossed by the process $(X_t)_{t\le \tau_u^{x_0}}$, and the additional edges such that $N_e(J_e(\Phi(\tau_u^{x_0})))-N_e(J_e(\Phi))>0$. Clearly $N_e(J_e(\Phi(\tau_u^{x_0})))-N_e(J_e(\Phi))>0$ independently with probability $1-e^{-(J_e(\Phi(\tau_u^{x_0}))-J_e(\Phi))}$, which coincides with the probability given in Theorem \ref{Lupu}. \end{proof} We will prove the following theorem which, together with Lemma \ref{end-distrib}, contains the statements of both Theorems \ref{Lupu} and \ref{thm-Poisson}. \begin{theorem}\label{thm-Poisson2} The random vector $\varphi^{(u)}$ is a GFF distributed according to $P_{\varphi}^{\{x_0\},\sqrt{2u}}$. Moreover, conditionally on $\varphi^{(u)}=\check \varphi$, the process $$(X_{t},(n_{e}(t))_{e\in E})_{t\le \tau_u^{x_0}}$$ has the law of the process $(\check X_{\check T-t },(\check n_e(\check T -t))_{e\in E})_{t\le \check T}$ described in section \ref{sec_Poisson}. \end{theorem} \begin{proof} {\bf Step 1 :} We start with a simple lemma. \begin{lemma}\label{distrib-phi-n} The distribution of $(\Phi:=\vert \varphi^{(0)}\vert, n(0))$ is given by the following formula, for any bounded measurable test function $h$: \begin{multline*} {\mathbb{E}}\left(h(\Phi, n(0))\right)= \\\sum_{(n_e)\in {\mathbb N}^E} \int_{{\mathbb R}_+^{V\setminus\{x_0\}}} d\Phi h(\Phi, n) e^{-{1\over 2} \sum_{x\in V} W_x(\Phi_x)^2-\sum_{e\in E} J_e(\Phi)} \left(\prod_{e\in E}{\frac{(2J_e(\Phi))^{n_e}}{n_e!}}\right) 2^{\#{\mathcal C}(n)-1}.
\end{multline*} where the integral is on the set $\{(\Phi_x)_{x\in V}, \;\; \Phi_x>0\; \forall x\neq x_0,\; \Phi_{x_0}=0\}$, $d\Phi={\frac{\prod_{x\in V\setminus\{x_0\}} d\Phi_x}{\sqrt{2\pi}^{\vert V\vert -1}}}$, and $\#{\mathcal C}(n)$ is the number of clusters induced by the edges such that $n_e>0$. \end{lemma} \begin{proof} Indeed, by construction, summing over the possible signs of $\varphi^{(0)}$, we have \begin{eqnarray} \nonumber &&{\mathbb{E}}\left(h(\Phi, n(0))\right) \\ \label{int-eee}&=&\sum_{\sigma_x} \sum_{n\ll \sigma_x} \int_{{\mathbb R}_+^{V\setminus\{x_0\}}} d\Phi h(\Phi, n) e^{-{1\over 2} {\mathcal E}(\sigma\Phi)}\left(\prod_{e\in E, \; \sigma_{e+}\sigma_{e-}=+1} {e^{-2J_e(\Phi)} (2J_e(\Phi))^{n_e}\over n_e!}\right), \end{eqnarray} where the first sum is on the set $\{\sigma_x\in \{+1,-1\}^V, \; \sigma_{x_0}=+1\}$ and the second sum is on the set $\{(n_e)\in {\mathbb N}^E, \; n_e=0\hbox{ if $\sigma_{e-}\sigma_{e+}=-1$}\}$ (we write $n\ll \sigma$ to mean that $n_e$ vanishes on the edges such that $\sigma_{e-}\sigma_{e+}=-1$). Since \begin{eqnarray*} {1\over 2}{\mathcal E}(\sigma \Phi)&=& {1\over 2}\sum_{x\in V} W_x (\Phi_x)^2-\sum_{e\in E} J_e(\Phi)\sigma_{e-}\sigma_{e+}
\\ &=& {1\over 2}\sum_{x\in V} W_x (\Phi_x)^2+\sum_{e\in E} J_e(\Phi) -\sum_{\substack{e\in E\\\sigma_{e-}\sigma_{e+}=+1}} 2J_e(\Phi), \end{eqnarray*} we deduce that the integrand in (\ref{int-eee}) is equal to \begin{eqnarray*} && h(\Phi,n) e^{-{1\over 2} {\mathcal E}(\sigma\Phi)}\left(\prod_{e\in E, \; \sigma_{e+}\sigma_{e-}=+1} {e^{-2J_e(\Phi)} (2J_e(\Phi))^{n_e}\over n_e!}\right) \\ &=& h(\Phi,n) e^{-{1\over 2} {\mathcal E}(\sigma\Phi)}e^{-\sum_{e\in E, \; \sigma_{e+}\sigma_{e-}=+1} 2J_e(\Phi)}\left(\prod_{e\in E} {(2J_e(\Phi))^{n_e}\over n_e!}\right) \\ &=& h(\Phi,n) e^{-{1\over 2}\sum_{x\in V} W_x (\Phi_x)^2-\sum_{e\in E} J_e(\Phi)}\left(\prod_{e\in E} {(2J_e(\Phi))^{n_e}\over n_e!}\right), \end{eqnarray*} where we used in the first equality that $n_e=0$ on the edges such that $\sigma_{e+}\sigma_{e-}=-1$. Thus, \begin{eqnarray*} &&{\mathbb{E}}\left(h(\Phi, n(0))\right) \\ &=& \sum_{\sigma_x}\sum_{n_e\ll \sigma_x} \int_{{\mathbb R}_+^{V\setminus\{x_0\}}} d\Phi h(\Phi, n) e^{-{1\over 2}\sum_{x\in V} W_x (\Phi_x)^2-\sum_{e\in E} J_e(\Phi)}\left(\prod_{e\in E} {(2J_e(\Phi))^{n_e}\over n_e!}\right). \end{eqnarray*} Interchanging the sums over $\sigma$ and $n$, and summing over the possible signs, which are constant on the clusters induced by the configuration of edges $\{e\in E, \; n_e>0\}$, we deduce Lemma \ref{distrib-phi-n}. \end{proof} \noindent{\bf Step 2 :} We denote by $Z_t=(X_t, \Phi(t), n_e(t))$ the process defined previously and by $E_{x_0, \Phi, n_0}$ its law with initial condition $(x_0, \Phi, n_0)$. We now introduce a process $\tilde Z_t$, which is a ``time reversal'' of the process $Z_t$. This process will be related to the process defined in section \ref{sec_Poisson} in Step 4, Lemma \ref{RN}.
For $(\tilde n_e)\in {\mathbb N}^E$ and $(\tilde \Phi_x)_{x\in V}$ such that $$ \tilde \Phi_{x_0}=\sqrt{2u}, \;\; \tilde \Phi_x>0, \;\; \forall x\neq x_0, $$ we define the process $\tilde Z_t=(\tilde X_t, \tilde\Phi(t), \tilde n_e(t))$ with values in $V\times {\mathbb R}_+^V\times {\mathbb Z}^E$ as follows. The process $(\tilde X_t)$ is a Markov jump process with jump rates $(W_e)$ (so that $\tilde X\stackrel{\text{law}}{=} X$), and $\tilde\Phi(t)$, $\tilde n_e(t)$ are defined by \begin{eqnarray}\label{tildePhi} \tilde \Phi_x(t)=\sqrt{\tilde \Phi_x^2-2\tilde \ell_x(t)},\;\;\;\forall x\in V, \end{eqnarray} where $(\tilde\ell_x(t))$ is the local time of the process $\tilde X$ up to time $t$, and \begin{eqnarray}\label{tilden} \tilde n_e(t)= \tilde n_e-\left(N_e(J_e(\tilde\Phi))-N_e(J_e(\tilde\Phi(t)))\right)-\tilde K_e(t), \end{eqnarray} where $((N_e(u))_{u\ge 0})_{e\in E}$ are independent Poisson point processes on ${\mathbb R}_+$ with intensity 1, one for each edge $e$, and $\tilde K_e(t)$ is the number of crossings of the edge $e$ by the process $\tilde X$ before time $t$. We set \begin{eqnarray}\label{tildeZ} \tilde Z_t=(\tilde X_t, (\tilde \Phi_x(t)), (\tilde n_e(t))). \end{eqnarray} This process is well-defined up to time $$ \tilde T=\inf\left\{t\ge 0, \;\; \exists x\in V\; \tilde \Phi_x(t)=0\right\}. $$ We denote by $\tilde E_{x_0, \tilde\Phi, \tilde n_0}$ its law. Clearly $\tilde Z_t=(\tilde X_t, \tilde\Phi(t), \tilde n_e(t))$ is a Markov process; we will make its generator explicit later on. We have the following change of variables lemma.
\begin{lemma}\label{change-var} For all bounded measurable test functions $F,G,H$, \begin{multline*} \sum_{(n_e)\in {\mathbb N}^E} \int d\Phi F(\Phi, n)E_{x_0,\Phi,n} \left( G((Z_{\tau_u^{x_0}-t})_{0\le t\le\tau_u^{x_0}}) H(\Phi(\tau_u^{x_0}), n(\tau_u^{x_0}))\right)= \\ \sum_{(\tilde n_e)\in {\mathbb N}^E} \int d\tilde\Phi H(\tilde\Phi, \tilde n) \tilde E_{x_0,\tilde \Phi,\tilde n} \Big({{\mathbbm 1}}_{\{\tilde X_{\tilde T}=x_0,\tilde n_e(\tilde T)\ge 0\; \forall e\in E\}} G((\tilde Z_{t})_{t\le\tilde T}) F(\tilde\Phi(\tilde T), \tilde n(\tilde T))\prod_{x\in V\setminus\{x_0\}} {\tilde \Phi_x\over \tilde\Phi_x(\tilde T) }\Big) \end{multline*} where the integral on the l.h.s. is on the set $\{(\Phi_x)\in {\mathbb R}_+^V, \;\; \Phi_{x_0}=0\}$ with $d\Phi= {\prod_{x\in V\setminus\{x_0\}} d\Phi_x\over \sqrt{2\pi}^{\vert V\vert -1}}$, and the integral on the r.h.s. is on the set $\{(\tilde\Phi_x)\in {\mathbb R}_+^V, \;\; \tilde\Phi_{x_0}=\sqrt{2u}\}$ with $d\tilde\Phi= {\prod_{x\in V\setminus\{x_0\}} d\tilde\Phi_x\over \sqrt{2\pi}^{\vert V\vert -1}}$. \end{lemma} \begin{proof} We start from the left-hand side, i.e.\ from the process $(X_t, n_e(t))_{0\le t\le \tau_u^{x_0}}$. We define $$ \tilde X_{t}=X_{\tau_u-t},\;\;\; \tilde n_e(t)=n_e(\tau_u-t), $$ and $$ \tilde \Phi_x=\Phi_x(\tau_u),\;\;\; \tilde\Phi_x(t)=\Phi_x({\tau_u-t}). $$ (The law of the processes thus defined will later be identified with the law of the processes $(\tilde X_t, \tilde \Phi(t),\tilde n(t))$ defined at the beginning of Step 2, cf.\ (\ref{tildePhi}) and (\ref{tilden}).) We also set $$ \tilde K_e(t)= K_e(\tau_u)-K_e(\tau_u-t), $$ which is also the number of crossings of the edge $e$ by the process $\tilde X$ between time 0 and $t$.
With these notations we clearly have $$ \tilde \Phi_x(t)=\sqrt{\tilde \Phi_x^2-2\tilde \ell_x(t)}, $$ where $\tilde \ell_x(t)=\int_{0}^t{{\mathbbm 1}}_{\{\tilde X_u=x\}} du$ is the local time of $\tilde X$ at time $t$, and $$ \tilde n_e(t)= \tilde n_e(0)+(N_e(J_e(\tilde \Phi(t)))-N_e(J_e(\tilde\Phi(0))))-\tilde K_e(t). $$ By time reversal, the law of $(\tilde X_t)_{0\le t\le \tilde \tau_u}$ is the same as the law of the Markov jump process $(X_t)_{0\le t\le \tau_u}$, where $\tilde \tau_u=\inf\{t\ge 0, \; \tilde\ell_{x_0}(t)=u\}$. Hence, we see that up to the time $\tilde T=\inf\{t\ge 0, \; \exists x\; \tilde\Phi_x(t)=0\}$, the process $(\tilde X_t, (\tilde \Phi_x(t))_{x\in V}, (\tilde n_e(t)))_{t\le \tilde T}$ has the same law as the process defined at the beginning of Step 2. Then, following \cite{SabotTarres2015RK}, we make the following change of variables conditionally on the processes $(X_t, (N_e(t)))$: \begin{eqnarray*} ({\mathbb R}_+^*)^V\times {\mathbb N}^E&\to& ({\mathbb R}_+^*)^V\times {\mathbb N}^E\\ ((\Phi_x), (n_e)_{e\in E})&\mapsto& ((\tilde \Phi_x), (\tilde n_e)_{e\in E}), \end{eqnarray*} which is bijective onto the set \begin{multline*} \{\tilde\Phi_x, \;\; \tilde\Phi_{x_0}=\sqrt{2u}, \; \tilde\Phi_x>\sqrt{2\ell_x(\tau_u^{x_0})}\;\;\forall x\neq x_0\} \\\times \{(\tilde n_e),\;\; \tilde n_e\ge K_e(\tau_u)+(N_e(J_e(\Phi(\tau_u)))-N_e(J_e(\Phi)))\} \end{multline*} (note that we always have $\tilde \Phi_{x_0}=\sqrt{2u}$). The last conditions on $\tilde \Phi$ and $\tilde n_e$ are equivalent to the conditions $\tilde X_{\tilde T}=x_0$ and $\tilde n_e(\tilde T)\ge 0$. The Jacobian of the change of variables is given by $$ \prod_{x\in V\setminus\{x_0\}} d\Phi_x=\left({\prod_{x\in V\setminus\{x_0\}} {\tilde\Phi_x\over \Phi_x} }\right)\prod_{x\in V\setminus\{x_0\}} d\tilde\Phi_x.
$$ \end{proof} \noindent {\bf Step 3:} With the notations of Theorem \ref{thm-Poisson2}, we consider the following expectation, for bounded measurable test functions $g$ and $h$: \begin{eqnarray}\label{test-functions} {\mathbb{E}}\left( g\left(\left(X_{\tau_u-t}, n_e(\tau_u-t)\right)_{0\le t\le \tau_u}\right)h(\varphi^{(u)})\right). \end{eqnarray} By definition, we have $$ \varphi^{(u)}=\sigma \Phi(\tau_u), $$ where $(\sigma_x)_{x\in V}\in \{\pm 1\}^V$ are random signs sampled uniformly independently on clusters induced by $\{e\in E, \; n_e(\tau_u)>0\}$ and conditioned on the fact that $\sigma_{x_0}=+1$. Hence, we define for $(\Phi_x)\in {\mathbb R}_+^V$ and $(n_e)\in {\mathbb N}^E$ \begin{eqnarray}\label{h} H(\Phi, n)=2^{-\#{\mathcal C}(n)+1} \sum_{\sigma\ll n} h(\sigma \Phi), \end{eqnarray} where $\sigma\ll n$ means that the signs $(\sigma_x)$ are constant on clusters of $\{ e\in E, \; n_e>0\}$ and such that $\sigma_{x_0}=+1$. Hence, setting $$ F(\Phi, n)=e^{-{1\over 2} \sum_{x\in V} W_x (\Phi_x)^2-\sum_{e\in E} J_e(\Phi) }\left(\prod_{e\in E} {(2J_e(\Phi))^{n_e}\over n_e!}\right) 2^{\#{\mathcal C}(n)-1}, $$ $$ G\left((Z_{\tau_u-t})_{t\le\tau_u}\right)= g\left(\left(X_{\tau_u-t}, n_e(\tau_u-t)\right)_{t\le \tau_u}\right), $$ and using Lemma \ref{distrib-phi-n} in the first equality and Lemma \ref{change-var} in the second equality, we deduce that (\ref{test-functions}) is equal to \begin{multline} \label{eq-3.3} {\mathbb{E}}\left( G\left((Z_{\tau_u-t})_{0\le t\le \tau_u}\right)H(\Phi(\tau_u), n(\tau_u))\right)= \\ \sum_{(n_e)\in {\mathbb N}^E} \int d\Phi F(\Phi, n) E_{x_0, \Phi,n}\left(G\left((Z_{\tau_u-t})_{t\le\tau_u}\right)H\left(\Phi(\tau_u), n(\tau_u)\right)\right) = \\ \sum_{(\tilde n_e)\in {\mathbb N}^E} \int d\tilde\Phi H\left(\tilde \Phi,\tilde n\right) \tilde E_{x_0, \tilde \Phi, \tilde n}\Big({{\mathbbm 1}}_{\{\tilde X_{\tilde T}=x_0,\tilde n_e(\tilde T)\ge 0\; \forall e\in E\}} F\left(\tilde \Phi(\tilde T) , \tilde n(\tilde T)\right) G\left((\tilde
Z_{t})_{t\le\tilde T}\right) \prod_{x\in V\setminus\{x_0\}} {\tilde \Phi_x\over \tilde\Phi_x(\tilde T) } \Big) \end{multline} with the notations of Lemma \ref{change-var}. Let $\tilde{\mathcal F}_t=\sigma\{\tilde X_s, \; s\le t\}$ be the filtration generated by $\tilde X$. We define the $\tilde {\mathcal F}$-adapted process $\tilde M_t$, up to time $\tilde T$, by \begin{multline} \label{Mart} \tilde M_t = {F(\tilde \Phi(t), \tilde n(t))\over \prod_{x\in V\setminus\{\tilde X_t\}} \tilde\Phi_x(t) }{{\mathbbm 1}}_{\{\tilde X_t\in {\mathcal C}(x_0,\tilde n(t))\}}{{\mathbbm 1}}_{\{\tilde n_e(t)\ge 0\; \forall e\in E\}}= \\ e^{-{1\over 2} \sum_{x\in V} W_x(\tilde \Phi_x(t))^2-\sum_{e\in E} J_e(\tilde\Phi(t)) } \Big(\prod_{e\in E} {(2J_e(\tilde \Phi(t)))^{\tilde n_e(t)}\over \tilde n_e(t) !}\Big) {2^{\#{\mathcal C}(\tilde n(t))-1} \over \prod_{x\in V\setminus\{\tilde X_t\}} \tilde\Phi_x(t) }{{\mathbbm 1}}_{\{\tilde X_t\in {\mathcal C}(x_0,\tilde n(t)),\tilde n_e(t)\ge 0\; \forall e\in E\}} \end{multline} where ${\mathcal C}(x_0,\tilde n(t))$ denotes the cluster of the origin $x_0$ induced by the configuration ${\mathcal C}(\tilde n(t))$. Note that at time $t=\tilde T$, we also have \begin{eqnarray}\label{M-T} \tilde M_{\tilde T}= {F(\tilde \Phi(\tilde T), \tilde n(\tilde T))\over \prod_{x\in V\setminus\{x_0\}} \tilde\Phi_x(\tilde T) }{{\mathbbm 1}}_{\{\tilde X_{\tilde T}=x_0\}}{{\mathbbm 1}}_{\{\tilde n_e(\tilde T)\ge 0\; \forall e\in E\}}, \end{eqnarray} since $\tilde M_{\tilde T}$ vanishes on the events $\{\tilde X_{\tilde T}=x\}$ with $x\neq x_0$. Indeed, if $\tilde X_{\tilde T}=x\neq x_0$, then $\tilde\Phi_x(\tilde T)=0$ and $J_e(\tilde\Phi(\tilde T))=0$ for $e\in E$ such that $x\in e$. This means that $\tilde M_{\tilde T}$ is equal to 0 if $\tilde n_{e}(\tilde T)>0$ for some edge $e$ neighboring $x$. Thus, $\tilde M_{\tilde T}$ is null unless $\{x\}$ is a cluster in ${\mathcal C}(\tilde n(\tilde T))$.
Hence, $\tilde M_{\tilde T}=0$ if $x\neq x_0$, since $\tilde M_{\tilde T}$ contains the indicator of the event that $\tilde X_{\tilde T}$ and $x_0$ are in the same cluster. Hence, using identities (\ref{eq-3.3}) and (\ref{M-T}), we deduce that (\ref{test-functions}) is equal to \begin{eqnarray} \label{equ-M} (\ref{test-functions})&=& \sum_{(\tilde n_e)\in {\mathbb N}^E} \int d\tilde\Phi H\left(\tilde \Phi,\tilde n\right) F\left(\tilde \Phi,\tilde n\right) \tilde E_{x_0, \tilde \Phi, \tilde n}\left( {\tilde M_{\tilde T}\over \tilde M_0} G\left((\tilde Z_{t})_{t\le\tilde T}\right) \right). \end{eqnarray} \noindent {\bf Step 4 :} We denote by $\check Z_t=(\check X_t, \check \Phi_t, \check n(t))$ the process defined in section \ref{sec_Poisson}, which is well defined up to the stopping time $\check T$, and $\check Z^T_t=\check Z_{t\wedge \check T}$. We denote by $\check E_{x_0, \check \Phi, \check n}$ the law of the process $\check Z$ conditionally on the initial value $\check n(0)$, i.e. conditionally on $(N_e(2J_e(\check\Phi)))=(\check n_e)$. The last step of the proof goes through the following lemma. \begin{lemma}\label{RN} i) Under $\check E_{x_0,\check\Phi,\check n}$, $\check X$ ends at $\check X_{\check T}=x_0$ a.s. and $\check n_e(\check T)\ge 0$ for all $e\in E$. ii) Let $\tilde P^{\le t}_{x_0,\tilde\Phi,\tilde n}$ and $\check P^{\le t}_{x_0,\check\Phi,\check n}$ be the laws of the processes $(\tilde Z^T_s)_{s\le t}$ and $(\check Z^T_s)_{s\le t}$; then $$ {d\check P^{\le t}_{x_0,\tilde \Phi,\tilde n}\over d\tilde P^{\le t}_{x_0,\tilde \Phi,\tilde n}}={\tilde M_{t\wedge \tilde T}\over \tilde M_0}.
$$ \end{lemma} Using this lemma, we obtain that in the right-hand side of (\ref{equ-M}) $$ \tilde E_{x_0, \tilde \Phi , \tilde n}\left( {\tilde M_{\tilde T}\over \tilde M_0} G\left((\tilde Z_{t})_{t\le\tilde T}\right)\right)= \check E_{x_0, \tilde \Phi , \tilde n} \left( G\left((\check Z_{t})_{t\le\check T}\right)\right). $$ Hence, we deduce, using formula (\ref{h}) and proceeding as in Lemma \ref{distrib-phi-n}, that (\ref{test-functions}) is equal to \begin{multline*} \label{final} \int_{{\mathbb R}^{V\setminus\{x_0\} }} d\tilde\varphi e^{-{1\over 2} {\mathcal E}(\tilde\varphi)} h(\tilde \varphi) \sum_{(\tilde n_e)\ll (\tilde \varphi_x)} \left(\prod_{e\in E, \; \tilde\varphi_{e-}\tilde\varphi_{e+}> 0} {e^{-2J_e(\vert \tilde \varphi\vert)}(2J_e(\vert \tilde \varphi\vert ))^{\tilde n_e}\over \tilde n_e !}\right) \\\tilde E_{x_0, \vert \tilde \varphi\vert , \tilde n}\left({\tilde M_{\tilde T}\over \tilde M_0} G\left((\tilde Z_{t})_{t\le\tilde T}\right)\right), \end{multline*} where the last integral is on the set $\{(\tilde\varphi_x)\in {\mathbb R}^V, \;\; \tilde\varphi_{x_0}=\sqrt{2u}\}$, $d\tilde\varphi={\prod_{x\in V\setminus\{x_0\}} d\tilde\varphi_x\over \sqrt{2\pi}^{\vert V\vert -1}}$, and where $(\tilde n_e)\ll (\tilde\varphi_x)$ means that $(\tilde n_e)\in {\mathbb N}^E$ and $\tilde n_e=0$ if $\tilde\varphi_{e-}\tilde\varphi_{e+}\le 0$. Finally, we conclude that \begin{eqnarray*} {\mathbb{E}}\left[ g\left(\left(X_{\tau_u^{x_0}-t}, n_e(\tau_u^{x_0}-t)\right)_{0\le t\le \tau_u^{x_0}}\right)h(\varphi^{(u)})\right]= {\mathbb{E}}\left[ g\left(\left(\check X_{t}, \check n_e(t)\right)_{0\le t\le \check T}\right)h(\check \varphi)\right], \end{eqnarray*} where in the right-hand side $\check \varphi\sim P_{\varphi}^{\{x_0\}, \sqrt{2u}} $ is a GFF and $(\check X_t, \check n(t))$ is the process defined in section \ref{sec_Poisson} from the GFF $\check \varphi$.
This exactly means that $\varphi^{(u)} \sim P_{\varphi}^{\{x_0\}, \sqrt{2u}}$ and that $$ {\mathcal L}\left(\left(X_{\tau_u^{x_0}-t}, n_e(\tau_u^{x_0}-t)\right)_{0\le t\le \tau_u^{x_0}}\; \Big| \; \varphi^{(u)}=\check\varphi\right)= {\mathcal L}\left(\left(\check X_t, \check n(t)\right)_{t\le \check T}\right). $$ This concludes the proof of Theorem \ref{thm-Poisson2}. \end{proof} \begin{proof}[Proof of Lemma \ref{RN}] The generator of the process $\tilde Z_t$ defined in (\ref{tildeZ}) is given, for any test function $f$ that is bounded and $\mathcal{C}^{1}$ in the second component, by \begin{equation} \label{tildeL2} \begin{split} &(\tilde L f)(x,\tilde\Phi,\tilde n)= -{1\over \tilde \Phi_x} ({\partial\over \partial \tilde\Phi_x}f) (x,\tilde\Phi, \tilde n) +\\ & \sum_{y, \; y\sim x} \left(W_{x,y} \left(f(y,\tilde\Phi,\tilde n-\delta_{\{x,y\}})-f(x,\tilde\Phi,\tilde n)\right)+ W_{x,y} {\tilde \Phi_{y}\over \tilde \Phi_x} \left(f(x,\tilde\Phi, \tilde n-\delta_{\{x,y\}})-f(x,\tilde\Phi,\tilde n)\right)\right), \end{split} \end{equation} where $\tilde n-\delta_{\{x,y\}}$ is the value obtained by removing 1 from $\tilde n$ at the edge $\{x,y\}$. Indeed, since $\tilde \Phi_x(t)= \sqrt{\tilde\Phi_{x}(0)^{2} -2\tilde \ell_x(t)}$, we have \begin{eqnarray} \label{deriv-Phi} {\partial\over\partial t} \tilde \Phi_x(t)= -{{\mathbbm 1}}_{\{\tilde X_t=x\}}{1\over \tilde \Phi_x(t)}, \end{eqnarray} which explains the first term in the expression. The second term is obvious from the definition of $\tilde Z_t$, and corresponds to the jumps of the Markov process $\tilde X_t$. The last term corresponds to the decrease of $\tilde n$ due to the increase of the process $N_e(J_e(\tilde \Phi))-N_e(J_e(\tilde \Phi(t)))$.
Indeed, on the interval $[t,t+dt]$, the probability that $N_{e}(J_e(\tilde \Phi(t)))-N_{e}(J_e(\tilde \Phi(t+dt)))$ is equal to 1 is of order $$-{\partial\over \partial t} J_e(\tilde \Phi(t))\,dt={{\mathbbm 1}}_{\{\tilde X_t\in e\}} {W_e \tilde \Phi_{e-}(t)\tilde\Phi_{e+}(t)\over \tilde\Phi_{\tilde X_t}(t)^2}\,dt, $$ using identity (\ref{deriv-Phi}). Let $\check L$ be the generator of the Markov jump process $\check Z_t=(\check X_t, (\check \Phi_x(t)), (\check n_e(t)))$. For any smooth test function $f$, it is given by \begin{eqnarray*} &&(\check L f)(x,\Phi, n)= -{1\over \Phi_x} ({\partial\over \partial \Phi_x}f)(x,\Phi, n) +\\ &&{1\over 2} \sum_{y, \; y\sim x}{ n_{x,y} \over \Phi_x^2} {{{\mathbbm 1}}_{\mathcal{A}_1(x,y)}} \left(f(y,\Phi,n-\delta_{\{x,y\}})+f(x,\Phi,n-\delta_{\{x,y\}})- 2f(x,\Phi,n)\right) \\ &&+ \sum_{y, \; y\sim x}{ n_{x,y} \over \Phi_x^2} {{\mathbbm 1}}_{\mathcal{A}_2(x,y)} \left( f(y,\Phi,n-\delta_{\{x,y\}})- f(x,\Phi,n) \right) \\ &&+\sum_{y, \; y\sim x}{n_{x,y} \over \Phi_x^2} {{\mathbbm 1}}_{\mathcal{A}_3(x,y)} \left(f(x,\Phi,n-\delta_{\{x,y\}}) - f(x,\Phi,n) \right), \end{eqnarray*} where the $\mathcal{A}_{i}(x,y)$ are the following disjoint events: \begin{itemize} \item $\mathcal{A}_1(x,y)$: the number of clusters induced by $n-\delta_{\{x,y\}}$ is the same as that induced by $n$; \item $\mathcal{A}_2(x,y)$: a new cluster is created in $n-\delta_{\{x,y\}}$ compared with $n$, and $y$ is in the connected component of $x_0$ in the configuration induced by $n-\delta_{\{x,y\}}$; \item $\mathcal{A}_3(x,y)$: a new cluster is created in $n-\delta_{\{x,y\}}$ compared with $n$, and $x$ is in the connected component of $x_0$ in the configuration induced by $n-\delta_{\{x,y\}}$.
\end{itemize} Indeed, conditionally on the value of $\check n_e(t)=N_e(2J_e(\check\Phi(t)))$ at time $t$, the point process $N_e$ on the interval $[0, 2J_e(\check\Phi(t))]$ has the law of $\check n_e(t)$ independent points with uniform distribution on $[0, 2J_e(\check\Phi(t))]$. Hence, the probability that a point lies in the interval $[2J_e(\check\Phi(t+dt)), 2J_e(\check\Phi(t))]$ is of order $$ -\check n_e(t) {1\over J_e(\check\Phi(t))}{\partial\over \partial t} J_e(\check\Phi(t))\, dt= {{\mathbbm 1}}_{\{\check X_t\in e\}}\;\check n_e(t){1\over \check\Phi_{\check X_t}(t)^2}dt. $$ We define the function \begin{multline} \nonumber\Theta(x,(\Phi_x),(n_e))=\\ e^{-{1\over 2} \sum_{y\in V}W_y (\Phi_y)^2-\sum_{e\in E} J_e(\Phi) } \left(\prod_{e\in E} {(2J_e(\Phi))^{n_e}\over n_e !}\right) {2^{\#{\mathcal C}(n)-1} \over \prod_{y\in V\setminus\{x\}} \Phi_y }{{\mathbbm 1}}_{\{x\in {\mathcal C}(x_0,n), n_e\ge 0\; \forall e\in E\}}, \end{multline} so that $$ \tilde M_{t\wedge \tilde T}= \Theta(\tilde Z_{t\wedge\tilde T}). $$ To prove the lemma it is sufficient to prove (\cite{ChungWalsh05MP}, Chapter 11) that for any bounded smooth test function $f$, \begin{eqnarray}\label{LcheckL} {1\over \Theta}\tilde L\left(\Theta f\right)= \check L\left(f\right). \end{eqnarray} Let us first consider the first term in (\ref{tildeL2}). A direct computation gives $$ \left({1\over \Theta}{1\over \Phi_x}\left({\partial\over\partial \Phi_x} \Theta\right)\right) (x,\Phi,n)= -W_x+\sum_{y\sim x} \left(- W_{x,y}{\Phi_y\over\Phi_x}+n_{x,y}{1\over \Phi_x^2}\right). $$ For the second part, remark that the indicators ${{\mathbbm 1}}_{\{x\in {\mathcal C}(x_0,n)\}}$ and ${{\mathbbm 1}}_{\{n_e\ge 0\; \forall e\in E\}}$ imply that $ \Theta(y,\Phi, n-\delta_{\{x,y\}}) $ vanishes if $n_{x,y}=0$ or if $y\not\in {\mathcal C}(x_0,n-\delta_{\{x,y\}})$.
By inspection of the expression of $\Theta$, we obtain for $x\sim y$, \begin{eqnarray*} \Theta (y,\Phi, n-\delta_{\{x,y\}})&=& \left({{\mathbbm 1}}_{\{n_{x,y}>0\}} ({{\mathbbm 1}}_{\mathcal{A}_1}+2{{\mathbbm 1}}_{\mathcal{A}_2}) {n_{x,y}\over 2J_{x,y}(\Phi)}{\Phi_y\over \Phi_x}\right)\Theta(x,\Phi, n) \\ &=&\left(({{\mathbbm 1}}_{\mathcal{A}_1}+2{{\mathbbm 1}}_{\mathcal{A}_2}) {n_{x,y}\over 2W_{x,y}}{1\over \Phi_x^2}\right)\Theta(x,\Phi, n). \end{eqnarray*} Similarly, for $x\sim y$, \begin{eqnarray*} \Theta(x,\Phi, n-\delta_{\{x,y\}})&=& \left({{\mathbbm 1}}_{\{n_{x,y}>0\}}({{\mathbbm 1}}_{\mathcal{A}_1}+2{{\mathbbm 1}}_{\mathcal{A}_3}){n_{x,y}\over 2J_{x,y}(\Phi)}\right)\Theta(x,\Phi, n)\\ &=& \left(({{\mathbbm 1}}_{\mathcal{A}_1}+2{{\mathbbm 1}}_{\mathcal{A}_3}) {n_{x,y}\over 2W_{x,y}\Phi_x\Phi_y}\right)\Theta(x,\Phi, n). \end{eqnarray*} Combining these three identities with the expression (\ref{tildeL2}), we deduce \begin{eqnarray*} &&{1\over \Theta}\tilde L\left(\Theta f\right)(x,\Phi,n)=\\ && -{1\over \Phi_x} {\partial\over\partial \Phi_x}f(x,\Phi,n)-\sum_{y\sim x} \left(n_{x,y}{1\over \Phi_x^2}\right)f(x,\Phi,n) \\ && +\sum_{y\sim x} ({{\mathbbm 1}}_{\mathcal{A}_1}+2{{\mathbbm 1}}_{\mathcal{A}_2}) n_{x,y}{1\over 2\Phi_x^2} f(y,\Phi, n-\delta_{\{x,y\}})+ \sum_{y\sim x}({{\mathbbm 1}}_{\mathcal{A}_1}+2{{\mathbbm 1}}_{\mathcal{A}_3})n_{x,y}{1\over 2 \Phi_x^2} f(x,\Phi, n-\delta_{\{x,y\}}). \end{eqnarray*} This exactly coincides with the expression for $\check L$, since $1={{\mathbbm 1}}_{\mathcal{A}_1}+{{\mathbbm 1}}_{\mathcal{A}_2}+{{\mathbbm 1}}_{\mathcal{A}_3}$. \end{proof} \subsection{General case} \label{sec:pgen} \begin{proposition} \label{PropKillingCase} The conclusion of Theorem \ref{thm-Poisson} still holds if the graph $\mathcal{G}=(V,E)$ is finite and the killing measure is non-zero ($\kappa\not\equiv 0$). \end{proposition} \begin{proof} Let $h$ be the function on $V$ defined as \begin{displaymath} h(x)=\mathbb{P}_{x}(X~\text{hits}~x_{0}~\text{before}~\zeta).
\end{displaymath} By definition $h(x_{0})=1$. Moreover, for all $x\in V\setminus\lbrace x_{0}\rbrace$, \begin{displaymath} -\kappa_{x} h(x)+\sum_{y\sim x}W_{x,y}(h(y)-h(x))=0. \end{displaymath} Define the conductances $W^{h}_{x,y}:=W_{x,y}h(x)h(y)$, the corresponding jump process $X^{h}$, and the GFFs $\varphi_{h}^{(0)}$ and $\varphi_{h}^{(u)}$ with condition $0$, respectively $\sqrt{2u}$, at $x_{0}$. Theorem \ref{thm-Poisson} holds for the graph $\mathcal{G}$ with conductances $(W^{h}_{e})_{e\in E}$ and with zero killing measure. But the process $(X^{h}_{t})_{t\leq \tau_{u}^{x_{0}}}$ has the same law as the process $(X_{s})_{s\leq \tau_{u}^{x_{0}}}$, conditioned on $\tau_{u}^{x_{0}}<\zeta$, after the change of time \begin{displaymath} dt = h(X_{s})^{-2}ds. \end{displaymath} This means in particular that for the occupation times, \begin{equation} \label{EqTimeChange} d\ell_{x}(t)=h(x)^{-2}\,d\ell_{x}(s). \end{equation} Moreover, we have the equalities in law \begin{displaymath} \varphi_{h}^{(0)}\stackrel{\text{law}}{=}h^{-1}\varphi^{(0)},\qquad \varphi_{h}^{(u)}\stackrel{\text{law}}{=}h^{-1}\varphi^{(u)}. \end{displaymath} Indeed, at the level of energy functions, we have: \begin{equation*} \begin{split} &\mathcal{E}(hf,hf)= \sum_{x\in V}\kappa_{x} h(x)^{2}f(x)^{2}+ \sum_{e}W_{e}(h(e_{+})f(e_{+})-h(e_{-})f(e_{-}))^{2}\\&= \sum_{x\in V}[\kappa_{x}h(x)^{2}f(x)^{2}+ \sum_{y\sim x}W_{x,y}h(y)f(y)(h(y)f(y)-h(x)f(x))]\\ &= \sum_{x\in V}[\kappa_{x}h(x)^{2}f(x)^{2}- \sum_{y\sim x}W_{x,y}(h(y)-h(x))h(x)f(x)^{2}] -\sum_{\substack{x\in V\\y\sim x}}W_{x,y}h(x)h(y)(f(y)-f(x))f(x) \\&=[\kappa_{x_{0}}- \sum_{y\sim x_{0}}W_{x_{0},y}(h(y)-1)]f(x_{0})^{2} +\sum_{e}W_{e}^{h}(f(e_{+})-f(e_{-}))^{2} \\&= \text{Cste}(f(x_{0}))+\mathcal{E}^{h}(f,f), \end{split} \end{equation*} where $\text{Cste}(f(x_{0}))$ means that this term does not depend on $f$ once the value of the function at $x_{0}$ is fixed.
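The energy identity above can be spot-checked numerically. The following sketch is not part of the argument: it uses a hypothetical three-vertex path graph $x_0$--$a$--$b$ with arbitrarily chosen conductances and killing rates, solves the linear system defining $h$ by hand-coded elimination, and verifies that $\mathcal{E}(hf,hf)-\mathcal{E}^{h}(f,f)$ depends on $f$ only through $f(x_0)$.

```python
import random

# Hypothetical 3-vertex path graph x0 - a - b; all numerical values are arbitrary.
W = {("x0", "a"): 1.5, ("a", "b"): 0.7}   # conductances W_e
kappa = {"x0": 0.3, "a": 0.2, "b": 0.4}   # killing measure

# h(x) = P_x(X hits x0 before zeta): h(x0) = 1 and, for x != x0,
# -kappa_x h(x) + sum_{y~x} W_{x,y} (h(y) - h(x)) = 0.
# On the path this is a 2x2 linear system; eliminate h(b) first.
W1, W2 = W[("x0", "a")], W[("a", "b")]
hb_coeff = W2 / (kappa["b"] + W2)                       # h(b) = hb_coeff * h(a)
ha = W1 / (kappa["a"] + W1 + W2 - W2 * hb_coeff)
h = {"x0": 1.0, "a": ha, "b": hb_coeff * ha}

def energy(g, cond, kill):
    # E(g, g) = sum_x kill_x g(x)^2 + sum_e cond_e (g(e+) - g(e-))^2
    return (sum(kill[x] * g[x] ** 2 for x in kill)
            + sum(c * (g[x] - g[y]) ** 2 for (x, y), c in cond.items()))

Wh = {e: c * h[e[0]] * h[e[1]] for e, c in W.items()}   # W^h_{x,y} = W_{x,y} h(x) h(y)
cste = kappa["x0"] - W1 * (h["a"] - 1.0)                # coefficient of f(x0)^2

rng = random.Random(0)
for _ in range(5):
    f = {x: rng.uniform(-2, 2) for x in h}
    hf = {x: h[x] * f[x] for x in h}
    lhs = energy(hf, W, kappa)
    rhs = cste * f["x0"] ** 2 + energy(f, Wh, {x: 0.0 for x in h})
    assert abs(lhs - rhs) < 1e-10
```

The cancellation for $x\neq x_0$ uses exactly the harmonicity relation satisfied by $h$, so the identity holds to machine precision for every choice of $f$.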
Let $\check{X}^{h}_{t}$ be the inverse process for the conductances $(W_{e}^{h})_{e\in E}$ and the initial field $\varphi_{h}^{(u)}$, given by Theorem \ref{thm-Poisson}. By applying the time change (\ref{EqTimeChange}) to the process $\check{X}^{h}_{t}$, we obtain an inverse process for the conductances $W_{e}$ and the field $\varphi^{(u)}$. \end{proof} \begin{proposition} \label{PropInfiniteCase} Assume that the graph $\mathcal{G}=(V,E)$ is infinite. The killing measure $\kappa$ may be non-zero. Then the conclusion of Theorem \ref{thm-Poisson} holds. \end{proposition} \begin{proof} Consider an increasing sequence of connected sub-graphs $\mathcal{G}_{i}=(V_{i},E_{i})$ of $\mathcal{G}$ which converges to the whole graph. We assume that $V_{0}$ contains $x_{0}$. Let $\mathcal{G}_{i}^{\ast}=(V_{i}^{\ast},E_{i}^{\ast})$ be the graph obtained by adding to $\mathcal{G}_{i}$ an abstract vertex $x_{\ast}$ and, for every edge $\lbrace x,y\rbrace$ with $x\in V_{i}$ and $y\in V\setminus V_{i}$, an edge $\lbrace x,x_{\ast}\rbrace$ with conductance $W_{x,x_{\ast}}=W_{x,y}$. Let $(X_{i,t})_{t\geq 0}$ denote the Markov jump process on $\mathcal{G}_{i}^{\ast}$, started from $x_{0}$. Let $\zeta_{i}$ be the first hitting time of $x_{\ast}$ or the first killing time by the measure $\kappa{{\mathbbm 1}}_{V_{i}}$. Let $\varphi^{(0)}_{i}$ and $\varphi^{(u)}_{i}$ denote the GFFs on $\mathcal{G}_{i}^{\ast}$ with condition $0$, respectively $\sqrt{2u}$, at $x_{0}$, with condition $0$ at $x_{\ast}$, and taking into account the possible killing measure $\kappa{{\mathbbm 1}}_{V_{i}}$. The limits in law of $\varphi^{(0)}_{i}$, respectively $\varphi^{(u)}_{i}$, are $\varphi^{(0)}$, respectively $\varphi^{(u)}$. Let $(\check{X}_{i,t},(\check{n}_{i,e}(t))_{e\in E_{i}^{\ast}}) _{0\leq t\leq\check{T}_{i}}$ be the inverse process on $\mathcal{G}_{i}^{\ast}$, with initial field $\varphi^{(u)}_{i}$.
$(X_{i,t})_{t\leq \tau_{i,u}^{x_{0}}}$, conditional on $\tau_{i,u}^{x_{0}}<\zeta_{i}$, has the same law as $(\check{X}_{i,\check{T}_{i}-t})_{t\leq \check{T}_{i}}$. Taking the limit in law as $i$ tends to infinity, we conclude that $(X_{t})_{t\leq \tau_{u}^{x_{0}}}$, conditional on $\tau_{u}^{x_{0}}<+\infty$, has the same law as $(\check{X}_{\check{T}-t})_{t\leq \check{T}}$ on the infinite graph $\mathcal{G}$. The same holds for the clusters. In particular, \begin{multline*} \mathbb{P}(\check{T}\leq t, \check{X}_{[0,\check{T}]}~\text{stays in}~V_{j})= \lim_{i\to +\infty} \mathbb{P}(\check{T}_{i}\leq t, \check{X}_{i,[0,\check{T}_{i}]}~\text{stays in}~V_{j}) \\= \lim_{i\to +\infty} \mathbb{P}(\tau_{i,u}^{x_{0}}\leq t, X_{i,[0,\tau_{i,u}^{x_{0}}]}~\text{stays in}~V_{j}\vert \tau_{i,u}^{x_{0}}<\zeta_{i})= \mathbb{P}(\tau_{u}^{x_{0}}\leq t, X_{[0,\tau_{u}^{x_{0}}]} ~\text{stays in}~V_{j}\vert \tau_{u}^{x_{0}} < \zeta), \end{multline*} where in the first two probabilities we also average over the values of the free fields. Hence \begin{displaymath} \mathbb{P}(\check{T}=+\infty~\text{or}~\check{X}_{\check{T}}\neq x_{0})= 1-\lim_{\substack{t\to +\infty\\ j\to +\infty}} \mathbb{P}(\tau_{u}^{x_{0}}\leq t, X_{[0,\tau_{u}^{x_{0}}]} ~\text{stays in}~V_{j}\vert \tau_{u}^{x_{0}} < \zeta) = 0. \end{displaymath} \end{proof} \section*{Acknowledgements} TL acknowledges the support of Dr. Max Rössler, the Walter Haefner Foundation and the ETH Zurich Foundation. \bibliographystyle{plain}
\section{Introduction} \label{farey_intro} In \cite{Weyl1} Hermann Weyl developed a general and far-reaching theory for the equidistribution of sequences modulo 1, which is discussed from a historical point of view in Stammbach's paper \cite{Stammbach}. In particular, Weyl's result that for real $t$ the sequence $ t ,\, 2t ,\, 3t,\,\ldots $ is equidistributed modulo 1 if and only if $t$ is irrational can be found in \cite[\S 1]{Weyl1}. This means that \begin{equation*} \lim \limits_{N \to \infty} \frac{1}{N} \#\left\{nt -\lfloor nt \rfloor \in [a,b] : n \leq N \right\}=b-a \end{equation*} holds for all subintervals $[a,b] \subseteq [0,1]$ if and only if $t \in \mathbb{R} \setminus \mathbb{Q}$. Here $\#M$ denotes the number of elements of a finite set $M$. This generalization of Kronecker's theorem \cite[Chapter XXIII, Theorem 438]{hw} is an important result in number theory. We have only mentioned its one-dimensional version, but the higher-dimensional case is also treated in Weyl's paper. Now we put \begin{equation}\label{Sxdef} S(x,t)=\sum \limits_{k \leq x} \left(kt -\lfloor kt \rfloor-\frac12\right) \end{equation} for $x \geq 0$ and $t \in \mathbb{R}$. If the sequence $(n t)_{n \in \mathbb{N}}$ is ``well distributed'' modulo 1 for irrational $t$, then $|S(x,t)|/x$ should be ``small'' for $x$ large enough. In \cite[equation (2), p. 80]{Ost1922} Ostrowski used the continued fraction expansion $t=\langle \lambda_0,\lambda_1,\lambda_2,\ldots \rangle$ for irrational $t$ and presented a very efficient way to calculate $S(n,t)$ for $n \in \mathbb{N}_0$: namely, he obtained a simple iterative procedure using at most $\mathcal{O}(\log n)$ steps for $n \to \infty$, uniformly in $t \in \mathbb{R}$. We have summarized his result in Theorem \ref{ostrowski} of the paper at hand. From this theorem he derived an estimate for $S(n,t)$ in the case of irrational $t \in \mathbb{R}$ which depends on the choice of $t$.
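For orientation, the sum \eqref{Sxdef} is straightforward to evaluate directly. The following sketch (not from the paper; plain floating-point arithmetic) illustrates the exact value for a rational $t$ and the slow growth of $|S(n,t)|$ for an irrational $t$ with bounded partial quotients:

```python
import math

def S(n, t):
    # S(n, t) = sum_{k <= n} (k t - floor(k t) - 1/2), cf. the definition above
    return sum(k * t - math.floor(k * t) - 0.5 for k in range(1, n + 1))

# For t = 1/2 the summand is 0 for odd k and -1/2 for even k, so S(10, 1/2) = -2.5.
assert abs(S(10, 0.5) + 2.5) < 1e-12

# t = (sqrt(5) - 1)/2 has all partial quotients equal to 1, the extreme case of
# bounded partial quotients; |S(n, t)| then grows only logarithmically in n.
t = (math.sqrt(5) - 1) / 2
print(S(1000, t))  # of modest size compared to n = 1000
```

The logarithmic bound quoted next (Ostrowski's estimate) explains why the printed value stays so small.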
If $(\lambda_k)_{k \in \mathbb{N}_0}$ is a bounded sequence, then we say that $t$ has \textit{bounded partial quotients}, and in this case Ostrowski's paper gives \begin{equation}\label{ostopt} |S(n,t)| \leq C(t) \log n \,,\quad n \geq 2\,, \end{equation} with a constant $C(t)>0$ depending on $t$\,. Ostrowski also showed that this gives the best possible result, answering an open question posed by Hardy and Littlewood.\\ In \cite{Lang1966} and \cite[III,\S 1]{Lang1995} Lang obtained for every fixed $\varepsilon > 0$ that \begin{equation}\label{lang_fast_ueberall} |S(n,t)| \leq\left(\log n \right)^{2+\varepsilon} \quad \mbox{~for ~} n \geq n_0(t,\varepsilon) \end{equation} for almost all $t \in \mathbb{R}$ with a constant $n_0(t,\varepsilon) \in \mathbb{N}$. Let $\alpha$ be an irrational real number and $g \geq 1$ be an increasing function, defined for sufficiently large positive numbers. Due to Lang \cite[II,\S 1]{Lang1995} the number $\alpha$ is of type $\leq g$ if for all sufficiently large numbers $B$, there exists a solution in relatively prime integers $q,p$ of the inequalities $$ |q\alpha-p|<1/q\,, \quad B/g(B)\leq q <B\,. $$ After Corollary 2 in \cite[II,\S 3]{Lang1995}, where Lang studied the quantitative connection between Weyl's equidistribution modulo 1 for the sequence $t,2t,3t,\ldots$ and the type of the irrational number $t$, he mentioned the work of Ostrowski \cite{Ost1922} and Behnke \cite{Behnke} and wrote: ``Instead of working with the type as we have defined it, however, these last-mentioned authors worked with a less efficient way of determining the approximation behaviour of $\alpha$ with respect to $p/q$, whence followed weaker results and more complicated proofs.''
\\ Though Lang's theory gives Ostrowski's estimate \eqref{ostopt} for all real irrational numbers $t$ with bounded partial quotients, see \cite[II, \S 2, Theorem 6 and III,\S 1, Theorem 1]{Lang1995}, as well as estimate \eqref{lang_fast_ueberall} for almost all $t \in \mathbb{R}$, Lang did not use Ostrowski's efficient formula for the calculation of $S(n,t)$\,. We will see in Section \ref{dirichlet_section} of the paper at hand that Ostrowski's formula can be used as well in order to derive estimate \eqref{lang_fast_ueberall} for almost all $t \in \mathbb{R}$, without working with the type defined in \cite[II,\S 1]{Lang1995}. For this purpose we will present the general and useful Theorem \ref{kettenabschaetz}, which will be derived in Section \ref{sawtooth} from the elementary theory of continued fractions. Our resulting new Theorems \ref{B0_mass_almost_everywhere} and \ref{B0_mass} have the advantage of providing an explicit form for those sets of $t$-values which satisfy crucial estimates of $S(n,t)$.\\ If $\Theta:[1,\infty) \to [1,\infty)$ is any monotonically increasing function with $\begin{displaystyle}\lim \limits_{n \to \infty} \Theta(n)=\infty\end{displaystyle}$\,, then Theorem \ref{B0_mass} gives the inequality $ |S(n,t)| \leq 2 \log^2(n) \Theta(n) $ uniformly for all $n \geq 3$ and all $t \in \mathcal{M}_n$ for a sequence of sets $\mathcal{M}_n \subseteq [0,1]$ with $\lim \limits_{n \to \infty}|\mathcal{M}_n|=1$. Here $|\mathcal{M}_n|$ denotes the Lebesgue measure of $\mathcal{M}_n$. On the other hand, Theorem \ref{Bx_L2} states that $$ \left(\int \limits_{0}^{1}S(n,t)^2\,dt\right)^{1/2}=\mathcal{O}(\sqrt{n}) \quad \text{for~}n \to \infty $$ gives the true order of magnitude for the $L_2(0,1)$-norm of $S(n,\cdot)$. If $\Theta$ increases slowly, then the values of $S(n,t)$ with $t$ in the unit interval $[0,1]$ which give the major contribution to the $L_2(0,1)$-norm have their pre-images only in the small complements $[0,1] \setminus \mathcal{M}_n$.
We see that $n_0(t,\varepsilon)$ in estimate \eqref{lang_fast_ueberall} depends substantially on the choice of $t$. Moreover, a new representation formula for $B_n(t)=S(n,t)/n$, given in Section \ref{sawtooth}, Theorem \ref{Bx_thm}, will also yield an alternative proof of Ostrowski's estimate \eqref{ostopt} if $t$ has bounded partial quotients. In this way we summarize and refine the corresponding results given by Ostrowski and Lang, respectively.\\ For $n \in \mathbb{N}$ and $N = \sum\limits_{k=1}^{n}\varphi(k)$ the Farey sequence $\mathcal{F}_n$ of order $n$ consists of all reduced and ordered fractions \begin{equation*} \frac{0}{1}=\frac{a_{0,n}}{b_{0,n}} < \frac{a_{1,n}}{b_{1,n}} < \frac{a_{2,n}}{b_{2,n}} < \ldots < \frac{a_{N,n}}{b_{N,n}}=\frac{1}{1} \end{equation*} with $1 \leq b_{\alpha,n} \leq n$ for $\alpha = 0,1, \ldots,N$. By $\mathcal{F}^{ext}_n$ we denote the extension of $\mathcal{F}_n$ consisting of all reduced and ordered fractions $\frac{a}{b}$ with $a \in \mathbb{Z}$ and $b \in \mathbb{N}$, $b \leq n$. In the earlier paper \cite{Ku4} we studied 1-periodic functions $\Phi_n : \mathbb{R} \to \mathbb{R}$ which are related to the Farey sequence $\mathcal{F}_n$, based on the theory developed in \cite{Ku1, Ku2, Ku3} for related functions. For $k \in \mathbb{N}$ and $x>0$ the 1-periodic functions $q_k, \Phi_x : \mathbb{R} \to \mathbb{R}$ are \begin{equation}\label{familien} \begin{split} q_k(t) &=-\sum \limits_{d | k} \mu(d)\, \beta \left(\frac{kt}{d}\right) ~\, \mbox{with}~\, \beta(t)=t-\lfloor t \rfloor - \frac12\,,\\ \Phi_x\left(t \right) &= \frac{1}{x} \sum \limits_{k \leq x} q_k(t)=- \frac{1}{x} \sum \limits_{j \leq x} \sum \limits_{k \leq x/j} \mu(k) \beta\left(jt \right)\,.\\ \end{split} \end{equation} The functions $\Phi_x$ determine the number of Farey fractions in prescribed intervals.
More precisely, $t\sum\limits_{k \leq n}\varphi(k)+n\Phi_n(t)+\frac12$ gives the number of fractions of $\mathcal{F}^{ext}_n$ in the interval $[0,t]$ for $t \geq 0$ and $n \in \mathbb{N}$. Moreover, there is a connection between the functions $S(x,t)$ and $\Phi_x(t)$ via the Mellin transform and the Riemann zeta function, namely the relation \begin{equation*} \int \limits_{1}^{\infty} \frac{S(x,t)}{x^{s+1}}dx =-\zeta(s)\,\int \limits_{1}^{\infty} \frac{\Phi_x(t)}{x^s}dx\,, \end{equation*} valid for $\Re(s)>1$ and any fixed $t \in \mathbb{R}$. We will use it in a modified form in Theorem \ref{F_thm}. In contrast to Ostrowski's approach using elementary evaluations of $S(n,t)$ for real values of $t$, Hecke \cite{Hecke} considered the case of special quadratic irrational numbers $t$, studied the analytical properties of the corresponding Dirichlet series \begin{equation}\label{Hecke_series} \sum \limits_{m=1}^{\infty}\frac{mt-\lfloor mt \rfloor-\frac12}{m^s}= s\int \limits_{1}^{\infty} \frac{S(x,t)}{x^{s+1}}dx \end{equation} and obtained its meromorphic continuation to the whole complex plane, including the location of poles. Hecke could use his analytical method to derive estimates for $S(n,t)$, but he did not obtain Ostrowski's optimal result \eqref{ostopt} for real irrationalities $t$ with bounded partial quotients. For positive irrational numbers $t$ Sourmelidis \cite{Sourmelidis} studied analytical relations between the Dirichlet series in \eqref{Hecke_series} and the so-called Beatty zeta-functions and Sturmian Dirichlet series.
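Since $S(x,t)$ is a step function, the integral in \eqref{Hecke_series} can be evaluated exactly piece by piece, and truncating both sides at $X$ leaves, by Abel summation, exactly the boundary term $X^{-s}S(X-1,t)$. The following sketch (real $s>1$, plain floating point; not from the paper) checks this relation numerically:

```python
import math

beta = lambda u: u - math.floor(u) - 0.5

def check(t, s, X):
    # Partial sums S(k, t) of the sawtooth values beta(m t), for k = 0, ..., X-1.
    S, Ssum = [0.0], 0.0
    for m in range(1, X):
        Ssum += beta(m * t)
        S.append(Ssum)
    lhs = sum(beta(m * t) / m ** s for m in range(1, X))
    # s * int_1^X S(x, t) x^{-s-1} dx, computed exactly since S(x, t) is a step
    # function: s * int_k^{k+1} x^{-s-1} dx = k^{-s} - (k+1)^{-s}.
    rhs = sum(S[k] * (k ** -s - (k + 1) ** -s) for k in range(1, X))
    # Abel summation shows lhs - rhs equals the boundary term X^{-s} S(X-1, t).
    return lhs - rhs - X ** -s * S[X - 1]

assert abs(check(math.sqrt(2), 2.0, 500)) < 1e-9
```

For $\Re(s)>1$ the boundary term vanishes as $X\to\infty$, which recovers the identity \eqref{Hecke_series}.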
For $x>0$ we set \begin{equation*} r_x =\frac{1}{x} \sum \limits_{k \leq x} \varphi(k)\,, \quad s_x = \sum \limits_{k \leq x} \frac{\varphi(k)}{k} \,, \end{equation*} and define the \textit{continuous} and odd function $h : \mathbb{R} \to \mathbb{R}$ by \begin{equation*} h(x)=\begin{cases} 0 &\ \text{for}\ x =0 \,,\\ 3x/\pi^2+r_x-s_x &\ \text{for}\ x >0 \,,\\ -h(-x) &\ \text{for}\ x < 0\, .\\ \end{cases} \end{equation*} Then we obtained in \cite[Theorem 2.2]{Ku4} for any fixed reduced fraction $a/b$ with $a \in \mathbb{Z}$ and $b \in \mathbb{N}$ and any $x_*>0$ that for $n \to \infty$ $$ \tilde{h}_{a,b}(n,x)=-b \, \Phi_n\left(\frac{a}{b}+\frac{x}{bn}\right) $$ converges uniformly to $h(x)$ for $-x_* \leq x \leq x_*$. For this reason we have called $h$ a limit function. It follows from \cite[Theorem 1]{Mont} with an absolute constant $c>0$ for $x\geq 2$ that $\begin{displaystyle} h(x) = \mathcal{O}\left( e^{-c\sqrt{\log x}}\right)\,. \end{displaystyle}$ Plots of this limit function are presented in Section \ref{appendix}, Figures \ref{h25eps}, \ref{h50eps}, \ref{h500eps}. In Section \ref{sawtooth} we introduce another limit function ${\tilde \eta} : \mathbb{R} \to \mathbb{R}$ by ${\tilde \eta}(0)=-\frac12$ and \begin{equation*} {\tilde \eta}(x)= \frac{(x-\lfloor x \rfloor)(x-\lfloor x \rfloor-1)}{2x} \quad \text{for}\ x \in \mathbb{R} \setminus \{0\}\,, \end{equation*} and obtain from Theorem \ref{rescaled_limit}, in analogy to \cite[Theorem 2.2]{Ku4}, the new result for $B_n(t)=S(n,t)/n$ that for $n \to \infty$ $$ \tilde{\eta}_{a,b}(n,x)= b \, B_n\left(\frac{a}{b}+\frac{x}{bn}\right) $$ converges uniformly to $\tilde{\eta}(x)$ for $-x_* \leq x \leq x_*$. A plot of ${\tilde \eta}(x)$ for $-8 \leq x \leq 8$ is given in Section \ref{appendix}, Figure \ref{eta8eps}.
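The convergence of the rescaled values $b\,B_n(a/b+x/(bn))$ to $\tilde\eta(x)$ can be observed numerically. The sketch below (case $a/b=0/1$ only; the tolerance is heuristic, since the theorem asserts convergence only as $n\to\infty$) compares a direct evaluation of $B_n$ with $\tilde\eta$:

```python
import math

def B(n, t):
    # B_n(t) = (1/n) * sum_{k <= n} beta(k t), with beta(u) = u - floor(u) - 1/2
    return sum(k * t - math.floor(k * t) - 0.5 for k in range(1, n + 1)) / n

def eta(x):
    # The limit function: eta(0) = -1/2, else frac(x) * (frac(x) - 1) / (2x).
    if x == 0:
        return -0.5
    frac = x - math.floor(x)
    return frac * (frac - 1) / (2 * x)

# Rescaled values around the fraction a/b = 0/1: b * B_n(a/b + x/(b n)) = B_n(x/n).
n = 4000
for x in (0.5, 1.7, 2.5, -3.3):
    assert abs(B(n, x / n) - eta(x)) < 0.01
```

For fixed $x$, the sum defining $B_n(x/n)$ is a Riemann sum of $\beta(xu)$ over $[0,1]$, whose integral is exactly $\tilde\eta(x)$; the error is of order $1/n$ for a function of bounded variation.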
Now Theorem \ref{Bx_thm}(b) follows from part (a) and leads to the formula \eqref{Bsequence}, which bears a strong resemblance to that in Ostrowski's Theorem \ref{ostrowski} and gives an alternative proof for Ostrowski's estimate \eqref{ostopt} if $t$ has bounded partial quotients. Hence it would be interesting to know whether there is a deeper reason for this analogy. \section{Sums with sawtooth functions}\label{sawtooth} With the sawtooth function $\beta(t)=t-\lfloor t \rfloor -\frac12$ we define for $x>0$ the 1-periodic functions $B_{x}: \mathbb{R} \to \mathbb{R}$ by \begin{equation}\label{Bxdef} B_{x}\left(t \right)= \frac{1}{x}\sum \limits_{k \leq x} \beta(kt)\,. \end{equation} The function $\beta(t)$ has jumps of height $-1$ exactly at integer numbers $t \in \mathbb{Z}$ but is continuous elsewhere. Let $a/b$ with $a \in \mathbb{Z}$, $b \in \mathbb{N}$ be any reduced fraction with denominator $b \leq x$. By $\begin{displaystyle} u^{\pm}(t)=\lim \limits_{\varepsilon \downarrow 0} u(t \pm \varepsilon) \end{displaystyle}$ we denote the one-sided limits of a real- or complex-valued function $u$ with respect to the real variable $t$. Then the height of the jump of $B_x$ at $a/b$ is given by \begin{equation}\label{Bxjump} B_x^+(a/b)-B_x^-(a/b)=-\frac{1}{x}\left\lfloor\frac{x}{b}\right\rfloor\,. \end{equation} We introduce the function ${\tilde \eta} : \mathbb{R} \to \mathbb{R}$ given by ${\tilde \eta}(0)=-\frac12$ and \begin{equation}\label{etalimit} {\tilde \eta}(x)= \frac{(x-\lfloor x \rfloor)(x-\lfloor x \rfloor-1)}{2x} \quad \text{for}\ x \in \mathbb{R} \setminus \{0\}\,. \end{equation} The function ${\tilde \eta}$ is continuous apart from the zero-point with derivative \begin{equation}\label{etader} {\tilde \eta}'(x)= \frac12-\frac{\lfloor x \rfloor (\lfloor x \rfloor+1)}{2x^2}\quad \text{for} ~ x \in \mathbb{R} \setminus \mathbb{Z}\,.
\end{equation} In the following theorem we assume that $\frac{a}{b} < \frac{a^*}{b^*}$ are conse\-cu\-tive reduced fractions in the extended Farey sequence $\mathcal{F}^{ext}_b$ of order $b \leq n$ with $b,b^*,n \in \mathbb{N}$. \begin{thm}\label{Bx_thm} \begin{itemize} \item[(a)] For $0<x\leq n/b^*$ we have \begin{equation*} \begin{split} &B_{n}\left(\frac{a}{b}+\frac{x}{bn}\right) =B_{n}\left(\frac{a}{b}\right)+\frac{1}{2b}+\frac{{\tilde \eta}(x)}{b} -\frac{x}{n}\, B_{x}^-\left(\frac{n/x-b^*}{b}\right)\\ &+\frac{x}{2bn} +\frac{1}{n}\,\sum \limits_{k \leq x}\beta\left(\frac{n-kb^*}{b}\right)\,.\\ \end{split} \end{equation*} \item[(b)] For $0<t \leq 1$ and $n \in \mathbb{N}$ we have \begin{equation*} B_{n}\left(t\right) ={\tilde \eta}(tn) -\frac{\lfloor tn \rfloor}{n}\, B_{\lfloor tn \rfloor}^-\left(\frac{1}{t}- \left \lfloor \frac{1}{t} \right \rfloor\right) +\frac{tn-\lfloor tn \rfloor}{2n}\,. \end{equation*} \end{itemize} \end{thm} \begin{proof} Since (b) follows from (a) in the special case $a=0$, $a^*=b^*=b=1$, it is sufficient to prove (a). 
We define for $0 < x \leq n/b^*$: \begin{equation}\label{rntilde} \begin{split} R_n\left(\frac{a}{b},x\right) &=-b\left( B_{n}\left(\frac{a}{b}+\frac{x}{bn}\right)- B_{n}\left(\frac{a}{b}\right)\right)\\ &+\frac12+{\tilde \eta}(x)-\frac{bx}{n}\, B_{x}^-\left(\frac{n/x-b^*}{b}\right)+\frac{x}{2n}\,.\\ \end{split} \end{equation} We use \eqref{Bxdef}, \eqref{etader} and obtain, outside the discrete set of jump discontinuities of $R_n$, its derivative \begin{equation*} \begin{split} \frac{d}{dx} R_n\left(\frac{a}{b},x\right) &=-b\cdot\frac{n+1}{2}\cdot \frac{1}{bn} +\frac{1}{2}-\frac{\lfloor x \rfloor (\lfloor x \rfloor+1)}{2x^2}\\ &-\frac{b}{n}\,\frac{d}{dx} \left( x \, B_{x}^-\left(\frac{n/x-b^*}{b}\right) \right)+\frac{1}{2n}\\ & = -\frac{\lfloor x \rfloor (\lfloor x \rfloor+1)}{2x^2}-\frac{b}{n}\,\frac{d}{dx} \sum \limits_{k \leq \lfloor x \rfloor} \beta^-\left(k \, \frac{n/x-b^*}{b}\right)\\ & = -\frac{\lfloor x \rfloor (\lfloor x \rfloor+1)}{2x^2}-\frac{b}{n} \sum \limits_{k \leq \lfloor x \rfloor}k \cdot \frac{n}{b} \cdot \left(-\frac{1}{x^2}\right)=0\,.\\ \end{split} \end{equation*} Note that $B_n=B_n^+$ and $R_n=R^+_{n}$. We deduce from \cite[Theorem 2.2]{Ku3} for any $x$ in the interval $0 < x \leq n/b^*$ that $\begin{displaystyle} a/b+x/(bn)\end{displaystyle}$ is a jump discontinuity of $B_{n}$ if and only if $\begin{displaystyle}(n/x-b^*)/b\end{displaystyle}$ is a jump discontinuity of $B_{x}$. Let $$ q =\frac{a'}{b'}=\frac{n/x_+(q)-b^*}{b} $$ be any reduced fraction $a'/b' \in \mathcal{F}^{ext}_{\lfloor x_+(q) \rfloor}$ from \cite[Theorem 2.2(b)]{Ku3}. We use \eqref{Bxjump} and have \begin{equation}\label{phin_sprung} -b (B_{n}^{+}-B_{n}^{-})\left(\frac{a}{b}+\frac{x_+(q)}{nb}\right) = \frac{b}{n} \left\lfloor \frac{n}{b^*b'+ba'} \right\rfloor = \frac{b}{n} \left\lfloor \frac{x_+(q)}{b'} \right\rfloor\,. \end{equation} First we consider the case that $x_+(q)$ is a {\it non-integer} number.
Using again \eqref{Bxjump} we obtain \begin{equation*} \begin{split} & \lim \limits_{x \,\downarrow \,x_+(q)} \left(x \,B_{x}^-\left(\frac{n/x-b^*}{b}\right) \right)- \lim \limits_{x \,\uparrow \, x_+(q)} \left(x \,B_{x}^-\left(\frac{n/x-b^*}{b}\right) \right)\\ &=x_+(q)\cdot\left(B_{x_+(q)}^- -B_{x_+(q)}^+ \right)\left(\frac{a'}{b'}\right) =\left\lfloor \frac{x_+(q)}{b'} \right\rfloor\,, \end{split} \end{equation*} taking into account that $(n/x-b^*)/b$ is monotonically decreasing with respect to $x$. In this case we obtain from \eqref{phin_sprung} \begin{equation*} \left(R_n^{+}-R_n^{-}\right)\left(\frac{a}{b},x_+(q)\right) = \frac{b}{n} \left\lfloor \frac{x_+(q)}{b'} \right\rfloor -\frac{b}{n} \left\lfloor \frac{x_+(q)}{b'} \right\rfloor=0\,, \end{equation*} which implies that $R_n$ is free from jumps at non-integer arguments $x$. It remains to calculate the jumps of $R_n$ at any {\it integer argument} $k$ with $0<k \leq n/b^*$. Here we also have to take care of the jump in $B_{x}=B_{k}$ with respect to the index $x=k$, and conclude \begin{equation}\label{sprung_fall2} \begin{split} & \lim \limits_{x \,\downarrow \,k} \left(x \,B_{x}^-\left(\frac{n/x-b^*}{b}\right) \right)- \lim \limits_{x \,\uparrow \, k} \left(x \,B_{x}^-\left(\frac{n/x-b^*}{b}\right) \right)\\ &=\lim \limits_{\varepsilon \,\downarrow \,0} \left[ \sum \limits_{j \leq k}\beta^-\left( j \cdot\frac{\frac{n}{k+\varepsilon}-b^*}{b}\right) - \sum \limits_{j < k}\beta^-\left(j \cdot \frac{\frac{n}{k-\varepsilon}-b^*}{b}\right) \right]\\ &=\sum \limits_{j \leq k}\beta^- \left( j \cdot \frac{n/k-b^*}{b}\right) -\sum \limits_{j \leq k}\beta^+ \left(j \cdot \frac{n/k-b^*}{b}\right)+ \beta \left(\frac{n-kb^*}{b} \right)\\ &=\beta \left(\frac{n-kb^*}{b} \right)- k\left(B_{k}^{+}-B_{k}^{-} \right)\left(\frac{n/k-b^*}{b} \right)\,. 
\end{split} \end{equation} Using \eqref{rntilde}, \eqref{sprung_fall2} we obtain \begin{equation*} \begin{split} \left(R_n^{+}-R_n^{-}\right)\left(\frac{a}{b},k \right) &=-\frac{b}{n}\,\beta \left(\frac{n-kb^*}{b}\right)\\ & -b\, ( B_{n}^{+}- B_{n}^{-}) \left( \frac{a}{b}+\frac{k}{bn}\right)\\ &+\frac{b}{n}k\, ( B_{k}^{+}- B_{k}^{-}) \left(\frac{n/k-b^*}{b}\right)\,.\\ \end{split} \end{equation*} Due to \eqref{phin_sprung} and \cite[Theorem 2.2]{Ku3} the second and third terms on the right-hand side cancel each other. We conclude that $R_n$ is a step function with respect to $x$ for a given fraction $a/b$ which has jumps of height \begin{equation*} R_n^+\left(\frac{a}{b},k \right) -R_n^-\left(\frac{a}{b},k \right) =-\frac{b}{n}\,\beta \left(\frac{n-kb^*}{b}\right) \end{equation*} only at integer numbers $k$ with $0<k \leq n/b^*$. To complete the proof of the theorem we only have to note that $\begin{displaystyle} \lim \limits_{\varepsilon \, \downarrow \, 0} R_n(a/b,\varepsilon)=0\,. \end{displaystyle}$ \end{proof} \begin{thm}\label{Bx_L2} For $x \to \infty$ we have with the $L_2(0,1)$-norm $\|\cdot\|_2$ $$ \|B_{x}\|_2=\mathcal{O}\left(\frac{1}{\sqrt{x}}\right)\,. $$ On the other hand, there is a constant $C>0$ with $$ \|B_{x}\|_2 \geq\frac{C}{\sqrt{x}} \quad \text{for~}x \geq 1\,.
$$ \end{thm} \begin{proof} Since $\begin{displaystyle} \int \limits_0^1\beta(mx)\beta(nx)dx = \frac{(m,n)^2}{12 mn} \end{displaystyle}$ for all $m,n \in \mathbb{N}$, we obtain \begin{equation*} \begin{split} \|B_{x}\|_2^2 &=\frac{1}{12x^2}\sum \limits_{m,n \leq x}\frac{(m,n)^2}{mn} =\frac{1}{12x^2}\sum \limits_{d \leq x}\sum \limits_{\underset{(m,n)=d}{m,n\leq x\,:}}\frac{d^2}{mn}\\ &=\frac{1}{12x^2}\sum \limits_{d \leq x}\sum \limits_{\underset{(j,k)=1}{j,k\leq x/d\,:}}\frac{1}{jk} \leq \frac{1}{12x^2}\sum \limits_{d \leq x}\sum \limits_{j,k\leq x/d}\frac{1}{jk}\\ &\leq \frac{1}{12x^2}\sum \limits_{d \leq x}\left(\log(x/d)+2\right)^2 =\mathcal{O}\left(\frac{1}{x}\right) \quad\text{for}~x \to \infty\\ \end{split} \end{equation*} from Euler's summation formula, observing that $$ \int \limits_{1}^{x}\left(\log(x/t)+2 \right)^2\,dt=10(x-1)-6\log(x)-\log(x)^2=\mathcal{O}\left(x \right)\,, \quad x \geq 1\,. $$ To complete the proof we note that \begin{equation*} \|B_{x}\|_2^2 =\frac{1}{12x^2}\sum \limits_{d \leq x} \sum \limits_{\underset{(j,k)=1}{j,k\leq x/d\,:}}\frac{1}{jk}\geq \frac{1}{12x^2}\sum \limits_{d \leq x}1=\frac{\lfloor x \rfloor}{12x^2}\,. \end{equation*} \end{proof} The next two theorems employ the elementary theory of continued fractions. We will use them to derive estimates for $B_n(t)$ with $t$ in certain subsets $\mathcal{M}_n , \tilde{\mathcal{M}}_n \subset (0,1)$ and $\lim \limits_{n \to \infty} |\mathcal{M}_n|= \lim \limits_{n \to \infty}|\tilde{\mathcal{M}}_n| =1$. First we recall some basic facts and notations about continued fractions.
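The opening identity of the preceding proof, $\int_0^1\beta(mx)\beta(nx)\,dx=(m,n)^2/(12mn)$, can be spot-checked by numerical quadrature. The sketch below (not from the paper) uses a midpoint rule whose grid provably avoids the jump points of the integrand for the chosen parameters:

```python
import math

def lhs(m, n, N=100_000):
    # Midpoint rule for int_0^1 beta(m x) beta(n x) dx; the midpoints (i + 1/2)/N
    # avoid the jump points k/m, k/n of beta when N is even and m, n are moderate.
    beta = lambda u: u - math.floor(u) - 0.5
    return sum(beta(m * (i + 0.5) / N) * beta(n * (i + 0.5) / N) for i in range(N)) / N

def rhs(m, n):
    return math.gcd(m, n) ** 2 / (12 * m * n)

for m, n in [(1, 1), (2, 3), (4, 6), (5, 10)]:
    assert abs(lhs(m, n) - rhs(m, n)) < 1e-3
```

For $m=n=1$ this reduces to $\int_0^1(u-\tfrac12)^2\,du=\tfrac1{12}$; the other cases exercise the gcd factor.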
For $\lambda_0 \in \mathbb{R}$ and $\lambda_1,\lambda_2,\ldots,\lambda_m>0$ the finite continued fraction $\langle \lambda_0,\lambda_1,\ldots,\lambda_m \rangle$ is defined recursively by $\langle \lambda_0\rangle=\lambda_0$, $\langle \lambda_0,\lambda_1\rangle=\lambda_0+1/\lambda_1$ and \begin{equation*} \langle \lambda_0,\lambda_1,\ldots,\lambda_m \rangle= \langle \lambda_0,\ldots,\lambda_{m-2},\lambda_{m-1}+1/\lambda_m \rangle\,, \quad m\geq 2\,. \end{equation*} Moreover, if $\lambda_j\geq 1$ is given for all $j \in \mathbb{N}$, then the limit \begin{equation*} \lim \limits_{m \to \infty}\langle \lambda_0,\lambda_1,\ldots,\lambda_m \rangle= \langle \lambda_0,\lambda_1,\lambda_2\ldots\rangle \end{equation*} exists and defines an infinite continued fraction. In particular, for integers $\lambda_0 \in \mathbb{Z}$ and $\lambda_1,\lambda_2,\ldots \in \mathbb{N}$ we obtain a unique representation \begin{equation}\label{kette0} t=\langle \lambda_0,\lambda_1,\lambda_2\ldots\rangle \end{equation} for all $t \in \mathbb{R} \setminus \mathbb{Q}$ in terms of an infinite continued fraction. Here the coefficients $\lambda_j$ are obtained as follows: For given $t \in \mathbb{R} \setminus \mathbb{Q}$ we put \begin{equation}\label{kette1} \vartheta_0=t\,, \quad \vartheta_j=\frac{1}{\vartheta_{j-1}-\lfloor \vartheta_{j-1}\rfloor}>1\,, \quad j \in \mathbb{N}\,. \end{equation} We may also write $\vartheta_j =\vartheta_j(t)$ in order to indicate that the quantities $\vartheta_j$ depend on the fixed number $t$. We have \begin{equation}\label{kette2} \lambda_0= \lfloor t \rfloor\,,~\lambda_j = \lfloor \vartheta_j \rfloor \quad \text{and~} t=\langle \lambda_0,\ldots,\lambda_{j-1},\vartheta_j\rangle \quad \text{for ~ all ~} j \in \mathbb{N}\,. \end{equation} The following theorem is due to A. Ostrowski. It allows a very efficient calculation of the values $B_n(t)$ in terms of the continued fraction expansion of $t$. \begin{thm}\label{ostrowski}Ostrowski \cite[equation (2), p.
80]{Ost1922}\\ Put $S(n,t)=\sum \limits_{k \leq n}\beta(kt)=nB_n(t)$ for $n \in \mathbb{N}_0$ and $t \in \mathbb{R}$. Let the continued fraction expansion $t=\langle\lambda_0,\lambda_1,\lambda_2,\ldots\rangle$ of a fixed $t \in \mathbb{R} \setminus \mathbb{Q}$ and $n \in \mathbb{N}$ be given. Then there is exactly one index $j_* \in \mathbb{N}$ with $b_{j_*} \leq n < b_{j_*+1}$, where $\begin{displaystyle} a_k/b_k = \langle \lambda_0,\ldots,\lambda_{k-1} \rangle \end{displaystyle}$ denote the reduced convergents of $t$, with $k,b_k \in \mathbb{N}$. Put $$ n'=n-b_{j_*}\left \lfloor \frac{n}{b_{j_*}}\right \rfloor\,. $$ Then we have \begin{equation}\label{Sost1} S(n,t)=S(n',t)+\frac{(-1)^{j_*}}{2}\left \lfloor \frac{n}{b_{j_*}}\right \rfloor \left(1-\rho_{j_*} (n+n'+1) \right) \end{equation} with $\rho_{j_*} = |b_{j_*} t-a_{j_*} |$ and \begin{equation}\label{Sost2} \left \lfloor \frac{n}{b_{j_*}}\right \rfloor \leq \lambda_{j_*}\,,\quad 0 < \left|1-\rho_{j_*}(n+n'+1) \right| <1\,. \end{equation} \end{thm} Following Ostrowski's strategy we note two important consequences. We fix any number $t = \langle \lambda_0,\lambda_1,\lambda_2,\ldots \rangle \in \mathbb{R} \setminus \mathbb{Q}$ and apply Ostrowski's Theorem \ref{ostrowski} successively, starting with the calculation of $S(n,t)$, which gives $\begin{displaystyle} |S(n,t)|\leq |S(n',t)| +\lambda_{j_*}/2\end{displaystyle}$\,. If $n'=0$, then $S(n',t)=0$, and we are done. Otherwise we replace $n$ by the reduced number $n'$ with $0<n'<b_{j_*}$ and apply Ostrowski's theorem again, and so on. For the final calculation of $S(n,t)$ we need at most ${j_*}$ applications of the recursion formula and conclude from \eqref{Sost1}, \eqref{Sost2} that \begin{equation}\label{Snfinal} n|B_n(t)| = |S(n,t)| \leq \frac12 \, \sum \limits_{k=1}^{j_*} \lambda_k \,.
\end{equation} From $b_0=0$, $b_1=1$ and $b_{j+1}=b_{j-1}+\lambda_j b_j$ for $j \in \mathbb{N}$ we obtain $b_{j+1}\geq 2b_{j-1}$, and hence for all $j \geq 3$ that $\begin{displaystyle} b_j \geq 2^{\frac{j-1}{2}}\,. \end{displaystyle}$ Since $n \geq b_{j_*}$, we obtain without restrictions on $j_*$ for $n \geq 3$ that $\begin{displaystyle} n \geq 2^{\frac{j_*-1}{2}} \end{displaystyle}$ and \begin{equation}\label{jstarabschaetz} \begin{split} j_* \leq 1+\frac{2}{\log 2 }\log n \leq \left(1 + \frac{2}{2/3}\right)\log n = 4 \log n \,, \quad n \geq 3\,, \end{split} \end{equation} where we used $\log 2 > 2/3$ and $\log n \geq \log 3 > 1$. We will see that \eqref{Snfinal} and \eqref{jstarabschaetz} have important consequences; an immediate one is Ostrowski's estimate \eqref{ostopt} for irrational numbers $t$ with bounded partial quotients. But first we shed new light on these estimates by using Theorem \ref{Bx_thm}(b) instead of Theorem \ref{ostrowski}. We put $J_*=(0,1)\setminus \mathbb{Q}$ and fix any $t \in J_*$ and $n \in \mathbb{N}$. The sequence \begin{equation}\label{tsequence} t_0 =t\,, \quad t_{j}=\frac{1}{t_{j-1}} - \left \lfloor\frac{1}{t_{j-1}} \right \rfloor \end{equation} with $j \in \mathbb{N}$ is infinite, whereas the corresponding sequence of non-negative integers \begin{equation*} n_0 =n\,, \quad n_{j} = \lfloor t_{j-1} n_{j-1} \rfloor \end{equation*} is strictly decreasing and terminates if $n_{j}=0$. Therefore $n_{j'}=0$ for some index $j'\in \mathbb{N}$. We assume $1 \leq j<j'$ and distinguish the two cases $0<t_{j-1}<1/2$ and $1/2<t_{j-1}<1$. In the first case we have $n_{j+1}<n_{j} < n_{j-1}/2$, and in the second case again $$n_{j+1}= \lfloor t_{j}\lfloor t_{j-1} n_{j-1}\rfloor\rfloor < t_{j} t_{j-1} n_{j-1} = (1-t_{j-1})n_{j-1} < n_{j-1}/2\,. $$ If $j'$ is odd, then $$ n=n_0 \geq 2^{\frac{j'-1}{2}}n_{j'-1} \geq 2^{\frac{j'-1}{2}}\,, $$ otherwise $$ n=n_0 \geq n_1 \geq 2^{\frac{j'-2}{2}}n_{j'-1} \geq 2^{\frac{j'-2}{2}}\,, $$ and $n \geq 2^{\frac{j'-2}{2}}$ in both cases.
Therefore \begin{equation}\label{jstrich} j' \leq 2+ \frac{2}{\log 2} \log n \leq \left(1+ \frac{2}{\log 2}\right) \log n \leq 4 \log n\,,\quad n \geq 8\,. \end{equation} Estimate \eqref{jstrich} bears a strong resemblance to \eqref{jstarabschaetz}. Now it follows from Theorem \ref{Bx_thm}(b) that \begin{equation}\label{Bsequence} B_n(t) = \sum \limits_{j=0}^{j'-1}(-1)^j \left( \frac{n_j}{n} {\tilde \eta}(t_j n_j) + \frac{t_j n_j-\lfloor t_j n_j \rfloor}{2n} \right)\,. \end{equation} For the sequence in \eqref{tsequence} we have $\vartheta_{j+1}t_{j}=1$ for all $j \in \mathbb{N}_0$, and we obtain from the definition \eqref{etalimit} of ${\tilde \eta}$ that \begin{equation*} \frac{n_j}{n} {\tilde \eta}(t_j n_j)+ \frac{t_j n_j-\lfloor t_j n_j \rfloor}{2n} =-\frac{t_j n_j-\lfloor t_j n_j \rfloor}{2n} \left\{ \vartheta_{j+1}\left(1-\left(t_j n_j-\lfloor t_j n_j \rfloor\right)\right)-1 \right\}\,. \end{equation*} We see from \eqref{Bsequence} with \eqref{kette1} and \eqref{kette2} that \begin{equation}\label{Bestimate_modified} \left | B_n(t)\right | \leq \sum \limits_{j=0}^{j'-1} \frac{\max(1,\vartheta_{j+1}-1)}{2n} \leq \frac{1}{2n}\sum \limits_{k=1}^{j'}\lambda_k\,. \end{equation} We finally conclude that estimates \eqref{Bestimate_modified} and \eqref{Snfinal} are equivalent. Hence Theorem \ref{Bx_thm}(b) may be used as well for an efficient calculation and estimation of $B_n(t)$ and $S(n,t)$. \begin{thm}\label{kettenabschaetz} Let integers $\alpha_1,\ldots,\alpha_m \in \mathbb{N}$ be given. We put $J_*=(0,1)\setminus \mathbb{Q}$.
Using the notation from \eqref{kette0}, \eqref{kette1}, \eqref{kette2} we obtain for the measure $| \mathcal{M}|$ of the set $\begin{displaystyle} \mathcal{M}=\left\{t \in J_*\,:\,\vartheta_j < \alpha_j \quad\text{for~all~} j=1,\ldots,m\right\} \end{displaystyle}$ the estimates \begin{equation*} \prod \limits_{j=1}^m \left(1-\frac{1}{\alpha_j} \right)^2 \leq | \mathcal{M}| \leq \prod \limits_{j=1}^m \left(1-\frac{1}{\alpha_j} \right)\,. \end{equation*} \end{thm} \begin{proof} The desired result is valid for $m=1$ with $\begin{displaystyle} \mathcal{M}=\left\{t \in J_*\,:\,1/t < \alpha_1\right\} \end{displaystyle}$ and $\begin{displaystyle} |\mathcal{M}|=1-1/\alpha_1\,. \end{displaystyle}$ Assume that the statement of the theorem is already true for a given $m \in \mathbb{N}$. We prescribe $\alpha_{m+1} \in \mathbb{N}$ and will use induction to prove the statement for $m+1$. For all $j \in \mathbb{N}$ and given numbers $\lambda_0 \in \mathbb{R}$ and $\lambda_1,\ldots,\lambda_{j-1}>0$ we put for $1\leq k<j$: \begin{equation}\label{kette3} \begin{split} a_0=1\,,~ a_1=\lambda_0\,,~ & a_{k+1}=a_{k-1}+\lambda_k a_k\,,\\ b_0=0\,,~ b_1=1\,,~ & b_{k+1}=b_{k-1}+\lambda_k b_k\,.\\ \end{split} \end{equation} We have \begin{equation}\label{kette4} \begin{split} & \langle \lambda_0,\lambda_1, \ldots, \lambda_{j-1},x \rangle - \langle \lambda_0,\lambda_1, \ldots, \lambda_{j-1},x' \rangle\\ &= \frac{(-1)^j (x-x')}{(b_jx+b_{j-1})(b_jx'+b_{j-1})}\quad \text{for ~}x,x'>0\,. \end{split} \end{equation} In particular, for $\lambda_0=0$ and {\it integer numbers} $\lambda_1,\ldots,\lambda_j \in \mathbb{N}$ we define the set $J(\lambda_1,\ldots,\lambda_j)$ consisting of all $t \in J_*$ between the two rational numbers $\langle 0,\lambda_1, \ldots ,\lambda_{j-1},\lambda_{j}\rangle$ and $\langle 0,\lambda_1, \ldots ,\lambda_{j-1},\lambda_{j}+1 \rangle$.
It follows from \eqref{kette3} and \eqref{kette4} that for all $j \in \mathbb{N}$ \begin{equation}\label{kette5} |J(\lambda_1,\ldots,\lambda_j)|= \frac{1}{(b_j(\lambda_j+1)+b_{j-1})(b_j \lambda_j +b_{j-1})}\,. \end{equation} The sets $J(k)=(1/(k+1),1/k)\setminus \mathbb{Q}$ with $k \in \mathbb{N}$ form a partition of $J_*=(0,1)\setminus \mathbb{Q}$. More generally, it follows from \eqref{kette1}, \eqref{kette2} that for fixed numbers $\lambda_1,\ldots,\lambda_j \in \mathbb{N}$ the pairwise disjoint sets $J(\lambda_1,\ldots,\lambda_j,k)$ with $k \in \mathbb{N}$ form a partition of the set $J(\lambda_1,\ldots,\lambda_j)$. We conclude by induction with respect to $j$ that the pairwise disjoint sets $J(\lambda_1,\ldots,\lambda_j)$ with $(\lambda_1,\ldots,\lambda_j) \in \mathbb{N}^j$ also form a partition of $J_*$. Now we put $j=m$ and distinguish the two cases $m$ odd and $m$ even. In both cases the union \begin{equation*} \bigcup \limits_{k=1}^{\alpha_{m+1}-1}J(\lambda_1,\ldots,\lambda_m,k) \end{equation*} is the set of all numbers $t \in J_*$ with $\lfloor \vartheta_j \rfloor = \lambda_j$ for $j=1,\ldots,m$ such that $\vartheta_{m+1} < \alpha_{m+1}$. We define the set \begin{equation*} \mathcal{M}'=\left\{t \in J_*\,:\,\vartheta_j < \alpha_j \quad\text{for~all~} j=1,\ldots,m+1\right\} \end{equation*} and conclude \begin{equation}\label{mstrich} |\mathcal{M}'| = \sum \limits_{\lambda_1=1}^{\alpha_{1}-1} \sum \limits_{\lambda_2=1}^{\alpha_{2}-1} \cdots \sum \limits_{\lambda_m=1}^{\alpha_{m}-1} \sum \limits_{k=1}^{\alpha_{m+1}-1} |J(\lambda_1,\ldots,\lambda_m,k)|\,.
\end{equation} It also follows from our induction hypothesis that \begin{equation}\label{hypo} \begin{split} &\prod \limits_{j=1}^m \left(1-\frac{1}{\alpha_j} \right)^2\\ \leq&\sum \limits_{\lambda_1=1}^{\alpha_{1}-1} \sum \limits_{\lambda_2=1}^{\alpha_{2}-1} \cdots \sum \limits_{\lambda_m=1}^{\alpha_{m}-1} |J(\lambda_1,\ldots,\lambda_m)| \leq \prod \limits_{j=1}^m \left(1-\frac{1}{\alpha_j} \right)\,.\\ \end{split} \end{equation} We evaluate the inner sum in \eqref{mstrich}, and obtain for odd values of $m$ the telescoping sum \begin{equation*} \begin{split} &\sum \limits_{k=1}^{\alpha_{m+1}-1} |J(\lambda_1,\ldots,\lambda_m,k)|\\ &= \sum \limits_{k=1}^{\alpha_{m+1}-1} \left( \langle 0,\lambda_1,\ldots,\lambda_m,k+1\rangle -\langle 0,\lambda_1,\ldots,\lambda_m,k\rangle \right)\\ &=\langle 0,\lambda_1,\ldots,\lambda_m,\alpha_{m+1}\rangle -\langle 0,\lambda_1,\ldots,\lambda_m,1\rangle\\ &=\langle 0,\lambda_1,\ldots,\lambda_{m-1},\lambda_m+1/\alpha_{m+1}\rangle - \langle 0,\lambda_1,\ldots,\lambda_{m-1},\lambda_m+1\rangle\,.\\ \end{split} \end{equation*} Apart from a minus sign on the right-hand side we get the same result for even values of $m$, and hence from \eqref{kette4} with $j=m$ in both cases \begin{equation}\label{tele} \begin{split} &\sum \limits_{k=1}^{\alpha_{m+1}-1} |J(\lambda_1,\ldots,\lambda_m,k)|\\ &= \frac{1-\frac{1}{\alpha_{m+1}}}{(b_m(\lambda_m+1)+b_{m-1}) (b_m(\lambda_m+\frac{1}{\alpha_{m+1}})+b_{m-1})}\,.\\ \end{split} \end{equation} Using $\lambda_m\geq 1$ we have \begin{equation*} \frac{\left(1-\frac{1}{\alpha_{m+1}}\right)^2}{b_m \lambda_m+b_{m-1}} \leq \frac{1-\frac{1}{\alpha_{m+1}}}{ b_m(\lambda_m+\frac{1}{\alpha_{m+1}})+b_{m-1}} \leq \frac{1-\frac{1}{\alpha_{m+1}}}{b_m \lambda_m+b_{m-1}}\,, \end{equation*} and obtain from \eqref{tele} and \eqref{kette5} with $j=m$ that \begin{equation}\label{innerschaetz} \begin{split} &|J(\lambda_1,\ldots,\lambda_m)| \left(1-\frac{1}{\alpha_{m+1}}\right)^2\\ &\leq \sum
\limits_{k=1}^{\alpha_{m+1}-1}|J(\lambda_1,\ldots,\lambda_m,k)| \leq |J(\lambda_1,\ldots,\lambda_m)| \left(1-\frac{1}{\alpha_{m+1}}\right)\,.\\ \end{split} \end{equation} The theorem follows from \eqref{mstrich}, \eqref{hypo} and \eqref{innerschaetz}. \end{proof} \begin{rem} Since $\alpha_j \in \mathbb{N}$ for $j \leq m$, the conditions $\vartheta_j < \alpha_j$ in the definition of the set $\mathcal{M}$ may likewise be replaced by the equivalent conditions $\lambda_j \leq \alpha_j-1$, where $\lambda_j$ are the coefficients in the continued fraction expansion of $t$, see \eqref{kette1}, \eqref{kette2}\,. \end{rem} \section{Dirichlet series related to Farey sequences} \label{dirichlet_section} We define the sawtooth function $\beta_0:\mathbb{R} \to \mathbb{R}$ by \begin{equation*} \beta_{0}(x)=\begin{cases} x-\lfloor x \rfloor-\frac12 &\ \text{for}\ x \in \mathbb{R} \setminus{\mathbb{Z}}\, ,\\ 0 &\ \text{for}\ x \in \mathbb{Z}\,.\\ \end{cases} \end{equation*} With $x>0$ the 1-periodic function $B_{x,0}: \mathbb{R} \to \mathbb{R}$ is the arithmetic mean of $B_{x}^-$, $B_{x}^+=B_x$, see \eqref{Bxdef}, hence \begin{equation}\label{Bx0def} B_{x,0}\left(t \right)= \frac{1}{x}\sum \limits_{k \leq x} \beta_{0}(kt) =\frac12\left(B_{x}^-(t) + B_{x}^+(t) \right)\,. \end{equation} \begin{lem}\label{B0_rational} For numbers $a \in \mathbb{Z}$ and $b \in \mathbb{N}$, not necessarily relatively prime, we have $$ |B_{x,0}(a/b)| \leq \frac{b}{x} $$ for all $x>0$. \end{lem} \begin{proof} Without loss of generality we may assume that $a \in \mathbb{Z}$ and $b \in \mathbb{N}$ are relatively prime and that $b\geq2$, since $B_{x,0}(0)=0$. For $m \in \mathbb{N}$ we define $$ t_{a/b}(m)=1+2b\beta\left(\frac{am}{b}\right)= 2am-2b\left\lfloor\frac{am}{b}\right\rfloor-b+1\,,$$ and obtain from \cite[(7) in the proof of Lemma 2.2]{Ku2} for all $x>0$ that \begin{equation}\label{tabschaetz} \left|\sum \limits_{m \leq x} t_{a/b}(m)\right| \leq b(b+1)\,.
\end{equation} Using \eqref{Bxjump} we obtain \begin{equation*} \begin{split} B_{x,0}(a/b)&= B_x(a/b)+\frac{1}{2x}\left\lfloor\frac{x}{b}\right\rfloor\\ &=\frac{1}{2bx}\sum \limits_{m \leq x}\left(t_{a/b}(m)-1 \right)+\frac{1}{2x}\left\lfloor\frac{x}{b}\right\rfloor\\ &=\frac{1}{2bx}\sum \limits_{m \leq x} t_{a/b}(m)+ \frac{1}{2x}\left(\left\lfloor\frac{\lfloor x \rfloor}{b}\right\rfloor-\frac{\lfloor x \rfloor}{b}\right)\,,\\ \end{split} \end{equation*} hence we see from \eqref{tabschaetz} with $b \geq 2$ that \begin{equation*} \left| B_{x,0}(a/b) \right| \leq \frac{b+1}{2x}+\frac{1}{2x} \leq \frac{b}{x}\,. \end{equation*} \end{proof} Using Theorem \ref{Bx_thm}(a), Lemma \ref{B0_rational}, \eqref{Bx0def}, \eqref{Bxjump} and, for $x \in \mathbb{R}$, the symmetry relationship \begin{equation*} B_n\left(\frac{a}{b}-\frac{x}{bn}\right)=-B_{n}^-\left(\frac{b-a}{b}+\frac{x}{bn}\right)\,, \end{equation*} we obtain the following result, which has the counterparts \cite[Theorem 3.2]{Ku2} and \cite[Theorem 2.2]{Ku4} in the theory of Farey fractions: \begin{thm}\label{rescaled_limit} Assume that $a/b \in \mathcal{F}^{ext}_n$ and put $$ \tilde{\eta}_{a,b}(n,x)=b\,B_n\left(\frac{a}{b}+\frac{x}{bn}\right) \,, \qquad x \in \mathbb{R}\,. $$ Then for $n \to \infty$ the sequence of functions $\tilde{\eta}_{a,b}(n,\cdot)$ converges uniformly on each interval $[-x_*,x_*]$, $x_*>0$ fixed, to the limit function $\tilde{\eta}$ in \eqref{etalimit}. \end{thm} For the following two results we apply Theorem \ref{ostrowski} and recall \eqref{Snfinal}, \eqref{jstarabschaetz}. Due to Theorem \ref{rescaled_limit} the functions $B_n$ cannot converge uniformly to zero on any given interval.
Instead we have the following \begin{thm}\label{B0_mass} Let $\Theta : [1,\infty) \to [1,\infty)$ be monotonically increasing with $\begin{displaystyle} \lim \limits_{n \to \infty} \Theta(n)=\infty\,.\end{displaystyle}$ We fix $n \in \mathbb{N}$, put $m=\lfloor 4 \log n\rfloor$, use \eqref{kette1}, recall $J_*=(0,1) \setminus \mathbb{Q}$ and define $$\mathcal{M}_n= \left\{t \in J_*\,:\,\vartheta_j(t) <1+\lfloor\Theta(n)\log n\rfloor \quad \mbox{for~all ~ } j=1,\ldots,m \right\}\,. $$ Then $\begin{displaystyle} \lim \limits_{n \to \infty}|\mathcal{M}_n|=1\end{displaystyle}$ and \begin{equation} \label{betteresti} \left| B_{n,0}(t) \right|=\left| B_{n}(t) \right| \leq 2 \, \frac{\log^2 n}{n}\,\Theta(n) \end{equation} for all $n \geq 3$ and all $t \in \mathcal{M}_n$. \end{thm} \begin{proof} We apply Ostrowski's theorem to any number $t \in \mathcal{M}_n$ with continued fraction expansion $t = \langle 0,\lambda_1,\lambda_2,\ldots \rangle$ and obtain $j_* \leq m$ from \eqref{jstarabschaetz}, since $j_*$ is an integer number. From $j_* \leq m$ and $t \in \mathcal{M}_n$ we conclude that $\lambda_k \leq \Theta(n) \log n$ for $k=1,\ldots,j_*$, and the desired inequality follows with \eqref{Snfinal}\,. The first statement follows from Theorem \ref{kettenabschaetz} via \begin{equation*} \begin{split} |\mathcal{M}_n| &\geq \left(1-\frac{1}{ 1+\lfloor\Theta(n)\log n\rfloor} \right)^{2m}\\ &\geq \left(1-\frac{1}{\Theta(n)\log n} \right)^{2m} \geq \left(1-\frac{1}{\Theta(n)\log n} \right)^{8\log n}\,,\\ \end{split} \end{equation*} since the right-hand side tends to $1$ for $n \to \infty$. \end{proof} \begin{rem} The sets $\mathcal{M}_n$ in the previous theorem are chosen in such a way that the large values of $B_n(t)$ from the peaks of the rescaled limit function around the rational numbers with small denominators predicted by Theorem \ref{rescaled_limit} can only occur in the small complements $J_* \setminus \mathcal{M}_n$ of these sets.
However, the quality of the estimates of the values $B_n(t)$ on the sets $\mathcal{M}_n$ depends on the different choices of the growing function $\Theta$. For example, $\Theta(n)=1+\log\left(1+\log n\right)$ gives a much smaller bound than $\Theta(n)=16 \sqrt{n}/(4+\log n)^2$, whereas the latter choice leads to a much smaller value of $|J_* \setminus \mathcal{M}_n|=1-|\mathcal{M}_n|$\,. \end{rem} \begin{thm}\label{B0_mass_almost_everywhere} Let $\Theta : [1,\infty) \to [1,\infty)$ be monotonically increasing with $\begin{displaystyle} \lim \limits_{n \to \infty} \Theta(n)=\infty\,.\end{displaystyle}$ We fix $n \in \mathbb{N}$, $\varepsilon >0$, use \eqref{kette1}, recall $J_*=(0,1) \setminus \mathbb{Q}$ and put $$\tilde{\mathcal{M}_n}= \left\{t \in J_*\,:\,\vartheta_j(t) < 1+\lfloor\Theta(n)j^{1+\varepsilon}\rfloor \quad\text{for~all~} j \in \mathbb{N} \right\}\,. $$ Then $|\tilde{\mathcal{M}}|=1$ for $\tilde{\mathcal{M}}= \bigcup \limits_{n=1}^{\infty}\tilde{\mathcal{M}_n}$, and for all $t \in \tilde{\mathcal{M}}$ there exists an index $n_0=n_0(t,\varepsilon)$ with $$\left| B_{n,0}(t) \right|=\left| B_{n}(t) \right| \leq \frac{(4\log n)^{2+\varepsilon}}{2n}\,\Theta(n) \quad \mbox{for ~ all ~} n \geq n_0\,. $$ The complement $J_* \setminus \tilde{\mathcal{M}}$ is an uncountable null set which is dense in the unit interval $(0,1)$. \end{thm} \begin{proof} The function $\Theta$ is monotonically increasing, hence $ \tilde{\mathcal{M}_1} \subseteq \tilde{\mathcal{M}_2}\subseteq \tilde{\mathcal{M}_3}\ldots\,, $ and we have \begin{equation}\label{kettenmass1} |\tilde{\mathcal{M}}|= \lim \limits_{n \to \infty} |\tilde{\mathcal{M}_n}|\,. \end{equation} For all $n,k \in \mathbb{N}$ we define $$\tilde{\mathcal{M}_{n,k}}= \left\{t \in J_*\,:\,\vartheta_j(t) < 1+\lfloor\Theta(n)j^{1+\varepsilon}\rfloor \quad\text{for~all~} j =1,\ldots,k \right\}\,. 
$$ Then $\tilde{\mathcal{M}_n}= \bigcap \limits_{k=1}^{\infty}\tilde{\mathcal{M}_{n,k}}$ and \begin{equation}\label{kettenmass2} |\tilde{\mathcal{M}_n}|= \lim \limits_{k \to \infty} |\tilde{\mathcal{M}_{n,k}}| \end{equation} from $ \tilde{\mathcal{M}_{n,1}} \supseteq \tilde{\mathcal{M}_{n,2}}\supseteq \tilde{\mathcal{M}_{n,3}}\ldots\,. $ It follows from Theorem \ref{kettenabschaetz} for all $n,k \in \mathbb{N}$ that \begin{equation*} |\tilde{\mathcal{M}_{n,k}}| \geq \prod \limits_{j=1}^{k}\left(1-\frac{1}{\Theta(n)j^{1+\varepsilon}} \right)^2 \geq \prod \limits_{j=1}^{\infty}\left(1-\frac{1}{\Theta(n)j^{1+\varepsilon}} \right)^2\,. \end{equation*} The product on the right-hand side is independent of $k$ and converges to $1$ for $n \to \infty$, hence $|\tilde{\mathcal{M}}|=1$ from \eqref{kettenmass1}, \eqref{kettenmass2}\,. Each rational number in the interval $(0,1)$ is arbitrarily close to a member of the complement $J_* \setminus \tilde{\mathcal{M}}$, and the complement contains all $t=\langle 0,\lambda_1,\lambda_2,\lambda_3,\ldots \rangle$ for which $(\lambda_j)_{j \in \mathbb{N}}$ increases faster than any polynomial. We conclude that $J_* \setminus \tilde{\mathcal{M}}$ is an uncountable null set which is dense in the unit interval $(0,1)$. Now we choose $t \in \tilde{\mathcal{M}}$ and obtain $n_0 \in \mathbb{N}$ with $t \in \tilde{\mathcal{M}}_{n_0}$. Then $t \in \tilde{\mathcal{M}}_{n}$ for all $n \geq n_0$, and we may assume that $n_0 \geq 3$. Note that $n_0$ may depend on $t$ as well as on $\varepsilon$. We have $t = \langle 0,\lambda_1,\lambda_2,\lambda_3,\ldots \rangle$ and $$ \lambda_j \leq \Theta(n)j^{1+\varepsilon} $$ for all $n \geq n_0$ and all $j \in \mathbb{N}$. We finally obtain from \eqref{Snfinal}, \eqref{jstarabschaetz} that $$ n|B_n(t)| = |S(n,t)| \leq \frac12 \, \sum \limits_{k=1}^{j_*} \lambda_k \leq \frac{j_*}{2}\Theta(n)j_*^{1+\varepsilon} \leq \frac12 \Theta(n) (4\log n)^{2+\varepsilon}\,, \quad n \geq n_0\,. 
$$ \end{proof} \begin{rem} We replace $\varepsilon$ by $\varepsilon/2$, choose $\Theta(n)=1+\log\left(1+\log n\right)$ in the previous theorem and obtain the following result of Lang, see \cite{Lang1966} and \cite[III,\S 1]{Lang1995} for more details: For $\varepsilon > 0$ and almost all $t \in \mathbb{R}$ we have \begin{equation*} |S(n,t)| \leq\left(\log n \right)^{2+\varepsilon} \quad \mbox{~for ~} n \geq n_0(t,\varepsilon) \end{equation*} with a constant $n_0(t,\varepsilon) \in \mathbb{N}$. Here the sum $S(n,t)$ is given by \eqref{Sxdef}\,. This does not contradict Theorem \ref{Bx_L2}, because the pointwise estimates of $S(n,t)$ and $B_n(t)$ in Theorem \ref{B0_mass_almost_everywhere} are only valid for sufficiently large values of $n \geq n_0(t,\varepsilon)$, depending on the choice of $t$ and $\varepsilon$. We conclude from Theorem \ref{B0_mass} that the major contribution to $\|B_n\|_2$ comes from the small complement of $\mathcal{M}_n$. Indeed, the crucial point in Theorem \ref{B0_mass} is that it holds for \textit{all} $n \geq 3$, but not so much the fact that the upper bound in estimate \eqref{betteresti} is slightly better than that in Theorem \ref{B0_mass_almost_everywhere}. \end{rem} For $k \in \mathbb{N}$ and $x>0$ the 1-periodic functions $q_{k,0},\Phi_{x,0} : \mathbb{R} \to \mathbb{R}$ corresponding to \eqref{familien} are defined as follows: \begin{equation*} q_{k,0}(t) =-\sum \limits_{d | k} \mu(d)\, \beta_{0} \left(\frac{kt}{d}\right) ~\,, \end{equation*} \begin{equation*} \Phi_{x,0}\left(t \right)= \frac{1}{x}\,\sum \limits_{k \leq x} q_{k,0}(t) =- \frac{1}{x}\sum \limits_{j \leq x} \sum \limits_{k \leq x/j}\mu(k)\,\beta_{0}\left(jt \right)\,.
\end{equation*} In the half-plane $H = \{ s \in \mathbb{C}\,:\,\Re(s)>1\}$ the parameter-dependent Dirichlet series $F_{\beta}, F_{q} : \mathbb{R} \times H \to \mathbb{C}$ are given by \begin{equation*} F_{\beta}(t,s) =\sum \limits_{k=1}^{\infty} \frac{\beta_{0}(kt)}{k^s} \,,\quad F_{q}(t,s) =\sum \limits_{k=1}^{\infty} \frac{q_{k,0}(t)}{k^s} \,. \end{equation*} Now Theorem \ref{B0_mass_almost_everywhere} and \eqref{Hecke_series} immediately give \begin{thm}\label{F_thm} For $t \in \mathbb{R}$ and $\Re(s)>1$ we have with absolutely convergent series and integrals \begin{itemize} \item[(a)] \begin{equation*} \begin{split} \frac{1}{s}\,F_{\beta}(t,s)=\int \limits_{1}^{\infty} B_{x,0}(t)\frac{dx}{x^s}\,,\quad \frac{1}{s}\,F_{q}(t,s)=\int \limits_{1}^{\infty} \Phi_{x,0}(t)\frac{dx}{x^s}\,.\\ \end{split} \end{equation*} \item[(b)] \begin{equation*} F_{q}(t,s)=-\frac{1}{\zeta(s)}\,F_{\beta}(t,s)\,. \end{equation*} \end{itemize} For almost all $t$ the function $F_{\beta}(t,\cdot)$ has an analytic continuation to the half-plane $\Re(s)>0$\,. \end{thm} \section{Appendix: Plots of the limit functions $h$ and $\tilde{\eta}$}\label{appendix} \begin{center} \begin{figure}[H] \includegraphics[width=0.7\textwidth]{h25.eps} \caption{Plot of $h(x)$ for $-25 \leq x \leq 25$\,. \label{h25eps}} \end{figure} \end{center} \begin{center} \begin{figure}[H] \includegraphics[width=0.7\textwidth]{h50.eps} \caption{Plot of $h(x)$ for $25 \leq x \leq 50$\,. \label{h50eps}} \end{figure} \end{center} \begin{center} \begin{figure}[H] \includegraphics[width=0.7\textwidth]{h500.eps} \caption{Plot of $h(x)$ for $50 \leq x \leq 500$\,. \label{h500eps}} \end{figure} \end{center} \begin{center} \begin{figure}[H] \includegraphics[width=0.7\textwidth]{eta8.eps} \caption{Plot of $\tilde{\eta}(x)$ for $-8 \leq x \leq 8$\,. \label{eta8eps}} \end{figure} \end{center}
\section{Introduction} \label{sec:intro} The design of efficient flow control strategies for wall-bounded turbulent flows aimed at reducing drag or energy expenditure remains one of the most relevant and challenging goals in fluid mechanics \citep{spalart2011drag}. For instance, approximately half the propulsive power generated by airliner engines is employed to overcome the frictional drag caused by turbulent boundary layers; however, the scope of this challenge is much broader than aeronautical engineering. Transport of fluids by pipeline, ubiquitous in industry, nearly always occurs in a turbulent flow regime for which the energy requirements can be significant. This is even more critical for long-distance pipelines designed to transport water, petroleum, natural gas or mineral suspensions over thousands of kilometres. Another example is the interstage pumping in processing plants, which can consume a significant fraction of the energy used in the overall process. Consequently, efficient flow control strategies for turbulent boundary layers could have a dramatic impact on modern economies \citep*{kim2011physics,mckeon2013experimental}. In the present work, we focus on turbulent pipe flows. \citet{kim2011physics} indicated that designing efficient flow control strategies requires a deep understanding of the physical mechanisms that act in wall-bounded turbulent flows, especially the self-sustaining near-wall cycle \citep*{kim1987turbulence, jimenez1999autonomous}. Its driving mechanism is often understood in terms of the interaction of fluctuations with the mean shear and operator non-normality \citep{Schmid:2007, Alamo.Jimenez:2006}. As a consequence, recent turbulence reduction control strategies are often analysed \citep{Kim.Bewley:2007, kim2011physics} or designed \citep{Sharma.Morrison.McKeon.ea:2011} with reduction of non-normality in mind.
However, recent work has revealed the importance of the critical layer amplification mechanism in asymptotically high Reynolds number flow solutions \citep*{Blackburn.Hall.Sherwin:2013} and in wall-bounded turbulence \citep*{McKeonSharma2010}. In this context, recent experimental findings in high Reynolds number wall-bounded turbulent flows highlight the relevance of other coherent structures that scale with outer variables, with streamwise length scales of several integral lengths. These flow structures, known as very large-scale motions (VLSM), were reported by \citet{kim1999very}, \citet*{guala2006large} and \citet{monty2007large}, who found that VLSM consist of long meandering narrow streaks of high and low streamwise velocity that contain an increasingly significant fraction of the turbulent kinetic energy and shear stress production with increasing Reynolds number. Thus the contribution of these flow structures to the overall wall drag will be of the utmost importance at very high Reynolds numbers. \citet{hutchins2007evidence} observed that these VLSM can reach locations near the wall, thus flow control strategies applied to the wall may have a strong influence on these motions. Therefore, control of these VLSM structures may help achieve a drag increase or reduction in high-Reynolds-number pipe flow. \citet{McKeonSharma2010} and \citet{sharma2013coherent} showed that the sustenance of these structures may in part be attributed to the critical layer amplification mechanism. Consequently, it is important to understand the influence of flow actuation on the critical layer. VLSM become energetically non-negligible, in the sense of producing a second peak in the streamwise turbulence intensity, at friction Reynolds number $\Ret>10^4$ \citep*{smits2011high}.
Even though computational experiments are almost unaffordable at these Reynolds numbers, the behaviour of these structures can be observed in pipe flow experiments at a moderate bulk-flow Reynolds number $\Rey=12\,500$, as shown by the proper orthogonal decomposition of PIV data carried out by \citet{hellstrom2011visualizing} and \citet{hellstrom2014energetic}. Transpiration control, which is the application of suction and blowing at the wall, can be used effectively to manipulate turbulent flow. One of the first applications of transpiration can be found in the seminal work by \citet*{choi1994active} in which they developed an active, closed-loop flow control strategy known as opposition control, consisting of a spatially distributed unsteady transpiration at a channel wall. Their transpiration was a function of the wall-normal velocity at a location close to the wall and they were the first to demonstrate that a significant drag reduction can be achieved by a zero net mass flux transpiration. \citet{sumitani1995direct} investigated the effect of (open-loop) uniform steady blowing and suction in a channel flow. They applied blowing at one wall and suction at the other, and concluded that injection of flow decreases the friction coefficient and activates near-wall turbulence, hence increasing the Reynolds stresses, and that suction has the opposite effect. \citet{jimenez2001turbulent}, in their numerical investigation of channel flows with porous surfaces, considered a flow control strategy based on active porosity, subsequently converted to an equivalent static transpiration, and showed that the near-wall cycle of vortices/streaks can be influenced by the effect of transpiration. Furthermore, \citet{min2006sustained} showed that sustained sub-laminar drag can be obtained in a turbulent channel flow by applying a travelling sinusoidal (varicose) transpiration at certain frequencies.
\cite{luchini2008acoustic} pointed out that steady streaming induced by the transpiration plays a major role in the drag reduction mechanism. \cite{hoepffner2009pumping} performed numerical simulations of a channel with travelling waves of wall deformation in the axial direction and travelling waves of blowing and suction in the streamwise direction. They discovered that the streaming induced by wall deformation produces an increased flow rate, while that produced by travelling waves of blowing and suction generates a decrease in flow rate (the streaming flow is, however, not the only contributor to the overall flow rate). \citet*{quadrio2007effect} performed a parametric investigation of low-amplitude steady wall transpiration in turbulent channel flows, finding both drag-increasing and drag-reducing configurations. They observed that while the frictional drag was dramatically increased at small wavenumbers, a reduction in drag was possible above a threshold wavenumber, related to the length scales of near-wall structures. The drag modifications were explained by two physical mechanisms: interaction with turbulence, consisting of a reduction in turbulence fluctuations by extracting turbulent fluid and blowing laminar fluid, and generation of a steady streaming opposite to the mean flow. \cite*{woodcock2012induced} carried out a perturbation analysis of travelling wall transpiration in a \twod\ channel, finding that the flux induced by the streaming opposes the bulk flow. They conjectured that for three-dimensional flows and beyond a certain transpiration amplitude, the transpiration effects will only depend on the wavespeed. In the present work we examine the effect of high- and low-amplitude transpiration in turbulent pipe flow via direct numerical simulation (DNS) at a moderate bulk flow Reynolds number $\Rey=10\,000$, corresponding to friction Reynolds number $\Ret=314$.
We focus on the effect of steady wall-normal blowing and suction that varies sinusoidally in the streamwise direction, with both high and low transpiration amplitudes. The dataset consists of a wall transpiration parameter sweep in order to assess the effect of the transpiration parameters on the turbulence statistics and identify drag-increasing and drag-reducing pipe configurations, as well as to permit quantitative comparisons with previous trends observed in channel flows by \citet{quadrio2007effect}. Although the present values of Reynolds number are not sufficient for a clear separation between all the scales, we will draw special attention to the influence of the flow control on VLSM-like structures. As shown by \citet{hellstrom2011visualizing}, motions corresponding to these large flow structures can be observed even at the considered bulk flow Reynolds number. A resolvent analysis \citep{McKeonSharma2010} will be employed to obtain the flow dynamics which are most amplified, with and without steady transpiration. This model-based framework consists of a gain analysis of the \NavSto\ equations in the wavenumber/frequency domain, which yields a linear relationship between the non-linear terms sustaining the turbulence and the fluctuating velocity fields excited by them. This linear operator depends on the mean profile, which is in turn sustained by the Reynolds stresses generated by the fluctuations. This framework has already been successfully employed in flow control by \cite*{luhar2014opposition}, who modelled opposition control targeting near-wall cycle structures. \citet{sharma2013coherent} employed the same framework to recreate the behaviour of complex coherent structures, VLSMs among them, from a low-dimensional subset of resolvent modes. In the present context, the analysis permits the identification of the flow structures that are amplified/damped by the effect of wall transpiration and of how their spatial functions are distorted by the transpiration.
We expand on this in \S\ref{sec:resolanalysis} below. The adoption of this critical layer framework in turn leads to an analysis of the flow in the Fourier domain. Dynamic mode decomposition (DMD) \citep{Schmid2010,RowleyEtAlJFM2009}, which works to pick out the dominant frequencies via snapshots from the DNS data set, is its natural counterpart. In \S\ref{sec:6}, a DMD analysis on the simulation data will also be carried out to identify the most energetic flow structures at a given frequency and provide additional insight into the flow dynamics. \section{Direct numerical simulations} \label{sec:DNS} A spectral element--Fourier direct numerical simulation (DNS) solver \citep{hugh2004} is employed to solve the incompressible \NavSto\ equations in non-dimensional form \begin{eqnarray} \bm{\nabla \cdot \hat{u}} & = & 0 \\ \partial_t\bm{\hat{u}}+\bm{\hat{u}\cdot\nabla\hat{u}} & = & -\bm{\nabla}p + \Rey^{-1}\nabla^2\bm{\hat{u}} + \bm{f} \label{eqn:NSE} \end{eqnarray} where $\Rey=U_b D/\nu$ is the Reynolds number based on the bulk mean velocity $U_b$, the pipe diameter $D$ and a constant kinematic viscosity $\nu$, $\bm{\hat{u}}=(u,v,w)$ is the velocity vector expressed in cylindrical coordinates $(x,r,\theta)$, $p$ is the modified or kinematic pressure, and $\bm{f}=(f_x,0,0)$ is a forcing vector. A pipe with a periodic domain of length $L=4\upi{R}$, where $R$ is the pipe's outer radius, has been considered. No-slip boundary conditions for the streamwise and azimuthal velocity are applied at the pipe wall; transpiration in the wall-normal direction is applied to the flow by imposing the boundary condition \begin{equation} v(x,R,\theta)=A\sin(k_c x) \, , \label{eq:bc} \end{equation} which represents steady sinusoidal wall-normal flow transpiration along the streamwise direction with an amplitude $A$ and a streamwise wavenumber $k_c$.
Additionally, $k_c$ must be an integer multiple of the fundamental wavenumber in the axial direction $2\upi/L$ to enforce a zero net mass flux over the pipe wall. A sketch of the configuration is shown in figure~\ref{fig:geo}. The constant streamwise body force per unit mass $f_x$ is added in (\ref{eqn:NSE}) to ensure that the velocity and pressure are streamwise periodic. (A simple physical equivalent is a statistically steady flow of liquid driven by gravity in a vertical pipe which is open to the atmosphere at each end.) The body force $f_x$ is calculated on the basis of a time-average force balance in the streamwise direction between the body force exerted on the volume of fluid in the pipe and the traction exerted by the wall shear stress, thus \begin{equation} \rho f_x L \upi R^2 = \tau_w 2 \upi R L \, , \label{eq:balance} \end{equation} with $\tau_w$ being the mean wall shear stress. Equivalently, it can be shown that \begin{equation} \frac{f_x R}{2U_b^2}=\left(\frac{u_\tau}{U_b}\right)^2 =\left(\frac{\Ret}{\Rey}\right)^2, \label{eq:balance2} \end{equation} where $u_\tau=(\tau_w/\rho)^{1/2}$ is the friction velocity, and $\Ret=u_\tau R/\nu$ is defined as the friction Reynolds number. The low-$\Rey$ Blasius correlation \citep{blasius1913law} for turbulent flow in a smooth pipe \begin{equation} \Ret=99.436 \times 10^{-3}\Rey^{7/8} \, , \label{eq:blasius} \end{equation} is employed to estimate the body force $f_x$ from (\ref{eq:balance2}). In the present work, $\Ret=314$ was set on the basis that for the zero-transpiration case, $\Rey=10\,000$; consequently while $\Ret$ and $f_x$ are constants for the remainder of this examination, $\Rey$ takes on different values for different transpiration parameters. This is a direct indication of the drag reducing/increasing effect resulting from transpiration. 
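As a quick numerical cross-check of \eqref{eq:blasius} and \eqref{eq:balance2}, the following sketch (using only the values quoted above, $\Rey=10\,000$ for the zero-transpiration case) confirms that the Blasius correlation returns $\Ret \approx 314$ and fixes the non-dimensional body force:

```python
# Sketch: friction Reynolds number from the Blasius correlation (eq:blasius)
# and the resulting non-dimensional body force from the balance (eq:balance2),
# for the zero-transpiration value Re = 10 000 quoted in the text.
Re = 10_000.0
Re_tau = 99.436e-3 * Re ** (7.0 / 8.0)     # Blasius: Re_tau ~ 314
body_force = (Re_tau / Re) ** 2            # f_x R / (2 U_b^2) = (Re_tau / Re)^2

assert abs(Re_tau - 314.0) < 0.5           # matches the value set in the text
```

Since $\Ret$ and $f_x$ are then held constant, any subsequent change in $\Rey$ under transpiration follows directly from the measured change in bulk velocity.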
In order to alleviate this difference in $\Rey$, we will employ an alternative outer scaling with the Reynolds number based on the smooth pipe bulk mean velocity $U_b^s$, \begin{equation} \Rey_s = \frac{ U^s_b D}{\nu} \end{equation} independently of whether the transpiration is applied or not. The fact that $\Rey_s$ is constant will be exploited later in comparing controlled and uncontrolled pipe flows via the streamwise momentum equation. \begin{figure} \centerline{\includegraphics*[width=0.8\linewidth]{figure-01.png}} \caption{Physical domain and transpiration boundary condition} \label{fig:geo} \end{figure} The spatial discretization employs a \twod\ spectral element mesh in a meridional cross-section and Fourier expansion in the azimuthal direction, thus the flow solution is written as \begin{equation} \bm{\hat{u}}(x,r,\theta,t)= \sum_{\pm n} \bm{\hat{u}}_n (x,r,t)\ce^{\ci n\theta} \, . \label{eq:FourierDecomp} \end{equation} Note that the boundary conditions in (\ref{eq:bc}) preserve homogeneity in the azimuthal direction, so this Fourier decomposition still holds for a pipe with wall-normal transpiration. The time is advanced employing a second-order velocity-correction method developed by \cite*{karniadakis1991high}. The numerical method, including details of its spectral convergence in cylindrical coordinates, is fully described in \citet{hugh2004}. The solver has been previously employed for DNS of turbulent pipe flow by \citet{chin2010influence}, \citet{saha2011influence}, \citet{saha2014scaling}, \citet{saha2015comparison}, \cite{SKOB-JFM-2015}, and validated against the $\Ret=314$ smooth-wall experimental data of \citet{den1997reynolds} in \citet{boc07}. We use a mesh similar to that employed for the straight-pipe case of \citet{SKOB-JFM-2015}, also at $\Ret=314$. 
The grid consists of 240 elements in the meridional semi-plane with 11th-order nodal shape functions and 320 Fourier planes around the azimuthal direction, corresponding to a total of approximately $1.1\times10^7$ computational nodes. For transpiration cases in which the flow rate is significantly increased, a finer mesh consisting of $1.6\times10^7$ degrees of freedom has been additionally employed. Simulations are restarted from a snapshot of the uncontrolled pipe flow; transient effects are discarded by inspecting the temporal evolution of the energy of the azimuthal Fourier modes derived from (\ref{eq:FourierDecomp}), and statistics are then collected until convergence. Typically, 50--100 wash-out times ($L/U_b$), equivalent to approximately 5000--10\,000 viscous time units, are required for convergence of the statistics. In what follows, we will use the \NavSto\ equations (\ref{eqn:NSE}) either non-dimensionalized with the smooth-pipe bulk velocity $U^s_b$, i.e.\ with Reynolds number $\Rey_s$ independent of the transpiration, or non-dimensionalized with wall scaling. This viscous scaling is denoted with a $+$ superscript. \section{Flow control results} \label{sec:3} \begin{figure} \centerline{\includegraphics*[width=1\linewidth]{figure-02.png}}\caption{Transpiration effect on the flow rate by changing (\textit{a}) wavenumber $k_c$ at constant amplitude $A^+=0.7$ (\textit{b}) amplitude $A^+$ at constant wavenumber $k_c=2$.} \label{fig:Q} \end{figure} The independent effects of the two transpiration parameters, $A$ and $k_c$, are investigated first. The parameter ranges are similar to those employed by \citet{quadrio2007effect} in a channel. Simulations sweeping one parameter while holding the other fixed have been carried out at $\Ret=314$.
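The transient check based on azimuthal modal energies described above can be sketched as follows (a minimal Python illustration on a synthetic signal; the function name and sampling are our assumptions, not the solver's):

```python
import numpy as np

def azimuthal_mode_energies(u_theta):
    """Energy per azimuthal Fourier mode n of one velocity sample u(theta)."""
    u_hat = np.fft.rfft(u_theta) / u_theta.size
    return np.abs(u_hat)**2

# Synthetic sample dominated by the n = 3 azimuthal mode
theta = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
energies = azimuthal_mode_energies(np.cos(3 * theta) + 0.01 * np.cos(7 * theta))
```

Monitoring such energies in time flags when the restart transient has died out and statistics collection can begin.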
Following the classical Reynolds decomposition, the total velocity has been decomposed as the sum of the mean flow $\bm{u}_0$ and a fluctuating velocity $\bm{u}$, which reads \begin{equation} {\bm {\hat{u}}}(x,r,\theta,t)= {\bm u}_0(x,r) + {\bm u}(x,r,\theta,t) \, , \label{def1} \end{equation} with the mean flow obtained by averaging the total flow in time and the azimuthal direction as \begin{equation} \bm{u}_0(x,r) = \lim_{T \to \infty} \frac{1}{T} \int_0^T \frac{1}{2\upi} \int_0^{2\upi} {\bm{\hat{u}}}(x,r,\theta,t) \cd t \cd \theta \, . \label{def2} \end{equation} For simplicity of notation, in what follows we will use $\langle~\rangle$ to denote an average in time and the azimuthal direction. Hence, $\bm{u}_0(x,r) = \langle{\bm{\hat{u}}} \rangle$. Note that the streamwise spatial dependence of the mean flow permits a non-zero mean in the wall-normal direction, hence $\bm{u}_0(x,r)=(u_0,v_0,0)$. Turbulence statistics additionally averaged in the streamwise direction are denoted with a bar \begin{equation} {{\bar{\bm u}_0}}(r) = \frac{1}{L}\int_0^L {\bm u}_0(x,r) \cd x \, . \label{def3} \end{equation} In terms of flow control effectiveness, here we define drag-reducing or drag-increasing configurations as those that increase or reduce the streamwise flow rate with respect to the smooth pipe. Mathematically, \begin{equation} \Delta Q=\frac{\int_0^R {\Delta{\bar{u}_0}} r \cd r}{\int_0^R {\bar{u}^s_0} r \cd r} \left\{ \begin{array}{ll} < 0 & \mbox{drag-increasing}\, ,\\[2pt] >0 & \mbox{drag-reducing}\, , \end{array} \right. \label{eq:Q} \end{equation} where $\Delta\bar{u}_0= \bar{u}_0^c - \bar{u}_0^s$, with superscripts $c$ and $s$ denoting the controlled and smooth pipe, respectively. Figure \ref{fig:Q}(\textit{a}) shows the percentage variation of the flow rate induced by transpiration with different wavenumber $k_c$ at constant amplitude $A^+=0.7$, equivalent to $0.22\%$ of the bulk velocity.
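Definition (\ref{eq:Q}) reduces to two radially weighted quadratures; a minimal Python sketch (the profile arrays and trapezoidal quadrature are our assumptions, not the solver's):

```python
import numpy as np

def radial_integral(f, r):
    """Trapezoidal quadrature of f(r) r dr (axisymmetric weight)."""
    g = f * r
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(r)))

def delta_Q(r, u_bar_controlled, u_bar_smooth):
    """Flow-rate change of (eq:Q) from axially averaged mean profiles."""
    num = radial_integral(u_bar_controlled - u_bar_smooth, r)
    return num / radial_integral(u_bar_smooth, r)

# A uniform 10% speed-up of a model profile gives delta_Q = 0.1 (drag-reducing)
r = np.linspace(0.0, 1.0, 201)
u_s = 1.0 - r**2
dq = delta_Q(r, 1.1 * u_s, u_s)
```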
A large drag increase is observed at small wavenumbers and it asymptotically diminishes until a small increase in flow rate is achieved for $k_c \simeq 9$. This small drag reduction slowly lessens, yet is still observed up to $k_c=40$. Figure \ref{fig:Q}(\textit{b}) shows the transpiration influence on the flow rate by increasing the amplitude $A^+$ at a constant wavenumber $k_c=2$. A drag increase from zero transpiration up to $A^+=4$ is observed. The maximum increase is achieved between $A^+=1$ and $A^+=2$. In agreement with the conjecture posed by \cite{woodcock2012induced}, increasing the amplitude beyond $A^+=4$ does not significantly change the value of the drag increase. \subsection{Comparison with channel flow} We compare our flow control results with the steady streamwise transpiration study of \citet{quadrio2007effect} in a channel. The variation of the mean friction coefficient $\Delta C_f$ with respect to the smooth pipe is employed to assess the drag reduction/increase. The mean friction coefficient is defined as \begin{equation} C_f = \frac{2 \tau_w}{\rho U_b^2} \, . \end{equation} Given that the mean wall shear stress is fixed, the mean friction coefficient variation can be expressed solely as a function of the ratio between the smooth and controlled pipe bulk velocities, hence \begin{equation} \Delta C_f= \frac{1}{(1+\Delta Q)^2} -1 \, , \end{equation} in which the definition of $\Delta Q$ in (\ref{eq:Q}) is employed. Note that negative values of $\Delta C_f$ correspond to drag reduction. Figure \ref{fig:Quadrio} shows the comparison of the variation of the friction coefficient versus transpiration wavelength at fixed amplitude $A^+=0.7$ in pipe and channel flow. Low and high-$\Rey$ results are presented for the channel case. Similar behaviour is observed, in which a drag reduction can be achieved in both channel and pipe flows. The maximum friction variation is smaller for the pipe than for the channel.
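The relation between $\Delta C_f$ and $\Delta Q$ above is easy to check numerically (Python, our naming):

```python
def delta_Cf(delta_Q):
    """At fixed mean wall shear stress, Delta C_f = 1/(1 + Delta Q)^2 - 1."""
    return 1.0 / (1.0 + delta_Q)**2 - 1.0

# Drag reduction (delta_Q > 0) yields a negative friction variation,
# drag increase (delta_Q < 0) a positive one; delta_Q = 0 gives zero.
```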
In addition, the rate at which the friction coefficient increases with wavelength is higher for the channel than for the pipe. We note that the manner in which parameters are varied differs between the two studies: while \citet{quadrio2007effect} fixed the bulk velocity so as to maintain a constant bulk Reynolds number during the parameter sweep, here we maintained a constant friction Reynolds number $Re_\tau$. It is also worth noting that because of the obvious difference in geometry, identical results are not expected. \begin{figure} \centerline{\includegraphics*[width=0.78\linewidth]{figure-03.png}} \caption{Comparison of the variation of the friction coefficient versus transpiration wavelength at fixed amplitude $A^+=0.7$. Solid circles, pipe flow at $Re_\tau=314$ (present results); open circles, channel flow at $Re_\tau=400$ \citep{quadrio2007effect}; squares, channel flow at $Re_\tau=180$ \citep{quadrio2007effect}.} \label{fig:Quadrio} \end{figure} \section{Turbulence statistics \label{sec:4}} \label{sec:turb} \subsection{Constant transpiration amplitude} \begin{figure} \centerline{\includegraphics*[width=1\linewidth]{figure-04.png}} \caption{Comparison of profile data for the smooth-wall pipe (solid line) and pipe with transpiration at constant amplitude $A^+=0.7$ and different wavenumbers $k_c$. (\textit{a}), mean flow normalized by $U_b$, with dashed lines for linear sublayer and fitted log law; (\textit{b}), axial turbulent intensity; (\textit{c}), radial turbulent intensity; (\textit{d}), Reynolds shear stress. } \label{fig:UB_k} \end{figure} The effect of transpiration is evident in the behaviour of the mean velocity characteristics. Figure \ref{fig:UB_k}(\textit{a}) shows the effect of changing the transpiration wavenumber $k_c$ at a small constant amplitude $A^+=0.7$ on the mean streamwise velocity $\bar{u}_0(r)$ compared to the smooth pipe. Additionally, dashed lines for the linear sublayer and fitted log law \citep{den1997reynolds} are shown as reference.
In agreement with figure \ref{fig:Q}(\textit{a}), we observe reduced flow rates in figure~\ref{fig:UB_k}(\textit{a}) for $k_c < 8$ with a small increase occurring around $k_c=10$. Two interesting features are noticed. First, the viscous sublayer is no longer linear. Since the mean wall shear stress $\tau_w$ is constant, all the profiles must collapse as $y^+=(R^+-r^+)$ approaches zero. In other words, the value of ${\partial u_0}/{\partial y}$ at the wall is the same for all cases considered. Second, the outer layers of all the profiles are parallel, suggesting that the overlap region can be expressed as \begin{equation} \bar{u}_0^+=2.5\ln(y^+) + 5.4 + \Delta T^+ \, , \end{equation} with the transpiration factor $\Delta T^+$ arising in a similar way as the roughness factor of \citet{hama1954boundary}, or the corrugation factor of \citet{SKOB-JFM-2015}. However, we note that the present Reynolds number may not be sufficiently high for the emergence of a self-similar log law in the overlap region. Although there is no well-defined log layer at such a low $\Rey$, we refer to this region as the log layer for convenience; the present results are suggestive of the behaviour at higher $\Rey$. Figure \ref{fig:defect} shows a defect velocity scaling of the mean profiles. A collapse of the mean velocity profiles is observed in the outer layer, hence Townsend's wall similarity hypothesis \citep{Townsend1976} applies in the present low amplitude transpiration cases. Figure \ref{fig:UB_k}(\textit{b}) and (\textit{c}) show the influence of the transpiration on the turbulent intensities, presented as root mean squares. At this small amplitude, the transpiration mainly affects the location and value of the maximum turbulent intensities, and all profiles collapse as $y^+$ approaches the centerline, indicating that small amplitude transpiration does not alter the turbulent activity in the outer layer.
Note that the peak in axial turbulent intensity moves due to the reduction in shear at the same wall-normal location in all the cases, which is apparent in the mean velocity profiles in figure \ref{fig:UB_k}(\textit{a}). Figure \ref{fig:UB_k}(\textit{d}) shows the influence of small amplitude transpiration on the Reynolds shear stress $\tau_{uv} = \langle u^c v^c \rangle $. We observe that the transpiration influence on $\tau_{uv}$ is only significant at small wavenumbers and, as with the turbulent intensities, small amplitude transpiration does not change the outer layer of the profile. The small changes in the shear stress are not enough to explain the significant variations in the mean flow, hence it is inferred that additional effects related to steady streaming and non-zero mean streamwise gradients play a major role in the momentum balance. \begin{figure} \centerline{\includegraphics*[width=0.7\linewidth]{figure-05.png}} \caption{Defect velocity law for profile data of the smooth-wall pipe (solid line) and pipe with transpiration at constant amplitude $A^+=0.7$ and different wavenumbers $k_c$. A collapse of all the mean velocity profiles is observed in the outer layer, hence small amplitude effects do not alter the turbulent activity in the outer layer.} \label{fig:defect} \end{figure} \begin{figure} \centerline{\includegraphics*[width=1\linewidth]{figure-06.png}} \caption{Comparison of profile data for the smooth-wall pipe (solid line) and pipe with transpiration at constant wavenumber $k_c=2$ and different amplitudes $A^+$. (\textit{a}), mean flow normalized by $U_b$, with dashed lines for linear sublayer and fitted log law; (\textit{b}), axial turbulent intensity; (\textit{c}), radial turbulent intensity; (\textit{d}), Reynolds shear stress.
} \label{fig:UB_A} \end{figure} \subsection{Constant transpiration wavenumber} The effect of changing the amplitude $A^+$ at a constant forcing wavenumber $k_c=2$ on the mean streamwise velocity $\bar{u}_0$ is shown in figure~\ref{fig:UB_A}(\textit{a}). For $A^+<2$, results are similar to those observed for small amplitudes. At $A^+=2$ the flow rate is dramatically decreased and the overlap region is reduced compared to the small amplitude cases. At $A^+=4$ the parallel overlap region no longer exists; this large amplitude significantly increases the mean velocity in the outer layer. Similarly, we observe that large amplitude transpiration shifts the location of maximum turbulent intensities towards the centerline, as seen in figures \ref{fig:UB_A}(\textit{b}) and (\textit{c}). We can state that large amplitudes $A^+ > 2$ can influence the turbulent activity in the outer layer. Figure \ref{fig:UB_A}(\textit{d}) shows that high amplitudes have a similar effect on the Reynolds shear stress: the amplitude $A^+= 4$ dramatically reduces the shear stress and shifts the location of the maximum towards the centerline. These tendencies in $A^+$ and $k_c$ suggest that very high values of the transpiration amplitude combined with high wavenumber $k_c$ can be explored in order to find significant drag-reducing configurations. This has been confirmed through the investigation of a static transpiration case with an amplitude $A^+=10$ and wavenumber $k_c=10$, yielding an increase in flow rate of $\Delta Q=19.3\%$. This case will be investigated in detail in the next section. \section{Streamwise momentum balance} \label{sec:5} As previously mentioned, we have inferred from figures~\ref{fig:UB_k} and~\ref{fig:UB_A} that the changes in Reynolds stress are not enough to explain the changes in the mean velocity profile; additional effects related to steady streaming and non-zero mean streamwise gradients could play a major role in the momentum balance.
Here we analyze the streamwise momentum balance in order to identify these additional effects. The axial momentum equation averaged in time and azimuthal direction for a pipe flow controlled with static transpiration is \begin{equation} f_x + \frac{1}{r}\frac{\partial}{\partial{r}}(-r\tau_{uv}^c - ru^c_0v^c_0 + r\Rey_s^{-1}\frac{\partial{u^c_0}}{\partial{r}}) + \mathcal{N}_x = 0 \, , \label{controlled} \end{equation} with $\mathcal{N}_x$ being the sum of terms with $x$-derivatives, representing the non-homogeneity of the flow in the streamwise direction \citep{fukagata2002contribution}, \begin{equation} \mathcal{N}_x = \frac{\partial(u_0^c u_0^c)}{\partial x} + \frac{\partial \langle u^c u^c \rangle }{\partial x} - \Rey_s^{-1}\frac{\partial^2 u_0^c}{\partial x^2} \,, \end{equation} where $\langle~\rangle$ denotes averaging in time and the azimuthal direction. Equation ({\ref{controlled}}) for a smooth pipe yields \begin{equation} f_x + \frac{1}{r}\frac{\partial}{\partial{r}}(-r\tau_{uv}^s + r\Rey_s^{-1}\frac{\partial{u^s_0}}{\partial{r}}) = 0 \, . \label{smooth} \end{equation} Since the body force $f_x$ and the Reynolds number $\Rey_s$ have the same values in (\ref{controlled}) and (\ref{smooth}), these two equations can be subtracted and integrated in the wall-normal direction to give \begin{equation} \Rey_s^{-1}{\Delta{u_0}(x,r)}=-\int_0^r \Delta{\tau} r^\prime \mathrm{d}r^\prime - \int_0^r u^c_0v^c_0 r^\prime \mathrm{d}r^\prime + \int_0^r\mathcal{N}^\prime_x r^\prime \mathrm{d}r^\prime \, , \end{equation} in which $\Delta\tau=\tau_{uv}^c-\tau_{uv}^s$ and $\mathcal{N}_x$ has been previously integrated with respect to the wall-normal direction to yield $\mathcal{N}^\prime_x$.
This equation can be additionally averaged in the axial direction to identify three different terms playing a role in the modification of the mean profile \begin{equation} \Delta{\bar{u}_0}(r)=RSS+ST+NH \, , \label{balance} \end{equation} where \begin{eqnarray} RSS(r) &=& \frac{-\Rey_s}{L}\int_0^L\int_0^r \Delta{\tau} r^\prime \mathrm{d}r^\prime \mathrm{d}x\, , \\ ST(r) &=& \frac{-\Rey_s}{L}\int_0^L\int_0^r u^c_0v^c_0 r^\prime \mathrm{d}r^\prime\mathrm{d}x\, , \label{ST} \\ NH(r) &=& \frac{\Rey_s}{L}\int_0^L\int_0^r\mathcal{N}^\prime_x r^\prime\mathrm{d}r^\prime \, \mathrm{d}x\, . \label{NH} \end{eqnarray} The first term $RSS$ represents the interaction of the transpiration with the Reynolds shear stress, and its behaviour can be inferred from the turbulence statistics shown in Figures \ref{fig:UB_k}(\textit{d}) and \ref{fig:UB_A}(\textit{d}). The second term $ST$ is associated with the steady streaming produced by the static transpiration. As briefly mentioned in the introduction, this term defined in (\ref{ST}) consists of an additional flow rate due to the interaction of the wall-normal transpiration with the flow convecting downstream \citep{luchini2008acoustic}. This term can be alternatively understood as a coherent Reynolds shear stress. The velocity averaged in time and azimuthal direction defined in (\ref{def2}) can be decomposed into a mean profile ${{\bar{\bm u}_0}}(r)$ and a steady deviation from that mean velocity profile $\bm{u}_0^\prime(x,r) $ \begin{equation} \bm{u}_0(x,r) = {{\bar{\bm u}_0}}(r) + {\bm u}_0^\prime(x,r) \, . \label{eq:triple} \end{equation} Taking into account this decomposition, the integrand of the steady streaming term now reads \begin{equation} u^c_0 v^c_0 = {{\bar{u}^c_0}} {{\bar{v}^c_0}} + {{\bar{u}^c_0}} {v^c_0}^\prime \, + {{\bar{v}^c_0}} {u^c_0}^\prime \, + {u^c_0}^\prime {v^c_0}^\prime \,. 
\end{equation} Its axial average then reduces to \begin{equation} \frac{1}{ L} \int_0^L u^c_0 v^c_0 \mathrm{d}x = \frac{1}{ L} \int_0^L {u^c_0}^\prime {v^c_0}^\prime \mathrm{d}x \,, \end{equation} since ${{\bar{v}^c_0}}$ must be zero because of the continuity equation and the deviations have zero axial mean. Hence the steady streaming term $ST$ is generated by the coherent Reynolds shear stress induced by the deviation of the velocity from the axial mean profile. We highlight that the substitution of the decomposition (\ref{eq:triple}) in the Reynolds decomposition in (\ref{def1}) leads to a triple decomposition, in which the deviation from the mean velocity profile plays the role of a coherent velocity fluctuation. The third term $NH$ defined in (\ref{NH}) corresponds to the axial non-homogeneity in the flow induced by the transpiration. It consists of non-zero mean streamwise gradients generated by the transpiration. Because of the low amplitudes considered in previous works \citep{quadrio2007effect}, this last term has not been isolated before and has been implicitly absorbed into an interaction-with-turbulence term that accounts for both the $ST$ and $NH$ terms. Finally, we note that the identity derived by \citet{fukagata2002contribution} (FIK identity) could be alternatively employed for our analysis. However, the difference in bulk flow Reynolds numbers between uncontrolled and controlled cases favors the present approach. \subsection{Representative transpiration configurations} The relative importance of the three terms acting in the transpiration is investigated by inspecting three representative transpiration configurations in terms of drag modification.
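The cancellation of the cross terms in this axial average can be verified with synthetic deviation fields (a Python sketch; the amplitudes and wavenumber are arbitrary choices of ours):

```python
import numpy as np

# Periodic axial grid and synthetic deviation velocities at one radius
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
l = 2                               # axial wavenumber of the deviation
u_bar = 1.0                         # axial-mean streamwise velocity
u_dev = 0.3 * np.cos(l * x)         # u0' (zero axial mean)
v_dev = 0.1 * np.cos(l * x + 0.5)   # v0' (zero axial mean, v_bar = 0)

# Axial average of the full product vs. the coherent-stress term alone:
# the cross term u_bar * mean(v_dev) vanishes because v_dev has zero mean
full_avg = np.mean((u_bar + u_dev) * v_dev)
coherent = np.mean(u_dev * v_dev)
```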
Table \ref{tab:cases} lists the contributions to the change in flow rate of these configurations: (I) the largest drag-reducing case found ($\Delta Q=19.3\%$) consisting of a large amplitude and transpiration wavenumber $(A^+,k_c)=(10,10)$, (II) a small drag-reducing case ($\Delta Q=0.4\%$) induced by a tiny amplitude at a large wavenumber $(A^+,k_c)=(0.7,10)$, and (III) a large drag-increasing case ($\Delta Q=-36.1\%$) with small wavenumber and amplitude $(A^+,k_c)=(2,2)$. We first compare the Reynolds shear stress generated by the fluctuating velocity $\tau_{uv}$ and the shear stress arising from the deviation velocity ${u^c_0}^\prime {v^c_0}^\prime$. The radial distributions for the three different cases are shown in figure~\ref{fig:3RSS}. We observe that the streaming or coherent Reynolds shear stress is dominant close to the wall and opposes the Reynolds stress generated by the fluctuating velocity. Far from the wall, the Reynolds stress is governed by the fluctuating velocity. \begin{table} \begin{center} \def~{\hphantom{0}} \begin{tabular}{c l c c r| r r r } & & $A^+$ & $k_c$ & $ \Delta Q (\%)$ & $\int RSS (\%)$ & $\int ST(\%)$ & $\int NH(\%)$ \\ [3pt] (I) & large drag-reduction & 10 & 10 & 19.3 & 28.4 & $-20.4$ & 11.3 \\ (II)& small drag-reduction & 0.7 & 10 & 0.4& 1.5 & $-1.7$ & 0.6 \\ (III)& drag-increase & 2 & 2 & $-36.1$ & 11.2 & $-29.6$ & $-17.7$ \\ \end{tabular} \caption{Contributions to the change in flow rate of different transpiration configurations based on integration of Equation (\ref{balance}) ($\int$ denotes integration in the wall-normal direction).} \label{tab:cases} \end{center} \end{table} \begin{figure} \centerline{\includegraphics*[width=1\linewidth]{figure-07.png}} \caption{\label{fig:3RSS}Radial distribution of the Reynolds shear stress generated by the fluctuating velocity $\tau_{uv}$ and the deviation velocity ${u^c_0}^\prime {v^c_0}^\prime$: (\textit{a}) case I: large drag-decrease $(A^+,k_c)=(10,10)$, (\textit{b}) case II: small
drag-decrease $(A^+,k_c)=(0.7,10)$, (\textit{c}) case III: drag-increase $(A^+,k_c)=(2,2)$.} \end{figure} Figure~\ref{fig:3cases} shows the radial profile of the different contributions to the mean profile and figure~\ref{fig:meanshear} the corresponding mean profiles of the cases considered. Figure \ref{fig:3cases}(\textit{a}) corresponds to the drag-decrease case $(A^+,k_c)=(10,10)$. It can be observed that the contributions to the increase in flow rate are caused by a large reduction of Reynolds shear stress (28.4\%) and the streamwise gradients produced in the flow at such large amplitude and wavenumber (11.3\%). However, these contributions are mitigated by the opposite flow rate induced by the steady streaming ($-20.4\%$). Figure~\ref{fig:3cases}(\textit{b}) corresponds to a low amplitude static transpiration $A^+=0.7$ at the same axial wavenumber as the previous case, yielding a very small increase in flow rate. Similarly, the effect on the Reynolds shear stress (1.5\%) and the non-homogeneity effects (0.6\%) contribute to increase the flow rate while the streaming produced is opposite to the mean flow, but of a lesser magnitude ($-1.7\%$). Figure \ref{fig:3cases}(\textit{c}) represents the drag-increase case $(A^+,k_c)=(2,2)$. As opposed to the two previous cases, the main contribution to the drag is caused by a large steady streaming in conjunction with large non-zero streamwise gradients opposite to the mean flow. Although the decrease in Reynolds stress is significant and favorable towards a flow rate increase, the sum of the other two terms is much larger. In terms of the variation of mean profile, we observe a similar behaviour in all cases: there is a decrease in the mean streamwise velocity profile close to the wall induced by the streaming term or coherent Reynolds shear stress. Within the buffer layer, the difference in mean velocity decreases until a minimum is achieved and then it increases to a maximum in the overlapping region.
A smooth decrease towards the centerline is then observed. \begin{figure} \centerline{\includegraphics*[width=1\linewidth]{figure-08.png}} \caption{\label{fig:3cases}Radial distribution of the contributions of the different terms in (\ref{balance}) to the streamwise momentum balance. Solid line, $\Delta{\bar{u}_0}(r)$; \textcolor{blue}{-$\cdot$-} Reynolds shear stress term $RSS$; \textcolor{red}{$--$} steady streaming term $ST$; \textcolor{magenta}{$\cdot$$\cdot$$\cdot$$\cdot$} non-homogeneous term $NH$. (\textit{a}) case I: large drag-decrease $(A^+,k_c)=(10,10)$, (\textit{b}) case II: small drag-decrease $(A^+,k_c)=(0.7,10)$, (\textit{c}) case III: drag-increase $(A^+,k_c)=(2,2)$.} \end{figure} \begin{figure} \centerline{\includegraphics*[width=1\linewidth]{figure-09.png}} \caption{\label{fig:meanshear} (\textit{a}) Streamwise mean profile; (\textit{b}) mean wall shear along the axial direction.} \end{figure} Streamlines and kinetic energy of the \twod\ mean velocities $\bm{u}_0(x,r)$ are shown in figure~\ref{fig:means}. Preceding the analysis of the flow dynamics, we observe that blowing is associated with areas of high kinetic energy while suction areas slow down the flow. Despite the large amplitudes employed, we observe an apparent absence of recirculation areas in these mean flows. Figure \ref{fig:meanshear}(\textit{b}) shows the mean wall shear along the axial direction. Only a very small flow separation occurs in the large drag-decrease case $(A^+,k_c)=(10,10)$. Hence flow separation effects are not associated with the observed drag modifications.
\begin{figure} \begin{center} \includegraphics*[width=1\linewidth]{figure-10.png} \caption{Two-dimensional mean flow: streamlines and normalized kinetic energy for each different case (white to blue): ($a$) reference case: smooth pipe; ($b$) drag-reducing case $(A^+,k_c)=(10,10)$; ($c$) neutral case $(A^+,k_c)=(0.7,10)$; ($d$) drag-increasing case $(A^+,k_c)=(2,2)$. \label{fig:means} } \end{center} \end{figure} \subsection{Flow control energetic performance} \label{sec:energy} As stated by \citet{quadrio2011drag}, a flow control system study must always be accompanied by its respective energetic performance analysis in order to determine its validity in real applications. The control performance indices in the sense of \citet{kasagi2009toward} are employed here to assess the energy effectiveness of the transpiration. We define a drag reduction rate as \begin{equation} W = (P_c - P_s)/{P_s} \, , \label{eq:P} \end{equation} where $P$ is the power required to drive the pipe flow and the subscripts $c$ and $s$ refer again to controlled and smooth pipe. The index $W$ can also be interpreted as the proportional change in the power developed by the constant body force $f_x$ as a consequence of the transpiration. Note from (\ref{eq:P}) and (\ref{eq:pump}) that the drag reduction rate $W$ is equivalent to the flow rate variation $\Delta Q$. The power required to drive the flow is given by the product of the body force times the bulk velocity \begin{equation} P=f_x \upi R^2 L U_b \, . \label{eq:pump} \end{equation} A net energy saving rate $S$ is defined to take into account the power required to operate the flow control \begin{equation} S=\left(P_c - P_s - P_{in}\right)/{P_s} \, . \end{equation} The mean flow momentum and energy equations are employed to obtain the power required to apply the transpiration control, as also done by \citet{marusic2007laminar} and \citet{Mao2015cylinder}.
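These indices combine, through the effectiveness $G$ defined below, into the identity $S=W(1-G^{-1})$; a quick Python check, taking $W=\Delta Q$ from the reported flow-rate changes and $G$ from table~\ref{tab:FCPI} (rounding of the tabulated values explains the small residuals):

```python
def net_saving(W, G):
    """Net energy saving rate S = W (1 - 1/G)."""
    return W * (1.0 - 1.0 / G)

# (W, G) pairs for the three representative cases I, II, III
cases = {"I": (0.19, 1.02), "II": (0.004, 0.21), "III": (-0.36, -3.73)}
S = {name: net_saving(W, G) for name, (W, G) in cases.items()}
```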
The power employed to apply the transpiration control reads \begin{equation} P_{in} = \int_0^{2\upi} \int_0^{L} \left( \frac{1}{2}v_0(x,R)^3 + p(x,R)v_0(x,R) \right) \, \cd x \, R \cd \theta \, , \end{equation} with $v_0(x,R)=A\sin(k_c x)$. The first term in the integral represents the rate at which energy is introduced or removed as kinetic energy in the flow through the pipe wall. This term is exactly zero because the transpiration velocity imposed by the boundary condition in (\ref{eq:bc}) is sinusoidal with zero net mass flux, so its cube integrates to zero over a wavelength. The second term represents the rate of energy expenditure by pumping flow against the local pressure at the wall. Finally, the effectiveness of the transpiration control is defined as the ratio between the change in pumping power and the power required to apply the transpiration control, which reads \begin{equation} G=(P_c-P_s)/P_{in} \, . \end{equation} Note that the net power saving can be alternatively written as \begin{equation} S=W(1-G^{-1}) \, , \end{equation} which indicates that an effectiveness $G$ higher than one is required for a positive power gain. As reported by \citet{kasagi2009toward}, typical maxima for active feedback control systems are in the range of $G\sim100$ and $S\sim0.15$ and for predetermined control strategies, such as spanwise wall-oscillation control in channel \citep{quadrio2004critical} or streamwise travelling transpiration in channel \citep{min2006sustained}, a range of $2\lesssim G\lesssim6$ and $0.05\lesssim S\lesssim0.25$ was found. \begin{table} \begin{center} \def~{\hphantom{0}} \begin{tabular}{l c c | r r r} & $A^+$ & $k_c$ & $ W$ & $G $ & $S $ \\ [3pt] (I) large drag-reduction & 10 & 10 & 0.19& 1.02 & 0.004 \\ (II) small drag-reduction & 0.7 & 10 & 0.004 & 0.21 & -0.014 \\ (III) drag-increase & 2 & 2 & -0.36 & -3.73 & -0.46 \\ \end{tabular} \caption{Flow control energetic performance indices.
$W$ represents the drag reduction rate, $G$ the effectiveness of the transpiration control and $S$ the net energy saving rate.} \label{tab:FCPI} \end{center} \end{table} The resulting flow control energetic performance indices are listed in table~\ref{tab:FCPI}. Note that the change in pumping power rate $W$ is equivalent to the flow rate variation $\Delta Q$. Despite the large drag reduction produced by the configuration $(A^+,k_c)=(10,10)$, the net energy saving rate is marginal, and the small drag reduction caused at $(A^+,k_c)=(0.7,10)$ has an effectiveness less than one, with a net energy expenditure. The case $(A^+,k_c)=(2,2)$ is interesting, as it shows a high effectiveness in decreasing the flow rate and has the potential to relaminarize the flow at lower $Re_\tau$. For instance, a reduction in flow rate $\Delta Q=-36\%$ at $Re_\tau=115$ could reduce the bulk flow Reynolds number below the critical value $\Rey=2040$ and thus achieve a relaminarization of the flow. Finally, we remark that the particular case $(A^+,k_c)=(10,10)$ is probably not the globally optimal configuration for steady transpiration, and net energy saving rates and effectiveness in the range of typical values for the flow control strategies reported by \citet{kasagi2009toward} might be possible if the full parameter space were considered. \section{Flow dynamics} \label{sec:6} As a first step in establishing a relationship between changes in flow structures and drag reduction or increase mechanisms, this section describes how the most amplified and energetic flow structures are affected by the transpiration. \subsection{Resolvent analysis} \label{sec:resolanalysis} The loss of spatial homogeneity in the axial direction is the main challenge when incorporating transpiration effects into the resolvent model. One solution is to employ the \twod\ resolvent framework of \citet{gomez_pof_2014}. This method is able to deal with flows in which the mean is spatially non-homogeneous.
The dependence on the axial coordinate $x$ is retained in the formulation, in contrast with the classical \oned\ resolvent formulation of \citet{McKeonSharma2010}. However, the method leads to a singular value decomposition problem with storage requirements of order $\mathcal{O}(N_r^2N_x^2)$, with $N_r$ and $N_x$ being the resolution in the radial and axial directions respectively, as opposed to the original formulation, which was of order $N_r^2$. Consequently, the \twod\ method is not practical for a parameter sweep owing to the large computational effort required. In the following we present a computationally cheaper and simpler alternative based on a triple decomposition of the total velocity. Based on the decompositions in (\ref{def1}) and (\ref{eq:triple}), the total velocity is decomposed as a sum of the axial mean profile, a steady but spatially varying deviation from the axial profile and a fluctuating velocity \begin{equation} \bm{\hat{u}}(x,r,\theta,t)=\bar{\bm{u}}(r) + \bm{u}^\prime(x,r) + \bm{u}(x,r,\theta,t) \, . \end{equation} The fluctuating velocity is expressed as a sum of Fourier modes. These are discrete since the domain has a fixed periodic length and is periodic in the azimuth: \begin{equation} \bm{\hat{u}}(x,r,\theta,t)=\bar{\bm{u}}(r) + \bm{u}_l^\prime(r) \ce^{\ci lx} + \sum_{(k,n,\omega) \neq (l,0,0) } \bm{u}_{k,n,\omega}(r) \ce^{\ci(kx+n\theta-\omega t)} + \mathrm{c.c.}, \end{equation} where $k$, $n$ and $\omega$ are the axial and azimuthal wavenumbers and the temporal frequency, respectively. Note that a \cc\ must be added because $\bm{\hat{u}}$ is real. Without loss of generality, we have assumed that the deviation velocity can be expressed as a single Fourier mode with axial wavenumber $l$.
As a consequence of the triadic interaction between the spatial Fourier mode of the deviation velocity and the fluctuating velocity, there is a coupling of the fluctuating velocity at $(k,n,\omega)$ with that at $(k\pm l,n,\omega)$ (see Appendix A). Similarly, the non-linear forcing terms generated by the fluctuating velocity are written as $\bm{f}_{k,n,\omega}=(\bm{u\cdot\nabla u})_{k,n,\omega}$. Taking (\ref{eq:triple}) into account, it follows that the Fourier-transformed \NavSto\ equation (\ref{eqn:NSE}) yields the linear relation \begin{equation} \bm{u}_{k,n,\omega}=\mathcal{H}_{k,n,\omega}\left(\bm{f}_{k,n,\omega} + \mathcal{C}_{k,n,\omega} \bm{u}_{k \pm l,n,\omega} \right) \, \end{equation} for $(k,n,\omega) \neq (0,0,0)$ and $(k,n,\omega) \neq (\pm l,0,0)$, where $\mathcal{C}_{k,n,\omega}$ is a coupling operator representing the triadic interaction between deviation and fluctuating velocity. The triadic interaction can be considered as another unknown forcing, which permits lumping all forcing terms as \begin{equation} \bm{g}_{k,n,\omega} = \bm{f}_{k,n,\omega} + \mathcal{C}_{k,n,\omega} \bm{u}_{k \pm l,n,\omega} \, , \label{coupling} \end{equation} hence the following linear velocity--forcing relation is obtained \begin{equation} \label{eq.exres} \bm{u}_{k,n,\omega}=\mathcal{H}_{k,n,\omega}\bm{g}_{k,n,\omega} \, . \end{equation} The resolvent operator $\mathcal{H}_{k,n,\omega}$ acts as a transfer function between the forcing of the non-linear terms and the fluctuating velocity, thus it provides information on which combinations of frequencies and wavenumbers are damped/excited by wall transpiration effects. Equation (\ref{discrtH}) shows the resolvent $\mathcal{{H}}_{k,n,\omega}$. The first, second and third rows correspond to the streamwise, wall-normal and azimuthal momentum equations, respectively.
\begin{equation} \setlength{\arraycolsep}{0pt} \renewcommand{\arraystretch}{1.3} \mathcal{{H}}_{k,n,\omega}(r) =\left[ \begin{array}{ccc} \ci (k u_0 - \omega) - \Rey^{-1}(D + r^{-2}) & \partial_r{u_0} & 0 \\ 0 & \ci (k u_0 - \omega) - \Rey^{-1}D & -2\ci nr^{-2}\Rey^{-1} \\ 0 & -2\ci nr^{-2}\Rey^{-1} & \ci (k u_0 - \omega) - \Rey^{-1}D \\ \end{array} \right]^{-1} , \label{discrtH} \end{equation} with \begin{equation} D=-k^2 - (n^2 +1)r^{-2} + \partial^2_r + r^{-1}\partial_r \, . \end{equation} The physical interpretation of the present resolvent formulation is the same as that of the original formulation \citep{McKeonSharma2010}. Figure~\ref{fig:15D} presents the new resolvent formulation (\ref{eq.exres}) by means of a block diagram. The mean velocity profile is sustained in the $(k,n,\omega)=(0,0,0)$ equation via the Reynolds stress $f_{0,0,0}$ and interactions with the deviation velocity. Similarly, the deviation velocity is sustained via the forcing $f_{l,0,0}$ and interactions with the mean flow in the deviation equation corresponding to $(k,n,\omega)=(l,0,0)$. The deviation velocity drives the triadic interactions generated via the operator $\mathcal{C}_{k,n,\omega}$ and, closing the loop, the mean profile restricts how the fluctuating velocity responds to the non-linear forcing via the resolvent operator $\mathcal{H}_{k,n,\omega}$. We note that it is relatively straightforward to generalize the block diagram in figure~\ref{fig:15D} if the deviation velocity is composed of multiple axial wavenumbers, or even frequencies. \begin{figure} \begin{center} \includegraphics*[width=0.6\linewidth]{figure-11.png} \caption{\label{fig:15D} Diagram of the new triple-decomposition-based resolvent model. The mean velocity profile is sustained in the $(k,n,\omega)=(0,0,0)$ equation via the Reynolds stress $f_{0,0,0}$ and the interactions with the deviation velocity.
Similarly, the deviation velocity is also sustained via the forcing $f_{l,0,0}$ and interactions with the mean flow in the $(k,n,\omega)=(l,0,0)$ equation. The original model of \citet{McKeonSharma2010} is represented by the subset within the dashed border. } \end{center} \end{figure} We highlight that this formulation represents the exact route for modifying a turbulent flow using the mean profile $u_0$. Despite the non-homogeneity of the flow in our present cases of interest, the operator $\mathcal{H}_{k,n,\omega}$ is identical to the one developed by \citet{McKeonSharma2010} for \oned\ mean flows and thus it only depends on the axial mean $\bar{u}(r)$; the deviation velocity does not appear in the resolvent, but it does appear in the mean flow equation. Also note that the continuity equation enforces the condition that the mean profile of the wall-normal velocity $v_0$ is zero in all cases. Hence the modification of the mean profile $u_0$ is sufficient to analyze the dynamics of the flow. We recall that \citet{mckeon2013experimental} used a comparable decomposition in order to assess the effect of a synthetic large-scale motion on the flow dynamics. Following the analysis of \citet{McKeonSharma2010}, a singular value decomposition (SVD) of the resolvent operator \begin{equation} \mathcal{H}_{k,n,\omega}=\sum_m {\bm{\psi}}_{k,n,\omega,m} \sigma_{k,n,\omega,m} {\bm{\phi}}^*_{k,n,\omega,m} \end{equation} delivers an input-output amplification relation between response modes $\bm{\psi}_{k,n,\omega,m}$ and forcing modes $\bm{\phi}_{k,n,\omega,m}$ through the magnitude of the corresponding singular value $\sigma_{k,n,\omega,m}$. Here the subscript $m$ is an index that ranks singular values from largest to smallest.
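The input--output structure of this SVD can be sketched generically: for any discretized operator standing in for $\mathcal{H}_{k,n,\omega}$, the SVD returns response modes, ranked gains and forcing modes, and the dyadic sum above recovers the operator. A minimal sketch using a random complex matrix as a stand-in (the matrix and its size are illustrative, not the paper's discretization):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30
# random complex matrix as a stand-in for a discretized resolvent operator
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# SVD: columns of psi are response modes, rows of phi_H are forcing modes phi^*
psi, sigma, phi_H = np.linalg.svd(H)

# singular values are returned ranked from largest to smallest (index m)
assert np.all(np.diff(sigma) <= 0)

# the dyadic expansion  H = sum_m psi_m sigma_m phi_m^*  recovers the operator
assert np.allclose((psi * sigma) @ phi_H, H)

# rank-1 truncation keeps only the leading response/forcing pair
H_rank1 = sigma[0] * np.outer(psi[:, 0], phi_H[0, :])
```

The rank-1 truncation in the last line is the approximation exploited later, justified when $\sigma_1 \gg \sigma_2$; for the random stand-in matrix used here that gap does not hold, so the truncation only illustrates the construction.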
The non-linear terms $\bm{g}_{k,n,\omega}$ can be decomposed as a sum of forcing modes to relate the amplification mechanisms to the velocity fields, \begin{equation} \bm{g}_{k,n,\omega}=\sum_m \chi_{k,n,\omega,m} \bm{\phi}_{k,n,\omega,m} \end{equation} where the unknown forcing coefficients $\chi_{k,n,\omega,m}$ represent the unknown mode interactions and Reynolds stresses. The decomposition of the fluctuating velocity field is then constructed as a weighted sum of response modes \begin{equation} \bm{u}(x,r,\theta,t)= \sum_{(k,n,\omega) \neq (l,0,0) } \chi_{k,n,\omega,1}\sigma_{k,n,\omega,1}\bm{\psi}_{k,n,\omega,1}\ce^{\ci(kx + n\theta-\omega{t})} \, , \label{resolde} \end{equation} in which the low-rank nature of the resolvent, $\sigma_{k,n,\omega,1} \gg \sigma_{k,n,\omega,2}$, is exploited \citep{McKeonSharma2010,sharma2013coherent,luhar2014opposition}. A numerical method similar to the one developed by \citet{McKeonSharma2010} is employed for the discretization of the resolvent operator $ \mathcal{H}_{k,n,\omega}$. Following the approach of \citet{meseguer2003linearized}, wall-normal derivatives are computed using Chebyshev differentiation matrices, properly modified to avoid the axis singularity. Notice that instead of projecting the velocity into a divergence-free basis, here the divergence-free velocity fields are enforced by adding the continuity equation as an additional column and row in the discretized resolvent \citep{luhar2014opposition}. The velocity boundary conditions at the wall are zero Dirichlet. The velocity profile inputs of the four cases under study were shown in figure~\ref{fig:meanshear}. \subsection{Fourier analyses and DMD} It may be observed in (\ref{resolde}) that the energy associated with each Fourier mode ${\bm u}_{k,n,\omega}$ is proportional to its weighting $\chi_{k,n,\omega,1}\sigma_{k,n,\omega,1}$, under the rank-1 assumption.
The resolvent analysis yields the amplification properties in $(k,n,\omega)$ but it does not provide information on the amplitude of the non-linear forcing $\chi_{k,n,\omega,1}$. As briefly exposed in \S\ref{sec:intro}, a snapshot-based DMD analysis \citep{Schmid2010,RowleyEtAlJFM2009} is carried out on the DNS data in order to unveil the unknown amplitudes of the non-linear forcing terms $\chi_{k,n,\omega,1}$. DMD obtains the most energetic flow structures, i.e., the sets of wavenumbers/frequencies $(k,n,\omega)$ corresponding to the maximum product of forcing amplitude and amplification. As shown by \citet{chen2012variants}, the results from a DMD analysis of statistically steady flows such as those considered here are equivalent to a discrete Fourier transform of the DNS data once the time-mean is subtracted. This is confirmed by the decay/growth rates of the DMD eigenvalues being close to zero. Hence, the DMD modes obtained are marginally stable and can be considered Fourier modes. The norms of these modes indicate the energy corresponding to each set of wavenumbers/frequencies and can unveil the values of the unknown forcing coefficients $\chi_{k,n,\omega,1}$ \citep{gomez_iti_2014}. To avoid additional post-processing of the DNS data in the DMD analysis, we directly employ \twod\ snapshots with Fourier expansions in the azimuthal direction, obtained from the DNS based on (\ref{eq:FourierDecomp}). Given an azimuthal wavenumber $n$, two matrices of snapshots equispaced in time are constructed as \begin{eqnarray} \mathcal{U}^1 & = & \left[\begin{array}{cccc} \hat{\bm u}_n(x,r,t_1) & \hat{\bm u}_n(x,r,t_2) & ... & \hat{\bm u}_n(x,r,t_{{N_s}-1}) \end{array} \right] \, , \\ \mathcal{U}^2 & = & \left[\begin{array}{cccc} \hat{\bm u}_n(x,r,t_2) & \hat{\bm u}_n(x,r,t_3) & ... & \hat{\bm u}_n(x,r,t_{N_s}) \end{array} \right] \, . \end{eqnarray} The size of these snapshot matrices is $N_rN_x \times (N_s-1)$, with $N_s$ being the number of snapshots employed.
DMD consists of the inspection of the properties of the linear operator $\mathcal{A}$ that relates the two snapshot matrices as \begin{equation} \mathcal{A} \mathcal{U}^1 = \mathcal{U}^2 \, ; \end{equation} the linear operator $\mathcal{A}$ is commonly known as the Koopman operator. In the present case, it can be proved that the eigenvectors of this operator and the Fourier modes of the DNS data are equivalent if the snapshot matrices are not rank deficient and the eigenvalues of $\mathcal{A}$ are isolated. The reader is referred to the work of \citet{chen2012variants} and the review by \citet{mezic2013analysis} for a rigorous derivation of this equivalence. To obtain the eigenvectors of $\mathcal{A}$, we employ the DMD algorithm based on the SVD of the snapshot matrices developed by \citet{Schmid2010}. This algorithm circumvents any rank deficiency in the snapshot matrices and provides a small set of eigenvectors ordered by energy norm. The dataset consists of 1200 DNS snapshots equispaced over $\mathcal{O}(40)$ wash-out times $L/u_0(R)$. We anticipate that because of the discretization employed in the DMD analysis, the Fourier modes obtained are \twod\ and can contain multiple axial wavenumbers $k$. A spatial Fourier transform in the axial direction can be carried out in order to identify the dominant axial wavenumber $k$ of a DMD mode. In the following, we employ the resolvent analysis and DMD to address how the most amplified and energetic flow structures are manipulated by the transpiration. This is the first step in establishing a relation between the changes in flow structures and drag reduction or increase mechanisms.
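The SVD-based algorithm can be sketched in a few lines: the operator $\mathcal{A}$ is never formed explicitly; it is projected onto the POD basis of the first snapshot matrix, which also handles rank deficiency by truncation. This is a generic textbook sketch applied to synthetic data (not the DNS snapshots):

```python
import numpy as np

def dmd(U1, U2, rank=None):
    """SVD-based DMD (Schmid 2010): eigenpairs of the operator A with A U1 = U2."""
    Q, s, Wh = np.linalg.svd(U1, full_matrices=False)   # POD basis of U1
    if rank is not None:                                 # truncate to handle rank deficiency
        Q, s, Wh = Q[:, :rank], s[:rank], Wh[:rank]
    # A projected onto the POD basis: S = Q^* U2 W diag(1/s)
    S = Q.conj().T @ U2 @ Wh.conj().T / s
    eigvals, eigvecs = np.linalg.eig(S)
    modes = Q @ eigvecs                                  # approximate Koopman modes
    return eigvals, modes

# synthetic statistically steady data: two oscillation frequencies with
# exactly linear (rotational) dynamics on a rank-4 subspace
x = np.linspace(0, 1, 50)
t = np.arange(40)
data = (np.outer(np.sin(2 * np.pi * x), np.cos(0.5 * t))
        + np.outer(np.cos(2 * np.pi * x), np.sin(0.5 * t))
        + np.outer(np.sin(4 * np.pi * x), np.cos(1.1 * t))
        + np.outer(np.cos(4 * np.pi * x), np.sin(1.1 * t)))
U1, U2 = data[:, :-1], data[:, 1:]

eigvals, modes = dmd(U1, U2, rank=4)
# zero decay/growth: eigenvalues sit on the unit circle, so the DMD modes
# are marginally stable and behave as temporal Fourier modes
assert np.allclose(np.abs(eigvals), 1.0, atol=1e-6)
```

For this statistically steady toy dataset the recovered eigenvalues are $\mathrm{e}^{\pm 0.5\ci}$ and $\mathrm{e}^{\pm 1.1\ci}$, which mirrors the equivalence with a discrete Fourier transform noted above.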
\subsection{Amplification and energy} \begin{figure} \begin{center} \includegraphics*[width=1\linewidth]{figure-12.png} \caption{\label{fig:sigmas}Distribution of amplification $\log_{10}(\sigma_{k,6,\omega,1})$ (\textit{a}) reference: smooth pipe (\textit{b}) case I: large drag-decrease $(A^+,k_c)=(10,10)$, (\textit{c}) case II: small drag-decrease $(A^+,k_c)=(0.7,10)$, (\textit{d}) case III: drag-increase $(A^+,k_c)=(2,2)$. Symbols denote the frequency corresponding to the most energetic \twod\ DMD modes with dominant wavenumbers $k=0,1,2,3$. Dashed lines indicate the wavespeed corresponding to the centerline velocity $\bar{u}^c(R)$ for each case.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics*[width=1\linewidth]{figure-13.png} \caption{\label{fig:DMD} In each of the four blocks, the upper diagram represents each transpiration configuration. Below, iso-surfaces of $50 \%$ of the maximum/minimum streamwise velocity are represented corresponding to ({\em top}) DMD mode with dominant wavenumbers $k=1$ and $n=6$ (red squares in figure~\ref{fig:sigmas}) ({\em middle}) DMD modes axially-filtered at $k=1$ ({\em bottom}) resolvent mode associated with $k=1$ and $n=6$ at the same frequency. (\textit{a}) reference: smooth pipe (\textit{b}) case I: large drag-decrease $(A^+,k_c)=(10,10)$, (\textit{c}) case II: small drag-decrease $(A^+,k_c)=(0.7,10)$, (\textit{d}) case III: drag-increase $(A^+,k_c)=(2,2)$. } \end{center} \end{figure} We focus on the effect of transpiration on large scale motions. Hence the broad parameter space $(k,n,\omega)$ is reduced by taking into consideration findings from the literature. As discussed in detail by \cite{sharma2013coherent}, VLSMs in pipe flows can be represented with resolvent modes of length scales $(k,n)=(1,6)$ and with a convective velocity $c=2/3$ of the centerline streamwise velocity.
This representation is based on the work of \citet{monty2007large} and \cite{bailey2010experimental}, who experimentally investigated the spanwise length scale associated with the VLSMs and found it to be of the order of the outer length scale, corresponding to $n=6$. Hence we consider only the azimuthal wavenumber $n=6$. Reference values of the amplification are provided as contours in figure~\ref{fig:sigmas}(\textit{a}), which shows the distribution of resolvent amplification $\log_{10}(\sigma_{k,6,\omega,1})$ for the smooth pipe over a continuous set of wavenumbers and frequencies $(k,\omega)$ at $n=6$. We observe a narrow band of high amplification caused by a critical-layer mechanism \citep{McKeonSharma2010}. This critical-layer mechanism can be described by examining the resolvent operator. The diagonal terms of the resolvent matrix in (\ref{discrtH}) take the form \begin{equation} h_{ii} = \left[ \ci (k u_0 - \omega) - \Rey^{-1}D \right]^{-1} \, , \end{equation} thus, for a given magnitude of the viscous term, there is a large amplification if the wavespeed $c=\omega/k$ matches the mean streamwise velocity, i.e.\ $c=u_0$. That means that flow structures that travel at the local mean velocity create high amplification. Furthermore, we note that straight lines passing through the origin in figure~\ref{fig:sigmas} correspond to constant wavespeed values. The wavespeed corresponding to the centerline velocity is indicated by a dashed line. Turning first to figure~\ref{fig:sigmas}(\textit{a}), the symbols represent the dominant wavenumber and the frequency corresponding to the four most energetic DMD modes at $n=6$. The energies of these four modes are similar and hence omitted.
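The critical-layer amplification can be reproduced with a toy operator: taking a diagonal term of the form $\ci(ku_0-\omega) - \Rey^{-1}D$ with a parabolic stand-in profile and a simple second-difference operator, the leading singular value of its inverse is much larger when the wavespeed $c=\omega/k$ lies within the range of $u_0$ (so that a critical layer exists) than when it does not. All numbers below are illustrative, not the paper's discretization:

```python
import numpy as np

# toy model of one diagonal block of the inverse resolvent:
# A(omega) = i (k u0 - omega) I - (1/Re) D2,  gain = sigma_max(A^{-1})
N, Re, k = 60, 1000.0, 1
r = np.linspace(0.0, 1.0, N)
u0 = 1.0 - r**2                                  # parabolic stand-in mean profile
dr = r[1] - r[0]
D2 = (np.diag(np.ones(N - 1), 1) - 2.0 * np.eye(N)
      + np.diag(np.ones(N - 1), -1)) / dr**2     # second-difference operator

def leading_gain(omega):
    A = 1j * np.diag(k * u0 - omega) - D2 / Re
    return 1.0 / np.linalg.svd(A, compute_uv=False)[-1]   # = sigma_max(A^{-1})

# c = 0.5 lies inside the range of u0, so a critical layer exists and the
# inviscid part of A nearly vanishes there; c = 2.0 has no critical layer,
# k*u0 - omega never vanishes, and the gain stays O(1)
assert leading_gain(0.5) > leading_gain(2.0)
```

The contrast grows with Reynolds number, since only viscosity limits the response at the wall-normal location where $k u_0 - \omega$ changes sign.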
As explained by \citet{gomez_pof_2014}, a sparsity is observed in energy as a consequence of the critical-layer mechanism in a finite-length periodic domain; only structures with an integer axial wavenumber, $k_i=1,2,\ldots$, can exist in the flow because the domain is periodic with length $L=4\pi R$. This fact creates a corresponding sparsity in frequency. For each integer axial wavenumber $k_i$ there is a frequency $\omega_i$ for which the critical-layer mechanism occurs, i.e.\ $\omega_i=k_i c=k_i u_0(r_c)$, with $r_c$ being the wall-normal location of the critical layer. This energy sparsity behaviour is clearly observed in the reference case, as shown in figure~\ref{fig:sigmas}(\textit{a}), in which the peak frequencies are approximately harmonics. Thus they are approximately aligned along a constant-wavespeed line. We also observe that, for a given $k$, the frequencies corresponding to the peaks of energy differ from the most amplified frequency. As observed by \cite{gomez_iti_2014}, this is related to the major role that the non-linear forcing $\chi_{k,n,\omega,1}$, which maintains the turbulence, plays in the resolvent decomposition (\ref{resolde}). Nevertheless, we observe that the frequencies corresponding to the most energetic modes lie in the proximity of the high-amplification band. Furthermore, the fact that there is only one narrow band of amplification indicates a unique critical layer and that each frequency corresponds to only one wavenumber. This result is confirmed through the similar features exhibited by the most energetically relevant flow structures, arising from DMD of the DNS data, and the resolvent modes associated with the same frequency and wavenumber. For instance, figure~\ref{fig:DMD}(\textit{a}) presents a comparison of the DMD and resolvent mode corresponding to $k=1$ (red square in figure~\ref{fig:sigmas}(\textit{a})).
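The frequency sparsity described above follows directly from the relation $\omega_i=k_i\,u_0(r_c)$: with only integer wavenumbers available, the energetic frequencies are approximately harmonics lying on one constant-wavespeed ray. A minimal sketch (the profile and critical-layer location are illustrative):

```python
import numpy as np

# laminar-like mean profile as a stand-in: u0(r) = 1 - r^2 (illustrative)
u0 = lambda r: 1.0 - r**2

r_c = 0.4                  # assumed wall-normal location of the critical layer
c = u0(r_c)                # wavespeed of structures travelling at the local mean

# only integer axial wavenumbers fit in the periodic domain, so the
# critical-layer frequencies are approximately harmonics: omega_i = k_i * c
k = np.arange(1, 6)
omega = k * c

# all (k_i, omega_i) pairs lie on the same constant-wavespeed ray omega/k = c
assert np.allclose(omega / k, c)
```

In the $(\omega,k)$ plane of figure~\ref{fig:sigmas} these pairs fall on a single straight line through the origin, which is why the energetic peaks appear aligned.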
We observe that both DMD and resolvent modes present a unique dominant wavenumber $k=1$ with a similar wall-normal location of the maximum/minimum velocity. The DMD analysis makes use of \twod\ DNS snapshots based on the Fourier decomposition in (\ref{eq:FourierDecomp}) and so it permits multiple axial wavenumbers. Hence, the DMD modes are not constrained to one unique dominant axial wavenumber. This does not apply to the resolvent modes, which only admit one wavenumber $k$ by construction of the model. This uniqueness of a dominant streamwise wavenumber in the most energetic flow structures of a smooth pipe flow has also been observed in the work of \citet{gomez_pof_2014}. Additionally, figure~\ref{fig:DMD}(\textit{a}) presents DMD modes axially-filtered at $k=1$ to ease comparisons with the resolvent modes. We consider it most appropriate to show the axial velocity because it is the energetically dominant velocity component in these flow structures. Additionally, the flow structures corresponding to positive and negative azimuthal wavenumbers $n=\pm 6$ have been summed; a single azimuthal wavenumber necessarily corresponds to a helical shape. Figure \ref{fig:sigmas}(\textit{b}) represents the amplification results of case I, corresponding to large drag-decrease with $(A^+,k_c)=(10,10)$. The effect of transpiration on the flow dynamics is significant; two constant-wavespeed rays of high amplification may be observed. One of the wavespeeds is similar to that of the critical layer in the reference case, while the second is much faster, almost coincident with the centerline velocity. Also the amplification in the area between these two constant-wavespeed lines is increased with respect to the reference case. Hence multiple axial wavenumbers $k$ could be amplified at each frequency. This is confirmed in figure~\ref{fig:DMD}(\textit{b}), which shows that the DMD mode features waviness corresponding to multiple wavenumbers.
Although the dominant axial wavenumber $k=1$ can be visually identified, other wavenumbers corresponding to interactions of this $k=1$ with the transpiration wavenumber $k_c=10$ are also observed. The waviness in figure~\ref{fig:DMD}(\textit{b}) is steady; it does not travel with the $k=1$ flow structure. This has been confirmed via animations of the flow structures and was expected as a result of the steady transpiration. As mentioned before, the resolvent modes arise from a \oned\ model based on a $(k,n,\omega)$ Fourier decomposition, hence they only contain one axial wavenumber $k$. As such, a single resolvent mode cannot directly replicate the waviness. However, it provides a description of the dominant-wavenumber flow structure. Nevertheless, the axially-filtered DMD mode in figure~\ref{fig:DMD}(\textit{b}) presents a similar structure to the resolvent mode. Similarly to the reference case, the wavespeeds based on the dominant wavenumbers of the DMD modes are aligned along a constant-wavespeed line near the high-amplification regions, as seen in figure~\ref{fig:sigmas}(\textit{b}). Figure \ref{fig:sigmas}(\textit{c}) shows that the small transpiration $(A^+,k_c)=(0.7,10)$ does not have a significant influence on the flow dynamics. The main difference with respect to the reference case is that the amplification band is slightly broader in this case. Consistent with this result, the DMD mode in figure~\ref{fig:DMD}(\textit{c}) shows a slight waviness corresponding to other wavenumbers. Similarly to the previous cases, the resolvent mode replicates the dominant axial wavenumber flow structure. The amplification and energy results corresponding to the drag-increase case $(A^+,k_c)=(2,2)$ are shown in figure~\ref{fig:sigmas}(\textit{d}).
We observe that, in agreement with the decrease of bulk velocity, the transpiration slows down the flow dynamics; the wavespeed corresponding to the critical layer is slower than in the previous cases, i.e., a steeper dashed line in figure~\ref{fig:sigmas}(\textit{d}). Additionally, the critical layer is broad in contrast to the reference case, indicating that multiple axial wavenumbers could be excited. As such, the DMD mode in figure~\ref{fig:DMD}(\textit{d}) shows a dominant axial wavenumber $k=1$ which interacts with others to produce two steady localized areas of fluctuating velocity. These clusters are located slightly downstream of the two blowing sections, which correspond to the high-velocity areas in figure~\ref{fig:means}(\textit{d}). As in previous cases, the resolvent mode at $k=1$ captures the same dynamics as the filtered DMD mode. \section{Discussion and conclusions} \label{sec:end} The main features of low- and high-amplitude wall transpiration applied to pipe flow have been investigated by means of direct numerical simulation at Reynolds number $Re_\tau=314$. Turbulence statistics have been collected during parameter sweeps of different transpiration amplitudes and wavenumbers. The effect of transpiration is assessed in terms of changes in the bulk axial flow. We have shown that for low amplitude transpiration the mean streamwise velocity profile follows a velocity defect law, so the outer flow is unaffected by transpiration. This indicates that the flow still obeys Townsend's similarity hypothesis \citep{Townsend1976}, as is usually observed for rough walls. Hence, low amplitude transpiration has a similar effect as roughness or corrugation in a pipe. On the other hand, high amplitude transpiration has a dramatic effect on the outer layer of the velocity profile and the overlap region is replaced by a large increase in streamwise velocity.
We have observed that a transpiration configuration with a small transpiration wavenumber leads to long regions of suction in which the streamwise mean velocity is significantly reduced and the fluctuating velocity can be suppressed. In contrast, a large transpiration wavenumber can speed up the outer layer of the streamwise mean profile with respect to the uncontrolled pipe flow, even at small amplitudes. These trends in amplitude and wavenumber have permitted the identification of transpiration configurations that lead to a significant drag increase or decrease. For instance, we have shown that a wall transpiration that combines a large amplitude with a large wavenumber creates a large increase in flow rate. A comparison with the channel flow data of \citet{quadrio2007effect} revealed that application of low amplitude transpiration to the pipe flow leads to similar quantitative results. The obtained turbulence statistics showed that the changes in Reynolds stress induced by the transpiration are not sufficient to explain the overall change in the mean profile. An analysis of the streamwise momentum equation revealed three different physical mechanisms that act in the flow: modification of Reynolds shear stress, steady streaming and generation of non-zero mean streamwise gradients. Additionally, a triple decomposition of the velocity based on a mean profile, a \twod\ deviation from the mean profile and a fluctuating velocity has been employed to examine the streamwise momentum equation. This decomposition showed that the steady streaming term can be interpreted as a coherent Reynolds stress generated by the deviation velocity. The contribution of this coherent Reynolds stress to the momentum balance is important close to the wall and it affects the viscous sublayer, which is no longer linear under the influence of transpiration. The behaviour of these terms has been examined in selected transpiration cases of practical interest in terms of drag modification.
For all cases considered, the steady streaming term opposes the flow rate, while the changes in Reynolds shear stress are always positive. This concurs with the numerical simulations of \citet{quadrio2007effect,hoepffner2009pumping} and the perturbation analysis of \citet{woodcock2012induced}. Additionally, we have observed that the contribution of non-zero mean streamwise gradients is significant. This contribution can be negative for small transpiration wavenumbers; the turbulent fluctuations are suppressed in the large areas of suction, favoring non-homogeneity effects in the axial direction. A description of the change in the flow dynamics induced by the transpiration has been obtained via the resolvent analysis methodology introduced by \citet{McKeonSharma2010}. This framework has been extended to deal with pipe flows with an axially-invariant cross-section but with mean spatial periodicity induced by changes in boundary conditions. The extension involves a triple decomposition based on mean, deviation and fluctuating velocities. This new formulation opens up a new avenue for modifying turbulence using only the mean profile and it could be applied to investigate the flow dynamics of pulsatile flows or changes induced by dynamic roughness. In the present investigation, this input--output analysis showed that the critical-layer mechanism dominates the behaviour of the fluctuating velocity in pipe flow under transpiration. However, axially periodic transpiration actuation acts to delocalize the critical layer by distorting the mean flow, so that multiple wavenumbers can be excited. This produces waviness of the flow structures. The resolvent results in this case are useful as a tool to interpret the dynamics but are less directly useful to predict the effects of transpiration, since transpiration feeds directly into altering the mean flow, which itself is required as an input to the resolvent analysis.
This limitation partly arises owing to the use of steady actuation, since low-amplitude time-varying actuation directly forces inputs to the resolvent rather than altering its structure (see figure~\ref{fig:15D}). The critical-layer mechanism concentrates the response to actuation at the wall-normal location of the critical layer associated with the wavespeed calculated from the frequency and wavenumber of the actuation. This leads us to believe that, within this framework, dynamic actuation may be more useful for directly targeting specific modes in localised regions of the flow. DMD of the DNS data confirmed that the transpiration mainly provides a waviness to the leading DMD modes. This corrugation of the flow structures is steady and corresponds to interactions of the critical-layer induced wavenumber with the transpiration wavenumber. As such, this waviness is responsible for generating steady streaming and non-zero mean streamwise gradients, which in turn modify the streamwise momentum balance, hence enhancing or decreasing the drag. Finally, a performance analysis indicated that all the transpiration configurations considered are energetically inefficient. Even in the most favorable case, the cost of applying transpiration marginally exceeds the benefit obtained from the induced drag reduction. Nevertheless, this is an open-loop active flow control system. A passive roughness-based flow control system able to mimic the effect of transpiration would be of high practical interest. Hence, we remark that experience gained through this investigation serves to extend this methodology towards manipulation of flow structures at higher Reynolds numbers; this is the subject of an ongoing investigation.
\section*{Acknowledgments} The authors acknowledge financial support from the Australian Research Council through the ARC Discovery Project DP130103103, from Australia's National Computational Infrastructure via Merit Allocation Scheme Grant d77, and from the U.S. Office of Naval Research, grant \#N000141310739 (BJM). \section*{Appendix A. Triadic interaction induced by the deviation velocity} Without loss of generality, we write the deviation velocity as a single Fourier mode with axial wavenumber $l$. Then the triple decomposition reads: \begin{equation} \hat{\bm u}(x,r,\theta,t)=\underbrace{\bar{\bm u}(r)}_{\mbox{A}} + \underbrace{{\bm u}_l^\prime(r) e^{\mathrm{i}lx}}_{\mbox{B}} + \overbrace{\sum_{(k,n,\omega) \neq (l,0,0)} {\bm u}_{k,n,\omega}(r) e^{\mathrm{i}(kx+n\theta-\omega t)}}^{\mbox{C}} + \mathrm{c.c.} \end{equation} As an example, we substitute this decomposition into the nonlinear terms $\hat{u} \partial_x \hat{u}$ \begin{equation} \hat{u} \partial_x \hat{u}=A \partial_x A +A \partial_x B + A \partial_x C + B \partial_x A + B \partial_x B + B \partial_x C + C \partial_x A + C \partial_x B + C \partial_x C \, , \end{equation} since $\partial_x A = 0$, we obtain \begin{equation} \hat{u} \partial_x \hat{u}= A \partial_x B + A \partial_x C + B \partial_x B + B \partial_x C + C \partial_x B + C \partial_x C \, . \end{equation} Next, we study which terms are orthogonal to the complex exponential functions corresponding to $(k,n,\omega) \neq (0,0,0)$ and $(k,n,\omega) \neq (l,0,0)$. Thus \begin{equation} \hat{u} \partial_x \hat{u}= \cancel{A \partial_x B} + \underbrace{A \partial_x C}_{\mbox{$k \bar{u} u_{k,n,\omega}$}} + \cancel{B \partial_x B} + \overbrace{B \partial_x C}^{\mbox{$(k\pm l){u^\prime_l} u_{k\pm l,n,\omega}$}} + \underbrace{C \partial_x B}_{\mbox{$\pm l {u^\prime_l} u_{k \pm l,n,\omega}$}} + \overbrace{C \partial_x C}^{\mbox{$f_{k,n,\omega}$}} \, . 
\end{equation} As a consequence of the triadic interaction between the spatial Fourier mode of the deviation velocity and the fluctuating velocity, there is a coupling of the fluctuating velocity at $(k,n,\omega)$ with that at $(k \pm l,n,\omega)$. This interaction generates the new terms $B \partial_x C$ and $C \partial_x B$. These two terms are represented by the coupling operator $\mathcal{C}_{k,n,\omega}$ in the resolvent formulation (\ref{coupling}). Note that the term $A \partial_x B$ is included in the deviation equation $(k,n,\omega) = (l,0,0)$ while the term $B \partial_x B$ contributes to the mean flow equation $(k,n,\omega) = (0,0,0)$ as coherent Reynolds stress. \bibliographystyle{jfm}
\section{Introduction} Let $R$ be a finite ring equipped with a weight $w$. Two linear codes $C, D \le {_RR^n}$ are \emph{isometrically equivalent} if there is an isometry between them, i.e., an $R$-linear bijection $\varphi: C \longrightarrow D$ that satisfies $w(\varphi(c))=w(c)$ for all $c\in C$. We say that $\varphi$ \emph{preserves} the weight $w$. MacWilliams in her doctoral dissertation \cite{macw63} and later Bogart, Goldberg, and Gordon~\cite{bogagoldgord78} proved that, in the case where $R$ is a finite field and $w$ is the Hamming weight, every isometry is the restriction of a {\em monomial transformation} $\Phi$ of the ambient space $_RR^n$. A monomial transformation of $_RR^n$ is simply a left linear mapping $\Phi: R^n\longrightarrow R^n$ the matrix representation of which is a product of a permutation matrix and an invertible diagonal matrix. Said another way, every Hamming isometry over a finite field extends to a monomial transformation. This result is often called the \emph{MacWilliams Extension Theorem} or the \emph{MacWilliams Equivalence Theorem}. With increased interest in linear codes over finite rings there arose the natural question: could the Extension Theorem be proved in the context of ring-linear coding theory? This question appeared complicated, as two different weights were pertinent: the traditional Hamming weight $w_{\rm H}$ and also a new weight $w_{\rm hom}$ called the \emph{homogeneous} weight by its discoverers Constantinescu and Heise~\cite{consheis97}. In~\cite{wood99} Wood proved the MacWilliams Extension Theorem for all linear codes over finite Frobenius rings equipped with the Hamming weight. In the commutative case he showed in the same paper that the Frobenius property was not only sufficient but also necessary. In the non-commutative case, the necessity of the Frobenius property was proved in \cite{wood08a}. 
Inspired by the paper of Constantinescu, Heise, and Honold~\cite{consheishono96} which used combinatorial methods to prove the Extension Theorem for homogeneous weights on $\mathbb Z_m$, Greferath and Schmidt~\cite{grefschm00} showed that the Extension Theorem is true for linear codes over finite Frobenius rings when using the homogeneous weight. Moreover, they showed that for all finite rings every Hamming isometry between two linear codes is a homogeneous isometry and vice versa. The situation can be viewed as follows: for $R$ a finite ring, and either the Hamming weight or the homogeneous weight, the Extension Theorem holds for all linear codes in $R^n$ if and only if the ring is Frobenius. This is a special case of more general results by Greferath, Nechaev, and Wisbauer~\cite{grefnechwisb04} who proved that if the codes are submodules of a quasi-Frobenius bi-module $_RA_R$ over any finite ring $R$, then the Extension Theorem holds for the Hamming and homogeneous weights. The converse of this was proved by Wood in \cite{wood09}. \medskip Having understood all requirements on the algebraic side of the problem, we now focus on the metrical aspect. This paper aims to further develop a characterization of all weights on a finite (Frobenius) ring, for which the corresponding isometries satisfy the Extension Theorem. In our discussion we will assume that the weights in question are bi-invariant, which means that $w(ux) = w(x) =w(xu)$ for all $x\in R$ and $u\in R^\times$. Our main results do not apply to weights with smaller symmetry groups such as the Lee or Euclidean weight (on $R=\ensuremath{\field{Z}}_m$, except for $m\in\{2,3,4,6\}$), despite their importance for ring-linear coding theory. The goal of this paper is to give a necessary and sufficient condition that a bi-invariant weight $w$ must satisfy in order for the Extension Theorem to hold for isometries preserving $w$. 
We are not able to characterize all such weights when the underlying ring is an arbitrary Frobenius ring, but we do achieve a complete result for \emph{principal ideal rings}. These are rings in which each left or right ideal is principal, and they form a large subclass of the finite Frobenius rings. The present work is a continuation and generalization of earlier work on this topic \cite{wood97, wood99a, grefhono05, grefhono06, wood09, grefmcfazumb13}. As in \cite{grefhono06, grefmcfazumb13} the M\"obius function on the partially ordered set of (principal, right) ideals is crucial for the statement and proof of our main characterization result; however, in contrast to these works we do not need the values of the M\"obius function explicitly, but use its defining properties instead to achieve a more general result. Our restriction to principal ideal rings stems from our method of proof, which requires the annihilator of a principal ideal to be principal. The main result was proved for the case of finite chain rings in \cite[Theorem~3.2]{grefhono05} (and in a more general form in \cite[Theorem~16]{wood97}), in the case $\ensuremath{\field{Z}}_m$ in \cite[Theorem~8]{grefhono06}, for direct products of finite chain rings in \cite[Theorem~22]{grefmcfazumb13}, and for matrix rings over finite fields in \cite[Theorem~9.5]{wood09} (see Example~\ref{ex:examples} below). The main result gives a concrete manifestation of \cite[Proposition~12]{wood97} and \cite[Theorem~3.1]{wood99a}. Further to \cite{grefmcfazumb13} we prove that our condition on the weight is not only sufficient, but also necessary for the Extension Theorem, using an argument similar to that in \cite{grefhono06, wood08a}. \medskip Here is a short summary of the contents of the paper. In Section~\ref{sec:prelims} we review the terminology of Frobenius rings, M\"obius functions, and orthogonality matrices needed for the statements and proofs of our main results. 
In addition, we prove a result (Corollary~\ref{cor_smult}) that says that a right-invariant weight $w$ on $R$ satisfies the Extension Property if the Hamming weight $w_{\rm H}$ is a correlation multiple of $w$. In Section~\ref{sec:orthog-matrices} we show that the Extension Property holds for a bi-invariant weight if and only if its orthogonality matrix is invertible. The main results are stated in Section~\ref{sec:bi-inv-wts}. By an appropriate unimodular change of basis, the orthogonality matrix can be put into triangular form, with a simple expression for the diagonal entries (Theorem~\ref{thm:WQ}). The Main Result (Theorem~\ref{maintheorem}) then says that the Extension Property holds if and only if all the diagonal entries of the orthogonality matrix are nonzero. A proof of Theorem~\ref{thm:WQ} is given in Section~\ref{sec:proof}. \medskip This paper is written in memory of our friend, teacher, and colleague Werner Heise who, sadly, passed away in February 2013 after a long illness. Werner has been very influential in ring-linear coding theory through his discovery of the homogeneous weight on ${\mathbb Z}_m$ (``Heise weight'') and subsequent contributions. \section{Notation and Background}% \label{sec:prelims} In all that follows, rings $R$ will be finite, associative and possess an identity $1$. The group of invertible elements (units) will be denoted by $R^\times$ or $\ensuremath{U}$. Any module $_RM$ will be unital, meaning $1m=m$ for all $m\in M$. \subsection*{Frobenius Rings} We describe properties of Frobenius rings needed in this paper, as in \cite{honold01}. The character group of the additive group of a ring $R$ is defined as $\widehat{R}:={\rm Hom}_{\mathbb Z}(R,{\mathbb C}^\times)$. This group has the structure of an $R,R$-bimodule by defining $\chi^r(x):=\chi(rx)$ and $^r\chi(x):=\chi(xr)$ for all $r,x\in R$, and for all $\chi\in \widehat{R}$. The \emph{left socle} ${\rm soc}(_RR)$ is defined as the sum of all minimal left ideals of $R$. 
It is a two-sided ideal. A similar definition leads to the \emph{right socle} ${\rm soc}(R_R)$, which is also two-sided but need not coincide with its left counterpart. A finite ring $R$ is \emph{Frobenius} if one of the following four equivalent statements holds: \begin{itemize}\itemsep=1mm \item $_RR \cong {_R\widehat{R}}$. \item $R_R \cong {\widehat{R}_R}$. \item ${\rm soc}(_RR)$ is left principal. \item ${\rm soc}(R_R)$ is right principal. \end{itemize} For a finite Frobenius ring the left and right socles coincide. Crucial for later use is the fact that finite Frobenius rings are quasi-Frobenius and hence possess a perfect duality. This means the following: Let $L(_RR)$ denote the lattice of all left ideals of $R$, and let $L(R_R)$ denote the lattice of all right ideals of $R$. There is a mapping $\perp: L(_RR) \longrightarrow L(R_R),\; I \mapsto I^\perp$, where $I^\perp:= \{x\in R \mid Ix=0\}$ is the right annihilator of $I$ in $R$. This mapping is an order anti-isomorphism between the two lattices. The inverse mapping associates to every right ideal its left annihilator. \subsection*{Principal Ideal Rings} A ring $R$ is \emph{left principal} if every left ideal is left principal; similarly, a ring is \emph{right principal} if every right ideal is right principal. If a ring is both left principal and right principal, it is a {\em principal ideal ring}. Nechaev in \cite{nechaev73} proved that ``a finite ring with identity in which every two-sided ideal is left principal is a principal ideal ring.'' Hence every finite left principal ring is a principal ideal ring. Further, as argued in \cite{nechaev73}, the finite principal ideal rings are precisely the finite direct sums of matrix rings over finite chain rings. They form a subclass of the class of finite Frobenius rings (since, for example, their one-sided socles are principal).
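The perfect duality is easy to watch in a commutative example. The following sketch (our own code; for $R=\mathbb{Z}_{12}$ the two lattices coincide) computes the annihilator of every ideal of $\mathbb{Z}_{12}$ and checks that $I\mapsto I^\perp$ reverses inclusion, with $(d\mathbb{Z}_{12})^\perp=(12/d)\mathbb{Z}_{12}$.

```python
# Sketch: the annihilator map I -> I^perp on the ideal lattice of Z_12,
# illustrating the order anti-isomorphism for the commutative Frobenius
# ring Z_m. All names are our own.
M = 12
divisors = [d for d in range(1, M + 1) if M % d == 0]
# dZ_12 as a set of ring elements, for each divisor d of 12:
ideals = {d: frozenset((d * k) % M for k in range(M)) for d in divisors}

def annihilator(I):
    """I^perp = {x in Z_12 : a*x = 0 for all a in I}."""
    return frozenset(x for x in range(M) if all((a * x) % M == 0 for a in I))

# (dZ_12)^perp = (12/d)Z_12 ...
for d in divisors:
    assert annihilator(ideals[d]) == ideals[M // d]

# ... and the map reverses inclusion: I <= J implies J^perp <= I^perp.
for d1 in divisors:
    for d2 in divisors:
        if ideals[d1] <= ideals[d2]:
            assert annihilator(ideals[d2]) <= annihilator(ideals[d1])
```

For instance, $(2\mathbb{Z}_{12})^\perp = 6\mathbb{Z}_{12}$ and $(\mathbb{Z}_{12})^\perp = \{0\}$, so larger ideals have smaller annihilators, as the anti-isomorphism requires.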
\subsection*{M\"obius Function} The reader who is interested in a more detailed survey of the following is referred to \cite[Chapter~IV]{aigner}, \cite{rota64}, or \cite[Chapter~3.6]{stanley}. For a finite partially-ordered set (poset) $P$, we have the incidence algebra \[ {\mathbb A}(P) \;:=\; \{\,f: P\times P \longrightarrow {\mathbb Q} \mid\, x \not\le y \;\; \mbox{implies} \;\; f(x,y)=0 \,\} \:. \] Addition and scalar multiplication in ${\mathbb A}( P)$ are defined point-wise; multiplication is convolution: \[ (f*g)(a,b) = \sum_{a \le c \le b} f(a,c) \, g(c,b) \:. \] The invertible elements are exactly the functions $f\in {\mathbb A}(P)$ satisfying $f(x,x) \ne 0$ for all $x\in P$. In particular, the characteristic function of the partial order of $P$ given by \[ \zeta: P\times P \longrightarrow {\mathbb Q} \:, \quad (x,y) \mapsto \left\{\begin{array}{lcl} 1 & : & x\le y\\ 0 & : & \mbox{otherwise} \end{array}\right. \] is an invertible element of ${\mathbb A}(P)$. Its inverse is the {\em M\"obius function\/} $\mu: P\times P \longrightarrow {\mathbb Q}$ implicitly defined by $\mu(x,x) = 1$ and \[ \sum_{x\le t \le y} \mu(x,t) \;=\; 0 \] if $x<y$, and $\mu(x,y) = 0$ if $x \not\le y$. \subsection*{Weights and Code Isometries} Let $R$ be any finite ring. By a \emph{weight} $w$ we mean any $\ensuremath{\field{Q}}$-valued function $w: R \longrightarrow {\mathbb Q}$ on $R$, without presuming any particular properties. As usual we extend $w$ additively to a weight on $R^n$ by setting \[ w: R^n \longrightarrow {\mathbb Q} \:,\quad x \mapsto \sum_{i=1}^n w(x_i) \:. \] The \emph{left} and \emph{right symmetry groups} of $w$ are defined by \[ G_{\mathrm{lt}}(w) := \{ u \in U: w(ux) = w(x), x \in R\} \:, \quad G_{\mathrm{rt}}(w) := \{ v \in U: w(xv) = w(x), x \in R\} \:. \] A weight $w$ is called \emph{left} (resp.\ \emph{right}) \emph{invariant} if $G_{\mathrm{lt}}(w) = U$ (resp.\ $G_{\mathrm{rt}}(w) = U$). 
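The recursion defining the M\"obius function above is straightforward to implement. As a sketch (our own code, not from the paper), we compute $\mu$ on the ideal lattice of $\mathbb{Z}_{12}$, where the ideals $d\mathbb{Z}_{12}$ are indexed by the divisors $d$ of $12$ and $d\mathbb{Z}_{12}\le e\mathbb{Z}_{12}$ exactly when $e\mid d$; the values $\mu(0, d\mathbb{Z}_{12})$ agree with the number-theoretic M\"obius function of $12/d$, a fact used later for $\mathbb{Z}_m$.

```python
from functools import lru_cache

# Sketch: the Moebius function of the ideal lattice of Z_12, computed
# directly from its defining recursion mu(x,x) = 1 and
# sum over x <= t <= y of mu(x,t) = 0 for x < y. Names are our own.
M = 12
divisors = [d for d in range(1, M + 1) if M % d == 0]   # [1,2,3,4,6,12]

def leq(d, e):
    """dZ_12 <= eZ_12 iff e divides d."""
    return d % e == 0

@lru_cache(maxsize=None)
def mu(d, e):
    if d == e:
        return 1
    if not leq(d, e):
        return 0
    # mu(d, e) = - sum of mu(d, t) over intermediate ideals d <= t < e
    return -sum(mu(d, t) for t in divisors
                if leq(d, t) and leq(t, e) and t != e)

# The zero ideal is generated by 12; mu(0, dZ_12) equals the
# number-theoretic Moebius function of 12/d:
#   d  = 1, 2, 3,  4,  6, 12  ->  mu(12/d) = 0, 1, 0, -1, -1, 1
assert [mu(12, d) for d in divisors] == [0, 1, 0, -1, -1, 1]
```

Note that $\mu(0, d\mathbb{Z}_{12})$ vanishes unless $d\mathbb{Z}_{12}$ lies in the socle $2\mathbb{Z}_{12}\cap 3\mathbb{Z}_{12}+\cdots$, i.e., unless $12/d$ is square-free, mirroring the general fact invoked after the Main Result.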
A (left) \emph{linear code} of length $n$ over $R$ is a submodule $C$ of $ {}_{R}R^{n}$. A {\em $w$-isometry\/} is a linear map $\varphi : C \longrightarrow {}_{R}R^{n}$ with $w(\varphi(x)) = w(x)$ for all $x \in C$, i.e., a mapping that preserves the weight $w$. A \emph{monomial transformation} is a bijective (left) $R$-linear mapping $\Phi : R^{n} \longrightarrow R^{n}$ such that there is a permutation $\pi\in S_n$ and units $u_1, \ldots, u_n\in \ensuremath{U}$ so that \[ \Phi(x_1, \ldots, x_n) \;=\; ( x_{\pi(1)} u_1 , \dots , x_{\pi(n)} u_n ) \] for every $(x_1 , \dots , x_n ) \in R^{n}$. In other words, the matrix that represents $\Phi$ with respect to the standard basis of $_RR^n$ decomposes as a product of a permutation matrix and an invertible diagonal matrix. A \emph{$G_{\mathrm{rt}}(w)$-monomial transformation} is one where the units $u_i$ belong to the right symmetry group $G_{\mathrm{rt}}(w)$. A $G_{\mathrm{rt}}(w)$-monomial transformation is a $w$-isometry of $R^n$. We say that a finite ring $R$ and a weight $w$ on $R$ satisfy the \emph{Extension Property} if the following holds: For every positive length $n$ and for every linear code $C\le {_RR^n}$, every injective $w$-isometry $\varphi : C \longrightarrow {_RR}^{n}$ is the restriction of a $G_{\mathrm{rt}}(w)$-monomial transformation of $_RR^{n}$. That is, every injective $w$-isometry $\varphi$ extends to a monomial transformation that is itself a $w$-isometry of $R^n$. \medskip Let $w: R \longrightarrow {\mathbb Q}$ be a weight and let $f: R \longrightarrow {\mathbb Q}$ be any function. We define a new weight $wf$ as \[ wf: R \longrightarrow {\mathbb Q} \:, \quad x \mapsto \sum_{r\in R} w(rx)\,f(r) \:. 
\] By the operation of \emph{right correlation} $(w,f)\mapsto wf$, the vector space $V := \ensuremath{\field{Q}}^R$ of all weights on $R$ becomes a right module $V_\ensuremath{A}$ over $\ensuremath{A} = \ensuremath{\field{Q}}[(R,\cdot)]$, the rational semigroup algebra of the multiplicative semigroup $(R,\cdot)$ of the ring (see~\cite{grefmcfazumb13}). For $r\in R$ denote by $e_r$ the weight where $e_r(r) = 1$ and $e_r(s) = 0$ for $s\ne r$. Then $we_r$ is simply given by $(we_r)(x) = w(rx)$. Denote the natural additive extension of $wf$ to $R^n$ by $wf$ also. \begin{lemma}\label{lem_smult} Let $C \le {_RR^n}$ be a linear code and let $\varphi: C \longrightarrow R^n$ be a $w$-isometry. Then $\varphi$ is also a $wf$-isometry for any function $f: R \longrightarrow {\mathbb Q}$. \end{lemma} \begin{proof} For all $x\in C$ we compute \begin{align*} (wf) (\varphi(x)) & \;=\; \sum_{r\in R} w(r\varphi(x)) \, f(r) \;=\; \sum_{r\in R} w(\varphi(rx)) \, f(r) \\ & \;=\; \sum_{r\in R} w(rx) \, f(r) \;=\; (wf)(x) \:. \qedhere \end{align*} \end{proof} For a weight $w$ consider the $\ensuremath{\field{Q}}$-linear map $\tilde w: \ensuremath{A} \to V$, $f\mapsto wf$. By Lemma~\ref{lem_smult}, if $\varphi$ is a $w$-isometry then $\varphi$ is a $w'$-isometry for all $w' \in\im\tilde w$. Note that $\im\tilde w = w\ensuremath{A} \le V_\ensuremath{A}$. \subsection*{Weights on Frobenius Rings} Now let $R$ be a finite Frobenius ring. We describe two approaches that ultimately lead to the same criterion for a weight $w$ to satisfy the Extension Property. \subsubsection*{Approach~1} From earlier work~\cite{wood99} we know that the Hamming weight $w_{\rm H}$ satisfies the Extension Property. Combining this fact with Lemma~\ref{lem_smult}, we immediately obtain the following result. \begin{cor}\label{cor_smult} Let $R$ be a finite Frobenius ring and let $w$ be a weight on $R$ such that $G_{\mathrm{rt}}(w) = U$ and $wf = w_{\rm H}$ for some function $f:R\to\ensuremath{\field{Q}}$.
Then $w$ satisfies the Extension Property. \end{cor} In other words, if $w$ is right-invariant and $w_{\rm H}\in\im\tilde w$ then $w$ satisfies the Extension Property. How can we make sure that $w_{\rm H}\in\im\tilde w$? One idea is to show that the $\ensuremath{\field{Q}}$-linear map $\tilde w$ is bijective: Using the natural basis $(e_r)_{r\in R}$ for $V$ and the property $(we_r)(s) = w(rs)$ it is easy to see that $\tilde w$ is described by the transpose of the matrix $(w(rs))_{r,s\in R}$. However, if the weight function $w$ is left- or right-invariant {\em or} satisfies $w(0) = 0$ then this matrix is not invertible. Therefore we work with a ``reduced'' version of the map $\tilde w$. As before, let $V := \ensuremath{\field{Q}}^R$ be the vector space of all weights on $R$, and let $V_0^\ensuremath{U}$ be the subspace of all weights $w$ satisfying $w(0) = 0$ that are right-invariant. Similarly, we define the subspace $^\ensuremath{U} V_0$ of all weights $w$ with $w(0) = 0$ that are left-invariant. The corresponding invariant subspaces of $\ensuremath{A} = \ensuremath{\field{Q}}[(R,\cdot)]$ are $\ensuremath{A}_0^\ensuremath{U}$ and $^\ensuremath{U} \ensuremath{A}_0$, where $\ensuremath{A}_0 := \ensuremath{A} / \ensuremath{\field{Q}} e_0$. If $w$ is a weight in $V_0^\ensuremath{U}$ then $wf\in V_0^\ensuremath{U}$ for {\em any} function $f:R\to\ensuremath{\field{Q}}$, i.e., $\im\tilde w \le V_0^\ensuremath{U}$. In this case we could examine the bijectivity of the $\ensuremath{\field{Q}}$-linear map $\tilde w: \ensuremath{A}_0^\ensuremath{U} \to V_0^\ensuremath{U}$ (the restriction of the above map $\tilde w$). But this map does not have a nice matrix representation; setting $e_{s\ensuremath{U}} = \sum_{r\in s\ensuremath{U}}e_r$ and letting $(e_{s\ensuremath{U}})_{s\ensuremath{U}\ne 0}$ be the natural basis for $\ensuremath{A}_0^\ensuremath{U}$ and for $V_0^U$, the entries of the matrix turn out to be sums of several values $w(rus)$. 
However, if we work with the restriction $\tilde w: {}^\ensuremath{U} \ensuremath{A}_0 \to V_0^\ensuremath{U}$ instead and if the weight $w$ is bi-invariant (i.e., both left- and right-invariant), then, with respect to the natural bases, this $\ensuremath{\field{Q}}$-linear map does have a nice matrix description, namely the orthogonality matrix. This will be explained below. If this map $\tilde w$ is invertible, then $w$ satisfies the Extension Property by Corollary~\ref{cor_smult}. Note: Since $\im\tilde w$ is a submodule of $V_\ensuremath{A}$ it follows that $w_{\rm H}\in\im\tilde w$ if and only if $\im\tilde w_{\rm H} \le \im\tilde w$. Actually, $\im\tilde w_{\rm H} = V_0^\ensuremath{U}$ (see Proposition~\ref{prop_om-reverse} below), so that $w_{\rm H}\in\im\tilde w$ if and only if $V_0^U \subseteq \im\tilde w$. This is why it is a sensible approach to investigate the surjectivity/bijectivity of the map $\tilde w$. \subsubsection*{Approach~2} The same orthogonality matrix that appears in Approach~1 also appears in \cite{wood97}. By \cite[Proposition~12]{wood97} (also, \cite[Theorem~3.1]{wood99a} and \cite[Section~9.2]{wood09}), the invertibility of the orthogonality matrix of $w$ implies that a $w$-isometry preserves the so-called \emph{symmetrized weight composition} associated with $G_{\mathrm{rt}}(w)$. Then, \cite[Theorem~10]{wood97} shows that any injective linear homomorphism that preserves the symmetrized weight composition associated with $G_{\mathrm{rt}}(w)$ extends to a $G_{\mathrm{rt}}(w)$-monomial transformation. Thus, if the orthogonality matrix is invertible, any $w$-isometry extends to a $G_{\mathrm{rt}}(w)$-monomial transformation, and hence $w$ satisfies the Extension Property. \subsection*{Orthogonality Matrices} Let $R$ be a finite Frobenius ring. There is a one-to-one correspondence between left (resp., right) principal ideals and left (resp., right) $\ensuremath{U}$-orbits. 
Each $\ensuremath{U}$-orbit is identified with the principal ideal of which its elements are the generators (\cite[Proposition~5.1]{wood99}, based on work of Bass). Define for $r, s\in R\setminus \{0\}$ the functions $\varepsilon_{R r} (x) = \size{\ensuremath{U} r}^{-1} $ if $x \in \ensuremath{U} r$, i.e., if $Rr = Rx$, and zero otherwise; similarly, let $e_{sR} (x) = e_{sU}(x) = 1$ if $xR = sR$ and zero otherwise. Then $(\varepsilon_{R r})$ and $(e_{sR})$ are bases for $^\ensuremath{U} \ensuremath{A}_0$ and $V_0^\ensuremath{U}$, as $R r$ and $sR$ vary over all left and right nonzero principal ideals of $R$, respectively. For a bi-invariant weight $w$, define the \emph{orthogonality matrix} of $w$ by $W_0 = \big(w(rs) \big){}_{Rr\ne 0,\,sR\ne 0}$. That is, the entry in the $Rr, sR$-position is the value of the weight $w$ on the product $rs \in R$. The value $w(rs)$ is well-defined, because $w$ is bi-invariant. Note that $W_0$ is square; this follows from work of Greferath \cite{gref02} that shows the equality of the number of left and right principal ideals in a finite Frobenius ring. \begin{prop}\label{prop_matrix} Suppose $w$ is bi-invariant with $w(0)=0$. Then \[ w \, \varepsilon_{R r} = \sum_{sR\ne 0} w(rs) \, e_{s R} \] for nonzero $R r$, where the sum extends over all the nonzero right principal ideals $s R$. In particular, the matrix representing the $\ensuremath{\field{Q}}$-linear map $\tilde w: {}^\ensuremath{U} \ensuremath{A}_0 \to V_0^\ensuremath{U}$, $f\mapsto wf$, with respect to the bases $(\varepsilon_{R r})$ and $(e_{sR})$, is the transpose of the matrix $W_0$. \end{prop} \begin{proof} Since $w\in V_0^\ensuremath{U}$ we have $w \, \varepsilon_{R r} \in V_0^\ensuremath{U}$, and therefore \[ w \, \varepsilon_{R r} = \sum_{sR\ne 0} (w \, \varepsilon_{R r})(s) \, e_{s R} \:. 
\] Calculating, using that $w\in {}^\ensuremath{U} V_0$, we get: \[ (w \, \varepsilon_{R r})(s) = \sum_{t \in R} w(ts) \, \varepsilon_{R r}(t) = \sum_{t \in Ur} \size{\ensuremath{U} r}^{-1} w(ts) = w(rs) \:. \qedhere \] \end{proof} In the algebraic viewpoint of \cite{grefmcfazumb13}, $V_0^{\ensuremath{U}}$ is a right module over $^\ensuremath{U} \!\ensuremath{A}_0$. Then, $W_0$ is invertible if and only if $w$ is a generator for $V_0^{\ensuremath{U}}$. If $R$ is a finite field and $w = w_{\rm H}$, the Hamming weight on $R$, then $W_0$ is exactly the orthogonality matrix considered by Bogart, Goldberg, and Gordon~\cite[Section~2]{bogagoldgord78}. More general versions of the matrix $W_0$ have been utilized in \cite{wood97, wood99a, wood09}. \begin{example} For $R={\mathbb Z}_4$ the Lee weight $w_{\rm Lee}$ assigns $0 \mapsto 0$, $1\mapsto 1$, $2\mapsto 2$ and $3\mapsto 1$. It is a bi-invariant weight function, as is the Hamming weight $w_{\rm H}$ on $R$. Based on the natural ordering of the (nonzero) principal ideals of $R$ as $2R < R$ the orthogonality matrix for $w_{\rm Lee}$ is \[ W_0^{\rm Lee} \, = \, \left[ \begin {array}{cc} 0 & 2 \\ 2 & 1 \end{array}\right], \] whereas the orthogonality matrix for $w_{\rm H}$ is given by \[ W_0^{\rm H} \, = \, \left[ \begin {array}{cc} 0 & 1\\ 1 & 1 \end{array}\right]. \] Both of these matrices are invertible over ${\mathbb Q}$ as observed in \cite{gref02}, where it was shown that the Extension Property is satisfied. \end{example} \section{Orthogonality Matrices and the Extension Theorem}% \label{sec:orthog-matrices} In the present section we will show that invertibility of the orthogonality matrix of a bi-invariant weight is necessary and sufficient for that weight to satisfy the Extension Property. We split this result into two statements. \begin{prop} Let $R$ be a finite Frobenius ring and let $w$ be a bi-invariant weight on $R$. 
If the orthogonality matrix $W_0$ of $w$ is invertible, then $w$ satisfies the Extension Property. \end{prop} \begin{proof} Approach~1: By Proposition~\ref{prop_matrix} the matrix $W_0$ describes the $\ensuremath{\field{Q}}$-linear map $\tilde w: {}^\ensuremath{U} \ensuremath{A}_0 \to V_0^\ensuremath{U}$, $f\mapsto wf$. Hence if $W_0$ is invertible the map $\tilde w$ is bijective, and in particular $w_{\rm H}\in\im\tilde w$. Thus by Corollary~\ref{cor_smult} the weight $w$ satisfies the Extension Property. Approach~2: Apply \cite[Proposition~12]{wood97} or \cite[Theorem~3.1]{wood99a}. \end{proof} We remark that in the foregoing discussion, $\ensuremath{\field{Q}}$ could be replaced throughout by any field $K$ containing $\ensuremath{\field{Q}}$, for example $K = \ensuremath{\field{C}}$. \begin{prop}\label{prop_om-reverse} Let $R$ be a finite Frobenius ring, and let $w$ be a bi-invariant rational weight on $R$ that satisfies the Extension Property. Then the orthogonality matrix $W_0$ of $w$ is invertible. \end{prop} \begin{proof} The proof mimics that of \cite[Theorem~4.1]{wood08a} and \cite[Proposition~7]{grefhono06}. Assume, for the sake of contradiction, that $W_0$ is singular. Then there exists a nonzero rational vector $v = (v_{cR})_{cR\ne 0}$ such that $W_0 v = 0$. Without loss of generality, we may assume that $v$ has integer entries. We proceed to build two linear codes $C_+, C_-$ over $R$. Each of the codes will have only one generator. The generator for $C_{\pm}$ is a vector $g_{\pm}$ with the following property: for each ideal $cR \le R_R$ with $v_{cR} > 0$ (for $g_+$), resp., $v_{cR} < 0$ (for $g_-$), the vector $g_{\pm}$ contains $\abs{v_{cR}}$ entries equal to $c$. To make these two generators annihilator-free, we append to both a trailing $1\in R$. The typical codeword in $C_{\pm}$ is hence of the form $a g_{\pm}$ for suitable $a \in R$. We compare $w(a g_+)$ and $w(a g_-)$ for every $a \in R$ by calculating the difference $D( a ) = w(a g_+) - w(a g_-)$.
By our construction of the generators $g_{\pm}$, we have \[ D( a ) \;=\; \sum_{cR\ne 0} w(ac) \, v_{cR} \;=\; (W_0 v)_{Ra} \;=\; 0 \:, \] for all $a\in R$. Thus $a g_+ \mapsto a g_-$ forms a $w$-isometry from $C_+$ to $C_-$. However, since the entries of $g_+$ and $g_-$ come from different right $\ensuremath{U}$-orbits, no monomial transformation maps $g_+$ to $g_-$; hence this $w$-isometry does not extend, contradicting the Extension Property. \end{proof} We summarize our findings in the following theorem. \begin{theorem} A rational bi-invariant weight function on a finite Frobenius ring satisfies the Extension Property if and only if its orthogonality matrix is invertible. \end{theorem} The ultimate goal is to give necessary and sufficient conditions on a bi-invariant weight $w$ on a finite Frobenius ring $R$ so that its orthogonality matrix $W_0$ is invertible. We are able to derive such a result for finite principal ideal rings. \subsection*{Extended Orthogonality Matrices} Let $R$ be a finite Frobenius ring and let $w$ be a bi-invariant weight function with $w(0) = 0$. The orthogonality matrix for the weight $w$ was defined as $W_0 = \big( w(rs) \big){}_{Rr\ne 0,\,sR\ne 0}$. Now define the {\em extended} orthogonality matrix for $w$ as $W = \big( w(rs) \big){}_{Rr,\,sR}$. In order to examine the invertibility of $W_0$ we obtain a formula for $\det W$, the determinant of the matrix $W$. (Note that $\det W$ is well-defined up to multiplication by $\pm1 $, the sign depending on the particular orderings of the rows and columns of $W$.) First we relate $\det W$ to $\det W_0$, viewing $w(0)$ as an indeterminate. \begin{prop}\label{prop_W-W0} The determinant $\det W_0$ is obtained from $\det W$ by dividing $\det W$ by $w(0)$ and then setting $w(0) = 0$. \end{prop} \begin{proof} We treat $w(0)$ as an indeterminate $w_0$. Up to a sign change in $\det(W)$, we may assume that the rows and columns of $W$ are arranged so that the first row is indexed by $R0$ and the first column is indexed by $0R$.
Then $W$ has the form \[ W = \left[ \begin{array}{c|c} w_0 & w_0 \,\cdots\, w_0 \\ \hline w_0 & \\ \vdots & W' \\ w_0 & \end{array} \right] . \] By subtracting the first row from every other row, we find that $\det W = w_0 \det(W'-w_0J)$, where $J$ is the all-one matrix. Finally the matrix $W_0$ equals the matrix $W'-w_0J$ evaluated at $w_0 = 0$, so that $\det W_0 = \det(W' - w_0J)|_{w_0 = 0}$. \end{proof} Note that the extended orthogonality matrix $W$ is not invertible for weights $w$ satisfying $w(0) = 0$. \section{Bi-invariant Weights with Invertible Orthogonality\\ Matrix on Principal Ideal Rings}% \label{sec:bi-inv-wts} Let $R$ be a finite principal ideal ring, and let $w$ be a bi-invariant weight on $R$. Assume $W$ is the extended orthogonality matrix of $w$. We are interested in the determinant of $W$ and look for a way to evaluate this determinant. We will define an invertible matrix $(Q_{cR,Rx})_{cR,\, Rx}$ with determinant $\pm 1$ and multiply $W$ by $Q$ from the right to arrive at $WQ$; then $\det(WQ) = \pm\,\det(W)$. The most significant advantage of considering $WQ$, rather than $W$, is that $WQ$ will be a lower triangular matrix for which we can easily calculate the determinant. Define for any finite ring the matrix $Q$ by \[ Q_{cR, Rx} \;:=\; \mu( (Rx)^\perp, cR ) \: , \] for $cR\le R_R$ and $Rx\le {_RR}$, where $\mu$ is the M\"obius function of the lattice $L^*$ of all right ideals of $R$. \begin{lemma}\label{lemQinvertible} For a finite principal ideal ring $R$, the matrix $Q$ is an invertible matrix with determinant $\pm1$. \end{lemma} \begin{proof} We claim that the inverse of $Q$ is given by $T_{Ra, bR} := \zeta(bR, (Ra)^\perp)$, where $\zeta$ is the indicator function of the poset $L^*$, meaning \[ \zeta(xR, yR) \;=\; \left\{\begin{array}{ccl} 1 & : & xR \le yR \:, \\ 0 & : & \mbox{otherwise} \:. \end{array}\right.\] We compute the product $TQ$, \[ (TQ)_{Ra,Rx} \;=\; \sum_{cR} { \zeta(cR, (Ra)^\perp) \, \mu ( (Rx)^\perp , cR)} \:. 
\] By the definition of $\zeta$ and the fact that $\mu((Rx)^\perp,cR) =0$ unless $(Rx)^\perp \le cR$, the expression above simplifies to \[ (TQ)_{Ra,Rx} \;= \sum_{ (Rx)^\perp \le cR \le (Ra)^\perp} {\mu ((Rx)^\perp , cR)} \:, \] which is $1$ for $(Rx)^\perp = (Ra)^\perp$ and $0$ otherwise by the definition of the M\"obius function. The matrix $T$ is upper triangular with $1$s on the main diagonal. Thus $\det T$ and hence $\det Q$ equal $\pm 1$. (The $\pm 1$ allows for different orders of rows and columns.) \end{proof} \begin{example} Let $R := \ensuremath{\field{F}}_q[x,y] / \langle x^2, y^2 \rangle$, which is a commutative local Frobenius ring. (When $q=2^k$, $R$ is isomorphic to the group algebra over $\ensuremath{\field{F}}_{2^k}$ of the Klein $4$-group.) Here, $(Rxy)^\perp = xR + yR$ is not principal and thus the above proof does not apply; in fact, the matrix $Q$ turns out to be singular in this case. On the other hand, the Frobenius ring $R$ is not a counter-example to the main result below. In fact, $\det(W_0) = \pm q \, w(xy)^{q+3}$ satisfies the formula in \eqref{eq:det-W0-factorization} below (up to a nonzero multiplicative constant), so that the main result still holds over $R$. \end{example} We are now ready to state the main theorems. The proof of the next result is contained in the final section. \begin{theorem}\label{thm:WQ} If $R$ is a finite principal ideal ring, then the matrix $WQ$ is lower triangular. The diagonal entry at position $(Ra, Ra)$ is $\sum\limits_{dR \le aR} w( d ) \, \mu(0, dR)$. \end{theorem} We conclude that the determinant of $WQ$ and hence that of $W$ is given by \[ \det(W) \;=\; \pm\, \det(WQ) \;=\; \pm\prod_{aR} \, \sum_{dR\le aR} w(d)\, \mu(0,dR) \:. 
\] Applying Proposition~\ref{prop_W-W0} we find the determinant of $W_0$ to be \begin{equation} \label{eq:det-W0-factorization} \det(W_0) \;=\; \pm\prod_{aR\ne 0} \, \sum_{0\ne dR\le aR} w(d)\, \mu(0,dR) \:, \end{equation} as in $\det(W)$ the term $aR = 0$ provides a factor of $w(0)$ which gets divided away, and in each remaining term the contribution from $dR =0R$ is $w(0)$ which is set equal to $0$. This yields our main result: a characterization of all bi-invariant weights on a principal ideal ring that satisfy the Extension Property. \begin{theorem}[Main Result]\label{maintheorem} Let $R$ be a finite principal ideal ring and let $\mu$ be the M\"obius function of the lattice $L^*$ of all right ideals of $R$. Then a bi-invariant rational weight $w$ on $R$ satisfies the Extension Property if and only if \[ \sum_{0\ne dR\le aR} w(d)\, \mu(0,dR) \;\ne\, 0\quad \mbox{for all $aR \ne 0$} \:. \] \end{theorem} The condition in Theorem~\ref{maintheorem} needs to be checked only for nonzero right ideals $aR\leq\soc(R_R)$, since we have $\mu(0,dR)=0$ if $dR\not\leq\soc(R_R)$ (see \cite[Proposition~2]{st:homippi}, for example) and since every right ideal contained in $\soc(R_R)$ is principal. As a consequence, the Extension Property of $w$ depends only on the values of $w$ on the socle of $R$. \begin{example} For a chain ring $R$, the main result simply says that a bi-invariant weight function $w$ satisfies the Extension Property if and only if it does not vanish on the socle of $R$ (compare with \cite{grefhono05} and \cite[Theorem~9.4]{wood09}). For $R = {\mathbb Z}_4$, it states that a bi-invariant weight will satisfy the Extension Property if and only if $w(2) \ne 0$. \end{example} \begin{example} Let $R := {\mathbb Z}_m$. The nonzero ideals in $\soc(\ensuremath{\field{Z}}_m)$ are of the form $a\ensuremath{\field{Z}}_m$ with $a\mid m$ and $m/a>1$ square-free. 
The M\"obius function of such an ideal is $\mu(0,a\ensuremath{\field{Z}}_m)=\mu(m/a)=(-1)^r$, where $\mu(\cdot)$ denotes the one-variable M\"obius function of elementary number theory and $r$ is the number of different prime divisors of $m/a$. According to Theorem~\ref{maintheorem}, an invariant weight $w$ on $\ensuremath{\field{Z}}_m$ has the Extension Property if and only if \[ \sum_{s\mid\frac{m}{a}} w(sa) \, \mu \Big( \frac{m}{sa} \Big) \;=\; (-1)^r \, \sum_{s\mid\frac{m}{a}} w(sa) \, \mu(s) \;\ne\; 0\] for all (positive) divisors $a$ of $m$ such that $m/a$ is square-free and $>1$. We thus recover the main theorem of \cite{grefhono06}. \end{example} \begin{example}\label{ex:examples} Let $R := {\rm Mat}_n(\ensuremath{\field{F}}_q)$, $n\geq2$, the ring of $n\times n$ matrices over the finite field $\ensuremath{\field{F}}_q$ with $q$ elements, so that $\ensuremath{U} = {\rm GL}_n(\ensuremath{\field{F}}_q)$. The ring $R$ is a finite principal ideal ring that is not a direct product of chain rings. For each matrix $A\in R$, the left $\ensuremath{U}$-orbit $\ensuremath{U} A$ can be identified with the row space of $A$, and similarly, the right $\ensuremath{U}$-orbits correspond to the column spaces. Let $w$ be a bi-invariant weight on $R$. Its value $w(A)$ depends only on the rank of the matrix $A$, and therefore we can write $w([{\rm rank}\,A]) := w(A)$. Now for $n=2$, the main result says that $w$ satisfies the Extension Property if and only if $w([1])\ne 0$ and $q\,w([2]) \ne (q+1)\,w([1])$. For $n=3$, $w$ satisfies the Extension Property if and only if $w([1])\ne 0$, $\,q\,w([2]) \ne (q+1)\,w([1])$, and $q^3\,w([3]) + (q^2+q+1)\,w([1]) \ne (q^2+q+1)\,q\,w([2])$. 
It was shown in \cite[Theorem~9.5]{wood09} that the relevant non-vanishing sums are \begin{equation}\label{eq:wmatrixterm} \sum_{i=1}^{s}{(-1)^i q^{(\upontop{i}{2})} \left[\upontop{s}{i}\right]_q w([i]) } \:, \end{equation} where $[\upontop{s}{i}]_q$ is the $q$-nomial (Cauchy binomial) defined as \[ \left[\upontop{k}{l}\right]_q \,:=\; \frac{(1 - q^k)(1-q^{k-1}) \dots (1-q^{k-l+1})} {(1-q^l)(1-q^{l-1}) \dots (1-q)} \:. \] The {\em rank metric} $w([k]) := k$ satisfies these conditions. First we state the Cauchy binomial theorem: \begin{equation*}\label{eq:cauchythm} \prod_{i=0}^{k-1}{(1 + xq^i)} \;=\; \sum_{j=0}^{k}{\left[\upontop{k}{j}\right]_q q^{(\upontop{j}{2})} x^j} \:. \end{equation*} Now we write the term in \eqref{eq:wmatrixterm} for the rank metric, changing the sign and including $i=0$ trivially in the sum. This can then be seen as the evaluation of a derivative. \[ \sum_{i=0}^{s}{i(-1)^{i-1} \ q^{(\upontop{i}{2})} \left[\upontop{s}{i}\right]_q } \;=\; \left. \frac{d}{dx}\sum_{i=0}^{s}{x^i q^{(\upontop{i}{2})} \left[\upontop{s}{i}\right]_q }\right|_{x=-1} \:. \] Applying the Cauchy binomial theorem and evaluating the derivative yields \[ \left. \frac{d }{dx} \prod_{i=0}^{s-1}{(1+xq^i)} \right|_{x=-1} \;=\; \prod_{i=1}^{s-1}{(1-q^i)} \:, \] since by the product rule every summand except the one coming from the factor $i=0$ retains the factor $1+xq^0$, which vanishes at $x=-1$. The right-hand side is nonzero for every prime power $q$, independent of $s$. Hence the rank metric satisfies the Extension Property for all $q$ and $n$. \end{example} \begin{example} More generally, let $R = {\rm Mat}_n(S)$ be a matrix ring over a finite chain ring $S$. Then $\soc(R_R) = \soc({}_RR) = {\rm Mat}_{n\times n}(\soc S) \cong {\rm Mat}_{n\times n}(\ensuremath{\field{F}}_q)$ as a (bi-)module over the residue class field $S/\rad S\cong\ensuremath{\field{F}}_q$. Hence the previous example applies and characterizes all bi-invariant weights $w\colon R\to\ensuremath{\field{Q}}$ having the Extension Property.
\end{example} \begin{example} Any finite semisimple ring is a direct product of matrix rings over finite fields and therefore a principal ideal ring. Hence, the main result also applies to this case. \end{example} \section{A Proof of Theorem~\ref{thm:WQ}}% \label{sec:proof} We perform the matrix multiplication and see that the entry of $WQ$ in position $(Ra,Rb)$ is given by the expression \[ (WQ)_{Ra,Rb} \;=\; \sum_{cR} W_{Ra,cR} \, Q_{cR,Rb} \;=\; \sum_{cR} w(ac)\, \mu((Rb)^\perp ,cR) \:. \] According to the definition of the M\"obius function, $\mu((Rb)^\perp, cR)$ can be nonzero only when $(Rb)^\perp \le cR$ (or: when $cR \in [(Rb)^\perp, R]$, using interval notation on the lattice $L^*$ of all right (necessarily principal) ideals of $R$). With this in mind we write \begin{equation}\label{main} (WQ)_{Ra,Rb} \;= \sum_{cR \in [(Rb)^\perp, R]} w(ac) \, \mu((Rb)^\perp ,cR) \:. \end{equation} \subsection*{Diagonal Entries} The diagonal terms of $WQ$ are given by \[ (WQ)_{Ra,Ra} \;= \sum_{ cR \in [(Ra)^\perp,R]} w(ac)\, \mu((Ra)^\perp , cR) \:. \] For an element $a\in R$ consider the left multiplication operator $L_a: R \longrightarrow aR,\; t\mapsto at$. The mapping $L_a$ is a (right) $R$-linear mapping with kernel $(Ra)^\perp$, and the isomorphism theorem yields an induced order isomorphism of intervals \[ \nu_a:[(Ra)^\perp , R] \longrightarrow [0,aR] \:, \quad J \mapsto aJ \:. \] It follows that if $J_1, J_2\in [(Ra)^\perp,R]$, then $\mu(J_1,J_2) = \mu ( \nu_a(J_1), \nu_a(J_2) ) = \mu (aJ_1, aJ_2)$. The diagonal term simplifies to \begin{align*} (WQ)_{Ra,Ra} &\;=\; \sum_{cR \in [(Ra)^\perp,R]} w(ac)\, \mu((Ra)^\perp , cR) \\ &\;=\; \sum_{acR \in [0,a R]} w(ac)\, \mu(0, acR) \\ &\;=\; \sum_{dR \in [0,a R]} w(d)\, \mu(0, dR) \:, \end{align*} where we have applied the above interval isomorphism with $J_1 = (Ra)^\perp$ and $J_2 = cR$, followed by the relabeling $acR=dR$. 
Finally, observe that the formula $(WQ)_{Ra,Ra} = \sum_{dR \in [0,a R]} w(d)\, \mu(0, dR)$ does not depend on the choice of generator $a$ for the left ideal $Ra$. Indeed, any other generator has the form $ua$, where $u$ is a unit of $R$. Left multiplication by $u$ induces an order isomorphism of intervals $\nu_u: [0,aR] \longrightarrow [0,uaR]$, so that $\mu(0,dR) = \mu(0,udR)$ for all $dR \in [0,aR]$. Since $w$ is left-invariant, we have $w(ud) = w(d)$, and the right side of the formula is well-defined. \subsection*{Lower Triangularity} Now let us return to the general form of the matrix $WQ$ given in \eqref{main}. We would like to prove that $WQ$ is lower triangular, i.e., that $Rb \nleq Ra$ will imply that $(WQ)_{Ra,Rb}=0$. To that end, assume \begin{equation} \label{eq:lower-tri} Rb \nleq Ra \:. \end{equation} As above, the left multiplication operator $L_a$ induces a mapping $\lambda_a : [0,R] \rightarrow [0,aR]$, which in turn induces a partition on $[0,R]$ in a natural way. We first rewrite the general expression for $(WQ)_{Ra,Rb}$ taking into account this partition. \[ (WQ)_{Ra,Rb} \;= \sum_{dR \in [0, aR]} w(d) \mathop{\sum_{cR \in [(Rb)^\perp,R] }}_{\lambda_a(cR)=dR} \mu((Rb)^\perp ,cR) \:. \] Our goal is to examine the inner sum and show that it vanishes for every $dR$ in question. In other words, we will show that \[ \mathop{\sum_{cR \in [(Rb)^\perp,R] }}_{\lambda_a(cR)=dR} \mu((Rb)^\perp ,cR) \;=\; 0 \:,\quad \mbox{for all $dR \le aR$} \:. \] We do this by induction on $dR$ in the partially ordered set $[0,aR]$. Accordingly, we assume the existence of some $dR\in [0,aR]$ which is minimal with respect to the property that \[ \mathop{\sum_{cR \in [(Rb)^\perp,R] }}_{\lambda_a(cR)=dR} \mu((Rb)^\perp ,cR) \;\ne\; 0 \:. \] Consider the right ideal $K := L_a^{-1}(dR) = \sum\limits_{acR \le dR} cR$. For this ideal we have $(Ra)^\perp \le K$, and moreover, $cR \le K$ is equivalent to $acR \le dR$. 
For this reason \[ \mathop{\sum_{cR \in [(Rb)^\perp,R] }}_{acR\le dR} \mu((Rb)^\perp ,cR) \;=\; \sum_{ cR \in [(Rb)^\perp , K]} \mu((Rb)^\perp ,cR) \:. \] By properties of $\mu$, the latter expression is nonzero if and only if $K=(Rb)^\perp$. This would however imply $(Rb)^\perp \ge (Ra)^\perp$ (because $(Ra)^\perp \le K$) and hence $Rb\le Ra$, contrary to assumption \eqref{eq:lower-tri}. Hence, we conclude \[ 0 \;= \mathop{\sum_{cR \in [(Rb)^\perp,R]}}_{acR\le dR} \mu((Rb)^\perp ,cR) \;= \mathop{\sum_{cR \in [(Rb)^\perp,R]}}_{acR=dR} \mu((Rb)^\perp ,cR) + \mathop{\sum_{cR \in [(Rb)^\perp,R]}}_{acR<dR} \mu((Rb)^\perp ,cR) \:. \] In this equation the minimality property of $dR$ implies that the last term vanishes. This finally forces \[ \mathop{\sum_{cR \in [(Rb)^\perp,R]}}_{acR=dR} \mu((Rb)^\perp ,cR) \;=\; 0 \:, \] contradicting the minimality property of $dR$. Lower triangularity follows, and this finishes the proof of Theorem~\ref{thm:WQ}.\qed Note that this proof relies heavily on the hypothesis that $R$ is a finite principal ideal ring. For a general finite Frobenius ring the proof would have to be restructured substantially. Nonetheless, we conjecture that the main result, as stated, holds over any finite Frobenius ring. \bibliographystyle{amsplain}
\section{Introduction} All graphs considered in this paper are finite and simple (undirected, loopless and without multiple edges). Let $G=(V,E)$ be a graph, let $k\in \mathbb{N}$ and set $[k]:=\{i\in \mathbb{N}:\ 1\leq i\leq k \}$. A $k$-coloring (proper $k$-coloring) of $G$ is a function $f:V\rightarrow [k]$ such that for each $1\leq i\leq k$, $f^{-1}(i)$ is an independent set. We say that $G$ is $k$-colorable whenever $G$ admits a $k$-coloring $f$; in this case, we denote $f^{-1}(i)$ by $V_{i}$ and call each $1\leq i\leq k$ a color (of $f$) and each $V_{i}$ a color class (of $f$). The minimum integer $k$ for which $G$ has a $k$-coloring is called the chromatic number of $G$ and is denoted by $\chi(G)$. Let $G$ be a graph, $f$ be a $k$-coloring of $G$ and $v$ be a vertex of $G$. The vertex $v$ is called colorful (or color-dominating, or $b$-dominating) if each color $1\leq i\leq k$ appears on the closed neighborhood of $v$ (i.e., $f(N[v])=[k]$). The $k$-coloring $f$ is said to be a fall $k$-coloring (of $G$) if each vertex of $G$ is colorful. There are graphs $G$ which have no fall $k$-coloring for any positive integer $k$. For example, $C_{5}$ (the cycle with 5 vertices) and graphs with at least one edge and one isolated vertex do not have a fall $k$-coloring for any positive integer $k$. The notation ${\rm Fall}(G)$ stands for the set of all positive integers $k$ for which $G$ has a fall $k$-coloring. Whenever ${\rm Fall}(G)\neq\emptyset$, we call $\min({\rm Fall}(G))$ and $\max({\rm Fall}(G))$ the fall chromatic number and the fall achromatic number of $G$, and denote them by $\chi_{f}(G)$ and $\psi_{f}(G)$, respectively. Every fall $k$-coloring of a graph $G$ is a $k$-coloring; hence, for every graph $G$ with ${\rm Fall}(G)\neq\emptyset$, $\chi(G)\leq\chi_{f}(G)\leq\psi_{f}(G)$. Let $G$ be a graph, $k\in \mathbb{N}$ and $f$ be a $k$-coloring of $G$.
The coloring $f$ is said to be a colorful $k$-coloring of $G$ if each color class contains at least one colorful vertex. The maximum integer $k$ for which $G$ has a colorful $k$-coloring is called the $b$-chromatic number of $G$ and is denoted by $\phi(G)$ (or $b(G)$ or $\chi_{b}(G)$). Every fall $k$-coloring of $G$ is obviously a colorful $k$-coloring of $G$ and therefore, for every graph $G$ with ${\rm Fall}(G)\neq\emptyset$, $\chi(G)\leq\chi_{f}(G)\leq\psi_{f}(G)\leq\phi(G)$. Assume that $G$ is a graph, $k\in \mathbb{N}$, $f$ is a $k$-coloring of $G$ and $v$ is a vertex of $G$. The vertex $v$ is called a Grundy vertex (with respect to $f$) if each color $1\leq i<f(v)$ appears on the neighborhood of $v$. The $k$-coloring $f$ is called a Grundy $k$-coloring (of $G$) if each color class of $G$ is nonempty and each vertex of $G$ is a Grundy vertex. The maximum integer $k$ for which $G$ has a Grundy $k$-coloring is called the Grundy chromatic number of $G$ and is denoted by $\Gamma(G)$. Also, the $k$-coloring $f$ is said to be a partial Grundy $k$-coloring (of $G$) if each color class contains at least one Grundy vertex. The maximum integer $k$ for which $G$ has a partial Grundy $k$-coloring is called the partial Grundy chromatic number of $G$ and is denoted by $\partial\Gamma(G)$. Every Grundy $k$-coloring of $G$ is a partial Grundy $k$-coloring of $G$, and every colorful $k$-coloring of $G$ is a partial Grundy $k$-coloring of $G$. Also, every fall $k$-coloring of $G$ is obviously a Grundy $k$-coloring of $G$ and therefore, for every graph $G$ with ${\rm Fall}(G)\neq\emptyset$, $\chi(G)\leq\chi_{f}(G)\leq\psi_{f}(G)\leq \left\{\begin{array}{c} \phi(G) \\ \Gamma(G) \end{array}\right.\leq \partial\Gamma(G)$. Let $G$ be a graph, $k\in \mathbb{N}$ and $f$ be a $k$-coloring of $G$. The $k$-coloring $f$ is said to be a complete $k$-coloring (of $G$) if there is an edge between any two distinct color classes.
The maximum integer $k$ for which $G$ has a complete $k$-coloring is called the achromatic number of $G$ and is denoted by $\psi(G)$. Every partial Grundy $k$-coloring of $G$ is obviously a complete $k$-coloring of $G$ and therefore, $\chi(G)\leq\chi_{f}(G)\leq\psi_{f}(G)\leq \left\{\begin{array}{c} \phi(G) \\ \Gamma(G) \end{array}\right.\leq\partial\Gamma(G)\leq\psi(G)$. The terminology fall coloring was first introduced in 2000 in \cite{dun} and has received attention recently; see \cite{MR2096633},\cite{MR2193924},\cite{dun},\cite{las}. The colorful coloring of graphs was introduced in 1999 in \cite{irv} with the terminology $b$-coloring. The concept of the Grundy number of graphs was introduced in 1979 in \cite{chr}. Also, the achromatic number of graphs was introduced in 1970 in \cite{har}. Let $n \in \mathbb{N}$ and for each $1\leq i\leq n$, let $G_{i}$ be a graph. The graph with vertex set $\bigcup_{i=1}^{n}(\{i\}\times V(G_{i}))$ and edge set $$[\bigcup_{i=1}^{n}\{\{(i,x),(i,y)\}| \{x,y\}\in E(G_{i})\}]\bigcup[\bigcup_{1\leq i<j\leq n}\{ \{(i,a),(j,b)\}|a\in V(G_{i}),b\in V(G_{j})\}]$$ is called the join graph of $G_{1},...,G_{n}$ and is denoted by $\bigvee_{i=1}^{n}G_{i}$. Cockayne and Hedetniemi proved in 1976 in \cite{coc} (though not with the terminology ``fall coloring'') that if $G$ has a fall $k$-coloring and $H$ has a fall $l$-coloring for positive integers $k$ and $l$, then $G\bigvee H$ has a fall $(k+l)$-coloring. \begin{theorem}{ Let $n\in \mathbb{N}\setminus\{1\}$ and for each $1\leq i\leq n$, let $G_{i}$ be a graph. Then, ${\rm Fall}(\bigvee_{i=1}^{n}G_{i})\neq\emptyset$ iff for each $1\leq i\leq n$, ${\rm Fall}(G_{i})\neq\emptyset$.} \end{theorem} \begin{proof}{ First suppose that ${\rm Fall}(\bigvee_{i=1}^{n}G_{i})\neq\emptyset$. Consider a fall $k$-coloring $f$ of $\bigvee_{i=1}^{n}G_{i}$.
The colors appearing on $\{i\}\times V(G_{i})$ form a fall $|f(\{i\}\times V(G_{i}))|$-coloring of $G_{i}$. (Indeed, let $S$ be the set of colors appearing on $\{i\}\times V(G_{i})$, let $\alpha,\beta\in S$ with $\alpha\neq\beta$, let $x\in \{i\}\times V(G_{i})$ with $f(x)=\alpha$, and suppose that no neighbor of $x$ in $\{i\}\times V(G_{i})$ has the color $\beta$. Since $f$ is a fall $k$-coloring of $\bigvee_{i=1}^{n}G_{i}$, there exists a vertex $y\in V(\bigvee_{i=1}^{n}G_{i})$ such that $\{x,y\}\in E(\bigvee_{i=1}^{n}G_{i})$ and $f(y)=\beta$; by assumption, $y$ lies outside $\{i\}\times V(G_{i})$. On the other hand, since $\beta\in S$, there exists a vertex $z$ in $\{i\}\times V(G_{i})$ such that $f(z)=\beta$. Since $y$ lies in another copy, $z$ and $y$ are adjacent in the join. But $f(z)=f(y)=\beta$, which contradicts the properness of $f$. Therefore, the restriction of $f$ to $\{i\}\times V(G_{i})$ is a fall $|S|$-coloring of the induced subgraph of $\bigvee_{i=1}^{n}G_{i}$ on $\{i\}\times V(G_{i})$, and hence of $G_{i}$.) Therefore, ${\rm Fall}(G_{i})\neq\emptyset$. Conversely, suppose that for each $1\leq i\leq n$, $k_{i}\in {\rm Fall}(G_{i})$. For each $1\leq i\leq n$, construct a fall $k_{i}$-coloring of the induced subgraph of $\bigvee_{i=1}^{n}G_{i}$ on $\{i\}\times V(G_{i})$ with the color set $\{(\sum_{j=1}^{i-1}k_{j})+1,(\sum_{j=1}^{i-1}k_{j})+2,...,(\sum_{j=1}^{i-1}k_{j})+k_{i}\}$. Since every vertex is moreover adjacent to all vertices of the other copies, this yields a fall $(\sum_{i=1}^{n}k_{i})$-coloring of $\bigvee_{i=1}^{n}G_{i}$ and therefore, ${\rm Fall}(\bigvee_{i=1}^{n}G_{i})\neq\emptyset$. }\end{proof} The proof of the following straightforward theorem is omitted for the sake of brevity. \begin{theorem}{ \label{6parts} Let $n\in \mathbb{N}\setminus\{1\}$ and for each $1\leq i\leq n$, let $G_{i}$ be a graph.
Then, 1) If for each $1\leq i\leq n$, ${\rm Fall}(G_{i})\neq\emptyset$, then ${\rm Fall}(\bigvee_{i=1}^{n}G_{i})=\sum_{i=1}^{n}{\rm Fall}(G_{i}):=\{a_{1}+...+a_{n}|\ a_{1}\in {\rm Fall}(G_{1}),...,\ a_{n}\in {\rm Fall}(G_{n})\}$, $\chi_{f}(\bigvee_{i=1}^{n}G_{i})=\sum_{i=1}^{n}\chi_{f}(G_{i})$ and $\psi_{f}(\bigvee_{i=1}^{n}G_{i})=\sum_{i=1}^{n}\psi_{f}(G_{i})$. 2) $\chi(\bigvee_{i=1}^{n}G_{i})=\sum_{i=1}^{n}\chi(G_{i})$. 3) $\phi(\bigvee_{i=1}^{n}G_{i})=\sum_{i=1}^{n}\phi(G_{i})$. 4) $\Gamma(\bigvee_{i=1}^{n}G_{i})=\sum_{i=1}^{n}\Gamma(G_{i})$. 5) $\partial\Gamma(\bigvee_{i=1}^{n}G_{i})=\sum_{i=1}^{n}\partial\Gamma(G_{i})$. 6) $\psi(\bigvee_{i=1}^{n}G_{i})=\sum_{i=1}^{n}\psi(G_{i})$.} \end{theorem} In \cite{dun}, Dunbar et al. asked the following questions. \\ 1*) Does there exist a graph $G$ with ${\rm Fall}(G)\neq\emptyset$ which satisfies $\chi_{f}(G)-\chi(G)\geq3$? They noticed that $\chi_{f}(C_{4}\square C_{5})=4$ and $\chi(C_{4}\square C_{5})=3$, and also that $\chi_{f}(C_{5}\square C_{5})=5$ and $\chi(C_{5}\square C_{5})=3$. \\ 2*) Can $\chi_{f}(G)-\chi(G)$ be arbitrarily large? \\ 3*) Does there exist a graph $G$ with ${\rm Fall}(G)\neq\emptyset$ which satisfies $\chi(G)<\chi_{f}(G)<\psi_{f}(G)<\phi(G)<\partial\Gamma(G)<\psi(G)$? \\ Since $\chi_{f}(C_{4}\square C_{5})=4$ and $\chi(C_{4}\square C_{5})=3$, Theorem \ref{6parts} implies that for each $n\in \mathbb{N}$, $\chi_{f}(\bigvee_{i=1}^{n}(C_{4}\square C_{5}))-\chi(\bigvee_{i=1}^{n}(C_{4}\square C_{5}))=4n-3n=n$, and this gives an affirmative answer to problems 1* and 2*. Also, Theorem \ref{6parts} and the following theorem give an affirmative answer to all three questions immediately.
\begin{theorem}{ \label{thm} For each integer $\varepsilon>0$, there exists a graph $G$ with ${\rm Fall}(G)\neq\emptyset$ for which the minimum of $\chi_{f}(G)-\chi(G)$, $\psi_{f}(G)-\chi_{f}(G)$, $(\delta(G)+1)-\psi_{f}(G)$, $\Gamma(G)-\psi_{f}(G)$, $\phi(G)-\psi_{f}(G)$, $(\Delta(G)+1)-\partial\Gamma(G)$, $\psi(G)-\partial\Gamma(G)$, $\partial\Gamma(G)-\Gamma(G)$ is greater than $\varepsilon$.} \end{theorem} \begin{proof}{ Let $\varepsilon>2$ be an arbitrary integer (smaller values of $\varepsilon$ then follow a fortiori) and proceed in the following steps. Step1) Set $G_{1}:=\bigvee_{i=1}^{\varepsilon+1}(C_{4}\square C_{5})$. As stated above, $\chi_{f}(G_{1})-\chi(G_{1})=\varepsilon+1$. \\ Step2) Set $G_{2}:=K_{\varepsilon+3,\varepsilon+3}-F$, where $F$ is an arbitrary $1$-factor. One can easily observe that $\psi_{f}(G_{2})-\chi_{f}(G_{2})=(\varepsilon+3)-2=\varepsilon+1$. \\ Step3) Set $G_{3}:=K_{\varepsilon+2,\varepsilon+2}$. Then, $(\delta(G_{3})+1)-\psi_{f}(G_{3})=((\varepsilon+2)+1)-2=\varepsilon+1$. \\ Step4) Let $P_{\varepsilon+3}$ be a path with $\varepsilon+3$ vertices. Add $\varepsilon+2$ pendant vertices to each of its vertices and denote the new graph by $G_{4}$. It is readily seen that $\phi(G_{4})-\psi_{f}(G_{4})\geq(\varepsilon+3)-2=\varepsilon+1$. \\ Step5) Let $T(1)$ be the tree with only one vertex and for each $k\geq1$, let $T(k+1)$ be the graph obtained by adding a new pendant vertex to each vertex of $T(k)$. $T(\varepsilon+3)$ is a tree whose Grundy number is $\varepsilon+3$, and $\psi_{f}(T(\varepsilon+3))\leq \delta(T(\varepsilon+3))+1\leq2$. Hence, if we set $G_{5}:=T(\varepsilon+3)$, then $\Gamma(G_{5})-\psi_{f}(G_{5})=\varepsilon+1$. \\ Step6) Let $G_{6}$ be the graph obtained by adding $i-2$ pendant vertices to each vertex $v_{i}$ ($3\leq i\leq\varepsilon+5$) of the path $v_{1}v_{2}\ldots v_{\varepsilon+5}$. Obviously, $\partial\Gamma(G_{6})\geq\varepsilon+5$ and $\Gamma(G_{6})\leq4$. So, $\partial\Gamma(G_{6})-\Gamma(G_{6})\geq\varepsilon+1$. \\ Step7) Set $G_{7}:=K_{\varepsilon+2,\varepsilon+2}$.
$(\Delta(G_{7})+1)-\partial\Gamma(G_{7})=(\varepsilon+3)-2=\varepsilon+1$. \\ Step8) Set $G_{8}:=P_{\frac{(\varepsilon+4)(\varepsilon+3)}{2}}$. Obviously, $\psi(G_{8})\geq\varepsilon+4$ and $\partial\Gamma(G_{8})\leq\Delta(G_{8})+1\leq3$. Hence, $\psi(G_{8})-\partial\Gamma(G_{8})\geq\varepsilon+1$. \\ Step9) Set $G:=\bigvee_{i=1}^{8}G_{i}$. For each $1\leq i\leq 8$, ${\rm Fall}(G_{i})\neq\emptyset$. Hence, by Theorem \ref{6parts} and the facts that $\delta(\bigvee_{i=1}^{8}G_{i})\geq\sum_{i=1}^{8}\delta(G_{i})$ and $\Delta(\bigvee_{i=1}^{8}G_{i})\geq\sum_{i=1}^{8}\Delta(G_{i})$, the graph $G$ is a suitable graph for this theorem and also answers questions 1*, 2* and 3*. }\end{proof}
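The join construction used throughout this section can be checked mechanically. The following sketch is our own illustration (the helper names are not from the paper): it verifies that fall colorings with disjoint, shifted color sets combine to a fall coloring of the join, and that a sample proper $3$-coloring of $C_{5}$ is not a fall coloring:

```python
def is_fall_coloring(adj, colour):
    """adj maps each vertex to its set of neighbours; colour maps each
    vertex to its colour.  The colouring is a fall colouring if it is
    proper and every vertex is colourful, i.e. sees every used colour
    on its closed neighbourhood."""
    used = set(colour.values())
    for v, nbrs in adj.items():
        if any(colour[u] == colour[v] for u in nbrs):
            return False                      # not a proper colouring
        if {colour[v]} | {colour[u] for u in nbrs} != used:
            return False                      # v is not colourful
    return True

def join(adj1, adj2):
    """Join of two vertex-disjoint graphs: every vertex of the first
    becomes adjacent to every vertex of the second."""
    out = {v: set(n) | set(adj2) for v, n in adj1.items()}
    out.update({v: set(n) | set(adj1) for v, n in adj2.items()})
    return out

# A fall 2-colouring of C4 and a fall 3-colouring of K3 ...
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
k3 = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
col_c4 = {0: 1, 1: 2, 2: 1, 3: 2}
col_k3 = {'a': 3, 'b': 4, 'c': 5}             # colours shifted by k = 2

# ... combine into a fall 5-colouring of the join C4 v K3:
print(is_fall_coloring(join(c4, k3), {**col_c4, **col_k3}))   # True

# C5 admits proper colourings, but this proper 3-colouring is not fall:
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(is_fall_coloring(c5, {0: 1, 1: 2, 2: 1, 3: 2, 4: 3}))   # False
```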
\section{Introduction} In this paper, we study the cyclicity problem with respect to the forward shift operator $S_b$ acting on the de Branges--Rovnyak space $\HH(b)$, associated to a function $b$ belonging to the closed unit ball of $H^\infty$ and satisfying $\log(1-|b|)\in L^1(\mathbb T)$. This problem of cyclicity has a long and outstanding history and many efforts have been dedicated to solving it in various reproducing kernel Hilbert spaces. It finds its roots in the pioneering work of Beurling, who showed that cyclicity of a function $f$ in the Hardy space $H^2$ is equivalent to $f$ being outer. Brown and Shields studied the cyclicity problem in the Dirichlet spaces $\mathcal{D}_\alpha$ for polynomials that do not have zeros inside the disc, but that may have some zeros on its boundary. Such functions are cyclic in $\mathcal{D}_\alpha$ if and only if $\alpha \leq 1$. They also proved that the set of zeros on the unit circle (in the radial sense) of cyclic functions in the Dirichlet space $\mathcal{D}_\alpha$ has zero logarithmic capacity, and this led them to ask whether any outer function with this property is cyclic \cite{BS1}. This problem is still open, although there have been relevant contributions to the topic by a number of authors; e.g. see \cite{B2, BC1, EFKR1, EFKR2, HS1, RS1}. We also mention the paper \cite{MR3475457} where the authors prove the Brown--Shields conjecture in the context of some particular Dirichlet type spaces $\mathcal D(\mu)$, which happen to be related to our context of de Branges--Rovnyak spaces \cite{MR3110499}. \par\smallskip The de Branges--Rovnyak spaces $\HH(b)$ (see the precise definition in Section~\ref{sec2}) were introduced by L. de Branges and J. Rovnyak in the context of model theory (see \cite{MR0215065}). A whole class of Hilbert space contractions is unitarily equivalent to $S^*|\HH(b)$, for an appropriate function $b$ belonging to the closed unit ball of $H^{\infty}$.
Here $S$ is the forward shift operator on $H^2$ and $S^*$, its adjoint, is the backward shift operator on $H^2$. The space $\HH(b)$ is invariant with respect to $S^*$ for every $b$ in the closed unit ball of $H^\infty$, and $S^*$ defines a bounded operator on $\HH(b)$, endowed with its own Hilbert space topology. On the contrary, $\HH(b)$ is invariant with respect to $S$ if and only if $\log(1-|b|)\in L^1(\mathbb T)$, i.e. if and only if $b$ is a non-extreme point of the closed unit ball of $H^{\infty}$. See \cite[Corollary 25.2]{FM2}. In \cite{MR3309352}, it is proved that if $\log(1-|b|)\in L^1(\mathbb T)$, the cyclic vectors of $S^*|\HH(b)$ are precisely the cyclic vectors of $S^*$ which live in $\HH(b)$. Note that cyclic vectors of $S^*$ have been characterised by Douglas--Shapiro--Shields \cite{MR203465}. The result of \cite{MR3309352} is based on a nice description, due to Sarason, of closed invariant subspaces of $S^*|\HH(b)$. See also \cite[Corollary 24.32]{FM2}. Unfortunately an analogous description of closed invariant subspaces of $S_b=S|\HH(b)$ remains an unsolved and difficult problem. In \cite{MR4246975}, Gu--Luo--Richter give an answer in the case where $b$ is a rational function which is not inner, generalising a result of Sarason \cite{MR847333}. \par\smallskip The purpose of this paper is to study the cyclic vectors of $S_b$ when the function $b$ is such that $\log(1-|b|)\in L^1(\mathbb T)$. In Section~\ref{sec2}, we present a quick overview of some useful properties of de Branges--Rovnyak spaces. Then, in Section~\ref{sec3}, we give some general facts on cyclic vectors and completely characterise holomorphic functions in a neighborhood of the closed unit disc which are cyclic for $S_b$. In Section~\ref{sec4}, we give a characterisation of cyclic vectors for $S_b$ when $b$ is rational (and not inner). 
Of course, this characterisation can be derived from the description, given in \cite{MR4246975}, of invariant subspaces of $S_b$ when $b$ is a non-inner rational function. Nevertheless, we will give a more direct and easier proof of this characterisation. Finally, Section~\ref{sec5} will be devoted to the situation where $b=(1+I)/2$, where $I$ is a non-constant inner function such that the associated model space $K_I=\HH(I)$ has an orthonormal basis of reproducing kernels. \par\bigskip \section{Preliminaries on $\HH(b)$ spaces}\label{sec2} \subsection{Definition of de Branges-Rovnyak spaces} Let $$\operatorname{ball}(H^{\infty}) := \Big\{b \in H^{\infty}: \|b\|_{\infty} = \sup_{z \in \D} |b(z)| \leq 1\Big\}$$ be the closed unit ball of $H^{\infty}$, the space of bounded analytic functions on the open unit disk $\D$, endowed with the sup norm. For $b \in \operatorname{ball}(H^{\infty})$, the \emph{de Branges--Rovnyak space} $\HH(b)$ is the reproducing kernel Hilbert space on $\mathbb D$ associated with the positive definite kernel $k_{\lambda}^b$, $\lambda\in\D$, defined as \begin{equation}\label{eq:original kernel H(b)} k^{b}_{\lambda}(z) = \frac{1 - \overline{b(\lambda)}b(z)}{1 - \overline{\lambda} z}, \quad z \in \D. \end{equation} It is known that $\HH(b)$ is contractively contained in the well-studied Hardy space $H^2$ of analytic functions $f$ on $ \D $ for which $$\|f\|_{H^2} := \Big(\sup_{0 < r < 1} \int_{\T} |f(r \xi)|^2 dm(\xi)\Big)^{\frac{1}{2}}<\infty,$$ where $m$ is the normalised Lebesgue measure on the unit circle $\T = \{\xi \in \C: |\xi| = 1\}$ \cite{Duren, garnett}. For every $f \in H^2$, the radial limit $\lim_{r \to 1^{-}} f(r \xi) =: f(\xi)$ (even the non-tangential limit $f(\xi):=\lim_{\substack{z\to\xi\\ \sphericalangle}}f(z)$) exists for $m$-a.e. $\xi \in \T$, and \begin{equation}\label{knnhHarsysu} \|f\|_{H^2} = \Big(\int_{\T} |f(\xi)|^2 dm(\xi)\Big)^{\frac{1}{2}}. 
\end{equation} Though $\HH(b)$ is contractively contained in $H^2$, it is generally not closed in the $H^2$ norm. It is known that $\HH(b)$ is closed in $H^2$ if and only if $b=I$ is an inner function, meaning that $|I(\zeta)|=1$ for a.e. $\zeta\in\mathbb T$. In this case, $\HH(b)=K_I=(I H^2)^\perp$ is the so-called \emph{model space} associated to $I$. Note that $K_I=H^2\cap I\overline{zH^2}$ (see \cite[Proposition 5.4]{MR3526203}), and then $K_I=\ker T_{\bar{I}}$, where $T_{\bar{I}}$ is the Toeplitz operator with symbol $\bar I$ defined on $H^2$ as $T_{\bar I}f=P_+(\bar I f)$, where $P_+$ denotes the orthogonal projection from $L^2$ onto $H^2$. \par\smallskip We refer the reader to the book \cite{MR1289670} by Sarason and to the monograph \cite{FM1}, \cite{FM2} by Fricain and Mashreghi for an in-depth study of de Branges-Rovnyak spaces and their connections to numerous other topics in operator theory and complex analysis. \par\smallskip In this paper, we will always assume that $b$ is a non-extreme point of $\operatorname{ball}(H^{\infty})$, which is equivalent to requiring that $\log(1-|b|)\in L^1(\mathbb T)$. Under this assumption, there is a unique outer function $a$, called \emph{the pythagorean mate} for $b$, such that $a(0)>0$ and $|a|^2+|b|^2=1$ a.e. on $\mathbb T$. There are two important subspaces of $\HH(b)$ which can be defined via this function $a$. The first one is the space $\MM(a)=aH^2$, equipped with the range norm \[ \|af\|_{\MM(a)}=\|f\|_2,\qquad f\in H^2. \] The second one is $\MM(\bar a)=T_{\bar a}H^2$, equipped also with the range norm \[ \|T_{\bar a}f\|_{\MM(\bar a)}=\|f\|_2,\qquad f\in H^2. \] Note that since $a$ is outer, the Toeplitz operator $T_{\bar a}$ is one-to-one and so the above norm is well defined. It is known that $\MM(a)$ is contractively contained into $\MM(\bar a)$, which itself is contractively contained into $\HH(b)$. See \cite[Theorem 23.2]{FM2}. Note that $\MM(a)$ is not necessarily closed in $\HH(b)$. 
See \cite[Theorem 28.35]{FM2} for a characterisation of closedness of $\MM(a)$ in the $\HH(b)$-norm. There is also an important relation between $\HH(b)$ and $\MM(\bar a)$ which gives a recipe to compute the norm in $\HH(b)$. Indeed, if $f\in H^2$, then $f\in\HH(b)$ if and only if there is a function $f^+\in H^2$ satisfying $T_{\bar b}f=T_{\bar a}f^+$ (and then $f^+$ is necessarily unique). Moreover, in this case, we have \[ \|f\|_b^2=\|f\|_2^2+\|f^+\|_2^2. \] Also, \begin{equation}\label{Inconnu} \langle f,g \rangle_b=\langle f,g \rangle_2+\langle f^+,g^+ \rangle_2\quad\textrm{ for every }f,g\in\HH(b). \end{equation} See \cite[Theorem 23.8]{FM2}. Finally, let us recall that $\HH(b)=\MM(\bar a)$ if and only if $(a,b)$ forms a corona pair, that is \[ \tag{HCR} \qquad \inf_{\mathbb D}(|a|+|b|)>0. \] See \cite[Theorem 28.7]{FM2}. \par\smallskip A crucial fact on de Branges-Rovnyak spaces is that the space $\HH(b)$ is invariant with respect to the shift operator $S: f \mapsto zf$ if and only if the function $b$ is non-extreme. Since we will consider in this paper only the case where $b$ is non-extreme, $\HH(b)$ is indeed invariant by $S$, and $S$ defines a bounded operator on $\HH(b)$, endowed with its own Hilbert space topology, which we will denote by $S_b$. The functions $z^n$ belong to $\HH(b)$ for every $n\ge 0$. Actually, we have \begin{equation}\label{eq:density-polynomial} \mbox{Span}(z^n:n\geq 0)=\HH(b), \end{equation} where $\mbox{Span}(A)$ denotes the closed linear span generated by vectors from a certain family $A$. In other words, the polynomials are dense in $\HH(b)$. See \cite[Theorem 23.13]{FM2}. Note that \eqref{eq:density-polynomial} exactly means that the constant function $1$ is cyclic for $S_b$. \par\smallskip Another tool which will turn out to be useful when studying the cyclicity for the shift operator is the notion of \emph{multiplier}.
Recall that the set $\mathfrak M(\HH(b))$ of multipliers of $\HH(b)$ is defined as \[ \mathfrak M(\HH(b))=\{\varphi\in\mbox{Hol}(\mathbb D): \varphi f\in \HH(b),\forall f\in\HH(b)\}. \] Using the closed graph theorem, it is easy to see that when $\varphi\in \mathfrak M(\HH(b))$, then $M_\varphi$, the multiplication operator by $\varphi$, is bounded on $\HH(b)$. The algebra of multipliers is a Banach algebra when equipped with the norm $\|\varphi\|_{\mathfrak M(\HH(b))}=\|M_\varphi\|_{\mathcal L(\HH(b))}$. Using standard arguments, we see that $\mathfrak M(\HH(b))\subset H^\infty\cap\HH(b)$. In general, this inclusion is strict. See \cite[Example 28.24]{FM2}. However, we will encounter below a situation (when $b$ is a rational function which is not a finite Blaschke product) where we have the equality $\mathfrak M(\HH(b))= H^\infty\cap\HH(b)$. \par\smallskip \subsection{Some properties of the reproducing kernels of $H^2$ in de Branges-Rovnyak spaces} Recall that we are supposing that $b$ is non-extreme. If we denote by $k_\lambda(z)=(1-\overline{\lambda}z)^{-1}$ the reproducing kernel of $H^2$ at the point $\lambda\in\D$, then $k_\lambda$ belongs to $\HH(b)$ and \begin{equation}\label{eq:density-crk} \mbox{Span}(k_\lambda:\lambda\in\mathbb D)=\HH(b). \end{equation} See \cite[Corollary 23.26]{FM2} or \cite[Lemma 7]{MR3390195}. We also know (see \cite[Theorem 23.23]{FM2}) that $bk_\lambda\in \HH(b)$ for every $\lambda\in\mathbb D$, and that for every $f\in\HH(b)$ we have \begin{equation}\label{eq1EZD:lem-completeness} \langle f,k_\lambda \rangle_b=f(\lambda)+\frac{b(\lambda)}{a(\lambda)}f^+(\lambda)\quad\mbox{and}\quad \langle f,bk_\lambda\rangle_b=\frac{f^+(\lambda)}{a(\lambda)}\cdot \end{equation} Using these two equations, we can produce an interesting complete family in $\HH(b)$ which will be of use to us. \begin{Lemma}\label{Lem:completenes} Let $b$ be a non-extreme point of the closed unit ball of $H^{\infty}$, and let $c$ be a complex number with $|c|<1$.
Then \[ \mbox{Span}(k_\mu-cbk_\mu:\mu\in\mathbb D)=\HH(b). \] \end{Lemma} \begin{proof} Let $h\in\HH(b)$, and assume that for every $\mu\in\mathbb D$, $h$ is orthogonal in $\HH(b)$ to $k_\mu-cbk_\mu$. According to \eqref{eq1EZD:lem-completeness}, we have \[ 0=h(\mu)+\frac{b(\mu)}{a(\mu)}h^+(\mu)-\overline{c}\frac{h^+(\mu)}{a(\mu)}\cdot \] This can be rewritten as $ah=-bh^++\overline{c}h^+$. Multiplying this equality by $\bar b$ and using the fact that $|a|^2+|b|^2=1$ a.e. on $\mathbb T$, we obtain \[ a(\bar b h-\bar a h^+)=-(1-\overline{c}\bar b)h^+. \] Note that $|1-\overline{c}\bar b|\geq 1-|c|>0$, and so the last identity can be written as \[ \frac{\bar bh-\bar ah^+}{1-\overline{c}\bar b}=-\frac{h^+}{a}\cdot \] On the one hand, this equality says that $\frac{h^+}{a}$ belongs to $ L^2$ and since $a$ is outer, we have $\frac{h^+}{a}\in H^2$. See \cite[page 43]{MR1864396}. On the other hand, by definition of $h^+$, the function $\bar bh-\bar ah^+$ belongs to $\overline{H^2_0}$ and since $(1-\overline{c}\bar b)^{-1}$ is in $ \overline{H^\infty}$, we also have $\frac{h^+}{a}\in \overline{H^2_0}$. Then $\frac{h^+}{a}$ belongs to $ H^2\cap \overline{H^2_0}=\{0\}$. Finally we get that $h^+=0$ and thus that $h=0$. \end{proof} \par\smallskip \subsection{Boundary evaluation points on $\HH(b)$} An important tool in the cyclicity problem will be the boundary evaluation points for $\HH(b)$. It is known that the description of these points depends on the inner-outer factorisation of $b$. 
Recall that any $b$ in $\operatorname{ball}(H^{\infty})$ can be decomposed as \begin{equation} \label{E:dec-b-bso} b(z)= B(z) S_\sigma(z) O(z), \qquad z \in \D, \end{equation} where \[ B(z) = \gamma\,\prod_{n\ge 1} \left(\, \frac{|a_n|}{a_n}\frac {a_n-z}{1-\overline{a}_nz} \,\right) \quad\textrm{is a Blaschke product,} \] with $|\gamma|=1$, $a_n\in\D$ for every $n\ge1$, and $\sum_{n\ge 1}(1-|a_n|)<+\infty$, \[ S_\sigma(z) = \exp\left( -\int_\T \frac{\xi+z}{\xi-z} \, d\sigma(\xi) \right) \quad\textrm{is a singular inner function,} \] with $\sigma$ a positive finite Borel measure on $\T$ which is singular with respect to the Lebesgue measure, and \[ O(z) = \exp\left( \int_\T \frac{\xi+z}{\xi-z} \log|b(\xi)| \,dm(\xi) \right) \] is the outer part of $b$. Now, let $E_0(b)$ be the set of points $\zeta\in\mathbb T$ satisfying the following condition: \begin{equation} \sum_n\frac{1-|a_n|^2}{|\zeta-a_n|^{2}}+\int_{\T} \frac{d\sigma(\xi)}{|\zeta-\xi|^{2}}+\int_{\T} \frac{\big| \log|b(\xi)| \big|}{|\zeta-\xi|^{2}} \,\, dm(\xi)<\infty. \end{equation} It is proved in \cite{MR2390675} that every function $f\in \HH(b)$ has a non-tangential limit at a point $\zeta\in\mathbb T$ if and only if $\zeta\in E_0(b)$. This is also equivalent to the property that $b$ has an \emph{angular derivative (in the sense of Carath\'eodory)} at $\zeta$, meaning that $b$ and $b'$ both have a non-tangential limit at $\zeta$ and $|b(\zeta)|=1$. Moreover, in this case, the linear map \begin{equation}\label{eq:radial-limit-ADC} f\longmapsto f(\zeta):=\lim_{\substack{z\to\zeta\\ \sphericalangle}}f(z) \end{equation} is bounded on $\HH(b)$. The function $k_\zeta^b$ defined by \[ k_{\zeta}^b(z)=\frac{1 - \overline{b(\zeta)}b(z)}{1 - \overline{\zeta} z}, \quad z \in \D, \] belongs to $\HH(b)$, and \[ \langle f,k_{\zeta}^b\rangle_b=f(\zeta)\quad\textrm{ for every } f\in\HH(b).
\] We call the function $k_\zeta^b$ \emph{the reproducing kernel of $\HH(b)$ at the point $\zeta$}, and \eqref{eq:radial-limit-ADC} means that the reproducing kernels $k_{z}^b$ tend weakly to $k_{\zeta}^b$ as $z\in\D$ tends non-tangentially to $\zeta$. See \cite[Theorem 25.1]{FM2}. There is also a nice connection between the boundary evaluation points and the point spectrum of $S_b^*$ in the case where $b$ is a non-extreme point in $\operatorname{ball}(H^{\infty})$: for $\zeta\in\mathbb T$, we have that \begin{equation}\label{point-spectrum-boundary} \bar\zeta \mbox{ is an eigenvalue for }S_b^* \mbox{ if and only if } b \mbox{ has an angular derivative at }\zeta. \end{equation} \par\smallskip The boundary evaluation points play a particular role in the description of certain orthogonal bases of reproducing kernels in model spaces $K_I$, the so-called \emph{Clark bases}. Given an inner function $I$ and $\alpha\in\T$, recall that by the Herglotz theorem, there is a unique finite positive Borel measure $\sigma_{\alpha}$ on $\mathbb T$, singular with respect to the Lebesgue measure, such that \begin{equation}\label{def-clark-measure} \frac{1-|I(z)|^2}{|\alpha-I(z)|^2}=\int_{\mathbb T}\frac{1-|z|^2}{|\xi-z|^2}\,d\sigma_{\alpha}(\xi),\qquad z\in\mathbb D. \end{equation} The collection $(\sigma_{\alpha})_{\alpha\in\T}$ is the family of \emph{Clark measures} of $I$. \par\smallskip Let $E_\alpha=\{\zeta\in E_0(I): I(\zeta)=\alpha\}.$ By \cite[Theorem 21.5]{FM2}, the point $\zeta$ belongs to $E_\alpha$ if and only if the measure $\sigma_\alpha$ has an atom at $\zeta$. In this case, \begin{equation}\label{eq:nice-formula-derivative-norme-kernel-clark-measure} \sigma_\alpha(\{\zeta\})=\dfrac{1}{|I'(\zeta)|}=\dfrac{1}{\|k_{\zeta}^I\|_2^2}\cdot \end{equation} See \cite[Theorems 21.1 and 21.5]{FM2}.
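As a concrete illustration of the formulas above (ours, not taken from the cited references), consider the inner function $I(z)=z^{2}$ and $\alpha=1$:

```latex
% Clark measure of I(z) = z^2 at alpha = 1: the solutions of I(zeta) = 1 on
% the circle are zeta = 1 and zeta = -1, with |I'(\pm 1)| = 2, so
%   sigma_1 = (delta_1 + delta_{-1})/2 ,
% in accordance with  sigma_alpha({zeta}) = 1/|I'(zeta)|.  Indeed,
\[
\frac{1-|I(z)|^2}{|1-I(z)|^2}
 \;=\; \frac{(1-|z|^2)(1+|z|^2)}{|1-z^2|^2}
 \;=\; \frac12\,\frac{1-|z|^2}{|1-z|^2} + \frac12\,\frac{1-|z|^2}{|1+z|^2},
 \qquad z\in\mathbb D.
\]
% The corresponding Clark basis of K_{z^2} consists of
\[
k_{1}^{I}(z)=\frac{1-z^2}{1-z}=1+z,
\qquad
k_{-1}^{I}(z)=\frac{1-z^2}{1+z}=1-z,
\]
% which are orthogonal in H^2, with \|k_{\pm 1}^{I}\|_2^2 = 2 = |I'(\pm 1)|.
```

The middle identity follows from $|1+z|^2+|1-z|^2=2+2|z|^2$, and the two kernels indeed span $K_{z^2}$, the polynomials of degree at most one.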
When $\sigma_\alpha$ is a discrete measure, its support is exactly the set $E_\alpha$, which is necessarily countable, and we write it as \begin{equation} E_\alpha=\{\zeta_n:n\geq 1\}=\{\zeta\in E_0(I): I(\zeta)=\alpha\}. \end{equation} In this case, Clark proved in \cite{MR301534} that the family $\{k_{\zeta_n}^I: n\geq 1\}$ forms an orthogonal basis of $K_I$ (and the family $\{{k_{\zeta_n}^I}/{\|k_{\zeta_n}^I\|_2}: n\geq 1\}$ forms an orthonormal basis of $K_I$). It is called \emph{the Clark basis of $K_I$} associated to the point $\alpha\in\mathbb T$. \subsection{A description of $\HH(b)$ when $b$ is a rational function} Although the contents of the space $\HH(b)$ may seem mysterious for a general non-extreme $b \in \operatorname{ball}(H^{\infty})$, it turns out that when $b$ is a rational function (and not a finite Blaschke product -- in which case $b$ is an inner function, and thus extreme), the description of $\HH(b)$ is quite explicit. Since such a $b$ is a non-extreme point of $\operatorname{ball}(H^{\infty})$, it admits a Pythagorean mate $a$, which is also a rational function. In fact, the function $a$ can be obtained from the Fej\'{e}r--Riesz theorem (see \cite{MR3503356}). Let $ \zeta_1, \dots, \zeta_n $ denote the {\em distinct} roots of $ a $ on $ \T $, with corresponding multiplicities $ m_1, \dots, m_n $, and define the polynomial $ a_1 $ by \begin{equation}\label{eq:definition of a} a_1(z):=\prod_{k=1}^n (z-\zeta_k)^{m_k}. \end{equation} Results from \cite{MR3110499, MR3503356} show that $\HH(b)$ has an explicit description as \begin{equation}\label{eq:formula for H(b)} \HH(b)=a_1H^2 \oplus \P_{N-1}=\mathcal{M}(a_1) \oplus \P_{N-1}, \end{equation} where $ N=m_1+\dots+m_n $, and $\P_{N-1}$ denotes the set of polynomials of degree at most $N-1$. Since $a/a_1$ is invertible in $H^\infty=\mathfrak M(H^2)$, note that $\mathcal M(a)=\mathcal M(a_1)$. The notation $\oplus$ above denotes a topological direct sum in $\HH(b)$.
But this sum may not be an orthogonal one. See \cite{MR3503356}. In particular, $\mathcal{M}(a_1) \cap \P_{N - 1} = \{0\}$. Moreover, if $ f\in\HH(b) $ is decomposed with respect to \eqref{eq:formula for H(b)} as \begin{equation}\label{uUUiipPPS} f=a_1\widetilde{f}+p, \quad \mbox{where $\widetilde{f} \in H^2$ and $p \in \P_{N - 1}$}, \end{equation} then a norm on $ \HH(b) $, equivalent to the natural one induced by the positive definite kernel $k_{\lambda}^{b}$, $\lambda\in\D$, above, is given by \begin{equation}\label{eq:norm in h(b)} \vvvert a_1\widetilde f+p\vvvert^{2}_{b}:=\|\widetilde{f}\|^2_{H^2}+\|p\|^2_{H^2}. \end{equation} Note that the functions $\widetilde{f} \in H^2$ and $p \in \P_{N - 1}$ appearing in the decomposition (\ref{uUUiipPPS}) are unique. It is important to note that $ \vvvert \cdot\vvvert_b $ is only equivalent to the original norm $\|\cdot\|_b$ associated to the kernel in~\eqref{eq:original kernel H(b)}, and its scalar product as well as the reproducing kernels and the adjoints of operators defined on $\HH(b)$ will be different. However, the cyclicity problem for $S_b$ does not depend on the equivalent norm we consider. So, in the rational case, there is no harm in working with the norm given by \eqref{eq:norm in h(b)}. \par\smallskip Note also that when the zeros $\zeta_1,\ldots,\zeta_n$ of the polynomial $a_1$ are simple (i.e. when $m_k=1$, $1\le k\le n$), then the space $\HH(b)$ coincides with a Dirichlet type space $\mathcal{D}(\mu)$, where $\mu$ is a finite sum of Dirac masses at the points $\zeta_k$, $1\le k\le n$. See \cite{MR3110499}. So our results are also connected to the works \cite{EFKR1} and \cite{EFKR2} on the cyclicity problem for Dirichlet spaces.
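\par\smallskip As an illustration of \eqref{eq:formula for H(b)} and \eqref{eq:norm in h(b)} (a worked example included here for concreteness), take $b(z)=(1+z)/2$. On $\T$ we have $1-|b|^2=|1-z|^2/4$, so the Pythagorean mate is $a(z)=(1-z)/2$, whose only root on $\T$ is $\zeta_1=1$, with multiplicity $m_1=1$. Hence $a_1(z)=z-1$, $N=1$, and \[ \HH(b)=(z-1)H^2\oplus\P_0=(z-1)H^2\oplus\mathbb C,\qquad \vvvert (z-1)\widetilde f+c\vvvert^{2}_{b}=\|\widetilde f\|^2_{H^2}+|c|^2. \] Since the zero of $a_1$ is simple, $\HH(b)$ coincides here, in accordance with the preceding remark, with the Dirichlet type space $\mathcal D(\delta_1)$.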
\par\smallskip Using \eqref{uUUiipPPS} and the standard estimate that any $g \in H^2$ satisfies \begin{equation}\label{gggbigggooooo} |g(z)| \leq \frac{\|g\|_{2}}{\sqrt{1 - |z|^2}} \quad \mbox{for all $z \in \D$}, \end{equation} we see that for fixed $1 \leq k \leq n$ and for each $f \in \HH(b)$ we have \begin{equation}\label{q} f(\zeta_k) = \lim_{\substack{z\to\zeta_k\\ \sphericalangle}}f(z) = p(\zeta_k), \end{equation} where $f = a_1 \widetilde{f} + p$ with $\widetilde{f} \in H^2$ and $p \in \P_{N - 1}$. In particular, \begin{equation}\label{eq:E0b-rationnel} E_0(b)=\{\zeta_k:1\leq k\leq n\}. \end{equation} Finally, let us mention that when $b \in \operatorname{ball}(H^{\infty})$ is a rational function and not a finite Blaschke product, then $\mathfrak M(\HH(b))=H^\infty\cap \HH(b)$. See \cite{MR3967886}. \par\smallskip \subsection{A description of $\HH(b)$ when $b=(1+I)/2$, with $I$ an inner function} There is another situation where we have an explicit description of the space \(\HH(b)\): this is when \(b=\frac{1+I}{2}\) and \(I\) is an inner function with \(I\not\equiv 1\). In this case, \(b\) is a non-extreme point of \(\textrm{ball}(H^{\infty})\), and its Pythagorean mate (up to a unimodular constant) is \(a=\frac{1-I}{2}\). Moreover, \((a,b)\) satisfies (HCR), since \(|a|^{2}+|b|^{2}\geq\frac{1}{2}\) on $\mathbb D$. In particular \(\HH(b)=\mathcal M(\bar a)\), with equivalent norms. \par\smallskip Under the assumption that $I(0)\not = 0$, it is proved in \cite{MR3850543} that \begin{equation}\label{eq:orthogonalite-hb-ma-KI} \HH(b)=\mathcal M(a)\stackrel{\perp}{\oplus}_b K_I, \end{equation} where the direct sum \(\stackrel{\perp}{\oplus}_b\) is orthogonal with respect to the $\HH(b)$ norm. In particular, every $f\in\HH(b)$ can be written in a unique way as \begin{equation}\label{decomposition2-hb} f=(1-I)g_1+g_2,\qquad \mbox{with }g_1\in H^2\mbox{ and }g_2\in K_I. 
\end{equation} It turns out that the same proof holds without any assumption on the value of $I(0)$. For the sake of completeness, we present it in Lemma \ref{machin} below. We also give an equivalent norm on $\HH(b)$ analogous to (\ref{eq:norm in h(b)}). \begin{Lemma}\label{machin} Let \(I\) be an inner function with \(I\not\equiv 1\), and let \(b=(1+I)/2\). Then the following assertions hold: \begin{enumerate} \item [\emph{(i)}] \(\HH(b)=(1-I)H^{2}\stackrel{\perp}{\oplus}_bK_{I}\), where \(\stackrel{\perp}{\oplus}_b\) denotes an orthogonal direct sum in \(\HH(b)\); \item[\emph{(ii)}] if for \(f=(1-I)g_{1}+g_{2}\in\HH(b)\), \(g_{1}\in H^{2}\), \(g_{2}\in K_{I}\), we define \[ |||f|||_{b}^{2}=||g_{1}||_{2}^{2}+||g_{2}||_{2}^{2}, \] then \(|||\,.\,|||_b\) is a norm on \(\HH(b)\) which is equivalent to \(||\,.\,||_{b}\). \end{enumerate} \end{Lemma} \begin{proof} (i) We have \(\HH(b)=\mathcal{M}(\bar{a})\) with equivalent norms, where \(a=\frac{1-I}{2}\) is the Pythagorean mate of \(b\). Also, \( \frac{\bar{a}}{a}=\frac{1-\bar{I}}{1-I}=-\bar{I} \) a.e. on \(\T\), and thus \(T_{\bar{a}/a}=-T_{\bar{I}}\). Hence \(\ker T_{\bar{a}/a}=\ker T_{\bar{I}}=K_{I}\). Moreover, \(T_{a/\bar{a}}=-T_{I}\) has closed range, and thus \begin{align} H^{2}=\textrm{Ran}(T_{I})\stackrel{\perp}{\oplus}\ker(T_{I}^{*})=\textrm{Ran}(T_{I})\stackrel{\perp}{\oplus}\ker(T_{\bar{I}}) &=T_{a/\bar{a}}H^{2}\stackrel{\perp}{\oplus}K_{I}\label{Eq 1} \end{align} (the sign $\stackrel{\perp}{\oplus}$ denotes here an orthogonal direct sum in $H^2$).
Using now the fact that \(T_{\bar{a}}\) is an isometry from \(H^{2}\) onto \(\mathcal M(\bar{a})=T_{\bar{a}}H^{2}\) (equipped with the range norm), applying \(T_{\bar{a}}\) to the equation (\ref{Eq 1}), and using the identity \(T_{\bar{a}}\,T_{a/\bar{a}}=T_{a}\), we obtain \[ \HH(b)=\mathcal M(\bar{a})=\mathcal{M}(a)\stackrel{\perp}{\oplus}_{\,\bar{a}}T_{\bar{a}}\,K_{I}, \] where the notation \(\stackrel{\perp}{\oplus}_{\,\bar{a}}\) represents an orthogonal direct sum with respect to the range norm on \(\mathcal M(\bar{a})\). Since \(T_{\bar{I}}\,K_{I}=\{0\}\) and \(\bar{a}=(1-\bar{I})/2\), we have \(T_{\bar{a}}\,K_{I}=\frac{1}{2}(Id-T_{\bar{I}})\,K_{I}=\frac{1}{2}\,K_{I}=K_{I}\), and so \begin{equation}\label{Eq 2} \HH(b)=\mathcal M(\bar{a})=\mathcal{M}(a)\stackrel{\perp}{\oplus}_{\,\bar{a}}K_{I}=(1-I)H^{2}\stackrel{\perp}{\oplus}_{\,\bar{a}}K_{I}. \end{equation} It now remains to prove that the direct sum in this decomposition of \(\HH(b)\) is in fact orthogonal with respect to the \(\HH(b)\) norm. \par\medskip Let \(f\in H^{2}\) and \(g\in K_{I}\). Our aim is to show that \(\langle{(1-I)f},{g}\rangle_b=0\). Note that \[ T_{\bar{b}}\,g=T_{(1+\bar{I})/2}\,g=\dfrac{1}{2}\,g=T_{\bar{a}}\,g \] from which it follows that \begin{equation}\label{Eq 3} g^{+}=g. \end{equation} Moreover, since \(\bar{b}\,a=-\bar{a}\,b\) a.e. on \(\T\), we have \[ T_{\bar{b}}\,\bigl ((1-I)f \bigr)=P_{+}\,(2\bar{b}\,af)=-P_{+}(2\bar{a}\,bf)=T_{\bar{a}}\,(-2bf), \] whence we get \begin{equation}\label{Eq 4} \bigl ((1-I)f \bigr)^{+}=-2bf=-(1+I)f. \end{equation} By (\ref{Inconnu}), (\ref{Eq 3}) and (\ref{Eq 4}), it follows that \begin{align*} \langle{(1-I)f},{g}\rangle_{b}&=\langle{(1-I)f},{g}\rangle_{2}-\langle{2bf},{g}\rangle_{2} =\langle{(1-I-2b)f},{g}\rangle_{2} =-2\,\langle{If},{g}\rangle_2=0 \end{align*} because \(g\) belongs to \(K_{I}\).
\par\smallskip (ii) Since \(\HH(b)=(1-I)H^{2}\stackrel{\perp}{\oplus}_{\,b}K_{I}\), we have \[ ||(1-I)g_{1}+g_{2}||_{b}^{2}=||(1-I)g_{1}||_{b}^{2}+||g_{2}||_{b}^{2},\qquad g_{1}\in H^2,\ g_{2}\in K_{I}. \] But observe that by (\ref{Inconnu}) and (\ref{Eq 4}) we have \[ ||(1-I)g_{1}||_{b}^{2}=||(1-I)g_{1}||_{2}^{2}+||(1+I)g_{1}||_{2}^{2}=4||g_{1}||_{2}^{2}, \] while we get from (\ref{Inconnu}) and (\ref{Eq 3}) that \(||g_{2}||_{b}^{2}=2||g_{2}||_{2}^{2}\). Thus \[ ||(1-I)g_{1}+g_{2}||_{b}^{2}=4||g_{1}||_{2}^{2}+2||g_{2}||_{2}^{2}, \] and from this the norm \(|||\,.\,|||_{b}\) is easily seen to be equivalent to \(||\,.\,||_{b}\). \end{proof} \par\smallskip The next result is an analogue of \eqref{q} for the case where $b=(1+I)/2$ with respect to decomposition \eqref{decomposition2-hb}. \begin{Lemma}\label{lem:existence-limite-radiale} Let $I$ be an inner function, $I\not\equiv 1$, and let $b=(1+I)/2$. Let $\zeta\in E_0(I)$ be such that $I(\zeta)=1$. Then $\zeta\in E_0(b)$. Moreover, if $f=(1-I)g_1+g_2$, $g_1\in H^2$ and $g_2\in K_I$, then $f(\zeta)=g_2(\zeta)$. \end{Lemma} \begin{proof} As mentioned above, since $\zeta\in E_0(I)$ the function $g_2$ has a non-tangential limit at the point $\zeta$. Thus it remains to prove that $(1-I)g_1$ has a zero non-tangential limit at $\zeta$. To this purpose, write for $z\in\mathbb D$ \[ \begin{aligned} (1-I(z))g_1(z)=&\frac{1-\overline{I(\zeta)}I(z)}{1-\overline{\zeta}z}(1-\overline{\zeta}z)g_1(z)\\ =&k_{\zeta}^I(z) \,\overline{\zeta}\,(\zeta-z)g_1(z)\\ =&\langle k_{\zeta}^I,k_{z}^I\rangle_2 \,\overline{\zeta}\,(\zeta-z)g_1(z). \end{aligned} \] Now, since $\zeta\in E_0(I)$, $k_{z}^I$ tends weakly to $k_\zeta^I$ as $z$ tends to $\zeta$ non-tangentially. Hence \[ \lim_{\substack{z\to \zeta\\ \sphericalangle}}\langle k_{\zeta}^I,k_{z}^I\rangle_2=\|k_\zeta^I\|_2^2<\infty. 
\] Moreover, using the estimate \eqref{gggbigggooooo}, we obtain that \[ \lim_{\substack{z\to \zeta\\ \sphericalangle}}(\zeta-z)g_1(z)=0, \] from which it follows that \[ \lim_{\substack{z\to \zeta\\ \sphericalangle}}(1-I(z))g_1(z)=0. \] \end{proof} In the case where $b=(1+I)/2$ and $I$ is an inner function with $I\not\equiv 1$, there is no complete characterisation of multipliers for $\HH(b)$. Nevertheless, we have at our disposal a sufficient condition which will be useful for our study of cyclicity. Before stating this result (Lemma \ref{Lem:multiplier-b-1+I}) on multipliers, we recall a well-known property of model spaces, of which we provide a proof for completeness's sake. \begin{Lemma}\label{Lem:model-space-KI.KIcontenu dans KI^2} Let $I$ be an inner function and let $f\in K_I$ and $g\in K_I\cap H^\infty$. Then $fg\in K_{I^2}$. \end{Lemma} \begin{proof} Using that $K_I=H^2\cap I \overline{zH^2}$, write $f=I\overline{z\widetilde{f}}$ and $g=I\overline{z\widetilde{g}}$, with $\widetilde{f}, \widetilde{g}\in H^2$. Since $g\in H^\infty$, we indeed have $|\widetilde{g}|=|g|\in L^\infty(\mathbb T)$, and thus $\widetilde{g}\in H^\infty$. Moreover, $f g \in H^2$, and \[ f g=I^2 \overline{z^2 \widetilde{f}\widetilde{g}}, \] whence $f g\in H^2\cap I^2\overline{zH^2}=K_{I^2}$. \end{proof} In the case where $b=(1+I)/2$, the de Branges-Rovnyak space $\mathcal H(b)$ contains a sequence of model spaces. \begin{Lemma}\label{Lem:sequence-model-spaces-contained-in-DBR} Let $I$ be an inner function, $I\not\equiv 1$, and let $b=(1+I)/2$. Then the following assertions hold: \begin{enumerate} \item the function $I$ is a multiplier of $\HH(b)$; \item for every $n\geq 1$, $K_{I^n}\subset \HH(b)$. \end{enumerate} \end{Lemma} \begin{proof} (a): Let $f\in\HH(b)$. According to \eqref{decomposition2-hb}, we can decompose $f$ as $f=(1-I)g_1+g_2$ with $g_1\in H^2$ and $g_2\in K_I$. Then \[ If=(1-I)(Ig_1)+I g_2=(1-I)(Ig_1-g_2)+g_2 \] and $Ig_1-g_2\in H^2$ and $g_2\in K_I$. 
Thus, using one more time \eqref{decomposition2-hb}, it follows that $If\in\HH(b)$. (b): We argue by induction. For $n=1$, the property follows from Lemma~\ref{machin}. Assume that for some $n\geq 1$, $K_{I^n}\subset \HH(b)$. It is known that $K_{I^{n+1}}=K_I\oplus I K_{I^n}$. See \cite[Lemma 5.10]{MR3526203}. The conclusion now follows from the induction assumption and (a). \end{proof} Here is now our sufficient condition for $f\in\HH(b)$ to be a multiplier of $\HH(b)$. \begin{Lemma}\label{Lem:multiplier-b-1+I} Let $I$ be an inner function, $I\not\equiv 1$, and let $b=(1+I)/2$. Assume that $f$ decomposes as $f=(1-I)g_1+g_2$, with $g_1\in H^\infty$ and $g_2\in H^\infty\cap K_I$. Then $f\in\mathfrak M(\HH(b))$. \end{Lemma} \begin{proof} We need to show that for every $\varphi\in\HH(b)$, we have $\varphi f\in\HH(b)$. According to \eqref{decomposition2-hb}, write $\varphi=(1-I)\varphi_1+\varphi_2$, with $\varphi_1\in H^2$ and $\varphi_2\in K_I$. Then \[ \varphi f=(1-I)\varphi_1 f+\varphi_2 f. \] Since $f\in H^\infty$, $\varphi_1 f\in H^2$, and so the first term $(1-I)\varphi _1 f$ belongs to $(1-I)H^2$ which is contained in $\HH(b)$. Thus it remains to prove that $\varphi_2 f\in\HH(b)$. In order to deal with this term, write \[ \varphi_2 f=(1-I)\varphi_2 g_1+g_2\varphi_2, \] and as before, since $g_1\in H^\infty$, the term $(1-I)\varphi _2 g_1$ belongs to $(1-I)H^2$, and so to $\HH(b)$. It remains to prove that $g_2\varphi_2\in\HH(b)$. Lemma~\ref{Lem:model-space-KI.KIcontenu dans KI^2} implies that $g_2\varphi_2\in K_{I^2}$, and the conclusion follows now directly from Lemma~\ref{Lem:sequence-model-spaces-contained-in-DBR}. \end{proof} \section{Some basic facts on cyclic vectors for the shift operator}\label{sec3} Recall that if $T$ is a bounded operator on a Hilbert space $\HH$, then a vector $f\in\HH$ is said to be cyclic for $T$ if the linear span of the orbit of $f$ under the action of $T$ is dense in $\HH$, i.e. 
if \[ \mbox{Span}(T^n f:n\geq 0)=\overline{\{p(T)f:p\in\mathbb C[X]\}}=\HH. \] When $T=S_b$ is the shift operator on $\HH(b)$, we have $p(S_b)f=pf$ for every $f\in\HH(b)$ and every polynomial $p\in\C[X]$. Thus a function $f\in\HH(b)$ is cyclic for $S_b$ if and only if \[ \overline{\{pf:p\in\mathbb C[X]\}}=\HH(b). \] In fact, it is sufficient to approximate the constant function $1$ by elements of the form $pf$, $p\in\C[X]$, to get that $f$ is cyclic for $S_b$. \begin{Lemma}\label{Lem:cyclicite-constant1} Let $b$ be a non-extreme point in $\operatorname{ball}(H^{\infty})$ and $f\in\HH(b)$. Then the following assertions are equivalent: \begin{enumerate} \item $f$ is cyclic for $S_b$. \item There exists a sequence of polynomials $(p_n)_n$ such that \[ \|p_nf-1\|_b\to 0,\mbox{ as }n\to \infty. \] \end{enumerate} \end{Lemma} \begin{proof} Follows immediately from the density of polynomials in $\HH(b)$ and the boundedness of $S_b$. \end{proof} The general meaning of our next result is that the set of zeros of a cyclic vector $f\in\HH(b)$ for $S_b$ cannot be too large. \begin{Lemma}\label{Lem1:CS} Let $b$ be a non-extreme point in $\operatorname{ball}(H^{\infty})$ and $f\in\HH(b)$. Assume that $f$ is cyclic for $S_b$. Then we have the following properties: \begin{enumerate} \item $f$ is outer; \item for every $\zeta\in E_0(b)$, $f(\zeta)\neq 0$. \end{enumerate} \end{Lemma} \begin{proof} (a) Since $f$ is cyclic for $S_b$, there exists a sequence of polynomials $(p_n)_n$ such that \begin{equation}\label{eq-cyclicity-1-approximated} \|p_nf-1\|_b\to 0,\mbox{ as }n\to \infty. \end{equation} Now, using the fact that $\HH(b)$ is contractively contained into $H^2$, we get that \[ \|p_nf-1\|_2\to 0,\mbox{ as }n\to \infty. \] This proves that $f$ is cyclic for $S$ in $H^2$, and so $f$ is outer by Beurling's theorem.
\par\smallskip (b) Since the functional $f\longmapsto f(\zeta)$ is bounded on $\HH(b)$ for every $\zeta\in E_0(b)$, we deduce from \eqref{eq-cyclicity-1-approximated} that \[ |p_n(\zeta)f(\zeta)-1|\to 0,\mbox{ as }n\to \infty \] for every $\zeta\in E_0(b)$. This property implies directly that $f(\zeta)\neq 0$. \end{proof} We will encounter in the sequel of the paper some situations where the converse of Lemma~\ref{Lem1:CS} is also true, i.e. where conditions (a) and (b) of Lemma \ref{Lem1:CS} give a necessary and sufficient condition for a function $f\in\HH(b)$ to be cyclic. \par\smallskip We now provide some elementary results concerning cyclic functions for $S_b$. \begin{Lemma}\label{lem:multi-inver} Let $b$ be a non-extreme point in $\operatorname{ball}(H^{\infty})$. Suppose that $f\in\mathfrak M(\HH(b))$ and that $1/f\in \HH(b)$. Then $f$ is cyclic for $S_b$. \end{Lemma} \begin{proof} Using \eqref{eq:density-polynomial}, we see that there exists a sequence of polynomials $(p_n)_n$ such that \[ \|p_n-f^{-1}\|_b\to 0,\mbox{ as }n\to\infty. \] Now, since $f\in\mathfrak M(\HH(b))$, the operator of multiplication by $f$ is bounded on $\HH(b)$, and thus we get that \[ \|p_nf-1\|_b\to 0,\mbox{ as }n\to \infty, \] which by Lemma~\ref{Lem:cyclicite-constant1} implies that $f$ is cyclic for $S_b$. \end{proof} In the following result, the set $\mbox{Hol}(\overline{\mathbb D})$ denotes the space of analytic functions in a neighborhood of the closed unit disc $\overline{\mathbb D}$. \begin{Corollary}\label{cor:cyclicity-holvoisinagedeDbar} Let $b$ be a non-extreme point in $\operatorname{ball}(H^{\infty})$. Let $f\in \mbox{Hol}(\overline{\mathbb D})$ and assume that $\inf_{\overline{\mathbb D}}|f|>0$. Then $f$ is cyclic for $S_b$. \end{Corollary} \begin{proof} When $b$ is a non-extreme point in $\operatorname{ball}(H^{\infty})$, we have $\mbox{Hol}(\overline{\mathbb D})\subset \mathfrak M(\HH(b))$. See \cite[Theorem 24.6]{FM2}. Hence $f\in \mathfrak M(\HH(b))$.
Moreover, the conditions on $f$ also imply that $1/f \in \mbox{Hol}(\overline{\mathbb D})$. In particular, $1/f\in\HH(b)$. It remains to apply Lemma~\ref{lem:multi-inver} in order to get that $f$ is cyclic. \end{proof} \begin{Lemma}\label{Lem:product-multiplicateur} Let $f_1,f_2\in\mathfrak M(\HH(b))$. Then the following assertions are equivalent: \begin{enumerate} \item the product function $f_1f_2$ is cyclic for $S_b$; \item each of the functions $f_1$ and $f_2$ is cyclic for $S_b$. \end{enumerate} \end{Lemma} \begin{proof} $(a)\implies (b)$: Assume that $f_1f_2$ is cyclic. By symmetry, it suffices to prove that $f_1$ is cyclic. Let $\varepsilon>0$. There exists a polynomial $q$ such that $\|qf_1f_2-1\|_b\leq \varepsilon$. Now since the polynomials are dense in $\HH(b)$, we can also find a polynomial $p$ such that $$\|f_2q-p\|_b\leq \frac{\varepsilon}{\|f_1\|_{\mathfrak M(\HH(b))}}\cdot$$ Thus \[ \begin{aligned} \|pf_1-1\|_b\leq & \|pf_1-f_1f_2q\|_b+\|f_1f_2q-1\|_b \leq \|f_1\|_{\mathfrak M(\HH(b))} \|p-f_2q\|_b+\varepsilon \leq 2\varepsilon, \end{aligned} \] which proves that $f_1$ is cyclic. $(b)\implies (a)$: Assume that $f_1$ and $f_2$ are cyclic for $S_b$. Let $\varepsilon>0$. There exists a polynomial $p$ such that $\|pf_1-1\|_b\leq \varepsilon$. On the other hand, there is also a polynomial $q$ such that $$\|qf_2-1\|_b\leq \frac{\varepsilon}{\|pf_1\|_{\mathfrak M(\HH(b))}}\cdot$$ Now we have \[ \begin{aligned} \|pqf_1f_2-1\|_b\leq & \|pqf_1f_2-pf_1\|_b+\|pf_1-1\|_b \leq \|pf_1\|_{\mathfrak M(\HH(b))} \|qf_2-1\|_b+\varepsilon \leq 2\varepsilon. \end{aligned} \] Hence the function $f_1f_2$ is cyclic. \end{proof} Our next result is motivated by the Brown--Shields conjecture and the work \cite{MR3475457} for Dirichlet type spaces $\mathcal D(\mu)$. Indeed, let $\mu$ be a positive finite measure on $\T$, and let $\mathcal D(\mu)$ be the associated Dirichlet space (i.e.
the space of holomorphic functions on $\D$ whose derivatives are square-integrable when weighted against the Poisson integral of the measure $\mu$). It is shown in \cite{MR3110499}, \cite{MR3390195} that in some cases, Dirichlet spaces and de Branges-Rovnyak spaces are connected. More precisely, let $b\in \textrm{ball}(H^{\infty})$ be a rational function (which is not a finite Blaschke product), and let $a$ be its Pythagorean mate. Let also $\mu$ be a positive finite measure on $\T$. Then $\mathcal D(\mu)=\HH(b)$ with equivalent norms if and only if the zeros of $a$ on $\T$ are all simple, and the support of $\mu$ is exactly the set of these zeros \cite{MR3110499}. In the context of Dirichlet spaces, the authors of \cite{MR3475457} prove the Brown--Shields conjecture when the measure $\mu$ has countable support, using two notions of capacity (which they denote $c_{\mu}(F)$ and $c_{\mu}^a(F)$ respectively) and showing that they are comparable: $c_{\mu}(F)\le c_{\mu}^a(F)\le 4 \,c_{\mu}(F)$ for every $F\subset \mathbb T$ (\cite[Lemma 3.1]{MR3475457}). In the same spirit, we introduce the following notions of \emph{capacity} in $\HH(b)$-spaces. For a set $F\subset \mathbb T$, we define $c_1(F)$ and $c_2(F)$ as \[ c_1(F)=\inf\{\|f\|_b: f\in\HH(b),\,|f|\geq 1\mbox{ a.e. on a neighborhood of }F\}, \] and \[ c_2(F)=\inf\{\|f\|_b: f\in\HH(b),\,|f|=1\mbox{ a.e. on a neighborhood of }F\}. \] Observe that $c_1(F)\leq c_2(F)$. We do not know if $c_1(F)$ and $c_2(F)$ are comparable in general in our context of de Branges-Rovnyak spaces. \par\smallskip Our next result should be compared to \cite[Lemma 3.2]{MR3475457}. \begin{Theorem}\label{Thm:capacity} Let $b$ be a non-extreme point in $\operatorname{ball}(H^{\infty})$ and $\zeta\in\mathbb T$. Consider the following assertions: \begin{enumerate} \item $z-\zeta$ is not cyclic for $S_b$; \item $\zeta\in E_0(b)$; \item $c_1(\{\zeta\})>0$; \item $c_2(\{\zeta\})>0$. \end{enumerate} Then $(a)\iff(b)$, $(b)\implies (d)$ and $(c)\implies (a)$.
\end{Theorem} \begin{proof} $(b)\implies (a)$: follows immediately from Lemma~\ref{Lem1:CS}. \par\smallskip $(a)\implies (b)$: our assumption (a) exactly means that \[ [z-\zeta]:=\mbox{Span}((z-\zeta)z^n:n\geq 0)\subsetneq \HH(b). \] Denote by $\pi$ the orthogonal projection from $\HH(b)$ onto $[z-\zeta]^\perp$. First note that $\pi(1)\neq 0$, otherwise we would have $1\in [z-\zeta]$ and then the function $z-\zeta$ would be cyclic for $S_b$, which is a contradiction. Let us now prove that $[z-\zeta]^\perp=\mathbb C \pi(1)$. For every $g\in [z-\zeta]^\perp$ and every $n\geq 0$, we have \[ 0=\langle g,(z-\zeta)z^n\rangle_b = \langle g,z^{n+1}\rangle_b-\overline{\zeta} \langle g,z^n\rangle_b. \] From this, we immediately get that \begin{equation}\label{point} \langle g,z^n\rangle_b={\overline{\zeta}}^n \langle g,1\rangle_b, \quad n\ge 0. \end{equation} This implies that $\langle \pi(1),1\rangle_b\neq 0$ (otherwise, by (\ref{point}) we would have that $\pi(1)$ is orthogonal to $z^n$ for every $n\ge0$, which implies that $ \pi(1)=0$). Secondly, if we define $c:=\frac{\langle g,1\rangle_b}{\langle \pi(1),1\rangle_b}$, then we have $\langle g-c\pi(1),z^n\rangle_b=0$ for every $n\geq 0$. By the density of polynomials in $\HH(b)$, we deduce that $g=c\pi(1)$, which proves that $[z-\zeta]^\perp$ is of dimension $1$, generated by $\pi(1)$. Now consider the continuous linear functional $\rho:\mathbb C\pi(1)\longrightarrow \mathbb C$ defined by $\rho(\alpha\pi(1))=\alpha$ for every $\alpha\in\C$. Let us check that for every $n\geq 0$, \begin{equation}\label{eq13EZ:Thm:capacity} (\rho\circ\pi)(z^n)=\zeta^n. \end{equation} For $n=0$, this is true by definition. Assume that \eqref{eq13EZ:Thm:capacity} is satisfied for some integer $n\geq 0$. Then, \[ (\rho\circ\pi)(z^{n+1})=(\rho\circ\pi)(z^n(z-\zeta))+\zeta (\rho\circ\pi)(z^n)=\zeta^{n+1}. \] By induction, we deduce \eqref{eq13EZ:Thm:capacity} and by linearity, for any polynomial $p$, we have $(\rho\circ \pi)(p)=p(\zeta)$. 
Now, using the continuity of $\rho$ and $\pi$, we obtain that there exists a constant $C>0$ such that \[ |p(\zeta)|\leq C \|p\|_b,\qquad \mbox{for any polynomial }p\in\C[X]. \] Denote by $L_\zeta$ the linear functional defined on $\mathbb C[X]$ by $L_\zeta(p)=p(\zeta)$, $p\in\mathbb C[X]$. Then $L_\zeta$ is continuous on $\C[X]$ endowed with the topology of $\HH(b)$. Hence it extends to a continuous linear map on $\HH(b)$. By the Riesz representation theorem, there exists a unique vector $h_\zeta\in\HH(b)$, $h_\zeta\neq 0$, such that \[ p(\zeta)=L_\zeta(p)=\langle p,h_\zeta\rangle_b, \qquad \mbox{for any polynomial }p\in\C[X]. \] Now, note that for any polynomial $p$, we have \[ \langle p,S_b^* h_\zeta\rangle_b=\langle zp,h_\zeta \rangle_b=\zeta p(\zeta)=\langle p,\overline{\zeta}h_\zeta\rangle_b, \] whence, using \eqref{eq:density-polynomial}, $S_b^*h_\zeta=\overline{\zeta}h_\zeta$. In particular, $\overline{\zeta}$ belongs to the point spectrum of $S_b^*$. But by \eqref{point-spectrum-boundary}, this implies that $b$ has an angular derivative at $\zeta$, which is equivalent to the property that $\zeta\in E_0(b)$. Note that the function $h_\zeta$ is in fact the reproducing kernel $k_\zeta^b$ of $\HH(b)$ at the point $\zeta$. \par\smallskip $(b)\implies (d)$: assume now that $\zeta\in E_0(b)$. Let $f\in\HH(b)$ be such that $|f|=1$ a.e. on a neighborhood $\mathcal O$ of $\zeta$. Let us consider the inner-outer factorisation of $f=f_i f_o$, where $f_i$ is the inner part and $f_o$ the outer part of $f$. Since by definition $|f_i|=1$ a.e. on $\mathbb T$, we have $|f_o|=1$ a.e. on $\mathcal O$. Moreover, $f_o\in\HH(b)$ and $\|f_o\|_b\leq \|f\|_b$. Indeed, $f_o=T_{\bar f_i}f$, where $T_{\bar f_i}$ is the Toeplitz operator with symbol $\bar f_i$ and $\HH(b)$ is invariant with respect to co-analytic Toeplitz operators. Furthermore, \[ \|f_o\|_b=\|T_{\bar f_i}f\|_b\leq \|f_i\|_\infty \|f\|_b=\|f\|_b. \] See \cite[Theorem 18.13]{FM2}. 
Since $f_o$ is outer and $\log|f_o|=0$ a.e. on $\mathcal O$, we have \[ f_o(z)=\lambda\exp\left(\int_{\mathbb T\setminus\mathcal O}\frac{\xi+z}{\xi-z}\log|f_o(\xi)|\,dm(\xi)\right), \] for some constant $\lambda\in\mathbb T$. Hence $f_o$ is analytic in a neighborhood of $\zeta$ and in particular, we deduce that $|f_o(\zeta)|=1$. Using now the fact that $\zeta\in E_0(b)$, we know that there exists a constant $C>0$ such that $|g(\zeta)|\leq C \|g\|_b$ for every $g\in\HH(b)$. Hence \[ 1=|f_o(\zeta)|\leq C \|f_o\|_b\leq C \|f\|_b \] for every function $f\in\HH(b)$ such that $|f|=1$ a.e. on a neighborhood $\mathcal O$ of $\zeta$. We deduce that $c_2(\{\zeta\})\geq C^{-1}>0$. \par\smallskip $(c)\implies (a)$: by contradiction, assume that $z-\zeta$ is cyclic for $S_b$. Then, for every $\varepsilon >0$, we can find a polynomial $q$ such that $\|q(z-\zeta)-1\|_b\leq \varepsilon$. Note that the value of the polynomial $q(z-\zeta)-1$ at $\zeta$ is $-1$. So by continuity, we can find a neighborhood $\mathcal O$ of $\zeta$ on $\mathbb T$ such that $|q(z-\zeta)-1|\geq 1/2$ on $\mathcal O$. Hence $|2(q(z-\zeta)-1)|\geq 1$ on $\mathcal O$ and by definition of $c_1(\{\zeta\})$, we obtain that \[ c_1(\{\zeta\})\leq 2\|q(z-\zeta)-1\|_b\leq 2 \varepsilon. \] Since this is true for every $\varepsilon>0$, we deduce that $c_1(\{\zeta\})=0$, which contradicts $(c)$. \end{proof} If we knew that $c_1(F)$ and $c_2(F)$ were comparable, assertions (a) to (d) in Theorem \ref{Thm:capacity} would be equivalent. This motivates the following question: \begin{question} (i) Does there exist $\kappa>0$ such that $c_2(F)\le \kappa\, c_1(F)$ for every $F\subseteq\T$? \noindent (ii) Is it true that $c_1(F)>0$ if and only if $c_2(F)>0$? \end{question} \begin{Remark} It can be easily seen from Theorem \ref{Thm:capacity} that the condition $\inf_{\overline{\mathbb D}}|f|>0$ in Corollary~\ref{cor:cyclicity-holvoisinagedeDbar} is not necessary for $f$ to be cyclic in $\HH(b)$. 
Indeed, let $b(z)=\frac{1+z}{2}S_{\delta_1}(z)$, where $S_{\delta_1}$ is the singular inner function associated to $\delta_1$, the Dirac measure at the point $1$. See \eqref{E:dec-b-bso}. It is clear that, for $\zeta_0=1$, \[ \int_{\mathbb T}\frac{d\delta_1(\xi)}{|\zeta_0-\xi|^2}=\infty. \] Hence $1\notin E_0(b)$. Therefore, by Theorem~\ref{Thm:capacity}, the function $z-1$ is cyclic for $S_b$ while $\inf_{\overline{\mathbb D}}|z-1|=0$. \end{Remark} \begin{Corollary} Let $b$ be a non-extreme point in $\operatorname{ball}(H^{\infty})$. Let $p$ be a polynomial. The following assertions are equivalent: \begin{enumerate} \item $p$ is cyclic for $S_b$. \item $p(z)\neq 0$ for every $z\in\mathbb D\cup E_0(b)$. \end{enumerate} \end{Corollary} \begin{proof} $(a)\implies (b)$: follows immediately from Lemma~\ref{Lem1:CS}. $(b)\implies (a)$: factorise the polynomial $p$ as $p(z)=c\prod_{j=1}^n (z-\zeta_j)$, where by hypothesis the roots $\zeta_j$ belong to $\mathbb C\setminus (\mathbb D\cup E_0(b))$. On the one hand, if $|\zeta_j|>1$, then, according to Corollary~\ref{cor:cyclicity-holvoisinagedeDbar}, the function $z-\zeta_j$ is cyclic for $S_b$. On the other hand, if $|\zeta_j|=1$, then $\zeta_j\not\in E_0(b)$ and Theorem~\ref{Thm:capacity} implies that the function $z-\zeta_j$ is also cyclic for $S_b$. Thus, for every $1\leq j\leq n$, the function $z-\zeta_j$ is cyclic and it follows from Lemma~\ref{Lem:product-multiplicateur} that $p$ itself is cyclic for $S_b$. \end{proof} This result can be slightly generalised: \begin{Corollary}\label{Cor:cyclicite-fonction-holomorphe-dans-Dbar} Let $b$ be a non-extreme point in $\operatorname{ball}(H^{\infty})$. Let $f\in\mbox{Hol}(\overline{\mathbb D})$. The following assertions are equivalent: \begin{enumerate} \item $f$ is cyclic for $S_b$. \item $f$ is outer and $f(\zeta)\neq 0$ for every $\zeta\in E_0(b)$. \end{enumerate} \end{Corollary} \begin{proof} $(a)\implies (b)$: follows from Lemma~\ref{Lem1:CS}.
$(b)\implies (a)$: since $f$ is outer and $f\in\mbox{Hol}(\overline{\mathbb D})$, $f$ does not vanish on the unit disc and has at most a finite number of zeros on $\mathbb T$ (otherwise by compactness and the uniqueness principle for holomorphic functions, $f$ would vanish identically). Let $\zeta_1,\zeta_2,\dots,\zeta_n$ be the (possible) zeros of $f$ on $\mathbb T$. Then there exists a function $g\in\mbox{Hol}(\overline{\mathbb D})$ with $\inf_{\overline{\mathbb D}}|g|>0$ such that \[ f(z)=\prod_{j=1}^n (z-\zeta_j) g(z),\qquad z\in\overline{\mathbb D}. \] Our assumption implies that for every $1\leq j\leq n$, $\zeta_j\notin E_0(b)$, and thus by Theorem~\ref{Thm:capacity}, the function $z-\zeta_j$ is cyclic for $S_b$. Moreover, by Corollary~\ref{cor:cyclicity-holvoisinagedeDbar}, the function $g$ is also cyclic. Now it follows from Lemma~\ref{Lem:product-multiplicateur} that $f$ itself is cyclic for $S_b$. \end{proof} \begin{Example}\label{example-rkhardycyclic} Let $b$ be a non-extreme point in $\operatorname{ball}(H^{\infty})$. For every $\lambda\in\mathbb D$, $k_\lambda$ is a cyclic vector for $S_b$. Indeed, it is clear that $k_\lambda(z)=(1-\overline{\lambda}z)^{-1}$ satisfies the conditions of Corollary~\ref{Cor:cyclicite-fonction-holomorphe-dans-Dbar}. Hence $k_\lambda$ is cyclic. \par\smallskip In particular, by \eqref{eq:density-crk}, the set of cyclic vectors for $S_b$ spans a dense subspace in $\HH(b)$. \end{Example} \par\medskip \section{The rational case}\label{sec4} The main result of this section is a characterisation of cyclic functions for $S_b$ when $b$ is a rational function which is not a finite Blaschke product. As mentioned already in the Introduction, this result can be derived from the work \cite{MR4246975} by Luo--Gu--Richter, but we provide here an elementary proof, the ideas of which will turn out to be relevant also to the case where $b=(1+I)/2$ (see Section \ref{sec5} below).
Note that Theorem \ref{Thm:rational-case} extends a result proved in \cite{MR3309352} in the particular case where $b(z)=(1+z)/2$. \begin{Theorem}\label{Thm:rational-case} Let $b\in \operatorname{ball}(H^{\infty})$ and assume that $b$ is rational (but not a finite Blaschke product). Let $a_1$ be the associated polynomial given by \eqref{eq:definition of a}, and let $f\in \HH(b)$. Then the following assertions are equivalent: \begin{enumerate} \item $f$ is cyclic for $S_b$. \item $f$ is an outer function and for every $1\leq k\leq n$, $f(\zeta_k)\neq 0$. \end{enumerate} \end{Theorem} \begin{proof} $(a)\implies (b)$: according to \eqref{eq:E0b-rationnel}, we know that $E_0(b)=\{\zeta_k:1\leq k\leq n\}$. Hence this implication follows from Lemma~\ref{Lem1:CS}. $(b)\implies (a)$: according to \eqref{uUUiipPPS}, write $f=a_1\widetilde{f}+p$, where $\widetilde f\in H^2$ and $p\in\P_{N-1}$. By \eqref{q}, $p(\zeta_k)\neq 0$, $1\leq k\leq n$. Let now $r\in\P_{N-1}$ be the unique polynomial satisfying the following interpolation properties: for every $1\leq k\leq n$, \[ r^{(j)}(\zeta_k)=\begin{cases} \frac{1}{p(\zeta_k)}& \mbox{if }j=0 \\ -\frac{1}{p(\zeta_k)}\,{\sum_{\ell=0}^{j-1}\binom{j}{\ell}r^{(\ell)}(\zeta_k)p^{(j-\ell)}(\zeta_k)}& \mbox{if }1\leq j\leq m_{k}-1. \end{cases} \] This polynomial $r$ can be constructed using Hermite polynomial interpolation, see for instance \cite[Chapter 1, E. 7]{MR1367960}. By Leibniz's rule, we easily see that for every $1\leq k\leq n$ and $0\leq j\leq m_k-1$, we have $(rp-1)^{(j)}(\zeta_k)=0$. Hence $a_1$ divides the polynomial $rp-1$. In other words, there exists a polynomial $q$ such that $rp-1=a_1 q$. Using that $f$ is outer, we can find a sequence of polynomials $(q_n)_n$ such that $\|q_nf+r\widetilde{f}+q\|_2\to 0$ as $n\to \infty$. Define now a sequence of polynomials $(p_n)$ by $p_n=a_1q_n+r$, $n\geq 1$. 
Observe that \[ \begin{aligned} p_nf-1=&(a_1q_n+r)f-1=a_1q_nf+r(a_1\widetilde{f}+p)-1\\ =&a_1(q_nf+r\widetilde{f})+rp-1=a_1(q_nf+r \widetilde{f}+q). \end{aligned} \] Then it follows from \eqref{eq:norm in h(b)} that \[ \vvvert p_nf-1\vvvert_b=\vvvert a_1(q_nf+r \widetilde{f}+q) \vvvert_b=\|q_nf+r \widetilde{f}+q\|_2\to 0\,\textrm{ as } n \to \infty. \] Therefore $f$ is cyclic for $S_b$. \end{proof} \begin{Example} Let $b(z)=\frac{1}{2}(1-z^2)$. Then it is proved in \cite{MR3503356} that $a(z)=c(z-i)(z+i)$, for some constant $c$. Thus, according to Theorem~\ref{Thm:rational-case}, a function $f\in\HH(b)$ is cyclic for $S_b$ if and only if $f$ is outer, $f(i)\neq 0$ and $f(-i)\neq 0$. \end{Example} \section{The case where $b=(1+I)/2$}\label{sec5} Our main result in this section is the following: \begin{Theorem}\label{Thm:main-b-1+I}\label{sec5-th} Let $I$ be an inner function, $I\not\equiv 1$, and assume that its Clark measure $\sigma_1$ associated to point $1$ (defined in \eqref{def-clark-measure}) is a discrete measure. Let $\{\zeta_n:n\geq 1\}=\{\zeta\in E_0(I): I(\zeta)=1\}$. Let $b=(1+I)/2$, and $f\in\HH(b)$ which we decompose according to \eqref{decomposition2-hb} as $f=(1-I)g_1+g_2$, with $g_1\in H^2$, $g_2\in K_I$. Assume that: \begin{enumerate} \item $g_1,g_2\in H^\infty$; \item $f$ is outer; \item we have \[ \sum_{n\geq 1}\frac{1}{|f(\zeta_n)|^2 |I'(\zeta_n)|}<\infty. \] \end{enumerate} Then $f$ is cyclic for $S_b$. \end{Theorem} \begin{proof} The proof proceeds along the same lines as in the rational case. According to Lemma~\ref{Lem:multiplier-b-1+I}, $f\in\mathfrak M(\HH(b))$ and by Lemma~\ref{lem:existence-limite-radiale} we have $f(\zeta_n)=g_2(\zeta_n)$, $n\geq 1$. \par\smallskip \noindent {\emph{First step}}: We claim that there exists a sequence of functions $(\psi_n)_n$ in $\HH(b)$ such that $\|\psi_n f-1\|_b\to 0$ as $n\to\infty$. 
\par\smallskip In order to construct the sequence $(\psi_n)_n$, let us first consider the function $r$ given by \[ r=\sum_{n=1}^\infty \frac{1}{f(\zeta_n)}\frac{k_{\zeta_n}^I}{\|k_{\zeta_n}^I\|_2^2}\cdot \] Recall that by \eqref{eq:nice-formula-derivative-norme-kernel-clark-measure}, $\|k_{\zeta_n}^I\|_2^2=|I'(\zeta_n)|$. Combining this with condition (c) and the fact that the family $(k_{\zeta_n}^I/\|k_{\zeta_n}^I\|_2)_n$ forms an orthonormal basis of $K_I$ (since $\sigma_1$ is discrete, Clark's theorem holds true), we see that the series defining the function $r$ is convergent in $K_I$. In other words, $r\in K_I$ and $r(\zeta_n)=1/f(\zeta_n)=1/g_2(\zeta_n)$ for every $n\geq 1$. \par\smallskip Let us now prove that $rg_2-1\in (1-I)H^2$. Observe that Lemma~\ref{Lem:model-space-KI.KIcontenu dans KI^2} implies that $rg_2\in K_{I^2}=K_I\oplus IK_I$. Hence there exist $\varphi_1,\varphi_2\in K_I$ such that $rg_2-1=\varphi_1+I\varphi_2-1$. Since $r(\zeta_n)g_2(\zeta_n)-1=0$, we have $\varphi_1(\zeta_n)+\varphi_2(\zeta_n)-1=0$ for every $n\geq 1$. Note that \((1-\overline{I(0)})^{-1}(1-\overline{I(0)}I)=(1-\overline{I(0)})^{-1} k_0^I \in K_{I}\) and \[ (1-\overline{I(0)})^{-1}(1-\overline{I(0)}I(\zeta _{n}))=1\qquad\textrm{for every}\ n\ge 1. \] Since the family \(\bigl (k_{\zeta _{n}}^{I} \bigr)_{n} \) is complete in \(K_{I}\), and since \(\varphi _{1}(\zeta _{n})+\varphi _{2}(\zeta _{n})=1\) for every \(n\ge 1\), we deduce that \[ \varphi _{1}+\varphi_{2}=(1-\overline{I(0)})^{-1}(1-\overline{I(0)}I). \] Hence \begin{align*} rg_{2}-1&=\varphi _{1}+I\varphi _{2}-1 =-\varphi _{2}+(1-\overline{I(0)})^{-1}(1-\overline{I(0)}I)+I\varphi _{2}-1\\ &=(1-I)(-\varphi _{2})+(1-\overline{I(0)})^{-1}(1-\overline{I(0)}I)-1. \end{align*} Observe that \[ (1-\overline{I(0)})^{-1}(1-\overline{I(0)}I)-1=(1-\overline{I(0)})^{-1}\overline{I(0)}(1-I), \] from which it follows that \[ rg_{2}-1=(1-I)(-\varphi _{2}+\overline{I(0)}(1-\overline{I(0)})^{-1}).
\] This proves that \(rg_{2}-1\) belongs to \((1-I)H^{2}\). Write \(rg_{2}-1\) as \(rg_{2}-1=(1-I)g_{3}\), with \(g_{3}\in H^{2}\). \par\smallskip Using now that $f$ is outer, and that $rg_1\in H^2$ (as $g_1\in H^{\infty}$) we can find a sequence of polynomials $(q_n)_n$ such that $\|q_nf+rg_1+g_3\|_2\to 0$, as $n\to\infty$. We then define for each $n\geq 1$ a function $\psi_n$ as \[ \psi_n:=(1-I)q_n+r, \] where $q_n$ and $r$ are defined above. Note that $\psi_n\in (1-I)H^2+K_I=\HH(b)$ and \[ \begin{aligned} \psi_n f-1=&(1-I)q_nf+rf-1\\ =&(1-I)q_nf+(1-I)rg_1+rg_2-1\\ =&(1-I)(q_nf+rg_1+g_3). \end{aligned} \] It follows from Lemma~\ref{machin} that there exists a positive constant $C$ such that \[ \begin{aligned} \|\psi_nf-1\|_b =&\|(1-I)(q_nf+rg_1+g_3)\|_b\\ \le& C\, \vvvert (1-I)(q_nf+rg_1+g_3) \vvvert_b\\ =& C\, \|q_nf+rg_1+g_3\|_2, \end{aligned} \] from which it follows that $\|\psi_nf-1\|_b\to 0$ as $n\to\infty$. \par\smallskip {\emph{Second step}}: Let us now prove that there exists a sequence of polynomials $(p_{n})_n$ such that $\|p_{n}f-1\|_b\to 0$ as $n\to\infty$. \par\smallskip By the density of polynomials in $\HH(b)$, we can find a sequence of polynomials $(p_{n})_n$ such that $\|p_{n}-\psi_n\|_b\to 0$ as $n\to\infty$. Now write \[ \begin{aligned} \|p_{n}f-1\|_b\leq& \|p_{n}f-\psi_n f\|_b+\|\psi_n f-1\|_b\\ \leq & \|f\|_{\mathfrak M(\HH(b))} \|p_{n}-\psi_n\|_b+\|\psi_n f-1\|_b, \end{aligned} \] and by the choice of the sequence $(p_{n})_n$ and the first step, we get the conclusion of the second step. \par\smallskip We finally conclude that $f$ is cyclic for $S_b$. \end{proof} \begin{Remark} If $I$ is an inner function such that, for some $\alpha\in\mathbb T$, its Clark measure $\sigma_\alpha$ is a discrete measure and $I\not\equiv \alpha$, then we may apply Theorem~\ref{sec5-th} replacing $I$ by $\bar\alpha I$ and $b=(1+I)/2$ by $b=(1+\bar\alpha I)/2$.
\end{Remark} \begin{Example} Let \(I=S_{\delta _{1}}\) be the inner function associated to the measure \(\delta _{1}\): \[ I(z)=\exp\Bigl (-\dfrac{1+z}{1-z} \Bigr) . \] In this case we can compute explicitly the Clark basis of \(K_{I}\) associated to point $1$. We have \(E_1=\{\zeta\in E_0(I):I(\zeta)=1\}=\{\zeta _{n}\;;\;n\in\mathbb{Z}\}\) with \[\zeta _{n}=\dfrac{2i\pi n-1}{2i\pi n+1}\quad\textrm{and}\quad I'(\zeta _{n})=-\dfrac{1}{2}(2i\pi n+1)^{2},\ n\in\mathbb{Z}.\] Therefore, if \(f\in\mathcal{H}(b)\) is outer, with \(f=(1-I)g_{1}+g_{2}\), \(g_{1}\in H^{\infty}\), \(g_{2}\in K_{I}\cap H^{\infty}\), and if \[ \sum_{n\in\mathbb{Z}}\dfrac{1}{|f(\zeta _{n})|^{2}}\cdot\dfrac{1}{4n^{2}\pi ^{2}+1}<+\infty, \] then \(f\) is cyclic for \(S_{b}\). \end{Example} \begin{Remark} There exists a recipe to construct an inner function satisfying the hypothesis of Theorem~\ref{sec5-th}. Let $\sigma$ be a positive discrete measure on $\mathbb T$ and let $H\sigma$ be its Herglotz transform, \[ H\sigma(z)=\int_{\mathbb T}\frac{\xi+z}{\xi-z}\,d\sigma(\xi),\qquad z\in\mathbb D. \] We easily see that $H\sigma$ defines an analytic function on $\mathbb D$ and satisfies $\Re e (H\sigma(z))\geq 0$ for every $z\in\D$. Now define a function $I$ on $\mathbb D$ as \[ I=\frac{H\sigma-1}{H\sigma+1}\cdot \] Since $\Re e H\sigma\geq 0$, it is easy to check that $I\in H^\infty$ and $|I|\leq 1$. Moreover, for every $0<r<1$ and $\zeta\in\mathbb T$, we have \begin{equation}\label{trick-construction-inner-function} |I(r\zeta)|^{2}=\frac{(\Re e (H\sigma(r\zeta))-1)^2+(\Im m (H\sigma(r\zeta)))^2}{(\Re e (H\sigma(r\zeta))+1)^2+(\Im m (H\sigma(r\zeta)))^2}\cdot \end{equation} Since $\sigma$ is a singular measure, it is well-known that for almost all $\zeta\in\mathbb T$, we have \[ \Re e(H\sigma(r\zeta))=\int_{\mathbb T}\frac{1-r^2}{|\xi-r\zeta|^2}\,d\sigma(\xi)\to 0\quad\mbox{as }r\to 1^-. \] See \cite[Corollary 3.4]{FM1}.
Moreover, the radial limit of $\Im{m}(H\sigma)$ also exists and is finite for almost all $\zeta\in\mathbb T$. See \cite[page 113]{FM1}. Thus, it follows from \eqref{trick-construction-inner-function} that $|I(\zeta)|=1$ for almost all $\zeta\in\mathbb T$, meaning that $I$ is an inner function. Of course, we have $I\not\equiv 1$. Now, we easily check that \[ \frac{1-|I(z)|^2}{|1-I(z)|^2}=\Re e(H\sigma(z))=\int_{\mathbb T}\frac{1-|z|^2}{|\xi-z|^2}\,d\sigma(\xi), \] which implies by unicity of the Clark measure that $\sigma_1=\sigma$. Therefore $I$ satisfies the assumptions of Theorem~\ref{sec5-th}. \end{Remark} \begin{Corollary}\label{Cor-rk-cyclique} Let $I$ be an inner function, $I\not\equiv 1$, and assume that $\sigma_1$ is a discrete measure. Let $b=(1+I)/2$. Then $k_\lambda^b$ is a cyclic vector for $S_b$ for every $\lambda\in\mathbb D$. \end{Corollary} \begin{proof} Let us prove that $k_\lambda^b$ satisfies the assumptions $(a),(b)$ and $(c)$ of Theorem~\ref{sec5-th}. First, using that $b=(1+I)/2$, straightforward computations show that \[ \frac{1-\overline{b(\lambda)}b(z)}{1-\overline{\lambda}z}=(1-I(z))\frac{1}{4}\cdot\frac{(1-\overline{I(\lambda)})}{1-\overline{\lambda}z}+\frac{1}{2}\cdot\frac{1-\overline{I(\lambda)}I(z)}{1-\overline{\lambda}z}\cdot \] In other words, $k_\lambda^b$ can be written as $k_\lambda^b=(1-I)g_1+g_2$, with $g_1=\frac{1}{4}(1-\overline{I(\lambda)})k_\lambda$ and $g_2=\frac{1}{2}k_\lambda^I$. In particular, $g_1,g_2\in H^\infty$ and $k_\lambda^b$ satisfies the assumption $(a)$. \par\smallskip Observe now that $\Re e(1-\overline{b(\lambda)}b(z))\geq 0$ and $\Re e(1-\overline{\lambda}z)\geq 0$ for every $z\in\D$, which implies that the functions $1-\overline{b(\lambda)}b(z)$ and $1-\overline{\lambda}z$ are outer. See \cite[page 67]{MR1864396}. So $k_\lambda^b$ is outer as the quotient of two outer functions. It remains to check that $k_{\lambda}^b$ satisfies assumption $(c)$. 
But \[ |k_\lambda^b(\zeta_n)|=\left|\frac{1-b(\lambda)}{1-\overline{\lambda}\zeta_n}\right|\geq \frac{|1-b(\lambda)|}{1+|\lambda|}, \] and the property $(c)$ follows from the fact that \[ \sum_{n\geq 1}\frac{1}{|I'(\zeta_n)|}=\sum_{n\geq 1}\sigma_1(\{\zeta_n\})\le\sigma_1(\T)<+\infty. \] Thus $k_\lambda^b$ satisfies the assumptions $(a),(b),(c)$ of Theorem~\ref{sec5-th}, and $k_\lambda^b$ is cyclic for $S_b$. \end{proof} It should be noted that in Corollary~\ref{Cor-rk-cyclique}, the reproducing kernels $f=k_\lambda^b$, $\lambda\in \mathbb D$, which are cyclic for $S_b$, are such that $1/f\in H^\infty$. As we already observed in Lemma~\ref{lem:multi-inver}, certain invertibility conditions for $f$ make cyclicity easier. Using Theorem \ref{Thm:main-b-1+I}, we now construct a family of functions $f$ which are cyclic for $S_b$ but are such that $1/f\notin H^2$. \begin{Example} Let $I$ be a non-constant inner function, and assume that $\sigma_1$ is a discrete measure. Let $b=(1+I)/2$ and $f=(1+I)k_{\lambda}^I$ for some $\lambda\in\D$. Then $f$ is cyclic for $S_b$ and $1/f\notin L^2$. \end{Example} \begin{proof} First observe that \[ f=(1-I)(-k_\lambda^I)+2k_\lambda^I, \] so that $f=(1-I)g_1+g_2$, with $g_2=-2g_1=2k_\lambda^I \in H^\infty\cap K_I$. In particular, $f$ satisfies condition $(a)$ of Theorem~\ref{Thm:main-b-1+I}. Moreover, the function $f$ is outer as the product of two outer functions (use the same arguments as in the proof of Corollary~\ref{Cor-rk-cyclique}). Finally, since $|f(\zeta_n)|=2|k_\lambda^I(\zeta_n)|\geq |1-I(\lambda)|$, $f$ satisfies condition $(c)$. Hence by Theorem~\ref{Thm:main-b-1+I}, $f$ is cyclic for $S_b$. \par\smallskip Let us now check that $1/f\not \in L^2$. First observe that there exist two positive constants $C_1$ and $C_2$ such that \[ C_1 \frac{1}{|1+I(\zeta)|}\le \frac{1}{|f(\zeta)|}\le C_2 \frac{1}{|1+I(\zeta)|}\quad\mbox{ for a.e.
}\zeta\in\T, \] because \[ \frac{1-|I(\lambda)|}{2}\leq |k_\lambda^I(\zeta)|\leq \frac{2}{1-|\lambda|}\cdot \] Now assume that $1/f$ belongs to $L^2$. Then $1/(1+I)\in L^2$. But since $1+I$ is outer, we get that $1/(1+I)\in H^2$. See \cite[page 43]{MR1864396}. As \[ \frac{1}{1+I}=\overline{\frac{I}{1+I}}\quad\mbox{ for a.e. }\zeta\in\T, \] we deduce that $1/(1+I)$ also belongs to $\overline{H^2}$. Thus $1/(1+I)$ is constant, which is a contradiction. \end{proof} In the context of Corollary~\ref{Cor-rk-cyclique}, it is easy to see that $(a,b)$ forms a corona pair, and we have seen that $k_\lambda^b$ is cyclic for $S_b$. In fact, this cyclicity result holds true under the (HCR) condition only. \begin{Proposition}\label{chose} Let $b$ be a non-extreme point in $\operatorname{ball}(H^{\infty})$, and let $a$ be its pythagorean mate. Assume that $(a,b)$ satisfies (HCR). Then the following assertions hold: \begin{enumerate} \item $k_\lambda^b$ is cyclic for $S_b$ for every $\lambda\in\mathbb D$; \item If $b$ is furthermore assumed to be outer, then $bk_\lambda$ is also cyclic for $S_b$ for every $\lambda\in\mathbb D$. In particular, $b$ is a cyclic vector for $S_b$. \end{enumerate} \end{Proposition} \begin{proof} (a) We have $p(S_b)k_\lambda^b=(1-\overline{b(\lambda)}b)p(S_b)k_\lambda$ for every polynomial $p\in\C[X]$. Since $(a,b)$ satisfies (HCR), we have $\HH(b)=\MM(\bar a)$, and $b$ is a multiplier of $\HH(b)$. See \cite[Theorems 28.7 and 28.3]{FM2}. In particular, the multiplication operator $T=M_{1-\overline{b(\lambda)}b}$ is bounded on $\HH(b)$ and we have \[ p(S_b)k_\lambda^b=Tp(S_b)k_\lambda\qquad \textrm{for every polynomial }p\in\C[X]. \] Since $k_\lambda$ is cyclic for $S_b$ (see Example~\ref{example-rkhardycyclic}), in order to check that $k_\lambda^b$ is also cyclic for $S_b$ it is sufficient to check that $T$ has dense range. Let $h\in\HH(b)$ be such that $h\perp \mbox{Range}(T)$.
Then $h\perp Tk_\mu=k_\mu-\overline{b(\lambda)}bk_\mu$ for every $\mu\in\mathbb D$. Lemma~\ref{Lem:completenes} now implies that $h=0$, proving that $T$ has dense range. It follows that $k_\lambda^b$ is cyclic for $S_b$. (b) The proof of (b) proceeds along the same lines as (a). We have \[ p(S_b)bk_\lambda=Vp(S_b)k_\lambda\qquad \textrm{for every polynomial }p\in\C[X], \] where $V=M_b$ is the multiplication operator by $b$. As previously, in order to show that $bk_\lambda$ is cyclic for $S_b$, it is sufficient to check that $V$ has dense range. Let $h\in\HH(b)$ be such that $h\perp \mbox{Range}(V)$. Then $h\perp Vk_\mu=bk_\mu$ for every $\mu\in\mathbb D$. By \eqref{eq1EZD:lem-completeness}, it then follows that $h^+(\mu)=0$ for every $\mu\in\mathbb D$. Then $h^+=0$ and $T_{\bar b}h=T_{\bar a}h^+=0$. But, since $b$ is outer, $T_{\bar b}$ is one-to-one, which implies that $h=0$. It then follows that $bk_\lambda$ is cyclic for $S_b$. \end{proof} We finish the paper with the following question: \begin{question} Does Proposition \ref{chose} hold true without the assumption that $(a,b)$ satisfies (HCR)? \end{question} \bibliographystyle{plain}
\section{Introduction} \label{Introduction} Detecting communities (or clusters) in graphs is a fundamental problem that has many applications, such as finding like-minded people in social networks~\cite{Ref5}, and improving recommendation systems~\cite{Ref6}. Community detection is closely related to various problems in network science, such as network structure reconstruction~\cite{Keke-Huang-2019}, networks with dynamic interactions~\cite{Zhen-Wang-2020}, and complex networks~\cite{Keke-Huang-2020}. Random graph models~\cite{CommunityDetectionInGraphs2010, esmaeili2021community} are frequently used in the analysis of community detection, prominent examples of which include the stochastic block model~\cite{esmaeili2021community,Ref1,Ref4} and the censored block model~\cite{saade2015spectral, hajek2015exact}. In the context of these models, community detection recovers latent node labels (communities) by observing the edges of a graph. Community detection utilizes several metrics for residual error as the size of the graph grows: correlated recovery \cite{Ref8, Ref9, Ref10, Ref11} (recovering the hidden community better than random guessing), weak recovery \cite{Ref12, Ref13, Ref15} (the fraction of misclassified labels in the graph vanishes with probability converging to one), and exact recovery~\cite{Ref4, Ref16, Ref17} (all nodes are classified correctly with high probability). Recovery techniques include spectral methods~\cite{Ref4,Statistical.computational.Tradeoffs.in.Planted.Problems.and.Submatrix.Localization.with.a.Growing.Number.of.Clusters.and.Submatrices}, belief propagation~\cite{mossel2014belief}, and \ac{SDP} relaxation~\cite{amini2018semidefinite}. Semidefinite programming is a computationally efficient convex optimization technique that has shown its utility in solving signal processing problems~\cite{SDP.SignalProcessing, ConvexOptimization.SignalProcessing}.
In the context of community detection, \ac{SDP} was introduced in~\cite{Ref24}, where it was used for solving a minimum bisection problem, obtaining a sufficient condition that is not optimal. In~\cite{Ref25}, a \ac{SDP} relaxation was considered for a maximum bisection problem. For the binary symmetric stochastic block model,~\cite{Ref18} showed that the \ac{SDP} relaxation of maximum likelihood can achieve the optimal exact recovery threshold with high probability. These results were later extended to more general models in~\cite{Ref19}. Also, \cite{esmaeili2020community} showed the power of \ac{SDP} for solving a community detection problem in graphs with a secondary latent variable for each node. Community detection on graphs has been widely studied in part because the graph structure is amenable to analysis and admits efficient algorithms. In practice, however, the available information for inference is often not purely graphical. For instance, in a citation network, besides the names of the authors, there is additional {\em non-graph} information, such as keywords and abstracts, that can be used to improve the performance of community detection algorithms. For illustration, consider public-domain libraries such as Citeseer and Pubmed. Citation networks in these libraries have been the subject of several community detection studies, which can be augmented by incorporating individual (non-graph) attributes of the documents that affect the likelihood of community memberships. The non-graph data assisting in the solution of graph problems is called {\em side information.} In~\cite{Ref7,our2}, the effect of side information on the phase transition of exact recovery was studied for the binary symmetric stochastic block model. In~\cite{our3,ISIT-2018-1,ISIT-2018-2}, the effect of side information was studied on the phase transition of weak and exact recovery as well as the phase transition of belief propagation in the single community stochastic block model.
The impact of side information on the performance of belief propagation was further studied in~\cite{Ref20,ISIT-2018-1}. The contribution of this paper is the analysis of the impact of side information on \ac{SDP} solutions for community detection. More specifically, we study the behavior of the \ac{SDP} detection threshold under the exact recovery metric. We consider graphs following the binary censored block model and the binary symmetric stochastic block model. We begin with the development of \ac{SDP} for partially-revealed labels and noisy labels, which are easier to grasp and visualize. This builds intuition for the more general setting, in which we study side information with multiple features per node, each of which is a random variable with arbitrary but finite cardinality. The former results also facilitate the understanding and interpretation of the latter. Most categories of side information give rise to a complete quadratic form in the likelihood function, which presents challenges in the analysis of their semidefinite programming relaxation. Overcoming these challenges is one of the main technical contributions of the present work. Simulation results show that the thresholds calculated in this paper can also shed light on the behavior of \ac{SDP} in graphs of modest size. Notation: Matrices and vectors are denoted by capital letters, and their elements by lowercase letters. $\mathbf{I}$ is the identity matrix and $\mathbf{J}$ the all-one matrix. $S \succeq 0$ indicates a positive semidefinite matrix and $S \ge 0$ a matrix with non-negative entries. $||S||$ is the spectral norm, $\lambda_{2}(S)$ the second smallest eigenvalue (for a symmetric matrix), and $\langle \cdot,\cdot\rangle$ is the inner product. We abbreviate $[n] \triangleq \{ 1, \cdots,n \}$.
Probabilities are denoted by $\mathbb{P}(\cdot)$ and random variables with Bernoulli and Binomial distribution are indicated by $\mathrm{Bern}(p)$ and $\mathrm{Binom}(n,p)$, respectively. \section{System Model} This paper analyzes community detection in the presence of a graph observation as well as individual node attributes. The graphs in this paper follow the binary stochastic block model and the censored block model, and side information is in the form of either partially revealed labels, noisy labels, or an alphabet other than the labels. This paper considers a fully connected regime, guaranteeing that exact recovery is possible. Throughout this paper, the graph adjacency matrix is denoted by $G$. Node labels are independent and identically distributed across the $n$ nodes, taking values $+1$ and $-1$. The vector of node labels is denoted by $X$, and a corresponding vector of side information is denoted by $Y$. The log-likelihood of the graph and side information is \begin{equation*} \log \mathbb {P}(G,Y|X) = \log\mathbb{P}(G|X)+ \log\mathbb {P}(Y|X) , \end{equation*} i.e., $G$ and $Y$ are independent given $X$. \subsection{Binary Censored Block Model} The model consists of an Erd\H{o}s-R\'enyi graph with $n$ nodes and edge probability $p=a\frac{\log n}{n}$ for a fixed $a>0$. The nodes belong to two communities represented by the binary node labels, which are latent. The entries $G_{ij}\in \{-1,0,1\}$ of the weighted adjacency matrix of the graph have a distribution that depends on the community labels $x_i$ and $x_j$ as follows: \[ G_{ij} \sim \begin{cases} p(1-\xi) \delta_{+1} + p\xi \delta_{-1} +(1-p)\delta_{0} & \text{ when } x_i = x_j\\ p(1-\xi) \delta_{-1} + p\xi \delta_{+1} +(1-p)\delta_{0} & \text{ when } x_i \neq x_j \end{cases} \] where $\delta$ is the Dirac delta function and $\xi \in [0, \frac{1}{2}]$ is a constant. Further, $G_{ii}=0$ and $G_{ij}=G_{ji}$. For all $j>i$, the edges $G_{ij}$ are mutually independent conditioned on the node labels.
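As an illustration of the model just described, the following Python sketch samples a weighted adjacency matrix from the binary censored block model (the function name and interface are ours, introduced only for illustration):

```python
import math
import random

def sample_censored_block_model(x, a, xi, rng=random):
    """Sample the weighted adjacency matrix G of the binary censored block
    model: an edge between i and j is observed with probability
    p = a*log(n)/n; an observed edge carries the sign x_i*x_j with
    probability 1-xi and the opposite sign with probability xi."""
    n = len(x)
    p = a * math.log(n) / n
    G = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:          # the edge is observed
                sign = x[i] * x[j]
                if rng.random() < xi:     # the sign is flipped by the noise
                    sign = -sign
                G[i][j] = G[j][i] = sign  # G is symmetric with zero diagonal
    return G
```

By construction $G_{ii}=0$, $G_{ij}=G_{ji}$, and the entries take values in $\{-1,0,1\}$, matching the distribution displayed above.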
The log-likelihood of $G$ is \begin{equation} \label{BCBM-equ1} \log \mathbb{P}(G|X) = \frac{1}{4}T_{1}X^{T}GX+C_{1}, \end{equation} where $T_{1} \triangleq \log \Big( \frac{1-\xi}{\xi} \Big)$ and $C_{1}$ is a deterministic scalar. \subsection{Binary Symmetric Stochastic Block Model} In this model, if nodes $i,j$ belong to the same community, $G_{i,j}\sim \mathrm{Bern}(p)$, otherwise $G_{ij} \sim \mathrm{Bern}(q)$ with \begin{equation*} p=a\frac{\log n}{n},\qquad q=b\frac{\log n}{n}, \end{equation*} and $a\geq b>0$. Then the log-likelihood of $G$ is \begin{equation} \label{BSSBM-equ1} \log \mathbb{P}(G|X) = \frac{1}{4}T_{1}X^{T}GX+C_{2} , \end{equation} where $T_{1}\triangleq \log \Big (\frac{p(1-q)}{q(1-p)} \Big )$ and $C_{2}$ is a deterministic scalar. \subsection{Side Information: Partially Revealed Labels} Partially-revealed side information vector $Y$ consists of elements that with probability $1-\epsilon$ are equal to the true label and with probability $\epsilon$ take value $0$, i.e., are erased. Conditioned on each node label, the corresponding side information is assumed independent from other labels and from the graph edges. Thus, the log-likelihood of $Y$ is \begin{equation} \label{P-equ1} \log \mathbb{P}(Y|X)= Y^{T}Y \log \bigg( \frac{1-\epsilon}{\epsilon} \bigg)+n \log(\epsilon). \end{equation} \subsection{Side Information: Noisy Labels } Noisy-label side information vector $Y$ consists of elements that with probability $1-\alpha$ agree with the true label ($y_i=x^*_i$) and with probability $\alpha$ are erroneous ($y_i=-x^*_i$), where $\alpha \in (0, 0.5)$. Then the log-likelihood of $Y$ is \begin{equation} \label{N-equ1} \log \mathbb{P}(Y|X)=\frac{1}{2}T_{2}X^{T}Y+T_{2}\frac{n}{2}+n \log \alpha , \end{equation} where $T_{2} \triangleq \log \big( \frac{1-\alpha}{\alpha} \big)$. 
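The likelihood formulas \eqref{P-equ1} and \eqref{N-equ1} can be checked against a direct per-node computation; the following Python sketch does so (the helper names are ours, and we assume $Y$ is generated consistently with the model, i.e., $y_i\in\{x_i,0\}$ in the partially revealed case):

```python
import math

def loglik_partial_direct(y, eps):
    # P(y_i | x_i) is 1-eps for a revealed label (y_i = x_i) and
    # eps for an erasure (y_i = 0), independently across nodes
    return sum(math.log(eps) if yi == 0 else math.log(1 - eps) for yi in y)

def loglik_partial_formula(y, eps):
    # Y^T Y log((1-eps)/eps) + n log(eps); Y^T Y counts revealed labels
    n, yty = len(y), sum(yi * yi for yi in y)
    return yty * math.log((1 - eps) / eps) + n * math.log(eps)

def loglik_noisy_direct(x, y, alpha):
    # P(y_i | x_i) is 1-alpha if y_i = x_i and alpha otherwise
    return sum(math.log(1 - alpha) if yi == xi else math.log(alpha)
               for xi, yi in zip(x, y))

def loglik_noisy_formula(x, y, alpha):
    # (1/2) T2 X^T Y + T2 n/2 + n log(alpha), T2 = log((1-alpha)/alpha)
    n, t2 = len(x), math.log((1 - alpha) / alpha)
    xty = sum(xi * yi for xi, yi in zip(x, y))
    return 0.5 * t2 * xty + t2 * n / 2 + n * math.log(alpha)
```

Both pairs agree exactly (up to floating point roundoff), which is a quick sanity check of the algebra behind the two formulas.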
\subsection{Side Information: Multiple Variables \& Larger Alphabets} In this model, we decouple the cardinality of the side information alphabet from that of the node latent variable, and also allow for more side information random variables per node. This is motivated by practical conditions where the available non-graph information may be different from the node latent variable, and there may be multiple types of side information with varying utility for the inference. Formally, $y_{i,k}$ is the random variable representing feature $k$ at node $i$. Each feature has cardinality $M_k$ that is finite and fixed across the graph. We group these variables into a vector $y_i$ of dimension $K$, representing side information for node $i$, and group the vectors into a matrix $Y$ representing all side information for the graph.\footnote{If vectors $y_i$ have unequal dimension, matrix $Y$ will accommodate the largest vector, producing vacant entries that are defaulted to zero.} Without loss of generality, the alphabet of each feature $k$ is the set of integers $\{1, \ldots,M_k\}$. The conditional probabilities of the features given the labels are denoted by \begin{align*} \alpha_{+,m_{k}}^{k} &\triangleq \mathbb{P} ( y_{i,k}=m_{k} | x_{i}=1 ),\\ \alpha_{-,m_{k}}^{k} &\triangleq \mathbb{P}( y_{i,k}=m_{k} | x_{i}=-1 ) , \end{align*} where $m_{k}$ indexes the alphabet of feature $k$. Then the log-likelihood of $Y$ is \begin{align*} \log \mathbb{P} & ( Y|X )= \sum_{i=1}^{n} \log \mathbb{P} ( y_{i}|x_{i} ) \\ =&\frac{1}{2} \sum_{i=1}^{n} x_{i} \sum_{k=1}^{K} \sum_{m_{k}=1}^{M_{k}} \mathbbm{1}_{y_{i,k} = m_{k} } \log\bigg( \frac{\alpha_{+,m_{k}}^{k}}{\alpha_{-,m_{k}}^{k}}\bigg) \\ &+\frac{1}{2} \sum_{i=1}^{n} \sum_{k=1}^{K} \sum_{m_{k}=1}^{M_{k}} \mathbbm{1}_{y_{i,k} = m_{k} } \log (\alpha_{+,m_{k}}^{k} \alpha_{-,m_{k}}^{k} ) , \end{align*} where $\mathbbm{1}$ is the indicator function.
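The half-difference/half-sum identity above can also be verified numerically. In the Python sketch below (names and the $0$-indexed alphabets are our conventions, not the paper's), `alpha_plus[k][m]` and `alpha_minus[k][m]` play the roles of $\alpha_{+,m_{k}}^{k}$ and $\alpha_{-,m_{k}}^{k}$:

```python
import math

def loglik_direct(x, Y, alpha_plus, alpha_minus):
    # log P(Y|X) computed node by node: feature y_{i,k} contributes
    # log alpha_{+,y}^k when x_i = +1 and log alpha_{-,y}^k when x_i = -1
    total = 0.0
    for xi, yi in zip(x, Y):
        for k, m in enumerate(yi):
            total += math.log(alpha_plus[k][m] if xi == 1 else alpha_minus[k][m])
    return total

def loglik_half_sum(x, Y, alpha_plus, alpha_minus):
    # the equivalent half-difference / half-sum form displayed above
    total = 0.0
    for xi, yi in zip(x, Y):
        for k, m in enumerate(yi):
            total += 0.5 * xi * math.log(alpha_plus[k][m] / alpha_minus[k][m])
            total += 0.5 * math.log(alpha_plus[k][m] * alpha_minus[k][m])
    return total
```

Per term, $\frac{1}{2}\log(\alpha_+/\alpha_-)+\frac{1}{2}\log(\alpha_+\alpha_-)=\log\alpha_+$ when $x_i=1$, and the analogous cancellation gives $\log\alpha_-$ when $x_i=-1$, so the two computations agree exactly.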
Define \[ \tilde{y}_{i} \triangleq \sum_{k=1}^{K} \sum_{m_{k}=1}^{M_{k}} \mathbbm{1}_{\{y_{i,k} = m_{k}\} } \log \bigg(\frac{\alpha_{+,m_{k}}^{k}}{\alpha_{-,m_{k}}^{k}}\bigg), \] and $\tilde{Y}\triangleq [\tilde{y}_{1}, \tilde{y}_{2}, \ldots , \tilde{y}_{n}]^{T}$. Then the log-likelihood of $Y$ is \begin{equation} \label{G-equ1} \log \mathbb{P} ( Y|X )= \frac{1}{2} X^{T}\tilde{Y} + C_3 , \end{equation} for some constant $C_3$. In the remainder of this paper, side information thus defined is referred to as {\em general side information}. \section{Detection via \ac{SDP}} For organizational convenience, the main results of the paper are concentrated in this section. For the formulation of \ac{SDP}, we utilize the additional variables $Z\triangleq XX^{T}$ and $W \triangleq YY^{T}$. Also, let $Z^{*} \triangleq X^{*}X^{*T}$. \subsection{Censored Block Model with Partially Revealed Labels} Combining~\eqref{BCBM-equ1} and~\eqref{P-equ1}, the maximum likelihood detector is \begin{align} \label{BCBM-P-equ1} \hat{X} =& \underset{X}{\arg\max} ~X^{T}GX \nonumber\\ &\text{subject to} \quad x_{i} \in \{\pm 1 \},\quad i\in[n] \nonumber\\ &\quad \quad \quad \quad \quad X^{T}Y=Y^{T}Y, \end{align} where the constraint $X^{T}Y=Y^{T}Y$ ensures that detected values agree with available side information. This is a non-convex problem; therefore, we consider a convex relaxation~\cite{Ref16,Ref26}. Replacing $x_{i} \in \{\pm 1 \}$ with $Z_{ii}=1$, and the side information constraint $X^{T}Y= \pm Y^{T}Y$ (which $Z=XX^{T}$ can capture only up to a global sign flip of $X$) with $\langle Z,W\rangle =(Y^{T}Y)^2$, we obtain \begin{align} \label{BCBM-P-equ1-1} \widehat{Z}=&\underset{Z}{\arg\max} ~\langle Z,G\rangle \nonumber\\ &\text{subject to} \quad Z=XX^{T} \nonumber\\ & \quad \quad \quad \quad \quad Z_{ii}=1,\quad i\in [n] \nonumber\\ & \quad \quad \quad \quad \quad \langle Z,W\rangle =(Y^{T}Y)^2.
\end{align} By relaxing the rank-one constraint introduced via $Z$, we obtain the following \ac{SDP} relaxation: \begin{align} \label{BCBM-P-equ2} \widehat{Z}=&\underset{Z}{\arg\max} ~\langle Z,G\rangle \nonumber\\ &\text{subject to} \quad Z\succeq 0 \nonumber\\ & \quad \quad \quad \quad \quad Z_{ii}=1,\quad i\in [n] \nonumber\\ & \quad \quad \quad \quad \quad \langle Z,W\rangle =(Y^{T}Y)^2. \end{align} Let $\beta \triangleq \lim_{n \rightarrow \infty} -\frac{\log \epsilon}{\log n}$, where $\beta \geq 0$. \begin{Theorem} \label{Theorem 1} Under the binary censored block model and partially revealed labels, if \begin{equation*} a(\sqrt{1-\xi}-\sqrt{\xi})^2+\beta>1 , \end{equation*} then the \ac{SDP} estimator is asymptotically optimal, i.e., $\mathbb{P}(\widehat{Z}=Z^{*})\geq 1-o(1)$. \end{Theorem} \begin{proof} See Appendix~\ref{Proof-Theorem-1}. \end{proof} \begin{Theorem} \label{Theorem 2} Under the binary censored block model and partially revealed labels, if \begin{equation*} a(\sqrt{1-\xi}-\sqrt{\xi})^2+\beta<1 , \end{equation*} then for any sequence of estimators $\widehat{Z}_{n}$, $\mathbb{P}(\widehat{Z}_{n}=Z^{*}) \rightarrow 0$ as $n \rightarrow \infty $. \end{Theorem} \begin{proof} See Appendix~\ref{Proof-Theorem-2}. \end{proof} \subsection{Censored Block Model with Noisy Labels} Combining~\eqref{BCBM-equ1} and~\eqref{N-equ1}, the maximum likelihood detector is \begin{align} \label{BCBM-N-equ1} \hat{X} = & \underset{X}{\arg\max}~ T_{1}X^{T}GX+2T_{2}X^{T}Y \nonumber\\ &\text{subject to} \quad x_{i} \in \{\pm 1 \},\quad i\in[n]. \end{align} Then~\eqref{BCBM-N-equ1} is equivalent to \begin{align} \label{BCBM-N-equ2} \widehat{Z}=&\underset{Z,X}{\arg\max} ~T_{1} \langle G, Z \rangle + 2T_{2} X^{T}Y \nonumber\\ &\text{subject to} \quad Z = XX^{T} \nonumber\\ & \quad \quad \quad \quad \quad Z_{ii} =1, \quad i \in [n ]. 
\end{align} Relaxing the rank-one constraint, using \begin{equation*} Z-XX^{T}\succeq 0 \Leftrightarrow \begin{bmatrix} 1 & X^{T}\\ X & Z \end{bmatrix} \succeq 0 , \end{equation*} yields the \ac{SDP} relaxation of~\eqref{BCBM-N-equ2}: \begin{align} \label{BCBM-N-equ3} \widehat{Z}=&\underset{Z,X}{\arg\max} ~T_{1} \langle G, Z \rangle + 2T_{2} X^{T}Y \nonumber\\ &\text{subject to} \quad \begin{bmatrix} 1 & X^{T}\\ X & Z \end{bmatrix} \succeq 0 \nonumber\\ & \quad \quad \quad \quad \quad Z_{ii}=1,\quad i\in [n] . \end{align} Let $\beta \triangleq \lim_{n \rightarrow \infty} \frac{T_{2}}{\log n}$, where $\beta \geq 0$. Also, for convenience define \begin{equation*} \eta(a,\beta) \triangleq a-\frac{\gamma}{T_{1}} +\frac{\beta}{2T_{1}} \log \Bigg( \frac{ (1-\xi) (\gamma + \beta)}{\xi (\gamma - \beta)} \Bigg) , \end{equation*} where $\gamma \triangleq \sqrt{\beta^{2}+4\xi(1-\xi)a^{2}T_{1}^{2}}$. \begin{Theorem} \label{Theorem 3} Under the binary censored block model and noisy labels, if \begin{equation*} \begin{cases} \eta(a,\beta)>1 & \text{when } 0\leq \beta<aT_{1}(1-2\xi)\\ \beta>1 & \text{when } \beta \geq aT_{1}(1-2\xi) \end{cases} \end{equation*} then the \ac{SDP} estimator is asymptotically optimal, i.e., $\mathbb{P}(\widehat{Z}=Z^{*})\geq 1- o(1)$. \end{Theorem} \begin{proof} See Appendix~\ref{Proof-Theorem-3}. \end{proof} \begin{Theorem} \label{Theorem 4} Under the binary censored block model and noisy labels, if \begin{equation*} \begin{cases} \eta(a,\beta)<1 & \text{when } 0\leq \beta<aT_{1}(1-2\xi)\\ \beta<1 & \text{when } \beta \geq aT_{1}(1-2\xi) \end{cases} \end{equation*} then for any sequence of estimators $\widehat{Z}_{n}$, $\mathbb{P}(\widehat{Z}_{n}=Z^{*}) \rightarrow 0$ as $n \rightarrow \infty $. \end{Theorem} \begin{proof} See Appendix~\ref{Proof-Theorem-4}. 
\end{proof} \subsection{Censored Block Model with General Side Information} Combining~\eqref{BCBM-equ1} and~\eqref{G-equ1}, the \ac{SDP} relaxation is \begin{align} \label{BCBM-G-equ3} \widehat{Z}=&\underset{Z,X}{\arg\max} ~T_{1} \langle G, Z \rangle + 2X^{T}\tilde{Y} \nonumber\\ &\text{subject to} \quad \begin{bmatrix} 1 & X^{T}\\ X & Z \end{bmatrix} \succeq 0 \nonumber\\ & \quad \quad \quad \quad \quad Z_{ii}=1,\quad i\in [n] . \end{align} The log-likelihoods and the log-likelihood-ratio of side information, combined over all features, are as follows: \begin{align*} f_{1}(n) &\triangleq \sum_{k=1}^{K} \log \frac{\alpha_{+,m_{k}}^{k}}{\alpha_{-,m_{k}}^{k}} ,\\ f_{2}(n) &\triangleq \sum_{k=1}^{K} \log \alpha_{+,m_{k}}^{k}, \\ f_{3}(n) &\triangleq \sum_{k=1}^{K} \log \alpha_{-,m_{k}}^{k} . \end{align*} Two exponential orders will feature prominently in the following results and proofs: \begin{align*} &\beta_{1} \triangleq \lim_{n\rightarrow \infty} \frac{f_{1}(n)}{\log n} ,\\ &\beta \triangleq \lim_{n\rightarrow\infty} -\frac{\max (f_2(n),f_3(n))}{\log n} . \end{align*} Although the definition of $\beta$ varies in the context of different models, its role remains the same. In each case, $\beta$ is a parameter representing the asymptotic quality of side information.\footnote{In each case, $\beta$ is proportional to the exponential order of the likelihood function.} \begin{Theorem} \label{Theorem 5} Under the binary censored block model and general side information, if \begin{equation*} \begin{cases}\eta(a, | \beta_{1} |)+\beta >1 & \text{when } | \beta_{1} | \leq aT_{1} (1-2\xi )\\ | \beta_{1} |+\beta >1 & \text{when } |\beta_1|> aT_{1} (1-2\xi ) \end{cases} \end{equation*} then the \ac{SDP} estimator is asymptotically optimal, i.e., $\mathbb{P}(\widehat{Z}=Z^{*})\geq 1- o(1)$. \end{Theorem} \begin{proof} See Appendix~\ref{Proof-Theorem-5}. 
\end{proof} \begin{Theorem} \label{Theorem 6} Under the binary censored block model and general side information, if \begin{equation*} \begin{cases}\eta(a, | \beta_{1} |)+\beta <1 & \text{when } | \beta_{1} | \leq aT_{1} (1-2\xi )\\ | \beta_{1} |+\beta <1 & \text{when } |\beta_1|> aT_{1} (1-2\xi ) \end{cases} \end{equation*} then for any sequence of estimators $\widehat{Z}_{n}$, $\mathbb{P}(\widehat{Z}_{n}=Z^{*}) \rightarrow 0$. \end{Theorem} \begin{proof} See Appendix~\ref{Proof-Theorem-6}. \end{proof} \subsection{Stochastic Block Model with Partially Revealed Labels} Similar to the binary censored block model with partially revealed labels, by combining~\eqref{BSSBM-equ1} and~\eqref{P-equ1}, the \ac{SDP} relaxation is \begin{align} \label{BSSBM-P-equ2} \widehat{Z}=&\underset{Z}{\arg\max} ~\langle Z,G\rangle \nonumber\\ &\text{subject to} \quad Z\succeq 0 \nonumber\\ & \quad \quad \quad \quad \quad Z_{ii}=1,\quad i\in [n] \nonumber\\ & \quad \quad \quad \quad \quad \langle \mathbf{J} , Z \rangle = 0 \nonumber\\ & \quad \quad \quad \quad \quad \langle Z,W\rangle =(Y^{T}Y)^2, \end{align} where the constraint $\langle \mathbf{J} , Z \rangle = 0$ arises from two equal-sized communities. \begin{Theorem} \label{Theorem 7} Under the binary symmetric stochastic block model and partially revealed labels, if \begin{equation*} \big(\sqrt{a}-\sqrt{b}\big)^2+2\beta>2 , \end{equation*} then the \ac{SDP} estimator is asymptotically optimal, i.e., $\mathbb{P}(\widehat{Z}=Z^{*})\geq 1-o(1)$. \end{Theorem} \begin{proof} See Appendix~\ref{Proof-Theorem-7}. \end{proof} \begin{Remark} The converse is given by~\cite[Theorem 3]{Ref7}. 
\end{Remark} \subsection{Stochastic Block Model with Noisy Labels} Similar to the binary censored block model with noisy labels, by combining~\eqref{BSSBM-equ1} and~\eqref{N-equ1}, the \ac{SDP} relaxation is \begin{align} \label{BSSBM-N-equ1} \widehat{Z}=&\underset{Z,X}{\arg\max} ~T_{1} \langle G, Z \rangle + 2T_{2} X^{T}Y \nonumber\\ &\text{subject to} \quad \begin{bmatrix} 1 & X^{T}\\ X & Z \end{bmatrix} \succeq 0 \nonumber\\ & \quad \quad \quad \quad \quad Z_{ii}=1,\quad i\in [n] \nonumber\\ & \quad \quad \quad \quad \quad \langle \mathbf{J} , Z \rangle = 0 . \end{align} For convenience let \begin{equation*} \eta(a,b,\beta) \triangleq \frac{a+b}{2}+\frac{\beta}{2}-\frac{\gamma}{T_{1}} +\frac{\beta}{2T_{1}} \log \bigg( \frac{\gamma +\beta}{\gamma -\beta} \bigg) , \end{equation*} where $\gamma \triangleq \sqrt{\beta^{2}+abT_{1}^{2}}$. \begin{Theorem} \label{Theorem 9} Under the binary symmetric stochastic block model and noisy label side information, if \begin{equation*} \begin{cases} \eta(a,b, \beta)>1 & \text{when } 0\leq \beta<\frac{T_{1}}{2}(a-b)\\ \beta>1 & \text{when } \beta \geq \frac{T_{1}}{2}(a-b) \end{cases} \end{equation*} then the \ac{SDP} estimator is asymptotically optimal, i.e., $\mathbb{P}(\widehat{Z}=Z^{*})\geq 1- o(1)$. \end{Theorem} \begin{proof} See Appendix~\ref{Proof-Theorem-9}. \end{proof} \begin{Remark} The converse is given by~\cite[Theorem 2]{Ref7}. \end{Remark} \subsection{Stochastic Block Model with General Side Information} Similar to the binary censored block model with general side information, by combining~\eqref{BSSBM-equ1} and~\eqref{G-equ1}, the \ac{SDP} relaxation is \begin{align} \label{BSSBM-G-equ1} \widehat{Z}=&\underset{Z,X}{\arg\max} ~T_{1} \langle G, Z \rangle + 2X^{T}\tilde{Y} \nonumber\\ &\text{subject to} \quad \begin{bmatrix} 1 & X^{T}\\ X & Z \end{bmatrix} \succeq 0 \nonumber\\ & \quad \quad \quad \quad \quad Z_{ii}=1,\quad i\in [n] \nonumber\\ & \quad \quad \quad \quad \quad \langle \mathbf{J} , Z \rangle = 0 . 
\end{align} \begin{Theorem} \label{Theorem 11} Under the binary symmetric stochastic block model and general side information, if \begin{equation*} \begin{cases} \eta(a,b, | \beta_{1} |)+\beta >1 & \text{when } | \beta_{1} | \leq T_{1}\frac{(a-b)}{2} \\ | \beta_{1} |+\beta >1 & \text{when } | \beta_{1} | > T_{1}\frac{(a-b)}{2} \end{cases} \end{equation*} then the \ac{SDP} estimator is asymptotically optimal, i.e., $\mathbb{P}(\widehat{Z}=Z^{*})\geq 1- o(1)$. \end{Theorem} \begin{proof} See Appendix~\ref{Proof-Theorem-11}. \end{proof} \begin{Remark} The converse is given by~\cite[Theorem 5]{Ref7}. \end{Remark} \section{Numerical Results} \label{Section: Numerical Results} This section presents numerical simulations that shed light on the domain of applicability of the asymptotic results obtained earlier in the paper\footnote{The code is available online at \url{https://github.com/mohammadesmaeili/Community-Detection-by-SDP}}. Table~\ref{Table 1} shows the misclassification error probability of the \ac{SDP} estimators~\eqref{BCBM-P-equ2} and \eqref{BSSBM-P-equ2} with partially revealed side information. Under the binary stochastic block model with $a=3$ and $b=1$, when the side-information quality is $\beta = 0.8$, the error probability diminishes with $n$ as predicted by earlier asymptotic results. For these parameters, $\eta = 1.1 > 1$, and exact recovery is possible based on the theoretical results. When $\beta=0.2$, we have $\eta=0.5 < 1$, which falls outside the asymptotic exact recovery regime, and the misclassification error probability is much higher. Under the binary censored block model with $a=1$ and $\xi=0.2$, when $\beta = 1$, the error probability diminishes with $n$. For these values, $\eta =1.2 > 1$, and exact recovery is possible based on the theoretical results. When $\beta=0.3$, the misclassification error probability is much higher. For this value of $\beta$, $\eta=0.5 < 1$, which means exact recovery is not asymptotically possible.
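For concreteness, the exponents quoted in this discussion follow directly from the statements of Theorems~\ref{Theorem 1} and~\ref{Theorem 7}. A minimal Python sketch (our own helper functions, not part of the released code) that reproduces them:

```python
import math

def eta_censored(a, xi, beta):
    # Exponent of Theorems 1 and 2 for the censored block model with
    # partially revealed labels; exact recovery is possible iff it exceeds 1.
    return a * (math.sqrt(1 - xi) - math.sqrt(xi)) ** 2 + beta

def eta_sbm(a, b, beta):
    # Exponent of Theorem 7 for the binary symmetric stochastic block model
    # with partially revealed labels, normalized so that the condition
    # (sqrt(a) - sqrt(b))^2 + 2*beta > 2 reads "exponent > 1".
    return (math.sqrt(a) - math.sqrt(b)) ** 2 / 2 + beta
```

For the censored model with $a=1$ and $\xi=0.2$ these return $\eta=1.2$ at $\beta=1$ and $\eta=0.5$ at $\beta=0.3$; for the stochastic block model with $a=3$ and $b=1$ they return $\eta\approx 1.1$ at $\beta=0.8$ and $\eta\approx 0.5$ at $\beta=0.2$, matching the values quoted in this section.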
\begin{table*} \begin{minipage}[t]{0.33\textwidth} \centering \caption{SDP with partially revealed labels.} \begin{tabular}{@{}cccccc@{}} \toprule a & b & $\xi$ & $\beta$ & n & Error Probability \\ \midrule 3 & 1 & - & 0.2 & 100 & $4.1 \times 10^{-2}$ \\ 3 & 1 & - & 0.2 & 200 & $3.1 \times 10^{-2}$ \\ 3 & 1 & - & 0.2 & 300 & $2.5 \times 10^{-2}$ \\ 3 & 1 & - & 0.2 & 400 & $2.2 \times 10^{-2}$ \\ 3 & 1 & - & 0.2 & 500 & $1.9 \times 10^{-2}$ \\ 3 & 1 & - & 0.8 & 100 & $5.0 \times 10^{-4}$ \\ 3 & 1 & - & 0.8 & 200 & $3.2 \times 10^{-4}$ \\ 3 & 1 & - & 0.8 & 300 & $1.6 \times 10^{-4}$ \\ 3 & 1 & - & 0.8 & 400 & $1.2 \times 10^{-4}$ \\ 3 & 1 & - & 0.8 & 500 & $9.3 \times 10^{-5}$ \\ 1 & - & 0.2 & 0.3 & 100 & $4.1 \times 10^{-2}$ \\ 1 & - & 0.2 & 0.3 & 200 & $2.9 \times 10^{-2}$ \\ 1 & - & 0.2 & 0.3 & 300 & $2.2 \times 10^{-2}$ \\ 1 & - & 0.2 & 0.3 & 400 & $1.9 \times 10^{-2}$ \\ 1 & - & 0.2 & 0.3 & 500 & $1.7 \times 10^{-2}$ \\ 1 & - & 0.2 & 1 & 100 & $1.1 \times 10^{-3}$ \\ 1 & - & 0.2 & 1 & 200 & $4.2 \times 10^{-4}$ \\ 1 & - & 0.2 & 1 & 300 & $2.7 \times 10^{-4}$ \\ 1 & - & 0.2 & 1 & 400 & $2.1 \times 10^{-4}$ \\ 1 & - & 0.2 & 1 & 500 & $1.5 \times 10^{-4}$ \\ \bottomrule \end{tabular} \label{Table 1} \end{minipage} \hfill \begin{minipage}[t]{0.33\textwidth} \centering \caption{SDP with noisy labels} \begin{tabular}{@{}cccccc@{}} \toprule a & b & $\xi$ & $\beta$ & n & Error Probability \\ \midrule 4 & 1 & - & 0.2 & 100 & $2.0 \times 10^{-2}$ \\ 4 & 1 & - & 0.2 & 200 & $1.5 \times 10^{-2}$ \\ 4 & 1 & - & 0.2 & 300 & $1.3 \times 10^{-2}$ \\ 4 & 1 & - & 0.2 & 400 & $1.1 \times 10^{-2}$ \\ 4 & 1 & - & 0.2 & 500 & $1.0 \times 10^{-2}$ \\ 4 & 1 & - & 1 & 100 & $1.1 \times 10^{-3}$ \\ 4 & 1 & - & 1 & 200 & $7.4 \times 10^{-4}$ \\ 4 & 1 & - & 1 & 300 & $3.0 \times 10^{-5}$ \\ 4 & 1 & - & 1 & 400 & $2.7 \times 10^{-5}$ \\ 4 & 1 & - & 1 & 500 & $2.2 \times 10^{-5}$ \\ 4 & - & 0.25 & 0.1 & 100 & $2.9 \times 10^{-2}$ \\ 4 & - & 0.25 & 0.1 & 200 & $1.8 \times 10^{-2}$ \\ 4 & - & 
0.25 & 0.1 & 300 & $1.4 \times 10^{-2}$ \\ 4 & - & 0.25 & 0.1 & 400 & $1.2 \times 10^{-2}$ \\ 4 & - & 0.25 & 0.1 & 500 & $1.0 \times 10^{-2}$ \\ 4 & - & 0.25 & 1.1 & 100 & $2.7 \times 10^{-3}$ \\ 4 & - & 0.25 & 1.1 & 200 & $1.0 \times 10^{-3}$ \\ 4 & - & 0.25 & 1.1 & 300 & $6.2 \times 10^{-4}$ \\ 4 & - & 0.25 & 1.1 & 400 & $4.1 \times 10^{-4}$ \\ 4 & - & 0.25 & 1.1 & 500 & $3.3 \times 10^{-4}$ \\ \bottomrule \end{tabular} \label{Table 2} \end{minipage} \hfill \begin{minipage}[t]{0.33\textwidth} \centering \caption{SDP without side information.} \begin{tabular}{@{}ccccc@{}} \toprule a & b & $\xi$ & n & Error Probability \\ \midrule 3 & 1 & - & 100 & $1.4 \times 10^{-1}$ \\ 3 & 1 & - & 200 & $1.2 \times 10^{-1}$ \\ 3 & 1 & - & 300 & $1.1 \times 10^{-1}$ \\ 3 & 1 & - & 400 & $9.8 \times 10^{-2}$ \\ 3 & 1 & - & 500 & $9.1 \times 10^{-2}$ \\ 4 & 1 & - & 100 & $2.3 \times 10^{-2}$ \\ 4 & 1 & - & 200 & $1.7 \times 10^{-2}$ \\ 4 & 1 & - & 300 & $1.6 \times 10^{-2}$ \\ 4 & 1 & - & 400 & $1.3 \times 10^{-2}$ \\ 4 & 1 & - & 500 & $1.2 \times 10^{-2}$ \\ 1 & - & 0.2 & 100 & $2.9 \times 10^{-1}$ \\ 1 & - & 0.2 & 200 & $2.5 \times 10^{-1}$ \\ 1 & - & 0.2 & 300 & $2.2 \times 10^{-1}$ \\ 1 & - & 0.2 & 400 & $2.1 \times 10^{-1}$ \\ 1 & - & 0.2 & 500 & $1.9 \times 10^{-1}$ \\ 4 & - & 0.25 & 100 & $3.0 \times 10^{-2}$ \\ 4 & - & 0.25 & 200 & $1.9 \times 10^{-2}$ \\ 4 & - & 0.25 & 300 & $1.5 \times 10^{-2}$ \\ 4 & - & 0.25 & 400 & $1.2 \times 10^{-2}$ \\ 4 & - & 0.25 & 500 & $1.1 \times 10^{-2}$ \\ \bottomrule \end{tabular} \label{Table 3} \end{minipage} \end{table*} Table~\ref{Table 2} shows the misclassification error probability of the \ac{SDP} estimators~\eqref{BCBM-N-equ3} and \eqref{BSSBM-N-equ1} with noisy labels side information. Under the stochastic block model with $a=4$ and $b=1$, when the side information $\beta = 1$, then $\eta =1.1 > 1$ and the error probability diminishes with $n$ as predicted by earlier theoretical results. 
When $\beta=0.2$, we have $\eta =0.6 < 1$, which falls outside the asymptotic exact recovery regime, and the misclassification error is much higher. Under the censored block model with $a=4$ and $\xi=0.25$, when $\beta = 1.1$, we have $\eta =1.2 > 1$ and the error probability diminishes with $n$. When $\beta=0.1$, we have $\eta =0.6 < 1$, which means that exact recovery is not possible asymptotically. For this value of $\beta$ and a finite $n$, the misclassification error is not negligible. For comparison, Table~\ref{Table 3} shows the misclassification error probability of the \ac{SDP} estimator {\em without} side information, i.e., $\beta = 0$. Under the binary stochastic block model, when $a=3$ ($a=4$) and $b=1$, it is seen that the error probability increases in comparison with the corresponding error probability in Table~\ref{Table 1} (Table~\ref{Table 2}) where side information is available. Also, under the binary censored block model, when $a=1$ and $\xi=0.2$ ($a=4$ and $\xi=0.25$), it is seen that the error probability increases in comparison with the corresponding error probability in Table~\ref{Table 1} (Table~\ref{Table 2}) where side information is available. Using standard arguments from numerical linear algebra, the computational complexity of the algorithms in this paper is on the order of $O(mn^3 + m^2n^2)$, where $n$ is the number of nodes in the graph, and $m$ is a small constant, typically between 2 and 4, reflecting the problem assumptions that manifest as constraints in the optimization. \section{Conclusion} This paper calculated the exact recovery threshold for community detection under SDP with several types of side information. Among other insights, our results indicate that in the presence of side information, the exact recovery thresholds for SDP and for maximum likelihood detection coincide.
We anticipate that the models and methods of this paper may be further extended to better match the statistics of real-world graph data. \appendices \section{Proof of Theorem~\ref{Theorem 1}} \label{Proof-Theorem-1} We begin by stating sufficient conditions under which the optimal solution of~\eqref{BCBM-P-equ2} matches the true labels $X^*$. \begin{Lemma} \label{BCBM-P-Lemma 1} For the optimization problem~\eqref{BCBM-P-equ2}, consider the Lagrange multipliers \begin{equation*} \mu^* , \quad D^{*}=\mathrm{diag}(d_{i}^{*}), \quad S^{*}. \end{equation*} If we have \begin{align*} &S^{*} = D^{*}+\mu^{*}W-G ,\\ &S^{*} \succeq 0, \\ &\lambda_{2}(S^{*}) > 0 ,\\ &S^{*}X^{*} =0 , \end{align*} then $(\mu^{*}, D^*, S^*)$ is the dual optimal solution and $\widehat{Z}=X^{*}X^{*T}$ is the unique primal optimal solution of~\eqref{BCBM-P-equ2}. \end{Lemma} \begin{proof} The Lagrangian of~\eqref{BCBM-P-equ2} is given by \begin{align*} L(Z,S,D,\mu)=&\langle G,Z \rangle +\langle S, Z \rangle -\langle D,Z-\mathbf{I} \rangle \\ &- \mu ( \langle W,Z\rangle - (Y^{T}Y )^{2} ) , \end{align*} where $S\succeq 0$, $D=\mathrm{diag}(d_{i})$, and $\mu \in \mathbb{R}$ are Lagrange multipliers. For any $Z$ that satisfies the constraints in~\eqref{BCBM-P-equ2}, \begin{align*} \langle G,Z\rangle & \overset{(a)}{\leq} L(Z,S^{*},D^{*},\mu^{*})\\ &=\langle D^{*},\mathbf{I}\rangle +\mu^{*}(Y^{T}Y)^{2}\\ &\overset{(b)}{=}\langle D^{*},Z^{*}\rangle +\mu^{*}(Y^{T}Y)^{2}\\ &=\langle G+S^{*}-\mu^{*} W,Z^{*}\rangle +\mu^{*}(Y^{T}Y)^{2}\\ &\overset{(c)}{=}\langle G,Z^{*}\rangle , \end{align*} where $(a)$ holds because $\langle S^{*},Z \rangle \geq 0$, $(b)$ holds because $Z_{ii}=1$ for all $i \in [n]$, and $(c)$ holds because $\langle S^{*},Z^{*} \rangle = X^{*T}S^{*}X^{*}=0$ and $\langle W,Z^{*} \rangle = (Y^{T}Y)^{2}$. Therefore, $Z^{*}$ is a primal optimal solution. Now, we will establish the uniqueness of the optimal solution. Assume $\tilde{Z}$ is another primal optimal solution.
Then \begin{align*} \langle S^{*},\tilde{Z}\rangle & =\langle D^{*}-G+\mu^{*}W , \tilde{Z} \rangle \\ &=\langle D^{*},\tilde{Z} \rangle-\langle G,\tilde{Z} \rangle+ \mu^{*} \langle W,\tilde{Z} \rangle \\ &\overset{(a)}{=}\langle D^{*},Z^{*}\rangle-\langle G,Z^{*}\rangle+ \mu^{*} \langle W,Z^{*}\rangle \\ &=\langle D^{*}-G+\mu^{*}W,Z^{*}\rangle \\ &=\langle S^{*}, Z^{*}\rangle=0 , \end{align*} where $(a)$ holds because $\langle W,Z^{*}\rangle=\langle W,\tilde{Z} \rangle=(Y^{T}Y)^{2}$, $\langle G,Z^{*}\rangle=\langle G,\tilde{Z} \rangle$, and $Z_{ii}^{*}=\tilde{Z}_{ii}=1$ for all $i\in [n]$. Since $\tilde{Z}\succeq 0$ and $S^{*}\succeq 0$ while its second smallest eigenvalue $\lambda_{2}(S^{*})$ is positive, $\tilde{Z}$ must be a multiple of $Z^{*}$. Also, since $\tilde{Z}_{ii}=Z_{ii}^{*}=1$ for all $i \in [n]$, we have $\tilde{Z}=Z^{*}$. \end{proof} We now show that $S^{*} = D^{*}+\mu^{*}W-G$ satisfies the other conditions in Lemma~\ref{BCBM-P-Lemma 1} with probability $1-o(1)$. Let \begin{equation} \label{BCBM-P-equ3} d_{i}^{*}= \sum_{j=1}^{n} G_{ij}x_{j}^{*}x_{i}^{*} -\mu^{*} \sum_{j=1}^{n} y_{i}y_{j}x_{j}^{*}x_{i}^{*} . \end{equation} Then $D^{*}X^{*} = GX^{*}-\mu^{*}WX^{*}$ and based on the definition of $S^{*}$ in Lemma~\ref{BCBM-P-Lemma 1}, $S^{*}$ satisfies the condition $S^{*}X^{*} =0$. It remains to show that $S^{*}\succeq 0$ and $\lambda_{2}(S^{*})>0$ with probability $1-o(1)$. In other words, we need to show that \begin{equation} \label{BCBM-P-equ1-New} \mathbb{P} \bigg( \underset{V\perp X^{*}, \| V \|=1}{\inf} V^{T}S^{*}V>0 \bigg )\geq 1-o(1) , \end{equation} where $V$ is a vector of length $n$. Since for the binary censored block model \begin{align} \label{BCBM-equ-expectation} \mathbb{E}[G]=p(1-2\xi)(X^{*}X^{*T}-\mathbf{I}) , \end{align} it follows that for any $V$ such that $V^{T}X^{*}=0$ and $ \| V \|=1$, \begin{align*} V^{T}S^{*}V=&V^{T}D^{*}V +\mu^{*}V^{T}WV -V^{T}(G-\mathbb E[G])V \\ & +p(1-2\xi).
\end{align*} \begin{Lemma}\cite[Theorem 9]{Ref19} \label{BCBM-P-Lemma 2} For any $c > 0$, there exists $c' >0$ such that for any $n \geq 1$, $ \| G-\mathbb E[G] \| \leq c'\sqrt{\log n}$ with probability at least $1-n^{-c}$. \end{Lemma} \begin{Lemma}\cite[Lemma 3]{Esmaeili.BSSBM.Partially.Revealed} \label{BCBM-P-Lemma 3} \begin{equation*} \mathbb{P} \Big ( V^{T}WV\geq \sqrt{\log n} \Big ) \leq \frac{1-\epsilon }{\sqrt{\log n}} = n^{-\frac{1}{2}+o(1)} . \end{equation*} \end{Lemma} Since $V^{T}D^{*}V \geq \min_{i\in [n]} d_{i}^{*}$ and $V^{T}(G-\mathbb E[G])V \leq \| G-\mathbb E[G] \| $, applying Lemmas~\ref{BCBM-P-Lemma 2} and~\ref{BCBM-P-Lemma 3} implies that with probability $1-o(1)$, \begin{equation} \label{BCBM-P-equ5} V^{T}S^{*}V \geq \min_{i \in [n]} d_{i}^{*} +(\mu^{*}-c^{'}) \sqrt{\log n} +p(1-2\xi) . \end{equation} \begin{Lemma} \label{BCBM-P-Lemma 4} Consider a sequence of i.i.d. random variables $\{S_1,\ldots,S_m\}$ with distribution $p(1-\xi) \delta_{+1} + p\xi \delta_{-1} +(1-p) \delta_{0}$. Let $U \sim \text{Binom} (n-1,1-\epsilon)$, $\mu^{*}<0$, and $\delta= \frac{\log n}{\log \log n}$. Then \begin{align*} &\mathbb{P} \Bigg(\sum_{i=1}^{n-1} S_{i} \leq \delta \Bigg) \leq n^{-a (\sqrt{1-\xi}-\sqrt{\xi} )^{2}+o(1) } , \\ &\mathbb{P} \Bigg(\sum_{i=1}^{n-1} S_{i}-\mu^{*}U \leq \delta+\mu^{*} \Bigg) \leq \epsilon^{n [\log \epsilon +o(1) ] } . \end{align*} \end{Lemma} \begin{proof} This follows from the Chernoff bound. \end{proof} It can be shown that $\sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*}$ in~\eqref{BCBM-P-equ3} is equal in distribution to $\sum_{i=1}^{n-1} S_{i}$ in Lemma~\ref{BCBM-P-Lemma 4}.
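The first bound in Lemma~\ref{BCBM-P-Lemma 4} comes from optimizing the exponential moment $(1-\xi)e^{-t}+\xi e^{t}$ over the tilt parameter $t$; its minimum $2\sqrt{\xi(1-\xi)}$ is what produces the exponent $a(\sqrt{1-\xi}-\sqrt{\xi})^{2}$. A short numerical sketch of this step (our own check, with a grid search standing in for the closed-form minimizer $t^{*}=\frac{1}{2}\log\frac{1-\xi}{\xi}$):

```python
import math

def tilt(xi, t):
    # Per-observation factor in the Chernoff bound for
    # S_1 ~ p(1-xi) delta_{+1} + p*xi delta_{-1} + (1-p) delta_0;
    # with p = a log(n)/n the bound is n^{-a(1 - min_t tilt(xi, t)) + o(1)}.
    return (1 - xi) * math.exp(-t) + xi * math.exp(t)

def chernoff_rate(a, xi, grid=6000, step=0.001):
    # Grid search over the tilt t >= 0 in place of the exact minimizer.
    best = min(tilt(xi, step * k) for k in range(grid))
    return a * (1 - best)
```

Since $\min_{t}\,[(1-\xi)e^{-t}+\xi e^{t}]=2\sqrt{\xi(1-\xi)}$ and $1-2\sqrt{\xi(1-\xi)}=(\sqrt{1-\xi}-\sqrt{\xi})^{2}$, the grid search recovers the exponent in the lemma.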
Then \begin{align*} \mathbb{P} ( d_{i}^{*}\leq \delta ) = &\mathbb{P} \Bigg( \sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*} \leq \delta \Bigg) \epsilon \\ &+ \mathbb{P} \Bigg( \sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*}-\mu^{*}Z_{i} \leq \delta+\mu^{*} \Bigg) (1-\epsilon) \\ \leq & \epsilon n^{-a (\sqrt{1-\xi}-\sqrt{\xi} )^{2}+o(1) } + (1-\epsilon ) \epsilon^{n \big( \log \epsilon +o(1) \big) } \\ = & e^{ \big( \frac{\log \epsilon }{\log n} -a (\sqrt{1-\xi}-\sqrt{\xi} )^{2}+o(1) \big) \log n} , \end{align*} where $Z_{i} \sim \text{Binom} (n-1,1-\epsilon)$ and $ (1-\epsilon ) \epsilon^{n (\log \epsilon +o(1)) }$ vanishes as $n \rightarrow \infty$. Recall that $\beta \triangleq \lim_{n \rightarrow \infty} -\frac{\log \epsilon}{\log n}$, where $\beta \geq 0$. Then \begin{equation*} \mathbb{P} ( d_{i}^{*}\leq \delta ) \leq n^{ -\beta -a (\sqrt{1-\xi}-\sqrt{\xi} )^{2}+o(1) }. \end{equation*} Using the union bound, \begin{equation*} \mathbb{P} \bigg( \min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n} \bigg ) \geq 1-n^{1-\beta-a(\sqrt{1-\xi}-\sqrt{\xi})^{2}+o(1)} . \end{equation*} When $\beta+ a(\sqrt{1-\xi}-\sqrt{\xi})^{2}>1 $, $\min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n}$ holds with probability $1-o(1)$. Combining this result with~\eqref{BCBM-P-equ5}, if $\beta + a(\sqrt{1-\xi}-\sqrt{\xi})^{2}>1 $, then with probability $1-o(1)$, \begin{equation*} V^{T}S^{*}V \geq \frac{\log n}{\log \log n} +(\mu^{*}-c^{'}) \sqrt{\log n} +p(1-2\xi) > 0 , \end{equation*} which concludes Theorem~\ref{Theorem 1}. \section{Proof of Theorem~\ref{Theorem 2}} \label{Proof-Theorem-2} Since the prior distribution of $X^{*}$ is uniform, among all estimators, the maximum likelihood estimator minimizes the average error probability. Therefore, it suffices to show that with high probability the maximum likelihood estimator fails. Let \[ F \triangleq \bigg\{\min_{i \in [n], y_{i}=0}~\sum_{j =1}^{n} G_{ij} x_{j}^{*} x_{i}^{*} \leq -1 \bigg\}. 
\] Then $\mathbb{P} ( \text{ML Fails} ) \geq \mathbb{P} ( F )$. Thus, if we show that $\mathbb{P} ( F ) \rightarrow 1$, then the maximum likelihood estimator fails with high probability. Let $H$ denote the set of the first $ \lfloor \frac{n}{\log^{2} n} \rfloor$ nodes and $e (i, H )$ denote the number of edges between node $i$ and nodes in the set $H$. Then \begin{align*} \min_{i \in [n], y_{i}=0} & ~\sum_{j =1}^{n} G_{ij} x_{j}^{*} x_{i}^{*} \leq \min_{i \in H, y_{i}=0 }~\sum_{j =1}^{n} G_{ij} x_{j}^{*} x_{i}^{*} \\ \leq & \min_{i \in H, y_{i}=0 }~\sum_{ j \in H^{c} } G_{ij} x_{j}^{*} x_{i}^{*} + \max_{i \in H, y_{i}=0 }~e (i, H ) . \end{align*} Define the events \begin{align*} &E_1 \triangleq \bigg\{\max_{i \in H, y_{i}=0 }~e (i, H ) \leq \delta -1 \bigg\}, \\ &E_2 \triangleq\Bigg\{\min_{i \in H, y_{i}=0}~\sum_{ j \in H^{c} } G_{ij} x_{j}^{*} x_{i}^{*} \leq -\delta \Bigg\}. \end{align*} Notice that $F \supset E_{1} \cap E_{2}$. Hence, to show that the maximum likelihood estimator fails, it suffices to show that $\mathbb{P} ( E_{1} ) \rightarrow 1$ and $\mathbb{P} ( E_{2} ) \rightarrow 1$. \begin{Lemma}\cite[Lemma 5]{esmaeili2019community} \label{BCBM-P-Lemma 5} When $S \sim \text{Binom}(n,p)$, for any $r\geq 1$, $\mathbb{P} ( S \geq rnp ) \leq \big(\frac{e}{r} \big)^{rnp} e^{-np}$. \end{Lemma} Since $e (i, H ) \sim \text{Binom} \big( | H |, a\frac{\log n }{n} \big)$, it follows from Lemma~\ref{BCBM-P-Lemma 5} that \begin{align*} \mathbb{P} & \big( e (i, H ) \geq \delta-1 , y_{i}=0 \big) \\ &\leq \epsilon \bigg( \frac{\log^{2} n}{ae \log \log n} - \frac{\log n}{ae} \bigg)^{1-\frac{\log n}{\log \log n}} e^{-\frac{a}{\log n}} \leq \epsilon n^{-2+o(1)} . \end{align*} Using the union bound, $\mathbb{P} ( E_{1} ) \geq 1-\epsilon n^{-1+o(1)}$. Thus, $\mathbb{P} ( E_{1} ) \to 1$. \begin{Lemma}\cite[Lemma 8]{Ref19} \label{BCBM-P-Lemma 6} Consider a sequence of i.i.d.
random variables $\{S_1,\ldots,S_m\}$ with distribution $ p(1-\xi) \delta_{+1} + p\xi \delta_{-1} +(1-p) \delta_{0}$, where $m-n = o(n)$. Let $f(n) = \frac{\log n}{\log \log n}$. Then \begin{equation*} \mathbb{P} \Bigg( \sum_{i=1}^{m} S_{i} \leq -f(n) \Bigg) \geq n^{-a ( \sqrt{1-\xi}-\sqrt{\xi} )^{2}+o(1) } . \end{equation*} \end{Lemma} Using Lemma~\ref{BCBM-P-Lemma 6} and since $ \{ \sum_{ j \in H^{c} } G_{ij} x_{j}^{*} x_{i}^{*} \}_{i \in H}$ are mutually independent, \begin{align} \label{BCBM-P-equ9} \mathbb{P} ( E_{2} ) & =1 - \prod_{i\in H} \Bigg[ 1- \mathbb{P} \bigg( \sum_{ j \in H^{c} } G_{ij} x_{j}^{*} x_{i}^{*} \leq -\delta, y_{i}=0 \bigg) \Bigg] \nonumber\\ &\geq 1 - \Big[ 1- \epsilon n^{-a ( \sqrt{1-\xi}-\sqrt{\xi} )^{2}+o(1)} \Big]^{ | H |} . \end{align} Since $\beta = \lim_{n \rightarrow \infty} -\frac{\log \epsilon}{\log n}$, it follows from~\eqref{BCBM-P-equ9} that \begin{align} \label{BCBM-P-equ11} \mathbb{P} ( E_{2} ) &\geq 1 - \Big[ 1- n^{-\beta -a ( \sqrt{1-\xi}-\sqrt{\xi} )^{2}+o(1)} \Big]^{ | H |} \nonumber\\ &\geq 1 - \exp \Big( - n^{1-\beta -a ( \sqrt{1-\xi}-\sqrt{\xi} )^{2}+o(1)} \Big) , \end{align} using $1+x \leq e^{x}$. From~\eqref{BCBM-P-equ11}, if $a ( \sqrt{1-\xi}-\sqrt{\xi} )^{2}+\beta <1$, then $\mathbb{P} ( E_{2} ) \rightarrow 1$. Therefore, $\mathbb{P} ( F ) \rightarrow 1$ and Theorem~\ref{Theorem 2} follows. \section{Proof of Theorem~\ref{Theorem 3}} \label{Proof-Theorem-3} We begin by deriving sufficient conditions for the \ac{SDP} estimator to produce the true labels $X^*$. \begin{Lemma} \label{BCBM-N-Lemma 1} For the optimization problem~\eqref{BCBM-N-equ3}, consider the Lagrange multipliers \begin{equation*} D^{*}=\mathrm{diag}(d_{i}^{*}), \qquad S^{*}\triangleq \begin{bmatrix} S_{A}^{*} & S_{B}^{*T} \\ S_{B}^{*} & S_{C}^{*} \end{bmatrix}. 
\end{equation*} If we have \begin{align*} &S_{A}^{*} = T_{2}Y^{T}X^{*} , \\ &S_{B}^{*} = -T_{2}Y , \\ &S_{C}^{*} = D^{*}-T_{1}G, \\ &S^{*} \succeq 0, \\ &\lambda_{2}(S^{*}) > 0, \\ &S^{*} [1, X^{*T}]^T =0 \end{align*} then $(D^* , S^*)$ is the dual optimal solution and $\widehat{Z}=X^{*}X^{*T}$ is the unique primal optimal solution of~\eqref{BCBM-N-equ3}. \end{Lemma} \begin{proof} Define \begin{equation*} H \triangleq \begin{bmatrix} 1 & X^{T}\\ X & Z \end{bmatrix}. \end{equation*} The Lagrangian of~\eqref{BCBM-N-equ3} is given by \begin{equation*} L(Z,X,S,D)=T_{1} \langle G,Z \rangle +2T_{2} \langle Y, X \rangle +\langle S, H \rangle -\langle D,Z-\mathbf{I} \rangle , \end{equation*} where $S\succeq 0$ and $D=\mathrm{diag}(d_{i})$ are Lagrange multipliers. For any $Z$ that satisfies the constraints in~\eqref{BCBM-N-equ3}, \begin{align*} T_{1} \langle G,Z \rangle+2T_{2} \langle Y,X \rangle &\overset{(a)}{\leq} L(Z,X,S^{*},D^{*})\\ &=\langle D^{*},\mathbf{I}\rangle +S_{A}^{*}\\ &\overset{(b)}{=}\langle D^{*},Z^{*}\rangle -\langle S_{B}^{*} , X^{*} \rangle \\ &=\langle S_{C}^{*}+T_{1}G,Z^{*}\rangle - \langle S_{B}^{*} , X^{*} \rangle \\ &\overset{(c)}{=}T_{1}\langle G, Z^{*}\rangle -2\langle S_{B}^{*} , X^{*} \rangle \\ &\overset{(d)}{=}T_{1}\langle G, Z^{*}\rangle + 2T_{2}\langle Y , X^{*} \rangle , \end{align*} where $(a)$ holds because $\langle S^{*},H \rangle \geq 0$, $(b)$ holds because $Z_{ii}=1$ for all $i \in [n]$ and $S_{A}^{*}=-S_{B}^{*T}X^{*}$, $(c)$ holds because $S_{B}^{*}=-S_{C}^{*}X^{*}$, and $(d)$ holds because $S_{B}^{*}=-T_{2}Y$. Therefore, $Z^{*}=X^{*}X^{*T}$ is a primal optimal solution. Now, assume $\tilde{Z}$ is another optimal solution. 
\begin{align*} \langle S^{*}, & \tilde{H} \rangle =S_{A}^{*} + 2\langle S_{B}^{*},\tilde{X} \rangle + \langle D^{*} -T_{1}G , \tilde{Z} \rangle \\ &\overset{(a)}{=}S_{A}^{*} + 2\langle S_{B}^{*},X^{*} \rangle +\langle D^{*},Z^{*}\rangle -T_{1}\langle G,Z^{*}\rangle\\ &=\langle S^{*}, H^{*} \rangle=0 , \end{align*} where $(a)$ holds because $\langle G,Z^{*}\rangle=\langle G,\tilde{Z} \rangle$, $Z_{ii}^{*}=\tilde{Z}_{ii}=1$ for all $i\in [n]$, and $\langle S_{B}^{*},X^{*} \rangle = \langle S_{B}^{*},\tilde{X} \rangle$. Since $\tilde{H} \succeq 0$ and $S^{*}\succeq 0$ while its second smallest eigenvalue $\lambda_{2}(S^{*})$ is positive, $\tilde{H}$ must be a multiple of $H^{*}$. Also, since $\tilde{Z}_{ii}=Z_{ii}^{*}=1$ for all $i \in [n]$, we have $\tilde{H}=H^{*}$. \end{proof} We now show that $S^{*}$ defined by $S_{A}^{*}$, $S_{B}^{*}$, and $S_{C}^{*}$ satisfies the other conditions in Lemma~\ref{BCBM-N-Lemma 1} with probability $1-o(1)$. Let \begin{equation} \label{BCBM-N-equ44} d_{i}^{*}=T_{1} \sum_{j=1}^{n} G_{ij}x_{j}^{*}x_{i}^{*} + T_{2}y_{i}x_{i}^{*}. \end{equation} Then $D^{*}X^{*} = T_{1}GX^{*}+T_{2}Y$ and based on the definitions of $S_{A}^{*}$, $S_{B}^{*}$, and $S_{C}^{*}$ in Lemma~\ref{BCBM-N-Lemma 1}, $S^{*}$ satisfies the condition $S^{*} [1, X^{*T}]^T =0$. It remains to show that $S^{*}\succeq 0$ and $\lambda_{2}(S^{*})>0$ with probability $1-o(1)$. In other words, we need to show that \begin{equation} \label{BCBM-N-equ1 New} \mathbb{P} \bigg( \underset{V\perp [1, X^{*T}]^T, \| V \|=1}{\inf} V^{T}S^{*}V>0 \bigg) \geq 1-o(1) , \end{equation} where $V$ is a vector of length $n+1$. Let $V \triangleq [v, U^{T}]^{T}$, where $v$ is a scalar and $U \triangleq [u_{1}, u_{2}, \cdots,u_{n}]^{T}$.
For any $V$ such that $V^{T}[1, X^{*T}]^T=0$ and $ \| V \|=1$, we have \begin{align} \label{BCBM-N-equ4} V&^{T}S^{*}V =v^{2} S_{A}^{*} -2T_{2}vU^{T}Y +U^{T}D^{*}U - T_{1}U^{T}GU \nonumber\\ \geq& ( 1-v^{2} ) \bigg[\min_{i \in [n]} d_{i}^{*} - T_{1} \| G-\mathbb{E}[G] \| + T_{1} p(1-2\xi) \bigg] \nonumber\\ &+v^{2} \bigg[ T_{2}Y^{T}X^{*} -2T_{2}\frac{\sqrt{n(1-v^{2})}}{|v|}-T_{1}p(1-2\xi) \bigg] , \end{align} where the last inequality holds because \begin{equation*} U^{T}D^{*}U \geq ( 1-v^{2} ) \min_{i \in [n]} d_{i}^{*} , \end{equation*} \begin{equation*} U^{T}(G-\mathbb{E}[G])U \leq ( 1-v^{2} ) \| G-\mathbb{E}[G] \| , \end{equation*} \begin{equation*} vU^{T}Y \leq |v| \sqrt{n(1-v^{2})} . \end{equation*} \begin{Lemma} \label{BCBM-New Lemma} Under the noisy label side information with noise parameter $\alpha$, \begin{equation*} \mathbb{P} \Bigg(\sum_{i=1}^{n} x_{i}^{*}y_{i} \leq \sqrt{n}\log n \Bigg) \leq e^{n\Big(\log \big(2\sqrt{\alpha(1-\alpha)}\big)+o(1) \Big)} . \end{equation*} \end{Lemma} \begin{proof} This follows from the Chernoff bound. \end{proof} Using Lemma~\ref{BCBM-New Lemma}, it can be shown that with probability converging to one, $\sum_{i=1}^{n} x_{i}^{*}y_{i} \geq \sqrt{n}\log n$. Thus, \begin{equation*} v^{2} \bigg[ T_{2}\sqrt{n}\log n -2T_{2}\frac{\sqrt{n(1-v^{2})}}{|v|}-T_{1}p(1-2\xi) \bigg] \geq 0 , \end{equation*} for $n$ sufficiently large. Applying Lemma~\ref{BCBM-P-Lemma 2}, \begin{align} \label{BCBM-N-equ5} V^{T}S^{*}V & \geq ( 1-v^{2} ) \Big( \min_{i \in [n]} d_{i}^{*} - T_{1} c^{'}\sqrt{\log n} +T_{1}p(1-2\xi) \Big) . \end{align} \begin{Lemma} \label{BCBM-N-Lemma 2} Consider a sequence $f(n)$, and for each $n$ a sequence of i.i.d. random variables $\{S_1,\ldots,S_m\}$ with distribution $p_{1} \delta_{+1} + p_{2} \delta_{-1} +(1-p_{1}-p_{2}) \delta_{0}$, where the parameters of the distribution depend on $n$ via $p_{1} = \rho_{1}\frac{\log n}{n}$, and $p_{2} = \rho_{2}\frac{\log n}{n}$ for some positive constants $\rho_{1}, \rho_{2}$.
We assume $m(n)-n=o(n)$, where in the sequel the dependence of $m$ on $n$ is implicit. Define $\omega \triangleq \lim_{n\rightarrow\infty} \frac{f(n)}{\log n}$. For sufficiently large $n$, when $\omega<\rho_1-\rho_2 $, \begin{equation} \mathbb{P} \Bigg(\sum_{i=1}^{m} S_{i} \leq f(n) \Bigg) \leq n^{-\eta^{*}+o(1)} , \label{eq:UpperUpper}\end{equation} and when $\omega >\rho_1-\rho_2$, \begin{equation} \mathbb{P}\Bigg(\sum_{i=1}^{m} S_{i} \geq f(n) \Bigg) = n^{-\eta^{*}+o(1)}, \label{eq:LowerUpper} \end{equation} where $\eta^{*} = \rho_{1} + \rho_{2} -\gamma^{*} +\frac{\omega}{2} \log \Big( \frac{\rho_{2} (\gamma^{*} + \omega)}{\rho_{1}(\gamma^{*} - \omega)} \Big)$ and $\gamma^{*} = \sqrt{\omega^{2}+4\rho_{1}\rho_{2}}$. \end{Lemma} \begin{proof} Inequality~\eqref{eq:UpperUpper} is derived by applying Chernoff bound. Equality~\eqref{eq:LowerUpper} is obtained by a sandwich argument on the probability: an upper bound derived via Chernoff bound, and a lower bound from~\cite[Lemma 15]{Ref7}. \end{proof} It follows from~\eqref{BCBM-N-equ44} that \begin{align*} \mathbb{P} ( d_{i}^{*}\leq \delta ) =& \mathbb{P} \Bigg( \sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*} \leq \frac{\delta-T_{2}}{T_{1}} \Bigg) (1-\alpha ) \\ &+ \mathbb{P} \Bigg( \sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*} \leq \frac{\delta+T_{2}}{T_{1}} \Bigg) \alpha , \end{align*} where $\sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*}$ is equal in distribution to $\sum_{i=1}^{n-1} S_{i}$ in Lemma~\ref{BCBM-N-Lemma 2} with $p_{1} = p(1-\xi)$ and $p_{2}=p\xi$. Recall that $\beta \triangleq \lim_{n \rightarrow \infty} \frac{T_{2}}{\log n}$, where $\beta \geq 0$. First, we bound $\min_{i \in [n]}d_{i}^{*}$ under the condition $0\leq \beta<aT_{1}(1-2\xi)$. 
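Before carrying out this bound, two sanity checks on $\eta(a,\beta)$ are easy to automate: at $\beta=0$ it reduces to the exponent $a(\sqrt{1-\xi}-\sqrt{\xi})^{2}$ of Theorems~\ref{Theorem 1} and~\ref{Theorem 2}, and $\eta(a,\beta)\geq\beta$, which is the content of Lemma~\ref{BCBM-N-Lemma 3} below. A Python sketch of these checks (the value $T_{1}=\log\frac{1-\xi}{\xi}$ used in the second check is our assumption, consistent with the proof of that lemma):

```python
import math

def eta(a, beta, xi, t1):
    # eta(a,beta) = a - gamma/T1
    #   + (beta/(2 T1)) * log((1-xi)(gamma+beta) / (xi(gamma-beta))),
    # with gamma = sqrt(beta^2 + 4 xi (1-xi) a^2 T1^2).  Since gamma > beta,
    # the logarithm is well defined for every beta >= 0.
    gamma = math.sqrt(beta ** 2 + 4 * xi * (1 - xi) * (a * t1) ** 2)
    return a - gamma / t1 + beta / (2 * t1) * math.log(
        (1 - xi) * (gamma + beta) / (xi * (gamma - beta)))
```

At $\beta=0$ the log term vanishes and $\gamma/T_{1}=2a\sqrt{\xi(1-\xi)}$, so $\eta(a,0)=a(1-2\sqrt{\xi(1-\xi)})=a(\sqrt{1-\xi}-\sqrt{\xi})^{2}$, for any $T_{1}>0$.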
It follows from Lemma~\ref{BCBM-N-Lemma 2} that \begin{align*} &\mathbb{P} \Bigg( \sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*} \leq \frac{\delta-T_{2}}{T_{1}} \Bigg) \leq n^{-\eta(a,\beta)+o(1)}, \\ & \mathbb{P} \Bigg( \sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*} \leq \frac{\delta+T_{2}}{T_{1}} \Bigg) \leq n^{-\eta(a,\beta)+\beta+o(1)} . \end{align*} Then \begin{align*} \mathbb{P} ( d_{i}^{*} \leq \delta ) &\leq n^{-\eta(a,\beta)+o(1)} (1-\alpha ) + n^{-\eta(a,\beta)+\beta+o(1)} \alpha \\ & =n^{-\eta(a,\beta) +o(1)} . \end{align*} Using the union bound, \begin{equation*} \mathbb{P} \bigg( \min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n} \bigg) \geq 1-n^{1-\eta(a, \beta)+o(1)} . \end{equation*} When $\eta(a,\beta)>1$, it follows that $\min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n}$ with probability $1-o(1)$. Thus, as long as $\eta(a,\beta)>1$, we can replace $\min d_i^*$ in~\eqref{BCBM-N-equ5} with $\frac{\log n}{\log \log n}$ and obtain, with probability $1-o(1)$: \begin{align*} V^{T}S^{*}V \geq& ( 1-v^{2} ) \bigg( \frac{\log n}{\log \log n} - T_{1} c' \sqrt{\log n}+T_{1}p(1-2\xi) \bigg)\\ > &0 , \end{align*} which concludes the first part of Theorem~\ref{Theorem 3}. We now bound $\min_{i \in [n]}d_{i}^{*}$ under the condition $\beta>aT_{1}(1-2\xi)$. It follows from Lemma~\ref{BCBM-N-Lemma 2} that \begin{align*} &\mathbb{P} \Bigg( \sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*} \leq \frac{\delta-T_{2}}{T_{1}} \Bigg) \leq n^{-\eta(a,\beta)+o(1)}, \\ &\mathbb{P} \Bigg( \sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*} \leq \frac{\delta+T_{2}}{T_{1}} \Bigg) \leq 1 . \end{align*} Then \begin{equation*} \mathbb{P} ( d_{i}^{*} \leq \delta ) \leq n^{-\eta(a,\beta)+o(1)}+n^{-\beta+o(1)} , \end{equation*} where $\alpha = n^{-\beta +o(1)}$. Using the union bound, \begin{equation*} \mathbb{P} \Big( \min_{i \in [n]}d_{i}^{*} \geq \delta \Big) \geq 1- \Big( n^{1-\eta(a,\beta)+o(1)}+n^{1-\beta+o(1)} \Big) .
\end{equation*} \begin{Lemma} \label{BCBM-N-Lemma 3} If $\beta > 1$, then $\eta(a,\beta) > 1$. \end{Lemma} \begin{proof} Define $\psi(a,\beta) \triangleq \eta(a,\beta) -\beta$. It can be shown that $\psi(a,\beta)$ is convex in $\beta$. At the minimizer $\beta^{*}$ of $\psi(a,\cdot)$, $\log \Big( \frac{(1-\xi) (\gamma^{*} + \beta^{*})}{\xi(\gamma^{*} - \beta^{*})} \Big) = 2T_{1}$. Then, by convexity, \begin{equation} \label{BCBM-N-Lemma3-equ2} \eta(a,\beta) -\beta \geq a -\frac{\gamma^{*}}{T_{1}} . \end{equation} It can be shown that at $\beta^{*}$, \begin{equation*} \frac{\gamma^{*} +\beta^{*}}{\gamma^{*} - \beta^{*}} = \frac{1-\xi}{\xi} = \frac{4\xi(1-\xi)a^{2}T_{1}^{2}}{(\gamma^{*} - \beta^{*})^{2}} . \end{equation*} Then $\gamma^{*} = \beta^{*} +2\xi aT_{1}$ and~\eqref{BCBM-N-Lemma3-equ2} becomes \begin{equation} \label{BCBM-N-Lemma3-equ3} \eta(a,\beta) -\beta \geq a -2\xi a -\frac{\beta^{*}}{T_{1}} . \end{equation} Also, it can be shown that at $\beta^{*}$, $\gamma^{*} = \frac{\beta^{*}}{1-2\xi}$. This implies that $\beta^{*} =(1-2\xi)aT_{1}$. Substituting into~\eqref{BCBM-N-Lemma3-equ3} leads to $\eta(a,\beta) - \beta \geq 0$ for all $\beta \geq 0$; in particular, $\eta(a,\beta) >1$ when $\beta >1$. \end{proof} When $\beta>1$, using Lemma~\ref{BCBM-N-Lemma 3}, it follows that $\min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n}$ holds with probability $1-o(1)$. Substituting into~\eqref{BCBM-N-equ5}, if $\beta>1$, then with probability $1-o(1)$ we obtain: \begin{align*} V^{T}S^{*}V \geq& ( 1-v^{2} ) \Big( \frac{\log n}{\log \log n} - T_{1} c' \sqrt{\log n}+T_{1}p(1-2\xi) \Big) \\ >& 0 , \end{align*} which concludes the second part of Theorem~\ref{Theorem 3}. \section{Proof of Theorem~\ref{Theorem 4}} \label{Proof-Theorem-4} Since the prior distribution of $X^{*}$ is uniform, among all estimators, the maximum likelihood estimator minimizes the average error probability. Therefore, we only need to show that with high probability the maximum likelihood estimator fails. 
Let \begin{equation*} F \triangleq \Bigg \{ \min_{i \in [n] }~ \bigg( T_{1}\sum_{j =1}^{n} G_{ij} x_{j}^{*} x_{i}^{*} + T_{2}x_{i}^{*}y_{i} \bigg) \leq -T_{1} \Bigg \}. \end{equation*} Then $\mathbb{P} ( \text{ML Fails} ) \geq \mathbb{P} ( F )$. Let $H$ denote the set of the first $ \lfloor \frac{n}{\log^{2} n} \rfloor$ nodes and $e (i, H )$ denote the number of edges between node $i$ and the nodes in the set $H \subset [n]$. It can be shown that \begin{align*} \min_{i \in [n] }~ & \bigg( T_{1}\sum_{j \in [n]} G_{ij} x_{j}^{*} x_{i}^{*} + T_{2}x_{i}^{*}y_{i} \bigg) \\ \leq & \min_{i \in H }~ \bigg( T_{1}\sum_{j \in [n]} G_{ij} x_{j}^{*} x_{i}^{*} + T_{2}x_{i}^{*}y_{i} \bigg) \\ \leq & \min_{i \in H }~ \bigg( T_{1}\sum_{j \in H^{c}} G_{ij} x_{j}^{*} x_{i}^{*} + T_{2}x_{i}^{*}y_{i} \bigg) + \max_{i \in H }~e (i, H ) . \end{align*} Define \begin{align*} &E_{1} \triangleq \bigg \{\max_{i \in H }~e (i, H ) \leq \delta -T_{1} \bigg\} , \\ &E_{2} \triangleq \bigg \{ \min_{i \in H }~ \bigg( T_{1}\sum_{j \in H^{c}} G_{ij} x_{j}^{*} x_{i}^{*} + T_{2}x_{i}^{*}y_{i} \bigg) \leq -\delta \bigg\}. \end{align*} Notice that $F \supset E_{1} \cap E_{2}$, so it suffices to show that $\mathbb{P} ( E_{1} ) \rightarrow 1$ and $\mathbb{P} ( E_{2} ) \rightarrow 1$ in order to prove that the maximum likelihood estimator fails. Since $e (i, H ) \sim \text{Binom}( | H |, a\frac{\log n }{n})$, from Lemma~\ref{BCBM-P-Lemma 5}, \begin{align*} \mathbb{P} & ( e (i, H ) \geq \delta-T_{1} ) \\ &\leq \bigg( \frac{\log^{2} n}{ae \log \log n} - \frac{T_{1} \log n}{ae} \bigg)^{T_{1}-\frac{\log n}{\log \log n}} e^{-\frac{a}{\log n}} \leq n^{-2+o(1)}. \end{align*} Using the union bound, $\mathbb{P} ( E_{1} ) \geq 1- n^{-1+o(1)}$. 
Let \begin{align*} E &\triangleq \bigg\{ T_{1}\sum_{j \in H^{c}} G_{ij} x_{j}^{*} x_{i}^{*} + T_{2}x_{i}^{*}y_{i} \leq -\delta \bigg\}, \\ E_{\alpha} &\triangleq \bigg\{ \sum_{j \in H^{c}} G_{ij} x_{j}^{*} x_{i}^{*} \leq \frac{-\delta + T_{2}}{T_{1}} \bigg\}, \\ E_{1-\alpha} &\triangleq \bigg \{ \sum_{j \in H^{c}} G_{ij} x_{j}^{*} x_{i}^{*} \leq \frac{-\delta - T_{2}}{T_{1}} \bigg \}. \end{align*} Then \begin{align} \label{BCBM-N-equ6} \mathbb{P} ( E_{2} ) & =1 - \prod_{i\in H} [ 1- \mathbb{P} ( E ) ] \overset {(a)}{=} 1 - [ 1- \mathbb{P} ( E ) ]^{ | H |} \nonumber\\ & = 1 - [ 1-\alpha \mathbb{P} ( E_{\alpha} ) - (1-\alpha ) \mathbb{P} ( E_{1-\alpha} ) ]^{ | H |} , \end{align} where $(a)$ holds because $ \{ T_{1}\sum_{j \in H^{c}} G_{ij} x_{j}^{*} x_{i}^{*} + T_{2}x_{i}^{*}y_{i} \}_{i \in H}$ are mutually independent. First, we bound $\mathbb{P}(E_{2})$ under the condition $0 \leq \beta < aT_{1} (1-2\xi )$. Using Lemma~\ref{BCBM-N-Lemma 2}, $\mathbb{P} ( E_{\alpha} ) \geq n^{-\eta(a,\beta) + \beta +o(1)}$ and $\mathbb{P} ( E_{1-\alpha} ) \geq n^{-\eta(a,\beta) +o(1)}$. It follows from~\eqref{BCBM-N-equ6} that \begin{align*} \mathbb{P} ( E_{2} ) & \overset{(a)}{\geq} 1 - \Big[ 1 - n^{-\eta(a,\beta)+o(1)} \Big]^{ | H |} \\ & \overset{(b)}{\geq} 1 - \exp \Big( -n^{1-\eta(a,\beta)+o(1)} \Big) , \end{align*} where $(a)$ holds because $\alpha =n^{-\beta + o(1)}$ and $(b)$ is due to $1+x \leq e^{x}$. Therefore, if $\eta(a,\beta) <1$, then $\mathbb{P} (E_{2} ) \rightarrow 1$ and the first part of Theorem~\ref{Theorem 4} follows. We now bound $\mathbb{P}(E_{2})$ under the condition $\beta \geq aT_{1} (1-2\xi ) $. 
Reorganizing~\eqref{BCBM-N-equ6}, \begin{equation} \label{BCBM-N-equ7} \mathbb{P} ( E_{2} ) = 1 - [ (1-\alpha ) \mathbb{P} ( E_{1-\alpha}^{c}) +\alpha \mathbb{P} ( E_{\alpha}^{c} ) ]^{ | H |} , \end{equation} where \begin{align*} \mathbb{P} ( E_{\alpha}^{c} ) &= \mathbb{P} \bigg( \sum_{j \in H^{c}} G_{ij} x_{j}^{*} x_{i}^{*} \geq \frac{-\delta + T_{2}}{T_{1}} \bigg) , \\ \mathbb{P} ( E_{1-\alpha}^{c} )&= \mathbb{P} \bigg( \sum_{j \in H^{c}} G_{ij} x_{j}^{*} x_{i}^{*} \geq \frac{-\delta - T_{2}}{T_{1}} \bigg) . \end{align*} Also, $\sum_{j \in H^{c}} G_{ij}x_{i}^{*}x_{j}^{*}$ is equal in distribution to $\sum_{i=1}^{ | H^{c} |-1} S_{i}$ in Lemma~\ref{BCBM-N-Lemma 2}, where $p_{1} = p(1-\xi)$ and $p_{2}=p\xi$. Then $\mathbb{P} ( E_{\alpha}^{c} ) \leq n^{-\eta(a,\beta) + \beta +o(1) }$ and $\mathbb{P} ( E_{1-\alpha}^{c} ) \leq 1$. It follows from~\eqref{BCBM-N-equ7} that \begin{align*} \mathbb{P} ( E_{2} ) & \geq 1 - \Big[(1-\alpha ) +\alpha n^{-\eta(a,\beta) + \beta +o(1) } \Big]^{ | H |} \\ & \overset{(a)}{=} 1 - \Big[ 1 -n^{-\beta+o(1) } +n^{-\eta(a,\beta) +o(1) } \Big]^{ | H |} \\ & \overset{(b)}{\geq } 1 - e^{ -n^{1-\beta +o(1) } \Big(1-n^{-\eta(a,\beta)+\beta+o(1)} \Big) }, \end{align*} where $(a)$ holds because $\alpha = n^{-\beta + o(1)}$ and $(b)$ is due to $1+x \leq e^{x}$. Therefore, since $\beta \leq \eta(a, \beta)$, if $\beta <1$, then $\mathbb{P} (E_{2} ) \rightarrow 1$ and the second part of Theorem~\ref{Theorem 4} follows. \section{Proof of Theorem~\ref{Theorem 5}} \label{Proof-Theorem-5} We begin by deriving sufficient conditions for the \ac{SDP} estimator to produce the true labels $X^*$. \begin{Lemma} \label{BCBM-G-Lemma 1} The sufficient conditions of Lemma~\ref{BCBM-N-Lemma 1} apply to the general side information \ac{SDP}~\eqref{BCBM-G-equ3} by setting $S_{A}^{*} = \tilde{Y}^{T}X^{*}$ and $S_{B}^{*}=-\tilde{Y}$. \end{Lemma} \begin{proof} The proof is similar to the proof of Lemma~\ref{BCBM-N-Lemma 1}. 
\end{proof} It suffices to show that $S^{*}$, defined via its components $S_{A}^{*}$, $S_{B}^{*}$, and $S_{C}^{*}$, satisfies the other conditions in Lemma~\ref{BCBM-G-Lemma 1} with probability $1-o(1)$. Let \begin{equation} \label{BCBM-G-equ4} d_{i}^{*}=T_{1} \sum_{j=1}^{n} G_{ij}x_{j}^{*}x_{i}^{*} + \tilde{y}_{i}x_{i}^{*} . \end{equation} Then $D^{*}X^{*} = T_{1}GX^{*}+\tilde{Y}$ and based on the definitions of $S_{A}^{*}$, $S_{B}^{*}$, and $S_{C}^{*}$ in Lemma~\ref{BCBM-G-Lemma 1}, $S^{*}$ satisfies the condition $S^{*} [1, X^{*T}]^T =0$. It remains to show that~\eqref{BCBM-N-equ1 New} holds, i.e., $S^{*}\succeq 0$ and $\lambda_{2}(S^{*})>0$ with probability $1-o(1)$. Let \begin{equation} \label{BCBM-G-equ1 New} y_{max} \triangleq K \max_{k, m_{k}} \bigg | \log \bigg( \frac{\alpha_{+,m_{k}}^{k}}{\alpha_{-,m_{k}}^{k}} \bigg) \bigg| , \end{equation} where $k \in \{ 1,2,\cdots,K \}$ and $m_{k} \in \{ 1,2,\cdots,M_{k} \}$. For any $V$ such that $V^{T}[1, X^{*T}]^T=0$ and $ \| V \|=1$, we have \begin{align} &V^{T}S^{*}V =v^{2} S_{A}^{*} -2vU^{T} \tilde{Y} +U^{T}D^{*}U - T_{1}U^{T}GU \nonumber\\ &\geq ( 1-v^{2} ) \bigg[\min_{i \in [n]} d_{i}^{*} - T_{1} \| G-\mathbb{E}[G] \| + T_{1} p(1-2\xi) \bigg] \nonumber\\ &+v^{2} \bigg[ \tilde{Y}^{T}X^{*} -2y_{max}\frac{\sqrt{n(1-v^{2})}}{|v|}-T_{1}p(1-2\xi) \bigg] , \end{align} where the last inequality holds in a manner similar to \eqref{BCBM-N-equ4} with the difference that in the present case \begin{equation*} vU^{T}\tilde{Y} \leq |v|y_{max} \sqrt{n(1-v^{2})}. \end{equation*} \begin{Lemma} \label{BCBM-G-New Lemma} For feature $k$ of general side information, \begin{equation*} \mathbb{P} \Bigg(\sum_{i=1}^{n} x_{i}^{*}z_{i, k} \geq \sqrt{n}\log n \Bigg) \geq 1-o(1) , \end{equation*} where \begin{align*} z_{i,k} &\triangleq \sum_{m_{k}=1}^{M_{k}} \mathbbm{1}_{\{y_{i,k} = m_{k}\} } \log\bigg( \frac{\alpha_{+,m_{k}}^{k}}{\alpha_{-,m_{k}}^{k}}\bigg). 
\end{align*} \end{Lemma} \begin{proof} For feature $k$, let \begin{align*} \delta' &\triangleq \sqrt{n} \log n , \\ \rho_{j} &\triangleq \frac{1}{n} |\{i \in [n] : y_{i,k} = j \}|, \end{align*} where $j \in \{1, \cdots, M_{k}\}$ and $\sum_{j} \rho_{j} = 1$. Then \begin{align*} \mathbb{P} \Bigg(\sum_{i=1}^{n} x_{i}^{*}z_{i,k} \leq \delta' \Bigg) &\leq \sum_{j=1}^{M_{k}} \mathbb{P} \Bigg( \sum_{i\in A_{j}} x_{i}^{*}z_{i,k} \leq \delta' \Bigg). \end{align*} Applying the Chernoff bound yields \begin{equation*} \mathbb{P} \Bigg( \sum_{i\in A_{j}} x_{i}^{*}z_{i,k} \leq \delta' \Bigg) \leq e^{n(\psi_{k,j}+o(1))}, \end{equation*} where \begin{equation*} \psi_{k,j} \triangleq \rho_{j} \log \bigg( 2\sqrt{\alpha_{+,j}^{k} \alpha_{-,j}^{k} \mathbb{P}(x_{i}^{*}=1) \mathbb{P}(x_{i}^{*}=-1)} \bigg ). \end{equation*} Since $\psi_{k,j}<0$ for any values of $\alpha_{+,j}^{k}$ and $\alpha_{-,j}^{k}$, we have \begin{align*} \mathbb{P} \Bigg(\sum_{i=1}^{n} x_{i}^{*}z_{i,k} \geq \delta' \Bigg) &\geq 1- \sum_{j=1}^{M_{k}} e^{n(\psi_{k,j}+o(1))} = 1 - o(1). \end{align*} Therefore, with probability $1-o(1)$, $\sum_{i=1}^{n} x_{i}^{*}z_{i,k} \geq \sqrt{n} \log n$ and Lemma~\ref{BCBM-G-New Lemma} follows. \end{proof} Using Lemmas~\ref{BCBM-P-Lemma 2} and~\ref{BCBM-G-New Lemma}, \begin{align} \label{BCBM-G-equ5} V^{T}S^{*}V \geq& ( 1-v^{2} ) \Big( \min_{i \in [n]}d_{i}^{*} - T_{1} c^{'}\sqrt{\log n} +T_{1}p(1-2\xi) \Big) . \end{align} It can be shown that $\sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*}$ in~\eqref{BCBM-G-equ4} is equal in distribution to $\sum_{i=1}^{n-1} S_{i}$ in Lemma~\ref{BCBM-N-Lemma 2}, where $p_{1} = p(1-\xi)$ and $p_{2}=p\xi$. Then \begin{equation} \label{BCBM-G-equ6} \mathbb{P} ( d_{i}^{*}\leq \delta ) = \sum_{m_{1}=1}^{M_{1}} \sum_{m_{2}=1}^{M_{2}} ... 
\sum_{m_{K}=1}^{M_{K}} P (m_{1}, ..., m_{K} ) , \end{equation} where \begin{align*} P & (m_{1}, ..., m_{K} ) \triangleq \mathbb{P} (x_{i}^{*}=1 )e^{f_{2}(n)} \mathbb{P} \Bigg( \sum_{i=1}^{n-1} S_{i} \leq \frac{\delta-f_{1}(n)}{T_{1}} \Bigg) \\ &+\mathbb{P} (x_{i}^{*}=-1 ) e^{f_{3}(n)} \mathbb{P} \Bigg( \sum_{i=1}^{n-1} S_{i} \leq \frac{\delta+f_{1}(n)}{T_{1}} \Bigg). \end{align*} First, we bound $\min_{i \in [n]}d_{i}^{*}$ under the condition $ | \beta_{1} | \leq aT_{1} (1-2\xi )$. It follows from Lemma~\ref{BCBM-N-Lemma 2} that \begin{align*} \mathbb{P} \Bigg( \sum_{i=1}^{n-1} S_{i} \leq \frac{\delta-f_{1}(n)}{T_{1}} \Bigg) &\leq n^{-\eta(a,\beta_{1})+o(1)}, \\ \mathbb{P} \Bigg( \sum_{i=1}^{n-1} S_{i} \leq \frac{\delta+f_{1}(n)}{T_{1}} \Bigg) &\leq n^{-\eta(a,\beta_{1})+\beta_{1}+o(1)}. \end{align*} Notice that \begin{align*} \beta \triangleq \lim_{n\rightarrow\infty} -\frac{\max (f_2(n),f_3(n))}{\log n} . \end{align*} When $\beta_{1} \geq 0$, $\lim_{n\rightarrow \infty}\frac{f_{2}(n)}{\log n} = -\beta$ and $\lim_{n\rightarrow \infty}\frac{f_{3}(n)}{\log n} = -\beta_{1}-\beta$. Then \begin{equation*} \mathbb{P} ( d_{i}^{*} \leq \delta ) \leq n^{-\eta(a,\beta_{1})-\beta+o(1)}. \end{equation*} When $ \beta_{1} < 0$, $\lim_{n\rightarrow \infty}\frac{f_{3}(n)}{\log n} = -\beta$ and $\lim_{n\rightarrow \infty}\frac{f_{2}(n)}{\log n} = \beta_{1} - \beta$. Then \begin{equation*} \mathbb{P} ( d_{i}^{*} \leq \delta ) \leq n^{-\eta(a,\beta_{1})+\beta_{1}-\beta+o(1)} = n^{-\eta(a, | \beta_{1} |)-\beta+o(1)}. \end{equation*} Using the union bound, \begin{equation*} \mathbb{P} \bigg( \min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n} \bigg) \geq 1-n^{1-\eta(a, | \beta_{1} |) -\beta+o(1)} . \end{equation*} When $\eta(a, | \beta_{1} |) +\beta >1$, it follows that $\min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n}$ holds with probability $1-o(1)$. 
Substituting into~\eqref{BCBM-G-equ5}, if $\eta(a, | \beta_{1} |) +\beta >1 $, then with probability $1-o(1)$, \begin{align*} V^{T}S^{*}V \geq& ( 1-v^{2} ) \bigg( \frac{\log n}{\log \log n} - T_{1} c' \sqrt{\log n}+T_{1}p(1-2\xi) \bigg) \\ > & 0 , \end{align*} which concludes the first part of Theorem~\ref{Theorem 5}. We now bound $\min_{i \in [n]}d_{i}^{*}$ under the condition $ | \beta_{1} | \geq aT_{1} (1-2\xi )$. When $\beta_{1} \geq 0$, $\lim_{n\rightarrow \infty}\frac{f_{2}(n)}{\log n} = -\beta$ and $\lim_{n\rightarrow \infty}\frac{f_{3}(n)}{\log n} = -\beta_{1}-\beta$. Then \begin{equation*} \mathbb{P} ( d_{i}^{*} \leq \delta ) \leq n^{-\beta-\eta(a,\beta_{1})+o(1)} + n^{-\beta-\beta_{1}+o(1)} \leq n^{-\beta-\beta_{1}+o(1)}, \end{equation*} since $\beta_{1} \leq \eta(a,\beta_{1})$. When $\beta_{1} < 0$, $\lim_{n\rightarrow \infty} \frac{f_{3}(n)}{\log n} = -\beta$ and $\lim_{n\rightarrow \infty} \frac{f_{2}(n)}{\log n} = \beta_{1}-\beta$. Then \begin{equation*} \mathbb{P} ( d_{i}^{*} \leq \delta ) \leq n^{-\beta+\beta_{1}+o(1)} + n^{-\beta-\eta(a, | \beta_{1} | )+o(1)} \leq n^{-\beta+\beta_{1}+o(1)}, \end{equation*} since $ | \beta_{1} | \leq \eta(a, | \beta_{1} | )$. Using the union bound, \begin{equation*} \mathbb{P} \bigg( \min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n} \bigg) \geq 1-n^{1- | \beta_{1} | -\beta+o(1)} . \end{equation*} When $ | \beta_{1} | +\beta >1 $, with probability $1-o(1)$, we have $\min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n}$. Substituting into~\eqref{BCBM-G-equ5}, if $ | \beta_{1} | +\beta >1 $, then with probability $1-o(1)$, \begin{align*} V^{T}S^{*}V \geq& ( 1-v^{2} ) \bigg( \frac{\log n}{\log \log n} - T_{1} c' \sqrt{\log n}+T_{1}p(1-2\xi) \bigg)\\ > &0 , \end{align*} which concludes the second part of Theorem~\ref{Theorem 5}. \section{Proof of Theorem~\ref{Theorem 6}} \label{Proof-Theorem-6} Similar to the proof of Theorem~\ref{Theorem 4}, let \begin{equation*} F \triangleq \Bigg\{ \min_{i \in [n] }~ \Bigg( T_{1}\sum_{j =1}^{n} G_{ij} x_{j}^{*} x_{i}^{*} + x_{i}^{*}\tilde{y}_{i} \Bigg) \leq -T_{1} \Bigg\}. 
\end{equation*} Then $\mathbb{P} ( \text{ML Fails} ) \geq \mathbb{P} ( F )$ and if we show that $\mathbb{P} ( F ) \rightarrow 1$, the maximum likelihood estimator fails. Let $H$ be the set of the first $ \lfloor \frac{n}{\log^{2} n} \rfloor$ nodes and $e (i, H )$ denote the number of edges between node $i$ and other nodes in the set $H$. It can be shown that \begin{align*} \min_{i \in [n] }~ & \Bigg( T_{1}\sum_{j \in [n]} G_{ij} x_{j}^{*} x_{i}^{*} + x_{i}^{*}\tilde{y}_{i} \Bigg) \\ \leq & \min_{i \in H }~ \Bigg( T_{1}\sum_{j \in [n]} G_{ij} x_{j}^{*} x_{i}^{*} + x_{i}^{*}\tilde{y}_{i} \Bigg) \\ \leq & \min_{i \in H }~ \Bigg( T_{1}\sum_{j \in H^{c}} G_{ij} x_{j}^{*} x_{i}^{*} + x_{i}^{*}\tilde{y}_{i} \Bigg) + \max_{i \in H }~e (i, H ) . \end{align*} Let \begin{align*} &E_{1} \triangleq \Bigg\{ \max_{i \in H }~e (i, H ) \leq \delta -T_{1} \Bigg\}, \\ &E_{2} \triangleq \Bigg\{ \min_{i \in H }~ \Bigg( T_{1}\sum_{j \in H^{c}} G_{ij} x_{j}^{*} x_{i}^{*} + x_{i}^{*}\tilde{y}_{i} \Bigg) \leq -\delta \Bigg\}. \end{align*} Notice that $F \supset E_{1} \cap E_{2}$. Then the maximum likelihood estimator fails if we show that $\mathbb{P} ( E_{1} ) \rightarrow 1$ and $\mathbb{P} ( E_{2} ) \rightarrow 1$. Since $e (i, H ) \sim \text{Binom}( | H |, a\frac{\log n }{n})$, from Lemma~\ref{BCBM-P-Lemma 5}, \begin{align*} \mathbb{P} & ( e (i, H ) \geq \delta-T_{1} ) \\ &\leq \bigg( \frac{\log^{2} n}{ae \log \log n} - \frac{T_{1} \log n}{ae} \bigg)^{T_{1}-\frac{\log n}{\log \log n}} e^{-\frac{a}{\log n}} \leq n^{-2+o(1)} . \end{align*} Using the union bound, $\mathbb{P} ( E_{1} ) \geq 1- n^{-1+o(1)}$. Let \begin{align*} E &\triangleq \Bigg\{ T_{1}\sum_{j \in H^{c}} G_{ij} x_{j}^{*} x_{i}^{*} + x_{i}^{*}\tilde{y}_{i} \leq -\delta \Bigg\}, \\ E_{+} &\triangleq \Bigg\{ \sum_{j \in H^{c}} G_{ij} x_{j}^{*} x_{i}^{*} \leq \frac{-\delta - f_{1}(n) }{T_{1}} \Bigg\}, \\ E_{-} &\triangleq \Bigg\{ \sum_{j \in H^{c}} G_{ij} x_{j}^{*} x_{i}^{*} \leq \frac{-\delta + f_{1}(n)}{T_{1}} \Bigg\}. 
\end{align*} Define \begin{align*} P(m_{1}, ..., m_{K}) \triangleq & \mathbb{P} (x_{i}^{*}=1 ) e^{f_{2}(n)} \mathbb{P} ( E_{+} ) \nonumber\\ &+\mathbb{P} (x_{i}^{*}=-1 ) e^{f_{3}(n)} \mathbb{P} ( E_{-} ). \end{align*} Then \begin{align*} \mathbb{P} ( E_{2} ) & =1 - \prod_{i\in H} [ 1- \mathbb{P} ( E ) ] \overset {(a)}{=} 1 - [ 1- \mathbb{P} ( E ) ]^{ | H |} \\ & = 1 - \Bigg[ 1- \sum_{m_{1}=1}^{M_{1}} \cdots \sum_{m_{K}=1}^{M_{K}} P (m_{1}, ..., m_{K} ) \Bigg]^{ | H |} , \end{align*} where $(a)$ holds because $ \{ T_{1}\sum_{j \in H^{c}} G_{ij} x_{j}^{*} x_{i}^{*} + x_{i}^{*}\tilde{y}_{i} \}_{i \in H}$ are mutually independent. \label{Remark2} First, we bound $\mathbb{P}(E_2)$ under the condition $| \beta_{1} | \leq aT_{1} (1-2\xi )$. Using Lemma~\ref{BCBM-N-Lemma 2}, $\mathbb{P} ( E_{+} ) \geq n^{-\eta(a,\beta_{1})+o(1)}$ and $\mathbb{P} ( E_{-} ) \geq n^{-\eta(a,\beta_{1})+\beta_{1}+o(1)}$. When $\beta_{1}\geq 0$, $\lim_{n\rightarrow \infty} \frac{f_{2}(n)}{\log n}= -\beta$ and $\lim_{n\rightarrow \infty} \frac{f_{3}(n)}{\log n}= -\beta_{1} -\beta$. Then \begin{align*} \mathbb{P} ( E_{2} ) & \geq 1 - \Big[ 1- n^{-\eta(a,\beta_{1})-\beta+o(1)} \Big]^{ | H |} \\ & \geq 1 - \exp \Big( -n^{1-\eta(a,\beta_{1})-\beta+o(1)} \Big) , \end{align*} using $1+x \leq e^{x}$. When $\beta_{1}<0$, $\lim_{n\rightarrow \infty} \frac{f_{3}(n)}{\log n} = -\beta$ and $\lim_{n\rightarrow \infty} \frac{f_{2}(n)}{\log n} = \beta_{1} -\beta $. Then \begin{align*} \mathbb{P} ( E_{2} ) & \geq 1 - \Big[ 1- n^{-\eta(a,\beta_{1})+\beta_{1}-\beta+o(1)} \Big]^{ | H |} \\ & \geq 1 - \exp \Big( -n^{1-\eta(a, | \beta_{1} |)-\beta+o(1)} \Big) , \end{align*} using $1+x \leq e^{x}$ and $\eta(a,\beta_{1})-\beta_{1} = \eta(a, | \beta_{1} |)$. Therefore, if $\eta(a, | \beta_{1} |) +\beta<1$, then $\mathbb{P} (E_{2} ) \rightarrow 1$ and the first part of Theorem~\ref{Theorem 6} follows. We now bound $\mathbb{P}(E_2)$ under the condition $ | \beta_{1} | \geq aT_{1} (1-2\xi )$. 
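The step $\eta(a,\beta_{1})-\beta_{1} = \eta(a, | \beta_{1} | )$ used above for $\beta_{1}<0$ reflects a symmetry of the exponent in Lemma~\ref{BCBM-N-Lemma 2}: replacing $\omega$ by $-\omega$ changes $\eta^{*}$ by exactly $\omega \log(\rho_{2}/\rho_{1})$. With $\rho_{1}=a(1-\xi)$, $\rho_{2}=a\xi$, $\omega = -\beta_{1}/T_{1}$, and assuming $T_{1} = \log\frac{1-\xi}{\xi}$ (the edge log-likelihood ratio; an assumption here about how $T_{1}$ is defined earlier in the paper), this shift is exactly $\beta_{1}$. A minimal numerical check of the symmetry itself, which is assumption-free:

```python
import math

def eta_star(rho1, rho2, w):
    # exponent from Lemma BCBM-N-Lemma 2
    g = math.sqrt(w * w + 4 * rho1 * rho2)
    return rho1 + rho2 - g + (w / 2) * math.log(rho2 * (g + w) / (rho1 * (g - w)))

# illustrative rates, e.g. rho1 = a(1-xi), rho2 = a*xi with a = 2, xi = 0.2
rho1, rho2 = 1.6, 0.4
for w in [0.05, 0.3, 0.7, 1.2]:
    # flipping the sign of omega shifts the exponent by w*log(rho2/rho1)
    shift = eta_star(rho1, rho2, w) - eta_star(rho1, rho2, -w)
    assert abs(shift - w * math.log(rho2 / rho1)) < 1e-12
```

The shift follows by adding the two logarithmic terms: their $\gamma^{*}$-dependent factors cancel, leaving $\frac{\omega}{2}\log(\rho_{2}^{2}/\rho_{1}^{2})$.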
When $\beta_{1} \geq 0$, $\lim_{n\rightarrow \infty} \frac{f_{2}(n)}{\log n} = -\beta$ and $\lim_{n\rightarrow \infty} \frac{f_{3}(n)}{\log n} = -\beta_{1} -\beta$. Using Lemma~\ref{BCBM-N-Lemma 2}, $\mathbb{P} ( E_{+} ) \geq n^{-\eta(a, \beta_{1})+o(1)}$ and $\mathbb{P} ( E_{-} ) \geq 1-o(1)$. Then \begin{align*} \mathbb{P} & ( E_{2} ) \geq 1 - \Big[ 1 -n^{-\eta(a,\beta_{1})-\beta+o(1)} -n^{-\beta_{1}-\beta+o(1)} \Big]^{ | H |} \\ & \geq 1 - \exp \Big( -n^{1-\eta(a,\beta_{1})-\beta+o(1)} -n^{1-\beta_{1}-\beta+o(1)} \Big) , \end{align*} using $1+x \leq e^{x}$. When $\beta_{1} < 0$, $\lim_{n\rightarrow \infty} \frac{f_{3}(n)}{\log n} = -\beta$ and $\lim_{n\rightarrow \infty} \frac{f_{2}(n)}{\log n}= \beta_{1} -\beta$. Using Lemma~\ref{BCBM-N-Lemma 2}, $\mathbb{P} ( E_{+} ) \geq 1-o(1)$ and $\mathbb{P} ( E_{-} ) \geq n^{-\eta(a, |\beta_{1}|)+o(1)}$. Then \begin{align*} \mathbb{P} & ( E_{2} ) \geq 1 - \Big[ 1 -n^{\beta_{1}-\beta+o(1)} -n^{-\eta(a,|\beta_{1}|)-\beta +o(1)} \Big]^{ | H |} \\ & \geq 1 - \exp \Big( -n^{1+\beta_{1}-\beta+o(1)} -n^{1-\eta(a,|\beta_{1}|)-\beta +o(1)} \Big) , \end{align*} using $1+x \leq e^{x}$. Therefore, since $|\beta_{1}| \leq \eta(a,|\beta_{1}|)$, if $|\beta_{1}| +\beta<1$, then $\mathbb{P} (E_{2} ) \rightarrow 1$ and the second part of Theorem~\ref{Theorem 6} follows. \section{Proof of Theorem~\ref{Theorem 7}} \label{Proof-Theorem-7} We begin by deriving sufficient conditions for the solution of \ac{SDP}~\eqref{BSSBM-P-equ2} to match the true labels. \begin{Lemma} \label{BSSBM-P-Lemma 1} For the optimization problem~\eqref{BSSBM-P-equ2}, consider the Lagrange multipliers \begin{equation*} \lambda^{*}, \quad \mu^{*}, \quad D^{*}=\mathrm{diag}(d_{i}^{*}), \quad S^{*}. 
\end{equation*} If we have \begin{align*} &S^{*} = D^{*}+\lambda^{*} \mathbf{J}+\mu^{*}W-G ,\\ & S^{*} \succeq 0, \\ &\lambda_{2}(S^{*}) > 0 ,\\ &S^{*}X^{*} =0 , \end{align*} then $(\lambda^{*}, \mu^{*}, D^*, S^*)$ is the dual optimal solution and $\widehat{Z}=X^{*}X^{*T}$ is the unique primal optimal solution of~\eqref{BSSBM-P-equ2}. \end{Lemma} \begin{proof} The proof is similar to the proof of Lemma~\ref{BCBM-P-Lemma 1}. The Lagrangian of~\eqref{BSSBM-P-equ2} is given by \begin{align*} L(Z,S,D,\lambda,\mu)=&\langle G,Z\rangle +\langle S,Z\rangle -\langle D,Z-\mathbf{I} \rangle \\ &-\lambda \langle \mathbf{J},Z\rangle -\mu ( \langle W,Z\rangle -(Y^{T}Y)^{2} ) , \end{align*} where $S\succeq 0$, $D=\mathrm{diag}(d_{i})$, and $\lambda ,\mu$ are Lagrange multipliers. Since $\langle \mathbf{J},Z \rangle = 0$ for any $Z$ that satisfies the constraints in~\eqref{BSSBM-P-equ2}, it can be shown that $\langle G,Z\rangle \leq \langle G,Z^{*}\rangle$. Also, similar to the proof of Lemma~\ref{BCBM-P-Lemma 1}, it can be shown that the optimum solution is unique. \end{proof} It suffices to show that $S^{*} = D^{*}+\lambda^{*} \mathbf{J}+\mu^{*}W-G$ satisfies the other conditions in Lemma~\ref{BSSBM-P-Lemma 1} with probability $1-o(1)$. Let \begin{equation} \label{BSSBM-P-equ3} d_{i}^{*}= \sum_{j=1}^{n} G_{ij}x_{j}^{*}x_{i}^{*} -\mu^{*} \sum_{j=1}^{n} y_{i}y_{j}x_{j}^{*}x_{i}^{*} . \end{equation} Then $D^{*}X^{*} = GX^{*}-\mu^{*}WX^{*}$ and based on the definition of $S^{*}$ in Lemma~\ref{BSSBM-P-Lemma 1}, $S^{*}$ satisfies the condition $S^{*}X^{*} =0$. It remains to show that~\eqref{BCBM-P-equ1-New} holds, i.e., $S^{*}\succeq 0$ and $\lambda_{2}(S^{*})>0$ with probability $1-o(1)$. Under the binary stochastic block model, \begin{equation} \label{BSBM-expectation} \mathbb{E}[G]=\frac{p-q}{2}X^{*}X^{*T}+\frac{p+q}{2}\mathbf{J}-p\mathbf{I} . 
\end{equation} It follows that for any $V$ such that $V^{T}X^{*}=0$ and $ \| V \|=1$, \begin{align*} V^{T}S^{*}V=&V^{T}D^{*}V+ ( \lambda^{*} -\frac{p+q}{2} )V^{T}\mathbf{J}V+p\\ &-V^{T}(G-\mathbb E[G])V+\mu^{*}V^{T}WV . \end{align*} Let $\lambda^{*}\geq \frac{p+q}{2}$. Since $V^{T}D^{*}V \geq \min_{i\in [n]} d_{i}^{*}$ and $V^{T}(G-\mathbb E[G])V \leq \| G-\mathbb E[G] \| $, \begin{equation*} V^{T}S^{*}V \geq \min_{i \in [n]} d_{i}^{*}+p- \| G-\mathbb E[G] \|+\mu^{*}V^{T}WV . \end{equation*} \begin{Lemma}\cite[Theorem 5]{Ref18} \label{BSSBM-P-Lemma 2} For any $c > 0$, there exists $c^{'} >0$ such that for any $n \geq 1$, $ \| G-\mathbb E[G] \| \leq c^{'}\sqrt{\log n}$ with probability at least $1-n^{-c}$. \end{Lemma} Also, it can be shown that Lemma~\ref{BCBM-P-Lemma 3} holds here. Choose $\mu^{*}< 0$; then in view of Lemmas~\ref{BSSBM-P-Lemma 2} and~\ref{BCBM-P-Lemma 3}, with probability $1-o(1)$, \begin{equation} \label{BSSBM-P-equ4} V^{T}S^{*}V \geq \min_{i \in [n]}d_{i}^{*} +p+(\mu^{*}-c^{'})\sqrt{\log n}. \end{equation} \begin{Lemma} \label{BSSBM-P-Lemma 3} If $\delta = \frac{\log n}{\log \log n}$, then \begin{equation*} \mathbb{P}(d_{i}^{*}\leq \delta ) \leq \epsilon n^{-\frac{1}{2} ( \sqrt{a}-\sqrt{b} )^{2}+o(1)} + (1-\epsilon )\epsilon^{n}. \end{equation*} \end{Lemma} \begin{proof} It follows from the Chernoff bound. \end{proof} Recall that $\beta \triangleq \lim_{n \rightarrow \infty} -\frac{\log \epsilon}{\log n}$, where $\beta \geq 0$. It follows from Lemma~\ref{BSSBM-P-Lemma 3} that \begin{equation*} \mathbb{P}(d_{i}^{*}\leq \delta ) \leq n^{-\frac{1}{2} ( \sqrt{a}-\sqrt{b} )^{2} - \beta +o(1)}. \end{equation*} Then using the union bound, \begin{equation*} \mathbb{P}\bigg(\min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n} \bigg) \geq 1 - n^{1-\frac{1}{2} ( \sqrt{a}-\sqrt{b} )^{2} - \beta +o(1)} . 
\end{equation*} When $ ( \sqrt{a}-\sqrt{b} )^{2} + 2\beta> 2$, it follows that $\min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n}$ holds with probability $1-o(1)$. Combining this result with~\eqref{BSSBM-P-equ4}, if $ ( \sqrt{a}-\sqrt{b} )^{2} + 2\beta> 2$, then with probability $1-o(1)$, \begin{equation*} V^{T}S^{*}V \geq \frac{\log n}{\log \log n}+p+(\mu^{*}-c^{'})\sqrt{\log n} > 0 , \end{equation*} which completes the proof of Theorem \ref{Theorem 7}. \section{Proof of Theorem~\ref{Theorem 9}} \label{Proof-Theorem-9} We begin by deriving sufficient conditions for the solution of \ac{SDP}~\eqref{BSSBM-N-equ1} to match the true labels. \begin{Lemma} \label{BSSBM-N-Lemma 1} For the optimization problem~\eqref{BSSBM-N-equ1}, consider the Lagrange multipliers \begin{equation*} \lambda^* \quad, \quad D^{*}=\mathrm{diag}(d_{i}^{*}), \quad S^{*}\triangleq \begin{bmatrix} S_{A}^{*} & S_{B}^{*T} \\ S_{B}^{*} & S_{C}^{*} \end{bmatrix}. \end{equation*} If we have \begin{align*} &S_{A}^{*} = T_{2}Y^{T}X^{*} ,\\ &S_{B}^{*} = -T_{2}Y ,\\ &S_{C}^{*} = D^{*}+\lambda^*{\mathbf J}-T_{1}G ,\\ & S^{*} \succeq 0, \\ &\lambda_{2}(S^{*}) > 0 ,\\ &S^{*} [1, X^{*T}]^T =0 , \end{align*} then $(\lambda^{*}, D^*, S^*)$ is the dual optimal solution and $\widehat{Z}=X^{*}X^{*T}$ is the unique primal optimal solution of~\eqref{BSSBM-N-equ1}. \end{Lemma} \begin{proof} The proof is similar to the proof of Lemma~\ref{BCBM-N-Lemma 1}. The Lagrangian of~\eqref{BSSBM-N-equ1} is given by \begin{align*} L(Z,X,S,D,\lambda)=&T_{1} \langle G,Z \rangle +T_{2} \langle Y, X \rangle +\langle S, H \rangle \\ &-\langle D,Z-\mathbf{I} \rangle -\lambda \langle \mathbf{J},Z \rangle , \end{align*} where $S\succeq 0$, $D=\mathrm{diag}(d_{i})$, and $\lambda \in \mathbb{R}$ are Lagrange multipliers. 
Since $\langle \mathbf{J},Z^{*} \rangle = 0$, it can be shown that for any $Z$ that satisfies the constraints in~\eqref{BSSBM-N-equ1}, $T_{1}\langle G, Z\rangle + T_{2}\langle Y , X \rangle \leq T_{1}\langle G, Z^{*}\rangle + T_{2}\langle Y , X^{*} \rangle$. Also, the uniqueness of the optimum solution is proved similarly. \end{proof} We now show that $S^{*}$ defined by $S_{A}^{*}$, $S_{B}^{*}$, and $S_{C}^{*}$ satisfies the remaining conditions in Lemma~\ref{BSSBM-N-Lemma 1} with probability $1-o(1)$. Let \begin{equation} \label{BSBM-N-equ101} d_{i}^{*}=T_{1} \sum_{j=1}^{n} G_{ij}x_{j}^{*}x_{i}^{*} + T_{2}y_{i}x_{i}^{*}. \end{equation} Then $D^{*}X^{*} = T_{1}GX^{*}+T_{2}Y$ and based on the definitions of $S_{A}^{*}$, $S_{B}^{*}$, and $S_{C}^{*}$ in Lemma~\ref{BSSBM-N-Lemma 1}, $S^{*}$ satisfies the condition $S^{*} [1, X^{*T}]^T =0$. It remains to show that~\eqref{BCBM-N-equ1 New} holds, i.e., $S^{*}\succeq 0$ and $\lambda_{2}(S^{*})>0$ with probability $1-o(1)$. For any $V$ such that $V^{T}[1, X^{*T}]^T=0$ and $ \| V \|=1$, we have \begin{align} \label{BSBM-N-equ102} V^{T}&S^{*}V=v^{2} S_{A}^{*} -2vT_{2}U^{T}Y +U^{T}D^{*}U - T_{1}U^{T}GU \nonumber\\ \geq & ( 1-v^{2} ) \bigg[\min_{i \in [n]}d_{i}^{*} - T_{1} \| G-\mathbb{E}[G] \| + T_{1} p \bigg] \nonumber \\ &+v^{2} \bigg[ Y^{T}X^{*} -2T_{2} \frac{\sqrt{n(1-v^{2})}}{|v|} -T_{1}\frac{p-q}{2} \bigg] , \end{align} where the last inequality holds in a manner similar to \eqref{BCBM-N-equ4}. Using Lemma~\ref{BCBM-New Lemma}, \begin{equation} \label{BSSBM-N-equ3} V^{T}S^{*}V \geq ( 1-v^{2} ) \bigg( \min_{i \in [n]} d_{i}^{*} - T_{1} c^{'}\sqrt{\log n} +T_{1}p \bigg). \end{equation} \begin{Lemma} \label{BSSBM-N-Lemma 2} Consider a sequence $f(n)$, and for each $n$, let $S_{1} \sim \text{Binom} (\frac{n}{2}-1,p )$ and $S_{2} \sim \text{Binom} (\frac{n}{2},q )$, where $p=a\frac{\log n}{n}$ and $q=b\frac{\log n}{n}$ for some $a\geq b >0$. Define $\omega \triangleq \lim_{n\rightarrow\infty} \frac{f(n)}{\log n}$. 
For sufficiently large $n$, when $\omega<\frac{a-b}{2}$, \begin{equation*} \mathbb{P} \big(S_{1}-S_{2} \leq f(n) \big) \leq n^{-\eta^{*}+o(1)} , \end{equation*} where $\eta^{*} = \frac{a+b}{2}-\gamma^{*} -\frac{\omega}{2} \log \big( \frac{a}{b} \big) +\frac{\omega}{2} \log \Big( \frac{\gamma^{*} + \omega}{\gamma^{*} - \omega} \Big)$ and $\gamma^{*} = \sqrt{\omega^{2}+ab}$. \end{Lemma} \begin{proof} It follows from the Chernoff bound. \end{proof} It follows from~\eqref{BSBM-N-equ101} that \begin{align*} \mathbb{P} ( d_{i}^{*}\leq \delta ) =& \mathbb{P} \Bigg( \sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*} \leq \frac{\delta-T_{2}}{T_{1}} \Bigg) (1-\alpha ) \\ &+ \mathbb{P} \Bigg( \sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*} \leq \frac{\delta+T_{2}}{T_{1}} \Bigg) \alpha , \end{align*} where $\sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*}$ is equal in distribution to $S_{1}-S_{2}$ in Lemma~\ref{BSSBM-N-Lemma 2}. Recall that $\beta \triangleq \lim_{n \rightarrow \infty} \frac{T_{2}}{\log n}$, where $\beta \geq 0$. First, we bound $\min_{i \in [n]}d_{i}^{*}$ under the condition $0 \leq \beta < \frac{T_{1}}{2}(a-b)$. It follows from Lemma~\ref{BSSBM-N-Lemma 2} that \begin{align*} &\mathbb{P} \Bigg( \sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*} \leq \frac{\delta-T_{2}}{T_{1}} \Bigg) \leq n^{-\eta(a,b,\beta)+o(1)} , \\ &\mathbb{P} \Bigg( \sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*} \leq \frac{\delta+T_{2}}{T_{1}} \Bigg) \leq n^{-\eta(a,b,\beta)+\beta+o(1)} . \end{align*} Then \begin{equation*} \mathbb{P} ( d_{i}^{*} \leq \delta ) \leq n^{-\eta(a,b,\beta) +o(1)} . \end{equation*} Using the union bound, \begin{equation*} \mathbb{P} \bigg( \min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n} \bigg) \geq 1-n^{1-\eta(a,b,\beta)+o(1)} . \end{equation*} When $\eta(a,b,\beta)>1 $, it follows that $\min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n}$ holds with probability $1-o(1)$. 
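As with Lemma~\ref{BCBM-N-Lemma 2}, the exponent in Lemma~\ref{BSSBM-N-Lemma 2} can be checked numerically against a direct Chernoff minimization, approximating $S_{1}$ and $S_{2}$ by Poisson variables with means $\frac{a}{2}\log n$ and $\frac{b}{2}\log n$ (the Poissonization, the example parameter values, and the search interval are assumptions of the check, not part of the proof). At $\omega=0$ the formula should also recover the exponent $\frac{1}{2}(\sqrt{a}-\sqrt{b})^{2}$ appearing in Lemma~\ref{BSSBM-P-Lemma 3}:

```python
import math

def eta_closed(a, b, w):
    # eta* = (a+b)/2 - gamma* - (w/2) log(a/b) + (w/2) log((gamma*+w)/(gamma*-w))
    g = math.sqrt(w * w + a * b)
    return (a + b) / 2 - g - (w / 2) * math.log(a / b) + (w / 2) * math.log((g + w) / (g - w))

def eta_chernoff(a, b, w):
    # minimize phi(t) = (a/2)(e^-t - 1) + (b/2)(e^t - 1) + w t over t >= 0 (phi is convex)
    def phi(t):
        return (a / 2) * (math.exp(-t) - 1) + (b / 2) * (math.exp(t) - 1) + w * t
    lo, hi = 0.0, 20.0  # assumed interval containing the minimizer
    for _ in range(300):  # ternary search
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if phi(m1) < phi(m2):
            hi = m2
        else:
            lo = m1
    return -phi((lo + hi) / 2)

a, b = 5.0, 1.0  # illustrative values
assert abs(eta_closed(a, b, 0.5) - eta_chernoff(a, b, 0.5)) < 1e-9   # w < (a-b)/2
assert abs(eta_closed(a, b, 0.0) - 0.5 * (math.sqrt(a) - math.sqrt(b)) ** 2) < 1e-12
```

The $\omega=0$ case is immediate from the formula, since $\gamma^{*}=\sqrt{ab}$ and both logarithmic terms vanish.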
Substituting into~\eqref{BSSBM-N-equ3}, if $\eta(a,b,\beta)>1 $, then with probability $1-o(1)$, \begin{equation*} V^{T}S^{*}V \geq ( 1-v^{2} ) \bigg( \frac{\log n}{\log \log n} - T_{1} c' \sqrt{\log n}+T_{1}p \bigg) > 0 , \end{equation*} which concludes the first part of Theorem~\ref{Theorem 9}. We now bound $\min_{i \in [n]}d_{i}^{*}$ under the condition $\beta>\frac{T_{1}}{2}(a-b)$. It follows from Lemma~\ref{BSSBM-N-Lemma 2} that \begin{align*} \mathbb{P} \bigg( \sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*} &\leq \frac{\delta-T_{2}}{T_{1}} \bigg ) \leq n^{-\eta(a,b,\beta)+o(1)} ,\\ \mathbb{P} \bigg( \sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*} &\leq \frac{\delta+T_{2}}{T_{1}} \bigg) \leq 1 . \end{align*} Then \begin{align*} \mathbb{P} ( d_{i}^{*} \leq \delta ) &\leq n^{-\eta(a,b,\beta)+o(1)} (1-\alpha ) +\alpha \\ & =n^{-\eta(a,b,\beta)+o(1)}+n^{-\beta+o(1)} , \end{align*} where $\alpha = n^{-\beta +o(1)}$. Using the union bound, \begin{equation*} \mathbb{P} \bigg( \min_{i \in [n]}d_{i}^{*} \geq \delta \bigg) \geq 1 -n^{1-\eta(a,b,\beta)+o(1)} -n^{1-\beta+o(1)}. \end{equation*} \begin{Lemma}\cite[Lemma 8]{Ref7} \label{BSSBM-N-Lemma 3} If $\beta > 1$, then $\eta(a,b,\beta) > 1$. \end{Lemma} When $\beta>1 $, using Lemma~\ref{BSSBM-N-Lemma 3}, it follows that $\min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n}$ holds with probability $1-o(1)$. Substituting into~\eqref{BSSBM-N-equ3}, if $\beta>1 $, then with probability $1-o(1)$, \begin{equation*} V^{T}S^{*}V \geq ( 1-v^{2} ) \bigg( \frac{\log n}{\log \log n} - T_{1} c' \sqrt{\log n}+T_{1}p \bigg) > 0 , \end{equation*} which concludes the second part of Theorem~\ref{Theorem 9}. \section{Proof of Theorem~\ref{Theorem 11}} \label{Proof-Theorem-11} We begin by deriving sufficient conditions for the solution of \ac{SDP}~\eqref{BSSBM-N-equ1} to match the true labels. 
\begin{Lemma} \label{BSSBM-G-Lemma 1} The sufficient conditions of Lemma~\ref{BSSBM-N-Lemma 1} apply to the general side information \ac{SDP}~\eqref{BSSBM-G-equ1} by setting $S_{A}^{*} = \tilde{Y}^{T}X^{*}$ and $S_{B}^{*}=-\tilde{Y}$. \end{Lemma} \begin{proof} The proof is similar to the proof of Lemma~\ref{BSSBM-N-Lemma 1}. \end{proof} It suffices to show that $S^{*}$ defined by $S_{A}^{*}$, $S_{B}^{*}$, and $S_{C}^{*}$ satisfies the other conditions in Lemma~\ref{BSSBM-G-Lemma 1} with probability $1-o(1)$. Let \begin{equation} \label{BSBM-G-dstar} d_{i}^{*}=T_{1} \sum_{j=1}^{n} G_{ij}x_{j}^{*}x_{i}^{*} + \tilde{y}_{i}x_{i}^{*} . \end{equation} Then $D^{*}X^{*} = T_{1}GX^{*}+\tilde{Y}$ and based on the definitions of $S_{A}^{*}$, $S_{B}^{*}$, and $S_{C}^{*}$ in Lemma~\ref{BSSBM-G-Lemma 1}, $S^{*}$ satisfies the condition $S^{*} [1, X^{*T}]^T =0$. It remains to show that~\eqref{BCBM-N-equ1 New} holds, i.e., $S^{*}\succeq 0$ and $\lambda_{2}(S^{*})>0$ with probability $1-o(1)$. For any $V$ such that $V^{T}[1, X^{*T}]^T=0$ and $ \| V \|=1$, we have \begin{align*} V^{T}S^{*}V=&v^{2} S_{A}^{*} -2vU^{T}\tilde{Y} +U^{T}D^{*}U - T_{1}U^{T}GU \\ \overset{(a)}{\geq}& ( 1-v^{2} ) \bigg[\min_{i \in [n]}d_{i}^{*} - T_{1} \| G-\mathbb{E}[G] \| + T_{1} p \bigg] \\ &+v^{2} \bigg[ \tilde{Y}^{T}X^{*} -2y_{max} \frac{\sqrt{n(1-v^{2})}}{|v|} -T_{1}\frac{p-q}{2} \bigg] \\ \overset{(b)}{\geq}& ( 1-v^{2} ) \bigg[\min_{i\in[n]}d_{i}^{*} - T_{1} \| G-\mathbb{E}[G] \| + T_{1} p \bigg] , \end{align*} where $(a)$ holds in a manner similar to \eqref{BCBM-N-equ4} and \eqref{BSBM-N-equ102}, and $(b)$ holds by applying Lemma~\ref{BCBM-G-New Lemma}. Then using Lemma~\ref{BSSBM-P-Lemma 2}, \begin{equation} \label{BSSBM-G-equ3} V^{T}S^{*}V \geq ( 1-v^{2} ) \bigg( \min_{i \in [n]}d_{i}^{*} - T_{1} c^{'}\sqrt{\log n} +T_{1}p \bigg). \end{equation} It can be shown that $\sum_{j=1}^{n} G_{ij}x_{i}^{*}x_{j}^{*}$ in~\eqref{BSBM-G-dstar} is equal in distribution to $S_{1}-S_{2}$ in Lemma~\ref{BSSBM-N-Lemma 2}. 
Then \begin{equation*} \mathbb{P} ( d_{i}^{*}\leq \delta ) = \sum_{m_{1}=1}^{M_{1}} \sum_{m_{2}=1}^{M_{2}} ... \sum_{m_{K}=1}^{M_{K}} P (m_{1}, ..., m_{K} ) , \end{equation*} where \begin{align*} P& (m_{1}, ..., m_{K} ) \triangleq \mathbb{P} ( x_{i}^{*} =1 )e^{f_{2}(n)} \mathbb{P} \bigg( S_{1}-S_{2} \leq \frac{\delta-f_{1}(n)}{T_{1}} \bigg) \\ &+\mathbb{P} ( x_{i}^{*} =-1 ) e^{f_{3}(n)} \mathbb{P} \bigg( S_{1}-S_{2} \leq \frac{\delta+f_{1}(n)}{T_{1}} \bigg). \end{align*} First, we bound $\min_{i \in [n]}d_{i}^{*}$ under the condition $|\beta_{1}| \leq \frac{T_{1}}{2} (a-b )$. It follows from Lemma~\ref{BSSBM-N-Lemma 2} that \begin{align*} &\mathbb{P} \bigg( S_{1}-S_{2} \leq \frac{\delta-f_{1}(n)}{T_{1}} \bigg) \leq n^{-\eta(a,b,\beta_{1})+o(1)} , \\ & \mathbb{P} \bigg( S_{1}-S_{2} \leq \frac{\delta+f_{1}(n)}{T_{1}} \bigg) \leq n^{-\eta(a,b,\beta_{1})+\beta_{1}+o(1)} . \end{align*} Notice that \begin{align*} \beta \triangleq \lim_{n\rightarrow\infty} -\frac{\max (f_2(n),f_3(n))}{\log n} . \end{align*} When $\beta_{1} \geq 0$, $\lim_{n\rightarrow \infty} \frac{f_{2}(n)}{\log n}= -\beta$ and $\lim_{n\rightarrow \infty} \frac{f_{3}(n)}{\log n} = -\beta_{1} -\beta$. Then \begin{equation*} \mathbb{P} ( d_{i}^{*} \leq \delta ) \leq n^{-\eta(a,b,\beta_{1})-\beta+o(1)}. \end{equation*} When $\beta_{1} < 0$, $\lim_{n\rightarrow \infty} \frac{f_{3}(n)}{\log n} = -\beta$ and $\lim_{n\rightarrow \infty} \frac{f_{2}(n)}{\log n} =\beta_{1} -\beta $. Then \begin{equation*} \mathbb{P} ( d_{i}^{*} \leq \delta ) \leq n^{-\eta(a,b,\beta_{1})+\beta_{1}-\beta+o(1)}=n^{-\eta(a,b, | \beta_{1} |)-\beta+o(1)}. \end{equation*} Using the union bound, \begin{equation*} \mathbb{P} \bigg( \min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n} \bigg) \geq 1-n^{1-\eta(a,b, | \beta_{1} |) -\beta+o(1)} . \end{equation*} When $\eta(a,b, | \beta_{1} |) +\beta >1 $, it follows that $\min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n}$ holds with probability $1-o(1)$. 
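For completeness, the first bound $\mathbb{P} ( d_{i}^{*} \leq \delta ) \leq n^{-\eta(a,b,\beta_{1})-\beta+o(1)}$ above combines the two tail bounds as follows (a sketch; the sum over $(m_{1},\ldots,m_{K})$ and the factors $\mathbb{P}(x_{i}^{*}=\pm 1) \leq 1$ are absorbed into the $o(1)$ exponents):
\begin{align*}
\mathbb{P} ( d_{i}^{*} \leq \delta ) &\leq e^{f_{2}(n)} n^{-\eta(a,b,\beta_{1})+o(1)} + e^{f_{3}(n)} n^{-\eta(a,b,\beta_{1})+\beta_{1}+o(1)} \\
&= n^{-\beta-\eta(a,b,\beta_{1})+o(1)} + n^{-\beta_{1}-\beta-\eta(a,b,\beta_{1})+\beta_{1}+o(1)} \\
&= n^{-\eta(a,b,\beta_{1})-\beta+o(1)} ,
\end{align*}
using $\lim_{n\rightarrow \infty} \frac{f_{2}(n)}{\log n}= -\beta$ and $\lim_{n\rightarrow \infty} \frac{f_{3}(n)}{\log n} = -\beta_{1} -\beta$ when $\beta_{1} \geq 0$; the case $\beta_{1} < 0$ is symmetric.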
Substituting into~\eqref{BSSBM-G-equ3}, if $\eta(a,b, | \beta_{1} |) +\beta >1 $, then with probability $1-o(1)$, \begin{equation*} V^{T}S^{*}V \geq ( 1-v^{2} ) \bigg( \frac{\log n}{\log \log n} - T_{1} c' \sqrt{\log n}+T_{1}p \bigg) > 0 , \end{equation*} which concludes the first part of Theorem~\ref{Theorem 11}. We now bound $\min_{i \in [n]}d_{i}^{*}$ under the condition $ | \beta_{1} | \geq \frac{T_{1}}{2} (a-b )$. When $\beta_{1} > 0$, $\lim_{n\rightarrow \infty} \frac{f_{2}(n)}{\log n} = -\beta$ and $\lim_{n\rightarrow \infty} \frac{f_{3}(n)}{\log n} = -\beta_{1} -\beta$. Then \begin{equation*} \mathbb{P} ( d_{i}^{*} \leq \delta ) \leq n^{-\beta+o(1)} + n^{-\beta-\beta_{1}+o(1)}. \end{equation*} When $\beta_{1} < 0$, $\lim_{n\rightarrow \infty} \frac{f_{3}(n)}{\log n} = -\beta$ and $\lim_{n\rightarrow \infty} \frac{f_{2}(n)}{\log n} =\beta_{1} -\beta$. Then \begin{equation*} \mathbb{P} ( d_{i}^{*} \leq \delta ) \leq n^{-\beta+\beta_{1}+o(1)} +n^{-\beta+o(1)} . \end{equation*} Using the union bound, \begin{equation*} \mathbb{P} \bigg( \min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n} \bigg) \geq 1-n^{1- | \beta_{1} | -\beta+o(1)} . \end{equation*} When $ | \beta_{1} | +\beta >1 $, it follows that $\min_{i \in [n]}d_{i}^{*} \geq \frac{\log n}{\log \log n}$ holds with probability $1-o(1)$. Substituting into~\eqref{BSSBM-G-equ3}, if $ | \beta_{1} | +\beta >1 $, then with probability $1-o(1)$, \begin{equation*} V^{T}S^{*}V \geq ( 1-v^{2} ) \bigg( \frac{\log n}{\log \log n} - T_{1} c' \sqrt{\log n}+T_{1}p \bigg) > 0 , \end{equation*} which concludes the second part of Theorem~\ref{Theorem 11}. \bibliographystyle{IEEEtran}
\section{Introduction} Cascade effects refer to situations where the expected behavior governing a certain system appears to be \emph{enhanced} as this component is embedded into a greater system. The effects of change in a subsystem may pass through \emph{interconnections} and enforce an indirect change on the state of any remote subsystem. As such effects are pervasive---appearing in various scenarios of ecological systems, communication infrastructures, financial networks, power grids and societal networks---there is an interest (and rather a need) to understand them. Models are continually proposed to capture instances of cascading behavior, yet the \emph{universal} properties of this phenomenon remain untouched. Our goal is to capture some essence of cascade effects, and develop an axiomatic theory around it. A reflection on such a phenomenon reveals two informal aspects of it. The first aspect uncovers a notion of \emph{consequence} relation that seemingly drives the phenomenon. Capturing \emph{chains of events} seems to be inescapably necessary. The second aspect projects cascade effects onto a theory of subsystems, combinations and interaction. We should not expect any cascading behavior to occur in \emph{isolation}. The line of research will be pursued within the context of systemic failure, and set along a guiding informal question. When handed a system of interlinked subsystems, when would a \emph{small} perturbation in some subsystems induce the system to failure? The phenomenon of cascade effects (envisioned in this paper) restricts the possible systems to those satisfying posed axioms. The analysis of cascade effects shall be perceived through an analysis on these systems. We introduce a new class of (dynamical) systems that inherently capture cascading effects (viewed as \emph{consequential} effects) and are naturally amenable to combinations. 
We develop a general theory around those systems, and guide the endeavor towards an understanding of cascading failure. The theory evolves as an interplay of lattices and fixed points, and its results may be instantiated to commonly studied \emph{models} of cascade effects. \subsection*{Our Systems} The systems, in this introduction, will be motivated through an elementary example. This example is labeled M.0 and further referred to throughout the paper. \begin{M0*} Let $G(V,A)$ be a digraph, and define $N(S) \subseteq V$ to be the set of nodes $j$ with $(i,j) \in A$ and $i \in S$. A vertex is of one of two colors, either black or white. The vertices are initially colored, and $X_0$ denotes the set of black colored nodes. The system evolves through discrete time to yield $X_1,X_2,\cdots$ sets of black colored nodes. Node $j$ is colored black at step $m+1$ if any of its neighbors $i$ with $j \in N(i)$ is black at step $m$. Once a node is black it remains black forever. \end{M0*} Our systems will consist of a collection of states along with internal dynamics. The collection of states is a finite set $P$. The dynamics dictate the evolution of the system through the states and are \emph{governed} by a class of maps $P\rightarrow P$. The state space in M.0 is the set $2^V$ where each $S \subseteq V$ identifies a subset of \emph{black} colored nodes; the dynamics are dictated by $g: X \mapsto X \cup N(X)$ as $X_{m+1} = g X_m$. We intuitively consider some states to be \emph{worse} or \emph{less desirable} than others. The color \emph{black} may be undesirable in M.0, representing a \emph{failed} state of a node. State $S$ is then considered to be worse than state $T$ if it includes $T$. We formalize this notion by equipping $P$ with a partial order $\leq$. The order is only partial as not every pair of states may be comparable. It is natural to read $a \leq b$ in this paper as $b$ is a worse (or less desirable) state than $a$. 
The state space $2^V$ in M.0 is ordered by set inclusion $\subseteq$. We expect two properties from the dynamics driving the systems. We require the dynamics to be \emph{progressive}. The system may only evolve to a state considered less desirable than its initial state. We also require \emph{undesirability} to be preserved during an evolution. The less desirable the initial state of a system is, the less desirable the final state (that the system evolves to) will be. We force each map $f: P \rightarrow P$ governing the dynamics to satisfy two axioms: \begin{description} \item[A.1] If $a \in P$, then $a \leq fa$. \item[A.2] If $a, b \in P$ and $a \leq b$, then $fa \leq fb$. \end{description} The map $X \mapsto X \cup N(X)$ in M.0 satisfies both A.1 and A.2 as $S \subseteq S \cup N(S)$, and $S\cup N(S) \subseteq S'\cup N(S')$ if $S \subseteq S'$. Our interest lies in the limiting outcome of the dynamics, and the understanding we wish to develop may be solely based on the \emph{asymptotic} behavior of the system. In M.0, we are interested in the set $X_m$ for $m$ large enough as a function of $X_0$. As $V$ is finite, it follows that $X_m = X_{|V|}$ for $m \geq |V|$. We are thus interested in the map $g^{|V|} : X_0 \mapsto X_{|V|}$. More generally, as iterative composition of a map satisfying A.1 and A.2 eventually yields idempotent maps, we equip the self-maps $f$ on $P$ with a third axiom: \begin{description} \item[A.3] If $a \in P$, then $ffa = fa$. \end{description} Our class of interest is the (self-)maps (on $P$) satisfying the axioms A.1, A.2 and A.3. Each system will be identified with one such map. The system generated from an instance of M.0 corresponds to the map $X_0 \mapsto X_{|V|}$. The axioms A.1, A.2 and A.3 naturally permeate a number of areas of mathematics and logic. 
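To make the map $X_0 \mapsto X_{|V|}$ concrete, here is a minimal computational sketch of M.0 (the dictionary encoding of the digraph and the function names are ours, not part of the model's definition):

```python
# Sketch of model M.0 (encoding illustrative). A digraph is a dict sending
# each vertex to its set of out-neighbors, so N(S) = {j : (i,j) in A, i in S}.

def N(graph, S):
    """One-step neighbor set N(S)."""
    return {j for i in S for j in graph[i]}

def limit(graph, X0):
    """The system map X_0 -> X_{|V|}, iterating X_{m+1} = X_m ∪ N(X_m)."""
    X = set(X0)
    for _ in range(len(graph)):
        X |= N(graph, X)
    return X
```

On a path $a \to b \to c$, for instance, `limit` sends $\{a\}$ to $\{a,b,c\}$, and A.1, A.2 and A.3 can be checked directly on such small instances.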
Within metamathematics and (universal) logic, Tarski introduced these three axioms (along with supplementary axioms) and launched his theory of consequence operator (see \cite{TAR1936} and \cite{TAR1956}). He aimed to provide a general characterization of the notion of deduction. As such, if $S$ represents a set of statements taken to be true (i.e. premises), and $Cn(S)$ denotes the set of statements that can be \emph{deduced} to be true from $S$, then $Cn$ (as an operator) obeys A.1, A.2 and A.3. Many familiar maps also adhere to the axioms. As examples, we may consider the function that maps % (i) a subset of a topological space to its topological closure, (ii) a subset of a vector space to its linear span, (iii) a subset of an algebra (e.g. group) to the subalgebra (e.g. subgroup) it generates, (iv) a subset of a Euclidean n-space to its convex hull. Such functions may be referred to as \emph{closure operators} (see e.g. \cite{BIR1936}, \cite{BIR1967}, \cite{ORE1943} and \cite{WAR1942}), and are typically objects of study in \emph{universal algebra}. \subsection*{Goal and Contribution of the Paper} This paper has three goals. The first is to introduce and motivate the class of systems. The second is to present some properties of the systems, and develop preliminary tools for the analysis. The third is to construct a setup for cascading failure, and illustrate initial insight into the setup. The paper will not deliver an exhaustive exposition. It will introduce the concepts and augment them with enough results to allow further development. We illustrate the contribution through M.0. We define $f$ and $g$ to be the systems derived from two instances $(V,A)$ and $(V,A')$ of M.0. We establish that our systems are uniquely identified with their set of fixed points. We can reconstruct $f$ knowing only the sets $S$ containing $N(S)$ (i.e. the fixed points of $f$) with no further information on $(V,A)$. 
We further provide a complete characterization of the systems through the fixed points. The characterization yields a remarkable conceptual and analytical simplification in the study. We equip the systems with a lattice structure, uncover operators ($+$ and $\cdot$) and express \emph{complex} systems through formulas built from \emph{simpler} systems. The $+$ operator \emph{combines} the effect of systems, possibly derived from different models. The system $f + g$, as an example, is derived from $(V,A \cup A')$. The $\cdot$ operator \emph{projects} systems onto each other allowing, for instance, the recovery of \emph{local} evolution rules. We fundamentally aim to extract properties of $f + g$ and $f \cdot g$ through properties of $f$ and $g$ separately. We show that $+$ and $\cdot$ lend themselves to well-behaved operations when systems are represented through their fixed points. We realize the systems as interlinked components and formalize a notion of \emph{cascade effects}. Nodes in $V$ are identified with maps $e_1,\cdots, e_{|V|}$. The system $f\cdot e_i$ then defines the evolution of the color of node $i$ as a function of the system state, and is identified with the set of nodes that \emph{reach} $i$ in $(V,A)$. We draw a connection between shocks and systems, and enhance the theory with a notion of failure. We show that minimal shocks (that fail a system $h$) exhibit a unique property that uncovers \emph{complement} subsystems in $h$, termed \emph{weaknesses}. A system is shown to be \emph{injectively} decomposed into its \emph{weaknesses}, and any weakness in $h + h'$ cannot result but from the combination of weaknesses in $h$ and $h'$. We introduce a notion of $\mu$-rank of a system---akin to the (analytic) notion of a norm as used to capture the energy of a system---and show that such a notion is unique should it adhere to natural principles. The $\mu$-rank is tied to the number of connected components in $(V,A)$ when $A$ is symmetric.
We finally set out to understand the minimal amount of \emph{effort} required to fail a system, termed \emph{resilience}. Weaknesses reveal a dual (equivalent) quantity, termed \emph{fragility}, which further puts resilience and $\mu$-rank on comparable grounds. The fragility is tied to the size of the largest connected component in $(V,A)$ when $A$ is symmetric. It is thus possible to formally define \emph{high-ranked} systems that are not necessarily \emph{fragile}. The combination of systems sets a limit on the amount of fragility the new system inherits. Combining two subsystems cannot form a fragile system, unless one of the subsystems is initially fragile. \subsection*{Outline of the Paper} Section 2 presents mathematical preliminaries. We characterize the systems in Section 3, and equip them with the operators in Section 4. We discuss component realization in Section 5, and derive properties of the systems lattice in Section 6. We discuss cascade effects in Section 7, and provide connections to formal methods in Section 8. We consider cascading failure and resilience in Section 9, and conclude with some remarks in Section 10. \section{Mathematical Preliminaries} A partially ordered set or poset $(P,\leq)$ is a set $P$ equipped with a (binary) relation $\leq$ that is reflexive, antisymmetric and transitive. The element $b$ is said to cover $a$, denoted by $a \prec b$, if $a \leq b$, $a \neq b$ and there is no $c$ distinct from $a$ and $b$ such that $a \leq c$ and $c \leq b$. A poset $P$ is graded if, and only if, it admits a rank function $\rho$ such that $\rho(a) = 0$ if $a$ is minimal and $\rho(a') = \rho(a) + 1$ if $a \prec a'$. The poset $(P,\leq)$ is said to be a lattice if every pair of elements admits a greatest lower bound (meet) and a least upper bound (join) in $P$. We define the operators $\wedge$ and $\vee$ that send each pair to its meet and join, respectively. The structures $(P,\leq)$ and $(P,\wedge,\vee)$ are then isomorphic.
A lattice is distributive if, and only if, $(a \vee b) \wedge c = (a \wedge c) \vee (b \wedge c)$ for all $a$, $b$ and $c$. The pair $(a,b)$ is said to be a modular pair if $c \vee (a \wedge b) = (c \vee a) \wedge b$ whenever $c \leq b$. A lattice is modular if all pairs are modular pairs. Finally, a \emph{finite} lattice is (upper) semimodular if, and only if, $a \vee b$ covers both $a$ and $b$ whenever $a$ and $b$ cover $a \wedge b$. \subsection*{Notation} We denote $f(g(a))$ by $fga$, the composite $ff$ by $f^2$, and the inverse map of $f$ by $f^{-1}$. We also denote $f(i)$ by $f_i$ when convenient. \section{The Class of Systems}\label{sec:class} The state space is taken to be a \emph{finite} lattice $(P,\leq)$. We consider in this paper only posets $(P,\leq)$ that are lattices, as opposed to arbitrary posets. It is natural to read $a \leq b$ in this paper as $b$ is a worse (or less desirable) state than $a$. The meet (glb) and join (lub) of $a$ and $b$ will be denoted by $a\wedge b$ and $a \vee b$ respectively. A minimum and maximum element exist in $P$ (by finiteness) and will be denoted by $\check{p}$ and $\hat{p}$ respectively. A system is taken to be a map $f: P \rightarrow P$ satisfying: \begin{description} \item[A.1] If $a \in P$, then $a \leq fa$. \item[A.2] If $a, b \in P$ and $a \leq b$, then $fa \leq fb$. \item[A.3] If $a \in P$, then $ffa = fa$. \end{description} The set of such maps is denoted by $\L_P$ or simply by $\L$ when $P$ is irrelevant to the context. This set is necessarily finite as $P$ is finite. \subsubsection*{Note on Finiteness} Finiteness is not essential to the development in the paper; completeness can be used to replace finiteness when needed. We restrict the exposition in this paper to finite cases to avoid unnecessary detail. As every finite lattice is complete, we will make no mention of completeness throughout. \subsection{Models and Examples} The axioms A.1 and A.2 hold for typical ``models'' adopted for cascade effects.
We present three models (in addition to M.0 provided in Section 1) supported on the Boolean lattice, two of which---M.1 and M.3---are standard examples (see \cite{GRA1978}, \cite{KLE2007} and \cite{MOR2000}). It can be helpful to identify the set $2^S$ with the set of all $black$ and $white$ colorings on the objects of $S$. A subset of $S$ then denotes the objects colored $black$. The model M.1 generalizes M.0 by assigning \emph{thresholds} to nodes in the graph. Node $i$ is colored $black$ when the number of neighbors colored $black$ reaches its threshold. The model M.2 is \emph{noncomparable} to M.0 and M.1, and the model M.3 generalizes all of M.0, M.1 and M.2. \begin{M1*} Given a digraph over a set $S$ or equivalently a map $N : S \rightarrow 2^S$, a map $k:S \rightarrow \mathbb{N}$ and a subset $X_0$ of $S$, let $X_1,X_2,\cdots$ be subsets of $S$ recursively defined such that $i \in X_{m+1}$ if, and only if, either $|N_i \cap X_m| \geq k_i$ or $i\in X_m$. \end{M1*} \begin{M2*} Given a collection $\mathcal{C} \subseteq 2^S$ for some set $S$, a map $k:\mathcal{C}\rightarrow \mathbb{N}$ and a subset $X_0$ of $S$, let $X_1,X_2,\cdots$ be subsets of $S$ recursively defined such that $i \in X_{m+1}$ if, and only if, either there is a $C\in \mathcal{C}$ containing $i$ such that $|C \cap X_m| \geq k_C$ or $i\in X_m$. \end{M2*} \begin{M3*} Given a set $S$, a collection of monotone maps $\phi_i$ (one for each $i \in S$) from $2^S$ into $\{0,1\}$ (with $0<1$) and a subset $X_0$ of $S$, let $X_1,X_2,\cdots$ be subsets of $S$ recursively defined such that $i \in X_{m+1}$ if, and only if, either $\phi_i (X_m) = 1$ or $i\in X_m$. \end{M3*} We necessarily have $X_{|S|} = X_{|S|+1}$ in the three cases above, and the map $X_0 \mapsto X_{|S|}$ is then in $\mathcal{L}_{2^S}$. The dynamics depicted above may be captured in a more general form.
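A sketch of M.1, the linear threshold model, along the same computational lines (the encoding is ours; M.0 is recovered by taking $N_i$ to be the in-neighbors of $i$ and $k_i = 1$):

```python
# Sketch of model M.1 (encoding illustrative): N[i] is the neighbor set of
# node i and k[i] its threshold; i joins X once |N_i ∩ X| >= k_i.

def m1_limit(N, k, X0):
    """Iterate the threshold rule; the limit X_{|S|} is reached in |S| rounds."""
    X = set(X0)
    for _ in range(len(N)):
        X |= {i for i in N if len(N[i] & X) >= k[i]}
    return X
```

The undirected triangle with thresholds $2,1,2$, for instance, reappears as the running example of Subsection~\ref{sec:running}: starting from $\{A\}$ the whole triangle turns black, while $\{B\}$ is already a fixed point.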
\begin{M4*} Given a finite lattice $L$, an order-preserving map $h: L \rightarrow L$, and $x_0 \in L$, let $x_1,x_2,\cdots \in L$ be recursively defined such that $x_{m+1} = x_m \vee h(x_m)$. \end{M4*} We have $x_{|L|} = x_{|L|+1}$ and the map $x_0 \mapsto x_{|L|}$ is in $\mathcal{L}_L$. The axioms allow greater variability if the state space is modified or augmented accordingly. Nevertheless, this paper is only concerned with systems of the above form. \subsubsection*{Note on Realization} Modifications of instances of M.i (e.g. altering values of $k$ in M.1) may not alter the system function. As the interest lies in understanding \emph{universal} properties of final evolution states, the analysis performed should be invariant under such modifications. However, analyzing the systems directly through their \emph{form} (as specified through M.0, M.1, M.2 and M.3) is bound to rely heavily on the representation used. Introducing the axioms and formalism enables an understanding of systems that is independent of their representation. It is then a separate question as to whether or not a system may be realized through some form, or whether or not restrictions on form translate into interesting properties on systems. Not all systems supported on the Boolean lattice can be realized through the form M.0, M.1 or M.2. However, every system in $\mathcal{L}_{2^S}$ may be realized through the form M.3. Indeed, if $f \in \mathcal{L}_{2^S}$, then for every $i \in S$ define $\phi_i : 2^S \rightarrow \{0,1\}$ where $\phi_i(a) = 1$ if, and only if, $i \in f(a)$. The map $\phi_i$ is monotone as $f$ satisfies A.2. Realization is further briefly discussed in Section~\ref{sec:comp}. \subsection{Context, Interpretation and More Examples.} A more \emph{realistic} interpretation of the models M.i comes from a more realistic interpretation of the state space. 
This work began as an endeavor to understand the mathematical structure underlying models of diffusion of behavior commonly studied in the social sciences. The setup there consists of a population of interacting agents. In a societal setting, the agents may refer to individuals. The interaction of the agents affects their behaviors or opinions. The goal is to understand the spread of a certain behavior among agents given certain interaction patterns. Threshold models of behaviors (captured by M.0, M.1, M.2 and M.3) have appeared in the work of Granovetter \cite{GRA1978}, and more recently in \cite{MOR2000}. Such models are key in the literature, and have later been considered by computer scientists, see, e.g., \cite{KLE2007} for an overview. The model described by M.1 is known as the linear threshold model. An individual adopts a behavior, and does not change it thereafter, if at least a certain number of its neighbors adopt that behavior. Several variations can also be defined, see e.g.\ M.2 and M.3, and again \cite{KLE2007} for an overview. The cascading intuition in all the variations, however, remains unchanged. These models can generally be motivated through a game-theoretic setup. We will not be discussing such setups in this paper. The \emph{no-recovery} aspect of the models considered may be further relaxed by introducing appropriate time stamps. One such connection is described in \cite{KLE2007}. We are, however, interested in the instances where no-recovery occurs. The models may also be given an interpretation in epidemiology. Every agent may either be healthy or infected. Interaction with an infected individual causes infections. This is in direct resemblance to M.0. Stochastics can also be added, either for a realistic approach or often for tractability. There is also a vast literature on processes over graphs, see e.g., \cite{DUR1997} and \cite{NEW2010}.
Our aim is to capture the consequential effects that are induced by the interaction of several entities. We thus leave out any stochastics for the moment; they may be added later with technical work. On a different end, inspired by cascading failure in electrical grids, consider the following simple resistive circuit. The intent is to guide the reader in a more realistic direction. \begin{center} \begin{circuitikz}[scale=0.7, transform shape]\draw (0,1) to [short, o-o, l=$L_1$] (3,1) to [short, o-o, l=$L_2$] (6,1) (6,1) -- (6,0.5) to[american voltage source] (6,-1) node[ground]{}; \draw (0,1) -- (0,0.5) to[R] (0,-1) node[ground]{}; \draw (3,1) -- (3,0.5) to[R] (3,-1) node[ground]{}; \end{circuitikz} \end{center} If line $L_2$ is disconnected from the voltage source, then line $L_1$ will also be disconnected from the source. Indeed, the current passing through $L_1$ has to pass through $L_2$. The converse is, of course, not true. This interdependence between $L_1$ and $L_2$ is easily captured by a system in $\mathcal{L}_{2^{\{L_1,L_2\}}}$. More general dependencies (notably failures caused by a redistribution of currents) can be captured, and concurrency can be taken care of by going to power sets. Indeed, M.0 also captures general reachability problems, where a node depicts an element of the state space. Specifically, let $S$ be a set of states of some system, and consider a reflexive and transitive relation $\rightarrow$ such that $a \rightarrow b$ means that state $b$ is reachable from state $a$. The map $2^S \rightarrow 2^S$ where $A \mapsto \{b : a \rightarrow b \text{ for some } a \in A\}$ satisfies A.1, A.2, and A.3 when $2^S$ is ordered by inclusion. This work abstracts out the essential properties that give rise to these situations. The model M.3 depicts the most general form over the Boolean lattice. In M.3, the set $S$ can be interpreted to contain $n$ events, and an element of $2^S$ then depicts which events have occurred.
A system is then interpreted as a collection of (monotone) implications: if such and such event occurs, then such event occurs. The more general model M.4 will be evoked in Section \ref{sec:formal}, while treating connections to formal methods and semantics of programming languages. \subsubsection*{On Closure Operators} As mentioned in the introduction, the maps satisfying A.1, A.2 and A.3 are often known as closure operators. On one end, they appeared in the work of Tarski (see e.g., \cite{TAR1936} and \cite{TAR1956}). On another end, they appeared in the work of Birkhoff, Ore and Ward (see e.g., \cite{BIR1936}, \cite{ORE1943} and \cite{WAR1942}, respectively). The first origin reflects the consequential relation in the effects considered. The second origin reflects the theory of interaction of multiple systems. Closure operators appear as early as \cite{MOO1911}. They are intimately related to Moore families or closure systems (i.e., collections of subsets of $S$ containing $S$ and closed under intersection) and also to Galois connections (see e.g., \cite{BIR1967} Ch. V and \cite{EVE1944}). Every closure operator corresponds to a Moore family (see e.g., Subsection \ref{subsec:fp}). This connection will be extensively used throughout the paper. Most of the properties derived in Sections \ref{sec:class} and \ref{sec:lattice} can be seen to appear in the literature (see e.g. \cite{BIR1967} Ch.\ V and \cite{CAS2003} for a recent survey). They are very elementary, and will be easily and naturally rederived whenever needed. Furthermore, every Galois connection induces one closure operator, and every closure operator arises from at least one Galois connection. Galois connections will be briefly discussed in Section \ref{sec:eval}. They will not, however, play a major explicit role in this paper.
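On the Boolean lattice, the correspondence between closure operators and Moore families can be sketched concretely (the encoding is ours): given a family closed under intersection and containing the top set, the induced operator sends each subset to the intersection of the family members above it.

```python
# Sketch: the closure operator induced by a Moore family on 2^S
# (the family must contain S itself and be closed under intersection,
# so the list `above` below is never empty).
from functools import reduce

def closure(moore_family, A):
    above = [set(B) for B in moore_family if set(A) <= set(B)]
    return reduce(lambda x, y: x & y, above)
```

This is essentially the construction $f : a \mapsto \inf\{b \in S : a \leq b\}$ used in the next subsection to rebuild a system from its fixed points.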
\subsection{The Fixed Points of the Systems}\label{subsec:fp} As each map in $\L$ sends each state to a respective fixed point, a grounded understanding of a system advocates an understanding of its fixed points. We develop such an understanding in this subsection, and characterize the systems through their fixed points. Let $\Phi$ be the map $f \mapsto \{a : fa=a\}$ that sends a system to its set of fixed points. \begin{prop} If $f\neq g$ then $\Phi f \neq \Phi g$. \end{prop} \begin{proof} If $\Phi f = \Phi g$, then $ga \leq gfa = fa$ and $fa \leq fga = ga$ for each $a$. Therefore $f = g$. \end{proof} It is obvious that each state is mapped to a fixed point; it is less obvious that, knowing only the fixed points, the system can be reconstructed uniquely. It seems plausible then to directly define systems via their fixed point, yet doing so inherently supposes an understanding of the image set of $\Phi$. \begin{prop}\label{pro:maxp} If $f \in \L_P$, then $\hat{p} \in \Phi f$. \end{prop} \begin{proof} Trivially $\hat{p} \leq f\hat{p} \leq \hat{p}$. \end{proof} Furthermore, \begin{prop}\label{pro:closedmeet} If $a,b\in \Phi f$, then $a \wedge b \in \Phi f$. \end{prop} \begin{proof} It follows from A.2 that $f(a\wedge b) \leq fa$ and $f(a\wedge b) \leq fb$. If $a,b \in \Phi f$, then $f(a\wedge b) \leq fa \wedge fb = a\wedge b$. The result follows as $a\wedge b \leq f(a \wedge b)$. \end{proof} In fact, the properties in Propositions \ref{pro:maxp} and \ref{pro:closedmeet} fully characterize the image set of $\Phi$. \begin{prop} If $S \subseteq P$ is closed under $\wedge$ and contains $\hat{p}$, then $\Phi f=S$ for some $f \in \L_P$. \end{prop} \begin{proof} Construct $f : a \mapsto \inf\{b \in S : a \leq b\}$. Such a function is well defined and satisfies A.1, A.2 and A.3. \end{proof} It follows from Propositions \ref{pro:maxp} and \ref{pro:closedmeet} that $\Phi f$ forms a lattice under the induced order $\leq$. 
This conclusion coincides with that of Tarski's fixed point theorem (see \cite{TAR1955}). However, one additional structure is gained over arbitrary order-preserving maps. Indeed, the meet operation of the lattice $(\Phi f,\leq)$ coincides with that of the lattice $(P,\leq)$. \begin{exa} Let $f : 2^V \rightarrow 2^V$ be the system derived from an instance $(V,A)$ of M.0. The fixed points of $f$ are the sets $S \subseteq V$ such that $S \supseteq N(S)$. If $S$ and $T$ are fixed points of $f$, then $S \cap T$ is a fixed point of $f$. Indeed, the set $S\cap T$ contains $N(S \cap T)$. The map $f$ sends each set $T$ to the intersection of all sets $S \supseteq T \cup N(S)$. Although every collection $C$ of sets in $2^V$ closed under $\cap$ and containing $V$ can form a system, it will not always be possible to find a digraph where $C$ coincides with the sets $S \supseteq N(S)$. The model M.0 is not \emph{complex} enough to capture all possible systems. \end{exa} The space $\L$ is thus far only a set, with no further mathematical structure. The theory becomes lively when elements of $\L$ become \emph{related}. \subsection{Overview Through an Example}\label{sec:running} We illustrate some main ideas of the paper through an elementary example. The example will run throughout the paper, revisited in each section to illustrate its corresponding notions and results. The example we consider is the following (undirected) instance of M.1: \begin{center} \begin{tikzpicture} [scale=.4,auto=center,every node/.style={circle,fill=black!10!white,scale=0.8}] \node (n1) at (1,10) {A,2}; \node (n2) at (4,8) {B,1}; \node (n3) at (1,6) {C,2}; \foreach \from/\to in {n1/n2,n2/n3,n1/n3} \draw (\from) -- (\to); \end{tikzpicture} \end{center} The nodes are labeled $A$, $B$ and $C$. Each node $I$ is tagged with an integer $k_I$ that denotes a \emph{threshold}. Each node can then be in either one of two colors: \emph{black} or \emph{white}. 
Node $I$ is colored black (and stays black forever) when at least $k_I$ neighbors are \emph{black}. In our example, node $A$ (resp. $C$) is colored black when both $B$ and $C$ (resp. $A$ and $B$) are black. Node $B$ is colored black when either $A$ or $C$ is black. A node remains white otherwise. The set underlying the state space is the set of possible colorings of nodes. Each coloring may be identified with a subset of $\{A,B,C\}$ containing the black colored nodes. The state space will then be identified with $2^{\bf{3}}$, the set of all subsets of $\{A,B,C\}$. The set $2^{\bf{3}}$ admits a natural ordering by inclusion ($\subseteq$) that turns it into a lattice. It may then be represented through a \emph{Hasse diagram} as: \begin{center} \begin{tikzpicture}[scale=0.8] \node (max) at (0,3) {$ABC$}; \node (a) at (1.5,1.5) {$aBC$}; \node (b) at (0,1.5) {$AbC$}; \node (c) at (-1.5,1.5) {$ABc$}; \node (d) at (1.5,0) {$abC$}; \node (e) at (0,0) {$aBc$}; \node (f) at (-1.5,0) {$Abc$}; \node (min) at (0,-1.5) {$abc$}; \draw (min) -- (d) -- (a) -- (max) -- (b) -- (f) (e) -- (min) -- (f) -- (c) -- (max) (d) -- (b); \draw[preaction={draw=white, -,line width=6pt}] (a) -- (e) -- (c); \end{tikzpicture} \end{center} {\bf Notation:} We denote subsets of $\{A,B,C\}$ as strings of letters. Elements in the set are written in uppercase, while elements not in the set are written in lowercase. Thus $aBC$, $Abc$ and $abc$ denote $\{B,C\}$, $\{A\}$ and $\{\}$ respectively. The string $AC$ (with $b/B$ absent) denotes both $AbC$ and $ABC$. The system derived from our example is the map $f : 2^{\bf 3} \rightarrow 2^{\bf 3}$ satisfying A.1, A.2 and A.3 such that $A \mapsto ABC$, $C \mapsto ABC$ and all remaining states are left unchanged. The fixed points of $f$ yield the following representation.
\begin{center} \begin{tikzpicture}[scale=0.6] \node (max) at (0,3) {$\times$}; \node (a) at (1.5,1.5) {$\circ$}; \node (b) at (0,1.5) {$\circ$}; \node (c) at (-1.5,1.5) {$\circ$}; \node (d) at (1.5,0) {$\circ$}; \node (e) at (0,0) {$\times$}; \node (f) at (-1.5,0) {$\circ$}; \node (min) at (0,-1.5) {$\times$}; \draw (min) -- (d) -- (a) -- (max) -- (b) -- (f) (e) -- (min) -- (f) -- (c) -- (max) (d) -- (b); \draw[preaction={draw=white, -,line width=6pt}] (a) -- (e) -- (c); \end{tikzpicture} \end{center} We indicate, on the diagram, a fixed point by $\times$ and a non-fixed point by $\circ$. \subsection{On the System Maps and their Interaction} \label{ap:motivation} As mentioned in the introduction, the systems of interest consist of a collection of states along with internal dynamics. The collection of states is a finite set $P$. The dynamics dictate the evolution of the system through the states and are \emph{governed} by a class $\mathcal{K}$ of maps $P\rightarrow P$. The class $\mathcal{K}$ is closed under composition, contains the identity map and satisfies: \begin{description} \item[P.1] If $a \neq b$ and $fa=b$ for some $f \in \mathcal{K}$, then $gb \neq a$ for every $g \in \mathcal{K}$. \item[P.2] If $gfa=b$ for some $f, g \in \mathcal{K}$, then $hga=b$ for some $h\in \mathcal{K}$. \end{description} The principles P.1 and P.2 naturally induce a partial order $\leq$ on the set $P$. The principles P.1 and P.2 further force the functions to be \emph{well adapted} to this order. \begin{prop} There exists a partial order $\leq$ on $P$ such that for each $f \in \mathcal{K}$: \begin{description} \item[A.1] If $a \in P$, then $a \leq fa$. \item[A.2] If $a, b \in P$ and $a \leq b$, then $fa \leq fb$. \end{description} \end{prop} \begin{proof} Define a relation $\leq$ on $P$ such that $a \leq b$ if, and only if, $b = fa$ for some $f\in \mathcal{K}$. 
The relation $\leq$ is reflexive and transitive as $\mathcal{K}$ contains the identity map and is closed under composition, respectively. Antisymmetry follows from P.1, and A.1 follows from the definition of $\leq$. Finally, if $a \leq b$, then $b = ga$ for some $g$. It then follows by P.2 that $fb = fga = hfa$ for some $h$. Therefore, $fa \leq fb$. \end{proof} We have only alluded to the fact that the maps in $\mathcal{K}$ will govern our dynamics. No law of interaction has yet been specified as to \emph{how} the maps govern the dynamics. As the state space is finite, the interaction may be motivated by iterative (functional) composition. For some map $\phi : \mathbb{N} \rightarrow \mathcal{K}$, the system starts in a state $a_0$ and evolves through $a_1,a_2,\cdots$ with $a_{i+1} = \phi_i a_i$. We reveal properties of such an interaction. Let $\phi : \mathbb{N} \rightarrow \S \subseteq \mathcal{K}$ be a surjective map, and define a map $F_i$ recursively as $F_1 = \phi_1$ and $F_{i+1} = \phi_{i+1}F_i$. \begin{prop} For some $M$, we have $F_m = F_M$ for $m\geq M$. \end{prop} \begin{proof} It follows from A.1 that $F_1a \leq F_2a \leq \cdots$. The result then follows from finiteness of $P$. \end{proof} \begin{prop} \label{pro:idempotent} The map $F_M$ is idempotent if $\phi^{-1}f$ is an infinite set for each $f\in \S$. \end{prop} \begin{proof} Write $F = F_M$. If $\phi^{-1}f$ is infinite, then $f = \phi_m$ for some $m > M$, and so $fF = fF_{m-1} = F_m = F$. If $\phi^{-1}f$ is infinite for all $f \in \S$, then $FF = F$ as $F$ is a finite composition of maps in $\S$. \end{proof} Let $\psi : \mathbb{N} \rightarrow \S$ be another surjective map, and define a map $G_i$ recursively as $G_1 = \psi_1$ and $G_{i+1} = \psi_{i+1}G_i$. For some $N$, we necessarily get $G_{N} = G_n$ for $n \geq N$. \begin{prop} \label{pro:equalmaps} It follows that $F_M = G_N$, if $\phi^{-1}f$ and $\psi^{-1}f$ are infinite sets for each $f\in \S$. \end{prop} \begin{proof} Define $F = F_M$ and $G = G_N$. By the argument of the previous proof, $fF = F$ and $fG = G$ for every $f \in \S$; as $F$ and $G$ are finite compositions of maps in $\S$, it follows that $FG = G$ and $GF = F$.
Therefore $Fa \leq FGa = Ga$ and $Ga \leq GFa = Fa$ for every $a$, and so $F = G$ by antisymmetry. \end{proof} The maps governing the dynamics are to be considered as \emph{intrinsic mechanisms wired into} the system. The effect of each map should not die out along the evolution of the system, but should rather keep on resurging. Such a consideration hints at an interaction in which each map is applied infinitely many times. There is immense variability in the order of application. However, we only care about the limiting outcome of the dynamics. By Proposition \ref{pro:equalmaps}, such variability then makes no difference from our standpoint. We further know, through Proposition \ref{pro:idempotent}, that iterative composition in this setting can only lead to idempotent maps. We then impose---with no loss of generality---a third principle (P.3) requiring $\mathcal{K}$ to contain only idempotent maps. This principle gives rise to a third axiom. \begin{description} \item[A.3] For $a \in P$, $ffa = fa$. \end{description} We define $\L_P$ to be the set of maps satisfying A.1, A.2 and A.3. The set $\L_P$ contains the identity map, and contains each element of $\mathcal{K}$ once P.3 is imposed. Furthermore, the principles P.1, P.2 and P.3 remain satisfied if $\mathcal{K}$ is replaced by $\L_P$. We will then extend $\mathcal{K}$ to be equal to $\mathcal{L}_P$. This extension offers greater variability in dynamics, and there is no particular reason to consider any different set. We further consider only posets $(P,\leq)$ that are lattices, as opposed to arbitrary posets. \section{The Lattice of Systems}\label{sec:lattice} The theory of cascade effects presented in this paper is foremost a theory of combinations and interconnections. As such, functions shall be treated in relation to each other. The notion of desirability on states introduced by the partial order translates to a notion of desirability on systems.
We envision that systems combined together should form less desirable systems, i.e.\ systems that are more likely to evolve to less desirable states. Defining an order on the maps is a natural way to formalize such an intuition. We define the relation $\leq$ on $\L$, where $f \leq g$ if, and only if, $fa \leq ga$ for each $a$. \begin{prop}\label{pro:Llattice} The relation $\leq$ is a partial order on $\mathcal{L}$, and the poset $(\L,\leq)$ is a lattice. \end{prop} \begin{proof} The reflexivity, antisymmetry and transitivity properties of $\leq$ follow easily from those of the order on $P$. If $f,g \in \L$, then define $h : a \mapsto fa \wedge ga$. It can be checked that $h \in \L$. Let $h'$ be any lower bound of $f$ and $g$; then $h'a \leq fa$ and $h'a \leq ga$. Therefore $h'a \leq fa \wedge ga = ha$, and so every pair in $\L$ admits a greatest lower bound in $\L$. Furthermore, the map $a \mapsto \hat{p}$ is the maximum element of $\L$. The set of upper bounds of $f$ and $g$ in $\L$ is then non-empty, and necessarily contains a least element by finiteness. Every pair in $\L$ then also admits a least upper bound in $\L$. \end{proof} We may then deduce join and meet operations denoted by $+$ (combine) and $\cdot$ (project) respectively. The meet of a pair of systems was derived in the proof of Proposition \ref{pro:Llattice}. \begin{prop} If $f,g \in \L$, then $f\cdot g$ is $a \mapsto fa \wedge ga$. \end{prop} On a dual end, \begin{prop} \label{pro:join} If $f,g \in \L$, then $f+ g$ is the least fixed point of the map $h \mapsto (fg)h(fg)$. As $P$ is finite, it follows that $f + g = (fg)^{|P|}$. \end{prop} \begin{proof} Define $h_0 = (fg)^{|P|}$. Since the map $fg$ satisfies A.1 and A.2, the map $h_0$ also satisfies A.1 and A.2. Furthermore, iterative composition yields $(fg)^{|P|+1} = (fg)^{|P|}$. Then $h_0$ is idempotent, i.e.\ satisfies A.3. The map $h_0$ is then a fixed point of $h \mapsto (fg)h(fg)$. Moreover, every upper bound of $f$ and $g$ is a fixed point of $h \mapsto (fg)h(fg)$.
Let $h'$ be such an upper bound; then $fh'=h'$ and $gh' = h'$. It follows that $h_0h' = h'$, i.e.\ $h_0 \leq h'$. \end{proof} The lattice $\L_P$ has a minimum and a maximum as it is finite. The minimum element (denoted by $0$ or $0_p$) corresponds to the identity map $a \mapsto a$. The maximum (denoted by $1$ or $1_p$) corresponds to $a \mapsto \hat{p}$. \subsection{Interpretation and Examples} The $+$ operator yields the most desirable system incorporating the effect of both of its operands. The $\cdot$ operator dually yields the least desirable system whose effects are contained within both of its operands. Their use and significance are partially illustrated through the following six examples. \subsubsection*{Example 0. Intuitive interpretation of the $+$ operator} The $+$ operation combines the rules of the systems. If each of $f$ and $g$ is seen to be described by a set of monotone deduction rules, then $f+g$ is the system that is obtained from the union of these sets of rules. The intuitive picture of combining rules may also be found in the characterization $f + g = (fg)^{|P|}$. Both rules of $f$ and $g$ are iteratively applied on an initial state to yield a final state. Furthermore, the order of composition does not affect the final state, as long as each system is applied enough times. This insight follows from the interaction of A.1 and A.2, and is made formal in Subsection \ref{ap:motivation}. In a societal setting, each agent's state is governed by a set of local rules. Every such set only affects the state of its corresponding agent. The aggregate (via $+$) of all the local rules then defines the whole system. It allows for an interaction between the rules, and makes way for cascade effects to emerge. In the context of failures in infrastructure, the $+$ operator enables adding new conditions for failure/disconnections in the system. This direction of aggregating local rules is further pursued in Section \ref{sec:comp} on component realization.
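The characterization $f + g = (fg)^{|P|}$ and the order-independence just described can be checked mechanically. The following is a minimal sketch in Python, under assumed threshold rules on subsets of $\{0,1,2\}$ (the rules and thresholds are illustrative, not taken from the text):

```python
from itertools import chain, combinations

# States: the Boolean lattice of subsets of {0, 1, 2}.
V = frozenset({0, 1, 2})
P = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(V), r) for r in range(len(V) + 1))]

def rule(i, neighbors, k):
    """Monotone rule: element i turns on once at least k neighbors are on."""
    return lambda X: X | {i} if len(X & neighbors) >= k else X

def plus(f, g):
    """The join f + g, computed as (fg)^{|P|}."""
    def h(X):
        for _ in range(len(P)):
            X = f(g(X))
        return X
    return h

f = rule(0, frozenset({1, 2}), 1)  # 0 turns on if 1 or 2 is on
g = rule(1, frozenset({0, 2}), 2)  # 1 turns on if both 0 and 2 are on
h = plus(f, g)

# The order of composition does not affect the limit.
assert all(h(X) == plus(g, f)(X) for X in P)
assert h(frozenset({2})) == frozenset({0, 1, 2})
```

Each rule is applied enough times for the iteration to stabilize; swapping the order of application yields the same limit, in line with Proposition \ref{pro:equalmaps}.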
The definition of cascade effects is further expounded in Section \ref{sec:eval}. The five examples to follow also provide additional insight. \subsubsection*{Example 1. Overview on M.0} Let $f$ and $f'$ be systems derived from instances $(V,A)$ and $(V,A')$ of M.0. If $A' \subseteq A$, then $f' \leq f$. If $A'$ and $A$ are non-comparable, an inequality may still hold as different digraphs may give rise to the same system. The system $f + f'$ is the system derived from $(V, A\cup A')$. The system $f\cdot f'$ is, however, not necessarily derived from $(V, A\cap A')$. If $(V,A)$ is a directed cycle and $(V,A')$ is the same cycle with the arcs reversed, then $f = f'$ while $(V, A\cap A')$ is the empty graph and yields the $0$ system. \subsubsection*{Example 2. Combining Update Rules} Given a set $S$, consider a subset $N_i \subseteq S$ and an integer $k_i$ for each $i \in S$. Construct a map $f_i$ that maps $X$ to $X\cup \{i\}$ if $|X\cap N_i| \geq k_i$ and to $X$ otherwise. Finally, define the map $f = f_1 + \cdots + f_n$. The map $f$ can be realized by an instance of M.1, and each of the $f_i$ corresponds to a \emph{local evolution rule}. \subsubsection*{Example 3. Recovering Update Rules} Given the setting of the previous example, define the map $e_i : X \mapsto X \cup \{i\}$. This map enables the extraction of a \emph{local} evolution rule. Indeed, $i \in (f \cdot e_i) X_0$ if, and only if, $i \in f X_0$. However, if $j \neq i$, then $j \in (f \cdot e_i) X_0$ if, and only if, $j \in X_0$. It will later be proved that $f = f\cdot e_1 + \cdots + f\cdot e_n$. The system $f$ can be realized as a combination of evolution rules, each governing the behavior of only one element of $S$. \subsubsection*{Example 4. An Instance of Boolean Systems}Consider the following two instances of M.4, where $L$ is the Boolean lattice. Iteration indices are dropped in the notation. 
\begin{align*} x_1 &:= x_1 \vee (x_2 \wedge x_3) & x_1 &= x_1\nonumber\\ x_2 &:= x_2 \vee x_3 & x_2 &= x_2 \vee x_3\nonumber\\ x_3 &:= x_3 & x_3 &= x_3 \vee (x_1 \wedge x_2)\nonumber \end{align*} Let $f$ and $g$ denote the system maps generated by the right and left instances, respectively. The maps $f+ g$ (left) and $f\cdot g$ (right) can then be realized as: \begin{align*} x_1 &:= x_1 \vee (x_2 \wedge x_3) &\quad x_1 &= x_1\nonumber\\ x_2 &:= x_2 \vee x_3 &\quad x_2 &= x_2 \vee x_3\nonumber\\ x_3 &:= x_3 \vee (x_1 \wedge x_2) &\quad x_3 &= x_3\nonumber \end{align*} The map $f \cdot g$ is thus generated by the single rule $x_2 = x_2 \vee x_3$ common to both instances. \subsubsection*{Example 5. Closure under Meet and Join} If $f$ and $g$ are derived from instances of M.1, then neither $f + g$ nor $f \cdot g$ is guaranteed to be realizable as an instance of M.1. If they are derived from instances of M.2, then only $f + g$ is necessarily realizable as an instance of M.2. As all systems (over the Boolean lattice) can be realized as instances of M.3, both $f + g$ and $f \cdot g$ can always be realized as instances of M.3. As an example, we consider the case of M.2. If $(\mathcal{C}_f,k_f)$ and $(\mathcal{C}_g,k_g)$ are realizations of $f$ and $g$ as M.2, then $(\mathcal{C}_f\cup \mathcal{C}_g,k)$ is a realization of $f + g$, with $k$ being $k_f$ on $\mathcal{C}_f$ and $k_g$ on $\mathcal{C}_g$. However, let $S= \{a,b,c\}$ be a set, and consider $\mathcal{C}_f = \{\{a,b\}\}$ with $k_f = 1$, and $\mathcal{C}_g = \{\{b,c\}\}$ with $k_g = 1$. The set $\{a,c\}$ is not a fixed-point of $f \cdot g$. Thus, if a realization $(\mathcal{C}_{f\cdot g},k)$ of $f \cdot g$ is possible, then $\{a,b,c\}\in\mathcal{C}_{f\cdot g}$ with $k \leq 2$. However, both $\{a,b\}$ and $\{b,c\}$ are fixed-points of $f\cdot g$, contradicting such a realization. \subsection{Effect of the Operators on Fixed Points} The fixed point characterization uncovered thus far is independent of the order on $\L_P$.
The map $\Phi : f \mapsto \{a: fa = a\}$ is also well behaved with respect to the $+$ and $\cdot$ operations. For $S, T \subseteq P$, we define their set meet $S \wedge T$ to be $\{a\wedge b : a\in S \text{ and } b \in T\}$. \begin{prop} If $f,g \in \L$, then $\Phi (f+ g) = \Phi f \cap \Phi g$ and $\Phi (f \cdot g) = \Phi f \wedge \Phi g$. \end{prop} \begin{proof} If $a \in \Phi f \cap \Phi g$, then $ga \in \Phi f$. As $fga = a$, it follows that $a \in \Phi(f+ g)$. Conversely, as $(f+g)g = (f+g)$, if $(f+g)a = a$, then $(f+g)ga = a$ and so $ga = a$. By symmetry, if $(f+g)a=a$, then $fa=a$. Thus if $a \in \Phi (f+ g)$, then $a\in \Phi f \cap \Phi g$. Furthermore, $(f\cdot g)a = a$ if, and only if, $fa \wedge ga = a$ and the result $\Phi (f \cdot g) = \Phi f \wedge \Phi g$ follows. \end{proof} Combination and projection lend themselves to simple operations when the maps are viewed as a collection of fixed points. Working directly in $\Phi \L$ will yield a remarkable conceptual simplification. \subsection{Summary on Fixed Points: The Isomorphism Theorem} Let $\mathcal{F}$ be the collection of all $S \subseteq P$ such that $\hat{p} \in S$ and $a\wedge b \in S$ if $a, b \in S$. Ordering $\mathcal{F}$ by reverse inclusion $\supseteq$ equips it with a lattice structure. The join and meet of $S$ and $T$ in $\mathcal{F}$ are, respectively, set intersection $S \cap T$ and set meet $S \wedge T = \{a\wedge b : a\in S \text{ and } b \in T\}$. The set $S \wedge T$ may also be obtained by taking the union of $S$ and $T$ and closing the set under $\wedge$. \begin{thm}\label{thm:iso} The map $\Phi : f \mapsto \{a : fa=a\}$ defines an isomorphism between $(\L,\leq,+,\cdot)$ and $(\mathcal{F},\supseteq,\cap,\wedge)$. \qed \end{thm} Such a result is well known in the study of \emph{closure operators}, and is relatively simple. 
We refer the reader, for instance, to \cite{BIR1936}, \cite{ORE1943} and \cite{WAR1942} for pieces of this theorem, and to \cite{BIR1967} Ch V and \cite{CAS2003} for a broader overview, more insight and references. Nevertheless, its implications for the theory at hand are remarkable. Our systems will be interchangeably used as both maps and subsets of $P$. The isomorphism affords a conceptual simplification, allowing emerging objects to be interpreted as systems exhibiting \emph{cascade effects}. \subsection{Overview Through An Example (Continued)} We continue the running example. Our example is realized as a \emph{combination} of three evolution rules: one pertaining to each node. For instance, the rule of node $A$ may be realized as: \begin{center} \begin{tikzpicture} [scale=.4,auto=center,every node/.style={circle,fill=black!10!white,scale=0.8}] \node (n1) at (1,10) {A,2}; \node (n2) at (4,8) {B,3}; \node (n3) at (1,6) {C,3}; \foreach \from/\to in {n2/n1,n3/n1} \draw[->] (\from) -- (\to); \end{tikzpicture} \end{center} The threshold $3$ is just a large enough integer so that the colors of node $B$ and node $C$ never evolve, regardless of the coloring of the graph. The system derived from such a realization is the map $f_A : 2^{\bf 3} \rightarrow 2^{\bf 3}$ satisfying A.1, A.2 and A.3 such that $BC \mapsto ABC$ and all remaining states are left unchanged.
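Assuming the threshold realization above, the map $f_A$ and its fixed points can be computed directly; a small Python sketch:

```python
from itertools import chain, combinations

# States: subsets of {A, B, C}.
V = ['A', 'B', 'C']
P = [frozenset(s) for s in chain.from_iterable(
    combinations(V, r) for r in range(len(V) + 1))]

def f_A(X):
    """Local rule of node A: A turns black once both B and C are black."""
    return X | {'A'} if {'B', 'C'} <= X else X

# The only state moved by f_A is {B, C} (written aBC in our notation).
assert [X for X in P if f_A(X) != X] == [frozenset({'B', 'C'})]
assert f_A(frozenset({'B', 'C'})) == frozenset({'A', 'B', 'C'})
```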
A fixed point representation yields: \begin{center} \begin{tikzpicture}[scale=0.6] \node (max) at (0,3) {$\times$}; \node (a) at (1.5,1.5) {$\circ$}; \node (b) at (0,1.5) {$\times$}; \node (c) at (-1.5,1.5) {$\times$}; \node (d) at (1.5,0) {$\times$}; \node (e) at (0,0) {$\times$}; \node (f) at (-1.5,0) {$\times$}; \node (min) at (0,-1.5) {$\times$}; \draw (min) -- (d) -- (a) -- (max) -- (b) -- (f) (e) -- (min) -- (f) -- (c) -- (max) (d) -- (b); \draw[preaction={draw=white, -,line width=6pt}] (a) -- (e) -- (c); \end{tikzpicture} \end{center} Similarly the maps $f_B$ and $f_C$ derived for the rules of $B$ and $C$ are represented (respectively from left to right) through their fixed points as: \begin{center} \begin{tikzpicture}[scale=0.6] \node (max) at (0,3) {$\times$}; \node (a) at (1.5,1.5) {$\times$}; \node (b) at (0,1.5) {$\circ$}; \node (c) at (-1.5,1.5) {$\times$}; \node (d) at (1.5,0) {$\circ$}; \node (e) at (0,0) {$\times$}; \node (f) at (-1.5,0) {$\circ$}; \node (min) at (0,-1.5) {$\times$}; \draw (min) -- (d) -- (a) -- (max) -- (b) -- (f) (e) -- (min) -- (f) -- (c) -- (max) (d) -- (b); \draw[preaction={draw=white, -,line width=6pt}] (a) -- (e) -- (c); \node (maxmax) at (4.5,3) {$\times$}; \node (aa) at (6,1.5) {$\times$}; \node (bb) at (4.5,1.5) {$\times$}; \node (cc) at (3,1.5) {$\circ$}; \node (dd) at (6,0) {$\times$}; \node (ee) at (4.5,0) {$\times$}; \node (ff) at (3,0) {$\times$}; \node (minmin) at (4.5,-1.5) {$\times$}; \draw (minmin) -- (dd) -- (aa) -- (maxmax) -- (bb) -- (ff) (ee) -- (minmin) -- (ff) -- (cc) -- (maxmax) (dd) -- (bb); \draw[preaction={draw=white, -,line width=6pt}] (aa) -- (ee) -- (cc); \end{tikzpicture} \end{center} Our overall descriptive rule of the dynamics is constructed by a \emph{descriptive} combination of the evolution rules of $A$, $B$ and $C$. With respect to the objects \emph{behind} those rules, the overall system is obtained by a $+$ combination of the \emph{local} systems. 
Indeed we have $f = f_A + f_B + f_C$, and such a combination is obtained by only keeping the fixed points that are common to all three systems. \section{Components Realization}\label{sec:comp} The systems derived from instances of ``models'' \emph{forget} all the component structure described by the model. Nodes in M.0 and M.1 are bundled together to form the Boolean lattice, and the system is a monolithic map from $2^V$ to $2^V$. We have not discussed any means to recover components and interconnection structures from systems. We might want such a recovery for at least two reasons. First, we may be interested in understanding specific subparts of the modeled system. Second, we may want to realize our systems as instances of other models. In state spaces isomorphic to $2^S$ for some $S$, components may often be identified with the elements of $S$. In the case of M.0 and M.1, the components are represented as nodes in a graph. Yet, two elements of $S$ might also be so tightly coupled as to form a single component. It is also less clear what the components can be when non-Boolean lattices are taken as state spaces. We formalize this flexibility by considering the set $\mathcal{E}$ of all maps $0_q \times 1_{q'}$ in $\L_Q{\times}\L_{Q'} \subseteq \L_{Q{\times}Q'}$ for $Q {\times} Q' = P$. The map $0_q \times 1_{q'}$ sends $(q,q') \in Q\times Q'$ to $(q,\hat{q}')$ where $\hat{q}'$ is the maximum element of $Q'$. Indeed, the system $0_q$, being the identity, keeps $q$ unchanged in $Q$. The system $1_{q'}$, being the maximum system, sends $q'$ to the maximum element $\hat{q}'$ of $Q'$. We refer to the maps of $\mathcal{E}$ as \emph{elementary} functions (or systems).
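As a concrete sketch in Python, with the decomposition $2^{\{A,B,C\}} \cong 2^{\{A\}} \times 2^{\{B,C\}}$ chosen purely for illustration, elementary maps and the axioms they satisfy can be checked by enumeration:

```python
from itertools import chain, combinations

V = ['A', 'B', 'C']
P = [frozenset(s) for s in chain.from_iterable(
    combinations(V, r) for r in range(len(V) + 1))]
TOP = frozenset(V)

def elementary(kept):
    """0_Q x 1_{Q'}: identity on the 'kept' factor, maximum on the rest."""
    kept = frozenset(kept)
    return lambda X: (X & kept) | (TOP - kept)

e1, e2 = elementary({'A'}), elementary({'B', 'C'})
for e in (e1, e2):
    assert all(X <= e(X) for X in P)                          # A.1
    assert all(e(X) <= e(Y) for X in P for Y in P if X <= Y)  # A.2
    assert all(e(e(X)) == e(X) for X in P)                    # A.3

# The pair satisfies e1 . e2 = 0 (identity) and e1 + e2 = 1 (maximum).
assert all(e1(X) & e2(X) == X for X in P)
assert all(e1(e2(X)) == TOP for X in P)
```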
A \emph{component realization} of $P$ is a collection of systems $e_A,\cdots,e_H$ in $\mathcal{E}$ where: \begin{align} &e_A + \cdots + e_H = 1\nonumber\\ &e_I \cdot e_J = 0 \quad \text{for all }I\neq J\nonumber \end{align} For a different perspective, we consider a direct decomposition of $P$ into lattices $A,\cdots,H$ such that $A \times \cdots \times H = P$. An element $t$ of $P$ can be written either as a tuple $(t_A,\cdots,t_H)$ or as a string $t_A \cdots t_H$. If $(t_A,\cdots,t_H)$ and $(t'_A,\cdots,t'_H)$ are elements of $P$, then: \begin{align} (t_A,\cdots,t_H) \vee (t'_A,\cdots,t'_H) &= (t_A \vee t'_A,\cdots,t_H \vee t'_H)\label{eq:joinproduct}\\ (t_A,\cdots,t_H) \wedge (t'_A,\cdots,t'_H) &= (t_A \wedge t'_A,\cdots,t_H \wedge t'_H)\label{eq:meetproduct}. \end{align} Indeed, the join (resp.\ meet) in the product lattice is the product of the joins (resp.\ meets) in the factor lattices. Maps $e_A,\cdots,e_H$ can be defined as $e_I : ti \mapsto t\hat{i}$, keeping $t$ unchanged and mapping $i$ to the maximum element $\hat{i}$ of $I$. These maps belong to $\L_P$, and together constitute a component realization as defined above. Conversely, each component realization gives rise to a direct decomposition of $P$. \begin{thm}\label{thm:decom} Let $e_A, \cdots, e_H$ be a component realization of $P$. If $f \in \L_P$, then $f = f\cdot e_A + \cdots + f\cdot e_H$. \end{thm} \begin{proof} It is immediate that $f\cdot e_A + \cdots + f\cdot e_H \leq f$. To show the other inequality, consider $t \notin \Phi f$. Then $t_I \neq (ft)_I$ for some $I$. Furthermore, if $t' \geq t$ with $t'_I = t_I$, then $t'\notin \Phi f$. Assume $t \in \Phi (f\cdot e_I)$; then $t = s \wedge r$ for some $s \in \Phi f$ and $r \in \Phi e_I$. It then follows that $r_I = \hat{i}$, the maximum element of $I$. Therefore $s_I = t_I$ and $s \geq t$, contradicting the fact that $s \in \Phi f$. \end{proof} The map $f\cdot e_I$ may evolve only the $I$-th \emph{component} of the state space.
\begin{prop}\label{pro:evolveIcomponent} If $s\in P$ is written as $ti$, then $(f \cdot e_I)s = t(fs)_I$, where $(fs)_I$ is the projection of $fs$ onto the component $I$. \end{prop} \begin{proof} We have $(f \cdot e_I)s = fs \wedge e_Is = f(ti) \wedge t\hat{i} = t(fs)_I$. The last equality follows from Equation \ref{eq:meetproduct}. \end{proof} It is also the evolution rule governing the state of component $I$ as a function of the full system state. \begin{prop}\label{pro:joincomponent} Let $e_A, \cdots, e_H$ be a component realization of $P$. If $f \in \L_P$, then $fa = (f\cdot e_A)a \vee \cdots \vee (f\cdot e_H)a$ for every $a \in P$. \end{prop} \begin{proof} It is immediate that $(f\cdot e_A)a \vee \cdots \vee (f\cdot e_H)a \leq fa$. The other inequality follows from combining Proposition \ref{pro:evolveIcomponent} and Equation \ref{eq:joinproduct}. \end{proof} \begin{exa} Let $f$ be the system derived from an instance $(V,A)$ of M.0. We consider the maps $e_i : X \mapsto X\cup\{i\}$ for $i \in V$. The collection $\{e_i\}$ forms a \emph{component realization} where $e_i$ corresponds to node $i$ in the graph. The system $f \cdot e_i$ may be identified with the \emph{ancestors} of $i$, namely, nodes $j$ where a directed path from $j$ to $i$ exists. A realization (in the form of M.0) of $f \cdot e_i$ then colors $i$ black whenever any of its ancestors is black, leaving the color of all other nodes unchanged. Combining the maps $f \cdot e_i$ recovers the map $f$. \end{exa} \emph{Interconnection structures} (e.g. digraphs as used in M.1) may be further derived by defining projection and inclusion maps accordingly and requiring the systems to satisfy some fixed-point conditions. Such structures can be interpreted as systems in $\L_{\L_P}$. They will not be considered in this paper.
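The decomposition of Theorem \ref{thm:decom} can be verified by brute force on the running triangle example; a Python sketch (threshold realization assumed, with $(f \cdot e_i)X = X \cup (\{i\} \cap fX)$ as in Proposition \ref{pro:evolveIcomponent}):

```python
from itertools import chain, combinations

V = ['A', 'B', 'C']
P = [frozenset(s) for s in chain.from_iterable(
    combinations(V, r) for r in range(len(V) + 1))]
NBRS = {'A': {'B', 'C'}, 'B': {'A', 'C'}, 'C': {'A', 'B'}}
K = {'A': 2, 'B': 1, 'C': 2}

def f(X):
    """The running system: close X under the three threshold rules."""
    while True:
        Y = X | {i for i in V if len(X & NBRS[i]) >= K[i]}
        if Y == X:
            return X
        X = Y

def project(i):
    """f . e_i : evolves only the i-component of the state."""
    return lambda X: X | ({i} & f(X))

def join(maps):
    """Join of systems: round-robin composition up to stabilization."""
    def h(X):
        while True:
            Y = X
            for m in maps:
                Y = m(Y)
            if Y == X:
                return X
            X = Y
    return h

g = join([project(i) for i in V])
assert all(g(X) == f(X) for X in P)  # f = f.e_A + f.e_B + f.e_C
```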
\subsection{Defining Cascade Effects} Given a component realization $e_A, \cdots, e_H$, define a collection of maps $f_A,\cdots,f_H$ where $f_I \leq e_I$ dictates the evolution of the state of component $I$ as a function of $P$. These update rules are typically combined to form a system $f = f_A + \cdots + f_H$. \emph{Cascade effects} are said to occur when $f \cdot e_I \neq f_I$ for some $I$. The behavior governing a certain (sub)system $I$ is \emph{enhanced} as this component is embedded into the greater system. The definition provided in this subsection should be considered conceptually illustrative rather than complete. The main goal of the paper is to define a class of systems exhibiting cascade effects. It is not to define what cascade effects are. We instead refer the reader to \cite{ADAM:Dissertation} for an actionable definition and a study of these effects. We will however revisit this definition in Section \ref{sec:eval} with more insight. The conditions under which such effects occur depend on the properties of the operations. If $\cdot$ distributes over $+$, then this behavior never occurs; distributivity, however, will seldom hold, as shown in the next section. \subsection{Overview Through An Example (Continued)} We continue the running example.
On a dual end, if we wish to view the nodes $A$, $B$ and $C$ as distinct entities, we may define a component realization $e_A$, $e_B$ and $e_C$ represented (respectively from left to right) as: \begin{center} \begin{tikzpicture}[scale=0.6] \node (max) at (0,3) {$\times$}; \node (a) at (1.5,1.5) {$\circ$}; \node (b) at (0,1.5) {$\times$}; \node (c) at (-1.5,1.5) {$\times$}; \node (d) at (1.5,0) {$\circ$}; \node (e) at (0,0) {$\circ$}; \node (f) at (-1.5,0) {$\times$}; \node (min) at (0,-1.5) {$\circ$}; \draw (min) -- (d) -- (a) -- (max) -- (b) -- (f) (e) -- (min) -- (f) -- (c) -- (max) (d) -- (b); \draw[preaction={draw=white, -,line width=6pt}] (a) -- (e) -- (c); \node (maxmax) at (4.5,3) {$\times$}; \node (aa) at (6,1.5) {$\times$}; \node (bb) at (4.5,1.5) {$\circ$}; \node (cc) at (3,1.5) {$\times$}; \node (dd) at (6,0) {$\circ$}; \node (ee) at (4.5,0) {$\times$}; \node (ff) at (3,0) {$\circ$}; \node (minmin) at (4.5,-1.5) {$\circ$}; \draw (minmin) -- (dd) -- (aa) -- (maxmax) -- (bb) -- (ff) (ee) -- (minmin) -- (ff) -- (cc) -- (maxmax) (dd) -- (bb); \draw[preaction={draw=white, -,line width=6pt}] (aa) -- (ee) -- (cc); \node (maxmaxmax) at (9,3) {$\times$}; \node (aaa) at (10.5,1.5) {$\times$}; \node (bbb) at (9,1.5) {$\times$}; \node (ccc) at (7.5,1.5) {$\circ$}; \node (ddd) at (10.5,0) {$\times$}; \node (eee) at (9,0) {$\circ$}; \node (fff) at (7.5,0) {$\circ$}; \node (minminmin) at (9,-1.5) {$\circ$}; \draw (minminmin) -- (ddd) -- (aaa) -- (maxmaxmax) -- (bbb) -- (fff) (eee) -- (minminmin) -- (fff) -- (ccc) -- (maxmaxmax) (ddd) -- (bbb); \draw[preaction={draw=white, -,line width=6pt}] (aaa) -- (eee) -- (ccc); \end{tikzpicture} \end{center} Local evolution rules may be recovered through the systems $f\cdot e_A$, $f \cdot e_B$ and $f \cdot e_C$. Those are likely to be different than $f_A$, $f_B$ and $f_C$ as they also take into account the \emph{effects} resulting from their combination. 
The systems $f\cdot e_A$, $f \cdot e_B$ and $f \cdot e_C$ are generated by considering $\Phi f \cup \Phi e_I$ and closing this set under $\cap$. They are represented (respectively from left to right) as: \begin{center} \begin{tikzpicture}[scale=0.6] \node (max) at (0,3) {$\times$}; \node (a) at (1.5,1.5) {$\circ$}; \node (b) at (0,1.5) {$\times$}; \node (c) at (-1.5,1.5) {$\times$}; \node (d) at (1.5,0) {$\circ$}; \node (e) at (0,0) {$\times$}; \node (f) at (-1.5,0) {$\times$}; \node (min) at (0,-1.5) {$\times$}; \draw (min) -- (d) -- (a) -- (max) -- (b) -- (f) (e) -- (min) -- (f) -- (c) -- (max) (d) -- (b); \draw[preaction={draw=white, -,line width=6pt}] (a) -- (e) -- (c); \node (maxmax) at (4.5,3) {$\times$}; \node (aa) at (6,1.5) {$\times$}; \node (bb) at (4.5,1.5) {$\circ$}; \node (cc) at (3,1.5) {$\times$}; \node (dd) at (6,0) {$\circ$}; \node (ee) at (4.5,0) {$\times$}; \node (ff) at (3,0) {$\circ$}; \node (minmin) at (4.5,-1.5) {$\times$}; \draw (minmin) -- (dd) -- (aa) -- (maxmax) -- (bb) -- (ff) (ee) -- (minmin) -- (ff) -- (cc) -- (maxmax) (dd) -- (bb); \draw[preaction={draw=white, -,line width=6pt}] (aa) -- (ee) -- (cc); \node (maxmaxmax) at (9,3) {$\times$}; \node (aaa) at (10.5,1.5) {$\times$}; \node (bbb) at (9,1.5) {$\times$}; \node (ccc) at (7.5,1.5) {$\circ$}; \node (ddd) at (10.5,0) {$\times$}; \node (eee) at (9,0) {$\times$}; \node (fff) at (7.5,0) {$\circ$}; \node (minminmin) at (9,-1.5) {$\times$}; \draw (minminmin) -- (ddd) -- (aaa) -- (maxmaxmax) -- (bbb) -- (fff) (eee) -- (minminmin) -- (fff) -- (ccc) -- (maxmaxmax) (ddd) -- (bbb); \draw[preaction={draw=white, -,line width=6pt}] (aaa) -- (eee) -- (ccc); \end{tikzpicture} \end{center} The system $f \cdot e_A$ captures the fact that node $A$ can become black if only $C$ is colored black. A change in $f_A$ would, however, require both $B$ and $C$ to be black. 
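This discrepancy between $f \cdot e_A$ and $f_A$ is the cascade effect at work; a Python sketch (threshold realization of the running example assumed) makes it explicit:

```python
V = ['A', 'B', 'C']
NBRS = {'A': {'B', 'C'}, 'B': {'A', 'C'}, 'C': {'A', 'B'}}
K = {'A': 2, 'B': 1, 'C': 2}

def f(X):
    """The combined system f = f_A + f_B + f_C."""
    while True:
        Y = X | {i for i in V if len(X & NBRS[i]) >= K[i]}
        if Y == X:
            return X
        X = Y

def f_A(X):
    """The local rule of node A alone."""
    return X | {'A'} if {'B', 'C'} <= X else X

def proj_A(X):
    """f . e_A : evolves only the A-component of the state."""
    return X | ({'A'} & f(X))

# Starting from 'only C black': the embedded rule colors A, the local one does not.
assert proj_A(frozenset({'C'})) == frozenset({'A', 'C'})
assert f_A(frozenset({'C'})) == frozenset({'C'})
```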
Recombining the obtained local rules is bound to recover the overall system, and indeed $f = f\cdot e_A + f\cdot e_B + f\cdot e_C$ as can be checked by keeping only the common fixed points. \section{Properties of the Systems Lattice} \emph{Complex} systems will be built out of \emph{simpler} systems through expressions involving $+$ and $\cdot$. The power of such an expressiveness will come from the properties exhibited by the operators. Those are trivially derived from the properties of the lattice $\L$ itself. \begin{prop} The following propositions are equivalent. (i) The set $P$ is linearly ordered. (ii) The lattice $\L_P$ is distributive. (iii) The lattice $\L_P$ is modular. \end{prop} \begin{proof} Property (ii) implies (iii) by definition. If $P$ is linearly ordered, then $\L$ is a Boolean lattice, as any subset of $P$ is closed under $\wedge$. Therefore (i) implies (ii). Finally, it can be checked that $(f,g)$ is a modular pair if, and only if, $\Phi(f\cdot g)= \Phi(f)\cup\Phi(g)$ i.e., $\Phi(f)\cup\Phi(g)$ is closed under $\wedge$. If $\L_P$ is modular, then every pair $(f,g)$ is modular. In that case, each pair of states in $P$ is necessarily comparable, and so (iii) implies (i). \end{proof} The state spaces we are interested in are not linearly ordered. Non-distributivity is natural within the interpreted context of cascade effects, and has at least two implications. First, the decomposition of Theorem \ref{thm:decom} cannot follow from distributivity, and relies on a more subtle point. Second, cascade effects (as defined in Section \ref{sec:comp}) are bound to occur in non-trivial cases. The loss of modularity is suggested by the asymmetry in the behavior of the operators. The $+$ operator corresponds to set intersection, whereas the $\cdot$ operator, less conveniently, corresponds to a set union followed by a closure under $\wedge$. Nevertheless, the lattice will be \emph{half} modular. \begin{prop} The lattice $\L_P$ is (upper) semimodular.
\end{prop} \begin{proof} It is enough to prove that if $f \cdot g \prec f$ and $f \cdot g \prec g$, then $f \prec f+ g$ and $g \prec f+ g$. If $f \cdot g$ is covered by $f$ and $g$, then $|\Phi f - \Phi g| = |\Phi g - \Phi f| = 1$. Then necessarily $f + g$ covers $f$ and $g$. \end{proof} Semi-modularity will be fundamental in defining the $\mu$-rank of a system in Section 7. The lattice $\L$ is equivalently a graded poset, and admits a rank function $\rho$ such that $\rho(f + g) + \rho(f \cdot g) \leq \rho(f) + \rho(g)$. The quantity $\rho(f)$ is equal to the number of non-fixed points of $f$ i.e. $|P - \Phi f|$. More properties may still be extracted, up to full characterization of the lattice. Yet, such properties are not needed in this paper. \subsection{Additional Remarks on the Lattice of Systems}\label{sec:addRemarks} This subsection illustrates some basic lattice theoretic properties on $2^{\bf 2}$, represented through its Hasse diagram below. We follow the notation of the running example (see e.g., Subsection \ref{sec:running}). \begin{center} \begin{tikzpicture}[scale=0.6] \node (max) at (0,3) {$AB$}; \node (a) at (1.5,1.5) {$aB$}; \node (c) at (-1.5,1.5) {$Ab$}; \node (e) at (0,0) {$ab$}; \draw (a) -- (max) -- (c) (a) -- (e) -- (c); \end{tikzpicture} \end{center} The lattice $\L_{2^{\bf 2}}$ may be represented as follows. The systems are labeled through their set of fixed-points. \begin{center} \begin{tikzpicture}[scale=1.2] \node (max) at (0,3) {$\{AB\}$}; \node (a) at (1.5,2) {$\{aB,AB\}$}; \node (b) at (0,2) {$\{ab,AB\}$}; \node (c) at (-1.5,2) {$\{Ab,AB\}$}; \node (d) at (1.5,1) {$\{ab,aB,AB\}$}; \node (f) at (-1.5,1) {$\{ab,Ab,AB\}$}; \node (min) at (0,0) {$\{ab,aB,Ab,AB\}$}; \draw (min) -- (d) -- (a) -- (max) -- (b) -- (f) (min) -- (f) -- (c) -- (max) (d) -- (b); \end{tikzpicture} \end{center} A map $f \in \L_P$ will be called \emph{prime} if $P - \Phi f$ is closed under~$\wedge$. Those maps will be extensively used in Section \ref{sec:failure}. 
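Primality can be checked by enumeration. A Python sketch over $P = 2^{\bf 2}$, with systems represented by their fixed-point sets as licensed by Theorem \ref{thm:iso}:

```python
from itertools import chain, combinations

# States: subsets of {a, b}; systems: meet-closed sets containing the top.
P = [frozenset(s) for s in chain.from_iterable(
    combinations('ab', r) for r in range(3))]
TOP = frozenset('ab')

def meet_closed(S):
    return all(x & y in S for x in S for y in S)

systems = [set(S) for S in chain.from_iterable(
    combinations(P, r) for r in range(len(P) + 1))
    if TOP in S and meet_closed(S)]
assert len(systems) == 7  # the seven nodes of the Hasse diagram of L_{2^2}

non_prime = [S for S in systems
             if not meet_closed([x for x in P if x not in S])]
assert non_prime == [{frozenset(), TOP}]  # only {ab, AB} fails primality
```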
All the systems are prime (i.e. have the set of non-fixed points closed under $\cap$) except for $\{ab,AB\}$. The lattice $\L_{2^{\bf 2}}$ is (upper) semimodular as a pair of systems is covered by their join ($+$) whenever they cover their meet ($\cdot$). All pairs form modular pairs except for the pair $\{Ab,AB\}$ and $\{aB,AB\}$. The lattice $\L_{2^{\bf 2}}$ is graded, and the rank of a system is equal to the number of its non-fixed points, as can be checked. \subsubsection*{On Atoms and Join-irreducible elements} An atom is an element that covers the minimal element of the lattice. In $\L_{2^{\bf 2}}$, those are $\{ab,aB,AB\}$ and $\{ab,Ab,AB\}$. A join-irreducible element is an element that cannot be written as a join of \emph{other} elements. An atom is necessarily a join-irreducible element; however, the converse need not be true. The systems $\{aB,AB\}$ and $\{Ab,AB\}$ are join-irreducible but are not atoms. The join-irreducible elements in $\L_P$ may be identified with the pairs $(s,t) \in P\times P$ such that $t$ covers $s$. They can be identified with the edges in the Hasse diagram of $P$. For a covering pair $(s,t)$, define $f_{st}$ to be the least map such that $s \mapsto t$. Then $f_{st}$ is join-irreducible for each $(s,t)$, and every element of $\L_P$ is a join of elements in $\{f_{st}\}$. \begin{prop} The map $f_{st}$ is prime for every $(s,t)$. \end{prop} \begin{proof} The map $f_{st}$ is the least map such that $s \mapsto t$. It follows that $s$ is the least non-fixed point of $f_{st}$, and that every element greater than $t$ belongs to $\Phi f_{st}$. If $a, b \notin \Phi f_{st}$, then their meet $a\wedge b$ is necessarily not greater than $t$, for otherwise we get $a,b \in \Phi f_{st}$. If $a\wedge b$ is comparable to $t$, then $a\wedge b = s \notin \Phi f_{st}$. If $a\wedge b$ is non-comparable to $t$, then $(a \wedge b) \wedge t = s$, and so again $a \wedge b \notin \Phi f_{st}$.
\end{proof} \subsubsection*{On Coatoms and Meet-irreducible Elements} A coatom is an element that is covered by the maximal element of the lattice. In $\L_{2^{\bf 2}}$, those are $\{ab,AB\}$, $\{aB,AB\}$ and $\{Ab,AB\}$. In general, the coatoms of $\L$ are exactly the systems $f$ where $|\Phi f| = 2$. Note that the maximal element $\hat{p}$ of $P$ is always contained in $\Phi f$. \begin{prop} Every $f \in \L_P$ is a meet of coatoms. \end{prop} \begin{proof} For each $a \in P$, let $c_a \in \L$ be such that $\Phi c_a = \{a,\hat{p}\}$. If $\Phi f = \{a,b,\cdots, h\}$, then $f = c_a \cdot c_b \cdot \cdots \cdot c_h$. \end{proof} Such lattices are called co-atomistic. The coatoms, in this case, are the only elements that cannot be written as a meet of \emph{other} elements. \section{On Least Fixed-Points and Cascade Effects} \label{sec:eval} The systems are defined as maps $P \rightarrow P$ taking in an input and yielding an output. The interaction of those systems (via the operator $+$) however does not depend on functional composition or application. It is only motivated by them, and the input-output functional structure has been discarded throughout the analysis. It will then also be more insightful not to view $f(a)$ as functional application. Such a change of viewpoint can be achieved via a good use of least fixed-points. The change of view will also lead us to a more general notion of cascade effects. We may associate to every $a \in P$ a system $\free(a): - \mapsto - \vee a$ in $\mathcal{L}_P$. We can then interpret $f(a)$ differently: \begin{prop} The element $f(a)$ is the least fixed-point of $f + \free(a)$. \end{prop} \begin{proof} We have $f(a) = \wedge \{p \in \Phi(f) : a \leq p\} = \wedge \{p \in \Phi(f) \cap \Phi(\free(a))\}$. The result follows as $\Phi(f) \cap \Phi(\free(a)) = \Phi(f + \free(a))$. \end{proof} The map $\free : P \rightarrow \mathcal{L}_P$ is order-preserving. It also preserves joins.
Indeed, if $a, b \in P$, then $\free(a) + \free(b) = \free(a \vee b)$. Conversely, as each map in $\mathcal{L}_P$ admits a least fixed-point, we define $\eval : \mathcal{L}_P \rightarrow P$ to be the map sending a system to its least fixed-point. The map $\eval$ is also order-preserving, and we obtain: \begin{thm} If $a \in P$ and $f \in \mathcal{L}_P$, then: \begin{equation*} \free(a) \leq f \quad \text{ if, and only if, } \quad a \leq \eval(f) \end{equation*} \end{thm} \begin{proof} If $\free(a) \leq f$, then $a \leq b$ for every fixed-point $b$ of $f$. Conversely, if $a \leq \eval(f)$, then $\{b \in P : a \leq b\}$ contains $\Phi(f)$, the set of fixed points of $f$. \end{proof} The pair of maps $\free$ and $\eval$ are said to be adjoint, and form a Galois connection (see e.g., \cite{BIR1967} Ch.\ V, \cite{EVE1944}, \cite{ORE1944} and \cite{ERN1993} for a treatment on Galois connections). The intuition of cascading phenomena can be seen to partly emerge from this Galois connection. By duality, the map $\eval$ preserves meets. Indeed, the least fixed-point of $f \cdot g$ is the meet of the least fixed-points of $f$ and $g$. The map $\eval$ does not however always preserve joins. It is this failure that gives rise to the cascading intuition. For some pairs $f, g \in \mathcal{L}_P$, we get: \begin{equation}\label{eq:inequality} \eval(f + g) \neq \eval(f) \vee \eval(g) \end{equation} In general, two systems may interact to yield, when combined, something greater than the combination of what they yield separately. Specifically, consider $f \in \mathcal{L}_P$ and $a \in P$ such that $\eval(f) \leq a$. If $\eval(f + \free(a)) \neq \eval(f) \vee \eval(\free(a))$, then $f(a) \neq a$. In this case, the point $a$ \emph{expanded} under the map $f$, and cascading effects have thus occurred. The paper will not pursue this direction. It is extensively pursued in \cite{ADAM:Dissertation}. Also, a definition of cascade effects was already introduced in Section \ref{sec:comp}.
We thus briefly revisit it and explain the connection to the inequality presented. The inequality can be further explained by the semimodularity of the lattice, but such a link will not be pursued. \subsection{Revisiting Component Realization} Given a component realization $e_A, \cdots, e_H$ of $P$, we let $f_A,\cdots,f_H$ be a collection of maps where $f_I \leq e_I$ dictates the evolution of the state of component $I$ as a function of $P$. If $f = f_A + \cdots + f_H$, then recall from Section \ref{sec:comp} that \emph{cascade effects} are said to occur when $f \cdot e_I \neq f_I$ for some $I$. We will illustrate how this definition links to the inequality obtained from the Galois connection. For simplicity, we consider only two components $A$ and $B$. Let $e_A,e_B$ be a component realization of $P$, and consider two maps $f_A,f_B$ where $f_I \leq e_I$. Define $f = f_A + f_B$. If $f \cdot e_A \neq f_A$, then $(f \cdot e_A)a \neq f_Aa = a$ for some fixed point $a$ of $f_A$. We then have $fa \neq f_A a \vee f_B a$. As $fa = (f \cdot e_A)a \vee (f \cdot e_B)a$ by Proposition \ref{pro:joincomponent}, we get: \begin{equation} \label{eq:connection} \eval\big( f_A + \free(a) + f_B + \free(a) \big) \neq \eval\big( f_A + \free(a)\big) \vee \eval\big( f_B + \free(a)\big) \end{equation} Conversely, if Equation \ref{eq:connection} holds, then either $f_A a \neq (f\cdot e_A) a$ or $f_B a \neq (f\cdot e_B) a$. \subsection{More on Galois Connections} The inequality in Equation \ref{eq:inequality} gives rise to cascading phenomena in our situation. It is induced by the Galois connection between $\free$ and $\eval$, and the fact that $\eval$ does not preserve joins. The content of the lattices can however be changed, keeping the phenomenon intact. Both the lattice of systems $\mathcal{L}_P$ and the lattice of states $P$ can be replaced by other lattices. 
If we can set up another such inequality for the other lattices, then we would have created cascade effects in a different situation. We refer the reader to \cite{ADAM:Dissertation} for a thorough study along those lines. The particular class of systems studied in this paper is however somewhat special. Indeed, every system itself arises from a Galois connection. Thus, if we focus on a particular system $f$, then we get a Galois connection induced by the inclusion: \begin{equation*} \Phi(f) \rightarrow P \end{equation*} Indeed, cascade effects emerge whenever $a \vee_{\Phi(f)} b \neq a \vee_P b$. This direction will not be further discussed in the paper. This double presence of Galois connections seems to be merely a coincidence. It implies however that we can recover cascading phenomena in our situation at two levels: either at the level of systems interacting or at the level of a single system with its states interacting. \subsection{Higher-Order Systems} For a lattice $P$, we constructed the lattice $\mathcal{L}_P$. By iterating the construction once, we may form $\mathcal{L}_{\mathcal{L}_P}$. Through several iterations, we may recursively form $\mathcal{L}^{m+1}_P = \mathcal{L}_{\mathcal{L}^m_P}$ with $\mathcal{L}^0_P = P$. Systems in $\mathcal{L}^m_P$ take into account nested if-then statements. The construction induces a map $\eval: \mathcal{L}^{m+1}_P \rightarrow \mathcal{L}^{m}_P$, sending a system to its least fixed-point. We thus recover a sequence: \begin{equation*} \cdots \rightarrow \mathcal{L}^{3}_P \rightarrow \mathcal{L}^{2}_P \rightarrow \mathcal{L}_P \rightarrow P \end{equation*} The $\free$ map construction induces an inclusion $\mathcal{L}^{m}_P \rightarrow \mathcal{L}^{m+1}_P$ for every $m$. We may then define an infinite lattice $\mathcal{L}_P^{\infty} = \bigcup^\infty_{m=1} \mathcal{L}^m_P$ that contains all finite higher-order systems.
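The iterated construction can be computed directly on small examples. The sketch below (our own encoding; \texttt{systems\_of} and \texttt{close} are hypothetical helper names) builds $\mathcal{L}_P$ for a three-element chain $P$, which is a copy of $2^{\bf 2}$, and then builds $\mathcal{L}^2_P = \mathcal{L}_{\mathcal{L}_P}$, recovering the seven-element lattice of Subsection \ref{sec:addRemarks}:

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def close(phi, meet):
    # close a set of lattice elements under the given meet operation
    phi = set(phi)
    while True:
        extra = {meet(a, b) for a in phi for b in phi} - phi
        if not extra:
            return frozenset(phi)
        phi |= extra

def systems_of(elements, meet, top):
    # systems over a finite lattice, identified with their fixed-point
    # sets: subsets containing the top element and closed under meet
    return [s for s in powerset(elements) if top in s and close(s, meet) == s]

# Level 1: systems over the chain 0 < 1 < 2 (meet = min, top = 2).
L1 = systems_of([0, 1, 2], min, 2)

# Level 2: the "states" are now the systems in L1.  The meet of two such
# states is the closure of the union of their fixed points, and the top
# state is the system fixing only the top of the chain.
L2 = systems_of(L1, lambda f, g: close(f | g, min), frozenset({2}))

assert len(L1) == 4   # L_P is a copy of 2^2
assert len(L2) == 7   # L_{L_P} is the seven-element lattice shown earlier
```

The counts $|\mathcal{L}_P| = 4$ and $|\mathcal{L}^2_P| = 7$ for this chain are a small concrete instance of the tower $\mathcal{L}^m_P$; nothing in the sketch depends on $P$ being a chain.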
We may also decide to complete $\mathcal{L}_P^{\infty}$ in a certain sense to take into account infinite recursion. Such ideas have recurred extensively in denotational semantics and domain theory (see e.g., \cite{SCO1972}, \cite{SCO1972A} and \cite{SCO1972B}) to give semantics to programming languages, notably the $\lambda$-calculus. This idea will however not be further pursued in this paper. \section{Connections to Formal Methods}\label{sec:formal} The ideas developed in this paper intersect with ideas in formal methods and semantics of languages. To clarify some intersections, we revisit the axioms. A map $f: P \rightarrow P$ belongs to $\mathcal{L}_P$ if it satisfies: \begin{description} \item[A.1] If $a \in P$, then $a \leq fa$. \item[A.2] If $a, b \in P$ and $a \leq b$, then $fa \leq fb$. \item[A.3] If $a \in P$, then $ffa = fa$. \end{description} The axiom A.2 may generally be replaced by one requiring the map to be \emph{Scott-continuous}, see e.g. \cite{SCO1972B} for a definition. Every Scott-continuous function is order-preserving, and in the case of finite lattices (as assumed in this paper) the converse is true. The axiom A.3 may then be discarded, and fixed points can generally be recovered by successive iterations of the map (cf.\ the Kleene fixed-point theorem). The axiom A.1 equips the systems with their expansive nature. The more important axiom is A.2 (or potentially Scott-continuity), which adapts the systems to the underlying order. Every map satisfying A.2 can be \emph{closed} into a map satisfying A.1 and A.2, by sending $f(-)$ to $- \vee f(-)$. The least fixed-points of both coincide. The interplay of A.1 and A.2 ensures that concurrency of update rules in the systems does not produce any conflicts. The argument is illustrated in Proposition \ref{pro:join}, and is further fully refined in Subsection \ref{ap:motivation}. The systems can however capture concurrency issues by considering power sets.
As an example, given a Petri net, we may construct a map sending a set of initial token distributions to the set of all possible token distributions that can be \emph{caused} by such an initial set. This map is easily shown to satisfy the axioms A.1, A.2 and A.3. A more elaborate interpretation of the state space, potentially along the lines of event structures as described in \cite{PLO1981}, may lead to further connections for dealing with concurrency issues. The interplay of lattices and least fixed-points appears throughout efforts in formal methods and semantics of languages. We illustrate the relevance of A.1 and A.2 via the simple two-line program \texttt{Prog}:
\begin{verbatim}
1. while ( x > 5 ) do
2.     x := x - 1;
\end{verbatim}
We define a state of this program to be an element of $\Sigma := \mathbb{N} \times \{in_1,out_1,in_2,out_2\}$. A number in $\mathbb{N}$ denotes the value assigned to \texttt{x}, and $in_i$ (resp.\ $out_i$) indicates that the program is entering (resp.\ exiting) line $i$. For instance, $(7,out_2)$ denotes the state where \texttt{x} has value $7$ right after executing line $2$. We define a finite execution trace of a program to be a sequence of states that can be reached by some execution of the program in finite steps. A finite execution trace is then an element of $\Sigma^*$, the semigroup of all finite strings over the alphabet $\Sigma$. Two elements $s, s' \in \Sigma^*$ can be concatenated via $s \circ s'$.
We then define $f : 2^{\Sigma^*} \rightarrow 2^{\Sigma^*}$ such that: \begin{align}\label{eq:trace} B \mapsto f(B) := & \big\{(n,in_1) : n \in \mathbb{N} \big\}\nonumber\\ \cup & \big\{ tr\circ(n,out_1) : tr \in B \text{ and } tr \in \Sigma^* \circ (n,in_1) \big\}\nonumber\\ \cup & \big\{ tr\circ(n,in_2) : tr \in B \text{ and } tr \in \Sigma^* \circ(n,out_1) \text{ and } n>5 \big\}\\ \cup & \big\{ tr\circ(n,out_2): tr \in B \text{ and } tr \in \Sigma^* \circ(n+1,in_2)\big\}\nonumber \\ \cup & \big\{ tr\circ(n,in_1) : tr \in B \text{ and } tr \in \Sigma^* \circ(n,out_2)\big\}\nonumber \end{align} The map $f$ satisfies A.1 and A.2. If $B_{sol} \subseteq \Sigma^*$ is the set of finite execution traces, then $B_{sol} \supseteq f(B_{sol})$. Furthermore, $B_{sol}$ is the least fixed-point of $f$. This idea is pervasive in obtaining semantics of programs. The maps $f$, in deriving semantics, are however typically only considered to be order-preserving (or Scott-continuous). The connection to using maps satisfying both A.1 and A.2 somewhat hinges on the fact that for every order-preserving map $h$, the least fixed-point of $h(-)$ and $-\vee h(-)$ coincide. The map $f$ may also be closed under A.3 via successive iterations, without modifying the least fixed-point, to yield a map in $\mathcal{L}_{2^{\Sigma^*}}$. We refer the reader to \cite{NIE1999} Ch.\ 1 for an overview of various methods around the example we provide, the work on abstract interpretation (see e.g., \cite{COU1977} and \cite{COU2000}) for more details on traces and semantics, and the works \cite{SCO1972}, \cite{SCO1972A} and \cite{SCO1972B} for the relevance of A.2 (or Scott-continuity) in denotational semantics. In a general poset, not necessarily Boolean, we recover the form of M.4. Galois connections also appear extensively in abstract interpretation. The methods of abstract interpretation can be enhanced and put to use in approximating (and further understanding) the systems in this paper.
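The construction of Equation \ref{eq:trace} can be simulated directly. The sketch below is our own encoding: we bound the values of \texttt{x} (an assumption, so that the least fixed point is a finite set) and iterate $B \mapsto B \cup f(B)$, which, as noted above, has the same least fixed point as $f$:

```python
N = 8  # assumed bound on the initial values of x, to keep the trace set finite

def f(B):
    # one application of the map of Equation (eq:trace), restricted to x < N
    out = {((n, 'in1'),) for n in range(N)}
    for tr in B:
        n, tag = tr[-1]
        if tag == 'in1':
            out.add(tr + ((n, 'out1'),))
        elif tag == 'out1' and n > 5:
            out.add(tr + ((n, 'in2'),))        # the loop guard x > 5 holds
        elif tag == 'in2':
            out.add(tr + ((n - 1, 'out2'),))   # executing line 2: x := x - 1
        elif tag == 'out2':
            out.add(tr + ((n, 'in1'),))        # back to the loop header
    return out

# Least fixed point of B -> B ∪ f(B), by Kleene iteration from the bottom.
B = set()
while B | f(B) != B:
    B = B | f(B)

# The complete run of Prog starting from x = 7.
run7 = ((7, 'in1'), (7, 'out1'), (7, 'in2'), (6, 'out2'), (6, 'in1'),
        (6, 'out1'), (6, 'in2'), (5, 'out2'), (5, 'in1'), (5, 'out1'))
assert run7 in B
# No trace extends this run: the loop guard fails at x = 5.
assert all(tr[:len(run7)] != run7 or tr == run7 for tr in B)
```

The resulting set `B` plays the role of $B_{sol}$ for the bounded state space; each member is a prefix of an actual execution of \texttt{Prog}.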
Various ideas present in this paper may be further linked to other areas. That ought not to be surprising as the axioms are very minimal and natural. From this perspective, the goal of this work is partly to guide efforts, and very effective tools, in the formal methods community into dealing with cascade-like phenomena. \subsection{Cascading Phenomena in this Context} We also illustrate cascade effects, as described in Section \ref{sec:eval}, in the context of programs. Consider another program \texttt{Prog'}:
\begin{verbatim}
1. while ( x is odd ) do
2.     x := x - 1;
\end{verbatim}
Each of \texttt{Prog} and \texttt{Prog'} ought to be thought of as a partial description of a \emph{larger} program. Their interaction yields the simplest program allowing both descriptions, namely:
\begin{verbatim}
1. while ( x > 5 ) or ( x is odd ) do
2.     x := x - 1;
\end{verbatim}
Let $f$ and $g$ be the maps (satisfying A.1 and A.2) attributed to \texttt{Prog} and \texttt{Prog'} respectively, as done along the lines of Equation \ref{eq:trace}. The set of finite execution traces of the combined program is then the least fixed-point of $f \vee g$, where $(f \vee g)B = fB \cup gB$. Note that $f \vee g$ then satisfies both A.1 and A.2. Cascade effects then appear upon interaction. The interaction of the program descriptions is bound to produce new traces that cannot be accounted for by the traces of the separate programs. Indeed, every trace containing: $$(5,out_2)\circ(5,in_1)\circ(5,out_1)\circ(5,in_2)$$ allowed in the combined program is allowed in neither of the separate programs. Formally, define a map $\eval$ that sends a function $2^{\Sigma^*} \rightarrow 2^{\Sigma^*}$ satisfying A.1 and A.2 to its least fixed point. The map $\eval$ is well defined as $2^{\Sigma^*}$ is a complete lattice.
We then get an inequality: \begin{equation*} \eval( f \vee g) \neq \eval(f) \cup \eval(g) \end{equation*} We may also link back to systems in $\mathcal{L}$ and the definition of cascade effects provided for them. If $\bar{f}$ and $\bar{g}$ denote the closure of $f$ and $g$ in $2^{\Sigma^*}$ to satisfy A.3 (e.g. via iterative composition in the case of Scott-continuous functions), then the closure of $f \vee g$ corresponds to $\bar{f} + \bar{g}$. Of course, for every $h$ satisfying A.1 and A.2, both $h$ and $\bar{h}$ have the same least fixed-point. We then have: \begin{equation*} \eval( \bar{f} + \bar{g}) \neq \eval( \bar{f}) \cup \eval(\bar{g}) \end{equation*} The paper will mostly be concerned with properties of the systems in $\mathcal{L}$. The direction of directly studying the inequality will not be pursued in the paper. It is extensively pursued in \cite{ADAM:Dissertation}. \section{Shocks, Failure and Resilience} \label{sec:failure} The theory will be interpreted within the context of cascading failure. The informal goal is to derive conditions and insight determining whether or not a system hit by a shock would fail. Such a statement requires at least three terms---\emph{hit}, \emph{shock} and \emph{fail}---to be defined. The situation, in the case of the models M.i, may be interpreted as follows. Some components (or agents) initially fail (or become infected). The dynamics then lead other components (or agents) to fail (or become infected) in turn. The goal is to assess the conditions under which a large fraction of the system's components fail. Such a state may be reached even when a very small number of components initially fail. This section aims to quantify and understand the resilience of the system to initial failures. Not only may targeted component failures be inflicted on the system, but external (exogenous) rules may also act as shocks providing conditional failures in the systems. A shock in this respect is to be regarded as a system.
This remark is the subject of the next subsection. \subsection{A Notion of Shock} Enforcing a \emph{shock} on a system would intuitively yield an evolved system incorporating the effects of the shock. Forcing such an intuition onto the identity system leads us to consider shocks as systems themselves. Any shock $s$ is then an element of $\L_P$. Two types of shocks may further be considered. \emph{Push shocks} evolve the minimal state $\check{p}$ to some state $a$. \emph{Pull shocks} evolve some state $a$ directly to $\hat{p}$. Allowing arbitrary $+$ and $\cdot$ combinations of such systems generates $\L$. The set of shocks is then considered to be the set $\L$. Shocks trivially inherit all properties of systems, and can be identified with their fixed points as subsets of $P$. Finally, a shock $s$ \emph{hits} a system $f$ to yield the system $f + s$. \begin{exa} One example of shocks (realized through the form of M.i) inserts elements into the initial set $X_0$ to obtain $X_0'$. This shock corresponds to the (least) map in $\L$ that sends $\emptyset$ to $X_0'$. Equivalently, this shock has as a set of fixed points the principal (upper) order filter of the lattice $P$ generated by the set $X_0'$ (i.e. the fixed points are all, and only, the sets containing $X_0'$). Further shocks may be identified with decreasing $k_i$ or adding an element $j$ to $N_i$ for some $i$. \end{exa} \subsubsection*{Remark} It will often be required to restrict the space of shocks. There is no particular reason to do so now, as any shock can be well justified, for instance, in the setting of M.3. We may further wish to keep the generality to preserve symmetry in the problem, just as we are not restricting the set of systems. \subsection{A Notion of Failure} A shock is considered to fail a system if the mechanisms of the shock combined with those of the system evolve the most desirable state to the least desirable state. Shock $s$ fails system $f$ if, and only if, $s+f = 1$.
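These notions are easy to experiment with on the running example. In the sketch below (our own encoding; the helper names are hypothetical), systems and shocks over $2^{\{A,B\}}$ are identified with their fixed-point sets, a push shock is generated via $\free$, and hitting a system with it produces the system $1$, i.e. a failure:

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# States: subsets of failed components, ordered by inclusion.
P = powerset({'A', 'B'})
TOP = frozenset({'A', 'B'})
ONE = frozenset({TOP})               # the system 1 fixes only the top state

def free(a):
    # the least system evolving every state b to b ∨ a: it fixes
    # exactly the states above a (a push shock, in the terminology above)
    return frozenset(p for p in P if a <= p)

def hit(f, s):
    # a shock s hits a system f: the fixed points of f + s
    return f & s

f = frozenset({frozenset(), TOP})    # the system with fixed points {ab, AB}
s = free(frozenset({'B'}))           # shock: component B fails outright
assert hit(f, s) == ONE              # s fails f, i.e. f + s = 1
```

The shock's fixed points form the principal upper filter generated by $\{B\}$, exactly as in the example above, and intersecting it with $\Phi f$ leaves only the top state.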
In the context of M.i, failure occurs when $X_{|S|}$ contains all the elements of $S$. This notion of failure is not restrictive as it can simulate other notions. As an example, for $C \subseteq P$, define $u_C \in \L$ to be the least system that maps $a$ to $\hat{p}$ if $a\in C$. Suppose shock $s$ ``fails'' $f$ if $(f+s) a \geq c$ for some $c \in C$ and all $a$. Then $s$ ``fails'' $f$ if, and only if, $f + s + u_C = 1$. The notion may further simulate notions of failure arising from monotone propositional sentences. If we suppose that $(s_1,s_2,s_3)$ ``fails'' $(f_1,f_2,f_3)$ if ($s_1$ fails $f_1$) and (either $s_2$ fails $f_2$ or $s_3$ fails $f_3$), then there is a map $\psi$ into $\L$ such that $(s_1,s_2,s_3)$ ``fails'' $(f_1,f_2,f_3)$ if, and only if, $\psi(s_1,s_2,s_3) + \psi(f_1,f_2,f_3) = 1$. We can generally construct a monomorphism $\psi : \L_P {\times} \L_Q \rightarrow \L_{P{\times}Q}$ such that $s + f = 1$ and (or) $t + g = 1$ if, and only if, $\psi(s,t) + \psi(f,g) =~1$. \subsection{Minimal Shocks and Weaknesses of Systems} We set out to understand the class of shocks that fail a system. We define the collection $\S_f$: \begin{equation*} \S_f = \{ s \in \L : f + s = 1\} \end{equation*} As a direct consequence of Theorem \ref{thm:iso}, we get: \begin{cor}\label{pro:failcond} Shock $s$ belongs to $\S_f$ if, and only if, $\Phi f \cap \Phi s = \{\hat{p}\}$. \end{cor} For instances of M.i, it is often a question as to whether or not there is some $X_0$ with at most $k$ elements, where the final set $X_{|S|}$ contains all the elements of $S$. Such a set exists if, and only if, for some set $X$ of size $k$, all sets containing it are non-fixed points (with the exception of $S$). If $s \leq s'$ and $s \in \S_f$, then $s' \in \S_f$. Thus, an understanding of $\S_f$ may come from an understanding of its minimal elements.
We then focus on the \emph{minimal shocks} that fail a system $f$, and denote the set of those shocks by $\check{\S}_f$: \begin{equation*} \check{\S}_f = \{ s \in \S_f : \text{for all }t \in \S_f, \text{ if }t \leq s \text{ then } t=s\} \end{equation*} Recall that a map $f \in \L_P$ is called \emph{prime} if $P - \Phi f$ is closed under~$\wedge$. A prime map $f$ is naturally complemented in the lattice, and we define $\neg f$ to be (the prime map) such that $\Phi (\neg f) = P - (\Phi f - \{\hat{p}\})$. If $f$ is prime, then $\neg \neg f = f$. \begin{prop} The system $f$ admits a unique minimal shock that fails it, i.e. $|\check{\S}_f| = 1$ if, and only if, $f$ is prime. \end{prop} \begin{proof} If $f$ is prime, then $\neg f \in \S_f$. The map $\neg f$ is also the unique minimal shock as if $s \in \S_f$, then $\Phi s \subseteq \Phi \neg f$ by Corollary \ref{pro:failcond}. Conversely, suppose $f$ is not prime. Then $a = b \wedge c$ for some $a \in \Phi f$ and $b, c \notin \Phi f$. Define $b' = fb$ and $c' = fc$ and consider the least shocks $s_0, s_{b'}$ and $s_{c'}$ such that $s_0\check{p} = a, s_{b'} b' = \hat{p}$ and $s_{c'} c' = \hat{p}$. Furthermore, define $s_b$ and $s_c$ such that $s_b a = b$ and $s_c a = c$. Then $b \in \Phi s_b$ and $c \in \Phi s_c$. Finally, $s_0 + s_b + s_{b'}$ and $s_0 + s_c + s_{c'}$ belong to $\S_f$, but their meet is not in $\S_f$ as $a$ is a fixed point of $(s_0 + s_b + s_{b'}) \cdot (s_0 + s_c + s_{c'})$. This contradicts the uniqueness of a minimal element of $\S_f$. \end{proof} As an example, consider an instance of M.1 where ``the underlying graph is undirected'' i.e. $i \in N_j$ if, and only if, $j \in N_i$. Define $f$ to be the map $X_0 \mapsto X_{|S|}$. If $f(\emptyset) = \emptyset$ and $f(S-\{i\}) = S$ for all $i$, then $|\check{\S}_f|\neq 1$, i.e., there are at least two minimal shocks that fail $f$. Indeed, consider a minimal set $X$ such that $fX \neq X$. If $Y = (X \cup N_i) - \{i\}$ for some $i \in X$, then $fY \neq Y$.
However, $f (X\cap Y) = X \cap Y$ by minimality of~$X$. \begin{thm} If $s$ belongs to $\check{\S}_f$, then $s$ is prime. \end{thm} \begin{proof} Suppose $s$ is not prime. Then, there exists a minimal element $a = b \wedge c$ such that $a \in \Phi s$ and $b,c \notin \Phi s$. We consider $(b,c)$ to be \emph{minimal} in the sense that for $(b',c')\neq (b,c)$, if $b' \wedge c' = a$, $b'\leq b$ and $c' \leq c$ then either $b' \in \Phi s$ or $c' \in \Phi s$. As $a \in \Phi s$ and $s \in \S_f$, it follows that $a \notin \Phi f$. Therefore, at least one of $b$ or $c$ is not in $\Phi f$. Without loss of generality, suppose that $b \notin \Phi f$. If for each $x \in \Phi s$ non-comparable to $b$, we show that $b \wedge x \in \Phi s$, then it would follow that $s$ is not minimal as $\Phi s \cup \{b\}$ is closed under $\wedge$ and would constitute a shock $s' \leq s$ that fails $f$. Consider $x \in \Phi s$, and suppose $b \wedge x \notin \Phi s$. If $a \leq x$, then we get $(b \wedge x) \wedge c = a$ contradicting the minimality of $(b,c)$. If $a$ and $x$ are not comparable, then $a \wedge x \neq a$. But $a \wedge x \in \Phi s$ and $a \wedge x = (b \wedge x) \wedge c$ with both $(b \wedge x)$ and $c$ not in $\Phi s$, contradicting the minimality of $a$. \end{proof} Dually, we define the set of prime systems \emph{contained} in $f$. \begin{equation*} \mathcal{W}_f = \{ w \leq f : w \text{ is prime}\} \end{equation*} \begin{prop} If $f \in \L$ and $\mathcal{W}_f = \{w_1, \cdots, w_m\}$, then $f = w_1 + \cdots + w_m$. \nonumber \end{prop} \begin{proof} All join-irreducible elements of $\L$ are prime (see Subsection \ref{sec:addRemarks}). Therefore $\mathcal{W}_f$ contains all join-irreducible elements less than $f$, and $f$ is necessarily the join of those elements. \end{proof} Keeping only the maximal elements of $\mathcal{W}_f$ is enough to reconstruct $f$. 
We define: \begin{equation*} \hat{\mathcal{W}}_f = \{ w \in \mathcal{W}_f : \text{for all }v \in \mathcal{W}_f, \text{ if }w \leq v \text{ then } v=w\} \end{equation*} \begin{prop} The operator $\neg$ maps $\check{\S}_f$ to~$\hat{\mathcal{W}}_f$ bijectively. \end{prop} \begin{proof} If $f$ is prime, then $\neg\neg f = f$. It is therefore enough to show that if $s \in \check{\S}_f$, then $\neg s \in \hat{\mathcal{W}}_f$ and that if $w \in \hat{\mathcal{W}}_f$, then $\neg w \in \check{\S}_f$. For each $s \in \check{\S}_f$, as $\neg s \leq f$, there is a $w \in \hat{\mathcal{W}}_f$ such that $\neg s \leq w$. Then $\neg w \leq s$, and so $s = \neg w$ as $s$ is minimal. By symmetry we get the result. \end{proof} We will refer to prime maps in $\mathcal{W}_f$ as \emph{weaknesses} of $f$. Every system can be decomposed injectively into its maximal weaknesses, and to each of those weaknesses corresponds a unique minimal shock that leads a system to failure. A minimal shock fails a system because it complements one maximal weakness of the system. Furthermore, whenever an arbitrary shock $s$ fails $f$, it is because a prime subshock $s'$ of $s$ complements a weakness $w$ in $f$. \subsection{\texorpdfstring{$\mu$-Rank}{mu-Rank}, Resilience and Fragility} We may wish to quantify the \emph{resilience} of a system. One interpretation of it may be the minimal amount of \emph{effort} required to fail a system. The word \emph{effort} presupposes a mapping that assigns to each shock some magnitude (or energy). As shocks are systems, such a mapping should coincide with one on systems. Let $\mathbb{R}^+$ denote the non-negative reals. We expect a notion of magnitude $r : \L \rightarrow \mathbb{R}^+$ on the systems to satisfy two properties. \begin{description} \item[R.1] $r(f) \leq r(g)$ if $f \leq g$ \item[R.2] $r(f + g) = r(f) + r(g) - r(f \cdot g)$ if $(f,g)$ is a modular pair. \end{description} The less desirable a system is, the higher the magnitude the system has.
It is helpful to informally think of a modular pair $(f,g)$ as a pair of systems that do not \emph{interfere} with each other. In such a setting, the magnitude of the combined system adds up the magnitudes of the subsystems and removes that of the common part. The rank function $\rho$ of $\L$ necessarily satisfies R.1 and R.2 as $\L$ is semimodular. It can also be checked that, for any additive map $\mu : 2^P \rightarrow \mathbb{R}^+$, the map $f \mapsto \mu(P - \Phi f)$ satisfies the two properties. Thus, measures $\mu$ on $P$ can prove to be a useful source for maps capturing magnitude. In fact, any notion of magnitude satisfying R.1 and R.2 is necessarily induced by a measure on the state space. \begin{thm} If $r$ is a map satisfying R.1 and R.2, then there exists an additive map $\mu : 2^P \rightarrow \mathbb{R}^+$ such that $r(f) = \mu(P - \Phi f) + r(0)$. \end{thm} \begin{proof} A coatom in $\L$ is an element covered by the system $1$. For each $f$, there is a sequence of coatoms $c_1,\cdots,c_m \in \L$ such that if $f_i = c_1 \cdot \cdots \cdot c_i$, then $(f_i,c_{i+1})$ is a modular pair, $f_i + c_{i+1} = 1$ and $f_m = f$. It then follows by R.2 that $r(f_{i} + c_{i+1}) = r(f_i) + r(c_{i+1}) - r(f_i \cdot c_{i+1})$. Therefore $r(f) = r(1) - \sum_{i=1}^m \big( r(1) - r(c_i) \big)$. Let $c_a$ be the coatom with $a \in \Phi c_a$, and define $\mu(\{a\}) = r(1) - r(c_a)$ and $\mu(\{\hat{p}\}) = 0$. It follows that $r(0) = r(1) - \mu(P)$ and so $r(f) = r(0) + \mu(P) - \mu(\Phi f)$. Equivalently $r(f) = \mu(P - \Phi f) + r(0)$. \end{proof} As it is natural to provide the identity system $0$ with a zero magnitude, we consider only maps $r$ additionally satisfying: \begin{description} \item[R.3] $r(0) = 0.$ \end{description} Let $r$ be a map satisfying R.1, R.2 and R.3 induced by the measure $\mu$. If $\mu S = |S|$, then $r$ is simply the rank function $\rho$ of $\L$. We thus term $r$ (for a general $\mu$) as a $\mu$-rank on $\L$.
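The properties R.1 and R.2 can be verified exhaustively on the running example. The sketch below (our encoding, with an arbitrary choice of positive weights playing the role of $\mu$) builds the $\mu$-rank $r(f) = \mu(P - \Phi f)$ over $\L_{2^{\bf 2}}$ and checks monotonicity, and the exchange identity on modular pairs, using the characterization $\Phi(f \cdot g) = \Phi f \cup \Phi g$ of modular pairs from the proof above:

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

P = powerset({'A', 'B'})
TOP = frozenset({'A', 'B'})

def meet_closed(phi):
    return all(a & b in phi for a in phi for b in phi)

def closure(phi):
    phi = set(phi)
    while True:
        extra = {a & b for a in phi for b in phi} - phi
        if not extra:
            return frozenset(phi)
        phi |= extra

systems = [s for s in powerset(P) if TOP in s and meet_closed(s)]

# An additive measure mu on P from arbitrary positive weights; the weight
# of the top state never matters, since every system fixes it.
weight = {p: 1 + len(p) for p in P}
def mu(S):  return sum(weight[p] for p in S)
def r(f):   return mu(set(P) - f)          # the mu-rank of a system

for f in systems:
    for g in systems:
        if f >= g:                         # Phi f ⊇ Phi g, i.e. f <= g as systems
            assert r(f) <= r(g)            # R.1
        if closure(f | g) == f | g:        # (f, g) is a modular pair
            assert r(f & g) + r(f | g) == r(f) + r(g)   # R.2
```

For modular pairs R.2 reduces to inclusion-exclusion for the additive measure, which is why the identity holds exactly rather than as an inequality.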
The notion of $\mu$-rank is \emph{similar} to that of a norm as defined on Banach spaces. Scalar multiplication is not defined in this setting, and does not translate (directly) to the algebra presented here. However, the $\mu$-rank does give rise to a metric on $\L$. \begin{exa} Let $f$ be the system derived from an instance $(V,A)$ in M.0, and let $\mu$ be the counting measure on $2^V$, i.e., $\mu S = |S|$. If $A$ is symmetric, then the system $f$ has $2^c$ fixed points where $c$ is the number of connected components in the graph. The $\mu$-rank of $f$ is then $2^{|V|} - 2^c$. \end{exa} Let $r$ be a $\mu$-rank. The quantity we wish to understand (termed \emph{resilience}) would be formalized as follows: \begin{equation*} \resilience(f) = \min_{s \in \S_f} r(s) \end{equation*} We may dually define the following notion (termed \emph{fragility}): \begin{equation*} \fragility(f) = \max_{w \in \mathcal{W}_f} r(w) \end{equation*} \begin{prop} We have $\fragility(f) + \resilience(f) = r(1)$. \end{prop} \begin{proof} We have $\min_{s \in \check{\S}_f} r(s) = \min_{w \in \hat{\mathcal{W}}_f} r(\neg w)$ and $r(\neg w) = r(1) - r(w)$ for $w \in \hat{\mathcal{W}}_f$. \end{proof} \begin{exa} Let $f$ be the system derived from an instance $(V,A)$ in M.0, and let $\mu$ be the counting measure on $2^V$, i.e., $\mu S = |S|$. If $A$ is symmetric, then the resilience/fragility of $f$ is tied to the size of the largest connected component of the graph. Let us define $n = |V|$. If $(V,A)$ has one component, then $\resilience(f) = 2^{n-1}$. If $(V,A)$ has $m$ components of sizes $c_1 \geq \cdots \geq c_m$, then $\resilience(f) = 2^{n - 1} + 2^{n - c_1 - 1} + \cdots + 2^{n - (c_1 + \cdots + c_{m-1}) - 1}$. As $r(1) = 2^n - 1$, it follows that $\fragility(f) = 2^n - 1 - \resilience(f)$. \end{exa} The quantity we wish to understand may be either one of $\resilience$ or $\fragility$. However, the dual definition $\fragility$ puts the quantity of interest on a comparable ground with the $\mu$-rank of a system.
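Continuing the sketch (same encoding as before, with the counting measure playing the role of $\mu$), one can compute $\resilience$ and $\fragility$ for the non-prime system of the running example, confirm the duality $\fragility(f) + \resilience(f) = r(1)$, and recover the two minimal failing shocks promised for a non-prime system:

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

P = powerset({'A', 'B'})
TOP = frozenset({'A', 'B'})

def meet_closed(phi):
    return all(a & b in phi for a in phi for b in phi)

systems = [s for s in powerset(P) if TOP in s and meet_closed(s)]

def r(f):
    return len(P) - len(f)      # mu-rank for the counting measure

def prime(f):
    nonfixed = set(P) - f       # prime: the non-fixed points are meet-closed
    return all(a & b in nonfixed for a in nonfixed for b in nonfixed)

f = frozenset({frozenset(), TOP})          # fixed points {ab, AB}: not prime

fails = [s for s in systems if f & s == frozenset({TOP})]   # shocks failing f
weaknesses = [w for w in systems if w >= f and prime(w)]    # prime w <= f

resilience = min(r(s) for s in fails)
fragility = max(r(w) for w in weaknesses)

assert not prime(f)
assert resilience == 2 and fragility == 1
assert fragility + resilience == r(frozenset({TOP}))        # = r(1)
# minimal failing shocks = maximal fixed-point sets among the failing shocks
assert len([s for s in fails if not any(t > s for t in fails)]) == 2
```

Note that $\fragility(f) = 1 < 2 = r(f)$ here, an instance of the strict gap discussed next for non-prime systems.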
It is always the case that $\fragility(f) \leq r(f)$. Furthermore, equality is not met unless the system is prime. It becomes essential to quantify the inequality gap. Fragility arises only from a certain \emph{alignment} of the non-fixed points of the systems, formalized through the \emph{prime} property. Not all high-ranked systems are fragile, and combining systems need not result in fragile systems although rank is increased. It is then a question whether it is possible to combine resilient systems to yield a fragile system. To give insight into such a question, we note the following: \begin{prop} If $w \in \mathcal{W}_{f + g}$, then $w \leq u + v$ for some $u \in \mathcal{W}_f$ and $v \in \mathcal{W}_g$. \end{prop} \begin{proof} As $\neg w + f + g = 1$, it follows that $f \in \S_{\neg w + g}$. Then there is a $u \leq f$ in $\check{\S}_{\neg w + g}$. As $\neg w + u + g = 1$, it follows that $g \in {\S}_{\neg w + u}$. Then there is a $v \leq g$ in $\check{\S}_{\neg w + u}$. Finally, we have $\neg w + u + v = 1$, therefore $w \leq u + v$. \end{proof} Thus a weakness can only form when combining systems through a combination of weaknesses in the systems. The implication is as follows: \begin{cor}\label{cor:fragilityBound} We have $\fragility(f + g) \leq \fragility(f) + \fragility(g)$. \end{cor} \begin{proof} For every $w \in \mathcal{W}_{f+ g}$, we have $w \leq u + v$ for some $u \in \mathcal{W}_f$ and $v \in \mathcal{W}_g$, and so $r(w) \leq r(u + v) \leq r(u) + r(v) \leq \fragility(f) + \fragility(g)$, using the monotonicity and subadditivity of the $\mu$-rank. \end{proof} It is not possible to combine two systems with low fragility and obtain a system with a significantly higher fragility. Furthermore, we are interested in the gap $r(f + g) - \fragility(f + g)$. If $\fragility(f) \geq \fragility(g)$, then $r(f) - 2\fragility(f)$ is a lower bound on the gap. One should be careful, as such a lower bound may be trivial in some cases. If $P$ is linearly ordered, then $\fragility(f) = r(f)$ for all $f$. The bound in this case is negative. 
However, if $P$ is a Boolean lattice and $\mu S = |S|$, then $r(f) - \fragility(f)$ may be on the order of $|P| = r(1)$, with $\fragility(f) \leq 2^{(-\log|P|)/2} r(f)$. Other notions of resilience (resp.\ fragility) may be introduced. One such notion considers a convex combination of the $\mu$-ranks of the $k$ highest-ranked shocks failing a system. The notion introduced in the paper primarily serves to illustrate the type of insight our approach might yield. Any function on the minimal shocks (failing a system) is bound to translate to a dual function on weaknesses. \subsubsection*{Remark} The statement of Corollary \ref{cor:fragilityBound} may be perceived to be counterintuitive. This may be especially true in the context of cascading failure. The statement, however, should not be seen to indicate that the axioms defining a system and the dynamics preclude interesting phenomena. Indeed, it is the definition of fragility (and specifically the choice of the set of shocks over which we maximize) that gives rise to such a statement. The statement does not imply that fragility does not emerge from the combination of resilient systems, but only that we have a bound on how much fragility increases through combinations. Nor should the statement diminish the validity of the definition of fragility, as it arises naturally from the mathematical structure of the problem. Another, potentially more intuitive, statement on \emph{fragility} may however be recovered by modifying the notion of fragility (or dually the notion of resilience) as follows. We have so far considered every system to be a possible shock. Variations on the notion of resilience may be obtained by restricting the set of possible shocks. For instance, let us suppose that only systems of the form $s_a: p \mapsto p \vee a$ with $a\in P$ are possible shocks. In the case of Boolean lattices, these shocks can be interpreted as initially marking a subset of components (or agents) as failed (or infected). 
These systems correspond, via their sets of fixed points, to the principal upper order filters of the lattice $P$. The notion of resilience then relates to the minimum number of initial failures (on the level of components) that lead to the failure of the whole system (i.e., of all components). It is then rarely the case that two resilient systems, when combined, yield a resilient system. Indeed, if $a \vee b = \hat{p}$ with $a$ and $b$ distinct from $\hat{p}$, the maximum element of $P$, then both $s_a$ and $s_b$ have some resilience. The system $s_a + s_b$, however, has no resilience at all, as it maps every $p$ to the maximum element $\hat{p}$. The space of possible shocks may be modified, changing the precise definition of fragility and yielding different statements. In case there are no restrictions on shocks, we obtain Corollary \ref{cor:fragilityBound}. As a first analysis, we do not restrict shocks in this paper, for lack of a good reason to break the symmetry between shocks and systems. The non-restriction allows us to capture the notion of a prime system and attain a characterization of fragility in terms of maximal weaknesses. \subsection{Overview Through An Example (Continued)} We continue the running example. The maximal weaknesses of the system $f$ are the maximal subsystems of $f$ whose set of non-fixed points is closed under $\cap$. 
The system $f$ has two maximal weaknesses, represented as: \begin{center} \begin{tikzpicture}[scale=0.6] \node (max) at (0,3) {$\times$}; \node (a) at (1.5,1.5) {$\times$}; \node (b) at (0,1.5) {$\circ$}; \node (c) at (-1.5,1.5) {$\circ$}; \node (d) at (1.5,0) {$\times$}; \node (e) at (0,0) {$\times$}; \node (f) at (-1.5,0) {$\circ$}; \node (min) at (0,-1.5) {$\times$}; \draw (min) -- (d) -- (a) -- (max) -- (b) -- (f) (e) -- (min) -- (f) -- (c) -- (max) (d) -- (b); \draw[preaction={draw=white, -,line width=6pt}] (a) -- (e) -- (c); \node (maxmax) at (4.5,3) {$\times$}; \node (aa) at (6,1.5) {$\circ$}; \node (bb) at (4.5,1.5) {$\circ$}; \node (cc) at (3,1.5) {$\times$}; \node (dd) at (6,0) {$\circ$}; \node (ee) at (4.5,0) {$\times$}; \node (ff) at (3,0) {$\times$}; \node (minmin) at (4.5,-1.5) {$\times$}; \draw (minmin) -- (dd) -- (aa) -- (maxmax) -- (bb) -- (ff) (ee) -- (minmin) -- (ff) -- (cc) -- (maxmax) (dd) -- (bb); \draw[preaction={draw=white, -,line width=6pt}] (aa) -- (ee) -- (cc); \end{tikzpicture} \end{center} The left (resp. right) weakness corresponds to the system failing when $A$ (resp. $C$) is colored black. The left weakness is the map sending $A \mapsto ABC$ and leaving the remaining states unchanged; the right weakness is the map sending $C \mapsto ABC$ and leaving the remaining states unchanged. The system $f$ then admits two corresponding minimal shocks that fail it. These are the complements of the weaknesses in the lattice. 
\begin{center} \begin{tikzpicture}[scale=0.6] \node (max) at (0,3) {$\times$}; \node (a) at (1.5,1.5) {$\circ$}; \node (b) at (0,1.5) {$\times$}; \node (c) at (-1.5,1.5) {$\times$}; \node (d) at (1.5,0) {$\circ$}; \node (e) at (0,0) {$\circ$}; \node (f) at (-1.5,0) {$\times$}; \node (min) at (0,-1.5) {$\circ$}; \draw (min) -- (d) -- (a) -- (max) -- (b) -- (f) (e) -- (min) -- (f) -- (c) -- (max) (d) -- (b); \draw[preaction={draw=white, -,line width=6pt}] (a) -- (e) -- (c); \node (maxmax) at (4.5,3) {$\circ$}; \node (aa) at (6,1.5) {$\times$}; \node (bb) at (4.5,1.5) {$\times$}; \node (cc) at (3,1.5) {$\circ$}; \node (dd) at (6,0) {$\times$}; \node (ee) at (4.5,0) {$\circ$}; \node (ff) at (3,0) {$\circ$}; \node (minmin) at (4.5,-1.5) {$\circ$}; \draw (minmin) -- (dd) -- (aa) -- (maxmax) -- (bb) -- (ff) (ee) -- (minmin) -- (ff) -- (cc) -- (maxmax) (dd) -- (bb); \draw[preaction={draw=white, -,line width=6pt}] (aa) -- (ee) -- (cc); \end{tikzpicture} \end{center} The left (resp. right) minimal shock can be interpreted as initially coloring node $A$ (resp. node $C$) black. For a counting measure $\mu$, the $\mu$-rank of $f$ is $5$, whereas the fragility of $f$ is $3$. The resilience of $f$ in that case is $4$. For a system with non-trivial rules on the components, the lowest value of fragility attainable is $1$. It is attained when all the nodes have a threshold of $2$. The highest value attainable, however, is actually $3$. Indeed, the system would have required the same amount of effort to fail it if all thresholds where equal to $1$. Yet changing all the thresholds to $1$ would necessarily increase the $\mu$-rank to $6$. \subsection{Recovery Mechanisms and Kernel Operators} Cascade effects, in this paper, have been mainly driven by the axioms A.1 and A.2. The axiom A.1 ensures that the dynamics do not permit recovery. 
Those axioms however do not hinder us from considering situations where certain forms of recovery are permitted, e.g., when fault-protection mechanisms are built into the systems. Such situations may be achieved by dualizing A.1, and by considering multiple maps to define our fault-protected system. Specifically, we define a recovery mechanism $k$ to be a map $k : P \rightarrow P$ satisfying: \begin{description} \item[K.1] If $a \in P$, then $ka \leq a$. \item[A.2] If $a, b \in P$ and $a \leq b$, then $ka \leq kb$. \item[A.3] If $a \in P$, then $kka = ka$. \end{description} The axiom K.1 is derived from A.1 by reversing the order. As such, a recovery mechanism $k$ on $P$ is simply a system on the dual lattice $P^{\dual}$, obtained by reversing the partial order. The maps satisfying K.1, A.2 and A.3 are typically known as \emph{kernel operators}, and inherit (by duality) all the properties of the systems described in this paper. We may then envision a system equipped with fault-protection mechanisms as a pair $(k,f)$ where $f$ is a system in $\L_P$ and $k$ is a recovery mechanism, i.e., a system in $\L_{P^{\dual}}$. The pair $(k,f)$ is then interpreted as follows. An initial state of failure is inflicted on the system. Let $a \in P$ be the initial state. Recovery first occurs via the dynamics of $k$ to yield a more desirable state $k(a)$. The \emph{dynamics} of $f$ then come into play to yield a state $f(k(a))$. The collection of pairs $(k,f)$ thus introduces a new class of systems, whose properties build on those developed in this paper. If the axiom A.3 is discarded, iteration of maps of the form $(fk)^n$ may provide a more realistic account of the interplay of failures and recovery mechanisms. In general, the map $fk$ will satisfy neither A.1 nor K.1. A different type of analysis might thus be required to understand these new systems. Several questions may be posed in such a setting. For a design-question example, let us consider $P$ to be a graded poset. 
What is the recovery mechanism $k$ of minimum $\mu$-rank whereby $f(k(a))$ has rank (in $P$) less than $l$ for every $a \in P$ with rank less than $l'$? Other design or analysis questions may be posed, inspired by this example question. This direction of recovery, however, will not be further investigated in this paper. \subsubsection*{Remark} Another form of recovery may be achieved by \emph{removing rules} from the system. Such a form may be achieved via the $\cdot$ operator. Indeed, the system $f \cdot g$ is the most undesirable system that includes the common rules of both $f$ and $g$. If $g$ is viewed as a certain complement of some system we want to remove from $f$, then we recover the required setting of recovery. The notion of complement systems is well-defined for prime systems. For systems that are not prime, it may be achieved by complementing the set of fixed points, adding the maximum element $\hat{p}$ and then closing the obtained set under meets. \section{Concluding Remarks} Finiteness is not necessary for the development (as explained in Section 3). The axioms A.1, A.2 and A.3 can be satisfied when $P$ is an infinite lattice, and $\Phi f$ (for every $f$) is complete whenever $P$ is complete. Nevertheless, the notion of $\mu$-rank should be \emph{augmented} accordingly, and non-finite component realizations should be allowed. Furthermore, \emph{semimodularity} still holds on infinite lattices, yet it requires stronger conditions than those presented in this paper for finite lattices. Finally, the choice of the state space and order relation allows good flexibility in the modeling exercise. State spaces may be augmented accordingly to capture desired instances. But order preservation is intrinsic to what is developed. That said, notions hinting at negation might, at first sight, prove not to be integrable into this framework. \bibliographystyle{alpha}
\section{Introduction} The size and intricacies of quantum systems have long motivated quantum computing (QC) research~(\cite{montanaro:16}). Recently, quantum algorithms have incorporated variational techniques~(\cite{peruzzo:14,farhi:14}), creating the subfield of quantum machine learning (QML)~(\cite{mehta:19}), which includes applications like optimization and quantum chemistry. Much like its classical counterpart, QML leverages gradient-based methods to seek optimal solutions, but with the added benefit of an exponentially large solution kernel~(\cite{schuld:21}). For instance, the most common form of QML uses quantum expectation values to define objective functions and classical gradient descent to optimize quantum networks. Both QC and QML ultimately seek to demonstrate a tangible improvement over classical computing for useful tasks, a concept known as quantum advantage. As QC and QML are still nascent fields, simulation remains essential to both the characterization of quantum systems at scale~(\cite{carleo:19}) and the design of heuristic algorithms~(\cite{cerezo:21}). However, simulating quantum algorithms on informative scales is a challenging task, as quantum state space overwhelms classical resources with even relatively few quantum bits (qubits). This limitation is partially overcome with networks of factorized tensors~(\cite{bridgeman:17}), which are typically more efficient than the traditional vector representation of quantum states, providing up to an exponential reduction in overhead with respect to the number of qubits $n$. In tensor network-based simulation, quantum states, gates, and operators are expressed in a factorized form, which compresses the quantum circuit by retaining only the relevant ranks. Operations can be performed efficiently directly in the factorized form without having to reconstruct any dense tensor. The individual factors are contracted, or summed over at corresponding indices, along an optimized contraction path. 
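To put the claimed exponential reduction in perspective, here is a back-of-the-envelope parameter count (our own sketch, not TensorLy-Quantum code): a dense $n$-qubit state vector needs $2^n$ amplitudes, while a matrix-product state with an assumed fixed bond dimension $\chi$ needs roughly $n \cdot 2\chi^2$ parameters.

```python
# Parameter counts only (an illustration, not a simulator): a dense n-qubit
# state vector stores 2^n complex amplitudes, while a matrix-product state
# (tensor-train) with bond dimension chi stores about n * 2 * chi^2 values.
def dense_params(n):
    return 2 ** n

def mps_params(n, chi):
    # Each of the n cores is a (chi, 2, chi) tensor; boundary cores are
    # smaller, so this slightly overcounts.
    return n * 2 * chi * chi

n, chi = 50, 32
print(dense_params(n))     # ~1.1e15 amplitudes
print(mps_params(n, chi))  # 102400 parameters
```

The gap widens exponentially in $n$ for fixed $\chi$, which is why factorized representations make otherwise intractable circuit sizes simulable.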
These paths are typically chosen to minimize either memory or runtime~(\cite{smith:18}). Quantum tensor contractions consist of highly parallelizable and arithmetically intensive operations, making them prime candidates for GPU acceleration~(\cite{cuTensor}). A similar approach has proven useful in deep learning, for instance by expressing the weights of linear~(\cite{novikov:15}) and convolutional~(\cite{kossaifi2020factorized}) layers of deep nets in factorized form. However, this approach remains under-explored in quantum simulation. \begin{figure}[!b] \centering \includegraphics[width=0.93\textwidth]{tlquantum} \caption{TensorLy-Quantum sits atop the hierarchy of TensorLy libraries, inheriting its extensive tensor methods and providing full Autograd support for the features (circuits, operations, density operators, and algorithms) required by quantum simulation.} \label{fig:diagram} \end{figure} While a variety of quantum simulators exist~(\cite{oakridge,qiskit,rigetti}), relatively few utilize tensor methods. Among those that do, integration with essential machine learning tools, such as automatic differentiation (e.g., PyTorch Autograd), is rare~(\cite{gray:18}). Moreover, advanced functionalities, like tensor regression, while used in machine learning~(\cite{panagakis2021tensor}), are so far unused in quantum simulation. To address such deficits, we have created TensorLy-Quantum, a PyTorch API for efficient tensor-based simulation of QC and QML protocols. It is a member of the TensorLy family of libraries and makes use of its extensive features and optimized implementations~(\cite{kossaifi:19}). As a result, it is the first quantum library with direct support for tensor decomposition, regression, and algebra, which have proven fruitful in a myriad of fields and stand to enrich QC and QML research. 
Uniquely, TensorLy-Quantum provides built-in support for Multi-Basis Encoding for MaxCut problems~(\cite{patti:21}) and was used to develop Markov Chain Monte Carlo-based QML~(\cite{pattiMCMC:21}). Through these features, TensorLy-Quantum aims to provide extensive tensor-based quantum simulation capabilities, providing a simple and flexible API for quantum algorithms research. Moreover, its optimized functionality and lightweight API would make it an efficient QML backend to supplement other quantum simulators. \section{TensorLy-Quantum} TensorLy-Quantum offers functionalities essential to quantum computation (Fig.~\ref{fig:diagram}). The library provides a high-level interface for quantum circuit simulation, with an API that follows the PyTorch Module structure and offers end-to-end support for automatic differentiation via PyTorch Autograd. Users can seamlessly design quantum circuits by combining either pre-defined or customizable quantum gates, operators, and states. The library also provides extensive circuit operations, ranging from pre-contraction techniques that simplify contraction path search, to dynamic partial traces that compactly evaluate quantum circuit outputs. In addition to specializing in factorized representations, such as Matrix-Product State (known as tensor-train in the Machine-Learning literature~\cite{oseledets2011tensor}), TensorLy-Quantum also supports efficient analysis on quantum density operators, both pure and reduced, including partial traces and information metrics. TensorLy-Quantum is designed to bridge the gap between practitioners of quantum and classical machine learning, providing an intuitive and Pythonic interface that is supplemented with extensive documentation and 97\% unit-test coverage. TensorLy-Quantum leverages the deep tensorized network capabilities of TensorLy and TensorLy-Torch, using these to mitigate the significant computational overhead posed by quantum state space. 
It is suitable for both CPUs and GPUs, and it inherits the GPU acceleration of its parent libraries. Likewise, while TensorLy-Quantum is PyTorch-based, TensorLy's flexible backend structure enables dynamic transitions between many of the most utilized Python libraries for machine learning and numerical analysis. Due to its strategic location atop the TensorLy ecosystem (Fig.~\ref{fig:diagram}), TensorLy-Quantum is exceptionally positioned to accelerate and innovate quantum simulation. In what follows, we illustrate the scalability and speed of our library, particularly on GPU. \section{Performance} Due to its efficiency and scalability, TensorLy-Quantum holds world records for the number of qubits simulated in a successful quantum optimization algorithm. These records include both the largest single-GPU implementation of MaxCut~(\cite{patti:21}) and a multi-GPU scaleup that used cuQuantum as a backend for tensor network contraction~(\cite{NVIDIA:21, cuQuantum}). \begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \quad \subfloat[\centering]{{\includegraphics[height=0.4\textwidth]{benchmark1.png} }} \hfill \subfloat[\centering]{{\includegraphics[height=0.4\textwidth]{benchmark2.png} }} \quad \vspace{-20pt} \caption{Logarithmic-scale runtime comparison for (left) full-rank partial trace from $n$ to $m$ qubits and (right) contraction operations on large-scale networks of factorized tensors.} \label{fig:1} \end{figure} We highlight this performance with numerical experiments, showcasing both operations on the full-rank density operator (a reshaped, matrix-like quantum state representation) and tensor contraction functionalities of TensorLy-Quantum. In the density operator experiments, TensorLy-Quantum outperforms the leading software, QuTiP~(\cite{johansson:12}), by two orders of magnitude on CPU for partial traces of $n=20$ to $m=14, 10$ qubit systems, and by four orders of magnitude on GPU. 
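As a minimal illustration of what a partial trace computes (plain Python on a $4\times 4$ density matrix, not the library's implementation): tracing out the second qubit sums the diagonal blocks over that qubit's index.

```python
# A minimal partial trace over the second qubit of a two-qubit density
# matrix rho (4x4, rows/columns indexed by bit pairs), kept as plain lists.
def partial_trace_last(rho):
    # (rho_A)[i][j] = sum_k rho[2i+k][2j+k]
    return [[rho[2 * i][2 * j] + rho[2 * i + 1][2 * j + 1] for j in range(2)]
            for i in range(2)]

# Bell state (|00> + |11>)/sqrt(2): its reduced state is maximally mixed.
rho = [[0.5, 0.0, 0.0, 0.5],
       [0.0, 0.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0],
       [0.5, 0.0, 0.0, 0.5]]
print(partial_trace_last(rho))  # [[0.5, 0.0], [0.0, 0.5]]
```

The benchmarked library operation generalizes this index bookkeeping to $n$-qubit systems, where the dense density matrix alone has $4^n$ entries.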
TensorLy-Quantum can also complete full-rank calculations on larger quantum systems than its predecessors through the use of compact tensor algebra. Experiments on networks of factorized tensors include a forward pass of the expectation value $\langle H \rangle$ and the full gradient calculation $\nabla \langle H \rangle$, where $H$ is a transverse-field Ising model Hamiltonian of $n=500$ qubits. The circuit ans\"atze contain $5000$ gates and the gradient calculation constitutes a full backpropagation with PyTorch Autograd. GPU acceleration provides a $20\times$ speedup over the CPU implementation. Moreover, we emphasize that without factorized tensor methods, simulations of this size are impossible, as matrix-encoded Hamiltonians require $\sim 10^6$ GB of memory for systems of merely $n \sim 25$ qubits. Contraction was accomplished with the Opt-Einsum library~(\cite{smith:18}) and all GPU experiments were done on an NVIDIA A100 GPU. \section{Conclusions} TensorLy-Quantum is an open-source library designed to streamline the workflow of QC and QML researchers. It provides highly optimized operations for large-scale and compute-heavy quantum circuit simulations, buttressed by extensive documentation and high-coverage unit-testing. Its API seamlessly integrates with PyTorch and provides an interface that is amenable to diverse scientific backgrounds, accessibly packaging state-of-the-art tensor network operations alongside quantum protocols. Moreover, its lightweight API and optimized functions would make it an excellent backend for existing quantum simulators. As TensorLy-Quantum uses the TensorLy libraries as a backend, it offers direct access to tools unavailable in other quantum APIs, like tensor regression and decomposition, as well as convenient conversion to the native data structures of numerous backends. 
In future releases, we will expand both the efficiency and scope of TensorLy-Quantum, adding features such as causality-simplified contraction, quantum state compression, and novel quantum protocols, as well as support for more factorized representations, using TensorLy's existing tensor decompositions. \vskip 0.2in
\section{Introduction} Generating high-quality samples from complex distributions is one of the fundamental problems in machine learning. One of the approaches is generative adversarial networks (GAN)~\cite{goodfellow2014generative}. Inside the GAN, a generator and a discriminator are trained in an adversarial manner. GAN is recognized as an efficient method to sample from high-dimensional, complex distributions given limited examples, though stable convergence is not easy to achieve. Major failure modes include gradient vanishing and mode collapse. A long line of research has been conducted to combat instability during training, e.g.~\cite{mao2017least,arjovsky2017wasserstein,gulrajani2017improved,miyato2018spectral,zhang2018self,brock2018large}. Notably, WGAN~\cite{arjovsky2017wasserstein} and WGAN-GP~\cite{gulrajani2017improved} propose to replace the KL-divergence metric with the Wasserstein distance, which relieves the vanishing-gradient problem; SNGAN~\cite{miyato2018spectral} takes a step further by replacing the gradient penalty term with spectral normalization layers, improving both the training stability and sample quality over the prior art. Since then, many successful architectures, including SAGAN~\cite{zhang2018self} and BigGAN~\cite{brock2018large}, utilize the spectral normalization layer to stabilize training. The core idea behind WGAN and SNGAN is to constrain the smoothness of the discriminator. Examples include weight clipping, gradient norm regularization, or normalizing the weight matrix by its largest singular value. However, all of them incur non-negligible computational overhead (higher-order gradients or power iterations). \par Apart from spectral normalization, a bundle of other techniques have been proposed to aid the training further; among them, essential techniques are the two time-scale update rule (TTUR)~\cite{heusel2017gans}, self-attention~\cite{zhang2018self} and large-batch training~\cite{brock2018large}. 
However, these improve the Inception Score (IS)~\cite{salimans2016improved} and Fr\'echet Inception Distance (FID)~\cite{heusel2017gans} at the cost of slowing down the per-iteration time or requiring careful tuning of two learning rates. Therefore, it would be useful to seek alternative ways to reach the same goal but with less overhead. In particular, we answer the following question: \begin{quote}\textit{Can we train a plain ResNet generator and discriminator without architectural innovations, spectral normalization, TTUR or big batch size?} \end{quote} \par In this paper, we give an affirmative answer to the question above. We will show that all we need is adding another adversarial training loop to the discriminator. Our contributions can be summarized as follows: \vspace{-5pt} \begin{itemize}[leftmargin=*,noitemsep] \item We show a simple training algorithm that accelerates the training speed and improves the image quality all at the same time. \item We show that with the training algorithm, we could match or beat the strong baselines (SNGAN and SAGAN) with plain neural networks (the raw ResNet without spectral normalization or self-attention). \item Our algorithm is widely applicable: experimental results show that the algorithm works on datasets from tiny (CIFAR10) to large (1000-class ImageNet) using $4$ or fewer GPUs. \end{itemize} \paragraph{Notations} Throughout this paper, we use $G$ to denote the generator network and $D$ to denote the discriminator network (and the parameters therein). $x\in\mathbb{R}^d$ is the data in the $d$-dimensional image space; $z$ is a Gaussian random variable; $y$ is a categorical variable indicating the class id. Subscripts $r$ and $f$ indicate real and fake data. For brevity, we denote the Euclidean 2-norm as $\|\cdot\|$. $\mathcal{L}$ is the loss function of the generator or discriminator depending on the subscripts. 
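As background for the smoothness constraints discussed above, spectral normalization divides each weight matrix by an estimate of its largest singular value obtained via power iteration. A minimal sketch of that estimate follows (plain Python with our own helper names; real implementations keep a persistent iteration vector and run a single iteration per training step):

```python
# Toy power iteration estimating the largest singular value sigma(W) of a
# weight matrix W. Spectral normalization divides W by this estimate to
# bound the layer's Lipschitz constant. Helper names are ours.
def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def transpose(W):
    return [list(col) for col in zip(*W)]

def norm(v):
    return sum(x * x for x in v) ** 0.5

def spectral_norm(W, iters=50):
    v = [1.0] * len(W[0])
    for _ in range(iters):
        u = matvec(W, v)              # u <- W v
        v = matvec(transpose(W), u)   # v <- W^T W v
        nv = norm(v)
        v = [x / nv for x in v]       # renormalize the iterate
    return norm(matvec(W, v))         # sigma ~ ||W v|| at convergence

W = [[3.0, 0.0],
     [0.0, 1.0]]
print(round(spectral_norm(W), 6))  # 3.0
```

Running the iteration to convergence at every step is exactly the kind of per-layer overhead the question above seeks to avoid.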
\vspace{-5pt} \section{Related Work} \subsection{Efforts to Stabilize GAN Training}\label{GAN_background} A generative adversarial network (GAN) can be seen as a min-max game between two players: the generator ($G$) and the discriminator ($D$). At training time, the generator transforms Gaussian noise into synthetic images to fool the discriminator, while the discriminator learns to distinguish the fake data from the real. The loss function is formulated as \begin{equation} \label{eq:orig_GAN} \min_G \max_D \mathop{\mathbb{E}}_{x\sim \mathbb{P}_{\text{real}}}\big[\log D(x)\big]+\mathop{\mathbb{E}}_{x \sim G(z)}\Big[\log\big(1-D(x)\big)\Big], \end{equation} where $\mathbb{P}_{\text{real}}$ is the distribution of real data, and $z$ is sampled from the standard normal distribution. The desired solution of \eqref{eq:orig_GAN} has the distribution of $G(z)$ equal to $\mathbb{P}_{\text{real}}$ and $D(x)=0.5$; however, in practice we can rarely reach this solution. Early findings show that training $G$ and $D$ with multi-layer CNNs on the GAN loss~\eqref{eq:orig_GAN} does not always generate sensible images even on simple CIFAR10 data. Researchers tackle this problem through better theoretical understanding and intensive empirical experiments. Theoretical works mainly focus on the local convergence to the Nash equilibrium~\cite{mescheder2017numerics,mescheder2018training,wang2019solving} by analyzing the eigenvalues of the Jacobian matrix derived from gradient descent ascent (GDA). Intuitively, if the eigenvalues $\lambda_i$ (possibly complex) lie inside the unit sphere, i.e., $\|\lambda_i\| \le 1$, and the ratio $\frac{\mathrm{Im}(\lambda_i)}{\mathrm{Re}(\lambda_i)}$ is small, then the convergence to the local equilibrium will be fast. \begin{table}[htb] \centering \caption{\label{tab:compare-GANs}Comparing the key features and training tricks in GAN models. Our model is a simple combination of ResNet backbone and adversarial training. 
The model size is calculated for the ImageNet task.} \begin{tabular}{ccccc} \toprule & SNGAN & SAGAN & BigGAN & \textbf{FastGAN (ours)} \\ \midrule ResNet $G/D$ & \Yes & \Yes & \Yes & \Yes \\ Wider $D$ & \No & \Yes & \Yes & \No \\ Shared embedding \& skip-z & \No & \No & \Yes & \No \\ Self attention & \No & \Yes & \Yes & \No\\ TTUR & \No & \Yes & \Yes & \No\\ Spectral normalization & \Yes & \Yes & \Yes & \No\\ Orthogonal regularization & \No & \No & \Yes & \No \\ Adversarial training & \No & \No & \No & \Yes \\ Model size & 72.0M & 81.5M & 158.3M & 72.0M \\ \bottomrule \end{tabular} \end{table} Empirical findings mainly focus on regularization techniques, evolving from weight clipping~\cite{arjovsky2017wasserstein}, gradient norm regularization~\cite{gulrajani2017improved}, spectral normalization~\cite{miyato2018spectral}, regularized optimal transport~\cite{sanjabi2018convergence} and Lipschitz regularization~\cite{zhou2019lipschitz}, among others. Our method could also be categorized as a regularizer~\cite{DBLP:journals/corr/abs-1906-01527}, in which we train the discriminator to be robust to adversarial perturbations on either real or fake images. \subsection{Follow the Ridge (FR) algorithm} A correction to the traditional gradient descent ascent (GDA) solver for \eqref{eq:orig_GAN} was recently proposed~\cite{wang2019solving}. Inside the FR algorithm, the authors add a second-order term that drags the iterates back to the ``ridge'' of the loss surface $\mathcal{L}(G, D)$, specifically, \begin{equation} \label{eq:FR-iter} \begin{aligned} w_G &= w_G-\eta_G\nabla_{w_G}\mathcal{L},\\ w_D &= w_D+\eta_D\nabla_{w_D}\mathcal{L}+\eta_GH^{-1}_{w_Dw_D}H_{w_Dw_G}\nabla_{w_G}\mathcal{L}, \end{aligned} \end{equation} where we denote $\mathcal{L}$ as the GAN loss in \eqref{eq:orig_GAN}, $H_{w_Dw_D}=\frac{\partial^2\mathcal{L}}{\partial w_D^2}$ and $H_{w_Dw_G}=\frac{\partial^2\mathcal{L}}{\partial w_D\partial w_G}$. 
With the new update rule, the authors show less rotational behavior in training and improved convergence to the local minimax. The update rule~\eqref{eq:FR-iter} involves Hessian-vector products and a Hessian inverse. Compared with the simple GDA updates, this is computationally expensive even when implemented by solving a linear system. Interestingly, under certain assumptions, our adversarial training scheme can be seen as an approximation of \eqref{eq:FR-iter} with no overhead. As a result, our FastGAN converges to the local minimax with fewer iterations (and less wall-clock time), even though we dropped the bag of tricks (Table~\ref{tab:compare-GANs}) commonly used in GAN training. \vspace{-5pt} \subsection{\label{adv_training}Adversarial training} Adversarial training~\cite{goodfellow2014explaining} was initially proposed to enhance the robustness of neural networks. The central idea is to find the adversarial examples that maximize the loss and feed them as synthetic training data. Instances include FGSM~\cite{goodfellow2014explaining}, PGD~\cite{madry2017towards}, and free adversarial training~\cite{shafahi2019adversarial}. All of them approximately solve the following problem \begin{equation} \label{eq:adv_training} \begin{split} \min_{w} \frac{1}{N}\sum_{i=1}^N \Big[\max_{\|\epsilon\|\leq c_{\max}}\mathcal{L}(f(x_i+\epsilon;w),y_i)\Big], \end{split} \end{equation} where $\{(x_i,y_i)\}_{i\in[N]}$ are the training data; $f$ is the model parameterized by $w$; $\mathcal{L}$ is the loss function; $\epsilon$ is the perturbation with norm constraint $c_{\max}$. It is nontrivial to solve \eqref{eq:adv_training} efficiently; previous methods solve it with alternating gradient descent-ascent: on each training batch $(x_i, y_i)$, they launch projected gradient ascent for a few steps and then run one step of gradient descent~\cite{madry2017towards}. In this paper, we study how adversarial training helps GAN convergence. 
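A minimal 1-D sketch of the inner maximization in \eqref{eq:adv_training} follows (the toy loss and constants are ours; PGD in practice operates on image tensors with $\ell_\infty$ or $\ell_2$ projections):

```python
# A 1-D toy of the inner maximization in the adversarial-training objective:
# projected gradient ascent on eps for the toy loss
# L(x + eps) = -(x + eps - 2)^2, subject to |eps| <= c_max.
def pgd_perturbation(x, c_max, lr=0.1, steps=100):
    eps = 0.0
    for _ in range(steps):
        grad = -2.0 * (x + eps - 2.0)       # dL/d(eps)
        eps += lr * grad                    # ascent step on the loss
        eps = max(-c_max, min(c_max, eps))  # project onto the norm ball
    return eps

# From x = 0 the loss increases toward x + eps = 2; with c_max = 1 the
# perturbation is clipped at the constraint boundary.
print(pgd_perturbation(0.0, 1.0))  # 1.0
```

The alternation described above replaces the exact inner maximum with a few such ascent steps per batch, which is precisely the overhead that free adversarial training amortizes.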
To the best of our knowledge, only \cite{zhou2018don} and \cite{liu2019rob} are related to our idea. However, \cite{zhou2018don} concerns unconditional GANs, and neither of them can scale to ImageNet. In our experiments, we include \cite{liu2019rob} as a strong baseline on smaller datasets such as CIFAR10 and subsets of ImageNet. \vspace{-7pt} \section{FastGAN -- Free AdverSarial Training for GAN} Our new GAN objective is a three-level min-max-min problem: \begin{equation} \label{eq:faster-rob-GAN} \min_G \max_D \mathop{\mathbb{E}}_{x\sim \mathbb{P}_{\text{real}}}\big[\min_{\|\epsilon\|\le c_{\max}}\log D(x+\epsilon)\big]+\mathop{\mathbb{E}}_{x \sim G(z)}\Big[\min_{\|\epsilon\|\le c_{\max}}\log\big(1-D(x+\epsilon)\big)\Big]. \end{equation} Note that this objective function is similar to the one proposed in RobGAN~\cite{liu2019rob}. However, RobGAN cannot scale to large datasets due to the bottleneck introduced by adversarial training (see our experimental results), and no satisfactory explanation is provided there for why adversarial training improves GAN. We make two improvements: First, we employ the recent free adversarial training technique~\cite{shafahi2019adversarial}, which simultaneously updates the adversaries and the model weights. Second, in the label-conditioned GAN setting, we improve the loss function used in RobGAN (both are based on cross-entropy rather than hinge loss). In the following parts, we first show the connection to the follow-the-ridge update rule~\eqref{eq:FR-iter} and then elaborate on the algorithmic details as well as the improved loss function. \subsection{From \textit{Follow-the-Ridge} (FR) to FastGAN} Recently \cite{wang2019solving} showed that the Follow-the-Ridge (FR) optimizer improves GAN training, but FR relies on Hessian-vector products and cannot scale to large datasets.
We show that solving \eqref{eq:faster-rob-GAN} by one-step adversarial training can be regarded as an efficient simplification of FR, which partially explains why the proposed algorithm can stabilize and speed up GAN training. To show this, we first simplify the GAN training problem as follows: \begin{itemize}[leftmargin=*,noitemsep] \item Inside the minimax game, we replace the generator network by its output $x_f$. This dramatically simplifies the notation as we no longer need to calculate the gradient through $G$. \item The standard GAN loss on fake images~\cite{goodfellow2014generative} is written as $\log\big(1-D(G(z))\big)$. However, it is often modified to $-\log D(G(z))$ to mitigate the gradient vanishing problem. \end{itemize} We first consider the original GAN loss: \begin{equation} \label{eq:minmax-problem} \min_{G}\max_{D}\mathcal{L}(G, D) \triangleq\mathop{\mathbb{E}}_{x_r\sim \mathbb{P}_{\text{real}}}\big[\log D(x_r)\big]-\mathop{\mathbb{E}}_{x_f\sim G}\big[\log D(x_f)\big]. \end{equation} With the FR algorithm, the update rule for $D$ (parameterized by $w$) can be written as\footnote{In this equation, we simplify the FR update ($H_{yy}^{-1}$ is dropped). The details are shown in the appendix.} \begin{equation} \label{eq:FR-update} w^+ \xleftarrow{\text{FR}} w+\eta_D\Big(\underbrace{\frac{\nabla_wD(x_r)}{D(x_r)}-\frac{\textcolor{blue}{\nabla_w D(x_f)}}{D(x_f)}}_{\text{Gradient ascent}}\Big)-\eta_G\underbrace{H_{wx}\nabla_{x_f}\mathcal{L}(w, x_f)}_{\text{off-the-ridge correction}}, \end{equation} where $H_{wx}=\frac{\partial^2 \mathcal{L}}{\partial w\partial x_f}$. The last term is regarded as a correction for the ``off-the-ridge'' move caused by the $G$-update. If we decompose it further, we get \begin{equation} \label{eq:FR-correction-term} H_{wx}\nabla_{x_f}\mathcal{L}=\frac{\nabla^2_{w, x_f} D(x_f)\nabla_{x_f}D(x_f)}{D(x_f)^2}-\frac{\|\nabla_{x_f} D(x_f)\|^2\textcolor{blue}{\nabla_w D(x_f)}}{D(x_f)^3}.
\end{equation} Since the first-order term in \eqref{eq:FR-correction-term} is already accounted for in \eqref{eq:FR-update} (both are highlighted in blue), the second-order term in \eqref{eq:FR-correction-term} plays the key role in the fast convergence of the FR algorithm. However, the algorithm involves Hessian-vector products in each iteration and is not very efficient for large datasets such as ImageNet. Next, we show that our adversarial training can be regarded as a Hessian-free way to perform almost the same functionality. Recall that in the adversarial training of GAN, the loss (Eq.~\eqref{eq:faster-rob-GAN}) becomes (fixing $c_{\max}=1$ for brevity) \begin{equation} \label{eq:RobGAN} \begin{aligned} \mathcal{L}^{\text{adv}}(G, D) &\triangleq\mathop{\mathbb{E}}_{x_r\sim \mathbb{P}_{\text{real}}}\big[\min_{\|\epsilon\|_2\le 1}\log D(x_r+\epsilon)\big]-\mathop{\mathbb{E}}_{x_f\sim G}\max_{\|\epsilon\|_2\le 1}\log D(x_f+\epsilon)\\ &\approx \mathop{\mathbb{E}}_{x_r\sim\mathbb{P}_{\text{real}}}\Big[\log D(x_r)-\frac{\|\nabla_{x_r}D(x_r)\|}{D(x_r)}\Big]-\mathop{\mathbb{E}}_{x_f\sim G}\Big[\log D(x_f)+\frac{\|\nabla_{x_f}D(x_f)\|}{D(x_f)}\Big], \end{aligned} \end{equation} where we assume the inner minimizer conducts only one gradient update (similar to the algorithm proposed in the next section), and we use a first-order Taylor expansion to approximate the inner minimization problem. The gradient descent/ascent updates are then \begin{equation} \resizebox{.5\linewidth}{!}{$ w^+ \xleftarrow{\text{GDA}} w+\eta_D\Big(\underbrace{\frac{\nabla_wD(x_r)}{D(x_r)}-\frac{\nabla_w D(x_f)}{D(x_f)}}_{\text{Gradient ascent}}\Big)-\overrightarrow{\bm{\bigtriangledown}}$. } \end{equation} Here we define $\overrightarrow{\bm{\bigtriangledown}}$ as the gradient correction term introduced by adversarial training, i.e.
\begin{equation} \label{eq:gradient-correction-from-ADV} \resizebox{.93\linewidth}{!}{$ \overrightarrow{\bm{\bigtriangledown}}=\sum\limits_{x\in\{x_r, x_f\}}\nabla_w\Big(\frac{\|\nabla_{x}D(x)\|}{D(x)}\Big)=\sum\limits_{x\in\{x_r, x_f\}}\frac{\nabla^2_{w,x}D(x)\nabla_{x}D(x)}{\|\nabla_{x}D(x)\|D(x)}-\frac{\|\nabla_{x}D(x)\|^2\nabla_wD(x)}{D(x)^2}.$} \end{equation} Comparing~\eqref{eq:gradient-correction-from-ADV} with~\eqref{eq:FR-correction-term}, we can see that both of them are a linear combination of a second-order term and a first-order term, except that FR is calculated on fake images $x_f$ while our adversarial training can be done on $x_r$ or $x_f$. The two become the same up to a scalar $D(x)$ if a 1-Lipschitz constraint is enforced on the discriminator $D$, in which case $\|\nabla_x D(x)\|=1$. This constraint is common in previous works, including WGAN, WGAN-GP, SNGAN, SAGAN, BigGAN, etc. However, we do not enforce this constraint explicitly in our FastGAN. \subsection{Training algorithm} Our algorithm is described in Algorithm~\ref{alg:fastrobgan}; the main difference from classic GAN training is the extra inner for-loop when updating the discriminator. In each iteration, we run $\texttt{MAX\_ADV\_STEP}$ steps of adversarial training on the discriminator network. Inside the adversarial training loop, the gradients over input images (denoted as $g_x$) and over the discriminator weights (denoted as $g_{w_D}$) can be obtained with just one backpropagation. We train the discriminator with fake data $(x_f, y_f)$ immediately after each adversarial training step. A handy trick is to reuse the same fake images (generated at the beginning of each $\texttt{D\_step}$) multiple times -- we found no performance degradation and faster wall-clock time by avoiding some unnecessary propagations through $G$.
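The free-training inner loop described above can be sketched on a toy logistic discriminator: one backward pass yields both $g_x$ and $g_{w_D}$, which update the perturbation and the weights simultaneously. The model and all constants are illustrative, and the signs follow a classification loss that this toy discriminator minimizes (Algorithm~\ref{alg:fastrobgan} instead ascends the GAN objective):

```python
import numpy as np

rng = np.random.default_rng(1)
x_r = rng.normal(size=(16, 4)); y_r = np.ones(16)    # "real" batch (illustrative)
x_f = rng.normal(size=(16, 4)); y_f = -np.ones(16)   # "fake" batch, reused each step
w = rng.normal(size=4) * 0.1                         # toy discriminator weights
c_max, eta = 0.05, 0.1
eps = np.zeros_like(x_r)

def loss_grads(w, X, y):
    # Logistic loss; a single pass yields gradients w.r.t. both inputs and weights.
    m = y * (X @ w); s = 1.0 / (1.0 + np.exp(m))
    gw = -(X.T @ (s * y)) / len(y)
    gX = -np.outer(s * y, w) / len(y)
    return gw, gX

for _ in range(3):                                   # MAX_ADV_STEP
    g_w, g_x = loss_grads(w, x_r + eps, y_r)         # one pass: both gradients
    w = w - eta * g_w                                # weight update
    eps = np.clip(eps + c_max * np.sign(g_x), -c_max, c_max)  # free adv. update
    g_w, _ = loss_grads(w, x_f, y_f)                 # reuse the same fake batch
    w = w - eta * g_w
```

The key saving over the PGD scheme is that the adversary gets no extra backward passes of its own; it recycles the gradient already computed for the weight update.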
\begin{algorithm}[htb] \DontPrintSemicolon \SetAlgoLined \caption{\label{alg:fastrobgan}FastGAN training algorithm} \KwData{Training set $\{(x_i,y_i)\}$, max perturbation $c_{\max}$, learning rate $\eta$.} Initialize: generator $G$, discriminator $D$, and perturbation $\epsilon$.\\ \While{not converged}{ \tcc{Train discriminator for MAX\_D\_STEP steps} \For{\texttt{D\_step=1..MAX\_D\_STEP}}{ \tcc{Prepare real and fake data, same data will be used multiple times in the nested for-loop.} $z\gets \mathcal{N}(0, 1)$; $y_f\gets \texttt{Categorical}(1..C)$\\ $x_f \gets G(z, y_f)$ \\ $x_r, y_r \gets \texttt{sample}(\{x_i,y_i\}, i\in[N])$ \\ \For{\texttt{adv\_step=1..MAX\_ADV\_STEP}}{ \tcc{Conduct free adversarial training on real data} $g_{x},\ g_{w_D} \gets \nabla_{x_{r}} \mathcal{L}\big(D(x_r+\epsilon),y_r\big)$, $\nabla_{w_D} \mathcal{L}\big(D(x_r+\epsilon),y_r\big)$\\ $w_D \gets w_D + \eta \cdot g_{w_D}$\\ $\epsilon \gets \proj_{\|\cdot\|\le c_{\max}}\big(\epsilon - c_{\max} \cdot \sign(g_{x})\big)$\\ \tcc{Reuse the fake data during adversarial training for best speed} $g_{w_D}\gets \nabla_{w_D}\mathcal{L}\big(D(x_f), y_f\big)$\\ $w_D\gets w_D+\eta\cdot g_{w_D}$\\ } } \tcc{Train generator for one step} $z\gets \mathcal{N}(0, 1)$; $y_f\gets \texttt{Categorical}(1..C)$\\ $x_{f} \gets G(z, y_f)$ \\ $g_{w_G} \gets \nabla_{w_G}\mathcal{L}\big(D(x_f),y_f\big)$ \\ $w_G \gets w_G - \eta \cdot g_{w_G}$ } \end{algorithm} Contrary to the common belief~\cite{heusel2017gans,zhang2018self} that it is beneficial (for stability, performance, convergence rate, etc.) to have different learning rates for the generator and the discriminator, our empirical results show that once the discriminator undergoes robust training, it is no longer necessary to tune separate learning rates for the two networks.
\subsection{\label{sec:improved_objective_loss}Improved loss function} The projection-based loss function~\cite{miyato2018cgans} dominates current state-of-the-art GANs (e.g., \cite{wu2019logan,brock2018large}) owing to its training stability; nevertheless, this does not imply that the traditional cross-entropy loss is an inferior choice. In line with other works~\cite{gong2019twin}, we believe the ACGAN loss can be as good as the projection loss after slight modifications. First, consider the objective of the discriminator $\max_D \mathcal{L}_D\coloneqq\mathcal{L}_D^r+\mathcal{L}_D^f$ where $\mathcal{L}_D^r$ and $\mathcal{L}_D^f$ are the likelihoods on the real and fake minibatches, respectively. For instance, in ACGAN we have \begin{equation} \label{eq:AC-GAN-loss} \mathcal{L}_D^r=\mathbb{E}_{(x_r, y_r)\sim \mathbb{P}_{\text{real}}}\big[\log \Pr\big(\text{real}\wedge y_r|D(x_r)\big)\big], \ \ \ \mathcal{L}_D^f=\mathbb{E}_{(x_f, y_f)\sim G}\big[\log \Pr\big(\text{fake}\wedge y_f|D(x_f)\big)\big]. \end{equation} We remark that in the class-conditional case, the discriminator contains two output branches: one is a binary classifier of real versus fake, the other is a multi-class classifier over the class labels. The log-likelihood should be interpreted under the joint distribution of the two. However, as pointed out in~\cite{gong2019twin,liu2019rob}, the loss~\eqref{eq:AC-GAN-loss} encourages a degenerate solution featuring mode-collapse behavior. The solution of TwinGAN~\cite{gong2019twin} is to incorporate another classifier (namely a ``twin classifier'') to help the generator $G$ promote diversity, while RobGAN~\cite{liu2019rob} removes the classification branch on fake data: \begin{equation} \label{eq:RobGAN-loss} \mathcal{L}_D^f=\mathbb{E}_{(x_f,y_f)\sim G}\big[\log \Pr\big(\text{fake}|D(x_f)\big)\big].
\end{equation} Overall, our FastGAN uses a loss function similar to RobGAN's~\eqref{eq:RobGAN-loss}, except that we change the adversarial part from log-probability to the hinge loss~\cite{lim2017geometric,miyato2018spectral}, which reduces the gradient vanishing and instability problems. As to the class-conditional branch, FastGAN inherits the auxiliary classifier from ACGAN as it is more suitable for adversarial training. However, as reported in prior works~\cite{odena2017conditional,miyato2018cgans}, training a GAN with an auxiliary classification loss yields poor intra-class diversity. To tackle this problem, we add a KL term to $\mathcal{L}_D^f$. Therefore, $\mathcal{L}_D^r$ and $\mathcal{L}_D^f$ become: \begin{equation} \label{eq:FastGAN-Ld} \begin{aligned} &\mathcal{L}_D^r=\mathbb{E}_{(x_r,y_r)\sim \mathbb{P}_\text{real}}\big[-\max\big(0, 1-D(x_r)\big)+\log\Pr\big(y_r|D(x_r)\big)\big],\\ &\mathcal{L}_D^f=\mathbb{E}_{(x_f,y_f)\sim G}\big[-\max\big(0, 1+D(x_f)\big)-\alpha_c^f\cdot\mathsf{KL}\big(\Pr(y_f|D(x_f)), \mathcal{U}\big)\big], \end{aligned} \end{equation} where $\alpha_c^f\in(0, 1)$ is a coefficient and $\mathcal{U}=(1/C, 1/C, \dots, 1/C)^T$ is the uniform categorical distribution over all $C$ classes. Our loss~\eqref{eq:FastGAN-Ld} is in sharp contrast to the ACGAN loss in~\eqref{eq:AC-GAN-loss}: in ACGAN, the discriminator gets rewarded for assigning high probability to $y_f$, while our FastGAN is encouraged to assign a uniform probability to all labels in order to enhance the intra-class diversity of the generated samples. Additionally, we found it worthwhile to add another factor $\alpha_c^g$ to $\mathcal{L}_G$ to balance image diversity against fidelity, so the generator loss in FastGAN is defined as \begin{equation} \label{eq:FastGAN-Lg} \mathcal{L}_G = \mathbb{E}_{(x_f,y_f)\sim G}\big[-D(x_f) - \alpha_c^g\cdot\log \Pr(y_f|D(x_f))\big].
\end{equation} Overall, in the improved objectives of FastGAN, $G$ is trained to minimize $\mathcal{L}_G$ while $D$ is trained to maximize $\mathcal{L}_D^r+\mathcal{L}_D^f$. \vspace{-5pt} \section{\label{experiments}Experiments} \vspace{-5pt} In this section, we test the performance of FastGAN on a variety of datasets. For the baselines, we choose SNGAN~\cite{miyato2018spectral} with projection discriminator, RobGAN~\cite{liu2019rob}, and SAGAN~\cite{zhang2018self}. Although better GAN models exist, such as BigGAN~\cite{brock2018large} and LOGAN~\cite{wu2019logan}, we do not include them because they require large-batch training (batch size~$>$~1k) on much larger networks (see Table~\ref{tab:compare-GANs} for model sizes). \textbf{Datasets.}\quad We test on the following datasets: CIFAR10, CIFAR100~\cite{krizhevsky2009learning}, ImageNet-143~\cite{miyato2018spectral,liu2019rob}, and the full ImageNet~\cite{russakovsky2015imagenet}. Notably, the ImageNet-143 dataset is a $143$-class subset of ImageNet~\cite{russakovsky2015imagenet}, first seen in SNGAN~\cite{miyato2018spectral}. We use both 64x64 and 128x128 resolutions in our experiments. \textbf{Choice of architecture.}\quad As our focus is not on architectural innovations, for a fair comparison, we copy the ResNet backbone layer by layer from SNGAN (spectral normalization layers are removed). So our FastGAN, SNGAN, and RobGAN are directly comparable, whereas SAGAN is bigger in model size. For experiments on CIFAR, we follow the architecture in WGAN-GP~\cite{gulrajani2017improved}, which is also used in SNGAN~\cite{miyato2018spectral,miyato2018cgans}. \textbf{Optimizer.}\quad We use Adam~\cite{kingma2014adam} with learning rate $\eta_0=0.0002$ and momentum $\beta_1=0.0$, $\beta_2=0.9$ (CIFAR/ImageNet-143) or $\beta_2=0.99$ (ImageNet) for both $D$ and $G$. We use an exponentially decaying learning rate schedule $\eta_t = \eta_0 \cdot e^{-t/\kappa}$, where $t$ is the iteration number.
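The decay schedule amounts to a one-liner; the value of $\kappa$ below is hypothetical, since the actual hyperparameters are listed in the appendix:

```python
import math

def lr_schedule(t, eta0=2e-4, kappa=50_000):
    """Exponentially decaying learning rate: eta_t = eta0 * exp(-t / kappa).

    kappa is a hypothetical decay constant (in iterations); eta0 matches
    the Adam learning rate stated in the text.
    """
    return eta0 * math.exp(-t / kappa)
```

With this form the rate decays by a factor of $e$ every $\kappa$ iterations, e.g. `lr_schedule(0)` returns `2e-4`.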
Other hyperparameters are given in the appendix. \subsection{CIFAR10 and CIFAR100} We report the experimental results in Table~\ref{tab:cifar-exp}. Note that SAGAN has no official results on CIFAR, so we exclude it from this experiment. The metrics are measured after all GANs stop improving, which took $\sim 2.5\times 10^4$ seconds. As we can see, our FastGAN outperforms SNGAN and RobGAN on both CIFAR datasets in terms of both the IS and FID scores. Furthermore, to compare the convergence speed, we exhibit the learning curves of all results in Figure~\ref{fig:conv_cifar}. From this figure, we can observe a consistent acceleration effect from FastGAN. \begin{table}[htb] \caption{\label{tab:cifar-exp}Results on CIFAR10 and CIFAR100. IS indicates the inception score (higher better), FID is the Fréchet Inception Distance (lower better). Time is measured in seconds. Winning results are displayed in \textbf{bold}, collapsed results are displayed in \textcolor{cyan}{blue}.} \centering \scalebox{0.9}{ \begin{tabular}{lcccccc} \toprule & \multicolumn{3}{c}{CIFAR10} & \multicolumn{3}{c}{CIFAR100} \\ \cmidrule(r){2-4} \cmidrule(l){5-7} & IS $\uparrow$ & FID $\downarrow$ & Time $\downarrow$ & IS $\uparrow$ & FID $\downarrow$ & Time $\downarrow$ \\ \midrule Real data & $11.24\pm.10$ & $5.30$ & -- & $14.79\pm .15$ & $5.91$ & -- \\ \midrule SNGAN & $7.47\pm.13$ & $14.59$ & $2.45\times 10^4$ & $7.86\pm .21$ & $18.29$ & $2.56\times 10^4$ \\ RobGAN & $6.95\pm .06$ & $20.75$ & $2.50\times 10^4$ & $7.24\pm.21$ & $28.27$ & $2.6\times 10^4$ \\ FastGAN & $\bm{7.76\pm .12}$ & $\bm{12.97}$ & $\bm{2.28\times 10^4}$ & $\bm{8.87\pm .06}$ & $\bm{17.27}$ & $\bm{2.31\times 10^4}$\\ \midrule + Revert to RobGAN loss & $7.14\pm.18$ & $20.90$ & $2.28\times 10^4$ & $\textcolor{cyan}{6.26\pm .19}$ & $\textcolor{cyan}{43.69}$ & $\textcolor{cyan}{2.31\times 10^4}$\\ + Disable adv. training & $7.79\pm .12$ & $13.93$ & $2.28\times 10^4$ & $8.38\pm .21$ & $19.43$ & $2.31\times 10^4$\\ + Constant lr. & $\bm{8.09\pm.11}$ & $14.67$ & $2.28\times 10^4$ & $\bm{8.95\pm.31}$ & $19.53$ & $2.31\times 10^4$\\ + Disable KL-term~\eqref{eq:FastGAN-Ld} & $7.37\pm .17$ & $16.26$ & $2.28\times 10^4$ & $\textcolor{cyan}{7.42\pm.18}$ & $\textcolor{cyan}{24.02}$ & $\textcolor{cyan}{2.31\times 10^4}$\\ \bottomrule \end{tabular}} \end{table} \begin{figure} \begin{center} \includegraphics[width=1.0\linewidth]{figs/cifar.pdf} \end{center} \vspace{-7pt} \caption{Comparing the convergence of GANs on CIFAR10 (a, b) and CIFAR100 (c, d) datasets.} \label{fig:conv_cifar} \end{figure} \par Next, we study which parts of FastGAN contribute to the performance improvement. To this end, we disable some essential modifications of FastGAN (last four rows in Table~\ref{tab:cifar-exp}). We also try different values of $\alpha_c^g$ in the CIFAR100 experiments (Figure~\ref{fig:alpha_cg}) and show that it effectively controls the tradeoff between diversity and fidelity. \begin{figure}[htb] \centering \includegraphics[width=0.9\linewidth]{./figs/alpha_g.pdf} \caption{The effect of $\alpha_c^g$ on the diversity--fidelity tradeoff in the CIFAR100 experiments. Specifically, in the last row, when $\alpha_c^g\in [0.8, 1.0]$ the distributions are unimodal (roses all in the same color: either dark red or peach); when $\alpha_c^g\in [0.2, 0.4]$, the distributions are multimodal, so we see roses of more colors. If the coefficient is as small as $0.1$, the distributions become more complex, but we can no longer recognize the images.} \label{fig:alpha_cg} \end{figure} \subsection{ImageNet-143} ImageNet-143 is a dataset that first appeared in SNGAN. As a test suite, its distribution complexity stands between CIFAR and the full ImageNet, as it has 143 classes at 64x64 or 128x128 pixels. We perform experiments on this dataset in a similar way as on CIFAR.
The experimental results are shown in Table~\ref{Tab:imgnet143gq}, in which we can see that FastGAN achieves the best FID at both resolutions; its IS is the best among the full models at 128x128, while RobGAN keeps a slight IS edge at 64x64. As to the convergence rate, as shown in Figure~\ref{fig:conv_catdog}, FastGAN often requires $2{\sim}3$ times less training time than SNGAN and RobGAN to achieve the same scores. \begin{table}[htb] \caption{Results on ImageNet-143.} \label{Tab:imgnet143gq} \centering \resizebox{0.95\textwidth}{!}{ \begin{tabular}{lcccccc} \toprule & \multicolumn{3}{c}{64x64 pixels} & \multicolumn{3}{c}{128x128 pixels} \\ \cmidrule(r){2-4}\cmidrule(l){5-7} & IS $\uparrow$ & FID $\downarrow$ & Time $\downarrow$ & IS $\uparrow$ & FID $\downarrow$ & Time $\downarrow$ \\ \midrule Real data & $27.9\pm 0.42$ & $0.48$ & -- & $53.01\pm 0.56$ & $0.42$ & -- \\ \midrule SNGAN & $10.70\pm 0.17$ & $29.70$ & $2.2\times 10^5$ & $26.12\pm 0.29$ & $30.83$ & $5.9\times 10^5$ \\ RobGAN & $24.62\pm0.31$ & $14.64$ & $1.8\times 10^5$ & $33.81\pm 0.47$ & $33.98$ & $1.8\times 10^5$ \\ FastGAN & $22.23\pm 0.35$ & $\bm{12.04}$ & $\bm{3.6\times 10^4}$ & $40.41\pm 0.49$ & $\bm{14.48}$ & $\bm{7.1\times 10^4}$ \\ \midrule + Revert to RobGAN loss & $25.49\pm0.29$ & $14.03$ & $3.6\times 10^4$ & $\bm{45.94\pm0.52}$ & $25.38$ & $7.1\times 10^4$\\ + Disable KL-term~\eqref{eq:FastGAN-Ld} & $\bm{25.61\pm0.47}$ & $13.94$ & $3.6\times 10^4$ & $42.54\pm 0.40$ & $17.51$ & $7.1\times 10^4$ \\ \bottomrule \end{tabular} } \end{table} \begin{figure}[htb] \begin{center} \includegraphics[width=1.0\linewidth]{figs/catdog.pdf} \end{center} \vspace{-7pt} \caption{Comparing the convergence of GANs on 64x64 resolution (in (a) and (b)) and 128x128 resolution (in (c) and (d)) of the ImageNet-143 dataset.} \label{fig:conv_catdog} \end{figure} \subsection{ImageNet} \begin{table}[h!]
\caption{Results on full ImageNet.} \label{Tab:imgnetgq} \centering \scalebox{0.83}{ \begin{tabular}{lccc|lccc} \toprule & IS $\uparrow$ & FID $\downarrow$ & Time $\downarrow$ & & IS $\uparrow$ & FID $\downarrow$ & Time $\downarrow$ \\ \midrule Real data & $217.43\pm 5.17$ & $1.55$ & -- & & & & \\ \midrule \multicolumn{4}{l}{\textit{Trained with batch size $=64$}} & \multicolumn{4}{l}{\textit{Trained with batch size $=256$}}\\ SNGAN & $36.8$ & $27.62$ & $2.98\times 10^6$ & SAGAN & $52.52$ & \bm{$18.65$} & $1.79\times 10^6$ \\ FastGAN & \bm{$51.43\pm 2.51$} & \bm{$22.60$} & \bm{$1.34\times 10^6$} & FastGAN & \bm{$65.57\pm2.65$} & $19.41$ & \bm{$1.10\times 10^6$}\\ \bottomrule \end{tabular}} \end{table} In this experiment, we set SNGAN and SAGAN as the baselines. RobGAN is not included because its original paper reports no ImageNet experiments, nor could we scale RobGAN to ImageNet with the official implementation. The images are scaled and cropped to 128x128 pixels. A notable fact is that SNGAN is trained with batch size $=64$, while SAGAN is trained with batch size $=256$. To make a fair comparison, we train our FastGAN with both batch sizes and compare it with the corresponding official results. From Table~\ref{Tab:imgnetgq}, we find FastGAN generally better on both metrics; only its FID is slightly worse than SAGAN's. Since our model contains no self-attention block, FastGAN is $13\%$ smaller than SAGAN and $57\%$ faster in training. \vspace{-10pt} \section{Conclusion} \vspace{-10pt} In this work, we propose FastGAN, which incorporates a free adversarial training strategy to reduce the overall training time of GAN. We also modify the loss function to improve generation quality. We test FastGAN on datasets ranging from small to large scale and compare it with strong prior works. FastGAN demonstrates better generation quality with faster convergence speed in most cases.
\section{Introduction} Let $\mathbb{S}$ denote the real vector space of bounded linear self-adjoint operators, $\mathbb{P}\subset\mathbb{S}$ denote the cone of positive definite operators on a Hilbert space $\mathcal{H}$ equipped with the operator norm $\|\cdot\|$. Let \begin{equation*} d_\infty(A,B):=\|\log(A^{-1/2}BA^{-1/2})\|=\spr(\log(A^{-1}B)) \end{equation*} denote the Thompson metric, which turns $(\mathbb{P},d_\infty)$ into a complete metric space such that the topology generated by $d_\infty$ agrees with the relative operator norm topology \cite{thompson}. The Karcher mean \cite{karcher,sturm}, originally defined on $\mathbb{P}$ for finite dimensional $\mathcal{H}$ \cite{bhatiaholbrook,moakher} as a non-commutative generalization of the geometric mean \cite{kubo}, has been intensively investigated in the last decade \cite{bhatiakarandikar,limpalfia,lawsonlim1}. For a $k$-tuple of operators $\mathbb{A}:=(A_1,\ldots, A_k)$ with corresponding weight $\omega:=(w_1,\ldots,w_k)$ where $A_i\in\mathbb{P}$, $w_i>0$ and $\sum_{i=1}^kw_i=1$ the Karcher mean $\Lambda(\omega,\mathbb{A})$ is defined as the unique solution of the Karcher equation \begin{equation}\label{eq:intro1} \sum_{i=1}^kw_i\log(X^{-1/2}A_iX^{-1/2})=0 \end{equation} for $X\in\mathbb{P}$. The existence and uniqueness in the infinite dimensional case was proved by Lawson-Lim \cite{lawsonlim1}, generalizing the approximation technique of power means given in the finite dimensional case by Lim-P\'alfia \cite{limpalfia}. The power mean $P_t(\omega,\mathbb{A})$ for $t\in(0,1]$ is defined as the unique solution of the operator equation \begin{equation}\label{eq:intro2} \sum_{i=1}^kw_iX\#_tA_i=X \end{equation} where $A\#_tB=A^{1/2}\left(A^{-1/2}BA^{-1/2}\right)^{t}A^{1/2}$ is the geometric mean of $A,B\in\mathbb{P}$. 
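For a finite-dimensional illustration (not part of the paper), both $d_\infty$ and the weighted geometric mean $A\#_tB$ can be evaluated through symmetric eigendecompositions:

```python
import numpy as np

def spd_pow(A, t):
    """Fractional power of a symmetric positive definite matrix."""
    lam, U = np.linalg.eigh(A)
    return (U * lam**t) @ U.T

def d_inf(A, B):
    """Thompson metric: spectral radius of log(A^{-1} B)."""
    M = spd_pow(A, -0.5) @ B @ spd_pow(A, -0.5)  # symmetric, same spectrum as A^{-1}B
    return np.max(np.abs(np.log(np.linalg.eigvalsh(M))))

def geo_mean(A, B, t=0.5):
    """Weighted geometric mean A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}."""
    Ah, Aih = spd_pow(A, 0.5), spd_pow(A, -0.5)
    return Ah @ spd_pow(Aih @ B @ Aih, t) @ Ah

A = np.diag([1.0, 4.0]); B = np.diag([9.0, 1.0])
print(d_inf(A, B))        # max(|log 9|, |log(1/4)|) = log 9
print(geo_mean(A, B))     # diag(3, 2): entrywise sqrt for commuting matrices
```

The symmetry $d_\infty(A,B)=d_\infty(B,A)$ is visible here since the eigenvalues of $B^{-1}A$ are the reciprocals of those of $A^{-1}B$.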
It is proved in \cite{lawsonlim1} that the family $P_t(\omega,\mathbb{A})$ decreases to $\Lambda(\omega,\mathbb{A})$ in the strong operator topology as $t\downarrow 0$, extending the result of \cite{limpalfia}. A further generalization of $\Lambda$ to Borel probability measures with bounded support has been given in \cite{kimlee,palfia2} by integrating with respect to a Borel probability measure $\mu$ in \eqref{eq:intro1} and \eqref{eq:intro2} instead of taking sums. Let $\mathcal{P}^1(\mathbb{P})$ denote the convex set of $\tau$-additive Borel probability measures $\mu$ on $(\mathbb{P},\mathcal{B}(\mathbb{P}))$ such that $\int_{\mathbb{P}}d_\infty(X,A)d\mu(A)<+\infty$ for all $X\in\mathbb{P}$. Recall that a Borel measure $\mu$ is $\tau$-\emph{additive} if $\mu(\bigcup_\alpha U_\alpha)=\sup_\alpha \mu(U_\alpha)$ for all directed families $\{U_\alpha: \alpha\in D\}$ of open sets. In this paper, for $\mu\in\mathcal{P}^1(\mathbb{P})$ we consider the operator equation \begin{equation}\label{eq:intro3} \int_{\mathbb{P}}\log(X^{-1/2}AX^{-1/2})d\mu(A)=0, \end{equation} and establish the existence and uniqueness of its solution $X\in\mathbb{P}$, which provides the extension of the map $\Lambda(\cdot)$ to the case of $L^1$-probability measures over the infinite dimensional cone $(\mathbb{P},d_\infty)$. In particular, existence is established by approximation with finitely supported measures in the $L^1$-Wasserstein distance, by extending the fundamental $L^1$-Wasserstein contraction property \begin{equation*} d_\infty(\Lambda(\mu),\Lambda(\nu))\leq W_1(\mu,\nu) \end{equation*} for any $\mu,\nu\in\mathcal{P}^1(\mathbb{P})$, originally introduced by Sturm on $\mathrm{CAT}(0)$-spaces \cite{sturm}.
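As an illustrative numerical companion (not from the paper), the power mean equation \eqref{eq:intro2} can be solved by fixed-point iteration on $X\mapsto\sum_{i}w_iX\#_tA_i$; for commuting matrices and small $t$ the result is close to the geometric mean $\exp\left(\sum_{i=1}^kw_i\log A_i\right)$, consistent with the convergence of $P_t$ to $\Lambda$:

```python
import numpy as np

def spd_pow(A, t):
    # Fractional power of a symmetric positive definite matrix.
    lam, U = np.linalg.eigh(A)
    return (U * lam**t) @ U.T

def sharp(A, B, t):
    # Weighted geometric mean A #_t B.
    Ah, Aih = spd_pow(A, 0.5), spd_pow(A, -0.5)
    return Ah @ spd_pow(Aih @ B @ Aih, t) @ Ah

def power_mean(As, ws, t, iters=2000):
    X = sum(w * A for w, A in zip(ws, As))     # start from the arithmetic mean
    for _ in range(iters):
        X = sum(w * sharp(X, A, t) for w, A in zip(ws, As))
    return X

As = [np.diag([1.0, 8.0]), np.diag([4.0, 2.0])]
ws = [0.5, 0.5]
X = power_mean(As, ws, t=0.01)
# For these commuting A_i and small t, X is close to the geometric mean diag(2, 4).
print(np.round(X, 2))
```

In the commuting case the fixed point is the scalar power mean $\big(\sum_iw_ia_i^t\big)^{1/t}$ in each eigendirection, which tends to the geometric mean as $t\to0$.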
We also prove the uniqueness of the solution of \eqref{eq:intro3}, and develop a nonlinear ODE theory for $\Lambda$ by considering the Cauchy problem \begin{equation}\label{eq:introCP} \begin{split} X(0)&:=X\in\mathbb{P},\\ \dot{X}(t)&=\int_{\mathbb{P}}\log_{X(t)}Ad\mu(A), \end{split} \end{equation} for $t\in[0,\infty)$, where $\log_XA=X^{1/2}\log\left(X^{-1/2}AX^{-1/2}\right)X^{1/2}$ is the relative operator entropy \cite{fujii,fujiiseo}. We prove that the solutions of \eqref{eq:introCP} can be constructed by a discrete backward Euler scheme converging in the $d_\infty$ distance, generalizing the classical Crandall-Liggett techniques developed in Banach spaces \cite{crandall}. In order to obtain the discretizations, we introduce the nonlinear resolvent \begin{equation}\label{eq:introResolvent} J_{\lambda}^{\mu}(X):=\Lambda\left(\frac{\lambda}{\lambda+1}\mu+\frac{1}{\lambda+1}\delta_X\right) \end{equation} for $\lambda>0$. The advantage is that $J_{\lambda}^{\mu}$ is a strict contraction with respect to $d_\infty$ and satisfies the \emph{resolvent identity} needed to obtain, as in \cite{crandall}, the $O(\sqrt{\lambda})$ convergence rate estimate of the exponential formula towards the solution of \eqref{eq:introCP}, along with the semigroup property. We further obtain the exponential contraction rate estimate \begin{equation*} d_\infty(\gamma(t),\eta(t))\leq e^{-t}d_\infty(\gamma(0),\eta(0)) \end{equation*} valid for two solution curves of \eqref{eq:introCP} with varying initial points. This large-time behavior also ensures the uniqueness of stationary points, and the uniqueness of the solution of \eqref{eq:intro3}.
Furthermore, using the same exponential contraction estimate, we perform additional analysis of the Fr\'echet differential of the left hand side of \eqref{eq:intro3} at the unique solution $\Lambda(\mu)$, eventually proving the norm convergence conjecture of the power means $P_t$ to $\Lambda$ as $t\to 0$, a problem first mentioned in \cite{lawsonlim1} as a possible strengthening of the strong operator convergence. From the analysis of the Fr\'echet differential, a resolvent convergence also follows, which leads to a continuous-time Trotter-Kato-type product formula that is closely related to the law of large numbers of Sturm \cite{sturm} and its deterministic ``nodice'' counterparts proved in \cite{Hol,limpalfia2}, valid in $\mathrm{CAT}(0)$-spaces. In particular we prove that for a sequence $\mu_n$ of finitely supported probability measures $W_1$-convergent to $\mu$, we have \begin{equation*} \lim_{n\to\infty}S^{\mu_n}(t)=S^{\mu}(t) \end{equation*} in $d_\infty$, where $S^{\mu}(t)$ denotes the solution of the Cauchy problem \eqref{eq:introCP} corresponding to $\mu$. Under the assumption $\mu_n=\sum_{i=1}^n\frac{1}{n}\delta_{Y_i}$, we also prove the explicit product formula \begin{equation*} \lim_{m\to\infty}\left(F^{\mu_n}_{t/m}\right)^m=S^{\mu_n}(t) \end{equation*} in $d_\infty$, where $F^{\mu_n}_{\rho}:=J_{\rho/n}^{\delta_{Y_n}}\circ\cdots\circ J_{\rho/n}^{\delta_{Y_1}}$ with $J_{\rho}^{\delta_{A}}(X):=X\#_{\frac{\rho}{\rho+1}}A$, in the spirit of \eqref{eq:introResolvent}. The above formula is advantageous, since it contains only iterated geometric means of two operators, hence is explicitly computable.
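In the commuting (here scalar) case the product formula can be verified numerically, since the flow \eqref{eq:introCP} then has the closed-form solution $\log x(t)=e^{-t}\log x(0)+(1-e^{-t})\frac{1}{n}\sum_i\log a_i$. The following sketch (illustrative values only) iterates the two-operator resolvents $J_{\rho}^{\delta_{A}}$:

```python
import math

a = [1.0, 4.0, 9.0]                 # commuting (scalar) "operators" Y_i
n = len(a); x0, t = 16.0, 1.0

def resolvent(x, ai, lam):
    # J_lambda^{delta_A}(X) = X #_{lam/(lam+1)} A ; for scalars: x^{1-s} * a^s
    s = lam / (lam + 1.0)
    return x ** (1 - s) * ai ** s

m = 4000                            # number of product-formula steps
x = x0
for _ in range(m):
    for ai in a:                    # F_{t/m} = composition of the n resolvents
        x = resolvent(x, ai, (t / m) / n)

# Closed-form flow: log x(t) = e^{-t} log x0 + (1 - e^{-t}) * mean(log a_i)
exact = math.exp(math.exp(-t) * math.log(x0)
                 + (1 - math.exp(-t)) * sum(map(math.log, a)) / n)
print(abs(x - exact) < 1e-2)
```

Each step uses only a two-point geometric mean, mirroring why the operator product formula is explicitly computable.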
It must be noted that although similar results are available for the Karcher mean in the $\mathrm{CAT}(0)$ \cite{bacak,stojkovic,sturm} or even $\mathrm{CAT}(k)$ \cite{ohta0,ohta} setting, all these techniques break down in the infinite dimensional case $(\mathbb{P},d_\infty)$ due to the nonexistence of a convex potential function that would turn the solutions of \eqref{eq:introCP} into a gradient flow, the non-differentiability of the squared distance function $d_\infty^2(A,\cdot)$, and the ``non-commutativity'', in the sense of \cite{ohta}, of the operator norm $\|\cdot\|$ appearing in the formula for $d_\infty$. Even proving the resolvent convergence necessary for a Trotter-Kato product formula is non-trivial in the infinite dimensional case due to the lack of local compactness of the manifold $\mathbb{P}$. The paper is organized as follows. In section 2 we gather the necessary facts about the Karcher mean of finitely supported measures in relation with the distance $d_\infty$ and the $L^1$-Wasserstein distance $W_1$; in section 3 we extend the domain of $\Lambda$ to $\mathcal{P}^1(\mathbb{P})$ by $W_1$-continuity, and in section 4 we prove its uniqueness as a solution of \eqref{eq:intro3}. In section 5 we develop the ODE theory corresponding to \eqref{eq:introCP} by generalizing the argumentation in \cite{crandall}. In section 6 we develop the theory of approximation semigroups for the ODE flow \eqref{eq:introCP}. In section 7 we establish the $d_\infty$ convergence of the approximating resolvents needed for the Trotter-Kato formula of section 6, by combining the large-time behavior of the solutions of \eqref{eq:introCP} with operator theoretic techniques. As a byproduct we prove the norm convergence conjecture of $P_t$ to $\Lambda$ as $t\to 0$. The last section gathers the consequences of the earlier sections, establishing a continuous-time result corresponding to the law of large numbers.
\section{Preliminaries} Let $\mathcal{P}^1(\mathbb{P})$ denote the convex set of $\tau$-additive Borel probability measures $\mu$ on $(\mathbb{P},\mathcal{B}(\mathbb{P}))$ such that $\int_{\mathbb{P}}d_\infty(X,A)d\mu(A)<+\infty$ for all $X\in\mathbb{P}$. Notice that $\int_{\mathbb{P}}d_\infty(X,A)d\mu(A)<+\infty$ implies, by Proposition 23 of Chapter 4 in \cite{royden}, the \emph{uniform integrability} of $d_\infty$, that is \begin{equation}\label{eq:UnifromIntegrable} \lim_{R\to\infty}\int_{d_\infty(X,A)\geq R}d_\infty(X,A)d\mu(A)=0. \end{equation} More generally, we say that a sequence $\mu_n\in\mathcal{P}^1(\mathbb{P})$ is uniformly integrable if \begin{equation*} \lim_{R\to \infty}\limsup_{n\to\infty}\int_{d_\infty(X,A)\geq R}d_\infty(X,A)d\mu_n(A)=0 \end{equation*} for one (and thus all) $X\in\mathbb{P}$. For $\tau$-additive measures one may realize the complement of the support as a directed union of open sets of measure $0$, hence the complement has measure $0$ and the support has measure $1$. For the separability of the support of $\sigma$-additive measures over metric spaces, see \cite{lawson}. \begin{proposition}\label{P:separableSupp} Let $\mu\in\mathcal{P}^1(\mathbb{P})$. Then the support $\supp(\mu)$ is separable. \end{proposition} The $L^1$-Wasserstein distance between $\mu,\nu\in\mathcal{P}^1(\mathbb{P})$ is defined as \begin{equation*} W_1(\mu,\nu)=\inf_{\gamma\in\Pi(\mu,\nu)}\int_{\mathbb{P}\times\mathbb{P}}d_\infty(A,B)d\gamma(A,B) \end{equation*} where $\Pi(\mu,\nu)$ denotes the set of all $\tau$-additive Borel probability measures on the product space $\mathbb{P}\times\mathbb{P}$ with marginals $\mu$ and $\nu$. We consider $\tau$-additive measures since the following is not true in general for $\sigma$-additive Borel probability measures; however, it holds for $\tau$-additive ones: \begin{proposition}[Theorem 8.3.2. \& Example 8.1.6.
\cite{bogachev}]\label{P:weakW1agree} The topology generated by the Wasserstein metric $W_1(\cdot,\cdot)$ on $\mathcal{P}^1(\mathbb{P})$ agrees with the weak-$*$ (also called weak) topology of $\mathcal{P}^1(\mathbb{P})$ on uniformly integrable sequences of probability measures. Moreover, finitely supported probability measures are $W_1$-dense in $\mathcal{P}^1(\mathbb{P})$. \end{proposition} \begin{proof} Since the support of any member of $\mathcal{P}^1(\mathbb{P})$ is separable and the support of a $\tau$-additive probability measure has measure $1$, the proofs of Kantorovich duality go through when restricted to the supports of $\mu,\nu\in\mathcal{P}^1(\mathbb{P})$, thus essentially reducing to the case of a Polish metric space; see for example Theorem 6.9 in \cite{villani} and Theorem 8.10.45 in \cite{bogachev}. In particular Theorem 6.9 in \cite{villani} proves the equivalence between the two topologies. Then by Varadarajan's theorem, which can be found as Theorem 11.4.1 in \cite{dudley}, we have that for any $\mu\in\mathcal{P}^1(\mathbb{P})$ the empirical probability measures $\mu_n:=\sum_{i=1}^n\frac{1}{n}\delta_{Y_i}$ converge weakly to $\mu$ almost surely on the Polish metric space $(\supp(\mu),d_\infty)$, where $Y_i$ is a sequence of i.i.d. random variables on the Polish metric space $(\supp(\mu),d_\infty)$ with law $\mu$. So for each bounded continuous function $f$ on $(\supp(\mu),d_\infty)$ we have $\int_{\supp(\mu)}fd\mu_n\to\int_{\supp(\mu)}fd\mu$ outside of a set of measure $0$; on the complement we thus have weak convergence of $\mu_n$ to $\mu$. Now one is left with checking that $\mu_n$ is a uniformly integrable sequence, which follows from the uniform integrability of $\mu$ itself. \end{proof} \begin{definition}[strong measurability, Bochner integral]\label{D:BochnerIntegrable} Let $(\Omega,\Sigma,\mu)$ be a finite measure space and let $f:\Omega\to\mathbb{P}$.
Then $f$ is \emph{strongly measurable} if there exists a sequence of simple functions $f_n$ such that $\lim_{n\to\infty}f_n(\omega)=f(\omega)$ for $\mu$-almost every $\omega\in\Omega$. The function $f:\Omega\to\mathbb{P}$ is \emph{Bochner integrable} if the following are satisfied: \begin{itemize} \item[(1)] $f$ is strongly measurable; \item[(2)] there exists a sequence of simple functions $f_n$ such that $\lim_{n\to\infty}\int_{\Omega}\|f(\omega)-f_n(\omega)\|d\mu(\omega)=0$. \end{itemize} In this case we define the \emph{Bochner integral} of $f$ by $$\int_{\Omega}f(\omega)d\mu(\omega):=\lim_{n\to\infty}\int_{\Omega}f_n(\omega)d\mu(\omega).$$ \end{definition} It is well known that a strongly measurable function $f$ on a finite measure space $(\Omega,\Sigma,\mu)$ is Bochner integrable if and only if $\int_{\Omega}\|f(\omega)\|d\mu(\omega)<\infty$. The logarithm map $\log:{\Bbb P}\to \mathbb{S}$ is differentiable and contractive by the exponential metric increasing (EMI) property \cite{LL07}: \begin{eqnarray}\label{E:EMI}\|\log X-\log Y\|\leq d_{\infty}(X,Y), \ \ \ X,Y\in {\Bbb P}. \end{eqnarray} This property reflects the seminegative curvature of the Thompson metric, which can be realized as a Banach-Finsler metric arising from the Banach space norm on $\mathbb{S}$: for $A\in {\Bbb P},$ the Finsler norm of $X\in T_A {\Bbb P}=\mathbb{S}$ is given by $\Vert X\Vert_{A}= \Vert A^{-1/2}XA^{-1/2}\Vert$ and the exponential and logarithm maps are \begin{eqnarray}\label{E:BF} \exp_A(X)&=&A^{1/2}\exp(A^{-1/2}XA^{-1/2})A^{1/2},\\ \log_A(X)&=&A^{1/2}\log(A^{-1/2}XA^{-1/2})A^{1/2}. \end{eqnarray} Notice also that $\log_AX=A\log(A^{-1}X)$. \begin{lemma}\label{L:LogIntegrable} For all $\mu\in\mathcal{P}^1(\mathbb{P})$ and $X\in\mathbb{P}$, the Bochner integral $\int_{\mathbb{P}}\log_{X}Ad\mu(A)$ exists. \end{lemma} \begin{proof} First of all, notice that $A\mapsto X\log(X^{-1}A)$ is strongly measurable.
Indeed, $A\mapsto X\log(X^{-1}A)$ is norm continuous, hence $d_\infty$-to-norm continuous, and it is almost separably valued; thus by the Pettis measurability theorem it is strongly measurable. Then \begin{equation*} \begin{split} \int_{\mathbb{P}}\|X\log(X^{-1}A)\|d\mu(A)&\leq \int_{\mathbb{P}}\|X^{1/2}\|\|\log(X^{-1/2}AX^{-1/2})\|\|X^{1/2}\|d\mu(A)\\ &=\|X\|\int_{\mathbb{P}}\|\log(X^{-1/2}AX^{-1/2})\|d\mu(A)\\ &=\|X\|\int_{\mathbb{P}}d_\infty(X,A)d\mu(A)<\infty \end{split} \end{equation*} which shows Bochner integrability. \end{proof} \begin{definition}[Karcher equation/mean]\label{D:Karcher} For a $\mu\in\mathcal{P}^1(\mathbb{P})$ the \emph{Karcher equation} is defined as \begin{equation}\label{eq:D:Karcher} \int_{\mathbb{P}}\log_XAd\mu(A)=0, \end{equation} where $X\in\mathbb{P}$. If \eqref{eq:D:Karcher} has a unique solution $X\in\mathbb{P}$, then it is called the Karcher mean and is denoted by $\Lambda(\mu)$. \end{definition} \begin{definition}[Weighted geometric mean]\label{D:GeometricMean} Let $A,B\in\mathbb{P}$ and $t\in[0,1]$. Then for $(1-t)\delta_A+t\delta_B=:\mu\in\mathcal{P}^1(\mathbb{P})$ the Karcher equation \begin{equation*} \int_{\mathbb{P}}\log_XAd\mu(A)=(1-t)\log_XA+t\log_XB=0 \end{equation*} has a unique solution $A\#_tB=\Lambda(\mu)$ called the \emph{weighted geometric mean} and \begin{equation*} A\#_tB=A^{1/2}\left(A^{-1/2}BA^{-1/2}\right)^tA^{1/2}=A\left(A^{-1}B\right)^t. \end{equation*} \end{definition} By the dominated convergence theorem and Lemma~\ref{L:LogIntegrable} we have the following: \begin{lemma}\label{L:gradContinuous} For each $\mu\in\mathcal{P}^1(\mathbb{P})$ the function $X\mapsto \int_{\mathbb{P}}\log_XAd\mu(A)$ is $d_\infty$-to-norm continuous. \end{lemma} \begin{proof} Pick a sequence $X_n\to X$ in the $d_\infty$ topology in $\mathbb{P}$.
Then \begin{equation}\label{eq1:L:gradContinuous} \begin{split} &\left\|\int_{\mathbb{P}}\log_{X_n}Ad\mu(A)-\int_{\mathbb{P}}\log_{X}Ad\mu(A)\right\|\\ &\leq\int_{\mathbb{P}}\left\|\log_{X_n}A-\log_{X}A\right\|d\mu(A)\\ &\leq\int_{\mathbb{P}}\left(\left\|\log_{X_n}A\right\|+\left\|\log_{X}A\right\|\right)d\mu(A)\\ &\leq\|X_n\|\int_{\mathbb{P}}d_\infty(X_n,A)d\mu(A)+\|X\|\int_{\mathbb{P}}d_\infty(X,A)d\mu(A)<\infty, \end{split} \end{equation} thus $\left\|\log_{X_n}A-\log_{X}A\right\|$ is integrable; moreover, since $X_n\to X$, the bound $d_\infty(X_n,A)\leq d_\infty(X_n,X)+d_\infty(X,A)$ provides an integrable dominating function independent of $n$. Since $d_\infty$ agrees with the relative norm topology, we have that \begin{equation*} F_n(A):=\left\|\log_{X_n}A-\log_{X}A\right\|\to 0 \end{equation*} pointwise for every $A\in\mathbb{P}$ as $n\to\infty$. Then by the dominated convergence theorem we obtain \begin{equation*} \begin{split} \lim_{n\to\infty}\int_{\mathbb{P}}\left\|\log_{X_n}A-\log_{X}A\right\|d\mu(A)&=\int_{\mathbb{P}}\lim_{n\to\infty}\left\|\log_{X_n}A-\log_{X}A\right\|d\mu(A)\\ &=0. \end{split} \end{equation*} In view of \eqref{eq1:L:gradContinuous} this proves the assertion. \end{proof} For some further known facts below, see for example \cite{lawsonlim1}. \begin{lemma}\label{L:geometricMeanContraction} For a fixed $A\in\mathbb{P}$ and $t\in[0,1]$ the function $f(X):=X\#_tA$ is a contraction on $(\mathbb{P},d_\infty)$ with Lipschitz constant $(1-t)$. \end{lemma} \begin{proposition}[Power means, cf. \cite{lawsonlim1,limpalfia}]\label{P:PowerMeans} Let $t\in(0,1]$, $A_i\in\mathbb{P}$ for $1\leq i\leq n$ and let $\omega=(w_1,\ldots,w_n)$ be a probability vector so that $\mu=\sum_{i=1}^nw_i\delta_{A_i}\in\mathcal{P}^1(\mathbb{P})$. Then the function \begin{equation*} f(X):=\int_{\mathbb{P}}X\#_tAd\mu(A) \end{equation*} is a contraction on $(\mathbb{P},d_\infty)$ with Lipschitz constant $(1-t)$, and thus the operator equation \begin{equation}\label{eq:P:PowerMeans} \int_{\mathbb{P}}X\#_tAd\mu(A)=X \end{equation} has a unique solution $X\in\mathbb{P}$, which is denoted by $P_t(\mu)$.
\end{proposition} \begin{theorem}[see \cite{lawsonlim1,limpalfia}]\label{T:PowerMeanLimit} Let $t\in(0,1]$, $A_i\in\mathbb{P}$ for $1\leq i\leq n$ and let $\omega=(w_1,\ldots,w_n)$ be a probability vector so that $\mu=\sum_{i=1}^nw_i\delta_{A_i}\in\mathcal{P}^1(\mathbb{P})$. Then for $1\geq s\geq t>0$ we have $P_s(\mu)\geq P_t(\mu)$ and the strong operator limit \begin{equation}\label{eq:T:PowerMeanLimit} X_0:=\lim_{t\to 0+}P_t(\mu) \end{equation} exists and $X_0=\Lambda(\mu)$. \end{theorem} By the above we define $P_0(\mu):=\Lambda(\mu)$. \begin{proposition}[\cite{lawsonlim1,limpalfia}]\label{P:PowerPoperties} The function $P_t(\cdot)$ is operator monotone. That is, let $t\in[0,1]$, $A_i\leq B_i\in\mathbb{P}$ for $1\leq i\leq n$ and let $\omega=(w_1,\ldots,w_n)$ be a probability vector so that $\mu=\sum_{i=1}^nw_i\delta_{A_i},\nu=\sum_{i=1}^nw_i\delta_{B_i}\in\mathcal{P}^1(\mathbb{P})$. Then \begin{equation}\label{eq:P:PowerPoperties} P_t(\mu)\leq P_t(\nu). \end{equation} \end{proposition} \begin{theorem}[see Theorem 6.4. \cite{lawsonlim1}]\label{T:KarcherExist} Let $A_i\in\mathbb{P}$ for $1\leq i\leq n$ and let $\omega=(w_1,\ldots,w_n)$ be a probability vector. Then for $\mu=\sum_{i=1}^nw_i\delta_{A_i}$ the equation \eqref{eq:D:Karcher} has a unique positive definite solution $\Lambda(\mu)$. In the special case $n=2$, we have \begin{equation}\label{eq:T:KarcherExist} \Lambda((1-t)\delta_{A}+t\delta_{B})=A\#_tB \end{equation} for any $t\in[0,1]$, $A,B\in\mathbb{P}$. \end{theorem} \begin{proposition}[see Proposition 2.5. \cite{lawsonlim1}]\label{P:KarcherW1contracts} Let $A_i,B_i\in\mathbb{P}$ for $1\leq i\leq n$. 
Then $\Lambda$ for $\mu=\frac{1}{n}\sum_{i=1}^n\delta_{A_i}$ and $\nu=\frac{1}{n}\sum_{i=1}^n\delta_{B_i}$ satisfies \begin{equation}\label{eq0:P:KarcherW1contracts} d_{\infty}(\Lambda(\mu),\Lambda(\nu))\leq\sum_{i=1}^n\frac{1}{n}d_\infty(A_i,B_i), \end{equation} in particular by permutation invariance of $\Lambda$ in the variables $(A_1,\ldots,A_n)$ we have \begin{equation}\label{eq:P:KarcherW1contracts} d_{\infty}(\Lambda(\mu),\Lambda(\nu))\leq\min_{\sigma\in S_n}\sum_{i=1}^n\frac{1}{n}d_\infty(A_i,B_{\sigma(i)})=W_1(\mu,\nu). \end{equation} \end{proposition} \section{Extension of $\Lambda$ by $W_1$-continuity} We extend $\Lambda$, together with its $W_1$-contraction property, from finitely supported measures to the whole of $\mathcal{P}^1(\mathbb{P})$, using the $W_1$-density of finitely supported measures in $\mathcal{P}^1(\mathbb{P})$. \begin{lemma}\label{L:converge} Let $X_n,Y\in\mathbb{P}$ and $\mu_n,\mu\in\mathcal{P}^1(\mathbb{P})$ with $\supp(\mu_n),\supp(\mu)\subseteq Z\subset\mathbb{P}$ where $Z$ is closed and separable. Assume that $X_n\to Y$ in $d_\infty$ and $\mu_n\to \mu$ in $W_1$. Then \begin{equation*} \int_{\mathbb{P}}\log_{X_n}Ad\mu_n(A)\to \int_{\mathbb{P}}\log_{Y}Ad\mu(A) \end{equation*} in the weak Banach space topology. \end{lemma} \begin{proof} Let $x,y\in\mathbb{P}$.
For any real-valued norm continuous linear functional $l^*$ we have \begin{equation}\label{eq1:L:converge} \begin{split} &\left|\left\langle \int_{\mathbb{P}}\log_{x}Ad\mu_n(A)-\int_{\mathbb{P}}\log_{y}Ad\mu(A),l^* \right\rangle\right| \\ &\leq \left|\left\langle \int_{\mathbb{P}}\log_{x}Ad\mu(A)-\int_{\mathbb{P}}\log_{y}Ad\mu(A),l^* \right\rangle\right|\\ &\quad +\left|\left\langle \int_{\mathbb{P}}\log_{x}Ad\mu_n(A)-\int_{\mathbb{P}}\log_{x}Ad\mu(A),l^* \right\rangle\right|\\ &\leq \left|\left\langle \int_{\mathbb{P}}\log_{x}Ad\mu(A),l^* \right\rangle-\left\langle\int_{\mathbb{P}}\log_{y}Ad\mu(A),l^* \right\rangle\right|\\ &\quad +\left|\int_{\mathbb{P}}\left\langle\log_{x}A,l^* \right\rangle d\mu_n(A)-\int_{\mathbb{P}}\left\langle\log_{x}A,l^* \right\rangle d\mu(A)\right|. \end{split} \end{equation} If $x\to y$ in $d_\infty$, then the first term in the above converges to $0$ by Lemma~\ref{L:gradContinuous}. Now the complete metric space $(Z,d_\infty)$ is separable, so we can apply some well-known theorems for the metric $W_1$ restricted to probability measures with support included in $Z$. In fact, Proposition 7.1.5 in \cite{AGS} tells us that $\mu_n$ has uniformly integrable $1$-moments, i.e. \begin{equation*} \lim_{R\to \infty}\sup_{n}\int_{d_\infty(x,A)\geq R}d_\infty(x,A)d\mu_n(A)=0. \end{equation*} Now, for the second term in \eqref{eq1:L:converge} we have the estimates \begin{equation*} \begin{split} \int_{\mathbb{P}}\left|\left\langle\log_{x}A,l^* \right\rangle\right| d\mu_n(A)&\leq \|l^*\|_*\|x\|\int_{\mathbb{P}}\|\log(x^{-1/2}Ax^{-1/2})\|d\mu_n(A)\\ &\leq \|l^*\|_*\|x\|\int_{\mathbb{P}}d_\infty(x,A)d\mu_n(A), \end{split} \end{equation*} which means that the functions $A\mapsto\left\langle\log_{x}A,l^* \right\rangle$ are uniformly integrable with respect to the sequence $\mu_n$ as well. In \cite{bogachev} Lemma 8.4.3.
says that if $\xi_\alpha \to\xi$ in the weak-$*$ topology for Baire probability measures $\xi_\alpha,\xi$ on a topological space $X$, then for every real-valued continuous function $f$ on $X$ satisfying $\lim_{R\to \infty}\sup_{\alpha}\int_{|f|\geq R}|f|d\xi_\alpha=0,$ we have $\lim_{\alpha}\int_{X}fd\xi_\alpha=\int_{X}fd\xi$. Thus the second term of \eqref{eq1:L:converge} also converges to $0$. \end{proof} \begin{theorem}\label{T:LambdaExists} For all $\mu\in\mathcal{P}^1(\mathbb{P})$ there exists a solution of \eqref{eq:D:Karcher} denoted by $\Lambda(\mu)$ (with slight abuse of notation), which satisfies \begin{equation}\label{eq:T:LambdaExists} d_\infty(\Lambda(\mu),\Lambda(\nu))\leq W_1(\mu,\nu) \end{equation} for all $\nu\in\mathcal{P}^1(\mathbb{P})$. \end{theorem} \begin{proof} Let $\mu\in\mathcal{P}^1(\mathbb{P})$. Then by Proposition~\ref{P:weakW1agree} there exists a $W_1$-convergent sequence of finitely supported probability measures $\mu_n\in\mathcal{P}^1(\mathbb{P})$ such that $W_1(\mu,\mu_n)\to 0$. By Theorem~\ref{T:KarcherExist} $\Lambda(\mu_n)$ exists for every $n$. We also have that $W_1(\mu_m,\mu_n)\to 0$ as $m,n\to\infty$ and by \eqref{eq:P:KarcherW1contracts} it follows that $d_\infty(\Lambda(\mu_m),\Lambda(\mu_n))\to 0$ as $m,n\to\infty$, i.e. $\Lambda(\mu_n)$ is a $d_\infty$-Cauchy sequence. Thus we define \begin{equation}\label{Q} \tilde{\Lambda}(\mu):=\lim_{n\to\infty}\Lambda(\mu_n). \end{equation} Since \eqref{eq:T:LambdaExists} holds by Proposition~\ref{P:KarcherW1contracts} for finitely supported probability measures, we extend \eqref{eq:T:LambdaExists} to the whole of $\mathcal{P}^1(\mathbb{P})$ by $W_1$-continuity, using the $W_1$-density of finitely supported probability measures in $\mathcal{P}^1(\mathbb{P})$.
Then by construction for all $n$ we have \begin{equation*} \int_{\mathbb{P}}\log_{\Lambda(\mu_n)}Ad\mu_n(A)=0, \end{equation*} thus by Lemma~\ref{L:converge} we have \begin{equation*} \int_{\mathbb{P}}\log_{\Lambda(\mu_n)}Ad\mu_n(A)\to \int_{\mathbb{P}}\log_{\tilde{\Lambda}(\mu)}Ad\mu(A) \end{equation*} weakly, that is \begin{equation*} \int_{\mathbb{P}}\log_{\tilde{\Lambda}(\mu)}Ad\mu(A)=0. \end{equation*} \end{proof} \begin{definition}[Karcher mean]\label{D:KarcherMean} Given a $\mu\in\mathcal{P}^1(\mathbb{P})$, we define $\Lambda(\mu)$ as the limit obtained in Theorem~\ref{T:LambdaExists}. Notice that the limit does not depend on the actual approximating sequence of measures due to \eqref{eq:T:LambdaExists}. \end{definition} \section{The uniqueness of $\Lambda$} In this section we establish the uniqueness of the solution of \eqref{eq:D:Karcher}. We will need the following result that establishes this for probability measures with bounded support. \begin{theorem}[Theorem 6.13. \& Example 6.1. in \cite{palfia2}]\label{T:KarcherExist2} Let $\mu\in\mathcal{P}^1(\mathbb{P})$ such that $\supp(\mu)$ is bounded. Then the Karcher equation \eqref{eq:D:Karcher} has a unique positive definite solution $\Lambda(\mu)$. \end{theorem} The following result is well known for Wasserstein spaces over general metric spaces; we provide its proof for completeness. \begin{proposition}\label{P:W_1convex} The $W_1$ distance is convex, that is for $\mu_1,\mu_2,\nu_1,\nu_2\in\mathcal{P}^1(\mathbb{P})$ and $t\in[0,1]$ we have \begin{equation}\label{eq:P:W_1convex} W_1((1-t)\mu_1+t\mu_2,(1-t)\nu_1+t\nu_2)\leq (1-t)W_1(\mu_1,\nu_1)+tW_1(\mu_2,\nu_2). \end{equation} \end{proposition} \begin{proof} Let $\omega_1\in\Pi(\mu_1,\nu_1), \omega_2\in\Pi(\mu_2,\nu_2)$ where $\Pi(\mu,\nu)\subseteq\mathcal{P}(\mathbb{P}\times\mathbb{P})$ denotes the set of all couplings of $\mu,\nu\in\mathcal{P}^1(\mathbb{P})$.
Then $(1-t)\omega_1+t\omega_2\in\Pi((1-t)\mu_1+t\mu_2,(1-t)\nu_1+t\nu_2)$ and we have \begin{equation*} \begin{split} W_1&((1-t)\mu_1+t\mu_2,(1-t)\nu_1+t\nu_2)\\ &=\inf_{\gamma\in\Pi((1-t)\mu_1+t\mu_2,(1-t)\nu_1+t\nu_2)}\int_{\mathbb{P}\times\mathbb{P}}d_\infty(A,B)d\gamma(A,B)\\ &\leq\int_{\mathbb{P}\times\mathbb{P}}d_\infty(A,B)d((1-t)\omega_1+t\omega_2)(A,B)\\ &=(1-t)\int_{\mathbb{P}\times\mathbb{P}}d_\infty(A,B)d\omega_1(A,B)+t\int_{\mathbb{P}\times\mathbb{P}}d_\infty(A,B)d\omega_2(A,B), \end{split} \end{equation*} thus by taking infima in $\omega_1\in\Pi(\mu_1,\nu_1), \omega_2\in\Pi(\mu_2,\nu_2)$, \eqref{eq:P:W_1convex} follows. \end{proof} \begin{theorem}\label{T:L1KarcherUniqueness} Let $\mu\in\mathcal{P}^1(\mathbb{P})$. Then the Karcher equation \eqref{eq:D:Karcher} has a unique solution in $\mathbb{P}$. \end{theorem} \begin{proof} Let $X\in\mathbb{P}$ be a solution of \eqref{eq:D:Karcher}, i.e. \begin{equation*} \int_{\mathbb{P}}\log_{X}Ad\mu(A)=0. \end{equation*} Let $B(X,R):=\{Y\in\mathbb{P}:d_\infty(Y,X)<R\}$. Then, since $\int_{\mathbb{P}}d_\infty(X,A)d\mu(A)<+\infty$, from Proposition 23 of Chapter 4 in \cite{royden} it follows that \begin{equation}\label{eq0:T:L1KarcherUniqueness2} \lim_{R\to\infty}\int_{\mathbb{P}\setminus B(X,R)}d_\infty(X,A)d\mu(A)=0. \end{equation} For $R\in[0,\infty)$, if $\mu(\mathbb{P}\setminus B(X,R))>0$ define \begin{equation*} E(R):=\frac{1}{\mu(\mathbb{P}\setminus B(X,R))}\int_{\mathbb{P}\setminus B(X,R)}\log(X^{-1/2}AX^{-1/2})d\mu(A) \end{equation*} and $E(R):=0$ otherwise. Also define $Z(R):=X^{1/2}\exp(E(R))X^{1/2}$ and $\mu_R\in\mathcal{P}^1(\mathbb{P})$ by \begin{equation*} \mu_R:=\mu|_{B(X,R)}+\mu(\mathbb{P}\setminus B(X,R))\delta_{Z(R)} \end{equation*} where $\mu|_{B(X,R)}$ is the restriction of $\mu$ to $B(X,R)$. Note that $\mu_R$ has bounded support for any $R\in(0,\infty)$. Next, we claim that $\lim_{R\to\infty}W_1(\mu_R,\mu)=0$.
If $W_1(\mu_{R_0},\mu)=0$ for some $R_0>0$, then $W_1(\mu_R,\mu)=0$ for all $R\geq R_0$ and we are done, so assume $W_1(\mu_{R},\mu)\neq 0$. We have \begin{equation*} \begin{split} W_1(\mu_R,\mu)&=W_1\left(\mu|_{B(X,R)}+\mu(\mathbb{P}\setminus B(X,R))\delta_{Z(R)},\right.\\ &\left.\quad\quad\quad\mu|_{B(X,R)}+\mu(\mathbb{P}\setminus B(X,R))\frac{1}{\mu(\mathbb{P}\setminus B(X,R))}\mu|_{\mathbb{P}\setminus B(X,R)}\right)\\ &\leq\mu(B(X,R))W_1\left(\frac{1}{\mu(B(X,R))}\mu|_{B(X,R)},\frac{1}{\mu(B(X,R))}\mu|_{B(X,R)}\right)\\ &\quad+\mu(\mathbb{P}\setminus B(X,R))W_1\left(\delta_{Z(R)},\frac{1}{\mu(\mathbb{P}\setminus B(X,R))}\mu|_{\mathbb{P}\setminus B(X,R)}\right)\\ &=\int_{\mathbb{P}\setminus B(X,R)}d_\infty(Z(R),A)d\mu(A)\\ &\leq\int_{\mathbb{P}\setminus B(X,R)}\left(d_\infty(Z(R),X)+d_\infty(X,A)\right)d\mu(A)\\ &=\int_{\mathbb{P}\setminus B(X,R)}\|E(R)\|d\mu(A)+\int_{\mathbb{P}\setminus B(X,R)}d_\infty(X,A)d\mu(A)\\ &=\left\|\int_{\mathbb{P}\setminus B(X,R)}\log(X^{-1/2}AX^{-1/2})d\mu(A)\right\|+\int_{\mathbb{P}\setminus B(X,R)}d_\infty(X,A)d\mu(A)\\ &\leq\int_{\mathbb{P}\setminus B(X,R)}\left\|\log(X^{-1/2}AX^{-1/2})\right\|d\mu(A)+\int_{\mathbb{P}\setminus B(X,R)}d_\infty(X,A)d\mu(A)\\ &=2\int_{\mathbb{P}\setminus B(X,R)}d_\infty(X,A)d\mu(A) \end{split} \end{equation*} where to obtain the first inequality we used \eqref{eq:P:W_1convex}. This proves our claim by \eqref{eq0:T:L1KarcherUniqueness2}. On one hand, since $\mu_R$ has bounded support for all $R\in(0,\infty)$, by Theorem~\ref{T:KarcherExist2} it follows that the Karcher equation \begin{equation}\label{eq1:T:L1KarcherUniqueness2} \int_{\mathbb{P}}\log_YAd\mu_R(A)=0 \end{equation} has a unique solution in $\mathbb{P}$ and that must be $\Lambda(\mu_R)$ by Theorem~\ref{T:LambdaExists}. On the other hand, we have that by definition $X$ is also a solution of \eqref{eq1:T:L1KarcherUniqueness2}, thus $\Lambda(\mu_R)=X$ for all $R\in(0,\infty)$.
Now by Proposition~\ref{P:weakW1agree} we choose a sequence of finitely supported probability measures $\mu_n\in\mathcal{P}^1(\mathbb{P})$ that is $W_1$-converging to $\mu$, so that by Theorem~\ref{T:LambdaExists} $\Lambda(\mu_n)\to\Lambda(\mu)$. Then, by the claim, $W_1(\mu_R,\mu_n)\to 0$ as $R,n\to \infty$, thus by the contraction property \eqref{eq:T:LambdaExists} we get $d_\infty(\Lambda(\mu_R),\Lambda(\mu_n))\to 0$, that is, $d_\infty(X,\Lambda(\mu_n))\to 0$. Since also $\Lambda(\mu_n)\to\Lambda(\mu)$, it follows that $X=\Lambda(\mu)$, which proves the uniqueness of the solution of \eqref{eq:D:Karcher}. \end{proof} \begin{remark} Many properties of $\Lambda$ now carry over to the $L^1$-setting. The interested reader can consult section 6 in \cite{palfia2} and \cite{lawson}. In particular the stochastic order introduced in \cite{kimlee,lawson} extends the usual element-wise order of uniformly finitely supported measures by introducing upper sets: a set $U\subseteq\mathbb{P}$ is upper if whenever $X\in\mathbb{P}$ and there exists a $Y\in U$ such that $Y\leq X$, then $X\in U$. Then the \emph{stochastic order} for $\mu,\nu\in\mathcal{P}^{1}(\mathbb{P})$ is defined as $\mu\leq\nu$ if $\mu(U)\leq\nu(U)$ for all upper sets $U\subseteq\mathbb{P}$. Then the results in \cite{lawson} apply, and if $\mu\leq\nu$ then $\Lambda(\mu)\leq \Lambda(\nu)$. This can also be proved by applying the results of section 6 in \cite{kimlee} to the infinite dimensional setting with the monotonicity results of \cite{palfia2} for measures with bounded support. \end{remark} \section{An ODE flow of $\Lambda$} The fundamental $W_1$-contraction property \eqref{eq:T:LambdaExists} enables us to develop an ODE flow theory for $\Lambda$ that resembles the gradient flow theory of its potential function in the finite dimensional $\mathrm{CAT}(0)$-space case, see \cite{limpalfia2,ohta} and the monograph \cite{bacak}.
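Before recalling the gradient flow construction that motivates the definitions below, the finitely supported theory used above can be probed numerically. The following sketch is our own illustration (the helper \texttt{power\_mean} and the iteration counts are ours): it computes $P_t(\mu)$ for a finitely supported $\mu$ by Banach iteration of the $(1-t)$-contraction $X\mapsto\int_{\mathbb{P}}X\#_tA\,d\mu(A)$ of Proposition~\ref{P:PowerMeans}, and observes the Loewner ordering $P_s(\mu)\geq P_t(\mu)$ for $s\geq t$ from Theorem~\ref{T:PowerMeanLimit}, whose limit $t\to 0+$ recovers $\Lambda(\mu)$.

```python
import numpy as np

def _eig_apply(S, f):
    # scalar function of a symmetric positive definite matrix via eigendecomposition
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

sqrtm    = lambda S: _eig_apply(S, np.sqrt)
invsqrtm = lambda S: _eig_apply(S, lambda w: 1.0 / np.sqrt(w))
powm     = lambda S, t: _eig_apply(S, lambda w: w ** t)

def gmean(X, A, t):
    # X #_t A = X^{1/2} (X^{-1/2} A X^{-1/2})^t X^{1/2}
    rX, irX = sqrtm(X), invsqrtm(X)
    return rX @ powm(irX @ A @ irX, t) @ rX

def power_mean(As, ws, t, iters=400):
    # P_t(mu), mu = sum_i w_i delta_{A_i}: the unique fixed point of the
    # (1-t)-contraction X -> sum_i w_i (X #_t A_i), found by Banach iteration
    X = np.mean(As, axis=0)
    for _ in range(iters):
        X = sum(w * gmean(X, A, t) for w, A in zip(ws, As))
    return X

As = [np.array([[2.0, 0.5], [0.5, 1.0]]),
      np.array([[1.0, -0.3], [-0.3, 3.0]]),
      np.array([[4.0, 1.0], [1.0, 2.0]])]
ws = [0.5, 0.3, 0.2]

P_half  = power_mean(As, ws, 0.5)
P_tenth = power_mean(As, ws, 0.1)

# fixed-point residual of P_{1/2}, and the ordering P_s >= P_t for s >= t
res = np.linalg.norm(P_half - sum(w * gmean(P_half, A, 0.5) for w, A in zip(ws, As)))
gap = np.linalg.eigvalsh(P_half - P_tenth).min()
print(res, gap)  # res is numerically zero; gap is nonnegative
```

The slow decay of the contraction constant $(1-t)$ as $t\to 0+$ is one numerical face of the degeneration that makes the direct definition of $\Lambda$ delicate and motivates the resolvent approach of this section.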
Given a $\mathrm{CAT}(\kappa)$-space $(X,d)$, the \emph{Moreau-Yosida} resolvent of a lower semicontinuous function $f$ is defined as \begin{equation*} J_{\lambda}(x):=\argmin_{y\in X}f(y)+\frac{1}{2\lambda}d^2(x,y) \end{equation*} for $\lambda>0$. Then the gradient flow $S(t)$ semigroup of $f$ is defined as \begin{equation*} S(t)x_0:=\lim_{n\to\infty}(J_{t/n})^nx_0 \end{equation*} for $t\in[0,\infty)$ and starting point $x_0\in X$, see \cite{bacak}. However, in the infinite dimensional case, substituting $d_\infty$ in place of $d$ in the above formulas leads to many difficulties: in particular $d_\infty^2$ is not uniformly convex, and moreover $d_\infty$ is not differentiable, since the operator norm $\|\cdot\|$ is an $L^{\infty}$-type norm, hence not smooth. Also the potential function $f$ is not known to exist in the infinite dimensional case of $\mathbb{P}$, since there exists no finite trace on $\mathcal{B}(\mathcal{H})$ to be used to define any Riemannian metric on $\mathbb{P}$. However, if we use the critical point equation corresponding to the definition of $J_\lambda$ above, we can obtain a reasonable ODE theory for $\Lambda$ in our setting. \begin{definition}[Resolvent operator] Given $\mu\in\mathcal{P}^1(\mathbb{P})$ we define the resolvent operator for $\lambda>0$ and $X\in\mathbb{P}$ as \begin{equation}\label{eq:D:resolvent} J_{\lambda}^{\mu}(X):=\Lambda\left(\frac{\lambda}{\lambda+1}\mu+\frac{1}{\lambda+1}\delta_X\right), \end{equation} which is the solution, obtained in Theorem~\ref{T:LambdaExists}, of the Karcher equation \begin{equation*} \frac{\lambda}{\lambda+1}\int_{\mathbb{P}}\log_{Z}Ad\mu(A)+\frac{1}{\lambda+1}\log_{Z}(X)=0 \end{equation*} for $Z\in\mathbb{P}$ according to Definition~\ref{D:KarcherMean}. \end{definition} The resolvent operator exists for $\lambda\in [0,\infty]$ and provides a continuous path from $X$ to $\Lambda(\mu)$.
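For a finitely supported $\mu$ the resolvent \eqref{eq:D:resolvent} can be explored numerically. In the sketch below (our illustration only; the damped fixed-point "Karcher flow" iteration inside \texttt{karcher} is a standard numerical device for solving Karcher equations, not a construction from the text) the augmented measure $\frac{\lambda}{\lambda+1}\mu+\frac{1}{\lambda+1}\delta_X$ is formed explicitly, its Karcher equation is solved iteratively, and the $\frac{1}{1+\lambda}$-contractivity of $J_\lambda^\mu$ established next can be observed.

```python
import numpy as np

def _eig_apply(S, f):
    # scalar function of a symmetric positive definite matrix via eigendecomposition
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

sqrtm    = lambda S: _eig_apply(S, np.sqrt)
invsqrtm = lambda S: _eig_apply(S, lambda w: 1.0 / np.sqrt(w))
logm     = lambda S: _eig_apply(S, np.log)
expm     = lambda S: _eig_apply(S, np.exp)

def d_inf(X, Y):
    # Thompson metric d_inf(X,Y) = || log(X^{-1/2} Y X^{-1/2}) ||
    w = np.linalg.eigvalsh(invsqrtm(X) @ Y @ invsqrtm(X))
    return np.max(np.abs(np.log(w)))

def karcher(Bs, vs, iters=300, step=0.5):
    # solves sum_i v_i log_Z(B_i) = 0 by a damped fixed-point ("Karcher flow") iteration:
    # Z <- Z^{1/2} exp(step * sum_i v_i log(Z^{-1/2} B_i Z^{-1/2})) Z^{1/2}
    Z = np.mean(Bs, axis=0)
    for _ in range(iters):
        rZ, irZ = sqrtm(Z), invsqrtm(Z)
        G = sum(v * logm(irZ @ B @ irZ) for v, B in zip(vs, Bs))
        Z = rZ @ expm(step * G) @ rZ
    return Z

def resolvent(As, ws, X, lam):
    # J_lambda^mu(X) = Lambda( lam/(lam+1) * mu + 1/(lam+1) * delta_X )
    Bs = As + [X]
    vs = [lam / (lam + 1.0) * w for w in ws] + [1.0 / (lam + 1.0)]
    return karcher(Bs, vs)

As = [np.array([[2.0, 0.5], [0.5, 1.0]]), np.array([[1.0, -0.3], [-0.3, 3.0]])]
ws = [0.6, 0.4]
X, Y = np.eye(2), np.array([[3.0, 0.0], [0.0, 0.5]])
lam = 1.0
JX, JY = resolvent(As, ws, X, lam), resolvent(As, ws, Y, lam)
print(d_inf(JX, JY), d_inf(X, Y) / (1 + lam))  # first value should not exceed second
```

Iterating the map $X\mapsto J_{t/n}^{\mu}(X)$ as in the exponential formula of this section then yields a numerical approximation of the semigroup $S(t)$.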
An alternative such operator is $$\Lambda(X\#_{t}\mu), \ \ \ \ t\in [0,1]$$ where $$ X\#_{t} \mu = f_{*}(\mu), \ \ f(A):=X\#_{t}A.$$ We readily obtain the following fundamental contraction property of the resolvent. \begin{proposition}[Resolvent contraction]\label{P:ResolventContraction} Given $\mu\in\mathcal{P}^1(\mathbb{P})$, for $\lambda>0$ and $X,Y\in\mathbb{P}$ we have \begin{equation}\label{eq:P:ResolventContraction} d_\infty(J_{\lambda}^\mu(X),J_{\lambda}^\mu(Y))\leq \frac{1}{1+\lambda}d_\infty(X,Y). \end{equation} \end{proposition} \begin{proof} Let $\mu_\alpha\in\mathcal{P}^1(\mathbb{P})$ be a net of finitely supported measures $W_1$-converging to $\mu$ by Proposition~\ref{P:weakW1agree}. Then by the triangle inequality and Proposition~\ref{P:KarcherW1contracts} we get \begin{equation*} \begin{split} &d_\infty(J_{\lambda}^\mu(X),J_{\lambda}^\mu(Y))\\ &\leq d_\infty(J_{\lambda}^\mu(X),J_{\lambda}^{\mu_\alpha}(X))+d_\infty(J_{\lambda}^{\mu_\alpha}(X),J_{\lambda}^{\mu_\alpha}(Y))+d_\infty(J_{\lambda}^{\mu_\alpha}(Y),J_{\lambda}^\mu(Y))\\ &\leq d_\infty(J_{\lambda}^\mu(X),J_{\lambda}^{\mu_\alpha}(X))+\frac{1}{1+\lambda}d_\infty(X,Y)+d_\infty(J_{\lambda}^{\mu_\alpha}(Y),J_{\lambda}^\mu(Y)). \end{split} \end{equation*} Since $d_\infty(J_{\lambda}^\mu(Z),J_{\lambda}^{\mu_\alpha}(Z))\to 0$ as $\alpha\to\infty$ by \eqref{eq:T:LambdaExists}, taking the limit $\alpha\to\infty$ in the above chain of inequalities yields the assertion. \end{proof} \begin{proposition}[Resolvent identity]\label{P:ResolventIdentity} Given $\mu\in\mathcal{P}^1(\mathbb{P})$, for $\tau>\lambda>0$ and $X\in\mathbb{P}$ we have \begin{equation}\label{eq:P:ResolventIdentity} J_{\tau}^\mu(X)=J_{\lambda}^\mu\left(J_{\tau}^\mu(X)\#_{\frac{\lambda}{\tau}}X\right). \end{equation} \end{proposition} \begin{proof} First suppose that $\mu=\sum_{i=1}^nw_i\delta_{A_i}$ where $A_i\in\mathbb{P}$ for $1\leq i\leq n$ and $\omega=(w_1,\ldots,w_n)$ a probability vector. 
By \eqref{eq:D:resolvent} we have \begin{equation*} \tau\int_{\mathbb{P}}\log_{J_{\tau}^\mu(X)}Ad\mu(A)+\log_{J_{\tau}^\mu(X)}X=0 \end{equation*} and from that it follows that \begin{equation*} \begin{split} \lambda\int_{\mathbb{P}}\log_{J_{\tau}^\mu(X)}Ad\mu(A)+\frac{\lambda}{\tau}\log_{J_{\tau}^\mu(X)}X&=0,\\ \lambda\int_{\mathbb{P}}\log_{J_{\tau}^\mu(X)}Ad\mu(A)+\log_{J_{\tau}^\mu(X)}\left(J_{\tau}^\mu(X)\#_{\frac{\lambda}{\tau}}X\right)&=0, \end{split} \end{equation*} and the above equation still uniquely determines $J_{\tau}^\mu(X)$ as its only positive solution by Theorem~\ref{T:KarcherExist}, thus establishing \eqref{eq:P:ResolventIdentity} for finitely supported measures $\mu$. The general $\mu\in\mathcal{P}^1(\mathbb{P})$ case of \eqref{eq:P:ResolventIdentity} is obtained by approximating $\mu$ in $W_1$ by a net of finitely supported measures $\mu_{\alpha}\in\mathcal{P}^1(\mathbb{P})$ and using \eqref{eq:T:LambdaExists} to show that $J_{\lambda}^{\mu_\alpha}(X)\to J_{\lambda}^\mu(X)$ in $d_\infty$ and also the fact that $\#_t$ appearing in \eqref{eq:P:ResolventIdentity} is also $d_\infty$-continuous, hence obtaining \eqref{eq:P:ResolventIdentity} in the limit as $\mu_\alpha\to\mu$ in $W_1$. \end{proof} \begin{proposition}\label{P:ResolventBound} Given $\mu\in\mathcal{P}^1(\mathbb{P})$, $\lambda>0$ and $X\in\mathbb{P}$ we have \begin{equation}\label{eq:P:ResolventBound} \begin{split} d_\infty(J_{\lambda}^\mu(X),X)&\leq \frac{\lambda}{1+\lambda}\int_{\mathbb{P}}d_\infty(X,A)d\mu(A)\\ d_\infty\left(\left(J_{\lambda}^\mu\right)^n(X),X\right)&\leq n\frac{\lambda}{1+\lambda}\int_{\mathbb{P}}d_\infty(X,A)d\mu(A). 
\end{split} \end{equation} \end{proposition} \begin{proof} By Theorem~\ref{T:LambdaExists} $J_{\lambda}^\mu(X)$ is a solution of \begin{equation}\label{eq1:P:ResolventBound} \lambda\int_{\mathbb{P}}\log_{J_{\lambda}^\mu(X)}Ad\mu(A)+\log_{J_{\lambda}^\mu(X)}X=0, \end{equation} hence we have \begin{equation*} \begin{split} d_\infty(J_{\lambda}^\mu(X),X)&=\left\|\log\left(J_{\lambda}^\mu(X)^{-1/2}XJ_{\lambda}^\mu(X)^{-1/2}\right)\right\|\\ &=\lambda\left\|\int_{\mathbb{P}}\log\left(J_{\lambda}^\mu(X)^{-1/2}AJ_{\lambda}^\mu(X)^{-1/2}\right)d\mu(A)\right\|\\ &\leq \lambda\int_{\mathbb{P}}\left\|\log\left(J_{\lambda}^\mu(X)^{-1/2}AJ_{\lambda}^\mu(X)^{-1/2}\right)\right\|d\mu(A)\\ &=\lambda\int_{\mathbb{P}}d_\infty(J_{\lambda}^\mu(X),A)d\mu(A). \end{split} \end{equation*} Given $J_{\lambda}^\mu(X)\in\mathbb{P}$ we can solve \eqref{eq1:P:ResolventBound} for $X\in\mathbb{P}$, thus by Proposition~\ref{P:ResolventContraction} we also have \begin{equation*} d_\infty(J_{\lambda}^\mu(X),X)=d_\infty\left(J_{\lambda}^\mu(X),J_{\lambda}^\mu\left(\left(J_{\lambda}^\mu\right)^{-1}(X)\right)\right)\leq \frac{1}{1+\lambda}d_\infty\left(X,\left(J_{\lambda}^\mu\right)^{-1}(X)\right), \end{equation*} and since the defining equation \eqref{eq1:P:ResolventBound}, written at the point $\left(J_{\lambda}^\mu\right)^{-1}(X)$, gives $\log_{X}\left(\left(J_{\lambda}^\mu\right)^{-1}(X)\right)=-\lambda\int_{\mathbb{P}}\log_{X}Ad\mu(A)$, we get $d_\infty\left(X,\left(J_{\lambda}^\mu\right)^{-1}(X)\right)\leq\lambda\int_{\mathbb{P}}d_\infty(X,A)d\mu(A)$; hence the first inequality in \eqref{eq:P:ResolventBound} follows. The second inequality in \eqref{eq:P:ResolventBound} follows from the first by the estimate \begin{equation*} \begin{split} d_\infty\left(\left(J_{\lambda}^\mu\right)^n(X),X\right)&\leq \sum_{i=0}^{n-1}d_\infty\left(\left(J_{\lambda}^\mu\right)^{n-i}(X),\left(J_{\lambda}^\mu\right)^{n-(i+1)}(X)\right)\\ &\leq \sum_{i=0}^{n-1}(1+\lambda)^{-n+(i+1)}d_\infty\left(J_{\lambda}^\mu(X),X\right)\\ &\leq nd_\infty\left(J_{\lambda}^\mu(X),X\right). \end{split} \end{equation*} \end{proof} In what follows we will closely follow the arguments in \cite{crandall} to construct the semigroups corresponding to the resolvent above. $B(k,l)$ denotes the binomial coefficient $\binom{k}{l}$. \begin{lemma}[a variant of Lemma 1.3 cf.
\cite{crandall}]\label{L:Crandall} Let $\mu\in\mathcal{P}^1(\mathbb{P})$, $\tau\geq\lambda>0$; $n\geq m$ be positive integers and $X\in\mathbb{P}$. Then \begin{equation*} \begin{split} d_\infty&\left(\left(J_{\tau}^\mu\right)^n(X),\left(J_{\lambda}^\mu\right)^m(X)\right)\\ &\leq (1+\lambda)^{-n}\sum_{j=0}^{m-1}\alpha^j\beta^{n-j}B(n,j)d_\infty\left(\left(J_{\tau}^\mu\right)^{m-j}(X),X\right)\\ &\quad+\sum_{j=m}^{n}(1+\lambda)^{-j}\alpha^m\beta^{j-m}B(j-1,m-1)d_\infty\left(\left(J_{\lambda}^\mu\right)^{n-j}(X),X\right) \end{split} \end{equation*} where $\alpha=\frac{\lambda}{\tau}$ and $\beta=\frac{\tau-\lambda}{\tau}$. \end{lemma} \begin{proof} For integers $j$ and $k$ satisfying $0\leq j\leq n$ and $0\leq k\leq m$, put \begin{equation*} a_{k,j}:=d_\infty\left(\left(J_{\lambda}^\mu\right)^{j}(X),\left(J_{\tau}^\mu\right)^{k}(X)\right). \end{equation*} For $j,k>0$ by Proposition~\ref{P:ResolventContraction} and Proposition~\ref{P:ResolventIdentity} we have \begin{equation*} \begin{split} a_{k,j}&=d_\infty\left(\left(J_{\lambda}^\mu\right)^{j}(X),J_{\lambda}^\mu\left(\left(J_{\tau}^\mu\right)^{k}(X)\#_{\frac{\lambda}{\tau}}\left(J_{\tau}^\mu\right)^{k-1}(X)\right)\right)\\ &\leq (1+\lambda)^{-1}d_\infty\left(\left(J_{\lambda}^\mu\right)^{j-1}(X),\left(J_{\tau}^\mu\right)^{k}(X)\#_{\frac{\lambda}{\tau}}\left(J_{\tau}^\mu\right)^{k-1}(X)\right)\\ &\leq (1+\lambda)^{-1}\left[\frac{\tau-\lambda}{\tau}d_\infty\left(\left(J_{\lambda}^\mu\right)^{j-1}(X),\left(J_{\tau}^\mu\right)^{k}(X)\right)\right.\\ &\quad\left.+\frac{\lambda}{\tau}d_\infty\left(\left(J_{\lambda}^\mu\right)^{j-1}(X),\left(J_{\tau}^\mu\right)^{k-1}(X)\right)\right]\\ &=(1+\lambda)^{-1}\frac{\lambda}{\tau}a_{k-1,j-1}+(1+\lambda)^{-1}\frac{\tau-\lambda}{\tau}a_{k,j-1}, \end{split} \end{equation*} where to obtain the second inequality we used Proposition~\ref{P:KarcherW1contracts} for $\#_t$. From here, the rest of the proof follows along the lines of Lemma 1.3 in \cite{crandall}. 
\end{proof} We quote the following Lemma 1.4 from \cite{crandall}: \begin{lemma} Let $n\geq m>0$ be integers, and $\alpha,\beta$ positive numbers satisfying $\alpha+\beta=1$. Then \begin{equation*} \sum_{j=0}^{m}B(n,j)\alpha^j\beta^{n-j}(m-j)\leq \sqrt{(n\alpha-m)^2+n\alpha\beta}, \end{equation*} and \begin{equation*} \sum_{j=m}^{n}B(j-1,m-1)\alpha^m\beta^{j-m}(n-j)\leq \sqrt{\frac{m\beta}{\alpha^2}+\left(\frac{m\beta}{\alpha^2}+m-n\right)^2}. \end{equation*} \end{lemma} \begin{theorem}\label{T:ExponentialFormula} For any $X,Y\in\mathbb{P}$ and $t>0$ the limit \begin{equation}\label{eq1:T:ExponentialFormula} S(t)X:=\lim_{n\to\infty}\left(J_{t/n}^{\mu}\right)^{n}(X) \end{equation} exists in the $d_\infty$-topology, and the curve $t\mapsto S(t)X$ is Lipschitz-continuous on compact time intervals $[0,T]$ for any $T>0$. Moreover it satisfies the contraction property \begin{equation}\label{eq2:T:ExponentialFormula} d_\infty\left(S(t)X,S(t)Y\right)\leq e^{-t}d_\infty(X,Y), \end{equation} and for $s>0$ verifies the semigroup property \begin{equation}\label{eq3:T:ExponentialFormula} S(t+s)X=S(t)(S(s)X), \end{equation} and the flow operator $S:\mathbb{P}\times(0,\infty)\to \mathbb{P}$ extends by $d_\infty$-continuity to $S:\mathbb{P}\times[0,\infty)\to \mathbb{P}$. \end{theorem} \begin{proof} The proof closely follows that of Theorem I in \cite{crandall} using the previous estimates of this section. In particular for $n\geq m>0$ one obtains \begin{equation}\label{eq4:T:ExponentialFormula} d_\infty\left(\left(J_{t/n}^\mu\right)^{n}(X),\left(J_{t/m}^\mu\right)^{m}(X)\right)\leq 2t\left(\frac{1}{m}-\frac{1}{n}\right)^{1/2}\int_{\mathbb{P}}d_\infty(X,A)d\mu(A) \end{equation} so $\lim_{n\to\infty}\left(J_{t/n}^\mu\right)^{n}(X)$ exists, proving \eqref{eq1:T:ExponentialFormula}.
Also $\left(J_{t/n}^\mu\right)^{n}$ satisfies \begin{equation*} d_\infty\left(\left(J_{t/n}^\mu\right)^{n}(X),\left(J_{t/n}^\mu\right)^{n}(Y)\right)\leq \left(1+\frac{t}{n}\right)^{-n}d_\infty(X,Y), \end{equation*} hence also \eqref{eq2:T:ExponentialFormula}. We also have \begin{equation}\label{eq5:T:ExponentialFormula} d_\infty\left(S(s)X,S(t)X\right)\leq 2|s-t|\int_{\mathbb{P}}d_\infty(X,A)d\mu(A) \end{equation} proving Lipschitz-continuity in $t$ on compact time intervals. The proof of the semigroup property is exactly the same as in \cite{crandall}. \end{proof} Before stating the next result we need another auxiliary lemma describing the asymptotic behavior of $J_{t/n}^{\mu}(X)$. \begin{lemma}\label{L:ResolventAsymptotics} Let $\mu\in\mathcal{P}^1(\mathbb{P})$, $\lambda>0$ and $X\in\mathbb{P}$. Then \begin{equation}\label{eq:L:ResolventAsymptotics} \log_{J_{\lambda}^{\mu}(X)}X=X-J_{\lambda}^{\mu}(X)+O\left(\lambda^2\right). \end{equation} \end{lemma} \begin{proof} Let $C:=\int_{\mathbb{P}}d_\infty(X,A)d\mu(A)$. Then by Proposition~\ref{P:ResolventBound} \begin{equation*} e^{-\lambda\left(1+\lambda\right)^{-1}C}-I\leq J_{\lambda}^{\mu}(X)^{-1/2}XJ_{\lambda}^{\mu}(X)^{-1/2}-I\leq e^{\lambda\left(1+\lambda\right)^{-1}C}-I, \end{equation*} hence \begin{equation*} e^{-\lambda C}-I\leq J_{\lambda}^{\mu}(X)^{-1/2}XJ_{\lambda}^{\mu}(X)^{-1/2}-I\leq e^{\lambda C}-I, \end{equation*} which yields \begin{equation}\label{eq1:L:ResolventAsymptotics} \sum_{k=1}^\infty (-1)^k\frac{\left(\lambda C\right)^k}{k!}\leq J_{\lambda}^{\mu}(X)^{-1/2}XJ_{\lambda}^{\mu}(X)^{-1/2}-I\leq \sum_{k=1}^\infty \frac{\left(\lambda C\right)^k}{k!}. 
\end{equation} In view of the series expansion \begin{equation*} \log(z)=\sum_{k=1}^\infty(-1)^{k-1}\frac{(z-I)^k}{k}, \end{equation*} uniformly convergent for $\|z-I\|<1$, we get \begin{equation*} \begin{split} \log&\left(J_{\lambda}^{\mu}(X)^{-1/2}XJ_{\lambda}^{\mu}(X)^{-1/2}\right)\\ &=J_{\lambda}^{\mu}(X)^{-1/2}XJ_{\lambda}^{\mu}(X)^{-1/2}-I+O\left(\lambda^2\right), \end{split} \end{equation*} from which the assertion follows. \end{proof} The proof of the following theorem is, in essence, analogous to that of Theorem II in \cite{crandall}. \begin{theorem}\label{T:StrongSolution} Let $\mu\in\mathcal{P}^1(\mathbb{P})$ and $X\in\mathbb{P}$. Then for $t>0$, the curve $X(t):=S(t)X$ provides a strong solution of the Cauchy problem \begin{equation*} \begin{split} X(0)&:=X,\\ \dot{X}(t)&=\int_{\mathbb{P}}\log_{X(t)}Ad\mu(A), \end{split} \end{equation*} where the derivative $\dot{X}(t)$ is the Fr\'echet-derivative. \end{theorem} \begin{proof} Due to the semigroup property of $S(t)$, it is enough to check that \begin{equation*} \lim_{t\to 0+}\frac{S(t)X-X}{t}=\int_{\mathbb{P}}\log_{X}Ad\mu(A) \end{equation*} where the limit is in the norm topology. We have \begin{equation*} \begin{split} \frac{S(t)X-X}{t}&=\lim_{n\to\infty}\frac{\left(J_{t/n}^{\mu}\right)^n(X)-X}{t}\\ &=\lim_{n\to\infty}\frac{1}{n}\frac{\sum_{i=0}^{n-1}J_{t/n}^{\mu}\left(\left(J_{t/n}^{\mu}\right)^i(X)\right)-\left(J_{t/n}^{\mu}\right)^i(X)}{t/n} \end{split} \end{equation*} and also \begin{equation*} \frac{t}{n}\int_{\mathbb{P}}\log_{\left(J_{t/n}^{\mu}\right)^i(X)}Ad\mu(A)+\log_{\left(J_{t/n}^{\mu}\right)^i(X)}\left(J_{t/n}^{\mu}\right)^{i-1}(X)=0.
\end{equation*} Then by Lemma~\ref{L:ResolventAsymptotics} we have \begin{equation*} \frac{S(t)X-X}{t}=\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\int_{\mathbb{P}}\log_{\left(J_{t/n}^{\mu}\right)^i(X)}Ad\mu(A)+O\left(\frac{t}{n}\right), \end{equation*} which combined with the estimates in Proposition~\ref{P:ResolventBound} and Lemma~\ref{L:gradContinuous} proves the assertion. \end{proof} \begin{proposition}\label{P:StationaryFlow} Let $\mu\in\mathcal{P}^1(\mathbb{P})$. Then the semigroup $S(t)$ generated in Theorem~\ref{T:ExponentialFormula} is stationary at $\Lambda(\mu)$, that is $S(t)\Lambda(\mu)=\Lambda(\mu)$ for all $t>0$. \end{proposition} \begin{proof} It is enough to show that $J^\mu_\lambda(\Lambda(\mu))=\Lambda(\mu)$ for any $\lambda>0$. Indeed, by substitution $\Lambda(\mu)$ is a solution of \begin{equation*} \frac{\lambda}{\lambda+1}\int_{\mathbb{P}}\log_{Z}Ad\mu(A)+\frac{1}{\lambda+1}\log_{Z}(\Lambda(\mu))=0, \end{equation*} but this solution is unique by Theorem~\ref{T:L1KarcherUniqueness} and by definition \eqref{eq:D:resolvent} it is $J^\mu_\lambda(\Lambda(\mu))$. \end{proof} \begin{problem} Are the solution curves $\gamma:[0,\infty)\mapsto \mathbb{P}$ of the Cauchy problem in Theorem~\ref{T:StrongSolution} unique? \end{problem} \section{Approximating semigroups and Trotter-Kato product formula} In this section we develop the theory of approximating semigroups that will lead to a Trotter-Kato product formula for the nonlinear ODE semigroups of the Karcher mean. \begin{lemma}\label{L:ApproxResolvent} Let $F:\mathbb{P}\mapsto\mathbb{P}$ be a nonexpansive map with respect to $d_\infty$. Let $\lambda,\rho>0$ and $Y\in\mathbb{P}$. Then the map \begin{equation*} G_{\lambda,\rho,Y}(X):=\Lambda\left(\frac{1}{1+\lambda/\rho}\delta_{Y}+\frac{\lambda/\rho}{1+\lambda/\rho}\delta_{F(X)}\right) \end{equation*} is a strict contraction with Lipschitz constant $\frac{\lambda/\rho}{1+\lambda/\rho}<1$.
Consequently the map $G_{\lambda,\rho,Y}$ has a unique fixed point denoted by $J_{\lambda,\rho}(Y)$. \end{lemma} \begin{proof} By Proposition~\ref{P:KarcherW1contracts} for $X_1,X_2\in\mathbb{P}$ we get \begin{equation*} d_\infty(G_{\lambda,\rho,Y}(X_1),G_{\lambda,\rho,Y}(X_2))\leq \frac{\lambda/\rho}{1+\lambda/\rho}d_\infty(F(X_1),F(X_2))\leq \frac{\lambda/\rho}{1+\lambda/\rho}d_\infty(X_1,X_2), \end{equation*} thus by Banach's fixed point theorem $G_{\lambda,\rho,Y}$ has a unique fixed point denoted by $J_{\lambda,\rho}(Y)$, hence \begin{equation}\label{eq:L:ApproxResolvent1} J_{\lambda,\rho}(Y)=\Lambda\left(\frac{1}{1+\lambda/\rho}\delta_{Y}+\frac{\lambda/\rho}{1+\lambda/\rho}\delta_{F(J_{\lambda,\rho}(Y))}\right). \end{equation} \end{proof} \begin{lemma}\label{L:ApproxResolventContr} Let $F:\mathbb{P}\mapsto\mathbb{P}$ be a nonexpansive map with respect to $d_\infty$. Then for $\lambda,\rho>0$ the map $J_{\lambda,\rho}:\mathbb{P}\mapsto\mathbb{P}$ is nonexpansive. \end{lemma} \begin{proof} By Proposition~\ref{P:KarcherW1contracts} and Lemma~\ref{L:ApproxResolvent} for $X_1,X_2\in\mathbb{P}$ and $t:=\frac{\lambda/\rho}{1+\lambda/\rho}<1$ we get \begin{equation*} \begin{split} d_\infty(J_{\lambda,\rho}(X_1),J_{\lambda,\rho}(X_2))&\leq (1-t)d_\infty(X_1,X_2)+td_\infty(F(J_{\lambda,\rho}(X_1)),F(J_{\lambda,\rho}(X_2)))\\ &\leq (1-t)d_\infty(X_1,X_2)+td_\infty(J_{\lambda,\rho}(X_1),J_{\lambda,\rho}(X_2)), \end{split} \end{equation*} from which $d_\infty(J_{\lambda,\rho}(X_1),J_{\lambda,\rho}(X_2))\leq d_\infty(X_1,X_2)$ follows. \end{proof} \begin{lemma}\label{L:ApproxResolventEst} Let $F:\mathbb{P}\mapsto\mathbb{P}$ be a nonexpansive map. Then for $\lambda,\rho>0$ and $X\in\mathbb{P}$ we have \begin{equation}\label{eq:L:ApproxResolventEst} \frac{d_\infty(X,J_{\lambda,\rho}(X))}{\lambda}\leq \frac{d_\infty(X,F(X))}{\rho}. \end{equation} \end{lemma} \begin{proof} By Lemma~\ref{L:ApproxResolvent} we have $\lim_{n\to\infty}G_{\lambda,\rho,X}^n(X)=J_{\lambda,\rho}(X)$. 
Then \begin{equation*} \begin{split} d_\infty(X,J_{\lambda,\rho}(X))&\leq \sum_{n=1}^{\infty}d_\infty(G_{\lambda,\rho,X}^{n-1}(X),G_{\lambda,\rho,X}^n(X))\\ &\leq \sum_{n=1}^{\infty}\left(\frac{\lambda/\rho}{1+\lambda/\rho}\right)^{n-1}d_\infty(X,G_{\lambda,\rho,X}(X))\\ &\leq \frac{1}{1-\frac{\lambda/\rho}{1+\lambda/\rho}}d_\infty(X,G_{\lambda,\rho,X}(X))=\frac{\lambda}{\rho}d_\infty(X,F(X)), \end{split} \end{equation*} since $d_\infty(X,G_{\lambda,\rho,X}(X))=\frac{\lambda/\rho}{1+\lambda/\rho}d_\infty(X,F(X))$. \end{proof} \begin{lemma}[Resolvent Identity]\label{L:ApproxResolventIdentity} Let $F:\mathbb{P}\mapsto\mathbb{P}$ be a nonexpansive map. Then for $\lambda>\mu>0$, $\rho>0$ and $X\in\mathbb{P}$ we have \begin{equation}\label{eq:L:ApproxResolventIdentity} J_{\lambda,\rho}(X)=J_{\mu,\rho}\left(J_{\lambda,\rho}(X)\#_{\frac{\mu}{\lambda}}X\right). \end{equation} \end{lemma} \begin{proof} First of all notice that for $A,B\in\mathbb{P}$ the curve $c(t)=A\#_tB$ for $t\in[0,1]$ has the property that for any $0\leq s\leq u\leq 1$ the curve $v(t):=c(s)\#_tc(u)$ is a connected subset of the curve $c$, i.e. $v(t)=c(s+t(u-s))$. Now consider the curve $\gamma(t):=X\#_tF(J_{\lambda,\rho}(X))$ for $t\in[0,1]$. Then by definition it follows that the points $J_{\lambda,\rho}(X)$ and $Z:=J_{\lambda,\rho}(X)\#_{\frac{\mu}{\lambda}}X$ are also on the curve $\gamma$, hence by the above $c(t):=Z\#_tF(J_{\lambda,\rho}(X))$ for $t\in[0,1]$ is a connected subset of the curve $\gamma$. Then to conclude our assertion, by \eqref{eq:L:ApproxResolvent1} and Theorem~\ref{T:KarcherExist}, it suffices to show that \begin{equation*} \frac{d_\infty(Z,J_{\lambda,\rho}(X))}{d_\infty(Z,F(J_{\lambda,\rho}(X)))}=\frac{\mu}{\rho+\mu}. \end{equation*} Indeed, let $a:=d_\infty(X,F(J_{\lambda,\rho}(X)))$, $b:=d_\infty(J_{\lambda,\rho}(X),F(J_{\lambda,\rho}(X)))$ and $d:=d_\infty(Z,J_{\lambda,\rho}(X))$.
Then $b=\frac{\rho}{\rho+\lambda}a$, $d=(a-b)\frac{\mu}{\lambda}=\frac{\lambda}{\rho+\lambda}\frac{\mu}{\lambda}a=\frac{\mu}{\rho+\lambda}a$, thus we have \begin{equation*} \frac{d_\infty(Z,J_{\lambda,\rho}(X))}{d_\infty(Z,F(J_{\lambda,\rho}(X)))}=\frac{d}{d+b}=\frac{\mu}{\rho+\mu}. \end{equation*} \end{proof} \begin{lemma}\label{L:ApproxResolventEst2} Let $F:\mathbb{P}\mapsto\mathbb{P}$ be a nonexpansive map. Then for $\lambda,\rho>0,n\in\mathbb{N}$ and $X\in\mathbb{P}$ we have \begin{equation}\label{eq:L:ApproxResolventEst2} d_\infty(J_{\lambda,\rho}^n(X),X)\leq n\frac{\lambda}{\rho}d_\infty(X,F(X)). \end{equation} \end{lemma} \begin{proof} By the triangle inequality, Lemma~\ref{L:ApproxResolventContr} and Lemma~\ref{L:ApproxResolventEst} we have \begin{equation*} \begin{split} d_\infty(J_{\lambda,\rho}^n(X),X)&\leq \sum_{i=1}^nd_\infty(J_{\lambda,\rho}^i(X),J_{\lambda,\rho}^{i-1}(X))\\ &\leq nd_\infty(J_{\lambda,\rho}(X),X)\\ &\leq n\frac{\lambda}{\rho}d_\infty(X,F(X)). \end{split} \end{equation*} \end{proof} \begin{lemma} Let $F:\mathbb{P}\mapsto\mathbb{P}$ be a nonexpansive map, let $\tau\geq\lambda>0$, $\rho>0$, let $n\geq m$ be positive integers and let $X\in\mathbb{P}$. Then \begin{equation*} \begin{split} d_\infty&\left(\left(J_{\tau,\rho}\right)^m(X),\left(J_{\lambda,\rho}\right)^n(X)\right)\\ &\leq \sum_{j=0}^{m-1}\alpha^j\beta^{n-j}B(n,j)d_\infty\left(\left(J_{\tau,\rho}\right)^{m-j}(X),X\right)\\ &\quad+\sum_{j=m}^{n}\alpha^m\beta^{j-m}B(j-1,m-1)d_\infty\left(\left(J_{\lambda,\rho}\right)^{n-j}(X),X\right) \end{split} \end{equation*} where $\alpha=\frac{\lambda}{\tau}$ and $\beta=\frac{\tau-\lambda}{\tau}$. \end{lemma} \begin{proof} Using the Resolvent Identity Lemma~\ref{L:ApproxResolventIdentity}, the estimates are obtained in the same way as in Lemma~\ref{L:Crandall}. \end{proof} \begin{theorem}\label{T:ExponentialFormulaAppr} Let $F:\mathbb{P}\mapsto\mathbb{P}$ be a nonexpansive map.
Then for any $X,Y\in\mathbb{P}$ and $t,\rho>0$ the curve \begin{equation}\label{eq1:T:ExponentialFormulaAppr} S_{\rho}(t)X:=\lim_{n\to\infty}\left(J_{t/n,\rho}\right)^{n}(X) \end{equation} exists where the limit is in the $d_\infty$-topology with estimate \begin{equation}\label{eq2:T:ExponentialFormulaAppr} d_\infty(S_{\rho}(t)X,\left(J_{t/n,\rho}\right)^{n}(X))\leq \frac{2t}{\sqrt{n}}\frac{d_\infty(X,F(X))}{\rho}, \end{equation} and satisfies the Lipschitz estimate \begin{equation}\label{eq3:T:ExponentialFormulaAppr} d_\infty(S_{\rho}(t)X,S_{\rho}(s)X)\leq 2\frac{d_\infty(X,F(X))}{\rho}|t-s| \end{equation} for any $t,s\geq 0$. Moreover it also satisfies the contraction property \begin{equation}\label{eq4:T:ExponentialFormulaAppr} d_\infty\left(S_{\rho}(t)X,S_{\rho}(t)Y\right)\leq d_\infty(X,Y), \end{equation} for $s>0$ verifies the semigroup property \begin{equation}\label{eq5:T:ExponentialFormulaAppr} S_{\rho}(t+s)X=S_{\rho}(t)(S_{\rho}(s)X), \end{equation} and the flow operator $S_{\rho}:\mathbb{P}\times(0,\infty)\mapsto \mathbb{P}$ extends by $d_\infty$-continuity to $S_{\rho}:\mathbb{P}\times[0,\infty)\mapsto \mathbb{P}$. \end{theorem} \begin{proof} We closely follow the proof of Theorem~\ref{T:ExponentialFormula}. 
Using the previous lemmas we similarly obtain \begin{equation}\label{eq:T:ExponentialFormulaAppr1} \begin{split} d_\infty&\left(\left(J_{\tau,\rho}\right)^m(X),\left(J_{\lambda,\rho}\right)^n(X)\right)\\ &\leq \sum_{j=0}^{m-1}\alpha^j\beta^{n-j}B(n,j)d_\infty\left(\left(J_{\tau,\rho}\right)^{m-j}(X),X\right)\\ &\quad+\sum_{j=m}^{n}\alpha^m\beta^{j-m}B(j-1,m-1)d_\infty\left(\left(J_{\lambda,\rho}\right)^{n-j}(X),X\right)\\ &\leq \sum_{j=0}^{m-1}\alpha^j\beta^{n-j}B(n,j)(m-j)\tau\frac{d_\infty(X,F(X))}{\rho}\\ &\quad+\sum_{j=m}^{n}\alpha^m\beta^{j-m}B(j-1,m-1)(n-j)\lambda\frac{d_\infty(X,F(X))}{\rho}\\ &\leq \left[\tau\sqrt{\left(n\frac{\lambda}{\tau}-m\right)^2+n\frac{\lambda}{\tau}\frac{\tau-\lambda}{\tau}}\right.\\ &\quad\left.+\lambda\sqrt{\frac{m\tau(\tau-\lambda)}{\lambda^2}+\left(\frac{m(\tau-\lambda)}{\lambda}+m-n\right)^2}\right]\frac{d_\infty(X,F(X))}{\rho}\\ &=\left[\sqrt{(n\lambda-m\tau)^2+n\lambda(\tau-\lambda)}\right.\\ &\quad\left.+\sqrt{m\tau(\tau-\lambda)+(m\tau-n\lambda)^2}\right]\frac{d_\infty(X,F(X))}{\rho}. \end{split} \end{equation} For $\tau=\frac{t}{m},\lambda=\frac{t}{n}$, the above reads \begin{equation*} d_\infty\left(\left(J_{t/m,\rho}\right)^{m}(X),\left(J_{t/n,\rho}\right)^{n}(X)\right)\leq 2t\left(\frac{1}{m}-\frac{1}{n}\right)^{1/2}\frac{d_\infty(X,F(X))}{\rho}, \end{equation*} so the limit in \eqref{eq1:T:ExponentialFormulaAppr} exists by completeness and satisfies \eqref{eq2:T:ExponentialFormulaAppr}, moreover the above also yields the Lipschitz estimate \eqref{eq3:T:ExponentialFormulaAppr}. The rest of the properties are routine to prove, by following the steps of the proof of Theorem~\ref{T:ExponentialFormula}. \end{proof} \begin{lemma}\label{L:ApprErrorEst} Let $F:\mathbb{P}\mapsto\mathbb{P}$ be a nonexpansive map, $\rho>0$ and let $S_\rho(t)$ be the semigroup constructed in Theorem~\ref{T:ExponentialFormulaAppr}.
Then for $t>0$, $X\in\mathbb{P}$ and $m\in\mathbb{N}$ we have \begin{itemize} \item[(i)]$S_\rho(t)X=S_1(t/\rho)X$, \item[(ii)]$d_\infty(F^m(X),S_\rho(t)X)\leq \left[\frac{t}{\rho}-m+2\sqrt{\left(\frac{t}{\rho}-m\right)^2+\frac{t}{\rho}}\right]d_\infty(X,F(X))$. \end{itemize} \end{lemma} \begin{proof} The proof of (i) follows directly from the fact that for $\rho,\lambda>0$, we have $J_{\lambda,\rho}=J_{\lambda/\rho,1}$. We turn to the proof of (ii), which is more involved. Proposition~\ref{P:KarcherW1contracts} and \eqref{eq:L:ApproxResolvent1} yield that \begin{equation*} \begin{split} d_\infty(F^m(X),(J_{t/n,1})^n(X))&\leq \frac{1}{1+\frac{t}{n}}d_\infty(F^m(X),(J_{t/n,1})^{n-1}(X))\\ &\quad+\frac{\frac{t}{n}}{1+\frac{t}{n}}d_\infty(F^m(X),F((J_{t/n,1})^{n}(X))). \end{split} \end{equation*} Using the above inequality recursively, we get \begin{equation*} \begin{split} d_\infty(F^m(X),(J_{t/n,1})^n(X))&\leq \left(1+\frac{t}{n}\right)^{-n}d_\infty(F^m(X),X)\\ &+\frac{t}{n}\sum_{k=1}^n\left(1+\frac{t}{n}\right)^{-(n-k)}d_\infty(F^{m}(X),F((J_{t/n,1})^{k}(X)))\\ &\leq \left(1+\frac{t}{n}\right)^{-n}md_\infty(F(X),X)\\ &+\frac{t}{n}\sum_{k=1}^n\left(1+\frac{t}{n}\right)^{-(n-k)}d_\infty(F^{m-1}(X),(J_{t/n,1})^{k}(X)). \end{split} \end{equation*} For $n\in\mathbb{N}$ define \begin{equation*} \begin{split} f_n(s)&:=\sum_{k=1}^n\left(1+\frac{t}{n}\right)^{-(n-k)}1_{\left(\frac{(k-1)t}{n},\frac{kt}{n}\right]}(s),\\ g_n(s)&:=\sum_{k=1}^nd_\infty(F^{m-1}(X),(J_{t/n,1})^{k}(X))1_{\left(\frac{(k-1)t}{n},\frac{kt}{n}\right]}(s), \end{split} \end{equation*} so that the above becomes \begin{equation*} d_\infty(F^m(X),(J_{t/n,1})^n(X))\leq \left(1+\frac{t}{n}\right)^{-n}md_\infty(F(X),X)+\int_{0}^tf_n(s)g_n(s)ds. \end{equation*} We will show that $f_n(s)\to e^{-(t-s)}$ and $g_n(s)\to d_\infty(F^{m-1}(X),S_1(s)X)$ for $s\in(0,t]$ and that $\sup_{n\in\mathbb{N},s\in[0,t]}|f_n(s)||g_n(s)|<\infty$.
Then by dominated convergence we will have \begin{equation}\label{eq:L:ApprErrorEst1} \begin{split} d_\infty(F^m(X),S_1(t)X)&\leq e^{-t}md_\infty(X,F(X))\\ &\quad+\int_{0}^te^{-(t-s)}d_\infty(F^{m-1}(X),S_1(s)X)ds. \end{split} \end{equation} Firstly it is routine to see that $f_n(s)\to e^{-(t-s)}$. To prove the other claim, let $n\in\mathbb{N}$ and $s\in(0,t]$. There is a unique $0<k_{s,n}\leq n$ such that \begin{equation*} \frac{(k_{s,n}-1)t}{n}<s \leq \frac{k_{s,n}t}{n}. \end{equation*} Applying the last bound in \eqref{eq:T:ExponentialFormulaAppr1} with the step $\frac{t}{n}$ iterated $k_{s,n}$ times and the step $\frac{s}{m}$ iterated $m$ times (admissible for all sufficiently large $m$) gives \begin{equation*} \begin{split} d_\infty((J_{t/n,1})^{k_{s,n}}(X),&(J_{s/m,1})^{m}(X))\leq \left[\sqrt{\left(\frac{k_{s,n}t}{n}-s\right)^2+s\left(\frac{t}{n}-\frac{s}{m}\right)}\right.\\ &\left.+\sqrt{\frac{k_{s,n}t}{n}\left(\frac{t}{n}-\frac{s}{m}\right)+\left(\frac{k_{s,n}t}{n}-s\right)^2}\right]2d_\infty(X,F(X)) \end{split} \end{equation*} and taking the limit $m\to \infty$ we get \begin{equation}\label{eq:L:ApprErrorEst2} \begin{split} d_\infty((J_{t/n,1})^{k_{s,n}}(X),&S_1(s)(X))\leq \left[\sqrt{\left(\frac{k_{s,n}t}{n}-s\right)^2+\frac{st}{n}}\right.\\ &\left.+\sqrt{\frac{k_{s,n}t}{n}\frac{t}{n}+\left(\frac{k_{s,n}t}{n}-s\right)^2}\right]2d_\infty(X,F(X)). \end{split} \end{equation} We also have \begin{equation*} \begin{split} |g_n(s)-d_\infty(F^{m-1}(X),S_1(s)X)|=&|d_\infty(F^{m-1}(X),(J_{t/n,1})^{k_{s,n}}(X))\\ &-d_\infty(F^{m-1}(X),S_1(s)X)| \end{split} \end{equation*} which combined with \eqref{eq:L:ApprErrorEst2} and the triangle inequality yields \begin{equation*} g_n(s)\to d_\infty(F^{m-1}(X),S_1(s)X), \end{equation*} so \eqref{eq:L:ApprErrorEst1} follows. Now if $d_\infty(F(X),X)=0$, then $J_{\lambda,\rho}(X)=X$ for all $\lambda,\rho>0$, and we have $S_\rho(s)X=X$ and (ii) follows.
Assume $d_\infty(F(X),X)>0$ and for $m\geq 0, s\geq 0$ let \begin{equation*} \phi_m(s):=\frac{d_\infty(F^m(X),S_1(s)X)}{d_\infty(F(X),X)}, \end{equation*} so that \eqref{eq:L:ApprErrorEst1} gives \begin{equation*} \begin{split} \phi_m(t)&\leq e^{-t}m+\int_{0}^te^{-(t-s)}\phi_{m-1}(s)ds\\ &=e^{-t}\left(m+\int_{0}^te^{s}\phi_{m-1}(s)ds\right), \end{split} \end{equation*} so that if $f_m(s):=e^s\phi_m(s)$, then \begin{equation}\label{eq:L:ApprErrorEst3} f_m(t)\leq m+\int_{0}^tf_{m-1}(s)ds. \end{equation} It is straightforward to check that \begin{equation*} f_m(t)=e^t(t-m)+2\sum_{j=0}^m(m-j)\frac{t^j}{j!} \end{equation*} satisfies the recursion \eqref{eq:L:ApprErrorEst3} with equality, so that \begin{equation*} \phi_m(t)\leq(t-m)+2\sum_{j=0}^m(m-j)\frac{t^j}{j!}e^{-t}. \end{equation*} In view of the estimate \begin{equation*} \begin{split} \sum_{j=0}^\infty\frac{|j-m|m^j\alpha^j}{j!}&\leq e^{\frac{m\alpha}{2}}\left(\sum_{j=0}^\infty\frac{(j-m)^2m^j\alpha^j}{j!}\right)^{1/2}\\ &=e^{m\alpha}\left[m^2(\alpha-1)^2+m(\alpha-1)+m\right]^{1/2} \end{split} \end{equation*} from \cite{miyadera}, we get that \begin{equation*} \begin{split} \phi_m(t)&\leq(t-m)+2\sum_{j=0}^m(m-j)\frac{t^j}{j!}e^{-t}\\ &\leq(t-m)+2e^{-t}\sum_{j=0}^\infty\frac{|j-m|m^j\left(\frac{t}{m}\right)^j}{j!}\\ &\leq(t-m)+2e^{-t}e^{\frac{t}{2}}\left(\sum_{j=0}^\infty\frac{(j-m)^2m^j\left(\frac{t}{m}\right)^j}{j!}\right)^{1/2}\\ &=(t-m)+2\left[(t-m)^2+t\right]^{1/2}. \end{split} \end{equation*} Thus, (ii) follows from the above combined with (i). \end{proof} \begin{proposition}\label{P:TrotterApprox} For $\rho>0$ let $F_\rho:\mathbb{P}\mapsto\mathbb{P}$ be a nonexpansive map, and let $S_\rho(t)$ denote the semigroup generated by $J_{\lambda,\rho}$ corresponding to the nonexpansive map $F:=F_\rho$ for each $\rho>0$ in Theorem~\ref{T:ExponentialFormulaAppr}.
If \begin{equation*} J_{\lambda,\rho}(X)\to J_\lambda^\mu(X) \end{equation*} in $d_\infty$ as $\rho\to 0+$ for a fixed $\mu\in\mathcal{P}^1(\mathbb{P})$ and all $X\in\mathbb{P}$, then \begin{equation}\label{eq:P:TrotterApprox} S_\rho(t)X\to S(t)X \end{equation} in $d_\infty$ for all $X\in\mathbb{P}$ as $\rho\to 0+$, where $S(t)$ is the semigroup generated by $J_\lambda^\mu$ in Theorem~\ref{T:ExponentialFormula}. Moreover the limit in \eqref{eq:P:TrotterApprox} is uniform on compact time intervals. \end{proposition} \begin{proof} Fix a $T>0$, $X\in\mathbb{P}$ and let $0<t<T$. For all $\lambda>0$ the assumption implies \begin{equation}\label{eq1:P:TrotterApprox} \frac{d_\infty(X,J_{\lambda,\rho}(X))}{\lambda}\to \frac{d_\infty(X,J_{\lambda}^\mu(X))}{\lambda}\leq \int_{\mathbb{P}}d_\infty(X,A)d\mu(A) \end{equation} as $\rho\to 0+$, where the inequality follows from \eqref{eq:P:ResolventBound}. We also have the following estimates \begin{equation}\label{eq2:P:TrotterApprox} \begin{split} d_\infty(S_\rho(t)X,S(t)X)&\leq d_\infty(S_\rho(t)X,S_\rho(t)J_{\lambda,\rho}(X))+d_\infty(S_\rho(t)J_{\lambda,\rho}(X),S(t)X)\\ &\leq d_\infty(X,J_{\lambda,\rho}(X))+d_\infty(S_\rho(t)J_{\lambda,\rho}(X),S(t)X) \end{split} \end{equation} and \begin{equation}\label{eq3:P:TrotterApprox} \begin{split} d_\infty(S_\rho(t)J_{\lambda,\rho}(X),S(t)X)&\leq d_\infty(S_\rho(t)J_{\lambda,\rho}(X),\left(J_{t/n,\rho}\right)^nJ_{\lambda,\rho}(X))\\ &\quad+d_\infty(\left(J_{t/n,\rho}\right)^nJ_{\lambda,\rho}(X),\left(J_{t/n,\rho}\right)^n(X))\\ &\quad+d_\infty(\left(J_{t/n,\rho}\right)^n(X),\left(J_{t/n}^\mu\right)^n(X))\\ &\quad+d_\infty\left(\left(J_{t/n}^\mu\right)^n(X),S(t)X\right). \end{split} \end{equation} We need to find upper bounds for the terms in \eqref{eq3:P:TrotterApprox}.
Firstly by \eqref{eq2:T:ExponentialFormulaAppr} we have \begin{equation*} \begin{split} d_\infty(S_{\rho}(t)J_{\lambda,\rho}(X),\left(J_{t/n,\rho}\right)^{n}J_{\lambda,\rho}(X))&\leq \frac{2t}{\sqrt{n}}\frac{d_\infty(J_{\lambda,\rho}(X),F_\rho(J_{\lambda,\rho}(X)))}{\rho}\\ &=\frac{2t}{\sqrt{n}}\frac{d_\infty(X,J_{\lambda,\rho}(X))}{\lambda} \end{split} \end{equation*} where equality follows from \eqref{eq:L:ApproxResolvent1}. Since $J_{\lambda,\rho}$ is a contraction, for all $n\in\mathbb{N}$ we get \begin{equation*} d_\infty(\left(J_{t/n,\rho}\right)^nJ_{\lambda,\rho}(X),\left(J_{t/n,\rho}\right)^n(X))\leq d_\infty(J_{\lambda,\rho}(X),X) \end{equation*} and by \eqref{eq4:T:ExponentialFormula} we get \begin{equation*} d_\infty\left(\left(J_{t/n}^\mu\right)^n(X),S(t)X\right)\leq \frac{2t}{\sqrt{n}}\int_{\mathbb{P}}d_\infty(X,A)d\mu(A). \end{equation*} Combining the above, we obtain the following: \begin{equation}\label{eq4:P:TrotterApprox} \begin{split} d_\infty(S_\rho(t)X,S(t)X)&\leq 2d_\infty(J_{\lambda,\rho}(X),X)+\frac{2t}{\sqrt{n}}\frac{d_\infty(X,J_{\lambda,\rho}(X))}{\lambda}\\ &\quad+\frac{2t}{\sqrt{n}}\int_{\mathbb{P}}d_\infty(X,A)d\mu(A)\\ &\quad+d_\infty(\left(J_{t/n,\rho}\right)^n(X),\left(J_{t/n}^\mu\right)^n(X)). \end{split} \end{equation} Now let $\epsilon>0$. Choose $\lambda_0\in(0,1)$ so that $\lambda_0\int_{\mathbb{P}}d_\infty(X,A)d\mu(A)<\epsilon$. By \eqref{eq1:P:TrotterApprox}, there exists a $\delta>0$ such that for $\rho<\delta$ we have \begin{equation*} \frac{d_\infty(X,J_{\lambda_0,\rho}(X))}{\lambda_0}\leq \frac{d_\infty(X,J_{\lambda_0}^\mu(X))}{\lambda_0}+\epsilon. \end{equation*} Thus, \begin{equation*} d_\infty(X,J_{\lambda_0,\rho}(X))\leq \lambda_0\int_{\mathbb{P}}d_\infty(X,A)d\mu(A)+\epsilon\lambda_0<2\epsilon \end{equation*} for $\rho<\delta$.
Next, choose an $n_0\in\mathbb{N}$ such that \begin{equation*} \frac{2t}{\sqrt{n}}\left[\frac{d_\infty(X,J_{\lambda_0,\rho}(X))}{\lambda_0}+\int_{\mathbb{P}}d_\infty(X,A)d\mu(A)\right]<\epsilon \end{equation*} for all $n\geq n_0$, $t<T$ and $\rho<\delta$. Finally for $t<T$ we estimate \begin{equation*} \begin{split} d_\infty(\left(J_{t/n_0,\rho}\right)^{n_0}(X),\left(J_{t/n_0}^\mu\right)&^{n_0}(X))\leq d_\infty(\left(J_{t/n_0,\rho}\right)^{n_0}(X),\left(J_{t/n_0,\rho}\right)^{n_0-1}J_{t/n_0}^\mu(X))\\ & +d_\infty(\left(J_{t/n_0,\rho}\right)^{n_0-1}J_{t/n_0}^\mu(X),\left(J_{t/n_0}^\mu\right)^{n_0-1}J_{t/n_0}^\mu(X)) \end{split} \end{equation*} which, by induction together with the hypothesis of the proposition, yields the existence of $\rho_0>0$ such that for $\rho<\rho_0$ we get that \begin{equation*} d_\infty(\left(J_{t/n_0,\rho}\right)^{n_0}(X),\left(J_{t/n_0}^\mu\right)^{n_0}(X))<\epsilon. \end{equation*} Combining the above with \eqref{eq4:P:TrotterApprox} we obtain \eqref{eq:P:TrotterApprox}. To show that for a fixed $X\in\mathbb{P}$ the convergence in \eqref{eq:P:TrotterApprox} is uniform for $t<T$, pick $\tau\in(0,T)$. By the triangle inequality and contraction property of $S_\rho(t),S(t)$, \eqref{eq3:T:ExponentialFormulaAppr} and \eqref{eq5:T:ExponentialFormula} we get \begin{equation*} \begin{split} d_\infty(S(t)X,S_\rho(t)X)&\leq d_\infty(S(t)X,S_\rho(t)J_{\lambda,\rho}(X))+d_\infty(S_\rho(t)J_{\lambda,\rho}(X),S_\rho(t)X)\\ &\leq d_\infty(S(t)X,S(\tau)X)+d_\infty(S(\tau)X,S_\rho(\tau)J_{\lambda,\rho}(X))\\ &\quad+d_\infty(S_\rho(\tau)J_{\lambda,\rho}(X),S_\rho(t)J_{\lambda,\rho}(X))+d_\infty(J_{\lambda,\rho}(X),X)\\ &\leq 2|t-\tau|\int_{\mathbb{P}}d_\infty(X,A)d\mu(A)+d_\infty(S(\tau)X,S_\rho(\tau)J_{\lambda,\rho}(X))\\ &\quad+2|t-\tau|\frac{d_\infty(J_{\lambda,\rho}(X),F_\rho(J_{\lambda,\rho}(X)))}{\rho}+d_\infty(J_{\lambda,\rho}(X),X).
\end{split} \end{equation*} Now as in the first part, we can fix a $\lambda>0$ such that for sufficiently small $\rho>0$ the quantity $d_\infty(J_{\lambda,\rho}(X),X)$ becomes arbitrarily small. We again have that $\frac{d_\infty(J_{\lambda,\rho}(X),F_\rho(J_{\lambda,\rho}(X)))}{\rho}=\frac{d_\infty(J_{\lambda,\rho}(X),X)}{\lambda}$, and for fixed $\lambda>0$, this term is bounded as $\rho\to 0+$. We have also seen before that $d_\infty(S(\tau)X,S_\rho(\tau)J_{\lambda,\rho}(X))$ is small for small $\rho>0$. Now we can use the compactness of $[0,T]$ to conclude that the convergence is uniform on $[0,T]$ in \eqref{eq:P:TrotterApprox}. \end{proof} \begin{theorem}\label{T:TrotterFormula} For each $\rho>0$ let $F_\rho:\mathbb{P}\mapsto\mathbb{P}$ be a nonexpansive map and let $J_{\lambda,\rho}$ be the resolvent generated by $F_\rho$ in \eqref{eq:L:ApproxResolvent1} for each $\rho>0$. If \begin{equation*} J_{\lambda,\rho}(X)\to J_\lambda^\mu(X) \end{equation*} in $d_\infty$ as $\rho\to 0+$ for a fixed $\mu\in\mathcal{P}^1(\mathbb{P})$ and all $X\in\mathbb{P}$, then \begin{equation}\label{eq:T:TrotterFormula} (F_{\frac{t}{n}})^n(X)\to S(t)X \end{equation} in $d_\infty$ for all $X\in\mathbb{P}$ as $n\to \infty$, where $S(t)$ is the semigroup generated by $J_\lambda^\mu$ in Theorem~\ref{T:ExponentialFormula}. Moreover the limit in \eqref{eq:T:TrotterFormula} is uniform on compact time intervals. \end{theorem} \begin{proof} Fix $T>0$, let $X\in\mathbb{P}$ and let $0<t\leq T$. Let $S_\rho(t)$ denote the semigroup generated by $J_{\lambda,\rho}$. We have \begin{equation}\label{eq1:T:TrotterFormula} d_\infty(S(t)X,(F_{\frac{t}{n}})^n(X))\leq d_\infty(S(t)X,S_{\frac{t}{n}}(t)X)+d_\infty(S_{\frac{t}{n}}(t)X,(F_{\frac{t}{n}})^n(X)). 
\end{equation} For $\rho>0$, $n\in\mathbb{N}$ and $\lambda>0$ we have \begin{equation}\label{eq2:T:TrotterFormula} \begin{split} d_\infty(S_{\rho}(n\rho)X,(F_{\rho})^n(X))&\leq d_\infty(S_{\rho}(n\rho)X,S_{\rho}(n\rho)J_{\lambda,\rho}(X))\\ &\quad+d_\infty(S_{\rho}(n\rho)J_{\lambda,\rho}(X),(F_{\rho})^n(J_{\lambda,\rho}(X)))\\ &\quad+d_\infty((F_{\rho})^n(J_{\lambda,\rho}(X)),(F_{\rho})^n(X))\\ &\leq 2d_\infty(J_{\lambda,\rho}(X),X)\\ &\quad+d_\infty(S_{\rho}(n\rho)J_{\lambda,\rho}(X),(F_{\rho})^n(J_{\lambda,\rho}(X))). \end{split} \end{equation} For $\rho=\frac{t}{n}$ and $\lambda>0$ we have by Lemma~\ref{L:ApprErrorEst} that \begin{equation}\label{eq3:T:TrotterFormula} \begin{split} d_\infty(S_{\rho}(n\rho)J_{\lambda,\rho}(X)&,(F_{\rho})^n(J_{\lambda,\rho}(X)))\\ &\leq 2\sqrt{\left(n-\frac{n\rho}{\rho}\right)^2+\frac{n\rho}{\rho}}d_\infty(J_{\lambda,\rho}(X),F_\rho(J_{\lambda,\rho}(X)))\\ &=2\sqrt{n}\rho\frac{d_\infty(J_{\lambda,\rho}(X),X)}{\lambda}\leq 2\frac{T}{\sqrt{n}}\frac{d_\infty(J_{\lambda,\rho}(X),X)}{\lambda} \end{split} \end{equation} where the equality follows from \eqref{eq:L:ApproxResolvent1}. We have already seen in the proof of Proposition~\ref{P:TrotterApprox} after \eqref{eq4:P:TrotterApprox} how the convergence of the resolvents $J_{\lambda,\rho}(X)\to J_\lambda^\mu(X)$ implies estimates on $d_\infty(J_\lambda^\mu(X),X)$, $d_\infty(J_{\lambda,\rho}(X),X)$ and $\frac{d_\infty(J_{\lambda,\rho}(X),X)}{\lambda}$. Thus the estimates \eqref{eq1:T:TrotterFormula}, \eqref{eq2:T:TrotterFormula} and \eqref{eq3:T:TrotterFormula} along with Proposition~\ref{P:TrotterApprox} imply uniform convergence in \eqref{eq:T:TrotterFormula} on compact time intervals. \end{proof} \section{Convergence of resolvents} In this section we prove the convergence of the resolvents $J_{\lambda,\rho}(X)\to J_\lambda^\mu(X)$ in $d_\infty$ for finitely supported measures $\mu=\sum_{i=1}^n\frac{1}{n}\delta_{A_i}\in\mathcal{P}^1(\mathbb{P})$ in Theorem~\ref{T:ResolventConv}.
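Before turning to the proofs, the semigroup machinery of the preceding sections can be sanity-checked numerically in the simplest commuting setting $\mathbb{P}=(0,\infty)$, where $d_\infty(x,y)=|\log x-\log y|$, the Karcher mean of $\mu=\sum_i w_i\delta_{a_i}$ is $\exp\left(\sum_i w_i\log a_i\right)$, and the resolvent equation \eqref{eq:D:resolvent} has the closed-form solution $\log J_\lambda^\mu(x)=\frac{\log x+\lambda\sum_i w_i\log a_i}{1+\lambda}$. The following sketch is only an illustration of Theorem~\ref{T:ExponentialFormula} and Proposition~\ref{P:StationaryFlow} in this degenerate scalar case (it is not part of the proofs); the sample points $a,b$, the weights and the tolerances are arbitrary choices.

```python
import math

# Commuting (scalar) toy model of the operator setting: P = (0, inf) with
# the Thompson metric d(x, y) = |log x - log y|; mu = w1*delta_a + w2*delta_b.
a, b = 2.0, 8.0
w = (0.5, 0.5)
mbar = w[0] * math.log(a) + w[1] * math.log(b)   # log of the Karcher mean Lambda(mu)

def d(x, y):
    """Thompson metric on (0, inf)."""
    return abs(math.log(x) - math.log(y))

def resolvent(lam, x):
    """Scalar resolvent J_lambda^mu: the unique z solving
    lam/(1+lam)*(mbar - log z) + 1/(1+lam)*(log x - log z) = 0."""
    return math.exp((math.log(x) + lam * mbar) / (1.0 + lam))

def S(t, x, n=20000):
    """Exponential formula S(t)x = lim_n (J_{t/n})^n x, truncated at n steps."""
    for _ in range(n):
        x = resolvent(t / n, x)
    return x

t, x, y = 1.5, 0.3, 5.0
# Closed-form flow of zdot = log_z Lambda(mu) in the commuting case.
closed = math.exp(math.exp(-t) * math.log(x) + (1 - math.exp(-t)) * mbar)

# Exponential formula reproduces the flow (up to O(t^2/n) discretization error).
assert d(S(t, x), closed) < 1e-3
# Contraction property d(S(t)x, S(t)y) <= e^{-t} d(x, y), up to discretization.
assert d(S(t, x), S(t, y)) <= math.exp(-t) * d(x, y) + 1e-3
# Stationarity of Lambda(mu) under the flow.
assert d(S(t, math.exp(mbar)), math.exp(mbar)) < 1e-9
```

In this commuting model the resolvent iteration is linear in $\log x$, so the limit $\left(1+\frac{t}{n}\right)^{-n}\to e^{-t}$ makes all three assertions transparent; none of the noncommutative difficulties addressed by the resolvent convergence theorem below are visible here.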
This result will play a key role later in proving a continuous-time law of large numbers type result for $\Lambda$. The analysis of this section also proves the norm convergence of power means to the Karcher mean, settling a conjecture mentioned in \cite{lawsonlim1}. Let $\mathbb{P}^*$ denote the dual cone of $\mathbb{P}$, i.e. the cone of all non-negative norm continuous linear functionals on $\mathbb{P}$. \begin{lemma}\label{L:NormingLinearFunct} Let $A,B\in\mathbb{P}$. Then there exists an $\omega\in\mathbb{P}^*$ with $\omega(I)=1$ such that either \begin{equation*} \omega(B\#_tA)=e^{d_\infty(B\#_tA,B)}\omega(B) \end{equation*} or \begin{equation*} \omega(B\#_tA)=e^{-d_\infty(B\#_tA,B)}\omega(B) \end{equation*} holds for all $t\in[0,1]$. \end{lemma} \begin{proof} By definition we have that $$d_\infty(A,B)=\log\max\{\inf\{\alpha>0:A\leq\alpha B\},\inf\{\alpha>0:B\leq\alpha A\}\}.$$ Assume first that $B=I$. Then \begin{equation*} e^{d_\infty(I\#_tA,I)}=\max\{\|A^t\|,\|A^{-t}\|\}, \end{equation*} since we have \begin{equation*} \inf\{\alpha>0:A\leq\alpha I\}=\|A\|. \end{equation*} We also have that there exists a net $v_\alpha\in\mathcal{H}$ with $|v_\alpha|=1$ and $\lim_{\alpha}|Av_\alpha|=\|A\|$. That is \begin{equation*} \|A\|^2=\lim_{\alpha}v_\alpha^*A^*Av_\alpha=\lim_{\alpha}v_\alpha^*A^2v_\alpha=\lim_{\alpha}\omega_\alpha(A^2), \end{equation*} where $v_\alpha^*(\cdot)v_\alpha=:\omega_\alpha\in\mathbb{P}^*$ and $\omega_\alpha(I)=\|\omega_\alpha\|_{*}=1$. Since the convex set $\{\nu\in\mathbb{P}^*:\nu(I)=\|\nu\|_{*}=1\}$ is weak-$*$ compact, there exists a subnet of $\omega_\alpha$, again denoted by $\omega_\alpha$, converging in the weak-$*$ topology to some $\omega\in\{\nu\in\mathbb{P}^*:\nu(I)=\|\nu\|_{*}=1\}$. Then the state $\omega$ satisfies $\|A\|^2=\omega(A^2)$ which is equivalent to $\|A\|=\omega(A)$, more generally by the monotonicity of the power function it follows that \begin{equation*} \|A^t\|=\omega(A^t).
\end{equation*} If $e^{d_\infty(A,I)}=\|A\|$, this yields that \begin{equation*} \omega(I\#_tA)=\omega(A^t)=e^{d_\infty(I\#_tA,I)}\omega(I). \end{equation*} In the other case when $e^{d_\infty(A,I)}=\|A^{-1}\|$ by the same argument as above, we can find an $\omega\in\{\nu\in\mathbb{P}^*:\nu(I)=\|\nu\|_{*}=1\}$ such that \begin{equation*} e^{-d_\infty(I\#_tA,I)}\omega(I)=\omega(I\#_tA). \end{equation*} Now the case of arbitrary $B\in\mathbb{P}$ follows by considering first \begin{equation*} \omega\left(\left(B^{-1/2}AB^{-1/2}\right)^t\right)=e^{d_\infty\left(\left(B^{-1/2}AB^{-1/2}\right)^t,I\right)}\omega(I) \end{equation*} which is equivalent to \begin{equation*} \hat{\omega}(B\#_tA)=e^{d_\infty(I\#_t(B^{-1/2}AB^{-1/2}),I)}\hat{\omega}(B)=e^{d_\infty(B\#_tA,B)}\hat{\omega}(B) \end{equation*} where $\hat{\omega}(X):=\frac{1}{\omega(B^{-1})}\omega(B^{-1/2}XB^{-1/2})$. The other equality in the assertion follows similarly from the case $B=I$. \end{proof} We will use the notation $$\phi_\mu(X):=\int_{\mathbb{P}}\log_XAd\mu(A)$$ for $\mu\in\mathcal{P}^1(\mathbb{P})$ and $X\in\mathbb{P}$. In the remaining parts of this section we assume that the map $\phi_\mu:\mathbb{P}\mapsto\mathbb{S}$ is Fr\'{e}chet-differentiable, for example this is the case if $\mu$ is finitely supported. \begin{proposition}\label{P:FrechetDBounded} Let $\mu\in\mathcal{P}^1(\mathbb{P})$ and $X\in\mathbb{P}$. Let $D\phi_\mu(X)[V]$ denote the Fr\'{e}chet-derivative of $\phi_\mu$ in the direction $V\in\mathbb{S}$. Then the linear map $D\phi_\mu(\Lambda(\mu)):\mathbb{S}\mapsto\mathbb{S}$ is injective, in particular \begin{equation}\label{eq:P:FrechetDBounded} \frac{1}{\|\Lambda(\mu)^{-1}\|\|\Lambda(\mu)\|}\leq \|D\phi_\mu(\Lambda(\mu))\| \end{equation} where $\|D\phi_\mu(\Lambda(\mu))\|:=\sup_{V\in\mathbb{S},\|V\|=1}\|D\phi_\mu(\Lambda(\mu))[V]\|$. \end{proposition} \begin{proof} Let $X,Y\in\mathbb{P}$ and according to \eqref{eq1:T:ExponentialFormula} let $\gamma(t):=S(t)X$, $\eta(t):=S(t)Y$. 
Then by \eqref{eq2:T:ExponentialFormula} we have \begin{equation}\label{eq1:P:FrechetDBounded} d_\infty\left(\gamma(t),\eta(t)\right)\leq e^{-t}d_\infty(X,Y). \end{equation} Moreover, by Lemma~\ref{L:NormingLinearFunct} there exists an $\omega\in\mathbb{P}^*$ with $\omega(I)=1$ such that either \begin{equation}\label{eq2:P:FrechetDBounded} \omega(X)=e^{d_\infty(X,Y)}\omega(Y) \end{equation} or \begin{equation}\label{eq2.1:P:FrechetDBounded} \omega(Y)=e^{d_\infty(X,Y)}\omega(X). \end{equation} Assume that \eqref{eq2:P:FrechetDBounded} holds. In general we have that \begin{equation*} \gamma(t)\leq e^{d_\infty\left(\gamma(t),\eta(t)\right)}\eta(t) \end{equation*} which combined with \eqref{eq1:P:FrechetDBounded} yields \begin{equation*} \gamma(t)\leq e^{e^{-t}d_\infty(X,Y)}\eta(t). \end{equation*} Thus, since $\omega$ is positive we have \begin{equation}\label{eq3:P:FrechetDBounded} \omega(\gamma(t))\leq e^{e^{-t}d_\infty(X,Y)}\omega(\eta(t)) \end{equation} where for $t=0$ we have equality by \eqref{eq2:P:FrechetDBounded}. Hence taking one-sided derivatives of \eqref{eq3:P:FrechetDBounded} at $t=0$ preserves the inequality, and we get \begin{equation}\label{eq4:P:FrechetDBounded} \omega(\dot{\gamma}(0))\leq e^{d_\infty(X,Y)}\omega(\dot{\eta}(0))-d_\infty(X,Y)e^{d_\infty(X,Y)}\omega(\eta(0)). \end{equation} If \eqref{eq2.1:P:FrechetDBounded} holds then we start from \begin{equation*} \eta(t)\leq e^{d_\infty\left(\gamma(t),\eta(t)\right)}\gamma(t) \end{equation*} and obtain, by a similar argument, \begin{equation}\label{eq4.1:P:FrechetDBounded} \omega(\dot{\eta}(0))\leq e^{d_\infty(X,Y)}\omega(\dot{\gamma}(0))-d_\infty(X,Y)e^{d_\infty(X,Y)}\omega(\gamma(0)). \end{equation} Now assume that $\eta(0)=\Lambda(\mu)$ and $\gamma(0)=\eta(0)\#_sZ$ for $Z\in\mathbb{P}$ and $s\in[0,1]$. Then by Theorem~\ref{T:StrongSolution} we have that \begin{equation}\label{eq5:P:FrechetDBounded} \begin{split} \dot{\eta}(0)&=\phi_\mu(\Lambda(\mu))=0,\\ \dot{\gamma}(0)&=\phi_\mu(\gamma(0)). 
\end{split} \end{equation} First assume that \eqref{eq4:P:FrechetDBounded} holds, so that by \eqref{eq5:P:FrechetDBounded} and Lemma~\ref{L:NormingLinearFunct} we have \begin{equation*} \begin{split} \omega(\phi_\mu(\gamma(0)))&\leq -d_\infty(\eta(0),\eta(0)\#_sZ)e^{d_\infty(\eta(0),\eta(0)\#_sZ)}\omega(\eta(0))\\ &=-sd_\infty(\Lambda(\mu),Z)e^{sd_\infty(\Lambda(\mu),Z)}\omega(\Lambda(\mu)) \end{split} \end{equation*} for all $s\in[0,1]$. Since $\phi_\mu(\gamma(0))=\phi_\mu(\Lambda(\mu)\#_sZ)$, the above yields \begin{equation}\label{eq6:P:FrechetDBounded} \omega(\phi_\mu(\Lambda(\mu)\#_sZ))\leq -sd_\infty(\Lambda(\mu),Z)e^{sd_\infty(\Lambda(\mu),Z)}\omega(\Lambda(\mu)). \end{equation} For $s=0$ we have equality in \eqref{eq6:P:FrechetDBounded} since both sides equal $0$, thus we can differentiate \eqref{eq6:P:FrechetDBounded} at $s=0$ to get \begin{equation*} \omega(D\phi_\mu(\Lambda(\mu))[\log_{\Lambda(\mu)}(Z)])\leq -d_\infty(\Lambda(\mu),Z)\omega(\Lambda(\mu)) \end{equation*} and it follows that \begin{equation}\label{eq7:P:FrechetDBounded} \omega(\Lambda(\mu))\leq -\omega\left(D\phi_\mu(\Lambda(\mu))\left[\frac{1}{\|\log(\Lambda(\mu)^{-1/2}Z\Lambda(\mu)^{-1/2})\|}\log_{\Lambda(\mu)}(Z)\right]\right). \end{equation} In the other case when \eqref{eq4.1:P:FrechetDBounded} holds, by a similar argument we obtain \begin{equation*} d_\infty(\Lambda(\mu),Z)\omega(\Lambda(\mu))\leq \omega(D\phi_\mu(\Lambda(\mu))[\log_{\Lambda(\mu)}(Z)]) \end{equation*} and thus \begin{equation}\label{eq7.1:P:FrechetDBounded} \omega(\Lambda(\mu))\leq \omega\left(D\phi_\mu(\Lambda(\mu))\left[\frac{1}{\|\log(\Lambda(\mu)^{-1/2}Z\Lambda(\mu)^{-1/2})\|}\log_{\Lambda(\mu)}(Z)\right]\right). 
\end{equation} As $Z$ ranges over $\mathbb{P}$, the expression $\log_{\Lambda(\mu)}(Z)$ attains all possible values in $\mathbb{S}$, and for each such value either \eqref{eq7:P:FrechetDBounded} or \eqref{eq7.1:P:FrechetDBounded} holds. Moreover $\omega(\Lambda(\mu))>0$, since $\omega\in\mathbb{P}^*$, $\omega(I)=1$, and there exists $\epsilon>0$ such that $\epsilon I\leq \Lambda(\mu)$. Thus $D\phi_\mu(\Lambda(\mu)):\mathbb{S}\mapsto\mathbb{S}$ is an injective bounded linear map. The inequality \eqref{eq:P:FrechetDBounded} follows from \eqref{eq7:P:FrechetDBounded} or \eqref{eq7.1:P:FrechetDBounded}. \end{proof} We need a few facts from the theory of bounded linear operators. We denote the predual of the von Neumann algebra $\mathcal{B}(\mathcal{H})$ by $\mathcal{B}(\mathcal{H})_{*}$; it is the ideal of trace class operators on $\mathcal{H}$. If we restrict to the self-adjoint part $\mathbb{S}$, then $\mathbb{S}$ is a real Banach space with predual $\mathbb{S}_*:=\{X\in\mathcal{B}(\mathcal{H})_{*}:X^*=X\}$ and dual space $\mathbb{S}^*:=\{X\in\mathcal{B}(\mathcal{H})^{*}:X^*=X\}$, where $\mathcal{B}(\mathcal{H})^{*}$ denotes the predual of the \emph{universal enveloping von Neumann algebra} $\mathcal{B}(\mathcal{H})^{**}$. Since $\mathbb{S}^*$ is the self-adjoint part of the ideal of trace-class operators $\mathcal{B}(\mathcal{H})^{*}$ in the universal enveloping von Neumann algebra $\mathcal{B}(\mathcal{H})^{**}$, which is the unique predual of $\mathcal{B}(\mathcal{H})^{**}$, any given $X\in\mathbb{S}^*$ can be uniquely decomposed as $X=X^{+}-X^{-}$, where $X^{+},X^{-}\geq 0$ and the support projections of $X^{+}$ and $X^{-}$ are orthogonal, by Theorem III.4.2 in \cite{takesaki}. The locally convex topology $\sigma(\mathbb{S},\mathbb{S}_*)$ is called the $\sigma$-weak or ultraweak operator topology on $\mathbb{S}$. \begin{lemma}\label{L:FrechetDenseRange} Let $\mu\in\mathcal{P}^1(\mathbb{P})$ and $X\in\mathbb{P}$. 
Then the linear map $D\phi_\mu(\Lambda(\mu)):\mathbb{S}\mapsto\mathbb{S}$ has dense range in the ultraweak operator topology. \end{lemma} \begin{proof} Firstly, notice that for $X,A\in\mathbb{P}$ and invertible $C\in\mathcal{B}(\mathcal{H})$ we have \begin{equation*} \begin{split} C\log_XAC^*&=CX\log(X^{-1}A)C^*\\ &=CXC^*C^{-*}\log(X^{-1}A)C^*\\& =CXC^*\log(C^{-*}X^{-1}C^{-1}CAC^*)\\ &=\log_{CXC^*}(CAC^*), \end{split} \end{equation*} hence $CD\log_XA[V]C^*=D\log_{CXC^*}(CAC^*)[CVC^*]$, where the differentiation is with respect to the variable $X$ and $A$ is fixed. Thus for arbitrary $V\in\mathbb{S}$ it also follows that \begin{equation*} D\phi_\mu(\Lambda(\mu))[V]=\Lambda(\mu)^{1/2}D\phi_{\hat{\mu}}(I)[\Lambda(\mu)^{-1/2}V\Lambda(\mu)^{-1/2}]\Lambda(\mu)^{1/2} \end{equation*} where $d\hat{\mu}(A):=d\mu(\Lambda(\mu)^{1/2}A\Lambda(\mu)^{1/2})$ and $\Lambda(\hat{\mu})=I$ as well. Thus it is enough to prove that the range of $D\phi_\mu(\Lambda(\mu))$ is dense when $\Lambda(\mu)=I$. So without loss of generality assume that $\Lambda(\mu)=I$. It is well known that on a locally convex space $X$, a linear operator $T:X\mapsto X$ has dense range if and only if there are no nonzero linear functionals in the dual space $X^*$ which vanish on the range of the map $T$. So assume on the contrary that there exists a nonzero ultraweakly continuous linear functional $\tau\in\mathbb{S}_*$ such that $\tau(D\phi_\mu(\Lambda(\mu))[V])=0$ for all $V\in\mathbb{S}$. In what follows we will reverse the construction given in the proof of Lemma~\ref{L:NormingLinearFunct}. Consider the unique decomposition $\tau=\tau_+-\tau_-$ where $\tau_+,\tau_-\geq 0$ and the support projections $s(\tau_+),s(\tau_-)$ of $\tau_+,\tau_-$ are orthogonal. Let $X:=\exp(s(\tau_+)-s(\tau_-))$. 
Then by the orthogonality of $s(\tau_+),s(\tau_-)$ we have \begin{equation*} X=\exp(s(\tau_+))\oplus \exp(-s(\tau_-))\oplus I_{(\mathrm{Rg}(s(\tau_+))\oplus \mathrm{Rg}(s(\tau_-)))^{\perp}} \end{equation*} if we restrict $s(\tau_+),s(\tau_-)$ to their ranges, respectively. Consider the states $\hat{\tau}_+,\hat{\tau}_-\in\mathbb{S}^*$ defined as \begin{equation*} \begin{split} \hat{\tau}_+(\cdot)&:=\frac{1}{\tau_+(I)}\tau_+(\cdot),\\ \hat{\tau}_-(\cdot)&:=\frac{1}{\tau_-(I)}\tau_-(\cdot). \end{split} \end{equation*} By construction they are both norming linear functionals for $X$ in the following sense: \begin{equation*} \begin{split} \hat{\tau}_+(X)&=e,\\ \hat{\tau}_-(X^{-1})&=e. \end{split} \end{equation*} So we can follow the argument from \eqref{eq2:P:FrechetDBounded} with $Y=I$ and the state $\omega:=\hat{\tau}_+$ to arrive at \eqref{eq7:P:FrechetDBounded}, that is \begin{equation}\label{eq1:L:FrechetDenseRange} \hat{\tau}_+(I)\leq -\hat{\tau}_+\left(D\phi_\mu(I)\left[\frac{1}{\|\log(X)\|}\log(X)\right]\right). \end{equation} Similarly, in the other case we choose $\omega:=\hat{\tau}_-$ in \eqref{eq2.1:P:FrechetDBounded} to obtain \eqref{eq7.1:P:FrechetDBounded}, which is \begin{equation}\label{eq2:L:FrechetDenseRange} \hat{\tau}_-(I)\leq \hat{\tau}_-\left(D\phi_\mu(I)\left[\frac{1}{\|\log(X)\|}\log(X)\right]\right). \end{equation} We have that $\tau=\tau_+-\tau_-$, hence \eqref{eq1:L:FrechetDenseRange} and \eqref{eq2:L:FrechetDenseRange} yield \begin{equation*} 0<\tau_+(I)\hat{\tau}_+(I)+\tau_-(I)\hat{\tau}_-(I)\leq-\tau\left(D\phi_\mu(I)\left[\frac{1}{\|\log(X)\|}\log(X)\right]\right) \end{equation*} contradicting the initial assumption that $\tau(D\phi_\mu(\Lambda(\mu))[V])=0$ for all $V\in\mathbb{S}$. \end{proof} \begin{lemma}\label{L:FrechetCommutesRepr} Let $\mu\in\mathcal{P}^1(\mathbb{P})$ and $X\in\mathbb{P}$. 
Let $\pi:\mathcal{B}(\mathcal{H})\mapsto\mathcal{A}$ be a unital $*$-representation into a unital $C^*$-algebra $\mathcal{A}$. Then the linear map $D\phi_\mu(\Lambda(\mu)):\mathbb{S}\mapsto\mathbb{S}$ commutes with $\pi$, i.e. \begin{equation*} \pi\left(D\phi_\mu(\Lambda(\mu))[V]\right)=D\phi_{\hat{\mu}}(\Lambda(\hat{\mu}))[\pi(V)] \end{equation*} where $d\hat{\mu}(A):=d\mu(\pi^{-1}(A))$. Moreover the linear map $D\phi_\mu(\Lambda(\mu)):\mathbb{S}\mapsto\mathbb{S}$ is ultraweakly continuous. \end{lemma} \begin{proof} Firstly, since $\pi$ is a $*$-representation, it is automatically norm continuous, hence the pushforward measure $\hat{\mu}$ is well defined via inverse images of the continuous map $\pi$. Secondly, the continuous function $\log$ admits the integral representation \begin{equation*} \log(X)=\int_0^\infty\left(\frac{\lambda}{\lambda^2+1}I-(\lambda I+X)^{-1}\right)d\lambda, \end{equation*} thus \begin{equation}\label{eq1:L:FrechetCommutesRepr} D\log(X)[V]=\int_0^\infty(\lambda I+X)^{-1}V(\lambda I+X)^{-1}d\lambda, \end{equation} where both integrals converge in the norm topology. Then it is easy to see that \begin{equation*} \pi\left(D\log(X)[V]\right)=D\log(\pi(X))[\pi(V)] \end{equation*} and similarly $\Lambda(\cdot)$ and $D\phi_\mu(\Lambda(\mu))[\cdot]$ commute with $\pi$ as well. The ultraweak continuity of $D\phi_\mu(\Lambda(\mu)):\mathbb{S}\mapsto\mathbb{S}$ can be deduced from the formula \begin{equation*} \begin{split} D\phi_\mu(\Lambda(\mu))[V]=-\Lambda(\mu)\int_{\mathbb{P}}\int_0^\infty&\left(\lambda I+\Lambda(\mu)^{-1}A\right)^{-1}\Lambda(\mu)^{-1}V\Lambda(\mu)^{-1}A\\ &\times\left(\lambda I+\Lambda(\mu)^{-1}A\right)^{-1}d\lambda d\mu(A) \end{split} \end{equation*} which is derived using \eqref{eq1:L:FrechetCommutesRepr}. \end{proof} \begin{proposition}\label{P:FrechetDIso} Let $\mu\in\mathcal{P}^1(\mathbb{P}(\mathcal{H}))$ and $X\in\mathbb{P}(\mathcal{H})$. Then the linear map $D\phi_\mu(\Lambda(\mu)):\mathbb{S}(\mathcal{H})\mapsto\mathbb{S}(\mathcal{H})$ is a Banach space isomorphism. 
\end{proposition} \begin{proof} By Proposition~\ref{P:FrechetDBounded} we know that $D\phi_\mu(\Lambda(\mu))$ is injective and bounded below, hence its range is norm closed. As before, we assume without loss of generality that $\Lambda(\mu)=I$. Thus it remains to show that $D\phi_\mu(I):\mathbb{S}(\mathcal{H})\mapsto\mathbb{S}(\mathcal{H})$ has norm dense range. So assume to the contrary that the range of $D\phi_\mu(I):\mathbb{S}(\mathcal{H})\mapsto\mathbb{S}(\mathcal{H})$ is not norm dense. Then there exists a nonzero norm continuous linear functional $\omega\in\mathbb{S}(\mathcal{H})^*$ such that \begin{equation}\label{eq1:P:FrechetDIso} \omega\left(D\phi_\mu(I)[V]\right)=0 \end{equation} for all $V\in\mathbb{S}(\mathcal{H})$. Let $\pi_u:\mathcal{B}(\mathcal{H})\mapsto\mathcal{B}(\mathcal{H}_{\pi_u})$ denote the \emph{universal representation} on the GNS direct sum Hilbert space $\mathcal{H}_{\pi_u}$. Notice that $\pi_u$ is unital and that $\omega\in\mathcal{B}(\mathcal{H}_{\pi_u})_{*}$, hence by Lemma~\ref{L:FrechetCommutesRepr} we have \begin{equation}\label{eq2:P:FrechetDIso} \begin{split} 0=\omega\left(D\phi_\mu(I)[V]\right)&=\tr\left\{\omega \pi_u\left(D\phi_\mu(I)[V]\right)\right\}\\ &=\tr\left\{\omega D\phi_{\hat{\mu}}(\pi_u(I))[\pi_u(V)]\right\}\\ &=\tr\left\{\omega D\phi_{\hat{\mu}}(I)[\pi_u(V)]\right\} \end{split} \end{equation} where $d\hat{\mu}(A):=d\mu(\pi_u^{-1}(A))$. We also know that the range of $\pi_u$ is ultraweakly dense in the universal enveloping von Neumann algebra $\mathcal{B}(\mathcal{H})^{**}$, hence by the ultraweak continuity established in Lemma~\ref{L:FrechetCommutesRepr} the map $D\phi_{\hat{\mu}}(I)[\pi_u(\cdot)]:\mathbb{S}(\mathcal{H})\mapsto\mathbb{S}(\mathcal{H}_{\pi_u})$ extends ultraweakly continuously to the linear map $D\phi_{\hat{\mu}}(I):\mathbb{S}(\mathcal{H}_{\pi_u})\mapsto\mathbb{S}(\mathcal{H}_{\pi_u})$. 
Then by the ultraweak continuity of $\omega$ on $\mathcal{B}(\mathcal{H}_{\pi_u})$ and \eqref{eq2:P:FrechetDIso} we get that \begin{equation*} \omega\left(D\phi_{\hat{\mu}}(I)[Z]\right)=\tr\left\{\omega D\phi_{\hat{\mu}}(I)[Z]\right\}=0 \end{equation*} for all $Z\in\mathbb{S}(\mathcal{H}_{\pi_u})$. This means that $\omega$ is a nonzero ultraweakly continuous linear functional on $\mathbb{S}(\mathcal{H}_{\pi_u})$ vanishing on the range of $D\phi_{\hat{\mu}}(I):\mathbb{S}(\mathcal{H}_{\pi_u})\mapsto\mathbb{S}(\mathcal{H}_{\pi_u})$, contradicting the ultraweak density of the range of $D\phi_{\hat{\mu}}(I)$ proved in Lemma~\ref{L:FrechetDenseRange}. \end{proof} As a warm-up to the more involved computations to follow, we prove the norm convergence conjecture of the power means to the Karcher mean, mentioned first in \cite{lawsonlim1}. The conjecture states that $\lim_{t\to 0+}P_t(\mu)=\Lambda(\mu)$ in the norm topology, where $\mu\in\mathcal{P}^1(\mathbb{P})$ is finitely supported. More generally, one can assume that the integral in \eqref{eq:P:PowerMeans} is bounded for all $t\in[0,1]$; that is the case if all moments $\int_{\mathbb{P}}d_\infty^p(X,A)d\mu(A)$, $p\geq 1$, are finite for all $X\in\mathbb{P}$. \begin{lemma}\label{L:PowerCont} Let $\mu\in\mathcal{P}^1(\mathbb{P})$ with $\int_{\mathbb{P}}d_\infty^p(X,A)d\mu(A)<+\infty$ for all $p\geq 1$ and $X\in\mathbb{P}$. Consider the function $F:[-1,1]\times\mathbb{P}\mapsto\mathbb{S}$ defined as \begin{equation}\label{eq:L:PowerCont} F(t,X):= \begin{cases} \int_{\mathbb{P}}\frac{1}{t}[X\#_tA-X]d\mu(A),&\text{if }t\neq 0,\\ \int_{\mathbb{P}}\log_XAd\mu(A),&\text{if }t=0. \end{cases} \end{equation} Then $F$ and its Fr\'echet derivative $DF[\cdot]$ with respect to the variable $X$ are norm continuous if we equip the product space $[-1,1]\times\mathbb{P}$ with the max norm generated by the individual Banach space norms on each factor. 
\end{lemma} \begin{proof} The function $F$ is smooth everywhere except when $t=0$, so we only have to consider this case. The norm continuity of $F$ follows easily from the fact that \begin{equation*} \lim_{t\to 0}\frac{1}{t}[X\#_tA-X]=\log_XA. \end{equation*} We also have for $t\neq 0$ and $V\in\mathbb{S}$ that \begin{equation*} \begin{split} D\left(\frac{1}{t}[X\#_tA-X]\right)&[V]=\frac{1}{t}D\left(X\left(X^{-1}A\right)^t-X\right)[V]\\ &=\frac{1}{t}\left\{V\left((X^{-1}A)^t-I\right)\right.\\ &\quad\left.+XD\exp\left(t\log(X^{-1}A)\right)\left[tD\log(X^{-1}A)[-X^{-1}VX^{-1}A]\right]\right\}\\ &=V\frac{1}{t}\left((X^{-1}A)^t-I\right)\\ &\quad+XD\exp\left(t\log(X^{-1}A)\right)\left[D\log(X^{-1}A)[-X^{-1}VX^{-1}A]\right]. \end{split} \end{equation*} Taking the limit $t\to 0$ in the above, we obtain \begin{equation*} \lim_{t\to 0}D\left(\frac{1}{t}[X\#_tA-X]\right)[V]=V\log(X^{-1}A)+XD\log(X^{-1}A)[-X^{-1}VX^{-1}A], \end{equation*} i.e. we derived that \begin{equation*} D\log_XA[V]=\lim_{t\to 0}D\left(\frac{1}{t}[X\#_tA-X]\right)[V] \end{equation*} where differentiation is with respect to the variable $X$. Moreover it is easy to see that the limit above is uniform for $\|V\|\leq 1$. \end{proof} \begin{theorem}[Continuity of $P_t$]\label{T:PowerNormCont} Let $\mu\in\mathcal{P}^1(\mathbb{P})$ with $\int_{\mathbb{P}}d_\infty^p(X,A)d\mu(A)<+\infty$ for all $p\geq 1$ and $X\in\mathbb{P}$. Then the family $P_t(\mu)$ is norm continuous in $t\in[-1,1]$, in particular \begin{equation}\label{eq:T:PowerNormCont} \Lambda(\mu)=\lim_{t\to 0}P_t(\mu) \end{equation} in norm. \end{theorem} \begin{proof} We will use the Banach space version of the implicit function theorem, see for instance Theorem 4.9.3 in \cite{loomis}. Consider the one-parameter family of functions $F:[-1,1]\times\mathbb{P}\mapsto\mathbb{S}$ defined in \eqref{eq:L:PowerCont}. 
Then by Lemma~\ref{L:PowerCont} the function $F$ and its Fr\'echet derivative $DF$ with respect to the second variable are continuous in the norm topology. Moreover $F(0,\Lambda(\mu))=0$ and $DF(0,\Lambda(\mu))[0,\cdot]$ is a Banach space isomorphism by Proposition~\ref{P:FrechetDIso}. Therefore by the implicit function theorem (Theorem 4.9.3 in \cite{loomis}) there exists an open interval $(-a,a)$ around $0$ in $[-1,1]$ and a norm continuous function $\hat{P}(t)$ such that the operator equation \begin{equation}\label{eq1:T:PowerNormCont} F(t,\hat{P}(t))=0 \end{equation} is satisfied on $(-a,a)$; moreover it is uniquely satisfied there by the function $\hat{P}(t)$. Thus it follows by Proposition~\ref{P:PowerMeans} that $\hat{P}(t)=P_t(\mu)$ and also $\hat{P}(0)=\Lambda(\mu)$. Since $\hat{P}(t)$ varies continuously in $(-a,a)$, so does $P_t$. \end{proof} The following convergence result is essential for proving the Trotter-type convergence formula for approximation semigroups. \begin{theorem}\label{T:ResolventConv} Let $\mu=\sum_{i=1}^n\frac{1}{n}\delta_{A_i}\in\mathcal{P}^1(\mathbb{P})$. For $\rho>0$ let $F_\rho:=J_{\rho/n}^{\delta_{A_n}}\circ\cdots\circ J_{\rho/n}^{\delta_{A_1}}$ where $J_{\rho}^{\delta_{A}}(X):=X\#_{\frac{\rho}{\rho+1}}A$ in the spirit of \eqref{eq:D:resolvent}. In particular $F_\rho:\mathbb{P}\mapsto\mathbb{P}$ is a contraction with respect to $d_\infty$. For $\lambda>0$ let $J_{\lambda,\rho}$ denote the approximating resolvent corresponding to $F_{\rho}$ defined in \eqref{eq:L:ApproxResolvent1}. Then \begin{equation}\label{eq:T:ResolventConv} J_{\lambda}^{\mu}(X)=\lim_{\rho\to 0+}J_{\lambda,\rho}(X) \end{equation} in norm, where $J_{\lambda}^{\mu}$ is defined by \eqref{eq:D:resolvent}. \end{theorem} \begin{proof} The proof is similar in principle to that of Theorem~\ref{T:PowerNormCont}: we will use the implicit function theorem in the same way. Fix an $X\in\mathbb{P}$ and let $\nu:=\mu+\frac{1}{\lambda}\delta_X$. 
Consider the function $F:\mathbb{R}\times\mathbb{P}\mapsto\mathbb{S}$ defined as \begin{equation}\label{eq1:T:ResolventConv} F(\rho,Y):= \begin{cases} \frac{1}{\rho}[J_{\rho/\lambda}^{\delta_{X}}\circ F_\rho(Y)-Y],&\text{if }\rho\neq 0,\\ \int_{\mathbb{P}}\log_YAd\nu(A),&\text{if }\rho=0. \end{cases} \end{equation} Notice that for $\rho\neq 0$ we have \begin{equation}\label{eq2:T:ResolventConv} \begin{split} F(\rho,Y)&=\frac{1}{\rho}\left\{\left(\left(\cdots\left(Y\#_{\frac{\rho}{\rho+n}}A_1\right)\#_{\frac{\rho}{\rho+n}}\cdots\right)\#_{\frac{\rho}{\rho+n}}A_n\right)\#_{\frac{\rho}{\rho+\lambda}}X-Y\right\}\\ &=\frac{1}{\rho}[Y_n\#_{\frac{\rho}{\rho+\lambda}}X-Y_n]+\sum_{i=0}^{n-1}\frac{1}{\rho}[Y_i\#_{\frac{\rho}{\rho+n}}A_{i+1}-Y_i] \end{split} \end{equation} where $Y_0:=Y$ and $Y_{i+1}:=Y_i\#_{\frac{\rho}{\rho+n}}A_{i+1}$ for $0\leq i\leq n-1$. Thus as $\rho\to 0$ we have $Y_i\to Y$, and by similar calculations on each summand in the above as in the proof of Lemma~\ref{L:PowerCont}, we get that \begin{equation*} \lim_{\rho\to 0}F(\rho,Y)=\frac{1}{\lambda}\log_YX+\sum_{i=1}^{n}\frac{1}{n}\log_YA_i \end{equation*} in the norm topology. The convergence of the Fr\'echet derivative with respect to $Y$ is a slightly more delicate calculation starting from \eqref{eq2:T:ResolventConv}, using induction to compute the derivatives of the $Y_i$ defined recursively by composition of the geometric means $\#_{\frac{\rho}{\rho+n}}$, but the principles are the same as in the proof of Lemma~\ref{L:PowerCont} and the calculation is left to the reader. Now the remaining part of the proof follows the lines of the corresponding part of the proof of Theorem~\ref{T:PowerNormCont}. \end{proof} \section{A continuous-time law of large numbers for $\Lambda$} Here we combine the results of the previous sections to obtain convergence theorems valid for the nonlinear semigroups solving the Cauchy problem in Theorem~\ref{T:StrongSolution}. 
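Before the main theorem, the resolvent iteration behind the exponential formula can be illustrated numerically in finite dimensions. The sketch below (Python with NumPy; purely illustrative and not part of the proofs) takes a single point mass $\mu=\delta_A$ on $2\times 2$ positive definite matrices. In this special case each resolvent step $J^{\delta_A}_{t/m}(X)=X\#_{\frac{t/m}{t/m+1}}A$ moves $X$ along the geodesic towards $A$, the geodesic parameters compose to $1-(1+t/m)^{-m}$, and so $(J^{\delta_A}_{t/m})^m(X)\to X\#_{1-e^{-t}}A$ as $m\to\infty$, consistent with the contraction rate $e^{-t}$ of the semigroup:

```python
import numpy as np

def sqrtm(X):
    # symmetric positive definite square root via eigendecomposition
    w, U = np.linalg.eigh(X)
    return U @ np.diag(np.sqrt(w)) @ U.T

def geo(X, A, s):
    # weighted geometric mean  X #_s A = X^{1/2} (X^{-1/2} A X^{-1/2})^s X^{1/2}
    R = sqrtm(X)
    Rinv = np.linalg.inv(R)
    w, U = np.linalg.eigh(Rinv @ A @ Rinv)
    return R @ (U @ np.diag(w ** s) @ U.T) @ R

def J(lam, A, X):
    # resolvent of the point mass delta_A:  J_lambda(X) = X #_{lambda/(lambda+1)} A
    return geo(X, A, lam / (lam + 1.0))

X = np.diag([1.0, 4.0])
A = np.array([[2.0, 1.0], [1.0, 3.0]])

t, m = 1.0, 5000
Y = X
for _ in range(m):                 # iterate (J_{t/m})^m (X)
    Y = J(t / m, A, Y)

S_t = geo(X, A, 1.0 - np.exp(-t))  # geodesic flow value for mu = delta_A
assert np.linalg.norm(Y - S_t) < 1e-3
```

The same kind of iteration with several matrices $A_i$ interleaved is exactly the product $F_\rho$ of point-mass resolvents appearing in Theorem~\ref{T:ResolventConv}.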
\begin{theorem}\label{T:contlln} Let $\mu\in\mathcal{P}^1(\mathbb{P})$ and let $\{Y_i\}_{i\in\mathbb{N}}$ be a sequence of independent, identically distributed $\mathbb{P}$-valued random variables with law $\mu$. Let $\mu_n:=\sum_{i=1}^n\frac{1}{n}\delta_{Y_i}\in\mathcal{P}^1(\mathbb{P})$ denote the empirical measures. Let $S^{\mu}(t)$ and $S^{\mu_n}(t)$ denote the semigroups corresponding to the resolvents $J^{\mu}_\lambda$ and $J^{\mu_n}_\lambda$ according to Theorem~\ref{T:ExponentialFormula} for $t>0$. Then almost surely \begin{equation}\label{eq:T:contlln1} \lim_{n\to\infty}S^{\mu_n}(t)=S^{\mu}(t) \end{equation} uniformly in $d_\infty$ on compact time intervals. Moreover let $F^{\mu_n}_{\rho}:=J_{\rho/n}^{\delta_{Y_n}}\circ\cdots\circ J_{\rho/n}^{\delta_{Y_1}}$ where $J_{\rho}^{\delta_{A}}(X):=X\#_{\frac{\rho}{\rho+1}}A$ in the spirit of \eqref{eq:D:resolvent}. Then almost surely \begin{equation}\label{eq:T:contlln2} \lim_{n\to\infty}\lim_{m\to\infty}(F^{\mu_n}_{t/m})^m=S^{\mu}(t) \end{equation} uniformly in $d_\infty$ on compact time intervals. \end{theorem} \begin{proof} Let $X\in\mathbb{P}$. By Proposition~\ref{P:separableSupp} the support $\supp(\mu)$ is separable and $\supp(\mu_n)\subseteq \supp(\mu)$. Thus by Varadarajan's theorem \cite{villani} for empirical measures on the Polish metric space $(\supp(\mu),d_\infty)$, the sequence $\mu_n$ converges weakly to $\mu$ almost surely, and then by Proposition~\ref{P:weakW1agree} we get $W_1(\mu_n,\mu)\to 0$ almost surely. Then by Theorem~\ref{T:LambdaExists} we have that $J^{\mu_n}_\lambda(Z)\to J^{\mu}_\lambda(Z)$ almost surely in $d_\infty$ for any $Z\in\mathbb{P}$. By Theorem~\ref{T:ExponentialFormula} we have that $S^{\mu}(t)X:=\lim_{m\to\infty}\left(J_{t/m}^{\mu}\right)^{m}(X)$ and $S^{\mu_n}(t)X:=\lim_{m\to\infty}\left(J_{t/m}^{\mu_n}\right)^{m}(X)$ uniformly on compact time intervals. 
The estimate \begin{equation*} \begin{split} d_\infty\left(\left(J^{\mu_n}_{t/m}\right)^{m}(X),\left(J^{\mu}_{t/m}\right)^{m}(X)\right)\leq{} & d_\infty\left(\left(J^{\mu_n}_{t/m}\right)^{m}(X),\left(J^{\mu_n}_{t/m}\right)^{m-1}J^{\mu}_{t/m}(X)\right)\\ & +d_\infty\left(\left(J^{\mu_n}_{t/m}\right)^{m-1}J_{t/m}^\mu(X),\left(J_{t/m}^\mu\right)^{m-1}J_{t/m}^\mu(X)\right) \end{split} \end{equation*} combined with an induction argument yields, for fixed $m$, the existence of an $n_0\in\mathbb{N}$ such that for $n>n_0$ we have \begin{equation*} d_\infty\left(\left(J^{\mu_n}_{t/m}\right)^{m}(X),\left(J_{t/m}^\mu\right)^{m}(X)\right)<\epsilon, \end{equation*} thus by the triangle inequality we get \eqref{eq:T:contlln1} almost surely. Uniform convergence can be shown similarly along the lines of the proof of Proposition~\ref{P:TrotterApprox}. Now \eqref{eq:T:contlln2} follows from \eqref{eq:T:contlln1} combined with Theorem~\ref{T:ResolventConv} and Theorem~\ref{T:TrotterFormula}. \end{proof} \subsection*{Acknowledgments} The work of Y.~Lim was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST), No.~2015R1A3A2031159. The work of M.~P\'alfia was supported in part by the ``Lend\"ulet'' Program (LP2012-46/2012) of the Hungarian Academy of Sciences and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST), No.~2016R1C1B1011972.
\section{Introduction} \label{sec:Introduction} DNS is an integral part of the Internet infrastructure. Unfortunately, it does not offer privacy, i.\,e., the so-called resolvers (recursive nameservers) can see all queries sent to them in the clear. Resolvers can learn about users' habits and interests, which may infringe their privacy if the resolver is not run by a trusted party, but by a third party such as Google, whose resolver 8.8.8.8 serves more than 130 billion queries per day on average \cite{googledns}. The discussions about limiting tracking via cookies spurred by the ``Do not track'' initiative may result in DNS queries becoming the next target for tracking and profiling purposes \cite{Conrad12-dnssecurity}. According to \cite{HBF:2013}, behavior-based tracking based on DNS queries may be feasible. Integrating mechanisms for confidentiality into DNS is difficult because of the need for compatibility with existing infrastructure. Fundamental changes to the protocol are implemented very slowly, as previous attempts have shown: Although the initial DNSSEC security extensions were proposed in 1999 \cite{rfc2535}, the majority of users still cannot profit from their benefits today. Unfortunately, DNSSEC does not address privacy issues due to an explicit design decision \cite{rfc4033}. Currently, there is no indication that facilities for privacy-preserving resolution will be integrated into the DNS architecture in the short term. Previous research efforts have focused on interim solutions, i.\,e., add-ons and tools that enable users who care for privacy to protect themselves against profiling and tracking efforts. The objective consists in designing and evaluating suitable privacy enhancing techniques in such a way that users do not have to rely on or trust the existing DNS infrastructure. The ``range query'' scheme by Zhao et al. \cite{Zhao:2007a} is one of those efforts. 
The basic idea consists in \emph{query obfuscation}, i.\,e., sending a set of dummy queries (hence the term ``range'') with random hostnames along with the actual DNS query to the resolver. So far the security of range query schemes has only been analyzed within a simplistic theoretical model that considers the obtainable security for \emph{singular queries}. In this paper we study the security offered by range queries for a more complex real-world application, namely \emph{web surfing}, which is one of the use cases Zhao et al. envision in \cite{Zhao:2007a}. In contrast to singular queries, downloading websites typically entails a number of inter-related DNS queries. Our results indicate that the range query scheme offers less protection than expected in this scenario, because dependencies between consecutive queries are neglected. The main contribution of this paper is to \textbf{demonstrate that random set range queries offer considerably less protection than expected in the web surfing use case}. We demonstrate that a curious resolver (the adversary) can launch a semantic intersection attack to disclose the actually retrieved website with high probability. We also show how the effectiveness of the attack can be reduced, and we identify a number of challenges that have to be addressed before range query schemes are suitable for practice. The paper is structured as follows. In Sects.~2 and 3 we review existing work and fundamentals. Having described our dataset in Sect.~4, we continue with theoretical and empirical analyses in Sects.~5 and 6. We study countermeasures in Sect.~7 and discuss our results in Sect.~8. We conclude in Sect.~9. \section{Related Work} \label{sec:relatedWork} The basic DNS range query scheme was introduced by Zhao et al. in \cite{Zhao:2007a}; there is also an improved version \cite{Zhao:2007b} inspired by private information retrieval \cite{Chor:1995}. 
Although the authors suggest their schemes especially for web surfing applications, they fail to demonstrate their practicability using empirical results. Castillo-Perez and Garcia-Alfaro propose a variation of the original range query scheme \cite{Zhao:2007a} using multiple DNS resolvers in parallel \cite{Castillo-Perez:2008,Castillo-Perez:2009}. They evaluate its performance for ENUM and ONS, two protocols that store data within the DNS infrastructure. Finally, Lu and Tsudik propose PPDNS \cite{Lu:2010}, a privacy-preserving resolution service that relies on CoDoNs \cite{RamasubramanianS04-codons}, a next-generation DNS system based on distributed hashtables and a peer-to-peer infrastructure, which has not been widely adopted so far. The aforementioned publications study the security of range queries for singular queries issued independently of each other. In contrast, \cite{FederrathFHP11-dnsmixes} observes that consecutively issued queries that are dependent on each other have implications for security. The authors describe a timing attack that allows an adversary to determine the actually desired website and show that consecutive queries have to be serialized in order to prevent the attack. \section{Fundamentals} \label{sec:fundamentals} \subsection{Random Set DNS Range Query Scheme} \label{sec:dnsrq} In this paper we focus on the basic ``random set'' DNS range query scheme as introduced in \cite{Zhao:2007a}. Zhao et al. stipulate that each client is equipped with a large database of valid domain names \textbf{(dummy database)}. Each time the client wants to issue a DNS query to a resolver, it randomly draws (without replacement) $N-1$ \emph{dummy names} from the database, and sends $N$ queries to the resolver in total. When all replies have been received from the resolver, the replies for the dummy queries are discarded and the desired reply is presented to the application that issued the query. Zhao et al. 
claim that this strategy leaves the adversary with a chance of $\frac{1}{N}$ to guess the desired domain name. The value of $N$ is a security parameter, which is supposed to be chosen according to the user's privacy expectations and performance needs. \subsection{Query Patterns} \label{sec:patterns} The semantic intersection attack exploits the fact that typical websites embed content from multiple servers, causing clients to issue a burst of queries for various domain names in a deterministic fashion, whenever they visit the site. For example, visiting \emph{google.com} will also trigger a DNS request for \emph{ssl.gstatic.com}, as the site includes some resources from that domain. We call the set of domain names that can be observed upon visiting a site its \emph{query pattern} $p$, i.\,e., $p(\mathrm{google.com}) = \{\mathrm{google.com},\mathrm{ssl.gstatic.com}\}$. In Sect.~\ref{sec:dataset}, we will show that many popular websites do have query patterns that can be used for this attack. Using range queries, each individual query from a pattern $p$ is hidden in a set of $N-1$ randomly chosen queries, leading to $|p|$ sets, each containing $N$ queries, being sent to the resolver in order to retrieve all the domain names required to visit the corresponding website. We refer to $N$ as the \emph{block size} of the range query scheme and to each individual range query as a \emph{block}. Note that the client uses standard DNS queries to deliver the range query, because it uses a conventional DNS resolver, i.\,e., a single range query with a block size of $N$ causes $N$ individual DNS queries. \subsection{The Semantic Intersection Attack} \label{sec:attack} An adversary, who is in possession of a database that contains the query patterns for a set of websites he is interested in \textbf{(pattern database)}, can check whether one of these patterns can be matched to consecutive query blocks received by the client. 
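The client-side procedure of Sect.~\ref{sec:dnsrq} is easy to state in code. The following minimal Python sketch builds one block of $N$ queries; the dummy database shown is a hypothetical toy example, a real client would hold a large list of valid domain names:

```python
import random

# Toy dummy database (hypothetical values for illustration only).
DUMMY_DB = ["cnn.com", "github.com", "s.ebay.de", "ytimg.com",
            "conn.skype.com", "img.feedpress.it", "example.org"]

def range_query(desired, dummy_db, n):
    """Build one block: the desired name hidden among n-1 dummy names."""
    pool = [d for d in dummy_db if d != desired]
    block = random.sample(pool, n - 1) + [desired]  # draw without replacement
    random.shuffle(block)  # the resolver sees all n names in random order
    return block

block = range_query("www.rapecrisis.org.uk", DUMMY_DB, n=3)
assert len(block) == 3 and "www.rapecrisis.org.uk" in block
```

From the resolver's point of view each of the $n$ names arrives as an ordinary DNS query; only the client knows which of the replies it actually keeps.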
As all the dummy names are drawn independently from each other from the dummy database, it is quite unlikely that the client will draw the pattern of a different website by chance. Therefore, the adversary can be optimistic that he will only find a single pattern in the set of consecutive range queries he receives from the client, i.\,e., the pattern of the actually desired website. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{pictures/modes} \caption{Distinguishability of blocks for the resolver} \label{fig:attack:1} \end{figure} From the viewpoint of the adversary there are two different scenarios, depending on how well the adversary can distinguish consecutive blocks (cf. Fig.~\ref{fig:attack:1}). The adversary may either be able to identify all the queries that belong to the first block, but be unable to determine which of the remaining queries belongs to which of the remaining blocks (\textbf{1BD}, 1st block distinguishable), or be able to distinguish all individual blocks, i.\,e., be able to determine for all queries to which block they belong (\textbf{ABD}, all blocks distinguishable). The difference between the 1BD and the ABD scenario becomes evident by considering the following example. When a user visits the site \emph{\url{http://www.rapecrisis.org.uk}}, her browser will issue a query for \emph{www.rapecrisis.org.uk}. Moreover, it will issue two additional queries, for \emph{twitter.com} and \emph{www.rapecrisislondon.org}, once the HTML page has been parsed. For illustrative purposes we assume that range queries with $N=3$ are used. In the \textbf{ABD scenario} the adversary might, for instance, observe a first block of queries for (\emph{cnn.com}, \emph{www.rapecrisis.org.uk}, \emph{img.feedpress.it}), then a second block for (\emph{github.com}, \emph{twitter.com}, \emph{s.ebay.de}), and finally a third block for (\emph{www.rapecrisislondon.org}, \emph{ytimg.com}, \emph{conn.skype.com}). 
In contrast, in the \textbf{1BD scenario} the adversary might observe a first block with (\emph{cnn.com}, \emph{www.rapecrisis.org.uk}, \emph{img.feedpress.it}) and a second block with (\emph{github.com}, \emph{twitter.com}, \emph{www.rapecrisislondon.org}, \emph{s.ebay.de}, \emph{ytimg.com}, \emph{conn.skype.com}). The first block is distinguishable in both scenarios, because the web browser has to resolve the \emph{primary domain name} in order to learn the IP address of the main web server. This IP address is received within the replies that belong to the first block of queries. After the browser has downloaded the HTML file from the main web server, it will issue queries for the \emph{secondary domain names} in order to retrieve all embedded content hosted on other web servers. Given a \emph{pattern database DB} that contains primary and secondary domain names of websites, the adversary proceeds as follows in order to carry out the intersection attack in the \textbf{ABD scenario}: \begin{enumerate} \item From \emph{DB} the adversary selects all patterns whose primary domain name is contained in the first block, obtaining the set of candidates $C$. \item The adversary selects all patterns with length $|p|$, which is the number of observed blocks, from $C$ to obtain $C_{|p|}$. \item For each pattern $q$ in $C_{|p|}$ the adversary performs a \emph{block-wise set intersection}: $q$ is a \emph{matching pattern}, if all of its domain names are dispersed among the blocks in a plausible fashion, i.\,e., iff \begin{enumerate} \item each block contains at least 1 element from $q$, and \item each element of $q$ is contained in at least 1 block, and \item $q$ can be completely assembled by drawing one element from each block.
\end{enumerate} \end{enumerate} In the \textbf{1BD scenario} the adversary has to use a different approach, because there are only two blocks observable: \begin{enumerate} \item From the pattern database the adversary selects all patterns whose primary domain name is contained in the first block, thus obtaining the set of candidate patterns $C$. \item For each pattern $q$ in $C$ the adversary performs a \emph{block-wise set intersection}: $q$ is a \emph{matching pattern}, if all of its secondary domain names are contained within the second block. \end{enumerate} Note that due to caching, the adversary cannot reliably determine $|p|$ in the 1BD scenario. Due to variations in the lookup time of different domain names, the stub resolver on the client may already receive replies (and cache the results) for some domain names before all range queries have been submitted to the resolver. However, if the range query client happens to draw one of the cached domain names as a dummy, the stub resolver will not send another query, but answer it immediately from its cache. As a result, some queries will not reach the adversary and the effective size of consecutive blocks will vary. Therefore, the adversary cannot easily determine $|p|$ in the 1BD scenario in order to filter the set $C$. For now, we neglect the fact that caching may also affect the desired queries (cf. Sect.~\ref{sec:discussion} for a discussion of this issue). In the remainder of the paper we focus on the \textbf{1BD scenario}, which we deem to be more realistic than the ABD scenario. Contemporary web browsers issue the queries for the secondary domain names in parallel. Thus, when the range query client constructs range queries for each of the desired domain names, the individual queries of all the blocks will be interleaved, causing uncertainty about the composition of the individual blocks.
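The 1BD matching procedure can be sketched as follows; representing the pattern database as a mapping from primary domain names to full patterns is our assumption:

```python
def match_1bd(first_block, second_block, pattern_db):
    """Return the primary names of all patterns whose primary domain
    name occurs in the first block and whose secondary domain names
    are all contained in the second block (block-wise intersection)."""
    second = set(second_block)
    matches = []
    for name in first_block:
        pattern = pattern_db.get(name)
        if pattern is None:
            continue  # no known website uses this primary domain name
        if set(pattern) - {name} <= second:
            matches.append(name)
    return matches
```

Applied to the example above, only \emph{www.rapecrisis.org.uk} matches, since its secondary names \emph{twitter.com} and \emph{www.rapecrisislondon.org} both appear in the second block.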
On the other hand, the ABD scenario is relevant for range query schemes that submit all queries contained in a block in a single message. We will consider the effect of this approach in Sect.~\ref{sec:evaluation:ex3}. \section{Dataset} \label{sec:dataset} In order to evaluate the feasibility of the semantic intersection attack, we performed probabilistic analyses and implemented a simulator that applies the attack to the patterns of actual websites. For this purpose we obtained the query patterns of the top $100{,}000$ websites of the ``Alexa Toplist'' (\url{http://www.alexa.com}) with the headless Webkit-based browser PhantomJS (\url{http://phantomjs.org}).\footnote{\label{fnote:github}The source code of our crawler and simulator as well as all experimental data is available at \url{https://github.com/Semantic-IA}} As PhantomJS was not able to reach and retrieve all of the websites contained in the Toplist at the time of the data collection (May 2013), the cleaned dataset contains $|P|=92{,}880$ patterns and $|Q|=216{,}925$ unique queries. The average pattern length (\textit{mean value}) is $13.02$ with a standard deviation of $14.28$. The distribution of pattern lengths as displayed in Fig.~\ref{fig:patternlengths} shows that, while patterns of length 1 are frequent, patterns of higher lengths make up the majority of the dataset. The longest pattern consists of $315$ queries.
\begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{data-lengths/histogram} \hfill \includegraphics[width=0.48\textwidth]{data-lengths/cdf} \caption{Histogram and cumulative distribution of pattern lengths} \label{fig:patternlengths} \end{figure} \section{Probabilistic Analysis} \label{sec:analysis} Before we carry out any practical evaluation using our simulator, we want to get an expectation of the likelihood of \textbf{ambiguous results}, which occur if the client happens to draw all the domain names of another website from the dummy database while the range queries needed for the desired website are assembled. If the client draws all domain names of a different pattern by chance and distributes the individual names among the blocks in a plausible fashion, the adversary will observe two patterns: the pattern of the actually desired website as well as the \textbf{random pattern}. \subsection{Modeling the Probability of Ambiguous Results} \label{sec:themodel} In the 1BD scenario an ambiguous result occurs if the primary domain name of a random pattern (the domain name of the corresponding website) is selected as a dummy in the first block, and all remaining elements of the pattern are contained in the union of the remaining blocks.\footnote{In the 1BD scenario the query distribution between the remaining blocks is irrelevant, as long as all needed queries occur at least once in the union of the blocks.} The probability for an ambiguous result can be modeled as a series of hypergeometric distributions. 
A hypergeometric distribution $h(k|N;M;n)$ describes the probability of drawing $k$ elements with a specific property when drawing $n$ elements out of a group of $N$ elements, of which $M$ have the desired property: \begin{equation} \label{eq:analysis:1} h(k|N;M;n) := \frac{{M \choose k}{N-M \choose n-k}}{{N \choose n}} \end{equation} First, we need to obtain the probability of drawing the first element of a pattern of the correct length $n$ into the first block of queries. As the variables of the hypergeometric distribution overlap with those we use to describe the properties of a range query, we substitute them for their equivalents in our range query notation. $N$ is equal to $|Q|$, the number of names in the dummy database. $M$ equals the number of patterns of the correct length, which we will write as $|P_n|$. In our case, the parameter $n$ of the hypergeometric distribution corresponds to $N-1$, as we will draw $N-1$ dummy names into the first block. By substituting these values into Eq.~\ref{eq:analysis:1}, we obtain the probability $p(n, k)$ of drawing exactly $k$ beginnings of patterns of length $n$: \begin{equation} \label{eq:analysis:2} p(n, k) := \frac{{|P_n| \choose k}{|Q|-|P_n| \choose (N-1)-k}}{{|Q| \choose N-1}} \end{equation} In addition to that, we need to determine the probability of drawing the remaining $k*(n-1)$ queries into the second block, which contains the remaining $(n-1)*(N-1)$ randomly drawn dummy names in the 1BD scenario. To complete our $k$ patterns, we need to draw $k*(n-1)$ specific dummy names. The probability of success is described by the function $q(n,k)$, which is given in Eq.~\ref{eq:analysis:3}.
\begin{equation} \label{eq:analysis:3} q(n, k) := \frac{{n-1 \choose n-1}^k {|Q|-(n-1)*k \choose (n-1)*(N-1)-(n-1)*k}}{{|Q| \choose (n-1)*(N-1)}} = \frac{{|Q|-(n-1)*k \choose (n-1)*(N-1)-(n-1)*k}}{{|Q| \choose (n-1)*(N-1)}} \end{equation} The two probabilities $p(n,k)$ and $q(n,k)$ can now be combined to obtain the probability of drawing $k$ complete patterns of the correct length $n$: \begin{equation} \label{eq:analysis:4} P(n, k) := p(n,k)*q(n,k) \end{equation} In this context, the expected value of $P(n,k)$ for different values of $n$ is of interest, as it describes the average number of patterns we expect to see. The expected value, in general, is defined as: \begin{equation} \label{eq:analysis:5} E(X) := \sum\limits_{i \in I}(x_i p_i) \end{equation} In our case, $x_i$ is $k$, as it describes the number of patterns, and $p_i$ equals $P(n,k)$ as the probability of drawing $k$ patterns, i.\,e., the expected value is \begin{equation} \label{eq:analysis:6} E(n) := 1 + \sum\limits_{k=1}^{N-1} (P(n,k)*k) \end{equation} We are adding 1 to the result, as the original pattern will always be present. Equation~\ref{eq:analysis:6} will only calculate the expected value for patterns of a specific length. However, as the adversary does not know the length of the pattern with certainty in the 1BD scenario, we have to consider patterns of any length. For that, we have to use a modified variant of Eq.~\ref{eq:analysis:3}: \begin{equation} \label{eq:analysis:7} q(n, k, M) := \frac{{|Q|-(n-1)*k \choose (M-1)*(N-1)-(n-1)*k}}{{|Q| \choose (M-1)*(N-1)}} \end{equation} In Eq.~\ref{eq:analysis:7}, $n$ is the length of the random pattern, while $M$ is the length of the original pattern.
Accordingly, we modify Eq.~\ref{eq:analysis:4} and Eq.~\ref{eq:analysis:6}: \begin{equation} \label{eq:analysis:8} P(n, k, M) := p(n,k)*q(n,k,M) \end{equation} \begin{equation} \label{eq:analysis:9} E(M) := 1 + \sum\limits_{n=1}^M\sum\limits_{k=1}^{N-1} (P(n,k,M)*k) \end{equation} Finally, to determine the expected mean value of the number of detected patterns given a specific block size $N$, we calculate \begin{equation} \label{eq:analysis:10} F(N) = \frac{1}{|P|}*\sum\limits_{M=1}^{L}(E(M)*|P_M|) \end{equation} where $L$ is the length of the longest pattern, and $|P_M|$ the number of patterns having length $M$. \subsection{Analytical Result} \label{sec:analyticalresult} \begin{table}[t] \centering \caption{Expected avg. number of detected patterns $F(N)$ for varying block sizes $N$} \ \begin{tabular*}{0.4\textwidth}{@{\extracolsep{\fill}}rrrr} \toprule $N$ & $10$ & $50$ & $100$ \\ \midrule $F(N)$ & $1.35$ & $2.93$ & $4.83$\\ \bottomrule \end{tabular*} \label{tab:analysis:1} \end{table} The results (cf. Table~\ref{tab:analysis:1}) indicate that an adversary will, on average, detect only very few random patterns. As expected, the privacy expectation for singular queries ($\frac{1}{N}$) does not apply to the web surfing scenario. Note that for reasons of conciseness we have provided a slightly simplified model, which disregards overlaps between patterns. Actually, the adversary must expect to find a slightly \emph{higher} number of patterns, because a domain name that is contained within multiple patterns only has to be drawn once to be detected as part of all patterns. Nevertheless, the analysis is instructive and provides us with a baseline for the empirical evaluations that we will describe in the following. 
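The model can be checked numerically. The following sketch implements Eqs.~\ref{eq:analysis:2} and \ref{eq:analysis:7}--\ref{eq:analysis:10} for an arbitrary pattern-length histogram; the helper \texttt{comb0} (a binomial coefficient that is zero outside the valid range) and the toy numbers in the test are our assumptions, not the paper's dataset:

```python
from math import comb

def comb0(n, k):
    # binomial coefficient, defined as 0 for out-of-range arguments
    return comb(n, k) if 0 <= k <= n else 0

def E(M, N, Q, hist):
    """Eq. (9): expected number of observed patterns when the desired
    pattern has length M; hist[n] = |P_n|, Q = |dummy database|."""
    total = 1.0  # the original pattern is always present
    for n in range(1, M + 1):
        Pn = hist.get(n, 0)
        for k in range(1, N):
            p = comb0(Pn, k) * comb0(Q - Pn, (N - 1) - k) / comb(Q, N - 1)
            q = (comb0(Q - (n - 1) * k, (M - 1) * (N - 1) - (n - 1) * k)
                 / comb(Q, (M - 1) * (N - 1)))
            total += p * q * k
    return total

def F(N, Q, hist):
    """Eq. (10): average over the length distribution of all patterns."""
    return sum(E(M, N, Q, hist) * c for M, c in hist.items()) / sum(hist.values())
```

For $M=1$ the expression reduces to one plus the mean of the hypergeometric distribution, $(N-1)\,|P_1|/|Q|$, which provides a quick sanity check of the implementation.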
\section{Evaluation} \label{sec:evaluation} In order to evaluate the effectiveness of the semantic intersection attack in a realistic scenario, we developed a simulator that enables us to efficiently test different attack strategies and various assumptions about the knowledge of the adversary. In the following we present results for the 1BD scenario. \paragraph{Methodology} Given a dataset, the simulator will generate range queries for all the patterns from the dataset and perform the semantic intersection attack. We are interested in the influence of two factors on the effectiveness of the attack, namely the \emph{block size} $N$, and the \emph{size of the dummy database} $|Q|$ that contains the dummy names. If the range query scheme were to be used in practice, these two factors could be easily influenced by the user. Thus, it is worthwhile to analyze their effect on the attainable privacy. In the following, we will use the metric of \textbf{$k$-identifiability}, which is derived from the well-known metric $k$-anonymity \cite{Sweene02-kanonymity}: A set of consecutively observed range queries is said to be $k$-identifiable, if the adversary finds \emph{exactly} $k$ matching patterns of websites in his pattern database. For conciseness we will show the cumulative distribution of the fraction of $k$-identifiable patterns, i.\,e., the fraction of patterns that are $k$-identifiable or less than $k$-identifiable. \subsection{Results of Experiment 1: Variation of Block Size} \label{sec:evaluation:ex1} For the purpose of this analysis, we consider three different block sizes: $N=10$, $N=50$, and $N=100$. \cite{FederrathFHP11-dnsmixes} has shown that the median latency exceeds 1200\,ms for a block size of $N=100$, rendering higher values impractical. Based on the result of Sect.
\ref{sec:analysis}, we expect to receive some, but not many ambiguous results, i.\,e., instances where the whole pattern of a different website appears in a set of consecutively observed range queries by chance. Intuitively, the larger the block size, the more random patterns will occur. Accordingly, we expect the effectiveness of the attack to degrade with increasing block sizes. \afterpage{% \clearpage\clearpage \begin{table}[t] \centering \caption{Results for varying block sizes $N$ given the whole dummy database} \begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}rrrrrr} \toprule $N$ & $S$ & 1-identifiable & $\leq 5$-identifiable & median(k) & max(k) \\ \midrule $10$ & $216{,}925$ & $62\,\%$ & $99\,\%$ & $1$ & $6$ \\ $50$ & $216{,}925$ & $8\,\%$ & $88\,\%$ & $3$ & $14$ \\ $100$ & $216{,}925$ & $1\,\%$ & $43\,\%$ & $6$ & $18$ \\ \bottomrule \end{tabular*} \label{tab:evaluation:ex1:1} \end{table} \begin{figure}[h] \centering \includegraphics{pictures/M2-VarN} \caption{Distribution of $k$-identifiability for varying block sizes $N$ (whole database)} \label{fig:evaluation:ex1:1} \end{figure} } As can be seen in Table~\ref{tab:evaluation:ex1:1} and Fig.~\ref{fig:evaluation:ex1:1}, the smallest block size provides little privacy, with $62\,\%$ of patterns being 1-identifiable. Consequently, the median of the observed $k$-identifiability values is $1$. $99\,\%$ of patterns are 5-identifiable or better. No pattern is more than 6-identifiable. For a larger block size of $N=50$, only $8\,\%$ of patterns are 1-identifiable, but the cumulative distribution quickly approaches $100\,\%$. All patterns are 14-identifiable or less, and the median of all observed $k$-identifiability values is $3$, i.\,e., for $50\,\%$ of the websites the adversary can narrow down the actually desired site to a set of 3 or fewer sites, which narrows down the desired site much further than the baseline probability of $\frac{1}{50}$ for finding the desired domain name in the first block would suggest.
As expected, $N=100$ is most effective: $0.8\,\%$ of patterns are 1-identifiable, but still $43\,\%$ of patterns are at most 5-identifiable. Generally, we can observe diminishing returns when the block size is increased. While the increase from $N=10$ to $50$ reduces the fraction of 1-identifiable patterns by $54$ percentage points, adding another 50 queries per block only decreases the fraction by $7.2$ percentage points. The same is true for the maximum $k$-identifiability, which increases by eight and four, respectively. Overall, the results indicate that range queries provide far less privacy than suggested by Zhao et al. in the web surfing scenario. \paragraph{1BD-improved} We also considered an improved attack algorithm that guesses the length of the desired patterns based on the total number of observed queries in the second block, resulting in a range of possible pattern lengths. This allows the adversary to reject all patterns that do not fall into this range. As a result, $80\,\%$ ($N=100$) and $94\,\%$ ($N=10$) of all patterns are 1-identifiable. Due to space constraints, we are unable to adequately cover the calculations to estimate the length in this paper, but we have released an implementation including the relevant documentation in the source code repository (see \Cref{fnote:github}).
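Given per-site match counts produced by a simulator, the statistics reported above can be derived as follows (a sketch; the function name and the dictionary representation are ours):

```python
from statistics import median

def k_identifiability_stats(counts):
    """Summarize per-site match counts: fraction of 1-identifiable
    sites, fraction of <=5-identifiable sites, median and maximum k."""
    ks = list(counts.values())
    n = len(ks)
    return {
        "1-identifiable": sum(k == 1 for k in ks) / n,
        "<=5-identifiable": sum(k <= 5 for k in ks) / n,
        "median(k)": median(ks),
        "max(k)": max(ks),
    }
```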
\subsection{Results of Experiment 2: Variation of Dummy Database} \afterpage{% \clearpage\clearpage \begin{table}[t] \centering \caption{Results for varying dummy database sizes $S$ given the block size $N=50$} \begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}rrrrrr} \toprule $N$ & $S$ & 1-identifiable & $\leq 5$-identifiable & median(k) & max(k) \\ \midrule $50$ & $2{,}000$ & $19\,\%$ & $92\,\%$ & $3$ & $14$ \\ $50$ & $20{,}000$ & $16\,\%$ & $95\,\%$ & $3$ & $11$ \\ $50$ & $200{,}000$ & $9\,\%$ & $88\,\%$ & $3$ & $13$ \\ \bottomrule \end{tabular*} \label{tab:evaluation:ex2:1} \end{table} \begin{figure}[h] \centering \includegraphics{pictures/M2-VarS} \caption{Distribution of $k$-identifiability for varying dummy database sizes $S$; $N=50$} \label{fig:evaluation:ex2:1} \end{figure} } Generating and maintaining a dummy database is a non-trivial task for the client, which gets harder the larger the database is supposed to be. Accordingly, the influence of the size of the dummy database is of interest. We assume that the client's dummy database is always a subset of the pattern database of the adversary, because, in general, the adversary will have access to more resources than the client, and collecting patterns scales very well. We compare the effectiveness of three different database sizes ($S=2{,}000$, $20{,}000$, and $200{,}000$). The domain names are chosen by drawing patterns from the full pattern database (without replacement) and adding all domain names of each pattern to the dummy database. This process continues until exactly $S$ unique domain names have been found. We select full patterns to increase the chance that the client randomly chooses a full pattern when drawing dummies. We used a fixed block size of $N=50$ for this experiment. Fig.~\ref{fig:evaluation:ex2:1} shows that the differences are quite small overall.
Thus, the biggest effect of varying the database is the change in the percentage of 1-identifiable patterns: The percentage of 1-identifiable patterns drops by three percentage points when the dummy database size is increased from $S=2000$ to $S=20{,}000$, and by another 7 points on the second increase to $S=200{,}000$. The observed changes have a much smaller effect than the variation of the block size; however, regardless of these results, a larger database is always desirable to prevent other attacks, such as the enumeration of the client's database. \subsection{Effect of Pattern Length on Site Identifiability} Now that we know the effect of varying the block size, the composition of the different $k$-identifiabilities is of interest. With this information, we can determine \textbf{whether websites with longer or shorter patterns are more at risk} to be identified. Intuitively, shorter patterns should generally have lower $k$-identifiabilities, as comparatively few dummies are drawn to obfuscate them, decreasing the chance of drawing a whole pattern. Conversely, longer patterns should generally achieve higher $k$-identifiabilities, as they use a higher number of dummy domain names. We will now test this hypothesis by analyzing the composition of the different $k$-identifiabilities, using the results of our simulation with a block size of $N=50$ and the full dummy database ($S=216{,}925$). 
\begin{table}[t] \centering \caption{Number of patterns $n_k$, mean length $\overline{|p|}$ and standard deviation $\mathrm{SD}$ aggregated by resulting $k$-identifiability ($N=50$, $S=216{,}925$)} \label{tab:evaluation:reasons:1} \begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}rrrrrrrrrrr} \toprule $k$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $\geq10$ \\ \midrule $n_k$ & $7{,}693$ & $18{,}790$ & $23{,}184$ & $19{,}784$ & $12{,}497$ & $6{,}532$ & $2{,}875$ & $1{,}077$ & $336$ & $121$ \\ $\overline{|p|}$ & $10.59$ & $11.43$ & $12.52$ & $13.54$ & $14.43$ & $15.45$ & $16.22$ & $17.65$ & $17.09$ & $19.47$ \\ $\mathrm{SD}$ & $12.16$ & $13.24$ & $13.65$ & $14.55$ & $15.02$ & $16.14$ & $16.65$ & $17.71$ & $15.35$ & $19.68$ \\ \bottomrule \end{tabular*} \end{table} As can be seen in Table~\ref{tab:evaluation:reasons:1}, the mean pattern length rises almost linearly with increasing $k$-identifiability, which supports our hypothesis. The standard deviation exhibits a similar behavior, albeit with a slightly lower and less uniform growth rate. We could reproduce this result for other block and database sizes. The correlation is more distinct for larger block sizes. Smaller block sizes do not show this behavior as clearly, as the range of $k$-identifiabilities is too small to show any distinct trend. \subsection{Results of Experiment 3: ABD Scenario} \label{sec:evaluation:ex3} So far, we concentrated on the 1BD scenario (cf. Sect.~\ref{sec:attack}). We will now consider the ABD scenario by repeating the experiment from Sect.~\ref{sec:evaluation:ex1}, simulating an adversary that can distinguish individual blocks: In the ABD scenario the adversary is able to 1-identify between $87\,\%$ ($N=100$) and $97\,\%$ ($N=10$) of all domain names, vastly improving on the results of 1BD ($1\,\%$ and $62\,\%$, respectively). 
The increased accuracy is due to two effects: Firstly, in the ABD scenario the adversary can derive $|p|$, the length of the obfuscated pattern, and filter the set of candidate patterns accordingly (cf. Sect.~\ref{sec:attack}). Secondly, the probability that another matching pattern is drawn from the dummy database by chance is much smaller when it has to meet the three ABD conditions. The contribution of these two effects to the overall effectiveness obtained for ABD can be analyzed by reviewing the results obtained for the baseline (1BD) in comparison to 1BD-improved (cf. Sect.~\ref{sec:evaluation:ex1}) and ABD: The results for 1BD-improved, which filters candidate patterns using a rough estimate of $|p|$, already show a significant increase: For $N=50$ the fraction of 1-identifiable sites is $83\,\%$ for 1BD-improved, while it is only $8\,\%$ for 1BD. On the other hand, the fraction of 1-identifiable websites obtained for ABD, where matching patterns have to meet the additional conditions and the exact value of $|p|$ is known, rises only by another 6 percentage points (reaching $89\,\%$) compared to 1BD-improved. While this sort of analysis cannot conclusively prove that the effect of filtering by length is larger than the effect of filtering via the ABD conditions, we note that the additional benefit of these conditions is comparatively small when the adversary can estimate the length of the obfuscated pattern. This result indicates that range query schemes that are supposed to provide privacy in a web surfing scenario have to be devised and implemented in a way that the adversary cannot infer the length of the obfuscated query pattern. \section{Countermeasures} \label{sec:countermeasures} Having shown the weaknesses of the range query scheme against a pattern-based attack strategy, we will now discuss possible countermeasures. First, we will discuss and evaluate a pattern-based dummy selection strategy.
Afterwards, we will consider other strategies that could be used to hinder the adversary. \subsection{Pattern-Based Dummy Selection Strategy} \label{sec:countermeasures:improved-dummy} In the original dummy selection strategy, the client sampled the dummies independently and randomly from his dummy database. In contrast, the client will now draw \emph{whole patterns} from his database. When querying the resolver for a desired pattern, the client will draw $N-1$ random patterns of the same length and use them as dummies. If not enough patterns of the correct length are available, the client will combine two shorter patterns to obtain a concatenated pattern with the correct length. Intuitively, this approach ensures that the adversary will always detect $N$ patterns. The results of our evaluation, shown in Table~\ref{tab:countermeasures:improved-dummy:1}, confirm this conjecture. All patterns are exactly $N$-identifiable. \begin{table}[t] \centering \caption{Statistics for varying block sizes $N$ using the pattern-based dummy construction strategy} \begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}rrrrrr} \toprule $N$ & $S$ & 1-identifiable & $\leq 5$-identifiable & median(k) & max(k) \\ \midrule $10$ & $216{,}925$ & $0\,\%$ & $0\,\%$ & $10$ & $10$ \\ $50$ & $216{,}925$ & $0\,\%$ & $0\,\%$ & $50$ & $50$ \\ $100$ & $216{,}925$ & $0\,\%$ & $0\,\%$ & $100$ & $100$ \\ \bottomrule \end{tabular*} \label{tab:countermeasures:improved-dummy:1} \end{table} However, in real-world usage scenarios, the length of the pattern the client is about to query cannot be known in advance. As the dummies for the first element of the pattern have to be chosen before the query can be sent, the client has no way to be sure of the pattern length of the desired website, as these values may change over time when a website changes. This leads to uncertainty about the correct length of the dummy patterns. 
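A minimal sketch of this dummy selection strategy, assuming the client's database is indexed by pattern length; the names are ours, and the splicing step is simplified in that it assumes patterns of the two required shorter lengths exist:

```python
import random

def draw_dummy_patterns(length, patterns_by_len, N, rng=random):
    """Draw N-1 dummy patterns of the given length; if too few exist,
    splice two shorter patterns into one of the required length."""
    pool = list(patterns_by_len.get(length, []))
    rng.shuffle(pool)
    dummies = pool[:N - 1]
    while len(dummies) < N - 1:
        # combine two shorter patterns into one of the right length
        a_len = rng.randint(1, length - 1)  # assumes length > 1 here
        a = rng.choice(patterns_by_len[a_len])
        b = rng.choice(patterns_by_len[length - a_len])
        dummies.append(list(a) + list(b))
    return dummies
```

Interleaving the desired pattern with these dummies ensures that every name in the first block is the head of a complete pattern, so the adversary always finds exactly $N$ matching patterns.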
A wrong choice of pattern length may be used by the adversary to identify the original pattern. Future research could study more sophisticated dummy selection strategies, drawing from experience gained in the field of obfuscated web search \cite{BalsaTD12}. \subsection{Other Countermeasures} As described in the previous section, the pattern-based dummy selection strategy is subject to practical limitations. We will briefly cover other countermeasures that may be used to improve the privacy of clients. This list is not exhaustive. The first option is to use a variable value for $N$ that changes on each block. This will raise the difficulty of determining the length of the original pattern, as long as the adversary cannot distinguish individual blocks. This change would render 1BD-improved useless, as it depends on a fixed number of chosen dummies per block (although similar optimizations could be found that would still improve on the performance of the trivial algorithm). However, this would not impact the performance of the ABD algorithm, as it does not rely on uniform block sizes. Another improvement that may make the pattern-based strategy more feasible would be to round up the length of the target pattern to the next multiple of a number $x > 1$. The additional queries (``padding'') could be chosen randomly, or by choosing patterns of the correct length. Finally, other privacy-enhancing techniques, such as mixes and onion routing \cite{chaum81-mix,dingledine04tor}, can be employed to counter monitoring and tracking efforts. However, these general-purpose solutions are not specifically designed for privacy-preserving DNS resolution and may introduce significant delays into the resolution process. \section{Discussion} \label{sec:discussion} We designed our experimental setup to stick as closely as possible to reality. However, for reasons of conciseness and clarity we have neglected some effects.
In the following we will discuss whether they affect the validity of our conclusions. Firstly, the results are implicitly biased due to a closed-world assumption, i.\,e., our results have been obtained on a dataset of limited size. However, as the Toplist of Alexa contains a large variety of websites, we are confident that the results are valid for a large fraction of sites in general. Moreover, we have only evaluated the effectiveness of the attack for the \emph{home pages}; the evaluation of the attack on individual sub-pages is left for future work. Secondly, while we considered the effects of caching of dummy queries in the 1BD scenario, we disregarded caching of the desired queries: The client may still have (parts of) a pattern in his local cache, resulting in incomplete patterns being sent to the resolver. However, the adversary may adapt to caching by remembering the TTL of all responses he sent to a client and matching the patterns against the union of the received domain names and the cached entries. Moreover, an adversary who wants to determine all websites a user visits needs the patterns of all websites on the Internet. Such a database would be non-trivial to generate and maintain. However, a \emph{reactive} adversary may visit any domain name he receives a query for and store the pattern for that domain name in its pattern database, making a slightly delayed identification possible. Finally, we disregarded changing patterns as well as DNS prefetching techniques, which cause longer and more volatile patterns. However, a determined adversary will have no problems in addressing these issues. \section{Conclusion} \label{sec:conclusion} We demonstrated that random set range queries offer considerably less protection than expected in the use case of web surfing. Our attack exploits characteristic query patterns, which lead to greatly reduced query privacy compared to the estimations made by Zhao et al. in their original work.
Moreover, we proposed and evaluated an improved range query scheme using query patterns to disguise the original pattern. We encourage researchers to consider the effects of semantic interdependencies between queries when designing new schemes for query privacy, as the rising pervasiveness of social networking buttons, advertising and analytics makes singular queries less and less common for web surfing. \medskip
\section{Introduction} Nearly three decades after their discovery \cite{Bend}, the high-temperature cuprate superconductors have so far eluded a comprehensive explanation. These layered materials contain CuO$_2$ layers, which are antiferromagnetic insulators in the undoped limit and become superconducting upon doping \cite{RMP_reviews}. The hole-doped side shows a more robust superconductivity, extending to higher temperatures and over a wider range of dopings. The first step towards deciphering the mechanism of superconductivity in these compounds is a proper description of the motion of these holes in the CuO$_2$ layer -- this has become one of the most studied problems in condensed matter theory \cite{RMP_reviews,Kane89}. Its solution should elucidate the nature of the quasiparticles that eventually bind together into the pairs that facilitate the superconducting state. Despite significant effort, even what is the minimal model that correctly describes this low-energy quasiparticle is still not clear. There is general agreement that the parent compounds are charge-transfer insulators \cite{Zaanen}, and wide consensus that most of their low-energy physics is revealed by studies of a single CuO$_2$ layer, modeled in terms of Cu $3d_{x^2-y^2}$ and O $2p$ orbitals. Because only ligand $2p$ orbitals hybridize with the $3d_{x^2-y^2}$ orbitals, it is customary to ignore the other O $2p$ orbitals; this leads to the well-known three-band Emery model \cite{Emery}. However, the Emery model is perceived as too complicated to study so it is often further simplified to a one-band $t$-$J$ model that describes the dynamics of a Zhang-Rice singlet (ZRS) \cite{Zhang,George}. We know that the $t$-$J$ model with only nearest-neighbor (nn) hopping $t$ is certainly not the correct model because it predicts a nearly flat quasiparticle energy along $(0,\pi)-(\pi,0)$, unlike the substantial dispersion found experimentally \cite{Wells,Leung95,Damascelli}.
However, its extension with longer range hopping, the $t$-$t'$-$t''$-$J$ model, was shown to reproduce the correct dispersion \cite{Leung97} for values of the 2$^{\rm nd}$ and 3$^{\rm rd}$ nn hoppings in rough agreement with those estimated from density functional theory \cite{ok} and cluster calculations \cite{eskes}. On the strength of this agreement, taken as proof that this is the correct model, both the $t$-$J$ model and its parent, the Hubbard model \cite{amol}, have been studied very extensively in the context of cuprate physics. Here we show that the quasiparticle of the $t$-$t'$-$t''$-$J$ model is {\em qualitatively} different from that of the $U_{dd} \rightarrow \infty$ limit of the three-band Emery model \cite{comment}. While both models predict a dispersion in quantitative agreement with that measured experimentally (for suitable values of the parameters), the factors controlling the quasiparticle dynamics are very different. It has long been known that both the longer-range hopping and the spin fluctuations play a key role in the dynamics of the quasiparticle of the $t$-$t'$-$t''$-$J$ model. Here we use a non-perturbative variational method, which agrees well with available exact diagonalization (ED) results, to show that spin fluctuations and longer-range hopping control the quasiparticle dispersion in different parts of the Brillouin zone, and to explain why. In contrast, using the same variational approach, it was recently argued that spin fluctuations play no role in the dispersion of the quasiparticle of the $U_{dd} \rightarrow \infty$ limit of the Emery model \cite{Hadi}. This claim is supported by additional results we present here. This major difference in the role played by spin fluctuations in determining the quasiparticle dynamics shows that these models do not describe the same physics. 
This suggests that the $t$-$t'$-$t''$-$J$ model is not suitable for the study of cuprates in the hole-doped regime, although it and related one-band models may be valid in the electron-doped regime. As we argue below, it may be possible to ``fix'' one-band models by addition of other terms, although we do not expect this to be a fruitful enterprise. Instead, we believe that what is needed is a concerted effort to understand the predictions of the Emery model. Our results in Ref. \cite{Hadi} and here, showing that spin fluctuations of the AFM background do not play a key role in the quasiparticle dynamics of this three-band model, contrary to what was believed to be the case based on results from one-band models, should simplify this task. The article is organized as follows. In Section II we review the three-band Emery model and briefly discuss the emergence of the one-band and simplified three-band models in the asymptotic limit of strong correlations on the Cu sites. Section III describes the variational method, which consists of keeping a limited number of allowed magnon configurations in the quasiparticle cloud. Section IV presents our results for both one- and three-band models, and their interpretation. Finally, Section V contains a summary and a detailed discussion of the implications of these results. \section{Models} A widely accepted starting point for the description of a CuO$_2$ layer is the three-band Emery model \cite{Emery}: \begin{multline} \label{Emr} \mathcal{H} = T_{pp} + T_{pd} + \Delta_{pd}\sum_{i \in {\rm O}, \sigma}n_{i, \sigma} \\+ U_{pp}\sum_{i \in {\rm O}}n_{i, \downarrow}n_{i, \uparrow} + U_{dd}\sum_{i \in {\rm Cu}}n_{i, \downarrow}n_{i, \uparrow}. \end{multline} The sets ${\rm O}$ and ${\rm Cu}$ contain the ligand O $2p$ and the Cu $3d_{x^2-y^2}$ orbitals respectively, sketched in Fig. \ref{fig1}(a). For ${i \in {\rm O}}$, $n_{i, \sigma}=p^{\dagger}_{i,\sigma}p_{i, \sigma}$ is the number of spin-$\sigma$ holes in that $2p$ orbital. 
Similar notation is used for the $3d$ orbitals, their hole creation operators being $d^\dagger_{i,\sigma}$, $i\in {\rm Cu}$. $T_{pp}$ is the kinetic energy of the holes moving on the O sublattice, described by a Hamiltonian with first ($t_{pp}$) and second ($t'_{pp}$) nearest-neighbour (nn) hopping: \begin{multline} T_{pp} = t_{pp}\sum_{i\in {\rm O},{\boldsymbol\delta},\sigma} r_{\boldsymbol\delta}p^{\dagger}_{i,{\sigma}}p^{}_{i+{\boldsymbol\delta},\sigma}\\ - t'_{pp}\sum_{i \in {\rm O},\sigma} p^{\dagger}_{i,\sigma}(p^{}_{i - {\boldsymbol\epsilon},\sigma}+p^{}_{i + {\boldsymbol\epsilon},\sigma}). \label{Tpp} \end{multline} The lattice constant is set to $a=1$. The vectors ${\boldsymbol\delta}= \pm(0.5, 0.5), \pm(0.5,-0.5)$ are the distances between any O and its four nn O sites, and $r_{{\boldsymbol\delta}} = \pm 1$ sets the sign of each nn $pp$ hopping integral in accordance with the overlap of the $2p$ orbitals involved, see Fig. \ref{fig1}(a). Next-nn hopping is included only between O $2p$ orbitals pointing toward a common bridging Cu, separated by ${\boldsymbol\epsilon}= (1,0)$ or $(0,1)$; hybridization with the $4s$ orbital of the bridging Cu further boosts the value of this hopping integral. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fig1.eps} \caption{(color online) (a) Sketch of the CuO$_2$ layer, with O marked by squares and Cu by circles. The relevant orbitals are drawn at a few sites. The arrows indicate the hopping terms included in the three-band Emery model; (b) Sketch of the one-band $t$-$t'$-$t''$-$J$ model. Cu sites host spin degrees of freedom, except at sites where a Zhang-Rice singlet is centered (red circle). The arrows indicate the various terms in this Hamiltonian; (c) Sketch of the $U_{dd}\rightarrow \infty$ limit of the Emery model. Cu sites host spin degrees of freedom but the doped holes (red filled square) move on the O lattice. 
The arrows indicate the various terms in this Hamiltonian; (d) The full Brillouin zone of the CuO$_2$ lattice (the outer square) which encloses the magnetic Brillouin zone (shaded). } \label{fig1} \end{figure} $T_{pd}$ is the kinetic energy of holes moving between neighbouring Cu and O orbitals: \begin{equation} T_{pd} = t_{pd}\sum_{i \in {\rm Cu},\bf u,\sigma} r_{\bf u}d^{\dagger}_{i,{\sigma}}p^{}_{i+ \bf u,\sigma} + h.c., \end{equation} where ${\bf u} = (\pm 0.5,0), (0, \pm 0.5)$ are the distances between a Cu and its four nn O sites, and $r_{\bf u}$ are the signs of the overlaps of the corresponding orbitals. It is this term that provides the main justification for ignoring the other sets of $2p$ orbitals, because symmetry forbids hopping of Cu holes from $3d_{x^2-y^2}$ orbitals into the non-ligand O orbitals. We further discuss this assumption below. $\Delta_{pd}$ is the charge transfer energy which ensures that in the parent compound the O $2p$ orbitals are fully occupied ({\em i.e.} contain no holes). Finally, $U_{pp}$ and $U_{dd}$ are the Hubbard repulsion in the $2p$ and $3d$ orbitals, respectively. Longer range Coulomb interaction between holes on O and Cu can also be added, but for the single doped-hole problem analyzed here, it leads to a trivial energy shift. Strong correlations due to the large $U_{dd}$, combined with the big Hilbert space with its three-orbital basis, and the need for a solution for a hole concentration equal (in the parent compound) or larger (in the hole-doped case) than one per Cu, make this problem very difficult to solve. While progress is being made with a variety of techniques \cite{3band} (which however have various restrictions, such as rather high temperatures and/or small clusters for quantum Monte Carlo methods, and/or additional approximations, such as setting $t_{pp}=0$ for convenience), it is far more customary to further simplify this model before attempting a solution. 
A reasonable way forward is to use the limit $U_{dd} \rightarrow \infty$ to forbid double occupancy of the $3d$ orbitals. As a result, in the undoped ground-state there is one hole per Cu. Virtual hopping processes lead to antiferromagnetic (AFM) superexchange between the resulting spin degrees of freedom, so that the parent compound is a Mott insulator with long-range AFM order \cite{comm0}. Upon doping, holes enter the O band and the issue is how to accurately describe their dynamics as they interact with the spins at the Cu sites. Here we compare two such descriptions for {\em the single doped hole case}. \subsection{One-band models} In their seminal work \cite{Zhang}, Zhang and Rice argued that the doped hole occupies the linear combination of the four O $2p$ ligand orbitals surrounding a central Cu that has the same $x^2-y^2$ symmetry as the Cu $3d$ orbital hosting the spin. Furthermore, exchange locks the two holes in a low-energy Zhang-Rice singlet (ZRS). They also argued that the dynamics of this composite object, which combines charge and spin degrees of freedom, is well captured by the one-band model: \begin{equation} \label{Hamm} {\cal{H}} = {\cal P}\left[\hat{T} + \hat{T}'+ \hat{T}''\right]{\cal P}+ \mathcal{H}_\mathrm{AFM}. \end{equation} The first term describes the hopping of the ZRS (marked in Fig. \ref{fig1}(b) by the ``missing spin'' locked in the ZRS) on the square lattice of Cu sites that hosts it. Originally only nn hopping $T$ was included: $\hat{T} = t \sum_{\langle i, j\rangle, \sigma} d^\dagger_{i,\sigma} d_{j,\sigma} + h.c.$, with $i,j \in {\rm Cu}$. The projector ${\cal P}$ removes doubly occupied states, therefore this term allows only Cu spins neighboring the ZRS to exchange their location with the ZRS. This mimics the more complex reality of the doped hole moving on the O sublattice and forming ZRS with different Cu spins. Although in Ref. 
\cite{Zhang} it was argued that only nn ZRS hopping is important, longer-range 2nd ($\hat{T}'$) and 3rd ($\hat{T}''$) nn hopping was later added to the model on a rather ad hoc basis. As discussed below, this is needed in order to find a quasiparticle dispersion similar to that measured experimentally. These terms are defined similarly to $\hat{T}$ with hopping integrals $t'$ and $t''$, respectively. For cuprates, $t'/t\sim-0.3, t''/t\sim 0.2$ are considered to be representative values \cite{ts0, ts}, in agreement with various estimates \cite{ok,eskes}. In the following, we refer to this as the $t$-$t'$-$t''$-$J$ model, whereas if $t'=t''=0$ we call it the $t$-$J$ model. The term $\mathcal{H}_\mathrm{AFM} = J\sum_{\langle i,j\rangle}{\bf S}_i\cdot {\bf S}_{j}$ describes the nn AFM superexchange between the other Cu spins ${\bf S}_i$, with $J/t\sim 0.3$ for cuprates \cite{ts}. It leads to AFM order in the undoped system \cite{comm0}, and also controls the energy of the cloud of magnons that are created in the vicinity of the ZRS, as it moves through the magnetic background. The $t$-$J$ model also emerges as the $U\rightarrow \infty$ limit of the Hubbard model \cite{amol}, but with additional terms of order $J$. One of them, $-J/4 \sum_{\langle i, j\rangle}^{}n_i n_j$, gives trivial energy shifts for both the undoped and the single-hole doped cases of interest to us in this work, so its presence can be safely ignored in this context. 
More interesting is the so-called three-site term ${\cal P}\hat{T}_{3s} {\cal P}$ \cite{3site}, where \begin{multline} \label{3site} \hat{T}_{3s} = \frac{J}{4}\sum_{i \in {\rm Cu}, \sigma}\sum_{\boldsymbol \epsilon\neq\boldsymbol \epsilon'}(d^{\dagger}_{i + \boldsymbol \epsilon', \sigma}n_{i,-\sigma}d_{i + \boldsymbol \epsilon, \sigma} \\ -d^{\dagger}_{i + \boldsymbol \epsilon', \sigma}d^{\dagger}_{i,-\sigma}d_{i, \sigma}d_{i + \boldsymbol \epsilon,-\sigma}) \end{multline} describes ZRS hopping through an intermediate Cu site and permits spin swapping with the spin at this intermediate site. As shown below, this term influences the quasiparticle dispersion but it is not clear that it should be included in the one-band model, because the original Hamiltonian is the Emery, not the Hubbard, model. In fact, a perturbational derivation of the low-energy Hamiltonian obtained by projecting the three-band model onto ZRS states reveals a much more complicated Hamiltonian than the $t$-$t'$-$t''$-$J$ model, with many other terms \cite{ALigia}. We are not aware of a systematic study of their impacts, but their presence underlies one important issue with this approach: the hoped-for simplification due to the significant decrease in the size of the Hilbert space comes at the expense of a Hamiltonian whose full expression \cite{ALigia} is very complicated. Using instead simpler versions like the $t$-$t'$-$t''$-$J$ model may result in qualitatively different physics than that of the full one-band model. Here we argue that this is indeed the case. \subsection{Simplified three-band model} An alternative is to begin at the same starting point, {\em i.e.} the limit $U_{dd} \rightarrow \infty$ resulting in spin degrees of freedom at the Cu sites. However, the O sublattice on which the doped hole moves is kept in the model, not projected out like in the one-band approach, see Fig. \ref{fig1}(c). 
This leads to a bigger Hilbert space than for one-band models (yet smaller than for the Emery model) but because spin and charge degrees of freedom are no longer lumped together, the resulting low-energy Hamiltonian is simpler and makes it easier to understand its physics. The effective model for a layer with a single doped hole, which for convenience we continue to call ``the three-band model'' although it is its $U_{dd}\rightarrow \infty$ approximation, was derived in Ref. \cite{Bayo} and reads: \begin{equation} \label{eff} \mathcal{H}_\mathrm{eff} = \mathcal{H}_\mathrm{AFM} + \mathcal{H}_{J_{pd}} + T_{pp} +T_\mathrm{swap}. \end{equation} The meaning of its terms is as follows: $$\mathcal{H}_\mathrm{AFM} = J_{dd}\sum_{\langle i, j \rangle'} {\bf S}_{i}\cdot{\bf S}_{j}$$ is again the nn AFM superexchange between the Cu spins ${\bf S}_{i}$, so $J_{dd}\equiv J$ of the one-band models. The main difference is indicated by the presence of the prime, which reflects the absence of coupling for the Cu pair whose bridging O hosts the doped hole. The next term, $$\mathcal{H}_{J_{pd}}= J_{pd} \sum_{i\in O, {\bf u}} {\bf s}_{i}\cdot{\bf S}_{i+{\bf u}}$$ is the exchange of the hole's spin ${\bf s}_{i} = {1\over 2} \sum_{\alpha,\beta}^{} p^\dagger_{i\alpha} {\boldsymbol \sigma}_{\alpha\beta} p_{i\beta}$ with its two nn Cu spins. It arises from virtual hopping of a hole between a Cu and the O hosting the doped hole. Like in Eq. (\ref{Emr}), $T_{pp}$ is the kinetic energy of the doped hole as it moves on the O sublattice. It is supplemented by $T_\mathrm{swap}$ which describes effective hopping mediated by virtual processes where a Cu hole hops onto an empty O orbital, followed by the doped hole filling the now empty Cu state \cite{Bayo}. This results in effective nn or next nn hopping of the doped hole, with a swapping of its spin with that of the Cu involved in the process. 
The explicit form of this term is: $$ T_\mathrm{swap} = -t_{sw}\sum_{i \in {\rm Cu}, {\bf u \ne u'}}\sum_{ \sigma, \sigma'} s_{\bf u-u'}p^{\dagger}_{i+ \bf u,{\sigma}}p^{}_{i+\bf u',\sigma'}|i_{\sigma'}\rangle\langle i_\sigma|, \label{Ts} $$ reflecting the change of the Cu spin located at ${\bf R}_i$ from $\sigma$ to $\sigma'$ as the hole changes its spin from $\sigma'$ to $\sigma$ while moving to another O. The sign $s_{\bf u - u'}=\pm 1$ is due to the overlaps of the 2$p$ and 3$d$ orbitals involved in the process. For typical values of the parameters of the Emery model \cite{RMP_reviews} and using $J_{dd}$ ($\sim 0.15$ eV) as the unit of energy, the dimensionless values of the other parameters are $t_{pp}\sim 4.1$, $t'_{pp}\sim 0.6t_{pp}$, $t_{sw}\sim 3.0$ and $J_{pd}\sim 2.8$. We use these values in the following, noting that the results are not qualitatively changed if they are varied within reasonable ranges. For complete technical details of the derivation of this effective Hamiltonian and further discussions of higher order terms, as well as a comparison with other work along similar lines \cite{others}, the reader is referred to the supplemental material of Ref. \cite{Bayo}. \section{Method} The ground state of the undoped layer is not a simple N{\'e}el-ordered state. This is due to the spin fluctuations term $\mathcal{H}_\mathrm{sf}=J_{}/2 \sum_{\langle i, j\rangle} (S_{i}^-S_{j}^++S_{i}^+S_{j}^-)$ present in ${\cal H}_{\rm AFM}$, which plays an important role in low dimensions. A 2D solution can only be obtained numerically, for finite-size systems \cite{2DQMC}. The absence of an analytic description of the AFM background has been an important barrier to understanding what happens upon doping, because the undoped state itself is so complex. It is also the reason why most progress has been computational in nature and mostly restricted to finite clusters. 
While such results are very valuable, it can be rather hard to gauge the finite-size effects and, more importantly, to gain intuition about the meaning of the results. Because our goal is to verify whether the two kinds of models have equivalent quasiparticles, which requires us to understand qualitatively what controls their dynamics, we take a different approach. We use a quasi-analytic variational method valid for an infinite layer, so that finite-size effects are irrelevant. By systematically increasing the variational space we can gauge the accuracy of our guesses and, moreover, also gain intuition about the importance of various configurations and the role played by various terms in the Hamiltonians. Where possible, we compare our results with those obtained by exact diagonalization (ED) for small clusters, providing further proof for the validity of our method. For simplicity, in the following we focus on the one-band model; the three-band model is treated similarly, as already discussed in Ref. \cite{Hadi}. Because we do not have an analytic description of the AFM background wavefunction, we divide the task into two steps. \subsection{Quasiparticle in a N\'eel background} In the first step we completely ignore the spin fluctuations by setting $\mathcal{H}_\mathrm{sf} \rightarrow 0$, to obtain the so-called $t$-$t'$-$t''$-$J_z$ model. As a result, the undoped layer is described by a N\'eel state $|{\rm N}\rangle$ with up/down spins on the A/B sublattice, without any spin-fluctuations. One may expect this to be a very bad starting point, given the importance of spin-fluctuations for a 2D AFM. At the very least, this will allow us to gauge how important these spin fluctuations really are, insofar as the quasiparticle dynamics is concerned, when we include them in step two. It is also worth remembering that the cuprates are 3D systems with long-range AFM order stabilized up to rather high temperatures by inter-layer coupling, in the undoped compounds. 
The spin fluctuations must therefore be much less significant in the undoped state than is the case for a 2D layer, so our starting point may be closer to reality than a wavefunction containing the full description of the 2D spin fluctuations. Magnons (spins wrongly oriented with respect to their sublattice) are created or removed when the ZRS hops between the two magnetic sublattices. The creation of an additional magnon costs up to $2J$ in Ising exchange energy as up to four bonds involving the magnon now become FM. This naturally suggests the introduction of a variational space in terms of the maximum number of magnons included in the calculation. This variational calculation is a direct generalization of that of Ref. \cite{MonaHolger}, where the quasiparticle of the $t$-$J_z$ model was studied including configurations with up to 7 {\em adjacent} magnons. That work showed that keeping configurations with up to three magnons is already accurate if $t/J$ is not too large, so here we restrict ourselves to this smaller variational space. (Note that three is the minimum number of magnons to allow for Trugman loops \cite{Trugman}, see discussion below, so a lower cutoff is not acceptable). The configurations included are the same as in Ref. \cite{Trugman}, where this type of approach was first pioneered. To be more specific, our goal is to calculate the one-hole retarded Green's function at zero temperature: \begin{equation} \label{Gkw} G({\bf k},\omega) = \langle {\rm N}|d^{\dagger}_{\bf k,\uparrow}\hat{G}(\omega)d_{\bf k,\uparrow}|{\rm N}\rangle, \end{equation} where $\hat{G}(\omega)=\lim_{\eta\rightarrow 0^+} (\omega-\mathcal{H}+i\eta)^{-1}$ is the resolvent of the one-band Hamiltonian (\ref{Hamm}) and $d_{\bf k,\uparrow}= \frac{1}{\sqrt{N}}\sum_{i \in {\rm Cu}_A}e^{i{\bf k}\cdot{\bf R}_i}d_{i,\uparrow}.$ Here $N\rightarrow \infty$ is the number of sites in each magnetic sublattice, ${\bf k}$ is restricted to the magnetic Brillouin zone depicted in Fig. 
\ref{fig1}(d), and we set $\hbar=1$. The spectrum is identical if the quasiparticle is located on the spin-down sublattice: conservation of the total $z$-axis spin guarantees that there is no mixing between these two subspaces with different spin. Taking the appropriate matrix element of the identity $\hat{G}(\omega)(\omega + i \eta - \mathcal{H})=1$ leads to the equation of motion: \begin{equation} \label{G1} [\omega + i \eta -J - \epsilon({\bf k})]G({\bf k},\omega) -t\sum_{\beps }F_1({\bf k},\omega, {\beps})=1. \end{equation} The four vectors ${\beps= \pm(1,0), \pm(0,1)}$ point to the nn Cu sites, $J$ is the Ising exchange energy cost for adding the ZRS (four AFM bonds are removed) and \begin{equation} \label{eps} \epsilon({\bf k}) = 4t' \cos k_x\cos k_y +2t''[\cos(2k_x)+\cos(2k_y)] \end{equation} is the kinetic energy of the ZRS moving freely on its own magnetic sublattice. NN hopping creates a magnon as the hole moves to the other magnetic sublattice; this introduces the one-magnon propagators: $$F_1({\bf k},\omega, {\beps})=\frac{1}{\sqrt{{N}}}\sum_{i\in {\rm Cu}_A}e^{i{\bf k}\cdot{\bf R}_i}\langle {\rm N}|d^{\dagger}_{\bf k,\uparrow}\hat{G}(\omega) S^-_id_{i+ {\beps} ,\downarrow}|{\rm N}\rangle$$ with the hole on the B sublattice and therefore a magnon present on a nn A site, to conserve the total spin. Equation (\ref{G1}) is exact (for an Ising background) but solving it requires knowledge of the $F_1({\bf k},\omega, {\beps})\equiv F_1(\beps)$ propagators. Their equations of motion (EOM) are obtained similarly: \begin{multline} \label{F1} [\omega + i \eta -{5\over 2}J] F_1( {\beps}) = t'\sum_{\beps'\perp \beps}F_1({\beps'})+t''F_1(-{\beps}) \\+t[G({\bf k},\omega)+F_2({\beps, \beps})+\sum_{\beps'\perp \beps}F_2({\beps, \beps'})]. \end{multline} Note that 2$^{\rm nd}$ and 3$^{\rm rd}$ nn hopping keeps the hole on the B sublattice and thus conserves the number of magnons, linking $F_1$ to other $F_1$ propagators. 
However, nn hopping links $F_1$ to $G({\bf k},\omega)$ if the hole hops back to the A sublattice by removing the existing magnon, but also to two-magnon propagators, $F_2$, if it hops to a different A site than that hosting the first magnon. The equation above imposes the variational restriction that the two magnons are adjacent, so only $ F_2({\bf k},\omega, {\beps, \beps'})=\sum_{i\in {\rm Cu}_A}\frac{e^{i{\bf k}\cdot{\bf R}_i}}{\sqrt{{N}}}\langle {\rm N}|d^{\dagger} _{\bf k,\uparrow}\hat{G}(\omega) S^-_iS^+_{i+ {\beps}}d_{i+ {\beps + \beps'} ,\uparrow}|{\rm N}\rangle$ with $\beps + \beps' \ne 0$ are kept. This is a good approximation for the low-energy quasiparticle whose magnons are bound in its cloud, and thus spatially close. (Because fewer AFM bonds are disrupted, these configurations cost less exchange energy than those with the magnons apart). Of course, the hole could also travel far from the first magnon (using 2$^{\rm nd}$ and 3$^{\rm rd}$ nn hopping) before returning to the A sublattice to create a second magnon far from the first. Such higher energy states---ignored here but considered in the three-band model, see below---contribute to the spin-polaron+one magnon continuum which appears above the quasiparticle band. The relevance of this higher-energy feature is discussed below. EOMs for the new propagators are generated similarly. We do not write them here because they are rather cumbersome, but it is clear that the EOM for $F_2$ link them to other $F_2$, as well as to some of the $F_1$ and to three-magnon propagators $F_3$. Again we only keep those propagators consistent with the variational choice of having the 3 magnons on adjacent sites. Since 4-magnon configurations are excluded, the EOM for $F_3$ link them only to other $F_3$ and to various $F_2$. The resulting closed system of coupled linear equations is solved numerically to find all these propagators, including $G({\bf k},\omega)$. 
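The structure of this numerical step can be illustrated with a toy resolvent calculation. The small matrix $H$ below is an arbitrary stand-in we chose for the (much larger) coupled system of $G$, $F_1$, $F_2$, $F_3$ equations, not the actual EOM matrix, but the mechanics are the same: solve $(\omega + i\eta - H)x = v$ on a frequency grid and read off the spectral weight.

```python
import numpy as np

# Toy stand-in for the closed linear system of EOMs: a small Hermitian matrix H
# (NOT the actual t-t'-t''-J EOM matrix) and a "bare" state |v>.
H = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.5, 0.3],
              [0.0, 0.3, 2.0]])
v = np.array([1.0, 0.0, 0.0])    # plays the role of d_{k,up}|N>
eta = 1e-3                       # broadening, the 0^+ in the resolvent

omegas = np.linspace(-5.0, 5.0, 4001)
A = np.empty_like(omegas)
for n, w in enumerate(omegas):
    # x = (w + i*eta - H)^{-1} |v>, one linear solve per frequency
    x = np.linalg.solve((w + 1j * eta) * np.eye(3) - H, v)
    A[n] = -np.imag(v @ x) / np.pi   # spectral function A(w)

# the lowest pole of A gives the "quasiparticle" energy; compare with the
# exact lowest eigenvalue of H
E_min = np.linalg.eigvalsh(H)[0]
low = omegas < E_min + 0.2           # window around the lowest pole
E_peak = omegas[low][np.argmax(A[low])]
```

In the actual calculation the same linear solve is repeated for every ${\bf k}$ on a grid in the magnetic Brillouin zone.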
With $G({\bf k},\omega)$ known, we can find the quasiparticle dispersion $E({\bf k})$ as the lowest pole of the spectral function $A({\bf k}, \omega)=-{1\over \pi} \mbox{ Im} G({\bf k},\omega)$. Of course, this is the quasiparticle in a N\'eel background, {\em i.e.} when the spin fluctuations of the AFM background are completely ignored. \subsection{Quasiparticle in a background with spin fluctuations} To estimate the effect of the background spin fluctuations (due to spin flipping of pairs of nn AFM spins, described by $\mathcal{H}_\mathrm{sf}$) we again invoke a variational principle. Spin fluctuations occurring far from the ZRS should have no effect on its dynamics, since they are likely to be ``undone'' before the hole arrives in their neighborhood (they can be thought of as vacuum fluctuations). The spin fluctuations that influence the dynamics of the hole must be those that occur in its immediate vicinity and either remove from the quasiparticle cloud pairs of nn magnons generated by its motion, or add to it pairs of magnons through such AFM fluctuations. For consistency, we keep the same variational configurations here as we did in the previous step. Then, Eq. (\ref{G1}) acquires an additional term on the l.h.s. equal to: $-{J\over 2} \sum_{{\beps+\beps'}\ne 0} e^{i{\bf k}\cdot ({\beps+\beps'})}F_2({\bf k}, \omega, {\beps, \beps'})$, describing processes where a pair of magnons is created through spin fluctuations near the hole. Similarly, the EOMs for $F_1$/$F_2$/$F_3$ acquire terms proportional to $F_3$/$G$/$F_1$ respectively, because spin fluctuations add/remove a pair of magnons to/from their clouds. This modified system of linear equations has a different solution for $G({\bf k}, \omega)$, which accounts for the effects of the spin fluctuations that occur close to the hole. Comparison with the previous results will allow us to gauge how important these ``local'' spin fluctuations are to the quasiparticle's dynamics. 
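The qualitative effect of such extra couplings can be mimicked in a toy linear system: connecting a $G$-like sector to an $F_2$-like sector through a new off-diagonal block lowers the ground-state pole, just as the local spin fluctuations renormalize the quasiparticle energy. The matrices below are illustrative stand-ins chosen by us, not the actual EOM matrices.

```python
import numpy as np

# Toy comparison: adding "spin-fluctuation" couplings (extra off-diagonal
# terms) to the linear system shifts the lowest pole. H0 and V are
# illustrative stand-ins, NOT the actual EOM matrices.
H0 = np.diag([0.0, 1.5, 2.5])          # "fluctuations frozen" sector energies
V = np.zeros((3, 3))
V[0, 2] = V[2, 0] = 0.4                # couples G-like to F_2-like sector

E_frozen = np.linalg.eigvalsh(H0)[0]   # lowest pole, fluctuations frozen
E_sf = np.linalg.eigvalsh(H0 + V)[0]   # lowest pole, local fluctuations on
```

The coupling pushes the lowest pole down (level repulsion), mirroring how the added $F_2$ terms in the modified system change the solution for $G({\bf k},\omega)$.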
Accuracy can be systematically improved by increasing the variational space, and implementation of such generalizations is straightforward. As shown next, the results from the variational calculation with configurations of up to three magnons located on adjacent sites compare well with available ED results and allow us to understand what determines the quasiparticle's dispersion, so we do not need to consider a bigger variational space. The three-band model is treated similarly, with the variational space again restricted to the same configurations with up to three adjacent magnons. Results for a quasiparticle in the N\'eel background (no spin fluctuations) were published in Ref. \cite{Hadi}, where the reader can find details about the corresponding EOMs (see also Ref. \cite{Hadithesis}). Here we will focus primarily on the effect of local spin fluctuations. These are introduced as explained above, by adding to the EOMs terms consistent with the variational space and whose magnon count varies by two. \section{Results} \subsection{One-band model} We first present results for the one-band model. It is important to note upfront that it has long been known that both spin fluctuations and the longer range hopping must be included in order to obtain the correct dispersion for its quasiparticle \cite{Leung95,Leung97}. (By ``correct dispersion'' we mean one in agreement with the experimental data \cite{Wells,Damascelli}). Our results confirm these facts, as shown next. The novelty is, therefore, not in finding these results, but in using them to prove the validity of our variational method and, more importantly, to untangle the specific role that spin fluctuations and long-range hopping play in arriving at this dispersion. To the best of our knowledge, this had not been known prior to this work. The quasiparticle dispersion $E({\bf k})$ is shown in Fig. \ref{fig3} for various models: in panel (a) we set $t'=t''=0$ and freeze spin fluctuations ($t$-$J_z$ model). 
In panel (b), spin fluctuations close to the hole are turned on as discussed; for simplicity we call this the $t$-$J$ model, although the true $t$-$J$ model includes all spin fluctuations. In panel (c), we further add the longer range hopping; for simplicity, we call this the $t$-$t'$-$t''$-$J$ model although, again, spin fluctuations are allowed only near the hole. Finally, in panel (d) we keep the longer range hopping but freeze the spin fluctuations; this is the $t$-$t'$-$t''$-$J_z$ model. Panel (e) shows model dispersions explained below. \begin{figure}[t] \center \includegraphics[width=.99\columnwidth]{fig2.eps} \caption{(color online) Quasiparticle energy $E({\bf k})$ along several cuts in the Brillouin zone for various one-band models. In all cases $J/t=0.3$ while $t'=t''=0$ in (a),(b), and $t'/t=-0.3, t''/t=0.2$ in (c),(d). Lines show the results of the variational calculation with the spin fluctuations frozen in (a) and (d), or allowed only near the hole in (b) and (c). Symbols in (b) and (c) are the corresponding ED results for a 32-site cluster \cite{Leung95,Leung97}. (e) Dispersion $E_{qp}({\bf k})$ of Eq. (\ref{disp}) for $E_0=0$ and $t_2=-1, t_3=0$ (black); $t_2=1, t_3=0.5$ (red); $t_2=-1, t_3=2/3$ (green). } \label{fig3} \end{figure} The quality of our variational approximation is illustrated in panels (b) and (c). Its results (thick lines) are in fair agreement with those of exact diagonalization (ED) for a 32-site cluster, which includes all spin fluctuations \cite{Leung95,Leung97}. Results in panel (c) agree well with those measured experimentally \cite{Leung97,Damascelli}. Our bandwidths are somewhat different; some of this may be due to finite-size effects, as the ED bandwidth varies with cluster size \cite{Bayo}. This also suggests that more configurations need to be included before full convergence is reached by our variational method (these would increase the bandwidth in panel (b) and decrease it in panel (c), see below). 
This is supported by Ref. \cite{MonaHolger}, where full convergence for the $t$-$J_z$ model was reached when configurations with up to 5 magnons were included. Nevertheless, the agreement is sufficiently good to conclude that the essential aspects of the quasiparticle physics are captured by the three-magnon variational calculation, and to confirm that it suffices to include spin fluctuations only near the hole. \begin{figure*}[t] \center \includegraphics[width=1.99\columnwidth]{fig3.eps} \caption{ (color online) (a) Shortest Trugman loop that generates a $t_{2, \rm TL}$ contribution. A much smaller $t_{3, \rm TL}$ is also generated, see Ref. \cite{MonaHolger}; Generation of (b) $t_{2, \rm sf}$ and (c) $t_{3, \rm sf}$ terms due to spin-fluctuations; (d) Process that renormalizes $t''$. The square shows the location of the hole, while the circles show magnons (wrongly oriented spins). The remaining spins are in their N\'eel order orientation and are not shown explicitly. The short thick arrows indicate the next step in the process, while the thin arrows in the final sketch show the effective quasiparticle hopping generated by those processes. } \label{fig4} \end{figure*} These results clearly demonstrate that both spin fluctuations and longer-range hopping are needed to achieve the correct quasiparticle dispersion shown in panel (c), with deep, nearly isotropic minima at $({\pi\over 2}, {\pi\over 2})$. Absence of longer-range hopping leads to a rather flat dispersion along $(0,\pi)-(\pi,0)$, see panel (b); this fact has long been known \cite{Leung95}. In panel (d), we show that if the longer range hopping is included but spin fluctuations are absent, the dispersion is rather flat along the $(0,0)-(\pi,\pi)$ direction. We are not aware of previous studies of this case. Although $E({\bf k})$ looks very different in the four cases, it turns out that all can be understood in a simple, unified picture. 
The key insight is that the quasiparticle lives on one magnetic sublattice, because of spin conservation. As a result, its generic dispersion must be of the form: \begin{equation} \label{disp} E_{qp}({\bf k}) = E_0+ 4t_2 \cos k_x \cos k_y + 2 t_3 ( \cos 2k_x + \cos 2k_y) \end{equation} {\em i.e.} like the bare hole dispersion $\epsilon({\bf k})$ of Eq. (\ref{eps}), but with renormalized 2$^{\rm nd}$ and 3$^{\rm rd}$ nn hoppings $t'\rightarrow t_2; t'' \rightarrow t_3$. There cannot be any effective nn hopping of the quasiparticle because this would move it to the other sublattice; this cannot happen without changing the magnetic background, so $t_1=0$. Longer range hoppings that keep the quasiparticle on the same sublattice may also be generated dynamically, but their magnitude is expected to be small compared to $t_2,t_3$, hence Eq. (\ref{disp}). Thus, understanding the shape of the quasiparticle dispersion requires understanding the values of $t_2$ and $ t_3$. We begin the analysis with the $t$-$J_z$ model. Its quasiparticle is extremely heavy, as shown in panel (a). Note that the vertical scale is an order of magnitude smaller than for the other panels. The reason is that every time the hole hops, it moves to the other magnetic sublattice and it must either create or remove a magnon, to conserve the total spin. As the hole moves away from its original location, it leaves behind a string of magnons whose energy increases roughly linearly with its length. This could be expected to result in confinement (infinite effective mass), but in fact the quasiparticle acquires a finite dispersion by executing Trugman loops (TL) \cite{Trugman}, the shortest of which is sketched in Fig. \ref{fig4}(a). By going nearly twice along a closed loop, creating a string of magnons during the first round and removing them during the second round, the hole ends up at a new location on the same magnetic sublattice. 
Only 2$^{\rm nd}$ and 3$^{\rm rd}$ nn hopping terms can be generated through TL irrespective of their length, and $|t_{3,\rm TL}|\ll |t_{2,\rm TL}|\ll J$ if $t/J \sim 3$ \cite{MonaHolger}. Indeed, setting $t_2 <0, t_3\rightarrow 0$ in $E_{qp}({\bf k})$ of Eq. (\ref{disp}) leads to the black curve in Fig. \ref{fig3}(e) \cite{c1}, which has the same shape as that of panel (a) (the bandwidth is proportional to $|t_{2,\rm TL}|$). This dispersion is wrong not just quantitatively but also qualitatively, with $({\pi\over 2}, {\pi\over 2})$ as a saddle point instead of the ground state. Clearly, ignoring both longer range hopping and spin fluctuations completely changes the dynamics of the quasiparticle. When the spin fluctuations are turned on in the $t$-$J$ model, they act on a time scale $\tau_{\rm sf} \sim 1/J$ much faster than the slow dynamics due to TL, $\tau_{\rm TL} \sim 1/|t_{2,\rm TL}|$. The main contributions to $t_2$ and $t_3$ now come from processes like those sketched in Fig. \ref{fig4}(b) and (c), where spin fluctuations remove pairs of magnons created by nn hopping of the hole, leading to $t_{2,{\rm sf}}\gg |t_{2, \rm TL}|, t_{3,\rm sf}\gg |t_{3, \rm TL}|$. Moreover, we expect $t_{2,\rm sf}= 2 t_{3,\rm sf}$ because these effective hoppings are generated by similar processes but there are twice as many processes leading to 2$^{\rm nd}$ compared to 3$^{\rm rd}$ nn hopping, as the hole can move on either side of a plaquette. Indeed, the $t$-$J$ dispersion of panel (b) has a shape similar to that of $E_{qp}({\bf k})$ with $t_2=2t_3$, shown as a red curve in Fig. \ref{fig3}(e) \cite{c2}. Because $E_{qp}(k, \pi-k) = E_0 - 2t_2 + (4t_3-2t_2)\cos 2k$, it has a perfectly flat dispersion along $(0,\pi)-(\pi,0)$ for $t_2=2t_3$. The dispersion in panel (b) is not perfectly flat along this cut, so in reality $t_{2} \approx 2 t_{3}$. The small deviation from the factor of 2 is likely due to higher order processes, as well as contributions from TL (which remain active). 
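As a quick consistency check (our own sketch, not part of the original analysis), the two claims above follow directly from Eq. (\ref{disp}): with $t_2<0$, $t_3=0$ the point $({\pi\over2},{\pi\over2})$ is a saddle along these cuts, and with $t_2=2t_3$ the dispersion along $(0,\pi)-(\pi,0)$ is exactly flat.

```python
import math

def E_qp(kx, ky, t2, t3, E0=0.0):
    """Sublattice dispersion of Eq. (disp): only 2nd- and 3rd-nn hoppings."""
    return (E0 + 4*t2*math.cos(kx)*math.cos(ky)
            + 2*t3*(math.cos(2*kx) + math.cos(2*ky)))

# t-Jz-like case (black curve): t2 < 0, t3 = 0.
# (pi/2, pi/2) sits between the minimum at (0,0) and the maximum at (0,pi),
# so it is a saddle point, not the ground state.
t2, t3 = -1.0, 0.0
e_mid = E_qp(math.pi/2, math.pi/2, t2, t3)
e_min = E_qp(0.0, 0.0, t2, t3)        # = 4*t2 = -4
e_max = E_qp(0.0, math.pi, t2, t3)    # = -4*t2 = +4

# t-J-like case (red curve): t2 = 2*t3 makes the (0,pi)-(pi,0) cut flat,
# since E_qp(k, pi-k) = E0 - 2*t2 + (4*t3 - 2*t2)*cos(2k).
t2, t3 = 1.0, 0.5
cut = [E_qp(k, math.pi - k, t2, t3)
       for k in [i*math.pi/20 for i in range(21)]]
spread = max(cut) - min(cut)          # ~ 0 up to rounding
```

The same function with the green-curve parameters of Fig. \ref{fig3}(e) reproduces the remaining model shape.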
Ignoring it, we find the corresponding bandwidth $E_{qp}(0,0)-E_{qp}({\pi\over 2}, {\pi\over 2}) = 4t_2+8t_3=8t_2$, suggesting that the effective hoppings generated with spin fluctuations are of the order $t_{2,\rm sf}\approx 2 t_{3,\rm sf}\approx J/4$. Next, we consider what happens if instead of (local) spin fluctuations, we turn on longer-range hopping. Unlike in the $t$-$J_z$ model, the quasiparticle of the $t$-$t'$-$t''$-$J_z$ model should be light because the longer range hoppings $t', t''$ allow the hole to move freely on its magnetic sublattice. It can therefore efficiently remove magnons created through its nn hopping, without having to complete the time-consuming Trugman loops. The presence of the magnon cloud renormalizes these bare hoppings to smaller values, as is typical for polaron physics. Figure \ref{fig4}(d) shows one such process that renormalizes $t''\rightarrow t''^*$. Similar processes (not shown) renormalize $t'\rightarrow t'^*$ so both hopping integrals should be renormalized by comparable factors. As a result, we expect a dispersion like $E_{qp}({\bf k})$ but now with $t_2/t_3 = t'^*/t''^* \approx t'/t''$, if we ignore the small TL contributions. This indeed agrees with the result in panel (d), as shown by its comparison with the green curve in Fig. \ref{fig3}(e) where $E_{qp}({\bf k})$ is plotted for $t_2/t_3 = t'/t''= -1.5$. For $t_2=-2t_3$, $E_{qp}({\bf k})$ would be perfectly flat along $(0,0)-(\pi,\pi)$. Thus, the change in the relative sign explains why now the dispersion is nearly flat along $(0,0)-(\pi,\pi)$ and maximal along $(0,\pi)-(\pi,0)$, in contrast to the previous case. However, while $t_{2,\rm sf}/t_{3,\rm sf}\approx$ 2 is always expected for the $t$-$J$ model so its dispersion must have a shape like in Fig. \ref{fig3}(b), in the $t$-$t'$-$t''$-$J_z$ model the ratio $t'^*/t''^*$ mirrors the ratio $t'/t''$. If this had a very different value than $\approx -2$, the quasiparticle dispersion would change accordingly. 
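The bandwidth arithmetic and the complementary flat direction quoted above can likewise be verified numerically from Eq. (\ref{disp}); a minimal sketch of ours, with illustrative parameter values:

```python
import math

def E_qp(kx, ky, t2, t3, E0=0.0):
    return (E0 + 4*t2*math.cos(kx)*math.cos(ky)
            + 2*t3*(math.cos(2*kx) + math.cos(2*ky)))

# bandwidth for t2 = 2*t3: E(0,0) - E(pi/2,pi/2) = 4*t2 + 8*t3 = 8*t2
t3 = 0.5
t2 = 2*t3
bw = E_qp(0, 0, t2, t3) - E_qp(math.pi/2, math.pi/2, t2, t3)

# for t2 = -2*t3 (the t-t'-t''-Jz-like relative sign) the roles of the
# two cuts are exchanged: E_qp is now flat along (0,0)-(pi,pi) instead
t2b = -2*t3
diag = [E_qp(k, k, t2b, t3)
        for k in [i*math.pi/20 for i in range(21)]]
spread_diag = max(diag) - min(diag)   # ~ 0 up to rounding
```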
These results allow us to understand the dispersion of the $t$-$t'$-$t''$-$J$ quasiparticle. This must have contributions from both the spin fluctuations and the renormalized longer range hoppings, plus much smaller TL terms, because the processes giving rise to them are now all active. Indeed, the curve in panel (c) of Fig. \ref{fig3} is roughly equal to the sum of those in panels (b) and (d). The isotropic minimum at $\left({\pi\over2}, {\pi\over2}\right)$ is thus an accident, since the dispersion along $(0,0)-(\pi,\pi)$ is controlled by spin fluctuations, and that along $(0,\pi)-(\pi,0)$ is due to the renormalized longer range hoppings. More precisely, because $t_{2,\rm sf} \approx 2 t_{3,\rm sf}$, the contributions coming from spin fluctuations interfere destructively for momenta along $(0,\pi)-(\pi,0)$, so the dispersion here is controlled by the renormalized $t'^*\approx -1.5 t''^*$, and vice versa. If $t_{2,\rm sf} \approx |t'^*|$ (which happens to hold because $J \sim |t'|$), the sum gives nearly isotropic dispersion near $\left({\pi\over2}, {\pi\over2}\right)$. If we change parameters significantly, the dispersion becomes anisotropic (not shown). Before moving on to contrast this behavior with that of the quasiparticle of the three-band model, we briefly discuss the effect of the three-site term of Eq. (\ref{3site}). The variational results for the four models are shown in Fig. \ref{fig5}. Where direct comparisons can be made, they are again in good quantitative agreement with other work where this term has been included, such as in Ref. \cite{Bala}. Its inclusion has a qualitative effect only for the $t$-$J_z$ model, where the shape of the dispersion is changed in its presence. This is not very surprising because, as discussed, the Trugman loops which control behavior in that case are very slow processes, and their effect can easily be undone by terms that allow the hole to move more effectively. 
The three-site term is such a term and its presence increases the bandwidth not just for the $t$-$J_z$ model, but for all cases. For the other three models, however, the inclusion of this term changes the dispersion only quantitatively: the bandwidth is increased but the overall shape is not affected much. The biggest change is along $(0,0)-(\pi,\pi)$, as expected because the three-site term generates effective 2nd and 3rd nn hoppings with the {\em same sign} and a 2/1 ratio, {\em i.e.} similar to $t_{2,\rm sf}$ and $t_{3,\rm sf}$. As a result, its presence mimics (and boosts) the effect of the local spin fluctuations. \begin{figure}[t] \includegraphics[width=\columnwidth]{fig4.eps} \caption{ (color online) Quasiparticle dispersion when the three-site term of Eq. (\ref{3site}) is included in the one-band Hamiltonians (dashed lines). For comparison, the dispersions without this term are also shown (full lines from Fig. \ref{fig3}) } \label{fig5} \end{figure} It is interesting to note that if we allow this term to be large enough, we could obtain a dispersion with the correct shape {\em even in the absence of spin fluctuations}. However, the scale of this term is set by $J$; it is not a free parameter. As a result, we conclude that with its proper $J$ energy scale, this term does not qualitatively change the behavior of the quasiparticle of the one-band model (apart from the $t$-$J_z$ case), although its inclusion may, in principle, allow for better fits of the experimental data. \subsection{Three-band model} Results for the simplified three-band model with the spin-fluctuations frozen out were discussed in Ref. \cite{Hadi}. To keep this work self-contained, we show in Fig. \ref{fig6} the most relevant data for the issue of interest, namely the quasiparticle dispersion $E({\bf k})$ obtained in a variational calculation with the maximum number of magnons $n_m = 0 - 3$. 
These results already suffice to illustrate the qualitative difference between the quasiparticle dynamics in the one-band and the three-band models. \begin{figure}[t] \center \includegraphics[width=\columnwidth]{fig5.eps} \caption{ (color online) $E({\bf k})$ along several cuts in the Brillouin zone for the three-band model. The results are for the variational calculation with the spin-fluctuations turned off and configurations with up to $n_m$ magnons allowed. The ``restricted'' calculations labelled $n_{2,r}, n_{3,r}$ imposed the additional constraint that the magnons are on adjacent sites. While the bandwidth is strongly renormalized with increasing $n_m$, the nearly isotropic dispersion around the ground-state at $({\pi\over 2}, {\pi\over2})$ is a consistent feature. See text for more details.} \label{fig6} \end{figure} The $n_m=0$ curve plots the dispersion if no magnons are allowed, {\em i.e.} not only the spin fluctuations of the AFM background but also spin-flip processes due to $J_{pd}$ and $T_{\rm swap}$ are turned off. It is important to emphasize that the resulting dispersion does contain a crucial contribution from the terms in $T_{\rm swap}$ describing hopping of the hole past Cu spins with the same spin projection, so that the spin-swap leaves the spins unchanged. In fact, it is the interference of these terms with those of $T_{pp}$ that leads to this interesting bare dispersion, which already has deep, nearly-isotropic minima near $({\pi\over 2}, {\pi\over2})$. If we set $t_{sw}=0$, the bare dispersion due to only $T_{pp}$ has the ground-state at $(\pi, \pi)$, whereas if $T_{pp}=0$, the dispersion due to the allowed terms in $T_{\rm swap}$ is perfectly flat because the hole is then trapped near a Cu spin of like orientation. However, as long as $t_{pp} \sim t_{sw}$, the isotropic minimum emerges at $({\pi\over 2}, {\pi\over2})$. 
In this context, it is useful to note that in many numerical studies of the Emery model, $t_{pp}$ was set to zero simply for convenience. Our results suggest that this choice changes the quasiparticle dynamics qualitatively, and is therefore unjustified. The bare hole dispersion in the three-band model thus already mimics this key aspect of the correct quasiparticle dispersion, unlike in the one-band model. Letting the quasiparticle cloud emerge by allowing the hole to create and absorb magnons in its vicinity, through spin-flip processes controlled by $J_{pd}$ and $T_{\rm swap}$, further renormalizes the bandwidth (a typical polaronic effect) without affecting the existence of the isotropic dispersion near $({\pi\over 2}, {\pi\over2})$. This magnon cloud is very important, however, to stabilize the low-energy quasiparticle, as demonstrated by the significant lowering of the total energy. In particular, at least one magnon must be present in order for a ZRS-like object to be able to form, and indeed the $n_m=1$ curve is pushed down by $\sim 10 J_{dd}$ compared to the bare dispersion. We further analyze the relevance of the ZRS solution below. The small difference between the $n_m=2$ and $n_m=2,r$ results proves that magnons indeed sit on adjacent sites in the cloud. (The latter solution imposes this constraint explicitly, whereas the former allows the magnons to be at any distance from each other. In both cases, the hole can be arbitrarily far from the magnons, although, as expected, configurations where the hole is close to the last emitted magnon have the highest weight in the quasiparticle eigenstates.) At higher energies, however, these two solutions are qualitatively different. The former contains the expected quasiparticle+one-magnon continuum starting at $E_{1, gs}+2J_{dd}$, where $E_{1,gs}$ is the ground-state energy of the quasiparticle with $n_m=1$, and $2J_{dd}$ is the energy cost to create a magnon far from it. 
Their sum is the energy above which higher-energy (excited) states must appear in the spectrum, describing the quasiparticle plus one magnon not bound to its cloud. The presence of this continuum guarantees that in the fully converged limit, the quasiparticle bandwidth cannot be wider than $2J_{dd}$, since the quasiparticle band is always ``flattened out'' below this continuum (another typical polaronic behavior). For both $n_{2,r}$ and $n_{3,r}$ calculations, the quasiparticle is already heavy enough that its dispersion fits below the corresponding continuum. This is why enlarging the variational space with configurations needed to describe this feature, with at least one magnon located far from the cloud, does not affect the quasiparticle dispersion much (see Ref. \cite{Hadi} for more discussion). The bandwidth of the $n_m=3,r$ dispersion is in decent agreement with numerical results for this model, as discussed next, suggesting that this variational calculation is close to fully converged. The fact that the cloud is rather small should not be a surprise. The variational approach explicitly imposes the constraint that there is at most one magnon at a site. As magnons sit on adjacent sites when bound in the quasiparticle cloud, they prefer to occupy a compact area to minimize their exchange energy cost, thus creating a domain in the other N\'eel state (down-up instead of up-down). The hole prefers to sit on the edge of this domain, because being inside it is equally disadvantageous as being outside, {\em i.e.} far from magnons. However, since on the boundary the hole can interact with only one magnon at a time, a large and costly domain is unlikely. We can now contrast the dynamics of the quasiparticle in the three-band model {\em if the spin fluctuations are frozen out} with the corresponding one-band model, namely the $t$-$t'$-$t''$-$J_z$ case. Both have a quasiparticle with a small, few-magnon cloud, and a bandwidth $\approx 2 J = 2J_{dd}$. 
The key difference is that the three-band model already shows a dispersion with the correct shape, whereas for the one-band model the dispersion is much too flat along $(0,0)-(\pi,\pi)$. This difference is traced back to the fact that in the one-band model, the bare hole dispersion also suffers from this same problem if $t'/t''\sim -1.5$, unlike that of the three-band model. As a result, spin fluctuations are necessary to find the correct dispersion in the one-band model, as already shown, but their role in the three-band model should be rather limited. \begin{figure}[t] \center \includegraphics[width=\columnwidth]{fig6.eps} \caption{ (color online) $E({\bf k})$ along several cuts in the Brillouin zone for the three-band model in the restricted variational approximations with (a) $n_m=2$ and (b) $n_m=3$. Circles show ED results for $S_T={1\over 2}$ from Ref. \cite{Bayo} for a 32 Cu + 64 O cluster, shifted to have the same ground-state energy. Full lines show the results of Fig. \ref{fig6}, without spin fluctuations. Orange lines with square symbols are the results if spin fluctuations occur near the hole. The dashed green line in panel (b) is the dispersion when spin fluctuations are allowed to locally create/remove a pair of magnons only if no other magnons are present/remain in the system. See text for more details.} \label{fig7} \end{figure} To confirm this conjecture, we consider the effect of local spin fluctuations on the dispersion of the three-band model quasiparticle. In Fig. \ref{fig7}, results for the restricted variational approach with $n_m=2,r$ and $n_m=3,r$ ({\em i.e.} up to two or up to three magnons on adjacent sites) are compared to the ED results of Ref. \cite{Bayo}, shown by the black full circles. Full lines (red and blue, respectively) show the results of Fig. \ref{fig6}, without spin fluctuations. Orange lines with squares show the dispersion with spin fluctuations turned on near the hole. 
The dashed green line in panel (b) shows an intermediate result when spin fluctuations are allowed to create a pair of magnons only if there is no magnon in the system, and to remove a pair if only two magnons are present (the orange line also includes contributions from processes where spin fluctuations add a pair of magnons when a magnon is already present, and its reversed process). The effect of spin fluctuations is similar to that found in the one-band models, as expected because the AFM background is modeled identically. They again have very little effect on the $(\pi,0)-(0,\pi)$ dispersion; besides a small shift to lower energies, this bandwidth is only slightly increased, bringing it into better agreement with the ED values for $n_m=3$. Like for one-band models, spin fluctuations lead to a more significant increase of the $(0,0)-(\pi,\pi)$ dispersion. For $n_m=3$, it changes from being too narrow without spin fluctuations, to too wide in their presence. (The $n_m=2$ overestimate of the bandwidth is expected, see discussion in Ref. \cite{Hadi}). The increased energy near $(0,0)\equiv(\pi,\pi)$ may seem problematic but one must remember that in reality, $E({\bf k})$ is flattened below a continuum that appears at $2 J_{dd}$ above the ground state. The continuum is absent in this restricted calculation because configurations with a magnon far from the cloud, which give rise to it, are not included. This explains why the overestimated bandwidth is possible. In the presence of the continuum, states that overlap with it hybridize with it and a discrete state (the quasiparticle) is pushed below its edge. This will lower the value at $(0,0)$ and lead to good quantitative agreement everywhere with the ED results. (The keen reader may note that the ED bandwidth is also slightly wider than $2J_{dd}$, but one must remember both the finite size effects of cluster ED, and the existence of a $S_T={3\over 2}$ polaron in its spectrum \cite{Bayo}. 
Since the total spin ${\hat {\bf S}}_T^2$ is not a good quantum number in our variational approximation, its quasiparticle probably overlaps somewhat with both the $S_T={1\over 2}$ and $S_T={3\over 2}$ spin-polarons, so it is not clear which ED states to compare against). Our results show that spin fluctuations have a similar effect in both models. However, while they are essential for restoring the proper shape of the dispersion in one-band models, they are much less relevant for the three-band model. This is a direct consequence of the different shape of the bare bands, as discussed, but also of having $J\sim |t'|$ while $J_{dd} \sim t_{pp}/4$. In the three-band model, the quasiparticle creates and absorbs magnons while moving freely on the O sublattice, on a timescale that is faster than that over which spin fluctuations act, and so their effect is limited. In contrast, in the one-band model, the timescale for free propagation of the hole on the same magnetic sublattice (controlled by $t', t''$) is comparable with the spin fluctuations' timescale, and therefore the effect of spin fluctuations is much more significant. They are especially important along $(0,0)-(\pi,\pi)$, where the bare dispersion of one-band models is nearly flat. \section{Discussion and summary} In this work, we used a variational method to study the quasiparticle of the $t$-$J$ and $t$-$t'$-$t''$-$J$ one-band models and compare it to that of a (simplified) three-band model that is the intermediary step between the full three-band Emery model and the one-band models. Our variational method generates the BBGKY hierarchy of equations of motion for a propagator of interest (here, the retarded one-hole propagator), but simplified by setting to zero the generalized propagators related to projections on states that are not within the variational space. 
Its physical motivation is very simple: if the variational space is properly chosen, {\em i.e.} if it contains the configurations with the highest weight contributions to the quasiparticle eigenstates, then the ignored propagators are indeed small because their residue at the $\omega =E({\bf k})$ pole is proportional to their weight (Lehmann representation). Setting them to zero should thus be an accurate approximation. Numerically, the motivation is also clear: because the resulting simplified hierarchy of coupled equations can be solved efficiently, we can quite easily study a quasiparticle (or a few \cite{MonaPRL}) on an infinite plane, thus avoiding finite-size effects and getting full information about ${\bf k}$ dependence, not just at a few values. Moreover, by enlarging the variational space and by turning off various terms in the Hamiltonian, both of which lead to changes in the EOM and thus the resulting propagators, one can infer whether the calculation is close to convergence and isolate and understand the effect of various terms, respectively. The ability to efficiently make such comparisons is essential because it allows us to gain intuition about the resulting physics. Our results show that even though for reasonable values of the parameters, the quasiparticle dispersion $E({\bf k})$ has similar shapes in both models, the underlying quasiparticle dynamics is very different. In the three-band model, the bare dispersion of the hole on the O sublattice, due to $t_{pp}$ and spin-swap hopping $t_{\rm sw}$ past Cu with parallel spins, already has a deep isotropic minimum near $\left({\pi\over2}, {\pi\over2}\right)$, unlike the bare $\varepsilon({\bf k})$ of the one-band models. When renormalized due to the magnon cloud, it produces a quasiparticle dispersion with the correct shape in the whole Brillouin zone, even in the absence of spin fluctuations. 
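The Lehmann-representation argument above can be illustrated on a toy two-configuration problem (a sketch of ours, with made-up numbers, not the actual EOM hierarchy): the residue of each configuration's propagator at the quasiparticle pole equals that configuration's weight in the eigenstate, so discarding the propagators of low-weight configurations is a controlled approximation.

```python
import math

# 2x2 Hamiltonian in the basis {|a>, |b>}: |a> is a high-weight
# configuration, |b> a weakly admixed one (hypothetical numbers)
eps_a, eps_b, g = 0.0, 5.0, 0.5

# exact eigenvalues of [[eps_a, g], [g, eps_b]]
avg = (eps_a + eps_b)/2
d = math.sqrt(((eps_a - eps_b)/2)**2 + g**2)
E_gs = avg - d                     # the quasiparticle pole

# eigenvector component ratio from (eps_a - E_gs)*v_a + g*v_b = 0
r = (E_gs - eps_a)/g
w_a = 1.0/(1.0 + r*r)              # |<a|gs>|^2 = residue of G_a at E_gs
w_b = 1.0 - w_a                    # |<b|gs>|^2 = residue of G_b at E_gs
# w_b << w_a: setting the |b> propagator to zero barely shifts the pole
```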
In contrast, for the one-band models the inclusion of spin fluctuations is necessary for the correct dispersion to emerge. This shows that the quasiparticle dynamics is controlled by different physics in the two models, and this is likely to play a role at finite concentrations as well. Our results thus raise strong doubts about whether the one-band $t$-$t'$-$t''$-$J$ model truly describes the same physics as the three-band model. We can think of three possible explanations for these differences: (i) The $t$-$t'$-$t''$-$J$ model is the correct one-band model, but its true parameters have values quite different from the ones used here. Indeed, if the bare $\varepsilon({\bf k})$ dispersion had isotropic minima at $({\pi\over2}, {\pi\over 2})$, and if its renormalized bandwidth were of the order of $2J$, then spin fluctuations could not change it much, similar to what is observed in the three-band model. This explanation can be ruled out. An isotropic bare dispersion $\varepsilon(k,k) \approx \varepsilon(k,\pi-k)$ requires that $|t''/t'| \gg 1$, which is physically unreasonable. (ii) The $t$-$t'$-$t''$-$J$ model has a different quasiparticle because its underlying assumption, {\em i.e.} the existence of the low-energy ZRS, is wrong. This would mean that not only this specific one-band model but any other one obtained through such a projection would be invalid. \begin{figure}[t] \center \includegraphics[width=\columnwidth]{fig7.eps} \caption{ (color online) (a) Overlap $p_{\rm ZRS}$ between the ZRS Bloch state of Eq. (\ref{zrs}) and the quasiparticle eigenstate, as obtained in the restricted variational calculations with $n_m=1,2,3$, without (full lines) and with (full symbols) local spin fluctuations included. The empty squares and the dashed line show the weight of the ZR triplet. (b) Overlap $p_{\rm ZRS}$ normalized with respect to the probability to have no magnon ($p_0$) or to have one magnon near the hole ($p_1$), in the quasiparticle eigenstate. 
See text for more details. } \label{fig8} \end{figure} We can test this hypothesis by calculating the overlap between the three-band quasiparticle and a ZRS Bloch state. The latter is defined in the only possible way that is consistent with N\'eel order: \begin{equation} \label{zrs} |{\rm ZRS},{\bf k}\rangle = {1\over \sqrt{N}} \sum_{i\in {\rm Cu}_{\downarrow} }^{}e^{i {\bf k} \cdot {\bf R}_i} { p^\dagger_{x^2-y^2, i, \uparrow} - p^\dagger_{x^2-y^2, i, \downarrow}S_i^{+}\over \sqrt{2}} |{\rm N} \rangle \end{equation} where $$ p^\dagger_{x^2-y^2, i, \sigma}={1\over 2} \left[p^\dagger_{i+{{\bf x}\over 2},\sigma } + p^\dagger_{i+{{\bf y}\over 2},\sigma} - p^\dagger_{i-{{\bf x}\over 2}, \sigma}- p^\dagger_{i-{{\bf y}\over 2}, \sigma } \right] $$ is the linear combination of the $p$ orbitals neighboring the Cu located at $i$, that has the overall $x^2-y^2$ symmetry (our choice for the signs of the lobes is shown in Fig. \ref{fig1}). Note that with this definition $|\langle {\rm ZRS},{\bf k}| {\rm ZRS},{\bf k}\rangle|^2=1$ for any ${\bf k}$, so there are no normalization problems \cite{Zhang}. We define $p_{\rm ZRS} = |\langle qp, {\bf k}| {\rm ZRS},{\bf k}\rangle|^2$ as the overlap between the quasiparticle eigenstate of momentum ${\bf k}$ and this ZRS Bloch state. Its value can be calculated from the appropriate residues of the zero- and one-magnon propagators at $\omega = E({\bf k})$, and is shown in Fig. \ref{fig8}(a). We do not plot the $n_m=0$ results because a singlet cannot form if the Cu spins cannot flip. (For $n_m=0$, there is overlap with the spin-up hole component of $|{\rm ZRS},{\bf k}\rangle$, and we find that $p_{\rm ZRS}$ varies from 0 at $(0,0)$ and $(0,\pi)$ to 0.5 at $({\pi\over2},{\pi\over2})$, but the same answer is found for a triplet. Interestingly, this proves that the bare hole dispersion already has eigenstates with the $x^2-y^2$ symmetry near $({\pi\over2}, {\pi\over2})$). For $n_m=1$ we find $p_{\rm ZRS} \sim 0.9$ in the entire Brillouin zone. 
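As a side check (ours, orbital part only, spin factors aside), the normalization claim for Eq. (\ref{zrs}) can be verified combinatorially: the $x^2-y^2$ combination is normalized, and two Cu sites of the same magnetic sublattice share no O orbital, so the cross terms in the Bloch sum vanish for every ${\bf k}$.

```python
from fractions import Fraction

def p_x2y2(R):
    """Signed O-orbital amplitudes of the x^2-y^2 combination around the
    Cu at R (O sites sit halfway between Cu; prefactor 1/2 kept outside)."""
    x, y = R
    h = Fraction(1, 2)
    return {(x + h, y): 1, (x, y + h): 1, (x - h, y): -1, (x, y - h): -1}

def overlap(a, b):
    return sum(va*b.get(site, 0) for site, va in a.items())

# norm^2 of p^dagger_{x^2-y^2,i}|0>: (1/2)^2 * sum of squared amplitudes = 1
norm2 = Fraction(1, 4)*overlap(p_x2y2((0, 0)), p_x2y2((0, 0)))

# same-sublattice Cu, e.g. at relative positions (1,1) or (2,0), share
# no O orbital, so the Bloch sum is orthonormal for any k
o_diag = overlap(p_x2y2((0, 0)), p_x2y2((1, 1)))
o_axis = overlap(p_x2y2((0, 0)), p_x2y2((2, 0)))
```

Nearest-neighbor Cu (on opposite sublattices) do share an O orbital, but such pairs never appear in the sum of Eq. (\ref{zrs}), which runs over one sublattice only.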
Clearly, in this very small variational space, locking into a ZRS is the best way for the doped hole to lower its energy. However, the value of $p_{\rm ZRS}$ decreases fairly significantly for $n_m=2,r$ and $n_m=3,r$. First, note that turning the spin fluctuations on or off has almost no effect on $p_{\rm ZRS}$. This is consistent with our conclusion that local spin fluctuations do not influence the nature of the quasiparticle in the three-band model: clearly, its wavefunction is not changed in their presence. The decrease of $p_{\rm ZRS}$ with increasing $n_m$ could be due to increased contributions to the eigenstate from many-magnon configurations (which have no overlap with $|{\rm ZRS},{\bf k}\rangle$), and/or to competing states such as a ZR triplet, and/or to singlets or triplets with the hole occupying a linear combination of O orbitals with $s, p_x$ or $p_y$ instead of $x^2-y^2$ symmetry. The latter possibility can also be ruled out because overlaps with those Bloch states are found to be small. The largest such contribution is from the ZR triplet state, shown in Fig. \ref{fig8}(a) by the dashed line and open squares for $n_m=3,r$ without and with local spin fluctuations, respectively. This overlap is much smaller than with the ZRS singlet. Another way to confirm this is displayed in Fig. \ref{fig8}(b), where we compare $p_{\rm ZRS}$ to $p_0+p_1$, where $p_0$ is the probability to find the hole without any magnons, and $p_1$ is the probability to find one magnon adjacent to the hole, in the quasiparticle eigenstate. Note that $p_0+p_1 <1$ even for the one-magnon variational approximation because the hole can also be located away from the magnon. As $n_m$ increases, $p_0+p_1$ decreases even more as configurations with two or more magnons now also contribute to the normalization. 
These configurations with two or more magnons, and those with one magnon not adjacent to the hole, have no overlap with $|{\rm ZRS},{\bf k}\rangle$, explaining the decrease in the magnitude of $p_{\rm ZRS}$. However, the ratio $p_{\rm ZRS}/(p_0+p_1) > 0.9$ in the whole Brillouin zone, confirming that this part of the wavefunction has a predominant ZRS-like nature. This is certainly the case near the $({\pi\over 2},{\pi\over 2})$ point, where the overlap is converged to 1. Interestingly, at the antinodal points this ratio decreases with increasing $n_m$, and here the overlap with the ZR triplet is largest, see Fig. \ref{fig8}(a), suggesting that a ZRS description is less accurate in this region. Thus, the zero- and one- (adjacent) magnon parts of the wavefunction have significant overlap with the ZRS Bloch state. However, $p_{\rm ZRS}\sim 0.5$ is a rather small value, and it is not clear whether the dressing with more magnons is consistent with this ZRS picture or not. It is possible that the two- and three-magnon components of the wavefunction have significant overlap with ZRS+one-magnon and ZRS+two-magnon configurations, but they could also have a quite different nature. It is not clear to us how to verify which is the actual situation. If these two- and three-magnon components have significant non-ZRS character, however that is defined in this case, then clearly the difference observed in the results from one- and three-band models would likely be due to this non-ZRS nature. If, on the other hand, one takes these results to support the idea that a low-energy projection onto ZRS states is valid, then this is not the origin of the discrepancy in the quasiparticle behavior. In this case, it must follow that: (iii) The $t$-$t'$-$t''$-$J$ model is not the correct one-band model because there are additional important terms generated by the projection onto the ZRS states, like those discussed in Ref. \cite{ALigia} or the three-site terms, which it neglects. 
If (iii) is indeed the explanation for the different behavior of the quasiparticles of the one- and three-band models, then in our opinion this implies that the strategy of using one-band models to study cuprates is unlikely to succeed. The main reason for this strategy, as mentioned, is to make the Hilbert space as small as possible for computational convenience. This, however, is only useful if the Hamiltonian is also fairly simple. In principle one could test additional terms that could be included in one-band models by using methods like ours, to figure out which ensure that the resulting behavior mirrors that of the three-band model. Even if this enterprise were successful and the ``fix'' were relatively simple, {\em i.e.} only a few additional terms and corresponding parameters are necessary, it is important to emphasize that this improved one-band model would still {\em not} correctly describe cuprates at finite doping. Additional terms must be included to correctly account for the effective interactions between quasiparticles in one-band models, as we demonstrated in Ref. \cite{Mirko}. From a technical point of view, their origin is simple to understand. Even for the simplified three-band model, the presence of additional holes leads to additional terms in the Hamiltonian \cite{BayoPRB}, because the intermediary states are different and this affects the projection onto states with no double occupancy on Cu. This is going to become even more of an issue if a subsequent projection onto ZRS-like states has to be performed, and may well result in an unmanageably complex Hamiltonian. This is why we believe that the (simplified) three-band model is a safer option to pursue. Its computational complexity is not that much worse than for one-band models, whereas the Hamiltonian is certainly simpler. 
In fact, our demonstration here that there is no need to accurately capture the spin fluctuations of the AFM background in order to gain a reasonable understanding of the quasiparticle behavior makes its study significantly simpler. In particular, it allowed us to study one hole on an infinite layer very simply and efficiently. Generalizations to a few holes \cite{MonaPRL} and to finite concentrations could also turn out to be easier to carry out than the effort of finding the correct form for a one-band Hamiltonian. Of course, there is no guarantee that the (simplified) three-band model captures all the physics needed to explain cuprates, either. It is possible that important aspects of the Emery model were lost through the projection onto spin degrees of freedom at the Cu sites (this, however, would affect the one-band models just as much). Even the Emery model itself may not be general enough; for instance, a generalization to a 5-band model including non-ligand O $2p$ orbitals might be needed, as suggested recently in Ref. \cite{Hirsch}. We note that such a generalization can be easily handled by our method (provided that one can still project onto spin degrees of freedom at the Cu sites), as shown in Ref. \cite{Hadi}, where we found that these states do not change the quasiparticle dispersion much, although they do have an effect on its wavefunction. While careful investigation of such scenarios is left as future work, one clear lesson from this study is that obtaining the correct dispersion for the quasiparticle of an effective model is {\em not sufficient} to validate that model. The dispersion can have the correct shape for the wrong reasons, as we showed to be the case for the $t$-$t'$-$t''$-$J$ model, where it is due to the interplay between the effects of longer-range hopping and spin fluctuations. 
The same dispersion is obtained for the simplified three-band model; however, in this case the spin fluctuations play essentially no role, so the underlying physics is very different. This difference is very likely to manifest itself in other properties; therefore, these models are not equivalent despite the similar dispersion of their quasiparticles. \acknowledgments We thank Walter Metzner and Peter Horsch for discussions and suggestions. This work was supported by NSERC, QMI, and UBC 4YF (H.E.).
\section{Introduction} Titan has a dense atmosphere, composed of $\mathrm{N_2}$ and $\mathrm{CH_4}$, and many trace gases such as hydrocarbons (e.g. $\mathrm{C_2H_6}$, $\mathrm{C_2H_2}$) and nitriles (e.g. $\mathrm{HCN}$, $\mathrm{HC_3N}$) produced by its rich photochemistry. Like Earth, Titan has a stratosphere, located between 50~km ($\sim 100$~mbar) and 400~km ($\sim 0.01$~mbar), characterized by the increase of temperature with altitude because of the absorption of incoming sunlight by methane and hazes. Titan's atmosphere undergoes strong variations of insolation, due to its obliquity ($26.7^{\circ}$) and to the eccentricity of Saturn's orbit around the Sun (0.0565). \\ The Cassini spacecraft monitored Titan's atmosphere for 13 years (from 2004 to 2017), from northern winter to summer solstice. Its data are a unique opportunity to study the seasonal evolution of the stratosphere, especially with mid-IR observations from Cassini/CIRS (Composite InfraRed Spectrometer, \citet{Flasar2004}). These observations showed that at pressures lower than 5~mbar, the stratosphere exhibits strong seasonal variations of temperature and composition related to changes in atmospheric dynamics and radiative processes. For instance, during northern winter (2004-2008), high northern latitudes were enriched in photochemical products such as HCN or $\mathrm{C_4H_2}$, while there was a "hot spot" in the upper stratosphere and mesosphere (0.1 - 0.001~mbar, \citet{Achterberg2008, Coustenis2007, Teanby2007b,Vinatier2007}). These observations were interpreted as evidence of subsidence above the North pole during winter, which is a part of the pole-to-pole atmospheric circulation cell predicted for solstices by Titan GCMs (Global Climate Models, \citet{Lora2015,Lebonnois2012a,Newman2011}). These models also predict that the circulation pattern should reverse around equinoxes, via a transitional state with two equator-to-pole cells. 
These changes began to affect the South pole in 2010, when measurements showed that pressures lower than 0.03~mbar exhibited an enrichment in gases such as HCN or $\mathrm{C_2H_2}$, which propagated downward during autumn, consistent with the appearance of a new circulation cell with subsidence above the South pole \citep{Teanby2017,Vinatier2015}.\\ Some uncertainties remain about the seasonal evolution of the lower part of the stratosphere, i.e. at pressures from 5~mbar (120~km) to 100~mbar (tropopause, 50~km). Different estimates of radiative timescales have been calculated for this region. In \citet{Strobel2010}, the radiative timescales in this region vary from 0.2 Titan years at 5~mbar to 2.5 Titan years at 100~mbar. This means that the lower stratosphere should be the transition zone from parts of the atmosphere which are sensitive to seasonal insolation variations, to parts of the atmosphere which are not. In contrast, in the radiative-dynamical model of \citet{Bezard2018}, radiative timescales are between 0.02 Titan year at 5~mbar and 0.26 Titan year at 100~mbar, implying that this whole region should exhibit a response to the seasonal cycle.\\ From northern winter to equinox, CIRS mid-IR observations showed that temperature variations were lower than 5~K between 5~mbar and 10~mbar \citep{Bampasidis2012,Achterberg2011}. Temporal variations intensified after spring equinox, as \citet{Coustenis2016} measured a cooling of 16~K and an increase in gas abundances at $70^{\circ}$S from 2010 to 2014, at 10~mbar, associated with the autumn subsidence above the South pole. \citet{Sylvestre2018} showed that this subsidence affects pressure levels as low as 15~mbar as they measured strong enrichments in $\mathrm{C_2N_2}$, $\mathrm{C_3H_4}$, and $\mathrm{C_4H_2}$~at high southern latitudes from 2012 to 2016 with CIRS far-IR observations. However, we have little information on temperatures and their seasonal evolution for pressures greater than 10~mbar. 
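The contrast between these two sets of radiative timescales can be made explicit with a trivial comparison (the numbers are those quoted above; treating "radiative timescale shorter than one Titan year" as a crude criterion for seasonal responsiveness is our simplification, not a claim from either study):

```python
# Radiative timescales in Titan years, at 5 and 100 mbar
tau_strobel = {5: 0.2, 100: 2.5}     # Strobel (2010)
tau_bezard = {5: 0.02, 100: 0.26}    # Bezard et al. (2018)

# A layer can follow the seasonal cycle (period: 1 Titan year) only if its
# radiative timescale is short compared to that period.
for p_mbar in (5, 100):
    print(p_mbar, "mbar:",
          "responds" if tau_strobel[p_mbar] < 1.0 else "lags",   # Strobel values
          "/",
          "responds" if tau_bezard[p_mbar] < 1.0 else "lags")    # Bezard values
```

With the \citet{Strobel2010} values the 100~mbar level lags the forcing while 5~mbar follows it (a transition zone); with the \citet{Bezard2018} values both levels respond, which is precisely the distinction drawn above.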
Temperatures from the surface to 0.1~mbar can be measured by Cassini radio-occultations, but the published profiles were measured mainly in 2006 and 2007 \citep{SchinderFlasarMaroufEtAl2011,Schinder2012}, so they provide little information on seasonal variations of temperature. \\ In this study, we analyse all the available far-IR Cassini/CIRS observations to probe temperatures from 6~mbar to 25~mbar, and measure the seasonal variations of lower stratospheric temperatures. As these data were acquired throughout the Cassini mission from 2004 to 2017, and cover the whole latitude range, they provide a unique overview of the thermal evolution of the lower stratosphere from northern winter to summer solstice, and a better understanding of the radiative and dynamical processes at play in this part of Titan's atmosphere.\\ \section{Data analysis} \subsection{Observations} We measure lower stratospheric temperatures using Cassini/CIRS \citep{Flasar2004} spectra. CIRS is a thermal infrared spectrometer with three focal planes operating in three different spectral domains: 10 - 600$~\mathrm{cm^{-1}}$ (17 - 1000$~\mathrm{\mu m}$) for FP1, 600 - 1100$~\mathrm{cm^{-1}}$ (9 - 17 $~\mathrm{\mu m}$) for FP3, and 1100 - 1400$~\mathrm{cm^{-1}}$ (7 - 9$~\mathrm{\mu m}$) for FP4. FP1 has a single circular detector with an angular field of view of 3.9~mrad, which has an approximately Gaussian spatial response with a FWHM of 2.5 mrad. FP3 and FP4 are each composed of a linear array of ten detectors. Each of these detectors has an angular field of view of 0.273~mrad. \\ In this study, we use FP1 far-IR observations, where nadir spectra are measured at a resolution of 0.5$~\mathrm{cm^{-1}}$, in "sit-and-stare" geometry (i.e. the FP1 detector probes the same latitude and longitude during the whole duration of the acquisition). In this type of observation, the average spatial field of view is 20$^\circ$ in latitude. 
An acquisition lasts between 1h30 and 4h30, allowing the recording of 100 to 330 spectra. The spectra from the same acquisition are averaged together, which increases the S/N by a factor $\sqrt{N}$ (where $N$ is the number of spectra). As a result, we obtain an average spectrum where the rotational lines of $\mathrm{CH_4}$~(between 70$~\mathrm{cm^{-1}}$ and 170$~\mathrm{cm^{-1}}$) are resolved and can be used to retrieve Titan's lower stratospheric temperature. An example of an averaged spectrum is shown in Fig. \ref{fig_spec}.\\ We analysed all the available observations with the characteristics mentioned above. As shown in Table \ref{table_obs}, this type of nadir far-IR observation has been performed throughout the Cassini mission (from 2004 to 2017), at all latitudes. Hence, the analysis of this dataset enables us to get an overview of Titan's lower stratosphere and its seasonal evolution.\\ \begin{figure}[!h] \includegraphics[width=1\columnwidth]{Spectra_89N_0703} \caption{Example of an averaged spectrum measured with the FP1 detector of Cassini/CIRS (in black) and its fit by NEMESIS (in red). The measured spectrum was obtained after averaging 106 spectra observed at $89^{\circ}$N in March 2007. The rotational lines of $\mathrm{CH_4}$~are used to retrieve stratospheric temperature. The "haystack" feature is visible only at high latitudes during autumn and winter. } \label{fig_spec} \end{figure} \subsection{Retrieval method} We follow the same method as \citet{Sylvestre2018}. We use the portion of the spectrum between 70~$\mathrm{cm^{-1}}$ and 400~$\mathrm{cm^{-1}}$, where the main spectral features are: the ten rotational lines of $\mathrm{CH_4}$~(between 70$~\mathrm{cm^{-1}}$ and 170$~\mathrm{cm^{-1}}$), the $\mathrm{C_4H_2}$~band at $220~\mathrm{cm^{-1}}$, the $\mathrm{C_2N_2}$~band at $234~\mathrm{cm^{-1}}$, and the $\mathrm{C_3H_4}$~band at $327~\mathrm{cm^{-1}}$ (see Fig. \ref{fig_spec}). 
The continuum emission comes from the collisions between the three main components of Titan's atmosphere (N$_2$, $\mathrm{CH_4}$, and H$_2$), and from the spectral contributions of the hazes. \\ We retrieve the temperature profile using the constrained non-linear inversion code NEMESIS \citep{Irwin2008}. We define a reference atmosphere, which takes into account the abundances of the main constituents of Titan's atmosphere measured by Cassini/CIRS \citep{Coustenis2016,Nixon2012,Cottini2012,Teanby2009}, Cassini/VIMS \citep{Maltagliati2015}, ALMA \citep{Molter2016} and Huygens/GCMS \citep{Niemann2010}. We also consider the haze distribution and properties measured in previous studies with Cassini/CIRS \citep{deKok2007,deKok2010b,Vinatier2012}, and Huygens/GCMS \citep{Tomasko2008b}. We consider four types of hazes, following \citet{deKok2007}: hazes 0 ($70~\mathrm{cm^{-1}}$ to $400~\mathrm{cm^{-1}}$), A (centred at $140~\mathrm{cm^{-1}}$), B (centred at $220~\mathrm{cm^{-1}}$) and C (centred at $190~\mathrm{cm^{-1}}$). For the spectra measured at high northern and southern latitudes during autumn and winter, we add an offset from 1 to $3~\mathrm{cm^{-1}}$ to the nominal haze B cross-sections between 190~$\mathrm{cm^{-1}}$ and 240~$\mathrm{cm^{-1}}$, as in \citet{Sylvestre2018}. This modification improves the fit of the continuum in the "haystack", which is a strong emission feature between 190~$\mathrm{cm^{-1}}$ and 240~$\mathrm{cm^{-1}}$ (see Fig. \ref{fig_spec}) seen at high latitudes during autumn and winter (e.g. in \citet{Coustenis1999, deKok2007, Anderson2012, Jennings2012, Jennings2015}). The variation of the offset allows us to take into account the evolution of the shape of this feature throughout autumn and winter. 
The composition of our reference atmosphere and the spectroscopic parameters adopted for its constituents are fully detailed in \citet{Sylvestre2018}.\\ We retrieve the temperature profile and scale factors applied to the \textit{a priori} profiles of $\mathrm{C_2N_2}$, $\mathrm{C_4H_2}$, $\mathrm{C_3H_4}$, and hazes 0, A, B and C, from the spectra using the constrained non-linear inversion code NEMESIS \citep{Irwin2008}. This code generates synthetic spectra from the reference atmosphere. At each iteration, the difference between the synthetic and the measured spectra is used to modify the profile of the retrieved variables, and minimise a cost function, in order to find the best fit for the measured spectrum. \\ The sensitivity of the spectra to the temperature can be measured with the inversion kernels for the temperature (defined as $K_{ij}~=~\frac{\partial I_i}{\partial T_j}$, where $I_i$ is the radiance measured at wavenumber $w_i$, and $T_j$ the temperature at pressure level $p_j$) for several wavenumbers. The contribution of the methane lines to the temperature measurement can be isolated by defining their own inversion kernels $K^{CH_4}_{ij}$ as follows: \begin{equation} K^{CH_4}_{ij} = K_{ij} - K^{cont}_{ij} \end{equation} \noindent where $K^{cont}_{ij}$ is the inversion kernel of the continuum for the same wavenumber. Figure \ref{fig_cf} shows $K^{CH_4}_{ij}$ for three of the rotational methane lines in the left panel, and the comparison between the sum of the 10 $K^{CH_4}_{ij}$ (for the 10 rotational $\mathrm{CH_4}$~lines) and inversion kernels for the continuum ($K^{cont}_{ij}$ at the wavenumbers of the $\mathrm{CH_4}$~lines and $K_{ij}$ outside of the $\mathrm{CH_4}$~lines) in the right panel. The $\mathrm{CH_4}$~lines allow us to measure lower stratospheric temperatures generally between 6~mbar and 25~mbar, with a maximal sensitivity at 15~mbar. 
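The kernel definition $K_{ij}=\frac{\partial I_i}{\partial T_j}$ can be illustrated with a finite-difference sketch. The forward model below is a toy stand-in (Gaussian contribution functions and a $T^4$ emission law, both invented for illustration), not NEMESIS; it only shows how each row of $K$ picks out the pressure level that a given wavenumber is most sensitive to:

```python
import numpy as np

# Toy forward model: radiance I_i at wavenumber i is a weighted sum of
# T^4 emission from each pressure level j. The Gaussian weights stand in
# for the true contribution functions and are invented for illustration.
n_levels, n_waves = 20, 5
levels = np.arange(n_levels)
centres = np.array([4, 7, 10, 13, 16])   # assumed peak-sensitivity level per wavenumber
weights = np.exp(-0.5 * ((levels[None, :] - centres[:, None]) / 2.0) ** 2)

def radiance(T):
    return weights @ T**4

T0 = np.full(n_levels, 150.0)            # reference temperature profile (K)
I0 = radiance(T0)

# Finite-difference inversion kernels K_ij = dI_i / dT_j
dT = 0.1
K = np.empty((n_waves, n_levels))
for j in range(n_levels):
    Tp = T0.copy()
    Tp[j] += dT
    K[:, j] = (radiance(Tp) - I0) / dT

# Each row of K peaks at the level its wavenumber is most sensitive to
print(K.argmax(axis=1))   # levels 4, 7, 10, 13, 16
```

In the real retrieval, subtracting the continuum kernel $K^{cont}_{ij}$ from $K_{ij}$, as in the equation above, isolates the contribution of the $\mathrm{CH_4}$~lines in the same way.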
The continuum emission mainly probes temperatures at higher pressures, around the tropopause and in the troposphere. The continuum emission mostly originates from the $\mathrm{N_2}$-$\mathrm{N_2}$ and $\mathrm{N_2}$-$\mathrm{CH_4}$~collision-induced absorption with some contribution from the hazes, for which we have limited constraints. However, Fig. \ref{fig_cf} shows that the continuum emission comes from pressure levels located several scale heights below the region probed by the $\mathrm{CH_4}$~lines, so the lack of constraints on the hazes and tropospheric temperatures does not affect the lower stratospheric temperatures, which are the main focus of this study.\\ \begin{figure}[!h] \includegraphics[width=1\columnwidth]{CF_72N_0704_v2} \caption{Sensitivity of temperature measurements at $72^{\circ}N$ in April 2007. \textit{Left panel}: Normalised inversion kernels $K^{CH_4}_{ij}$ in three of the $\mathrm{CH_4}$~rotational lines. \textit{Right panel:} Comparison between the inversion kernels in the continuum ($K^{cont}_{ij}$ for three of the $\mathrm{CH_4}$~lines in dot-dashed lines, and $K_{ij}$ for other wavenumbers in the continuum in dashed lines) and the sum of the inversion kernels $K^{CH_4}_{ij}$ of the $\mathrm{CH_4}$~rotational lines. $\mathrm{CH_4}$~rotational lines dominate the temperature retrievals in the lower stratosphere, generally from 6 to 25~mbar (and up to 35~mbar, depending on the datasets). The continuum emission probes temperatures at pressures higher than 50~mbar, mainly in the troposphere.} \label{fig_cf} \end{figure} \subsection{Error sources} The main error sources in our temperature retrievals are the measurement noise and the uncertainties related to the retrieval process, such as forward modelling errors or the smoothing of the temperature profile. 
The total error on the temperature retrieval is estimated by NEMESIS and is of the order of 2~K from 6~mbar to 25~mbar.\\ The other possible error source is the uncertainty on $\mathrm{CH_4}$~abundance, as \citet{Lellouch2014} showed that it can vary from 1\% to 1.5\% at 15 mbar. We performed additional temperature retrievals on several datasets, in order to assess the effects of these variations on the temperature retrievals. First, we selected datasets for which $\mathrm{CH_4}$~abundance was measured by \citet{Lellouch2014}. In Figure \ref{fig_TCH4}, we show examples of these tests for two of these datasets: $52^{\circ}$N in May 2007 and $15^{\circ}$S in October 2006, for which \citet{Lellouch2014} measured respective $\mathrm{CH_4}$~abundances of $q_{CH_4} = 1.20 \pm 0.15\%$ and $q_{CH_4} = 0.95 \pm 0.08 \%$ (the nominal value for our retrievals is $q_{CH_4} = 1.48 \pm 0.09\%$ from \citet{Niemann2010}). At $52^{\circ}$N, the temperature profile obtained with the methane abundance from \citet{Lellouch2014} does not differ by more than 4~K from the nominal temperature profile. At 15 mbar (where the sensitivity to temperature is maximal in our retrievals), the difference of temperature between these two profiles is 2~K. Even a $\mathrm{CH_4}$~volume mixing ratio as low as 1\% yields a temperature only 4~K warmer than the nominal temperature at 15~mbar. At $15^{\circ}$S, the difference of temperature between the nominal retrieval and the retrieval with the methane abundance retrieved by \citet{Lellouch2014} ($q_{CH_4}=0.95\%$), is approximately 9~K on the whole pressure range.\\ We performed additional temperature retrievals using CIRS FP4 nadir spectra measured at the same times and latitudes as the two datasets shown in Figure \ref{fig_TCH4}. In FP4 nadir spectra, the methane $\nu_4$ band is visible between $1200~\mathrm{cm^{-1}}$ and $1360~\mathrm{cm^{-1}}$. 
This spectral feature allows us to probe temperature between 0.1~mbar and 10~mbar, whereas methane rotational lines in the CIRS FP1 nadir spectra generally probe temperature between 6~mbar and 25~mbar. Temperature can thus be measured with both types of retrievals from 6~mbar to 10~mbar. We performed FP4 temperature retrievals with the nominal methane abundance and the abundances measured by \citet{Lellouch2014}, as shown in Figure \ref{fig_TCH4}. FP4 temperature retrievals seem less sensitive to changes in the methane volume mixing ratio, as they yield a maximal temperature difference of 3~K at $52^{\circ}$N, and 4~K at $15^{\circ}$S between 6~mbar and 10~mbar. In both cases, FP1 and FP4 temperature retrievals are in better agreement in their common pressure range when the nominal methane abundance ($q_{CH_4}=1.48\%$) is used for both retrievals. This suggests that $q_{CH_4}=1.48\%$ is the best choice, at least in the pressure range covered by both types of temperature retrievals (from 6~mbar to 10~mbar). Changing the abundance of $\mathrm{CH_4}$~in the whole stratosphere seems to induce an error on the temperature measurements between 6~mbar and 10~mbar (up to 9~K at $15^{\circ}$S), which probably affects the temperature at 15~mbar in the FP1 retrievals, because of the vertical resolution of nadir retrievals (represented by the width of the inversion kernels in Fig. \ref{fig_cf}). Consequently, assessing the effects of $\mathrm{CH_4}$~abundance variations on temperature at 15~mbar by changing $q_{CH_4}$ in the whole stratosphere seems to be a very unfavourable test, and the uncertainties on temperature determined by this method are probably overestimated for the FP1 temperature retrievals. Overall, when retrieving temperature from CIRS FP1 nadir spectra with $q_{CH_4}=1\%$ for datasets spanning different times and latitudes, we found temperatures warmer than our nominal temperatures by 2~K to 10~K at 15~mbar, with an average of 5~K. 
In \citet{Lellouch2014}, the authors found that temperature changes by 4-5~K over the whole pressure range when varying $q_{CH_4}$ at $15^{\circ}$S, but they determined temperatures using FP4 nadir and limb data, which do not probe the 15 mbar pressure level.\\ \begin{figure} \centering \includegraphics[width=1\columnwidth]{Temperature_profiles_52N_0705}\\ \includegraphics[width=1\columnwidth]{Temperature_profiles_15S_0610}\\ \caption{Temperature profiles from CIRS FP1 and FP4 nadir observations at $52^{\circ}$N in May 2007 (top panel) and $15^{\circ}$S in October 2006 (bottom panel), retrieved with the methane abundances measured by \citet{Niemann2010} (nominal value in this study) and \citet{Lellouch2014}. In both cases, the nominal value from \citet{Niemann2010} yields a better agreement between the two types of observations.} \label{fig_TCH4} \end{figure} \section{Results} \label{sect_res} \begin{figure}[!hp] \includegraphics[width=1\columnwidth]{carte_2d_6mbar_v2}\\ \includegraphics[width=1\columnwidth]{carte_2d_15mbar_v2} \caption{Evolution of temperatures at 6~mbar (120~km) and 15~mbar (85~km) from northern winter (2004) to summer (2017). The length of the markers shows the average size of the field of view of the CIRS FP1 detector. Temperatures exhibit similar strong seasonal changes at both pressure levels, especially at the poles.} \label{fig_ev_saiso} \end{figure} \begin{figure}[!h] \includegraphics[width=1\columnwidth]{var_saiso_GCM} \caption{Meridional distribution of temperatures at 6~mbar (120~km) and 15~mbar (85~km), for three different seasons: late northern winter (2007, blue triangles), mid-spring (2013, green circles), and near summer solstice (from July 2016 to September 2017, red diamonds). The plain lines are the meridional distributions given by GCM simulations at comparable seasons (see section \ref{sect_discu}). 
In both observations and model the meridional gradient of temperatures evolves from one season to another at both pressure levels.} \label{fig_var_saiso} \end{figure} Figures \ref{fig_ev_saiso} and \ref{fig_var_saiso} show the temperatures measured with Cassini/CIRS far-IR nadir data at 6~mbar (minimal pressure probed by the CIRS far-IR nadir observations) and 15~mbar (pressure level where these observations are the most sensitive). Figure \ref{fig_ev_saiso} maps the seasonal evolution of temperatures throughout the Cassini mission (from 2004 to 2017, i.e. from mid-northern winter to early summer), while Figure \ref{fig_var_saiso} is focused on the evolution of the meridional gradient of temperature from one season to another. In both figures, both pressure levels exhibit significant seasonal variations of temperature and follow similar trends. Maximal temperatures are reached near the equator in 2005 (152~K at 6~mbar, 130~K at 15~mbar, at $18^{\circ}$S, at $L_S=300^{\circ}$), while the minimal temperatures are reached at high southern latitudes in autumn (123~K at 6~mbar, 106~K at 15~mbar at $70^{\circ}$S in 2016, at $L_S=79^{\circ}$).\\ The maximal seasonal variations of temperature are located at the poles for both pressure levels. At high northern latitudes ($60^\circ$N - $90^\circ$N), at 15~mbar, the temperature increased overall from winter to summer solstice. For instance at $70^{\circ}$N, temperature increased by 10~K from January 2007 to September 2017. At 6~mbar, temperatures at $60^{\circ}$N stayed approximately constant from winter to spring, whereas latitudes poleward from $70^{\circ}$N warmed up. At $85^{\circ}$N, the temperature increased continuously from 125~K in March 2007 to 142~K in September 2017.\\ In the meantime, at high southern latitudes ($60^\circ$S - $90^\circ$S), at 6~mbar and 15~mbar, temperatures strongly decreased from southern summer (2007) to late autumn (2016). 
It is the largest seasonal temperature change we measured in the lower stratosphere. At $70^{\circ}$S, temperature decreased by 24~K at 6~mbar and by 19~K at 15~mbar between January 2007 and June 2016. This decrease seems to be followed by a temperature increase toward winter solstice. At $70^{\circ}$S, temperatures varied by $+8$~K at 6~mbar from June 2016 to April 2017. Temperatures at high southern latitudes began to evolve in November 2010 at 6~mbar, and 2 years later (in August 2012) at 15~mbar.\\ Other latitudes experience moderate seasonal temperature variations. At low latitudes (between $30^{\circ}$N and $30^{\circ}$S), temperature decreased overall from 2004 to 2017 at both pressure levels. For instance, at the equator, at 6~mbar temperature decreased by 6~K from 2006 to 2016. At mid-southern latitudes, temperatures stayed constant from summer (2005) to mid-autumn (June 2012 at 6~mbar, and May 2013 at 15~mbar), then they decreased by approximately 10~K from 2012-2013 to 2016. At mid-northern latitudes temperatures increased overall from winter to spring. At $50^{\circ}$N, temperature increased from 139~K to 144~K from 2005 to 2014. In Figure \ref{fig_var_saiso}, at 6~mbar and 15~mbar, the meridional temperature gradient evolves from one season to another. During late northern winter, temperatures were approximately constant from $70^{\circ}$S to $30^{\circ}$N, and then decreased toward the North pole. In mid-spring, temperatures were decreasing from equator to poles. Near the summer solstice, at 15~mbar, the meridional temperature gradient reversed compared to winter (summer temperatures constant in northern and low southern latitudes then decreasing toward the South Pole), while at 6~mbar, temperatures globally decrease from the equator to the South pole and $70^{\circ}$N, then increase slightly between $70^{\circ}$N and $90^{\circ}$N. 
At 15~mbar, most of these changes in the shape of the temperature distribution occur because of the temperature variations poleward from $60^{\circ}$. At 6~mbar, temperature variations occur mostly in the southern hemisphere at latitudes higher than $40^{\circ}$S, and near the North pole at latitudes higher than $70^{\circ}$N.\\ \begin{figure}[!h] \includegraphics[width=1\columnwidth]{temp_profiles} \caption{Temperature variations in the lower stratosphere during the Cassini mission for different latitudes. The blue profiles were measured during northern winter (in 2007). The red profiles were measured in late northern spring (in 2017 for $85^{\circ}$N, in 2016 for the other latitudes). The seasonal temperature variations are observed at most latitudes, and on the whole probed pressure range.} \label{fig_grad_saiso_vert} \end{figure} Figure \ref{fig_grad_saiso_vert} shows the first and the last temperature profiles measured with CIRS nadir far-IR data, for several latitudes. As in Fig. \ref{fig_ev_saiso}, the maximal temperature variations are measured at high southern latitudes for all pressure levels. At $70^{\circ}$S, the temperature decreased by 25~K at 10~mbar. Below 10~mbar the seasonal temperature difference decreases rapidly with increasing pressure until it reaches 10~K at 25~mbar, whereas it is nearly constant between 5~mbar and 10~mbar. $85^{\circ}$N also exhibits a decrease of the seasonal temperature gradient below the 10~mbar pressure level, although it is less pronounced than near the South pole. At $45^{\circ}$S, the temperature decreased by approximately 10~K from 2007 to 2016, over the whole probed pressure range. At the equator, the temperature varies by -5~K from 2005 to 2016 at 6~mbar and the amplitude of this variation seems to decrease slightly with increasing pressure until it becomes negligible at 25~mbar. 
However, the amplitude of these variations is in the same range as the uncertainty on temperature due to potential $\mathrm{CH_4}$~variations. \\ \section{Discussion} \label{sect_discu} \subsection{Comparison with previous results} \begin{figure}[!h] \includegraphics[width=1\columnwidth]{comp_prev_studies_v5} \caption{Comparison of nadir FP1 temperatures with previous studies. \textit{Top left panel:} Comparison between CIRS nadir FP1 (triangles) and CIRS nadir FP4 temperatures at 6~mbar (circles, \citet{Bampasidis2012}[1], and \citet{Coustenis2016}[2]) in 2010 (cyan) and 2014 (purple). \textit{Right panel:} Comparison between temperature profiles from CIRS nadir FP1 observations (thick solid lines), CIRS nadir FP4 observations (thin dot-dashed lines, \citet{Coustenis2016}[2]), and Cassini radio-occultation (thin dashed line, \citet{SchinderFlasarMaroufEtAl2011}[3]). Our results are in good agreement with CIRS FP4 temperatures, but diverge somewhat from radio-occultation profiles with increasing pressure. \textit{Bottom left panel:} Comparison between temperatures at 15~mbar from our CIRS FP1 nadir measurements (magenta triangles), Cassini radio-occultations in 2006 and 2007 (cyan circles, \citet{SchinderFlasarMaroufEtAl2011,Schinder2012}, [3], [4]), and the Huygens/HASI measurement in 2005 (yellow diamond, \citet{Fulchignoni2005}, [5]). The dashed magenta line shows the potential effect of the $\mathrm{CH_4}$~variations observed by \citet{Lellouch2014}. If we take into account this effect, the agreement between our data, the radio-occultations and the HASI measurements is good.} \label{fig_prev_studies} \end{figure} Figure \ref{fig_prev_studies} shows a comparison between our results and previous studies where temperatures have been measured in the lower stratosphere at similar epochs, latitudes and pressure levels. 
In the top left and right panels, our temperature measurements are compared to results from CIRS FP4 nadir observations \citep{Bampasidis2012, Coustenis2016} which probe mainly the 0.1-10~mbar pressure range. In the top left panel, the temperatures measured at 6~mbar by these two types of observations are in good agreement for the two considered epochs (2009-2010 and 2014). We obtain similar meridional gradients with both types of observations, even if FP4 temperatures are obtained from averages of spectra over bins of $10^{\circ}$ of latitude (except at $70^{\circ}$N and $70^{\circ}$S where the bins are $20^{\circ}$ wide in latitude), whereas the average size in latitude of the field of view of the FP1 detector is $20^{\circ}$. It thus seems that the wider latitudinal size of the FP1 field of view has little effect on our temperature measurements. In the right panel, our temperature profiles are compared to two profiles measured by \citet{Coustenis2016} using CIRS FP4 nadir observations (at $50^{\circ}$S in April 2010, and at $70^{\circ}$S in June 2012), and with Cassini radio-occultations measurements from \citet{SchinderFlasarMaroufEtAl2011,Schinder2012}, which probe the atmosphere from the surface to 0.1~mbar (0 - 300~km). CIRS FP1 and FP4 temperature profiles are in good overall agreement. The profile we measured at $28^{\circ}$S in February 2006 and the corresponding radio-occultation profile are within error bars for pressures lower than 13~mbar, then the difference between them increases up to 8~K at 25~mbar. The bottom left panel of Fig. \ref{fig_prev_studies} shows the radio-occultation temperatures in 2006 and 2007 compared to CIRS nadir FP1 temperatures at 15~mbar, where their sensitivity to the temperature is maximal. Although the radio-occultation temperatures are systematically higher than the CIRS temperatures by 2~K to 6~K, they follow the same meridional trend. 
CIRS FP1 temperatures at the equator are also lower than the temperature measured by the HASI instrument at 15~mbar during Huygens descent in Titan's atmosphere in 2005. If we take into account the effect of the spatial variations of $\mathrm{CH_4}$~at 15~mbar observed by \citet{Lellouch2014} by decreasing the $\mathrm{CH_4}$~abundance to 1\% (the lower limit in \citet{Lellouch2014}) in the CIRS FP1 temperature measurements (dashed line in the bottom left panel of Fig. \ref{fig_prev_studies}), the agreement between the three types of observations is good in the southern hemisphere. The differences between radio-occultations, HASI and CIRS temperatures might also be explained by the difference of vertical resolution. Indeed, nadir observations have a vertical resolution of the order of 50~km, while radio-occultations and HASI observations have respective vertical resolutions of 1~km and 200~m around 15~mbar.\\ \subsection{Effects of Saturn's eccentricity} \begin{figure}[!h] \includegraphics[width=1\columnwidth]{eccentricity_equator_free_T0} \caption{Temporal evolution of Titan's lower stratospheric temperatures at the equator ($5^{\circ}$N - $5^{\circ}$S) at 6~mbar (left panel) and 15~mbar (right panel), compared with a simple model of the evolution of the temperature as a function of the distance between Titan and the Sun (green line). The reduced $\chi^2$ between this model and the observations is 0.95 at 6~mbar and 1.07 at 15~mbar. The amplitude of the temperature variations at Titan's equator throughout the Cassini mission can be explained by the effect of Saturn's eccentricity.} \label{fig_eccentricity} \end{figure} Because of Saturn's orbital eccentricity of 0.0565, the distance between Titan and the Sun varies enough to significantly affect the insolation. For instance, throughout the Cassini mission, the solar flux received at the equator has decreased by 19\% because of the eccentricity. 
We build a simple model of the evolution of the temperature $T$ at the equator as a function of the distance between Titan and the Sun. In this model we assume that the temperature $T$ at the considered pressure level and at a given time depends only on the absorbed solar flux $F$, and we neglect the radiative exchanges between atmospheric layers: \begin{equation} \epsilon \sigma T^4 = F \end{equation} \noindent where $\epsilon$ is the emissivity of the atmosphere at this pressure level, and $\sigma$ the Stefan-Boltzmann constant. $T$ can thus be expressed as a function of the distance $d$ between Titan and the Sun: \begin{equation} T^4 = \frac{\alpha L_{\odot}}{16\epsilon\sigma\pi d^2} \label{eq_T_dist} \end{equation} \noindent where $L_\odot$ is the solar luminosity, and $\alpha$ the absorptivity of the atmosphere. If we choose a reference temperature $T_0$ where Titan is at a distance $d_0$ from the Sun, a relation similar to (\ref{eq_T_dist}) can be written for $T_0$. If we assume $\epsilon$ and $\alpha$ to be constant, $T$ can then be written as: \begin{equation} T = T_0 \sqrt{\frac{d_0}{d}} \end{equation} Figure \ref{fig_eccentricity} shows a comparison between this model and the temperatures measured between $5^{\circ}$N and $5^{\circ}$S from 2006 to 2016, at 6~mbar and 15~mbar. We choose $T_0$ as the temperature at the beginning of the observations (December 2005/January 2006), which provides the best fit between our model and the observations while being consistent with the observations at the same epoch ($T_0=151.7$~K at 6~mbar, and $T_0=129$~K at 15~mbar). At 6~mbar, we measure a temperature decrease from 2006 to 2016. This is similar to what has been measured at 4~mbar by \citet{Bezard2018} with CIRS mid-IR observations, whereas their radiative-dynamical model predicts a small temperature maximum around the northern spring equinox (2009). At 15~mbar, equatorial temperatures are mostly constant from 2005 to 2016, with a marginal decrease in 2016. 
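The scaling above can be checked numerically. The sketch below uses standard orbital figures for Saturn (semi-major axis of about 9.58~AU and the eccentricity of 0.0565 quoted earlier) to form the perihelion and aphelion distances; these orbital values and the use of the full perihelion-to-aphelion excursion are assumptions of the sketch, not outputs of the fit.

```python
import math

def equilibrium_temperature(d, d0, T0):
    """Radiative-equilibrium scaling T = T0 * sqrt(d0 / d),
    valid when emissivity and absorptivity are constant."""
    return T0 * math.sqrt(d0 / d)

# Assumed orbital figures (standard values, not fitted here):
a, e = 9.58, 0.0565                  # Saturn's semi-major axis (AU), eccentricity
d_peri, d_aph = a * (1 - e), a * (1 + e)

# Flux scales as 1/d^2: perihelion-to-aphelion decrease of ~20%,
# close to the 19% quoted over the actual mission time span.
flux_drop = 1.0 - (d_peri / d_aph) ** 2

T0 = 151.7                           # reference 6 mbar temperature (K)
dT = T0 - equilibrium_temperature(d_aph, d_peri, T0)
print(f"flux decrease: {flux_drop:.0%}, cooling at 6 mbar: {dT:.1f} K")
```

The resulting cooling of roughly 8~K is consistent with the amplitude of the model prediction at 6~mbar.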
Our model predicts temperature variations of 8~K at 6~mbar and 7~K at 15~mbar from 2006 to 2016. Both predictions are consistent with the measurements and with radiative timescales shorter than one Titan year at 6~mbar and 15~mbar, as in \citet{Bezard2018}, where they are equal to 0.024 and 0.06~Titan year, respectively. At both pressure levels, the model captures the magnitude of the temperature change, but does not fully match its timing or shape (especially in 2012-2014), implying that a more sophisticated model is needed. The remaining differences between our model and the temperature measurements could be decreased by adding a temporal lag to our model (2-3~years at 6~mbar and 3-4~years at 15~mbar), but the error bars on the temperature measurements are too large to constrain the lag to a value statistically distinct from zero. Even with this potential lag, the agreement between the model and the temperatures measured at 6~mbar shows that the amplitude of the temporal evolution throughout the Cassini mission may be explained by the effects of Saturn's eccentricity. At 15~mbar, given the error bars and the lack of further far-IR temperature measurements at the equator in 2016 and 2017, it remains difficult to draw a definitive conclusion about the influence of Saturn's eccentricity at this pressure level.\\ \subsection{Implication for radiative and dynamical processes of the lower stratosphere} In Section \ref{sect_res}, we showed that in the lower stratosphere, the seasonal evolution of the temperature is maximal at high latitudes, especially at the South Pole. At 15~mbar, the strong cooling of high southern latitudes started in 2012, simultaneously with the increase in $\mathrm{C_2N_2}$, $\mathrm{C_4H_2}$, and $\mathrm{C_3H_4}$~abundances measured at the same latitudes and pressure level in \citet{Sylvestre2018}. We also show that this cooling affects the atmosphere at least down to the 25~mbar pressure level (altitude of 70~km). 
The enrichment in trace gases and the cooling are consistent with the onset of a subsidence above the South Pole during autumn, as predicted by GCMs \citep{Newman2011, Lebonnois2012a}, and inferred from previous CIRS observations at higher altitudes \citep{Teanby2012, Vinatier2015, Coustenis2016}. As Titan's atmospheric circulation transitions from two equator-to-pole cells (with upwelling above the equator and subsidence above the poles) to a single pole-to-pole cell (with a descending branch above the South Pole), this subsidence drags photochemical species created at higher altitudes downward toward the lower stratosphere. \citet{Teanby2017} showed that the enrichment in trace gases may be so strong that their cooling effect, combined with the insolation decrease, may exceed the adiabatic heating between 0.3~mbar and 10~mbar (100 - 250~km). Our observations show that this phenomenon may be at play as deep as 25~mbar.\\ We compare the retrieved temperature fields with results of simulations from the IPSL 3D-GCM \citep{Lebonnois2012a} with an updated radiative transfer scheme \citep{Vatantd'Ollone2017}, now based on a flexible \textit{correlated-k} method and up-to-date gas spectroscopic data \citep{Rothman2013}. The model does not take into account the radiative feedback of the enrichment in hazes and trace gases in the polar regions, but there is nevertheless good agreement between the model and the observations in terms of seasonal cycle. As shown in Figure \ref{fig_var_saiso}, at 6~mbar the meridional distributions and values of temperatures in the model match the observations well. In both the model and the observations there is a noticeable asymmetry between high southern latitudes, where the temperature decreases rapidly from the equinox to winter, and high northern latitudes, which evolve more slowly from winter to summer. 
For instance, in both the CIRS data and the model, between 2007 and 2013 at 6~mbar and $70^{\circ}$N the atmosphere warmed by only about 2~K, while in the meantime at $70^{\circ}$S it cooled by about 10-15~K. This is consistent with an increase of radiative timescales at high northern latitudes (due to lower temperatures, \citet{Achterberg2011}), which would remain cold for approximately one season even after the return of sunlight. Figure \ref{fig_map_temp_gcm70N} shows the temporal evolution of the temperature at $70^{\circ}$N over one Titan year in the lower stratosphere in the GCM simulations and also emphasizes this asymmetry between the ingress and egress of winter at high latitudes. In Figure \ref{fig_var_saiso}, at 15~mbar modeled temperatures underestimate the observations by roughly 5-10~K, likely due to a lack of infrared coolers such as cloud condensates \citep{Jennings2015}. However, observations and simulations exhibit similar meridional temperature gradients for the three studied epochs, and similar seasonal temperature evolution. For instance, in 2016-2017 we measured a temperature gradient of -11~K between the North and South Poles, whereas GCM simulations predict a temperature gradient of -12~K. At $70^{\circ}$S, the temperature decreases by 10~K between 2007 and 2016-2017 in the GCM and in our observations. Besides, at 15~mbar, the seasonal behaviour remains the same as at 6~mbar, although more damped. Indeed, comparison with GCM results also supports the idea that the seasonal effects due to the variations of insolation are damped with increasing depth in the lower stratosphere and ultimately muted below 25~mbar, as displayed in Figure \ref{fig_map_temp_gcm70N}. At lower altitudes the seasonal cycle of temperature at high latitudes is even inverted, with temperatures increasing in winter and decreasing in summer. 
Indeed, at these altitudes, because radiative timescales exceed one Titan year, the temperature is no longer sensitive to the seasonal variations of solar forcing, but rather to the interplay of the ascending and descending large-scale vertical motions of the pole-to-pole cell, which induce adiabatic heating above the winter pole and cooling above the summer pole, respectively, as previously discussed in \citet{Lebonnois2012a}. Further analysis of the simulations (not presented here) also shows that after 2016, temperatures at high southern latitudes began to increase slightly again at 6~mbar, which is consistent with the observations, whereas at 15~mbar no change in the trend is observed, likely due to a phase shift of the seasonal cycle between the two altitudes induced by the difference in radiative timescales, which is also illustrated in Figure \ref{fig_map_temp_gcm70N}. \\ \begin{figure}[!h] \includegraphics[width=1\columnwidth]{Map_Temperature_70N} \caption{Seasonal evolution of Titan's lower stratospheric temperatures modeled by the IPSL 3D-GCM at 70$^{\circ}$N, between 5~mbar and 50~mbar, starting at northern spring equinox. In the pressure range probed by the CIRS far-IR observations (from 6~mbar to 25~mbar), there is a strong asymmetry between the rapid temperature changes after autumn equinox ($L_S = 180^{\circ}$) and the slow evolution of the thermal structure after spring equinox ($L_S = 0^{\circ}$). } \label{fig_map_temp_gcm70N} \end{figure} We also show in Figure \ref{fig_grad_saiso_vert} that at high southern latitudes, from 6 to 10~mbar, seasonal temperature variations are approximately constant with pressure and can be larger than 10~K, whereas they decrease with increasing pressure below 10~mbar. This transition at 10~mbar may be caused by the increase of radiative timescales in the lower stratosphere. \citet{Strobel2010} estimated that the radiative timescale increases from one Titan season at 6~mbar to half a Titan year at 12~mbar. 
It can thus be expected that this region is a transition zone between regions of the atmosphere where the response to the seasonal insolation variations is significant and comes with little lag, and regions where it is negligible. However, this transition should then be observable at other latitudes such as $45^{\circ}$S, whereas Figure \ref{fig_grad_saiso_vert} shows a seasonal gradient constant with pressure at this latitude. Furthermore, in \citet{Bezard2018} the authors show that the method used to estimate radiative timescales in \citet{Strobel2010} tends to overestimate them, and that in their model radiative timescales are less than a Titan season down to the 35~mbar pressure level, which is more consistent with the seasonal variations measured at $45^{\circ}$S.\\ The 10~mbar transition can also be caused by the interplay between photochemical, radiative and dynamical processes at high latitudes. Indeed, as photochemical species transported downward by the subsidence above the autumn/winter pole build up and strongly cool the lower atmosphere, the condensation level of species such as HCN, $\mathrm{HC_3N}$, $\mathrm{C_4H_2}$ or $\mathrm{C_6H_6}$ may be shifted upward, toward the 10~mbar level. Hence, below this pressure level, the volume mixing ratios of these gases would rapidly decrease, along with their cooling effect. Many observations, especially during the Cassini mission, showed that during winter and autumn the polar regions host clouds composed of ices of photochemical species. For instance, the ``haystack'' feature shown in Fig. \ref{fig_spec} has been studied at both poles \citep{Coustenis1999,Jennings2012,Jennings2015} and is attributed to a mixture of condensates, possibly of nitrile origin. Moreover, HCN ice has been detected in the southern polar cloud observed by \citet{deKok2014} with Cassini/VIMS observations. 
$\mathrm{C_6H_6}$ ice has also been detected by \citet{Vinatier2018} in CIRS observations of the South Pole. The condensation curve for $\mathrm{C_4H_2}$~in \citet{Barth2017} is also consistent with the formation of $\mathrm{C_4H_2}$~ice around 10~mbar with the temperatures we measured at $70^{\circ}$S in 2016. These organic ices may also have a cooling effect themselves, as \citet{Bezard2018} showed that at 9~mbar the nitrile haze measured by \citet{Anderson2011} contributes to the cooling with an intensity comparable to the contribution of gases such as $\mathrm{C_2H_2}$ and $\mathrm{C_2H_6}$. \\ \section{Conclusion} In this paper, we analysed all the available nadir far-IR CIRS observations to measure Titan's lower stratospheric temperatures (6~mbar - 25~mbar) throughout the 13 years of the Cassini mission, from northern winter to summer solstice. In this pressure range, significant temperature changes occur from one season to another. Temperatures evolve moderately at low and mid-latitudes (changes of less than 10~K between 6 and 15~mbar). At the equator, at 6~mbar we measure a temperature decrease mostly due to Saturn's eccentricity. Seasonal temperature changes are maximal at high latitudes, especially in the southern hemisphere where they reach up to -19~K at $70^{\circ}$S between summer (2007) and late autumn (2016) at 15~mbar. The strong seasonal evolution of high southern latitudes is due to a complex interplay between photochemistry, atmospheric dynamics with the downwelling above the autumn/winter poles, radiative processes with a large contribution of the gases transported toward the lower stratosphere, and possibly condensation due to the cold autumn polar temperatures and strong enrichments in trace gases.\\ Recent GCM simulations show a good agreement with the observed seasonal variations in this pressure range, even though these simulations do not include coupling with variations of opacity sources. 
In particular at high latitudes, the fast decrease of temperatures when entering winter and the slower increase when approaching summer are well reproduced in these simulations. \section*{Acknowledgements} This research was funded by the UK Science and Technology Facilities Council (grant number ST/MOO7715/1) and the Cassini project. JVO and SL acknowledge support from the Centre National d'Etudes Spatiales (CNES). GCM simulations have been performed thanks to computation facilities provided by the Grand Équipement National de Calcul Intensif (GENCI) on the \textit{Occigen/CINES} cluster (allocation A0040110391). This research made use of Astropy, a community-developed core Python package for Astronomy \citep{2013A&A...558A..33A}, and matplotlib, a Python library for publication quality graphics \citep{Hunter:2007}. \section*{Appendix. Cassini/CIRS Datasets analysed in this study} % \onecolumn \begin{longtable}[!h]{lcccc} \caption{\label{table_obs}Far-IR CIRS datasets presented in this study. N stands for the number of spectra measured during the acquisition. FOV is the field of view. The asterisk denotes datasets where two different latitudes were observed. }\\ \hline \hline Observations & Date & N & Latitude ($^{\circ}$N) & FOV ($^{\circ}$)\\ \hline CIRS\_00BTI\_FIRNADCMP001\_PRIME & 12 Dec. 2004 & 224 & 16.4 & 20.3\\ CIRS\_003TI\_FIRNADCMP002\_PRIME & 15 Feb. 2005 & 180 & -18.7 & 18.5\\ CIRS\_005TI\_FIRNADCMP002\_PRIME & 31 Mar. 2005 & 241 & -41.1 & 25.7\\ CIRS\_005TI\_FIRNADCMP003\_PRIME & 01 Apr. 2005 & 240 & 47.8 & 28.5\\ CIRS\_006TI\_FIRNADCMP002\_PRIME & 16 Apr. 2005 & 178 & 54.7 & 29.9\\ CIRS\_009TI\_COMPMAP002\_PRIME & 06 Jun. 2005 & 184 & -89.7 & 21.1\\ CIRS\_013TI\_FIRNADCMP003\_PRIME & 21 Aug. 2005 & 192 & 30.1 & 15.5\\ CIRS\_013TI\_FIRNADCMP004\_PRIME & 22 Aug. 2005 & 248 & -53.7 & 25.0\\ CIRS\_017TI\_FIRNADCMP003\_PRIME & 28 Oct. 2005 & 119 & 20.1 & 19.8\\ CIRS\_019TI\_FIRNADCMP002\_PRIME & 26 Dec. 
2005 & 124 & -0.0 & 17.6\\ CIRS\_020TI\_FIRNADCMP002\_PRIME & 14 Jan. 2006 & 107 & 19.5 & 19.7\\ CIRS\_021TI\_FIRNADCMP002\_PRIME & 27 Feb. 2006 & 213 & -30.2 & 22.5\\ CIRS\_022TI\_FIRNADCMP003\_PRIME & 18 Mar. 2006 & 401 & -0.4 & 18.4\\ CIRS\_022TI\_FIRNADCMP008\_PRIME & 19 Mar. 2006 & 83 & 25.3 & 24.1\\ CIRS\_023TI\_FIRNADCMP002\_PRIME & 01 May 2006 & 215 & -35.0 & 27.8\\ CIRS\_024TI\_FIRNADCMP003\_PRIME & 19 May 2006 & 350 & -15.5 & 21.6\\ CIRS\_025TI\_FIRNADCMP002\_PRIME & 02 Jul. 2006 & 307 & 25.1 & 21.7\\ CIRS\_025TI\_FIRNADCMP003\_PRIME & 01 Jul. 2006 & 190 & 39.7 & 25.6\\ CIRS\_028TI\_FIRNADCMP003\_PRIME & 07 Sep. 2006 & 350 & 29.7 & 19.7\\ CIRS\_029TI\_FIRNADCMP003\_PRIME & 23 Sep. 2006 & 312 & 9.5 & 19.4\\ CIRS\_030TI\_FIRNADCMP002\_PRIME & 10 Oct. 2006 & 340 & -59.1 & 23.4\\ CIRS\_030TI\_FIRNADCMP003\_PRIME & 09 Oct. 2006 & 286 & 33.9 & 19.9\\ CIRS\_031TI\_COMPMAP001\_VIMS & 25 Oct. 2006 & 160 & -14.5 & 16.3\\ CIRS\_036TI\_FIRNADCMP002\_PRIME & 28 Dec. 2006 & 136 & -89.1 & 12.6\\ CIRS\_036TI\_FIRNADCMP003\_PRIME & 27 Dec. 2006 & 321 & 78.6 & 21.0\\ CIRS\_037TI\_FIRNADCMP001\_PRIME & 12 Jan. 2007 & 161 & 75.2 & 19.1\\ CIRS\_037TI\_FIRNADCMP002\_PRIME & 13 Jan. 2007 & 107 & -70.3 & 20.6\\ CIRS\_038TI\_FIRNADCMP001\_PRIME & 28 Jan. 2007 & 254 & 86.3 & 16.7\\ CIRS\_038TI\_FIRNADCMP002\_PRIME & 29 Jan. 2007 & 254 & -39.7 & 22.0\\ CIRS\_039TI\_FIRNADCMP002\_PRIME & 22 Feb. 2007 & 23 & 69.9 & 21.2\\ CIRS\_040TI\_FIRNADCMP001\_PRIME & 09 Mar. 2007 & 159 & -49.2 & 21.1\\ CIRS\_040TI\_FIRNADCMP002\_PRIME & 10 Mar. 2007 & 109 & 88.8 & 13.3\\ CIRS\_041TI\_FIRNADCMP002\_PRIME & 26 Mar. 2007 & 102 & 61.2 & 19.3\\ CIRS\_042TI\_FIRNADCMP001\_PRIME & 10 Apr. 2007 & 103 & -60.8 & 26.0\\ CIRS\_042TI\_FIRNADCMP002\_PRIME & 11 Apr. 2007 & 272 & 71.5 & 22.6\\ CIRS\_043TI\_FIRNADCMP001\_PRIME & 26 Apr. 2007 & 263 & -51.4 & 24.7\\ CIRS\_043TI\_FIRNADCMP002\_PRIME & 27 Apr. 
2007 & 104 & 77.1 & 20.0\\ CIRS\_044TI\_FIRNADCMP002\_PRIME & 13 May 2007 & 104 & -0.5 & 18.8\\ CIRS\_045TI\_FIRNADCMP001\_PRIME & 28 May 2007 & 231 & -22.3 & 22.6\\ CIRS\_045TI\_FIRNADCMP002\_PRIME & 29 May 2007 & 346 & 52.4 & 29.5\\ CIRS\_046TI\_FIRNADCMP001\_PRIME & 13 Jun. 2007 & 60 & 17.6 & 28.6\\ CIRS\_046TI\_FIRNADCMP002\_PRIME & 14 Jun. 2007 & 102 & -20.8 & 19.0\\ CIRS\_047TI\_FIRNADCMP001\_PRIME & 29 Jun. 2007 & 204 & 9.8 & 23.2\\ CIRS\_047TI\_FIRNADCMP002\_PRIME & 30 Jun. 2007 & 238 & 20.1 & 23.7\\ CIRS\_048TI\_FIRNADCMP001\_PRIME & 18 Jul. 2007 & 96 & -34.8 & 31.4\\ CIRS\_048TI\_FIRNADCMP002\_PRIME & 19 Jul. 2007 & 260 & 49.5 & 35.8\\ CIRS\_050TI\_FIRNADCMP001\_PRIME & 01 Oct. 2007 & 144 & -10.1 & 23.8\\ CIRS\_050TI\_FIRNADCMP002\_PRIME & 02 Oct. 2007 & 106 & 29.9 & 19.7\\ CIRS\_052TI\_FIRNADCMP002\_PRIME & 19 Nov. 2007 & 272 & 40.3 & 26.5\\ CIRS\_053TI\_FIRNADCMP001\_PRIME & 04 Dec. 2007 & 223 & -40.2 & 25.8\\ CIRS\_053TI\_FIRNADCMP002\_PRIME & 05 Dec. 2007 & 102 & 59.4 & 28.3\\ CIRS\_054TI\_FIRNADCMP002\_PRIME & 21 Dec. 2007 & 107 & 60.4 & 21.1\\ CIRS\_055TI\_FIRNADCMP001\_PRIME & 05 Jan. 2008 & 190 & 18.7 & 30.5\\ CIRS\_055TI\_FIRNADCMP002\_PRIME & 06 Jan. 2008 & 284 & 44.6 & 22.2\\ CIRS\_059TI\_FIRNADCMP001\_PRIME & 22 Feb. 2008 & 172 & -24.9 & 20.7\\ CIRS\_059TI\_FIRNADCMP002\_PRIME & 23 Feb. 2008 & 98 & 17.1 & 20.0\\ CIRS\_062TI\_FIRNADCMP002\_PRIME & 25 Mar. 2008 & 115 & 59.3 & 17.1\\ CIRS\_067TI\_FIRNADCMP002\_PRIME & 12 May 2008 & 286 & 29.5 & 21.0\\ CIRS\_069TI\_FIRNADCMP001\_PRIME & 27 May 2008 & 112 & -44.6 & 27.3\\ CIRS\_069TI\_FIRNADCMP002\_PRIME & 28 May 2008 & 112 & 9.5 & 19.3\\ CIRS\_093TI\_FIRNADCMP002\_PRIME & 20 Nov. 2008 & 161 & 43.7 & 21.1\\ CIRS\_095TI\_FIRNADCMP001\_PRIME & 05 Dec. 2008 & 213 & -14.0 & 20.7\\ CIRS\_097TI\_FIRNADCMP001\_PRIME & 20 Dec. 2008 & 231 & -10.9 & 23.7\\ CIRS\_106TI\_FIRNADCMP001\_PRIME & 26 Mar. 2009 & 165 & -60.3 & 19.2\\ CIRS\_107TI\_FIRNADCMP002\_PRIME & 27 Mar. 
2009 & 164 & 33.5 & 30.4\\ CIRS\_110TI\_FIRNADCMP001\_PRIME & 06 May 2009 & 282 & -68.1 & 25.7\\ CIRS\_111TI\_FIRNADCMP002\_PRIME & 22 May 2009 & 168 & -27.1 & 23.1\\ CIRS\_112TI\_FIRNADCMP001\_PRIME & 06 Jun. 2009 & 218 & 48.7 & 21.0\\ CIRS\_112TI\_FIRNADCMP002\_PRIME & 07 Jun. 2009 & 274 & -58.9 & 20.2\\ CIRS\_114TI\_FIRNADCMP001\_PRIME & 09 Jul. 2009 & 164 & -71.4 & 25.4\\ CIRS\_115TI\_FIRNADCMP001\_PRIME & 24 Jul. 2009 & 146 & 50.7 & 20.1\\ CIRS\_119TI\_FIRNADCMP002\_PRIME & 12 Oct. 2009 & 166 & 0.4 & 18.3\\ CIRS\_122TI\_FIRNADCMP001\_PRIME & 11 Dec. 2009 & 212 & 39.8 & 24.7\\ CIRS\_123TI\_FIRNADCMP002\_PRIME & 28 Dec. 2009 & 186 & -46.1 & 22.3\\ CIRS\_124TI\_FIRNADCMP002\_PRIME & 13 Jan. 2010 & 272 & -1.2 & 19.0\\ CIRS\_125TI\_FIRNADCMP001\_PRIME & 28 Jan. 2010 & 156 & 39.9 & 27.5\\ CIRS\_125TI\_FIRNADCMP002\_PRIME & 29 Jan. 2010 & 280 & -44.9 & 27.3\\ CIRS\_129TI\_FIRNADCMP001\_PRIME & 05 Apr. 2010 & 119 & -45.1 & 28.2\\ CIRS\_131TI\_FIRNADCMP001\_PRIME & 19 May 2010 & 188 & -30.0 & 22.1\\ CIRS\_131TI\_FIRNADCMP002\_PRIME & 20 May 2010 & 229 & -19.8 & 21.5\\ CIRS\_132TI\_FIRNADCMP002\_PRIME & 05 Jun. 2010 & 167 & 49.4 & 27.4\\ CIRS\_133TI\_FIRNADCMP001\_PRIME & 20 Jun. 2010 & 187 & -49.7 & 36.1\\ CIRS\_134TI\_FIRNADCMP001\_PRIME & 06 Jul. 2010 & 251 & -10.0 & 20.0\\ CIRS\_138TI\_FIRNADCMP001\_PRIME & 24 Sep. 2010 & 190 & -30.1 & 21.2\\ CIRS\_139TI\_COMPMAP001\_PRIME* & 14 Oct. 2010 & 132 & -70.9 & 20.6\\ CIRS\_139TI\_COMPMAP001\_PRIME* & 14 Oct. 2010 & 108 & -53.8 & 16.7\\ CIRS\_148TI\_FIRNADCMP001\_PRIME & 08 May 2011 & 200 & -10.0 & 18.3\\ CIRS\_153TI\_FIRNADCMP001\_PRIME & 11 Sep. 2011 & 227 & 9.9 & 19.0\\ CIRS\_158TI\_FIRNADCMP501\_PRIME & 13 Dec. 2011 & 369 & -29.9 & 24.7\\ CIRS\_159TI\_FIRNADCMP001\_PRIME & 02 Jan. 2012 & 275 & -42.2 & 23.7\\ CIRS\_160TI\_FIRNADCMP001\_PRIME & 29 Jan. 2012 & 322 & -40.0 & 21.7\\ CIRS\_160TI\_FIRNADCMP002\_PRIME & 30 Jan. 2012 & 280 & -0.2 & 18.3\\ CIRS\_161TI\_FIRNADCMP001\_PRIME & 18 Feb. 
2012 & 121 & 9.9 & 18.4\\ CIRS\_161TI\_FIRNADCMP002\_PRIME & 19 Feb. 2012 & 89 & -15.0 & 17.3\\ CIRS\_166TI\_FIRNADCMP001\_PRIME & 22 May 2012 & 318 & -19.9 & 19.9\\ CIRS\_167TI\_FIRNADCMP002\_PRIME & 07 Jun. 2012 & 293 & -45.4 & 21.7\\ CIRS\_169TI\_FIRNADCMP001\_PRIME & 24 Jul. 2012 & 258 & -9.7 & 20.7\\ CIRS\_172TI\_FIRNADCMP001\_PRIME & 26 Sep. 2012 & 282 & 44.9 & 18.5\\ CIRS\_172TI\_FIRNADCMP002\_PRIME & 26 Sep. 2012 & 270 & -70.4 & 23.2\\ CIRS\_174TI\_FIRNADCMP002\_PRIME & 13 Nov. 2012 & 298 & -71.8 & 21.8\\ CIRS\_175TI\_FIRNADCMP002\_PRIME & 29 Nov. 2012 & 299 & -59.9 & 19.3\\ CIRS\_185TI\_FIRNADCMP001\_PRIME & 05 Apr. 2013 & 244 & 15.0 & 20.1\\ CIRS\_185TI\_FIRNADCMP002\_PRIME & 06 Apr. 2013 & 303 & -88.9 & 16.8\\ CIRS\_190TI\_FIRNADCMP001\_PRIME & 23 May 2013 & 224 & -0.2 & 25.6\\ CIRS\_190TI\_FIRNADCMP002\_PRIME & 24 May 2013 & 298 & -45.0 & 20.0\\ CIRS\_194TI\_FIRNADCMP001\_PRIME & 10 Jul. 2013 & 186 & 30.0 & 19.7\\ CIRS\_195TI\_FIRNADCMP001\_PRIME & 25 Jul. 2013 & 186 & 19.6 & 24.5\\ CIRS\_197TI\_FIRNADCMP001\_PRIME & 11 Sep. 2013 & 330 & 60.5 & 19.4\\ CIRS\_198TI\_FIRNADCMP001\_PRIME & 13 Oct. 2013 & 187 & 88.9 & 8.7\\ CIRS\_198TI\_FIRNADCMP002\_PRIME & 14 Oct. 2013 & 306 & -69.8 & 24.0\\ CIRS\_199TI\_FIRNADCMP001\_PRIME & 30 Nov. 2013 & 329 & 68.4 & 23.9\\ CIRS\_200TI\_FIRNADCMP001\_PRIME & 01 Jan. 2014 & 187 & 49.9 & 19.6\\ CIRS\_200TI\_FIRNADCMP002\_PRIME & 02 Jan. 2014 & 210 & -59.8 & 21.3\\ CIRS\_201TI\_FIRNADCMP001\_PRIME & 02 Feb. 2014 & 329 & 19.9 & 26.8\\ CIRS\_201TI\_FIRNADCMP002\_PRIME & 03 Feb. 2014 & 234 & -39.6 & 20.9\\ CIRS\_203TI\_FIRNADCMP001\_PRIME & 07 Apr. 2014 & 187 & 75.0 & 18.0\\ CIRS\_203TI\_FIRNADCMP002\_PRIME & 07 Apr. 2014 & 239 & 0.5 & 27.5\\ CIRS\_204TI\_FIRNADCMP002\_PRIME & 18 May 2014 & 199 & 0.4 & 27.0\\ CIRS\_205TI\_FIRNADCMP001\_PRIME & 18 Jun. 2014 & 144 & -45.1 & 20.5\\ CIRS\_205TI\_FIRNADCMP002\_PRIME & 18 Jun. 2014 & 161 & 30.3 & 19.1\\ CIRS\_206TI\_FIRNADCMP001\_PRIME & 19 Jul. 
2014 & 181 & -50.3 & 17.8\\ CIRS\_206TI\_FIRNADCMP002\_PRIME & 20 Jul. 2014 & 161 & 30.6 & 18.4\\ CIRS\_207TI\_FIRNADCMP001\_PRIME & 20 Aug. 2014 & 179 & -70.0 & 17.8\\ CIRS\_207TI\_FIRNADCMP002\_PRIME & 21 Aug. 2014 & 163 & 79.7 & 17.6\\ CIRS\_208TI\_FIRNADCMP001\_PRIME & 21 Sep. 2014 & 329 & -80.0 & 15.6\\ CIRS\_208TI\_FIRNADCMP002\_PRIME & 22 Sep. 2014 & 175 & 60.5 & 17.8\\ CIRS\_209TI\_FIRNADCMP001\_PRIME & 23 Oct. 2014 & 181 & -35.2 & 17.7\\ CIRS\_209TI\_FIRNADCMP002\_PRIME & 24 Oct. 2014 & 233 & 50.5 & 18.5\\ CIRS\_210TI\_FIRNADCMP001\_PRIME & 10 Dec. 2014 & 329 & -70.3 & 25.2\\ CIRS\_210TI\_FIRNADCMP002\_PRIME & 11 Dec. 2014 & 237 & -19.6 & 27.6\\ CIRS\_211TI\_FIRNADCMP001\_PRIME & 11 Jan. 2015 & 225 & 19.6 & 25.0\\ CIRS\_211TI\_FIRNADCMP002\_PRIME & 12 Jan. 2015 & 258 & 40.0 & 19.3\\ CIRS\_212TI\_FIRNADCMP002\_PRIME & 13 Feb. 2015 & 257 & -40.0 & 30.1\\ CIRS\_213TI\_FIRNADCMP001\_PRIME & 16 Mar. 2015 & 187 & -31.6 & 19.6\\ CIRS\_213TI\_FIRNADCMP002\_PRIME & 16 Mar. 2015 & 258 & 23.4 & 20.5\\ CIRS\_215TI\_FIRNADCMP001\_PRIME & 07 May 2015 & 250 & -50.0 & 31.0\\ CIRS\_215TI\_FIRNADCMP002\_PRIME & 08 May 2015 & 232 & -30.0 & 21.7\\ CIRS\_218TI\_FIRNADCMP001\_PRIME & 06 Jul. 2015 & 249 & -20.0 & 19.9\\ CIRS\_218TI\_FIRNADCMP002\_PRIME & 07 Jul. 2015 & 232 & -40.0 & 25.2\\ CIRS\_222TI\_FIRNADCMP001\_PRIME & 28 Sep. 2015 & 125 & 30.0 & 21.7\\ CIRS\_222TI\_FIRNADCMP002\_PRIME & 29 Sep. 2015 & 233 & -0.1 & 18.6\\ CIRS\_230TI\_FIRNADCMP001\_PRIME & 15 Jan. 2016 & 282 & -15.0 & 19.5\\ CIRS\_231TI\_FIRNADCMP001\_PRIME & 31 Jan. 2016 & 254 & 15.0 & 19.6\\ CIRS\_231TI\_FIRNADCMP002\_PRIME & 01 Feb. 2016 & 236 & 0.4 & 18.9\\ CIRS\_232TI\_FIRNADCMP001\_PRIME & 16 Feb. 2016 & 249 & -50.2 & 24.5\\ CIRS\_232TI\_FIRNADCMP002\_PRIME & 17 Feb. 2016 & 92 & -19.8 & 21.5\\ CIRS\_234TI\_FIRNADCMP001\_PRIME & 04 Apr. 
2016 & 328 & 19.8 & 24.7\\ CIRS\_235TI\_FIRNADCMP001\_PRIME & 06 May 2016 & 163 & -60.0 & 19.7\\ CIRS\_235TI\_FIRNADCMP002\_PRIME & 07 May 2016 & 221 & 15.7 & 20.1\\ CIRS\_236TI\_FIRNADCMP001\_PRIME & 07 Jun. 2016 & 88 & -70.5 & 20.5\\ CIRS\_236TI\_FIRNADCMP002\_PRIME & 07 Jun. 2016 & 238 & 60.8 & 20.0\\ CIRS\_238TI\_FIRNADCMP002\_PRIME & 25 Jul. 2016 & 220 & 15.4 & 20.5\\ CIRS\_248TI\_FIRNADCMP001\_PRIME & 13 Nov. 2016 & 185 & -88.9 & 18.3\\ CIRS\_248TI\_FIRNADCMP002\_PRIME & 14 Nov. 2016 & 186 & 30.3 & 17.4\\ CIRS\_250TI\_FIRNADCMP002\_PRIME & 30 Nov. 2016 & 219 & -19.8 & 28.4\\ CIRS\_259TI\_COMPMAP001\_PIE & 01 Feb. 2017 & 302 & -69.0 & 20.6\\ CIRS\_270TI\_FIRNADCMP001\_PRIME & 21 Apr. 2017 & 166 & -74.7 & 25.4\\ CIRS\_283TI\_COMPMAP001\_PRIME* & 10 Jul. 2017 & 114 & 60.0 & 26.5\\ CIRS\_283TI\_COMPMAP001\_PRIME* & 10 Jul. 2017 & 134 & 67.5 & 24.7\\ CIRS\_287TI\_COMPMAP001\_PIE & 11 Aug. 2017 & 305 & 88.9 & 9.3\\ CIRS\_288TI\_COMPMAP002\_PIE & 11 Aug. 2017 & 269 & 66.7 & 23.7\\ CIRS\_292TI\_COMPMAP001\_PRIME & 12 Sep. 2017 & 192 & 70.4 & 19.2\\ \hline \end{longtable} \twocolumn \section*{References} \bibliographystyle{elsarticle-harv}
\section{Introduction}\label{intro} After the work by \citet{my94}, the possibility that Gamma-Ray Bursts (GRBs) originate from a beamed emission has been one of the most debated issues about the nature of the GRB sources in the current literature \citep[see e.g.][and references therein]{p04,m06}. In particular, on the ground of the theoretical considerations by \citet{sph99}, it was conjectured that, within the framework of a conical jet model, one may find that the gamma-ray energy released in all GRBs is narrowly clustered around $5 \times 10^{50}$ ergs \citep{fa01}. In a recent letter \citep{PowerLaws} we analyzed the approximate power-law relations between the Lorentz gamma factor and the radial coordinate usually adopted in the current GRB literature. We pointed out that such relations are mathematically correct but only approximately valid, in a very limited range of the physical and astrophysical parameters and in an asymptotic regime which is reached only for a very short time, if any. Therefore, such relations were shown not to be applicable to GRBs. Instead, the exact analytic solutions of the equations of motion of a relativistic thin and uniform shell expanding in the interstellar medium (ISM) in the fully radiative and adiabatic regimes were presented there. This program of identifying exact analytic solutions instead of approximate power-law solutions is carried one step forward in this Letter. Using the above exact solutions, we here introduce the exact analytic expressions of the relations between the detector arrival time $t_a^d$ of the GRB afterglow radiation and the corresponding half-opening angle $\vartheta$ of the expanding source visible area due to the relativistic beaming \citep[see e.g.][]{Brasile}. Such a visible area must be computed not over the spherical surface of the shell, but over the EQuiTemporal Surface (EQTS) of detector arrival time $t_a^d$, i.e. 
over the surface locus of points which are sources of the radiation reaching the observer at the same arrival time $t_a^d$ \citep[see][for details]{EQTS_ApJL,EQTS_ApJL2}. The exact analytic expressions for the EQTSs in GRB afterglows, which have been presented in \citet{EQTS_ApJL2}, are therefore crucial in our present derivation. This approach clearly differs from the ones in the current literature, which usually neglect the contributions of the radiation emitted from the entire EQTS. The analytic relations between $t_a^d$ and $\vartheta$ presented in this Letter allow us to compute, assuming that the expanding shell is not spherically symmetric but is confined into a narrow jet with half-opening angle $\vartheta_\circ$, the value $(t_a^d)_{jet}$ of the detector arrival time at which we start to ``see'' the sides of the jet. A corresponding ``break'' in the observed light curve should occur later than $(t_a^d)_{jet}$ \citep[see e.g.][]{sph99}. In the current literature, $(t_a^d)_{jet}$ is usually defined as the detector arrival time at which $\gamma \sim 1/\vartheta_\circ$, where $\gamma$ is the Lorentz factor of the expanding shell \citep[see e.g.][and also our Eq.(\ref{Sist}) below]{sph99}. In our formulation we do not consider effects of lateral spreading of the jet. In the current literature, in the case of the adiabatic regime, different and mutually contrasting approximate power-law relations between $(t_a^d)_{jet}$ and $\vartheta_\circ$ have been presented \citep[see e.g.][]{sph99,pm99,p06}. We show here that in four specific cases of GRBs, encompassing more than $5$ orders of magnitude in energy and more than $2$ orders of magnitude in ISM density, both the relation by \citet{pm99} and the one by \citet{sph99} overestimate the exact analytic result, while a third relation, recently presented by \citet{p06}, slightly underestimates it. 
We also present an empirical fit of the numerical solutions of the exact equations for the adiabatic regime, compared and contrasted with the three approximate relations above. In the fully radiative regime, and therefore in the general case, no simple power-law relation of the kind found in the adiabatic regime can be established, and the general approach we have outlined has to be followed. Although evidence for spherically symmetric emission in GRBs is emerging from observations \citep{sa06} and from theoretical arguments \citep{Spectr1,cospar04}, it is appropriate to develop here an exact theoretical treatment of the relation between $(t_a^d)_{jet}$ and $\vartheta_\circ$. This will allow an assessment of the existence and, if confirmed, of the extent of beaming in GRBs, which in turn is essential for establishing their correct energetics. \section{Analytic formulas for the beaming angle} The boundary of the visible region of a relativistic thin and uniform shell expanding in the ISM is defined by \citep[see e.g.][and references therein]{Brasile}: \begin{equation} \cos\vartheta = \frac{v}{c}\, , \label{bound} \end{equation} where $\vartheta$ is the angle between the line of sight and the radial expansion velocity of a point on the shell surface, $v$ is the velocity of the expanding shell and $c$ is the speed of light. To find the value of the half-opening beaming angle $\vartheta_\circ$ corresponding to an observed arrival time $(t_a^d)_{jet}$, this equation must be solved together with the equation describing the EQTS of arrival time $(t_a^d)_{jet}$ \citep{EQTS_ApJL2}. In other words, we must solve the following system: \begin{equation} \left\{ \begin{array}{rcl} \cos\vartheta_\circ & = & \frac{v\left(r\right)}{c}\\ \cos\vartheta_\circ & = & \cos\left\{\left.\vartheta \left[r;(t_a^d)_{jet}\right]\right|_{EQTS\left[(t_a^d)_{jet}\right]}\right\} \end{array} \right.\, . 
\label{Sist} \end{equation} It should be noted that, in the limit $\vartheta_\circ \to 0$ and $v \to c$, this definition of $(t_a^d)_{jet}$ is equivalent to the one usually adopted in the current literature (see sec. \ref{intro}). \subsection{The fully radiative regime} In this case, the analytic solution of the equations of motion gives \citep[see][]{EQTS_ApJL2,PowerLaws}: \begin{equation} \frac{v}{c} = \frac{\sqrt{\left(1-\gamma_\circ^{-2}\right)\left[1+\left(M_{ism}/M_B\right)+\left(M_{ism}/M_B\right)^2\right]}}{1+\left(M_{ism}/M_B\right)\left(1+\gamma_\circ^{-1}\right)\left[1+\textstyle\frac{1}{2}\left(M_{ism}/M_B\right)\right]}\, , \label{vRad} \end{equation} where $\gamma_\circ$ and $M_B$ are respectively the values of the Lorentz gamma factor and of the mass of the accelerated baryons at the beginning of the afterglow phase and $M_{ism}$ is the value of the ISM matter swept up to radius $r$: $M_\mathrm{ism}=(4\pi/3)m_pn_{ism}(r^3-{r_\circ}^3)$, where $r_\circ$ is the starting radius of the baryonic matter shell, $m_p$ is the proton mass and $n_{ism}$ is the ISM number density. 
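For concreteness, Eq.(\ref{vRad}) can be evaluated numerically. The following Python sketch is our own illustration (the function and variable names are ours; $x$ denotes the swept-up mass ratio $M_{ism}/M_B$); it exhibits the expected behavior that the shell starts at its initial velocity and decelerates as matter is swept up.

```python
import math

def v_over_c_radiative(gamma0, x):
    """Eq.(vRad): v/c in the fully radiative regime.

    gamma0 : Lorentz gamma factor at the beginning of the afterglow phase
    x      : swept-up ISM mass ratio M_ism / M_B
    """
    num = math.sqrt((1.0 - gamma0**-2) * (1.0 + x + x**2))
    den = 1.0 + x * (1.0 + 1.0 / gamma0) * (1.0 + 0.5 * x)
    return num / den

# At x = 0 (no swept-up matter) the shell still moves with its initial
# velocity sqrt(1 - gamma0^-2); v/c then decreases monotonically in the
# sampled range as x grows.
```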
Using the analytic expression for the EQTS given in \citet{EQTS_ApJL2}, Eq.(\ref{Sist}) takes the form: \begin{equation} \left\{ \begin{array}{rcl} \cos\vartheta_\circ & = & \frac{\sqrt{\left(1-\gamma_\circ^{-2}\right)\left[1+\left(M_{ism}/M_B\right)+\left(M_{ism}/M_B\right)^2\right]}}{1+\left(M_{ism}/M_B\right)\left(1+\gamma_\circ^{-1}\right)\left[1+\textstyle\frac{1}{2}\left(M_{ism}/M_B\right)\right]}\\[18pt] \cos\vartheta_\circ & = & \frac{M_B - m_i^\circ}{2r\sqrt{C}}\left( {r - r_\circ } \right) +\frac{m_i^\circ r_\circ }{8r\sqrt{C}}\left[ {\left( {\frac{r}{{r_\circ }}} \right)^4 - 1} \right] \\[6pt] & + & \frac{{r_\circ \sqrt{C} }}{{12rm_i^\circ A^2 }} \ln \left\{ {\frac{{\left[ {A + \left(r/r_\circ\right)} \right]^3 \left(A^3 + 1\right)}}{{\left[A^3 + \left( r/r_\circ \right)^3\right] \left( {A + 1} \right)^3}}} \right\} \\[6pt] & + & \frac{ct_\circ}{r} - \frac{c(t_a^d)_{jet}}{r\left(1+z\right)} + \frac{r^\star}{r} \\[6pt] & + & \frac{{r_\circ \sqrt{3C} }}{{6rm_i^\circ A^2 }} \left[ \arctan \frac{{2\left(r/r_\circ\right) - A}}{{A\sqrt{3} }} - \arctan \frac{{2 - A}}{{A\sqrt{3} }}\right] \end{array} \right. \label{SistRad} \end{equation} where $t_\circ$ is the value of the time $t$ at the beginning of the afterglow phase, $m_i^\circ=(4/3)\pi m_p n_{\mathrm{ism}} r_\circ^3$, $r^\star$ is the initial size of the expanding source, $A=[(M_B-m_i^\circ)/m_i^\circ]^{1/3}$, $C={M_B}^2(\gamma_\circ-1)/(\gamma_\circ +1)$ and $z$ is the cosmological redshift of the source. 
\subsection{The adiabatic regime} In this case, the analytic solution of the equations of motion gives \citep[see][]{EQTS_ApJL2,PowerLaws}: \begin{equation} \frac{v}{c} = \sqrt{\gamma_\circ^2-1}\left(\gamma_\circ+\frac{M_{ism}}{M_B}\right)^{-1} \label{vAd} \end{equation} Using the analytic expression for the EQTS given in \citet{EQTS_ApJL2}, Eq.(\ref{Sist}) takes the form: \begin{equation} \left\{ \begin{array}{rcl} \cos\vartheta_\circ & = & \sqrt{\gamma_\circ^2-1}\left(\gamma_\circ+\frac{M_{ism}}{M_B}\right)^{-1} \\[18pt] \cos\vartheta_\circ & = & \frac{m_i^\circ}{4M_B\sqrt{\gamma_\circ^2-1}}\left[\left(\frac{r}{r_\circ}\right)^3 - \frac{r_\circ}{r}\right] + \frac{ct_\circ}{r} \\[6pt] & - & \frac{c(t_a^d)_{jet}}{r\left(1+z\right)} + \frac{r^\star}{r} - \frac{\gamma_\circ-\left(m_i^\circ/M_B\right)}{\sqrt{\gamma_\circ^2-1}}\left[\frac{r_\circ}{r} - 1\right] \end{array} \right. \label{SistAd} \end{equation} where all the quantities have the same definition as in Eq.(\ref{SistRad}). \subsection{The comparison between the two solutions} \begin{figure} \includegraphics[width=\hsize,clip]{Beam_ad_rad_comp} \caption{Comparison between the numerical solution of Eq.(\ref{SistRad}) assuming fully radiative regime (blue line) and the corresponding one of Eq.(\ref{SistAd}) assuming adiabatic regime (red line). The departure from power-law behavior at small arrival time follows from the constant Lorentz $\gamma$ factor regime, while the one at large angles follows from the approach to the non-relativistic regime \citep[see details in section \ref{fit} and Fig. \ref{Beam_Comp_Num_Fit_Log}, as well as in][]{PowerLaws}.} \label{Beam_Comp_Rad_Ad} \end{figure} In Fig. \ref{Beam_Comp_Rad_Ad} we plot the numerical solutions of both Eq.(\ref{SistRad}), corresponding to the fully radiative regime, and Eq.(\ref{SistAd}), corresponding to the adiabatic one. Both curves have been plotted assuming the same initial conditions, namely the ones of GRB 991216 \citep[see][]{Brasile}. 
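Eq.(\ref{vAd}) admits an equally direct numerical evaluation. In the sketch below (again our own illustration, not part of the original treatment) we also repeat the fully radiative expression of Eq.(\ref{vRad}), so that the two regimes can be compared at equal swept-up mass ratio; for the sampled parameters the fully radiative shell is always the slower one, i.e. it decelerates faster.

```python
import math

def v_over_c_adiabatic(gamma0, x):
    """Eq.(vAd): v/c in the adiabatic regime, with x = M_ism / M_B."""
    return math.sqrt(gamma0**2 - 1.0) / (gamma0 + x)

def v_over_c_radiative(gamma0, x):
    """Eq.(vRad): v/c in the fully radiative regime, for comparison."""
    num = math.sqrt((1.0 - gamma0**-2) * (1.0 + x + x**2))
    den = 1.0 + x * (1.0 + 1.0 / gamma0) * (1.0 + 0.5 * x)
    return num / den

# Both regimes coincide at x = 0, where no ISM matter has been swept up.
```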
\section{Comparison with the existing literature} \begin{figure*} \includegraphics[width=0.5\hsize,clip]{Beam_ad_comp_sph99_pm99_p06_991216} \includegraphics[width=0.5\hsize,clip]{Beam_ad_comp_sph99_pm99_p06_980519}\\ \includegraphics[width=0.5\hsize,clip]{Beam_ad_comp_sph99_pm99_p06_031203} \includegraphics[width=0.5\hsize,clip]{Beam_ad_comp_sph99_pm99_p06_980425} \caption{Comparison between the numerical solution of Eq.(\ref{SistAd}) (red line) and the corresponding approximate formulas given in Eq.(\ref{ThetaSPH99}) (blue line), in Eq.(\ref{ThetaPM99}) (black line), and in Eq.(\ref{ThetaP06}) (green line). All four curves have been plotted for four different GRBs: a) GRB 991216 \citep[see][]{Brasile}, b) GRB 980519 \citep[see][]{980519}, c) GRB 031203 \citep[see][]{031203}, d) GRB 980425 \citep[see][]{cospar02}. The ranges of the two axes have been chosen to focus on the sole domains of application of the approximate treatments in the current literature.} \label{Beam_Comp} \end{figure*} Three different approximate formulas for the relation between $(t_a^d)_{jet}$ and $\vartheta_\circ$ have been given in the current literature, all assuming the adiabatic regime. \citet{pm99} proposed: \begin{equation} \cos\vartheta_\circ \simeq 1 - 5.9\times10^7 \left(\frac{n_{ism}}{E}\right)^{1/4} \left[\frac{(t_a^d)_{jet}}{1+z}\right]^{3/4} \, , \label{ThetaPM99} \end{equation} \citet{sph99}, instead, advanced: \begin{equation} \vartheta_\circ \simeq 7.4\times 10^3 \left(\frac{n_{ism}}{E}\right)^{1/8} \left[\frac{(t_a^d)_{jet}}{1+z}\right]^{3/8} \, . \label{ThetaSPH99} \end{equation} In both Eq.(\ref{ThetaPM99}) and Eq.(\ref{ThetaSPH99}), $(t_a^d)_{jet}$ is measured in seconds, $E$ is the source initial energy measured in ergs and $n_{ism}$ is the ISM number density in particles/cm$^3$. The formula by \citet{sph99} has been applied quite often in the current literature \citep[see e.g.][]{fa01,ggl04,fa05}. 
Both Eq.(\ref{ThetaPM99}) and Eq.(\ref{ThetaSPH99}) compute the arrival time of the photons at the detector assuming that all the radiation is emitted at $\vartheta=0$ (i.e. on the line of sight), neglecting the full shape of the EQTSs. Recently, a new expression has been proposed by \citet{p06}, again neglecting the full shape of the EQTSs but assuming that all the radiation is emitted from $\vartheta = 1/\gamma$, i.e. from the boundary of the visible region. Such an expression is: \begin{equation} \vartheta_\circ \simeq 5.4\times 10^3 \left(\frac{n_{ism}}{E}\right)^{1/8} \left[\frac{(t_a^d)_{jet}}{1+z}\right]^{3/8} \, . \label{ThetaP06} \end{equation} In Fig. \ref{Beam_Comp} we plot Eq.(\ref{ThetaPM99}), Eq.(\ref{ThetaSPH99}) and Eq.(\ref{ThetaP06}) together with the numerical solution of Eq.(\ref{SistAd}) relative to the adiabatic regime. All four curves have been plotted assuming the same initial conditions for four different GRBs, encompassing more than $5$ orders of magnitude in energy and more than $2$ orders of magnitude in ISM density: a) GRB 991216 \citep[see][]{Brasile}, b) GRB 980519 \citep[see][]{980519}, c) GRB 031203 \citep[see][]{031203}, d) GRB 980425 \citep[see][]{cospar02}. The approximate Eq.(\ref{ThetaSPH99}) by \citet{sph99} and Eq.(\ref{ThetaP06}) by \citet{p06} both imply a power-law relation between $\vartheta_\circ$ and $(t_a^d)_{jet}$ with constant index $3/8$ for any value of $\vartheta_\circ$, while Eq.(\ref{ThetaPM99}) by \citet{pm99} implies a power-law relation with constant index $3/8$ only for $\vartheta_\circ \to 0$ (for greater $\vartheta_\circ$ values the relation is trigonometric). All the above three approximate treatments are based on the approximate power-law solutions of the GRB afterglow dynamics which have been shown in \citet{PowerLaws} not to be applicable to GRBs. Moreover, none of them fully takes into account the structure of the EQTSs, albeit in different ways. 
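The three literature formulas can be put side by side numerically. The Python sketch below is our own illustration: $(t_a^d)_{jet}$ is in seconds, $E$ in erg and $n_{ism}$ in cm$^{-3}$, as in the normalizations quoted above, and we make no assumption about the angular units implied by the numerical prefactors, using the formulas only to compare relative behavior. For representative parameters the ordering \citet{pm99} $>$ \citet{sph99} $>$ \citet{p06} emerges, consistent with the over- and underestimates discussed in the text.

```python
import math

def theta_pm99(t_jet, z, n_ism, E):
    """Eq.(ThetaPM99): cos(theta_0) is given explicitly; invert it."""
    c = 1.0 - 5.9e7 * (n_ism / E) ** 0.25 * (t_jet / (1.0 + z)) ** 0.75
    return math.acos(c)

def theta_sph99(t_jet, z, n_ism, E):
    """Eq.(ThetaSPH99): power law with constant index 3/8."""
    return 7.4e3 * (n_ism / E) ** 0.125 * (t_jet / (1.0 + z)) ** 0.375

def theta_p06(t_jet, z, n_ism, E):
    """Eq.(ThetaP06): same power law, smaller prefactor."""
    return 5.4e3 * (n_ism / E) ** 0.125 * (t_jet / (1.0 + z)) ** 0.375
```

Since Eq.(\ref{ThetaSPH99}) and Eq.(\ref{ThetaP06}) share the same exponents, their ratio is the constant $7.4/5.4$ for any input.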
Both Eq.(\ref{ThetaPM99}) and Eq.(\ref{ThetaSPH99}), which assume all the radiation coming from $\vartheta=0$, overestimate the behavior of the exact solution. On the other hand, Eq.(\ref{ThetaP06}), which assumes all the radiation coming from $\vartheta\sim 1/\gamma$, is a better approximation than the previous two, but still slightly underestimates the exact solution. \section{An empirical fit of the numerical solution}\label{fit} \begin{figure} \includegraphics[width=\hsize,clip]{Beam_ad_comp_num_fit} \caption{The overlapping between the numerical solution of Eq.(\ref{SistAd}) (thick green lines) and the approximate fitting function given in Eq.(\ref{ThetaFIT}) (thin red lines) is shown in the four cases (a--d) represented in Fig. \ref{Beam_Comp}.} \label{Beam_Comp_Num_Fit} \end{figure} For completeness, we now fit our exact solution with a suitable explicit functional form in the four cases considered in Fig. \ref{Beam_Comp}. We choose the same functional form of Eq.(\ref{ThetaP06}), which is the closest one to the numerical solution, using the numerical factor in front of it (i.e. $5.4\times 10^3$) as the fitting parameter. We find that the following approximate expression: \begin{equation} \vartheta_\circ \simeq 5.84\times 10^3 \left(\frac{n_{ism}}{E}\right)^{1/8} \left[\frac{(t_a^d)_{jet}}{1+z}\right]^{3/8} \label{ThetaFIT} \end{equation} is in agreement with the numerical solution in all the four cases presented in Fig. \ref{Beam_Comp} (see Fig. \ref{Beam_Comp_Num_Fit}). However, if we enlarge the axis ranges to their full extension (i.e. the one of Fig. \ref{Beam_Comp_Rad_Ad}), we see that such approximate empirical fitting formula can only be applied for $\vartheta_\circ < 25^\circ$ \emph{and} $(t_a^d)_{jet} > 10^2$ s (see the gray dashed rectangle in Fig. \ref{Beam_Comp_Num_Fit_Log}). 
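Since Eq.(\ref{ThetaFIT}) shares the functional form of Eq.(\ref{ThetaP06}) and differs only in the prefactor, its implementation is immediate. In the sketch below (our own illustration; the helper name \texttt{fit\_applicable} is ours) we also encode the validity box quoted above, $\vartheta_\circ < 25^\circ$ and $(t_a^d)_{jet} > 10^2$ s.

```python
def theta_fit(t_jet, z, n_ism, E):
    """Eq.(ThetaFIT): empirical fit of the exact adiabatic solution.

    t_jet in seconds, E in erg, n_ism in particles/cm^3.
    """
    return 5.84e3 * (n_ism / E) ** 0.125 * (t_jet / (1.0 + z)) ** 0.375

def fit_applicable(t_jet, theta0_deg):
    """Validity box of the fit: theta_0 < 25 deg AND t_jet > 1e2 s."""
    return theta0_deg < 25.0 and t_jet > 1.0e2
```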
\begin{figure} \includegraphics[width=\hsize,clip]{Beam_ad_comp_num_fit_log} \caption{Comparison between the numerical solution of Eq.(\ref{SistAd}) (thick green lines) and the approximate fitting function given in Eq.(\ref{ThetaFIT}) (thin red lines) in all the four cases (a--d) represented in Fig. \ref{Beam_Comp}. The ranges of the two axes have been chosen to have their full extension (i.e. the one of Fig. \ref{Beam_Comp_Rad_Ad}). The dashed gray lines are the boundaries of the region where the empirical fitting function can be applied.} \label{Beam_Comp_Num_Fit_Log} \end{figure} An equivalent empirical fit in the fully radiative regime is not possible. In this case, indeed, there is a domain in the $((t_a^d)_{jet},\vartheta_\circ)$ plane where the numerical solution shows a power-law dependence on time, with an index $\sim 0.423$ (see Fig. \ref{Beam_Comp_Rad_Ad}). However, the dependence on the energy cannot be factorized out with a simple power-law. Therefore, in the fully radiative regime, which is the relevant one for our GRB model \citep[see e.g.][]{Brasile}, the application of the full Eq.(\ref{SistRad}) does not appear to be avoidable. \section{Conclusions} We have presented in Eqs.(\ref{SistRad},\ref{SistAd}) the exact analytic relations between the jet half-opening angle $\vartheta_\circ$ and the detector arrival time $(t_a^d)_{jet}$ at which we start to ``see'' the sides of the jet, which may be used in GRB sources in which an achromatic light curve break is observed. The limiting cases of fully radiative and adiabatic regimes have been outlined. Such relations differ from the approximate ones presented in the current literature in the adiabatic regime: both the ones by \citet{pm99} and by \citet{sph99} overestimate the exact analytic result, while the one just presented by \citet{p06} slightly underestimates it. For a limited domain in the $((t_a^d)_{jet},\vartheta_\circ)$ plane defined in Fig. 
\ref{Beam_Comp_Num_Fit_Log}, and only in the adiabatic regime, an empirical fit of the numerical solution of the exact Eq.(\ref{SistAd}) has been given in Eq.(\ref{ThetaFIT}). However, in the fully radiative regime such a simple empirical power-law fit does not exist and the application of the exact Eq.(\ref{SistRad}) is needed. This same situation is expected also to occur in the general case. In light of the above results, the assertion that the gamma-ray energy released in all GRBs is narrowly clustered around $5 \times 10^{50}$ ergs \citep{fa01} should be reconsidered. In addition, the high quality data by Swift, going without gaps from the ``prompt emission'' all the way to latest afterglow phases, will help in uniquely identifying the equations of motion of the GRB sources and the emission regimes. Consequently, on the ground of the results presented in this Letter, which encompass the different dynamical and emission regimes in GRB afterglow, an assessment on the existence and, in the positive case, on the extent of beaming in GRBs will be possible. This is a step in the determination of their energetics. \acknowledgments We thank an anonymous referee for his/her interesting suggestions.
\section{Introduction} Let $\Gamma$ be a Fuchsian group of the first kind \cite[Section 1.7, page 28]{Miyake}. Examples of such groups are the important modular groups such as $SL_2(\Bbb Z)$ and its congruence subgroups $\Gamma_0(N)$, $\Gamma_1(N)$, and $\Gamma(N)$ \cite[Section 4.2]{Miyake}. Let $\Bbb H$ be the complex upper half-plane. The quotient $\Gamma\backslash \Bbb H$ can be compactified by adding a finite number of $\Gamma$-orbits of points in $\mathbb R\cup \{\infty\}$ called cusps of $\Gamma$ and we obtain a compact Riemann surface which will be denoted by $\mathfrak{R}_\Gamma$. For $l\ge 1$, let $H^l\left(\mathfrak R_\Gamma\right)$ be the space of all holomorphic differentials of degree $l$ on $\mathfrak{R}_\Gamma$ (see \cite{FK} or Section \ref{mhd} in this paper). Let $m\ge 2$ be an even integer. Let $S_m(\Gamma)$ be the space of (holomorphic) cusp forms of weight $m$ (see Section \ref{prelim}). It is well--known that $S_2(\Gamma)$ is naturally isomorphic to the vector space $H^1\left(\mathfrak R_\Gamma\right)$ (see \cite[Theorem 2.3.2]{Miyake}). This is employed in many instances in studying various properties of modular curves (see for example \cite[Chapter 6]{ono}). In this paper we study the generalization of this concept to holomorphic differentials of higher order. For an even integer $m\ge 4$, in general, the space $S_m(\Gamma)$ is too big to be isomorphic to $H^{m/2}\left(\mathfrak R_\Gamma\right)$ due to the presence of cusps and elliptic points. So, in general we define a subspace $$ S^H_m(\Gamma)=\left\{f\in S_m(\Gamma); \ \ \text{$f=0$ or $f$ satisfies (\ref{int-mhd-8})} \right\}, $$ where \begin{equation}\label{int-mhd-8} \mathfrak c_f\ge \sum_{\mathfrak a\in \mathfrak R_\Gamma, \ elliptic} \left[\frac{m}{2}(1-1/e_{\mathfrak a})\right]\mathfrak a+ \left(\frac{m}{2} -1\right)\sum_{\mathfrak b\in \mathfrak R_\Gamma, \ cusp} \mathfrak b. 
\end{equation} The integral divisor $\mathfrak c_f$ is defined in Lemma \ref{prelim-1} while the multiplicities $e_{\mathfrak a}$ are defined in Section \ref{prelim}. Now, we have the following result (see Section \ref{mhd-cont}): \begin{Thm} The usual map $f\longmapsto \omega_f$ from the space of all cuspidal modular forms into the space of meromorphic differentials (see \cite[Theorem 2.3.3]{Miyake}) induces an isomorphism of $S^H_m(\Gamma)$ onto $H^{m/2}\left(\mathfrak R_\Gamma\right)$. \end{Thm} We study the space $S^H_m(\Gamma)$ in detail in Section \ref{mhd-cont}. The main results are contained in a very detailed Lemma \ref{mhd-9} and Theorem \ref{mhd-10}. We recall (see \cite[III.5.9]{FK} or Definition \ref{mhd-def}) that $\mathfrak a \in \mathfrak R_\Gamma$ is a $m/2$-Weierstrass point if there exists a non--zero $\omega\in H^{m/2}\left(\mathfrak R_\Gamma\right)$ such that $$ \nu_{\mathfrak a}(\omega)\ge \dim H^{m/2}\left(\mathfrak R_\Gamma\right). $$ Equivalently \cite[Proposition III.5.10]{FK}, if $$ \nu_{\mathfrak a}\left(W\left(\omega_1, \ldots, \omega_t\right)\right)\ge 1, $$ where $W\left(\omega_1, \ldots, \omega_t\right)$ is the Wronskian of holomorphic differential forms $\omega_1, \ldots, \omega_t$ (see Section \ref{mhd}). When $m=2$ we speak about classical Weierstrass points. So, $1$-Weierstrass points are simply Weierstrass points. Weierstrass points on modular curves are very well studied (see for example \cite[Chapter 6]{ono}, \cite{neeman}, \cite{Ogg}, \cite{pete-1}, \cite{pete-2}, \cite{roh}). Higher--order Weierstrass points have not been studied much (see for example \cite{neeman}). \vskip .2in The case $m\ge 4$ is more complex. We recall that $\mathfrak R_\Gamma$ is hyperelliptic if $g(\Gamma)\ge 2$ and there is a degree two map onto $\mathbb P^1$. By general theory \cite[Chapter VII, Proposition 1.10]{Miranda}, if $g(\Gamma)=2$, then $\mathfrak R_\Gamma$ is hyperelliptic. 
If $\mathfrak R_\Gamma$ is not hyperelliptic, then $\dim S_2(\Gamma)= g(\Gamma)\ge 3$, and the regular map $\mathfrak R_\Gamma\longrightarrow \mathbb P^{g(\Gamma)-1}$ attached to a canonical divisor $K$ is an isomorphism onto its image \cite[Chapter VII, Proposition 2.1]{Miranda}. \vskip .2in Let $\Gamma=\Gamma_0(N)$, $N\ge 1$. Put $X_0(N)=\mathfrak R_{\Gamma_0(N)}$. We recall that $g(\Gamma_0(N))\ge 2$ unless $$ \begin{cases} N\in\{1-10, 12, 13, 16, 18, 25\} \ \ \text{when $g(\Gamma_0(N))=0$, and}\\ N\in\{11, 14, 15, 17, 19-21, 24, 27, 32, 36, 49\} \ \ \text{when $g(\Gamma_0(N))=1$.} \end{cases} $$ Let $g(\Gamma_0(N))\ge 2$. Then, we remark that Ogg \cite{Ogg} has determined all $X_0(N)$ which are hyperelliptic curves. In view of Ogg's paper, we see that $X_0(N)$ is {\bf not hyperelliptic} for $N\in \{34, 38, 42, 43, 44, 45, 51-58, 60-70\}$ or $N\ge 72$. This implies $g(\Gamma_0(N))\ge 3$. \vskip .2in We prove the following result (see Theorem \ref{cts-50000}): \vskip .2in \begin{Thm}\label{cts-50000-int} Let $m\ge 4$ be an even integer. Assume that $\mathfrak R_\Gamma$ is not hyperelliptic. Then, we have $$ S_{m, 2}^H(\Gamma)= S_m^H(\Gamma), $$ where $S_{m, 2}^H(\Gamma)$ denotes the subspace of $S_m^H(\Gamma)$ spanned by all monomials $$ f_0^{\alpha_0}f_1^{\alpha_1}\cdots f_{g-1}^{\alpha_{g-1}}, \ \ \alpha_i\in \mathbb Z_{\ge 0}, \ \sum_{i=0}^{g-1} \alpha_i=\frac{m}{2}. $$ Here $f_0, \ldots, f_{g-1}$, $g=g(\Gamma)$, is a basis of $S_2(\Gamma)$. \end{Thm} \vskip .2in The criterion is given by the following corollary (see Corollary \ref{cts-500000}): \vskip .2in \begin{Cor}\label{cts-500000-int} Let $m\ge 4$ be an even integer. Assume that $\mathfrak R_\Gamma$ is not hyperelliptic. Assume that $\mathfrak a_\infty$ is a cusp for $\Gamma$. Let us select a basis $f_0, \ldots, f_{g-1}$, $g=g(\Gamma)$, of $S_2(\Gamma)$. 
Compute $q$--expansions of all monomials $$ f_0^{\alpha_0}f_1^{\alpha_1}\cdots f_{g-1}^{\alpha_{g-1}}, \ \ \alpha_i\in \mathbb Z_{\ge 0}, \ \sum_{i=0}^{g-1} \alpha_i=\frac{m}{2}. $$ Then, $\mathfrak a_\infty$ is {\bf not} a $\frac{m}{2}$--Weierstrass point if and only if there exists a basis of the space of all such monomials, $F_1, \ldots, F_t$, $t=\dim{S_m^H(\Gamma)}=(m-1)(g-1)$ (see Lemma \ref{mhd-9} (v)), such that their $q$--expansions are of the form $$ F_u=a_uq^{u+m/2-1}+ \text{higher order terms in $q$}, \ \ 1\le u\le t, $$ where $$ a_u\in \mathbb C, \ \ a_u\neq 0. $$ \end{Cor} This is useful for explicit computations in SAGE at least when $\Gamma=\Gamma_0(N)$. We give examples in Section \ref{cts} (see Propositions \ref{cts-5001} and \ref{cts-5002}). A different, more theoretical criterion is contained in Theorem \ref{mhd-10}. \vskip .2in Various other aspects of modular curves have been studied in \cite{BKS}, \cite{bnmjk}, \cite{sgal}, \cite{ishida}, \cite{Muic}, \cite{MuicMi}, \cite{mshi} and \cite{yy}. We continue the approach presented in \cite{Muic1}, \cite{Muic2}, and \cite{MuicKodrnja}. In the proof of Theorem \ref{cts-50000} we give an explicit construction of a higher order canonical map i.e., a map attached to the divisor $\frac{m}{2}K$, where $K$ is a canonical divisor of $\mathfrak R_\Gamma$. The case $m=2$ is studied in depth in many papers (see for example \cite{sgal}). \vskip .2in In Section \ref{wron} we deal with a generalization of the usual notion of the Wronskian of cuspidal modular forms \cite{roh}, (\cite{ono}, 6.3.1), (\cite{Muic}, the proof of Theorem 4-5), and (\cite{Muic2}, Lemma 4-1). The main result of the section is Proposition \ref{wron-2} which in the most important case has the following form: \begin{Prop} Let $m\ge 1$. 
Then, for any sequence $f_1, \ldots, f_k\in M_m(\Gamma)$, the Wronskian $$ W\left(f_1, \ldots, f_k\right)(z)\overset{def}{=}\left|\begin{matrix} f_1(z) & \cdots & f_{k}(z) \\ \frac{df_1(z)}{dz} & \cdots & \frac{df_{k}(z)}{dz} \\ &\cdots & \\ \frac{d^{k-1}f_1(z)}{dz^{k-1}} & \cdots & \frac{d^{k-1}f_{k}(z)}{dz^{k-1}} \\ \end{matrix}\right| $$ is a cuspidal modular form in $S_{k(m+k-1)}(\Gamma)$ if $k\ge 2$. If $f_1, \ldots, f_k$ are linearly independent, then $W\left(f_1, \ldots, f_k\right)\neq 0$. \end{Prop} \vskip .2in What is new and deep is the computation of the divisor of $W\left(f_1, \ldots, f_k\right)$ (see Section \ref{wron-cont}). The main results are Proposition \ref{wron-cont-2} and Theorem \ref{wron-6}. A substantial example has been given in Section \ref{lev0} in the case of $\Gamma=SL_2(\mathbb Z)$ (see Proposition \ref{lev0-4}). \vskip .2in We would like to thank I. Kodrnja for her help with the SAGE system. Also we would like to thank M. Kazalicki and F. Najman for some useful discussions about modular forms and curves in general. \section{Preliminaries}\label{prelim} In this section we recall necessary facts about modular forms and their divisors \cite{Miyake}. We follow the exposition in (\cite{Muic2}, Section 2). \vskip .2in Let $\mathbb H$ be the upper half--plane. Then the group $SL_2({\mathbb{R}})$ acts on $\mathbb H$ as follows: $$ g.z=\frac{az+b}{cz+d}, \ \ g=\left(\begin{matrix}a & b\\ c & d \end{matrix}\right)\in SL_2({\mathbb{R}}). $$ We let $j(g, z)=cz+d$. The function $j$ satisfies the cocycle identity: \begin{equation}\label{cocycle} j(gg', z)=j(g, g'.z) j(g', z). \end{equation} Next, the $SL_2({\mathbb{R}})$--invariant measure on $\mathbb H$ is defined by $dx dy /y^2$, where the coordinates on $\mathbb H$ are written in the usual way $z=x+\sqrt{-1}y$, $y>0$. A discrete subgroup $\Gamma\subset SL_2({\mathbb{R}})$ is called a Fuchsian group of the first kind if $$ \iint _{\Gamma \backslash \mathbb H} \frac{dxdy}{y^2}< \infty. 
$$ Then, adding a finite number of points in ${\mathbb{R}}\cup \{\infty\}$ called cusps, $\Gamma \backslash \mathbb H$ can be compactified. In this way we obtain a compact Riemann surface $\mathfrak R_\Gamma$. One of the most important examples are the groups $$ \Gamma_0(N)=\left\{\left(\begin{matrix}a & b \\ c & d\end{matrix}\right) \in SL_2(\mathbb Z); \ \ c\equiv 0 \ (\mathrm{mod} \ N)\right\}, \ \ N\ge 1. $$ We write $X_0(N)$ for $\mathfrak R_{\Gamma_0(N)}$. Let $\Gamma$ be a Fuchsian group of the first kind. We consider the space $M_m(\Gamma)$ (resp., $S_m(\Gamma)$) of all modular (resp., cuspidal) forms of weight $m$; this is the space of all holomorphic functions $f: \mathbb H\rightarrow {\mathbb{C}}$ such that $f(\gamma.z)=j(\gamma, z)^m f(z)$ ($z\in \mathbb H$, $\gamma\in \Gamma$) which are holomorphic (resp., holomorphic and vanish) at every cusp for $\Gamma$. We also need the following obvious property: for $f, g\in M_m(\Gamma)$, $g\neq 0$, the quotient $f/g$ is a meromorphic function on $\mathfrak R_{\Gamma}$. \vskip .2in Next, we recall from (\cite{Miyake}, 2.3) some notions related to the theory of divisors of modular forms of even weight $m\ge 2$ and state a preliminary result. Let $m\ge 2$ be an even integer and $f\in M_{m}(\Gamma)-\{0\}$. Then, $\nu_{z-\xi}(f)$ denotes the order of the holomorphic function $f$ at $\xi$. For each $\gamma\in \Gamma$, the functional equation $f(\gamma.z)=j(\gamma, z)^m f(z)$, $z\in \mathbb H$, shows that $\nu_{z-\xi}(f)=\nu_{z-\xi'}(f)$ where $\xi'=\gamma.\xi$. Also, if we let $$ e_{\xi} =\# \left(\Gamma_{\xi}/\Gamma\cap \{\pm 1\}\right), $$ then $e_{\xi}=e_{\xi'}$. The point $\xi\in \mathbb H$ is elliptic if $e_\xi>1$. Next, following (\cite{Miyake}, 2.3), we define $$ \nu_\xi(f)=\nu_{z-\xi}(f)/e_{\xi}. 
$$ Clearly, $\nu_{\xi}(f)=\nu_{\xi'}(f)$, and we may let $$ \nu_{\mathfrak a_\xi}(f)=\nu_\xi(f), $$ where $$\text{$\mathfrak a_\xi\in \mathfrak R_\Gamma$ is the projection of $\xi$ to $\mathfrak R_\Gamma$,} $$ a notation we use throughout this paper. If $x\in {\mathbb{R}}\cup \{\infty\}$ is a cusp for $\Gamma$, then we define $\nu_x(f)$ as follows. Let $\sigma\in SL_2({\mathbb{R}})$ such that $\sigma.x=\infty$. We write $$ \{\pm 1\} \sigma \Gamma_{x}\sigma^{-1}= \{\pm 1\}\left\{\left(\begin{matrix}1 & lh'\\ 0 & 1\end{matrix}\right); \ \ l\in {\mathbb{Z}}\right\}, $$ where $h'>0$. Then we write the Fourier expansion of $f$ at $x$ as follows: $$ (f|_m \sigma^{-1})(\sigma.z)= \sum_{n=0}^\infty a_n e^{2\pi \sqrt{-1}n\sigma.z/h'}. $$ We let $$ \nu_x(f)=l\ge 0, $$ where $l$ is defined by $a_0=a_1=\cdots =a_{l-1}=0$, $a_l\neq 0$. One easily sees that this definition does not depend on $\sigma$. Also, if $x'=\gamma.x$, then $\nu_{x'}(f)=\nu_{x}(f)$. Hence, if $\mathfrak b_x\in \mathfrak R_\Gamma$ is a cusp corresponding to $x$, then we may define $$ \nu_{\mathfrak b_x}(f)=\nu_x(f). $$ Put $$ \mathrm{div}{(f)}=\sum_{\mathfrak a\in \mathfrak R_\Gamma} \nu_{\mathfrak a}(f) \mathfrak a \in \ \ {\mathbb{Q}}\otimes \mathrm{Div}(\mathfrak R_\Gamma), $$ where $\mathrm{Div}(\mathfrak R_\Gamma)$ is the group of (integral) divisors on $\mathfrak R_\Gamma$. Using (\cite{Miyake}, 2.3), this sum is finite i.e., $ \nu_{\mathfrak a}(f)\neq 0$ for only finitely many points. We let $$ \mathrm{deg}(\mathrm{div}{(f)})=\sum_{\mathfrak a\in \mathfrak R_\Gamma} \nu_{\mathfrak a}(f). $$ Let $\mathfrak d_i\in {\mathbb{Q}}\otimes \mathrm{Div}(\mathfrak R_\Gamma)$, $i=1, 2$. Then we say that $\mathfrak d_1\ge \mathfrak d_2$ if their difference $\mathfrak d_1 - \mathfrak d_2$ belongs to $\mathrm{Div}(\mathfrak R_\Gamma)$ and is non--negative in the usual sense. \begin{Lem}\label{prelim-1} Assume that $m\ge 2$ is an even integer. Assume that $f\in M_m(\Gamma)$, $f\neq 0$. 
Let $t$ be the number of inequivalent cusps for $\Gamma$. Then we have the following: \begin{itemize} \item[(i)] For $\mathfrak a\in \mathfrak R_\Gamma$, we have $\nu_{\mathfrak a}(f) \ge 0$. \item [(ii)] For a cusp $\mathfrak a\in \mathfrak R_\Gamma$, we have that $\nu_{\mathfrak a}(f)\ge 0$ is an integer. \item[(iii)] If $\mathfrak a\in \mathfrak R_\Gamma$ is not an elliptic point or a cusp, then $\nu_{\mathfrak a}(f)\ge 0$ is an integer. If $\mathfrak a\in \mathfrak R_\Gamma$ is an elliptic point, then $\nu_{\mathfrak a}(f)-\frac{m}{2}(1-1/e_{\mathfrak a})$ is an integer. \item[(iv)]Let $g(\Gamma)$ be the genus of $ \mathfrak R_\Gamma$. Then \begin{align*} \mathrm{deg}(\mathrm{div}{(f)})&= m(g(\Gamma)-1)+ \frac{m}{2}\left(t+ \sum_{\mathfrak a\in \mathfrak R_\Gamma, \ \ elliptic} (1-1/e_{\mathfrak a})\right)\\ &= \frac{m}{4\pi} \iint_{\Gamma \backslash \mathbb H} \frac{dxdy}{y^2}. \end{align*} \item[(v)] Let $[x]$ denote the largest integer $\le x$ for $x\in {\mathbb{R}}$. Then \begin{align*} \dim S_m(\Gamma) &= \begin{cases} (m-1)(g(\Gamma)-1)+(\frac{m}{2}-1)t+ \sum_{\substack{\mathfrak a\in \mathfrak R_\Gamma, \\ elliptic}} \left[\frac{m}{2}(1-1/e_{\mathfrak a})\right], \ \ \text{if $m\ge 4$,}\\ g(\Gamma), \ \ \text{if $m=2$.}\\ \end{cases}\\ \dim M_m(\Gamma)&=\begin{cases} \dim S_m(\Gamma)+t, \ \ \text{if $m\ge 4$, or $m=2$ and $t=0$,}\\ \dim S_m(\Gamma)+t-1=g(\Gamma)+t-1,\ \ \text{if $m=2$ and $t\ge 1$.}\\ \end{cases} \end{align*} \item[(vi)] There exists an integral divisor $\mathfrak c'_f\ge 0$ of degree $$ \begin{cases} \dim M_m(\Gamma)+ g(\Gamma)-1, \ \ \text{if $m\ge 4$, or $m=2$ and $t\ge 1$,}\\ 2(g(\Gamma)-1), \ \ \text{if $m=2$ and $t=0$} \end{cases} $$ such that \begin{align*} \mathrm{div}{(f)}= & \mathfrak c'_f+ \sum_{\mathfrak a\in \mathfrak R_\Gamma, \ \ elliptic} \left(\frac{m}{2}(1-1/e_{\mathfrak a}) - \left[\frac{m}{2}(1-1/e_{\mathfrak a})\right]\right)\mathfrak a. \end{align*} \item[(vii)] Assume that $f\in S_m(\Gamma)$. 
Then, the integral divisor defined by $ \mathfrak c_f\overset{def}{=}\mathfrak c'_f- \sum_{\substack{\mathfrak b \in \mathfrak R_\Gamma, \\ cusp}} \mathfrak b$ satisfies $\mathfrak c_f\ge 0$ and its degree is given by $$ \begin{cases} \dim S_m(\Gamma)+ g(\Gamma)-1; \ \ \text{if $m\ge 4$,}\\ 2(g(\Gamma)-1); \ \ \text{if $m=2$.} \end{cases} $$ \end{itemize} \end{Lem} \begin{proof} The claims (i)--(v) are standard (\cite{Miyake}, 2.3, 2.4, 2.5). The claim (vi) follows from (iii), (iv), and (v) (see Lemma 4-1 in \cite{Muic}). Finally, (vii) follows from (vi). \end{proof} \section{Holomorphic Differentials and $m$--Weierstrass Points on $\mathfrak R_\Gamma$}\label{mhd} Let $\Gamma$ be a Fuchsian group of the first kind. We let $D^m\left(\mathfrak R_\Gamma\right)$ (resp., $H^m\left(\mathfrak R_\Gamma\right)$) be the space of meromorphic (resp., holomorphic) differentials of degree $m$ on $\mathfrak R_\Gamma$ for each $m\in \mathbb Z$. We recall that $D^0\left(\mathfrak R_\Gamma\right)=\mathbb C\left(\mathfrak R_\Gamma\right)$, and $D^m\left(\mathfrak R_\Gamma\right)\neq 0$ for all other $m\in \mathbb Z$. In fact, if we fix a non--zero $\omega \in D^1\left(\mathfrak R_\Gamma\right)$, then $D^m\left(\mathfrak R_\Gamma\right)=\mathbb C\left(\mathfrak R_\Gamma\right)\omega^m$. We have the following: \begin{equation}\label{mhd-1} \deg{\left(\mathrm{div}{(\omega)}\right)}= 2m(g(\Gamma)-1), \ \ \omega \in D^m\left(\mathfrak R_\Gamma\right), \ \omega\neq 0. \end{equation} We shall be interested in the case $m\ge 1$, and in holomorphic differentials. 
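Before turning to differentials, we note that the dimension formulas of Lemma \ref{prelim-1}(v) are straightforward to mechanize. The following Python sketch (our own illustration, not part of the original text) evaluates them from the genus $g$, the number of inequivalent cusps $t$, and the list of elliptic orders $e_{\mathfrak a}$; for $SL_2(\mathbb Z)$ ($g=0$, $t=1$, elliptic orders $2$ and $3$) it reproduces the classical values such as $\dim S_{12}=1$.

```python
def dim_S(m, g, t, elliptic_orders):
    """dim S_m(Gamma) for even m >= 2, per Lemma prelim-1 (v)."""
    assert m >= 2 and m % 2 == 0
    if m == 2:
        return g
    # floor((m/2)(1 - 1/e)) computed exactly in integer arithmetic
    ell = sum((m // 2) * (e - 1) // e for e in elliptic_orders)
    return (m - 1) * (g - 1) + (m // 2 - 1) * t + ell

def dim_M(m, g, t, elliptic_orders):
    """dim M_m(Gamma), per Lemma prelim-1 (v)."""
    if m >= 4 or t == 0:
        return dim_S(m, g, t, elliptic_orders) + t
    return g + t - 1  # m = 2 and t >= 1
```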
We recall \cite[Proposition III.5.2]{FK} that \begin{equation}\label{mhd-2} \dim H^m\left(\mathfrak R_\Gamma\right)= \begin{cases} 0 \ \ &\text{if} \ \ m\ge 1, g(\Gamma)=0;\\ g(\Gamma) \ \ &\text{if} \ \ m=1, \ g(\Gamma)\ge 1;\\ g(\Gamma) \ \ & \text{if} \ \ m\ge 2, \ g(\Gamma)= 1;\\ (2m-1)\left(g\left(\mathfrak R_\Gamma\right)-1\right) \ \ &\text{if} \ \ m\ge 2, \ g(\Gamma)\ge 2.\\ \end{cases} \end{equation} This follows easily from the Riemann-Roch theorem. Recall that a canonical class $K$ is simply the divisor of any non--zero meromorphic form $\omega$ on $\mathfrak R_\Gamma$. Different choices of $\omega$ differ by the divisor of a non--zero function $f\in \mathbb C\left(\mathfrak R_\Gamma\right)$ $$ \mathrm{div}{(f\omega)}=\mathrm{div}{(f)}+ \mathrm{div}{(\omega)}. $$ Different choices of $\omega$ have the same degree since $\deg{\left(\mathrm{div}{(f)}\right)}=0$. For a divisor $\mathfrak a$, we let $$ L(\mathfrak a)=\left\{f\in \mathbb C\left(\mathfrak R_\Gamma\right); \ \ f=0 \ \text{or} \ \mathrm{div}{(f)}+ \mathfrak a\ge 0\right\}. $$ We have the following three facts: \begin{itemize} \item[(1)] for $\mathfrak a=0$, we have $L(\mathfrak a)=\mathbb C$; \item[(2)] if $\deg{(\mathfrak a)}< 0$, then $L(\mathfrak a)=0$; \item[(3)] the Riemann-Roch theorem: $\dim L(\mathfrak a)= \deg{(\mathfrak a)}-g(\Gamma)+1 + \dim L(K-\mathfrak a)$. \end{itemize} Now, it is obvious that $f\omega^m \in H^m\left(\mathfrak R_\Gamma\right)$ if and only if $$ \mathrm{div}{(f\omega^m)}=\mathrm{div}{(f)}+m\mathrm{div}{(\omega)}= \mathrm{div}{(f)}+mK\ge 0. $$ Equivalently, $f\in L(mK)$. Thus, we have that $\dim H^m\left(\mathfrak R_\Gamma\right)= \dim L(mK)$. Finally, by the Riemann-Roch theorem, we have the following: $$ \dim L(mK)=\deg{(mK)}-g(\Gamma)+1 + \dim L((1-m)K)=(2m-1)(g\left(\mathfrak R_\Gamma\right)-1)+ \dim L((1-m)K). $$ Now, if $g(\Gamma)\ge 2 $, then $\deg{(K)}=2(g(\Gamma)-1)>0$, and the claim easily follows from (1) and (2) above. Next, assume that $g(\Gamma)=1$. 
If $\omega\in H^1\left(\mathfrak R_\Gamma\right)$ is non--zero, then its divisor has degree zero. Thus, $\omega$ has no zeroes. This means that $\omega H^{l-1}\left(\mathfrak R_\Gamma\right)=H^l\left(\mathfrak R_\Gamma\right)$ for all $l\in \mathbb Z$. But since obviously $H^0\left(\mathfrak R_\Gamma\right)$ consists of constants only, we obtain the claim. Finally, the case $g(\Gamma)=0$ is obvious from (2) since the degree of $mK$ is $2m(g(\Gamma)-1)<0$ for all $m\ge 1$. \vskip .2in Assume that $g(\Gamma)\ge 1$ and $m\ge 1$. Then, $\dim H^m\left(\mathfrak R_\Gamma\right)\neq 0$. Let $t= \dim H^m\left(\mathfrak R_\Gamma\right)$. We fix a basis $\omega_1, \ldots, \omega_t$ of $H^m\left(\mathfrak R_\Gamma\right)$. Let $z$ be any local coordinate on $\mathfrak R_\Gamma$. Then, locally there exist unique holomorphic functions $\varphi_1, \ldots, \varphi_t$ such that $\omega_i=\varphi_i \left(dz\right)^m$, for all $i$. Then, again locally, we can consider the Wronskian $W_z$ defined by \begin{equation}\label{mhd-3} W_z\left(\omega_1, \ldots, \omega_t\right)\overset{def}{=}\left|\begin{matrix} \varphi_1(z) & \cdots & \varphi_{t}(z) \\ \frac{d\varphi_1(z)}{dz} & \cdots & \frac{d\varphi_{t}(z)}{dz} \\ &\cdots & \\ \frac{d^{t-1}\varphi_1(z)}{dz^{t-1}} & \cdots & \frac{d^{t-1}\varphi_{t}(z)}{dz^{t-1}} \\ \end{matrix}\right|. \end{equation} As proved in \cite[Proposition III.5.10]{FK}, the collection of all \begin{equation}\label{mhd-4} W_z\left(\omega_1, \ldots, \omega_t\right)\left(dz\right)^{\frac{t}{2}\left(2m-1+t\right)} , \end{equation} defines a non--zero holomorphic differential form $$ W\left(\omega_1, \ldots, \omega_t\right)\in H^{\frac{t}{2}\left(2m-1+t\right)}\left(\mathfrak R_\Gamma\right). $$ We call this form the Wronskian of the basis $\omega_1, \ldots, \omega_t$.
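In a fixed local coordinate, the Wronskian (\ref{mhd-3}) is just the determinant of the matrix of successive derivatives of the germs $\varphi_1, \ldots, \varphi_t$. As a small illustrative sketch (our own Python code with exact rational arithmetic, not part of the cited references), it can be computed for polynomial germs by Leibniz expansion:

```python
from fractions import Fraction
from itertools import permutations

def poly_mul(a, b):
    # product of two polynomials given as coefficient lists
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def poly_deriv(a):
    # formal derivative d/dz of a coefficient list
    return [i * a[i] for i in range(1, len(a))] or [Fraction(0)]

def perm_sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def wronskian(germs):
    # germs: coefficient lists of phi_1,...,phi_t around z = 0;
    # returns the coefficient list of W_z as in (mhd-3)
    t = len(germs)
    rows, cur = [], [[Fraction(c) for c in p] for p in germs]
    for _ in range(t):          # rows are the 0th,...,(t-1)-st derivatives
        rows.append(cur)
        cur = [poly_deriv(p) for p in cur]
    det = [Fraction(0)]
    for perm in permutations(range(t)):   # Leibniz expansion
        term = [Fraction(perm_sign(perm))]
        for i in range(t):
            term = poly_mul(term, rows[i][perm[i]])
        det = poly_add(det, term)
    while len(det) > 1 and det[-1] == 0:  # strip trailing zeros
        det.pop()
    return det
```

For instance, $W_z(1, z)=1$ and $W_z(z, z^2)=z^2$; this mirrors the classical fact that the Wronskian vanishes exactly where a combination of the $\varphi_i$ vanishes to high order.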
It is obvious that a different choice of a basis of $H^m\left(\mathfrak R_\Gamma\right)$ results in a Wronskian which differs from $W\left(\omega_1, \ldots, \omega_t\right)$ by multiplication by a non--zero complex number. Also, the degree is given by \begin{equation}\label{mhd-5a} \deg{\left(\mathrm{div}{\left(W\left(\omega_1, \ldots, \omega_t\right)\right)}\right)}= t\left(2m-1+t\right)(g(\Gamma)-1). \end{equation} \vskip .2in Following \cite[III.5.9]{FK}, we make the following definition: \begin{Def}\label{mhd-def} Let $m\ge 1$ be an integer. We say that $\mathfrak a \in \mathfrak R_\Gamma$ is an $m$-Weierstrass point if there exists a non--zero $\omega\in H^m\left(\mathfrak R_\Gamma\right)$ such that $$ \nu_{\mathfrak a}(\omega)\ge \dim H^m\left(\mathfrak R_\Gamma\right). $$ \end{Def} Equivalently (see \cite[Proposition III.5.10]{FK}), $\mathfrak a$ is an $m$-Weierstrass point if $$ \nu_{\mathfrak a}\left(W\left(\omega_1, \ldots, \omega_t\right)\right)\ge 1. $$ When $m=1$ we speak about classical Weierstrass points. So, $1$-Weierstrass points are simply Weierstrass points. \section{Interpretation in Terms of Modular Forms} \label{mhd-cont} In this section we give an interpretation of the results of Section \ref{mhd} in terms of modular forms. Again, $\Gamma$ stands for a Fuchsian group of the first kind. Let $m\ge 2$ be an even integer. We let $\mathcal A_m(\Gamma)$ be the space of all meromorphic functions $f: \mathbb H\rightarrow {\mathbb{C}}$ such that $f(\gamma.z)=j(\gamma, z)^m f(z)$ ($z\in \mathbb H$, $\gamma\in \Gamma$) which are meromorphic at every cusp for $\Gamma$.
By \cite[Theorem 2.3.1]{Miyake}, there exists an isomorphism of vector spaces $\mathcal A_m(\Gamma)\longrightarrow D^{m/2}\left(\mathfrak R_\Gamma\right)$, denoted by $f\longmapsto \omega_f$, such that the following holds (see Section \ref{prelim} for notation, and \cite[Theorem 2.3.3]{Miyake}): \begin{equation}\label{mhd-5} \begin{aligned} &\nu_{\mathfrak a_\xi}(f)=\nu_{\mathfrak a_\xi}(\omega_f) +\frac{m}{2}\left(1- \frac{1}{e_{\mathfrak a_\xi}}\right) \ \ \text{if} \ \xi\in \mathbb H\\ &\nu_{\mathfrak a}(f)=\nu_{\mathfrak a}(\omega_f) + \frac{m}{2} \ \ \text{for a $\Gamma$--cusp $\mathfrak a$,}\\ & \mathrm{div}{(f)}=\mathrm{div}{(\omega_f)}+\sum_{\mathfrak a \in \mathfrak R_\Gamma} \frac{m}{2}\left(1- \frac{1}{e_{\mathfrak a}}\right) \mathfrak a, \end{aligned} \end{equation} where $1/e_{\mathfrak a}=0$ if $\mathfrak a$ is a cusp. Let $f\in M_m(\Gamma)$. Then, combining Lemma \ref{prelim-1} (vi) and (\ref{mhd-5}), we obtain \begin{equation}\label{mhd-6} \mathrm{div}{(\omega_f)}=\mathfrak c' _f -\sum_{\mathfrak a\in \mathfrak R_\Gamma, \ elliptic} \left[\frac{m}{2}(1-1/e_{\mathfrak a})\right]\mathfrak a - \frac{m}{2}\sum_{\mathfrak b\in \mathfrak R_\Gamma, \ cusp} \mathfrak b. \end{equation} This shows that $\omega_f$ is holomorphic everywhere except maybe at cusps and elliptic points. Moreover, if $f\in S_m(\Gamma)$, then (see Lemma \ref{prelim-1} (vii)) \begin{equation}\label{mhd-7} \mathrm{div}{(\omega_f)}=\mathfrak c_f - \sum_{\mathfrak a\in \mathfrak R_\Gamma, \ elliptic} \left[\frac{m}{2}(1-1/e_{\mathfrak a})\right]\mathfrak a - \left(\frac{m}{2} -1\right)\sum_{\mathfrak b\in \mathfrak R_\Gamma, \ cusp} \mathfrak b. \end{equation} Next, we determine all $f\in M_m(\Gamma)$ such that $\omega_f\in H^{m/2}\left(\mathfrak R_\Gamma\right)$.
From (\ref{mhd-6}) we see that such $f$ must belong to $S_m(\Gamma)$, and from (\ref{mhd-7}) \begin{equation}\label{mhd-8} \mathfrak c_f\ge \sum_{\mathfrak a\in \mathfrak R_\Gamma, \ elliptic} \left[\frac{m}{2}(1-1/e_{\mathfrak a})\right]\mathfrak a+ \left(\frac{m}{2} -1\right)\sum_{\mathfrak b\in \mathfrak R_\Gamma, \ cusp} \mathfrak b. \end{equation} Now, we define the subspace of $S_m(\Gamma)$ by $$ S^H_m(\Gamma)=\left\{f\in S_m(\Gamma); \ \ \text{$f=0$ or $f$ satisfies (\ref{mhd-8})} \right\}. $$ It is mapped via $f\longmapsto \omega_f$ isomorphically onto $H^{m/2}\left(\mathfrak R_\Gamma\right)$. We remark that when $m=2$, (\ref{mhd-8}) reduces to the obvious condition $\mathfrak c_f\ge 0$. Hence, $S^H_2(\Gamma)= S_2(\Gamma)$, recovering the standard isomorphism of $S_2(\Gamma)$ and $H^1(\mathfrak R_\Gamma)$ (see \cite[Theorem 2.3.2]{Miyake}). We have the following result: \begin{Lem} \label{mhd-9} Assume that $m, n\ge 2$ are even integers. Let $\Gamma$ be a Fuchsian group of the first kind. Then, we have the following: \begin{itemize} \item[(i)] $S^H_2(\Gamma)= S_2(\Gamma)$. \item[(ii)] $S^H_m(\Gamma)$ is isomorphic to $H^{m/2}\left(\mathfrak R_\Gamma\right)$. \item[(iii)] $S^H_m(\Gamma)=\{0\}$ if $g(\Gamma)=0$. \item[(iv)] Assume that $g(\Gamma)=1$. Let us write $S_2(\Gamma)=\mathbb C \cdot f$, for some non--zero cuspidal form $f$. Then, we have $S^H_m(\Gamma)= \mathbb C \cdot f^{m/2}$. \item[(v)] $\dim{S^H_m(\Gamma)}= (m-1)\left(g(\Gamma)-1\right)$ if $m\ge 4$ and $g(\Gamma)\ge 2$. \item[(vi)] $S^H_m(\Gamma)\cdot S^H_n(\Gamma)\subset S^H_{m+n}(\Gamma)$. \item[(vii)] There are no $m/2$--Weierstrass points on $\mathfrak R_\Gamma$ for $g(\Gamma)\in \{0, 1\}$. \item[(viii)] Assume that $g(\Gamma)\ge 2$, and $\mathfrak a_\infty$ is a $\Gamma$-cusp.
Then, $\mathfrak a_\infty$ is a $\frac{m}{2}$--Weierstrass point if and only if there exists $f\in S^H_m(\Gamma)$, $f\neq 0$, such that $$ \mathfrak c'_f(\mathfrak a_\infty)\ge \begin{cases} \frac{m}{2} + g(\Gamma) \ \ \text{if} \ \ m=2;\\ \frac{m}{2} + (m-1)(g(\Gamma)-1) \ \ \text{if} \ \ m\ge 4.\\ \end{cases} $$ \item[(ix)] Assume that $g(\Gamma)\ge 1$, and $\mathfrak a_\infty$ is a $\Gamma$-cusp. Then, there exists a basis $f_1, \ldots, f_t$ of $S_m^H(\Gamma)$ such that their $q$--expansions are of the form $$ f_u=a_uq^{i_u}+ \text{higher order terms in $q$}, \ \ 1\le u\le t, $$ where $$ \frac{m}{2}\le i_1< i_2< \cdots < i_t \le \frac{m}{2}+ m\left(g(\Gamma)-1\right), $$ and $$ a_u\in \mathbb C, \ \ a_u\neq 0. $$ \item[(x)] Assume that $g(\Gamma)\ge 1$, and $\mathfrak a_\infty$ is a $\Gamma$-cusp. Then, $\mathfrak a_\infty$ is {\bf not} a $\frac{m}{2}$--Weierstrass point if and only if there exists a basis $f_1, \ldots, f_t$ of $S_m^H(\Gamma)$ such that their $q$--expansions are of the form $$ f_u=a_uq^{u+m/2-1}+ \text{higher order terms in $q$}, \ \ 1\le u\le t, $$ where $$ a_u\in \mathbb C, \ \ a_u\neq 0. $$ \end{itemize} \end{Lem} \begin{proof} (i) and (ii) follow from the above discussion. Next, using the above discussion and (\ref{mhd-2}) we obtain $$ \dim{S^H_m(\Gamma)} =\dim H^{m/2}\left(\mathfrak R_\Gamma\right)=\begin{cases} 0 \ \ &\text{if} \ \ m\ge 2, g(\Gamma)=0;\\ g(\Gamma) \ \ &\text{if} \ \ m=2, \ g(\Gamma)\ge 1;\\ g(\Gamma) \ \ & \text{if} \ \ m\ge 4, \ g(\Gamma)= 1;\\ (m-1)\left(g(\Gamma)-1\right) \ \ &\text{if} \ \ m\ge 4, \ g(\Gamma)\ge 2.\\ \end{cases} $$ This immediately implies (iii) and (v). Next, assume that $g(\Gamma)=1$. Then, we see that $\dim{S^H_m(\Gamma)}\le 1$ for all even integers $m\ge 4$. It is well known that $f^{m/2} \in S_m(\Gamma)$. Next, (\ref{mhd-7}) for $m=2$ implies $\mathrm{div}{(\omega_f)}=\mathfrak c_f$. Also, the degree of $\mathfrak c_f$ is zero by Lemma \ref{prelim-1} (vii).
Hence, $$ \mathrm{div}{(\omega_f)}=\mathfrak c_f=0. $$ Using \cite[Theorem 2.3.2]{Miyake}, we obtain $$ \omega_{f^{m/2}}=\omega^{m/2}_f. $$ Hence $$ \mathrm{div}{(\omega_{f^{m/2}})}=\frac{m}{2} \mathrm{div}{(\omega_f)}=0. $$ Then, applying (\ref{mhd-7}) with $f^{m/2}$ in place of $f$, we obtain $$ \mathfrak c_{f^{m/2}} = \sum_{\mathfrak a\in \mathfrak R_\Gamma, \ elliptic} \left[\frac{m}{2}(1-1/e_{\mathfrak a})\right]\mathfrak a + \left(\frac{m}{2} -1\right)\sum_{\mathfrak b\in \mathfrak R_\Gamma, \ cusp} \mathfrak b. $$ This shows that $f^{m/2}\in S^H_m(\Gamma)$ proving (iv). Finally, (vi) follows from \cite[Theorem 2.3.1]{Miyake}. We can also see that directly as follows. Let $0\neq f\in S^H_m(\Gamma)$ and $0\neq g\in S^H_n(\Gamma)$. Then, $fg\in S_{m+n}(\Gamma)$ since $f\in S^H_m(\Gamma)\subset S_m(\Gamma)$ and $g\in S^H_n(\Gamma)\subset S_n(\Gamma)$. We have the following: $$ \mathrm{div}{(f\cdot g)}= \mathrm{div}{(f)}+\mathrm{div}{(g)}. $$ Using Lemma \ref{prelim-1} (vi) we can rewrite this identity as follows: \begin{align*} &\mathfrak c'_{f\cdot g} - \sum_{\mathfrak a\in \mathfrak R_\Gamma, \ \ elliptic} \left[\frac{m+n}{2}(1-1/e_{\mathfrak a})\right] \mathfrak a= \\ &\mathfrak c'_{f} - \sum_{\mathfrak a\in \mathfrak R_\Gamma, \ \ elliptic} \left[\frac{m}{2}(1-1/e_{\mathfrak a})\right] \mathfrak a + \mathfrak c'_{g} - \sum_{\mathfrak a\in \mathfrak R_\Gamma, \ \ elliptic} \left[\frac{n}{2}(1-1/e_{\mathfrak a})\right] \mathfrak a. 
\end{align*} By Lemma \ref{prelim-1} (vii) we obtain: \begin{align*} &\mathfrak c_{f\cdot g} - \sum_{\mathfrak a\in \mathfrak R_\Gamma, \ \ elliptic} \left[\frac{m+n}{2}(1-1/e_{\mathfrak a})\right] \mathfrak a - \left(\frac{m+n}{2} -1\right)\sum_{\mathfrak b\in \mathfrak R_\Gamma, \ cusp} \mathfrak b=\\ &\left(\mathfrak c_{f} - \sum_{\mathfrak a\in \mathfrak R_\Gamma, \ \ elliptic} \left[\frac{m}{2}(1-1/e_{\mathfrak a})\right] \mathfrak a - \left(\frac{m}{2} -1\right)\sum_{\mathfrak b\in \mathfrak R_\Gamma, \ cusp} \mathfrak b\right)+\\ &\left(\mathfrak c_{g} - \sum_{\mathfrak a\in \mathfrak R_\Gamma, \ \ elliptic} \left[\frac{n}{2}(1-1/e_{\mathfrak a})\right] \mathfrak a - \left(\frac{n}{2} -1\right)\sum_{\mathfrak b\in \mathfrak R_\Gamma, \ cusp} \mathfrak b\right). \end{align*} Finally, (vi) follows by applying (\ref{mhd-8}) since both terms on the right-hand side of the equality are $\ge 0$. Next, (vii) follows immediately from the discussion in Section \ref{mhd}, and it is well--known. (viii) is a reinterpretation of Definition \ref{mhd-def}. The details are left to the reader as an easy exercise. Finally, (ix) and (x) in the case of $g(\Gamma)=1$ are obvious since by Lemma \ref{prelim-1} we have $S_2(\Gamma)= \mathbb C \cdot f$ where $$ \mathfrak c'_f=\mathfrak a_\infty + \sum_{\substack{\mathfrak b \in \mathfrak R_\Gamma, \ cusp\\ \mathfrak b \neq \mathfrak a_\infty}} \mathfrak b. $$ We prove (ix) and (x) in the case of $g(\Gamma)\ge 2$. Let $f\in S_m^H(\Gamma)$, $f\neq 0$. Then, by the definition of $S_m^H(\Gamma)$, we obtain \begin{equation} \label{mhd-11} \mathfrak c'_f(\mathfrak a_\infty)=1+ \mathfrak c_f(\mathfrak a_\infty)\ge 1+ \left(\frac{m}{2}-1\right)=\frac{m}{2}.
\end{equation} On the other hand, again by the definition of $S_m^H(\Gamma)$ (see (\ref{mhd-8})) and the fact that $\mathfrak c'_f\ge 0$, we obtain \begin{align*} \deg{(\mathfrak c'_f)} & =\sum_{\mathfrak a \in \mathfrak R_\Gamma} \ \mathfrak c'_f(\mathfrak a) \ge \\ &\sum_{\mathfrak a\in \mathfrak R_\Gamma, \ elliptic} \mathfrak c'_f(\mathfrak a) + \sum_{\substack{\mathfrak b \in \mathfrak R_\Gamma, \ cusp\\ \mathfrak b \neq \mathfrak a_\infty}} \mathfrak c'_f(\mathfrak b) + \mathfrak c'_f(\mathfrak a_\infty)\ge\\ &\sum_{\mathfrak a\in \mathfrak R_\Gamma, \ elliptic} \left[\frac{m}{2}(1-1/e_{\mathfrak a})\right]+ \frac{m}{2} \left(r-1\right) + \mathfrak c'_f(\mathfrak a_\infty) \end{align*} where $r$ is the number of inequivalent $\Gamma$--cusps (we write $r$ to avoid a clash with $t=\dim S_m^H(\Gamma)$). The degree $\deg{(\mathfrak c'_f)}$ is given by Lemma \ref{prelim-1} (vi) \begin{align*} \deg{(\mathfrak c'_f)} &= \dim M_m(\Gamma)+ g(\Gamma)-1\\ &= \begin{cases} 2(g(\Gamma)-1)+r \ \ \text{if} \ \ m=2;\\ m(g(\Gamma)-1)+ \frac{m}{2}r+ \sum_{\substack{\mathfrak a\in \mathfrak R_\Gamma, \\ elliptic}} \left[\frac{m}{2}(1-1/e_{\mathfrak a})\right] \ \ \text{if} \ \ m\ge 4.\\ \end{cases} \end{align*} Combining with the previous inequality, we obtain $$ \mathfrak c'_f(\mathfrak a_\infty) \le \frac{m}{2}+ m(g(\Gamma)-1) \ \ \text{if} \ \ m\ge 2. $$ Having in mind (\ref{mhd-11}), the rest of (ix) follows by a standard argument (see for example \cite[Lemma 4.3]{Muic}). Finally, (x) follows from (viii) and (ix). \end{proof} \vskip .2in The criterion in Lemma \ref{mhd-9} (x) is a good criterion for checking whether or not $\mathfrak a_\infty$ is a Weierstrass point (the case $m=2$) using computer algebra systems such as SAGE, since we just need to list a basis. This case is well-known (see \cite[Definition 6.1]{ono}). This criterion has been used in practical computations in combination with SAGE in \cite{MuicKodrnja} for $\Gamma=\Gamma_0(N)$.
\vskip .2in But it is not good when $m\ge 4$: regarding the bound for $S^H_m(\Gamma)$ given by Lemma \ref{mhd-9} (ix), a basis of $S_m(\Gamma)$ then contains properly normalized cusp forms having leading terms $q^{m/2}, \ldots, q^{m/2+ m(g(\Gamma)-1)}$, at least when $\Gamma$ has elliptic points and $m$ is large enough, and we do not know which of them belong to $S^H_m(\Gamma)$. We explain this in Corollary \ref{mhd-14} below. \vskip .2in First, we recall the following result \cite[Lemma 2.9]{Muic2} which is well-known in a slightly different notation (\cite{pete-1}, \cite{pete-2}): \begin{Lem}\label{mhd-12} Let $m\ge 4$ be an even integer such that $\dim S_m(\Gamma)\ge g(\Gamma)+1$. Then, for all $1\le i \le t_m-g$, where $t_m=\dim S_m(\Gamma)$ and $g=g(\Gamma)$, there exists $f_i\in S_m(\Gamma)$ such that $\mathfrak c'_{f_i}(\mathfrak a_\infty)= i$. \end{Lem} \vskip .2in \begin{Lem}\label{mhd-13} Assume that $\Gamma$ has elliptic points. (For example, $\Gamma=\Gamma_0(N)$.) Then, for a sufficiently large even integer $m$, we have \begin{equation}\label{mhd-15} \frac{m}{2} + m(g(\Gamma)-1) \le \dim S_m(\Gamma) -g(\Gamma). \end{equation} \end{Lem} \begin{proof} Assume that $m\ge 4$ is an even integer. Then, by Lemma \ref{prelim-1} (v), we obtain \begin{align*} &\dim S_m(\Gamma) -g(\Gamma) - \left(\frac{m}{2} + m(g(\Gamma)-1)\right) \\ &= \left(\frac{m}{2}-1\right)t - \frac{m}{2} + \sum_{\substack{\mathfrak a\in \mathfrak R_\Gamma, \\ elliptic}} \left[\frac{m}{2}(1-1/e_{\mathfrak a})\right] -2g(\Gamma)+1\\ &\ge \sum_{\substack{\mathfrak a\in \mathfrak R_\Gamma, \\ elliptic}} \left[\frac{m}{2}(1-1/e_{\mathfrak a})\right] -2g(\Gamma)-t+1. \end{align*} Since $\Gamma$ has elliptic points, the last expression is $\ge 0$ for all sufficiently large even integers $m$. \end{proof} \vskip .2in \begin{Cor}\label{mhd-14} Assume that (\ref{mhd-15}) holds.
Then, any basis $f_1, \ldots, f_t$ of $S_m^H(\Gamma)$ such that $\mathfrak c'_{f_j}(\mathfrak a_\infty)=i_j$, $1\le j\le t$, where $$ \frac{m}{2}\le i_1< i_2< \cdots < i_t \le \frac{m}{2}+ m\left(g(\Gamma)-1\right) $$ can be extended by additional $g(\Gamma)$ cuspidal modular forms in $S_m(\Gamma)$ to obtain the collection $F_k$, $\frac{m}{2}\le k \le \frac{m}{2}+ m\left(g(\Gamma)-1\right)$ such that $\mathfrak c'_{F_k}(\mathfrak a_\infty)=k$ for all $k$. \end{Cor} \begin{proof} This follows directly from Lemmas \ref{mhd-12} and \ref{mhd-13}. \end{proof} \vskip .2in Now, we explain the algorithm for testing whether $\mathfrak a_\infty$ is a $\frac{m}{2}$--Weierstrass point for $m\ge 4$. It requires some geometry. We recall that $\mathfrak R_\Gamma$ is hyperelliptic if $g(\Gamma)\ge 2$ and there is a degree-two map onto $\mathbb P^1$. By general theory \cite[Chapter VII, Proposition 1.10]{Miranda}, if $g(\Gamma)=2$, then $\mathfrak R_\Gamma$ is hyperelliptic. If $\mathfrak R_\Gamma$ is not hyperelliptic, then $\dim S_2(\Gamma)= g(\Gamma)\ge 3$, and the regular map $\mathfrak R_\Gamma\longrightarrow \mathbb P^{g(\Gamma)-1}$ attached to a canonical divisor $K$ is an isomorphism onto its image \cite[Chapter VII, Proposition 2.1]{Miranda}. \vskip .2in Let $\Gamma=\Gamma_0(N)$, $N\ge 1$. Put $X_0(N)=\mathfrak R_{\Gamma_0(N)}$. We recall that $g(\Gamma_0(N))\ge 2$ unless $$ \begin{cases} N\in\{1-10, 12, 13, 16, 18, 25\} \ \ \text{when $g(\Gamma_0(N))=0$, and}\\ N\in\{11, 14, 15, 17, 19-21, 24, 27, 32, 36, 49\} \ \ \text{when $g(\Gamma_0(N))=1$.} \end{cases} $$ Let $g(\Gamma_0(N))\ge 2$. Then, we remark that Ogg \cite{Ogg} has determined all $X_0(N)$ which are hyperelliptic curves. In view of Ogg's paper, we see that $X_0(N)$ is {\bf not hyperelliptic} for $N\in \{34, 38, 42, 43, 44, 45, 51-58, 60-70\}$ or $N\ge 72$. This implies $g(\Gamma_0(N))\ge 3$. \vskip .2in Before we begin the study of spaces $S_m^H(\Gamma)$ we give the following lemma.
\vskip .2in \begin{Lem}\label{cts-5000} Let $m\ge 4$ be an even integer. Let us select a basis $f_0, \ldots, f_{g-1}$, $g=g(\Gamma)$, of $S_2(\Gamma)$. Then, all of $\binom{g+\frac{m}{2}-1}{\frac{m}{2}}$ monomials $f_0^{\alpha_0}f_1^{\alpha_1}\cdots f_{g-1}^{\alpha_{g-1}}$, $\alpha_i\in \mathbb Z_{\ge 0}$, $\sum_{i=0}^{g-1} \alpha_i=\frac{m}{2}$, belong to $S_m^H(\Gamma)$. We denote by $S_{m, 2}^H(\Gamma)$ the subspace of $S_m^H(\Gamma)$ spanned by these monomials. \end{Lem} \begin{proof} This follows from Lemma \ref{mhd-9} (vi) since $S_2(\Gamma)=S_2^H(\Gamma)$ (see Lemma \ref{mhd-9} (i)). \end{proof} \vskip .2in \begin{Thm}\label{cts-50000} Let $m\ge 4$ be an even integer. Assume that $\mathfrak R_\Gamma$ is not hyperelliptic. Then, we have $$ S_{m, 2}^H(\Gamma)= S_m^H(\Gamma). $$ \end{Thm} \begin{proof} We use the notation of Section \ref{mhd} freely. The reader should review Lemma \ref{mhd-9}. Let $F\in S_2(\Gamma)$, $F\neq 0$. We define a holomorphic differential form $\omega \in H^1\left(\mathfrak R_\Gamma\right)$ by $\omega=\omega_F$. Define a canonical class $K$ by $K=\mathrm{div}{(\omega)}$. We prove the following: \begin{equation}\label{cts-2} L\left(\frac{m}{2}K\right)=\left\{\frac{f}{F^{m/2}}; \ \ f\in S_m^H(\Gamma) \right\}. \end{equation} The case $m=2$ is of course well--known. By the Riemann-Roch theorem and standard results recalled in Section \ref{mhd} we have \begin{align*} \dim L\left(\frac{m}{2}K\right) &= \deg{\left(\frac{m}{2}K\right)}-g(\Gamma)+1 + \dim L\left(\left(1- \frac{m}{2}\right)K\right)\\ &= (m-1)(g(\Gamma)-1)+\begin{cases} 1 \ \ \text{if} \ \ m=2;\\ 0 \ \ \text{if} \ \ m\ge 4.\\ \end{cases} \end{align*} Next, we recall that $S_2(\Gamma)=S_2^H(\Gamma)$ (see Lemma \ref{mhd-9} (i)). Then, by Lemma \ref{mhd-9} (vi), we obtain $F^{\frac{m}{2}}\in S_m^H(\Gamma)$. Therefore, $f/F^{\frac{m}{2}}\in \mathbb C\left(\mathfrak R_\Gamma\right)$ for all $f\in S_m^H(\Gamma)$.
By the correspondence described in (\ref{mhd-5}) we have \begin{align*} \mathrm{div}{(F)}=\mathrm{div}{(\omega_F)}+\sum_{\mathfrak a \in \mathfrak R_\Gamma} \left(1- \frac{1}{e_{\mathfrak a}}\right) \mathfrak a &= K+ \sum_{\mathfrak a \in \mathfrak R_\Gamma} \left(1- \frac{1}{e_{\mathfrak a}}\right) \mathfrak a\\ &= K+ \sum_{\mathfrak a\in \mathfrak R_\Gamma, \ \ elliptic} (1-1/e_{\mathfrak a}) \mathfrak a + \sum_{\substack{\mathfrak b \in \mathfrak R_\Gamma, \\ cusp}} \mathfrak b. \end{align*} Thus, for $f\in S_m^H(\Gamma)$, we have the following: \begin{align*} \mathrm{div}{\left(\frac{f}{F^{\frac{m}{2}}}\right)} + \frac{m}{2}K &= \mathrm{div}{(f)}- \frac{m}{2}\mathrm{div}{(F)} + \frac{m}{2}K\\ &= \mathrm{div}{(f)}-\frac{m}{2}\sum_{\mathfrak a\in \mathfrak R_\Gamma, \ \ elliptic} (1-1/e_{\mathfrak a}) \mathfrak a-\frac{m}{2} \sum_{\substack{\mathfrak b \in \mathfrak R_\Gamma, \\ cusp}} \mathfrak b. \end{align*} Next, using Lemma \ref{prelim-1} (vi), the right--hand side becomes $$ \mathfrak c'_f - \sum_{\mathfrak a\in \mathfrak R_\Gamma, \ \ elliptic} \left[\frac{m}{2}(1-1/e_{\mathfrak a})\right]\mathfrak a - \frac{m}{2} \sum_{\substack{\mathfrak b \in \mathfrak R_\Gamma, \\ cusp}} \mathfrak b \ge 0 $$ by the definition of $S_m^H(\Gamma)$. Hence, $f/F^{\frac{m}{2}}\in L\left(\frac{m}{2}K\right)$. Now, comparing the dimensions of the right-hand and left-hand side in (\ref{cts-2}), we obtain their equality. This proves (\ref{cts-2}). \vskip .2in Let $W$ be any finite dimensional $\mathbb C$--vector space. Let $\text{Symm}^k(W)$ denote the space of symmetric tensors of degree $k\ge 1$. Then, by the Max Noether theorem (\cite{Miranda}, Chapter VII, Corollary 3.27), the multiplication induces a surjective map $$ \text{Symm}^{\frac{m}{2}}{\left(L\left(K\right)\right)}\twoheadrightarrow L\left(\frac{m}{2}K\right). $$ The theorem follows.
\end{proof} \vskip .2in Now, we combine Theorem \ref{cts-50000} with Lemma \ref{mhd-9} (x) to obtain a good criterion in the case $m\ge 4$ for {\bf testing} that $\mathfrak a_\infty$ is a $\frac{m}{2}$--Weierstrass point. We give examples in Section \ref{cts} (see Propositions \ref{cts-5001} and \ref{cts-5002}). \vskip .2in \begin{Cor}\label{cts-500000} Let $m\ge 4$ be an even integer. Assume that $\mathfrak R_\Gamma$ is not hyperelliptic. Assume that $\mathfrak a_\infty$ is a cusp for $\Gamma$. Let us select a basis $f_0, \ldots, f_{g-1}$, $g=g(\Gamma)$, of $S_2(\Gamma)$. Compute $q$--expansions of all monomials $$ f_0^{\alpha_0}f_1^{\alpha_1}\cdots f_{g-1}^{\alpha_{g-1}}, \ \ \alpha_i\in \mathbb Z_{\ge 0}, \ \sum_{i=0}^{g-1} \alpha_i=\frac{m}{2}. $$ Then, $\mathfrak a_\infty$ is {\bf not} a $\frac{m}{2}$--Weierstrass point if and only if there exists a basis $F_1, \ldots, F_t$ of the space spanned by all such monomials, $t=\dim{S_m^H(\Gamma)}=(m-1)(g-1)$ (see Lemma \ref{mhd-9} (v)), such that their $q$--expansions are of the form $$ F_u=a_uq^{u+m/2-1}+ \text{higher order terms in $q$}, \ \ 1\le u\le t, $$ where $$ a_u\in \mathbb C, \ \ a_u\neq 0. $$ \end{Cor} \vskip .2in When $\mathfrak R_\Gamma$ is hyperelliptic, for example if $g(\Gamma)=2$, the space $S_{m, 2}^H(\Gamma)$ can be a proper subspace of $S_{m}^H(\Gamma)$. For example, if $N=35$, then $g(\Gamma_0(N))=3$ and $X_0(N)$ is hyperelliptic. For $m=4, 6, 8, 10, 12, 14$ we checked that $\dim{S_{m, 2}^H(\Gamma)}= m+1$ while by general theory $\dim{S_{m}^H(\Gamma)}= (m-1)(g(\Gamma_0(N))-1)= 2(m-1)$. We see that $$ \dim{S_{m}^H(\Gamma)} - \dim{S_{m, 2}^H(\Gamma)}=m-3\ge 1, \ \ \text{for} \ m=4, 6, 8, 10, 12, 14. $$ \vskip .2in In fact, the case of $g(\Gamma)= 2$ can be covered in full generality. We leave the easy proof of the following proposition to the reader. \begin{Prop}\label{cts-5003} Assume that $g(\Gamma)= 2$. Let $f_0, f_1$ be a basis of $S_2(\Gamma)$.
Then, for any even integer $m\ge 4$, $f_0^uf_1^{\frac{m}{2}-u}$, $0\le u \le \frac{m}{2}$, is a basis of $S_{m, 2}^H(\Gamma)$. Therefore, $$ \dim{S_{m}^H(\Gamma)}= (m-1)(g(\Gamma)-1)= m-1> \frac{m}{2}+1, \ \ \text{for} \ m\ge 6, $$ and $S_{4, 2}^H(\Gamma)= S_{4}^H(\Gamma)$. \end{Prop} \vskip .2in We end this section with the standard yoga. \vskip .2in \begin{Thm} \label{mhd-10} Let $m\ge 2$ be an even integer. Let $\Gamma$ be a Fuchsian group of the first kind such that $g(\Gamma)\ge 1$. Let $t= \dim S^H_m(\Gamma)=\dim H^{m/2}\left(\mathfrak R_\Gamma\right)$. Let us fix a basis $f_1, \ldots, f_t$ of $S^H_m(\Gamma)$, and let $\omega_1, \ldots, \omega_t$ be the corresponding basis of $H^{m/2}\left(\mathfrak R_\Gamma\right)$. As above, we construct the holomorphic differential $W\left(\omega_1, \ldots, \omega_t\right)\in H^{\frac{t}{2}\left(m-1+t\right)}\left(\mathfrak R_\Gamma\right)$. We also construct the Wronskian $W(f_1,\ldots, f_t)\in S_{t(m+t-1)}(\Gamma)$ (see Proposition \ref{wron-2}). Then, we have the following equality: $$ \omega_{W(f_1,\ldots, f_t)}=W\left(\omega_1, \ldots, \omega_t\right). $$ In particular, we obtain the following: $$ W(f_1,\ldots, f_t)\in S^H_{t(m+t-1)}(\Gamma). $$ Moreover, assume that $\mathfrak a_\infty$ is a $\Gamma$-cusp. Then, $\mathfrak a_\infty$ is a $\frac{m}{2}$--Weierstrass point if and only if $$ \mathfrak c'_{ W(f_1,\ldots, f_t)}(\mathfrak a_\infty)\ge 1+ \frac{t}{2}\left(m-1+t\right) \ \ \text{i.e.,} \ \ \mathfrak c_{ W(f_1,\ldots, f_t)}(\mathfrak a_\infty)\ge \frac{t}{2}\left(m-1+t\right) $$ (See also Lemma \ref{mhd-9} (x) for a more effective formulation of the criterion.) \end{Thm} \begin{proof} Since this is an equality of two meromorphic differentials, it is enough to check the identity locally. Let $z\in \mathbb H$ be a non--elliptic point, and let $U\subset \mathbb H$, $z\in U$, be a chart of $\mathfrak a_z$ such that $U$ does not contain any elliptic point. Then, one can use \cite[Section 2.3]{Miyake} to check the equality directly.
Indeed, we have the following argument. Let $z_0\in \mathbb H$ be a non--elliptic point, and let $U\subset \mathbb H$, $z_0\in U$, be a chart of $\mathfrak a_{z_0}$ such that $U$ does not contain any elliptic point. Let $t_{\mathfrak{a}}$ be a local coordinate on a neighborhood $V_{\mathfrak{a}} =\pi(U)$. By \cite[Section 1.8]{Miyake}, if $U$ is small enough, the projection $\pi$ gives a homeomorphism of $U$ onto $V_{\mathfrak{a}}$ such that \begin{equation}\label{mhd-16} t_{\mathfrak{a}} \circ \pi(z)=z \ \text{ for } z \in U. \end{equation} Let $f \in \mathcal A_m(\Gamma)$ and let $\omega_f$ be the corresponding differential. Locally there exists a unique meromorphic function $\varphi$ such that $\omega_f=\varphi \left(dz\right)^{m/2}$. By \cite[Section 2.3]{Miyake}, the local correspondence $f\mapsto \omega_f$ is given by $$\varphi (t_{\mathfrak{a}} \circ \pi(z)) = f(z) \left( d(t_{\mathfrak{a}}\circ\pi)/dz \right)^{-m/2},$$ which, by the choice of local chart (\ref{mhd-16}), becomes $$\varphi(z)=f(z) \ \text{ for } \ z \in U . $$ So, in a neighborhood of a non--elliptic point $z\in {\mathbb{H}}$ we have $$\omega_f=f \left(dz\right)^{m/2}.$$ This gives us the local identity $$W_z\left(\omega_1, \ldots, \omega_t\right) = W(f_1,\ldots, f_t).$$ Since the above is valid for any even $m\ge 2$, we get the local identity of two meromorphic differentials \begin{align*} \omega_{W(f_1,\ldots, f_t)} &= W(f_1,\ldots, f_t) \left(dz\right)^{\frac{t}{2}\left(m-1+t\right)} \\ &= W_z\left(\omega_1, \ldots, \omega_t\right)\left(dz\right)^{\frac{t}{2}\left(m-1+t\right)} \\ &= W\left(\omega_1, \ldots, \omega_t\right). \end{align*} Now, assume that $\mathfrak a_\infty$ is a $\Gamma$-cusp. Then, $\mathfrak a_\infty$ is a $\frac{m}{2}$--Weierstrass point if and only if $$ \nu_{\mathfrak a_\infty}\left(W\left(\omega_1, \ldots, \omega_t\right)\right)\ge 1 $$ i.e., $$ \nu_{\mathfrak a_\infty}\left( \omega_{W(f_1,\ldots, f_t)}\right) \ge 1 $$ by the first part of the proof.
Finally, by (\ref{mhd-5}), this is equivalent to $$ \mathfrak c'_{ W(f_1,\ldots, f_t)}(\mathfrak a_\infty)= \nu_{\mathfrak a_\infty}\left(\omega_{W(f_1,\ldots, f_t)}\right) + \frac{t}{2}\left(m-1+t\right)\ge 1+ \frac{t}{2}\left(m-1+t\right). $$ This completes the proof of the theorem. \end{proof} \section{Explicit computations based on Corollary \ref{cts-500000} for $\Gamma=\Gamma_0(N)$} \label{cts} In this section we apply the algorithm in Corollary \ref{cts-500000} combined with SAGE. The method is the following. We take $q$-expansions of the base elements of $S_2(\Gamma_0(N))$: $$ f_0, \ldots, f_{g-1}, $$ where $g=g(\Gamma_0(N))$. For even $m \ge 4$, we compute $q$--expansions of all monomials of degree $m/2$: $$ f_0^{\alpha_0}f_1^{\alpha_1}\cdots f_{g-1}^{\alpha_{g-1}}, \ \ \alpha_i\in \mathbb Z_{\ge 0}, \ \sum_{i=0}^{g-1} \alpha_i=\frac{m}{2}. $$ The number of monomials is $$ \binom{g+m/2-1}{m/2}. $$ \vskip .2in By selecting the first $m/2+m \cdot (g-1)$ terms from the $q$--expansions of the monomials (see Lemma \ref{mhd-9} (ix)), we can create a matrix of size $$ \binom{g+m/2-1}{m/2} \times \left(\frac{m}{2}+m \cdot (g-1)\right). $$ \vskip .2in Then, we perform a suitable integral Gaussian elimination to transform the matrix into row echelon form. The procedure is as follows. We successively sort the rows and transform them to cancel the leading coefficients of rows having the same number of leading zeros as their predecessors. We use the {\it Quicksort algorithm} for sorting. We obtain the transformed matrix and the transformation matrix. The non-null rows of the transformed matrix give the $q$-expansions of the basis elements, and the corresponding rows of the transformation matrix give the corresponding linear combinations of monomials. \vskip .2in Using the method described above, we perform the various computations mentioned below. For example, we can easily verify particular cases of Theorem \ref{cts-50000}.
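The first step of the method, enumerating the exponent vectors and multiplying truncated $q$--expansions to form the rows of the matrix, can be sketched as follows (our own Python code; in practice the $q$--expansions themselves come from SAGE, and the function names below are ours):

```python
from itertools import combinations_with_replacement
from math import comb

def monomial_exponents(g, d):
    # all (alpha_0,...,alpha_{g-1}) with nonnegative entries summing to d;
    # there are C(g + d - 1, d) of them
    exps = []
    for combo in combinations_with_replacement(range(g), d):
        alpha = [0] * g
        for i in combo:
            alpha[i] += 1
        exps.append(tuple(alpha))
    return exps

def truncated_mul(a, b, prec):
    # product of two q-expansions given as coefficient lists, modulo q^prec
    out = [0] * prec
    for i, x in enumerate(a[:prec]):
        for j, y in enumerate(b[:prec - i]):
            out[i + j] += x * y
    return out

def monomial_rows(basis, d, prec):
    # q-expansions (mod q^prec) of all degree-d monomials in the basis elements
    rows = []
    for alpha in monomial_exponents(len(basis), d):
        row = [1] + [0] * (prec - 1)  # the constant series 1
        for f, a in zip(basis, alpha):
            for _ in range(a):
                row = truncated_mul(row, f, prec)
        rows.append(row)
    return rows

# g = 3, m = 4: there are C(3 + 2 - 1, 2) = 6 monomials of degree m/2 = 2
assert len(monomial_exponents(3, 2)) == comb(3 + 2 - 1, 2) == 6
```

For instance, with the toy basis $f_0=q$, $f_1=q^2$ the degree-two monomials are $q^2, q^3, q^4$, as `monomial_rows([[0, 1], [0, 0, 1]], 2, 5)` returns.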
\begin{Prop}\label{cts-5001} For $m=4, 6, 8, 10, 12$ and $N=34, 38, 44, 55$, and for $m=4, 6, 8, 10$ and $N=54, 60$, we have $S_{m}^H(\Gamma_0(N))=S_{m, 2}^H(\Gamma_0(N))$. (We remark that none of the curves $X_0(N)$, $N\in \{34, 38, 44, 54, 55, 60\}$, is hyperelliptic; see the paragraph after Corollary \ref{mhd-14}.) \end{Prop} \vskip .2in We can also deal with generalized Weierstrass points. For example, we can check the following result: \begin{Prop}\label{cts-5002} For $m=2, 4, 6, 8, 10$, $\mathfrak a_\infty$ is not a $\frac{m}{2}$--Weierstrass point for $X_0(34)$. Next, $\mathfrak a_\infty$ is not a ($1$--)Weierstrass point for $X_0(55)$, but it is a $\frac{m}{2}$--Weierstrass point for $X_0(55)$ and $m=4, 6, 8, 10$. \end{Prop} \vskip .2in For example, let $m=4$. Then, for $X_0(34)$ the monomials are \begin{align*} f_0^2 & =q^{2}-4q^{5}-4q^{6}+12q^{8}+12q^{9}-2q^{10} \\ f_0f_1& = q^{3}-q^{5}-2q^{6}-2q^{7}+2q^{8}+5q^{9}+2q^{10} \\ f_0f_2 & = q^{4}-2q^{5}-q^{6}-q^{7}+6q^{8}+6q^{9}+2q^{10} \\ -f_1^2 + f_0f_2& = -2q^{5}+q^{6}-q^{7}+5q^{8}+6q^{9}+4q^{10} \\ -f_1^2 + f_0f_2 + 2f_1f_2& =-3q^{6}-5q^{7}+11q^{8}+16q^{9}+2q^{10} \\ -f_1^2 + f_0f_2 + 2f_1f_2 + 3f_2^2 &= -17q^{7}+17q^{8}+34q^{9}+17q^{10} \\ \end{align*} Their leading exponents are $\frac{m}{2}=2, 3, 4, 5, 6$, and $\frac{m}{2}+ (m-1)(g-1)-1=7$, which shows that $\mathfrak a_\infty$ is not a $2$--Weierstrass point for $X_0(34)$.
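The elimination step itself can be sketched as follows (our own Python code over $\mathbb Q$, a simplification of the integral variant with Quicksort described above; `echelon_pivots` is our name). Feeding it the coefficients of the six reduced monomials displayed above for $X_0(34)$ recovers the leading exponents $2, \ldots, 7$:

```python
from fractions import Fraction

def echelon_pivots(rows):
    # Gaussian elimination over Q; returns the pivot (leading-exponent) columns
    rows = [[Fraction(c) for c in r] for r in rows]
    pivots, r = [], 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(col)
        r += 1
        if r == len(rows):
            break
    return pivots

# coefficients of q^0,...,q^10 of the six degree-2 monomial combinations
# on X_0(34) (m = 4, g = 3), read off from the display above
rows = [
    [0, 0, 1, 0, 0, -4, -4, 0, 12, 12, -2],   # f_0^2
    [0, 0, 0, 1, 0, -1, -2, -2, 2, 5, 2],     # f_0 f_1
    [0, 0, 0, 0, 1, -2, -1, -1, 6, 6, 2],     # f_0 f_2
    [0, 0, 0, 0, 0, -2, 1, -1, 5, 6, 4],
    [0, 0, 0, 0, 0, 0, -3, -5, 11, 16, 2],
    [0, 0, 0, 0, 0, 0, 0, -17, 17, 34, 17],
]
m, g = 4, 3
assert echelon_pivots(rows) == list(range(m // 2, m // 2 + (m - 1) * (g - 1)))
```

The pivots being exactly $\frac{m}{2}, \ldots, \frac{m}{2}+(m-1)(g-1)-1$ is precisely the condition in Lemma \ref{mhd-9} (x) for $\mathfrak a_\infty$ not being a $\frac{m}{2}$--Weierstrass point.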
\vskip .2in For $X_0(55)$ the monomials are \begin{align*} & f_0^2\\ & f_0f_1\\ & f_0f_2\\ & f_0f_3\\ & f_0f_4 \\ & -f_1f_2 + f_0f_3\\ & -f_1f_2 + f_0f_3 + 2f_2f_3\\ & -f_1f_2 + f_0f_3 + 2f_2f_3 - f_3^2\\ & -f_1f_2 + f_0f_3 + 2f_2f_3 - f_3^2 - 2f_3f_4\\ &-f_1f_2 + f_0f_3 + 2f_2f_3 - f_3^2 - 2f_3f_4 + f_4^2\\ & -f_1f_2 - f_2^2 + f_0f_3 + 2f_2f_3 - f_3^2 + f_0f_4 - 6f_3f_4 - f_4^2\\ &-f_2^2 + f_3^2 + f_0f_4 - f_2f_4 - 4f_3f_4 + 2f_4^2.\\ \end{align*} \vskip .2in Their $q$--expansions are given by the following expressions: \begin{align*} & q^{2}-2q^{8}-2q^{9}-2q^{10}+2q^{11}-4q^{13}+3q^{14}+4q^{15}+3q^{16}-2q^{17}+5q^{18} \\ & q^{3}-2q^{7}+q^{10}-2q^{11}+q^{12}-2q^{14}-4q^{16}+5q^{18} \\ & q^{4}-2q^{7}-q^{8}+3q^{9}+4q^{10}-4q^{11}-q^{13}-2q^{14}-3q^{15}-10q^{16}-2q^{17}+3q^{18} \\ & q^{5}-2q^{7}-q^{8}+3q^{9}+4q^{10}-4q^{11}-3q^{13}+q^{14}-q^{15}-11q^{16}-2q^{17}+5q^{18} \\ & q^{6}-2q^{11}-q^{12}-q^{13}-q^{14}+q^{15}-q^{16}+3q^{18} \\ & -2q^{7}+q^{8}+6q^{9}+q^{10}-10q^{11}-3q^{12}-5q^{13}+13q^{14}+21q^{15}-17q^{16}-8q^{17}-14q^{18} \\ & q^{8}+2q^{9}-5q^{10}-6q^{11}+19q^{12}+7q^{13}-13q^{14}-33q^{15}-7q^{16}+38q^{17}+14q^{18} \\ & 2q^{9}-q^{10}-4q^{11}+9q^{12}-5q^{13}+4q^{14}-13q^{15}-12q^{16}+18q^{17}+4q^{18} \\ & -q^{10}+11q^{12}-11q^{13}-7q^{15}-22q^{16}+22q^{17}+22q^{18} \\ & 11q^{12}-11q^{13}-11q^{15}-22q^{16}+22q^{17}+22q^{18} \\ &-22q^{13}+44q^{15}-44q^{16}+44q^{18} \\ &-22q^{14}+22q^{15}-22q^{16}+44q^{18} \\ \end{align*} The exponent $11$ is skipped, and the last exponent is $14> \frac{m}{2}+ (m-1)(g-1)-1=13$, so the leading exponents are not of the form required by Lemma \ref{mhd-9} (x). Hence, $\mathfrak a_\infty$ is a $2$--Weierstrass point for $X_0(55)$. \section{ Wronskians of Modular Forms}\label{wron} In this section we deal with a generalization of the usual notion of the Wronskian of cuspidal modular forms \cite{roh}, (\cite{ono}, 6.3.1), (\cite{Muic}, the proof of Theorem 4-5), and (\cite{Muic2}, Lemma 4-1). \vskip .2in \begin{Lem}\label{wron-1} Let $f\in M_m(\Gamma, \chi)$. Let $\gamma\in \Gamma$.
Then, for $k\ge 0$, the $k$--th derivative of the function $f(\gamma.z)$ is given by $$ \frac{d^{k}}{dz^k}f(\gamma.z) =\chi(\gamma) j(\gamma, z)^{m+2k} \cdot \frac{d^{k}f(z)}{dz^{k}}+ \chi(\gamma) \sum_{i=0}^{k-1} D_{ik} \cdot j(\gamma, z)^{m+k+i} \cdot \frac{d^{i}f(z)}{dz^{i}}, $$ where $D_{ik}$ are some constants depending on $m$, $k$, and $\gamma$. If $\Gamma\subset SL_2(\mathbb Z)$, then the constants can be taken to be from $\mathbb Z$. \end{Lem} \begin{proof} This follows by an easy induction on $k$ using the fact that $$ \frac{d}{dz} \gamma.z=j(\gamma, z)^{-2}. $$ See also the proof of Theorem 4-5 in \cite{Muic}, the text between the lines (4-6) and (4-8). \end{proof} The following proposition is the main result of the present section: \vskip .2in \begin{Prop}\label{wron-2} Let $m\ge 1$. Then, for any sequence $f_1, \ldots, f_k\in M_m(\Gamma, \chi)$, the Wronskian $$ W\left(f_1, \ldots, f_k\right)(z)\overset{def}{=}\left|\begin{matrix} f_1(z) & \cdots & f_{k}(z) \\ \frac{df_1(z)}{dz} & \cdots & \frac{df_{k}(z)}{dz} \\ &\cdots & \\ \frac{d^{k-1}f_1(z)}{dz^{k-1}} & \cdots & \frac{d^{k-1}f_{k}(z)}{dz^{k-1}} \\ \end{matrix}\right| $$ is a cuspidal modular form in $S_{k(m+k-1)}(\Gamma, \chi^k)$ if $k\ge 2$. If $f_1, \ldots, f_k$ are linearly independent, then $W\left(f_1, \ldots, f_k\right)\neq 0$. \end{Prop} \begin{proof} This is a standard fact. We apply Lemma \ref{wron-1} to conclude $$ W\left(f_1, \ldots, f_k\right)(\gamma. z)=\chi^k(\gamma) j(\gamma, z)^{k(m+k-1)} W\left(f_1, \ldots, f_k\right)(z), \ \ \gamma\in \Gamma, \ z\in \mathbb H. $$ Let $x\in {\mathbb{R}}\cup \{\infty\}$ be a cusp for $\Gamma$. Let $\sigma\in SL_2({\mathbb{R}})$ be such that $\sigma.x=\infty$. We write $$ \{\pm 1\} \sigma \Gamma_{x}\sigma^{-1}= \{\pm 1\}\left\{\left(\begin{matrix}1 & lh'\\ 0 & 1\end{matrix}\right); \ \ l\in {\mathbb{Z}}\right\}, $$ where $h'>0$ is the width of the cusp.
Then we write the Fourier expansion of each $f_i$ at $x$ as follows: $$ (f_i|_m \sigma^{-1})(\sigma.z)= \sum_{n=0}^\infty a_{n, i} \exp{\frac{2\pi \sqrt{-1}n\sigma.z}{h'}}. $$ Using the cocycle identity $$ 1=j(\sigma^{-1}\sigma, z)=j(\sigma^{-1}, \sigma.z)j(\sigma, z), $$ this implies the following: $$ j(\sigma, z)^{m} \cdot f_i (z)= \sum_{n=0}^\infty a_{n, i} \exp{\frac{2\pi \sqrt{-1}n\sigma.z}{h'}}. $$ \vskip .1in By induction on $t\ge 0$, using $$ \frac{d}{dz} \sigma.z=j(\sigma, z)^{-2}, $$ we have the following: \begin{equation}\label{wron-3} j(\sigma, z)^{m+2t} \frac{d^{t}f_i(z)}{dz^{t}} + \sum_{u=0}^{t-1} D_{i, t, u} j(\sigma, z)^{m+t+u} \frac{d^{u}f_i(z)}{dz^{u}}= \sum_{n=0}^\infty a_{n, i,t} \exp{\frac{2\pi \sqrt{-1}n\sigma.z}{h'}}, \end{equation} for some complex numbers $D_{i, t, u}$ and $a_{n, i,t}$, where $$ a_{0, i, t}=0, \ \ t\ge 1. $$ \vskip .2in Now, by the above considerations, using (\ref{wron-3}), we have \begin{align*} \left(W\left(f_1, \ldots, f_k\right)|_{k(m+k-1)} \sigma^{-1}\right) (\sigma. z) &=j(\sigma, z)^{k(m+k-1)} W\left(f_1, \ldots, f_k\right)(z)\\ &= \left|\begin{matrix} j(\sigma, z)^{m} f_1(z) & \cdots & j(\sigma, z)^{m} f_{k}(z) \\ j(\sigma, z)^{m+2} \frac{df_1(z)}{dz} & \cdots & j(\sigma, z)^{m+2} \frac{df_{k}(z)}{dz} \\ &\cdots & \\ j(\sigma, z)^{m+2(k-1)}\frac{ d^{k-1}f_1(z)}{dz^{k-1}} & \cdots & j(\sigma, z)^{m+2(k-1)} \frac{d^{k-1}f_{k}(z)}{dz^{k-1}} \\ \end{matrix}\right|\\ &= \det{\left( \sum_{n=0}^\infty a_{n, i+1,t} \exp{\frac{2\pi \sqrt{-1}n\sigma.z}{h'}}\right)_{0\le i, t\le k-1}} \end{align*} Now, we see that the Wronskian is holomorphic at each cusp of $\Gamma$ and vanishes there to order at least $k-1$. In particular, it belongs to $S_{k(m+k-1)}(\Gamma, \chi^k)$ if $k\ge 2$. The claim that linear independence is equivalent to the Wronskian being not identically zero is standard (\cite{Miranda}, Chapter VII, Lemma 4.4). \end{proof} \vskip .2in We end this section with an elementary remark regarding Wronskians.
In the case when $\Gamma$ has a cusp at infinity $\mathfrak a_\infty$, it is more convenient to use the derivative with respect to $$ q=\exp{\frac{2\pi \sqrt{-1} z}{h}}, $$ where $h>0$ is the width of the cusp, since all modular forms have $q$--expansions. Using the notation from Proposition \ref{wron-2}, it is easy to see that $$ \frac{d }{dz}= \frac{2\pi \sqrt{-1} }{h} \cdot q\frac{d }{dq}. $$ This implies that $$ \frac{d^k }{dz^k}= \left(\frac{2\pi \sqrt{-1} }{h}\right)^k \cdot \left(q\frac{d }{dq}\right)^k, \ \ k\ge 0. $$ \vskip .2in Thus, we may define the $q$--Wronskian as follows: \begin{equation}\label{wron-5000} W_q\left(f_1, \ldots, f_k\right)\overset{def}{=} \left|\begin{matrix} f_1 & \cdots & f_{k} \\ q\frac{d}{dq} f_1& \cdots & q\frac{d }{dq} f_k\\ &\cdots & \\ \left(q\frac{d }{dq}\right)^{k-1} f_1& \cdots & \left(q\frac{d }{dq}\right)^{k-1} f_k \\ \end{matrix}\right|, \end{equation} considering $q$--expansions of $f_1, \ldots, f_k$. \vskip .2in We obtain \begin{equation}\label{wron-5001} W\left(f_1, \ldots, f_k\right)=\left(\frac{2\pi \sqrt{-1} }{h}\right)^{k(k-1)/2} W_q\left(f_1, \ldots, f_k\right). \end{equation} \section{On a Divisor of a Wronskian} \label{wron-cont} In this section we discuss the divisor of cuspidal modular forms constructed via Wronskians (see Proposition \ref{wron-2}). We start with necessary preliminary results. \vskip .2in \begin{Lem} \label{wron-cont-1} Let $\varphi_1, \cdots, \varphi_k$ be a sequence of linearly independent meromorphic functions on some open set $U\subset \mathbb C$. We define their Wronskian as usual: $W(\varphi_1, \cdots, \varphi_k)= \det{\left(\frac{d^{i-1}\varphi_j}{dz^{i-1}} \right)_{i,j=1, \ldots, k}}$. Then, we have the following: \begin{itemize} \item[(i)] The Wronskian $W(\varphi_1, \cdots, \varphi_k)$ is a non--zero meromorphic function on $U$.
\item[(ii)] We have $W(\varphi_1, \cdots, \varphi_k)=\varphi^k W(\varphi_1/\varphi, \cdots, \varphi_{k}/\varphi)$ for all non--zero meromorphic functions $\varphi$ on $U$. \item[(iii)] Let $\xi\in U$ be such that all $\varphi_i$ are holomorphic at $\xi$. Let $A$ be the $\mathbb C$--span of all $\varphi_i$. Then, all $\varphi\in A$ are holomorphic at $\xi$, and the set $\{\nu_{z-\xi}(\varphi); \ \ \varphi\in A, \ \varphi\neq 0\}$ has exactly $k=\dim A$ different elements (here, as in Section \ref{prelim}, $\nu_{z-\xi}$ stands for the order at $\xi$). Let $\nu_{z-\xi}(\varphi_1,\ldots, \varphi_k)$ be the sum of all $\dim A$ values of that set. Then, $W(\varphi_1, \cdots, \varphi_k)$ is holomorphic at $\xi$, and the corresponding order is $$ \nu_{z-\xi}(\varphi_1,\ldots, \varphi_k)- \frac{k(k-1)}{2}. $$ \end{itemize} \end{Lem} \begin{proof} (i) is well--known; see for example (\cite{Miranda}, Chapter VII, Lemma 4.4), or note that it is a consequence of \cite[Proposition III.5.8]{FK}. (ii) is a consequence of the proof of \cite[Proposition III.5.8]{FK} (see formula (5.8.4)). Finally, we prove (iii). By the text before the statement of \cite[Proposition III.5.8]{FK}, we see that we can select another basis $\psi_1, \ldots, \psi_k$ of $A$ such that $$ \nu_{z-\xi}(\psi_1)< \nu_{z-\xi}(\psi_2)< \ldots< \nu_{z-\xi}(\psi_k). $$ Then, by \cite[Proposition III.5.8]{FK}, we have that the order of $W(\psi_1, \ldots, \psi_k)$ at $\xi$ is equal to $$ \sum_{i=1}^k \left( \nu_{z-\xi}(\psi_i) -i+1\right)= \nu_{z-\xi}(\varphi_1,\ldots, \varphi_k)- \frac{k(k-1)}{2}. $$ But $\varphi_1, \ldots, \varphi_k$ is also a basis of $A$. Thus, we see that we can write $$ \left(\begin{matrix} \varphi_1\\ \varphi_2\\ \vdots\\ \varphi_k \end{matrix}\right) = C\cdot \left(\begin{matrix} \psi_1 \\ \psi_2\\ \vdots\\ \psi_k \end{matrix}\right) $$ for some $C\in GL_k(\mathbb C)$.
This implies $$ \left(\begin{matrix} \varphi_1 & d\varphi_1/dz& \cdots & d^{k-1}\varphi_1/dz^{k-1}\\ \varphi_2 & d\varphi_2/dz& \cdots & d^{k-1}\varphi_2/dz^{k-1}\\ \vdots\\ \varphi_k & d\varphi_k/dz& \cdots & d^{k-1}\varphi_k/dz^{k-1} \end{matrix}\right) = C\cdot \left(\begin{matrix} \psi_1 & d\psi_1/dz& \cdots & d^{k-1}\psi_1/dz^{k-1}\\ \psi_2 & d\psi_2/dz& \cdots & d^{k-1}\psi_2/dz^{k-1}\\ \vdots\\ \psi_k & d\psi_k/dz& \cdots & d^{k-1}\psi_k/dz^{k-1} \end{matrix}\right). $$ Hence $$ W(\varphi_1, \ldots, \varphi_k) =\det C \cdot W(\psi_1, \ldots, \psi_k) $$ has the same order at $\xi$ as $W(\psi_1, \ldots, \psi_k)$. \end{proof} \vskip .2in As a direct consequence of Lemma \ref{wron-cont-1}, we obtain the following result. At this point the reader should review the text in Section \ref{prelim} before the statement of Lemma \ref{prelim-1} as well as Proposition \ref{wron-2}. The proof is left to the reader as an exercise. \begin{Prop}\label{wron-cont-2} Assume that $m\ge 2$ is even. Let $f_1, \ldots, f_k\in M_m(\Gamma)$ be a sequence of linearly independent modular forms. Let $\xi \in \mathbb H$. Then, we have the following: $$ \nu_{\mathfrak a_\xi}\left(W(f_1,\ldots, f_k)\right)=\frac{1}{e_\xi} \cdot \left( \nu_{z-\xi}(f_1,\ldots, f_k)- \frac{k(k-1)}{2}\right). $$ \end{Prop} \vskip .2in The case of a cusp requires a different technique, but the final result is similar: \begin{Thm}\label{wron-6} Assume that $m\ge 2$ is even. Suppose that $\mathfrak a_\infty$ is a cusp for $\Gamma$. Let $f_1, \ldots, f_k\in M_m(\Gamma)$ be a sequence of linearly independent modular forms. Consider $f_1, \ldots, f_k$ as meromorphic functions in a variable $q$ in a neighborhood of $q=0$, and define $\nu_{q-0}\left(f_1, \ldots, f_k\right)$ as in Lemma \ref{wron-cont-1} (iii). Then, we have the following identity: $$ \nu_{\mathfrak a_\infty}\left(W\left(f_1, \ldots, f_k\right)\right)= \nu_{q-0}\left(f_1, \ldots, f_k\right).
$$ \end{Thm} \begin{proof} By Lemma \ref{wron-cont-1} (ii), we can write \begin{equation}\label{wron-5} W\left(f_1, \ldots, f_k\right) =f_1^k \cdot W(1, f_2/f_1, \ldots, f_k/f_1) \end{equation} as meromorphic functions on $\mathbb H$. But the key fact is that $1, f_2/f_1, \ldots, f_k/f_1$ can be regarded as meromorphic (rational) functions on $\mathfrak R_\Gamma$, i.e., they are elements of $\mathbb C\left(\mathfrak R_\Gamma\right)$. The key point now is that these meromorphic functions and their Wronskian define a meromorphic $k(k-1)/2$--differential form, denoted by $W_\Gamma$. Details are contained in \cite[Section 4, Lemma 4.9]{Miranda}. We recall the following. Let $w\in \mathbb H$ be a non--elliptic point for $\Gamma$ such that $f_1(w)\neq 0$, and let $U\subset \mathbb H$ be a small neighborhood of $w$ giving a chart of $\mathfrak a_w$ on the curve $\mathfrak R_\Gamma$. Then, in the chart $U$ we have: $$ W_\Gamma= W(1, f_2(z)/f_1(z), \ldots, f_k(z)/f_1(z)) \left(dz\right)^{k(k-1)/2}. $$ On the other hand, in a chart of $\mathfrak a_\infty$, $W_\Gamma$ is given by the usual Wronskian, denoted by $W_{\Gamma, q}(1, f_2/f_1, \ldots, f_k/f_1)$, of $1, f_2/f_1, \ldots, f_k/f_1$ presented by $q$--expansions with respect to the derivatives $d^i/dq^i$, $0\le i\le k-1$, multiplied by $\left(dq\right)^{k(k-1)/2}$, i.e., $$ W_\Gamma= W_{\Gamma, q}(1, f_2/f_1, \ldots, f_k/f_1) \left(dq\right)^{k(k-1)/2}. $$ Next, we insert $q$-expansions of $f_1, \ldots, f_k$ into $W(1, f_2/f_1, \ldots, f_k/f_1)$. So, we can express $$ W(1, f_2/f_1, \ldots, f_k/f_1)=c_mq^m+ c_{m+1} q^{m+1}+\cdots, $$ where $c_m\neq 0$, $c_{m+1}, c_{m+2}, \ldots$ are complex numbers. Hence, \begin{equation}\label{wron-6000} \nu_{\mathfrak a_\infty}\left(W(1, f_2/f_1, \ldots, f_k/f_1)\right) = \nu_{q-0}\left( W(1, f_2/f_1, \ldots, f_k/f_1)\right)=m. \end{equation} Let us fix a neighborhood $U$ of $\infty$ such that it is a chart for $\mathfrak a_\infty$ and there are no elliptic points in it.
Then, we fix $w\in U$, $w\ne \infty$, and a chart $V$ of $w$ such that $V\subset \mathbb H\cap U$. Now, on $V$, we have the following expression for $W_\Gamma$: $$ W(1, f_2/f_1, \ldots, f_k/f_1) \left(dz\right)^{k(k-1)/2}=\left(c_mq^m+ c_{m+1} q^{m+1}+\cdots\right) \left(dz\right)^{k(k-1)/2}, \ \ q=\exp{\frac{2\pi \sqrt{-1} z}{h}}. $$ On the other hand, on $U$, we must have the expression for $W_\Gamma$ of the form $$ W_{\Gamma, q}(1, f_2/f_1, \ldots, f_k/f_1) \left(dq\right)^{k(k-1)/2}= \left(d_nq^n+ d_{n+1} q^{n+1}+\cdots\right) \left(dq\right)^{k(k-1)/2}, $$ where $d_n\neq 0$, $d_{n+1}, d_{n+2}, \ldots$ are complex numbers. We have \begin{equation}\label{wron-6001} \nu_{\mathfrak a_\infty}\left(W_\Gamma\right)= \nu_{q-0}\left( W_{\Gamma, q}(1, f_2/f_1, \ldots, f_k/f_1)\right)=n. \end{equation} By definition of a meromorphic $k(k-1)/2$--differential, on $V$ these expressions must be related by $$ c_mq^m+ c_{m+1} q^{m+1}+\cdots = \left(d_nq^n+ d_{n+1} q^{n+1}+\cdots\right) \cdot \left(\frac{dq}{dz}\right)^{k(k-1)/2} . $$ Hence, we obtain $$ n=m-k(k-1)/2. $$ Using (\ref{wron-6000}) and (\ref{wron-6001}), this can be written as follows: $$ \nu_{q-0}\left( W_{\Gamma, q}(1, f_2/f_1, \ldots, f_k/f_1)\right)= \nu_{q-0}\left( W(1, f_2/f_1, \ldots, f_k/f_1)\right) -k(k-1)/2. $$ Consider again $f_1, \ldots, f_k$ as meromorphic functions in a variable $q$ in a neighborhood of $q=0$, and define the Wronskian $W_{\Gamma, q}(f_1, \ldots, f_k)$ using derivatives with respect to $q$.
Then, Lemma \ref{wron-cont-1} (ii) implies \begin{align*} \nu_{\mathfrak a_\infty}\left(W\left(f_1, \ldots, f_k\right)\right)&= \nu_{\mathfrak a_\infty}\left(f^k_1\right)+ \nu_{\mathfrak a_\infty}\left(W\left(1, f_2/f_1, \ldots, f_k/f_1\right)\right)\\ &= \nu_{q-0}\left(f^k_1\right)+ \nu_{q-0}\left(W\left(1, f_2/f_1, \ldots, f_k/f_1\right)\right)\\ &= \nu_{q-0}\left(f^k_1\right)+ \nu_{q-0}\left( W_{\Gamma, q}(1, f_2/f_1, \ldots, f_k/f_1)\right)+k(k-1)/2\\ &= \nu_{q-0}\left( W_{\Gamma, q}(f_1, f_2, \ldots, f_k)\right)+k(k-1)/2. \end{align*} Finally, we apply Lemma \ref{wron-cont-1} (iii). \end{proof} \section{Computation of Wronskians for $\Gamma=SL_2(\mathbb Z)$}\label{lev0} Assume that $m\ge 4$ is an even integer. Let $M_m$ be the space of all modular forms of weight $m$ for $SL_2(\mathbb Z)$. We introduce the two Eisenstein series \begin{align*} &E_4(z)=1+240 \sum_{n=1}^\infty \sigma_3(n)q^n\\ &E_6(z)=1 -504 \sum_{n=1}^\infty \sigma_5(n)q^n \end{align*} of weight $4$ and $6$, where $q=\exp{(2\pi i z)}$. Then, for any even integer $m\ge 4$, we have \begin{equation}\label{lev0-1} M_m=\oplus_{\substack{\alpha, \beta\ge 0\\ 4\alpha+6\beta =m }} \mathbb C E^\alpha_4 E^\beta_6. \end{equation} We have \begin{equation}\label{lev0-2} k=k_m\overset{def}{=} \dim M_m=\begin{cases} \left[m/12\right]+1, & m\not\equiv 2 \pmod{12};\\ \left[m/12\right], & m\equiv 2 \pmod{12}. \end{cases} \end{equation} \vskip .2in We let $$ \Delta(z)=q+\sum_{n=2}^\infty\tau(n)q^n=q -24q^2+252 q^3+\cdots =\frac{E_4^3(z)-E_6^2(z)}{1728} $$ be the Ramanujan delta function. \vskip .2in It is well--known that the map $f\longmapsto f\cdot \Delta$ is an isomorphism between the vector space of modular forms $M_m$ and the space of all cuspidal modular forms $S_{m+12}$ inside $M_{m+12}$. In general, we have the following: $$ \dim S_m=\dim M_m-1 $$ for all even integers $m\ge 4$. \vskip .2in Now, we are ready to compute our first Wronskian (see (\ref{wron-5000}) for notation).
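Before doing so, the decomposition (\ref{lev0-1}) and the dimension formula (\ref{lev0-2}) admit a quick numerical sanity check (an illustrative sketch, not part of the original argument): the number of monomials $E_4^\alpha E_6^\beta$ with $4\alpha+6\beta=m$ must agree with $\dim M_m$.

```python
# Illustrative sketch: count the monomials E_4^alpha E_6^beta of weight m
# and compare with the classical dimension formula for M_m.

def dim_Mm(m):
    """dim M_m for even m >= 4, as in the displayed formula."""
    return m // 12 if m % 12 == 2 else m // 12 + 1

def monomial_count(m):
    """Number of pairs (alpha, beta) >= 0 with 4*alpha + 6*beta = m."""
    return sum(1 for beta in range(m // 6 + 1) if (m - 6 * beta) % 4 == 0)

for m in range(4, 101, 2):
    assert monomial_count(m) == dim_Mm(m)

print(dim_Mm(12), dim_Mm(26))  # 2 2
```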
\vskip .2in \begin{Prop}\label{lev0-3} We have the following: \begin{itemize} \item[(i)] $W_q\left(E^3_4, E^2_6\right)=-1728\cdot \Delta \cdot E^2_4 E_6$. \item[(ii)] $2 E_4 \frac{d}{dq}E_6 - 3 E_6 \frac{d}{dq} E_4=-1728 \cdot \Delta\cdot q^{-1}$. \end{itemize} \end{Prop} \begin{proof} We compute \begin{align*} W_q\left(E^3_4, E^2_6\right) &= \left|\begin{matrix} E^3_4 & E^2_6\\ q\frac{d}{dq} E^3_4 & q\frac{d}{dq} E^2_6 \end{matrix}\right|\\ &=2 E^3_4 E_6 \cdot q\frac{d}{dq}E_6 - 3 E^2_4 E^2_6 \cdot q\frac{d}{dq} E_4\\ &= E^2_4 E_6 \cdot q\cdot \left(2 E_4 \frac{d}{dq}E_6 - 3 E_6 \frac{d}{dq} E_4 \right). \end{align*} But we know that $W_q\left(E^3_4, E^2_6\right)$ is a cusp form of weight $2\cdot (12+2-1)=26$. Thus, it must be of the form $$ W_q\left(E^3_4, E^2_6\right) =\lambda\cdot \Delta\cdot E^2_4 E_6, $$ for some non--zero constant $\lambda$. This implies that $$ 2 E_4 \frac{d}{dq}E_6 - 3 E_6 \frac{d}{dq} E_4 =\lambda \cdot \Delta \cdot q^{-1}. $$ Considering explicit $q$--expansions, we find that $$ \lambda=-1728. $$ This proves both (i) and (ii). \end{proof} \vskip .2in The general case requires a different proof based on results of Section \ref{wron-cont}. \vskip .2in \begin{Prop} \label{lev0-4} Assume that $m=12t$ for some $t\ge 1$. We write the basis of $M_m$ as follows: $\left(E^3_4\right)^u \left(E^2_6\right)^{t-u}$, $0\le u \le t$. Then, we have the following: $$ W_q\left(\left(E^3_4\right)^u \left(E^2_6\right)^{t-u}, \ \ 0\le u \le t\right)= \lambda \cdot\Delta^{\frac{t(t+1)}{2}} E_4^{t(t+1)}E_6^{\frac{t(t+1)}{2}}, $$ for some non--zero constant $\lambda$. \end{Prop} \begin{proof} We can select another basis $f_0, \ldots, f_{t}$ of $M_m$ such that $f_i=c_i q^i+ d_i q^{i+1}+\cdots$, $0\le i \le t$, where $c_i\neq 0 , d_i, \ldots$ are some complex constants. An easy application of Theorem \ref{wron-6} gives $$ \nu_{\mathfrak a_\infty}\left(W_{q}\left(\left(E^3_4\right)^u \left(E^2_6\right)^{t-u}, \ \ 0\le u \le t\right)\right)= \frac{t(t+1)}{2}.
$$ But since $\mathrm{div}{(\Delta)}=\mathfrak a_\infty$, we obtain that $$ f\overset{def}{=}W_{q}\left(\left(E^3_4\right)^u \left(E^2_6\right)^{t-u}, \ \ 0\le u \le t\right)/\Delta^{\frac{t(t+1)}{2}} $$ is a non--cuspidal modular form of weight $$ l=k\cdot \left(m+ k-1\right) -12\cdot\frac{t(t+1)}{2}=(t+1)\cdot 13t-6t(t+1)=7t(t+1). $$ It remains to determine $f$. In order to do that, we use Proposition \ref{wron-cont-2}, and consider the order of vanishing of $W_q\left(\left(E^3_4\right)^u \left(E^2_6\right)^{t-u}, \ \ 0\le u \le t\right)$ at the elliptic points $i$ and $e^{\pi i/3}=(1+i\sqrt{3})/2$, of orders $2$ and $3$, respectively. We recall (see \cite{Muic2}, Lemma 4-1) that $$ \mathrm{div}{(E_4)}=\frac13 \mathfrak a_{(1+i\sqrt{3})/2}. $$ Similarly, one shows that $$ \mathrm{div}{(E_6)}=\frac12 \mathfrak a_{i}. $$ This implies that $\left(E^3_4\right)^u \left(E^2_6\right)^{t-u}$ has order $3u$ and $2(t-u)$ at $(1+i\sqrt{3})/2$ and $i$, respectively. Hence, $W_q\left(\left(E^3_4\right)^u \left(E^2_6\right)^{t-u}, \ \ 0\le u \le t\right)$ has orders $$ \nu_{\mathfrak a_{(1+i\sqrt{3})/2}}\left(W_q\left(\left(E^3_4\right)^u \left(E^2_6\right)^{t-u}, \ \ 0\le u \le t\right)\right)=\frac{1}{3} t(t+1), $$ and $$ \nu_{\mathfrak a_{i}}\left(W_q\left(\left(E^3_4\right)^u \left(E^2_6\right)^{t-u}, \ \ 0\le u \le t\right)\right)=\frac{1}{4} t(t+1). $$ This implies the following: $$ \nu_{\mathfrak a_{(1+i\sqrt{3})/2}}\left(f\right) =\frac{1}{3} \cdot t(t+1), $$ and $$ \nu_{\mathfrak a_i}\left(f\right) =\frac{1}{4} \cdot t(t+1). $$ Since $f\in M_{7t(t+1)}$, comparing divisors as before, we conclude that $$ f=\lambda \cdot E_4^{t(t+1)}E_6^{\frac{t(t+1)}{2}}, $$ for some non--zero constant $\lambda$. \end{proof} \vskip .2in We are not able to determine the constant $\lambda$ in Proposition \ref{lev0-4} for all $t\ge 1$. It should follow from a comparison of the $q$--expansions of the left and right sides of the identity in Proposition \ref{lev0-4}.
For $t=1$, Proposition \ref{lev0-3} implies that $\lambda=-1728$. Experiments in SAGE show that $\lambda= -2 \cdot 1728^3$ for $t=2$, and $\lambda= 12 \cdot 1728^6$ for $t=3$.
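The case $t=1$ (that is, $\lambda=-1728$ in Proposition \ref{lev0-3}) can also be re-verified directly on truncated $q$--expansions with exact integer arithmetic; the following sketch is illustrative only and uses nothing beyond the classical formulas for $E_4$, $E_6$ and $\Delta$ quoted above:

```python
# Illustrative verification of lambda = -1728 (Proposition lev0-3, i.e.
# the case t = 1) on q-expansions truncated at q^T, with exact integers.

T = 8  # work modulo q^T

def sigma(k, n):
    """Divisor sum sigma_k(n)."""
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

E4 = [1] + [240 * sigma(3, n) for n in range(1, T)]
E6 = [1] + [-504 * sigma(5, n) for n in range(1, T)]

def mul(a, b):
    """Product of two truncated q-expansions."""
    c = [0] * T
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < T:
                c[i + j] += ai * bj
    return c

def theta(a):
    """The operator q d/dq on a truncated q-expansion."""
    return [n * an for n, an in enumerate(a)]

E4_3 = mul(mul(E4, E4), E4)
E6_2 = mul(E6, E6)
Delta = [(x - y) // 1728 for x, y in zip(E4_3, E6_2)]  # exact division

# W_q(E4^3, E6^2) = E4^3 * theta(E6^2) - E6^2 * theta(E4^3)
lhs = [x - y for x, y in zip(mul(E4_3, theta(E6_2)), mul(E6_2, theta(E4_3)))]
rhs = mul([-1728 * d for d in Delta], mul(mul(E4, E4), E6))
print(lhs == rhs)  # True
```

Raising $T$ checks more coefficients; since both sides lie in the one-dimensional space $S_{26}$, agreement of the leading coefficient already forces the identity.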
\section{Introduction} The existence of resolution of singularities in arbitrary dimension over a field of characteristic zero was solved by Hironaka in his famous paper \cite{existence}. Later on, different constructive proofs have been given, among others, by Villamayor \cite{villa}, Bierstone-Milman \cite{bm}, Encinas-Villamayor \cite{cour}, Encinas-Hauser \cite{strong} and Wodarczyk \cite{Wo}. This paper is devoted to the study of the complexity of Villamayor's algorithm of resolution of singularities. This algorithm appears originally in \cite{villa}, and we will use the presentation given in \cite{cour}. In this paper, the authors introduce a class of objects called \emph{basic objects} $B=(W,(J,c),E)$, where $W$ is a regular ambient space over a field $k$ of characteristic zero, $J\subset \mathcal{O}_W$ is a sheaf of ideals, $c$ is an integer and $E$ is a set of smooth hypersurfaces in $W$ having only normal crossings. That is, they consider the ideal $J$ together with a positive integer $c$, or \emph{critical value}, defining the \emph{singular locus} $Sing(J,c)=\{\xi \in W|\ ord_{\xi}(J)\geq c \}$, where $ord_{\xi}(J)$ is the \emph{order} of $J$ at a point $\xi$. Let $W\stackrel{\pi}{\leftarrow} W'$ be the monoidal transformation with center $\mathcal{Z}\subset Sing(J,c)$, and let $Y'=\pi^{-1}(\mathcal{Z})$ be the exceptional divisor. Let $\xi$ be the generic point of $\mathcal{Z}$ and let $ord_{\xi}(J)=\theta$. The total transform of $J$ in $W'$ satisfies $J\mathcal{O}_{W'}=I(Y')^{\theta}\cdot J^{\curlyvee}$, where $J^{\curlyvee}$ is the \emph{weak} transform of $J$ (see \cite{cour} for details). A \emph{transformation} of a basic object $(W,(J,c),E)\leftarrow (W',(J',c),E')$ is defined by a monoidal transformation $W\stackrel{\pi}{\leftarrow} W'$ together with $J'=I(Y')^{\theta-c}\cdot J^{\curlyvee}$, the \emph{controlled transform} of $J$.
A sequence of transformations of basic objects \begin{equation} (W,(J,c),E)\leftarrow (W^{(1)},(J^{(1)},c),E^{(1)})\leftarrow \cdots \leftarrow (W^{(N)},(J^{(N)},c),E^{(N)}) \label{resolution} \end{equation} is a \emph{resolution} of $(W,(J,c),E)$ if $Sing(J^{(N)},c)=\emptyset$. \begin{remark} Superscripts $^{(k)}$ in basic objects will denote the $k$-th stage of the resolution process. Subscripts $_i$ will always denote the dimension of the ambient space $W_i^{(k)}$. \end{remark} Villamayor's algorithm provides a \emph{log-resolution} in characteristic zero. A log-resolution of $J$ is a sequence of monoidal transformations at regular centers as (\ref{resolution}) such that each center has normal crossings with the exceptional divisors $E^{(i)}$, and the total transform of $J$ in $W^{(N)}$ is of the form $$J\mathcal{O}_{W^{(N)}}=I(H_1)^{b_1}\cdot \ldots \cdot I(H_N)^{b_N}$$ with $b_i\in \mathbb{N}$ for all $1\leq i \leq N$ and $E^{(N)}=\{H_1,\ldots,H_N\}$. In \cite{cour} it is shown that algorithmic principalization of ideals reduces to algorithmic resolution of basic objects. That is, starting with $c\!=\!max\ ord(J)$, the maximal order of $J$, we obtain a resolution of $(W,(J,c),E)$ as (\ref{resolution}). At this step $max\ ord(J^{(N)})\!=\!c^{(N)}\!<c$. If $c^{(N)}\!>1$, we continue resolving $(W^{(N)},(J^{(N)},c^{(N)}),E^{(N)})$, and so on, until $max\ ord(J^{(\mathcal{N})})\!=\!c^{(\mathcal{N})}\!=\!1$. Finally, a resolution of $(W^{(\mathcal{N})},(J^{(\mathcal{N})},1),E^{(\mathcal{N})})$ provides a log-resolution of $J^{(\mathcal{N})}$, and therefore a log-resolution of $J$. In \cite{cour} it is also shown that algorithmic principalization of ideals leads to embe\-dded desingulari\-zation of varieties. That is, given a closed subscheme $X\subset W$, the algorithmic principalization of the ideal $I(X)$ provides an embedded desingularization of $X$. See also \cite{EncinasVillamayor2003} for more details.
A key point in the definition of the algorithm is to use induction on the dimension of the ambient space $W$ to define an upper-semi-continuous function $t$. The set of points where this function attains its maximal value, $\underline{Max}\ t$, is a regular closed set, and defines a regular center for the next monoidal transformation. A resolution of the basic object $(W,(J,c),E)$ is achieved by a sequence of monoidal transformations as in (\ref{resolution}), with centers $\underline{Max}\ t^{(k)}$ for $0\leq k\leq N-1$. That is, the sequence of monoidal transformations is defined by taking successively the center defined by the upper-semi-continuous function. The algorithm stops at some stage because the maximal value of the function $t$ drops after monoidal transformations, that is, $max\ t^{(0)}>max\ t^{(1)}>\ldots >max\ t^{(N-1)}.$ This function $t$ will be the resolution invariant. We shall work with the invariant defined in \cite{cour}, using the language of mobiles developed in \cite{strong}. We briefly recall the main notions. Let $J\subset \mathcal{O}_W$ be an ideal defining a singular algebraic set $X\subset W$. The ideal $J$ factors into $J=M\cdot I$, with $M$ the ideal defining a normal crossings divisor, and $I$ some ideal still unresolved. By induction on the dimension of $W$, we will have this decomposition at every dimension from $n$ to $1$, that is, $J_i=M_i\cdot I_i$, for $n\geq i \geq 1$, defined in local flags $W_n\supseteq W_{n-1}\supseteq \cdots \supseteq W_i\supseteq \cdots \supseteq W_1$, where each $J_i,M_i,I_i \subset \mathcal{O}_{W_i}$ is an ideal in dimension $i$. There is a critical value $c_{i+1}$ at each dimension $i$ ($c_{n+1}=c$); see \cite{strong} for details. All the basic objects $(W_i,(J_i,c_{i+1}),E_i)$, for $n\geq i \geq 1$, will be resolved during the process of the algorithm. Let $E$ be the exceptional divisor of the previous monoidal transformations, and consider $E=\cup_{i=1}^nE_i$, where $E_i$ applies to dimension $i$.
Obviously, we start with $E=\emptyset$. For any point $\xi \in Sing(J,c)$, the function $t$ will have $n$ coordinates, with lexicographical order, and it will be of one of the following three types: \begin{equation} \label{invt} \hspace*{-0.2cm} \begin{array}{ll} (a) & t(\xi)=(t_n(\xi),t_{n-1}(\xi),\ldots,t_{n-r}(\xi),\ \infty,\ \infty,\ldots,\infty) \\ (b) & t(\xi)=(t_n(\xi),t_{n-1}(\xi),\ldots,t_{n-r}(\xi), \Gamma(\xi),\infty,\ldots,\infty) \\ (c) & t(\xi)=(t_n(\xi),t_{n-1}(\xi),\ldots,t_{n-r}(\xi),\ldots \ldots \ldots,t_1(\xi)) \end{array}\ \text{ with } t_i=\left[\frac{\theta_i}{c_{i+1}},m_i\right] \end{equation} where $\theta_i=ord_{\xi}(I_i)$, $m_i$ is the number of exceptional divisors in $E_i$, and $\Gamma$ is the resolution function corresponding to the so-called \emph{monomial case}, following the notation of \cite{cour}, pages $165-166$. We will recall the definition of $\Gamma$ in equation (\ref{gamma}). For simplicity, let us assume that we start with a polynomial ring, $W=Spec(k[X_1,\ldots,X_n])$. In $\mathcal{O}_W=k[X_1,\ldots,X_n]$ the ideal $J$ is locally given by a monomial with respect to a regular system of parameters $$J=<X_1^{a_1}\cdot \ldots \cdot X_n^{a_n}>\subset \mathcal{O}_W \text{ with } a_i\in \mathbb{N}, \text{ for } i=1,\ldots,n.$$ Note that, in this situation, the center of the next monoidal transformation is combinatorial: it is defined by a subset of the variables $X_1,\ldots,X_n$. And this is also true after monoidal transformations, since Villamayor's algorithm applied to a monomial ideal always provides combinatorial centers, and after a monoidal transformation in a combinatorial center we obtain again a monomial ideal. So, at any stage of the resolution process, $W=\cup_i U_i$, where $U_i\cong\mathbb{A}^n_k$. Thereafter, we shall work locally, so we will assume that $W$ is an affine space.
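Since the centers are combinatorial, a single monoidal transformation acts on the exponent vector of a monomial ideal in a purely arithmetic way. The following sketch (illustrative only; the function name is ours, not taken from any implementation of the algorithm) computes the exponents of the controlled transform for the blow-up at the origin:

```python
def controlled_transform(a, c, i):
    """Exponents of the controlled transform of J = <X^a> under the
    blow-up at the origin, seen in the i-th chart (0-based).

    In the total transform, X_i picks up exponent d = sum(a); the
    controlled transform then factors out I(Y')^c, i.e. subtracts c."""
    d = sum(a)
    assert d >= c, "the origin must lie in Sing(J, c)"
    b = list(a)
    b[i] = d - c
    return b

# J = <X1^1 X2^2 X3^3>, c = 4: in the chart of X3 (i = 2), the monomial
# X1 X2^2 X3^3 pulls back to X1 X2^2 X3^6, and dividing by X3^4 leaves
# exponent 6 - 4 = 2 on X3.
print(controlled_transform([1, 2, 3], 4, 2))  # [1, 2, 2]
```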
To resolve the toric hypersurface $\{f=0\}=\{Z^c-X_1^{a_1}\cdot \ldots \cdot X_n^{a_n}=0\}$ we note that its singular locus $Sing(<f>,c)$ is always included in $\{Z=0\}$, so we argue by induction on the dimension and reduce to the case where the corresponding ideal $J$ is of the form \begin{equation} \label{jota} J=<X_1^{a_1}\cdot \ldots \cdot X_n^{a_n}>\subset \mathcal{O}_W \text{ with }\ 1\leq a_1\leq a_2\leq \ldots \leq a_n,\ \sum_{i=1}^n a_i=d,\ d\geq c,\end{equation} where $c$ is the critical value. If $a_i=0$ for some $i$, then we may assume $dim(W)<n$. After a monoidal transformation, we always consider the controlled transform of $J$ with respect to $c$, $J'=I(Y')^{-c}\cdot J^{*}$, where $J^{*}$ is the total transform of $J$ and $Y'$ denotes the new exceptional divisor. For the toric problem $J=<Z^c-X_1^{a_1}\cdot \ldots \cdot X_n^{a_n}>$, taking the origin as center of the next monoidal transformation, at the $i$-th chart: $$J^*=<Z^c\cdot X_i^{c} - X_1^{a_1}\cdots X_i^{d} \cdots X_n^{a_n}>=<X_i^{c}\cdot(Z^c-X_1^{a_1}\cdots X_i^{d-c} \cdots X_n^{a_n})>,$$ and we can factor out the exceptional divisor only $c$ times. \begin{remark} We will denote as {\it $i$-th chart} the chart where we divide by $X_i$. When the center of the monoidal transformation is the origin, this monoidal transformation is expressed as: $$\begin{array}{ccc} k[Z,X_1,\ldots,X_n] & \rightarrow & k[Z,X_1,\ldots,X_n,\frac{Z}{X_i},\frac{X_1}{X_i},\ldots,\frac{X_{i-1}}{X_i},\frac{X_{i+1}}{X_i},\ldots,\frac{X_n}{X_i}] \\ Z & \rightarrow & \frac{Z}{X_i} \\ X_i & \rightarrow & X_i \\ X_j & \rightarrow & \hspace*{1.5cm} \frac{X_j}{X_i} \text{ for } j\neq i \\ \end{array} $$ where $k[Z,X_1,\ldots,X_n,\frac{Z}{X_i},\frac{X_1}{X_i},\ldots,\frac{X_{i-1}}{X_i},\frac{X_{i+1}}{X_i},\ldots,\frac{X_n}{X_i}]\cong \\ k[\frac{Z}{X_i},\frac{X_1}{X_i},\ldots,\frac{X_{i-1}}{X_i},X_i,\frac{X_{i+1}}{X_i},\ldots,\frac{X_n}{X_i}]$.
For simplicity, we will denote each $\frac{X_j}{X_i}$ again as $X_j$, and $\frac{Z}{X_i}$ as $Z$. \end{remark} So we will apply the resolution algorithm to the basic object $(W,(J,c),\emptyset)$ for $J=<X_1^{a_1}\cdot \ldots \cdot X_n^{a_n}>$, which is already a monomial ideal, but it is not necessarily supported on the exceptional divisors. \section{Monomial case (exceptional monomial)} The \emph{monomial case} is a special case in which $J$ is a ``monomial ideal'' given locally by a monomial that can be expressed in terms of the exceptional divisors. This case arises after several monoidal transformations. This means that we have a basic object $(W,(J,c),E)$ where $J$ is locally defined by one monomial supported on the hypersurfaces in $E$. In this case, the ideal $J$ factors into $J=M\cdot I$ with $J=M$ and $I=1$. We can also call it an {\bf exceptional monomial}. \begin{theorem} \label{mon} Let $J\subset \mathcal{O}_W$ be a monomial ideal as in equation (\ref{jota}). Let $E=\{H_1,\ldots,H_n\}$, with $H_i=V(X_i)$, be a normal crossing divisor. \\ Then an upper bound for the number of monoidal transformations to resolve $(W,(J,c),E)$ is given by $$\frac{d - c + gcd (a_1,\ldots,a_n,c)}{gcd (a_1,\ldots,a_n,c)}.$$ \end{theorem} \begin{proof} We may assume that the greatest common divisor of the exponents $a_i$ and the critical value $c$ is equal to $1$, because both the simplified problem and the original problem have the same singular locus. That is, if $gcd (a_1,\ldots,a_n,c)=k$ then $d=k\cdot d_1$, $c=k\cdot c_1$, $a_i=k\cdot b_i$ for all $1\leq i\leq n$ and $gcd (b_1,\ldots,b_n,c_1)=1$. The ideal $J$ can be written as $J=(J_1)^k$, where $J_1=<X_1^{b_1}\cdot \ldots \cdot X_n^{b_n}>$, and therefore $$Sing(J,c)=\{\xi \in X|\ ord_{\xi}((J_1)^k)\geq k\cdot c_1 \}=\{\xi \in X|\ ord_{\xi}(J_1)\geq c_1 \}=Sing(J_1,c_1),$$ where $X$ is the algebraic set defined by $J$.
For a point $\xi \in \mathbb{A}^n_k$, $\Gamma(\xi)=(-\Gamma_1(\xi),\Gamma_2(\xi),\Gamma_3(\xi))$ where \begin{equation} \label{gamma} \vspace*{0.15cm} \begin{array}{l} \Gamma_1(\xi)=\min\{p\ |\ \exists\ i_1,\ldots,i_p, a_{i_1}(\xi)+\cdots+a_{i_p}(\xi)\geq c,\ \xi\in H_{i_1}\cap\cdots \cap H_{i_p}\}, \vspace*{0.15cm} \\ \Gamma_2(\xi)=\max\left\{\frac{a_{i_1}(\xi)+\cdots+a_{i_p}(\xi)}{c}\ |\ {\scriptstyle p=\Gamma_1(\xi),\ a_{i_1}(\xi)+\cdots+a_{i_p}(\xi)\geq c,\ \xi\in H_{i_1}\cap\cdots \cap H_{i_p}}\right\},\vspace*{0.15cm} \\ \Gamma_3(\xi)=\max\{(i_1,\ldots,i_p,0,\ldots,0)\in \mathbb{Z}^n\ |\ {\scriptstyle \Gamma_2(\xi)=\frac{a_{i_1}(\xi)+\cdots+a_{i_p}(\xi)}{c},\ \xi\in H_{i_1}\cap\cdots \cap H_{i_p}}\} \vspace*{0.1cm} \end{array} \end{equation} with lexicographical order in $\mathbb{Z}^n$. The center $\mathcal{Z}$ of the next monoidal transformation is given by the set of points where $\Gamma$ attains its maximal value. It is easy to see that $\mathcal{Z}=\cap_{i=n-(r-1)}^n H_i$, where $r$ is the minimal number of exponents, chosen among the largest ones, whose sum satisfies $a_{n-r+1}+\cdots+a_n\geq c$. So at the $j$-th chart, the exponent of $X_j$ after the monoidal transformation is $(\sum_{i=n-r+1}^n a_i)-c$ and $$\left(\sum_{i=n-r+1}^n a_i\right)-c< \min_{n-r+1\leq i \leq n}a_i=a_{n-r+1}$$ because $\sum_{i=n-r+2}^{n} a_i<c$ by construction of the center $\mathcal{Z}$. This shows that the order of the ideal drops after each monoidal transformation by at least one, so in the worst case we need $d-(c-1)$ monoidal transformations to obtain an order lower than $c$. \end{proof} \begin{remark} Note that it is necessary to consider the monomial case. On one hand, this case may appear in dimension $n$, and also in lower dimensions, $n-1,\ldots,1$, when we resolve any basic object $(W,(J,c),E)$ (where $J$ is any ideal). So we need to resolve the monomial case in order to obtain a resolution of the original basic object $(W,(J,c),E)$. On the other hand, the algorithm of resolution leads to the monomial case, since given any ideal $J$, the algorithm provides a log-resolution of $J$.
Moreover, it is necessary to continue within the monomial case until a resolution is reached. \end{remark} \begin{remark} The bound in theorem \ref{mon} is reached only for the following values of $c$: $$1,\ a_n+ \ldots + a_j+1 \text{ for } n\geq j\geq 2,\ d.$$ For these values of $c$, the order of the ideal drops after each monoidal transformation exactly by one: \begin{itemize} \item If $c=1$, the monoidal transformation is an isomorphism. The exponent of $X_n$ after the monoidal transformation is $a_n-1$. \item If $c=a_n+ \ldots + a_j+1$, for $n\geq j\geq 2$, the center of the monoidal transformation is $\mathcal{Z}=\cap_{i=j-1}^n H_i$. At the $l$-th chart, for $n\geq l\geq j-1$, the exponent of $X_l$ after the monoidal transformation is $(\sum_{i=j-1}^n a_i)-c=(\sum_{i=j-1}^n a_i)-(\sum_{i=j}^n a_i)-1=a_{j-1}-1$. In particular, at the $(j-1)$-th chart, the exponent of $X_{j-1}$ after the monoidal transformation has dropped exactly by one. \item If $c=d$, we finish after only one monoidal transformation. \end{itemize} \end{remark} \begin{remark} If $gcd (a_1,\ldots,a_n,c)=k>1$, then the bound of theorem \ref{mon} is $(d-c+k)/ k$. \\ As $(d-c+k)/ k<d-c+1,$ in practice we can use the bound for the case $gcd (a_1,\ldots,a_n,c)=1$. \end{remark} \section{Case of one monomial} To construct an upper bound for the number of monoidal transformations needed to resolve the basic object $(W,(J,c),E=\emptyset)$, where $J$ is locally defined by a unique monomial, we estimate the number of monoidal transformations needed to obtain $(W',(J',c),E')$, a transformation of the original basic object, with $J'=M'$ (an exceptional monomial), and then apply theorem \ref{mon}. In order to use theorem \ref{mon}, we need an estimate of the order of $M'$. This estimate will be valid at any stage of the resolution process. \begin{lemma} \label{grad} Let $(W,(J,c),\emptyset)$ be a basic object where $J$ is a monomial ideal as in equation (\ref{jota}).
Let $J=M\cdot I$ be the factorization of $J$, where $M=1$, since $E=\emptyset$, and $J=I$. After $N$ monoidal transformations we have $(W^{(N)},(J^{(N)},c),E^{(N)})$. Let $\xi\in W^{(N)}$ be a point. Then $$\ ord_{\xi}(M^{(N)}) \leq (2^N-1)(d-c)$$ where $ord_{\xi}(M^{(N)})$ denotes the order at $\xi$ of $M^{(N)}$, the (exceptional) monomial part of $J^{(N)}$. \end{lemma} \begin{proof} It follows by induction on $N$: \begin{itemize} \item If $N=1$, then $ord_{\xi}(M^{(1)})=d-c$. \\ At the beginning, the first center defined by this algorithm is always the origin, so at the $i$-th chart: $$J^{(1)}=M^{(1)} \cdot I^{(1)}=<X_i^{d-c}>\cdot <X_1^{a_1}\cdot \stackrel{\widehat{i}}{\ldots} \cdot X_n^{a_n}>$$ with $E^{(1)}=\{H_i\}$, where $H_i=V(X_i)$. \item We assume that the result holds for $N=m-1$. $$J^{(m-1)}= M^{(m-1)} \cdot I^{(m-1)}= <X_{i_1}^{b_1}\cdots X_{i_s}^{b_s}>\cdot <X_{i_{s+1}}^{a_{i_{s+1}}}\cdots X_{i_n}^{a_{i_n}}>$$ with $\sum_{i=1}^s b_i=d'.$ By the inductive hypothesis, after $m-1$ monoidal transformations, the maximal order $d'$ of the (exceptional) monomial part $M^{(m-1)}$ satisfies $$d'\leq (2^{m-1}-1)(d-c).$$ For $N=m$, there are two possibilities: \begin{enumerate} \item If $max\ ord(I^{(m-1)})=\sum_{j=s+1}^n a_{i_j}\geq c$ then the center of the next monoidal transformation contains only variables appearing in $I^{(m-1)}$. \item If $max\ ord(I^{(m-1)})=\sum_{j=s+1}^n a_{i_j}<c$ then the center of the next monoidal transformation contains variables appearing in $I^{(m-1)}$ and also variables appearing in $M^{(m-1)}$. \end{enumerate} \begin{itemize} \item[\underline{Case $1$}:] \ \\ If the center of the monoidal transformation is as small as possible, that is $\mathcal{Z}\!=\!\cap_{j=s+1}^n V(X_{i_j})$, at the $i_l$-th chart, $$J^{(m)}\!=\!M^{(m)}\cdot I^{(m)}\!=<\!X_{i_1}^{b_1}\!\cdots\!X_{i_s}^{b_s}\cdot X_{i_l}^{d-\sum_{j=1}^s a_{i_j}-c}\!>\!\cdot\!
<\!X_{i_{s+1}}^{a_{i_{s+1}}}\!\stackrel{\widehat{i_l}}{\cdots}\!X_{i_n}^{a_{i_n}}\!>\!.$$ The exponent of $X_{i_l}$, $\ d-\sum_{j=1}^s a_{i_j}-c=\sum_{j=s+1}^n a_{i_j}-c \ $ is as big as possible, so this is the worst case, because the increase in the order of the exceptional monomial part after the monoidal transformation is greater than for any other center. The highest order of $M^{(m)}$ is $$\sum_{i=1}^s b_i + d- \sum_{j=1}^s a_{i_j}-c =d'+d-c- \sum_{j=1}^s a_{i_j} \leq d'+d-c,$$ so by the inductive hypothesis $$d'+d-c \leq (2^{m-1}-1)(d-c)+d-c=2^{m-1}(d-c)\leq (2^m-1)(d-c).$$ \item[\underline{Case $2$}:] \begin{enumerate} \item[-] At the $i_j$-th chart, for $1\leq j\leq s$ $$ J^{(m)}=M^{(m)}\cdot I^{(m)}=<X_{i_1}^{b_1} \stackrel{\widehat{i_j}}{\cdots} X_{i_s}^{b_s}\cdot X_{i_j}^{\square}>\cdot <X_{i_{s+1}}^{a_{i_{s+1}}} \cdots X_{i_n}^{a_{i_n}}>.$$ \item[-] At the $i_l$-th chart, for $s+1\leq l\leq n$ $$J^{(m)}=M^{(m)}\cdot I^{(m)}= <X_{i_1}^{b_1}\cdots X_{i_s}^{b_s}\cdot X_{i_l}^{\vartriangle}>\cdot <X_{i_{s+1}}^{a_{i_{s+1}}} \stackrel{\widehat{i_l}}{\cdots} X_{i_n}^{a_{i_n}}>.$$ \end{enumerate} As above, in the worst case, when the center of the monoidal transformation is as small as possible, that is, the center is a point, $$\square=\vartriangle= d'+d-\sum_{j=1}^s a_{i_j}-c\ .$$ Therefore in both cases the highest order of $M^{(m)}$ satisfies $$ord_{\xi}(M^{(m)})\leq 2d'+d-c\ \leq \ 2(2^{m-1}-1)(d-c)+d-c=(2^m-1)(d-c)\ .$$ \end{itemize} \end{itemize} \end{proof} \begin{remark} Due to its general character, this bound is large and far from being optimal. \end{remark} \begin{remark} \label{constructM} The ideals $M_i$ are supported on normal crossing divisors $D_i$.
Recall that their transformations after monoidal transformations, in the neighbourhood of a point $\xi\in W_i$, are $$D_i'=\left\{ \begin{array}{ll} D_i^*+(\theta_i-c_{i+1})\cdot Y' & \text{ \scriptsize if } {\scriptstyle (t_n'(\xi'),\ldots,t_{i+1}'(\xi'))=(t_n(\xi),\ldots,t_{i+1}(\xi))} \\ \emptyset & \text{ \scriptsize otherwise} \end{array} \right. ,\ n\geq i\geq 1,$$ \hspace*{2.5 cm} (${\scriptstyle D_n'=D_n^*+(\theta_n-c)\cdot Y'}$ {\scriptsize always}) \vspace*{2 mm} \\ where $D_i^*$ denotes the pull-back of $D_i$ by the monoidal transformation $\pi$, $Y'$ denotes the new exceptional divisor, the point $\xi'\in W_i'$ satisfies $\pi(\xi')=\xi$, $\theta_i=ord_{\xi}(I_i)$ and $c_{i+1}$ is the corresponding critical value. \end{remark} In what follows we will define the ideals $J_{i-1}$, $n\geq i> 1$. We need some auxiliary definitions: the companion ideals $P_i$ and the composition ideals $K_i$, see \cite{strong} for details. \\ We construct the companion ideals to ensure that $Sing(P_i,\theta_i)\subset Sing(J_i,c_{i+1})$, \begin{equation} \label{pes} P_i= \left\{\begin{array}{ll} I_i & \text{ if }\ \theta_i\geq c_{i+1} \\ I_i+M_i^{\frac{\theta_i}{c_{i+1}-\theta_i}} & \text{ if }\ 0< \theta_i< c_{i+1} \end{array}\right. \end{equation} where $\xi\in \mathbb{A}^n_k$ is a point, $\theta_i=ord_{\xi}(I_i)$ and $c_{i+1}$ is the corresponding critical value. Let $J_i=M_i\cdot I_i$ be the factorization of an ideal $J_i$ in $W_i$, where $M_i,I_i$ are ideals in $W_i$ in the neighborhood of a point $\xi\in \mathbb{A}^n_k$. Let $E_i$ be a normal crossing divisor in $\mathbb{A}^n_k$. The composition ideal $K_i$ in $W_i$ of the product $J_i=M_i\cdot I_i$, with respect to a control $c_{i+1}$, is \begin{equation} \label{qus} K_i= \left\{\begin{array}{ll} P_i\cdot I_{W_i}(E_i\cap W_i) & \text{ if }\ I_i\neq 1, \\ 1 & \text{ if }\ I_i=1. \end{array}\right. \end{equation} The critical value for the following step of induction on the dimension is $c_i=ord_{\xi}(K_i)$.
The construction of the composition ideal $K_i$ ensures normal crossing with the exceptional divisor $E_i$. We say that an ideal $K$ is \emph{bold regular} if $K=<X^a>$, $K\subset k[X]$, $a\in \mathbb{N}$. Finally, construct the junior ideal $J_{i-1}$ \begin{equation} \label{jotas} J_{i-1}= \left\{\begin{array}{ll} Coeff_V(K_i) & \text{ if }\ K_i \text{ is not bold regular or } 1 \\ 1 & \text{ otherwise }\ \end{array}\right. \end{equation} where $V$ is a hypersurface of maximal contact in $W_i$ (see \cite{strong} page $830$) and $Coeff_V(K_i)$ is the coefficient ideal of $K_i$ in $V$ (see \cite{strong} page $829$). The junior ideal $J_{i-1}$ is an ideal in this suitable hypersurface $V$ of dimension $i-1$. If $\frac{\theta_{n}}{c}\geq 1$, we are in the first case of equation (\ref{pes}), $\frac{\theta_{n-1}}{c_{n}}=\frac{\theta_{n-2}}{c_{n-1}}=\ldots =\frac{\theta_j}{c_{j+1}}=1$ and $t_{j-1}=\ldots=t_1=\infty$ for $n-1\geq j\geq 1$, because $D_{n-1}=\ldots=D_1=\emptyset$ and $P_i=I_i$, and hence $J_{i-1}$ is always given by a unique monomial. \begin{remark} For an ideal $J=<X_1^{a_1}\cdot \ldots \cdot X_n^{a_n}>$ as in equation (\ref{jota}), if we assume $a_n\geq a_{n-1}\geq \ldots \geq a_1\geq c$, then at every stage $\frac{\theta_{n}}{c}\geq 1$, so we are always in the previous situation. The singular locus of $(J,c)$ is always a union of hypersurfaces $\cup_{i=1}^r\{X_i=0\}$, $1\leq r \leq n$, and the center of the next monoidal transformation will be the intersection of some of these hypersurfaces. So we will call this case the {\bf minimal codimensional case}. \end{remark} \begin{remark} \label{remgrande} If there exists some $a_{i_0}<c$, at a certain stage of the resolution process it may occur that $\frac{\theta_{n}}{c}< 1$.
Then we are in the second case of equation (\ref{pes}), the (exceptional) monomial part $M_n$ can appear in some $J_j$ for $n-1\geq j\geq 1$, and $\frac{\theta_{j}}{c_{j+1}}$ can be much greater than $1$, which increases the number of monoidal transformations. In this case the singular locus is a union of intersections of hypersurfaces of the type $\cup_{l_j}(\{X_{l_1}=0\}\cap \ldots \cap \{X_{l_i}=0\})$. This is the {\bf higher codimensional case}. \end{remark} \section{Bound in the minimal codimensional case} \begin{remark} From now on, we always look at the points where the function $t$, defined in (\ref{invt}), is maximal. So the following results concerning the behaviour of the function $t$ always refer to the points where it attains its maximal value. \end{remark} \begin{proposition} \label{bajatn} Let $(W,(J,c),E)$ be a basic object where $J$ is a monomial ideal as in equation (\ref{jota}), with $a_i\geq c$ for all $1\leq i \leq n$. We can factor $J=J_n=M_n\cdot I_n$, and after $r-1$ monoidal transformations, $J_n^{(r-1)}=M_n^{(r-1)}\cdot I_n^{(r-1)}$. Let $\xi\in W^{(r-1)}$ be a point where $ord_{\xi}(I_n^{(r-1)})=\theta_n$. After each monoidal transformation $\pi$, the resolution function in a neighbourhood of $\xi'$, where $\pi(\xi')=\xi$ and $ord_{\xi'}(I_n^{(r)})=\theta_n'<\theta_n$, is of the form $$\left( \left[\frac{d-\sum_{j=1}^sa_{i_j}}{c},s\right],[1,0],\ldots,[1,0]\right) \text{ for some }\ 1\leq s\leq n-1. $$ \end{proposition} \begin{proof} After monoidal transformations, $$J_n^{(r)}=M_n^{(r)}\cdot I_n^{(r)}=<X_{i_1}^{b_1}\cdots X_{i_s}^{b_s}>\cdot <X_{i_{s+1}}^{a_{i_{s+1}}}\cdots X_{i_n}^{a_{i_n}}>$$ with $d-\sum_{j=1}^sa_{i_j}=\sum_{j=s+1}^n a_{i_j}\geq c $ then, $P_n^{(r)}=I_n^{(r)}$ and the (exceptional) monomial part does not appear in $J_l^{(r)}$ for all $n\geq l\geq 1$. Since $\theta_n'\neq \theta_n$, we have $E_n^{(r)}=Y'+|E|^{\curlyvee}$ and $m_n=s$: we count all the exceptional divisors of the previous steps and the new one.
There are no exceptional divisors in lower dimension because $E_{n-1}^{(r)}=(Y'+|E|^{\curlyvee})-E_n^{(r)}=\emptyset$ and, in a similar way, we obtain $E_l^{(r)}=\emptyset$ for all $n-1 \geq l\geq 1$. The normal crossing divisors $D_i^{(r)}=\emptyset$ for all $n-1\geq i \geq 1$, so the corresponding ideals $M_{n-1}^{(r)}=\ldots=M_1^{(r)}=1$. In particular, $M_{n-1}^{(r)}=1$, hence $$c_n'=ord_{\xi'}(K_n^{(r)})=ord_{\xi'}(Coeff(K_n^{(r)}))=ord_{\xi'}(J_{n-1}^{(r)})=ord_{\xi'}(I_{n-1}^{(r)})=\theta_{n-1}'$$ with $\xi'\in W^{(r)}$ such that $\pi(\xi')=\xi$, because $ord(Coeff(K))=ord(K)$ when $K$ is a monomial ideal, therefore $\frac{\theta_{n-1}'}{c_n'}=1$. By the same argument we obtain $\frac{\theta_{n-2}'}{c_{n-1}'}=\ldots=\frac{\theta_1'}{c_{2}'}=1$. \end{proof} \begin{remark} After each monoidal transformation, the exceptional divisors at each dimension are: $$E_j'=\left\{ \begin{array}{ll} {\scriptstyle E_j^{\curlyvee}} & \text{ \scriptsize if } {\scriptstyle (t_n'(\xi'),\ldots,t_{j+1}'(\xi'))=(t_n(\xi),\ldots,t_{j+1}(\xi))} \text{ \scriptsize and } {\scriptstyle \theta_j'=\theta_j} \\ {\scriptstyle (Y'+(E_1\cup \ldots \cup E_n)^{\curlyvee})-(E_n'+\cdots +E_{j+1}')} & \text{ \scriptsize otherwise} \end{array} \right.$$ \hspace*{0.5 cm} ${\scriptstyle ( E_n'=E_n^{\curlyvee}}$ {\scriptsize if} ${\scriptstyle \theta_n'=\theta_n}$ {\scriptsize or} ${\scriptstyle E_n'=Y'+(E_1\cup \ldots \cup E_n)^{\curlyvee}}$ {\scriptsize otherwise)} \vspace*{2 mm} \\ for $n> j\geq 1$, where $E_j^{\curlyvee}$ denotes the strict transform of $E_j$ by the monoidal transformation $\pi$, $Y'$ denotes the new exceptional divisor, the point $\xi'\in W_i'$ satisfies $\pi(\xi')=\xi$, $\theta_j'=ord_{\xi'}(I_j')$ and $\theta_j=ord_{\xi}(I_j)$. We denote $|E|=E_1\cup \ldots \cup E_n$. Hence, after the first monoidal transformation, since $\theta_n'<\theta_n$ we have $E_n'=Y'$ and $E_{n-1}'=\cdots=E_1'=\emptyset$.
After the second monoidal transformation, at the chart where $\theta_n''=\theta_n'$ we obtain $E_n''=(E_n')^{\curlyvee}=\emptyset$, $E_{n-1}''=Y''$, and $E_{n-2}''=\cdots=E_1''=\emptyset$, and so on. We call this phenomenon {\bf propagation}, because every exceptional divisor appears in the resolution function $t$ first in dimension $n$, then in dimension $n-1$, $n-2$, and so on. \end{remark} \begin{definition} We will call the {\bf propagation} ${\bf p(i,j)}$, for $1\leq i\leq j-1$, $1\leq j\leq n$, the number of monoidal transformations needed to eliminate $i$ exceptional divisors in dimension $j$, when $(t_n,t_{n-1},\ldots ,t_{j+1})$ and $\theta_j$ remain constant and there are no exceptional divisors in the lower dimensions $j-1,\ldots,1$. That is, passing from the stage $$([\theta_n,m_n],\ldots,[\theta_{j+1},m_{j+1}],[\theta_j,i],[1,0],\ldots, [1,0])$$ to the stage $$([\theta_n,m_n],\ldots,[\theta_{j+1},m_{j+1}],[\theta_j,0],[1,0],\ldots, [1,0],\overbrace{\infty,\ldots,\infty}^{i}).$$ \end{definition} \begin{lemma}{\bf Propagation Lemma} Let $(W,(J,c),E)$ be a basic object where $J$ is a monomial ideal as in equation (\ref{jota}) with $a_l\geq c$ for all $1\leq l \leq n$. Let $p(i,j)$ be the propagation of $i$ exceptional divisors in dimension $j$ in the resolution process of $(W,(J,c),E)$. \\ Then, for all $1\leq j\leq n$, \begin{equation} \label{pro} p\ (i,j)=\left\{ \begin{array}{ll} i+\sum_{k=1}^i p\ (k,j-1) & \text{\ if \ } 0\leq i\leq j-1 \\ 0 & \text{\ if \ } i=j \\ \end{array} \right.\end{equation} \end{lemma} \begin{proof} \begin{itemize} \item If there are $i$ exceptional divisors in dimension $i$, then $K_{i+1}$ is bold regular, $t_i=\infty$, and $p(i,i)=0$. We cannot propagate these $i$ exceptional divisors at this stage of the resolution process. If there are $s$ exceptional divisors at this step of the resolution process, then there are $n-s$ variables in $I_n$. On the other hand, from dimension $n$ to dimension $i+1$ there are $s-i$ exceptional divisors.
When we construct $J_{n-1},\ldots ,J_{i+1}$, we add to the corresponding composition ideal $K_j$ the variables in $I_{W_j}(E_j\cap W_j)$, so in these dimensions there are $(n-s)+(s-i)=n-i$ variables. When we make induction on the dimension, we lose one variable at each step, so after $n-i-1$ steps we obtain that $K_{i+1}$, which corresponds to the $n-(n-i-1)=i+1$ position, is bold regular. And the variables appearing in these $i$ exceptional divisors do not appear in the center of the next monoidal transformation. \item By induction on the dimension: \begin{itemize} \item[-] If $j=1$, $p(1,1)=0$ by the previous argument. \item[-] If $j=2$, $p(1,2)=1$ because when we propagate $1$ exceptional divisor from dimension $2$ to dimension $1$, $K_2'$ is bold regular. \begin{center} \vspace*{2 mm} \begin{tabular}{c} $([\theta_n,m_n],\ldots,[\theta_2,1],[1,0])$ \\ $\downarrow {\scriptstyle X_i}$ \\ $([\theta_n,m_n],\ldots,[\theta_2,0],\infty)$ \\ \end{tabular} \vspace*{2 mm} \end{center} Then $p(1,2)=1=1+0=1+p(1,1)$. \item[-]We assume that the result holds for $j\leq s-1$. For $j=s$: $$\begin{array}{c} ([\theta_n,m_n],\ldots,[\theta_{s+1},m_{s+1}],[\theta_s,i],[1,0],\ldots, [1,0]) \\ \downarrow \\ ([\theta_n,m_n],\ldots,[\theta_{s+1},m_{s+1}],[\theta_s,i-1],[1,1],[1,0],\ldots,[1,0]) \\ \vspace*{0.2cm} \left. \hspace*{2 cm} \begin{array}{r} \downarrow \\ \vdots \\ \downarrow \end{array} \right\} p(1,s-1) \\ \vspace*{0.2cm} ([\theta_n,m_n],\ldots,[\theta_{s+1},m_{s+1}],[\theta_s,i-1],[1,0],\ldots, [1,0],\infty) \\ \downarrow \\ ([\theta_n,m_n],\ldots,[\theta_{s+1},m_{s+1}],[\theta_s,i-2],[1,2],[1,0],\ldots,[1,0]) \\ \end{array}$$ We want $([\theta_n,m_n]\ldots [\theta_{s+1},m_{s+1}])$ and $\theta_s$ to remain constant. So after the first monoidal transformation we look at some suitable chart where $m_s=i$ drops. As $m_s$ drops, $m_{s-1}=i-(i-1)=1$ and we propagate this exceptional divisor in dimension $s-1$, making $p(1,s-1)$ monoidal transformations.
Otherwise, to keep $([\theta_n,m_n]\ldots [\theta_{s+1},m_{s+1}])$ and $\theta_s$ constant, the only possibility is to look at a suitable chart where $m_s$ drops from $i-1$ to $i-2$. But in this case we would obtain the same resolution function that appears after the propagation. As we want to construct the largest possible sequence of monoidal transformations, we follow the propagation phenomenon as above. After more monoidal transformations: $$\begin{array}{c} \left. \hspace*{2 cm} \begin{array}{r} \downarrow \\ \vdots \\ \downarrow \end{array} \right\} p(2,s-1) \vspace*{0.2cm} \\ \downarrow \\ \vdots \\ \downarrow \\ ([\theta_n,m_n],\ldots,[\theta_{s+1},m_{s+1}],[\theta_s,1],[1,0],\ldots, [1,0],\overbrace{\infty,\ldots,\infty}^{i-1}) \\ \downarrow \\ ([\theta_n,m_n],\ldots,[\theta_{s+1},m_{s+1}],[\theta_s,0],[1,i],[1,0],\ldots,[1,0]) \\ \end{array}$$ Then, for $1\leq i \leq s-1$, $$p(i,s)=1+p(1,s-1)+1+p(2,s-1)+ \cdots +1+p(i,s-1)$$ with $p(l,s-1)$, $1\leq l \leq i$, given by the induction hypothesis. \end{itemize} \end{itemize} \end{proof} \begin{remark} The computation of examples in Singular with the \emph{desing} package has been useful to identify this behaviour of the exceptional divisors after monoidal transformations. The implementation of this package is based on the results appearing in \cite{paperlib}. \end{remark} \begin{theorem} \label{inv} Let $(W,(J,c),\emptyset)$ be a basic object where $J$ is a monomial ideal as in equation (\ref{jota}) with $a_i\geq c$ for all $1\leq i \leq n$.
Then, the resolution function corresponding to $(W,(J,c),\emptyset)$ drops after monoidal transformations in the following form: \begin{center} \begin{tabular}{cl} \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ $([\frac{d}{c},0],[1,0],\ldots,[1,0])$ & \\ $\hspace*{0.4 cm} \downarrow {\scriptstyle X_i}$ & $1^{st}$ monoidal transformation \\ $([\frac{d-a_i}{c},1],[1,0],\ldots,[1,0])$ & \\ $ \hspace*{0.6 cm} \begin{array}{cc} \downarrow {\scriptstyle X_i} & \\ \vdots \\ \downarrow {\scriptstyle X_i} & \\ \end{array} $ & $\left. \hspace*{-7 mm} \begin{array}{c} \\ \\ \\ \end{array} \right\} p\ (1,n)$ monoidal transformations \\ $([\frac{d-a_i}{c},0],[1,0],\ldots,[1,0],\infty)$ & \\ $\downarrow$ & center defined only by variables in $I$ \\ $([\frac{d-a_i-a_j}{c},2],[1,0],\ldots,[1,0])$ & \\ $ \hspace*{0.3cm} \begin{array}{cc} \downarrow & \\ \vdots & \\ \downarrow & \\ \end{array}$ & $\left. \hspace*{-7 mm} \begin{array}{c} \\ \\ \\ \end{array} \right\} p\ (2,n)$ monoidal transformations \\ $([\frac{d-a_i-a_j}{c},0],[1,0],\ldots,[1,0],\infty,\infty)$ & \\ $\downarrow$ & center defined only by variables in $I$ \\ $\vdots$ & \hspace*{1 cm} $\vdots$ \\ $\downarrow$ & \\ $([\frac{a_l}{c},n-1],[1,0],\ldots,[1,0])$ & \\ $ \hspace*{0.3 cm} \begin{array}{cc} \downarrow & \\ \vdots & \\ \downarrow & \\ \end{array}$ & $\left. \hspace*{-7 mm} \begin{array}{c} \\ \\ \\ \end{array} \right\} p\ (n-1,n)$ monoidal transformations \\ $([\frac{a_l}{c},0],\infty,\ldots,\infty)$ \hspace*{0.6 cm} & \\ \end{tabular} \end{center} At this stage, $a_l\geq c$ by hypothesis, so the center of the next monoidal transformation is $\{X_l=0\}$, and then we obtain an exceptional monomial. \end{theorem} \begin{proof} It follows from the propagation lemma and the fact that each time $\theta_n$ drops, $E_n'=Y'+|E|^{\curlyvee}\neq\emptyset$ and $E_l'=(Y'+|E|^{\curlyvee})-(E_n'+\cdots +E_{l+1}')=\emptyset$ for all $n-1\geq l\geq 1$.
\end{proof} \begin{remark} Following the propagation in the previous way provides the largest branch in the resolution tree, because otherwise, for example after the first monoidal transformation $$\begin{array}{c} ([\frac{d-a_i}{c},1],[1,0],\ldots,[1,0]) \\ {\scriptstyle X_i}\swarrow \qquad \searrow {\scriptstyle X_j} \\ ([\frac{d-a_i}{c},0],[1,1],[1,0]\ldots,[1,0]) \qquad ([\frac{d-a_i-a_j}{c},2],[1,0],\ldots,[1,0]) \end{array}$$ looking at some chart $j$ with $j\neq i$, we obtain an invariant which will appear later in the resolution process, after the propagation $p(1,n)$. \end{remark} \begin{corollary} Let $(W,(J,c),\emptyset)$ be a basic object where $J$ is a monomial ideal as in equation (\ref{jota}) with $a_i\geq c$ for all $1\leq i \leq n$. Then the number of monoidal transformations needed to transform $J$ into an exceptional monomial is at most \begin{equation} \label{ec} 1 + p(1,n) +1 + p(2,n)+ \ldots + 1 + p(n-1,n) + 1 =\ n + \sum_{j=1}^{n-1}p(j,n). \end{equation} \end{corollary} \begin{remark} In this case we always have $\theta_n\geq c$, so $Sing(J,c)\neq \emptyset$ at every stage of the resolution process. Therefore, in the resolution tree, the branch of theorem \ref{inv} does appear, and it is the largest, hence (\ref{ec}) is exactly the number of monoidal transformations needed to obtain $J'=M'$. \end{remark} \begin{proposition} \label{cata} Let $(W,(J,c),\emptyset)$ be a basic object where $J$ is a monomial ideal as in equation (\ref{jota}) with $a_i\geq c$ for all $1\leq i \leq n$. Then the previous sum of propagations is a partial sum of Catalan numbers.
$$n + \sum_{{\scriptscriptstyle j=1}}^{{\scriptscriptstyle n-1}}p(j,n)= \sum_{{\scriptscriptstyle j=1}}^{{\scriptscriptstyle n}}C_j\ \text{ where }\ C_j=\left\{\frac{1}{j+1}\left(\begin{array}{c} 2j\\ j \end{array} \right)\right\}\ \text{ are Catalan numbers.}$$ \end{proposition} \begin{proof} \begin{enumerate} \item[(1)] Extend $p$ to arbitrary dimension: $$n + \sum_{j=1}^{n-1}p(j,n)=p(n,n+1).$$ Because of the form of the recurrence equation defining $p(i,j)$, and the fact that $p(n,n)=0$ by definition, it follows that \begin{equation} \label{rec} p(n,n+1)=n + \sum_{j=1}^{n}p(j,n)=n + \sum_{j=1}^{n-1}p(j,n). \end{equation} \item[(2)] Solve the recurrence equation defining $p(i,j)$: \begin{enumerate} \item We transform the recurrence equation (\ref{pro}), defining $p(i,j)$ for $0\leq i\leq j$ and $1\leq j\leq n$, to another recurrence equation defined for every $i,j\geq 0$: By sending the pair $(i,j)$ to the pair $(i,j-i)$ we extend the recurrence to $i,j\geq 0$, that is, we consider $$\tilde{p}(i,j)=p(i,i+j)$$ then $p(i,j)=\tilde{p}(i,j-i)$. As $p(i,j)$ is defined for $0\leq i \leq j$ then $\tilde{p}(i,j)$ is defined for $0\leq i \leq i+j$ for every $i,j\geq 0$. \item Note that $$\tilde{p}(i,j)-\tilde{p}(i-1,j+1)=p(i,i+j)-p(i-1,i+j)$$ $$=i+\sum_{k=1}^i p(k,i+j-1)-(i-1)- \sum_{k=1}^{i-1}p(k,i+j-1)=p(i,i+j-1)+1=\tilde{p}(i,j-1)+1.$$ Therefore, we have the following recurrence equation involving $\tilde{p}(i,j)$ \begin{equation} \label{chis} \left\{\begin{array}{ll}\tilde{p}(i,j)=1+\tilde{p}(i-1,j+1)+\tilde{p}(i,j-1) & \text{ for } i,j \geq 1 \\ \tilde{p}(0,j)=\tilde{p}(i,0)=0 & \end{array}\right. \end{equation} Take $r(i,j)=p(i,i+j)+1=\tilde{p}(i,j)+1$ and replace $\tilde{p}(i,j)$ with $r(i,j)$ in equation (\ref{chis}). This yields the auxiliary recurrence equation: \begin{equation} \label{erres} \left\{\begin{array}{ll} r(i,j)=r(i-1,j+1)+r(i,j-1)& \text{ for } i,j \geq 1\\ r(0,j)=\tilde{p}(0,j)+1=1,\ r(i,0)=\tilde{p}(i,0)+1=1 & \end{array}\right.
\end{equation} \item Resolving the auxiliary recurrence equation (\ref{erres}) by generating functions: Define $r_{i,j}:=r(i,j)$ and the generating functions $$R(x,y)=\sum_{i,j\geq 0}r_{i,j}x^iy^j \in \mathbb{C}[[x,y]], \ R_s(x,y)=\sum_{i,j\geq 1}r_{i,j}x^{i-1}y^{j-1} \in \mathbb{C}[[x,y]].$$ Note that $R(x,y)$ is, by definition, the generating function of the sequence $r(i,j)$. By the recurrence equation (\ref{erres}) involving $r(i,j)$, it follows $$\hspace*{-3.5cm} R_s(x,y)=\sum_{i,j\geq 1}r_{i-1,j+1}x^{i-1}y^{j-1}+\sum_{i,j\geq 1}r_{i,j-1}x^{i-1}y^{j-1}$$ $$ \hspace*{1cm} \begin{array}{l} =\sum\limits_{i\geq 0,j\geq 1}r_{i,j+1}x^{i}y^{j-1}+\frac{1}{x}\sum\limits_{i\geq 1,j\geq 0}r_{i,j}x^{i}y^{j} \vspace*{0.15cm} \\ =\frac{1}{y^2}\sum\limits_{i\geq 0,j\geq 1}r_{i,j+1}x^{i}y^{j+1}+\frac{1}{x}\left[\sum\limits_{i\geq 1}r_{i,0}x^{i}+ \sum\limits_{i\geq 1,j\geq 1}r_{i,j}x^{i}y^{j}\right] \vspace*{0.15cm} \\ =\frac{1}{y^2}\sum\limits_{i\geq 0,j\geq 2}r_{i,j}x^{i}y^{j}+\frac{1}{x}\left[\sum\limits_{i\geq 1}x^{i}+ \sum\limits_{i\geq 1,j\geq 1}r_{i,j}x^{i}y^{j}\right] \vspace*{0.15cm} \\ =\frac{1}{y^2}\left[\sum\limits_{i\geq 0,j\geq 1}r_{i,j}x^{i}y^{j}-\sum\limits_{i\geq 0}r_{i,1}x^{i}y\right] +\frac{1}{x}\left[\frac{1}{1-x}-1+ xyR_s(x,y)\right] \vspace*{0.15cm} \\ =\frac{1}{y^2}\left[\sum\limits_{j\geq 1}r_{0,j}y^{j}+xyR_s(x,y)-y\sum\limits_{i\geq 0}r_{i,1}x^{i}\right] +\frac{1}{x}\left[\frac{x}{1-x}+ xyR_s(x,y)\right] \vspace*{0.15cm} \\ =\frac{1}{y^2}\left[\frac{y}{1-y}+xyR_s(x,y)-y\sum\limits_{i\geq 0}r_{i,1}x^{i}\right] +\frac{1}{1-x}+ yR_s(x,y) \vspace*{0.15cm} \\ =\frac{1}{y(1-y)}+\frac{x}{y}R_s(x,y)-\frac{1}{y}\sum\limits_{i\geq 0}r_{i,1}x^{i}+\frac{1}{1-x}+ yR_s(x,y). 
\end{array}$$ Then $$\left(1-y-\frac{x}{y}\right)R_s(x,y)=\frac{1}{y(1-y)}+\frac{1}{1-x}-\frac{1}{y}\sum_{i\geq 0}r_{i,1}x^{i}$$ multiplying the equality by $y$ we have $$(y-y^2-x)R_s(x,y)=\frac{1}{1-y}+\frac{y}{1-x}-\sum_{i\geq 0}r_{i,1}x^{i}$$ $$=\frac{1}{1-y}+\frac{y}{1-x}-r_{0,1}-\sum_{i\geq 1}r_{i,1}x^{i}=\frac{y}{1-y}+\frac{y}{1-x}-\sum_{i\geq 1}r_{i,1}x^{i}.$$ Therefore $$(y-y^2-x)R_s(x,y)=\frac{y}{1-y}+\frac{y}{1-x}-\sum_{i\geq 1}r_{i,1}x^{i}$$ which defines an equation of the form $$Q(x,y)R_s(x,y)=K(x,y)-U(x).$$ Now apply the \emph{kernel method} used in \cite{pet}, algebraic case $4.3$: If $Q(x,y)=0$ then $y=\frac{1\pm \sqrt{1-4x}}{2}$. We take the solution passing through the origin, $y=\frac{1- \sqrt{1-4x}}{2}$, that is, $y=xC(x)$, where $C(x)$ is the generating function of the Catalan numbers. On the other hand, $Q(x,y)=0$ gives $K(x,xC(x))=U(x)$, $$K(x,y)=\frac{y}{1-y}+\frac{y}{1-x}=\frac{-y^2+y-x+1}{(1-x)(1-y)}-1$$ so $K(x,xC(x))=\frac{1}{(1-x)(1-xC(x))}-1$ and using $\frac{1}{1-xC(x)}=C(x)$ we have $$U(x)=\frac{C(x)}{1-x}-1.$$ After some calculations, using that $R(x,y)$ satisfies $$R(x,y)=xyR_s(x,y)+\sum_{j\geq 0}r_{0,j}y^j+\sum_{i\geq 0}r_{i,0}x^i-r_{0,0}$$ we obtain the generating function of $r(i,j)$ $$R(x,y)=\frac{xyC(x)+x-y}{(y^2-y+x)(1-x)}.$$ \end{enumerate} \item[(3)] Compute the generating function of the sequence $p(n,n+1)$: The coefficient of $y$ in $R(x,y)$ is just $\sum_{i\geq 0}r_{i,1}x^{i}$ then $$\sum_{i\geq 0}r_{i,1}x^{i}=\frac{\partial R(x,y)}{\partial y}\Big|_{y=0}= \frac{C(x)}{1-x}$$ is the generating function of the elements in the first column. If $C(x)$ is the generating function of $C_n$ then the convolution product $C(x)\cdot \frac{1}{1-x}$ is the generating function of $\sum_{k=0}^nC_k=S_n$, therefore $$r_{n,1}=\sum_{k=0}^nC_k.$$ As $r(n,1)=p(n,n+1)+1$ then $p(n,n+1)=r(n,1)-1=\sum_{k=0}^nC_k-1$, as $C_0=1$ we have $$p(n,n+1)=\sum_{k=1}^nC_k$$ where $C_k$ are Catalan numbers.
\end{enumerate} \end{proof} See \cite{stan} for more details about Catalan numbers and the web page \cite{slo} for further details about their partial sums. \begin{theorem} Let $(W,(J,c),\emptyset)$ be a basic object where $J$ is a monomial ideal as in equation (\ref{jota}) with $a_i\geq c$ for all $1\leq i \leq n$. Then the number of monoidal transformations required to resolve $(W,(J,c),\emptyset)$ is at most $$\sum_{j=1}^{n}C_j+(2^{\sum_{j=1}^{n}C_j}-1)(d-c)-c+1$$ where $C_j$ are Catalan numbers. \end{theorem} \begin{proof} It follows from theorem \ref{mon}, lemma \ref{grad} and proposition \ref{cata}. \end{proof} \begin{example} The following table shows some values of the bound for any monomial ideal $J$ as in equation (\ref{jota}) with $a_i\geq c$ for all $1\leq i \leq n$. \vspace*{0.6cm} \begin{table}[ht] \vspace*{-0.6cm} \caption{Values of the bound} \begin{center} \begin{tabular}{||c|c|c||} \hline {$n$} & $\sum_{j=1}^{n}C_j$ & global bound \\ \hline $1$ & $1$ & $1+(d-c)-c+1$ \\ \hline $2$ & $3$ & $3+7(d-c)-c+1$ \\ \hline $3$ & $8$ & $8+255(d-c)-c+1$ \\ \hline $4$ & $22$ & $22+4194303(d-c)-c+1$ \\ \hline \end{tabular} \end{center} \end{table} \end{example} \vspace*{-0.5cm} \begin{remark} Note that, as a consequence of proposition \ref{cata}, the number of monoidal transformations needed to transform $J$ into an exceptional monomial only depends on $n$, the dimension of the ambient space. \end{remark} \begin{corollary} Let $J=<Z^c-X_1^{a_1}\cdot \ldots \cdot X_n^{a_n}>\subset k[X_1,\ldots,X_n,Z]$ be a toric ideal with $a_i\geq c$ for all $1\leq i \leq n$. Then the number of monoidal transformations needed to resolve $(\mathbb{A}^{n+1}_k,(J,c),\emptyset)$ is at most $$\sum_{j=1}^{n}C_j+(2^{\sum_{j=1}^{n}C_j}-1)(d-c)-c+1 $$ where $C_j$ are Catalan numbers and $d=\sum_{i=1}^n a_i$.
\end{corollary} \section{Higher codimensional case} In the minimal codimensional case, the way in which the invariant drops essentially depends on the number of accumulated exceptional divisors, because the first components of the invariant, $\theta_{n},\ldots,\theta_{1}$, defined in equation (\ref{invt}), only depend on the orders of the ideals $I_n,\ldots,I_1$. Recall that, for each $J_i$, we use the ideal $M_i$ (see remark \ref{constructM}) to define the ideal $I_i$. But in the higher codimensional case the first components of the invariant play an important role. They can also depend on the order of the (exceptional) monomial part $M_n$, see remark \ref{remgrande}. So they may increase suddenly when some $\theta_{j}$ is given by the order of the ideal $M_n$. We will call this situation the {\it higher codimensional case in dimension $j$}. Note that after some monoidal transformations, we can obtain a new higher codimensional case in another dimension. So, we must compute the number of monoidal transformations performed while $\theta_n\geq c$, with a suitable sum of propagations. Then, we estimate the number of monoidal transformations needed to reach the higher codimensional case in dimension $1$, and use the known estimate for the order of $M_n$ to give an upper bound for the number of monoidal transformations needed to reach the following higher codimensional case inside this one (if it exists). Afterward, we estimate the number of monoidal transformations needed to reach the higher codimensional case in dimension $2$, and so on. Hence, it has not been possible to obtain a bound for this case in the same way as above, due to the complexity of the combinatorial problem, which prevents us from knowing which branch of the resolution tree is the largest (in order to obtain an exceptional monomial).
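For comparison, the counts from the minimal codimensional case can be checked numerically. The following Python sketch is our own illustration, not part of the algorithm: it implements the recurrence (\ref{pro}) for the propagations $p(i,j)$ and verifies the identity of proposition \ref{cata} with the partial sums of Catalan numbers.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def p(i, j):
    # Propagation numbers of the Propagation Lemma, equation (pro):
    # p(i, j) = i + sum_{k=1}^{i} p(k, j-1) for 0 <= i <= j-1, p(j, j) = 0.
    if i == j:
        return 0
    return i + sum(p(k, j - 1) for k in range(1, i + 1))

def minimal_case_count(n):
    # n + sum_{j=1}^{n-1} p(j, n): monoidal transformations needed to
    # reach an exceptional monomial in the minimal codimensional case.
    return n + sum(p(j, n) for j in range(1, n))

def catalan(j):
    # Catalan number C_j = binom(2j, j) / (j + 1).
    return comb(2 * j, j) // (j + 1)
```

For $n=1,2,3,4$ this gives $1,3,8,22$, the partial sums of the Catalan numbers; in particular the value $8$ for $n=3$ is the count quoted for the minimal codimensional case.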
Furthermore, even if we could find such a bound, the large number of potential cases suggests that it would be very large, even if we only estimate the number of monoidal transformations needed to obtain an exceptional monomial. \begin{example} If we consider the basic object $(W,(J,c),\emptyset)=(\mathbb{A}^3_k,(X_1^5X_2^4X_3,4),\emptyset)$, there exists a branch of height $15$ to obtain $J'=M'$ or $Sing(J',c)=\emptyset$. So, in dimension $3$, we need a bound greater than or equal to $15$ for the higher codimensional case, compared with the $8$ monoidal transformations needed in the minimal codimensional case. \end{example} \begin{remark} In any case, both theorem \ref{mon} and lemma \ref{grad} are also valid in the higher codimensional case. So the open problem is to find a bound $C$ for obtaining an exceptional monomial, in order to construct a global bound of the form $$C+(2^C-1)(d-c)-c+1 .$$ \end{remark} \begin{remark} For $n=2$ the higher codimensional case appears only in dimension $1$, and after some calculations we obtain $C=3$, which gives the same bound as in the minimal codimensional case. This bound can be improved by studying the different branches. \end{remark} \section*{Acknowledgments} I am very grateful to Professor O. Villamayor for his suggestions to improve the writing of this paper, and to Professor S. Encinas for his continued advice and help. I would also like to thank the anonymous referee for useful comments to improve the presentation of the paper.
\section{Introduction} In the present paper, we investigate the global well-posedness of the 2D Boussinesq equations with zero viscosity and positive diffusivity in a smooth bounded domain $\Omega\subset \mathbb{R}^2$ with smooth boundary $\partial \Omega$. The corresponding system reads \begin{equation}\label{BOUo} \begin{cases} \partial_t u+ u\cdot\nabla u +\nabla p = \theta e_2, \\ \partial_t \theta+ u\cdot\nabla \theta-\kappa\Delta \theta = 0, \\ \nabla \cdot u =0, \end{cases} \end{equation} where $u$ is the velocity vector field, $p$ is the pressure, $\theta$ is the temperature, $\kappa>0$ is the thermal diffusivity, and $e_2=(0,1)$. We supplement the system \eqref{BOUo} with the following initial and boundary value conditions \begin{equation}\label{IBo} \begin{cases} (u,\theta)(x,0)=(u_{0},\theta_{0})(x),\;x\in\Omega,\\ u(x,t)\cdot n|_{\partial\Omega}=0, \quad \theta(x,t)|_{\partial\Omega}=\bar \theta, \end{cases} \end{equation} where $n$ is the outward unit normal vector to $\partial \Omega$, and $\bar \theta$ is a constant. The general 2D Boussinesq equations with viscosity $\nu$ and diffusivity $\kappa$ are \begin{equation*}\label{GBOU} \begin{cases} \partial_t u+ u\cdot\nabla u -\nu\Delta u+\nabla p = \theta e_2, \\ \partial_t \theta+ u\cdot\nabla \theta-\kappa\Delta \theta = 0, \\ \nabla \cdot u =0. \end{cases} \end{equation*} The Boussinesq equations are relevant to the study of a number of models coming from atmospheric or oceanographic turbulence where rotation and stratification play an important role (see \cite{Ma}, \cite{Pe}). From the mathematical point of view, the 2D Boussinesq equations serve as a simplified model of the 3D Euler and Navier--Stokes equations (see \cite{Ma2}). A better understanding of the 2D Boussinesq equations will shed light on the understanding of 3D flows. Recently, many works have been devoted to the well-posedness of the 2D Boussinesq equations, see \cite{Abidi}-\cite{Huang}, \cite{Xu}-\cite{Zhou}.
In particular, when $\Omega=\mathbb{R}^2$, Chae in \cite{Chae} showed that the system \eqref{BOUo}-\eqref{IBo} has a global smooth solution for $(u_0,\theta_0)\in H^3$. In the case of bounded domains, the boundary effect requires a careful mathematical analysis. In this direction, Zhao in \cite{Zhao} was able to generalize the study of \cite{Chae} to smooth bounded domains. This result was later extended by Huang in \cite{Huang} to the case of Yudovich's type data: $\text{curl}\,u_0\in L^\infty$ and $\theta_0\in H^2$. We intend here to improve Huang's result further by lowering the regularity required of the initial data. Our main result is stated in the following theorem. \begin{theorem}\label{Th} Let $\Omega$ be a bounded domain in $\mathbb{R}^2$ with $C^{2+\epsilon}$ boundary for some $\epsilon>0$. Suppose that $u_0\in L^2$, $\text{curl}\, u_0\in L^\infty$, and $\theta_0\in B^{2-2/p}_{q,p}$ with $p\in(1,\infty)$, $q\in(2,\infty)$. Then there exists a unique global solution $(u,\theta)$ to the system \eqref{BOU}-\eqref{IB}, which satisfies for all $T>0$ \begin{gather*} \theta\in C([0,T];B_{q,p}^{2-2/p})\cap L^{p}(0,T;W^{2,q})\,, \partial_t\theta \in L^{p}(0,T;L^{q}),\\ u \in L^\infty(0,T;L^2) \text{ and }\text{curl}\, u\in L^\infty(0,T;L^\infty). \end{gather*} \end{theorem} \begin{remark} We only require mild regularity for the initial temperature $\theta_0$, as the ``regularity index'' $2-2/p$ can be arbitrarily close to zero. Thus, our result significantly improves the previous results \cite{Chae,Huang,Zhao}. \end{remark} \begin{remark} By slightly modifying the method in the current paper, we can prove global well-posedness for initial data $(u_0,\theta_0)\in H^{2+s}\times H^s$ with $s>0$ or $(u_0,\theta_0)\in W^{2,q}\times W^{1,q}$ with $q>2$. \end{remark} The proof of Theorem \ref{Th} consists of two main steps. First, we show the global existence of weak solutions to \eqref{BOUo}-\eqref{IBo}.
Then we improve the regularity of weak solutions using the maximal regularity for the heat equation. Our proof is elementary and can be carried over to the case of $\mathbb{R}^2$ without difficulty. The rest of the paper is organized as follows. In Section 2, we recall the maximal regularity for the heat equation as well as some basic facts. Section 3 is devoted to the proof of our main theorem. \section{Notations and Preliminaries} \noindent{\bf Notations:} (1) Let $\Omega$ be a bounded domain in $\mathbb{R}^{2}$. For $p\ge 1$ and $k\ge 1$, $L^p(\Omega)$ and $W^{k,p}(\Omega)$ ($H^k(\Omega)$ when $p=2$) denote the standard Lebesgue and Sobolev spaces respectively. For $T>0$ and a function space $X$, denote by $L^p(0,T; X)$ the set of Bochner measurable $X$-valued time-dependent functions $f$ such that $t\to \|f\|_X$ belongs to $L^p(0,T)$. (2) Let $s\in (0,\infty)$, $p\in (1, \infty)$ and $r\in [1, \infty]$. The Besov space $B_{p,r}^{s}(\Omega)$ is defined as the real interpolation space between $L^p(\Omega)$ and $W^{m,p}(\Omega)$ ($m>s$), \[B_{p,r}^{s}(\Omega)=(L^p(\Omega),W^{m,p}(\Omega))_{\frac s m,r}.\] See Adams and Fournier (\cite{AD}, Chapter 7). (3) Throughout this paper, the letter $C$ denotes a generic positive constant which may depend on the initial data $(u_0,\theta_0)$, the time $T$, the thermal diffusivity $\kappa$, and the domain $\Omega$. We need the well-known Sobolev embeddings and the Gagliardo--Nirenberg inequality (see Adams and Fournier \cite{AD} and Nirenberg \cite{Niren}). \begin{lemma}\label{embed} Let $\Omega\subset \mathbb{R}^{2}$ be any bounded domain with $C^2$ boundary. Then the following embeddings and inequalities hold: \begin{enumerate} \item $ H^1(\Omega)\hookrightarrow L^q(\Omega)$, for all $q\in (1,\infty)$. \item $\|\nabla u\|_{L^\infty}\leq C \|\nabla^2u\|_{L^q}^{\alpha}\|u\|_{L^2}^{1-\alpha}+C\|u\|_{L^2}$, for all $u\in W^{2,q}(\Omega)$, with $q\in (2,\infty)$, $\alpha=\frac {2q} {3q-2}$, where $C$ is a constant depending on $q,\Omega$.
\end{enumerate} \end{lemma} We also need the maximal regularity for the heat equation (see Amann \cite{Amann}), which is critical to the proof of our main theorem. \begin{lemma}\label{le:2.2} Let $\Omega$ be a bounded domain with a $C^{2+\epsilon}$ boundary in $\mathbb{R}^2$ and $1 < p, q <\infty$. Assume that $u_0\in B_{q,p}^{2-2/p}$, $f \in L^p(0,\infty;L^q)$. Then the system \begin{equation} \begin{cases} \partial_t u-\kappa\Delta u=f,\\ u(x,t)|_{\partial \Omega}=0,\\ u(x,t)|_{t=0}=u_0, \end{cases} \end{equation} has a unique solution $u$ satisfying the following inequality for all $T > 0$: \begin{align*} &\|u(T)\|_{B_{q,p}^{2-2/p}}+\|u\|_{L^p(0,T; W^{2,q})}+\|\partial_t u\|_{L^p(0,T; L^q)}\\ &\leq C\left(\|u_0\|_{B_{q,p}^{2-2/p}}+\|f\|_{L^p(0,T; L^q)}\right), \end{align*} with $C=C(p,q,\kappa,\Omega)$. \end{lemma} We complete this section by recalling the following well-known inequality (see Yudovich \cite{Yudovich}), which will be used several times. \begin{lemma}\label{le:2.4} For any $p\in (1,\infty)$, the following estimate holds \[\|\nabla u\|_{L^p(\Omega)}\leq C \frac{p^2}{p-1}\|w\|_{L^p(\Omega)},\] where $w=\text{curl}\, u$ denotes the vorticity and $C=C(\Omega)$ does not depend on $p$. \end{lemma} \section{Proof of Main Theorem} We first reformulate the initial-boundary value problem \eqref{BOUo}-\eqref{IBo}. Let $\bar p= p-\bar\theta y$ and $\Theta=\theta-\bar \theta $; then the original system becomes \begin{equation}\label{BOU} \begin{cases} \partial_t u+ u\cdot\nabla u +\nabla \bar p = \Theta e_2, \\ \partial_t \Theta+ u\cdot\nabla \Theta-\kappa\Delta \Theta = 0, \\ \nabla \cdot u =0. \end{cases} \end{equation} The initial and boundary conditions become \begin{equation}\label{IB} \begin{cases} (u,\Theta)(x,0)=(u_{0},\Theta_{0})(x),\;x\in\Omega,\\ u(x,t)\cdot n|_{\partial\Omega}=0, \quad \Theta(x,t)|_{\partial\Omega}=0, \end{cases} \end{equation} where $\Theta_0=\theta_0-\bar \theta$. It is clear that \eqref{BOU}-\eqref{IB} is equivalent to \eqref{BOUo}-\eqref{IBo}.
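The equivalence rests on a one-line computation: since $\bar\theta$ is constant, the buoyancy term splits into a genuinely varying part and a gradient, and the gradient is absorbed into the pressure,

```latex
\theta e_2 \;=\; \Theta e_2 + \bar\theta e_2
           \;=\; \Theta e_2 + \nabla\!\left(\bar\theta\, y\right),
\qquad\text{so that}\qquad
\nabla p - \theta e_2 \;=\; \nabla\!\left(p-\bar\theta\, y\right) - \Theta e_2 .
```

This is exactly the substitution $\bar p = p-\bar\theta y$, while the equation for $\Theta$ and the divergence-free condition are unchanged.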
Hence, for the rest of this paper, we shall work on the reformulated problem \eqref{BOU}-\eqref{IB}. \subsection{Existence} First, we show the existence of weak solutions. Then, we improve their regularity using the maximal regularity for the heat equation. In fact, one can establish the global existence of weak solutions to \eqref{BOU}-\eqref{IB} in bounded domains by a standard argument, see Zhao \cite{Zhao}. \begin{lemma}\label{weak} Let $\Omega$ be a bounded domain in $\mathbb{R}^2$. Assume that $(u_0,\Theta_0)\in L^2\times L^2$. Then there exists a global weak solution $(u,\Theta)$ to \eqref{BOU}-\eqref{IB} such that for any $T>0$ \begin{enumerate} \item $u\in L^\infty (0,T;L^2)$, $\Theta\in L^\infty (0,T;L^2)\cap L^2(0,T;H^1)$. \item $\int_{\Omega}u_0\phi(x,0)dx+\int_0^T\int_{\Omega}\left(u\cdot \partial_t\phi+u\cdot\left(u\cdot \nabla \phi\right)+\Theta e_2\phi \right)dxdt=0$, for any vector function $\phi\in C_0^\infty(\Omega\times[0,T))$ satisfying $\nabla \cdot \phi=0$. \item $\int_{\Omega}\Theta_0\psi(x,0)dx+\int_0^T\int_{\Omega}\left(\Theta\cdot \partial_t\psi+\Theta u\cdot \nabla \psi-\nabla \Theta\cdot\nabla \psi\right)dxdt=0$, for any scalar function $\psi\in C_0^\infty(\Omega\times[0,T))$. \end{enumerate} \end{lemma} It remains to establish the global regularity of the solutions obtained in Lemma \ref{weak}. The proof is divided into several lemmas. \begin{lemma}\label{le:3.1} Let the assumptions in Theorem \ref{Th} hold. Then the solution obtained in Lemma \ref{weak} satisfies \begin{equation}\label{le:3.1-1} \Theta\in L^\infty(0,T;L^2)\cap L^2(0,T;H^1), \end{equation} \begin{equation}\label{le:3.1-2} u\in L^\infty(0,T;L^2).
\end{equation} \end{lemma} \begin{proof} Multiplying \eqref{BOU}$_{2}$ by $\Theta$, integrating over $\Omega$ by parts, and then integrating in time, we find \[\|\Theta\|_{L^{2}}^{2}+2\kappa\int_{0}^{T}\|\nabla \Theta\|_{L^{2}}^{2}dt\leq\|\Theta_{0}\|_{L^2}^{2}.\] For the second estimate, taking the $L^2$ inner product of \eqref{BOU}$_{1}$ with $u$, and using H\"older's inequality, we get \[\frac{1}{2}\frac{d}{dt}\|u\|_{L^2}^2=\int_{\Omega}\Theta e_2 \cdot udx\leq \|\Theta\|_{L^2}\|u\|_{L^2},\] which, after dividing by $\|u\|_{L^2}$ and integrating in time, gives \[\|u\|_{L^2}\leq \|u_0\|_{L^2}+\int_0^T\|\Theta\|_{L^2}ds\leq \|u_0\|_{L^2}+T\|\Theta_0\|_{L^2}.\] Then the proof of Lemma \ref{le:3.1} is finished. \end{proof} \begin{lemma}\label{le:3.2} Let the assumptions in Theorem \ref{Th} hold. Then the solution obtained in Lemma \ref{weak} satisfies \begin{equation}\label{le:3.2-2} w\in L^\infty(0,T;L^2). \end{equation} \end{lemma} \begin{proof} We recall that the vorticity $w=\text{curl}\; u$ satisfies the equation \begin{equation}\label{3.2-1} \partial_t w+u\cdot\nabla w=\partial _1 \Theta. \end{equation} Multiplying \eqref{3.2-1} by $w$, integrating the resulting equation over $\Omega$ by parts, and using H\"older's inequality, we have \[\frac{1}{2}\frac{d}{dt}\|w\|_{L^2}^2=\int_{\Omega}\partial_1\Theta wdx\leq \|\nabla \Theta\|_{L^2}\|w\|_{L^2},\] which implies that \begin{align*} \|w\|_{L^2} &\leq \|w_0\|_{L^2}+\int_0^T\|\nabla \Theta\|_{L^2}ds\\ &\leq \|w_0\|_{L^2}+T^{1/2}\left(\int_0^T\|\nabla \Theta\|_{L^2}^2ds\right)^{1/2}\\ &\leq \|w_0\|_{L^2}+\frac{T^{1/2}}{\sqrt{2\kappa}}\|\Theta_0\|_{L^2}. \end{align*} Then the proof of Lemma \ref{le:3.2} is finished. \end{proof} \begin{lemma}\label{le:3.3} Let the assumptions in Theorem \ref{Th} hold. Then the solution obtained in Lemma \ref{weak} satisfies \begin{gather*} \Theta\in C([0,T];B_{q,p}^{2-2/p})\cap L^{p}(0,T;W^{2,q})\,, \partial_t \Theta \in L^{p}(0,T;L^{q}),\\ u \in L^\infty(0,T;L^2) \text{ and }\text{curl}\, u\in L^\infty(0,T;L^\infty).
\end{gather*} \end{lemma} \begin{proof} First, we obtain from \eqref{le:3.2-2} and Lemma \ref{le:2.4} that \[\nabla u\in L^\infty(0,T;L^2),\] which implies that for $2\leq q <\infty$, \begin{equation}\label{3.2-2} u\in L^\infty(0,T;L^q). \end{equation} Considering the equation for the temperature, by the maximal regularity for the heat equation and H\"{o}lder's inequality, we obtain that for $1<p<\infty$, $2<q<\infty$, \begin{equation}\label{3.2-3} \begin{aligned} &\quad\|\Theta\|_{L^\infty(0,T;B_{q,p}^{2-2/p})}+\|\Theta\|_{L^p(0,T; W^{2,q})}+\|\partial_t \Theta\|_{L^p(0,T; L^q)}\\ &\leq C\|\Theta_0\|_{B_{q,p}^{2-2/p}}+C \|u\cdot\nabla \Theta\|_{L^p(0,T;L^q)}\\ &\leq C\|\Theta_0\|_{B_{q,p}^{2-2/p}}+C \|u\|_{L^\infty(0,T;L^q)}\|\nabla \Theta\|_{L^p(0,T;L^\infty)}\\ &\leq C\|\Theta_0\|_{B_{q,p}^{2-2/p}}+C\|\nabla \Theta\|_{L^p(0,T;L^\infty)}. \end{aligned} \end{equation} Using the interpolation inequality in Lemma \ref{embed}, H\"{o}lder's inequality and Young's inequality, we have for arbitrary $\epsilon >0$, $q>2$, \[\|\nabla \Theta\|_{L^p(0,T;L^\infty)}\leq \epsilon \|\nabla^2 \Theta\|_{L^p(0,T;L^q)}+C(\epsilon)\|\Theta\|_{L^p(0,T;L^2)}.\] Plugging the above inequality into \eqref{3.2-3} and absorbing the small $\epsilon$ term, we get \[\|\Theta\|_{L^\infty(0,T;B_{q,p}^{2-2/p})}+\|\Theta\|_{L^p(0,T; W^{2,q})}+\|\partial_t \Theta\|_{L^p(0,T; L^q)}\leq C(p,q,\kappa,T,\Omega,u_0,\Theta_0),\] which yields that \[\nabla \Theta \in L^1(0,T;L^\infty).\] Coming back to the vorticity equation \eqref{3.2-1}, we derive that \[w\in L^\infty(0,T;L^\infty).\] Then the proof of Lemma \ref{le:3.3} is finished. \end{proof} \subsection{Uniqueness} The method adopted here is essentially due to Yudovich \cite{Yudovich}, see also Danchin \cite{Danchin}. Let $(u_1,\Theta_1,\bar p_1)$ and $(u_2, \Theta_2,\bar p_2)$ be two solutions of the system \eqref{BOU}-\eqref{IB}. Denote $\delta u=u_1-u_2$, $\delta \Theta=\Theta_1-\Theta_2$, and $\delta p=\bar p_1-\bar p_2$.
Then $( \delta u, \delta \Theta,\delta p)$ satisfies \begin{equation} \begin{cases} \partial_t \delta u+ u_2\cdot\nabla \delta u +\nabla \delta p =-\delta u\cdot \nabla u_1+ \delta \Theta e_2, \\ \partial_t\delta \Theta -\kappa \Delta \delta \Theta = -u_2\cdot\nabla\delta \Theta-\delta u\cdot\nabla \Theta_1 , \\ \nabla \cdot \delta u =0, \\ \delta u(x,t)\cdot n|_{\partial\Omega}=0, \quad \delta \Theta(x,t)|_{\partial\Omega}=0,\\ \delta u(x,0) =0, \quad \delta \Theta(x,0) =0. \end{cases} \end{equation} By a standard energy method and H\"{o}lder's inequality, we have for all $r\in [2,\infty)$ \begin{align*} \frac{1}{2}\frac{d}{dt}\|\delta u\|_{L^2}^2 &\leq \|\nabla u_1\|_{L^r}\|\delta u\|_{L^{2r'}}^2+\|\delta \Theta \|_{L^2}\|\delta u\|_{L^2}\\ &\leq \|\nabla u_1\|_{L^r}\|\delta u\|_{L^{\infty}}^{2/r}\|\delta u\|_{L^2}^{2/r'}+\|\delta \Theta \|_{L^2}\|\delta u\|_{L^2},\\ \intertext{and} \frac{1}{2}\frac{d}{dt}\|\delta \Theta\|_{L^2}^2 &\leq \|\nabla \Theta_1\|_{L^\infty}\|\delta \Theta\|_{L^2}\|\delta u\|_{L^2}, \end{align*} where $1/r+1/r'=1$. Denoting $X(t)=\|\delta u\|_{L^2}^2+\|\delta \Theta\|_{L^2}^2$, we find that \[\frac{1}{2}\frac{d}{dt}X \leq \|\nabla u_1\|_{L^r}\|\delta u\|_{L^{\infty}}^{2/r}X^{1/r'}+\frac{1}{2}(1+\|\nabla \Theta_1\|_{L^\infty})X.\] Setting $Y=e^{-\int_0^t\left(1+\|\nabla \Theta_1\|_{L^\infty}\right)ds}X$, we deduce that \begin{align*} \frac{1}{r}Y^{-\frac{1}{r'}}\frac{d}{dt}Y &\leq \frac{2}{r}e^{-\frac{1}{r}\int_0^t\left(1+\|\nabla \Theta_1\|_{L^\infty}\right)ds}\|\nabla u_1\|_{L^r}\|\delta u\|_{L^\infty}^{2/r}\\ &\leq \frac{2}{r}\|\nabla u_1\|_{L^r}\|\delta u\|_{L^\infty}^{2/r}. \end{align*} Integrating in time on $[0,t]$ gives us that \begin{equation}\label{eqY} Y(t)\leq \left(2\int_0^t\frac{\|\nabla u_1\|_{L^r}}{r}\|\delta u\|_{L^\infty}^{2/r}ds\right)^r. \end{equation} To proceed, we make two simple observations.
First, combining Lemma \ref{le:2.4} and the bound $w_1\in L^\infty(0,T;L^\infty)$, we deduce that \begin{equation} \sup_{2\leq r<\infty}\frac{\|\nabla u_1(t)\|_{L^r}}{r}\leq C(\Omega). \end{equation} Second, from the fact that $u_i\in L^\infty(0,T; L^2)$ and $w_i\in L^\infty(0,T; L^\infty)$ for $i=1,2$, we have \begin{equation}\label{bounddu} \delta u\in L^\infty(0,T;L^\infty). \end{equation} Next, choosing $T^*$ such that $\int_0^{T^*}\frac{\|\nabla u_1\|_{L^r}}{r}ds\leq 1/4$, together with \eqref{bounddu}, we can rewrite \eqref{eqY} as \[Y(t)\leq C\left(\frac{1}{2}\right)^r.\] Sending $r$ to $\infty$, we get $Y(t)\equiv 0$ on $[0,T^*]$. By a standard induction argument, we conclude that $Y(t)\equiv 0$ for all $t>0$, which means that $(\delta u, \delta \Theta,\delta p)\equiv 0$ on $[0,T]$ for every $T>0$. Thus we obtain the uniqueness of solutions. \ \noindent{\bf\small Acknowledgment.} {\small D.G. Zhou is supported by the National Natural Science Foundation of China (No. 11401176) and the Doctor Fund of Henan Polytechnic University (No. B2012-110). Z.L. Li is supported by the Doctor Fund of Henan Polytechnic University (No. B2016-57), the Fundamental Research Funds for the Universities of Henan Province (NSFRF 16A110015) and the National Natural Science Foundation of China (No. 11601128).} \bibliographystyle{amsplain}
\section{Preparing extended ancilla qubits for quantum phase estimation}\label{app:QPE_ancilla} The QPE algorithm requires the application of a unitary operator conditional on an ancilla qubit, which naively would require each Trotter step to be performed in series as the ancilla qubit is passed through the system. The following method parallelizes the QPE algorithm at a cost of $O(N)$ ancilla qubits and a constant depth preparation circuit, which may well be preferable. We make this trade by preparing a large cat state on $4n$ Majoranas by the circuit in Fig.~\ref{fig:cat_state_prep}. First, we prepare $n$ groups of $4$ Majoranas, placing the Majoranas $\gamma_{4j}\gamma_{4j+1}\gamma_{4j+2}\gamma_{4j+3}$ in the $\frac{1}{\sqrt{2}}(|00\>+|11\>)$ state for $j=0,\ldots,n-1$. Then, making the joint parity measurements $\gamma_{4j+2}\gamma_{4j+3}\gamma_{4j+4}\gamma_{4j+5}$ for $j=0,\ldots,n-2$ forces our system into the equal superposition \begin{equation} \frac{1}{\sqrt{2}}\left(\left|\prod_{j=0}^{n-1}x_{2j}x_{2j+1}\right\>+\left|\prod_{j=0}^{n-1}\bar{x}_{2j}\bar{x}_{2j+1}\right\>\right), \end{equation} where $x_j\in\{0,1\}$ is the parity of the $j$th fermion ($\bar{x}_j=1-x_j$), and $x_{2j}\oplus x_{2j-1}$ is determined by the outcome of the joint parity measurement. This can then be converted to the GHZ state $\frac{1}{\sqrt{2}}(|00\ldots 0\>+|11\ldots 1\>)$ by braiding (or the value of $x_j$ can be stored and used to decide whether to rotate by $\theta$ or $-\theta$). The rotations to be performed for QPE may then be controlled by \emph{any} of the pairs of Majoranas defining a single fermion, and so we may spread this correlated ancilla over our system as required to perform rotations. As the interaction between ancilla qubits and system qubits is limited to a single joint parity measurement per Trotter step, we expect that although $n$ should scale as $O(N)$ to allow for parallelizing the circuit, the prefactor will be quite small.
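The effect of these joint parity measurements can be checked with a small state-vector simulation, encoding each fermion's occupation parity as a qubit. The toy below (two fermion pairs, i.e. $n=2$; all function names ours) projects the product of two $\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$ pairs onto the even eigenspace of the inter-pair joint parity and recovers exactly the advertised superposition of one bitstring and its complement.

```python
import numpy as np

# Each fermion parity is one qubit. Start with two pairs, each in (|00> + |11>)/sqrt(2).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(bell, bell)            # 4 qubits: x0 x1 x2 x3

def parity_projector(n_qubits, qubits, parity):
    """Projector onto basis states whose bit-sum over `qubits` is `parity` mod 2."""
    dim = 2 ** n_qubits
    diag = np.zeros(dim)
    for b in range(dim):
        bits = [(b >> (n_qubits - 1 - q)) & 1 for q in qubits]
        if sum(bits) % 2 == parity:
            diag[b] = 1.0
    return np.diag(diag)

# Joint parity measurement on the middle fermions x1, x2; outcome "even".
P = parity_projector(4, [1, 2], 0)
post = P @ state
post /= np.linalg.norm(post)

# Only |0000> and |1111> survive, each with amplitude 1/sqrt(2): a GHZ state.
support = np.nonzero(np.abs(post) > 1e-12)[0]
print([format(b, '04b') for b in support])  # ['0000', '1111']
```

The odd measurement outcome instead leaves $(|0011\rangle+|1100\rangle)/\sqrt{2}$, matching the statement that the bitstring is fixed by the measurement record up to global complementation.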
At the end of the QPE circuit, we recover the required phase by rotating $\exp(i\frac{\pi}{4}\gamma_{4j+1}\gamma_{4j+2})$ for $j=0,\ldots,n-1$ and reading out the parity of all fermions individually. Starting from the state \begin{equation} \frac{1}{\sqrt{2}}\left( |00\ldots 0\>+e^{i\phi} |11\ldots 1\>\right), \end{equation} this prescription yields a $\cos^2(\phi/2)$ probability for the sum of all parities to be $0$ mod $4$. \begin{figure} \centering{ \includegraphics[width=0.75\columnwidth]{cat_state_prep.pdf}} \caption{\label{fig:cat_state_prep}Circuit for preparing an extended cat state on a set of ancilla qubits with constant depth. The circuit need only be as local as the weight-four parity checks allow. Afterwards, any pair $\{\gamma_{2j},\gamma_{2j+1}\}$ of Majoranas may be used equivalently to perform a conditional Trotter step in QPE.} \end{figure} \section{An algorithm to perform a Trotter step for a fully-connected fourth-order Hamiltonian in $O(N^3)$ time.}\label{app:TrotterN3} We showed in the main text a compact circuit for a four-Majorana Trotter step that does not require Jordan-Wigner strings, and in App.~\ref{app:QPE_ancilla} we suggested a method to perform conditional evolution in parallel by using a large GHZ state as an ancilla qubit. Assuming a fermionic Hamiltonian on $N$ spin-orbitals with $4$th order terms, this would imply an $O(N^3)$ circuit depth for our QPE algorithm per Trotter step. However, there is an additional complication: we need to ensure that we do not incur additional circuit depth from the requirement to bring sets of $4$ Majoranas close enough together to perform this conditional evolution. To show this, we consider a line of $N$ Majoranas $\gamma_1,\ldots,\gamma_N$. We allow ourselves at each timestep $t$ to swap a Majorana with its neighbour on the left or the right.
(Note that this is a simplification of our architecture, where we may not directly swap initialized Majoranas, but this brings only a constant additional time cost.) We wish to give an algorithm of length $O(N^3)$ such that for any set of four Majoranas $\{\gamma_i,\gamma_j,\gamma_k,\gamma_l\}$, there exists a timestep $t$ at which these are placed consecutively along the line. As demonstrated in~\cite{kivlichan_quantum_2017}, inverting the line by a bubblesort solves the equivalent problem for pairs $\{\gamma_i,\gamma_j\}$ in $O(N)$ time, and this may be quickly extended to the case of sets of four. Let us first consider the problem of forming all groups of $3$ Majoranas. We divide our line into the sets $\Gamma_0=\{\gamma_i,i\leq N/2\}$ and $\Gamma_1=\{\gamma_i,i>N/2\}$. We then group neighboring pairs of elements in $\Gamma_1$ to form subsets, which we pair with all elements in $\Gamma_0$ in $O(N)$ time by a reverse bubblesort. Then, upon restoring the previous ordering, we fix the positions of the elements of $\Gamma_0$, and perform a single iteration of the reverse bubblesort on the elements of $\Gamma_1$ to form new subsets of pairs. Repeating this procedure until the second bubblesort has finished generates all subsets consisting of $2$ Majoranas in $\Gamma_1$ and $1$ from $\Gamma_0$ in $O(N^2)$ time. All groups of $2$ Majoranas from $\Gamma_0$ and $1$ from $\Gamma_1$ may be obtained in the same manner. Then, we may split the line in two, and reapply the above method on $\Gamma_0$ and $\Gamma_1$ separately to obtain all groups of $3$ Majoranas within each. This final step takes $O((N/2)^2+(N/4)^2+(N/8)^2+\ldots)=O(N^2)$ time. From here, it is clear how to proceed for groups of $4$. We again divide our line into the sets $\Gamma_0$ and $\Gamma_1$, and split our problem into that of making all groups of $(m,4-m)$ Majoranas, where the first index denotes the number from $\Gamma_0$ and the second from $\Gamma_1$.
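The base case cited above is easy to verify directly: unconditionally applying alternating layers of adjacent swaps (an odd-even transposition sort) reverses the line in $N$ layers, and since every pair of elements must exchange relative order, every pair of Majoranas is adjacent at some timestep along the way. A small sketch (function names ours):

```python
def oddeven_reversal_steps(n):
    """Reverse the line [0..n-1] by n layers of alternating odd/even adjacent
    swaps, recording the configuration after each layer."""
    line = list(range(n))
    steps = [tuple(line)]
    for layer in range(n):
        start = layer % 2                      # alternate which neighbouring pairs swap
        for i in range(start, n - 1, 2):
            line[i], line[i + 1] = line[i + 1], line[i]
        steps.append(tuple(line))
    return steps

def all_pairs_meet(n):
    """Check that every pair of labels is adjacent in some recorded configuration."""
    met = set()
    for config in oddeven_reversal_steps(n):
        for a, b in zip(config, config[1:]):
            met.add(frozenset((a, b)))
    return len(met) == n * (n - 1) // 2

print(all_pairs_meet(8))  # True
```

The same network is the building block for the reverse bubblesorts used repeatedly in the grouping construction above.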
For $1\leq m\leq3$, we have an $O(N^{m-1})$ circuit to prepare all groups of $m$ Majoranas in $\Gamma_0$, an $O(N^{3-m})$ circuit to prepare groups of $4-m$ Majoranas in $\Gamma_1$, and an $O(N)$ bubblesort to pair all groups from $\Gamma_0$ and $\Gamma_1$. These three steps must be nested within one another, giving a total time of $O(N^{m-1}N^{3-m}N)=O(N^3)$. Finally, we handle the $m=0$ and $m=4$ cases simultaneously by repeating this procedure on the two halves $\Gamma_0$ and $\Gamma_1$ separately, which again takes $O(N^3)$ time by the arguments above. \section{Details of parallel circuit for Hubbard model}\label{app:HubbardCircuits} In this section we expand upon the proposal in Fig.~\ref{fig:2darchitecture} to perform QPE for the Hubbard model in constant time. This is a key feature of proposals for pre-error-correction quantum simulation~\cite{dal16}, and as such bears further detail. There are $11$ terms in equation~\ref{eq:Hubbard} per site of our lattice, corresponding to $11$ Trotter steps that must be performed in series (as each circuit piece requires accessing a prepared ancilla and additional Majoranas for the controlled braiding). As part of these Trotter steps, we must move Majoranas to their appropriate islands for parity measurements, and leave sufficient space for the preparation of the controlled rotation gate. We split the $11$ Trotter steps into $3$ stages, as indicated in Fig.~\ref{fig:2darchitecture}(a). In the first stage, the Trotter steps corresponding to hopping terms between nearest-neighbour fermions of the same spin are implemented, but only for those neighbours that are directly connected on the graph of Fig.~\ref{fig:2darchitecture}(a) (i.e.~those separated by a single braiding ancilla fermion). In the second stage, the steps for onsite two- and four-fermion interactions are implemented. On leaving stage 2, as the qubits are brought back to their resting positions, the spin-up and spin-down fermions on each site have their locations exchanged.
This allows the final two Trotter steps to be applied between fermions that are now locally connected, without the large overhead of bringing distant fermions together and then apart. At the end of the unitary, the system is in a spin-rotated version of itself, and the order of Trotter steps for a second unitary evolution should be changed slightly to minimize braiding overhead. In Table~\ref{tab:Hubbard_order}, we detail these three stages further. In particular, we focus on the $10$ terms involving the fermion $f_\uparrow^{1,1}$, and the onsite interaction term for the fermion $f_{\downarrow}^{1,1}$. For each term, we specify the location of all involved system Majoranas, parking spots for unused system Majoranas, the control ancilla, three braiding ancillas (for the implementation of the phase gate of Fig.~\ref{fig:4fold_rotation}), and which islands are involved in the parity measurement. Each such set of operations should then be tessellated across the lattice by a translation of a unit cell and a spin rotation to generate $10$ parallelized Trotter steps for all fermions. (For example, the hopping steps involving $f_\downarrow^{1,1}$ or $f_\uparrow^{1,2}$ are implemented in the operations from neighboring cells, and the hopping steps of $f_{\sigma}^{1,2}$ are reflected compared to those of $f_{\sigma}^{1,1}$, but those of $f_{\sigma}^{2,1}$ are not.) One must then take care that this tessellation does not self-intersect, that all required qubits are connected to an island being measured, that the three braiding ancillas are connected in a way that allows for braiding, and that the measurement circuit does not isolate individual islands (which would cause them to dephase). We assume that the conditional braidings on system Majoranas are performed as they move between configurations (or are potentially cancelled), and so we do not account for these.
We also assume that our finite-sized lattice is surrounded by a common ground, and so parallel lines of coupled islands will maintain a common phase by connecting to this. We have further found paths to hop Majoranas between their needed configurations and costed them in terms of the number of hoppings. We make no claim that the found arrangement is optimal, and invite any interested readers to attempt to beat our score for an optimal braiding pattern. \begin{table*}[p] \begin{tabular}{c|c||c|c|c|c|c|c}\toprule \textbf{Stage}\;\;&\;\;\textbf{Hamiltonian}\;\; &\;\; \textbf{System} \;\; & \;\;\textbf{Parking}\;\; & \;\;\textbf{Control}\;\; & \;\;\textbf{Braiding}\;\; & \;\;\textbf{Measurement}\;\; & \;\; \textbf{Rearrangement} \;\;\\ & \textbf{term} & \textbf{fermions} & \textbf{sites} & \textbf{ancilla} & \textbf{ancillas} & \textbf{island} & \textbf{cost} \\\hline \vspace{-0.2cm}& & & & & & & \\ 1 & $\frac{it}{2}\gamma_{\uparrow,1}^{1,1}\gamma_{\uparrow,2}^{0,1}$ & $f_{b,1}^{1,0}$ & $f_{b,0}^{1,0}$ & $f_{\uparrow}^{1,1}$ & $f_{\uparrow}^{0,1}$, $f_{c}^{0,1}$, $f_{b,2}^{0,0}$ & $I_L^{1,1} $ & (11) \\[5pt]\hline \vspace{-0.2cm}& & & & & & \\ 1 & $\frac{it}{2}\gamma_{\uparrow,1}^{0,1}\gamma_{\uparrow,2}^{1,1}$ & $f_{b,0}^{1,0}$ & $f_{b,1}^{1,0}$ & $f_{\uparrow}^{1,1}$ & $f_{\uparrow}^{0,1}$, $f_{c}^{0,1}$, $f_{b,2}^{0,0}$ & $I_L^{1,1} $ & 0 \\[5pt]\hline \vspace{-0.2cm}& & & & & & \\ 1 & $\frac{it}{2}\gamma_{\uparrow,1}^{1,1}\gamma_{\uparrow,2}^{1,2}$ & $f_\uparrow^{1,1}$ & $f_{b,1}^{1,1}$ & $f_c^{1,2}$ & $f_{b,2}^{1,1}$, $f_{b,0}^{2,1}$, $f_{\downarrow}^{1,1}$ & $I_C^{1,1}$ & 11+7 \\[5pt]\hline \vspace{-0.2cm}& & & & & & \\ 1 & $\frac{it}{2}\gamma_{\uparrow,1}^{1,2}\gamma_{\uparrow,2}^{1,1}$ & $f_{b,1}^{1,1}$ & $f_\uparrow^{1,1}$ & $f_c^{1,2}$ & $f_{b,2}^{1,1}$, $f_{b,0}^{2,1}$, $f_{\downarrow}^{1,1}$ & $I_C^{1,1}$ & 0 \\[5pt]\hline \vspace{-0.2cm}& & & & & & \\ 1 & $\frac{it}{2}\gamma_{\uparrow,1}^{1,1}\gamma_{\uparrow,2}^{1,0}$ & $f_{b,1}^{1,0}$ & 
$f_{\uparrow}^{1,0}$ & $f_c^{1,1}$ & $f_{b,2}^{1,0}$, $f_{b,0}^{2,0}$, $f_{\downarrow}^{1,0}$ & $I_C^{1,0}$ & 7+7 \\[5pt]\hline \vspace{-0.2cm}& & & & & & \\ 1 & $\frac{it}{2}\gamma_{\uparrow,1}^{1,0}\gamma_{\uparrow,2}^{1,1}$ & $f_{\uparrow}^{1,0}$ & $f_{b,1}^{1,0}$ & $f_c^{1,1}$ & $f_{b,2}^{1,0}$, $f_{b,0}^{2,0}$, $f_{\downarrow}^{1,0}$ & $I_C^{1,0}$ & 0\\[5pt]\hline \vspace{-0.2cm}& & & & & & \\ 2 & $\frac{i}{4} (U-2\mu)\gamma_{\uparrow,1}^{1,1}\gamma_{\uparrow,2}^{1,1}$ & $f_{c}^{1,1}$ & $f_{b,2}^{1,1}$ & $f_{\downarrow}^{1,1}$ & $f_{b,1}^{1,1}$, $f_{b,0}^{1,1}$, $f_{\uparrow}^{1,1}$ & $I_C^{1,1}$ & 7+6 \\[5pt]\hline \vspace{-0.2cm}& & & & & & \\ 2 & $\frac{i}{4} (U-2\mu)\gamma_{\downarrow,1}^{1,1}\gamma_{\downarrow,2}^{1,1}$ & $f_{b,2}^{1,1}$ & $f_{c}^{1,1}$ & $f_{\downarrow}^{1,1}$ & $f_{b,1}^{1,1}$, $f_{b,0}^{1,1}$, $f_{\uparrow}^{1,1}$ & $I_C^{1,1}$ & 0 \\[5pt]\hline \vspace{-0.2cm}& & & & & & \\ 2 & $-\frac{U}{4}\gamma_{\uparrow,1}^{1,1}\gamma_{\uparrow,2}^{1,1}\gamma_{\downarrow,1}^{1,1}\gamma_{\downarrow,2}^{1,1}$ & $f_{b,2}^{1,1}$, $f_{c}^{1,1}$ & & $f_{\downarrow}^{1,1}$ & $f_{b,1}^{1,1}$, $f_{b,0}^{1,1}$, $f_{\uparrow}^{1,1}$ & $I_C^{1,1}$ & 0 \\[5pt]\hline \vspace{-0.2cm}& & & & & & \\ 3 & $\frac{it}{2}\gamma_{\uparrow,1}^{1,1}\gamma_{\uparrow,2}^{2,1}$ & $f_{b,1}^{1,0}$ & $f_{b,0}^{1,0}$ & $f_{\uparrow}^{1,1}$ & $f_{\uparrow}^{0,1}$, $f_{c}^{0,1}$, $f_{b,2}^{0,0}$ & $I_L^{2,1} $ & 28+11 \\[5pt]\hline \vspace{-0.2cm}& & & & & & \\ 3 & $\frac{it}{2}\gamma_{\uparrow,1}^{2,1}\gamma_{\uparrow,2}^{1,1}$ & $f_{b,0}^{2,0}$ & $f_{b,2}^{1,0}$ & $f_{\downarrow}^{1,1}$ & $f_{\downarrow}^{2,1}$, $f_{c}^{2,1}$, $f_{b,1}^{2,0}$ & $I_L^{2,1} $ & 0 (+11) \\[5pt]\hline \end{tabular} \caption{\label{tab:Hubbard_order}Full scheme for an implementation of QPE on the Hubbard model (Eq.~\eqref{eq:Hubbard}), using the architecture in Fig.~\ref{fig:2darchitecture}. 
We specify a translatable layout for each Trotter step to be performed simultaneously, by specifying which sites should be used to store system fermions, control ancilla fermions, braiding ancilla fermions, and any additional fermions not used in this rotation (parking fermions). We further specify the island to be used for any joint parity readout. For each Trotter step we have costed the number of Majorana hoppings required to rearrange the system from its previous state. When these are written as a sum, the first term refers to restoring the configuration of Fig.~\ref{fig:2darchitecture} from the configuration required for the previous step, and the second to obtaining the configuration needed for the current step. Some steps require the same configuration as the previous step, and as such incur a $0$ rearrangement cost. The cost in brackets for the final step is the requirement to return the system to its shifted initial state (where up-spins and down-spins have been swapped). This may not be required, especially as the configuration for the final step and the initial steps are the same (modulo the swapping of the spins), and so repeated unitary evolution would not need this nor the rearrangement cost of the first step. This reduces the total rearrangement cost of the circuit to 85 Majorana hoppings.} \end{table*} \end{document}
\section{Introduction} A quantum operation that maps a finite-dimensional bipartite quantum input system $\mathcal{X}_1\otimes\mathcal{X}_2$ to another finite-dimensional bipartite quantum output system $\mathcal{A}_1\otimes\mathcal{A}_2$ is a \emph{local operation with shared entanglement (LOSE)} if it can be approximated to arbitrary precision by quantum operations $\Lambda$ for which there exists a fixed quantum state $\sigma$ and quantum operations $\Psi_1,\Psi_2$ such that the action of $\Lambda$ on each input state $\rho$ is given by \[ \Lambda\pa{\rho} = \Pa{ \Psi_1\otimes\Psi_2 }\pa{\rho\otimes\sigma}. \] Specifically, $\sigma$ is a state of some finite-dimensional bipartite quantum auxiliary system $\mathcal{E}_1\otimes\mathcal{E}_2$ and the quantum operations $\Psi_i$ map the system $\mathcal{X}_i\otimes\mathcal{E}_i$ to the system $\mathcal{A}_i$ for each $i=1,2$. This arrangement is illustrated in Figure \ref{fig:LO}. \begin{figure}[h] \hrulefill \vspace{2mm} \begin{center} \setlength{\unitlength}{2072sp}% \begin{picture}(4140,4617)(846,-4123) \thinlines \put(2881,-1051){\line( 1, 0){360}} \put(2881,-1142){\line( 1, 0){360}} \put(2881,-1231){\line( 1, 0){360}} \put(2881,-1322){\line( 1, 0){360}} \put(2881,-1411){\line( 1, 0){360}} \put(2881,-961){\line( 1, 0){360}} \put(2881,-871){\line( 1, 0){360}} \put(2881,-1501){\line( 1, 0){360}} \put(2881,-1591){\line( 1, 0){360}} \put(2881,-1681){\line( 1, 0){360}} \put(2881,-2401){\line( 1, 0){360}} \put(2881,-2492){\line( 1, 0){360}} \put(2881,-2581){\line( 1, 0){360}} \put(2881,-2672){\line( 1, 0){360}} \put(2881,-2761){\line( 1, 0){360}} \put(2881,-2311){\line( 1, 0){360}} \put(2881,-2221){\line( 1, 0){360}} \put(2881,-2851){\line( 1, 0){360}} \put(2881,-2941){\line( 1, 0){360}} \put(2881,-3031){\line( 1, 0){360}} \put(1341,-3391){\line( 1, 0){1900}} \put(1341,-3481){\line( 1, 0){1900}} \put(1341,-3571){\line( 1, 0){1900}} \put(1341,-3661){\line( 1, 0){1900}} \put(1341,-3751){\line( 1, 0){1900}} 
\put(1341,-3301){\line( 1, 0){1900}} \put(1341,-3211){\line( 1, 0){1900}} \put(4141,-2941){\line( 1, 0){540}} \put(4141,-3031){\line( 1, 0){540}} \put(4141,-3121){\line( 1, 0){540}} \put(4141,-3211){\line( 1, 0){540}} \put(4141,-3301){\line( 1, 0){540}} \put(4141,-2851){\line( 1, 0){540}} \put(4141,-2761){\line( 1, 0){540}} \put(4141,-2671){\line( 1, 0){540}} \put(4141,-2581){\line( 1, 0){540}} \put(4141,-3391){\line( 1, 0){540}} \put(4141,-871){\line( 1, 0){540}} \put(4141,-961){\line( 1, 0){540}} \put(4141,-1051){\line( 1, 0){540}} \put(4141,-1141){\line( 1, 0){540}} \put(4141,-1231){\line( 1, 0){540}} \put(4141,-781){\line( 1, 0){540}} \put(4141,-691){\line( 1, 0){540}} \put(4141,-601){\line( 1, 0){540}} \put(4141,-511){\line( 1, 0){540}} \put(4141,-1321){\line( 1, 0){540}} \put(1341,-331){\line( 1, 0){1900}} \put(1341,-421){\line( 1, 0){1900}} \put(1341,-511){\line( 1, 0){1900}} \put(1341,-601){\line( 1, 0){1900}} \put(1341,-691){\line( 1, 0){1900}} \put(1341,-241){\line( 1, 0){1900}} \put(1341,-151){\line( 1, 0){1900}} \put(3241,-3931){\framebox(900,1890){$\Psi_2$}} \put(1701,-4111){\dashbox{57}(2620,4320){}} \put(3241,-1861){\framebox(900,1890){$\Psi_1$}} \put(2501,-1446){\makebox(0,0)[lb]{$\mathcal{E}_1$}} \put(2501,-2746){\makebox(0,0)[lb]{$\mathcal{E}_2$}} \put(4771,-1086){\makebox(0,0)[lb]{}} \put(4771,-3156){\makebox(0,0)[lb]{}} \put(911,-586){\makebox(0,0)[lb]{$\mathcal{X}_1$}} \put(911,-3646){\makebox(0,0)[lb]{$\mathcal{X}_2$}} \put(4800,-1086){\makebox(0,0)[lb]{$\mathcal{A}_1$}} \put(4800,-3146){\makebox(0,0)[lb]{$\mathcal{A}_2$}} \put(2871,389){\makebox(0,0)[lb]{$\Lambda$}} \put(1911,-2060){$\sigma\left\{\rule[-11.25mm]{0mm}{0mm}\right.$} \end{picture}% \end{center} \caption{A two-party local quantum operation $\Lambda$ with shared entanglement or randomness. 
The local operations are represented by $\Psi_1,\Psi_2$; the shared entanglement or randomness by $\sigma$.} \label{fig:LO} \hrulefill \end{figure} Intuitively, the two operations $\Psi_1,\Psi_2$ represent the actions of two distinct parties who cannot communicate once they receive their portions of the input state $\rho$. The parties can, however, arrange ahead of time to share distinct portions of some distinguished auxiliary state $\sigma$. This state---and any entanglement contained therein---may be used by the parties to correlate their separate operations on $\rho$. This notion is easily generalized to a multi-party setting and the results presented herein extend to this generalized setting with minimal complication. LOSE operations are of particular interest in the quantum information community in part because any physical operation jointly implemented by spatially separated parties who obey both quantum mechanics and relativistic causality must necessarily be of this form. Moreover, it is notoriously difficult to say anything meaningful about this class of operations, despite its simple definition. One source of such difficulty stems from the fact that there exist two-party LOSE operations with the fascinating property that they cannot be implemented with any finite amount of shared entanglement \cite{LeungT+08}. (Hence the allowance for arbitrarily close approximations in the preceding definition of LOSE operations.) This difficulty has manifested itself quite prominently in the study of two-player co-operative games: classical games characterize NP,\footnote{ As noted in Ref.~\cite{KempeK+07}, this characterization follows from the PCP Theorem \cite{AroraL+98,AroraS98}. } whereas quantum games with shared entanglement are not even known to be computable. 
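Before looking at these special cases, the defining factorization $\Lambda\pa{\rho}=\Pa{\Psi_1\otimes\Psi_2}\pa{\rho\otimes\sigma}$ can be made concrete in its simplest instance: a shared classical coin rather than entanglement, with each party applying a unitary conditioned on the coin. The following numerical sketch is purely illustrative (the coin probabilities and unitaries are arbitrary choices, not taken from the text); it checks that the resulting map preserves trace and that one party's output is unaffected by the other party's input, as relativistic causality demands.

```python
import numpy as np

# Shared classical randomness: with probability p[j], party i applies U[i][j].
# (Illustrative choice of coin and unitaries -- not from the text.)
I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)  # bit flip
p = [0.5, 0.5]
U = [[I2, SX],   # party 1's conditional unitaries
     [I2, SX]]   # party 2's conditional unitaries

def channel(rho):
    """Lambda(rho) = sum_j p_j (U1j (x) U2j) rho (U1j (x) U2j)^*."""
    out = np.zeros_like(rho)
    for pj, U1, U2 in zip(p, U[0], U[1]):
        W = np.kron(U1, U2)
        out = out + pj * W @ rho @ W.conj().T
    return out

def marginal_1(rho):
    """Reduced state of party 1: trace out the second qubit."""
    return np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)
rho1 = ket0 @ ket0.conj().T

# Two different inputs for party 2:
out_a = channel(np.kron(rho1, ket0 @ ket0.conj().T))
out_b = channel(np.kron(rho1, ket1 @ ket1.conj().T))
```

Here `marginal_1(out_a)` and `marginal_1(out_b)` coincide (both equal $I/2$): party 1 learns nothing about party 2's input, even though the two outputs are correlated through the shared coin.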
Within the context of these games, interest has focussed largely on special cases of LOSE operations \cite{KobayashiM03,CleveH+04,Wehner06,CleveS+07,CleveGJ07,KempeR+07}, but progress has been made recently in the general case \cite{KempeK+07,KempeK+08,LeungT+08,DohertyL+08,NavascuesP+08}. In the physics literature, LOSE operations are often discussed in the context of \emph{no-signaling} operations \cite{BeckmanG+01,EggelingSW02,PianiH+06}. In the present paper some light is shed on the general class of multi-party LOSE operations, as well as a sub-class of operations derived therefrom as follows: if the quantum state $\sigma$ shared among the different parties of a LOSE operation $\Lambda$ is \emph{separable}, then it is easy to see that the parties could implement $\Lambda$ using only shared \emph{classical randomness}. Quantum operations in this sub-class are therefore called \emph{local operations with shared randomness (LOSR)}. Several distinct results are established in the present paper, many of which mirror some existing result pertaining to separable quantum states. What follows is a brief description of each result together with its analogue from the literature on separable states where appropriate. \subsubsection*{Ball around the identity} \begin{description} \item \emph{Prior work on separable quantum states.} If $A$ is a Hermitian operator acting on a $d$-dimensional bipartite space whose Frobenius norm is at most 1, then the perturbation $I\pm A$ of the identity represents an (unnormalized) bipartite separable quantum state. In other words, there is a ball of (normalized) bipartite separable states with radius $\frac{1}{d}$ in Frobenius norm centred at the completely mixed state $\frac{1}{d}I$ \cite{Gurvits02,GurvitsB02}. A similar ball exists in the multipartite case, but with smaller radius.
In particular, there is a ball of $m$-partite separable $d$-dimensional states with radius $\Omega\Pa{2^{-m/2}d^{-1}}$ in Frobenius norm centred at the completely mixed state \cite{GurvitsB03}. Subsequent results offer some improvements on this radius \cite{Szarek05,GurvitsB05,Hildebrand05}. \item \emph{Present work on local quantum operations.} Analogous results are proven in Sections \ref{sec:gen} and \ref{sec:balls} for multi-party LOSE and LOSR operations. Specifically, if $A$ is a Hermitian operator acting on an $n$-dimensional $m$-partite space whose Frobenius norm is $O\Pa{2^{-m}n^{-3/2}}$, then $I\pm A$ is the Choi-Jamio\l kowski representation of an (unnormalized) $m$-party LOSR operation. As the unnormalized completely noisy channel \[\Delta:X\mapsto\tr{X}I\] is the unique quantum operation whose Choi-Jamio\l kowski representation equals the identity, it follows that there is a ball of $m$-party LOSR operations (and hence also of LOSE operations) with radius $\Omega\Pa{2^{-m}n^{-3/2}d^{-1}}$ in Frobenius norm centred at the completely noisy channel $\frac{1}{d}\Delta$. (Here the normalization factor $d$ is the dimension of the output system.) The perturbation $A$ must lie in the space spanned by Choi-Jamio\l kowski representations of the no-signaling operations. Conceptual implications of this technicality are discussed in Remark \ref{rem:balls:noise} of Section \ref{sec:balls}. No-signaling operations are discussed in the present paper, as summarized below. \item \emph{Comparison of proof techniques.} Existence of this ball of LOSR operations is established via elementary linear algebra. By contrast, existence of the ball of separable states was originally established via a delicate combination of (i) the fundamental characterizations of separable states in terms of positive super-operators (described in more detail below), together with (ii) nontrivial norm inequalities for these super-operators.
Moreover, the techniques presented herein for LOSR operations are of sufficient generality to immediately imply a ball of separable states without the need for the aforementioned characterizations or their accompanying norm inequalities. This simplification comes in spite of the inherently more complicated nature of LOSR operations as compared to separable states. It should be noted, however, that the ball of separable states implied by the present work is smaller than the ball established in prior work by a factor of $2^{-m/2}d^{-3/2}$. \end{description} \subsubsection*{Weak membership problems are NP-hard} \begin{description} \item \emph{Prior work on separable quantum states.} The weak membership problem asks, \begin{quote} ``Given a description of a quantum state $\rho$ and an accuracy parameter $\varepsilon$, is $\rho$ within distance $\varepsilon$ of a separable state?'' \end{quote} This problem was proven strongly NP-complete under oracle (Cook) reductions by Gharibian \cite{Gharibian08}, who built upon the work of Gurvits \cite{Gurvits02} and Liu \cite{Liu07}. In this context, ``strongly NP-complete'' means that the problem remains NP-complete even when the accuracy parameter $\varepsilon=1/s$ is given in unary as $1^s$. NP-completeness of the weak membership problem was originally established by Gurvits \cite{Gurvits02}. The proof consists of an NP-completeness result for the weak \emph{validity} problem---a decision version of linear optimization over separable states---followed by an application of the Yudin-Nemirovski\u\i{} Theorem~\cite{YudinN76, GrotschelL+88}, which provides an oracle-polynomial-time reduction from weak validity to weak membership for general convex sets. As a precondition of the Theorem, the convex set in question---in this case, the set of separable quantum states---must contain a reasonably-sized ball.
In particular, this NP-completeness result relies crucially upon the existence of the aforementioned ball of separable quantum states. For \emph{strong} NP-completeness, it is necessary to employ a specialized ``approximate'' version of the Yudin-Nemirovski\u\i{} Theorem due to Liu~\cite[Theorem 2.3]{Liu07}. \item \emph{Present work on local quantum operations.} In Section \ref{sec:hard} it is proved that the weak membership problems for LOSE and LOSR operations are both strongly NP-hard under oracle reductions. The result for LOSR operations follows trivially from Gharibian (just take the input spaces to be empty), but it is unclear how to obtain the result for LOSE operations without invoking the contributions of the present paper. The proof begins by observing that the weak validity problem for LOSE operations is merely a two-player quantum game in disguise and hence is strongly NP-hard \cite{KempeK+07}. The hardness result for the weak \emph{membership} problem is then obtained via a Gurvits-Gharibian-style application of Liu's version of the Yudin-Nemirovski\u\i{} Theorem, which of course depends upon the existence of the ball revealed in Section \ref{sec:balls}. \end{description} \subsubsection*{Characterization in terms of positive super-operators} \begin{description} \item \emph{Prior work on separable quantum states.} A quantum state $\rho$ of a bipartite system $\mathcal{X}_1\otimes\mathcal{X}_2$ is separable if and only if the operator \[ \Pa{\Phi\otimes\mathbbm{1}_{\mathcal{X}_2}}\pa{\rho} \] is positive semidefinite whenever the super-operator $\Phi$ is positive. This fundamental fact was first proven in 1996 by Horodecki \emph{et al.}~\cite{HorodeckiH+96}. 
The multipartite case reduces inductively to the bipartite case: the state $\rho$ of an $m$-partite system is separable if and only if \( \Pa{\Phi\otimes\mathbbm{1}}\pa{\rho} \) is positive semidefinite whenever the super-operator $\Phi$ is positive on $(m-1)$-partite separable operators \cite{HorodeckiH+01}. \item \emph{Present work on local quantum operations.} In Section \ref{sec:char} it is proved that a multi-party quantum operation $\Lambda$ is a LOSE operation if and only if \[ \varphi\pa{\jam{\Lambda}}\geq 0 \] whenever the linear functional $\varphi$ is ``completely'' positive on a certain cone of separable Hermitian operators, under a natural notion of complete positivity appropriate to that cone. A characterization of LOSR operations is obtained by replacing \emph{complete} positivity of $\varphi$ with mere positivity on that same cone. Here $\jam{\Lambda}$ denotes the Choi-Jamio\l kowski representation of the super-operator $\Lambda$. The characterizations presented in Section \ref{sec:char} do not rely upon discussion in previous sections. This independence contrasts favourably with prior work on separable quantum states, wherein the existence of the ball around the completely mixed state (and the subsequent NP-hardness result) relied crucially upon the characterization of separable states in terms of positive super-operators. \end{description} \subsubsection*{No-signaling operations} A quantum operation is \emph{no-signaling} if it cannot be used by spatially separated parties to violate relativistic causality. By definition, every LOSE operation is a no-signaling operation. Moreover, there exist no-signaling operations that are not LOSE operations. Indeed, it is noted in Section \ref{sec:no-sig} that the standard nonlocal box of Popescu and Rohrlich \cite{PopescuR94} is an example of a no-signaling operation that is separable (in the sense of Rains \cite{Rains97}), yet it is not a LOSE operation. 
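The nonlocal box just mentioned admits a completely elementary description as a conditional probability distribution, and its two defining features (no-signaling, yet supra-quantum correlation) can be checked by direct enumeration. The sketch below is an illustration of the standard presentation of the box, not a construction from this paper.

```python
from itertools import product

def pr(a, b, x, y):
    """Popescu-Rohrlich box: uniform over outputs with a XOR b = x AND y."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

# No-signaling: party 1's marginal P(a|x,y) does not depend on y
# (and symmetrically for party 2, by the box's symmetry).
no_signaling = all(
    sum(pr(a, b, x, 0) for b in (0, 1)) == sum(pr(a, b, x, 1) for b in (0, 1))
    for a, x in product((0, 1), repeat=2)
)

def corr(x, y):
    """Correlator E(x,y) = sum_{a,b} (-1)^(a XOR b) P(a,b|x,y)."""
    return sum((-1) ** (a ^ b) * pr(a, b, x, y)
               for a, b in product((0, 1), repeat=2))

chsh = corr(0, 0) + corr(0, 1) + corr(1, 0) - corr(1, 1)
```

Direct computation gives `no_signaling == True` and `chsh == 4.0`, above the Tsirelson bound $2\sqrt{2}$ attainable by quantum strategies; this is one elementary way to see that the box, though no-signaling, is not a LOSE operation.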
Two characterizations of no-signaling operations are also discussed in Section \ref{sec:no-sig}. These characterizations were first established somewhat implicitly for the bipartite case in Beckman \emph{et al.}~\cite{BeckmanG+01}. The present paper generalizes these characterizations to the multi-party setting and recasts them in terms of the Choi-Jamio\l kowski representation for quantum super-operators. \subsubsection*{Open problems} Several interesting open problems are brought to light by the present work. These problems are discussed in Section \ref{sec:conclusion}. \section{General Results on Separable Operators} \label{sec:gen} It is not difficult to see that a quantum operation is a LOSR operation if and only if it can be written as a convex combination of product quantum operations of the form $\Phi_1\otimes\Phi_2$. (This observation is proven in Proposition \ref{prop:LOSR-convex} of Section \ref{sec:balls:defs}.) In order to be quantum operations, the super-operators $\Phi_1,\Phi_2$ must be completely positive and trace-preserving. In terms of Choi-Jamio\l kowski representations, complete positivity holds if and only if the operators $\jam{\Phi_1}$, $\jam{\Phi_2}$ are positive semidefinite. The trace-preserving condition is characterized by simple linear constraints on $\jam{\Phi_1}$, $\jam{\Phi_2}$, the details of which are deferred until Section \ref{sec:balls}. It suffices for now to observe that these constraints induce subspaces $\mathbf{Q}_1$, $\mathbf{Q}_2$ of Hermitian operators within which $\jam{\Phi_1}$, $\jam{\Phi_2}$ must lie. In other words, the quantum operation $\Lambda$ is a LOSR operation if and only if $\jam{\Lambda}$ lies in the set \[ \convex \Set{ X\otimes Y : X\in\mathbf{Q}_1, Y\in\mathbf{Q}_2, \textrm{ and $X,Y$ are positive semidefinite} }. 
\] As the unnormalized completely noisy channel $\Delta$ has $\jam{\Delta}=I$, a ball of LOSR operations around this channel may be established by showing that every operator in the product space $\mathbf{Q}_1\otimes\mathbf{Q}_2$ and close enough to the identity lies in the above set. Such a fact is established in the present section. Indeed, this fact is shown to hold not only for the specific choice $\mathbf{Q}_1,\mathbf{Q}_2$ of subspaces, but for \emph{every} choice $\mathbf{S}_1,\mathbf{S}_2$ of subspaces that contain the identity. Choosing $\mathbf{S}_1$ and $\mathbf{S}_2$ to be full spaces of Hermitian operators yields an alternate (and simpler) proof of the existence of a ball of separable quantum states surrounding the completely mixed state and, consequently, of the NP-completeness of separability testing. However, the ball of separable states implied by the present work is not as large as that exhibited by Gurvits and Barnum \cite{GurvitsB02, GurvitsB03}. Due to the general nature of this result, discussion in this section is abstract---applications to quantum information are deferred until Section \ref{sec:balls}. The results presented herein were inspired by Chapter 2 of Bhatia~\cite{Bhatia07}. \subsection{Linear Algebra: Review and Notation} For an arbitrary operator $X$ the standard operator norm of $X$ is denoted $\norm{X}$ and the standard conjugate-transpose (or \emph{Hermitian conjugate}) is denoted $X^*$. For operators $X$ and $Y$, the standard Hilbert-Schmidt inner product is given by \[\inner{X}{Y}\stackrel{\smash{\text{\tiny def}}}{=}\ptr{}{X^*Y}.\] The standard Euclidean 2-norm for vectors is also a norm on operators. It is known in this context as the \emph{Frobenius norm} and is given by \[ \fnorm{X}\stackrel{\smash{\text{\tiny def}}}{=}\sqrt{\inner{X}{X}}. \] Discussion in the present paper often involves finite sequences of operators, spaces, and so on. 
As such, it is convenient to adopt a shorthand notation for the Kronecker product of such a sequence. For example, if $\mathbf{S}_1,\dots,\mathbf{S}_m$ are spaces of operators then \[ \kprod{\mathbf{S}}{i}{j} \stackrel{\smash{\text{\tiny def}}}{=} \mathbf{S}_i \otimes \cdots \otimes \mathbf{S}_j \] denotes their Kronecker product for integers $1\leq i\leq j\leq m$. A similar notation applies to operators. For any space $\mathbf{S}$ of operators, $\mathbf{S}^+$ denotes the intersection of $\mathbf{S}$ with the positive semidefinite cone. An element $X$ of the product space $\kprod{\mathbf{S}}{1}{m}$ is said to be \emph{$(\mathbf{S}_1;\dots;\mathbf{S}_m)$-separable} if $X$ can be written as a convex combination of product operators of the form $P_1\otimes\cdots\otimes P_m$ where each $P_i\in\mathbf{S}_i^+$. The set of $(\mathbf{S}_1;\dots;\mathbf{S}_m)$-separable operators forms a cone inside $\Pa{\kprod{\mathbf{S}}{1}{m}}^+$. By convention, the use of bold font is reserved for sets of operators. \subsection{Hermitian Subspaces Generated by Separable Cones} \label{sec:gen:sep} Let $\mathbf{S}$ be any subspace of Hermitian operators that contains the identity. The cone $\mathbf{S}^+$ always \emph{generates} $\mathbf{S}$, meaning that each element of $\mathbf{S}$ may be written as a difference of two elements of $\mathbf{S}^+$. As proof, choose any $X\in\mathbf{S}$ and let \[ X^\pm = \frac{ \norm{X} I \pm X }{2}. \] It is clear that $X=X^+ - X^-$ and that $X^\pm\in\mathbf{S}^+$ (using the fact that $I\in\mathbf{S}$). Moreover, it holds that $\norm{X^\pm}\leq\norm{X}$ for this particular choice of $X^\pm$. In light of this observation, one might wonder whether it could be extended to separable operators in product spaces. In particular, do the $(\mathbf{S}_1;\dots;\mathbf{S}_m)$-separable operators generate the product space $\kprod{\mathbf{S}}{1}{m}$?
If so, can elements of $\kprod{\mathbf{S}}{1}{m}$ be generated by $(\mathbf{S}_1;\dots;\mathbf{S}_m)$-separable operators with bounded norm? The following theorem answers these two questions in the affirmative. \begin{theorem} \label{thm:sep-bound} Let $\mathbf{S}_1,\dots,\mathbf{S}_m$ be subspaces of Hermitian operators---all of which contain the identity---and let $n=\dim\pa{\kprod{\mathbf{S}}{1}{m}}$. Then every element $X\in\kprod{\mathbf{S}}{1}{m}$ may be written $X = X^+ - X^-$ where $X^\pm$ are $(\mathbf{S}_1;\dots;\mathbf{S}_m)$-separable with $\norm{X^\pm}\leq 2^{m-1}\sqrt{n}\fnorm{X}$. \end{theorem} \begin{proof} First, it is proven that there is an orthonormal basis $\mathbf{B}$ of $\kprod{\mathbf{S}}{1}{m}$ with the property that every element $E\in \mathbf{B}$ may be written $E = E^+ - E^-$ where $E^\pm$ are $(\mathbf{S}_1;\dots;\mathbf{S}_m)$-separable with $\norm{E^\pm}\leq 2^{m-1}$. The proof is by straightforward induction on $m$. The base case $m=1$ follows immediately from the earlier observation that every element $X\in\mathbf{S}_1$ is generated by some $X^\pm\in\mathbf{S}_1^+$ with $\norm{X^\pm}\leq \norm{X}$. In particular, any element $E=E^+-E^-$ of any orthonormal basis of $\mathbf{S}_1$ has $\norm{E^\pm}\leq\norm{E}\leq\fnorm{E}=1=2^0$. In the general case, the induction hypothesis states that there is an orthonormal basis $\mathbf{B}'$ of $\kprod{\mathbf{S}}{1}{m}$ with the desired property. Let $\mathbf{B}_{m+1}$ be any orthonormal basis of $\mathbf{S}_{m+1}$. As in the base case, each $F\in \mathbf{B}_{m+1}$ is generated by some $F^\pm\in\mathbf{S}_{m+1}^+$ with $\norm{F^\pm}\leq \norm{F} \leq 1$. Define the orthonormal basis $\mathbf{B}$ of $\kprod{\mathbf{S}}{1}{m+1}$ to consist of all product operators of the form $E\otimes F$ for $E\in \mathbf{B}'$ and $F\in \mathbf{B}_{m+1}$. 
Define \begin{align*} K^+ &\stackrel{\smash{\text{\tiny def}}}{=} \cBr{E^+\otimes F^+} + \cBr{E^-\otimes F^-},\\ K^- &\stackrel{\smash{\text{\tiny def}}}{=} \cBr{E^+\otimes F^-} + \cBr{E^-\otimes F^+}. \end{align*} It is clear that $E\otimes F=K^+-K^-$ and $K^\pm$ are $(\mathbf{S}_1;\dots;\mathbf{S}_{m+1})$-separable. Moreover, \[ \Norm{K^+} \leq \Norm{E^+\otimes F^+} + \Norm{E^-\otimes F^-} \leq \cBr{2^{m-1}\times 1} + \cBr{2^{m-1}\times 1} = 2^m. \] A similar computation yields $\Norm{K^-}\leq 2^m$, which establishes the induction. Now, let $X\in\kprod{\mathbf{S}}{1}{m}$ and let $x_j\in\mathbb{R}$ be the unique coefficients of $X$ in the aforementioned orthonormal basis $\mathbf{B}=\set{E_1,\dots,E_n}$. Define \begin{align*} X^+ &\stackrel{\smash{\text{\tiny def}}}{=} \sum_{j\::\:x_j>0} x_j E_j^+ - \sum_{j\::\:x_j<0} x_j E_j^-,\\ X^- &\stackrel{\smash{\text{\tiny def}}}{=} \sum_{j\::\:x_j>0} x_j E_j^- - \sum_{j\::\:x_j<0} x_j E_j^+. \end{align*} It is clear that $X=X^+-X^-$ and $X^\pm$ are $(\mathbf{S}_1;\dots;\mathbf{S}_m)$-separable. Employing the triangle inequality and the vector norm inequality $\norm{x}_1\leq\sqrt{n}\norm{x}_2$, it follows that \[ \Norm{X^+} \leq 2^{m-1} \sum_{j=1}^n \cAbs{x_j} \leq 2^{m-1} \sqrt{n} \sqrt{ \sum_{j=1}^n \cAbs{x_j}^2 } = 2^{m-1} \sqrt{n} \Fnorm{X}. \] A similar computation yields $\Norm{X^-}\leq 2^{m-1}\sqrt{n}\fnorm{X}$, which completes the proof. \end{proof} \subsection{Separable Operators as a Subtraction from the Identity} At the beginning of Section \ref{sec:gen:sep} it was observed that $\norm{X}I-X$ lies in $\mathbf{S}^+$ for all $X\in\mathbf{S}$. One might wonder whether more could be expected of $\norm{X}I-X$ than mere positive semidefiniteness. For example, under what conditions is $\norm{X}I-X$ a $\pa{\mathbf{S}_1;\dots;\mathbf{S}_m}$-separable operator? The following theorem provides three such conditions. Moreover, the central claim of this section is established by this theorem. 
\begin{theorem} \label{thm:identity-sep} Let $\mathbf{S}_1,\dots,\mathbf{S}_m$ be subspaces of Hermitian operators---all of which contain the identity---and let $n=\dim\pa{\kprod{\mathbf{S}}{1}{m}}$. The following hold: \begin{enumerate} \item \label{item:identity-sep:1} \( \Norm{P}I - P \) is $\pa{\mathbf{S}_1;\dots;\mathbf{S}_m}$-separable whenever $P$ is a product operator of the form $P=P_1\otimes\cdots\otimes P_m$ where each $P_i\in\mathbf{S}_i^+$. \item \label{item:identity-sep:2} \( \cBr{n+1}\Norm{Q}I - Q \) is $\pa{\mathbf{S}_1;\dots;\mathbf{S}_m}$-separable whenever $Q$ is $\pa{\mathbf{S}_1;\dots;\mathbf{S}_m}$-separable. \item \label{item:identity-sep:3} \( 2^{m-1}\sqrt{n}\cBr{n+1}\Fnorm{X} I - X \) is $\pa{\mathbf{S}_1;\dots;\mathbf{S}_m}$-separable for every $X\in\kprod{\mathbf{S}}{1}{m}$. \end{enumerate} \end{theorem} \begin{proof} The proof of item \ref{item:identity-sep:1} is an easy but notationally cumbersome induction on $m$. The base case $m=1$ was noted at the beginning of Section \ref{sec:gen:sep}. For the general case, it is convenient to let $\kprod{I}{1}{m}$, $I_{m+1}$, and $\kprod{I}{1}{m+1}$ denote the identity elements of $\kprod{\mathbf{S}}{1}{m}$, $\mathbf{S}_{m+1}$, and $\kprod{\mathbf{S}}{1}{m+1}$, respectively. The induction hypothesis states that the operator \( S \stackrel{\smash{\text{\tiny def}}}{=} \norm{\kprod{P}{1}{m}} \kprod{I}{1}{m} - \kprod{P}{1}{m} \) is $\pa{\mathbf{S}_1;\dots;\mathbf{S}_m}$-separable. Just as in the base case, we know \( S' \stackrel{\smash{\text{\tiny def}}}{=} \norm{P_{m+1}}I_{m+1}-P_{m+1} \) lies in $\mathbf{S}_{m+1}^+$. Isolating the identity elements, these expressions may be rewritten \begin{align*} \kprod{I}{1}{m} &= \frac{1}{ \Norm{\kprod{P}{1}{m}} } \cBr{\kprod{P}{1}{m} + S} \\ I_{m+1} &= \frac{1}{ \Norm{P_{m+1}} } \cBr{ P_{m+1} + S' }. 
\end{align*} Taking the Kronecker product of these two equalities and rearranging the terms yields \[ \Norm{\kprod{P}{1}{m+1}} \kprod{I}{1}{m+1} - \kprod{P}{1}{m+1} = \cBr{ \kprod{P}{1}{m} \otimes S' } + \cBr{ S \otimes P_{m+1} } + \cBr{ S \otimes S' }. \] The right side of this expression is clearly a $\pa{\mathbf{S}_1;\dots;\mathbf{S}_{m+1}}$-separable operator; the proof by induction is complete. Item \ref{item:identity-sep:2} is proved as follows. By Carath\'eodory's Theorem, every $(\mathbf{S}_1;\dots;\mathbf{S}_m)$-separable operator $Q$ may be written as a sum of no more than $n+1$ product operators. In particular, \[ Q = \sum_{j=1}^{n+1} P_{1,j}\otimes\cdots\otimes P_{m,j} \] where each $P_{i,j}$ is an element of $\mathbf{S}_i^+$. As each term in this sum is positive semidefinite, it holds that \( \norm{Q} \geq \norm{P_{1,j}\otimes\cdots\otimes P_{m,j}} \) for each $j$. Item \ref{item:identity-sep:1} implies that the sum \[ \sum_{j=1}^{n+1} \Norm{P_{1,j}\otimes\cdots\otimes P_{m,j}}I - P_{1,j}\otimes\cdots\otimes P_{m,j} \] is also $(\mathbf{S}_1;\dots;\mathbf{S}_m)$-separable. Naturally, each of the identity terms $\Norm{P_{1,j}\otimes\cdots\otimes P_{m,j}}I$ in this sum may be replaced by $\norm{Q}I$ without compromising $(\mathbf{S}_1;\dots;\mathbf{S}_m)$-separability, from which it follows that \( \br{n+1}\norm{Q}I-Q \) is also $(\mathbf{S}_1;\dots;\mathbf{S}_m)$-separable. To prove item \ref{item:identity-sep:3}, apply Theorem \ref{thm:sep-bound} to obtain $X=X^+-X^-$ where $X^\pm$ are $(\mathbf{S}_1;\dots;\mathbf{S}_m)$-separable with \( \norm{X^\pm}\leq 2^{m-1}\sqrt{n}\fnorm{X}. \) By item \ref{item:identity-sep:2}, it holds that \( \br{n+1}\norm{X^+} I - X^+ \) is $\pa{\mathbf{S}_1;\dots;\mathbf{S}_m}$-separable, implying that \[ 2^{m-1}\sqrt{n}\cBr{n+1}\Fnorm{X} I - X^+ \] is also $\pa{\mathbf{S}_1;\dots;\mathbf{S}_m}$-separable. 
To complete the proof, it suffices to note that adding $X^-$ to this operator yields another $\pa{\mathbf{S}_1;\dots;\mathbf{S}_m}$-separable operator. \end{proof} \section{Ball Around the Completely Noisy Channel} \label{sec:balls} All the technical results required to establish a ball of LOSR operations around the completely noisy channel were proven in Section \ref{sec:gen}. This section introduces proper formalization of the relevant notions of quantum information so that the existence of the ball of LOSR operations may be stated and proven rigorously. \subsection{Linear Algebra: More Review and More Notation} Finite-dimensional complex Euclidean spaces $\mathbb{C}^n$ are denoted by capital script letters such as $\mathcal{X}$, $\mathcal{Y}$, and $\mathcal{Z}$. The (complex) space of linear operators $A:\mathcal{X}\to\mathcal{X}$ is denoted $\lin{\mathcal{X}}$ and $\her{\mathcal{X}}\subset\lin{\mathcal{X}}$ denotes the (real) subspace of Hermitian operators within $\lin{\mathcal{X}}$. In keeping with the notational convention of Section \ref{sec:gen}, $\pos{\mathcal{X}}\subset\her{\mathcal{X}}$ denotes the cone of positive semidefinite operators within $\her{\mathcal{X}}$. The identity operator in $\lin{\mathcal{X}}$ is denoted $I_\mathcal{X}$, and the subscript is dropped whenever the space $\mathcal{X}$ is clear from the context. A \emph{super-operator} is a linear operator of the form $\Phi : \lin{\mathcal{X}}\to\lin{\mathcal{A}}$. The identity super-operator from $\lin{\mathcal{X}}$ to itself is denoted $\idsup{\mathcal{X}}$---again, the subscript is dropped at will. A super-operator is said to be \begin{itemize} \item \emph{positive on $\mathbf{K}$} if $\Phi\pa{X}$ is positive semidefinite whenever $X\in\mathbf{K}$. \item \emph{positive} if $\Phi$ is positive on $\pos{\mathcal{X}}$. \item \emph{completely positive} if $\Pa{\Phi \otimes \idsup{\mathcal{Z}}}$ is positive for every choice of complex Euclidean space $\mathcal{Z}$. 
\item \emph{trace-preserving} if $\tr{\Phi\pa{X}}=\tr{X}$ for all $X$. \end{itemize} The \emph{Choi-Jamio\l kowski representation} of a super-operator $\Phi:\lin{\mathcal{X}}\to\lin{\mathcal{A}}$ is defined by \[ \jam{\Phi} \stackrel{\smash{\text{\tiny def}}}{=} \sum_{i,j=1}^{\dim\pa{\mathcal{X}}} \Phi\pa{e_ie_j^*} \otimes e_ie_j^* \] where $\set{e_1,\dots,e_{\dim\pa{\mathcal{X}}}}$ denotes the standard basis of $\mathcal{X}$. The operator $\jam{\Phi}$ is an element of $\lin{\mathcal{A}\otimes\mathcal{X}}$. Indeed, the mapping $J$ is an isomorphism between super-operators and $\lin{\mathcal{A}\otimes\mathcal{X}}$. The Choi-Jamio\l kowski representation has several convenient properties. For example, $\Phi$ is completely positive if and only if $\jam{\Phi}$ is positive semidefinite and $\Phi$ is trace-preserving if and only if it meets the partial trace condition \[ \ptr{\mathcal{A}}{\jam{\Phi}}=I_\mathcal{X}. \] An additional property of the Choi-Jamio\l kowski representation is given in Proposition \ref{prop:identities} of Section \ref{sec:char:LOSE}. The Kronecker product shorthand notation for operators $\kprod{X}{i}{j}$ and spaces $\kprod{\mathbf{S}}{i}{j}$ of Section \ref{sec:gen} is extended in the obvious way to complex Euclidean spaces $\kprod{\mathcal{X}}{i}{j}$ and super-operators $\kprod{\Phi}{i}{j}$. \subsection{Formal Definitions of LOSE and LOSR Operations} \label{sec:balls:defs} Associated with each $d$-level physical system is a $d$-dimensional complex Euclidean space $\mathcal{X}=\mathbb{C}^d$. The \emph{quantum state} of such a system at some fixed point in time is described uniquely by a positive semidefinite operator $\rho\in\pos{\mathcal{X}}$ with $\ptr{}{\rho}=1$. A \emph{quantum operation} is a physically realizable discrete-time mapping (at least in an ideal sense) that takes as input a quantum state of some system $\mathcal{X}$ and produces as output a quantum state of some system $\mathcal{A}$. 
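The Choi-Jamio\l kowski representation defined above is straightforward to compute numerically. The sketch below (with illustrative helper names) builds $J\pa{\Phi}$ directly from the definition and confirms two of the stated properties on small examples: the unnormalized completely noisy channel $\Delta:X\mapsto\tr{X}I$ has $J\pa{\Delta}=I$, and a trace-preserving channel satisfies the partial trace condition $\ptr{\mathcal{A}}{\jam{\Phi}}=I_\mathcal{X}$.

```python
import numpy as np

def choi(phi, d):
    """J(phi) = sum_{i,j} phi(e_i e_j^*) (x) e_i e_j^*  (output factor first)."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex)
            E[i, j] = 1.0
            J += np.kron(phi(E), E)
    return J

def trace_out_A(J, d):
    """Partial trace of J over the output factor A of A (x) X."""
    return np.einsum('aiaj->ij', J.reshape(d, d, d, d))

def delta(X):
    """Unnormalized completely noisy channel Delta: X -> Tr(X) I."""
    return np.trace(X) * np.eye(2)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
def hadamard(X):          # a unitary (hence trace-preserving) channel
    return H @ X @ H.conj().T

J_delta = choi(delta, 2)  # equals the 4x4 identity
J_had = choi(hadamard, 2)
```

Checking `np.allclose(J_delta, np.eye(4))` and `np.allclose(trace_out_A(J_had, 2), np.eye(2))` confirms both properties; the same partial trace test fails for a map that is not trace-preserving.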
Every quantum operation is represented uniquely by a completely positive and trace-preserving super-operator $\Phi:\lin{\mathcal{X}}\to\lin{\mathcal{A}}$. A quantum operation $\Lambda : \lin{\kprod{\mathcal{X}}{1}{m}} \to \lin{\kprod{\mathcal{A}}{1}{m}}$ is an \emph{$m$-party LOSE operation with finite entanglement} if there exist complex Euclidean spaces $\mathcal{E}_1,\dots,\mathcal{E}_m$, a quantum state $\sigma\in\pos{\kprod{\mathcal{E}}{1}{m}}$, and quantum operations $\Psi_i : \lin{\mathcal{X}_i\otimes\mathcal{E}_i}\to\lin{\mathcal{A}_i}$ for each $i=1,\dots,m$ such that \[ \Lambda\pa{X} = \Pa{\kprod{\Psi}{1}{m}} \pa{X\otimes\sigma} \] for all $X\in\lin{\kprod{\mathcal{X}}{1}{m}}$. The operation $\Lambda$ is a \emph{finitely approximable $m$-party LOSE operation} if it lies in the closure of the set of $m$-party LOSE operations with finite entanglement. As mentioned in the introduction, the need to distinguish between LOSE operations with finite entanglement and finitely approximable LOSE operations arises from the fact that there exist LOSE operations---such as the example appearing in Ref.~\cite{LeungT+08}---that cannot be implemented with any finite amount of shared entanglement, yet can be approximated to arbitrary precision by LOSE operations with finite entanglement. An analytic consequence of this fact is that the set of LOSE operations with finite entanglement is not a closed set. Fortunately, the work of the present paper is unhindered by a more encompassing notion of LOSE operations that includes the closure of that set. As such, the term ``LOSE operation'' is used to refer to any finitely approximable LOSE operation; the restriction to finite entanglement is made explicit whenever it is required. An \emph{$m$-party LOSR operation} is just a LOSE operation with finite entanglement in which the shared state $\sigma\in\pos{\kprod{\mathcal{E}}{1}{m}}$ is $m$-partite separable.
(That is, the operator $\sigma$ is $\Pa{\her{\mathcal{E}_1};\dots;\her{\mathcal{E}_m}}$-separable.) An equivalent definition for LOSR operations is established by the following elementary proposition. \begin{prop} \label{prop:LOSR-convex} A quantum operation $\Lambda : \lin{\kprod{\mathcal{X}}{1}{m}}\to\lin{\kprod{\mathcal{A}}{1}{m}}$ is an $m$-party LOSR operation if and only if $\Lambda$ can be written as a convex combination of product super-operators of the form $\kprod{\Phi}{1}{m}$ where each $\Phi_i:\lin{\mathcal{X}_i}\to\lin{\mathcal{A}_i}$ is a quantum operation for $i=1,\dots,m$. \end{prop} \begin{proof} Let $\sigma$ be a $\Pa{\her{\mathcal{E}_1};\dots;\her{\mathcal{E}_m}}$-separable state and let $\Psi_i:\lin{\mathcal{X}_i\otimes\mathcal{E}_i}\to\lin{\mathcal{A}_i}$ be quantum operations such that $\Lambda\pa{X}=\Pa{\kprod{\Psi}{1}{m}}\pa{X\otimes\sigma}$ for all $X$. Let \[ \sigma=\sum_{j=1}^n p_j \cBr{\sigma_{1,j}\otimes\cdots\otimes\sigma_{m,j}} \] be a decomposition of $\sigma$ into a convex combination of product states, where each $\sigma_{i,j}\in\pos{\mathcal{E}_i}$. For each $i$ and $j$ define a quantum operation \( \Phi_{i,j} : \lin{\mathcal{X}_i}\to\lin{\mathcal{A}_i} : X\mapsto\Psi_i\pa{X\otimes\sigma_{i,j}} \) and observe that \[ \Lambda = \sum_{j=1}^n p_j \cBr{\Phi_{1,j}\otimes\cdots\otimes\Phi_{m,j}}. \] Conversely, suppose that $\Lambda$ may be decomposed into a convex combination of product quantum operations as above. Let $\mathcal{E}_1,\dots,\mathcal{E}_m$ be complex Euclidean spaces of dimension $n$ and let $\set{e_{i,1},\dots,e_{i,n}}$ be an orthonormal basis of $\mathcal{E}_i$ for each $i$. 
Let \[ \sigma=\sum_{j=1}^n p_j \cBr{ e_{1,j}e_{1,j}^* \otimes\cdots\otimes e_{m,j}e_{m,j}^*} \] be a $\Pa{\her{\mathcal{E}_1};\dots;\her{\mathcal{E}_m}}$-separable state, let \[ \Psi_i:\lin{\mathcal{X}_i\otimes\mathcal{E}_i}\to\lin{\mathcal{A}_i} : X\otimes E\mapsto\sum_{j=1}^n e_{i,j}^*Ee_{i,j} \cdot \Phi_{i,j}\pa{X} \] be quantum operations, and observe that \( \Lambda\pa{X} = \Pa{\kprod{\Psi}{1}{m}}\pa{X\otimes\sigma} \) for all $X$. \end{proof} By analogy with finitely approximable LOSE operations, one may consider finitely approximable LOSR operations. However, an easy application of Carath\'eodory's Theorem implies that any LOSR operation can be implemented with finite randomness in such a way that each of the spaces $\mathcal{E}_1,\dots,\mathcal{E}_m$ has dimension bounded (loosely) above by $\dim\Pa{\her{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m}}}$. As suggested by this contrast, the sets of LOSE and LOSR operations are not equal. Indeed, while every LOSR operation is clearly a LOSE operation, much has been made of the fact that there exist LOSE operations that are not LOSR operations---this is quantum nonlocality. \subsection{Ball of LOSR Operations Around the Completely Noisy Channel} In the preliminary discussion of Section \ref{sec:gen} it was informally argued that there exist subspaces $\mathbf{Q}_1,\dots,\mathbf{Q}_m$ of Hermitian operators with the property that a quantum operation $\Lambda$ is a LOSR operation if and only if $\jam{\Lambda}$ is $\Pa{\mathbf{Q}_1;\dots;\mathbf{Q}_m}$-separable. It was then established via Theorem \ref{thm:identity-sep} that any operator in the product space $\kprod{\mathbf{Q}}{1}{m}$ and close enough to the identity is necessarily a $\Pa{\mathbf{Q}_1;\dots;\mathbf{Q}_m}$-separable operator, implying the existence of a ball of LOSR operations around the completely noisy channel. 
This last implication is addressed in Remark \ref{rem:ball:noisy} below---for now, it remains only to identify the subspaces $\mathbf{Q}_1,\dots,\mathbf{Q}_m$. Toward that end, recall from Proposition \ref{prop:LOSR-convex} that a quantum operation $\Lambda : \lin{\kprod{\mathcal{X}}{1}{m}} \to \lin{\kprod{\mathcal{A}}{1}{m}}$ is a LOSR operation if and only if it can be written as a convex combination of product super-operators of the form $\kprod{\Phi}{1}{m}$. Here each $\Phi_i:\lin{\mathcal{X}_i}\to\lin{\mathcal{A}_i}$ is a quantum operation, so that $\jam{\Phi_i}$ is positive semidefinite and satisfies $\ptr{\mathcal{A}_i}{\jam{\Phi_i}}=I_{\mathcal{X}_i}$. The set of all operators $X$ obeying the inhomogeneous linear condition $\ptr{\mathcal{A}_i}{X}=I_{\mathcal{X}_i}$ is not a vector space. But this set is easily extended to a unique smallest vector space by including its closure under multiplication by real scalars, and it shall soon become apparent that doing so poses no additional difficulty. With this idea in mind, let \[ \mathbf{Q}_i \stackrel{\smash{\text{\tiny def}}}{=} \Set{ X\in\her{\mathcal{A}_i\otimes\mathcal{X}_i} : \ptr{\mathcal{A}_i}{X}=\lambda I_{\mathcal{X}_i} \textrm{ for some } \lambda\in\mathbb{R} } \] denote the subspace of operators $X=\jam{\Phi}$ for which $\Phi:\lin{\mathcal{X}_i}\to\lin{\mathcal{A}_i}$ is a trace-preserving super-operator, or a scalar multiple thereof. Clearly, $\jam{\Lambda}$ is $\Pa{\mathbf{Q}_1;\dots;\mathbf{Q}_m}$-separable whenever $\Lambda$ is a LOSR operation. Conversely, it is a simple exercise to verify that any quantum operation $\Lambda$ for which $\jam{\Lambda}$ is $\Pa{\mathbf{Q}_1;\dots;\mathbf{Q}_m}$-separable is necessarily a LOSR operation. Letting $n=\dim\pa{\kprod{\mathbf{Q}}{1}{m}}$ and $k = 2^{m-1}\sqrt{n}\cBr{n+1}$, Theorem \ref{thm:identity-sep} states that the operator $k\fnorm{A}I-A$ is $\Pa{\mathbf{Q}_1;\dots;\mathbf{Q}_m}$-separable for all $A\in\kprod{\mathbf{Q}}{1}{m}$. 
In particular, $I-A$ is $\Pa{\mathbf{Q}_1;\dots;\mathbf{Q}_m}$-separable whenever $\fnorm{A}\leq\frac{1}{k}$. The following theorem is now proved. \begin{theorem}[Ball around the completely noisy channel] \label{thm:ball} Suppose $A\in\kprod{\mathbf{Q}}{1}{m}$ satisfies $\fnorm{A} \leq \frac{1}{k}$. Then $I-A=\jam{\Lambda}$ where $\Lambda : \lin{\kprod{\mathcal{X}}{1}{m}} \to \lin{\kprod{\mathcal{A}}{1}{m}}$ is an unnormalized LOSR operation. \end{theorem} \begin{remark} \label{rem:ball:noisy} As mentioned in the introduction, the (unnormalized) completely noisy channel $\Delta:X\mapsto\tr{X}I$ satisfies $\jam{\Delta}=I$. It follows from Theorem \ref{thm:ball} that there is a ball of LOSR operations centred at the (normalized) completely noisy channel. In particular, letting $d=\dim\pa{\kprod{\mathcal{A}}{1}{m}}$, this ball has radius $\frac{1}{kd}$ in Frobenius norm. \end{remark} \begin{remark} \label{rem:ball:same} The ball of Theorem \ref{thm:ball} consists entirely of LOSE operations that are also LOSR operations. However, there seems to be no obvious way to obtain a bigger ball if such a ball is allowed to contain operations that are LOSE but not LOSR. Perhaps a more careful future investigation will uncover such a ball. \end{remark} \begin{remark} \label{rem:balls:noise} The ball of Theorem \ref{thm:ball} is contained within the product space $\kprod{\mathbf{Q}}{1}{m}$, which is a strict subspace of the space spanned by all quantum operations $\Phi:\lin{\kprod{\mathcal{X}}{1}{m}} \to \lin{\kprod{\mathcal{A}}{1}{m}}$. Why was attention restricted to this subspace? The answer is that there are no LOSE or LOSR operations $\Lambda$ for which $\jam{\Lambda}$ lies outside $\kprod{\mathbf{Q}}{1}{m}$. In other words, $\kprod{\mathbf{Q}}{1}{m}$ is the \emph{largest} possible space in which to find a ball of LOSR operations. 
This fact follows from the discussion in Section \ref{sec:no-sig}, wherein it is shown that $\kprod{\mathbf{Q}}{1}{m}$ is generated by the so-called \emph{no-signaling} quantum operations. Of course, there exist quantum operations arbitrarily close to the completely noisy channel that are not no-signaling operations, much less LOSE or LOSR operations. This fact might seem to confuse the study of, say, the effects of noise on such operations because a completely general model of noise would allow for extremely tiny perturbations that nonetheless turn no-signaling operations into signaling operations. This confusion might even be exacerbated by the fact that separable quantum states, by contrast, are resilient to \emph{arbitrary} noise: any conceivable physical perturbation of the completely mixed state is separable, so long as the perturbation has small enough magnitude. There is, of course, nothing unsettling about this picture. In any reasonable model of noise, perturbations to a LOSE or LOSR operation occur only on the local operations performed by the parties involved, or perhaps on the state they share. It is easy to see that realistic perturbations such as these always maintain the no-signaling property of these operations. Moreover, any noise \emph{not} of this form could, for example, bestow faster-than-light communication upon spatially separated parties. \end{remark} \section{Recognizing LOSE Operations is NP-hard} \label{sec:hard} In this section the existence of the ball of LOSR operations around the completely noisy channel is employed to prove that the weak membership problem for LOSE operations is strongly NP-hard. Informally, the weak membership problem asks, \begin{quote} ``Given a description of a quantum operation $\Lambda$ and an accuracy parameter $\varepsilon$, is $\Lambda$ within distance $\varepsilon$ of a LOSE operation?'' \end{quote} This result is achieved in several stages. 
Section \ref{sec:app:games} reviews a relevant recent result of Kempe \emph{et al.} pertaining to quantum games. In Section \ref{sec:app:validity} this result is exploited in order to prove that the weak validity problem---a relative of the weak membership problem---is strongly NP-hard for LOSE operations. Finally, Section \ref{sec:app:membership} illustrates how the strong NP-hardness of the weak membership problem for LOSE operations follows from a Gurvits-Gharibian-style application of Liu's version of the Yudin-Nemirovski\u\i{} Theorem. It is also noted that similar NP-hardness results hold trivially for LOSR operations, due to the fact that separable quantum states arise as a special case of LOSR operations in which the input space is empty. \subsection{Co-Operative Quantum Games with Shared Entanglement} \label{sec:app:games} Local operations with shared entanglement have been studied in the context of two-player co-operative games. In these games, a referee prepares a question for each player and the players each respond to the referee with an answer. The referee evaluates these answers and declares that the players have jointly won or lost the game according to this evaluation. The goal of the players, then, is to coordinate their answers so as to maximize the probability with which the referee declares them to be winners. In a \emph{quantum} game the questions and answers are quantum states. In order to differentiate this model from a one-player game, the players are not permitted to communicate with each other after the referee has sent his questions. The players can, however, meet prior to the commencement of the game in order to agree on a strategy. In a quantum game the players might also prepare a shared entangled quantum state so as to enhance the coordination of their answers to the referee. 
More formally, a \emph{quantum game} $G=(q,\pi,\mathbf{R},\mathbf{V})$ is specified by: \begin{itemize} \item A positive integer $q$ denoting the number of distinct questions. \item A probability distribution $\pi$ on the question indices $\{1,\dots,q\}$, according to which the referee selects his questions. \item Complex Euclidean spaces $\mathcal{V},\mathcal{X}_1,\mathcal{X}_2,\mathcal{A}_1,\mathcal{A}_2$ corresponding to the different quantum systems used by the referee and players. \item A set $\mathbf{R}$ of quantum states $\mathbf{R} = \{ \rho_i \}_{i=1}^q \subset \pos{\mathcal{V}\otimes\mathcal{X}_1\otimes\mathcal{X}_2}$. These states correspond to questions and are selected by the referee according to $\pi$. \item A set $\mathbf{V}$ of unitary operators $\mathbf{V} = \{V_i \}_{i=1}^q \subset \lin{\mathcal{V}\otimes\mathcal{A}_1\otimes\mathcal{A}_2}$. These unitaries are used by the referee to evaluate the players' answers. \end{itemize} For convenience, the two players are called \emph{Alice} and \emph{Bob}. The game is played as follows. The referee samples $i$ according to $\pi$ and prepares the state $\rho_i\in\mathbf{R}$, which is placed in the three quantum registers corresponding to $\mathcal{V}\otimes\mathcal{X}_1\otimes\mathcal{X}_2$. This state contains the questions to be sent to the players: the portion of $\rho_i$ corresponding to $\mathcal{X}_1$ is sent to Alice, the portion of $\rho_i$ corresponding to $\mathcal{X}_2$ is sent to Bob, and the portion of $\rho_i$ corresponding to $\mathcal{V}$ is kept by the referee as a private workspace. In reply, Alice sends a quantum register corresponding to $\mathcal{A}_1$ to the referee, as does Bob to $\mathcal{A}_2$. The referee then applies the unitary operation $V_i\in \mathbf{V}$ to the three quantum registers corresponding to $\mathcal{V}\otimes\mathcal{A}_1\otimes\mathcal{A}_2$, followed by a standard measurement $\{\Pi_\mathrm{accept},\Pi_\mathrm{reject}\}$ that dictates the result of the game. 
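The tuple $G=(q,\pi,\mathbf{R},\mathbf{V})$ can be captured in a small data structure. The following NumPy sketch is illustrative only; the class name, field names, and the toy dimensions are assumptions, not part of the paper's definition. It checks exactly the constraints listed above: $\pi$ is a probability distribution, each $\rho_i$ is a density operator, and each $V_i$ is unitary.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class QuantumGame:
    """Sketch of a quantum game G = (q, pi, R, V); names are illustrative."""
    pi: np.ndarray   # distribution over the q question indices
    rhos: list       # question states on V (x) X1 (x) X2
    Vs: list         # evaluation unitaries on V (x) A1 (x) A2

    def validate(self, atol: float = 1e-9) -> bool:
        q = len(self.pi)
        assert len(self.rhos) == len(self.Vs) == q
        # pi is a probability distribution
        assert (self.pi >= -atol).all() and abs(self.pi.sum() - 1.0) < atol
        for rho in self.rhos:  # question states: Hermitian, PSD, unit trace
            assert np.allclose(rho, rho.conj().T, atol=atol)
            assert np.linalg.eigvalsh(rho).min() > -atol
            assert abs(np.trace(rho).real - 1.0) < atol
        for V in self.Vs:      # evaluation operators: unitary
            assert np.allclose(V.conj().T @ V, np.eye(V.shape[0]), atol=atol)
        return True

# Toy instance (assumed dimensions): q = 2 questions, three qubit registers.
d = 2 * 2 * 2  # dim(V (x) X1 (x) X2)
game = QuantumGame(pi=np.array([0.5, 0.5]),
                   rhos=[np.eye(d) / d, np.eye(d) / d],
                   Vs=[np.eye(d), np.eye(d)])
print(game.validate())  # True
```

The maximally mixed question states used here carry no information, so the instance is degenerate; it serves only to exercise the validity checks.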
As mentioned at the beginning of this subsection, Alice and Bob may not communicate once the game commences. But they may meet prior to the commencement of the game in order to prepare a shared entangled quantum state $\sigma$. Upon receiving the question register corresponding to $\mathcal{X}_1$ from the referee, Alice may perform any physically realizable quantum operation upon that register and upon her portion of $\sigma$. The result of this operation shall be contained in the quantum register corresponding to $\mathcal{A}_1$---this is the answer that Alice sends to the referee. Bob follows a similar procedure to obtain his own answer register corresponding to $\mathcal{A}_2$. For any game $G$, the \emph{value} $\omega(G)$ of $G$ is the supremum of the probability with which the referee can be made to accept, taken over all strategies of Alice and Bob. \begin{theorem}[Kempe \emph{et al.}~\cite{KempeK+07}] There is a fixed polynomial $p$ such that the following promise problem is NP-hard under mapping (Karp) reductions: \begin{description} \item[Input.] A quantum game $G=(q,\pi,\mathbf{R},\mathbf{V})$. The distribution $\pi$ and the sets $\mathbf{R},\mathbf{V}$ are each given explicitly: for each $i=1,\dots,q$, the probability $\pi(i)$ is given in binary, as are the real and imaginary parts of each entry of the matrices $\rho_i$ and $V_i$. \item[Yes.] The value $\omega(G)$ of the game $G$ is 1. \item[No.] The value $\omega(G)$ of the game $G$ is less than $1-\frac{1}{p(q)}$. \end{description} \end{theorem} \subsection{Strategies and Weak Validity} \label{sec:app:validity} Viewing the two players as a single entity, a quantum game may be seen as a two-message quantum interaction between the referee and the players---a message from the referee to the players, followed by a reply from the players to the referee. The actions of the referee during such an interaction are completely specified by the parameters of the game.
In the language of Ref.~\cite{GutoskiW07}, the game specifies a \emph{two-turn measuring strategy} for the referee.\footnote{ Actually, the game specifies a \emph{one}-turn measuring \emph{co}-strategy for the referee. But this co-strategy can be re-written as a two-turn measuring strategy by a careful choice of input and output spaces. These terminological details are not important to the purpose of this paper. } This strategy has a Choi-Jamio\l kowski representation given by some positive semidefinite operators \[ R_\mathrm{accept},R_\mathrm{reject} \in \pos{\kprod{\mathcal{A}}{1}{2}\otimes\kprod{\mathcal{X}}{1}{2}}, \] which are easily computed given the parameters of the game. In these games, the players implement a one-turn non-measuring strategy compatible with $\{R_\mathrm{accept},R_\mathrm{reject}\}$. The Choi-Jamio\l kowski representation of this strategy is given by a positive semidefinite operator \[ P \in \pos{\kprod{\mathcal{A}}{1}{2}\otimes\kprod{\mathcal{X}}{1}{2}}. \] For any fixed strategy $P$ for the players, the probability with which they cause the referee to accept is given by the inner product \[ \Pr[\textrm{Players win with strategy $P$}] = \inner{R_\mathrm{accept}}{P}. \] In any game, the players combine to implement some physical operation \( \Lambda:\lin{\kprod{\mathcal{X}}{1}{2}}\to\lin{\kprod{\mathcal{A}}{1}{2}}. \) It is clear that a given super-operator $\Lambda$ denotes a legal strategy for the players if and only if $\Lambda$ is a LOSE operation.
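As a sanity check on the inner-product formula for the winning probability, the following NumPy fragment works through a deliberately degenerate toy case, under the assumption (mine, not the paper's) that when the referee sends no question the players' strategy representation $P$ collapses to their answer state and $R_\mathrm{accept}$ to the referee's accepting measurement operator; the inner product then reduces to the Born rule $\operatorname{Tr}(\Pi\rho)$.

```python
import numpy as np

# Degenerate toy case (an illustrative assumption, not the general
# formalism): the players answer with the state |+><+| and the referee
# accepts on measurement outcome |0>, so <R_accept, P> = Tr(Pi rho).
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
P = np.outer(plus, plus)          # players' strategy rep: answer state
R_accept = np.diag([1.0, 0.0])    # referee accepts on outcome |0>

p_win = np.trace(R_accept.conj().T @ P).real   # <R_accept, P>
print(round(p_win, 12))  # 0.5
```

The value $1/2$ is exactly the Born-rule probability of measuring $|0\rangle$ on $|+\rangle$, as expected.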
For the special case of one-turn non-measuring strategies---such as that of the players---the Choi-Jamio\l kowski representation of such a strategy is given by the simple formula $P=\jam{\Lambda}.$ Thus, the problem studied by Kempe \emph{et al.}~\cite{KempeK+07} of deciding whether $\omega\pa{G}=1$ can be reduced via the formalism of strategies to an optimization problem over the set of LOSE operations: \[ \omega\pa{G} = \sup_{\textrm{$\Lambda\in$ LOSE}} \left\{ \inner{R_\mathrm{accept}}{\jam{\Lambda}} \right\}. \] The following theorem is thus proved. \begin{theorem} \label{thm:wval} The weak validity problem for the set of LOSE [LOSR] operations is strongly NP-hard [NP-complete] under mapping (Karp) reductions: \begin{description} \item[Input.] A Hermitian matrix $R$, a real number $\gamma$, and a positive real number $\varepsilon>0$. The number $\gamma$ is given explicitly in binary, as are the real and imaginary parts of each entry of $R$. The number $\varepsilon$ is given in unary, where $1^s$ denotes $\varepsilon=1/s$. \item[Yes.] There exists a LOSE [LOSR] operation $\Lambda$ such that $\inner{R}{\jam{\Lambda}} \geq \gamma + \varepsilon$. \item[No.] For every LOSE [LOSR] operation $\Lambda$ we have $\inner{R}{\jam{\Lambda}} \leq \gamma - \varepsilon$. \end{description} \end{theorem} \begin{remark} \label{rem:sep-reduction} The hardness result for LOSR operations follows from a simple reduction from separable quantum states to LOSR operations: every separable state may be written as a LOSR operation in which the input space has dimension one. That weak validity for LOSR operations is in NP (and is therefore NP-complete) follows from the fact that all LOSR operations may be implemented with polynomially-bounded shared randomness.
\end{remark} \subsection{The Yudin-Nemirovski\u\i{} Theorem and Weak Membership} \label{sec:app:membership} Having established that the weak \emph{validity} problem for LOSE operations is strongly NP-hard, the next step is to follow the leads of Gurvits and Gharibian~\cite{Gurvits02,Gharibian08} and apply Liu's version~\cite{Liu07} of the Yudin-Nemirovski\u\i{} Theorem~\cite{YudinN76, GrotschelL+88} in order to prove that the weak \emph{membership} problem for LOSE operations is also strongly NP-hard. The Yudin-Nemirovski\u\i{} Theorem establishes an oracle-polynomial-time reduction from the weak validity problem to the weak membership problem for any convex set $C$ that satisfies certain basic conditions. One consequence of this theorem is that if the weak validity problem for $C$ is NP-hard then the associated weak membership problem for $C$ is also NP-hard. Although hardness under mapping reductions is preferred, any hardness result derived from the Yudin-Nemirovski\u\i{} Theorem in this way is only guaranteed to hold under more inclusive oracle reductions. The basic conditions that must be met by the set $C$ in order for the Yudin-Nemirovski\u\i{} Theorem to apply are (i) $C$ is bounded, (ii) $C$ contains a ball, and (iii) the size of the bound and the ball are polynomially related to the dimension of the vector space containing $C$. It is simple to check these criteria against the set of LOSE operations. For condition (i), an explicit bound is implied by the following proposition. The remaining conditions then follow from Theorem \ref{thm:ball}. \begin{prop} \label{prop:explicit-bound} For any quantum operation $\Phi:\lin{\mathcal{X}}\to\lin{\mathcal{A}}$ it holds that $\tnorm{\jam{\Phi}}=\dim(\mathcal{X})$. \end{prop} Here $\tnorm{X}\stackrel{\smash{\text{\tiny def}}}{=}\ptr{}{\sqrt{X^*X}}$ denotes the standard \emph{trace norm} for operators. 
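Proposition \ref{prop:explicit-bound} is easy to check numerically. The sketch below (the dimensions and the random-isometry construction are my own assumptions) builds a random trace-preserving map from Kraus operators obtained by slicing an isometry, forms $\jam{\Phi}=\pa{\Phi\otimes\mathbb{1}}\pa{vv^*}$ with $v=\sum_i e_i\otimes e_i$, and confirms that the trace norm of the result equals $\dim(\mathcal{X})$.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 3, 2, 4    # illustrative dimensions (assumptions)

# Random trace-preserving Phi from Kraus operators: slice an isometry W
# (W^* W = I) into r blocks, so that sum_a K_a^* K_a = I.
G = rng.standard_normal((d_out * r, d_in)) \
    + 1j * rng.standard_normal((d_out * r, d_in))
W, _ = np.linalg.qr(G)
kraus = [W[a * d_out:(a + 1) * d_out, :] for a in range(r)]

# Choi-Jamiolkowski representation J(Phi) = (Phi (x) id)(v v*),
# where v = sum_i e_i (x) e_i is the flattened identity.
v = np.eye(d_in).reshape(d_in * d_in)
J = np.zeros((d_out * d_in, d_out * d_in), dtype=complex)
for K in kraus:
    w = np.kron(K, np.eye(d_in)) @ v
    J += np.outer(w, w.conj())

# Trace norm = sum of singular values; for PSD J this is just Tr(J).
trace_norm = np.linalg.svd(J, compute_uv=False).sum()
print(round(trace_norm, 8))  # 3.0, i.e. dim(X)
```

Since $\jam{\Phi}$ is positive semidefinite, the singular values coincide with the eigenvalues and the computation reproduces the trace argument in the proof below.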
Proposition \ref{prop:explicit-bound} implies a bound in terms of the Frobenius norm via the inequality $\fnorm{X}\leq\tnorm{X}$. \begin{proof}[Proof of Proposition \ref{prop:explicit-bound}] It follows from the definition of the Choi-Jamio\l kowski representation that \[ \jam{\Phi} = \Pa{\Phi\otimes\idsup{\mathcal{X}}}(vv^*) \quad \textrm{ for } \quad v=\sum_{i=1}^{\dim(\mathcal{X})} e_i\otimes e_i \] where $\set{e_1,\dots,e_{\dim(\mathcal{X})}}$ denotes the standard basis of $\mathcal{X}$. As $\jam{\Phi}$ is positive semidefinite, it holds that \( \tnorm{\jam{\Phi}}=\ptr{}{\jam{\Phi}}. \) As $\Phi$ is trace-preserving, it holds that \( \ptr{}{\jam{\Phi}} = v^*v = \dim(\mathcal{X}). \) \end{proof} The following theorem is now proved. (As in Remark \ref{rem:sep-reduction}, the analogous result for LOSR operations follows from a straightforward reduction from separable quantum states.) \begin{theorem} \label{thm:wmem} The weak membership problem for the set of LOSE [LOSR] operations is strongly NP-hard [NP-complete] under oracle (Cook) reductions: \begin{description} \item[Input.] A Hermitian matrix $X\in\kprod{\mathbf{Q}}{1}{m}$ and a positive real number $\varepsilon>0$. The real and imaginary parts of each entry of $X$ are given explicitly in binary. The number $\varepsilon$ is given in unary, where $1^s$ denotes $\varepsilon=1/s$. \item[Yes.] $X=\jam{\Lambda}$ for some LOSE [LOSR] operation $\Lambda$. \item[No.] \( \norm{X-\jam{\Lambda}}\geq \varepsilon \) for every LOSE [LOSR] operation $\Lambda$. \end{description} \end{theorem} \section{Characterizations of Local Quantum Operations} \label{sec:char} Characterizations of LOSE and LOSR operations are presented in this section. These characterizations are reminiscent of the well-known characterizations of bipartite and multipartite separable quantum states due to Horodecki \emph{et al.}~\cite{HorodeckiH+96,HorodeckiH+01}.
Specifically, it is proven in Section \ref{sec:char:LOSR} that $\Lambda$ is a LOSR operation if and only if $\varphi\pa{\jam{\Lambda}}\geq 0$ whenever the linear functional $\varphi$ is positive on the cone of $\Pa{\mathbf{Q}_1;\dots;\mathbf{Q}_m}$-separable operators. This characterization of LOSR operations is a straightforward application of the fundamental Separation Theorems of convex analysis. (See, for example, Rockafellar \cite{Rockafellar70} for proofs of these theorems.) More interesting is the characterization of LOSE operations presented in Section \ref{sec:char:LOSE}: $\Lambda$ is a LOSE operation if and only if $\varphi\pa{\jam{\Lambda}}\geq 0$ whenever the linear functional $\varphi$ is \emph{completely} positive on that \emph{same cone} of $\Pa{\mathbf{Q}_1;\dots;\mathbf{Q}_m}$-separable operators. Prior to the present work, the notion of complete positivity was only ever considered in the context wherein the underlying cone is the positive semidefinite cone. Indeed, what it even \emph{means} for a super-operator or functional to be ``completely'' positive on some cone other than the positive semidefinite cone must be clarified before any nontrivial discussion can occur. The results of this section do not rely upon the results in previous sections, though much of the notation of those sections is employed here. To that notation we add the following. For any operator $X$, its transpose $X^{\scriptstyle\mathsf{T}}$ and complex conjugate $\overline{X}$ are always taken with respect to the standard basis. \subsection{Characterization of Local Operations with Shared Randomness} \label{sec:char:LOSR} The characterization of LOSR operations presented herein is an immediate corollary of the following simple proposition. \begin{prop} \label{prop:LOSR} Let $K\subset\mathbb{R}^n$ be any closed convex cone.
A vector $x\in\mathbb{R}^n$ is an element of $K$ if and only if $\varphi\pa{x}\geq 0$ for every linear functional $\varphi : \mathbb{R}^n \to \mathbb{R}$ that is positive on $K$. \end{prop} \begin{proof} The ``only if'' part of the proposition is immediate: as $x$ is in $K$, any linear functional positive on $K$ must also be positive on $x$. For the ``if'' part of the proposition, suppose that $x$ is not an element of $K$. The Separation Theorems from convex analysis imply that there exists a vector $h\in\mathbb{R}^n$ such that $\inner{h}{y} \geq 0$ for all $y\in K$, yet $\inner{h}{x} < 0$. Let $\varphi : \mathbb{R}^n \to \mathbb{R}$ be the linear functional given by $\varphi : z \mapsto \inner{h}{z}$. It is clear that $\varphi$ is positive on $K$, yet $\varphi\pa{x}<0$. \end{proof} \begin{corollary}[Characterization of LOSR operations] \label{thm:LOSR} A quantum operation $\Lambda : \lin{\kprod{\mathcal{X}}{1}{m}} \to \lin{\kprod{\mathcal{A}}{1}{m}}$ is an $m$-party LOSR operation if and only if $\varphi\pa{\jam{\Lambda}} \geq 0$ for every linear functional $\varphi : \lin{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m}} \to \mathbb{C}$ that is positive on the cone of $\Pa{\mathbf{Q}_1;\dots;\mathbf{Q}_m}$-separable operators. \end{corollary} \begin{proof} In order to apply Proposition \ref{prop:LOSR}, it suffices to note the following: \begin{itemize} \item The space $\her{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m}}$ is isomorphic to $\mathbb{R}^n$ for $n=\dim\pa{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m}}^2$. \item The $\Pa{\mathbf{Q}_1;\dots;\mathbf{Q}_m}$-separable operators form a closed convex cone within $\her{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m}}$. 
\end{itemize} While Proposition \ref{prop:LOSR} only gives the desired result for real linear functionals $\varphi: \her{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m}} \to \mathbb{R}$, it is trivial to construct a complex extension functional $\varphi':\lin{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m}} \to \mathbb{C}$ that agrees with $\varphi$ on $\her{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m}}$. \end{proof} \subsection{Characterization of Local Operations with Shared Entanglement} \label{sec:char:LOSE} \begin{notation} For each $i=1,\dots,m$ and each complex Euclidean space $\mathcal{E}_i$, let $\mathbf{Q}_i\pa{\mathcal{E}_i}\subset\her{\mathcal{A}_i\otimes\mathcal{X}_i\otimes\mathcal{E}_i}$ denote the subspace of operators $\jam{\Psi}$ for which $\Psi:\lin{\mathcal{X}_i\otimes\mathcal{E}_i}\to\lin{\mathcal{A}_i}$ is a trace-preserving super-operator, or a scalar multiple thereof. \end{notation} \begin{theorem}[Characterization of LOSE operations] \label{thm:LOSE} A quantum operation $\Lambda : \lin{\kprod{\mathcal{X}}{1}{m}} \to \lin{\kprod{\mathcal{A}}{1}{m}}$ is an $m$-party LOSE operation if and only if $\varphi\pa{\jam{\Lambda}} \geq 0$ for every linear functional $\varphi : \lin{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m}} \to \mathbb{C}$ with the property that the super-operator $\Pa{\varphi \otimes \idsup{\kprod{\mathcal{E}}{1}{m}}}$ is positive on the cone of $\Pa{\mathbf{Q}_1\pa{\mathcal{E}_1};\dots;\mathbf{Q}_m\pa{\mathcal{E}_m}}$-separable operators for all choices of complex Euclidean spaces $\mathcal{E}_1,\dots,\mathcal{E}_m$. \end{theorem} \begin{remark} \label{rem:LOSE:cp} The positivity condition of Theorem \ref{thm:LOSE} bears striking resemblance to the familiar notion of complete positivity of a super-operator. 
With this resemblance in mind, a linear functional $\varphi$ obeying the positivity condition of Theorem \ref{thm:LOSE} is said to be \emph{completely positive on the $\Pa{\mathbf{Q}_1\pa{\mathcal{E}_1};\dots;\mathbf{Q}_m\pa{\mathcal{E}_m}}$-separable family of cones}. In this sense, Theorem \ref{thm:LOSE} represents what seems, to the knowledge of the author, to be the first application of the notion of complete positivity to a family of cones other than the positive semidefinite family. Moreover, for any fixed choice of complex Euclidean spaces $\mathcal{E}_1,\dots,\mathcal{E}_m$ there exists a linear functional $\varphi$ for which $\Pa{\varphi \otimes \idsup{\kprod{\mathcal{E}}{1}{m}}}$ is positive on the cone of $\Pa{\mathbf{Q}_1\pa{\mathcal{E}_1};\dots;\mathbf{Q}_m\pa{\mathcal{E}_m}}$-separable operators, and yet $\varphi$ is \emph{not} completely positive on this family of cones. This curious property is a consequence of the fact that the set of LOSE operations with finite entanglement is not a closed set. By contrast, complete positivity (in the traditional sense) of a super-operator $\Phi:\lin{\mathcal{X}}\to\lin{\mathcal{A}}$ is assured whenever $\Pa{\Phi\otimes\idsup{\mathcal{Z}}}$ is positive for a space $\mathcal{Z}$ with dimension at least that of $\mathcal{X}$. (See, for example, Bhatia \cite{Bhatia07} for a proof of this fact.) \end{remark} The proof of Theorem \ref{thm:LOSE} employs the following helpful identity involving the Choi-Jamio\l kowski representation for super-operators. This identity may be verified by straightforward calculation. \begin{prop} \label{prop:identities} Let $\Psi:\lin{\mathcal{X}\otimes\mathcal{E}} \to \lin{\mathcal{A}}$ and let $Z\in\lin{\mathcal{E}}$.
Then the super-operator \(\Lambda : \lin{\mathcal{X}} \to \lin{\mathcal{A}} \) defined by \( \Lambda\pa{X} = \Psi \pa{X\otimes Z} \) for all $X$ satisfies \( \jam{\Lambda} = \Ptr{\mathcal{E}}{ \cBr{ I_{\mathcal{A}\otimes\mathcal{X}} \otimes Z^{\scriptstyle\mathsf{T}} } \jam{\Psi} }. \) \end{prop} The following technical lemma is also employed in the proof of Theorem \ref{thm:LOSE}. \begin{lemma} \label{lm:LOSE:ip} Let \( \Psi : \lin{\mathcal{X}\otimes\mathcal{E}} \to \lin{\mathcal{A}} \), let \( Z \in \lin{\mathcal{E}} \), and let \( \varphi : \lin{ \mathcal{A}\otimes\mathcal{X} } \to \mathbb{C} \). Then the super-operator \( \Lambda : \lin{\mathcal{X}} \to \lin{\mathcal{A}} \) defined by \( \Lambda\pa{X} = \Psi \Pa{ X\otimes Z } \) for all $X$ satisfies \[ \varphi\pa{\jam{\Lambda}} = \Inner{ \overline{Z} } { \Pa{ \varphi \otimes \idsup{\mathcal{E}} } \Pa{ \jam{ \Psi } } }. \] \end{lemma} \begin{proof} Let $H$ be the unique operator satisfying $\varphi\pa{X}=\inner{H}{X}$ for all $X$ and note that the adjoint $\varphi^*:\mathbb{C}\to\lin{ \mathcal{A}\otimes\mathcal{X} }$ satisfies $\varphi^*(1)=H$. Then \begin{align*} \Inner{ \overline{Z} }{ \Pa{ \varphi \otimes \idsup{\mathcal{E}} } \Pa{ \jam{ \Psi } } } &= \Inner{ \varphi^*\pa{1} \otimes \overline{Z} }{ \jam{ \Psi } } = \Inner{ H \otimes \overline{Z} }{ \jam{ \Psi } } \\ &= \Inner{ H }{ \Ptr{\mathcal{E}} { \cBr{ I_{ \mathcal{A}\otimes\mathcal{X} } \otimes Z^{\scriptstyle\mathsf{T}} } \jam{\Psi} } } = \varphi\pa{\jam{\Lambda}}. \end{align*} \end{proof} \begin{proof}[Proof of Theorem \ref{thm:LOSE}] For the ``only if'' part of the theorem, let $\Lambda$ be any LOSE operation with finite entanglement and let $\Psi_1,\dots,\Psi_m,\sigma$ be such that $\Lambda : X \mapsto \Pa{\kprod{\Psi}{1}{m}}\pa{X\otimes \sigma}$. Let $\varphi$ be any linear functional on $\lin{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m}}$ that satisfies the stated positivity condition.
Lemma \ref{lm:LOSE:ip} implies \[ \varphi\pa{\jam{\Lambda}} = \Inner{ \overline{\sigma} }{ \Pa{\varphi \otimes \idsup{\kprod{\mathcal{E}}{1}{m}} } \Pa{ \jam{\kprod{\Psi}{1}{m}} } } \geq 0. \] A standard continuity argument establishes the desired implication when $\Lambda$ is a finitely approximable LOSE operation. For the ``if'' part of the theorem, suppose that $\Xi : \lin{\kprod{\mathcal{X}}{1}{m}} \to \lin{\kprod{\mathcal{A}}{1}{m}}$ is a quantum operation that is \emph{not} a LOSE operation. The Separation Theorems from convex analysis imply that there is a Hermitian operator $H\in\her{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m}}$ such that $\inner{H}{\jam{\Lambda}} \geq 0$ for all LOSE operations $\Lambda$, yet $\inner{H}{\jam{\Xi}} < 0$. Let $\varphi : \lin{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m}}\to\mathbb{C}$ be the linear functional given by $\varphi : X \mapsto \inner{H}{X}$. It remains to verify that $\varphi$ satisfies the desired positivity condition. Toward that end, let $\mathcal{E}_1,\dots,\mathcal{E}_m$ be arbitrary complex Euclidean spaces. By convexity, it suffices to consider only those $\Pa{\mathbf{Q}_1\pa{\mathcal{E}_1};\dots;\mathbf{Q}_m\pa{\mathcal{E}_m}}$-separable operators that are product operators. Choose any such operator and note that, up to a scalar multiple, it has the form $\jam{ \kprod{\Psi}{1}{m} }$ where each $\Psi_i : \lin{\mathcal{X}_i\otimes\mathcal{E}_i} \to \lin{\mathcal{A}_i}$ is a quantum operation. The operator \[ \Pa{ \varphi \otimes \idsup{\kprod{\mathcal{E}}{1}{m}} } \Pa{ \jam{ \kprod{\Psi}{1}{m} } } \] is positive semidefinite if and only if it has a nonnegative inner product with every density operator in $\pos{\kprod{\mathcal{E}}{1}{m}}$. As $\sigma$ ranges over all such operators, so does $\overline{\sigma}$. 
Moreover, every such $\sigma$---together with $\Psi_1,\dots,\Psi_m$---induces a LOSE operation $\Lambda$ defined by $\Lambda : X \mapsto \Pa{\kprod{\Psi}{1}{m}}\pa{X\otimes \sigma}$. Lemma \ref{lm:LOSE:ip} and the choice of $\varphi$ imply \[ 0 \leq \varphi \pa{\jam{\Lambda}} = \Inner{ \overline{\sigma} }{ \Pa{\varphi \otimes \idsup{\kprod{\mathcal{E}}{1}{m}} } \Pa{ \jam{\kprod{\Psi}{1}{m}} } }, \] and so $\varphi$ satisfies the desired positivity condition. \end{proof} \section{No-Signaling Operations} \label{sec:no-sig} It was claimed in Remark \ref{rem:balls:noise} of Section \ref{sec:balls} that the product space $\kprod{\mathbf{Q}}{1}{m}$ is spanned by Choi-Jamio\l kowski representations of no-signaling operations. It appears as though this fact has yet to be noted explicitly in the literature, so a proof is offered in this section. More accurately, two characterizations of no-signaling operations are presented in Section \ref{sec:no-sig:char}, each of which is expressed as a condition on Choi-Jamio\l kowski representations of super-operators. The claim of Remark \ref{rem:balls:noise} then follows immediately from these characterizations. Finally, Section \ref{sec:no-sig:counter-example} provides an example of a so-called \emph{separable} no-signaling operation that is not a LOSE operation, thus ruling out an easy ``short cut'' to the ball of LOSR operations revealed in Theorem \ref{thm:ball}. \subsection{Formal Definition of a No-Signaling Operation} Intuitively, a quantum operation $\Lambda$ is no-signaling if it cannot be used by spatially separated parties to violate relativistic causality. Put another way, an operation $\Lambda$ jointly implemented by several parties is no-signaling if those parties cannot use $\Lambda$ as a ``black box'' to communicate with one another. 
In order to facilitate a formal definition for no-signaling operations, the shorthand notation for Kronecker products from Section \ref{sec:gen} must be extended: if $K\subseteq\set{1,\dots,m}$ is an arbitrary index set with $K=\set{k_1,\dots,k_n}$ then we write \[ \mathcal{X}_K \stackrel{\smash{\text{\tiny def}}}{=} \mathcal{X}_{k_1} \otimes \cdots \otimes \mathcal{X}_{k_n} \] with the convention that $\mathcal{X}_\emptyset = \mathbb{C}$. As before, a similar notation also applies to operators, sets of operators, and super-operators. The notation $\overline{K}$ refers to the set of indices \emph{not} in $K$, so that $K$,$\overline{K}$ is a partition of $\set{1,\dots,m}$. Formally then, a quantum operation $\Lambda:\lin{\kprod{\mathcal{X}}{1}{m}}\to\lin{\kprod{\mathcal{A}}{1}{m}}$ is an \emph{$m$-party no-signaling operation} if for each index set $K\subseteq\set{1,\dots,m}$ we have \( \ptr{\mathcal{A}_K}{\Lambda\pa{\rho}} = \ptr{\mathcal{A}_K}{\Lambda\pa{\sigma}} \) whenever \( \ptr{\mathcal{X}_K}{\rho} = \ptr{\mathcal{X}_K}{\sigma}. \) What follows is a brief argument that this condition captures the meaning of a no-signaling operation---a more detailed discussion of this condition can be found in Beckman \emph{et al.}~\cite{BeckmanG+01}. If $\Lambda$ is no-signaling and $\rho,\sigma$ are locally indistinguishable to a coalition $K$ of parties (for example, when \( \ptr{\mathcal{X}_{\overline{K}}}{\rho} = \ptr{\mathcal{X}_{\overline{K}}}{\sigma} \)) then clearly the members of $K$ cannot perform a measurement on their portion of the output that might allow them to distinguish $\Lambda\pa{\rho}$ from $\Lambda\pa{\sigma}$ (that is, \( \ptr{\mathcal{A}_{\overline{K}}}{\Lambda\pa{\rho}} = \ptr{\mathcal{A}_{\overline{K}}}{\Lambda\pa{\sigma}} \)). For otherwise, the coalition $K$ would have extracted information---a signal---that would allow it to distinguish $\rho$ from $\sigma$. 
Conversely, if there exist input states $\rho,\sigma$ such that \( \ptr{\mathcal{X}_{\overline{K}}}{\rho} = \ptr{\mathcal{X}_{\overline{K}}}{\sigma} \) and yet \( \ptr{\mathcal{A}_{\overline{K}}}{\Lambda\pa{\rho}} \neq \ptr{\mathcal{A}_{\overline{K}}}{\Lambda\pa{\sigma}} \) then there exists a measurement that allows the coalition $K$ to distinguish these two output states with nonzero bias, which implies that signaling must have occurred. It is not hard to see that every LOSE operation is also a no-signaling operation. Conversely, much has been made of the fact that there exist no-signaling operations that are not LOSE operations---this is so-called ``super-strong'' nonlocality, exemplified by the popular ``nonlocal box'' discussed in Section \ref{sec:no-sig:counter-example}. \subsection{Two Characterizations of No-Signaling Operations} \label{sec:no-sig:char} In this section it is shown that the product space $\kprod{\mathbf{Q}}{1}{m}$ is spanned by Choi-Jamio\l kowski representations of no-signaling operations. (Recall that each $\mathbf{Q}_i\subset\her{\mathcal{A}_i\otimes\mathcal{X}_i}$ denotes the subspace of Hermitian operators $\jam{\Phi}$ for which $\Phi:\lin{\mathcal{X}_i}\to\lin{\mathcal{A}_i}$ is a trace-preserving super-operator, or a scalar multiple thereof.) Indeed, that fact is a corollary of the following characterizations of no-signaling operations. \begin{theorem}[Two characterizations of no-signaling operations] \label{thm:char:no-signal} Let $\Lambda:\lin{\kprod{\mathcal{X}}{1}{m}}\to\lin{\kprod{\mathcal{A}}{1}{m}}$ be a quantum operation. The following are equivalent: \begin{enumerate} \item \label{item:no-signal} $\Lambda$ is a no-signaling operation. \item \label{item:prod-space} $\jam{\Lambda}$ is an element of $\kprod{\mathbf{Q}}{1}{m}$. 
\item \label{item:constraints} For each index set $K\subseteq\set{1,\dots,m}$ there exists an operator $Q\in\pos{\mathcal{A}_{\overline{K}}\otimes\mathcal{X}_{\overline{K}}}$ with \( \ptr{\mathcal{A}_K}{\jam{\Lambda}} = Q\otimes I_{\mathcal{X}_K}. \) \end{enumerate} \end{theorem} \begin{remark} Membership in the set of no-signaling operations may be verified in polynomial time by checking the linear constraints in Item \ref{item:constraints} of Theorem \ref{thm:char:no-signal}. While the number of such constraints grows exponentially with $m$, this exponential growth is not a problem because the number of parties $m$ is always $O\pa{\log n}$ for $n=\dim\pa{\kprod{\mathbf{Q}}{1}{m}}$. (This logarithmic bound follows from the fact that each space $\mathbf{Q}_i$ has dimension at least two and the total dimension $n$ is the product of the dimensions of each of the $m$ different spaces.) \end{remark} The partial trace condition of Item \ref{item:constraints} of Theorem \ref{thm:char:no-signal} is quite plainly suggested by Ref.~\cite[Theorem 6]{GutoskiW07}. Moreover, essential components of the proofs presented for two of the three implications claimed in Theorem \ref{thm:char:no-signal} appear in a 2001 paper of Beckman \emph{et al.}~\cite{BeckmanG+01}. The following theorem, however, establishes the third implication and appears to be new. \begin{theorem} \label{thm:TP-constraints} A Hermitian operator $X\in\her{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m}}$ is in $\kprod{\mathbf{Q}}{1}{m}$ if and only if for each index set $K\subseteq\set{1,\dots,m}$ there exists a Hermitian operator $Q\in\her{\mathcal{A}_{\overline{K}}\otimes\mathcal{X}_{\overline{K}}}$ with \( \ptr{\mathcal{A}_K}{X} = Q \otimes I_{\mathcal{X}_K}. \) \end{theorem} \begin{proof} The ``only if'' portion of the theorem is straightforward---only the ``if'' portion is proven here. The proof proceeds by induction on $m$. The base case $m=1$ is trivial. 
Proceeding directly to the general case, let $s=\dim\pa{\her{\mathcal{A}_{m+1}\otimes\mathcal{X}_{m+1}}}$ and let $\set{E_1,\dots,E_s}$ be a basis of $\her{\mathcal{A}_{m+1}\otimes\mathcal{X}_{m+1}}$. Let $X_1,\dots,X_s\in\her{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m}}$ be the unique operators satisfying \[ X = \sum_{j=1}^s X_j \otimes E_j. \] It shall be proven that $X_1,\dots,X_s\in\kprod{\mathbf{Q}}{1}{m}$. The intuitive idea is to exploit the linear independence of $E_1,\dots,E_s$ in order to ``peel off'' individual product terms in the decomposition of $X$. Toward that end, for each fixed index $j\in\set{1,\dots,s}$ let $H_j$ be a Hermitian operator for which the real number $\inner{H_j}{E_i}$ is nonzero only when $i=j$. Define a linear functional $\varphi_j:E\mapsto\inner{H_j}{E}$ and note that \[ \Pa{ \idsup{\kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m} } \otimes \varphi_j} \pa{X} = \sum_{i=1}^s \varphi_j\pa{E_i} X_i = \varphi_j\pa{E_j} X_j. \] Fix an arbitrary partition $K,\overline{K}$ of the index set $\set{1,\dots,m}$. By assumption, \( \ptr{\mathcal{A}_K}{X} = Q\otimes I_{\mathcal{X}_K} \) for some Hermitian operator $Q$. Apply $\trace_{\mathcal{A}_K}$ to both sides of the above identity, then use the fact that $\trace_{\mathcal{A}_K}$ and $\varphi_j$ act upon different spaces to obtain \begin{align*} & \varphi_j\pa{E_j} \ptr{\mathcal{A}_K}{X_j} \\ ={}& \Ptr{\mathcal{A}_K}{ \Pa{\idsup{ \kprod{\mathcal{A}}{1}{m}\otimes\kprod{\mathcal{X}}{1}{m} } \otimes \varphi_j} \Pa{X} } \\ ={}& \Pa{ \idsup{\mathcal{A}_{\overline{K}} \otimes \kprod{\mathcal{X}}{1}{m}} \otimes \varphi_j } \Pa{ \Ptr{\mathcal{A}_K}{X} } \\ ={}& \Pa{ \idsup{\mathcal{A}_{\overline{K}}\otimes \mathcal{X}_{\overline{K}} } \otimes \varphi_j } \pa{ Q } \otimes I_{\mathcal{X}_K}, \end{align*} from which it follows that $\ptr{\mathcal{A}_K}{X_j}$ is a product operator of the form $R \otimes I_{\mathcal{X}_K}$ for some Hermitian operator $R$.
As this identity holds for all index sets $K$, it follows from the induction hypothesis that $X_j\in\kprod{\mathbf{Q}}{1}{m}$ as desired. Now, choose a maximal linearly independent subset $\set{X_1,\dots,X_t}$ of $\set{X_1,\dots,X_s}$ and let $Y_1,\dots,Y_t$ be the unique Hermitian operators satisfying \[ X = \sum_{i=1}^t X_i \otimes Y_i. \] A similar argument shows $Y_1,\dots,Y_t\in\mathbf{Q}_{m+1}$, which completes the induction. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:char:no-signal}] \begin{description} \item[Item \ref{item:constraints} implies item \ref{item:prod-space}.] This implication follows immediately from Theorem \ref{thm:TP-constraints}. \item[Item \ref{item:prod-space} implies item \ref{item:no-signal}.] The proof of this implication borrows heavily from the proof of Theorem 2 in Beckman \emph{et al.}~\cite{BeckmanG+01}. Fix any partition $K,\overline{K}$ of the index set $\set{1,\dots,m}$. Let $s=\dim\pa{\lin{\mathcal{X}_{\overline{K}}}}$ and $t=\dim\pa{\lin{\mathcal{X}_K}}$ and let $\set{\rho_1,\dots,\rho_s}$ and $\set{\sigma_1,\dots,\sigma_t}$ be bases of $\lin{\mathcal{X}_{\overline{K}}}$ and $\lin{\mathcal{X}_K}$, respectively, that consist entirely of density operators. Given any two operators $X,Y\in\lin{\kprod{\mathcal{X}}{1}{m}}$ let $x_{j,k},y_{j,k}\in\mathbb{C}$ be the unique coefficients of $X$ and $Y$ respectively in the product basis $\set{\rho_j\otimes \sigma_k}$. Then \( \ptr{\mathcal{X}_K}{X} = \ptr{\mathcal{X}_K}{Y} \) implies \[ \sum_{k=1}^t x_{j,k} = \sum_{k=1}^t y_{j,k} \] for each fixed index $j=1,\dots,s$. As $\jam{\Lambda}\in\kprod{\mathbf{Q}}{1}{m}$, it is possible to write \[ \jam{\Lambda} = \sum_{l=1}^n \jam{\Phi_{1,l}}\otimes\cdots\otimes\jam{\Phi_{m,l}} \] where $n$ is a positive integer and $\Phi_{i,l}:\lin{\mathcal{X}_i}\to\lin{\mathcal{A}_i}$ satisfies $\jam{\Phi_{i,l}}\in\mathbf{Q}_i$ for each of the indices $i=1,\dots,m$ and $l=1,\dots,n$. 
In particular, as each $\Phi_{i,l}$ is (a scalar multiple of) a trace-preserving super-operator, it holds that for each index $l=1,\dots,n$ there exists $a_l\in\mathbb{R}$ with $\tr{\Phi_{K,l}\pa{\sigma}}=a_l$ for all density operators $\sigma$. Then \begin{align*} \ptr{\mathcal{A}_K}{\Lambda\pa{X}} &= \sum_{l=1}^n a_l \sum_{j=1}^s \cBr{\sum_{k=1}^t x_{j,k}} \Phi_{\overline{K},l}\pa{\rho_j} \\ &= \sum_{l=1}^n a_l \sum_{j=1}^s \cBr{\sum_{k=1}^t y_{j,k}} \Phi_{\overline{K},l}\pa{\rho_j} = \ptr{\mathcal{A}_K}{\Lambda\pa{Y}} \end{align*} as desired. \item[Item \ref{item:no-signal} implies item \ref{item:constraints}.] This implication is essentially a multi-party generalization of Theorem 8 in Beckman \emph{et al.}~\cite{BeckmanG+01}. The proof presented here differs from theirs in some interesting but non-critical details. Fix any partition $K,\overline{K}$ of the index set $\set{1,\dots,m}$. To begin, observe that \[ \ptr{\mathcal{X}_K}{X} = \ptr{\mathcal{X}_K}{Y} \implies \ptr{\mathcal{A}_K}{\Lambda\pa{X}} = \ptr{\mathcal{A}_K}{\Lambda\pa{Y}} \] for \emph{all} operators $X,Y\in\lin{\kprod{\mathcal{X}}{1}{m}}$---not just density operators. (This observation follows from the fact that $\lin{\kprod{\mathcal{X}}{1}{m}}$ is spanned by the density operators---a fact used in the above proof that item \ref{item:prod-space} implies item \ref{item:no-signal}.) Now, let $s=\dim\pa{\mathcal{X}_{\overline{K}}}$ and $t=\dim\pa{\mathcal{X}_K}$ and let $\set{e_1,\dots,e_s}$ and $\set{f_1,\dots,f_t}$ be the standard bases of $\mathcal{X}_{\overline{K}}$ and $\mathcal{X}_K$ respectively.
If $c$ and $d$ are distinct indices in $\set{1,\dots,t}$ and $Z\in\lin{\mathcal{X}_{\overline{K}}}$ is any operator then \[ \ptr{ \mathcal{X}_K }{ Z \otimes f_cf_d^* } = Z \otimes \tr{ f_cf_d^* } = 0_{\mathcal{X}_{\overline{K}}} = \ptr{ \mathcal{X}_K }{ 0_{\kprod{\mathcal{X}}{1}{m}} } \] and hence \[ \Ptr{ \mathcal{A}_K }{ \Lambda\Pa{ Z \otimes f_cf_d^* } } = \Ptr{ \mathcal{A}_K }{ \Lambda\Pa{ 0_{\kprod{\mathcal{X}}{1}{m}} } } = \Ptr{ \mathcal{A}_K }{ 0_{\kprod{\mathcal{A}}{1}{m}} } = 0_{\mathcal{A}_{\overline{K}}}. \] (Here a natural notation for the zero operator was used implicitly.) Similarly, if $\rho\in\lin{\mathcal{X}_K}$ is any density operator then \[ \Ptr{ \mathcal{A}_K }{ \Lambda\Pa{ Z \otimes f_cf_c^* } } = \Ptr{ \mathcal{A}_K }{ \Lambda\Pa{ Z \otimes \rho } } \] for each fixed index $c=1,\dots,t$. Employing these two identities, one obtains \begin{align*} \Ptr{ \mathcal{A}_K }{ \jam{\Lambda} } &= \sum_{a,b=1}^s \sum_{c=1}^t \Ptr{ \mathcal{A}_K }{ \Lambda\Pa{e_ae_b^* \otimes f_cf_c^*}} \otimes \cBr{e_ae_b^* \otimes f_cf_c^*} \\ &= \sum_{a,b=1}^s \Ptr{ \mathcal{A}_K }{ \Lambda\Pa{e_ae_b^* \otimes \rho}} \otimes e_ae_b^* \otimes \cBr{\sum_{c=1}^tf_cf_c^*} = \jam{\Psi} \otimes I_{ \mathcal{X}_K } \end{align*} where the quantum operation $\Psi$ is defined by $\Psi:X\mapsto\ptr{ \mathcal{A}_K }{\Lambda\Pa{X\otimes\rho}}$. As $\jam{\Psi} \otimes I_{ \mathcal{X}_K }$ is a product operator of the desired form, the proof that item \ref{item:no-signal} implies item \ref{item:constraints} is complete. \end{description} \end{proof} \subsection{A Separable No-Signaling Operation that is Not a LOSE Operation} \label{sec:no-sig:counter-example} One of the goals of the present work is to establish a ball of LOSR operations around the completely noisy channel. As such, it is prudent to consider the possibility of a short cut. 
Toward that end, suppose $\Lambda:\lin{\mathcal{X}_1\otimes\mathcal{X}_2}\to\lin{\mathcal{A}_1\otimes\mathcal{A}_2}$ is a quantum operation for which the operator $\jam{\Lambda}$ is $\Pa{\her{\mathcal{A}_1\otimes\mathcal{X}_1};\her{\mathcal{A}_2\otimes\mathcal{X}_2}}$-separable. Quantum operations with separable Choi-Jamio\l kowski representations such as this are called \emph{separable operations} \cite{Rains97}. If an operation is both separable and no-signaling then must it always be a LOSE operation, or even a LOSR operation? An affirmative answer to this question would open a relatively simple path to the goal by permitting the use of existing knowledge of separable balls around the identity operator. Alas, such a short cut is not to be had: there exist no-signaling operations $\Lambda$ that are not LOSE operations, yet $\jam{\Lambda}$ is separable. One example of such an operation is the so-called ``nonlocal box'' discovered in 1994 by Popescu and Rohrlich~\cite{PopescuR94}. This nonlocal box is easily formalized as a two-party no-signaling quantum operation $\Lambda$, as in Ref.~\cite[Section V.B]{BeckmanG+01}. That formalization is reproduced here. Let \( \mathcal{X}_1 = \mathcal{X}_2 = \mathcal{A}_1 = \mathcal{A}_2 = \mathbb{C}^2 \), let $\set{e_0,e_1}$ denote the standard bases of both $\mathcal{X}_1$ and $\mathcal{A}_1$, and let $\set{f_0,f_1}$ denote the standard bases of both $\mathcal{X}_2$ and $\mathcal{A}_2$. Write \[ \rho_{ab} \stackrel{\smash{\text{\tiny def}}}{=} e_ae_a^*\otimes f_bf_b^* \] for $a,b\in\set{0,1}$. The nonlocal box $\Lambda:\lin{\kprod{\mathcal{X}}{1}{2}}\to\lin{\kprod{\mathcal{A}}{1}{2}}$ is defined by \begin{align*} \Set{ \rho_{00},\rho_{01},\rho_{10} } &\stackrel{\Lambda}{\longmapsto} \frac{1}{2} \cBr{ \rho_{00} + \rho_{11} } \\ \rho_{11} &\stackrel{\Lambda}{\longmapsto} \frac{1}{2} \cBr{ \rho_{01} + \rho_{10} }. \end{align*} Operators not in $\spn\set{\rho_{00},\rho_{01},\rho_{10},\rho_{11}}$ are annihilated by $\Lambda$. 
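As a concrete illustration (a numerical sketch added here, assuming only NumPy; it is not part of the original text), the defining action of the nonlocal box, trace preservation, and a no-signaling marginal check can all be verified directly from the separable Kraus decomposition of $\Lambda$ discussed in the text:

```python
import numpy as np

# Standard basis vectors of C^2 (used for X_1, X_2, A_1, A_2 alike).
e = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def E(a, b):
    """E_{a->b} = e_b e_a^*, a rank-one 'relabelling' operator."""
    return np.outer(e[b], e[a])

# Eight product Kraus operators, each weighted 1/sqrt(2), realising the
# separable decomposition of the nonlocal box given in the text.
pairs = [(0, 0, 0, 0), (0, 1, 0, 1), (0, 0, 1, 0), (0, 1, 1, 1),
         (1, 0, 0, 0), (1, 1, 0, 1), (1, 0, 1, 1), (1, 1, 1, 0)]
kraus = [np.kron(E(a, b), E(c, d)) / np.sqrt(2) for a, b, c, d in pairs]

def channel(X):
    """Lambda(X) = sum_K K X K^*."""
    return sum(K @ X @ K.conj().T for K in kraus)

def rho(a, b):
    """rho_ab = e_a e_a^* (x) f_b f_b^*."""
    return np.kron(E(a, a), E(b, b))

# Defining action of the nonlocal box:
assert np.allclose(channel(rho(0, 0)), (rho(0, 0) + rho(1, 1)) / 2)
assert np.allclose(channel(rho(0, 1)), (rho(0, 0) + rho(1, 1)) / 2)
assert np.allclose(channel(rho(1, 0)), (rho(0, 0) + rho(1, 1)) / 2)
assert np.allclose(channel(rho(1, 1)), (rho(0, 1) + rho(1, 0)) / 2)

# Trace preservation: sum_K K^* K = I.
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(4))

# No-signaling spot check: party 1's output marginal is unchanged when
# only party 2's input changes (rho_00 and rho_01 have the same
# party-1 input marginal).
def tr2(X):  # partial trace over the second tensor factor
    return X.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

assert np.allclose(tr2(channel(rho(0, 0))), tr2(channel(rho(0, 1))))
```

The marginal in the last check is the maximally mixed state $I/2$ in both cases, so party 1 learns nothing about party 2's input; the analogous check with the roles of the parties exchanged behaves the same way.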
It is routine to verify that $\Lambda$ is a no-signaling operation, and this operation $\Lambda$ is known not to be a LOSE operation~\cite{PopescuR94}. To see that $\jam{\Lambda}$ is separable, write \begin{align*} E_{a\to b} &\stackrel{\smash{\text{\tiny def}}}{=} e_be_a^* \\ F_{a\to b} &\stackrel{\smash{\text{\tiny def}}}{=} f_bf_a^* \end{align*} for $a,b\in\set{0,1}$. Then for all $X\in\lin{\kprod{\mathcal{X}}{1}{2}}$ it holds that \begin{align*} \Lambda\pa{X} &= \frac{1}{2} \Big[ E_{0\to 0} \otimes F_{0\to 0} \Big] X \Big[ E_{0\to 0} \otimes F_{0\to 0} \Big]^* + \frac{1}{2} \Big[ E_{0\to 1} \otimes F_{0\to 1} \Big] X \Big[ E_{0\to 1} \otimes F_{0\to 1} \Big]^* \\ &+ \frac{1}{2} \Big[ E_{0\to 0} \otimes F_{1\to 0} \Big] X \Big[ E_{0\to 0} \otimes F_{1\to 0} \Big]^* + \frac{1}{2} \Big[ E_{0\to 1} \otimes F_{1\to 1} \Big] X \Big[ E_{0\to 1} \otimes F_{1\to 1} \Big]^* \\ &+ \frac{1}{2} \Big[ E_{1\to 0} \otimes F_{0\to 0} \Big] X \Big[ E_{1\to 0} \otimes F_{0\to 0} \Big]^* + \frac{1}{2} \Big[ E_{1\to 1} \otimes F_{0\to 1} \Big] X \Big[ E_{1\to 1} \otimes F_{0\to 1} \Big]^* \\ &+ \frac{1}{2} \Big[ E_{1\to 0} \otimes F_{1\to 1} \Big] X \Big[ E_{1\to 0} \otimes F_{1\to 1} \Big]^* + \frac{1}{2} \Big[ E_{1\to 1} \otimes F_{1\to 0} \Big] X \Big[ E_{1\to 1} \otimes F_{1\to 0} \Big]^*, \end{align*} from which the $\Pa{\her{\mathcal{A}_1\otimes\mathcal{X}_1};\her{\mathcal{A}_2\otimes\mathcal{X}_2}}$-separability of $\jam{\Lambda}$ follows. It is interesting to note that the nonlocal box is the smallest possible nontrivial example of such an operation---the number of parties $m=2$ and the input spaces $\mathcal{X}_1,\mathcal{X}_2$ and output spaces $\mathcal{A}_1,\mathcal{A}_2$ all have dimension two. \section{Open Problems} \label{sec:conclusion} Several interesting open problems are suggested by the present work: \begin{description} \item[Bigger ball of LOSE or LOSR operations.] 
The size of the ball of (unnormalized) LOSR operations established in Theorem \ref{thm:ball} scales as $\Omega\Pa{2^{-m} n^{-3/2}}$. While the exponential dependence on the number of parties $m$ seems unavoidable, might it be possible to improve the exponent from $-m$ to $-m/2$, as is the case with separable quantum states~\cite{GurvitsB03}? Can the exponent on the dimension $n$ be improved? As mentioned in Remark \ref{rem:ball:same}, it is not clear that there is a ball of LOSE operations that strictly contains any ball of LOSR operations. Does such a larger ball exist? \item[Completely positive super-operators.] As noted in Remark \ref{rem:LOSE:cp}, the characterization of LOSE operations is interesting because it involves linear functionals that are not just positive, but ``completely'' positive on the family of $\Pa{\mathbf{Q}_1\pa{\mathcal{E}_1};\dots;\mathbf{Q}_m\pa{\mathcal{E}_m}}$-separable cones. Apparently, the study of completely positive super-operators has until now been strictly limited to the context of positive semidefinite input cones. Any question that may be asked of conventional completely positive super-operators might also be asked of this new class of completely positive super-operators. \item[Entanglement required for approximating LOSE operations.] It was mentioned in the introduction that there exist LOSE operations that cannot be implemented with any finite amount of shared entanglement \cite{LeungT+08}. The natural question, then, is how much entanglement is necessary to achieve an arbitrarily close approximation to such an operation? The present author conjectures that for every two-party LOSE operation $\Lambda$ there exists an $\varepsilon$-approximation $\Lambda'$ of $\Lambda$ in which the dimension of the shared entangled state scales as $O\pa{2^{\varepsilon^{-a}}n^b}$ for some positive constants $a$ and $b$ and some appropriate notion of $\varepsilon$-approximation. 
Here $n=\dim\Pa{\kprod{\mathbf{Q}}{1}{m}}$ is the dimension of the space in which $\jam{\Lambda}$ lies. Evidence in favor of this conjecture can be found in Refs.~\cite{CleveH+04,KempeR+07,LeungT+08}. Moreover, the example in Ref.~\cite{LeungT+08} strongly suggests that the exponential dependence on $1/\varepsilon$ is unavoidable. The pressing open question is whether the polynomial dependence on $n$ holds for the general case. At the moment, \emph{no upper bound at all} is known for this general class of two-party LOSE operations. \end{description} \section*{Acknowledgements} The author is grateful to Marco Piani and Stephanie Wehner for pointers to relevant literature, and to John Watrous for invaluable guidance. This research was supported by graduate scholarships from Canada's NSERC and the University of Waterloo. \newcommand{\etalchar}[1]{$^{#1}$}
\section{Introduction} \label{sec:int} Supersoft sources (SSSs) were first discovered in the Large Magellanic Cloud with the \textsl{Einstein} Observatory \citep{Long1981}, including the two prototypes \object{CAL83} and \object{CAL87}. SSSs are characterized by very soft thermal spectra, with temperatures typically below 100\,eV, and have bolometric luminosities in excess of $10^{36}\,\text{erg}\,\text{s}^{-1}$. These sources can be divided into two groups. The first one includes the ``classical'' SSSs, which are characterized by bolometric luminosities in the range $10^{36}$--$2\times10^{38}\,\text{erg}\,\text{s}^{-1}$. The most promising model for the emission of such SSSs, proposed by \cite{vandenHeuvel1992}, is steady nuclear burning of hydrogen accreted onto white dwarfs with masses in the range of 0.7--1.2\,M$_{\odot}$. These sources are fairly common: 57 sources had been catalogued by J.~Greiner\footnote{\url{http://www.aip.de/~jcg/sss/ssscat.html}} up to 1999 December, and since then many more have been discovered in distant galaxies by \textsl{XMM-Newton}\xspace and \textsl{Chandra}\xspace (see below). The second group includes SSSs for which the luminosity exceeds the Eddington limit for a $1.4\,\text{M}_\odot$ compact object, therefore excluding unbeamed emission from steady nuclear burning of hydrogen accreted onto a white dwarf. These sources are much less common. For ultraluminous SSSs, i.e., SSSs with bolometric luminosities exceeding $10^{39}\,\text{erg}\,\text{s}^{-1}$, models involving intermediate-mass black holes \citep{Kong2003, Swartz2002, Kong2004} or stellar-mass black holes with matter outflow \citep{Mukai2005} have been invoked. According to \cite{Greiner2004}, 25 SSSs have been discovered in \object{M31}, of which a large fraction (30\%) are found to be transient sources, with turnoff and turnon times of the order of a few months. Their luminosities are in the range $10^{36}$--$10^{38}\,\text{erg}\,\text{s}^{-1}$.
\cite{DiStefano2003} found 16 SSSs in \object{M101}, 2--3 in \object{M51}, 10 in \object{M83} and 3 in \object{NGC~4697}. Of these sources, 11 have bolometric luminosities $>10^{38}\,\text{erg}\,\text{s}^{-1}$, of which six are brighter than $10^{39}\,\text{erg}\,\text{s}^{-1}$. In \object{M81}, \cite{Swartz2002} found 9 SSSs, including two with bolometric luminosities $>10^{38}\,\text{erg}\,\text{s}^{-1}$ and one $>10^{39}\,\text{erg}\,\text{s}^{-1}$. According to \cite{DiStefano2003}, normal SSSs in spiral galaxies appear to be associated with the spiral arms. The most luminous SSSs, however, have been found in the arms, bulge, or disk \citep{Swartz2002}, as well as in halos \citep{DiStefano2003b}. SSSs have been found in spiral (e.g., M31, M101, M83, M81, \object{M104}, \& NGC~300), elliptical (e.g., NGC~4697), interacting (e.g., M51 and \object{NGC~4038}/\object{NGC~4039}, i.e., the Antennae) and irregular galaxies (e.g., \object{LMC} and \object{SMC}). In this paper we report the discovery of a luminous ($>2\times10^{38}\,\text{erg}\,\text{s}^{-1}$) SSS, XMMU~J005455.0$-$374117, in the spiral galaxy NGC~300. This galaxy is a normal dwarf galaxy of type SA(s)d located at a distance of $\sim$1.88\,Mpc \citep{Gieren2005}. The galaxy is seen almost face-on and has a low Galactic column density \citep[$N_\text{H}=3.6\times10^{20}\,\text{cm}^{-2}$;][]{Dickey1990}. The major and minor axes of its $D_{25}$ optical disk are 13.3\,kpc and 9.4\,kpc \citep[$22'\times 15'$;][]{deVaucouleurs1991}. NGC~300 was observed by \textsl{ROSAT}\xspace between 1991 and 1997 for a total of 46\,ksec with the Position Sensitive Proportional Counter and 40\,ksec with the High Resolution Imager. One SSS, XMMU J005510.7$-$373855, was present in these observations \citep{Read1997}. This source was visible in 1992 May and June but not in 1991 December \citep{Read2001}. The spectrum was well described with a thermal bremsstrahlung model with $kT \sim 0.1$\,keV \citep{Read1997}.
More recently, we observed NGC~300 with \textsl{XMM-Newton}\xspace during its orbit 192 (2000 December 26; 37\,ksec on source time) and orbit 195 (2001 January 1; 47\,ksec on source time). The results of these observations have been presented by \cite{Carpano2005}. A deep analysis of the SSS XMMU J005510.7$-$373855 as seen in these observations was also performed by \cite{Kong2003}. These authors report that during the 6\,days between the two \textsl{XMM-Newton}\xspace pointings the source went from a ``high state'' to a ``low state'', and that a 5.4\,h periodicity was found during the low state. More information about this source will be given in Sect.~\ref{sec:discus}. Recently, \textsl{XMM-Newton}\xspace re-observed NGC~300 on 2005~May~22 (orbit 998) and on 2005~November~25 (orbit 1092), for 35\,ksec each. In this paper, we focus on the analysis of a new SSS, which was present in the 2005 \textsl{XMM-Newton}\xspace observations, and compare its properties with the previously known SSS. For simplicity, we will refer to XMMU~J005510.7$-$373855 as SSS$_{1}$ and XMMU J005455.0$-$374117 as SSS$_{2}$ in this work. Sect.~\ref{sec:obs} describes the observations and the reduction of the \textsl{XMM-Newton}\xspace data. In Sect.~\ref{sec:res} we present the spectral and timing analysis of SSS$_2$. We discuss the nature of these SSSs in Sect.~\ref{sec:discus} and conclusions are given in Sect.~\ref{sec:conc}. \section{Observations and data reduction} \label{sec:obs} For the 2005 May and November \textsl{XMM-Newton}\xspace observations, the EPIC-MOS \citep{Turner} and EPIC-pn \citep{Strueder} cameras were operated in their full frame mode with the medium filter. The EPIC-pn camera was centered on the previously known SSS$_{1}$ ($\alpha_\text{J2000.0}=00^\text{h}55^\text{m} 10\fs{}7$ and $\delta_\text{J2000.0}=-37^\circ 38' 55\farcs 0$). 
The data reduction was identical to that used in our analysis of the previous \textsl{XMM-Newton}\xspace observations \citep{Carpano2005}, except that version 6.5.0 of the \textsl{XMM-Newton}\xspace Science Analysis System (SAS) and recent calibration files were used. After screening the MOS data for proton flares using the standard procedures described by the \textsl{XMM-Newton}\xspace team, a total of 30\,ksec of low background data remained for revolution 998. The same good time intervals were then also used for the EPIC-pn data, leaving 26\,ksec of low background. No high background was present in the data of orbit 1092, where the exposure times were 36\,ksec for the MOS and 31\,ksec for the pn data. Using the SAS \texttt{edetect\_chain} task, which performs a maximum likelihood source detection, SSS$_2$ is detected with a likelihood of $5.2\times 10^3$ in the combined observations of orbits 998 and 1092. Following \citet{Carpano2005}, we improve the X-ray positions by cross-correlating the positions of the X-ray sources with those of their optical counterparts. The revised coordinates of the source are $\alpha_\text{J2000.0}=00^\text{h}54^\text{m} 55\fs{}0$ and $\delta_\text{J2000.0}=-37^\circ 41' 17\farcs 0$, with an uncertainty of $0\farcs 64$.
\begin{figure} \resizebox{\hsize}{!}{\includegraphics{5609fig1.ps}} \caption{$278''\times278''$ images of SSS$_{2}$, in the 0.2--2.0 keV\xspace band from the different \textsl{XMM-Newton}\xspace observations (revolution 192+195, 998, and 1092) and $15''\times15''$ optical image centered on the X-ray position of the source (the circle represents the $2\sigma$ uncertainty of the X-ray position).} \label{fig:image} \end{figure} SSS$_{2}$ was clearly visible in revolutions 998 and 1092 but not in revolutions 192 and 195, where the detection limiting luminosity is $\sim1.3\pm0.6\times10^{36}\,\text{erg}\,\text{s}^{-1}$ and $\sim1.1\pm0.5\times10^{36}\,\text{erg}\,\text{s}^{-1}$, respectively (assuming a blackbody model with $kT\sim60$\,eV and $N_\text{H}=10^{21}\,\text{cm}^{-2}$, with a $4\sigma$ confidence level). On the other hand, SSS$_{1}$ was detected in revolutions 192 and 195 \citep{Kong2003} but not in the last two revolutions, where the detection limiting luminosity is $\sim1.2\pm0.6\times10^{36}\,\text{erg}\,\text{s}^{-1}$ in both cases. SSS$_{2}$ has not been detected in any of the 5 \textsl{ROSAT}\xspace observations, although it would have been detectable in the first four \textsl{ROSAT}\xspace observations (where the detection limit was $<3.3\pm1.1\times10^{37}\,\text{erg}\,\text{s}^{-1}$) if it had had a luminosity similar to what has been found in the \textsl{XMM-Newton}\xspace data. In the optical images, including data from the Optical Monitor on \textsl{XMM-Newton}\xspace, no counterpart coinciding with either of the SSSs has been detected, nor does the SIMBAD catalogue list possible counterparts. Because SSS$_{2}$ is located close to the center of NGC~300, the optical detection limit is high: 21.7\,mag, 21.7\,mag, and 21.4\,mag in the B, V, and R bands, respectively.
For SSS$_{1}$, located in one of the spiral arms of the galaxy, no optical counterpart brighter than $m_\text{V}=24.5\,\text{mag}$ coincides with the source, thereby excluding the presence of an O or early B companion star \citep[see][ for the optical field around the source]{Carpano2005}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[bb=20 8 333 297,clip=true]{5609fig2.ps}} \caption{MOS and pn 0.2--2.0\,keV light curve of SSS$_{2}$ in revolution 998 (top) and 1092 (bottom). Periods of high background have been excluded from the data. The horizontal line shows the fitted mean value. Times given are barycentric and measured in seconds from 1998 January 1 (MJD 50814.0).} \label{fig:light} \end{figure} Fig.~\ref{fig:light} shows the light curve of SSS$_{2}$ in revolutions 998 and 1092. Periods of high background have been excluded from the data. The light curve does not show large fluctuations. To test the significance of the source variability, we fit a constant value to the light curves (binned to 1000\,s) and, from the resulting $\chi^2$ (9 for 18\,dof in rev.\ 998, and 31 for 33\,dof in rev.\ 1092), we find that the source is variable with a probability of 5\% and 42\% for revolutions 998 and 1092, respectively. A search using periodograms and epoch folding revealed no periodic signal on timescales from 5\,sec to 30\,ksec. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[bb=113 44 563 708,clip=true,angle=-90]{5609fig3.ps}} \resizebox{\hsize}{!}{\includegraphics[bb=113 44 563 708,clip=true,angle=-90]{5609fig4.ps}} \caption{pn and MOS spectra of source SSS$_{2}$ observed in revolution 998 (top) and 1092 (bottom), and the best fit spectral model.
Bottom of each spectrum: residuals expressed in $\sigma$.} \label{fig:spec} \end{figure} \begin{table*} \centering \caption{Results of the spectral fits for SSS$_{2}$, using an absorbed blackbody model (\texttt{phabs+bbody}, in XSPEC), where $N_\text{H}$ is the column density of neutral hydrogen, $kT$ the temperature and $\chi^2_{\nu}$/dof is the reduced chi-square and the number of degrees of freedom. The corresponding 0.2--2\,keV\xspace flux, and absorbed luminosities, as well as the bolometric luminosities, are shown in the last three columns. Uncertainties are given at a 90\% confidence level, except for the bolometric luminosity, where they are at 99\%.} \label{tab:spec_fit} \begin{tabular}{lllllll} \hline source (rev.) & $N_\text{H}(\times 10^{20}\,\text{cm}^{-2})$ & $kT$ (eV) & $\chi^2_{\nu}$/dof & $F_{0.2-2}$ (\,cgs) & $L_{0.2-2}^{\text{obs}}$ (\,cgs) & $L^{\text{bol}}$ (\,cgs)\\[3pt] \hline SSS$_{2}$ (998) & $8.02^{+2.20}_{-1.53}$ & $54^{+3}_{-4}$ & $1.03$/$43$ & $1.18^{+1.07}_{-0.96}\times 10^{-13}$ & $4.99^{+4.54}_{-4.06}\times 10^{37}$ & $8.12^{+1.39}_{-4.47}\times 10^{38}$\\[3pt] SSS$_{2}$ (1092)& $5.82^{+1.97}_{-2.39}$ & $62^{+6}_{-4}$ & $1.16$/$31$ & $0.71^{+0.39}_{-0.49}\times 10^{-13}$ & $3.01^{+1.66}_{-2.07}\times 10^{37}$ & $2.21^{+0.45}_{-1.40}\times 10^{38}$\\[3pt] \hline \end{tabular} \end{table*} Fig.~\ref{fig:spec} shows the spectra and the best fit spectral model of SSS$_{2}$ for revolutions 998 and 1092. The data were binned to have at least 35 counts in each energy bin. Results of the spectral fits, the corresponding 0.2--2\,keV\xspace flux absorbed luminosities, and the bolometric luminosities are given in Table~\ref{tab:spec_fit}. During revolution 1092 SSS$_{2}$ is situated on an EPIC-pn CCD gap. For this observation the flux/luminosities are calculated from the MOS1 data alone. We tried to describe the data with several spectral models, including bremsstrahlung, power-law, blackbody and disk blackbody. 
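For illustration, the shape of the absorbed blackbody model being fitted can be sketched as follows (a simplified stand-in, not the actual XSPEC implementation: the photoelectric cross-section below is a crude $\sigma_0\,E^{-3}$ power-law approximation rather than the tabulated \texttt{phabs} values, and the normalisation is arbitrary):

```python
import numpy as np

def blackbody_photons(E_keV, kT_keV):
    """Unnormalised blackbody photon spectrum, N(E) ~ E^2 / (exp(E/kT) - 1)."""
    return E_keV**2 / np.expm1(E_keV / kT_keV)

def absorbed(E_keV, kT_keV, nH_cm2, sigma0=2.0e-22):
    """exp(-N_H * sigma(E)) * N(E), with sigma(E) = sigma0 * E^-3, a rough
    power-law approximation to the photoelectric cross-section (assumption
    of this sketch; XSPEC's phabs uses tabulated cross-sections)."""
    sigma = sigma0 * E_keV**-3  # cm^2 per H atom, very approximate
    return np.exp(-nH_cm2 * sigma) * blackbody_photons(E_keV, kT_keV)

E = np.linspace(0.2, 2.0, 500)       # the 0.2-2 keV band used in the paper
spec = absorbed(E, 0.054, 8.0e20)    # kT = 54 eV, N_H = 8e20 cm^-2 (rev 998)

# Absorption suppresses the softest bins, but the spectrum still peaks
# well below 1 keV, as expected for a supersoft source:
assert E[np.argmax(spec)] < 1.0
```

This also makes the degeneracy noted below tangible: raising $N_\text{H}$ while lowering $kT$ changes the unabsorbed flux drastically while leaving the observed band shape similar, which is why the bolometric luminosity is poorly constrained.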
The blackbody and disk blackbody are the only models that provide a reasonable value for the reduced $\chi^2$, ($\chi^2_{\nu}<1.2$). The spectral fit parameters and flux resulting from the disk blackbody model are very similar to those provided by the simple blackbody model. Therefore, for simplicity, we assume the simple blackbody model in the rest of the paper. As suggested by \cite{Mukai2005}, we also tried an ionized model for the absorption (implemented in XSPEC as \texttt{absori}), but the spectral parameters corresponding to this model cannot be constrained and the best-fit model is a neutral absorber (ionization state $\xi=0$). From the results of Table~\ref{tab:spec_fit} we see that within 6~months the absorbing column slightly decreased (although the associated errors are very large), the temperature increased, and the observed luminosity dropped by a factor of $\sim$1.7. Bolometric luminosities of supersoft sources are difficult to determine due to the large uncertainty associated with the absorbing column. In our case, a blackbody model associated with photo-electric absorption provides a low $\chi^2$ value ($\chi^2_\nu<1.05$) and no other suitable model significantly changes the bolometric luminosity. Based on these considerations, we conclude that the high luminosity of SSS$_{2}$, which is above the Eddington luminosity of a white dwarf, excludes the presence of unbeamed emission from steady nuclear burning of hydrogen accreted onto a white dwarf. In the next section we discuss the models that could explain the nature of the SSSs observed in NGC~300 after summarizing the information we have on the previously known SSS$_{1}$. 
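The variability probabilities quoted above follow from the cumulative $\chi^2$ distribution of the constant fit. As a quick sanity check (not part of the original analysis), the following stdlib-only Python sketch reproduces them from the quoted statistics; the computed values, $\sim$4\% and $\sim$43\%, agree with the quoted 5\% and 42\% to within rounding:

```python
from math import exp, gamma

def chi2_cdf(x, k, steps=20000):
    """P(chi2 <= x) for k dof, via Simpson integration of the chi2 pdf."""
    norm = 2 ** (k / 2) * gamma(k / 2)
    pdf = lambda t: t ** (k / 2 - 1) * exp(-t / 2) / norm
    h = x / steps
    s = pdf(1e-12) + pdf(x)  # pdf(0) is effectively 0 for k > 2
    for i in range(1, steps):
        s += pdf(i * h) * (4 if i % 2 else 2)
    return s * h / 3

# rev. 998: chi^2 = 9 for 18 dof; rev. 1092: chi^2 = 31 for 33 dof
p998 = chi2_cdf(9.0, 18)    # ~0.04, quoted as 5% in the text
p1092 = chi2_cdf(31.0, 33)  # ~0.43, quoted as 42%
```

The probability of variability is the lower tail of the $\chi^2$ distribution: a fitted $\chi^2$ well below the number of degrees of freedom gives a constant light curve little room for intrinsic variability.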
\section{The nature of SSS$_{1}$ and SSS$_{2}$} \label{sec:discus} \subsection{What do we know about SSS$_{1}$?} SSS$_{1}$ is a transient source: the observed luminosity changed by at least a factor of $\sim$35 (we measured an observed luminosity of $4\times10^{37}\,\text{erg}\,\text{s}^{-1}$ while the \textsl{XMM-Newton}\xspace detection threshold is $\sim 1.2\times10^{36}\,\text{erg}\,\text{s}^{-1}$). The source has been detected in the \textsl{ROSAT}\xspace data of 1992 May/June \citep{Read2001} and in the \textsl{XMM-Newton}\xspace data of 2000 December/2001 January \citep{Kong2003,Carpano2005}. In the highest luminosity state, the source could have been observed in the \textsl{ROSAT}\xspace data of 1991 November/1992 January, 1994 June and 1995 May, and certainly in the \textsl{XMM-Newton}\xspace observations of 2005 May and November. The source has thus been detected in two epochs spaced by $\sim$8 years. \cite{Kong2003} performed a deep analysis of the source and found it to be very luminous ($10^{38}$--$10^{39}\,\text{erg}\,\text{s}^{-1}$) and very soft ($kT\sim60$\,eV). Using the most recent calibration files, we re-evaluated the bolometric luminosity of SSS$_{1}$, obtaining $6.2^{+1.3}_{-3.9}\times10^{38}\,\text{erg}\,\text{s}^{-1}$ in revolution 192 and $3.3^{+0.7}_{-2.0}\times10^{38}\,\text{erg}\,\text{s}^{-1}$ in revolution 195, at a confidence level of 99\%. \cite{Read2001} reported that the count rate of SSS$_{1}$ in 1992 May ($7.4\pm0.8\,\text{cts}\,\text{ks}^{-1}$) is consistent with that of 1992 June ($7.5\pm0.8\,\text{cts}\,\text{ks}^{-1}$). This result suggests that the duration of the outburst decline is several months and that the decrease in luminosity within the 6\,days separating the first two \textsl{XMM-Newton}\xspace observations is just a short-term flux modulation. In the light curve of SSS$_{1}$ during revolution 195, there are two luminosity decreases lasting $\sim$5\,ksec, separated by $\sim$20\,ksec. 
Using a Lomb-Scargle periodogram analysis \citep{Lomb1976,Scargle1982}, \cite{Kong2003} claim that the modulation present in the light curve is periodic at a confidence level $>$99.9\% and conclude that this periodicity could be associated with the orbital period of the system. White noise, however, is not an adequate assumption for X-ray binaries, where many systems show strong variability on timescales of hours. Using the Monte Carlo approach of \cite{benlloch:01a}, we re-evaluated the confidence level assuming a red-noise process instead of pure white noise. We found that the 5.4\,h period is significant only at the 68\% confidence level. Much longer observations than the existing ones are therefore required to be able to associate the 5.4\,h feature with some periodic signal, which might or might not be related to the orbital period of the system. The properties of SSS$_{1}$ make it very similar to SSS$_{2}$. Both can be modelled with absorbed blackbodies with a temperature of $\sim60\,$eV, both present transient behaviour, and their maximal bolometric luminosities observed in X-rays are in both cases close to $10^{39}\,\text{erg}\,\text{s}^{-1}$. The SSSs in NGC~300 therefore have a luminosity in their high state that classifies them as intermediate between the well-known `classical' SSSs and the ultraluminous SSSs. As we discussed in Sect.~\ref{sec:int}, this kind of system has not been well studied and only a few such sources have been reported. \cite{Orio2005} observed a variable SSS in M~31, r3-8, with a luminosity in the high state at $\sim6\times10^{38}$\,erg\,s$^{-1}$. Two other SSSs were observed in M~81 and one in M~101 \citep{Swartz2002,DiStefano2003}, all with $L_\text{bol}\sim4\times10^{38}\,\text{erg}\,\text{s}^{-1}$. The few observations of these SSSs, however, can establish neither a possibly transient behaviour nor the highest luminosity levels reached by the sources. 
\subsection{Interpretations for the high and soft state of the sources} The most natural explanation for these SSSs would be a steady nuclear burning of hydrogen accreted onto a white dwarf (WD). However, as shown in the previous section, the bolometric luminosity is above the Eddington limit for a $1.4\,\text{M}_\odot$ compact object ($1.82\times10^{38}\,\text{erg}\,\text{s}^{-1}$). To explain the nature of the variable ultraluminous supersoft X-ray source in the Antennae, \citet{Fabbiano2003} suggested beamed emission from nuclear burning onto a WD, with a beaming factor $b=L/L_{\text{sph}} = 0.01$, where $L_{\text{sph}}$ is the inferred isotropic luminosity of the blackbody and $L$ the true source luminosity. They suggest that the most likely cause of the anisotropy would be a warping of the accretion disk. In our case the beaming factor would only be $\sim$0.25. Another model, suggested by \citet{Kong2003}, \citet{Kong2004}, and \citet{Swartz2002} to explain the luminous SSS(s) present in NGC~300, M~101 and M~81, respectively, is the presence of an intermediate-mass black hole in the source. Assuming a blackbody model, with $kT\sim60$\,eV, a luminosity of $1\times10^{39}\,\text{erg}\,\text{s}^{-1}$, and assuming that the X-ray emission comes from the innermost stable orbit, \citet{Kong2003} estimate the mass of the black hole at $\sim 2800\,\text{M}_{\odot}$. For our SSS, this hypothesis seems very unlikely: when observed in the `high' state, this massive black hole would emit only at $<0.3\%$ of its Eddington limit ($3.64\times10^{41}\,\text{erg}\,\text{s}^{-1}$), and at an even lower level in the quiescent state. However, as reported by \cite{Nowak1995} for stellar-mass black holes, below a few percent of the Eddington luminosity, the sources are dominated by hard non-thermal emission, and soft emission is only observed once the source luminosity increases to several percent of the Eddington luminosity. 
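The Eddington limits quoted in this paragraph follow from $L_\text{Edd}=4\pi G M m_\text{p} c/\sigma_\text{T}$. A minimal Python sketch with CGS constants reproduces them; the quoted values correspond to a slightly different coefficient ($\approx1.3\times10^{38}\,\text{erg}\,\text{s}^{-1}$ per solar mass), so the numbers below agree only to within a few percent:

```python
from math import pi

# CGS constants
G = 6.674e-8         # gravitational constant
m_p = 1.6726e-24     # proton mass, g
c = 2.998e10         # speed of light, cm/s
sigma_T = 6.652e-25  # Thomson cross-section, cm^2
M_sun = 1.989e33     # solar mass, g

def l_edd(m_solar):
    """Eddington luminosity in erg/s for a mass given in solar masses."""
    return 4 * pi * G * (m_solar * M_sun) * m_p * c / sigma_T

l_ns = l_edd(1.4)     # ~1.8e38 erg/s, quoted as 1.82e38
l_imbh = l_edd(2800)  # ~3.5e41 erg/s, quoted as 3.64e41
```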
Another model was suggested by \cite{Mukai2003} and \cite{Fabbiano2003} to explain the SSS ULX in M~101 and in the Antennae, respectively. In this model, when material is accreted above the Eddington rate, the excess of matter is ejected from the inner part of the disk. The electron-scattering opacity induced by the wind/outflow then gives rise to supersoft blackbody emission from a photosphere of $10^8$--$10^9$\,cm. \cite{King2003} re-analysed this model in more detail: assuming a radial outflow with an outflow rate $\dot{M}_{\text{out}}$ in a double cone occupying a solid angle $4 \pi b$, at a constant speed, they showed that the outflow is Compton-thick for $\dot{M}_\text{out}\sim\dot{M}_\text{Edd}$. This result is true for $b\sim1$, when scattering of photons from the sides of the outflow is negligible, and for $b\ll1$, when scattering is dominant. The emission is therefore mainly thermalized and observed as a soft spectral component \citep{King2003}. The authors also evaluated the temperature of this soft blackbody component as: \begin{equation} T_\text{eff}=1\times10^5 g^{-1/4} \dot{M}_1^{-1} M_8^{3/4}\,\text{K} \label{equ:teff} \end{equation} where $g(b)=1/b$ or $1/(2b^{1/2})$ (for $b\sim1$ or $b\ll1$, respectively), $\dot{M}_1=\dot{M}_{\text{out}}/(1\,\text{M}_{\odot}\,\text{yr}^{-1})$, and $M_8=M/(10^8\,\text{M}_\odot)$, with $M$ the mass of the accretor. These results confirm the hypothesis and observations of the SSS ULX from \cite{Mukai2003} and \cite{Fabbiano2003}. For the SSSs in NGC~300, the accretion rate $\dot{M}=L/(\eta \text{c}^2)$, where $L$ is the luminosity and $\eta$ the radiative efficiency, is $\sim1.4\times10^{-7}\,\text{M}_\odot\,\text{yr}^{-1}$, if we take a typical value of $\eta=0.1$. This accretion rate value is not extreme considering that the system is observed in a high/outburst state. 
Furthermore, assuming the source is close to the Eddington limit, where $\dot{M}_{\text{out}}\sim\dot{M}$, we are able to estimate the mass of the accreting object, using equation\,\ref{equ:teff}. Fixing the temperature at 60~eV and the luminosity at 8$\times$10$^{38}$\,erg\,s$^{-1}$, the mass is \begin{equation} M=0.037 \left[ \frac{g^{1/4}}{\eta} \right] ^{4/3} \text{M}_\odot \end{equation} Except for very low values of $b$ ($\lesssim0.1$), $g^{1/4}$ is between 1 and 2, while $\eta$ goes from 0.06 for a non-rotating black hole to 0.42 for a maximally rotating black hole. This results in a mass range between $\sim$$0.1\,M_\odot$ and $4\,M_\odot$, i.e., at the lower limit of the mass range for black holes in our Galaxy. As a last model, we consider the viewing angle dependence of the emission of supercritical accretion flows \citep{Watarai2005}, i.e., accretion flows with a mass accretion rate that is so large that the accretion disk becomes geometrically thick due to enhanced radiation pressure. In this case, if the binary system is viewed at high inclination, the outer part of the disk occults its inner parts, such that only the very soft spectrum from the outer disk is observable. For such a flow around a $10\,\text{M}_\odot$ black hole, \citet{Watarai2005} showed that the emitted spectrum resembles a 2\,keV blackbody at low inclination angle ($i<40^\circ$) and looks like a 0.6\,keV blackbody at $i>60^\circ$. Consequently, when viewed edge-on, such a system will appear very faint. This is also the argument given by \cite{Narayan2005} to explain why none of their 20 studied black hole X-ray binaries present eclipses. In our case, although the observed temperature of 60\,eV is still lower than in the example given by \citet{Watarai2005} for a $10\,M_{\odot}$ black hole, we believe that this model is consistent with our observations. 
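The accretion rate and the mass range quoted above can be checked directly from $\dot{M}=L/(\eta c^2)$ and from the mass formula, letting $g^{1/4}$ range over 1--2 and $\eta$ over 0.06--0.42. A minimal Python sketch, using the paper's quoted $L=8\times10^{38}$\,erg\,s$^{-1}$:

```python
# Accretion rate M_dot = L / (eta * c^2), converted to solar masses per year
L = 8e38          # erg/s, outburst luminosity
eta = 0.1         # typical radiative efficiency
c = 2.998e10      # speed of light, cm/s
M_sun = 1.989e33  # solar mass, g
yr = 3.156e7      # seconds per year

mdot = L / (eta * c**2) * yr / M_sun  # ~1.4e-7 M_sun/yr, as quoted

# Mass from Eq. (2): M = 0.037 * (g^(1/4) / eta)^(4/3) M_sun
mass = lambda g14, eta: 0.037 * (g14 / eta) ** (4 / 3)
m_min = mass(1.0, 0.42)  # ~0.1 M_sun (maximally rotating black hole)
m_max = mass(2.0, 0.06)  # ~4 M_sun (non-rotating black hole)
```

Both endpoints reproduce the quoted range of $\sim$0.1--4\,$M_\odot$.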
When the outburst observed in SSS$_2$ begins to decline, inner parts of the disk would become visible, explaining the harder blackbody component in the spectrum of rev.\ 1092. \section{Conclusions} \label{sec:conc} We report the discovery of a new luminous supersoft source in the 2005 May \textsl{XMM-Newton}\xspace observation of NGC~300. The previously known luminous supersoft source, detected by \textsl{ROSAT}\xspace and in previous \textsl{XMM-Newton}\xspace data, was below the detection threshold. This latter source already appeared highly variable in the \textsl{ROSAT}\xspace observations. No object in the SIMBAD catalogue is associated with the new SSS, and, from the optical data, no counterpart brighter than $\sim$21.7\,mag ($\sim$24.5\,mag for the previously known SSS) has been found. The X-ray spectrum is well described by an absorbed blackbody at a temperature of $kT\sim60$\,eV. The bolometric luminosity, in the highest observed state, is $8.1^{+1.4}_{-4.5}\times10^{38}\,\text{erg}\,\text{s}^{-1}$ and dropped to $2.2^{+0.5}_{-1.4}\times10^{38}\,\text{erg}\,\text{s}^{-1}$ six months later. The SSSs in NGC~300 are brighter than ``classical'' SSSs, for which steady nuclear burning of hydrogen accreted onto white dwarfs has been suggested to explain their nature. They are too faint, however, to be classified as ultraluminous sources. We summarized several possible explanations for their nature. These involve beamed emission from a WD, an intermediate-mass black hole (IMBH), or a stellar-mass black hole with matter outflow or observed at a high inclination angle. Except for the IMBH scenario, which seems unlikely in our case, all models are consistent with our observations. 
\begin{acknowledgements} This paper is based on observations with \textsl{XMM-Newton}, an ESA science mission with instruments and contributions directly financed by the ESA Member States and the USA (NASA), and on observations made with ESO Telescopes at the La Silla observatory and retrieved from the ESO archive. We acknowledge partial support from DLR grant 50OX0002. This work was supported by the BMBF through the DLR under the project 50OR0106, by the BMBF through DESY under the project 05AE2PDA/8, and by the Deutsche Forschungsgemeinschaft under the project SCHN~342/3-1. \end{acknowledgements}
\section{Introduction} In computer science, an exception is an abnormal event occurring during the execution of a program. A mechanism for handling exceptions consists of two parts: an exception is \emph{raised} when an abnormal event occurs, and it can be \emph{handled} later, by switching the execution to a specific subprogram. Such a mechanism is very helpful, but it is difficult for programmers to reason about it. A difficulty for reasoning about programs involving exceptions is that they are \emph{computational effects}, in the sense that their syntax does not look like their interpretation: typically, a piece of program with arguments in $X$ that returns a value in $Y$ is interpreted as a function from $X+E$ to $Y+E$ where $E$ is the set of exceptions. On the one hand, reasoning with $f:X\to Y$ is close to the syntax, but it is error-prone because it is not sound with respect to the semantics. On the other hand, reasoning with $f:X+E\to Y+E$ is sound but it loses most of the interest of the exception mechanism, where the propagation of exceptions is implicit: syntactically, $f:X\to Y$ may be followed by any $g:Y\to Z$, since the mechanism of exceptions will take care of propagating the exceptions raised by $f$, if any. Another difficulty for reasoning about programs involving exceptions is that the handling mechanism is encapsulated in a $\mathtt{try}\texttt{-}\mathtt{catch}$ block, while the behaviour of this mechanism is easier to explain in two parts (see for instance~\cite[Ch. 14]{java} for Java or~\cite[\S 15]{cpp} for C++): the $\catch$ part may recover from exceptions, so that its interpretation may be any $f:X+E\to Y+E$, but the $\mathtt{try}\texttt{-}\mathtt{catch}$ block must propagate exceptions, so that its interpretation is determined by some $f:X\to Y+E$. 
In \cite{DDER14-exc} we defined a logical system for reasoning about states and exceptions and we used it for getting certified proofs of properties of programs in computer algebra, with an application to exact linear algebra. This logical system is called the \emph{decorated logic} for states and exceptions. Here we focus on exceptions. The decorated logic for exceptions deals with $f:X\to Y$, without any mention of~$E$, however it is sound thanks to a classification of the terms and the equations. Terms are classified, as in a programming language, according to the way they may interact with exceptions: a term either has no interaction with exceptions (it is ``pure''), or it may raise exceptions and must propagate them, or it is allowed to catch exceptions (which may occur only inside the $\catch$ part of a $\mathtt{try}\texttt{-}\mathtt{catch}$ block). The classification of equations follows a line that was introduced in \cite{DD10}: besides the usual ``strong'' equations, interpreted as equalities of functions, in the decorated logic for exceptions there are also ``weak'' equations, interpreted as equalities of functions on non-exceptional arguments. This logic has been built so as to be sound, but little was known about its completeness. In this paper we prove a novel completeness result: the decorated logic for exceptions is \emph{relatively Hilbert-Post complete}, which means that adding exceptions to a programming language can be done in such a way that the completeness of the language is not made worse. For this purpose, we first define and study the novel notion of {\em relative} Hilbert-Post completeness, which seems to be a relevant notion for the completeness of various computational effects: indeed, we prove that this notion is preserved when combining effects. 
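The distinction between strong and weak equations can be illustrated set-theoretically. In the following minimal Python sketch (illustrative only, not the paper's formalization), values are tagged as ordinary or exceptional, a strong equation requires agreement on every argument, and a weak equation requires agreement on the non-exceptional arguments only:

```python
# Exceptional values are tagged ("exc", p); ordinary values are tagged ("ok", v).
E = [("exc", p) for p in range(2)]  # two exceptional values
X = [("ok", v) for v in range(3)]   # three ordinary values

def strong_eq(f, g, dom):
    """Strong equation: f and g agree on every argument, exceptional or not."""
    return all(f(x) == g(x) for x in dom)

def weak_eq(f, g, dom):
    """Weak equation: f and g agree on the non-exceptional arguments only."""
    return all(f(x) == g(x) for x in dom if x[0] == "ok")

# The identity vs a term that recovers from exceptions by returning ("ok", 0):
ident = lambda x: x
recover = lambda x: ("ok", 0) if x[0] == "exc" else x

dom = X + E
assert weak_eq(ident, recover, dom)        # weakly equal
assert not strong_eq(ident, recover, dom)  # but not strongly equal
```

Strong equality thus implies weak equality, but not conversely, which is exactly what makes the classification of equations informative.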
Practically, this means that we have defined a decorated framework where reasoning about programs with and without exceptions is equivalent, in the following sense: if there exists an unprovable equation not contradicting the given decorated rules, then this equation is equivalent to a set of unprovable equations of the pure sublogic not contradicting its rules. Informally, in classical logic, a consistent theory is one that does not contain a contradiction and a theory is complete if it is consistent, and none of its proper extensions is consistent. Now, the usual (``\emph{absolute}'') Hilbert-Post completeness, also called Post completeness, is a syntactic notion of completeness which does not use any notion of negation, so that it is well-suited for equational logic. In a given logic $L$, we call \emph{theory} a set of sentences which is deductively closed: everything you can derive from it (using the rules of $L$) is already in it. Then, more formally, a theory is \emph{(Hilbert-Post) consistent} if it does not contain all sentences, and it is \emph{(Hilbert-Post) complete} if it is consistent and if any sentence which is added to it generates an inconsistent theory~\cite[Def. 4]{Tarski30}. All our completeness proofs have been verified with the Coq proof assistant. First, this shows that it is possible to formally prove that programs involving exceptions comply with their specifications. Second, this helps improve confidence in the results. Indeed, for a human prover, proofs in a decorated logic require some care: they look very much like familiar equational proofs, but the application of a rule may be subject to restrictions on the decoration of the premises of the rule. The use of a proof assistant in order to check that these unusual restrictions were never violated has thus proven to be quite useful. Finally, many of the proofs we give in this paper require a structural induction. 
There, the correspondence between our proofs and their Coq counterpart was eased, as structural induction is also at the core of the design of Coq. A major difficulty for reasoning about programs involving exceptions, and more generally computational effects, is that their syntax does not look like their interpretation: typically, a piece of program from $X$ to $Y$ is not interpreted as a function from $X$ to $Y$, because of the effects. The best-known algebraic approach for dealing with this problem was initiated by Moggi: an effect is associated with a monad $T$, in such a way that the interpretation of a program from $X$ to $Y$ is a function from $X$ to $T(Y)$~\cite{Moggi91}: typically, for exceptions, $T(Y)=Y+E$. Other algebraic approaches include effect systems~\cite{LucassenGifford88}, Lawvere theories~\cite{PlotkinPower02}, algebraic handlers~\cite{PP09}, comonads~\cite{UV08,POM14}, dynamic logic~\cite{MSG10}, among others. Some completeness results have been obtained, for instance for (global) states~\cite{Pretnar10} and for local states~\cite{Staton10-fossacs}. The aim of these approaches is to extend functional languages with tools for programming and proving side-effecting programs; implementations include Haskell \cite{BHM00}, Idris \cite{Idris}, Eff \cite{BauerP15}, while Ynot \cite{Ynot} is a Coq library for writing and verifying imperative programs. In contrast, our aim is to build a logical system for proving properties of some families of programs written in widely used non-functional languages like Java or C++\footnote{For instance, a denotational semantics of our framework for exceptions, which relies on the common semantics of exceptions in these languages, was given in~\cite[\S~4]{DDER14-exc}.}. The salient features of our approach are the following: \\{\bf (1)} The syntax of our logic is kept close to the syntax of programming languages. 
This is made possible by starting from a simple syntax without effects and by adding decorations, which often correspond to keywords of the languages, for taking the effects into account. \\{\bf (2)} We consider exceptions in two settings, the programming language and the core language. This makes it possible, for instance, to separate in proofs the treatment of the matching between normal and exceptional behaviour from the actual recovery after an exceptional behaviour. In Section~\ref{sec:hpc} we introduce a \emph{relative} notion of Hilbert-Post completeness in a logic $L$ with respect to a sublogic $L_0$. Then in Section~\ref{sec:exc} we prove the relative Hilbert-Post completeness of a theory of exceptions based on the usual $\throw$ and $\mathtt{try}\texttt{-}\mathtt{catch}$ statement constructors. We go further in Section~\ref{sec:excore} by establishing the relative Hilbert-Post completeness of a \emph{core} theory for exceptions with individualized $\trycore$ and $\catchcore$ statement constructors, which is useful for expressing the behaviour of the $\mathtt{try}\texttt{-}\mathtt{catch}$ blocks. All our completeness proofs have been verified with the Coq proof assistant and we therefore give the main ingredients of the framework used for this verification and the correspondence between our Coq package and the theorems and propositions of this paper in Section~\ref{sec:coq}. 
We describe a set-theoretic \emph{intended model} for each logic we introduce; the rules of the logic are designed so as to be \emph{sound} with respect to this intended model. Given a logic $L$, the theories of $L$ are partially ordered by inclusion. There is a maximal theory $T_\mymax$, where all formulas are theorems. There is a minimal theory $T_\mymin$, which is generated by the empty set of axioms. For all theories $T$ and $T'$, we denote by $T+T'$ the theory generated from $T$ and~$T'$. \begin{example} \label{ex:eqn} With this point of view there are many different \emph{equational logics}, with the same deduction rules but with different languages, depending on the definition of \emph{terms}. In an equational logic, formulas are \emph{pairs of parallel terms} $(f,g):X\to Y$ and theorems are \emph{equations} $f\equiv g:X\to Y$. Typically, the language of an equational logic may be defined from a \emph{signature} (made of sorts and operations). The deduction rules are such that the equations in a theory form a \emph{congruence}, i.e., an equivalence relation compatible with the structure of the terms. For instance, we may consider the logic ``of naturals'' $L_\nat$, with its language generated from the signature made of a sort $N$, a constant $0:\unit\to N$ and an operation $s:N\to N$. For this logic, the minimal theory is the theory ``of naturals'' $T_\nat$, the maximal theory is such that $s^k\equiv s^\ell$ and $s^k\circ0\equiv s^\ell\circ0$ for all natural numbers $k$ and $\ell$, and (for instance) the theory ``of naturals modulo~6'' $T_\modsix$ can be generated from the equation $s^6\equiv \id_N$. We consider models of equational logics in sets: each type $X$ is interpreted as a set (still denoted $X$), which is a singleton when $X$ is $\unit$, each term $f:X\to Y$ as a function from $X$ to $Y$ (still denoted $f:X\to Y$), and each equation as an equality of functions. 
\end{example} \begin{definition} \label{defi:hpc} Given a logic $L$ and its maximal theory $T_\mymax$, a theory $T$ is \emph{consistent} if $T\ne T_\mymax$, and it is \emph{Hilbert-Post complete} if it is consistent and if any theory containing $T$ coincides with~$T_\mymax$ or with~$T$. \end{definition} \begin{example} \label{ex:hpc-eqn} In Example~\ref{ex:eqn} we considered two theories for the logic $L_\nat$: the theory ``of naturals'' $T_\nat$ and the theory ``of naturals modulo~6'' $T_\modsix$. Since both are consistent and $T_\modsix$ contains $T_\nat$, the theory $T_\nat$ is not Hilbert-Post complete. A Hilbert-Post complete theory for $L_\nat$ is made of all equations but $s\equiv \id_N$, it can be generated from the axioms $s\!\circ\! 0 \!\equiv\! 0$ and $s\!\circ\! s \!\equiv\! s$. \end{example} If a logic $L$ is an extension of a sublogic $L_0$, each theory $T_0$ of $L_0$ generates a theory $F(T_0)$ of $L$. Conversely, each theory $T$ of $L$ determines a theory $G(T)$ of $L_0$, made of the theorems of $T$ which are formulas of $L_0$, so that $G(T_\mymax)=T_\mymaxz$. The functions $F$ and $G$ are monotone and they form a \emph{Galois connection}, denoted $F\dashv G$: for each theory $T$ of $L$ and each theory $T_0$ of $L_0$ we have $F(T_0)\subseteq T$ if and only if $T_0\subseteq G(T)$. It follows that $T_0\subseteq G(F(T_0))$ and $F(G(T))\subseteq T$. Until the end of Section~\ref{sec:hpc}, we consider: \emph{a logic $L_0$, an extension $L$ of $L_0$, and the associated Galois connection $F\dashv G$.} \begin{definition} \label{defi:hpc-rel} A theory $T'$ of~$L$ is \emph{$L_0$-derivable} from a theory $T$ of~$L$ if $T'=T+F(T'_0)$ for some theory $T'_0$ of~$L_0$. A theory $T$ of~$L$ is \emph{(relatively) Hilbert-Post complete with respect to} $L_0$ if it is consistent and if any theory of~$L$ containing $T$ is $L_0$-derivable from~$T$. 
\end{definition} Each theory $T$ is $L_0$-derivable from itself, as $T=T+F(T_\myminz)$, where $T_\myminz$ is the minimal theory of $L_0$. In addition, Theorem~\ref{theo:hpc} shows that relative completeness lifts the usual ``absolute'' completeness from $L_0$ to $L$, and Proposition~\ref{prop:hpc-compose} proves that relative completeness is well-suited to the combination of effects. \begin{lemma} \label{lemm:hpc-rel} For each theory $T$ of $L$, a theory $T'$ of $L$ is $L_0$-derivable from $T$ if and only if $T'=T+F(G(T'))$. As a special case, $T_\mymax$ is $L_0$-derivable from $T$ if and only if $T_\mymax=T+F(T_\mymaxz)$. A theory $T$ of $L$ is Hilbert-Post complete with respect to $L_0$ if and only if it is consistent and every theory $T'$ of~$L$ containing $T$ is such that $T'=T+F(G(T'))$. \end{lemma} \begin{proof} Clearly, if $T'=T+F(G(T'))$ then $T'$ is $L_0$-derivable from $T$. So, let $T'_0$ be a theory of~$L_0$ such that $T'=T+F(T'_0)$, and let us prove that $T'=T+F(G(T'))$. For each theory $T'$ we know that $F(G(T')) \subseteq T'$; since here $T \subseteq T'$ we get $T+F(G(T')) \subseteq T'$. Conversely, for each theory $T'_0$ we know that $T'_0 \subseteq G(F(T'_0))$ and that $G(F(T'_0)) \subseteq G(T)+ G(F(T'_0)) \subseteq G(T+F(T'_0)) $, so that $T'_0 \subseteq G(T+F(T'_0)) $; since here $T'=T+F(T'_0)$ we get first $T'_0 \subseteq G(T')$ and then $T'\subseteq T+F(G(T')) $. Then, the result for $T_\mymax$ comes from the fact that $G(T_\mymax)=T_\mymaxz $. The last point follows immediately. \end{proof} \begin{theorem} \label{theo:hpc} Let $T_0$ be a theory of $L_0$ and $T=F(T_0)$. If $T_0$ is Hilbert-Post complete (in $L_0$) and $T$ is Hilbert-Post complete with respect to $L_0$, then $T$ is Hilbert-Post complete (in $L$). \end{theorem} \begin{proof} Since $T$ is complete with respect to $L_0$, it is consistent. Since $T=F(T_0)$ we have $T_0 \subseteq G(T)$. Let $T'$ be a theory such that $T\subseteq T'$. 
Since $T$ is complete with respect to $L_0$, by Lemma~\ref{lemm:hpc-rel} we have $T'=T+F(T'_0)$ where $T'_0=G(T')$. Since $T\subseteq T'$, $T_0 \subseteq G(T)$ and $T'_0=G(T')$, we get $T_0 \subseteq T'_0$. Thus, since $T_0$ is complete, either $T'_0=T_0$ or $T'_0=T_\mymaxz$; let us check that then either $T'=T$ or $T'=T_\mymax$. If $T'_0=T_0$ then $F(T'_0)=F(T_0)=T$, so that $T'=T+F(T'_0)=T$. If $T'_0=T_\mymaxz$ then $F(T'_0)=F(T_\mymaxz)$; since $T$ is complete with respect to $L_0$, the theory $T_\mymax$ is $L_0$-derivable from $T$, which implies (by Lemma~\ref{lemm:hpc-rel}) that $T_\mymax=T+F(T_\mymaxz)=T'$. \end{proof} \begin{proposition} \label{prop:hpc-compose} Let $L_1$ be an intermediate logic between $L_0$ and $L$, let $F_1\dashv G_1$ and $F_2\dashv G_2$ be the Galois connections associated to the extensions $L_1$ of $L_0$ and $L$ of $L_1$, respectively. Let $T_1=F_1(T_0)$. If $T_1$ is Hilbert-Post complete with respect to $L_0$ and $T$ is Hilbert-Post complete with respect to $L_1$ then $T$ is Hilbert-Post complete with respect to $L_0$. \end{proposition} \begin{proof} This is an easy consequence of the fact that $F=F_2\circ F_1$. \end{proof} Corollary~\ref{coro:hpc-equations} provides a characterization of relative Hilbert-Post completeness which is used in the next Sections and in the Coq implementation. \begin{definition} \label{defi:hpc-equations} For each set $E$ of formulas let $\Th(E)$ be the theory generated by $E$; and when $E=\{e\}$ let $\Th(e)=\Th(\{e\})$. Then two sets $E_1$, $E_2$ of formulas are \emph{$T$-equivalent} if $T+\Th(E_1)=T+\Th(E_2)$; and a formula $e$ of~$L$ is \emph{$L_0$-derivable} from a theory $T$ of~$L$ if $\{e\}$ is $T$-equivalent to $E_0$ for some set $E_0$ of formulas of~$L_0$. \end{definition} \begin{proposition} \label{prop:hpc-equations} Let $T$ be a theory of $L$. Each theory $T'$ of $L$ containing $T$ is $L_0$-derivable from $T$ if and only if each formula $e$ in $L$ is $L_0$-derivable from $T$. 
\end{proposition} \begin{proof} Let us assume that each theory $T'$ of $L$ containing $T$ is $L_0$-derivable from $T$. Let $e$ be a formula in $L$, let $T'=T+\Th(e)$, and let $T'_0$ be a theory of $L_0$ such that $T'=T+F(T'_0)$. The definition of $\Th(-)$ is such that $\Th(T'_0)=F(T'_0)$, so that we get $T+\Th(e)=T+\Th(E_0)$ where $E_0= T'_0$. Conversely, let us assume that each formula $e$ in $L$ is $L_0$-derivable from $T$. Let $T'$ be a theory containing $T$. Let $T''=T+F(G(T'))$, so that $T\subseteq T''\subseteq T'$ (because $F(G(T'))\subseteq T'$ for any $T'$). Let us consider an arbitrary formula $e$ in $T'$; by assumption there is a set $E_0$ of formulas of $L_0$ such that $T+\Th(e)=T+\Th(E_0)$. Since $e$ is in $T'$ and $T\subseteq T'$ we have $T+\Th(e)\subseteq T'$, so that $T+\Th(E_0)\subseteq T'$. It follows that $E_0$ is a set of theorems of $T'$ which are formulas of $L_0$, which means that $E_0\subseteq G(T')$, and consequently $\Th(E_0)\subseteq F(G(T'))$, so that $T+\Th(E_0)\subseteq T''$. Since $T+\Th(e)=T+\Th(E_0)$ we get $e\in T''$. We have proved that $T'=T''$, so that $T'$ is $L_0$-derivable from~$T$. \end{proof} \begin{corollary} \label{coro:hpc-equations} A theory $T$ of $L$ is Hilbert-Post complete with respect to $L_0$ if and only if it is consistent and for each formula $e$ of $L$ there is a set $E_0$ of formulas of~$L_0$ such that $\{e\}$ is $T$-equivalent to $E_0$. \end{corollary} \section{Completeness for exceptions} \label{sec:exc} Exception handling is provided by most modern programming languages. It makes it possible to deal with anomalous or exceptional events which require special processing. E.g., one can easily and simultaneously compute dynamic evaluation in exact linear algebra using exceptions \cite{DDER14-exc}. There, we proposed to deal with exceptions as a decorated effect: a term $f:X\to Y$ is not interpreted as a function $f: X\to Y$ unless it is pure. 
A term which may raise an exception is instead interpreted as a function $f: X\to Y + E$ where ``+'' is the disjoint union operator and $E$ is the set of exceptions. In this section, we prove the relative Hilbert-Post completeness of the decorated theory of exceptions in Theorem~\ref{theo:exc-complete}. As in \cite{DDER14-exc}, decorated logics for exceptions are obtained from equational logics by classifying terms. Terms are classified as \emph{pure} terms or \emph{propagators}, which is expressed by adding a \emph{decoration} or superscript, respectively $\pure$ or $\ppg$; decoration and type information about terms may be omitted when they are clear from the context or when they do not matter. All terms must propagate exceptions, and propagators are allowed to raise an exception while pure terms are not. Catching exceptions is hidden: it is embedded into the $\mathtt{try}\texttt{-}\mathtt{catch}$ construction, as explained below. In Section~\ref{sec:excore} we consider a translation of the $\mathtt{try}\texttt{-}\mathtt{catch}$ construction into a more elementary language where some terms are \emph{catchers}, which means that they may recover from an exception, i.e., they do not have to propagate exceptions. Let us describe informally a decorated theory for exceptions and its intended model. Each type $X$ is interpreted as a set, still denoted $X$. The intended model is described with respect to a set $E$ called the \emph{set of exceptions}, which does not appear in the syntax. A pure term $u^\pure:X\to Y$ is interpreted as a function $u:X\to Y$ and a propagator $a^\ppg:X\to Y$ as a function $a:X\to Y+E$; equations are interpreted as equalities of functions.
There is an obvious conversion from pure terms to propagators, which allows one to consider all terms as propagators whenever needed; if a propagator $a^\ppg:X\to Y$ ``is'' a pure term, in the sense that it has been obtained by conversion from a pure term, then the function $a:X\to Y+E$ is such that $a(x)\in Y$ for each $x\in X$. This means that exceptions are always propagated: the interpretation of $(b\circ a)^\ppg:X\to Z$ where $a^\ppg:X\to Y$ and $b^\ppg:Y\to Z$ is such that $(b\circ a)(x)=b(a(x))$ when $a(x)$ is not an exception and $(b\circ a)(x)=e$ when $a(x)$ is the exception $e$ (more precisely, the composition of propagators is the Kleisli composition associated to the monad $X+E$ \cite[\S~1]{Moggi91}). Then, exceptions may be classified according to their \emph{name}, as in \cite{DDER14-exc}. Here, in order to focus on the main features of the proof of completeness, we assume that there is only one exception name. Each exception is built by \emph{encapsulating} a parameter. Let $P$ denote the type of parameters for exceptions. The fundamental operations for raising exceptions are the propagators $\throw_Y^\ppg :P\to Y$ for each type $Y$: this operation throws an exception with a parameter $p$ of type $P$ and pretends that this exception has type $Y$. The interpretation of the term $\throw_Y^\ppg :P\to Y$ is a function $\throw_Y :P\to Y+E$ such that $\throw_Y(p) \in E$ for each $p\in P$. The fundamental operations for handling exceptions are the propagators $(\try (a) \catch (b))^\ppg:X\to Y$ for all terms $a:X\to Y$ and $b:P\to Y$: this operation first runs $a$ until an exception with parameter $p$ is raised (if any); then, if such an exception has been raised, it runs $b(p)$. The interpretation of the term $(\try (a) \catch (b))^\ppg:X\to Y$ is a function $\try (a) \catch (b) :X\to Y+E$ such that $(\try (a) \catch (b))(x)=a(x)$ when $a(x)$ is not an exception and $(\try (a) \catch (b))(x)=b(p)$ when $a(x)$ throws an exception with parameter $p$.
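This intended model can be sketched concretely. The following Python fragment is a minimal illustration under our own encoding, not taken from the paper: the disjoint union $Y+E$ is encoded by tagged pairs, and all names (\texttt{pure}, \texttt{kleisli}, \texttt{throw\_}, \texttt{try\_catch}, \texttt{succ}, \texttt{pred}) are ours.

```python
# Hedged sketch of the intended model: a propagator X -> Y is modelled as a
# Python function from X to Y + E, with Y + E encoded by tagged pairs
# ('ok', y) for the Y summand and ('exc', e) for the E summand.

def pure(u):
    """Conversion: a pure term u : X -> Y seen as a propagator."""
    return lambda x: ('ok', u(x))

def kleisli(b, a):
    """Composition of propagators b o a: exceptions raised by a are propagated."""
    def ba(x):
        t, v = a(x)
        return b(v) if t == 'ok' else ('exc', v)
    return ba

def throw_(p):
    """throw_Y : P -> Y encapsulates the parameter p into an exception."""
    return ('exc', p)

def try_catch(a, b):
    """(try a catch b)(x) = a(x) if no exception is raised, else b(p)."""
    def tc(x):
        t, v = a(x)
        return (t, v) if t == 'ok' else b(v)
    return tc

succ = pure(lambda n: n + 1)   # a pure term on the naturals
pred = pure(lambda n: n - 1)

# try (succ o throw) catch (pred) applied to 3: succ propagates the exception
# raised by throw, then the handler recovers the parameter 3 and runs pred.
print(try_catch(kleisli(succ, throw_), pred)(3))   # ('ok', 2)
```

The last line illustrates exception propagation followed by handling; the same computation is carried out syntactically, with the rules of the decorated logic, in Example~\ref{ex:trycatch}.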
More precisely, first the definition of the \emph{monadic equational logic} $L_\eqn$ is recalled in Fig.~\ref{fig:eqn} (as in~\cite{Moggi91}, this terminology might be misleading: the logic is called \emph{monadic} because all its operations have exactly one argument; this is unrelated to the use of the \emph{monad} of exceptions). \begin{figure}[ht] \vspace{-5pt} \begin{framed} \small \renewcommand{\arraystretch}{1.5} \begin{tabular}{l} Terms are closed under composition: \\ $u_k\circ \dots \circ u_1:X_0\!\!\to\!\! X_k$ for each $(u_i:X_{i-1}\!\!\to\!\! X_i)_{1\leq i\leq k}$, and $\id_X:X\!\!\to\!\! X$ when $k=0$ \\ Rules: \quad (equiv) \squad $\dfrac{u}{u \eqs u} \quad \dfrac{u \eqs v}{v \eqs u} \quad \dfrac{u \eqs v \squad v \eqs w}{u \eqs w}$ \\ \quad (subs) \; $\dfrac{u\colon X\to Y \squad v_1 \eqs v_2 \colon Y\to Z} {v_1 \circ u \eqs v_2\circ u } $ \squad (repl) \; $ \dfrac{v_1\eqs v_2\colon X\to Y \squad w\colon Y\to Z} {w\circ v_1 \eqs w\circ v_2} $ \\ Empty type $\empt$ with terms $\copa_Y:\empt\to Y$ and rule: \quad (initial) \; $ \dfrac{u\colon \empt\to Y } {u \eqs \copa_Y} $ \\ \end{tabular} \renewcommand{\arraystretch}{1} \normalsize \vspace{-2mm} \end{framed} \vspace{-5pt} \caption{Monadic equational logic $L_\eqn$ (with empty type)} \label{fig:eqn} \end{figure} A monadic equational logic is made of types, terms and operations, where all operations are unary, so that terms are simply paths. This constraint on arity will make it easier to focus on the completeness issue. For the same reason, we also assume that there is an \emph{empty type} $\empt$, which is defined as an \emph{initial object}: for each $Y$ there is a unique term $\copa_Y:\empt\to Y$ and each term $u^\pure:Y\to\empt$ is the inverse of $\copa_Y^\pure$. In the intended model, $\empt$ is interpreted as the empty set.
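Since every operation is unary, a term of $L_\eqn$ is just a path of operation names. A short Python sketch makes this concrete; the signature used here (successor \texttt{s} and predecessor \texttt{p} on the naturals) is our own illustrative assumption, not part of the logic.

```python
# Terms as paths: a term u_k o ... o u_1 is just the list [u_k, ..., u_1]
# of operation names; the identity is the empty path (k = 0).
from functools import reduce

ops = {
    's': lambda n: n + 1,   # successor
    'p': lambda n: n - 1,   # predecessor
}

def interpret(path):
    """Interpret a path of unary operation names as a function."""
    return lambda x: reduce(lambda v, name: ops[name](v), reversed(path), x)

print(interpret(['s', 's', 'p'])(3))   # (s o s o p)(3) = s(s(p(3))) = 4
print(interpret([])(3))                # the empty path is the identity: 3
```

An equation of the logic then simply asserts that two such paths denote the same function in every model.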
Then, the monadic equational logic $L_\eqn$ is extended to form the \emph{decorated logic for exceptions} $L_\exc$ by applying the rules in Fig.~\ref{fig:exc}, with the following intended meaning: \begin{itemize} \item (initial$_1$): the term $\![\,]_Y\!\!$ is unique as a propagator, not only as a pure term. \item (propagate): exceptions are always propagated. \item (recover): the parameter used for throwing an exception may be recovered. \item (try): equations are preserved by the exceptions mechanism. \item (try$_0$): pure code inside $\try$ never triggers the code inside $\catch$. \item (try$_1$): code inside $\catch$ is executed when an exception is thrown inside~$\try$. \end{itemize} \begin{figure}[ht] \vspace{-5pt} \begin{framed} \small \renewcommand{\arraystretch}{1.5} \begin{tabular}{l} Pure part: the logic $L_\eqn$ with a distinguished type $P$ \\ Decorated terms: $\throw_Y^\ppg:P\to Y$ for each type $Y$, \\ \quad $(\try (a) \catch (b))^\ppg:X\to Y$ for each $a^\ppg:X\to Y$ and $b^\ppg:P\to Y$, and \\ \quad $(a_k\circ \dots \circ a_1)^{(\max(d_1,...,d_k))}:X_0\to X_k$ for each $(a_i^{(d_i)}:X_{i-1}\to X_i)_{1\leq i\leq k}$ \\ \quad with conversion from $u^\pure:X\to Y$ to $u^\ppg:X\to Y$ \\ Rules: \\ \quad (equiv), (subs), (repl) for all decorations \qquad (initial$_1$) \; $ \dfrac{a^\ppg\colon \empt\to Y } {a \eqs \copa_Y} $ \\ \quad (recover) \squad $\dfrac{u_1^\pure,u_2^\pure:X\to P \squad \throw_Y\circ u_1 \eqs \throw_Y\circ u_2}{u_1 \eqs u_2} $ \\ \quad (propagate) $\dfrac{a^\ppg:X\to Y}{a\circ \throw_X \eqs\throw_Y} $ \quad (try) $\dfrac{a_1^\ppg \eqs a_2^\ppg\!:\!X\to Y \squad b^\ppg\!:\!P\to Y}{ \try (a_1) \catch (b) \eqs \try (a_2) \catch (b) } $ \\ \quad (try$_0$) \squad $\dfrac{u^\pure\!:\!X\to Y \squad b^\ppg\!:\!P\to Y}{ \try (u) \catch (b) \eqs u} $ \quad (try$_1$) \squad $\dfrac{u^\pure\!:\!X\to P \squad b^\ppg\!:\!P\to Y}{ \try (\throw_Y \!\circ u) \catch (b) \eqs b\circ u} $ \\ \end{tabular} \renewcommand{\arraystretch}{1} \normalsize 
\vspace{-2mm} \end{framed} \vspace{-5pt} \caption{Decorated logic for exceptions $L_\exc$} \label{fig:exc} \end{figure} The \emph{theory of exceptions} $T_\exc$ is the theory of $L_\exc$ generated from some arbitrary consistent theory $T_\eqn$ of $L_\eqn$; with the notations of Section~\ref{sec:hpc}, $T_\exc=F(T_\eqn)$. The soundness of the intended model follows: see \cite[\S 5.1]{DDER14-exc} and~\cite{DDFR12-dual}, which are based on the description of exceptions in Java~\cite[Ch. 14]{java} or in C++~\cite[\S 15]{cpp}. \begin{example}\label{ex:trycatch} Using the naturals for $P$ and the successor and predecessor functions (resp. denoted $\mathtt{s}$ and $\mathtt{p}$) we can prove, e.g., that $\try(\mathtt{s}(\throw~3))\catch(\mathtt{p})$ is equivalent to $2$. Indeed, first the rule (propagate) shows that $\mathtt{s}(\throw~3)\eqs\throw~3$, then the rules (try) and (try$_1$) rewrite the given term into $\mathtt{p}(3)$, which equals $2$. \end{example} Now, in order to prove the completeness of the decorated theory for exceptions, we follow a classical method (see, e.g., \cite[Prop 2.37~\&~2.40]{Pretnar10}): we first determine canonical forms in Proposition~\ref{prop:exc-canonical}, then we study the equations between terms in canonical form in Proposition~\ref{prop:exc-equations}. \begin{proposition} \label{prop:exc-canonical} For each $a^\ppg\!:\!X\!\to\! Y$, either there is a pure term $u^\pure\!:\!X\!\to\! Y$ such that $a\!\eqs\! u$ or there is a pure term $u^\pure\!:\!X\!\to\! P$ such that $a\!\eqs\! \throw_Y \!\circ\! u$. \end{proposition} \begin{proof} The proof proceeds by structural induction. If $a$ is pure the result is obvious; otherwise $a$ can be written in a unique way as $a = b \circ \mathtt{op} \circ v$ where $v$ is pure, $\mathtt{op}$ is either $\throw_Z$ for some $Z$ or $\try(c)\catch(d)$ for some $c$ and $d$, and $b$ is the remaining part of $a$. If $a = b^\ppg \circ \throw_Z \circ v^\pure$, then by (propagate) $a \eqs \throw_Y \circ v^\pure$.
Otherwise $a = b^\ppg \circ (\try(c^\ppg)\catch(d^\ppg)) \circ v^\pure$, and by induction we consider two cases. \begin{itemize} \item If $c \eqs w^\pure$ then by (try$_0$) $a \eqs b^\ppg \circ w^\pure \circ v^\pure$ and by induction we consider two subcases: if $b \eqs t^\pure$ then $a \eqs (t \circ w \circ v)^\pure$ and if $b \eqs \throw_Y\circ t^\pure$ then $a \eqs \throw_Y \circ (t \circ w \circ v)^\pure$. \item If $c \eqs \throw_Z \circ w^\pure$ then by (try$_1$) $a \eqs b^\ppg \circ d^\ppg \circ w^\pure \circ v^\pure$ and by induction we consider two subcases: if $b \circ d \eqs t^\pure$ then $a \eqs (t \circ w \circ v)^\pure$ and if $b \circ d \eqs \throw_Y \circ t^\pure$ then $a \eqs \throw_Y\circ (t \circ w \circ v)^\pure$. \end{itemize} \end{proof} Thanks to Proposition~\ref{prop:exc-canonical}, the study of equations in the logic $L_\exc$ can be restricted to pure terms and to propagators of the form $\throw_Y \circ v$ where $v$ is pure. \begin{proposition} \label{prop:exc-equations} For all $v_1^\pure,v_2^\pure:X\to P$ let $a_1^\ppg = \throw_Y \circ v_1 :X\to Y$ and $a_2^\ppg = \throw_Y \circ v_2 :X\to Y$. Then $ a_1^\ppg \eqs a_2^\ppg $ is $T_\exc$-equivalent to $v_1^\pure \eqs v_2^\pure $. \end{proposition} \begin{proof} Clearly, if $v_1\eqs v_2$ then $a_1\eqs a_2 $. Conversely, if $a_1\eqs a_2 $, i.e., if $\throw_Y \circ v_1\eqs \throw_Y \circ v_2$, then by rule (recover) it follows that $v_1 \eqs v_2$. \end{proof} In the intended model, for all $v_1^\pure:X\to P$ and $v_2^\pure: X\to Y$, it is impossible to have $\throw_Y(v_1(x))=v_2(x)$ for any $x\in X$, because $\throw_Y(v_1(x))$ is in the $E$ summand and $v_2(x)$ in the $Y$ summand of the disjoint union $Y+E$. This means that the functions $\throw_Y \circ v_1$ and $v_2$ are distinct, as soon as their domain $X$ is a non-empty set. For this reason, it is sound to make the following Assumption~\ref{ass:exc-equations}.
\begin{assumption} \label{ass:exc-equations} In the logic $L_\exc$, the type of parameters $P$ is non-empty, and for all $v_1^\pure:X\to P$ and $v_2^\pure: X\to Y$ with $X$ non-empty, let $a_1^\ppg = \throw_Y \circ v_1 :X\to Y$. Then $ a_1^\ppg \eqs v_2^\pure$ is $T_\exc$-equivalent to $T_\mymaxz$. \end{assumption} \begin{theorem} \label{theo:exc-complete} Under Assumption~\ref{ass:exc-equations}, the theory of exceptions $T_\exc$ is Hilbert-Post complete with respect to the pure sublogic $L_\eqn$ of $L_\exc$. \end{theorem} \begin{proof} Using Corollary~\ref{coro:hpc-equations}, the proof relies upon Propositions~\ref{prop:exc-canonical} and \ref{prop:exc-equations}. The theory $T_\exc$ is consistent, because (by soundness) it cannot be proved that $\throw_P^\ppg \eqs \id_P^\pure$. Now, let us consider an equation between terms with domain $X$ and let us prove that it is $T_\exc$-equivalent to a set of pure equations. When $X$ is non-empty, Propositions~\ref{prop:exc-canonical} and~\ref{prop:exc-equations}, together with Assumption~\ref{ass:exc-equations}, prove that the given equation is $T_\exc$-equivalent to a set of pure equations. When $X$ is empty, then all terms from $X$ to $Y$ are equivalent to $\copa_Y$ so that the given equation is $T_\exc$-equivalent to the empty set of pure equations. \end{proof} \section{Completeness of the core language for exceptions} \label{sec:excore} In this section, following~\cite{DDER14-exc}, we describe a translation of the language for exceptions from Section~\ref{sec:exc} into a \emph{core} language with \emph{catchers}. Thereafter, in Theorem~\ref{theo:excore-complete}, we state the relative Hilbert-Post completeness of this core language. Let us call the usual language for exceptions with $\throw$ and $\mathtt{try}\texttt{-}\mathtt{catch}$, as described in Section~\ref{sec:exc}, the \emph{programmers' language} for exceptions.
The documentation on the behaviour of exceptions in many languages (for instance in Java \cite{java}) makes use of a \emph{core language} for exceptions which is studied in \cite{DDER14-exc}. In this language, the empty type plays an important role and the fundamental operations for dealing with exceptions are $\tagg^\ppg:P\to \empt$ for encapsulating a parameter inside an exception and $\untag^\ctc:\empt\to P$ for recovering its parameter from any given exception. The new decoration $\ctc$ corresponds to \emph{catchers}: a catcher may recover from an exception; it does not have to propagate it. Moreover, the equations are also decorated: in addition to the equations '$\eqs$' as in Section~\ref{sec:exc}, now called \emph{strong equations}, there are \emph{weak equations} denoted '$\eqw$'. As in Section~\ref{sec:exc}, a set $E$ of exceptions is chosen; the interpretation is extended as follows: each catcher $f^\ctc:X\to Y$ is interpreted as a function $f:X+E\to Y+E$, and there is an obvious conversion from propagators to catchers; the interpretation of the composition of catchers is straightforward, and it is compatible with the Kleisli composition for propagators. Weak and strong equations coincide on propagators, where they are interpreted as equalities, but they differ on catchers: $f^\ctc \eqw g^\ctc:X\to Y$ means that the functions $f,g:X+E\to Y+E$ coincide on $X$, but maybe not on $E$. The interpretation of $\tagg^\ppg:P\to \empt$ is an injective function $\tagg:P\to E$ and the interpretation of $\untag^\ctc:\empt\to P$ is a function $\untag:E\to P+E$ such that $\untag(\tagg(p))=p$ for each parameter $p$. Thus, the fundamental axiom relating $\tagg^\ppg$ and $\untag^\ctc$ is the weak equation $ \untag \circ \tagg \eqw \id_P$.
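The distinction between weak and strong equations can be made concrete with a hedged Python sketch of this intended model. We take $E=P$ with \texttt{tag} essentially the identity; all names (\texttt{compose}, \texttt{weak\_eq}, \texttt{strong\_eq}, \texttt{ident}) are our own, not the paper's.

```python
# Hedged sketch of the intended model of the core language: a catcher
# f : X -> Y is a function on tagged values, ('ok', x) for the X summand
# and ('exc', e) for the E summand, i.e. a function X + E -> Y + E.

def compose(*fs):
    """Plain composition of catchers: compose(f, g)(v) = f(g(v))."""
    def c(v):
        for f in reversed(fs):
            v = f(v)
        return v
    return c

def tag(v):
    """tag : P -> 0, a propagator: wraps its argument into an exception."""
    return ('exc', v[1])   # with E = P it also propagates incoming exceptions

def untag(v):
    """untag : 0 -> P, a catcher: recovers the parameter of an exception."""
    return ('ok', v[1])

ident = lambda v: v

def weak_eq(f, g, xs):
    """f =~ g : the interpretations agree on non-exceptional arguments."""
    return all(f(('ok', x)) == g(('ok', x)) for x in xs)

def strong_eq(f, g, xs, es):
    """f == g : agreement on ordinary arguments and on exceptions."""
    return weak_eq(f, g, xs) and all(f(('exc', e)) == g(('exc', e)) for e in es)

ut = compose(untag, tag)
print(weak_eq(ut, ident, range(5)))              # True : the axiom (ax)
print(strong_eq(ut, ident, range(5), range(5)))  # False: untag recovers, id propagates
```

The two printed lines show why the fundamental axiom $\untag \circ \tagg \eqw \id_P$ must be a weak equation: $\untag \circ \tagg$ recovers from an incoming exception while the identity propagates it, so the corresponding strong equation fails in the model.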
\begin{figure}[ht] \vspace{-5pt} \begin{framed} \small \renewcommand{\arraystretch}{1.5} \begin{tabular}{l} Pure part: the logic $L_\eqn$ with a distinguished type $P$ \\ Decorated terms: $\tagg^\ppg \colon P\to \empt$, $\untag^\ctc \colon \empt\to P$, and \\ \quad $(f_k\circ \dots \circ f_1)^{(\max(d_1,...,d_k))}:X_0\to X_k$ for each $(f_i^{(d_i)}:X_{i-1}\to X_i)_{1\leq i\leq k}$ \\ \quad with conversions from $f^\pure$ to $f^\ppg$ and from $f^\ppg$ to $f^\ctc$ \\ Rules: \\ \quad (equiv$_\eqs$), (subs$_\eqs$), (repl$_\eqs$) for all decorations \\ \quad (equiv$_\eqw$), (repl$_\eqw$) for all decorations, (subs$_\eqw$) only when the substituted term is pure \\ \quad (empty$_\eqw$) \squad $\dfrac{f\colon \empt\to Y}{f \eqw \copa_Y}$ \quad ($\eqs$-to-$\eqw$) \squad $\dfrac{f\eqs g}{f\eqw g} $ \quad (ax) \squad $\dfrac{}{\untag \circ \tagg \eqw \id_P} $ \\ \quad (eq$_1$) \squad $\dfrac{f_1^{(d_1)}\eqw f_2^{(d_2)}}{f_1\eqs f_2}$ only when $d_1\leq 1$ and $d_2\leq 1$ \\ \quad (eq$_2$) \squad $\dfrac{f_1,f_2\colon X\to Y \;\; f_1\eqw f_2 \;\; f_1 \circ \copa_X \eqs f_2 \circ \copa_X}{f_1\eqs f_2}$ \\ \quad (eq$_3$) \squad $\dfrac{f_1,f_2\colon \empt\to X \quad f_1 \circ \tagg \eqw f_2 \circ \tagg} {f_1\eqs f_2}$ \\ \end{tabular} \renewcommand{\arraystretch}{1} \normalsize \vspace{-2mm} \end{framed} \vspace{-5pt} \caption{Decorated logic for the core language for exceptions $L_\excore$} \label{fig:excore} \end{figure} More precisely, the \emph{decorated logic for the core language for exceptions} $L_\excore$ is defined in Fig.~\ref{fig:excore} as an extension of the monadic equational logic $L_\eqn$. There is an obvious conversion from strong to weak equations ($\eqs$-to-$\eqw$), and in addition strong and weak equations coincide on propagators by rule (eq$_1$). Two catchers $f_1^\ctc,f_2^\ctc:X\to Y$ behave in the same way on exceptions if and only if $f_1\circ\copa_X \eqs f_2\circ\copa_X :\empt\to Y$, where $\copa_X:\empt\to X$ builds a term of type $X$ from any exception.
Then rule (eq$_2$) expresses the fact that weak and strong equations are related by the property that $f_1\eqs f_2$ if and only if $f_1\eqw f_2$ and $f_1\circ\copa_X \eqs f_2\circ\copa_X$. This can also be expressed as a pair of weak equations: $f_1\eqs f_2$ if and only if $f_1\eqw f_2$ and $f_1\circ\copa_X\circ\tagg \eqw f_2\circ\copa_X\circ\tagg$ by rule (eq$_3$). The \emph{core theory of exceptions} $T_\excore$ is the theory of $L_\excore$ generated from the theory $T_\eqn$ of $L_\eqn$. Some easily derived properties, which will be used repeatedly, are stated in Lemma~\ref{lemm:excore-ul}. \begin{lemma} \label{lemm:excore-ul} \begin{enumerate} \item \label{pt:excore-ul-lulu} For all pure terms $u_1^\pure,u_2^\pure:X\to P$, the equation $ u_1 \eqs u_2 $ is $T_\excore$-equivalent to $\tagg \circ u_1 \eqs \tagg \circ u_2 $ and also to $\untag \circ \tagg \circ u_1 \eqs \untag \circ \tagg \circ u_2 $. \item \label{pt:excore-ul-l} For all pure terms $u^\pure:X\to P$, $v^\pure:X\to\empt$, the equation $ u \eqs \copa_P \circ v$ is $T_\excore$-equivalent to $\tagg \circ u \eqs v $. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item Implications from left to right are clear. Conversely, if $\untag \circ \tagg \circ u_1 \eqs \untag \circ \tagg \circ u_2 $, then using the axiom (ax) and the rule (subs$_\eqw$) we get $u_1 \eqw u_2$. Since $u_1$ and $u_2$ are pure this means that $u_1 \eqs u_2$. \item First, since $\tagg \circ \copa_P :\empt\to\empt $ is a propagator we have $\tagg \circ \copa_P \eqs \id_\empt$. Now, if $ u \eqs \copa_P \circ v$ then $ \tagg \circ u \eqs \tagg \circ \copa_P \circ v \eqs v$. Conversely, if $\tagg \circ u \eqs v$ then $\tagg \circ u \eqs \tagg \circ \copa_P \circ v$, and by Point~\ref{pt:excore-ul-lulu} this means that $u \eqs \copa_P \circ v$.
\end{enumerate} \end{proof} The operation $\untag$ in the core language can be used for decomposing the $\mathtt{try}\texttt{-}\mathtt{catch}$ construction of the programmers' language in two steps: a step for catching the exception, which is nested inside a second step corresponding to the whole $\mathtt{try}\texttt{-}\mathtt{catch}$ block. This corresponds to a translation of the programmers' language into the core language, as in \cite{DDER14-exc}, which is recalled below; then Proposition~\ref{prop:excore-impl} proves the correctness of this translation. In view of this translation we extend the core language with: \begin{itemize} \item for each $b^\ppg:P\to Y$, a catcher $(\catchcore(b))^\ctc:Y\to Y$ such that $\catchcore(b) \eqw \id_Y $ and $\catchcore(b)\circ\copa_Y \eqs b\circ\untag$: if the argument of $\catchcore(b)$ is non-exceptional then nothing is done, otherwise the parameter $p$ of the exception is recovered and $b(p)$ is run. \item for each $a^\ppg\!:\!X\to Y$ and $k^\ctc\!:\!Y\to Y$, a propagator $ (\trycore(a,k))^\ppg :X\to Y$ such that $ \trycore(a,k) \eqw k\circ a$: thus $\trycore(a,k)$ behaves as $k\circ a$ on non-exceptional arguments, but it always propagates exceptions. \end{itemize} Then, a translation of the programmers' language for exceptions into the core language is easily obtained: for each type $Y$, $\throw_Y^\ppg \!=\! \copa_Y \circ \tagg :P\to Y$, and for each $a^\ppg\!:\!X\!\to\! Y$, $b^\ppg\!:\!P\!\to\! Y$, $(\try (a) \catch (b))^\ppg \!=\! \trycore(a,\catchcore(b))\!:\!X\!\to\! Y$. This translation is correct: see Proposition~\ref{prop:excore-impl}. \begin{proposition} \label{prop:excore-impl} If the pure term $\copa_Y:\empt\to Y$ is a monomorphism with respect to propagators for each type $Y$, the above translation of the programmers' language for exceptions into the core language is correct. \end{proposition} \begin{proof} We have to prove that the image of each rule of $L_\exc$ is satisfied.
Recall that strong and weak equations coincide on $L_\exc$. \begin{itemize} \item (propagate) For each $a^\ppg:X\to Y$, the rules of $L_\excore$ imply that $a\circ \copa_X \eqs \copa_Y$, so that $a\circ \copa_X \circ \tagg \eqs \copa_Y \circ \tagg $. \item (recover) For each $u_1^\pure,u_2^\pure:X\to P$, if $\copa_Y \circ \tagg \circ u_1 \eqs \copa_Y \circ \tagg \circ u_2 $, then, since $\copa_Y$ is a monomorphism with respect to propagators, we have $\tagg \circ u_1 \eqs \tagg \circ u_2 $, so that, by Point~\ref{pt:excore-ul-lulu} in Lemma~\ref{lemm:excore-ul}, we get $u_1 \eqs u_2 $. \item (try) Since $\trycore(a_i,\catchcore(b)) \eqw \catchcore(b) \circ a_i$ for $i\in\{1,2\}$, we get $\trycore(a_1,\catchcore(b)) \eqw \trycore(a_2,\catchcore(b))$ as soon as $a_1\eqs a_2$. \item (try$_0$) For each $u^\pure:X\to Y$ and $b^\ppg:P\to Y$, we have $\trycore(u,\catchcore(b)) \eqw \catchcore(b) \circ u$ and $\catchcore(b) \circ u \eqw u $ (because $\catchcore(b)\eqw\id $ and $u$ is pure), so that $\trycore(u,\catchcore(b)) \eqw u $. \item (try$_1$) For each $u^\pure:X\to P$ and $b^\ppg:P\to Y$, we have $\trycore(\copa_Y \circ \tagg \circ u,\catchcore(b)) \eqw \catchcore(b) \circ \copa_Y \circ \tagg \circ u$ and $ \catchcore(b) \circ \copa_Y \eqs b \circ \untag$ so that $\trycore(\copa_Y \circ \tagg \circ u,\catchcore(b)) \eqw b \circ \untag \circ \tagg \circ u$. We have also $ \untag \circ \tagg \circ u \eqw u$ (because $\untag \circ \tagg \eqw\id $ and $u$ is pure), so that $\trycore(\copa_Y \circ \tagg \circ u,\catchcore(b)) \eqw b \circ u$.
\end{itemize} \end{proof} \begin{example}[Continuation of Example~\ref{ex:trycatch}]\label{ex:taguntag} Here we show that it is possible to separate the matching between normal or exceptional behavior from the recovery after an exceptional behavior: to prove that $\try(\mathtt{s}(\throw~3))\catch(\mathtt{p})$ is equivalent to $2$ in the core language, we first use the translation to get: $\trycore(\mathtt{s}\circ\copa\circ\tagg\circ{}3,\catchcore(\mathtt{p}))$. Then (empty$_\eqw$) shows that $\mathtt{s}\circ\copa\circ\tagg\circ{}3\eqw\copa\circ\tagg\circ{}3$. Now, the defining properties of $\trycore$ and $\catchcore$ show that $\trycore(\copa\circ\tagg\circ{}3,\catchcore(\mathtt{p})) \eqw \catchcore(\mathtt{p})\circ\copa\circ\tagg\circ{}3 \eqw \mathtt{p}\circ\untag\circ\tagg\circ{}3$. Finally, the axiom (ax) and (eq$_1$) give $\mathtt{p}\circ{}3\eqs{}2$. \end{example} In order to prove the completeness of the core decorated theory for exceptions, as for the proof of Theorem~\ref{theo:exc-complete}, we first determine canonical forms in Proposition~\ref{prop:excore-canonical}, then we study the equations between terms in canonical form in Proposition~\ref{prop:excore-equations}. Let us begin by proving the \emph{fundamental strong equation for exceptions}~(\ref{eq:excore-fundamental}): by replacement in the axiom (ax) we get $\tagg \circ \untag \circ \tagg \eqw \tagg $, then by rule (eq$_3$): \begin{equation} \label{eq:excore-fundamental} \tagg \circ \untag \eqs \id_\empt \end{equation} \begin{proposition} \label{prop:excore-canonical} \begin{enumerate} \item \label{pt:excore-canonical-acc} For each propagator $a^\ppg:X\to Y$, either $a$ is pure or there is a pure term $v^\pure:X\to P$ such that $ a^\ppg \eqs \copa_Y^\pure \circ \tagg^\ppg \circ v^\pure $. And for each propagator $a^\ppg:X\to \empt$ (either pure or not), there is a pure term $v^\pure:X\to P$ such that $ a^\ppg \eqs \tagg^\ppg \circ v^\pure $.
\item \label{pt:excore-canonical-ctc} For each catcher $f^\ctc:X\to Y$, either $f$ is a propagator or there is a propagator $a^\ppg:P\to Y$ and a pure term $u^\pure:X\to P$ such that $ f^\ctc \eqs a^\ppg \circ \untag^\ctc \circ \tagg^\ppg \circ u^\pure $. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item If the propagator $a^\ppg:X\to Y$ is not pure then it contains at least one occurrence of $\tagg^\ppg$. Thus, it can be written in a unique way as $ a = b \circ \tagg \circ v$ for some propagator $b^\ppg:\empt\to Y$ and some pure term $v^\pure:X\to P$. Since $b^\ppg:\empt\to Y$ we have $b^\ppg\eqs\copa_Y^\pure$, and the first result follows. When $Y=\empt$, it follows that $a^\ppg \eqs \tagg^\ppg \circ v^\pure$. When $a:X\to \empt$ is pure, one has $a \eqs \tagg^\ppg \circ (\copa_P \circ a)^\pure$. \item The proof proceeds by structural induction. If $f$ is pure the result is obvious; otherwise $f$ can be written in a unique way as $f = g \circ \mathtt{op} \circ u$ where $u$ is pure, $\mathtt{op}$ is either $\tagg$ or $\untag$ and $g$ is the remaining part of $f$. By induction, either $g$ is a propagator or $g \eqs b\circ \untag \circ \tagg \circ v$ for some pure term $v$ and some propagator $b$. So, there are four cases to consider. (1) If $\mathtt{op}=\tagg$ and $g$ is a propagator then $f$ is a propagator. (2) If $\mathtt{op}=\untag$ and $g$ is a propagator then by Point~\ref{pt:excore-canonical-acc} there is a pure term $w$ such that $u \eqs \tagg \circ w$, so that $f \eqs g^\ppg \circ \untag \circ \tagg \circ w^\pure$. (3) If $\mathtt{op}=\tagg$ and $g \eqs b^\ppg\circ \untag \circ \tagg \circ v^\pure$ then $f \eqs b\circ \untag \circ \tagg \circ v \circ \tagg \circ u$. Since $v: \empt\to P$ is pure we have $\tagg \circ v \eqs \id_\empt$, so that $f \eqs b^\ppg\circ \untag \circ \tagg \circ u^\pure$.
(4) If $\mathtt{op}=\untag$ and $g \eqs b^\ppg \circ \untag \circ \tagg \circ v^\pure$ then $f \eqs b \circ \untag \circ \tagg \circ v \circ \untag \circ u$. Since $v$ is pure, by (ax) and (subs$_\eqw$) we have $ \untag \circ \tagg \circ v \eqw v $. Besides, by (ax) and (repl$_\eqw$) we have $ v \circ \untag \circ \tagg \eqw v $ and $ \untag \circ \tagg \circ v \circ \untag \circ \tagg \eqw \untag \circ \tagg \circ v $. Since $\eqw$ is an equivalence relation these three weak equations imply $ \untag \circ \tagg \circ v \circ \untag \circ \tagg \eqw v \circ \untag \circ \tagg $. By rule (eq$_3$) we get $ \untag \circ \tagg \circ v \circ \untag \eqs v \circ \untag$, and by Point~\ref{pt:excore-canonical-acc} there is a pure term $w$ such that $u \eqs \tagg \circ w$, so that $f \eqs (b \circ v)^\ppg \circ \untag \circ \tagg \circ w^\pure$. \end{enumerate} \end{proof} Thanks to Proposition~\ref{prop:excore-canonical}, in order to study equations in the logic $L_\excore$ we may restrict attention to pure terms, propagators of the form $\copa_Y^\pure \circ \tagg^\ppg \circ v^\pure$ and catchers of the form $a^\ppg \circ \untag^\ctc \circ \tagg^\ppg \circ u^\pure$. \begin{proposition} \label{prop:excore-equations} \begin{enumerate} \item \label{prop:excore-equations-ctc-ctc} For all $a_1^\ppg,a_2^\ppg:P\to Y$ and $u_1^\pure,u_2^\pure:X\to P$, let $f_1^\ctc = a_1\circ \untag\circ \tagg\circ u_1:X\to Y$ and $f_2^\ctc = a_2\circ \untag\circ \tagg\circ u_2:X\to Y$, then $f_1 \eqw f_2 $ is $T_\excore$-equivalent to $ a_1\circ u_1 \eqs a_2\circ u_2 $ and $ f_1\eqs f_2 $ is $T_\excore$-equivalent to $ \{ a_1\eqs a_2 \;,\; a_1\circ u_1 \eqs a_2\circ u_2 \}$.
\item \label{prop:excore-equations-ctc-ppg} For all $a_1^\ppg:P\to Y$, $u_1^\pure:X\to P$ and $a_2^\ppg:X\to Y$, let $ f_1^\ctc = a_1\circ \untag\circ \tagg\circ u_1:X\to Y$, then $f_1 \eqw a_2$ is $T_\excore$-equivalent to $ a_1\circ u_1 \eqs a_2 $ and $f_1 \eqs a_2$ is $T_\excore$-equivalent to $ \{ a_1\circ u_1 \eqs a_2 \;,\; a_1\eqs \copa_Y \circ \tagg \}$. \item \label{pt:excorec-equations-ppg-ppg} Let us assume that $\copa_Y^\pure$ is a monomorphism with respect to propagators. For all $v_1^\pure,v_2^\pure:X\to P$, let $a_1^\ppg = \copa_Y\circ \tagg \circ v_1:X\to Y$ and $a_2^\ppg = \copa_Y\circ \tagg \circ v_2:X\to Y$. Then $ a_1\eqs a_2 $ is $T_\excore$-equivalent to $ v_1\eqs v_2 $. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item Rule (eq$_2$) implies that $f_1\eqs f_2$ if and only if $f_1\eqw f_2$ and $f_1 \circ \copa_X \eqs f_2 \circ \copa_X$. On the one hand, $f_1\eqw f_2$ if and only if $a_1\circ u_1 \eqs a_2\circ u_2$: indeed, for each $i\in\{1,2\}$, by (ax) and (subs$_\eqw$), since $u_i$ is pure we have $f_i \eqw a_i \circ u_i$. On the other hand, let us prove that $f_1 \circ \copa_X \eqs f_2 \circ \copa_X$ if and only if $a_1 \eqs a_2$. For each $i\in\{1,2\}$, the propagator $\tagg \circ u_i \circ \copa_X : \empt\to\empt$ satisfies $\tagg \circ u_i \circ \copa_X \eqs \id_\empt$, so that $f_i \circ \copa_X \eqs a_i \circ \untag $. Thus, $f_1 \circ \copa_X \eqs f_2 \circ \copa_X$ if and only if $a_1 \circ \untag \eqs a_2 \circ \untag$. Clearly, if $a_1 \eqs a_2$ then $a_1 \circ \untag \eqs a_2 \circ \untag$. Conversely, if $a_1 \circ \untag \eqs a_2 \circ \untag$ then $a_1 \circ \untag \circ \tagg \eqs a_2 \circ \untag \circ \tagg $, so that by (ax) and (repl$_\eqw$) we get $a_1 \eqw a_2$, which means that $a_1 \eqs a_2$ because $a_1$ and $a_2$ are propagators. \item Rule (eq$_2$) implies that $f_1\eqs a_2$ if and only if $f_1\eqw a_2$ and $f_1 \circ \copa_X \eqs a_2 \circ \copa_X$.
On the one hand, $f_1\eqw a_2$ if and only if $a_1\circ u_1 \eqs a_2$: indeed, by (ax) and (subs$_\eqw$), since $u_1$ is pure we have $f_1 \eqw a_1 \circ u_1$. On the other hand, let us prove that $f_1 \circ \copa_X \eqs a_2 \circ \copa_X$ if and only if $a_1\eqs \copa_Y \circ \tagg$, in two steps. Since $ a_2 \circ \copa_X : \empt\to Y$ is a propagator, we have $ a_2 \circ \copa_X \eqs \copa_Y$. Since $f_1 \circ \copa_X = a_1\circ \untag\circ \tagg\circ u_1 \circ \copa_X$ with $\tagg\circ u_1 \circ \copa_X:\empt\to\empt $ a propagator, we have $\tagg\circ u_1 \circ \copa_X \eqs \id_\empt$ and thus we get $ f_1 \circ \copa_X \eqs a_1\circ \untag$. Thus, $f_1 \circ \copa_X \eqs a_2 \circ \copa_X$ if and only if $a_1\circ \untag \eqs \copa_Y$. If $a_1\circ \untag \eqs \copa_Y$ then $a_1\circ \untag\circ \tagg \eqs \copa_Y\circ \tagg$, by (ax) and (repl$_\eqw$) this implies $a_1 \eqw \copa_Y\circ \tagg$, which is a strong equality because both members are propagators. Conversely, if $a_1 \eqs \copa_Y\circ \tagg $ then $a_1 \circ \untag \eqs \copa_Y\circ \tagg \circ \untag $, by the fundamental equation~(\ref{eq:excore-fundamental}) this implies $a_1 \circ \untag \eqs \copa_Y$. Thus, $a_1\circ \untag \eqs \copa_Y$ if and only if $a_1 \eqs \copa_Y\circ \tagg $. \item Clearly, if $ v_1\eqs v_2 $ then $ \copa_Y\circ \tagg \circ v_1\eqs \copa_Y\circ \tagg \circ v_2$. Conversely, if $ \copa_Y\circ \tagg \circ v_1\eqs \copa_Y\circ \tagg \circ v_2$ then since $\copa_Y$ is a monomorphism with respect to propagators we get $ \tagg \circ v_1\eqs \tagg \circ v_2$, so that $ \untag \circ \tagg \circ v_1\eqs \untag \circ \tagg \circ v_2$. Now, from (ax) we get $ v_1\eqw v_2 $, which means that $ v_1\eqs v_2 $ because $v_1$ and $v_2$ are pure. \end{enumerate} \end{proof} Assumption~\ref{ass:excore-equations} is the image of Assumption~\ref{ass:exc-equations} by the above translation. 
\begin{assumption} \label{ass:excore-equations} In the logic $L_\excore$, the type of parameters $P$ is non-empty, and for all $v_1^\pure:X\to P$ and $v_2^\pure: X\to Y$ with $X$ non-empty, let $a_1^\ppg = \copa_Y\circ \tagg \circ v_1 :X\to Y$. Then $ a_1^\ppg \eqs v_2^\pure$ is $T_\excore$-equivalent to $T_\mymaxz$. \end{assumption} \begin{theorem} \label{theo:excore-complete} Under Assumption~\ref{ass:excore-equations}, the theory of exceptions $T_\excore$ is Hilbert-Post complete with respect to the pure sublogic $L_\eqn$ of $L_\excore$. \end{theorem} \begin{proof} Using Corollary~\ref{coro:hpc-equations}, the proof is based upon Propositions~\ref{prop:excore-canonical} and \ref{prop:excore-equations}. It follows the same lines as the proof of Theorem~\ref{theo:exc-complete}, except when $X$ is empty: because of catchers the proof here is slightly more subtle. First, the theory $T_\excore$ is consistent, because (by soundness) it cannot be proved that $\untag^\ctc \eqs \copa_P^\pure$. Now, let us consider an equation between terms $f_1,f_2:X\to Y$, and let us prove that it is $T_\excore$-equivalent to a set of pure equations. When $X$ is non-empty, Propositions~\ref{prop:excore-canonical} and~\ref{prop:excore-equations}, together with Assumption~\ref{ass:excore-equations}, prove that the given equation is $T_\excore$-equivalent to a set of pure equations. When $X$ is empty, then $f_1\eqw \copa_Y$ and $f_2\eqw \copa_Y$, so that if the equation is weak or if both $f_1$ and $f_2$ are propagators then the given equation is $T_\excore$-equivalent to the empty set of equations between pure terms.
When $X$ is empty and the equation is $f_1 \eqs f_2$ with at least one of $f_1$ and $f_2$ a catcher, then by Point~\ref{prop:excore-equations-ctc-ctc} or~\ref{prop:excore-equations-ctc-ppg} of Proposition~\ref{prop:excore-equations}, the given equation is $T_\excore$-equivalent to a set of equations between propagators; but we have seen that each equation between propagators (whether $X$ is empty or not) is $T_\excore$-equivalent to a set of equations between pure terms, so that $f_1\eqs f_2$ is $T_\excore$-equivalent to the union of these sets of pure equations. \end{proof} \section{Verification of Hilbert-Post Completeness in Coq} \label{sec:coq} All the statements of Sections~\ref{sec:exc} and~\ref{sec:excore} have been checked in Coq. The proofs can be found in \url{http://forge.imag.fr/frs/download.php/680/hp-0.7.tar.gz}, as well as an almost dual proof for the completeness of the state. They share the same framework, defined in~\cite{DDEP13-coq}: \begin{enumerate} \item the terms of each logic are inductively defined through the dependent type named $\mathtt{term}$, which builds a new \texttt{Type} out of two input \texttt{Type}s. For instance, $\mathtt{term \ Y \ X}$ is the \texttt{Type} of all terms of the form $\mathtt{f\colon X\to Y}$; \item the decorations are enumerated: \texttt{pure} and \texttt{propagator} for both languages, and \texttt{catcher} for the core language; \item decorations are inductively assigned to the terms via the dependent type called $\mathtt{is}$. The latter builds a proposition (a \texttt{Prop} instance in Coq) out of a \texttt{term} and a decoration. Accordingly, \texttt{is pure (id X)} is a \texttt{Prop} instance; \item for the core language, we state the rules with respect to weak and strong equalities by defining them in a mutually inductive way. \end{enumerate} The completeness proof for the exceptions core language is 950 SLOC in Coq, whereas it is 460 SLOC in \LaTeX. 
Full certification runs in 6.745s on an Intel i7-3630QM @2.40GHz using the Coq Proof Assistant, v. 8.4pl3. The table below details the proof lengths and timings for each library. \begin{center} \begin{tabular}{ |l||l|l|l|l|} \hline \multicolumn{5}{|c|}{Proof lengths \& Benchmarks} \\ \hline package & source & length & length & execution time \\ & & in Coq & in \LaTeX & in Coq \\ \hline exc\_cl-hp \; & HPCompleteCoq.v \; &40 KB& 15 KB&6.745 sec. \\ exc\_pl-hp & HPCompleteCoq.v & 8 KB &6 KB&1.704 sec. \\ exc\_trans & Translation.v & 4 KB& 2 KB&1.696 sec. \\ st-hp & HPCompleteCoq.v & 48 KB& 15 KB &7.183 sec.\\ \hline \end{tabular} \end{center} The correspondence between the propositions and theorems in this paper and their proofs in Coq is given in Fig.~\ref{fig:coq-table}, and the dependency chart for the main results in Fig.~\ref{fig:coq-chart}. For instance, Proposition~\ref{prop:exc-equations} is expressed in Coq as: \scriptsize \begin{verbatim} forall {X Y} (a1 a2: term X Y) (v1 v2: term (Val e) Y), (is pure v1) /\ (is pure v2) /\ (a1 = ((@throw X e) o v1)) /\ (a2 = ((@throw X e) o v2)) -> ((a1 == a2) <-> (v1 == v2)). 
\end{verbatim} \normalsize \begin{figure}[ht] \renewcommand{\arraystretch}{1} \begin{center} \begin{tabular}{ |l|l|} \hline \multicolumn{2}{|c|}{ hp-0.7/exc$\_$trans/Translation.v} \\ \hline Proposition~\ref{prop:excore-impl} (propagate) & \texttt{propagate} \\ Proposition~\ref{prop:excore-impl} (recover) & \texttt{recover} \\ Proposition~\ref{prop:excore-impl} (try) & \texttt{try} \\ Proposition~\ref{prop:excore-impl} (try$_0$) & \texttt{try$_0$} \\ Proposition~\ref{prop:excore-impl} (try$_1$) & \texttt{try$_1$} \\ \hline \end{tabular} \\ \vspace{15pt} \begin{tabular}{ |l|l|} \hline \multicolumn{2}{|c|}{ hp-0.7/exc$\_$pl-hp/HPCompleteCoq.v} \\ \hline Proposition~\ref{prop:exc-canonical} \; & \texttt{can$\_$form$\_$th} \\ Proposition~\ref{prop:exc-equations} & \texttt{eq$\_$th$\_$1$\_$eq$\_$pu} \\ Assumption~\ref{ass:exc-equations} \; & \texttt{eq$\_$th$\_$pu$\_$abs} \\ Theorem~\ref{theo:exc-complete} & \texttt{HPC$\_$exc$\_$pl} \\ \hline \end{tabular}\\ \begin{tabular}{ |l|l|} \multicolumn{2}{c}{ \null } \\ \hline \multicolumn{2}{|c|}{ hp-0.7/exc$\_$cl-hp/HPCompleteCoq.v} \\ \hline Proposition~\ref{prop:excore-canonical}~Point~\ref{pt:excore-canonical-acc} & \texttt{can$\_$form$\_$pr} \\ Proposition~\ref{prop:excore-canonical}~Point~\ref{pt:excore-canonical-ctc} & \texttt{can$\_$form$\_$ca} \\ Assumption~\ref{ass:excore-equations} & \texttt{eq$\_$pr$\_$pu$\_$abs} \\ Proposition~\ref{prop:excore-equations}~Point~\ref{prop:excore-equations-ctc-ctc} & \texttt{eq$\_$ca$\_$2$\_$eq$\_$pr} \\ Proposition~\ref{prop:excore-equations}~Point~\ref{prop:excore-equations-ctc-ppg} & \texttt{eq$\_$ca$\_$pr$\_$2$\_$eq$\_$pr} \\ Proposition~\ref{prop:excore-equations}~Point~\ref{pt:excorec-equations-ppg-ppg} & \texttt{eq$\_$pr$\_$1$\_$eq$\_$pu} \\ Theorem~\ref{theo:excore-complete} & \texttt{HPC$\_$exc$\_$core} \\ \hline \end{tabular} \end{center} \renewcommand{\arraystretch}{1} \vspace{-15pt} \caption{Correspondence between theorems in this paper and their Coq counterparts} 
\label{fig:coq-table} \end{figure} \begin{figure}[H] \begin{framed} \hspace{-5mm} \small $ \xymatrix@C=.2pc@R=-.3pc{ \mathtt{can\_form\_ca} \ar[dr] &\\ & \mathtt{eq\_ca\_1\_or\_2\_eq\_pr} \ar[ddr] \ar[ddddddr]\\ \mathtt{eq\_ca\_pr\_2\_eq\_pr} \ar[ur] &\\ & & \mathtt{eq\_ca\_abs\_or\_2\_eq\_pu} \ar[ddr]\\ \mathtt{can\_form\_pr} \ar[dr]\\ \mathtt{eq\_pr\_1\_eq\_pu} \ar[r] & \mathtt{eq\_pr\_abs\_or\_1\_eq\_pu} \ar[uur] \ar[ddr] & &\mathtt{HPC\_exc}\\ \mathtt{eq\_pr\_pu\_abs} \ar[ur]&\\ & & \mathtt{eq\_ca\_abs\_2\_eq\_pu\_dom\_emp}\ar[uur]\\ & \mathtt{eq\_pr\_dom\_emp}\ar[ur]\\ } $ \normalsize \vspace{-2mm} \end{framed} \vspace{-15pt} \caption{Dependency chart for the main results} \label{fig:coq-chart} \end{figure} \section{Conclusion and future work} This paper is a first step towards the proof of completeness of decorated logics for computer languages. It has to be extended in several directions: adding basic features to the language (arity, conditionals, loops, \dots), proving completeness of the decorated approach for other effects (not only states and exceptions); the combination of effects should easily follow, thanks to Proposition~\ref{prop:hpc-compose}.
\section*{Introduction} Multiferroic materials have attracted increasing interest in recent years due to their potential technological applications; specifically magnetoelectric coupling, where the magnetic order can be influenced by an applied electric field and vice versa, may have intriguing applications in spintronics \cite{Kostylev2005,Tong2016,Zanolli2016} and novel electronic components \cite{Gajek2007,Yang2010}. Of particular interest are improper ferroelectric multiferroics, in which ferroelectricity is induced by frustrated magnetic interactions, forming a magnetic state that breaks inversion symmetry, for instance an incommensurate spin-cycloid \cite{Katsura2005,Kenzelmann2005}. Strong magnetoelectric coupling can occur in such materials \cite{Kimura2003,Wang2009}; however, the improper ferroelectric phase generally occurs only at low temperatures, typically below $\sim70$\,K \cite{Kimura2003}. For many of the technological applications to be realized, room-temperature magnetoelectrics are strongly desired. The static polarization in an improper ferroelectric phase can be understood as arising from the spin current or inverse Dzyaloshinskii-Moriya (DM) interaction \cite{Katsura2005}, which induces a polarization $\mathbf{P}\propto\left(\mathbf{S}_{n}\times\mathbf{S}_{n+1}\right)$, where $\mathbf{S}_{n}$, $\mathbf{S}_{n+1}$ are adjacent spins in the cycloid. The induced static polarization in many improper ferroelectric materials is weak (typically $<1$\,$\mu$C\,cm$^{-2}$) \cite{Kimura2003}. Further, improper ferroelectrics exhibit dynamic magnetoelectric coupling, whereby an oscillating electric field couples to a spin wave, or magnon. This results in a novel quasiparticle excitation at terahertz frequencies: the electromagnon \cite{Pimenov2006,Pimenov2006a,Sushkov2007,Katsura2007,Kida2009,Jones2014}. 
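As a quick numerical illustration of the inverse DM mechanism (a minimal sketch, independent of the cited works): for unit spins rotating uniformly in a plane, every cross product $\mathbf{S}_{n}\times\mathbf{S}_{n+1}$ points along the cycloid normal, so the contributions add to a net polarization, whereas they vanish for collinear order.

```python
import numpy as np

def cycloid_polarization(n_spins=12):
    """Sum of S_n x S_{n+1} for unit spins rotating uniformly in the xz-plane.

    For a cycloid every cross product points along +y (the cycloid normal),
    so the terms add instead of cancelling.
    """
    angles = np.linspace(0, 2 * np.pi, n_spins, endpoint=False)
    spins = np.stack([np.sin(angles), np.zeros(n_spins), np.cos(angles)], axis=1)
    crosses = np.cross(spins, np.roll(spins, -1, axis=0))  # S_n x S_{n+1}
    return crosses.sum(axis=0)

P = cycloid_polarization()
# Each of the n terms equals sin(2*pi/n) along y, so P is purely along
# the cycloid normal; a collinear chain would give P = 0.
```
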
Two distinct mechanisms that give rise to electromagnons have been discussed in the literature: Dzyaloshinskii-Moriya electromagnons, which are eigenmodes of a spin-cycloid induced by the DM interaction \cite{Katsura2007} and couple directly to oscillating electric fields \cite{DeSousa2008}; and exchange-striction electromagnons, which arise from a modulation of the isotropic Heisenberg exchange interaction proportional to a ($\mathbf{S}_{i}\cdot\mathbf{S}_{j}$) term in the Hamiltonian \cite{Sushkov2008,ValdesAguilar2009}. Electromagnons have been observed at low temperatures ($<70$\,K) in materials such as $R$MnO$_{3}$ and $R$Mn$_{2}$O$_{5}$ ($R=$ rare earths) \cite{Pimenov2006,Pimenov2006a,Sushkov2007,Katsura2007,Kida2009}, and at high temperatures in Cu$_{1-x}$Zn$_{x}$O alloys (213\,-\,230\,K in $x=0$ and 159\,-\,190\,K in $x=0.05$) \cite{Jones2014,Jones2014a}. An IR- and Raman-active electromagnon has also recently been observed at up to 250\,K in a Z-type hexaferrite \cite{Kadlec2016}. In this paper we first review the salient features of improper ferroelectricity in pure and spin-disordered CuO. We then demonstrate that the electromagnon can be used to probe order-to-order phase transitions, such as that found in CuO, where one phase is an improper ferroelectric. A detailed study of the antiferromagnetic/paraelectric to antiferromagnetic/ferroelectric phase transition using terahertz time-domain spectroscopy highlights the hysteretic nature of the phase transition and the influence of spin-disorder. \section*{Improper ferroelectricity in CuO} A promising material system in the study of improper ferroelectrics is cupric oxide (CuO), which exhibits a magnetically-induced ferroelectric phase with spin-cycloidal ordering at $\sim$230\,K \cite{Kimura2008}. CuO has a monoclinic crystal structure with space group $C2/c$, which can be visualized as zig-zagging Cu-O-Cu chains along the [101] and [10$\bar{1}$] directions. 
The magnetic phases of CuO have previously been characterized by neutron diffraction \cite{Forsyth1988,Ain1992} and ultrasound velocity measurements \cite{Villarreal2012}. Below 213\,K, the dominant magnetic interaction is antiferromagnetic superexchange between spins in Cu$^{2+}$ chains in the [10$\bar{1}$] direction ($J_{\mathrm{AFM}}\sim80$\,meV) \cite{Ain1989}. Weaker ferromagnetic superexchange interactions occur between spins in adjacent spin chains along the [101] ($J_{\mathrm{FM1}}\sim5$\,meV) and [010] ($J_{\mathrm{FM2}}\sim3$\,meV) directions \cite{Ain1989}. As the ratio of interchain to intrachain interactions is about 0.1, CuO can be described as a quasi-1D collinear Heisenberg antiferromagnet in the low temperature (AF1) phase \cite{Boothroyd1997,Shimizu2003}, consisting of two interpenetrating Cu$^{2+}$ sublattices with spins aligned along the $b$-axis. At 213\,K the spins on one sublattice flop into the $ac$-plane \cite{Yablonskii1990} and form an incommensurate spin-cycloid phase (AF2) with magnetic ordering vector $\mathbf{q}=(0.506, 0, -0.483)$ \cite{Forsyth1988,Ain1992}. A magnetically-induced ferroelectric polarization in this phase occurs in the $b$-direction \cite{Kimura2008}, which also exhibits ferroelectric hysteresis loops \cite{Kimura2008}. An electromagnon occurs in the multiferroic AF2 phase, which is active for oscillating electric fields parallel to the [101] direction and its strength is linked to the size of the static polarization \cite{Jones2014}. Between 229.3\,-\,230\,K an intermediate commensurate, collinear magnetic phase (AF3) forms \cite{Villarreal2012}, and above 230\,K is the paramagnetic phase (PM). The first-order nature of the AF1\,-\,AF2 phase transition has been observed by specific heat \cite{Junod1989,Loram1989,Gmelin1991} measurements. 
The occurrence of the AF1\,-\,AF2 phase transition can be understood using the mechanism proposed by Yablonskii \cite{Yablonskii1990}, where a biquadratic exchange term stabilizes the low-temperature AF1 state. The free energy of a system is given by $F=E-TS$, where $T$ is temperature and $S$ is entropy. The energy of the interactions between the spins in chains along the [10$\bar{1}$] direction is given by Yablonskii as \cite{Yablonskii1990}: \begin{equation} \begin{split} E=\sum_{n}\big[J_{1}\mathbf{S}_{n}\cdot\mathbf{S}_{n+1}+J_{2}\mathbf{S}_{n}\cdot\mathbf{S}_{n+2} \\ +I(\mathbf{S}_{n}\cdot\mathbf{S}_{n+1})(\mathbf{S}_{n}\cdot\mathbf{S}_{n+2})\big], \label{Yablonskii} \end{split} \end{equation} \noindent where $J_{1}$ is the nearest-neighbour ferromagnetic exchange interaction, $J_{2}$ is the next-nearest-neighbour antiferromagnetic exchange interaction, and $I$ is the biquadratic exchange interaction. Taking a mean-field approximation with the spatially averaged value of the spins as $S_{\mathrm{av}}$, the AF1 state will have a lower energy for $S_{\mathrm{av}}^{2}>J_{1}^{2}/(8IJ_{2})$. As temperature increases, the value of $S_{\mathrm{av}}$ decreases; therefore, at some point the AF2 state has a lower energy and the system undergoes a phase transition. The value of $S_{\mathrm{av}}$ is effectively controlled by disorder, which can be influenced in a number of ways, the simplest being a change in temperature, which changes the form of $F$ and drives the phase transition. Analogously, a change in the entropy of the system will also change the form of $F$; evidence for this has been provided by optical-pump X-ray probe measurements on CuO \cite{Johnson2012}. With the system held just below the phase transition at 207\,K, electrons were excited above the charge-transfer bandgap by 800\,nm femtosecond pulses, introducing spin-disorder into the system and thus changing the entropy. 
A larger reduction in the intensity of the X-ray peak associated with the AF1 phase compared to the peak associated with the AF2 phase was observed, hinting at an ultrafast phase transition driven by the introduction of spin-disorder. \section*{Quenched spin-disorder in CuO} An alternative method of introducing spin-disorder into the system which may be more desirable for technological applications is by alloying with non-magnetic ions: theoretical investigations predict that the introduction of non-magnetic impurities may stabilize the multiferroic phase at higher temperatures than the pure case \cite{Giovannetti2011}, and that hydrostatic pressure can broaden the multiferroic phase above room temperature \cite{Rocquefelte2013}. Alloying of Cu$_{1-x}M_{x}$O with a non-magnetic ion $M$ has been demonstrated to broaden the multiferroic phase, for $M=$ Zn, Co by studying the static ferroelectric polarization along $b$ \cite{Hellsvik2014}, and for $M=$ Zn using the electromagnon response \cite{Jones2014a}. Spin-disorder introduced by the non-magnetic ions breaks up long-range correlations between spins and suppresses the N\'{e}el temperatures \cite{Hone1975}, reducing the AF1\,-\,AF2 transition to $\sim159$\,K and the AF2\,-paraelectric transition to $\sim190$\,K for $M=$ Zn and $x=0.05$ \cite{Jones2014a}. This equates to broadening the multiferroic phase from 17\,K to 30\,K. The electromagnon is preserved in the AF2 phase of Cu$_{0.95}$Zn$_{0.05}$O, with the same E\,$\parallel$\,[101] selection rule as in the pure case \cite{Jones2014,Jones2014a}. The change in magnetic or lattice structure at a first-order phase transition is associated with many functional properties that emerge in the ordered phases \cite{Roy2013,Roy2014}. First-order phase transitions are characterized by a latent heat, due to a difference in entropy between the two phases requiring the system to absorb a fixed amount of energy in order for the transition to occur. 
This leads to phase coexistence, where some parts of the material have undergone the transition and others have not. In alloys, the dopant concentration can influence the nature of both the phase transition and the functional properties of interest \cite{Roy2013,Roy2014}. Imry and Wortis demonstrated that a first-order phase transition can be broadened by the introduction of quenched random disorder into the system \cite{Imry1979}. The correlation length of interactions can be reduced by local impurities, causing different parts of the material to undergo a first-order phase transition at different temperatures, appearing to round off the discontinuity in the order parameter or other related experimental observables. Experimentally, a phase transition can be classified as first-order by the observation of a discontinuous change in the order parameter and a latent heat, phase coexistence, or hysteresis with respect to a control variable (such as temperature or external magnetic field). It is not necessary to observe hysteresis in the order parameter, as many experimental observables can be hysteretic over a first-order phase transition, exemplified by the observation of hysteresis in magneto-optical properties during the melting of a superconducting vortex-solid \cite{Soibel2000}. Broadening of first-order phase transitions by disorder may obscure discontinuities in experimental observables, making the latent heat of the transition difficult to determine \cite{Roy2014,Imry1979}. As such, hysteresis or phase coexistence are required to observe the first-order nature of the transition. Ideally, one should look for hysteresis in an experimental observable that is uniquely linked to the multiferroic state (e.g.\ the ferroelectric polarization), rather than observables that only change marginally across the transition (e.g.\ ultrasound velocity, magnetization). 
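The Imry-Wortis rounding described above can be caricatured in a few lines: give each region of the sample its own local transition temperature, drawn around a mean, and average the resulting step functions. The numbers below are purely illustrative, not a model of the measured data.

```python
import numpy as np

rng = np.random.default_rng(0)

def broadened_step(temps, tc_mean, tc_spread, n_regions=10000):
    """Fraction of sample regions that have transformed at each temperature,
    when every region has its own local transition temperature (quenched
    disorder in the Imry-Wortis sense)."""
    local_tc = rng.normal(tc_mean, tc_spread, n_regions)
    return np.array([(local_tc <= t).mean() for t in np.atleast_1d(temps)])

temps = np.array([212.0, 213.0, 214.0])
sharp = broadened_step(temps, tc_mean=213.0, tc_spread=0.01)  # near-ideal step
broad = broadened_step(temps, tc_mean=213.0, tc_spread=1.0)   # disorder-rounded
```

With negligible spread the transformed fraction jumps from 0 to 1 within a fraction of a kelvin; with a 1\,K spread the same transition is smeared over several kelvin, mimicking the rounded onset seen in the alloyed sample.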
The pyroelectric current method is commonly used to study the polarization of magnetically-induced ferroelectrics: a large electric field is applied while in the multiferroic state, which is then removed and the pyroelectric current on cooling or heating through the ferroelectric/paraelectric phase transition is recorded and integrated to yield the polarization. This method often only measures the polarization as the temperature is swept out of the multiferroic state, and it is therefore challenging to study thermal hysteresis. As the electromagnon in Cu$_{1-x}$Zn$_{x}$O is experimentally observed via the dynamic excitation of the ferroelectric polarization, it cannot be observed without the AF2 phase being present within the material. The excitation energy and absorption strength of the electromagnon in CuO have previously been shown to closely track the size of the static polarization $P_{[010]}$ in the AF2 phase \cite{Jones2014}, which is in turn intimately linked to the magnetic order. Theoretical investigations of electromagnons in CuO indicate that the electromagnon energy is related to the size of the magnetoelectric coupling parameters in the spin Hamiltonian \cite{Cao2015}. Hence the properties of the electromagnon can provide a direct probe of the magnetic interactions giving rise to multiferroicity, and their evolution as the material undergoes a phase transition. \section*{Experimental Methods} \begin{figure*}[tb] \includegraphics[width=1.0\textwidth]{ContourComp.pdf} \caption{\small Change in terahertz absorption coefficient $\Delta\alpha$ with respect to the low temperature phase over the AF1\,-\,AF2 phase transition in Cu$_{1-x}$Zn$_{x}$O alloys, with \textbf{(a)} $x=0$ and \textbf{(b)} $x=0.05$.} \label{Figure1} \end{figure*} Single crystals of CuO and Cu$_{0.95}$Zn$_{0.05}$O were prepared by methods described elsewhere \cite{Prabhakaran2003,Jones2014a}. 
Samples of both materials were cut from the boule and aligned using Laue X-ray diffraction to have a ($10\bar{1}$) surface normal, giving the [101] and [010] crystallographic directions in the plane. Samples were double-side polished, resulting in a thickness of $1.28$\,mm for the CuO and $0.89$\,mm for the Cu$_{0.95}$Zn$_{0.05}$O. The electromagnon response of the samples was measured by terahertz time-domain spectroscopy (THz-TDS) \cite{Jepsen2011,Lloyd-Hughes2012}. THz-TDS directly measures both the amplitude and the phase of the THz electric field after transmission through the sample, allowing the complex refractive index of the sample $\widetilde{n}=n+i\kappa$ to be determined. In order to avoid the influence of absorption from the broad $A^{3}_{u}$ phonon mode \cite{Jones2014a} or birefringent effects resulting from sample misalignment \cite{Jones2014,Mosley2017}, the relative change in terahertz absorption coefficient with respect to the low temperature AF1 phase $\Delta\alpha=\alpha(T)-\alpha(T_{\mathrm{AF1}})$, where $\alpha=2\omega\kappa/c$, was used instead of a vacuum reference. Single-cycle, linearly-polarized pulses of THz radiation were generated using a wide-area interdigitated GaAs photoconductive emitter and detected via electro-optic sampling in ZnTe. The entire THz path inside the spectrometer was purged with dry N$_{2}$ gas in order to remove the influence of water vapour, which has strong absorption features at THz frequencies \cite{VanExter1989}. Samples were mounted in a cryostat such that the THz electric field was parallel to the [101] crystallographic direction, where the electromagnon absorption is strongest \cite{Jones2014,Mosley2017}, and positioned at the sample focus of the spectrometer. 
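For concreteness, the conversion from the measured extinction coefficient $\kappa$ to the relative absorption change $\Delta\alpha$ used above can be sketched as follows (a minimal outline, not the authors' analysis code):

```python
import numpy as np

C = 299792458.0  # speed of light in m/s

def absorption_coeff(freq_thz, kappa):
    """Power absorption coefficient alpha = 2*omega*kappa/c, in m^-1."""
    omega = 2 * np.pi * np.asarray(freq_thz) * 1e12  # rad/s
    return 2 * omega * np.asarray(kappa) / C

def delta_alpha(freq_thz, kappa_t, kappa_af1):
    """Change in absorption relative to the low-temperature AF1 reference."""
    return absorption_coeff(freq_thz, kappa_t) - absorption_coeff(freq_thz, kappa_af1)

# At 0.72 THz, a change in extinction coefficient of 0.01 corresponds to
# a Delta-alpha of roughly 300 m^-1 (about 3 cm^-1).
```
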
On heating and cooling the sample temperature was increased or decreased in discrete steps of 0.1\,K, with a wait time of 2 minutes before beginning data acquisition at each step, to ensure the sample was in thermal equilibrium with the cold finger of the cryostat. Data acquisition took 5 minutes at each step. The wait time and measurement time were the same for all scans on both samples. \section*{Results and Discussion} \subsection*{Electromagnon response over the AF1\,-\,AF2 phase transition} To investigate the effects of zinc alloying on the AF1\,-\,AF2 phase transition in Cu$_{1-x}$Zn$_{x}$O, the change in terahertz absorption coefficient $\Delta\alpha$ over the AF1\,-\,AF2 phase transition is presented in Fig.\,\ref{Figure1}(a) for $x=0$ and Fig.\,\ref{Figure1}(b) for $x=0.05$. Samples were heated over a 5\,K range around the AF1\,-\,AF2 transition temperatures ($\sim213$\,K for $x=0$ and $\sim160$\,K for $x=0.05$), and $\Delta\alpha$ measured at each step. The electromagnon is observable in both samples as a peak in $\Delta\alpha$, at 0.72\,THz for $x=0$ and at 0.85\,THz for $x=0.05$, consistent with previous observations \cite{Jones2014,Jones2014a}. In the case of pure CuO, the electromagnon onset is rapid, with $\Delta\alpha$ rising from zero to a maximum in around 1.5\,K. Comparatively, in the Zn-alloyed sample the onset is much slower, with the rise in absorption occurring in around 3.5\,K. At higher frequencies, a broad shoulder to the electromagnon around 1.2\,THz is also visible in the $x=0$ sample, and has a similar temperature-dependent onset as the main electromagnon. Intriguingly, this higher-frequency shoulder is not observed in the $x=0.05$ sample, suggesting that this mode may be either shifted in energy, suppressed or even disrupted entirely upon alloying with zinc. 
The electromagnon strength is intimately linked to the incommensurate magnetic order in the multiferroic phase, and therefore can give insight into the nature of the magnetic ordering. To evaluate the properties of the observed electromagnons in the Cu$_{1-x}$Zn$_{x}$O alloys, and to allow comparison with each other and others reported in the literature, a Drude-Lorentz oscillator model was used to fit the data. This gives the temperature-dependent change in dielectric function $\Delta\epsilon(\omega)=\epsilon(\omega,T_{2})-\epsilon(\omega,T_{1})$ as \begin{equation} \Delta\epsilon(\omega)=\frac{\Delta\epsilon\cdot\omega_{0}^{2}}{\omega^{2}_{0}-\omega^{2}-i\omega\Gamma}, \end{equation} \noindent where $\Delta\epsilon$ is the oscillator strength, $\omega_{0}$ is the oscillator frequency and $\Gamma$ is the linewidth. Fits to the experimentally measured change in dielectric function $\Delta\epsilon(\omega)$ of Cu$_{1-x}$Zn$_{x}$O with respect to the low-temperature phase are presented in Fig.\,\ref{Figure2}(a) for $x=0$ at 213\,K, and in Fig.\,\ref{Figure2}(b) for $x=0.05$ at 161\,K. A combination of two oscillators was required when fitting $\Delta\epsilon(\omega)$ for $x=0$, with the main electromagnon (blue) having a strength $\Delta\epsilon=0.065$, frequency $f=0.71$\,THz and linewidth $\Gamma=2.0$\,THz, while the broader shoulder mode (green) has $\Delta\epsilon=0.025$ and linewidth $\Gamma=8.0$\,THz. In order to correctly model the high-frequency features, the frequency of the shoulder mode was fixed at 1.2\,THz. Comparatively $\Delta\epsilon(\omega)$ for $x=0.05$ was well fit by a single oscillator (red), with strength $\Delta\epsilon=0.072$, frequency $f=0.84$\,THz and linewidth $\Gamma=2.0$\,THz. All values are consistent with those reported previously in the literature \cite{Jones2014,Jones2014a}. Experimental bandwidth was limited to 1.3\,THz in $x=0.05$ due to the softening and broadening of the phonon modes upon alloying with zinc \cite{Jones2014a}. 
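The Drude-Lorentz oscillator model above is straightforward to evaluate numerically. The sketch below writes it in frequency units (the factors of $2\pi$ cancel when $f$, $f_{0}$ and $\Gamma$ are all quoted in THz) and uses the parameters quoted above for the main $x=0$ mode; the least-squares fitting itself is not reproduced here.

```python
import numpy as np

def drude_lorentz(f_thz, d_eps, f0_thz, gamma_thz):
    """Change in dielectric function for a single Drude-Lorentz oscillator:
    d_eps * f0^2 / (f0^2 - f^2 - i*f*Gamma), all frequencies in THz."""
    f = np.asarray(f_thz, dtype=complex)
    return d_eps * f0_thz**2 / (f0_thz**2 - f**2 - 1j * f * gamma_thz)

# Main x = 0 electromagnon, using the values quoted in the text:
on_resonance = drude_lorentz(0.71, d_eps=0.065, f0_thz=0.71, gamma_thz=2.0)
# On resonance the response is purely imaginary (lossy): i * d_eps * f0 / Gamma.
```
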
\begin{figure}[t] \includegraphics[width=0.5\textwidth]{RiseFitParameters.pdf} \caption{\small \textbf{(a)} and \textbf{(b)} are the temperature-induced change in dielectric function $\Delta\epsilon(\omega)$ (black dots) across the AF1\,-\,AF2 phase transition in Cu$_{1-x}$Zn$_{x}$O alloys with $x=0$ and $x=0.05$, respectively. Black lines are Drude-Lorentz fits comprising a single oscillator (red shaded area) for $x=0.05$ and two oscillators (blue and green shaded areas) for $x=0$. \textbf{(c)}, \textbf{(d)} and \textbf{(e)} are the best fit parameters of the oscillator strength, frequency and linewidth $\Gamma$, respectively, over the phase transition for $x=0$. \textbf{(f)}, \textbf{(g)} and \textbf{(h)} are the corresponding best fit parameters over the phase transition for $x=0.05$. The colors of the data points correspond to the oscillators they refer to in \textbf{(a)} and \textbf{(b)}.} \label{Figure2} \end{figure} To quantify the evolution of the electromagnon across the phase transition in both samples, similar Drude-Lorentz fits to those in Fig.\,\ref{Figure2}(a) and \ref{Figure2}(b) were performed at each temperature step on cooling the samples over the AF1\,-\,AF2 phase transition. Fit parameters of oscillator strength, oscillator frequency and linewidth for $x=0$ are presented in Figs.\,\ref{Figure2}(c)\,-\,\ref{Figure2}(e). The strength of both oscillators begins increasing sharply at 212.2\,K, reaching a maximum in around 1.2\,K. The frequency of the main electromagnon mode continually red-shifts with increasing temperature over the phase transition, decreasing from 0.73\,THz at 212.2\,K to 0.69\,THz at 214.2\,K. The linewidth of both oscillators remains constant over the majority of the phase transition, narrowing slightly from its value when the excitation first emerges. Fit parameters for $x=0.05$ are presented in Figs.\,\ref{Figure2}(f)\,-\,\ref{Figure2}(h). 
The strength of the oscillator shows a gentler, broader increase with temperature, starting to increase around 157.8\,K and reaching a maximum after around 3.5\,K. Both frequency and linewidth demonstrate the same trends seen for $x=0$, with the frequency red-shifting from 0.9\,THz at 158\,K to 0.85\,THz at 161\,K, and the linewidth remaining constant except initially, when the oscillator strength was very small. Values of frequency and linewidth are presented for temperatures at which the oscillator strength was large enough to provide a meaningful fit. Theoretical investigations of electromagnons in CuO \cite{Cao2015} have modeled the magnetic properties using a spin Hamiltonian of the form \begin{equation} \begin{split} \widehat{H}=\sum_{ij}J_{ij}\mathbf{S}_{i}\cdot\mathbf{S}_{j}+\mathbf{D}_{ij}\cdot(\mathbf{S}_{i}\times\mathbf{S}_{j}) \\ -\sum_{i}(\mathbf{K}\cdot\mathbf{S}_{i})^{2}+\widehat{H}_{\mathrm{me}}, \end{split} \end{equation} \noindent where the first term accounts for superexchange interactions between adjacent spins $\mathbf{S}_{i}$, the second term describes the DM interaction, the third term describes single-ion anisotropy, and the final term accounts for magnetoelectric coupling. Increases to the magnitude of $\mathbf{D}$ and $\mathbf{K}$, the DM and anisotropic coupling constants respectively, were found to enhance the electromagnon energy, directly linking the electromagnon energy to the strength of the interactions responsible for the AF2 phase. An electromagnon frequency of 0.73\,THz (energy $\sim3$\,meV) as observed at 212.3\,K in Fig.\,\ref{Figure2}(d) corresponds to $\left|\mathbf{D}\right|=0.4$\,meV and $\left|\mathbf{K}\right|=0.6$\,meV \cite{Cao2015}. The frequency of the electromagnon mode in Cu$_{1-x}$Zn$_{x}$O is observed to increase considerably upon alloying of $x=0.05$, suggesting that zinc alloying alters the interactions between spins. 
From reference \cite{Cao2015}, an increase in electromagnon frequency to 0.9\,THz (energy $\sim3.7$\,meV) as observed at 158.1\,K in Fig.\,\ref{Figure2}(g) corresponds to either an increase in $\left|\mathbf{D}\right|$ to 0.65\,meV, or an increase in $\left|\mathbf{K}\right|$ to 0.95\,meV, or a smaller simultaneous increase in both. This alteration of magnetic interactions may be understood with reference to the ``order-by-disorder'' mechanism proposed by Henley \cite{Henley1989}, which describes the stabilization of a non-collinear spin state with respect to a collinear spin state, as quenched disorder favors spins in different sublattices to be oriented perpendicular to each other. Simulations have shown that the multiferroic phase in CuO can be stabilized by alloying with non-magnetic ions \cite{Hellsvik2014}. Density functional theory (DFT) calculations for pure CuO give a difference in energy between the AF2 and AF1 phase $\Delta{E}=E_{\mathrm{AF2}}-E_{\mathrm{AF1}}=2.15$\,meV per Cu, parameterised by $\Delta{E}=4IS^{4}$ where $S=1/2$ and $I$ is the biquadratic exchange coupling constant. Similar DFT calculations for alloyed CuO show a reduction in the energy difference between the AF1 and AF2 states, estimating a reduction to $\Delta{E}=1.94$\,meV per Cu for 5\% zinc alloying. Two contributions to this reduction in energy were determined, with similar sized contributions for Zn alloying: modification of the biquadratic exchange interactions between the non-magnetic impurity and spins on the other sublattice, and an alteration of the local Weiss field around the non-magnetic impurities which acts to orient spins on the same sublattice perpendicular to those on the other sublattice by the Henley mechanism. This change in the Weiss field around the non-magnetic ions will alter the value of the anisotropic coupling constant $\mathbf{K}$. 
A change in the DM coupling constant $\mathbf{D}$ can also occur as the DM interaction is sensitive to the Cu-O-Cu bond angle \cite{Katsura2005}, which changes on alloying with zinc \cite{Jones2014}. \subsection*{Disorder-broadening of the AF1\,-\,AF2 phase transition} \begin{figure}[t] \includegraphics[width=0.5\textwidth]{ElectromagnonRiseComp.pdf} \caption{\small Temperature-dependent change in the oscillator strength of the electromagnon, over the AF1\,-\,AF2 phase transition, in Cu$_{1-x}$Zn$_{x}$O alloys with $x=0$ (blue points) and $x=0.05$ (red points).} \label{Figure3} \end{figure} For a direct comparison of the width of the AF1\,-\,AF2 phase transition for $x=0$ and $x=0.05$, Fig.\,\ref{Figure3} presents the oscillator strength of the two main modes from Figs.\,\ref{Figure1}(a) and \ref{Figure1}(b), normalized by their maximum value. The change in temperature $\Delta{T}$ is defined relative to the temperature at which the derivative of the oscillator strength with respect to temperature, $d(\Delta\epsilon)/dT$, increases from zero. The oscillator strength rises from zero to maximum in around $\Delta{T}_{1}=1.6$\,K for $x=0$ and $\Delta{T}_{2}=3.9$\,K for $x=0.05$. Thus a broadening of 2.3\,K occurs for the AF1\,-\,AF2 phase transition upon alloying with 5\% zinc, a ratio of $\Delta{T}_{2}/\Delta{T}_{1}=2.6$. This broadening of the phase transition occurs due to the quenched random spin-disorder introduced upon alloying with non-magnetic zinc ions. As mentioned previously, CuO can be regarded as a quasi-1D collinear Heisenberg antiferromagnet in the commensurate AF1 phase, consisting of spin chains along the [10$\bar{1}$] direction. Alloying of $x=0.05$ has been shown to suppress the N\'{e}el temperatures in Cu$_{1-x}$Zn$_{x}$O \cite{Jones2014a} due to spin disorder breaking communication along the spin chains, which reduces the correlation length to the average impurity separation. 
During formation of the alloy, the local impurity density in the melt will vary slightly around an average value, and this variation becomes frozen-in as the alloy crystallizes. The correlation length of interactions therefore varies depending on the local impurity density, and hence so does the local N\'{e}el temperature. In local regions where the impurity density is higher than average the N\'{e}el temperature will be lower than average, and vice versa. This causes the AF1\,-\,AF2 phase transition to occur over a range of temperatures depending on the local impurity density \cite{Imry1979}. The electromagnon excitation cannot be observed without the presence of the AF2 phase, as its strength is strongly linked to the size of the magnetically-induced static polarization in the AF2 phase \cite{Jones2014}; as such the normalised oscillator strength in Fig.\,\ref{Figure3} can be interpreted as the ratio of the amount of AF2 to AF1 phase present in the sample. The emergence of the electromagnon upon heating the sample through the phase transition therefore corresponds to initial nucleation of the AF2 phase. The local variation in N\'{e}el temperature resulting from quenched spin-disorder in the $x=0.05$ sample causes nucleation to occur at different temperatures in different regions of the sample, evidenced by the broader increase in electromagnon strength with temperature. \subsection*{Hysteresis in the electromagnon response} \begin{figure}[t] \includegraphics[width=0.5\textwidth]{HysteresisComp.pdf} \caption{\small Temperature-dependent hysteresis observed in the electromagnon response across the AF1\,-\,AF2 phase transition in Cu$_{1-x}$Zn$_{x}$O alloys with \textbf{(a)} $x=0$ and \textbf{(b)} $x=0.05$. Upwards facing arrows denote increasing temperature (also denoted by solid circles) and downwards facing arrows denote decreasing temperature (solid squares). 
$T_{1}$ and $T_{3}$ are the temperatures at which the oscillator strength begins to deviate significantly from zero or the maximum, respectively. $T_{2}$ and $T_{4}$ are the mid-points of the phase transition during heating and cooling, respectively. $T^{**}$ and $T^{*}$ are the limits of metastability for superheating and supercooling, respectively.} \label{Figure4} \end{figure} Temperature-dependent hysteresis is one of the experimental signatures of a first-order phase transition, and should be present in many experimental observables related to an order parameter involved in the phase transition. As a natural extension to the analysis of Fig.\,\ref{Figure3}, we show here that the strength of the electromagnon can be used to observe hysteretic behaviour at the AF1\,-\,AF2 phase transition on heating and cooling of the sample. The electromagnon response in Cu$_{1-x}$Zn$_{x}$O alloys as the temperature is first increased and then decreased across the AF1\,-\,AF2 phase transition is presented in Fig.\,\ref{Figure4}(a) for $x=0$ and in Fig.\,\ref{Figure4}(b) for $x=0.05$. Hysteresis is clearly visible in both pure and zinc alloyed samples, with the centre of the phase transition occurring at higher temperatures when the temperature was increased over the phase transition compared to when it was decreased. The centre points of the increasing and decreasing temperature measurements, $T_{2}$ and $T_{4}$ respectively, are separated by 0.2\,K in the $x=0$ sample and 0.5\,K in the $x=0.05$ sample. The ratio between these values of 2.5 is similar to the degree of disorder-induced broadening of the phase transition observed in Fig.\,\ref{Figure3}. Subsequent heating and cooling measurements reproduced the same hysteretic behaviour. 
\begin{figure}[t] \includegraphics[width=0.5\textwidth]{FreeEnergyFig3.pdf} \caption{\small Schematic diagram of the free energy $f$ as temperature is increased and decreased across a first-order phase transition, with respect to the order parameter $\phi$. $T_{N}$ is the thermodynamic transition temperature, $T^{*}$ is the metastable limit of supercooling, and $T^{**}$ is the metastable limit of superheating.} \label{Figure5} \end{figure} Here we consider a simple phenomenological description of the free energy with reference to Landau theory for a first-order phase transition, in order to describe the observed hysteretic behaviour. Advanced treatments more suitable for multiferroics have been reported, for instance including the magnetoelectric coupling between a uniform polarization and incommensurate magnetic order \cite{Harris2007}, or non-local Landau theories that describe spin-lattice coupling \cite{Plumer2008,Villarreal2012}. The free energy density of the system with respect to a control variable, here taken to be the temperature $T$, can be expressed in terms of the order parameter $\phi$. One possibility by which a first-order phase transition can occur is if the free energy density has a third-order term in $\phi$, such as \begin{equation} f(T,\phi)=\frac{r(T)}{2}\phi^{2}-w\phi^{3}+u\phi^{4}, \end{equation} \noindent where $w$ and $u$ are positive and temperature-independent. In many systems third-order terms in $\phi$ are forbidden by symmetry. In such a case, a first-order phase transition can still occur if the free energy has a negative quartic term with a positive sixth-order term for stability, such as \begin{equation} f(T,\phi)=\frac{r(T)}{2}\phi^{2}-w\phi^{4}+u\phi^{6}. \label{FOLandau} \end{equation} \noindent Fourth- and sixth-order terms can arise in magnetoelectrics, for instance as a result of strong spin-lattice coupling \cite{Plumer2008}, and such Landau models have been developed for CuO \cite{Villarreal2012,Quirion2013}. 
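For the quartic-sextic form in Eq.\,(\ref{FOLandau}), the thermodynamic transition and the two metastability limits can be located by elementary algebra; a minimal check, treating $r$ as the temperature-like control parameter and assuming illustrative constants $w=u=1$ (hypothetical values, exact rational arithmetic):

```python
from fractions import Fraction as F

# f(phi) = (r/2) phi^2 - w phi^4 + u phi^6 with w, u > 0 (here w = u = 1,
# purely illustrative). Nonzero extrema solve 3u x^2 - 2w x + r/2 = 0, x = phi^2.
w, u = F(1), F(1)

# Supercooling limit: d2f/dphi2 at phi = 0 equals r, so r* = 0.
r_star = F(0)
# Superheating limit: the nonzero minimum disappears when the discriminant
# of 3u x^2 - 2w x + r/2 vanishes, giving r** = 2 w^2 / (3 u).
r_star2 = 2 * w**2 / (3 * u)
# First-order point: f(phi_min) = 0 and f'(phi_min) = 0 together give
# phi_min^2 = w / (2u) and r_N = w^2 / (2u).
x = w / (2 * u)
r_N = 2 * w * x - 2 * u * x**2                    # from f(phi_min) = 0
assert 3 * u * x**2 - 2 * w * x + r_N / 2 == 0    # f'(phi_min) = 0 holds
assert r_star < r_N < r_star2                     # supercool < T_N < superheat
```

With $r(T)$ increasing in $T$, the ordering $r^{*}<r_{N}<r^{**}$ reproduces the sequence $T^{*}<T_{N}<T^{**}$ used in the discussion of Fig.\,\ref{Figure5}.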
Substantial spin-lattice coupling has been reported in Cu$_{1-x}$Zn$_x$O as the phonon modes alter across the AF1-AF2 transition \cite{Jones2014a}. Note that the order parameter can be complex, to allow for non-collinear magnetic states \cite{Quirion2013}. A schematic diagram of the form of the free energy density around a first-order phase transition with respect to $\phi$ is presented in Fig.\,\ref{Figure5}. Here we assume the order parameter for CuO to be represented by the magnetisation along $b$ for one magnetic sublattice, which is zero in the spin-cycloid phase (AF2) and a finite value $\phi_{\mathrm{min}}$ in the antiferromagnetic AF1 phase. When the temperature is higher than the thermodynamic transition temperature $T_{N}$ the free energy has one global minimum, at $\phi=0$, corresponding to the AF2 phase. At $T=T_{N}$ there are two minima of the free energy and the AF1 and AF2 phases can coexist. These minima are located at $\phi=0$ and $\phi=\phi_{\mathrm{min}}$ and separated by an energy barrier. When the temperature drops below $T_{N}$ the global minimum of the system shifts to $\phi_{\mathrm{min}}$, corresponding to the AF1 phase; however $\phi=0$ remains a local minimum of the system, with the two phases still separated by an energy barrier, which will reduce in height with decreasing temperature. Eventually the height of the energy barrier will reduce to zero at the limit of metastability for supercooling $T=T^{*}$, the temperature at which $d^{2}f/d\phi^{2}=0$ at $\phi=0$ and none of the AF2 phase can exist. An analogous process occurs for increasing temperature, where $\phi_{\mathrm{min}}$ continues to be a local minimum of the system until the limit of metastability for superheating $T=T^{**}$, the temperature at which $d^{2}f/d\phi^{2}=0$ at $\phi=\phi_{\mathrm{min}}$ and none of the AF1 phase can exist. 
It is these superheating and supercooling effects that give rise to the hysteresis in experimental observables with respect to the control parameters. The hysteretic evolution of the oscillator strength with temperature for both pure and zinc alloyed samples can be explained with reference to the free energy diagram in Fig.\,\ref{Figure5}, and the nomenclature used to describe hysteresis of disorder-broadened first-order phase transitions outlined in reference \cite{Roy2014}. In the case of an ideally pure compound and in the absence of temperature fluctuations, the high or low temperature states can be taken to the limit of metastability at $T^{*}$ and $T^{**}$ respectively, where the phase transition occurs as a sharp discontinuity in the hysteretic parameter being observed. In any real system, temperature fluctuations will perturb the system and destroy the metastable state at some intermediate temperature between $T_{N}$ and $T^{*}$ or $T^{**}$. The small amount of disorder present in any real crystal from impurities or defects will also cause a slight rounding of the transition. Both of these effects are observable in the hysteresis of pure CuO in Fig.\,\ref{Figure4}(a). On heating, the electromagnon excitation emerges at temperature $T_{1}$, signifying the onset of nucleation of the AF2 phase, and grows in strength until reaching a maximum at 213.4\,K. On cooling, the electromagnon strength begins to differ from that observed during heating around 213.4\,K, signifying this temperature as $T^{**}$, and begins to decrease in strength at temperature $T_{3}$ signifying the onset of nucleation of the AF1 phase. The temperature at which the electromagnon strength reduces to zero on cooling can therefore be identified as $T^{*}$. The system is observed to be in the metastable state for superheating and supercooling over a temperature range of 0.4\,K in both cases. 
Similar trends can be seen for the zinc alloyed sample in Fig.\,\ref{Figure4}(b); however, in this case the phase transition is broadened by spin-disorder. The metastable phases for superheating and supercooling are also broadened upon zinc alloying, with both increasing to a width of 1\,K. As discussed previously, simulations of the magnetic interactions in alloyed CuO showed a decrease in the energy difference between the AF1 and AF2 states relative to pure CuO \cite{Hellsvik2014}. However, the metastable phase observed in the zinc alloyed sample is wider than in the pure sample. This may be partially due to smaller thermal fluctuations at the lower N\'{e}el temperatures of the alloyed sample, but could also be due to modifications to the exchange coupling parameters on alloying altering the form of the free energy barrier between the two phases. Modification of the biquadratic exchange and higher order coupling terms by the non-magnetic ions, as discussed in terms of the ``order-by-disorder'' mechanism above, could therefore possibly alter the free energy barrier height and increase the stability of the metastable phases in the zinc alloyed sample. \section*{Conclusion} Here we reported for the first time that the dynamic magnetoelectric response at terahertz frequencies can be used to probe hysteresis and spin-disorder broadening at magnetic phase transitions. The oscillator strength $\Delta\epsilon$ of the electromagnon in Cu$_{1-x}$Zn$_x$O was taken to represent the fraction of the magnetically-induced ferroelectric phase, AF2. By precisely tracking $\Delta\epsilon$ on cooling or heating across the phase transition between the commensurate AF1 phase and the ferroelectric AF2 phase we observed thermal hysteresis, indicating the first-order nature of this transition. The limits of metastability for superheating and supercooling were identified, and the metastable region was found to broaden for the alloy with greater spin-disorder. 
Alloying also enhanced the electromagnon energy from 3.0\,meV to 3.7\,meV, potentially as a result of enhanced DM interaction or greater single-ion anisotropy. The results were discussed within the context of a simple Landau theory of a first-order phase transition, with reference to recent advances in phenomenological and first-principles theoretical models of CuO. This work may have immediate impact by providing a new way to study the nature of magnetic phase transitions in multiferroics. In the longer term the increased understanding of multiferroics yielded by ultrafast spectroscopic methods, including terahertz time-domain spectroscopy, may help develop new magnetoelectric and multiferroic materials for spintronic applications. \section*{References}
\section{Introduction} Experiments based on liquid xenon are leading the search for the neutrinoless double beta decay of $^{136}$Xe due to the fast development of this detection technique in the last decade~\cite{Gando:2016ag}. More recently, experiments based on high pressure xenon time projection chambers (HPXe-TPCs)~\cite{GomezCadenas:2014jjgc, Galan:2016jg} have been proposed due to their better intrinsic energy resolution ($\sim$1\%~FWHM) and the access to topological features, which may provide extra discrimination from background events, keeping good signal efficiency. In this context, the use of Micromegas charge readout planes~\cite{Andriamonje:2010sa} in an HPXe-TPC has been studied by the T-REX project\footnote{T-REX webpage: http://gifna.unizar.es/trex}, leading to the following results: Micromegas readouts show extremely low levels of radioactivity (below 0.1~$\mu$Bq/cm$^2$ for both $^{214}$Bi and $^{208}$Tl)~\cite{Irastorza:2016ii}; they have been operated in xenon-trimethylamine (Xe-TMA) at 10 bar in realistic experimental conditions (30~cm diameter readout, 1200~channels, 38~cm drift), demonstrating an energy resolution of 3\%~FWHM at the Range of Interest (RoI)~\cite{Gonzalez:2016dg}; and the combination of high granularity readout planes and low diffusion (as measured in Xe-TMA) can reduce the background level by two to three orders of magnitude while keeping a signal efficiency of 40\%~\cite{Cebrian:2013sc}. Micromegas readout planes will be used in the first module of the PandaX-III experiment~\cite{Galan:2016jg}, to be installed at the China Jinping Underground Laboratory (CJPL)~\cite{Li:2015jl} by 2017. A half-life sensitivity of $10^{26}$~y (at 90\% CL) is expected after 3 years of live-time, assuming a~3\%~FWHM energy resolution at the Q-value (2458 keV), a background level of $\sim 10^{-4}$ counts keV$^{-1}$kg$^{-1}$y$^{-1}$ at the RoI and a signal efficiency of~35\%. 
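A back-of-envelope consistency check on the target quoted above (inputs taken from the text: the $\sim 10^{-4}$ counts keV$^{-1}$kg$^{-1}$y$^{-1}$ background index, the 40~keV RoI and 200~kg detector mass used later in the paper, and 3 years of live-time):

```python
# Expected background counts in the RoI for the quoted PandaX-III-like target.
b_index = 1e-4    # counts / (keV kg y), target background index
roi_keV = 40.0    # RoI width, keV
mass_kg = 200.0   # detector mass, kg
live_y = 3.0      # live-time, y

counts = b_index * roi_keV * mass_kg * live_y   # only a few counts in total
```

The result is of order a few counts over the full exposure, which is the regime in which the quoted half-life sensitivity becomes achievable.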
The potential of discrimination methods in this detection technique, which combines a low diffusion gas like Xe-TMA and readout planes with pixel sizes of $\sim$1~mm, is the subject of this paper and will be further developed in a future publication. \section{Simulation and discrimination algorithms} A 200~kg HPXe-TPC filled with Xe+1\%TMA at 10 bar has been simulated by Geant4 and REST codes~\cite{Iguaz:2015fji}, fixing a pixel size of 2~mm and an energy resolution of 1\%~FWHM at the RoI. In the analysis, tracks\footnote{Group of points for which we can always find a continuous path between any pair of points.} and blobs\footnote{A big energy deposit at the end of one electron path.} are found by two well-known algorithms of graph theory: the identification of connections (or tracks) and the search for the longest path. Signal events are then selected by three discrimination criteria: a fiducial area; a single-track condition, with some possible bremsstrahlung photons (energy below 40~keV) situated near the main track (12~cm maximum); and some limits in the blob charge and density. The distributions of these last two observables are shown in Fig.~\ref{fig:TopRejection} to illustrate their discrimination power. The little blob charge distribution (black line) is well separated from the background distributions, as already shown by the Gotthard experiment~\cite{Wong:1993uq}. Meanwhile, the blob density mainly rejects surface contaminations at the readout planes (yellow and brown lines), as they are deposited near the readouts, drift very short distances and show bigger charge densities. \begin{figure}[htb!] 
\centering \includegraphics[width=75mm]{Comb_LittleBlobCharge.pdf} \includegraphics[width=75mm]{Comb_BigBlobDensity.pdf} \caption{\it Distribution of the little blob charge (left) and the big blob density (right) in a 200~kg HPXe-TPC for signal events (black line) and different contaminations: vessel (blue and red lines for $^{208}$Tl and $^{214}$Bi, respectively), cathode (green and magenta lines) and Micromegas readout planes (yellow and brown lines). Events are in the RoI and are single-track.} \label{fig:TopRejection} \end{figure} Fixing a signal efficiency\footnote{This signal efficiency includes the intrinsic and the analysis efficiencies.} of 40\% and scaling the results by the material radioactivity levels in~\cite{Irastorza:2016ii}, the background energy spectra of Fig.~\ref{fig:TopSpectra} are obtained for the vessel and readout planes. The background level of these components in the RoI is reduced by two and three orders of magnitude respectively, down to $(8.1 \pm 0.2)\times 10^{-5}$ and $(1.3 \pm 0.1) \times 10^{-5}$ counts keV$^{-1}$kg$^{-1}$y$^{-1}$. The blob criteria reject a factor $> 15$ of background events for the vessel, a value better than that of the Gotthard experiment~\cite{Wong:1993uq}, and a factor $> 55$ for the readouts. The absence of a trigger signal (or $T_0$), which would be provided by registering the primary scintillation, only gives a factor of $\sim 2$ more background events in the RoI: this indicates that the implementation of a $T_0$ (unwanted due to the technological and radiopurity cost of a light readout) is not imperative. \begin{figure}[htb!] 
\centering \includegraphics[width=75mm]{BackSpec_Sim_Vessel2.pdf} \includegraphics[width=75mm]{BackSpec_Sim_Readouts2.pdf} \caption{\it Energy spectra generated by $^{208}$Tl and $^{214}$Bi events emitted from the vessel (left) and the Micromegas readout planes (right) in a 200~kg HPXe-TPC after the successive application of the selection criteria: raw data (black line), fiducial area (red line), single-track (blue line), blob observables (magenta line) and a trigger signal or T$_0$ (green line). The vertical violet dotted lines delimit a RoI of 40 keV, equivalent to a 1\%~FWHM energy resolution.} \label{fig:TopSpectra} \end{figure} \section{Prospects} These motivating results cannot be directly translated to a background model of an HPXe-TPC because the electronics and the readout planes must be included in the simulation, as done in~\cite{Iguaz:2015fji}. Then either the algorithms in~\cite{Cebrian:2013sc} are adapted to the two 2D views for each event, as done in~\cite{Wong:1993uq}, or the event's 3D track is first reconstructed from these two views. This second option has been extensively studied by liquid argon detectors, like ICARUS~\cite{Antonello:2013ma} and DUNE~\cite{Marshall:2015jsm}. However, its application to $0\nu\beta\beta$ is a challenging task, as MeV electron tracks are relatively short yet have a complex topology, and there are only two 2D views for each event. \ack We acknowledge the support from the European Commission under the European Research Council T-REX Starting Grant ref. ERC-2009-StG-240054 of the IDEAS program of the 7th EU Framework Program. F.I. and T.D. acknowledge the support from the \emph{Juan de la Cierva} and \emph{Ram\'on y Cajal} programs of the Spanish Ministry of Economy and Competitiveness. \section*{References}
\section{Introduction} Since Leinaas, Myrheim, and Wilczek's pioneering works~\cite{Leinaas77,Wilczek82}, anyons have been the subject of intense research. Over the last two decades, these exotic quasiparticles have attracted much attention because of their potential use for quantum computers \cite{Nayak08}. In two dimensions, their nontrivial braiding properties may indeed be a key ingredient to perform operations, and their robustness with respect to perturbations provides an efficient protection against decoherence \cite{Kitaev03}. Anyons are also inextricably linked to topologically ordered phases of matter and long-range entanglement (see Ref.~\cite{Wen17} for a recent review). Although a complete classification of these phases is still lacking, substantial progress has been recently achieved for bosonic topological orders \cite{Schoutens16,Wen16}. Some of these phases naturally emerge in microscopic models. In particular, the string-net model proposed by Levin and Wen in 2005 \cite{Levin05} allows for the realization of any doubled achiral topological phase and provides an ideal framework to study phase transitions that may arise in the presence of sufficiently strong external perturbations~\cite{Gils09_1,Gils09_3,Burnell12,Schulz13,Schulz14,Schulz15,Dusuel15,Schulz16_1,Schulz16_2,Mariens17}. In the string-net model, most perturbations considered so far are responsible for a transition between a topological phase and a trivial (non-topological) phase. These transitions are often successfully described by the theory of anyon condensation, also known as topological symmetry breaking, which is the counterpart of the Landau symmetry-breaking theory for conventional phases~\cite{Bais09_1,Burnell11_2,Mansson13,Kong14,Eliens14,Neupert16,Burnell18}. The goal of the present paper is to study phase transitions between two nontrivial phases. 
To this aim, we consider the string-net model in the ladder geometry for microscopic degrees of freedom obeying ${{SU}(2)_2}$ fusion rules given in Eqs.~(\ref{eq:fusion1}) and (\ref{eq:fusion2}). This system is likely the simplest where the competition between two non-Abelian theories can be considered on equal footing. There are two families of anyon theories obeying ${{SU}(2)_2}$ fusion rules, giving rise to two distinct quantum string-net condensed phases. These non-Abelian theories can be distinguished by their Frobenius-Schur indicator, which encodes their behavior with respect to bending operations \cite{Kitaev06,Rowell09}. The main result of this work is that the low-energy effective Hamiltonian corresponds to a transverse-field cluster model, which is known to display a second-order quantum phase transition described by the $so(2)_1$ conformal field theory \cite{Lahtinen15_1}. Higher-energy sectors are obtained by freezing spins in this model. \section{Fusion rules and Hilbert space} Following Levin and Wen \cite{Levin05}, we consider degrees of freedom defined on the links of a trivalent graph. In the present case, these links can be in three different states: $1, \sigma$, and $\psi$. The Hilbert space is spanned by the set of configurations satisfying the branching rules, which stem from the ${{SU}(2)_2}$ fusion rules \begin{eqnarray} 1 \times a = a \times 1&=& a, \:\: \forall a \in \{1,\sigma,\psi\}, \label{eq:fusion1} \\ \sigma \times \sigma= 1+\psi, \:\: \sigma \times \psi&=& \psi \times \sigma=\sigma, \:\: \psi\times \psi= 1. \label{eq:fusion2} \end{eqnarray} These rules imply, for instance, that if two links of a given vertex are in the state $\sigma$, the third link must be in the state $1$ or $\psi$. Violations of these constraints lead to states that are not considered here (charge excitations). 
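The fusion rules (\ref{eq:fusion1}) and (\ref{eq:fusion2}) are easy to encode and check; a minimal sketch (our own encoding, with plain string labels) verifying that the resulting fusion product is commutative and associative:

```python
from collections import Counter

# SU(2)_2 fusion rules for the labels 1, sigma, psi, as multisets of outcomes.
def fuse(a, b):
    if a == '1': return Counter([b])
    if b == '1': return Counter([a])
    if a == 'sigma' and b == 'sigma': return Counter(['1', 'psi'])
    if 'sigma' in (a, b): return Counter(['sigma'])   # sigma x psi = sigma
    return Counter(['1'])                             # psi x psi = 1

def fuse_with(channels, c):
    # Fuse every outcome in a multiset of fusion channels with a further label c.
    out = Counter()
    for x, n in channels.items():
        for y, m in fuse(x, c).items():
            out[y] += n * m
    return out

labels = ('1', 'sigma', 'psi')
for a in labels:
    for b in labels:
        assert fuse(a, b) == fuse(b, a)          # commutativity
        for c in labels:
            # (a x b) x c equals a x (b x c), written via commutativity
            assert fuse_with(fuse(a, b), c) == fuse_with(fuse(b, c), a)
```

For instance, $(\sigma\times\sigma)\times\sigma=(1+\psi)\times\sigma=2\,\sigma$, in agreement with the check above.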
For any trivalent graph with $N_{\rm v}$ vertices, the dimension of this Hilbert space is given by \cite{Gils09_3} \begin{equation} \label{eq:dimH} \dim \mathcal{H}= 2^{N_{\mathrm v}+1}+2^{N_{\mathrm v}/2}. \end{equation} There are 16 anyon theories obeying the aforementioned fusion rules. These theories can be divided into two sets according to the Frobenius-Schur indicator of $\sigma$ that can take two different values $\varkappa_\sigma=\pm 1$. Each set contains eight theories that have the same $F$-symbols, but different $S$-matrix and $T$-matrix \cite{Kitaev06,Rowell09}. \section{The string-net model} According to the string-net construction \cite{Levin05}, we can build, for any input theory, operators that project onto any state of the corresponding doubled (achiral) theory. In their seminal paper \cite{Levin05}, Levin and Wen detailed the action of these operators in terms of the $F$-symbols of the theory considered. This procedure is valid for theories with positive Frobenius-Schur indicators but one must be careful when a string $s$ has a negative $\varkappa_s$. As will be shown in a forthcoming paper~\cite{Simon18}, a simple way to properly take into account such a situation is to replace the quantum dimension $d_s>0$ of the particle $s$ by $\varkappa_s d_s$. Note that this prescription reproduces the result for the semion \mbox{theory \cite{Levin05}}, derived by considering a negative quantum dimension. Although Ref.~\cite{Levin05} focuses on hexagonal plaquettes, it is straightforward to obtain the action of these projectors on any type of plaquette (see Appendix \ref{app:Bp}). Matrix elements of these operators only involve $F$-symbols and $\varkappa_s$. Consequently, for the problem at hand, these projectors are identical for each set of theories with the same $\varkappa_\sigma$. 
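The dimension formula (\ref{eq:dimH}) can be checked by brute force on the smallest periodic two-leg ladder (the geometry studied below), with $N_{\rm v}=4$ vertices, two rungs and doubled top/bottom legs; the graph layout and enumeration here are our own sketch, not code from the references:

```python
from itertools import product

# Branching rules at a trivalent vertex: the multiset {a, b, c} must be one of
# {1,1,1}, {1,psi,psi}, {1,sigma,sigma}, {psi,sigma,sigma}.
def ok(a, b, c):
    return sorted((a, b, c)) in (['1', '1', '1'], ['1', 'psi', 'psi'],
                                 ['1', 'sigma', 'sigma'],
                                 ['psi', 'sigma', 'sigma'])

# Two-plaquette periodic ladder: rungs r0, r1; doubled top legs u0, u1 and
# bottom legs d0, d1 (periodic boundary conditions along the ladder).
labels = ('1', 'psi', 'sigma')
dim = sum(ok(r0, u0, u1) and ok(r1, u0, u1) and ok(r0, d0, d1) and ok(r1, d0, d1)
          for r0, r1, u0, u1, d0, d1 in product(labels, repeat=6))
N_v = 4
assert dim == 2**(N_v + 1) + 2**(N_v // 2)   # 32 + 4 = 36 states
```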
For convenience, in the following, we will alternatively refer to Ising theory for the set where $\varkappa_\sigma=+1$ and to ${{SU}(2)_2}$ for the set where $\varkappa_\sigma=-1$. \\ To study the competition between DIsing and D${{SU}(2)_2}$ topological phases (the prefix D stands for doubled and achiral \cite{Levin05}), we consider the Hamiltonian \begin{equation} H= - \cos \theta\: \sum_p B_p^{1^+} -\sin \theta \: \sum_p B_p^{1^-}, \label{eq:ham_LW} \end{equation} where $p$ runs over all plaquettes of the system. Operators $B_p^{1^+}$ and $B^{1^-}_{p}$ are projectors onto the vacua ${1^+}$ and ${1^-}$ of DIsing and D${{SU}(2)_2}$ theories in the plaquette $p$, respectively. We refer the reader to Refs.~\cite{Burnell12,Schulz16_1} for a discussion of these doubled phases and their particle content. Importantly, these operators mutually commute except when $(p,p')$ correspond to neighboring plaquettes where $[B_p^{1^+},B^{1^-}_{p'}] \neq 0$. Furthermore, when acting on the same plaquette $p$, they are related by the following identity: \begin{equation} B_p^{1^+}= (-1)^{N_{l_\sigma}} B^{1^-}_{p} (-1)^{N_{l_\sigma}}, \label{eq:unit} \end{equation} where $N_{l_\sigma}$ is the operator that counts the total number of loops made of $\sigma$ links. Hence, $B_p^{1^+}$ and $B_p^{1^-}$ are unitarily equivalent and the spectrum of $H$ is invariant under the transformation $\theta \leftrightarrow \pi/2-\theta$. Interestingly, for any $\theta$, the Hamiltonian commutes with $B_p^{\sigma^+}$ and $B^{\sigma^-}_{p}$ that are projectors onto the states ${\sigma^+}$ and ${\sigma^-}$ of DIsing and D${{SU}(2)_2}$ theories in the \mbox{plaquette} $p$. By construction, one indeed has $[B_p^{1^\pm},B_{p'}^{\sigma^\pm}]=0$ for all $(p,p')$ but, as shown in Appendix \ref{app:Bp}, one further has here $B_p^{\sigma^+}=B_p^{\sigma^-}$. 
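Equation (\ref{eq:unit}) makes $B_p^{1^+}$ and $B_p^{1^-}$ unitarily equivalent, and the resulting spectral symmetry $\theta \leftrightarrow \pi/2-\theta$ can be illustrated numerically with stand-in matrices: a random projector and a diagonal $\pm 1$ involution playing the role of $(-1)^{N_{l_\sigma}}$ (these are generic stand-ins, not the actual plaquette operators):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Random rank-3 orthogonal projector standing in for B^{1^-}.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
Bm = Q[:, :3] @ Q[:, :3].T
# Diagonal +-1 involution standing in for (-1)^{N_l_sigma}.
U = np.diag(rng.choice([-1.0, 1.0], size=n))
Bp = U @ Bm @ U                     # B^{1^+} = U B^{1^-} U

def spectrum(theta):
    H = -np.cos(theta) * Bp - np.sin(theta) * Bm
    return np.sort(np.linalg.eigvalsh(H))

theta = 0.3
assert np.allclose(spectrum(theta), spectrum(np.pi / 2 - theta))
```

Indeed, $H(\pi/2-\theta)=U\,H(\theta)\,U$ for a single such pair, so the two spectra coincide.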
Finally, depending on the system topology, one may also have other (nonlocal) conserved quantities measuring the presence of $\sigma^\pm$ in noncontractible loops as we shall now see in a concrete example. \section{The two-leg ladder and Hamiltonian symmetries} In the present work, we focus on a two-leg ladder with periodic boundary conditions. In this geometry, fusion rules given in Eqs.~(\ref{eq:fusion1}) and (\ref{eq:fusion2}) imply that the Hilbert space of this system decouples in two different sectors \cite{Gils09_3}. Indeed, strings of $\sigma$ can only form closed loops since $\sigma \times 1=\sigma \times \psi=\sigma$ and $\sigma \notin \sigma \times \sigma$. In the so-called odd sector, each plaquette has only one leg with a $\sigma$ link and there is only one loop of $\sigma$ links going around the ladder. Hence, in this sector, $B_p^{1^+}$ and $B_p^{1^-}$ have the same matrix elements [see Eq.~(\ref{eq:unit})]. By contrast, in the even sector, each plaquette has either no leg or two legs with a $\sigma$ link and there can be closed loops encircling plaquettes (see Fig.~\ref{fig:odd_even}). Let us note that a similar decoupling exists for $\mathbb{Z}_2$ fusion rules \cite{Morampudi14}. \begin{figure}[t] \includegraphics[width=0.4\columnwidth]{./state_odd_m} \hfill \includegraphics[width=0.4\columnwidth]{./state_even_m} \caption{Pictorial representation of states belonging to the odd (left) and even (right) sectors. Blue, green, and red links represent $1,\psi$, and $\sigma$ states, respectively.} \label{fig:odd_even} \end{figure} As argued in Refs.~\cite{Gils09_1,Schulz15}, the string-net Hamiltonian defined on a two-leg ladder with periodic boundary conditions also preserves the flux above and below the ladder. In the present case, for any $\theta$, $H$ only commutes with $P_a^{\sigma^\pm}$ and $P_b^{\sigma^\pm}$ that are projectors onto the flux $\sigma^\pm$ above and below the ladder, respectively. 
As shown in Appendix \ref{app:Bp}, one has $P_{a,b}^{\sigma^+}=P_{a,b}^{\sigma^-}$ so that we will omit the superscript $\pm$ in the following (idem for $B_p^{\sigma^\pm}$). Projectors $P_a^{\sigma}$, $P_b^{\sigma}$, and $B_p^{\sigma}$ only involve loops of $1$ and $\psi$ ($S_{\sigma \sigma}=0$). As a direct consequence, $\sigma$ links are left unchanged by these projectors since \mbox{$\sigma \times 1=\sigma \times \psi=\sigma$}. Thus, these mutually commuting operators preserve the decoupling between odd and even sectors so that, {\it in fine}, the Hamiltonian can be decomposed as follows: \begin{eqnarray} H&=&H_{\rm odd} \oplus H_{\rm even},\\ &=&\underset{p_a^\sigma,p_b^\sigma,\{b_p^\sigma\}}{\oplus} H_{\rm odd}^{(p_a^\sigma,p_b^\sigma,\{b_p^\sigma\})} \oplus H_{\rm even}^{(p_a^\sigma,p_b^\sigma,\{b_p^\sigma\})}, \end{eqnarray} where $p_a^\sigma,p_b^\sigma$, and $b_p^\sigma$ are the eigenvalues of $P_a^{\sigma}$, $P_b^{\sigma}$, $B_p^{\sigma}$, respectively. Let us mention that the fusion rules impose that if $N_\sigma=\sum_p b_p^\sigma$ is even (odd), then $p_a=p_b$ ($p_a \neq p_b$). In each sub-sector indexed by $(p_a^\sigma,p_b^\sigma,\{b_p^\sigma\})$, one can easily generate an eigenbasis by considering states \begin{eqnarray} \label{eq:state} |p_a^\sigma,p_b^\sigma,\{b_p^\sigma\}, \phi \rangle = &{\mathcal N} & \big[p_a^\sigma P_a^{\sigma} +(1- p_a^\sigma) (\mathbbm{1}-P_a^{\sigma})\big] \nonumber \\ &\times&\big[p_b^\sigma P_b^{\sigma} + (1- p_b^\sigma) (\mathbbm{1}-P_b^{\sigma})\big] \nonumber \\ &\times& \underset{p}{\Scale[1.5] \Pi}\big[b_p^\sigma B_p^{\sigma} \hspace{0.5pt} + \hspace{0.5pt} (1-b_p^\sigma) (\mathbbm{1}-B_p^{\sigma})\big] | \phi \rangle, \qquad \end{eqnarray} where $| \phi \rangle$ is a reference state and ${\mathcal N}$ is a normalization factor. 
Of course, $|\phi \rangle$ may be annihilated by the action of these operators and different reference states may lead to the same final state so that one has to carefully check the completeness of this basis. Operators $B_p^{1^+}$ and $B_p^{1^-}$ appearing in the Hamiltonian involve loops of $\sigma$ ($S_{1 \sigma} \neq 0$) that provide dynamics to the $\sigma$ links. Roughly speaking, these operators shrink or extend $\sigma$ loops as schematically illustrated in Fig.~\ref{fig:flip_flop}. \begin{figure}[h] \includegraphics[width=\columnwidth]{./flip_flop} \caption{Pictorial representation of two even reference states and their spin representation. In the sub-sector (0,0,0), these reference states lead to different states connected by the operator $B_{p}^{1^\pm}$ acting on the rightmost plaquette. } \label{fig:flip_flop} \end{figure} This property further decouples each sub-sector indexed by $(p_a^\sigma,p_b^\sigma,\{b_p^\sigma\})$ into sub-sub-sectors. Indeed, in the even sector, if a state has a plaquette $p$ with $b_p^\sigma=1$ and either two or no $\sigma$ legs, it conserves this property under the action of the Hamiltonian. Furthermore, the number of plaquettes $b_p^\sigma=1$ with (upper and lower) $\sigma$ legs must be even, otherwise $\mathcal N=0$. For instance, in the sub-sector $\mbox{$(0,0, b_{p_1}^\sigma=b_{p_2}^\sigma=1)$}$ where only two plaquettes $p_1$ and $p_2$ contain a $\sigma^\pm$ flux, there are two sub-sub-sectors spanned by reference states with or without $\sigma$ loops encircling $p_1$ and $p_2$, respectively. This result is easily generalized for any value of \mbox{$N_\sigma=\sum_p b_p^\sigma$}. For a given sub-sector $(p_a^\sigma,p_b^\sigma,\{b_p^\sigma\})$ corresponding to a given $N_\sigma \geqslant 1$, one has $2^{N_{\sigma}-1}$ distinct sub-sub-sectors of dimension $2^{N_{\rm p}-N_{\sigma}}$ ($N_{\rm p}$ being the total number of plaquettes). 
For $(p_a,p_b)=(0,0)$ and $N_\sigma=0$, there is only one sub-sub-sector of dimension $2^{N_{\rm p}}$. To summarize, $H$ can be split into two sectors, odd and even, according to the parity of the number of $\sigma$ links on the legs of the ladder. In each sector, one can further block-diagonalize $H$ into different sub-sectors according to the presence of a $\sigma^\pm$ flux in plaquettes (measured by \mbox{$B_p^{\sigma}$}) as well as above and below the ladder (measured by $P_a^{\sigma}$ and $P_b^{\sigma}$). Each of these sub-sectors then splits into sub-sub-sectors according to the position of $\sigma$ loops in the reference states. Keeping in mind that the dimension of the odd sector is $4^{N_{\rm p}}$ \cite{Gils09_3}, the aforementioned considerations allow for the following decomposition of the Hilbert space dimension: \begin{eqnarray} \dim \mathcal{H}&=&\dim \mathcal{H}_{\rm odd}+\dim \mathcal{H}_{\rm even}, \\ &=& 4^{N_{\rm p}}+ 2 \Bigg[ 2^{N_{\rm p}}+ \sum_{N_{\sigma}=1}^{N_{\rm p}} \left( \begin{array}{c} N_{\rm p} \\ N_{\sigma} \end{array} \right) 2^{N_{\rm p}-N_{\sigma}} 2^{N_{\sigma}-1} \Bigg], \nonumber \end{eqnarray} where the binomial coefficient simply arises from the different ways to choose the $N_{\sigma}$ plaquettes carrying a $\sigma^{\pm}$ flux among $N_{\mathrm p}$. The factor of 2 in front of the bracket comes from the fact that, in the even sector, the spectrum of $H$ is the same for $(p_a,p_b)=(0,0)$ and $(1,1)$, as well as for $(p_a,p_b)=(0,1)$ and $(1,0)$. With periodic boundary conditions, one has $N_{\rm p}=N_{\rm v}/2$, so that one directly recovers Eq.~(\ref{eq:dimH}). \section{Low-energy sectors} To discuss the phase diagram, the first step consists of identifying the sector(s) in which the ground state lies. To achieve this goal, we performed exact numerical diagonalizations of the Hamiltonian. The location of the ground state as a function of $\theta$ is given in Table \ref{tab:main}.
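The dimension counting above can be checked numerically: since $2^{N_{\rm p}-N_\sigma}\,2^{N_\sigma-1}=2^{N_{\rm p}-1}$ is independent of $N_\sigma$, the even-sector contribution sums in closed form to $4^{N_{\rm p}}+2^{N_{\rm p}}$, hence $\dim\mathcal{H}=2\cdot 4^{N_{\rm p}}+2^{N_{\rm p}}$. A short Python sketch verifying this for small ladders:

```python
from math import comb

def dim_even(n_p):
    # Even sector: one sub-sub-sector of dimension 2**n_p for N_sigma = 0,
    # plus 2**(n_sigma - 1) sub-sub-sectors of dimension 2**(n_p - n_sigma)
    # for each choice of the N_sigma flux-carrying plaquettes; overall
    # factor 2 from the (p_a, p_b) spectral degeneracy.
    total = 2**n_p
    total += sum(comb(n_p, n_s) * 2**(n_p - n_s) * 2**(n_s - 1)
                 for n_s in range(1, n_p + 1))
    return 2 * total

for n_p in range(1, 12):
    # closed form for the even sector
    assert dim_even(n_p) == 4**n_p + 2**n_p
```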
For $\theta \in ]\pi,3\pi/2[$, ground states are found in all sectors where each plaquette with $b_p^\sigma =0$ is surrounded by two plaquettes with $b_p^\sigma = 1$. Since one has $[B_p^{1^+},B^{1^-}_{p}] = 0$, eigenstates of $H$ in these sectors are simultaneous eigenstates of $B_p^{1^+}$ and $B_p^{1^-}$ with eigenvalues $b_p^{1^+}$ and $b_p^{1^-}$, respectively. The corresponding energy is \begin{equation} E(\{b_p^{1^+}\},\{b_p^{1^-}\})=-\cos \theta \sum_p b_p^{1^+} - \sin \theta \sum_p b_p^{1^-}. \end{equation} Hence, in this parameter range, the ground-state energy per plaquette is $e_0=0$. For all other values of $\theta$, the ground state is always found in a sector where $N_\sigma=0$. \begin{table}[h] \begin{tabular}{c c c c c } \hline \hline $\theta$ & parity & $N_\sigma$ & $(p_a,p_b)$ & degeneracy\\ \hline $]0,\pi/2[$ & odd & 0 & $(0,0)$ & $1$ \\ $]\pi/2,\pi[$ & even & 0 & $(0,0),(1,1)$ & $1+1$ \\ $]\pi,3\pi/2[$ & -- & -- & -- & -- \\ $]3\pi/2,2\pi[$ & even & 0 & $(0,0),(1,1)$ & $1+1$\\ \hline \hline \end{tabular} \caption{Ground-state sector as a function of $\theta$. } \label{tab:main} \end{table} For $\theta=0$, the system is, by construction, in a DIsing phase. The degeneracy of the $k$th energy level $E_k=-N_{\rm p}+k$ is \begin{equation} \label{eq:deg} \mathcal{D}_k=\left( \begin{array}{c} N_{\mathrm p} \\ k \end{array} \right)\times \left(1+2\times 3^k\right), \end{equation} where the binomial coefficient stems from the different ways to choose $k$ plaquettes among $N_{\mathrm p}$ carrying the nontrivial flux excitations. One can check that $\displaystyle{\dim \mathcal{H}=\sum_{k=0}^{N_{\rm p}} \mathcal{D}_k}$. Importantly, one expects $\mathcal{D}_0=3$ ground states with $e_0=E_0/N_{\mathrm p}=-1$: two of them are found in the even sector and the third one lies in the odd sector. The same results hold for $\theta=\pi/2$ where the system is in a D${{SU}(2)_2}$ phase.
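The consistency check $\dim\mathcal{H}=\sum_k\mathcal{D}_k$ mentioned above can be carried out explicitly; a short Python sketch, recomputing the total dimension from the sector decomposition of the previous section:

```python
from math import comb

def deg(n_p, k):
    # Degeneracy of the k-th level at theta = 0, Eq. (eq:deg)
    return comb(n_p, k) * (1 + 2 * 3**k)

def dim_h(n_p):
    # Odd sector (4**n_p) plus the even-sector decomposition
    even = 2 * (2**n_p + sum(comb(n_p, n_s) * 2**(n_p - n_s) * 2**(n_s - 1)
                             for n_s in range(1, n_p + 1)))
    return 4**n_p + even

for n_p in range(1, 10):
    assert sum(deg(n_p, k) for k in range(n_p + 1)) == dim_h(n_p)

assert deg(8, 0) == 3  # D_0 = 3 ground states at theta = 0
```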
\section{Spectrum of the odd sector} As already mentioned, in the odd sector, $B_p^{1^+}$ and $B_p^{1^-}$ have the same matrix elements so that each energy level in $H_{\rm odd}^{(p_a^\sigma,p_b^\sigma,\{b_p^\sigma\})}$ is indexed by an integer that simply counts the number of plaquettes carrying a trivial flux. More precisely, one has \begin{equation} E(\{b_p^{1}\})=-(\cos \theta+ \sin \theta) \sum_p b_p^{1}, \end{equation} where $b_p^{1}=0$ or $1$ is the eigenvalue of $B_p^{1^\pm}$ in the corresponding eigenstate. Trivial transitions stemming from level crossings belonging to different sub-sectors are observed for \mbox{$\theta=3\pi/4,7\pi/4$}. \section{Spectrum of the even sector} The nontrivial part of the spectrum is found in the even sector where the Hamiltonian can be written in a simple form. Indeed, in each sub-sub-sector determined by the quantum numbers $(p_a^\sigma,p_b^\sigma,\{b_p^\sigma\})$ and the position of the $\sigma$ loops, the state of each plaquette with $b_p^\sigma=0$ is described by a $\mathbb{Z}_2$ variable that can be interpreted as the flux inside this plaquette (for instance, $1^{+}$ or $\psi^{+}$). A simple way to encode a state $|p_a^\sigma,p_b^\sigma,\{b_p^\sigma\}\rangle$ is to associate an effective spin-$1/2$ configuration to its reference state $|\phi\rangle$. By convention, for any link configuration, we will say that a plaquette is in the state $|\! \! \uparrow \rangle$ if it has no $\sigma$ legs and in the state $|\! \! \downarrow \rangle$ if it has two $\sigma$ legs (see Fig.~\ref{fig:flip_flop} for illustration). In this spin representation, $B_p^{1^+}$ acts effectively as \mbox{$(\mathbbm{1} +\sigma_p^x)/2$} on the spin variable located on plaquette $p$ whereas $B_p^{1^-}$ acts as \mbox{$(\mathbbm{1} -\sigma_{p-1}^z \sigma_p^x \sigma_{p+1}^z)/2$}. One can check that these expressions of $B_p^{1^\pm}$ are compatible with Eq.~(\ref{eq:unit}). 
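In this spin representation the compatibility can be checked directly: both operators square to themselves and they commute, consistent with the relation $[B_p^{1^+},B_p^{1^-}]=0$ used earlier. A small numpy sketch on three neighboring plaquettes $(p-1,p,p+1)$:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron(*ops):
    # Tensor product of single-site operators
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

one = np.eye(8)
b_plus = 0.5 * (one + kron(I2, sx, I2))   # (1 + sigma_p^x)/2
b_minus = 0.5 * (one - kron(sz, sx, sz))  # (1 - sz_{p-1} sx_p sz_{p+1})/2

for b in (b_plus, b_minus):
    assert np.allclose(b @ b, b)                         # projectors
assert np.allclose(b_plus @ b_minus, b_minus @ b_plus)   # [B+, B-] = 0
```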
In addition, as already mentioned, the presence (or the absence) of $\sigma$ legs in plaquettes with $b_p^\sigma=1$ is preserved by the Hamiltonian. In the spin language, this means that spins located in plaquettes with $b_p^\sigma=1$ are frozen and $B_p^{1^\pm}$ acting on these plaquettes gives 0. \subsection{$N_\sigma=0$} In the sub-sectors without $\sigma^\pm$ flux, all spins can freely fluctuate so that the Hamiltonian reads \begin{equation} H_{\rm even}^{(0,0,0)}=-\cos \theta \sum _p \frac{\mathbbm{1} + \sigma_p^x}{2} - \sin \theta \sum_p \frac{\mathbbm{1} -\sigma_{p-1}^z \sigma_p^x \sigma_{p+1}^z}{2}. \end{equation} This model, known as the transverse-field cluster model (TFCM) \cite{Pachos04}, can be solved exactly using the standard Jordan-Wigner transformation \cite{Montes12,Lahtinen15_1}. In the thermodynamic limit, the ground-state energy per plaquette can be written as \begin{equation} e_0 = -\frac{1}{2}\bigg[ \cos\theta+\sin\theta+\frac{2}{\pi} \left( \frac{2+g}{g} \right)^{1/2} {E}\left(\frac{4}{2+g}\right) \bigg], \qquad \qquad \end{equation} where $\displaystyle{E(k)= \int_0^{\frac{\pi}{2}} {\mathrm d} t\sqrt{1-k^2 \sin^2 t} }$ is the complete elliptic integral of the second kind with \mbox{$g=\tan \theta+(\tan \theta)^{-1}$}, and the low-energy gap is \begin{equation} \Delta = \min\left(|\cos\theta+\sin\theta|,|\cos\theta-\sin\theta| \right). \end{equation} Hence, the phase diagram in this sub-sector $(0,0,0)$ consists of four equivalent gapped phases \cite{Montes12,Verresen17} with a unique ground state separated by four transition points at $|\tan \theta|=1$. As explained in Ref.~\cite{Lahtinen15_1}, with periodic boundary conditions, these critical points are described by the $so(2)_1$ conformal field theory. \begin{figure}[h] \includegraphics[width=\columnwidth]{./decoupling} \caption{Sketch of the decoupling induced by frozen spins (red) in the TFCM.
The sub-sub-sector displayed here corresponds to $N_\sigma=3$ where two plaquettes with $b_p^\sigma=1$ have two $\sigma$ legs and the third one has no $\sigma$ legs. } \label{fig:decoupling} \end{figure} \subsection{$N_\sigma \geqslant 1$} For sub-sectors where $N_\sigma \neq 0$, the situation is slightly different since one must then consider the TFCM with frozen spins. Actually, these frozen spins effectively cut the system into several subsystems as shown in Fig.~\ref{fig:decoupling}. As a result, it is sufficient to study the spectrum of the TFCM with frozen spins at the boundaries \cite{Configuration} to determine the spectrum of all sectors with $N_\sigma \geqslant 1$. The influence of the boundary conditions manifests itself notably in the finite-size critical spectrum \cite{Cardy89}, but phase transitions that may occur in the thermodynamic limit in sectors with $N_\sigma \geqslant 1$ are the same as in the \mbox{$N_\sigma = 0$} sector. \section{Discussion} Interestingly, very similar results are found when studying the competition between double semion and toric code phases \cite{Morampudi14}. This striking similarity has two main origins. First, once the map of $\sigma^\pm$ fluxes is fixed, i.e., for a given set of $b_p^\sigma$, the present system can be described by $\mathbb{Z}_2$ variables. Second, the model discussed in Ref.~\cite{Morampudi14} concerns two (Abelian) theories with different Frobenius-Schur indicators obeying $\mathbb{Z}_2$ fusion rules, and projectors onto the vacuum of these theories also obey Eq.~(\ref{eq:unit}). We checked that the correspondence between both models also holds on the honeycomb lattice. Thus, one recovers a first-order transition between DIsing and D${{SU}(2)_2}$ phases in the nontrivial low-energy sector at the self-dual points $|\tan \theta|=1$. However, the role played by frozen spins in two dimensions remains to be elucidated.\\ \acknowledgments I would like to thank E. Ardonne, S. Dusuel, V.~Lahtinen, S. H. Simon, and J. K. Slingerland for fruitful and insightful discussions.
\section{Introduction} \label{Sec:Intro} In the standard paradigm of hierarchical structure formation, galaxies reside inside dark matter halos. The formation and evolution of these halos is dominated by gravity and can be well predicted using high-resolution numerical simulations and in some cases analytic models. The formation of the galaxies and their relation to the dark matter halos is more complex and depends on the detailed physical processes leading to the varied observed galaxy properties. It has been well established that the local halo environment of galaxies plays a fundamental role in shaping their properties. In particular, local effects are thought to be responsible for the transformation of blue, late-type and star-forming galaxies into red, early-type and passive galaxies (see, e.g., \citealt{oemler74,dressler80,lewis02,balogh04b,baldry04,blanton09}), even though there is no consensus on the relative importance of the specific processes that play a role. Different mechanisms such as mergers and interactions, ram-pressure stripping of cold gas, starvation or strangulation and harassment all lead to changes in galaxy morphologies within the host halo environment. It is not clear, however, to what extent galaxy properties are affected by their overall ``global'' environment on scales larger than the individual halos. While there is evidence that global environments affect galaxy populations --- for example, red galaxies frequent high-density environments while blue galaxies are prevalent in low-density regions (e.g., \citealt{hogg03,blanton05,blanton06,blanton09}) --- it is debatable whether the large-scale environment has an actual impact on the physical processes involved in galaxy formation and evolution.
A useful approach for studying the predictions of galaxy formation processes is with semi-analytic modeling (SAM) of galaxy formation, in which halos identified in large N-body simulations are populated with galaxies and evolved according to specified prescriptions for gas cooling, star formation, feedback effects and merging (e.g., \citealt{Cole00,Baugh06,Croton06}). These models have been successful in reproducing several measured properties such as the galaxy luminosity and stellar mass functions (see e.g., \citealt{Bower06,Guo11,Guo13Sams,Lacey16}). An alternative way of studying galaxy formation is to use hydrodynamic simulations, which follow the physical baryonic processes by a combination of solving the fluid equations and sub-grid prescriptions (see, e.g., \citealt{somerville15,Guo16}). Cosmological hydrodynamical simulations are starting to play a major role in the study of galaxy formation and evolution. Comparisons of such simulations with observations show broad agreement (e.g., \citealt{Vogelsberger14,Schaye15,Artale16}). A popular approach to empirically interpret observed galaxy clustering measurements as well as to characterize the predictions of galaxy formation theories is the Halo Occupation Distribution (HOD) framework (e.g., \citealt{peacock00,seljak00,scoccimarro01,berlind02,cooray02,zheng05}). The HOD formalism characterizes the relationship between galaxies and dark matter halos in terms of the probability distribution, $P(N|M_{\rm h})$, that a halo of virial mass $M_{\rm h}$ contains $N$ galaxies of a given type, together with the spatial and velocity distribution of galaxies inside halos. The fundamental ingredient of the modeling is the halo occupation function, $\langle N(M_{\rm h}) \rangle$, which represents the average number of galaxies as a function of halo mass. The typically assumed shape for the halo occupation function is motivated by predictions of hydrodynamic simulations and semi-analytic models (e.g., \citealt{zheng05}).
It is often useful to consider separately the contribution from the central galaxy, namely the main galaxy at the center of the halo, and that of the additional satellite galaxies that populate the halo \citep{kravtsov04,zheng05}. The HOD approach has been demonstrated to be a powerful theoretical tool to study the galaxy-halo connection, effectively transforming measurements of galaxy clustering into a physical relation between galaxies and dark matter halos. This approach has been very successful in explaining the shape of the galaxy correlation function, its environment dependence and overall dependence on galaxy properties (e.g., \citealt{zehavi04,zehavi05b,zehavi11,berlind05,abbas06,skibba06,tinker08,coupon12}). A central assumption in the conventional applications of this framework is that the galaxy content in halos only depends on halo mass and is statistically independent of the halo's larger scale environment. This assumption has its origins in the uncorrelated nature of random walks describing halo assembly in the standard implementations of the excursion set formalism, which results in the halo environment being correlated with halo mass, but uncorrelated with formation history at fixed mass \citep{bond91,white99,lemson99}. In this picture, the change in the fraction of blue and red galaxies in different large-scale environments, for example, is fully derived from the change in the halo mass function in these environments \citep{mo96,lemson99}. Consequently, it is not evident that global environments play a major role in directly shaping galaxy properties and in particular the HOD. This ansatz has been challenged in the last decade by the demonstration in simulations that the clustering of halos of fixed mass varies with halo formation time, concentration and substructure occupation \citep{sheth04,gao05,gao07,Jing06,Harker06,Wechsler06,Wetzel06,Angulo08,Pujol14,Lazeyras17}.
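The central/satellite halo occupation function described above is commonly parametrized by a softened step function for the centrals and a power law for the satellites, in the spirit of the form of \citealt{zheng05}. A minimal Python sketch, with illustrative (not fitted) parameter values:

```python
from math import erf, log10

def n_cen(m_h, log_m_min=11.8, sigma_logm=0.3):
    # Mean central occupation: softened step in log halo mass;
    # log_m_min and sigma_logm are placeholder values
    return 0.5 * (1.0 + erf((log10(m_h) - log_m_min) / sigma_logm))

def n_sat(m_h, m_0=10**11.5, m_1=10**13.0, alpha=1.0):
    # Mean satellite occupation: power law above a cutoff mass m_0,
    # modulated by the central occupation (placeholder parameters)
    if m_h <= m_0:
        return 0.0
    return n_cen(m_h) * ((m_h - m_0) / m_1) ** alpha

assert abs(n_cen(1e14) - 1.0) < 1e-6    # centrals saturate at unity
assert n_sat(1e14) > 1.0 > n_sat(1e12)  # satellites dominate in clusters
```

The mean total occupation is $\langle N(M_{\rm h})\rangle = \langle N_{\rm cen}\rangle + \langle N_{\rm sat}\rangle$; halo masses are in $h^{-1}M_\odot$ here.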
The dependence of halo clustering on properties other than the halo mass has broadly been referred to as {\it halo assembly bias}. The dependences on the various halo properties manifest themselves in different ways and are not trivially derived from the correlation between these properties (see, e.g., \citealt{mao17}). While a prediction of $\Lambda$CDM, the exact physical origin of assembly bias remains unclear; different explanations have been put forth, such as correlated modes which break down the random walk assumption, statistics of peaks and truncation of low-mass halo growth in dense environments \citep{keselman07,sandvik07,zentner07,dalal08,desjacques08,hahn09,ludlow11,lacerna11,zhang14,borz16}. A current topic of active debate is to what extent galaxies are affected by the assembly history of their host halos. The stochasticity in the complex baryonic physics may act to erase the record of halo assembly history. If, however, the galaxy properties closely correlate with the halo formation history, this would lead to a dependence of the galaxy content on large-scale environment and a corresponding clustering signature. This effect has commonly been referred to as {\it galaxy assembly bias} both colloquially and in the literature, and we adopt this distinction here. We stress, however, that what is referred to here is the manifestation of halo assembly bias in the galaxy distribution. The predictions for galaxy assembly bias have been explored with simulated galaxies \citep{croton07,reed07,Zhu06,Zu08,zentner14,angulo16,bray16}. Detecting (galaxy) assembly bias is much more challenging since halo properties are not directly observed. Observational studies of assembly bias have generally produced mixed results.
There have been several suggestive detections in observations \citep{yang06,berlind06,wang08,cooper10,tinker12,wang13b,lacerna14b,watson15,Hearin15,miyatake16,saito16,zentner16,montero17,tinker17b}, while numerous other studies indicate the impact of assembly bias to be small \citep{abbas06,blanton07,croton08,tinker08,tinker11,deason13,lacerna14a,lin16,ZuMan16,vakili16,dvornik17}. The situation is further complicated as various systematic effects can mimic the effects of assembly bias (e.g., \citealt{campbell15,Zu16,Zu17,Busch17,Sin17,tinker17a,lacerna17}) and the evidence for assembly bias to date remains inconclusive and controversial. Such galaxy assembly bias, if significant, would have direct implications for interpreting galaxy clustering using the HOD framework (e.g., \citealt{Pujol14,zentner14}), as secondary halo parameters in addition to the mass or, more generally, the large-scale environment in which the halo formed, would also impact the halo occupation function. For clarity, we term these variations of the halo occupation functions {\it occupancy variation}. These effects are all directly related of course, as it is exactly this occupancy variation coupled with halo assembly bias that gives rise to galaxy assembly bias. In this paper, we aim to gain further insight and clarify this important topic by exploring explicitly the dependence of the halo occupation functions on the large-scale environment and formation redshift in semi-analytic models. Limited work has been carried out in directly studying the environmental dependencies of the HOD of galaxies, with varied results. Different works examined the dependence of the subhalo occupation on age (e.g., \citealt{jiang16}) and environment \citep{Croft12}, which can be regarded as a proxy for the satellite occupation if one neglects the effects of baryons. \citet{Zhu06} explore the age dependence of the conditional luminosity function in a semi-analytic model and a hydrodynamical simulation.
\citet{berlind03} and \citet{mehta14} explore the variations of the HOD in cosmological hydrodynamical simulations, finding no detectable dependence on environment. \citet{mcewen16} have recently investigated this using the age-matching mock catalogs of \citet{Hearin13} (which by design exhibit significant assembly bias), detecting a dependence of the HOD on environment, mostly for the central galaxy occupation function. While the impact of assembly bias on galaxy clustering has already been demonstrated using a SAM applied to the Millennium simulation \citep{croton07,Zu08}, the variation of the HOD itself with large-scale environment or other halo properties has not been explored for it. Here, we use the HOD formalism to directly study the impact of galaxy assembly bias as predicted by SAMs. We use the output of two independently developed SAMs, from the Munich and Durham groups, at different number densities. We measure the halo occupation functions for different large-scale environment regimes as well as for different ranges of halo formation redshift. This allows us to assess which features of the HODs vary with environment and halo age, and we present the corresponding changes in the HOD parameters. Additionally, we investigate the galaxy cross-correlation functions for these different regimes, which highlights the impact of assembly bias on clustering. Such studies will inform theoretical models incorporating assembly bias into halo models as well as attempts to determine it in observational data. Additionally, it can facilitate the creation of mock catalogs incorporating this effect. The outline of the paper is as follows. In Section~\ref{Sec:GFM} we describe the galaxy formation models used. In Section~\ref{Sec:HOD} we explore the dependence of the HOD on large-scale environment and halo age. We discuss the origin of the trends and the connection to the stellar mass -- halo mass relation in Section~\ref{Sec:SMHM}.
In Section~\ref{Sec:clustering} we investigate the clustering dependence on these properties and we conclude in Section~\ref{Sec:summary}. Appendix~\ref{Sec:Cuts} shows our halo-mass dependent sample cuts, while Appendix~\ref{Sec:AutoCorr} presents further measurements of the auto-correlation functions. \vspace{0.2cm} \section{The galaxy formation models} \label{Sec:GFM} \vspace{0.1cm} \subsection{Semi-analytic models} \label{SubSec:SAM} The SAMs used in our work are those of \citet{Guo11} (hereafter G11) and \cite{Lagos12} (hereafter L12) \footnote{The G11 and L12 outputs are publicly available from the Millennium Archive in Garching \url{http://gavo.mpa-garching.mpg.de/Millennium/} and Durham \url{http://virgodb.dur.ac.uk/}}. The objective of SAMs is to model the main physical processes involved in galaxy formation and evolution in a cosmological context: (i) the collapse and merging of dark matter halos; (ii) the shock heating and radiative cooling of gas inside dark matter halos, leading to the formation of galaxy discs; (iii) quiescent star formation in galaxy discs; (iv) feedback from supernovae (SNe), from accretion of mass onto supermassive black holes and from photoionization heating of the intergalactic medium (IGM); (v) chemical enrichment of the stars and gas; (vi) dynamically unstable discs; (vii) galaxy mergers driven by dynamical friction within dark matter halos, leading to the formation of stellar spheroids, which may also trigger bursts of star formation. The two models have different implementations of each of these processes. By comparing models from different groups we can get a sense for which predictions are robust and which depend on the particular implementation of the galaxy formation physics (e.g., \citealt{Contreras13}). The G11 model is a version of {\tt L-GALAXIES}, the SAM code of the Munich group and is an updated version of earlier implementations \citep{Delucia04,Croton06,Delucia07}. 
The L12 model is a development of the {\tt GALFORM} Durham model \citep{Bower06,Font08}, which includes an improved treatment of star formation, separating the interstellar medium into molecular and atomic hydrogen components \citep{Lagos11}. An important difference between G11 and L12 is the treatment of satellite galaxies. In L12, a galaxy is assumed to be stripped of its hot gas halo completely once it becomes a satellite and starts decaying onto the central galaxy. In G11, these processes are more gradual and depend on the destruction of the subhalo and the orbit of the satellite. \vspace{0.1cm} \subsection{N-Body simulation and halo merger trees} \label{SubSec:NBody} The SAMs used here are both implemented in the Millennium simulation \citep{Springel05}. This simulation has a periodic volume of $(500 \,h^{-1}\,{\rm Mpc})^3$ and contains $2160^3$ particles with a mass of $8.61 \times 10^8 M_{\odot}/h$ each. The simulation has 63 snapshots between $z=127$ and $z=0$ and was run with a $\rm \Lambda CDM$ cosmology\footnote{The values of the cosmological parameters used in the Millennium simulation are: $\Omega_{\rm b}$ =0.045, $\Omega_{\rm M}$ = 0.25, $\Omega_{\Lambda}$ = 0.75, h = $H_0/100$ = 0.73, $n_{\rm s}$ = 1, $\sigma_8$ = 0.9.}. The G11 and L12 models both use a friends-of-friends ({\tt FoF}) group finding algorithm \citep{Davis85} to identify halos in each snapshot of the simulation, retaining those with at least 20 particles. {\tt SUBFIND} is then run on these groups to identify subhalos \citep{Springel01}. The merger trees differ from this point on. G11 construct dark matter halo merger trees by linking a subhalo in one output to a single descendant subhalo in the subsequent snapshot. The halo merger tree used in {\tt L-GALAXIES} is therefore a subhalo merger tree. L12 employ the {\tt Dhalo} merger tree construction (\citealt{Jiang14}; see also \citealt{Merson13}) that also uses the outputs of the {\tt FoF} and {\tt SUBFIND} algorithms.
The {\tt Dhalo} algorithm applies conditions on the amount of mass stripped from a subhalo and its distance from the center of the main halo before it is considered to be merged with the main subhalo. Subsequent output times are examined to see if the subhalo moves away from the main subhalo, to avoid merging subhalos which have merely experienced a close encounter before moving apart. {\tt GALFORM} post-processes the {\tt Dhalo} trees to ensure that the halo mass increases monotonically with time. Consequently, the definition of halo mass used in the two models is not the same. The {\tt Dhalo} mass used in {\tt GALFORM} corresponds to an integer number of particle masses whereas a virial mass is calculated in {\tt L-GALAXIES}. This leads to slight differences in the halo mass function between the models. In previous works that focused on comparing the HODs of the different models (e.g., \citealt{Contreras17}), we had matched the halo mass definitions. Here, since it is not our aim to compare the HODs themselves in detail, but rather examine the environmental effects on each, we prefer to leave the halo mass definitions as is, but we point out that some differences between the models are due to this. A comparison of {\tt Dhalo} masses and other halo definitions is presented in \citet{Jiang14}. \vspace{0.2cm} \section{The HOD dependence on environment and halo age} \label{Sec:HOD} A fundamental assumption of the HOD approach is that the galaxy content in halos depends only on the mass of the host halo. Any dependence of the HOD on secondary parameters, like halo age or large-scale environment, is a direct reflection of galaxy assembly bias (as discussed in \S~\ref{Sec:Intro}). In this section we examine the impact of halo age and environment on the HOD, as predicted in the SAMs. In \S~\ref{SubSec:samples} we provide details on how the halo age and large-scale environment are defined and their relation to one another. 
Our main results regarding how the halo occupation functions vary with environment and halo age are presented in \S~\ref{SubSec:HOD}, additional cases are studied in \S~\ref{SubSec:otherHODs}, and the impact on HOD parameters is shown in \S~\ref{SubSec:param}. \vspace{0.1cm} \subsection{Halo formation time and environment} \label{SubSec:samples} We define the formation redshift of a halo, as is commonly done, as the redshift when the main progenitor reached (for the first time) half of its present-day mass. We obtain this by following the halo merger trees of the different models and linearly interpolating between the time snapshots available. For defining the large-scale environment of the halos we use the density field obtained directly from the dark-matter particle distribution with a $5 \,h^{-1}\,{\rm Mpc}$ Gaussian smoothing (which we denote as $\delta_5$). This was calculated in cells of $\sim 2 \,h^{-1}\,{\rm Mpc}$ and is provided in the database. The $5 \,h^{-1}\,{\rm Mpc}$ smoothing scale is chosen as it is significantly larger than the size of the largest halos, so as to reflect the large-scale environment, and yet small enough to sample a sufficient range of different environments. We also test the other smoothing radii provided in the database, $1.25$, $2.5$ and $10 \,h^{-1}\,{\rm Mpc}$, finding the same qualitative trends as with $5 \,h^{-1}\,{\rm Mpc}$ for all the results shown in this paper. Alternative density and environment definitions are explored in the literature (e.g., \citealt{muldrew11}). Observationally, one naturally must resort to using the galaxy distribution to define the environment. Here, as it is available, we prefer to directly use the underlying dark matter density field, though in practice we expect our results to be insensitive to the details of the definitions.
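The formation-redshift definition above can be sketched as follows, for a hypothetical main-progenitor mass history tabulated at the snapshot redshifts (the arrays and values are purely illustrative):

```python
def formation_redshift(z_snap, m_main):
    """Redshift at which the main progenitor first reaches half of its
    present-day mass, linearly interpolating between snapshots.

    z_snap: snapshot redshifts in decreasing order (e.g. ..., 1.0, 0.0)
    m_main: main-progenitor mass at each snapshot
    """
    m_half = 0.5 * m_main[-1]
    if m_main[0] >= m_half:
        # already above half mass at the earliest tabulated snapshot
        return z_snap[0]
    for i in range(1, len(m_main)):
        if m_main[i] >= m_half:  # first snapshot above half mass
            f = (m_half - m_main[i - 1]) / (m_main[i] - m_main[i - 1])
            return z_snap[i - 1] + f * (z_snap[i] - z_snap[i - 1])
    return z_snap[-1]

# Illustrative history: half of the final mass is reached
# midway between z = 2 and z = 1
z_form = formation_redshift([4.0, 2.0, 1.0, 0.0], [0.1, 0.4, 0.6, 1.0])
assert abs(z_form - 1.5) < 1e-12
```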
\begin{figure} \vspace{-0.4cm} \hspace{-0.6cm} \includegraphics[width=0.53\textwidth]{Fig01a.ps} \hspace{-0.6cm} \includegraphics[width=0.53\textwidth]{Fig01b.ps} \caption[]{\label{Fig:CW} (Top panels) A $120 \,h^{-1}\,{\rm Mpc} \times 120 \,h^{-1}\,{\rm Mpc} \times 20 \,h^{-1}\,{\rm Mpc}$ slice of the Millennium simulation showing the distribution of the halos in it. Red (blue) dots represent the 20\% of halos that live in the densest (least dense) environments, and the remainder are represented as black dots. The density selection is made in 0.2 dex bins of halo mass (see text). The bigger plot on the left includes all halos, while the smaller ones on the right-hand side show separately only the 20\% of halos that live in the densest and least dense environments. (Bottom panels) Same as in the top panels, for the identical slice from the Millennium simulation, but now color-coding halos by formation time instead of environment. Red (blue) dots represent the 20\% of halos that formed earliest (latest). } \end{figure} To classify the halos by environment, we rank the halos by density in narrow (0.2 dex) bins of halo mass and select in each bin the 20\% of halos that are in the densest environment and the 20\% of halos in the least dense environment. This factors out the dependence of the halo mass function on environment and allows us to compare the HODs in the different environments for halos of nearly equal mass. We follow a similar procedure to select the 20\% of halos with the highest and lowest formation redshifts. We illustrate how our environment and halo age cuts vary with halo mass in Appendix~\ref{Sec:Cuts}. We have verified that our mass bins are sufficiently small by also using 0.1 dex bins and confirming that our results do not change. We also test splitting the sample into the 10\% and the 50\% extremes of the population, and find similar trends as found for the 20\% subsamples.
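The ranking procedure just described can be sketched with numpy; the input arrays and the built-in mass-density trend of the toy data below are purely illustrative:

```python
import numpy as np

def env_extremes(log_m, delta5, bin_width=0.2, frac=0.2):
    """Flag the `frac` densest / least dense halos, ranking the smoothed
    overdensity within narrow bins of halo mass to factor out the
    dependence of the halo mass function on environment."""
    dense = np.zeros(log_m.size, dtype=bool)
    underdense = np.zeros(log_m.size, dtype=bool)
    for lo in np.arange(log_m.min(), log_m.max() + bin_width, bin_width):
        in_bin = (log_m >= lo) & (log_m < lo + bin_width)
        if in_bin.sum() < 5:
            continue
        d = delta5[in_bin]
        idx = np.flatnonzero(in_bin)
        dense[idx[d >= np.quantile(d, 1.0 - frac)]] = True
        underdense[idx[d <= np.quantile(d, frac)]] = True
    return dense, underdense

rng = np.random.default_rng(0)
log_m = rng.uniform(11.0, 14.0, 10000)
delta5 = rng.normal(size=10000) + 0.3 * (log_m - 11.0)  # toy mass-density trend
dense, under = env_extremes(log_m, delta5)
# roughly 20% in each tail despite the built-in mass-density correlation
assert 0.15 < dense.mean() < 0.25 and 0.15 < under.mean() < 0.25
```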
The distribution of halos classified as residing in the 20\% most and least dense environments is shown using red and blue dots, respectively, in the top panels of Fig.~\ref{Fig:CW}, for a slice from the Millennium simulation. The remainder of the halos are shown as black dots. The dense and underdense regions appear to ``carve out'' disjoint regions in the cosmic web of structure, with the densest ones being more compact than the underdense regions, as can be expected. The corresponding classification for the early- and late-forming halos, for the same slice, is shown in the bottom panels of Fig.~\ref{Fig:CW}. It is apparent that the distribution of early- and late-forming halos is distinctly different from that of halos in dense and underdense environments. There is perhaps a tendency for the early-forming halos to preferentially occupy the dense environments, and a slight trend for late-forming halos to also populate the underdense regions. However, the general distribution is very different, with both early- and late-forming halos tracing the cosmic web well, in contrast to the patchy pattern of the environment classification. It is also clear, even by visual inspection, that the early-forming halos are more clustered than the late-forming ones. We examine the clustering of the galaxies in these halos later on in \S~\ref{Sec:clustering}. To further examine the correlation between formation redshift and large-scale environment, we plot in Fig.~\ref{Fig:Cont} the joint distribution of the two properties. We do this separately for three narrow ranges of halo mass, as labelled, since the two properties by themselves also correlate with halo mass, which is apparent from the individually marginalized distributions also shown. These demonstrate the known trends that more massive halos reside in denser environments and are formed later than less massive halos. The 2D distribution appears very broad with no obvious strong trend.
To quantify this, we also plot the medians of one property as a function of the other: the solid lines show the median formation redshift at fixed density, and the dashed lines the median environment at a given formation redshift. The fact that the solid lines are roughly horizontal and the dashed lines roughly vertical (i.e., the two sets of medians are almost perpendicular to each other) over most of the range reflects the lack of correlation between the two properties. This is perhaps somewhat surprising given the measurements of assembly bias (e.g., \citealt{gao05,gao07}) showing that early-formed halos are more clustered than late-forming ones, and as such expected to reside in dense environments. Only a weak dependence is apparent, at the high-density and high formation-redshift end, where the two sets of lines slightly curve toward each other. \begin{figure} \includegraphics[width=0.48\textwidth]{Fig02.ps} \caption[]{\label{Fig:Cont} Joint distribution of large-scale environment ($\rm \delta_5$) and formation redshift ($\rm z_{\rm form}$) for present-day halos in the Millennium simulation, for three narrow ranges of halo mass. The red, blue and green contours represent halos with low, intermediate and high masses, respectively, as labelled in the top part of the figure. The different contour levels correspond to 1, 2 and 3 $\sigma$ of the distribution. The marginalized distributions of each property separately are shown as well, for each halo mass bin. The (roughly horizontal) solid lines represent the median values of the formation redshift as a function of environment. The (roughly vertical) dashed lines are the median values of environment at each formation redshift. } \end{figure} \vspace{0.1cm} \subsection{The HOD as a function of halo age and environment} \label{SubSec:HOD} It is of fundamental importance and interest to investigate how the halo occupation functions themselves vary as a function of each of these properties.
For the galaxy sets we use fixed number-density samples drawn from the SAM catalogs when ranked by stellar mass. We have examined a range of different number density samples and present the results for three representative cases with number densities of $3.16 \times 10^{-2} \,h^{3}\,{\rm Mpc}^{-3}$, $ 10^{-2} \,h^{3}\,{\rm Mpc}^{-3}$ and $3.16 \times 10^{-3} \,h^{3}\,{\rm Mpc}^{-3}$. The corresponding minimum stellar mass thresholds for each of these are provided in Table~\ref{table}. Naturally, the stellar masses increase with decreasing number density. Differences between the stellar mass values of G11 and L12 are expected, given the differences in galaxy formation prescriptions and corresponding stellar mass functions. \begin{table} \centering \caption{\label{table} Stellar mass thresholds (in units of $h^{-1}M_{\odot}$) for the three main number-density samples (in units of $\,h^{3}\,{\rm Mpc}^{-3}$) presented in this work, for the G11 and L12 models.} \hspace{-0.2cm} \begin{tabular}{c c c c} \hline \hline & $3.16 \times 10^{-3}$ & $1 \times 10^{-2} $ & $3.16 \times 10^{-2} $ \\ \hline G11 & $3.88 \times 10^{10}$ & $1.42 \times 10^{10}$ & $1.85 \times 10^{9}$\\ L12 & $2.92 \times 10^{10}$ & $6.50 \times 10^{9}$ & $9.39 \times 10^{8}$\\ \hline \end{tabular} \end{table} Fig.~\ref{Fig:HOD_Err} shows how the halo occupation functions vary with environment and halo age for a galaxy sample from the G11 SAM model with a number density of $10^{-2} \,h^{3}\,{\rm Mpc}^{-3}$. The top panel shows the HODs for the full galaxy sample (black) as well as for the subsets of galaxies that reside in the 20\% of halos in the densest environments (red) and 20\% of halos in the least dense environments (blue). We remind the reader that the division into the 20\% most/least dense regions is done for each bin of halo mass, so that the different samples equally probe the full halo mass range.
Also, we note that, by construction, these samples have equal numbers of halos but not equal numbers of galaxies. \begin{figure} \includegraphics[width=0.48\textwidth]{Fig03a.ps} \includegraphics[width=0.48\textwidth]{Fig03b.ps} \caption{\label{Fig:HOD_Err} (Top panel) The halo occupation functions for a galaxy sample with number density $ 10^{-2} \,h^{3}\,{\rm Mpc}^{-3}$, for the G11 model. The solid black line shows the HOD of all galaxies in the sample. The solid red line shows the HOD for the galaxies in the 20\% of halos in the densest environments, while the solid blue line presents the HOD for the galaxies in the 20\% of halos in the least dense environments. The red and blue shaded regions (apparent only at the high-mass end) represent jackknife errors calculated using 10 subsamples. In all cases, dotted lines show separately the central galaxy occupation contribution and dashed lines represent the satellite galaxy occupation. (Bottom panel) Same as in the top panel, but here for halo samples selected by their formation time instead of their environment. The occupation for galaxies in the 20\% earliest-formed halos is shown in red, and for the 20\% latest-formed halos is shown in blue.} \end{figure} We find distinct differences in the HODs for both the central and satellite occupation functions. For the central occupation, the differences are noticeable at the ``knee'' of the occupation function and below. We find that in the densest environments, central galaxies are more likely to reside also in lower-mass halos, and the trend reverses in underdense regions. Stated in a slightly different way, in the regime where the halo occupation rises from 0 to 1, halos are more likely to host central galaxies if they reside in dense environments. This may be related to preferential early formation of halos in dense regions, though as we saw above the correlation is rather loose.
We provide further insight into the resulting trends for central galaxies below (see \S~\ref{Sec:SMHM}). The satellite occupation function in the G11 model also exhibits a dependence on large-scale environment. The satellite occupation function in dense environments exhibits a slight shift toward larger numbers, so that halos in dense environments are more likely to have more satellites on average. This behavior is perhaps naturally expected, due to the increased interactions and halo mergers in dense environments. The bottom panel of Fig.~\ref{Fig:HOD_Err} shows how the occupation function varies with halo age for the same G11 galaxy sample with number density $10^{-2} \,h^{3}\,{\rm Mpc}^{-3}$. In the case of halo age, there are much larger effects on the occupation functions than we saw with environment. For the central occupation, we find a clear trend of early forming (old) halos being more likely to host galaxies also at lower masses than late forming (young) halos. This likely arises from the fact that the early formed halos have more time for stars to assemble and for the galaxy to form. The sense of the trend is the same as that for the environmental dependence but is much stronger, with the ``shoulder'' of the occupation function extending significantly toward lower masses with increasing age. We find a strong reverse effect for the satellite occupation at the low mass end: early-forming halos have significantly fewer satellites than late-forming halos. This trend is pronounced at low occupation numbers of $\langle N(M_{\rm h})\rangle<10$ and becomes negligible at higher occupation numbers. This is probably due to the fact that in the early-forming halos there is simply more time for the satellites to merge with the central galaxy, which will be a more dominant process in the low halo mass / low occupation regime.
This trend is similar to the predicted dependence of subhalo occupation on halo formation time \citep{bosch05,zentner05,giocoli10,jiang16}, indicating that baryonic physics does not play an important role in the variation of the satellite occupancy. These differences in the halo occupation functions, for both age and environment, are significant. We estimate the uncertainties on the HOD calculations using jackknife resampling, dividing the full simulation volume into 10 slices. When separating the different subregions, if the center of a given halo is in a certain subvolume we include with it all galaxies in that halo, regardless of where the physical boundary between the subvolumes lies. The resulting errors are shown as shaded regions in the figure, and are in fact negligible over most of the range, only becoming significant at the high-mass range where the number of halos is small. The HOD dependences on age and environment differ in magnitude (for centrals) and sense (for satellites). The strength of the trends with age versus environment perhaps indicates that formation time is the more fundamental property related to assembly bias. Varying the Gaussian smoothing length used to define the environment slightly impacts the size of the deviations, with the differences becoming a bit more pronounced for small smoothing lengths, as expected. However, we choose to stick with our $5 \,h^{-1}\,{\rm Mpc}$ Gaussian smoothing so as to robustly infer the large-scale environment. We describe and model these differences in terms of the HOD parameters in \S~\ref{SubSec:param}. The dependence on environment we find for the central occupation is very similar to that measured by \citet{mcewen16}. However, they do not find any noticeable difference for the satellite occupation. Our results differ from those of \citet{mehta14}, who find no significant dependence of the HOD on environment.
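The delete-one jackknife over volume slabs can be sketched as follows; for brevity the statistic is reduced to a simple mean (the paper jackknifes the HOD itself), and the function and argument names are our illustrative choices, assuming halo $x$-positions in a $500\,h^{-1}\,{\rm Mpc}$ box:

```python
import numpy as np

def jackknife_error(x_pos, values, box=500.0, n_slices=10):
    """Delete-one jackknife error on the mean of `values`, removing one
    slab of the simulation volume (split along x) at a time."""
    slab = np.clip(np.floor(x_pos / (box / n_slices)).astype(int),
                   0, n_slices - 1)
    estimates = np.array([values[slab != i].mean() for i in range(n_slices)])
    mean = estimates.mean()
    # standard jackknife variance: (N-1)/N * sum of squared deviations
    return np.sqrt((n_slices - 1) / n_slices * np.sum((estimates - mean) ** 2))
```

A halo is assigned to a slab by its center, so all of its galaxies enter or leave a jackknife realization together, as described above.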
The level of occupancy variation that is present appears to depend on the specifics of the galaxy formation model utilized. \begin{figure} \includegraphics[width=0.48\textwidth]{Fig04a.ps} \includegraphics[width=0.48\textwidth]{Fig04b.ps} \caption{\label{Fig:HOD_L12} The same as Fig.~\ref{Fig:HOD_Err} but for the L12 model.} \end{figure} \begin{figure*} \includegraphics[width=0.48\textwidth]{Fig05a.ps} \includegraphics[width=0.48\textwidth]{Fig05b.ps} \includegraphics[width=0.48\textwidth]{Fig05c.ps} \hspace{0.6cm} \includegraphics[width=0.48\textwidth]{Fig05d.ps} \caption{\label{Fig:HOD_envir} The dependence of the halo occupation functions on large-scale environment for number densities different from the one shown in Figs.~\ref{Fig:HOD_Err} and \ref{Fig:HOD_L12}: $3.16 \times 10^{-3} \,h^{3}\,{\rm Mpc}^{-3}$ on the left-hand side and $3.16 \times 10^{-2} \,h^{3}\,{\rm Mpc}^{-3}$ on the right. The top panels are for the G11 model and the bottom ones are for L12.} \end{figure*} \vspace{0.1cm} \subsection{The HOD for different models and samples} \label{SubSec:otherHODs} \begin{figure*} \includegraphics[width=0.48\textwidth]{Fig06a.ps} \includegraphics[width=0.48\textwidth]{Fig06b.ps} \includegraphics[width=0.48\textwidth]{Fig06c.ps} \hspace{0.6cm} \includegraphics[width=0.48\textwidth]{Fig06d.ps} \caption{\label{Fig:HOD_age} The same as Fig.~\ref{Fig:HOD_envir}, but for galaxy samples selected using halo formation time, instead of environment.} \end{figure*} To further investigate the dependence on the galaxy formation model we repeat the analysis in \S~\ref{SubSec:HOD} using the independently derived L12 Durham model. Figure~\ref{Fig:HOD_L12} shows the HOD dependence on environment and halo age for a galaxy sample with number density of $10^{-2} \,h^{3}\,{\rm Mpc}^{-3}$ from the L12 SAM. The environmental dependence for L12 shows a similar, but more subtle, trend for the central occupation, while the trend for the satellite occupation disappears.
The difference in the satellite occupations between L12 and G11 could arise due to the different treatment of satellites in the two models (\S~\ref{SubSec:SAM}). As the satellite destruction processes are more immediate in L12, perhaps there is less time for the environmental effects to impact the occupation in that case. The HOD dependence on halo formation time for L12 and G11 is very similar, with L12 as well showing the strong trends for both the central and satellite occupation. The tendency of centrals to shift toward occupying lower mass halos is slightly stronger for L12. We note the distinct change of shape of the central occupation, giving rise to a non-monotonic occupation for the galaxies in early-forming halos. This is likely to be related to the form of AGN feedback in the Durham models, as discussed in \citet{nuala17}. We also examine the dependence of the different trends on the stellar mass of the galaxies, by varying the number density of the samples. As the samples are ranked by stellar mass, larger number densities include smaller stellar masses, while small number densities are limited to more massive galaxies. We present our results for two additional number densities beyond the one previously used (one smaller and one larger) in Figures~\ref{Fig:HOD_envir} and \ref{Fig:HOD_age}, for environment and age, respectively. In both cases, the HODs change globally as expected, shifting overall towards lower halo masses with increasing number density (decreasing stellar mass). The specific signatures of the environmental dependence of the HOD change as well with number density. For G11 (top panels of Fig.~\ref{Fig:HOD_envir}), the differences in the central occupations increase with number density. This is in accordance with the findings of \citet{croton07} that galaxy assembly bias is stronger for fainter (less massive) galaxies.
For the lowest number density shown, corresponding to galaxies with stellar masses larger than $3.88 \times 10^{10} h^{-1}M_{\odot}$ (Table~\ref{table}), the differences between the central occupations are barely noticeable. In contrast, the G11 satellite occupation differences decrease slightly with number density. These opposing changes with number density suggest that the environmental dependences of the central and satellite occupations have different origins. We find a similar change with number density of the central occupation environment dependence for L12 (bottom panels of Fig.~\ref{Fig:HOD_envir}), while the satellite occupancy variation remains effectively undetected. The halo age signatures for the different number densities (Figs.~\ref{Fig:HOD_Err}, \ref{Fig:HOD_L12} and \ref{Fig:HOD_age}) are quite robust and do not exhibit any clear dependence on the number density for either model, again indicating that these may be of a different physical nature. The non-monotonic occupation behavior for the early-forming halos \citep{nuala17} is also apparent in the smallest number density case for the G11 model. \vspace{0.1cm} \subsection{Extending the HOD parametrization} \label{SubSec:param} \begin{figure*}[bt] \includegraphics[width=1\textwidth]{Fig07.ps} \caption{\label{Fig:HODfit_Env} (Left) The HOD of the G11 SAM for a number density of $3.16 \times 10^{-2} \,h^{3}\,{\rm Mpc}^{-3}$. Dots represent the HOD calculated in the simulation: the black ones show the HOD for all galaxies; the red ones the HOD for the $10\%$ of halos in the densest environments; and the blue ones show the HOD for the $10\%$ of halos in the least dense environments. The solid lines, in corresponding colors, show the 5-parameter best-fit models for these.
(Right) The values of the best-fitting parameters of the HODs as a function of the environment percentile for $M_{\rm min}$ (top left), $\sigma_{\rm logM}$ (bottom left), $M_{\rm cut}$ (top middle), $M_{\rm 1}$ (bottom middle), $\alpha$ (top right) and $M_{\rm 1}/M_{\rm min}$ (bottom right). Each dot in these plots represents a different subsample selected by its large-scale environment, each with $10\%$ of the full halo population, with the environment density increasing from left to right. The left-most dots and right-most dots in these panels represent the parameter values of the models plotted in blue and red, respectively, in the left-hand side HOD panel. The errorbars reflect the $1\sigma$ uncertainty on the parameters. The green horizontal lines with shaded regions in the parameters panels are the values fitted for the full sample and their uncertainty. } \end{figure*} \begin{figure*} \includegraphics[width=1\textwidth]{Fig08.ps} \caption{\label{Fig:HODfit_Age} Same as Fig.~\ref{Fig:HODfit_Env}, but for subsamples selected by halo formation time instead of large-scale environment. In the panels for the individual parameters, the halo formation redshift (age) increases going from left to right. Note that for the parameter values on the right, the y-axis ranges are different from those of Fig.~\ref{Fig:HODfit_Env}.} \end{figure*} It is customary to parametrize the shape of the HOD using a 5-parameter model which captures the main features of the halo occupation function, as predicted by SAMs and hydrodynamic simulations \citep{zheng05}. This model is commonly used when interpreting galaxy clustering measurements to infer the galaxy--halo connection (e.g., \citealt{zheng07,zehavi11}). Here, we characterize the HOD dependences on age and environment in terms of the 5 parameters, as a first step toward incorporating these variations into the HOD model. The halo occupation function is usually modeled separately for central galaxies and satellites.
The occupation function for centrals is a softened step-like function with the following form: \begin{equation} \langle N_{\rm cen}(M_{\rm h})\rangle = \frac{1}{2}\left[ 1 + {\rm erf} \left( \frac{\log M_{\rm h} - \log M_{\rm min}}{\sigma_{\log M}} \right) \right], \label{Eq:Cen_HOD} \end{equation} where $ {\rm erf}(x)$ is the error function, $ {\rm erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} {\rm d}t. $ $M_{\rm min}$ characterizes the minimum halo mass for hosting a central galaxy above the specified threshold. In the form adopted here, it is the halo mass for which half of the halos are occupied. $\sigma_{\log M}$ indicates the width of the transition from zero to one galaxy per halo and reflects the scatter between stellar mass and halo mass. For satellite galaxies, the occupation function is modeled as: \begin{equation} \langle N_{\rm sat}(M_{\rm h})\rangle = \left( \frac{M_{\rm h}-M_{\rm cut}}{M^*_1}\right)^\alpha, \label{Eq:Sat_HOD} \end{equation} for $M_{\rm h}>M_{\rm cut}$, representing a power-law occupation function with a smooth cutoff at the low-mass end. Here $\alpha$ is the slope of the power-law, with typical values close to one, $M_{\rm cut}$ is the satellite cutoff mass scale (i.e., the minimum mass of halos hosting satellites), and $M^*_1$ is the normalization. Often, instead of the latter, a related parameter is used, $M_{1}$, which is the mass of halos that host one satellite galaxy on average ($M_{1} = M^*_1 + M_{\rm cut}$). The total occupation function is then specified by these 5 parameters and given by the sum of the two terms: \begin{equation} \langle N_{\rm gal}(M_{\rm h})\rangle = \langle N_{\rm cen}(M_{\rm h})\rangle + \langle N_{\rm sat}(M_{\rm h})\rangle. \end{equation} Figure~\ref{Fig:HODfit_Env} shows how these 5 parameters vary with environment. 
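The parametrization above is straightforward to evaluate directly. The sketch below (our function names; parameter values purely illustrative) also makes explicit the stated relation $M_{1} = M^*_1 + M_{\rm cut}$, i.e., a halo of mass $M_1$ hosts one satellite on average:

```python
import math

def n_cen(Mh, Mmin, sigma_logM):
    """Mean central occupation: softened step (erf) in log halo mass."""
    return 0.5 * (1.0 + math.erf((math.log10(Mh) - math.log10(Mmin)) / sigma_logM))

def n_sat(Mh, Mcut, M1star, alpha):
    """Mean satellite occupation: power law with a cutoff at Mcut."""
    return ((Mh - Mcut) / M1star) ** alpha if Mh > Mcut else 0.0

def n_gal(Mh, Mmin, sigma_logM, Mcut, M1star, alpha):
    """Total mean occupation: sum of the central and satellite terms."""
    return n_cen(Mh, Mmin, sigma_logM) + n_sat(Mh, Mcut, M1star, alpha)
```

By construction $\langle N_{\rm cen}(M_{\rm min})\rangle = 1/2$ and $\langle N_{\rm sat}(M^*_1 + M_{\rm cut})\rangle = 1$ for any $\alpha$.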
The left-hand side presents the HOD of the G11 SAM for $n=3.16 \times 10^{-2} \,h^{3}\,{\rm Mpc}^{-3}$ for the full sample, for the $10\%$ of halos in the densest regions, and for the $10\%$ of halos in the least dense regions. The dots represent the directly-measured HODs and the lines are the best-fit 5-parameter models to them. The right-hand side examines how each of the parameters varies with environment in $10\%$ bins of halo environment. The fits are done assuming equal weight to all measurements and using only those with $\langle N(M_{\rm h})\rangle > 0.1$. The errorbars on the parameters are obtained by requiring $\chi^2/{\rm dof}=1$, as in \citet{Contreras17}. For this G11 sample, we see that the changes to the parameters when varying the environment are subtle, but all are affected. The changes in the central occupation with density are in fact quite small, with $M_{\rm min}$ decreasing and $\sigma_{\rm logM}$ increasing slightly with density. The variations in the best-fitting parameters are influenced by the limited flexibility in the assumed shape of the HOD. The changes in the satellite occupation with increasing density act to gradually decrease $M_{\rm cut}$ and $M_{\rm 1}$ and increase the slope $\alpha$, over at least part of the density range. We note that we find more intricate changes to the HOD parameters than those modeled in \citet{mcewen16}, since that work found differences only in the central occupation function and not in the satellite one. The resulting variation in the $M_{\rm 1}/M_{\rm min}$ ratio (bottom-right panel) is a noticeable decrease with increasing density over most of the range, but then a turnover and a slight increase for halos in the densest regions. Fig.~\ref{Fig:HODfit_Age} examines the change in the parameters, but now with halo formation time. The change in parameters in this case is more distinct and significant, since the dependence of the HOD on halo age is stronger than on environment.
$M_{\rm min}$ monotonically decreases with increasing formation redshift (earlier formation). $\sigma_{\rm logM}$ varies with halo age but does not show a clear trend. The satellite occupation changes in the opposite sense, with all three parameters $M_{\rm cut}$, $M_{\rm 1}$ and $\alpha$ increasing significantly with larger formation redshift. (The change in the slope $\alpha$ again may be somewhat affected by the limitations of the assumed HOD shape.) The combined effect on the $M_{\rm 1}/M_{\rm min}$ ratio is a dramatic increase with formation redshift, of about a factor of six over the full range! This change is much stronger than the variation of this ratio with either number density or redshift, as explored by \citet{Contreras17}: about twice as large as the variation with number density and close to four times larger than the evolution in the ratio from redshift 3 to 0. This significant change, however, is easily understood from the predicted occupancy variation (e.g., bottom part of Fig.~\ref{Fig:HOD_Err}). For earlier-forming halos, $M_{\rm 1}$ shifts toward larger halo masses while $M_{\rm min}$ shifts toward smaller halo masses, resulting in a substantial increase in their ratio. Still, it is noteworthy that the $M_{\rm 1}/M_{\rm min}$ ratio is such a sensitive indicator of halo age. These results can inform theoretical modeling efforts extending the standard HOD framework. We can envision modeling the change of each parameter with halo age as a power-law function with an additional assembly bias parameter (similar to our modeling of the evolution of the HOD in \citealt{Contreras17}). Such a model may aid in obtaining constraints on assembly bias from observational data, as well as providing a straightforward method of incorporating the age dependence of the HOD into galaxy mock catalogs.
\vspace{0.2cm} \section{The stellar mass -- halo mass relation} \label{Sec:SMHM} To gain a better understanding of the origin of the trends seen in the central galaxy occupation function with age and environment, we examine the stellar mass -- halo mass (SMHM) relation. As we show, it is the dependence of the scatter in this relation on the secondary parameters that gives rise to the occupancy variation and to galaxy assembly bias. Figure~\ref{Fig:SMHM} shows the stellar mass of central galaxies as a function of halo mass for galaxies in the G11 SAM. We plot $1\%$ of all central galaxies, for clarity. The stellar mass increases with halo mass, with the median of the relation (black line) exhibiting a relatively steep slope up to $M_{\rm h} \sim 10^{12} h^{-1}M_{\odot}$ and a shallower increase for more massive halos, when the AGN feedback becomes important. This was studied in detail in \citet{Mitchell16} for {\tt GALFORM} (see also \citealt{Contreras15}). There is significant scatter in the relation which decreases at the high-mass end. This scatter is expected to be due to stochasticity in both galaxy and halo assembly histories and the various physical processes. Thus we may expect the scatter to relate to the properties of the host halos. We examine this visually by color-coding each galaxy by its large-scale environment (top panel) and by the formation redshift of its host halo (bottom panel). \begin{figure} \hspace{-0.2cm} \includegraphics[width=0.5\textwidth]{Fig09.ps} \caption{\label{Fig:SMHM} (Top) The stellar mass of central galaxies as a function of host halo mass for the G11 SAM and its dependence on environment. Each dot represents a central galaxy, plotted for a representative (randomly chosen) $1\%$ of the galaxies. Galaxies are color-coded by their $5 \,h^{-1}\,{\rm Mpc}$ Gaussian smoothed density, $\delta_5$, according to the color scale shown on the right. The plotting order is also chosen randomly, so as to avoid any overplotting issue. 
The black solid line represents the median of the distribution, with the errorbars designating the $20\%$--$80\%$ range of the distribution. (Bottom) Same as for the top panel, but now color-coded by the formation redshift, calculated as the time when the halo reaches half of its final mass. For fixed halo mass, more massive central galaxies tend to live in halos that formed early or reside in denser environments (with the latter being a weaker trend than the former). } \end{figure} As is apparent from Fig.~\ref{Fig:SMHM}, the spread around the median SMHM relation is not random, but depends on the secondary property. For halos less massive than about $10^{12} h^{-1} M_{\odot}$, there is an apparent dependence on the large-scale environment (top panel), where for fixed halo mass, more massive central galaxies tend to reside in denser environments. This trend appears to have a fairly sharp transition between relatively low and high densities, even though there is a large scatter of different environments at each location on the SMHM relation (as is evident from the mix of colors). The trend does not persist toward larger masses, where there is no variation of environment for fixed halo mass (or else it is impossible to see one due to the large scatter of different densities). We find that the central galaxies in the densest environments (in absolute terms, not per halo mass bin, i.e., the red/maroon colored ones in the top panel) populate two distinct regions in this diagram: they predominantly populate the most massive halos, albeit in smaller numbers according to the halo mass function, and they also comprise the most massive centrals in low-mass halos. The former simply stems from the fact that the most massive halos tend to reside in dense environments, while the latter is related to the occupancy variation we discuss here. The bottom panel of Fig.~\ref{Fig:SMHM} shows the same SMHM relation, but now color-coded by the formation redshift (age) of the halos.
The trend with halo age for fixed halo mass is particularly striking, with more massive central galaxies generally residing in halos that formed early. This dependence on halo age is gradual but very distinct, due to the small scatter of halo ages at each location in the stellar mass -- halo mass diagram for halos below $\sim 10^{12} h^{-1} M_{\odot}$. The trend persists for all halo masses, but with a significantly larger scatter of halo ages at the high-mass end, as the formation redshifts also progressively become more recent, as expected. A similar trend with halo formation time has been measured in SAMs by \citet{wang13a} and more recently also for galaxies in the EAGLE hydrodynamical simulation \citep{matthee17}. It likely arises because central galaxies in early-formed halos have more time for accretion and star formation and thus end up being more massive. Once again it appears that halo age is the more fundamental characteristic here that affects galaxy properties. The dependence on environment is more complex and harder to interpret. \citet{jung14} investigate the stellar mass dependence on environment for fixed halo mass using a different SAM. They find only small differences between halos in the densest and least dense environments for low halo masses, and these differences diminish with increasing halo mass (cf. \citealt{tonnesen15}). This suggests that the level of secondary correlations present (and by association, galaxy assembly bias) depends on the details of the galaxy formation model adopted. We note also the counter-intuitive fact that, at least according to the study of \citet{matthee17}, while some fraction of the scatter in the SMHM relation is accounted for by formation time, the large-scale environment seems to make a negligible contribution. 
The fundamental importance of these dependences of stellar mass on secondary properties at fixed halo mass is that they provide a direct explanation for the central galaxy occupancy variation with environment and halo age (as shown in, e.g., Fig.~\ref{Fig:HOD_Err}). For fixed halo mass, early-formed halos or halos in denser environments host more massive galaxies. Consequently, any fixed stellar-mass cut (e.g., the $1.42 \times 10^{10} h^{-1}M_{\odot}$ threshold used to define the sample analyzed in Fig.~\ref{Fig:HOD_Err}) would include these first. Thus the central galaxies in early-forming halos or dense environments populate relatively lower-mass halos, extending the central occupation function in that direction. Conversely, late-forming halos or halos in underdense environments generally host lower-mass galaxies. Therefore, only centrals hosted by more massive halos will make it into the sample, and the central occupation function in that case will be shifted toward more massive halos. The level of scatter in halo age or environment at each location directly determines the strength of the occupancy variation. The tight correlation between stellar mass and halo age (for fixed halo mass) results in a large variation of the HOD, while the large scatter involved with environment results in only a moderate change of the HOD in that case. Furthermore, as noted already, the SMHM trend with environment holds only at the low-mass end, while the general trend with halo age persists for all halo masses. This explains the change in occupancy variation with number density, demonstrated in Figs.~\ref{Fig:HOD_envir} and \ref{Fig:HOD_age}. For age, the occupancy variation remains at comparable levels for all number densities, similar to the trend in the SMHM relation.
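This mechanism (correlated scatter in the SMHM relation plus a fixed stellar-mass threshold) can be reproduced with a deliberately crude toy model; every number below (slopes, scatter amplitudes, thresholds) is illustrative and not taken from the SAMs:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
log_mh = rng.uniform(11.0, 12.5, n)   # toy halo masses, log10(Msun/h)
age = rng.normal(size=n)              # standardized formation redshift
# Toy SMHM relation: stellar mass grows with halo mass, with scatter
# partly correlated with halo age.
log_mstar = 9.0 + (log_mh - 11.0) + 0.15 * age + 0.1 * rng.normal(size=n)

threshold = 9.75                              # fixed stellar-mass cut
in_bin = (log_mh > 11.6) & (log_mh < 11.8)    # a fixed halo-mass bin
early, late = age > 0.0, age <= 0.0
f_early = np.mean(log_mstar[in_bin & early] > threshold)
f_late = np.mean(log_mstar[in_bin & late] > threshold)
# At fixed halo mass, early-forming halos pass the cut more often,
# i.e. their central occupation extends toward lower halo masses.
```

The fraction of halos hosting a central above the cut is markedly higher for the early-forming subset in the same mass bin, mirroring the occupancy variation discussed in the text.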
For environment, the level of occupancy variation decreases for smaller number densities (larger stellar mass thresholds), as these correspond to larger halo masses where the trend with environment diminishes. In any case, we see that the correlated nature of the scatter in the SMHM relation is intimately related to the trends in the occupation functions. It is exactly this coupling between halo properties (such as large-scale environment and formation time) and galaxy properties (such as stellar mass or luminosity) that causes the dependence of the HOD on halo assembly. A more extensive study of the connection between the SMHM relation and the occupancy variation and galaxy assembly bias will be presented elsewhere (Zehavi et al., in preparation). \vspace{0.2cm} \section{The impact on galaxy clustering} \label{Sec:clustering} To see the impact of the occupancy variation with halo age and environment (\S~\ref{Sec:HOD}) on galaxy clustering, we measure and examine the correlation functions of galaxies in these samples. The variations in the HODs couple with the different clustering properties of the halos to produce a signature of assembly bias in the galaxy distribution. \vspace{0.1cm} \subsection{The shuffling mechanism} \label{SubSec:Shuffle} To measure the effects of assembly bias on the galaxy correlation function, we need to create a control sample of galaxies from which the galaxy assembly bias has been explicitly removed, and then compare to the clustering of the original sample. To do this, we shuffle the full galaxy population among halos of similar masses, following the procedure of \citet{croton07}. Specifically, we select halos in 0.2 dex bins of halo mass and randomly reassign the central galaxies hosted by these halos among all halos in that mass bin, placing them at the same location as the original central galaxy in the halo. The satellite galaxies are moved together with their original central galaxy, preserving their distribution around it.
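The shuffling step can be sketched as follows; the function name and array layout are our illustrative choices. For brevity it returns, for each halo, the index of the halo whose central (with its attached satellites) it receives:

```python
import numpy as np

def shuffle_within_mass_bins(log_mh, rng, bin_width=0.2):
    """Randomly reassign occupants among halos of similar mass: permute
    halo indices within `bin_width` dex bins of halo mass."""
    recv = np.arange(log_mh.size)
    edges = np.arange(log_mh.min(), log_mh.max() + bin_width, bin_width)
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.where((log_mh >= lo) & (log_mh < hi))[0]
        recv[idx] = rng.permutation(idx)
    return recv
```

Each reassigned central is placed at the position of the new host's original central, and its satellites keep their relative positions around it, so the full-sample HOD is untouched while any correlation with age or environment is erased.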
The shuffling eliminates a dependence of the galaxy population on any inherent properties of their host halos other than mass (since these are now randomly assigned). Practically, what the shuffling does is remove the occupancy variation, namely the dependence of the HOD on halo properties other than mass. For these shuffled samples, the HOD of the full galaxy sample remains the same, but the differences between the HODs of different halo populations, e.g., split by age or environment, are now eliminated and all share the same HOD as that of the full sample. We have verified that the results we present below are insensitive to our specific choice of the bin size in halo mass. We also note that alternative shuffling algorithms have been proposed in the literature, where the satellites are also shuffled among different halos of the same mass independently of the central galaxies (e.g., \citealt{Zu08,zentner14}). This additional satellite shuffling is important only when one is specifically concerned with features that correlate the properties of centrals and satellites or the satellites with themselves, such as galactic conformity or satellite alignment. For our purposes, the combined central+satellite galaxies shuffling completely suffices to erase the signature of the occupancy variation. We clarify that our shuffling does impact the small scales (1-halo term) of the correlation function of our subsamples, as we show below (in contrast to the statement made in some works that this shuffling preserves the 1-halo term, which only holds when considering the {\it full} galaxy sample). \vspace{0.1cm} \subsection{The correlation functions} \label{SubSec:clustering} Figure~\ref{Fig:CF_Err} shows the correlation functions measured for the galaxy subsamples analyzed in Fig.~\ref{Fig:HOD_Err}, for the $n=10^{-2} \,h^{3}\,{\rm Mpc}^{-3}$ sample from the G11 SAM. 
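As a concrete illustration of the measurements that follow, a cross-correlation function in a periodic box can be estimated by comparing pair counts to the analytic random-random expectation (a brute-force sketch; the measurements in the paper would in practice use an optimized pair-counting code):

```python
import numpy as np

def cross_xi(pos_a, pos_b, box, edges):
    """Natural estimator xi(r) = AB(r)/RR(r) - 1 for the cross-correlation
    of two point sets in a periodic box, with the random-random term
    computed analytically from the spherical shell volumes."""
    d = pos_a[:, None, :] - pos_b[None, :, :]
    d -= box * np.round(d / box)                  # minimum-image separation
    r = np.sqrt((d * d).sum(axis=-1)).ravel()
    ab, _ = np.histogram(r, bins=edges)
    shell = 4.0 / 3.0 * np.pi * np.diff(edges ** 3)
    rr = len(pos_a) * len(pos_b) * shell / box ** 3
    return ab / rr - 1.0
```

Setting `pos_a = pos_b` gives the auto-correlation analogue (up to self-pairs at $r=0$, which fall outside any sensible bin edges).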
We calculate the auto-correlation function of the full galaxy sample (solid black lines) and the cross-correlation function between the full sample and the galaxies in the different subsets of 20\% of the halos (red and blue solid lines, as labelled), showing the environmental dependence on the top and formation time on the bottom. The dashed lines in all cases show the results when shuffling the galaxy samples, effectively removing the occupancy variation, as described in \S~\ref{SubSec:Shuffle}. The top subpanels are the correlation functions themselves, and the middle and bottom subpanels show ratios derived from these correlation functions, to highlight different features as described in the figure caption and discussed below. \begin{figure} \includegraphics[width=0.48\textwidth]{Fig10a.ps} \includegraphics[width=0.48\textwidth]{Fig10b.ps} \caption{\label{Fig:CF_Err} Correlation functions for the G11 $n=10^{-2} \,h^{3}\,{\rm Mpc}^{-3}$ sample. (Top panel) The top subpanel shows the auto-correlation function of the full galaxy sample (black solid line) and the cross-correlation functions of the full sample and galaxies in the 20\% of halos in the most and least dense regions (solid red and blue lines, respectively). Dashed lines are the corresponding correlation functions of the shuffled galaxies (see text). The middle subpanel displays these now divided by the full sample auto-correlation function, highlighting the different clustering properties in different environments. This is shown for both original and shuffled galaxy samples, e.g., the dashed red line is the ratio of the cross-correlation of the shuffled galaxies in the dense regions and the auto-correlation of the full shuffled sample. The bottom subpanel shows, for the three cases, the ratio between the correlation functions of the original and shuffled galaxy samples. In all subpanels, the shaded regions represent the errorbars estimated from 10 jackknife realizations. 
(Bottom panel) Same as in the top panel, but now for galaxies residing in halo samples chosen by their formation redshift instead of environment. Red (blue) lines correspond to the cross-correlation function of the full galaxy sample with the subset of galaxies residing in the 20\% earliest- (latest-)formed halos. } \end{figure} The shaded regions represent the uncertainties on these measurements estimated from jackknife resampling, when dividing the full simulation volume into 10 slices along one axis (identical subvolumes to those used to estimate the errors in Fig.~\ref{Fig:HOD_Err}). The uncertainties on the ratios in the middle and bottom subpanels are the jackknife errors on the ratios themselves, which are significantly smaller than propagating the individual measurement errors, as can be seen by comparing the red and blue shaded regions in the middle subpanels with the grey shaded regions there (the jackknife measurement errors for the full sample). This is expected as the variations of the different auto-correlation and cross-correlation functions among the different jackknife samples are naturally correlated. (Note that the y-axis range in the two subpanels is different and the grey uncertainty regions plotted in the top and bottom parts of the figure are identical.) We start by examining the top part and top subpanel of Figure~\ref{Fig:CF_Err}, which illustrates the dependence of clustering on environment. We find distinct differences on large scales between the clustering of the galaxies in the most dense environments versus the least dense regions. The galaxies in dense environments are significantly more clustered than the full sample, while the galaxies in the underdense regions are much less clustered, as expected. 
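The jackknife resampling used for the shaded regions can be sketched generically as follows (our own minimal implementation; `statistic` stands in for any clustering measurement on a point sample):

```python
import numpy as np

def jackknife_error(positions, box, statistic, n_jack=10):
    """Leave-one-out jackknife over n_jack slices of the box along the
    x-axis: recompute the statistic omitting one slice at a time, and
    scale the spread of the leave-one-out values by (n-1)/n, as
    appropriate for jackknife resampling."""
    labels = np.minimum((positions[:, 0] / box * n_jack).astype(int),
                        n_jack - 1)
    vals = np.array([statistic(positions[labels != k])
                     for k in range(n_jack)])
    mean = vals.mean(axis=0)
    err = np.sqrt((n_jack - 1.0) / n_jack
                  * np.sum((vals - mean) ** 2, axis=0))
    return mean, err
```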
The cross-correlation functions do not have the same shape on large scales as the full auto-correlation function, due to the way these samples were defined using the $5 \,h^{-1}\,{\rm Mpc}$ Gaussian smoothed density fields, which effectively carves out different regions of dense and underdense environments as is seen in Fig.~\ref{Fig:CW}. In particular, the cross-correlation function for the underdense regions exhibits a fairly sharp dropoff above $\sim 1 \,h^{-1}\,{\rm Mpc}$ and goes below zero at $\gtrsim 3 \,h^{-1}\,{\rm Mpc}$ (which is where we stop plotting $\log \xi$). The middle subpanel shows the ratios of the cross-correlations of the subsamples to the full sample auto-correlation and highlights the dependence on environment. These differences arise due to the dependence of clustering on large-scale environment, namely the fact that halos in dense environments are more clustered than halos of the same mass in underdense environments. We stress that this dependence by itself is {\it not} what is commonly referred to as galaxy assembly bias. To illustrate this we also plot the correlation functions of the shuffled galaxies, where galaxies are randomly assigned to halos of the same mass, which eliminates any connection to the assembly history of the halos. These correlation functions (the dashed lines) show essentially the same trends with environment. This is in agreement with the conclusions of \citet{abbas05,abbas06} who demonstrate that for the most part the clustering dependence on large-scale environment can be explained without resorting to occupancy variation. The differences between the solid and dashed lines (in the top and middle subpanels of Fig.~\ref{Fig:CF_Err}) are the ones reflecting galaxy assembly bias. Namely, the occupancy variation we quantified in the HOD coupled with halo assembly bias gives rise to these systematic differences in the clustering. 
We illustrate these in detail in the bottom subpanel which plots the ratio of original to shuffled correlation functions. We find that the HOD differences with environment do induce significant differences in the large-scale clustering, where for the full sample the clustering of galaxies is stronger by about $15\%$ than the clustering of the shuffled galaxies (with no occupancy variation and thus no galaxy assembly bias). This translates to a $\sim 7\%$ change in the galaxy bias, in good agreement with the predictions of \citet{croton07}. The subsample of galaxies in the most dense environments exhibits a similar trend, while the galaxies in the least dense regions are significantly {\it less} clustered than their shuffled counterparts over most of the range shown. We discuss in \S~\ref{SubSec:origin} possible reasons for this difference. The bottom part of Fig.~\ref{Fig:CF_Err} investigates the dependence of clustering on halo age. We see that galaxies in the earliest-formed halos are more clustered on large scales than galaxies in the latest-forming halos. The clustering differences in this case are much smaller than the differences with large-scale environment and have similar shapes. This can be readily seen in the top and middle subpanels, and we remind the reader that the y-axis in the middle subpanel for halo age spans a much smaller range than the corresponding one for environment. Again, we note that while these differences are directly due to age-dependent halo clustering, namely reflecting halo assembly bias, these trends only marginally depend on the occupancy variation with age, as seen by the relatively-small differences on large scales between the solid and dashed lines in the middle subpanel. The small-scale clustering in this case shows bigger differences between the solid and dashed lines (that are noticeable in all subpanels). 
This arises mostly because of the large differences in the satellite occupation functions, as seen in the bottom panel of Fig.~\ref{Fig:HOD_Err}. There are relatively more satellites in late-formed halos versus early-formed halos, especially at the lower mass end of halos that host satellites, likely simply due to having more time for the satellites to be destroyed in the early-formed halos. This leads to a reversal of the clustering trend (most notable in the middle subpanel at $r \sim 1 \,h^{-1}\,{\rm Mpc}$) and a stronger small-scale clustering for the galaxies in the most-recently forming halos. Interestingly, this is then a case where there is no halo assembly bias but the galaxy clustering is still different due to occupancy variation with halo age. This feature is evident only for the clustering as a function of halo age and is negligible for the dependence on environment, since in the latter case the differences between the satellite occupations are minuscule (as we have seen in the top panel of Fig.~\ref{Fig:HOD_Err}). The bottom subpanel focuses on the galaxy clustering differences due to the occupancy variations by showing the ratio of the correlation functions of the original galaxy samples to the shuffled ones. On small scales, we can see the strong signature of the satellite occupancy variation that we had just discussed (differences between blue and red lines in the 1-halo regime below $r \sim 1 \,h^{-1}\,{\rm Mpc}$). The ratio for the full sample (black line) on these scales remains unity, since the shuffling does not alter the 1-halo contribution at all in that case. On larger scales, in the 2-halo regime, we see that all three samples exhibit a similar galaxy assembly bias trend, where the clustering of galaxies is stronger than that of the shuffled galaxies. We discuss this further below. Appendix~\ref{Sec:AutoCorr} presents the auto-correlation functions for these subsets of galaxies (instead of the cross correlation with the full sample). 
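The large-scale numbers quoted earlier for the full sample are mutually consistent: on large scales $\xi \propto b^2$, so a $\sim 15\%$ enhancement of the correlation function corresponds to a $\sim 7\%$ change in the galaxy bias. A one-line check:

```python
import math

xi_boost = 1.15                     # original / shuffled xi on large scales
bias_boost = math.sqrt(xi_boost)    # xi is proportional to b^2, so b scales as sqrt(xi)
print(round(100.0 * (bias_boost - 1.0), 1))   # prints 7.2
```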
\vspace{0.1cm} \subsection{Origin of galaxy assembly bias trends} \label{SubSec:origin} We note an additional subtle feature in the middle subpanel of the bottom part of Fig.~\ref{Fig:CF_Err}. As we explained above, the large-scale clustering of galaxies as a function of halo age is dominated by the fact that, at fixed mass, early-formed halos cluster more strongly than late-forming halos (i.e., halo assembly bias). This is reflected by the difference between the dashed red and blue lines. When including the occupancy variation (red and blue solid lines), we see that it acts to slightly {\it decrease} the clustering differences between early- and late-formed halos, for separations larger than $\sim 5 \,h^{-1}\,{\rm Mpc}$. These small differences can be understood by examining the changes to the HOD (as shown in the bottom panel of Fig.~\ref{Fig:HOD_Err}). The central galaxy occupation function for the late-forming halos is shifted toward higher halo masses, which acts to slightly increase their clustering (the blue solid line being above the blue dashed line in the middle subpanel corresponding to formation time). Conversely, the central galaxy occupation function for the early-formed halos is shifted toward lower halo masses, resulting in a slightly reduced clustering (the red solid line lying slightly below the red dashed line). The interpretation gets a bit more complicated since an opposite trend is seen for the satellites; however, we have calculated the effective bias corresponding to these varying HODs and confirmed that the net effect is as described above. We further corroborate this origin of the trend by examining the clustering of the different samples when considering only the central galaxies while excluding the satellites. 
We now turn back to the galaxy assembly bias trends shown on the bottom subpanels in Fig.~\ref{Fig:CF_Err}, for both halo age and environment, where we plotted the clustering of the different samples compared to the clustering of shuffled galaxies where the occupancy variation was erased. When comparing the galaxy assembly trends for halo age and environment, we find a similar effect for the galaxies in early-formed halos and for the galaxies in dense environments (the red solid lines in the two bottom subpanels). The clustering difference is in the same sense for galaxies in late-forming halos (blue solid line in the bottom subpanel of the bottom part of the figure). However, for galaxies in underdense environments, this trend reverses, with a weaker clustering than that of the shuffled sample (blue solid line in the bottom subpanel of the top part of the figure). We attempt to obtain some insight here regarding the origin of these trends. We first consider the trends with regard to halo age. In that case, we see that regardless of the sample used (full sample, early forming, or late forming) the galaxy assembly bias trends go in the same direction, with the clustering of galaxies stronger than that of the shuffled samples. We can understand why that is by examining the variations in the central galaxies HOD in Fig.~\ref{Fig:HOD_Err} and the systematic dependence on halo age in the central galaxies SMHM relation (Fig.~\ref{Fig:SMHM}). For any halo mass, we see that central galaxies tend to occupy the earlier-formed halos first. Thus, it is always the case -- for any range of halo ages -- that the relatively earlier-formed halos will be preferentially populated. Coupling this with halo assembly bias, namely the stronger clustering of early-formed halos (for fixed halo mass), results in these galaxies being more clustered than one would expect otherwise. 
Therefore, the clustering of {\it any} such sample would always be stronger than the clustering of the corresponding shuffled sample. The variations in the magnitude of the effect can be explained by looking at the role of the occupancy variation for the satellite galaxies. For the galaxies in the $20\%$ latest-formed halos (the blue solid line) this effect is in fact even more prominent, since these halos also have relatively more satellites (compared to the same halos in the shuffled case), which acts to increase the clustering further (via central-satellite galaxy pairs that contribute as well to the large-scale clustering). On the other hand, the $20\%$ earliest-formed halos have relatively fewer satellites (compared to the same halos in the shuffled case or relative to the HOD for the full sample), and this acts to decrease the clustering and slightly reduce the amplitude of the effect. For the large-scale environment, we similarly find preferential formation of central galaxies in dense environments. This again implies that halos in relatively denser environments tend to be more populated, and as these halos are more strongly clustered, the galaxies in such halos end up being more clustered (compared to the shuffled galaxies). However, the central galaxies occupancy variation is more nuanced for environment than for halo age and the satellites trend is different for them, so the interpretation is more complex. In particular, it is non-trivial to explain the sense of the galaxy assembly bias for the underdense regions, where the clustering is weaker than for the corresponding shuffled sample. From its occupancy variation, it appears that there are fewer satellites overall in that case (for all halo masses), which is possibly the origin of the reduced clustering we measure. To confirm this explanation we calculate the effective bias for the different HOD variants, by integrating over the halo mass function weighted by the different occupation functions. 
For the HOD of the underdense regions, we find a reduced effective bias compared to that of the full (or shuffled) sample. When calculating this separately for the centrals and satellites occupations, we find that the centrals term increases a little while the satellites term decreases, in agreement with the overall effect, leading to the reduced clustering. We have also examined the clustering results for the L12 model, for which there is no change in the satellites occupation function with environment (Fig.~\ref{Fig:HOD_L12}), to aid our understanding of the role the satellites play here. The magnitude of the galaxy assembly bias for the underdense regions in that case is significantly smaller (but still in the same sense as for G11). For lower number density samples (corresponding to more massive galaxies), we find that the galaxy assembly bias even switches sign, i.e., the L12 cross-correlation function for the underdense regions is more strongly clustered than the shuffled case (while the trend remains unchanged for G11). This further indicates that the trend observed for G11 for the underdense regions is due to the satellite occupancy variation. Finally, we also measured the clustering of the G11 subsamples, when considering only central galaxies. We find similar galaxy assembly bias trends for the full sample and the dense regions, and a much reduced effect for the underdense regions. This again supports our understanding that the satellites occupancy variation is the main cause of the unique galaxy assembly bias we see for the underdense regions, while the central occupancy variation is the dominant factor in all other cases. This difference may also be related to the general shape of the cross-correlation function for the underdense regions (induced by the large smoothing window) and the relative bias factor becoming negative on large scales (when the cross-correlation function goes below zero). 
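The effective-bias check described above amounts to weighting the halo bias by the mass function and the mean occupation. A schematic version (with entirely made-up grids for $n(M)$, $b(M)$ and the HODs; the actual calculation would use the simulation measurements):

```python
import numpy as np

def effective_bias(bias, dndlogm, mean_n):
    """Occupation-weighted halo bias on a grid of log halo mass:
    b_eff = sum n(M) <N(M)> b(M) / sum n(M) <N(M)>."""
    w = dndlogm * mean_n
    return float(np.sum(bias * w) / np.sum(w))

# Schematic inputs: a steep mass function, a rising halo bias, and two
# central occupation functions shifted in halo mass (as for the HOD
# variants of the different subsamples). All numbers are illustrative.
log_m = np.linspace(11.0, 15.0, 81)
dndlogm = 10.0 ** (-(log_m - 11.0))                  # made-up mass function
bias = 0.5 + 0.6 * (log_m - 11.0)                    # made-up rising b(M)
n_lo = 0.5 * (1.0 + np.tanh(2.0 * (log_m - 12.0)))   # occupation, lower Mmin
n_hi = 0.5 * (1.0 + np.tanh(2.0 * (log_m - 12.4)))   # shifted to higher mass

b_lo = effective_bias(bias, dndlogm, n_lo)
b_hi = effective_bias(bias, dndlogm, n_hi)
# shifting the occupation toward more massive halos raises b_eff
print(b_lo, b_hi)
```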
\vspace{0.2cm} \section{Summary and conclusion} \label{Sec:summary} We have utilized semi-analytic models applied to the Millennium simulation to study the occupancy variation leading to galaxy assembly bias. We studied in detail the explicit dependence of the halo occupation functions on large-scale environment, defined as the $5 \,h^{-1}\,{\rm Mpc}$ Gaussian smoothed dark matter density field, and on halo formation time, defined as the redshift at which the halo gained (for the first time) half of its present-day mass. While related, these two halo properties have only a very loose relation between them, and probe different distributions of halos. We focus our analysis on the $20\%$ subsets of halos at the extremes of the distributions of each property, defined separately for each halo mass bin, so as to eliminate the dependence of the halo mass function on these properties. We then study the different occupation functions of the galaxies in these halos and investigate the origin of the variations. We stress that in all analyses done here the HODs are calculated directly from the SAM galaxy catalogs, and not inferred from clustering measurements. Our key results are shown in Figures~\ref{Fig:HOD_Err}, \ref{Fig:SMHM} and \ref{Fig:CF_Err}. For the dependence on environment, we find small but distinct differences in the HOD, especially at the ``knee'' of the central occupation function. The central galaxies in dense environments start populating lower mass halos, and conversely, central galaxies in underdense environments are more likely to be hosted by more massive halos. This trend is robust across the two SAMs we studied. For one of the SAMs (G11), the satellite occupation function shows similar trends to the centrals, while the other (L12) does not exhibit variation for the satellites. We quantify the occupancy variations in terms of the changes to the HOD parameters. 
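The formation-redshift definition used throughout can be implemented as a simple scan along a halo's main-branch mass history (an illustrative sketch; handling of the actual merger trees is more involved):

```python
import numpy as np

def formation_redshift(z_snap, m_main):
    """Redshift at which the main branch first reaches half of the
    present-day mass, scanning from z=0 backwards and interpolating
    linearly in redshift between the bracketing snapshots.
    z_snap and m_main are ordered from z=0 toward higher redshift."""
    m_half = 0.5 * m_main[0]                 # m_main[0] is the z=0 mass
    for i in range(len(z_snap) - 1):
        if m_main[i] >= m_half > m_main[i + 1]:
            f = (m_main[i] - m_half) / (m_main[i] - m_main[i + 1])
            return z_snap[i] + f * (z_snap[i + 1] - z_snap[i])
    return z_snap[-1]  # convention: above half-mass at the earliest snapshot
```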
When studying the dependence on halo formation redshift (halo age), we find similar but significantly stronger trends for the central occupation functions. The satellite occupation function shows a reverse trend, where the early-formed halos tend to have much fewer satellites than late-formed halos. This likely arises from having more time for the satellites to merge with the central galaxy in the older halos. The relatively stronger trends for halo age suggest that this is the more fundamental property (among the two) giving rise to galaxy assembly bias. We gain insight regarding the origin of the central galaxies occupancy variation by examining the scatter in the stellar mass -- halo mass relation for them and its correlation with halo age and environment. We find that, at fixed halo mass, central galaxies in early formed halos or dense environments tend to be more massive. This directly leads to the occupancy variation we observe, as for any stellar-mass limited sample, the more massive central galaxies will be ``picked'' first in lower-mass halos. The dependence on halo age is very distinct, while the dependence on environment shows more scatter, giving rise to the stronger trends with halo age. The trends with halo age can be easily explained as the galaxies in the early-formed halos have more time to accrete and form more stars. These correlations, and the resulting occupancy variation, thus depend on the specific details of the galaxy formation model. This direct link to the correlated nature of the stellar mass -- halo mass relation also has important implications for models which use subhalo abundance matching to connect galaxies with their host halos. We also examine the auto-correlation and cross-correlation functions of these different samples and the impact of these occupancy variations, by comparing to the clustering of shuffled galaxy samples where the occupancy variation has been erased. 
We demonstrate the stronger clustering signal of galaxies in the most dense regions versus least dense regions, and similarly the strong clustering of galaxies in early-formed versus late-formed halos. We clarify that while these clustering differences arise from the dependence of halo clustering on halo age and environment, they are only marginally affected by the occupancy variations (and are not, for the most part, what we refer to as galaxy assembly bias). For all samples defined by halo age, the clustering of galaxies is stronger than that of the shuffled samples. Namely, we see that the occupation variation coupled with halo assembly bias acts to increase the clustering of galaxies. This effect is explained and dominated by the central galaxy occupancy variation. For any range of halo ages, the earlier-formed halos are preferentially populated. Since these halos are more strongly clustered, the net effect is a stronger clustering of the galaxies. The satellite occupancy variation further modulates this effect, but is secondary here. The same behavior is found for the full sample of galaxies and for galaxies in dense environments. This again is due to the tendency to preferentially populate halos in denser environments, which are in turn more strongly clustered. The galaxies in the most underdense regions exhibit the opposite trend, however, with weaker clustering than for the corresponding shuffled sample. This trend is less intuitive, but is likely caused by the satellites occupancy variation, as discussed. Our approach here has already provided considerable insight with regard to the nature and origin of this complex phenomenon. A companion paper (Contreras et al., in preparation) is studying the redshift dependence of the occupancy variation and galaxy assembly bias in the SAMs, which provides a comprehensive view of the evolution of the different trends. 
We are also investigating the environment and age occupancy variation in the EAGLE and Illustris cosmological hydrodynamical simulations (Artale et al., in preparation). Our study can inform theoretical models of assembly bias as well as attempts to determine it in observational data. Additionally, it can facilitate creating mock catalogs that incorporate this effect, to aid in preparation for future surveys and in evaluating the impact of assembly bias on cosmological analyses. It remains an open (and hotly-debated) question as to what is the level of assembly bias in the real Universe. We are hopeful that this work can set the stage for developing and applying a method that will conclusively determine that. \smallskip \acknowledgments This work was made possible by the efforts of Gerard Lemson and colleagues at the German Astronomical Virtual Observatory in setting up the Millennium Simulation database in Garching, and John Helly and Lydia Heck in setting up the mirror at Durham. We thank Celeste Artale, Shaun Cole, Ravi Sheth and Zheng Zheng for useful discussions. IZ and SC acknowledge the hospitality of the ICC at Durham and the helpful conversations with many of its members. IZ and NJS acknowledge support by NSF grant AST-1612085. IZ was further supported by a CWRU Faculty Seed Grant. PN \& IZ acknowledge support from the European Research Council, through the ERC Starting Grant DEGAS-259586. We acknowledge support from the European Commission's Framework Programme 7, through the Marie Curie International Research Staff Exchange Scheme LACEGAL (PIRSES-GA-2010-269264) and from a STFC/Newton Fund award (ST/M007995/1-DPI20140114). SC further acknowledges support from the CONICYT Doctoral Fellowship Programme. NP is supported by ``Centro de Astronomía y Tecnologías Afines'' BASAL PFB-06 and by Fondecyt Regular 1150300. CMB \& PN additionally acknowledge the support of the Science and Technology Facilities Council (ST/L00075X/1). 
PN further acknowledges the support of the Royal Society through the award of a University Research Fellowship. The calculations for this paper were performed on the ICC Cosmology Machine, which is part of the DiRAC-2 Facility jointly funded by STFC, the Large Facilities Capital Fund of BIS, and Durham University and on the Geryon computer at the Center for Astro-Engineering UC, part of the BASAL PFB-06, which received additional funding from QUIMAL 130008 and Fondequip AIC-57 for upgrades.
\subsection*{LSZ for Matrix Theory} The $N=2$ Matrix theory Hamiltonian \be H= \ft 1 2 P^0_\mu P^0_\mu + \Bigl ( \ft 12 \vec{P}_\mu \cdot \vec{P}_\mu + \, \ft 14 (\vec{X}_\mu \times \vec{X}_\nu)^2 + \, \ft i 2 \vec{X}_\mu\cdot \vec{\theta}\, \gamma_\mu\times \vec{\theta}\Bigr )\, \label{MTHam} \ee is a sum of an interacting $SU(2)$ part describing relative motions and a free $U(1)$ piece pertaining to the centre of mass. We use a vector notation for the adjoint representation of $SU(2)$, $\vec{X}_\mu=(Y^I_\mu,x_\mu)$ and $\vec{\theta}=(\theta^I,\theta^3)$ (with $I=1,2$ and $\mu=1,\ldots ,9$) and may choose a gauge in which $Y^I_9=0$. The model has a potential with flat directions along a valley floor in the Cartan sector $x_\mu$ and $\theta^3$. The remaining degrees of freedom transverse to the valley are supersymmetric harmonic oscillators in the variables $Y^I_\mu$ ($\mu\neq9$) and $\theta^I$. Upon introducing a large gauge invariant distance $x=(\vec{X}_9\cdot\vec{X}_9)^{1/2}=x_9$ as the separation of a pair of particles, the Hamiltonian \eqn{MTHam} was shown \cite{PW} to possess asymptotic two particle states of the form \be |p^1_\mu,{\cal H}^1;p^2_\mu,{\cal H}^2\rangle=|0_B,0_F\rangle\, \ft{1}{x_9}e^{i(p^1-p^2) \cdot x}e^{i(p_1+p_2)\cdot X^0} |{\cal H}^1\rangle_{\theta^0+\theta^3}\,|{\cal H}^2 \rangle_{\theta^0-\theta^3}\label{state} \ee Here $p^{1,2}_\mu$ and ${\cal H}^{1,2}$ are the momenta and polarizations of the two particles. The state $|0_B,0_F\rangle$ is the ground state of the superharmonic oscillators and the polarization states are the $\underline{44}\oplus\underline{84} \oplus\underline{128}$ representation of the $\theta^0\pm \theta^3$ variables, corresponding to the graviton, three-form tensor and gravitino respectively. 
For the computation of scattering amplitudes one may now form the $S$-matrix in the usual fashion $ S_{fi}\, =\, \langle {\rm out}| \exp \{-iHT\} |{\rm in}\rangle \, $ with the desired in and outgoing quantum numbers according to~(\ref{state}) \footnote{The asymptotic states above are constructed with respect to a large separation in the same direction for both in and outgoing particles, i.e.\ eikonal kinematics. More general kinematical situations are handled by introducing a rotation operator into the $S$-matrix \cite{PW1}.}. The object of interest is then the vacuum to vacuum transition amplitude \be e^{i\Gamma(x'_\mu,x_\mu,\theta^3)}= {}_{x^\prime_\mu}\langle 0_B,0_F| \exp \{-iHT\} | 0_B,0_F\rangle_{x_\mu}. \label{trans} \ee Note that the ground states actually depend on the Cartan variables $x_\mu$ and $x'_\mu$ through the oscillator mass. Also, both the left and right hand sides depend on the operator $\theta^3$. Our key observation is rather simple. In field theory one is accustomed to expand around a vanishing vacuum expectation value when computing the vacuum to vacuum transition amplitude for some field composed of oscillator modes. In quantum mechanics the idea is of course exactly the same, and therefore if one is to represent \eqn{trans} by a path integral one should expand the super oscillators transverse to the valley about a vanishing vev. One may then write the Matrix theory $S$-matrix in terms of a path integral with the stated boundary conditions \be e^{i\Gamma(v_\mu,b_\mu,\theta^3)}= \int_{{\vec{X}}_\mu=(0,0,x_\mu),\, {\vec{\theta}}=(0,0,\theta^3)} ^{{\vec{X}}_\mu=(0,0,x_\mu'),\, {\vec{\theta}}=(0,0,\theta^3)} {\cal D}(\vec{X}_\mu,\vec{A},\vec{b},\vec{c},\vec{\theta})\, \exp(i\,\int_{-T/2}^{T/2}L_{\rm SYM}). \ee The Lagrangian $L_{\rm SYM}$ is that of the supersymmetric Yang--Mills quantum mechanics with appropriate gauge fixing to which end we have introduced ghosts $\vec{b}$, $\vec{c}$ and the Lagrange multiplier gauge field $\vec{A}$. 
The effective action $\Gamma(v_\mu,b_\mu,\theta^3)$ is most easily computed via an expansion about classical trajectories $X^3_\mu(t)\equiv x_\mu^{\rm cl}(t) =b_\mu+v_\mu t$ and constant $\theta^3(t)=\theta^3$ which yields the quoted boundary conditions through the identification $b_\mu=(x'_\mu+x_\mu)/2$ and $v_\mu=(x'_\mu-x_\mu)/T$. Up to an overall normalization ${\cal N}$, our LSZ reduction formula for Matrix theory is simply \bea S_{fi}&=&\delta^9(k'_\mu-k_\mu)e^{-ik_\mu k_\mu T/2}\nonumber\\ &&\hspace{0cm} \int d^9x' d^9x \,{\cal N}\, \exp(-iw_\mu x'_\mu +iu_\mu x_\mu) \langle {\cal H}^3| \langle {\cal H}^4|e^{i\Gamma(v_\mu,b_\mu,\theta^3)} |{\cal H}^1\rangle |{\cal H}^2\rangle \label{superS} \eea The leading factor expresses momentum conservation for the centre of mass where we have denoted $k_\mu=p_\mu^1+p_\mu^2$ and $k'_\mu=p_\mu^3+p_\mu^4$ for the in and outgoing particles, respectively, and similarly for the relative momenta $u_\mu=(p_\mu^1-p_\mu^2)/2$ and $w_\mu=(p_\mu^4-p_\mu^3)/2$. In a loopwise expansion of the Matrix theory path integral one finds $\Gamma(v_\mu,b_\mu,\theta^3)=v_\mu v_\mu T/2+ \Gamma^{(1)} +\Gamma^{(2)}+\ldots$ of which we consider only the first two terms in order to compare our results with tree level supergravity. Inserting this expansion into~(\ref{superS}) and changing variables $d^9x' d^9x \rightarrow d^9 (Tv) d^9 b$, the integral over $Tv_\mu$ may be performed via stationary phase. Dropping the normalization and the overall centre of mass piece the $S$-matrix then reads \be S_{fi}=e^{-i[(u+w)/2]^2 T/2} \int d^9b \, e^{-i q_\mu b_\mu}\, \langle {\cal H}^3| \langle {\cal H}^4| e^{i\Gamma(v_\mu=(u_\mu+w_\mu)/2,b_\mu,\theta^3)} |{\cal H}^1\rangle |{\cal H}^2\rangle \label{sfi} \ee where $q_\mu=w_\mu-u_\mu$. 
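The stationary-phase step can be spelled out (a short consistency check of ours, keeping only the free term $\Gamma\approx \ft12 v_\mu v_\mu T$ and using $x'_\mu=b_\mu+v_\mu T/2$, $x_\mu=b_\mu-v_\mu T/2$): the phase in the $(Tv_\mu,b_\mu)$ integral is

```latex
\bea
\Phi &=& \ft12 v_\mu v_\mu T - w_\mu x'_\mu + u_\mu x_\mu \nonumber\\
     &=& \ft12 v_\mu v_\mu T - \ft12 (u_\mu+w_\mu)\, v_\mu T - q_\mu b_\mu\, ,
\eea
```

so that stationarity with respect to $Tv_\mu$ fixes $v_\mu=\ft12(u_\mu+w_\mu)$, and at the stationary point $\Phi=-\ft12\,[(u_\mu+w_\mu)/2]^2\,T-q_\mu b_\mu$, reproducing both the prefactor and the $b_\mu$ integral in~(\ref{sfi}).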
It is important to note that in~(\ref{sfi}) the variables $\theta^3$ are operators $\{\theta^3_\a,\theta^3_\b\}=\delta_{\a\b}$ whose expectation between polarization states $|{\cal H}\rangle$ yields the spin dependence of the scattering amplitude. The loopwise expansion of the effective action should be valid in the eikonal regime, i.e.\ large impact parameter $b_\mu$ or small momentum transfer $q_\mu$. As we shall see below, this limit is dominated by $t$-channel physics on the supergravity side. \subsection*{D0 Brane Computation of the Matrix Theory Effective Potential} We must now determine the one-loop effective Matrix potential $\Gamma (v,b,\theta^3)$, namely the $v^4/r^7$ term and its supersymmetric completion. Fortunately, the bulk of this computation has already been performed in string theory in \cite{mss1,mss2}, where the Green-Schwarz boundary state formalism of \cite{gregut1} was applied to a one-loop annulus computation for a pair of moving D0-branes. There it was found that the leading spin interactions are dictated by a simple zero-mode analysis and that their form is, in particular, scale independent. This observation allows us to extrapolate the results of \cite{mss1,mss2} to short distances and suggests a Matrix theory description of tree-level supergravity interactions. Following \cite{mss1,mss2}, supersymmetric D0-brane interactions are computed from the correlator \be {\cal V}=\frac{1}{16}\int_0^\infty \!\!dt \, \langle B,\vec{x}=0|e^{-2\pi t\alpha^{\prime} p^+(P^--i\partial/\partial x^+)} e^{(\eta Q^-+\tilde{\eta}\tilde{Q}^-)}e^{V_B}|B,\vec{y}=\vec{b} \rangle \label{cyl} \ee with $Q^-,\tilde{Q}^-$ being the SO($8$) supercharges broken by the presence of the D-brane, $|B\rangle$ the boundary state associated with D0-branes, and $V_B=v_i\oint_{\tau=0}\!d\sigma\left(X^{[1}\partial_{\sigma}X^{i]} +\frac{1}{2}S\,\gamma^{1i}S\right)$ the boost operator, where direction 1 is identified with time (see \cite{mss1,mss2} for details).
Expanding (\ref{cyl}) and using the results in section four of \cite{mss2}, one finds the following compact form for the leading one-loop Matrix theory potential (normalizing to one the $v^4$ term and setting $\alpha^\prime=1$) \bea {\cal V}_{\rm 1-loop}&=& \Bigl [ v^4 + 2i\, v^2\,v_m\ps{m}{n}\, \partial_n -2\, v_p v_q \ps{p}{m}\ps{q}{n}\,\partial_m \partial_n\nonumber\\ &&\quad -\frac{4i}{9}\, v_q\ps{q}{m}\ps{n}{k}\ps{p}{k}\,\partial_m\partial_n\partial_p \nonumber \\ &&\quad + \frac{2}{63}\, \ps{m}{l}\ps{n}{l}\ps{p}{k}\ps{q}{k}\, \partial_m\partial_n\partial_p \partial_q\Bigr ]\, \frac{1}{r^7} \label{pot} \eea where $\theta=(\eta^a, \tilde \eta^{\dot a})$ should be identified with $\theta^3/2$ of the last section. The general structure of this potential was noted in \cite{harv} and its first, second and last terms were calculated in \cite{dkps},\cite{Kraus97} and \cite{static} respectively. Naturally it would be interesting to establish the supersymmetry transformations of this potential; for a related discussion see \cite{pss1}. \subsection*{Results} Our Matrix computation is completed by taking the quantum mechanical expectation of the effective potential \eqn{pot} between the polarization states of \eqn{sfi}. Clearly one can now study any amplitude involving gravitons, three--form tensors and gravitini. We choose to compute a $h_1 + h_2 \rightarrow h_4 + h_3$ graviton-graviton process, and thus prepare states \bea |{\rm in}\rangle&=& \ft{1}{256}\, h^1_{mn}\, (\ld{1}\gamma_{m}\ld{1})(\ld{1}\gamma_n\ld{1}) \, h^2_{pq}\, (\ld{2}\gamma_{p}\ld{2})(\ld{2}\gamma_q\ld{2})\, |-\rangle \, . 
\label{in} \nn\\ \langle{\rm out}|&=& \ft{1}{256}\, \langle -|\, h^4_{mn}\, (\la{1}\gamma_{m}\la{1})(\la{1}\gamma_n\la{1}) \, h^3_{pq}\, (\la{2}\gamma_{p}\la{2})(\la{2}\gamma_q\la{2}) \label{out} \eea Note that (following \cite{PW}) we have complexified the Majorana centre of mass and Cartan spinors $\theta^0$ and $\theta^3$ in terms of $SO(7)$ spinors $\lambda^{1,2}=(\theta^0_+\pm\theta^3_+ +i\theta^0_-\pm i\theta^3_-)/2$ where $\pm$ denotes projection with respect to $\gamma_9$. Actually the polarizations in \eqn{out} are seven dimensional but may be generalized to the nine dimensional case at the end of the calculation. We stress that these manoeuvres are purely technical and our final results are $SO(9)$ covariant. The creation and destruction operators $\lambda^\dagger_{1,2}$ and $\lambda_{1,2}$ annihilate the states $\langle-|$ and $|-\rangle$, respectively. The resulting one loop eikonal Matrix theory graviton-graviton scattering amplitude is comprised of 68 terms and (denoting e.g.\ $(q h_1h_4 v)=q_\mu h^1_{\mu\nu} h^4_{\nu\rho}v_\rho$ and $(h_1 h_4)=h^1_{\mu\nu}h^4_{\nu\mu}$) is given by \begin{eqnarray} {\cal A}&\,\,= \,\,\frac{\textstyle 1}{\textstyle q^2}\,\,\Biggl\{\,\,& \ft12(h_1 h_4)(h_2 h_3) v^4 + 2\Bigr[(q h_3 h_2 v) (h_1 h_4) - (q h_2 h_3 v) (h_1 h_4)\Bigr] v^2 \nn\\&&\hspace{-.23cm} + (vh_2v) (qh_3q)(h_1 h_4) + (vh_3v) (qh_2q)(h_1 h_4) - 2(qh_2v) (qh_3v)(h_1 h_4) \nn\\&&\hspace{-.23cm} - 2 (qh_1h_4v) (qh_3h_2v) + (qh_1h_4v) (qh_2h_3v) + (qh_4h_1v) (qh_3h_2v) \nn\\&&\hspace{-.23cm} + \ft{1}{2}\Bigl [(qh_1h_4h_3h_2q) - 2(qh_1h_4h_2h_3q) + (qh_4h_1h_2h_3q) - 2(qh_2h_3q)(h_1 h_4) \Bigr ] v^2 \nn\\&&\hspace{-.23cm} - (qh_2v) (qh_3q) (h_1h_4) + (qh_2q) (qh_3v) (h_1h_4) - (qh_1q) (qh_2h_3h_4v) + (qh_1q) (qh_3h_2h_4v) \nn\\&&\hspace{-.23cm} - (qh_4q) (qh_2h_3h_1v) + (qh_4q) (qh_3h_2h_1v) - (qh_1v) (qh_4h_2h_3q) + (qh_1v) (qh_4h_3h_2q) \nn\\&&\hspace{-.23cm} - (qh_4v) (qh_1h_2h_3q) + (qh_4v) (qh_1h_3h_2q) + (qh_1h_4q) (qh_2h_3v) - (qh_1h_4q) (qh_3h_2v) 
\nn\\&&\hspace{-.23cm} +\ft18 \Bigl[ (qh_1q) (qh_2q) (h_3h_4) +2 (qh_1q) (qh_4q) (h_2h_3) +2 (qh_1q) (qh_3q) (h_2h_4) \nn\\&&\hspace{-.23cm} + (qh_3q) (qh_4q) (h_1h_2) \Bigr] + \ft12\Bigl[ (qh_1q) (qh_4h_2h_3q) - (qh_1q) (qh_2h_4h_3q) \nn\\&&\hspace{-.23cm} - (qh_1q) (qh_4h_3h_2q) - (qh_4q) (qh_1h_2h_3q) + (qh_4q) (qh_1h_3h_2q) - (qh_4q) (qh_2h_1h_3q) \Bigr] \nn\\&&\hspace{-.23cm} + \ft14\Bigl[ (qh_1h_3q) (qh_4h_2q) + (qh_1h_2q) (qh_4h_3q) + (qh_1h_4q) (qh_2h_3q) \Bigr] \, \Biggr\} \nn\\&&\hspace{-.73cm} +\,\, \Bigl[h_1 \longleftrightarrow h_2\, , \, h_3 \longleftrightarrow h_4 \Bigr] \label{Ulle} \end{eqnarray} We have neglected all terms within the curly brackets proportional to $q^2\equiv q_\mu q_\mu$, i.e. those that cancel the $1/q^2$ pole. These correspond to contact interactions in the D0 brane computation, whereas this calculation is valid only for non-coincident branes. \subsection*{$D=11$ Supergravity} The above leading order result for eikonal scattering in Matrix theory is easily shown to agree with the corresponding eleven dimensional field theoretical amplitude. Tree level graviton--graviton scattering is dimension independent and has been computed in \cite{San}. We have double checked that work by a type IIA string theory computation and will not display the explicit result here which depends on eleven momenta $p^i_M$ (with $i=1,\ldots,4$) and polarizations $h^i_{MN}$ subject to the de Donder gauge condition $p^i_N h^i{}_M{}^N-(1/2)p^i_M h^i{}_N{}^N=0$ (no sum on $i$). Matrix theory, on the other hand, is formulated in terms of on shell degrees of freedom only, namely transverse physical polarizations and euclidean nine-momenta. Going to light-cone variables for the eleven momenta $p^i_M$ we take the case of vanishing $p^-$ momentum exchange \footnote{We denote $p_\pm=p^\mp =(p^{10}\pm p^0)/\sqrt{2}$ and our metric convention is $\eta_{MN}={\rm diag} (-,+\ldots,+)$.}, i.e. 
the scenario of our Matrix computation, \bea p_M^1=(-\ft12\,(v_\mu-q_\mu/2)^2 ,\, 1\, , v_\mu-q_\mu/2 ) &\quad& p_M^2=(-\ft12\, (v_\mu-q_\mu/2)^2 ,\, 1\, , -v_\mu+q_\mu/2) \nn\\ p_M^4=(-\ft12\, (v_\mu+q_\mu/2)^2 ,\, 1\, , v_\mu+q_\mu/2) &\quad & p_M^3=(-\ft12\, (v_\mu+q_\mu/2)^2 ,\, 1\, , -v_\mu-q_\mu/2) \, .\label{kinematics} \eea By transverse Galilean invariance we have set the nine dimensional centre of mass momentum to zero. We measure momenta in units of $p_-$, which we set to one. For this kinematical situation, conservation of $p_+$ momentum clearly implies $v_\mu q_\mu=0$. Note that the vectors $u_\mu$ and $w_\mu$ of~(\ref{superS}) are simply $u_\mu=v_\mu-q_\mu/2$ and $w_\mu=v_\mu+q_\mu/2$. We reduce to physical polarizations by using the residual gauge freedom to set $h^i_{+M}=0$ and solve the de Donder gauge condition in terms of the transverse traceless polarizations $h^i_{\mu\nu}$, for which one finds $h^i_{-M}=-p^i_\nu h^i_{\nu M}$. Agreement with the Matrix result \eqn{Ulle} is then achieved by taking the eikonal limit $v_\mu\gg q_\mu$ of the gravity amplitude, in which the $t$-pole contributions dominate\footnote{In the above parametrization, the Mandelstam variables are $t=q_\mu^2=-2p^1_M p_4^M$, $s=4v_\mu^2+q^2_\mu=-2p^1_M p_2^M$ and $u=4v_\mu^2=-2p^1_M p_3^M=s-t$.}. One then reproduces exactly \eqn{Ulle} as long as any pieces cancelling the $t$-pole (i.e.\ the aforementioned $q^2$ terms) are neglected. Although we have only presented here a Matrix scattering amplitude restricted to the eikonal regime, we nevertheless believe the agreement found is rather impressive. \subsection*{Acknowledgements} We thank B. de Wit, S. Moch, K. Peeters and J. Vermaseren for discussions. Our computation made extensive use of the computer algebra system FORM \cite{Jos}.
\section{Introduction} Layered transition metal dichalcogenides (TMDs) offer a wide variety of physical and chemical properties, from metal to insulator \cite{WY69, XLSC13, KT16}, and are extensively studied \cite{TAN14, JS12, CSELLZ13, YYR17}. Increasing interest and recent progress in these materials have led to a variety of improved applications such as sensors, energy storage, photonics, optoelectronics, and spintronics \cite{CHS12, APH14, WKKCS12}. In particular, atomically thin monolayer TMDs have attracted most of the attention due to the unique mechanical and electronic properties related to their high flexibility \cite{HPMS13, CSBAPJ14, LZZZT16}. A large scope of flexible electronics has been realized via applications such as flexible displays \cite{JLK09, ZWWSPJ06, R16, WS09}, wearable sensors \cite{SC14, KGLR12, WW13}, and electronic skins \cite{STM13, PKV14, HCTTB13}. Each TMD (TX$_2$) monolayer consists of three atomic layers (X-T-X stacking) and can undergo bending deformation, with a higher flexural rigidity than graphene (D$_{MoS_2}$ $\sim$ 7-8 D$_{Graphene}$ \cite{AB17}). The bending behavior (curvature effect) of 2D TMD monolayers, especially of MoS$_2$, has been studied both theoretically \cite{JQPR13, XC16} and experimentally \cite{ZDLHSR15, CSBAPJ14}. For 2D materials such as MoS$_2$, bending can induce localization or delocalization in the electronic charge distribution. This change in the charge distribution results in changes in electronic properties such as the Fermi level, effective mass, and band gap \cite{YRP16}. However, the bending behavior of other TMD monolayers is largely unexplored, at least from first principles. Quantitatively, the resistance of a material against bending is characterized by the bending stiffness. The bending stiffness or flexural rigidity of the TMD monolayers can be estimated from first principles as in Refs.~\onlinecite{RBM92, AB04, JQPR13}.
Most of the earlier studies used nanotubes of different radii, created by rolling an infinitely extended sheet, to estimate the bending stiffness of 2D monolayers \cite{RBM92, AB04, HWH06}. However, such a scheme has several limitations. (1) It does not mimic the edges present in the monolayer. (2) The nanoribbons unfolded from differently sized nanotubes have different widths, which contribute different quantum confinement effects along with the curvature effect. By utilizing a bending scheme similar to the bending of a thin plate, we restore the edges as well as fix the width of the nanoribbon, thereby eliminating the quantum confinement effect resulting from differences in width between the various configurations of nanoribbons, from flat to bent ones. However, the edge effects due to their finite width may not be completely eliminated.\\ Here we report a comprehensive first-principles study of the structural, mechanical, and electronic properties of flat and bent monolayer TMD compounds, i.e., TX$_2$ (T = transition metal, X = chalcogen atom). As in Ref.~\onlinecite{WY69}, we represent each TMD (TX$_2$) by its transition metal group, for example, d$^0$ for group IV, d$^2$ for group VI, and d$^6$ for group X. Their layered structures have been observed experimentally: group IV (T = Ti, Zr or Hf; X = S, Se or Te) and group X (PdTe$_2$ and PtX$_2$) TMDs prefer the 1T phase, while group VI TMDs crystallize in the 1H (T = Mo or W; X = S, Se) as well as the distorted T (1T$'$) phase (WTe$_2$) \cite{WY69}. We first investigate the relative stability of a monolayer in three different phases (1H, 1T, 1T$'$). The mechanical and electronic properties have been studied only for the most stable phases. The organization of the rest of the paper is as follows. The computational details are presented in Sec. II. Section III presents our results, followed by some discussion and conclusions in Sec. IV.\\ \section{Computational Details} \begin{figure}[h!]
\includegraphics[scale=0.5]{Figure1.pdf} \caption{Rectangular unit-cells of types 1H, 1T, and 1T$'$ (WTe$_2$) used in the calculations. The first row (a-c) shows the top view while the second (d-f) shows the side view; d(T-X) is the metal-chalcogen distance, $\angle$XTX is the angle made by two d(T-X) sides, and d(X-X) (or d$_{X-X}$) is the distance between the outer and inner layers of the flat monolayer TMDs.} \label{fig:struc} \end{figure} The ground state calculations were performed using the Vienna ab initio simulation package (VASP) \cite{VASP} with projector augmented wave (PAW) \cite{B94} pseudo-potentials \cite{KJ99} as implemented in the VASP code \cite{K01}, modified to include the kinetic energy density required for meta-GGA (MGGA) calculations. We used the pseudo-potentials recommended in VASP for all elements except for tungsten (W), for which we used a pseudopotential whose valence electron configuration includes the 6s$^1$5d$^5$ electrons. The exchange-correlation energy was approximated using the strongly constrained and appropriately normed (SCAN) MGGA \cite{SRP15}. It can describe an intermediate range of dispersion via the kinetic energy density and has been shown to deliver sufficiently accurate ground state properties for diversely bonded systems \cite{NBR18, SRZ16, SSP18, BLBRSB17}, compared to the local density approximation (LDA) and the generalized gradient approximation (GGA) of Perdew, Burke, and Ernzerhof (PBE). The unit-cell calculations for all pristine TMD monolayers were carried out using a rectangular supercell consisting of two TX$_2$ units in three different configurations, 1H, 1T, and 1T$'$-WTe$_2$, to determine the most stable ground state. We used an energy cutoff of 550 eV and 24 $\times$ 16 $\times$ 1 and 16 $\times$ 24 $\times$ 1 Gamma-centered Monkhorst-Pack k-meshes \cite{MP76} to sample the Brillouin zone.
Periodic boundary conditions were applied along the in-plane directions, while a vacuum of about 20 $\AA$ was inserted along the out-of-plane direction. The geometry of the monolayer unit-cell was optimized by converging all forces and energies to within 0.005 eV/$\AA$ and 10$^{-6}$ eV, respectively. To estimate the bending stiffness, we relaxed nano-ribbons with widths of 3-4 nm ({\color{blue} Supplementary Table S1}) until all forces were less than 0.01 eV/$\AA$, using an energy cutoff of 450 eV. The Brillouin zone was sampled using Gamma-centered Monkhorst-Pack k-meshes of 8 $\times$ 1 $\times$ 1 and 1 $\times$ 8 $\times$ 1.\\ To estimate the in-plane stiffness, we applied strain along one direction (say the x-direction) and relaxed the system along the lateral direction (i.e., the y-direction), or vice versa (see Figure~\ref{fig:struc}). The in-plane stiffness can then be estimated using \begin{equation} Y_{2D} =\frac{1}{A_0} \frac{\partial^2 E_s}{\partial \epsilon^2}, \label{Y2D} \end{equation} where E$_s$ $=$ E($\epsilon = s$) - E($\epsilon = 0$) is the strain energy, $\epsilon = \Delta l/l_0$ is the linear strain (the change in length over the equilibrium length), and A$_0$ is the equilibrium area of the unstrained supercell. We also applied a 5\% axial strain and relaxed the rectangular supercell in the transverse direction to estimate the lateral strain, and hence the Poisson's ratio. We first relaxed the flat ribbon using various edge schemes. The edges were chosen either to allow relaxation of the flat nanoribbon or to satisfy the condition that the areal bending energy density u($\kappa $)\hspace{1mm}$=$ $\frac{E_{bent} - E_{flat}}{Area (A)}$ $\rightarrow$ 0 as the bending curvature $\kappa$ \hspace{1mm}$=$ $\frac{1}{\textnormal{radius of curvature (R)}}$ $\rightarrow 0$ (Figure~\ref{fig:ribbon} (IV)).
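As a minimal sketch of how Eq.~(\ref{Y2D}) is evaluated in practice, one fits the strain-energy curve with a quadratic polynomial and takes twice its leading coefficient divided by the cell area. The energies and area below are synthetic placeholders, not our DFT data:

```python
import numpy as np

# Unit conversions
EV_TO_J = 1.602176634e-19      # eV -> J
A2_TO_M2 = 1.0e-20             # Angstrom^2 -> m^2

# Hypothetical strain-energy data standing in for DFT total energies of a
# supercell strained along x; A0 is the equilibrium in-plane area (placeholder).
strains = np.linspace(-0.02, 0.02, 9)     # linear strain epsilon
A0 = 60.0                                 # equilibrium area in Angstrom^2
Y2D_true = 140.0                          # N/m, used only to fabricate the data
Es = 0.5 * Y2D_true * (A0 * A2_TO_M2) * strains**2 / EV_TO_J   # strain energy, eV

# Quadratic fit E_s(eps) = c2*eps^2 + c1*eps + c0, so that
# Y_2D = (1/A0) * d^2E_s/deps^2 = 2*c2/A0 (converted back to N/m).
c2, c1, c0 = np.polyfit(strains, Es, 2)
Y2D = 2.0 * c2 * EV_TO_J / (A0 * A2_TO_M2)
print(f"Y_2D = {Y2D:.1f} N/m")
```

The same fit applied to the relaxed transverse strain gives the Poisson's ratio as the (negative) slope of the transverse-vs-axial strain data.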
We have taken stoichiometric (n(X):n(T)$=$ 2:1) nano-ribbons ({\color{blue} Supporting Figure S4}) for most of the calculations, in which TiTe$_2$, MoTe$_2$-1T$'$, and WX$_2$ (X = S, Se, or Te) were stabilized using hydrogen-passivated edges, whereas the others were relaxed without hydrogen passivation. We also relaxed TiSe$_2$, HfS$_2$, PdTe$_2$, and PtSe$_2$ nano-ribbons in the symmetric configuration (Figure~\ref{fig:ribbon} II). Finally, the bent structures of different bending curvatures were created by relaxing the ribbons along the infinite-length direction, while keeping the transition-metal atoms fixed at the opposite ends, and applying strain along the width direction. A vacuum of 20 $\AA$ was introduced along the y- and z-directions to eliminate any interaction between the system and its image ({\color{blue} Supplementary Figure S4}). The areal bending energy density (u($\kappa$)) vs bending curvature ($\kappa$) curve was fitted with a cubic polynomial to capture the non-linear behavior (Figure~\ref{fig:ribbon} (IV)). The quadratic coefficient of the cubic fit was used to estimate the bending stiffness, \begin{equation} S_b = \frac{\partial^2 u(\kappa)}{\partial \kappa^2} |_{\kappa = 0}. \end{equation} \begin{figure} \includegraphics[scale = 0.45]{Figure2.pdf} \caption{(I) A nanoribbon (enclosed by the rectangle) is taken to simulate an extended sheet of a 1T monolayer; a is the lattice constant, with the ribbon extended along the \textit{a}-axis and a vacuum of 20 $\AA$ inserted along the \textit{b}- and \textit{c}-axes ({\color{blue} Supplementary Figure S4}); (II) a bent sample of the 1T nano-ribbon; (III) a schematic bending of a thin plate. d$_0$, d, and R are the length of a thin plate before bending, the length after bending, and the radius of curvature, respectively. N is the neutral surface, denoted by a dashed line.
t$_{tot}$, t$_{up}$, and t$_{dn}$ are the physical thicknesses of the bent nano-ribbon, assuming that the middle layer coincides with the neutral surface (N); (IV) areal bending energy density vs bending curvature, used to estimate the bending stiffness. E$_{bent}$, E$_{flat}$, and A are the total energy of the bent nanoribbon, the total energy of the flat nanoribbon, and the cross-sectional area of the flat nanoribbon (length $\times$ width), respectively.} \label{fig:ribbon} \end{figure} \section{Results} \subsection{Relative Stability} Experimentally, it is largely known which phase is preferred in the bulk layered structure. However, the relative stability of the monolayer structures has remained elusive. We have performed a relative stability analysis of monolayer TX$_2$ among three different phases, namely 1H, 1T, and 1T$'$-WTe$_2$, to test the predictive power of SCAN. Energies of TMDs in different phases relative to the 1T phase are presented in Figure~\ref{fig:stability}. Between the 1H and 1T phases, group IV and group X TMD monolayers prefer the 1T phase. We could not find a distorted phase (1T$'$) for these TMD monolayers. In addition to the 1H and 1T phases, the group VI TMDs MoTe$_2$ and WTe$_2$ also crystallize in the distorted (1T$'$) form. Our relative stability analysis shows that TX$_2$ with X$=$S or Se prefers the 1H phase, while the preference depends on the transition metal for X$=$Te, consistent with the experimental observations \cite{WY69}. WTe$_2$ prefers the 1T$'$ phase, while the cohesive energies of the 1H and 1T$'$ phases of MoTe$_2$ are almost identical (favoring the 1H phase by 5 meV), allowing an easy modulation between the two phases \cite{WNGJDY17}. Satisfying 17 known exact constraints, SCAN accurately captures the necessary interactions present in these TMD monolayers and predicts the correct ground state structure. \begin{figure}[htbp!]
\includegraphics[height=3.5in, width=4.5in]{Figure3.pdf} \captionsetup{font=scriptsize} \caption{Stability (relative to the 1T phase) from SCAN calculations for TMDs among the 3 experimentally observed phases 1H, 1T, and 1T$'$-WTe$_2$. The \textit{x}-axis labels each TMD with the phase of minimum ground state (GS) energy, and the GS energies per atom of each phase relative to the corresponding 1T phase are presented on the \textit{y}-axis. The straight line parallel to the \textit{x}-axis passing through the origin represents the GS energies of the 1T phases. SCAN correctly predicts the ground state for these compounds. Also, MoTe$_2$ is nearly iso-energetic between the 1H and 1T$'$-WTe$_2$ phases.} \label{fig:stability} \end{figure} \subsection{Structural properties} The estimated in-plane lattice constants of the monolayers are compared with the experimental bulk results in Figure~\ref{fig:lattice}. The lattice constants are in good agreement with the experimental results, with a mean absolute error (MAE) of 0.03 $\AA$ and a mean absolute percentage error (MAPE) of 0.7\%. The structural parameters of the monolayers are also in good agreement with reference values \cite{CHS12}. The structural parameters related to the lattice constant, such as d$_{T-X}$, d$_{X-X}$, and $\theta_{X-T-X}$, increase from S to Se to Te. The decreasing cohesive energies from S to Se to Te make these compounds more loosely bound, thereby increasing the lattice parameters.\\ \begin{figure}[htbp!]
\includegraphics[height=3in, width=5in]{Figure4.pdf} \captionsetup{font=scriptsize} \caption{Comparison of the SCAN-calculated in-plane lattice constants of various TMD monolayers in the ground state with the bulk lattice constants available in the literature \cite{WY69, AMTTRD16, LWBR73}.} \label{fig:lattice} \end{figure} \subsection{In-plane stiffness and Poisson's ratio} The strength of a material is crucial for a device's performance and durability. As a measure of the strength, we computed the in-plane stiffness or 2D Young's modulus (Eq.~\ref{Y2D}) of the most stable ground state and tabulated it in Table~\ref{tab:eff-t}. Similar to the cohesive energy, the in-plane stiffness decreases from S to Se to Te, indicating a softening of the TMD monolayers from S to Te under linear strain. The estimated 2D in-plane stiffness of MoS$_2$ is 141.59 N/m, in close agreement with the experimental value of 180 $\pm$ 60 N/m \cite{BBK11}. \\ Under the Poisson effect, materials tend to expand (or contract) in a direction perpendicular to the axis of compression (or expansion). It is measured by the Poisson's ratio $\nu_{ij} = -\frac{d\epsilon_j}{d\epsilon_i}$, where $d\epsilon_j$ and $d\epsilon_i$ are the transverse and axial strains, respectively. The in-plane ($-\frac{d\epsilon_y}{d\epsilon_x}$ or $-\frac{d\epsilon_x}{d\epsilon_y}$) and out-of-plane ($-\frac{d\epsilon_z}{d\epsilon_x}$) Poisson's ratios are also calculated and tabulated. The in-plane Poisson's ratio differs from the out-of-plane Poisson's ratio for the 1T compounds. For example, PtS$_2$ has $\nu_{xy} = 0.29$ and $\nu_{xz} = 0.58$. However, the Poisson's ratio of the 1H monolayers is almost isotropic ($\nu_{xy} \approx \nu_{xz}$). \subsection{Mechanical bending} The primary focus of this study is to understand the response of the TMD monolayers to mechanical bending.
We have calculated the bending stiffness and studied the changes in various physical and electronic properties due to bending. Since previous studies \cite{YRP16, ZDLHSR15} showed that the bending stiffness is independent of the edge type (armchair or zigzag), we used only armchair-edge nanoribbons for the 1H structures. The bending stiffnesses of 20 TMDs are compared and tabulated in Table~\ref{tab:eff-t}. Unlike the in-plane stiffness, the bending stiffness overall increases from S to Se to Te (Table~\ref{tab:eff-t}), indicating a hardening of the nanoribbons from S to Se to Te. The d$^0$ compounds, especially the sulfides and selenides, along with PdTe$_2$, have low ($<$ 3 eV) bending stiffnesses. The low flexural rigidity of these compounds can produce large changes in their local strain as well as in the charge density profile under mechanical bending. The 1H compounds have higher bending stiffnesses, possessing higher flexural rigidity against mechanical bending. The estimated bending stiffness of 12.29 eV for MoS$_2$ agrees with the experimental values of 6.62-13.24 eV \cite{CSBAPJ14} as well as 10-16 eV \cite{ZDLHSR15}. To explore the trend of the mechanical strengths with respect to the transition metal, one can look at the d-band filling of the valence electrons. The filling of the d band increases from transition metal group IV ($\sim$ sparsely filled) to VI ($\sim$ half-filled) to X ($\sim$ completely filled) within the same row of the periodic table. Both Y$_{2D}$ and S$_b$ increase as the number of valence d electrons increases until the shell becomes nearly half-filled. To support this claim further, we have estimated the in-plane stiffness and bending stiffness of 1H-NbS$_2$ and 1H-TaS$_2$, corresponding to group V (d$^1$) transition metals. The in-plane stiffnesses of NbS$_2$ and TaS$_2$ were found to be 95.74 N/m and 115.04 N/m, respectively, and the bending stiffnesses were obtained as 4.87 eV and 6.43 eV, respectively.
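The extraction of S$_b$ from Eq.~(2), together with the unit bookkeeping behind the thin-plate relation $t_{eff}=\sqrt{12S_b/Y_{2D}}$, can be sketched as follows. The u($\kappa$) samples are synthetic placeholders, while the MoS$_2$ in-plane and bending stiffnesses are the values quoted above:

```python
import numpy as np

# --- Bending stiffness from a cubic fit of u(kappa), cf. Eq. (2) ---
# Synthetic areal bending-energy densities (eV/Angstrom^2); placeholders only.
kappas = np.linspace(0.0, 0.10, 8)        # bending curvature, 1/Angstrom
Sb_true, c3_true = 12.0, -15.0            # eV and eV*Angstrom (fabricated)
u = 0.5 * Sb_true * kappas**2 + c3_true * kappas**3

# Cubic fit u = c3*k^3 + c2*k^2 + c1*k + c0; S_b = d^2u/dk^2|_0 = 2*c2 (in eV).
c3, c2, c1, c0 = np.polyfit(kappas, u, 3)
Sb = 2.0 * c2

# --- Effective thickness and 3D modulus for MoS2, using the values in the text ---
EV = 1.602176634e-19                      # eV -> J
Y2D = 141.59                              # N/m (MoS2 in-plane stiffness)
Sb_MoS2 = 12.29 * EV                      # J   (MoS2 bending stiffness)
t_eff = np.sqrt(12.0 * Sb_MoS2 / Y2D)     # m, thin-plate relation of Eq. (3)
Y3D = Y2D / t_eff                         # Pa, Eq. (4)
print(f"S_b = {Sb:.2f} eV, t_eff = {t_eff*1e10:.2f} A, Y_3D = {Y3D/1e9:.0f} GPa")
```

With the MoS$_2$ inputs above, this reproduces an effective thickness of about 4.1 $\AA$ and a 3D Young's modulus of about 347 GPa.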
Comparing TMDs (TX$_2$) having the same chalcogen atom, we see the trend d$^0$ $<$ d$^1$ $<$ d$^2$ for both stiffnesses. However, both Y$_{2D}$ and S$_b$ decrease in going from the half-filled (d$^2$) to the nearly completely filled (d$^6$) d-band transition metals. Moreover, the large bending stiffness of the group VI compounds decreases on changing phase from 1H to the distorted 1T phase, for instance in the 1H to 1T$^\prime$ transformation of MoTe$_2$.\\ \noindent We utilized \begin{align} t_{eff} = \sqrt{12S_b/Y_{2D}} \\ \shortintertext{and} Y_{3D} = Y_{2D}/t_{eff} \end{align} to estimate the effective thickness and the 3D Young's modulus. The effective thickness is the combination of the d$_{X-X}$ distance and the total effective decay length of the electron density into the vacuum. Experimentally, it is difficult to define the total effective decay length of the electronic charge distribution. Therefore, it is common practice to take a range from the d$_{X-X}$ distance to the interlayer metal-metal distance within the bulk structure as the effective thickness, which gives a range for both the in-plane stiffness and the bending stiffness \cite{CSBAPJ14, ZDLHSR15}. Using Eq.~(3), one can estimate a reasonable value for the effective thickness for a wide range of TMDs. However, the computed effective thicknesses t$_{eff}$ of certain TX$_2$ (T=Ti, Zr, Hf; X=S, Se) are less than their d$_{X-X}$ distance (Figure 1), which means that bending is much easier than stretching. A similar underestimation was found for the effective thickness of a carbon monolayer estimated by various methods \cite{YS97, W04, KGB01, Z00}. Utilizing Eq.~(3), Yakobson et al. \cite{YS97}, Wang \cite{W04}, and Yu et al. \cite{YRP16} estimated the effective thickness of the carbon monolayer to be around 0.7-0.9 $\AA$, which is much less than 3.4 $\AA$, the normal spacing between sheets in graphite.
Such a huge underestimation indicates the possible breakdown of Eq.~(3) for estimating the effective thickness in the case of an atomically thin carbon layer \cite{W04}. The 3D Young's modulus (Eq.~(4)) allows us to compare the strength of various 2D and 3D materials, for instance, MoS$_2$ against steel. Similar to the 2D in-plane stiffness, the 3D Young's modulus of the TMD monolayers decreases from S to Se to Te. Due to the larger underestimation of the effective thickness, the 3D Young's modulus of the group IV compounds with sulfur as the chalcogen atom is strongly overestimated. With that in mind, one can conclude that MoS$_2$ as well as WS$_2$ have large 3D Young's moduli of 347.03 GPa and 351.02 GPa, respectively, agreeing with the experimental value of 270$\pm$100 GPa \cite{BBK11} for MoS$_2$.\\ \subsection{Effect of bending on physical properties} \textbf{I. Local Strain} \\ The local strain ($\epsilon$ $=$ $\frac{\delta - \delta_{flat}}{\delta_{flat}}$) projected on the y-z plane (see the b-c plane in Figure~\ref{fig:ribbon} (II)) of different TMD nano-ribbons at a bending curvature of around 0.09 $\AA^{-1}$ is presented in {\color{blue}Supplementary Figure S1}. The inner layer is contracted while the outer layer is expanded, consistent with the elastic theory of bending of a thin plate \cite{LL86}. The expansion of the outer layer is close to the contraction of the inner layer for the 1T compounds, while the expansion dominates the contraction in the case of the 1H compounds ({\color{blue}Supplementary Figure S1}). The middle metal layer is expanded by up to 2\% in the case of 1T, while it is 5-10\% for 1H, indicating that the middle layer is closer to the neutral axis for the 1T compounds than for the 1H compounds.
For the 1T$'$ compounds (MoTe$_2$ and WTe$_2$), the expansion of the outer layer exceeds the contraction of the inner layer, with a distortion represented by the zigzag structure in the strain profile ({\color{blue} Supplementary Figure S1}).\\ To study the effect of bending on the local strain profiles, we examine the local strain profiles of the PtS$_2$ nano-ribbon projected on the y-z plane, as shown in Figure~\ref{fig:strain-PtS}. The inner layer is contracted while the outer layer is expanded, and this effect increases with increasing bending curvature. For PtS$_2$, the middle layer is expanded within 2-3\%, while the expansion is 16-20\% for the inner and outer layers. Such a large local strain can induce a highly non-uniform local potential and hence affect the charge distribution. Both the lattice expansion in the outer layer and the lattice contraction in the inner layer could be used to tune the adsorption properties (binding distance and energy) of 2D materials, similar to the linear-strain-modulated adsorption properties of various semiconducting or metallic surfaces \cite{KDCF14, CW16, MHN98}. Tensile strain strengthens hydrogen adsorption on TMD surfaces, while compressive strain weakens it \cite{CW16}. By utilizing both the concave (compressive strain) and convex (tensile strain) surfaces of a bent monolayer, one can tune the Gibbs free energy of hydrogen adsorption to zero when it is respectively more negative and more positive. \\ \begin{figure}[h!] \includegraphics[scale=0.4]{Figure5.pdf} \caption{Local strain ($\epsilon$ $=$ $\frac{\delta - \delta_{flat}}{\delta_{flat}}$) with respect to the inner chalcogen-chalcogen ($\epsilon^{inner}_{X-X}$), metal-metal ($\epsilon_{T-T}$), and outer chalcogen-chalcogen distances ($\epsilon^{outer}_{X-X}$) projected in the y-z plane for PtS$_2$.
Strain at metal index ``i" (see the 2$^{nd}$ subfigure) is calculated with respect to the distance between the two metal atoms at indices i-1 and i, where i = 1, 2, ...10 (or 11).} \label{fig:strain-PtS} \end{figure} \pagebreak \textbf{II. Physical thickness}\\ \begin{figure}[h!] \includegraphics[scale=0.5]{Figure6.pdf} \caption{The strain with respect to the physical thickness of the bent nano-ribbon at around 0.09 $\AA^{-1}$ for various TMD compounds; t$_{tot}$ (t$_{up}$ + t$_{dn}$, {\color{blue} blue}) is the outer-inner layer thickness; t$_{up}$ ({\color{red} red}) and t$_{dn}$ ({\color{green} green}) are measured between the outer-middle and middle-inner layers, respectively (see Figure~\ref{fig:ribbon} (III)).} \label{fig:compare-t} \end{figure} The behavior of the different layers within a TMD nano-ribbon under mechanical bending can be understood by looking at the variation of the physical thickness (t$_{tot}$, defined later in this section and shown in Figure~\ref{fig:compare-t}) with respect to the bending curvature. Moreover, tuning of the physical thickness can be particularly useful in nano-electronic applications due to an enhancement of the electron confinement in 2D materials under out-of-plane compression \cite{BB18, GDKF17}. The percentage change in the thickness (t$_{tot}$, t$_{up}$, or t$_{dn}$) at the middle of various bent nano-ribbons with respect to the unbent ones is presented in {\color{blue}Supplementary Figure S2}. t$_{tot}$ represents the outer-inner chalcogen-layer thickness at the vertex of a bent ribbon, while t$_{up}$ and t$_{dn}$ correspond to the outer-middle and middle-inner layers, respectively. We fitted a 6$^{th}$-order polynomial to each layer of the bent nanoribbon to estimate the thickness ({\color{blue}Supplementary Figure S3}).
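A minimal sketch of this thickness extraction follows, using a synthetic circular-arc bend (three layers on concentric arcs) in place of relaxed atomic coordinates:

```python
import numpy as np

# Synthetic bent ribbon: inner/middle/outer layers on concentric arcs of radius
# R, R + 1.5 A, R + 3.0 A (placeholder geometry, not relaxed DFT positions).
R = 20.0                                   # radius of curvature, Angstrom
y = np.linspace(-8.0, 8.0, 25)             # positions along the ribbon width, A
layers = {name: np.sqrt((R + off)**2 - y**2)   # z(y) of each atomic layer
          for name, off in [("inner", 0.0), ("middle", 1.5), ("outer", 3.0)]}

# Fit each layer with a 6th-order polynomial and evaluate at the vertex y = 0;
# the differences give the local thicknesses at the middle of the bent ribbon.
z0 = {name: np.polyval(np.polyfit(y, z, 6), 0.0) for name, z in layers.items()}
t_up = z0["outer"] - z0["middle"]          # outer-to-middle thickness
t_dn = z0["middle"] - z0["inner"]          # middle-to-inner thickness
t_tot = t_up + t_dn
print(f"t_up = {t_up:.3f} A, t_dn = {t_dn:.3f} A, t_tot = {t_tot:.3f} A")
```

For this idealized geometry the fit recovers the 1.5 $\AA$ layer spacings at the vertex; for relaxed ribbons the same procedure yields the curvature-dependent t$_{up}$ and t$_{dn}$ discussed in the text.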
The thickness measured between the outer and inner chalcogen layers is described by t$_{tot}$ (t$_{up}$ + t$_{dn}$, {\color{blue} blue}), while t$_{up}$ ({\color{red} red}) and t$_{dn}$ ({\color{green} green}) are measured between the outer-to-middle and middle-to-inner layers respectively (see Figure~\ref{fig:compare-t}). When a thin plate is bent, it undergoes both compression (z' to N, t$_{dn}$) and expansion (N to z'+h, t$_{up}$), with ``N'' being the neutral surface \cite{LL86} (see Figure~\ref{fig:ribbon} (III)). Since the middle layer does not coincide with the neutral surface (N), t$_{up}$ and t$_{dn}$ do not simply increase and decrease, respectively, with the bending curvature. For most of the compounds, t$_{up}$ decreases on increasing the bending curvature. On the other hand, t$_{dn}$ slightly increases for d$^0$-1T compounds, but depends on the bending curvature for d$^2$-1H and d$^6$-1T compounds ({\color{blue}Supplementary Figure S2}). For a quantitative comparison among different materials, we plot the thicknesses for various TMDs around the bending curvature of 0.09 $\AA^{-1}$, as shown in Figure~\ref{fig:compare-t}. Group IV compounds have a lower flexural rigidity and therefore show a larger decrease in the physical thickness (t$_{tot}$) than group VI and X compounds. \\ \subsection{Effect of the bending on electronic properties} \textbf{I. Local electronic charge density}\\ Along with the change in physical properties, mechanical bending also affects the electronic properties. The local charge density (averaged over the ab-plane [Figure~\ref{fig:ribbon} (I)]) is computed and plotted against the distance along the out-of-plane direction (\textit{c}-axis) [Figure~\ref{fig:ribbon} (II)]. The different nature of the local charge distribution of the flat WX$_2$ (X=S, Se, Te) ribbons, with two equal local maxima, may be related to the different pseudopotential used in the calculation.
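The plane-averaging step behind these profiles can be illustrated with a minimal sketch. The Gaussian slab density below is a synthetic stand-in for a real DFT charge-density grid, not the paper's data; only the averaging over the in-plane (ab) axes is the operation described above.

```python
# Sketch of the plane-averaging behind the local charge density: average
# rho(x, y, z) over the in-plane (ab) grid axes to obtain a 1D profile rho(z).
# The Gaussian slab below is a synthetic stand-in for a DFT charge-density grid.
import numpy as np

def plane_average(rho, axis_out=2):
    """Average a 3D density over the two axes perpendicular to `axis_out`."""
    in_plane = tuple(a for a in range(3) if a != axis_out)
    return rho.mean(axis=in_plane)

nx, ny, nz = 12, 12, 201
z = np.linspace(0.0, 20.0, nz)               # c-axis coordinate (Angstrom)
slab = np.exp(-((z - 10.0) / 1.5) ** 2)      # synthetic density peaked at z = 10
rho = np.broadcast_to(slab, (nx, ny, nz)).copy()

profile = plane_average(rho)
print(profile.shape)                         # one value per c-axis grid point
```

The resulting 1D profile is what is plotted against the c-axis distance in Figure 2 (II) and Supplementary Figure S5.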
We choose a narrow window (within 2 black vertical lines) at the middle of a nano-ribbon (for both flat and bent) to study the local charge distribution near the surface-vacuum interface, as shown in {\color{blue}Supplementary Figure S4}. We define three quantities, the Width, Max, and Area of the local charge density (left), and compare them among the flat nano-ribbons of various TMDs (right), as shown in Figure~\ref{fig:local-charge}. The ``Width'' represents the distance over which the charge density decays to a small non-zero value ($\epsilon < 10^{-4}$) in vacuum ({\color{blue} Supplementary Figure S4}), which gives an estimate of the total effective decay length of the electron density. In addition, the areal density ($\int_{0}^{Width}\rho(z)dz$, the area under the curve) represents the average number of electrons per unit area, as shown in Figure~\ref{fig:local-charge}.\\ For the flat nano-ribbons, the Width increases whereas the Max and the Area decrease as we go from S to Se to Te for a given transition metal. The increase of the Width from S to Se to Te indicates an increase in the total effective decay length of the electron density, and hence in the effective thickness. Also, the Width corresponding to flat 1H nano-ribbons is shifted upward by at least 0.5 $\AA$ compared to that of flat 1T nano-ribbons, which contributes to a larger effective thickness and hence a larger bending stiffness. Our results suggest that the overall bending stiffness follows the trend of the width of the electron density and hence the effective thickness. The variation of the local charge density along the out-of-plane direction for different TMD nano-ribbons with the bending curvature is presented in {\color{blue}Supplementary Figure S5}. When a nanoribbon is bent, the local charge density shrinks with the bending curvature near the outer layer-vacuum interface while expanding near the inner layer-vacuum interface, leaving the total width unaffected.
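The three descriptors defined above can be extracted from a plane-averaged profile with a few lines of code. This is an assumed reconstruction, not the paper's code: the $10^{-4}$ cutoff is the value quoted above, while the Gaussian profile and its parameters are purely illustrative.

```python
# Sketch (assumed reconstruction, not the paper's code) of extracting the
# Width, Max, and areal-density descriptors from a plane-averaged profile
# rho(z), using the 1e-4 cutoff quoted in the text.
import numpy as np

def charge_descriptors(z, rho, cutoff=1e-4):
    keep = rho > cutoff                      # region of non-negligible density
    z_in, rho_in = z[keep], rho[keep]
    width = z_in[-1] - z_in[0]               # "Width": decay-length window
    rho_max = rho_in.max()                   # "Max", e/A^3
    # areal density ("Area"): trapezoidal integral of rho(z), e/A^2
    area = np.sum((rho_in[1:] + rho_in[:-1]) * np.diff(z_in)) / 2.0
    return width, rho_max, area

z = np.linspace(0.0, 20.0, 2001)                 # c-axis grid (Angstrom)
rho = 0.05 * np.exp(-((z - 10.0) / 1.2) ** 2)    # synthetic rho(z)

width, rho_max, area = charge_descriptors(z, rho)
print(round(width, 2), round(rho_max, 3), round(area, 3))
```

Applied to the synthetic Gaussian, this yields a width of about 6 Å, a max of 0.05, and an area near 0.106; with real profiles the same routine reproduces the trends discussed above.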
However, both the Max and the Area decrease with increasing bending curvature for most of the TMD compounds, except for TiTe$_2$ and WX$_2$. For WX$_2$, the local maximum closer to the surface-vacuum interface decreases on increasing the bending curvature (circled region in {\color{blue}Supplementary Figure S5}), whereas the other local maximum has the opposite trend. To study the effect of bending on the aforementioned local maximum (Max) and areal density (Area) among different materials, we estimate their percentage change with respect to the flat ribbon, as in Figure~\ref{fig:local-charge}. The bending produces noticeable changes in the charge distribution within the surface-vacuum interfaces.\\ \begin{figure}[h!] \includegraphics[scale = 0.46]{Figure7.pdf} \caption{(a) The local charge density along the out-of-plane (z) direction of the nano-ribbon. (b) Width ($\AA$), max ($e/\AA^3$, e: the electronic charge), and areal density ($e/{\AA^{2}}$) of the flat nanoribbons. (c) \% change in the area and the max of the bent nanoribbons having a bending curvature around 0.09 $\AA^{-1}$ with respect to the flat nano-ribbon; the max value is not shown for WX$_2$ as it possesses multiple local maxima.} \label{fig:local-charge} \end{figure} \begin{table}[h!] \centering \captionsetup{font=scriptsize} \caption{The ground state properties of TMD mono-layers having 1H or 1T phase: Relaxed in-plane lattice constant, a; Metal-chalcogen and chalcogen-chalcogen distance, $d_{T-X}$ and $d_{X-X}$ respectively (see Fig.~\ref{fig:struc}); X-T-X angle, $\theta_{X-T-X}$; Cohesive energy per atom, $E_c$; in-plane ($\nu_{in}$) and out-of-plane ($\nu_{out}$) Poisson's ratios; 2D Young's modulus, $Y_{2D}$; Bending stiffness, S$_b$; and Effective thickness, t$_{eff}$. Results for structural parameters of TiX$_2$ (X $=$ S, Se, Te), MoX$_2$, and WX$_2$ are in good agreement with the LDA+U results from reference~\onlinecite{CHS12}.
The structure parameters of the distorted T compounds, WTe$_2$ and MoTe$_2$, can be estimated from {\color{blue}Supplementary Table S2}. The representations of T$^{4+}$ such as d$^0$, d$^2$, and d$^6$ are taken from Ref.~\onlinecite{WY69}.} \resizebox{1.0\textwidth}{!}{% \begin{tabular}{lrrrrrrrrrrrr} \hline \hline T$^{4+}$ & TMDs & \multicolumn{1}{l}{a} & \multicolumn{1}{l}{d$_{T-X}$} & \multicolumn{1}{l}{d$_{X-X}$} & \multicolumn{1}{l}{$\theta_{X-T-X}$} & \multicolumn{1}{l}{E$_c$} & \multicolumn{1}{l}{$\nu_{in}$} & \multicolumn{1}{l}{$\nu_{out}$}& $Y_{2D}$& S$_b$ & t$_{eff}$ & $Y_{3D}$($\frac{Y_{2D}}{t_{eff}}$)\\ & & \multicolumn{1}{l}{($\AA$)} & \multicolumn{1}{l}{($\AA$)} & \multicolumn{1}{l}{($\AA$)} & \multicolumn{1}{l}{degree} & \multicolumn{1}{l}{(eV/atom)} & & & (N/m)& (eV) & ($\AA$) & (GPa)\\ \hline d$^0$ & TiS$_2$ & 3.42 & 2.42 & 2.80 & 90.16 & 6.80 & 0.17 & 0.42 & 85.20 & 2.25 & 2.25 & 378.67\\ & TiSe$_2$ & 3.55 & 2.55 & 3.04 & 91.76 & 6.17 & 0.23 & 0.43 & 59.74 & 2.86 & 3.03 & 197.72\\ & TiTe$_2$ & 3.76 & 2.77 & 3.44 & 94.55 & 5.41 & 0.24 & 0.38 & 44.46 & 3.29 & 3.77 & 117.93\\ & ZrS$_2$ & 3.67 & 2.57 & 2.87 & 88.14 & 7.35 & 0.19 & 0.52 & 83.76 & 2.13 & 2.21 & 379.00\\ & ZrSe$_2$ & 3.81 & 2.70 & 3.12 & 90.14 & 6.71 & 0.22 & 0.47 & 71.30 & 2.57 & 2.63 & 271.10\\ & ZrTe$_2$ & 4.01 & 2.91 & 3.53 & 92.94 & 5.89 & 0.18 & 0.44 & 43.16 & 3.01 & 3.66 & 117.92\\ & HfS$_2$ & 3.62 & 2.53 & 2.85 & 88.65 & 7.35 & 0.19 & 0.52 & 85.78 & 2.82 & 2.51 & 341.75\\ & HfSe$_2$ & 3.75 & 2.66 & 3.09 & 90.37 & 6.67 & 0.21 & 0.47 & 77.75 & 3.64 & 3.00 & 259.17\\ & HfTe$_2$ & 3.98 & 2.88 & 3.47 & 92.58 & 5.80 & 0.15 & 0.41 & 46.77 & 3.92 & 4.01 & 116.63\\ \hline d$^2$ & MoS$_2$ & 3.17 & 2.40 & 3.10 & 80.56 & 7.86 & 0.26 & 0.30 & 141.59 & 12.29 & 4.08 & 347.03\\ & MoSe$_2$ & 3.30 & 2.53 & 3.31 & 81.86 & 7.22 & 0.26 & 0.32 & 114.97 & 14.60 & 4.94 & 232.73\\ & MoTe$_2$-1H & 3.51 & 2.71 & 3.59 & 83.04 & 6.54 & 0.28 & 0.34 & 87.88 & 14.63 & 5.65 & 155.54\\ & MoTe$_2$-1T$'$ & 3.65 & -- & --
& -- & 6.54 & 0.28 & 0.46 & 61.85 & 7.28 & 4.75 & 130.21\\ & WS$_2$ & 3.16 & 2.40 & 3.10 & 80.25 & 7.91 & 0.26 & 0.33 & 143.92 & 12.61 & 4.10 & 351.02\\ & WSe$_2$ & 3.29 & 2.53 & 3.32 & 82.16 & 7.20 & 0.33 & 0.35 & 130.03 & 14.48 & 4.62 & 281.45\\ & WTe$_2$-1T$'$ & 3.61 & -- & -- & -- & 6.49 & 0.35 & 0.60 & 86.79 & 8.96 & 4.45 & 195.03\\ \hline d$^6$ & PdTe$_2$ & 3.96 & 2.67 & 2.73 & 83.91 & 4.07 & 0.32 & 0.64 & 61.82 & 2.78 & 2.94 & 210.27\\ & PtS$_2$ & 3.52 & 2.37 & 2.45 & 84.25 & 5.73 & 0.29 & 0.58 & 105.81 & 5.66 & 3.20 & 330.65\\ & PtSe$_2$ & 3.68 & 2.49 & 2.60 & 84.83 & 5.32 & 0.26 & 0.59 & 87.01 & 6.33 & 3.74 & 232.65\\ & PtTe$_2$ & 3.95 & 2.66 & 2.74 & 84.15 & 5.07 & 0.26 & 0.57 & 81.41 & 4.58 & 3.29 & 247.45\\ \hline \hline \end{tabular}}% \label{tab:eff-t}% \end{table}% \textbf{II. Band structure}\\ The band structure plots of group IV, VI, and X TMDs with respect to vacuum at various bending curvatures are shown in {\color{blue}Supplementary Figures S6, S7, and S8} respectively. The dashed lines in the band structure plots indicate the SCAN-estimated Fermi energy with respect to vacuum (the negative of the work function), while the red bands correspond to in-gap edge states. The edge states are identified by comparing the band structures of the ribbon with those of the monolayer bulk, and are highlighted in red. The bulk band gap (E$_g$ (eV)) (excluding edge states) and the work function ($\phi$ (eV)) of our flat nano-ribbons are extracted and tabulated in {\color{blue} Supplementary Table S1}. Of the TMD nano-ribbons considered, ZrX$_2$, HfX$_2$, MoY$_2$, and WX$_2$ (X = S, Se; Y = S, Se, Te) are semiconductors. To study the changes in the band structure of these semiconductors with respect to bending, we utilized hydrogen-passivated edges. A few of the low band-gap semiconductors, such as TiY$_2$ and TTe$_2$ (T=Zr, Hf), and the group (X) indirect band-gap semiconductors (PtX$_2$) become metallic due to the edge states.
We did not observe any substantial effect of bending on metallic compounds. The effect of mechanical bending on the band gap is of particular interest for semiconductors, due to a wide range of applications in nano-electronics. One compound each from the 1T and the 1H group, ZrS$_2$ and MoS$_2$ respectively, is chosen to study the effect of bending on the band structure, as shown in Figure~\ref{fig:band-semi}.\\ \begin{figure}[h!] \includegraphics[scale=0.3]{Figure8.pdf} \includegraphics[scale=0.3]{Figure9.pdf} \caption{Variation of the band edges with respect to the bending curvature for ZrS$_2$ (left) and MoS$_2$ (right); for ZrS$_2$, CBM and VB1 are the conduction band minimum and the edge state VB (valence band) respectively; for MoS$_2$, CB1 (CBM), CB2, VB1 (VBM), and VB2 are respectively the edge state CB (conduction band), bulk CB, edge state VB (valence band), and bulk VB. For the flat MoS$_2$ ribbon, VB1 represents the VBM, while for higher bending curvature ($\kappa$ $=$ $0.09 \AA^{-1}$) VB2 switches to the VBM.} \label{fig:band-semi} \end{figure} The nature of the edge states is different for 1T and 1H semiconductors. The 1T nanoribbon has edge states only below the Fermi level, while edge states both above and below the Fermi level are present in the 1H nanoribbon. The horizontal black dashed lines represent the water redox potentials with respect to the vacuum level, -4.44 eV for the reduction (H$^+$/H$_2$) and -5.67 eV for the oxidation (O$_2$/H$_2$O) at pH 0 \cite{CAAWSS07}. When the band edges straddle these potentials, a material possesses good water-splitting properties. The band edges CB2, VB1 (VBM), and VB2 of MoS$_2$ straddle the water redox potentials, while only the edge state CB1 stays within the gap. As semilocal DFT functionals underestimate the band gap \cite{P85}, a correction is always expected at the G$_0$W$_0$ level ({\color{blue} Supplementary Table S1}), which shifts the bands above and below the Fermi level further upward and downward respectively \cite{YRP16}.
However, it is known that such a correction for localized states (as in the case of point defects) is smaller than that for the delocalized bulk states \cite{ABP08}.\\ \textbf{(a) Tuning of band edges}\\ The band edges (conduction band minimum (CBM) and valence band maximum (VBM)) of ZrS$_2$ and other 1T semiconductors move upward on increasing the bending curvature, while the behavior varies from one band edge to another for MoS$_2$ and other 1H semiconductors. For MoS$_2$, for example, the shift of the band energies with respect to vacuum is negligible for the edge states compared to the bulk ones. The shift of the band edges also changes the Fermi level as well as the band gap ({\color{blue} Supplementary Figure S10}). For MoS$_2$, VB2 increases while VB1 decreases on increasing the bending curvature, which eventually results in the removal of some of the edge states, though complete elimination might not be possible. Since mechanical bending shifts the band edges only slightly, the photocatalytic properties of MoS$_2$ and WS$_2$ are preserved even for a larger bending curvature. On the other hand, bending can shift the band edges of 1T semiconductors by a considerable amount for bending curvatures up to 0.06 $\AA^{-1}$, but shifts them downward for higher bending curvatures. For example, one can shift the band edges of ZrS$_2$ upward by 0.25 eV by applying a bending curvature of 0.06 $\AA^{-1}$. Moreover, the G$_0$W$_0$ calculated band structure shows that the CBM (-4.58 eV and -4.53 eV respectively) of ZrS$_2$ and HfS$_2$ is slightly lower than the reduction potential (-4.44 eV), while the VBM (-7.15 eV and -6.98 eV) is significantly lower than the water oxidation potential (-5.67 eV) \cite{ZH13}. Mechanical bending can shift the band edges upward to straddle the water redox potentials, enhancing the photocatalytic activity.
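The straddling criterion can be made concrete with a small check. The redox potentials and the ZrS$_2$ G$_0$W$_0$ band edges are the values quoted in the text; the `straddles` helper and the rigid-shift treatment of bending are illustrative assumptions, not the paper's method.

```python
# A concrete check of the straddling criterion for photocatalytic water
# splitting: the CBM must lie above the H+/H2 level and the VBM below the
# O2/H2O level (all energies in eV vs. vacuum, pH 0).
E_RED, E_OX = -4.44, -5.67          # H+/H2 reduction, O2/H2O oxidation

def straddles(cbm, vbm, shift=0.0):
    """True if the band edges, rigidly shifted by `shift` eV, straddle
    the water redox window."""
    return (cbm + shift) > E_RED and (vbm + shift) < E_OX

cbm_ZrS2, vbm_ZrS2 = -4.58, -7.15   # G0W0 edges of ZrS2 quoted in the text
print(straddles(cbm_ZrS2, vbm_ZrS2))         # False: the CBM sits below H+/H2
print(straddles(cbm_ZrS2, vbm_ZrS2, 0.25))   # True: ~0.25 eV upward shift from bending
```

This mirrors the argument above: the flat ZrS$_2$ monolayer narrowly fails the criterion, while the ~0.25 eV upward shift available at a bending curvature of 0.06 $\AA^{-1}$ brings the edges into alignment.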
The effect of bending on the band edges of 1H-TSe$_2$ semiconductors is different from that of 1H-TS$_2$ ({\color{blue} Supplementary Figure S9}), especially in the bulk valence band maximum (VB2). The VB2 is almost constant at lower bending curvature for TSe$_2$, while there is an appreciable increase in the case of TS$_2$.\\ \pagebreak \textbf{(b) Charge localization and conductivity}\\ In this section, we describe the effect of bending on the band edges in terms of the localization or delocalization of the charge carriers at those band edges. The variation of an isosurface of the partial charge (electron or hole) density with respect to the bending curvature is presented in Figures~\ref{fig:band-charge} and \ref{fig:cbm}. Using mechanical bending, one can tune the conductivity of TMD monolayers \cite{YRP16}. Before bending, the charge carriers (holes) of ZrS$_2$ at VB2 are delocalized over the whole ribbon width, decreasing in magnitude from the S-edge to the Zr-edge. Mechanical bending localizes the charges towards the S-edge while depleting them along the Zr-edge, reducing the conductivity from one edge to the other. On the other hand, the charge density on top of VB1 does not change much with bending for lower bending curvatures. However, at $\kappa$ $=$ 0.09 $\AA^{-1}$ some charges accumulate at the Zr-edge, thereby changing the trend of the band energy with respect to vacuum (see Figure~\ref{fig:band-semi}). Unlike ZrS$_2$, the charge carriers (holes) of MoS$_2$ at VB2 are delocalized over the whole width, decreasing in magnitude symmetrically from the center of the ribbon to either edge. With bending, the charge carriers localize at the middle of the ribbon and deplete at the edges, reducing the conductivity due to holes from one edge to the other \cite{YRP16}. At bending curvatures beyond $\kappa = 0.065$ $\AA^{-1}$, the edge state VB1 crosses the bulk VB and becomes VB2, and vice versa.
CB1 behaves similarly to VB1 before and after bending, except that it does not cross CB2; instead, it also shifts down as VB1 does.\\ Conversely, the charge carriers (electrons) of ZrS$_2$ at the CBM (CB2) decrease in magnitude from the Zr-edge to the S-edge. Again, mechanical bending localizes the electrons towards the Zr-edge. On the other hand, for MoS$_2$ the electronic conductivity does not change even for larger bending curvatures. The electrons are delocalized uniformly over the whole ribbon width, which remains unaffected for a wide range of bending curvatures. The conductivity of a semiconductor is the sum of the electron and hole conductivities. Mechanical bending reduces both types of conductivity in 1T semiconductors, while it only reduces the hole-type conductivity in 1H semiconductors.\\ \pagebreak \begin{figure}[h!] \includegraphics[scale=0.5]{Figure11.pdf} \caption{{Variation in the isosurface of the partial charge densities at VB1 and VB2 (holes) with respect to the bending curvature; (a) ZrS$_2$; (b) MoS$_2$; (c) Variation in the isosurface of the partial charge densities (donor-like) of MoS$_2$ at CB1 with bending curvatures.}} \label{fig:band-charge} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.4]{Figure12.pdf} \caption{Variation in the isosurface of the partial charge density (electrons) with respect to the bending curvature at the bulk conduction band minimum; (a) CBM for ZrS$_2$; (b) CB2 for MoS$_2$.} \label{fig:cbm} \end{figure} \textbf{G. Stability of nano-ribbons and finite width effect}\\ Based on our calculations, we have found that the stability of the flat nano-ribbons also depends on the type of edge. We have taken stoichiometric (n(X):n(T)$=$ 2:1) nano-ribbons ({\color{blue} Supplementary Figure S4}) for most of the calculations. However, we could not relax the TiSe$_2$, HfS$_2$, PdTe$_2$, and PtSe$_2$ nano-ribbons in this configuration.
We confirm that the instability of these flat ribbons cannot be removed simply by increasing the width of the ribbon. For these compounds, we chose a symmetric-edge nano-ribbon by removing 2 dangling X (S, Se, or Te) atoms from one of the edges (Figure~\ref{fig:ribbon} II). Our calculations show that the TMD nano-ribbons are stable against mechanical bending for a wide range of bending curvatures, except for WTe$_2$. Bond breaking in the curvature region is observed for $\kappa$ $>$ 0.086 $\AA^{-1}$, as shown in Figure~\ref{fig:WTe}. Upon bending, one of the chalcogen atoms in the curvature region moves towards the middle layer, causing a further separation of the 2 metal atoms, as shown inside the circle, and creating a sudden jump in the areal bending energy density vs. curvature plot (see Figure~\ref{fig:WTe} (III)).\\ \noindent We utilized the thin-plate bending model in our assessment, in which the width is kept fixed between the flat and bent nanoribbons. This eliminates the quantum confinement effect present in the nanotube method due to the differing widths of flat and bent nanoribbons at different radii of curvature. However, the edge effects due to the finite width may remain. Gonz\'{a}lez et al. \cite{GVRVSKM18}, using classical molecular dynamics simulations, reported that the bending stiffness of MoS$_2$ estimated with a 0.95 nm wide nanoribbon is only 46\% of that estimated using an 8 nm wide nanoribbon. However, 88-93\% of the bending stiffness is recovered when the width increases to 3-4 nm, leaving the overall trend unaffected. We believe that such an accuracy is a reasonable tradeoff against the computational cost that arises when using a larger width. Moreover, we expect the finite size effect to be weaker in our results than in those calculated from MD simulations, as the quantum effects are more properly treated. \begin{figure}[h!]
\includegraphics[scale=0.4]{Figure10.pdf} \caption{(I)-(II): Structures for 2 different bending curvatures, showing the breaking of the ribbon within the curvature region; the figure on the left is for $\kappa$ $=$ 0.086 $\AA^{-1}$ while the one on the right is for $\kappa$ $=$ 0.093 $\AA^{-1}$. (III) The areal bending energy density with respect to the bending curvature for WTe$_2$, showing the breaking of the structure.} \label{fig:WTe} \end{figure} \section{Conclusion and discussion} 2D materials offer a wide range of electronic properties applicable in sensors, energy storage, photonics, and optoelectronic devices. The high flexural rigidity and strain-tunable properties of these compounds make them potential functional materials for future flexible electronics. In this work, we have employed the SCAN functional to explore the physical and mechanical properties of 2D transition metal dichalcogenide (TMD) monolayers under mechanical bending. SCAN performs reasonably well in predicting the correct ground state phase as well as the geometrical properties. Also, a wide variety of flexural rigidities can be observed while scanning the periodic table for TMDs. The in-plane stiffness decreases from S to Se to Te, while the bending stiffness has the opposite trend. Overall, the bending stiffness also depends on the d-band filling of the transition metal. The bending stiffness increases on increasing the filling of the d band from sparsely filled (d$^0$) to nearly half-filled (d$^2$). However, the bending stiffness decreases on moving from a nearly half-filled (d$^2$) to a completely filled (d$^6$) d band.
The out-of-plane Poisson's ratios are found to be different from the in-plane Poisson's ratio for 1T and 1T$'$ monolayers, while the difference is negligible in the case of 1H compounds, showing the anisotropic behavior of 1T and 1T$'$ monolayers.\\ Despite the extraordinary physical and electronic properties of TMDs, there are still challenges in making use of TMD semiconductors in nanoelectronics. The strong Fermi level pinning and high contact resistance are key bottlenecks in contact engineering; they are mainly due to in-plane, in-gap edge states and depend only weakly on the work function of the contact metal \cite{KMLCAN17}. Thanks to mechanical bending, tuning of various properties of monolayer TMDs is possible, including the band edges, thickness, and local strain. Bending deformation produces highly non-uniform local strain up to 40\% ({\color{blue} Supplementary Figure S1}), which is almost impossible to achieve with linear strain ($\epsilon$). The high out-of-plane compressive strain developed within the layers due to bending reduces the mechanical thickness and makes the materials thinner in the curvature region. Moreover, one may be able to remove the strong Fermi-level pinning when using bent monolayers in contact engineering. Besides that, an optimal band alignment with the HER redox potential can be achieved for the 1T semiconductors ZrS$_2$ and HfS$_2$ under mechanical bending, which is not present in an unbent monolayer. Furthermore, both the electron and hole conductivities are affected in 1T semiconductors, while only the hole conductivity is affected in 1H semiconductors \cite{YRP16}. Similar to graphene \cite{YS97, W04, KGB01, Z00}, the estimated effective thickness of group IV TMDs, especially the sulfides and selenides, is underestimated compared to the chalcogen-chalcogen distance (d$_{X-X}$), which is quite puzzling and needs further investigation. \section{Acknowledgement} We thank Prof. John P. Perdew for useful comments on the manuscript.
This research was supported as part of the Center for Complex Materials from First Principles (CCM), an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under Award No. DE-SC0012575. Computational support was provided by the National Energy Research Scientific Computing Center (NERSC). Some of the calculations were carried out on Temple University's HPC resources and were thus supported in part by the National Science Foundation through major research instrumentation grant number 1625061 and by the US Army Research Laboratory under contract number W911NF-16-2-0189. \begin{center} \textbf{Supplementary material\\} \end{center} \begin{table}[h!] \renewcommand\thetable{S1} \caption{Width of the relaxed flat nanoribbons used in the calculations ($\AA$); the bulk band gap (excluding edge states) of the semiconducting unbent nano-ribbons (E$_g$); work function ($\phi$ (eV)); the G$_0$W$_0$ quasi-particle gap of monolayer semiconductors (E$^{QP}_g$) is shown for comparison.} \begin{tabular}{|l|r|r|r|r|} \hline TMDs & Width ($\AA$) & \multicolumn{1}{l|}{E$_g$ (eV)} & \multicolumn{1}{l|}{$\phi = E_{vacuum} - E_{Fermi}$ (eV)} & E$^{QP}_g$ (eV)\cite{ZH13} \\ \hline TiS$_2$ & 32.04 &-- & 5.236 & \\ \hline TiSe$_2$ & 32.62 &-- & 5.508 & \\ \hline TiTe$_2$ & 34.11 &-- & 5.09 & \\ \hline ZrS$_2$ & 34.64 &1.549 & 5.886 & 2.56 \\ \hline ZrSe$_2$ & 35.55 &1.025 & 5.549 & 1.54\\ \hline ZrTe$_2$ & 37.10 & -- & 5.084 & \\ \hline HfS$_2$ & 34.18 & 1.751 & 5.919 & 2.45 \\ \hline HfSe$_2$ & 35.01 & 1.092 & 5.386 & 1.39\\ \hline HfTe$_2$ & 36.77 & -- & 4.938 & \\ \hline MoS$_2$ & 29.85 & 1.836 & 5.376 & 2.36 \\ \hline MoSe$_2$ & 30.95 & 1.709 & 4.952 & 2.04\\ \hline MoTe$_2$ & 32.62 & 1.349 & 4.631 & 1.54 \\ \hline MoTe$_2$-1T$^\prime$ & 33.72 & -- & 4.795 & \\ \hline WS$_2$ & 29.80 & 2.094 & 5.126 & 2.64 \\ \hline WSe$_2$ & 30.73 & 1.893 & 4.736 & 2.26\\ \hline WTe$_2$ & 33.26 & -- & 4.584 & \\ \hline PdTe$_2$ & 37.07 &
-- & 4.4 & \\ \hline PtS$_2$ & 35.36 & -- & 5.482 & \\ \hline PtSe$_2$ & 34.32 & -- & 4.958 & \\ \hline PtTe$_2$ & 37.20 & -- & 4.466 & \\ \hline \end{tabular} \label{tab:band} \end{table} \begin{table}[h!] \renewcommand\thetable{S2} \centering \caption{The calculated structure parameters for the rectangular 1T$^\prime$ monolayer unit cell of WTe$_2$-type having 2 TX$_2$ units: For MoTe$_2$, a $=$ 3.43439, b $=$ 6.31457 $\AA$. Similarly for WTe$_2$, a $=$ 3.45822, b $=$ 6.24802 $\AA$ (Figure 1 in the main text). The combined-fractional coordinates X, Y, and Z represent the position of the corresponding atom in the same row. Lattice constants can be estimated as $b/\sqrt3$. Chalcogen-chalcogen distances d$_{X-X}$ for the distorted MoTe$_2$ and WTe$_2$ are (2.9486 $\AA$ and 4.0940 $\AA$) and (2.9118 $\AA$ and 4.1386 $\AA$) respectively.} \begin{tabular}{rlrrr} \hline \hline \multicolumn{1}{l}{TMD} & Atom & \multicolumn{1}{l}{X} & \multicolumn{1}{l}{Y} & \multicolumn{1}{l}{Z}\\ \hline & Mo1 & 0.01712 & 0 & 0.50454 \\ & Mo2 & 0.37804 & 0.5 & 0.49545 \\ \multicolumn{1}{l}{MoTe$_2$} & Te1 & 0.27688 & 0 & 0.39764 \\ & Te2 & 0.62769 & 0 & 0.57361\\ & Te3 & 0.11724 & 0.5 & 0.60234 \\ & Te4 & 0.76664 & 0.5 & 0.42641 \\ & & & & \\ & W1 & 0.02032 & 0 & 0.50509\\ & W2 & 0.37395 & 0.5 & 0.49487 \\ \multicolumn{1}{l}{WTe$_2$} & Te1 & 0.27811 & 0 & 0.39653 \\ & Te2 & 0.62559 & 0 & 0.57281 \\ & Te3 & 0.11669 & 0.5 & 0.60346 \\ & Te4 & 0.76895 & 0.5 & 0.42722\\ \hline \hline \end{tabular}% \label{tab:eff-2t}% \end{table}% \begin{figure}[h!] 
\renewcommand\thefigure{S1} \includegraphics[scale=0.33]{strain/TiS2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/TiSe2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/TiTe2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/ZrS2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/ZrSe2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/ZrTe2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/HfS2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/HfSe2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/HfTe2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/MoS2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/MoSe2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/MoTe2-1H-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/WS2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/WSe2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/PtS2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/PtSe2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/PtTe2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/PdTe2-strain-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/MoTe2-dT-eps-converted-to.pdf} \includegraphics[scale=0.33]{strain/WTe2-dT-eps-converted-to.pdf} \caption{The local strain projected onto the bending plane (bc) for the bending curvature around 0.09 $\AA^{-1}$. The upper and lower plots correspond to outer and inner chalcogen layers respectively, while the middle one corresponds to the metallic layer.} \label{lab:local-strain} \end{figure} \begin{figure}[h!] 
\renewcommand\thefigure{S2} \includegraphics[scale=0.33]{thickness/TiS2.pdf} \includegraphics[scale=0.33]{thickness/TiSe2.pdf} \includegraphics[scale=0.33]{thickness/TiTe2.pdf} \includegraphics[scale=0.33]{thickness/ZrS2.pdf} \includegraphics[scale=0.33]{thickness/ZrSe2.pdf} \includegraphics[scale=0.33]{thickness/ZrTe2.pdf} \includegraphics[scale=0.33]{thickness/HfS2.pdf} \includegraphics[scale=0.33]{thickness/HfSe2.pdf} \includegraphics[scale=0.33]{thickness/HfTe2.pdf} \includegraphics[scale=0.33]{thickness/MoS2.pdf} \includegraphics[scale=0.33]{thickness/MoSe2.pdf} \includegraphics[scale=0.33]{thickness/MoTe2-1H.pdf} \includegraphics[scale=0.33]{thickness/WS2.pdf} \includegraphics[scale=0.33]{thickness/WSe2.pdf} \includegraphics[scale=0.33]{thickness/PtS2.pdf} \includegraphics[scale=0.33]{thickness/PtSe2.pdf} \includegraphics[scale=0.33]{thickness/PtTe2.pdf} \includegraphics[scale=0.33]{thickness/PdTe2.pdf} \includegraphics[scale=0.33]{thickness/MoTe2-dT.pdf} \includegraphics[scale=0.33]{thickness/WTe2.pdf} \caption{The change in the physical thickness with respect to the flat nano-ribbon for the bending curvature around 0.09 $\AA^{-1}$. We utilized a 6$^{th}$ order polynomial fit to estimate the physical thickness. The {\color{blue} blue}, {\color{red} red}, and {\color{green} green} plots represent t$_{tot}$ (t$_{up}$ + t$_{dn}$), t$_{up}$, and t$_{dn}$ respectively (see figure 2 (III) in the main paper).} \label{lab:thickness} \end{figure} \begin{figure}[h!] \renewcommand\thefigure{S3} \includegraphics[scale=0.8]{fitting.pdf} \caption{Estimating the physical thickness at the curvature (vertex) region for 1H structure. 
(a) The layer is fitted with an n$^{th}$ order polynomial, and the shortest distance from a point at the middle (A and B) to the fitted curve is computed; t$_{tot}$ is the shortest distance from point A to the inner layer; t$_{up}$ is the shortest distance from point A to the middle layer; t$_{dn}$ is the shortest distance from point B to the inner layer. (b) Absolute error with respect to the polynomial order; t$_{X-X}$ is the distance between points A and C. A sixth order polynomial is sufficient to estimate the physical thickness for 1H, 1T, and 1T$^\prime$ structures.} \label{fig:1T-BG} \end{figure} \begin{figure}[h!] \renewcommand\thefigure{S4} \text{I} \includegraphics[scale=1.2]{new2/area1.pdf} \includegraphics[scale=0.33]{new2/1H.png} \includegraphics[scale=0.5]{new2/axis.png} \includegraphics[scale=0.25]{new2/dT.png} \caption{A narrow window is taken at the middle of the nano-ribbon. The local electronic charge density is calculated along the out-of-plane direction (c-axis) within the narrow window. The first, second, and third figures correspond to 1T, 1H, and 1T$^\prime$ respectively.} \label{fig:window} \end{figure} \begin{figure}[h!] \renewcommand\thefigure{S5} \caption{Variation of the local electronic charge distribution with respect to bending curvature.
The distance between the 2 vertical red lines represents the d$_{X-X}$ distance of the monolayer bulk.} \includegraphics[scale=0.35]{new2/area-TiS.pdf} \includegraphics[scale=0.35]{new2/area-TiSe.pdf} \includegraphics[scale=0.35]{new2/area-TiTe.pdf} \includegraphics[scale=0.35]{new2/area-ZrS.pdf} \includegraphics[scale=0.35]{new2/area-ZrSe.pdf} \includegraphics[scale=0.35]{new2/area-ZrTe.pdf} \includegraphics[scale=0.35]{new2/area-HfS.pdf} \includegraphics[scale=0.35]{new2/area-HfSe.pdf} \includegraphics[scale=0.35]{new2/area-HfTe.pdf} \includegraphics[scale=0.35]{new2/area-MoS-2.pdf} \includegraphics[scale=0.35]{new2/area-MoSe.pdf} \includegraphics[scale=0.35]{new2/area-MoTe.pdf} \includegraphics[scale=0.35]{new2/area-WS1.pdf} \includegraphics[scale=0.35]{new2/area-WSe.pdf} \includegraphics[scale=0.35]{new2/area-WTe.pdf} \label{fig:log-charge} \end{figure} \begin{figure} \includegraphics[scale=0.35]{new2/area-PtS.pdf} \includegraphics[scale=0.35]{new2/area-PtSe.pdf} \includegraphics[scale=0.35]{new2/area-PtTe.pdf} \includegraphics[scale=0.35]{new2/area-PdTe.pdf} \includegraphics[scale=0.35]{new2/area-MoTe-dT.pdf} \end{figure} \begin{figure}[h!]
\renewcommand\thefigure{S6} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-flat-TiS.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-TiS-2.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-TiS-4.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-TiS-5.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-flat-TiSe.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-TiSe-2.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-TiSe-4.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-TiSe-7.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-flat-TiTe.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-TiTe-2.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-TiTe-4.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-TiTe-7.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-flat-ZrS.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-ZrS-2.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-ZrS-4.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-ZrS-7.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-flat-ZrSe.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-ZrSe-2.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-ZrSe-4.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-ZrSe-7.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-flat-ZrTe.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-ZrTe-2.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-ZrTe-4.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-ZrTe-7.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-flat-HfS.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-HfS-2.pdf} \includegraphics[height=0.9in, 
width=1.5in]{EIGENVAL/EIGENVAL-HfS-4.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-HfS-7.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-flat-HfSe.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-HfSe-2.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-HfSe-4.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-HfSe-7.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-flat-HfTe.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-HfTe-2.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-HfTe-4.pdf} \includegraphics[height=0.9in, width=1.5in]{EIGENVAL/EIGENVAL-HfTe-7.pdf} \caption{Band structures with respect to vacuum corresponding to group IV TMDs; \textbf{TiS$_2$}, \textbf{TiSe$_2$}, \textbf{TiTe$_2$}, \textbf{ZrS$_2$}, \textbf{ZrSe$_2$}, \textbf{ZrTe$_2$}, \textbf{HfS$_2$}, \textbf{HfSe$_2$}, and \textbf{HfTe$_2$}. $\kappa$ is the bending curvature ($\AA^{-1}$).} \label{fig:band-IV} \end{figure} \begin{figure}[h!] 
\renewcommand\thefigure{S7} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-flat-MoS.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-MoS-2.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-MoS-4.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-MoS-7.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-flat-MoSe.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-MoSe-2.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-MoSe-4.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-MoSe-7.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-flat-MoTe-S.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-MoTe-5S.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-MoTe-7S.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-MoTe-8.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-flat-WS.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-WS-2.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-WS-4.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-WS-7.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-flat-WSe.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-WSe-2.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-WSe-5.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-WSe-7.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-flat-MoTe.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-MoTe-2.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-MoTe-4.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-MoTe-7.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-flat-WTe.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-WTe-2.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-WTe-4.pdf} 
\includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-WTe-7.pdf} \caption{Band structures with respect to vacuum for group VI TMDs; \textbf{MoS$_2$}, \textbf{MoSe$_2$}, \textbf{MoTe$_2$}, \textbf{MoTe$_2$-1T$^\prime$}, \textbf{WS$_2$}, \textbf{WSe$_2$}, and \textbf{WTe$_2$}. $\kappa$ is the bending curvature ($\AA^{-1}$).} \label{fig:band-VI} \end{figure} \begin{figure}[h!] \renewcommand\thefigure{S8} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-flat-PdTe.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-PdTe-2.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-PdTe-4.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-PdTe-7.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-flat-PtS.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-PtS-2.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-PtS-4.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-PtS-7.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-flat-PtSe.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-PtSe-2.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-PtSe-4.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-PtSe-7.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-flat-PtTe.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-PtTe-2.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-PtTe-4.pdf} \includegraphics[height=1in, width=1.5in]{EIGENVAL/EIGENVAL-PtTe-7.pdf} \caption{Band structures with respect to vacuum for group X TMDs; PdTe$_2$ and PtY$_2$ (Y = S, Se, Te). $\kappa$ is the bending curvature ($\AA^{-1}$).} \label{fig:band-X} \end{figure} \begin{figure}[h!] 
\renewcommand\thefigure{S9} \includegraphics[scale=0.2]{ZrSe2-gap.pdf} \includegraphics[scale=0.2]{HfS2-gap.pdf} \includegraphics[scale=0.2]{HfSe2-gap.pdf} \includegraphics[scale=0.2]{MoSe2-gap.pdf} \includegraphics[scale=0.2]{WS2-gap.pdf} \includegraphics[scale=0.2]{WSe2-gap.pdf} \caption{Band edges with respect to vacuum; CBM, CB1, VB1, CB2, VB2 are the band edges defined in Figure 8 (Main text). The horizontal dotted lines represent the water redox potentials: reduction (H$^+$/H$_2$; $-4.44$ eV) and oxidation (H$_2$O/O$_2$; $-5.67$ eV).} \label{fig:1T-BE} \end{figure} \begin{figure}[h!] \renewcommand\thefigure{S10} \includegraphics[scale=0.3]{band-Hf-Zr.pdf} \includegraphics[scale=0.3]{band-Hf-Zr-gap.pdf} \includegraphics[scale=0.3]{MoS2-band-gap.pdf} \includegraphics[scale=0.3]{MoSe2-band-gap.pdf} \includegraphics[scale=0.3]{WS2-band-gap.pdf} \includegraphics[scale=0.3]{WSe2-band-gap.pdf} \caption{Band gaps as a function of the bending curvature. CBM, CB1, VB1, CB2, VB2 are the band edges defined in Figure 8 (Main text). The band gap remains almost constant for the 1T semiconductors, while it decreases with the bending curvature for the 1H compounds.} \label{fig:1T-BG} \end{figure} \begin{figure}[h!] \renewcommand\thefigure{S11} \includegraphics[scale=0.62]{Figure13.pdf} \caption{Angular-momentum-decomposed wavefunction character of different bands for each layer and its variation with the bending curvature; (a)-(c): ZrS$_2$; (d)-(g): MoS$_2$. Inner, middle, and outer denote the different layers of the nanoribbon. CBM, CB1, VB1, CB2, VB2 are the band edges defined in Figure 8 (Main text).} \label{fig:wavefunc} \end{figure} \clearpage
\section{Introduction}\label{Introduction} Duplex stainless steels (DSS) have achieved great success in the development of stainless steels and have led to numerous commercial applications. They consist of approximately equal amounts of ferrite and austenite. Although DSS successfully combine the merits of these two phases, the brittle cleavage problem in the ferrite component has attracted continued attention~\cite{Wessman2008}. In order to understand the mechanism of ductility in DSS, in the present work we concentrate on the effect of Mn and Ni additions on the intrinsic properties of ferromagnetic Fe. Both alloying elements are fundamental constituents of DSS and they are primarily responsible for the stabilization of the austenite phase in low-carbon alloys. The mechanical properties of engineering alloys are determined by a series of mechanisms controlling the dislocation glide (such as elasticity, grain size, texture, grain boundary cohesion, solid solution and precipitation hardening). Describing such phenomena using merely \emph{ab initio} calculations is not feasible. On the other hand, the mechanical properties are closely related to several material parameters which are within reach of such theoretical tools. In particular, the fracture work depends on the surface energy, elastic energy, and plastic energy~\cite{Pugh1954}. Since crack propagation is controlled by the surface energy ($\gamma_s$) and plastic deformation by the unstable stacking fault (USF) energy ($\gamma_u$), Rice \cite{Rice1992} proposed that the brittle-versus-ductile behavior can be effectively classified by the $\gamma_s/\gamma_u$ ratio. In the body-centered cubic (bcc) system, plastic deformation is complicated by the intricate dislocation core structure, the multiple slip systems, and the cross slip between them. 
According to the Peierls-Nabarro model, the shear stress necessary for moving a dislocation in the lattice can be estimated as $\tau_p=2G/(1-\nu)\exp[-2\pi a/(b(1-\nu))]$, where $G$ is the shear modulus and $\nu$ the Poisson ratio. Here $a$ represents the characteristic interlayer distance and $b$ is the length of the Burgers vector. For a fixed structure, $a/b$ depends only on the slip system. Accordingly, the largest $a$ combined with the shortest $b$ gives the lowest resistance stress, which corresponds to the most likely slip system. Usually, such slip systems involve the most closely packed atomic planes and slip directions. In the bcc lattice, this is the $\{110\}\langle111\rangle$ slip system, \emph{i.e.}, slip occurs along the $\langle111\rangle$ direction on the $\{110\}$ planes. A recent molecular dynamics study \cite{Kumar2013} showed that other plane families such as $\{112\}$ and $\{123\}$ can also be active in bcc Fe. Unfortunately, both the USF energy and the surface energy for a particular crystallographic facet are extremely difficult to measure experimentally, and thus today the limited available data are almost exclusively based on first-principles quantum-mechanical calculations. However, long before the era of ``\emph{ab initio} materials science'', Pugh analyzed dozens of experimental data sets and developed a simple phenomenological model~\cite{Pugh1954} which correlates the fracture properties with easily accessible elastic parameters. According to this model, the ductility of materials can be characterized by the ratio between the bulk modulus $B$ and the shear modulus $G$. Ductile materials usually possess $B/G>\chi_{\rm P}\equiv 1.75$, whereas brittle materials typically have $B/G<\chi_{\rm P}$. The Pugh ratio is often used in modern computational materials science to assess and predict trends at relatively low computational cost. One can easily establish a qualitative connection between the Pugh criterion and that of Rice. 
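The slip-system ranking implied by the Peierls-Nabarro expression can be checked numerically. The sketch below uses the standard bcc geometry ($b=\sqrt{3}a_0/2$ along $\langle111\rangle$, interplanar spacings $d_{110}=a_0/\sqrt{2}$, $d_{112}=a_0/\sqrt{6}$, $d_{123}=a_0/\sqrt{14}$); the Poisson ratio $\nu=0.3$ is an illustrative assumption, not a value from this work.

```python
import math

def peierls_stress_over_G(a_over_b, nu):
    """Relative Peierls-Nabarro stress tau_p / G for spacing ratio a/b
    and Poisson ratio nu (assumed illustrative value below)."""
    return 2.0 / (1.0 - nu) * math.exp(-2.0 * math.pi * a_over_b / (1.0 - nu))

# bcc geometry (in units of the cubic lattice parameter a0):
b = math.sqrt(3.0) / 2.0                      # Burgers vector along <111>
ratios = {"{110}": (1.0 / math.sqrt(2.0)) / b,
          "{112}": (1.0 / math.sqrt(6.0)) / b,
          "{123}": (1.0 / math.sqrt(14.0)) / b}

nu = 0.3  # assumed Poisson ratio
tau = {plane: peierls_stress_over_G(r, nu) for plane, r in ratios.items()}

# The widest interplanar spacing gives the softest slip system: {110}<111>
assert tau["{110}"] < tau["{112}"] < tau["{123}"]
```

The ordering is insensitive to the exact value of $\nu$, which is why $\{110\}\langle111\rangle$ dominates even though the molecular dynamics results cited above show that $\{112\}$ and $\{123\}$ glide is not negligible.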
The surface energy is known to correlate with the cohesive energy \cite{Vitos1994,Vitos1997,Kollar2000,Kwon2005,PRL69}, which in turn is to a large degree proportional to the bulk modulus, since a larger cohesive energy implies a larger curvature of the binding energy curve. The USF energy represents the energy barrier upon shifting one half of the crystal over the other half. The maximal slope of the so-defined $\gamma$-surface is expected to be proportional to $\gamma_u$ (the maximum of the curve) and also to the Peierls-Nabarro stress. Assuming a constant Poisson ratio, for a fixed lattice the latter is determined merely by the shear modulus. Hence, at least on a qualitative level, one may expect that $\gamma_s/\gamma_u$ and $B/G$ follow similar trends, \emph{e.g.}, upon alloying. The above correlations will be carefully investigated in the present work in the case of Fe-Mn and Fe-Ni alloys. For an isotropic polycrystalline material the two elastic moduli $B$ and $G$ determine the Poisson ratio $\nu = (3B/G-2)/(6B/G+2)$. Consequently, the Pugh criterion imposes the constraint $\nu > (3\chi_{\rm P}-2)/(6\chi_{\rm P}+2) \approx 0.26$ for the Poisson ratio of ductile materials. The polycrystalline bulk and shear moduli are derived from the single-crystal elastic constants by Hill averaging of the Voigt and Reuss bounds \cite{Vitos-book}. In the case of cubic crystals, these two bounds are related to Pugh's ratio by \begin{eqnarray}\label{eq0} \frac{B}{G} = \left\{\begin{array}{cc}5\left(\frac{2}{3}Z^{-1}+\mu\right)(2Z^{-1}+3)^{-1}\;\;\mbox{(Voigt)}\\ \frac{1}{5}\left(\frac{2}{3}Z^{-1}+\mu\right)(3+2Z)\;\;\mbox{(Reuss)} \end{array}\right., \end{eqnarray} where $Z\equiv 2C_{44}/(C_{11}-C_{12})$ is the Zener anisotropy ratio. Hence, in both limiting cases, the Pugh ratio increases linearly with $\mu\equiv C_{12}/C_{44}$. For isotropic crystals $Z=1$, and thus the Pugh criterion for ductility requires $\mu\gtrsim 1.08$. 
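Eq.~(\ref{eq0}) can be verified against the directly computed Voigt and Reuss bounds. A minimal check in plain Python, exercised with the EMTO elastic constants of bcc Fe from Table~\ref{tabI} (288.9, 133.7, 100.0 GPa):

```python
def pugh_bounds(C11, C12, C44):
    """Return ((B/G_Voigt, Eq.(1) Voigt), (B/G_Reuss, Eq.(1) Reuss)),
    computing each bound both directly and from (Z, mu)."""
    B = (C11 + 2.0 * C12) / 3.0
    GV = (C11 - C12 + 3.0 * C44) / 5.0
    GR = 5.0 * (C11 - C12) * C44 / (4.0 * C44 + 3.0 * (C11 - C12))
    Z = 2.0 * C44 / (C11 - C12)      # Zener anisotropy ratio
    mu = C12 / C44                   # Cauchy-pressure parameter
    eq_voigt = 5.0 * (2.0 / (3.0 * Z) + mu) / (2.0 / Z + 3.0)
    eq_reuss = (2.0 / (3.0 * Z) + mu) * (3.0 + 2.0 * Z) / 5.0
    return (B / GV, eq_voigt), (B / GR, eq_reuss)

# bcc Fe elastic constants (GPa) from Table I of the text
(voigt, eq_v), (reuss, eq_r) = pugh_bounds(288.9, 133.7, 100.0)
assert abs(voigt - eq_v) < 1e-9 and abs(reuss - eq_r) < 1e-9

# Isotropic case Z = 1: B/G = 2/3 + mu, so B/G > 1.75 requires mu > ~1.083
```

Since $G_R \le G_V$, the Reuss estimate of $B/G$ always lies above the Voigt one, which is why the corresponding $\mu$ thresholds quoted below bracket the Hill value.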
For weakly anisotropic lattices ($Z\sim 2$, which is the case for the present alloys), the above condition modifies to $\mu\gtrsim 0.92$ or $\mu\gtrsim 1.07$, corresponding to the Reuss or Voigt value, respectively. For this anisotropy level, the Hill average requires $\mu\gtrsim 0.99$ for ductile materials. The above defined $\mu$ parameter is closely connected to the Cauchy pressure $(C_{12}-C_{44})=C_{44}(\mu-1)$. A negative Cauchy pressure (\emph{i.e.}, $\mu<1$) implies angular character in the bonding and thus characterizes covalent crystals, while a positive Cauchy pressure ($\mu>1$) is typical for metallic crystals \cite{Pettifor1992}. In other words, if the resistance of the crystal to shear ($C_{44}$) is larger than the resistance to volume change ($C_{12}$), the system shows covalent features, and vice versa. This condition based on the Cauchy pressure is in line with the one formulated by Pugh. Compared to the criteria based on the Pugh ratio and the Cauchy pressure, the condition developed by Rice is expected to give more physical insight into the mechanism of plastic deformation. However, the application of the Rice condition based on planar defect energies is tied to a specific deformation mode and crystal orientation. In principle, one should analyze all possible cases with appropriate statistical weights. Except for simple cases, such as the face-centered cubic (fcc) lattice, this makes the Rice approach rather cumbersome. The Pugh rule, in turn, was established from a large body of experimental data and is based on simple physical quantities. Although many assumptions are involved in this empirical approach, it has proved practical in a large number of previous studies. In the present work, we focus on the alloying effect of Ni and Mn on ferritic Fe-based dilute alloys. 
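The quoted Hill-average threshold $\mu\gtrsim 0.99$ for $Z\approx 2$ follows directly from Eq.~(\ref{eq0}): working in units of $C_{44}$ one has $C^{\prime}=C_{44}/Z$ and $B=\mu+2/(3Z)$. A short sketch (the only assumptions are the fixed anisotropy $Z$ and the Pugh threshold $\chi_{\rm P}=1.75$):

```python
def mu_threshold(Z, chi=1.75):
    """Smallest mu = C12/C44 satisfying the Pugh criterion B/G_Hill >= chi
    for a cubic crystal of Zener anisotropy Z, in units of C44 (C' = 1/Z)."""
    GV = (2.0 / Z + 3.0) / 5.0        # Voigt shear bound / C44
    GR = 5.0 / (2.0 * Z + 3.0)        # Reuss shear bound / C44
    GH = 0.5 * (GV + GR)              # Hill average / C44
    # Solve B / GH = chi with B = mu + 2/(3Z):
    return chi * GH - 2.0 / (3.0 * Z)

assert abs(mu_threshold(1.0) - (1.75 - 2.0 / 3.0)) < 1e-12  # isotropic: ~1.08
assert abs(mu_threshold(2.0) - 0.99) < 0.005                # weakly anisotropic
```

The same one-liner reproduces the Reuss and Voigt limits (0.92 and 1.07 at $Z=2$) if `GH` is replaced by `GR` or `GV`.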
In order to study the possible mechanisms behind the well-known facet cleavage problem in DSS alloys, which to a large extent limits their engineering applications, here we carry out systematic electronic structure calculations to obtain the single-crystal and polycrystalline elastic parameters as well as the surface and USF energies as a function of chemical composition. We consider the $\{110\}\langle111\rangle$ slip system due to its special role in the plastic deformation of the bcc lattice. Accordingly, we compute the surface energy for the $\{110\}$ crystallographic facet and the USF energy for the $\{110\}$ plane along the $\langle111\rangle$ slip direction. The theoretical predictions are used to assess the various ductility parameters discussed above. The rest of the paper is divided into three main sections and conclusions. Section~\ref{tools} gives a brief description of the employed theoretical tools, an introduction to the theory of elasticity and planar defects, and lists the important numerical details. The results are presented and discussed in Sections~\ref{result} and \ref{discussion}, respectively. \section{Theoretical tools}\label{tools} \subsection{Total energy method}\label{method} The total energy calculations were performed using the exact muffin-tin orbitals (EMTO) method~\cite{Vitos2000, Vitos2001, Vitos-PRL, Vitos-book}. This density functional theory (DFT) solver is similar to the screened Korringa-Kohn-Rostoker method, in which the exact Kohn-Sham potential is represented by large overlapping potential spheres: the potential is spherically symmetric inside the spheres and constant between them. It was shown~\cite{Andersen1998, Zwierzycki2009, Vitos-book} that using overlapping spheres gives a better representation of the full potential compared to the traditional muffin-tin or atomic-sphere approximations. 
Within the EMTO method, the compositional disorder is treated using the coherent-potential approximation (CPA)~\cite{Soven1967, Gyorffy1972} and the total energy is computed via the full-charge density technique~\cite{Kollar2000, Vitos1997, Kollar1997}. The EMTO-CPA method has been involved in many successful applications focusing on the thermo-physical properties of alloys and compounds~\cite{Delczeg2011,GJ,Landa2006,Magyari2001,Huang2006,Kollar2003,Magyari2004,Hu2009,Zhang2009,Zander2007,Vitos2006}, surface energies~\cite{Schonecker2013,Ropo2005} and stacking-fault energies~\cite{Li2014,Lu2011,Lu2012}. Here we would like to make a brief comment on the CPA. Due to its single-site nature, the CPA cannot account for local concentration fluctuations, short-range order, or local lattice relaxations. This means that by modeling the alloy with the CPA one implicitly assumes a completely homogeneous random solid solution on a rigid underlying crystal lattice. On the other hand, the CPA can be used to describe fluctuations within the supercell, e.g. near the unstable stacking fault or surface, by using variable concentrations for each atomic layer. This feature is used here as well when studying the segregation effects (see Section II.C). In the case of Fe-Ni and Fe-Mn alloys, embedding a single impurity atom into the mean-field (CPA) Green function means that we omit the interdependence between the magnetic state and the local environment, which in turn could lead to specific local relaxation effects. In particular, within the present approximation the magnetic state of Mn is determined by the coherent Green function (representing the average effect of all alloying elements) rather than by the actual nearest-neighbor atoms. Nevertheless, such local effects are expected to be relatively small when the impurity level is below $\sim10$\% (i.e. all impurities are mostly surrounded by Fe). 
The self-consistent EMTO calculations were performed within the generalized-gradient approximation proposed by Perdew, Burke, and Ernzerhof (PBE)~\cite{Perdew1996}. The performance of this approximation was verified for Fe-based systems in many former investigations~\cite{Zhang2010,Asker2009,Pitkanen2009}. The one-electron equations were solved within the soft-core scheme and using the scalar-relativistic approximation. The Green's function was calculated for 16 complex energy points distributed exponentially on a semicircular contour including states within 1 Ry below the Fermi level. In the basis set, we included \emph{s, p, d}, and \emph{f} orbitals, and for the one-center expansion of the full-charge density an orbital cut-off $l_{max}=8$ was used. The total energy was evaluated by the shape function technique with cut-off $l_{max}^{shape} = 30$~\cite{Vitos-book}. For the undistorted bcc structure, we found that a homogeneous $k$-mesh of $37\times37\times37$ ensured the required accuracy. For the elastic constant calculations, we used about $30000-31000$ uniformly distributed $k$ points in the Brillouin zones of the orthorhombic and monoclinic structures. The potential sphere radii for the alloy components were fixed to the average Wigner-Seitz radius. For the surface energy and USF energy calculations, the $k$-mesh was set to $9\times23\times2$ in the Brillouin zone of the base-centered orthorhombic supercell. The impurity problem was solved within the single-site CPA, and hence the Coulomb system of a particular alloy component $i$ may carry a non-zero net charge. Here the effect of this charge misfit was taken into account using the screened impurity model (SIM) \cite{Korzhavyi1995,Ruban2002}. According to that model, the additional shift in the one-electron potential and the corresponding correction to the total energy are controlled by the dimensionless screening parameters $\alpha_i$ and $\beta$, respectively. 
The parameters $\alpha_i$ are usually determined from the average net charges and electrostatic potentials of the alloy components obtained in regular supercell calculations \cite{Ruban2002}. The second dimensionless parameter ${\beta}$ is determined from the condition that the total energy calculated within the CPA should match the total energy of the alloy obtained using the supercell technique. For most alloys, the suggested optimal values of ${\alpha_i}$ and ${\beta}$ are $\sim0.6$ and $\sim1.2$, respectively \cite{Korzhavyi1995,Ruban2002}. Often the $\alpha_i$ for different alloy components are assumed to be the same and $\beta=1$ is used. In the present work, considering the volume sensitivity of $\alpha_i$, a dynamic SIM \cite{Guisheng2013} was applied for the Fe-Ni system. Accordingly, the optimal $\alpha_i$ varies as $1.099, 1.130, 1.160, 1.179, 1.186, 1.182, 1.179, 1.172, 1.163$ for the $9$ lattice parameters $a$ ranging from $\sim 2.805$ to $\sim 2.891$ \AA\ (with an interval of $\sim 0.011$ \AA). The second SIM parameter ${\beta}$ was fixed to 1. For Fe-Mn, the SIM parameters show a rather weak volume dependence, so ${\alpha_i}$ and $\alpha\equiv\beta\alpha_i$ were chosen to be $0.79$ and $0.90$, respectively. The above values of the SIM parameters were obtained for a specific concentration (6.25 \% Ni or Mn in Fe) and thus they are strictly valid only for that particular disordered system. However, in this study we assumed the same values for all binaries, \emph{i.e.}, the concentration dependence of the SIM parameters was omitted. \subsection{Elastic properties}\label{elastic} The elastic properties of a single crystal are described by the elements of the elasticity tensor. In a cubic lattice, there are three independent elastic constants: $C_{11}$, $C_{12}$ and $C_{44}$. 
The tetragonal shear elastic constant ($C^{\prime}$) and the bulk modulus ($B$) are connected to the single-crystal elastic constants through $B=(C_{11}+2C_{12})/3$ and $C^{\prime}=(C_{11}-C_{12})/2$. The adiabatic elastic constants are defined as the second-order derivatives of the energy ($E$) with respect to strain. Accordingly, the most straightforward way to obtain the elastic parameters is to strain the lattice and evaluate the total energy change as a function of the lattice distortion. In practice, the bulk modulus is derived from the equation of state, obtained by fitting the total energy data calculated for a series of volumes ($V$) or cubic lattice constants ($a$) by a Morse-type function~\cite{Moruzzi1988}. Since we are interested in equilibrium (zero-pressure) elastic parameters, the fitting should involve lattice points near the equilibrium. It is known that bcc Fe undergoes a weak magnetic transition at slightly enlarged lattice parameters ($a \approx 2.9\,\textrm{\AA}$), which should be excluded from the fitting interval to avoid undesirable scatter in the computed bulk modulus~\cite{Zhang2010a}. Accordingly, the present Morse-type fit was limited to lattice parameters smaller than $2.9\,\textrm{\AA}$. The same interval was applied to all considered binary alloys to minimize the numerical noise due to the volume mesh. Since the total energy depends much more strongly on volume than on small lattice strains, volume-conserving distortions are usually more appropriate for calculating $C^{\prime}$ and $C_{44}$. 
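The equation-of-state step amounts to locating the energy minimum and evaluating $B = V\,d^2E/dV^2$ at $V=V_0$. The sketch below uses a simple polynomial fit in place of the Morse-type function of Ref.~\cite{Moruzzi1988}, and the energy data are synthetic with a known curvature (all numbers are illustrative assumptions, not results of this work); the restriction of the sampling window mimics the exclusion of $a \geq 2.9$~\AA\ discussed above.

```python
import numpy as np

# Synthetic E(V) with known curvature: E = E0 + 0.5*k*(V - V0)^2
V0, k, E0 = 11.4, 0.12, -8.0               # arbitrary test values (A^3, eV/A^6, eV)
V = np.linspace(0.94 * V0, 1.06 * V0, 9)   # sample only near equilibrium,
                                           # mimicking the a < 2.9 A restriction
E = E0 + 0.5 * k * (V - V0) ** 2

c = np.polyfit(V, E, 2)                    # E ~ c[0]*V^2 + c[1]*V + c[2]
V0_fit = -c[1] / (2.0 * c[0])              # minimum of the fitted parabola
B_fit = V0_fit * 2.0 * c[0]                # B = V * d2E/dV2 evaluated at V0

assert abs(V0_fit - V0) < 1e-6
assert abs(B_fit - V0 * k) < 1e-6          # recovers the input curvature
```

`B_fit` comes out in eV/\AA$^3$; multiplying by $\approx 160.2$ converts it to GPa.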
Here, we employed the following orthorhombic ($D_o$) and monoclinic ($D_m$) deformations \begin{eqnarray}\label{eq1} D_o&=&\left( \begin{array}{ccc} 1+\delta_o & 0 & 0 \\ 0 & 1-\delta_o & 0 \\ 0 & 0 & \frac{1}{1-\delta^2_o} \\ \end{array} \right) \;\;\mbox{and}\;\; \nonumber \\ D_m&=&\left( \begin{array}{ccc} 1 & \delta_m & 0 \\ \delta_m & 1 & 0 \\ 0 & 0 & \frac{1}{1-\delta^2_m} \\ \end{array} \right)\nonumber, \end{eqnarray} where $\delta$ denotes the distortion parameter. For both lattice deformations, six different distortions $\delta = 0,0.01, 0.02, ..., 0.05$ were used. These deformations lead to total energy changes $\Delta E(\delta_{o}) = 2VC^{\prime} \delta_o^2 + \mathcal{O}(\delta_o^4)$ and $\Delta E(\delta_{m}) = 2VC_{44} \delta_m^2 + \mathcal{O}(\delta_m^4)$, respectively, where $\mathcal{O}$ stands for the neglected terms. The polycrystalline shear modulus was obtained from the single-crystalline elastic constants according to the Hill average~\cite{Hill1952}, $G=(G_V+G_R)/2$, of the Voigt, $G_V=(C_{11}-C_{12}+3C_{44})/5$, and Reuss, $G_R=5(C_{11}-C_{12})C_{44}/[4C_{44}+3(C_{11}-C_{12})]$, bounds. \begin{figure} \centering \subfigure[]{\includegraphics[scale=0.4]{fig1a.eps}}\hspace{0.4in} \subfigure[]{\includegraphics[scale=0.4]{fig1b.eps}} \caption{(Color online) Schematics of the model structures used to compute the USF energy (panel a) and the surface energy (panel b). See text for details. }\label{fig1} \end{figure} \subsection{Surface and unstable stacking fault energy} The schematics of the USF and surface model are illustrated in Fig.~\ref{fig1}. For the USF (panel a), we show a top view along the $[110]$ direction of the two atomic layers next to the stacking fault. The slip direction for the second (open circles) atomic layer is also shown. For the surface model (panel b), we show the side view along the $[001]$ direction of the two surface atomic layers (filled circles) and the two empty layers (empty circles) mimicking the vacuum. 
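The quadratic fits $\Delta E(\delta)$ of the previous subsection, followed by the Hill averaging, can be sketched as follows (the volume and the input elastic constants generating the synthetic $\Delta E$ data are arbitrary test values, not results of this work):

```python
import numpy as np

V = 11.4                       # equilibrium volume per atom (test value, A^3)
Cp_in, C44_in = 0.48, 0.62     # assumed C' and C44 (eV/A^3) for synthetic data

delta = np.arange(0.0, 0.051, 0.01)       # delta = 0, 0.01, ..., 0.05
dE_o = 2.0 * V * Cp_in * delta ** 2       # orthorhombic: dE = 2 V C' d^2 + O(d^4)
dE_m = 2.0 * V * C44_in * delta ** 2      # monoclinic:   dE = 2 V C44 d^2 + O(d^4)

# Linear fit of dE against delta^2 gives slope 2*V*C
Cp = np.polyfit(delta ** 2, dE_o, 1)[0] / (2.0 * V)
C44 = np.polyfit(delta ** 2, dE_m, 1)[0] / (2.0 * V)

# Hill average of the Voigt and Reuss shear bounds for a cubic crystal
C11_C12 = 2.0 * Cp                        # C11 - C12 = 2 C'
GV = (C11_C12 + 3.0 * C44) / 5.0
GR = 5.0 * C11_C12 * C44 / (4.0 * C44 + 3.0 * C11_C12)
G = 0.5 * (GV + GR)

assert abs(Cp - Cp_in) < 1e-8 and abs(C44 - C44_in) < 1e-8
assert GR <= G <= GV                      # Hill lies between the two bounds
```

In the actual calculations the $\mathcal{O}(\delta^4)$ terms make the fit approximate rather than exact, which is the reason for sampling several distortions instead of a single finite difference.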
The surface energy ($\gamma_s$) of chemically homogeneous alloys (without segregation) was extracted from total energy calculations for slabs following the method proposed by Fiorentini and Methfessel~\cite{Fiorentini1996}. No reference to separately computed bulk total energies is made in this approach. Instead, the reference bulk energy per layer is deduced from a linear fit to the slab total energy versus the slab thickness. Slabs with 8, 10, and 12 atomic layers separated by 6 vacuum layers were used. According to Punkkinen \emph{et al.}~\cite{Punkkinen2011} the surface relaxation effects are small for the $\{110\}$ surface facet of Fe and have an insignificant effect on the magnitude of the surface energy. Our calculations confirmed this finding, and therefore here we omitted the effect of relaxation in all surface energy calculations. The USF energy ($\gamma_u$) was obtained in the following way. First, the total energies of 16, 20, and 24 layers thick supercells (each containing two USF) were calculated, and the bulk energy per layer was extracted from a linear fit to the energy versus thickness. Because the USF seriously disturbs the atomic arrangement in the vicinity of the stacking fault plane, structural relaxations perpendicular to the fault plane have to be taken into account. We considered the relaxation effect for pure bcc Fe and found that the first inter-layer spacing at the USF increases by approximately $4 \%$. For the Fe-based alloys, we kept the Mn/Ni content below $10 \%$, which is expected to have a small additional effect on the layer relaxation. Therefore, we used the structural relaxation obtained for pure Fe for all binaries considered here. We also considered the effect of segregation on the surface energy and the USF energy. To this end, we varied the composition of the atomic layers next to the stacking fault or at the surface. 
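The Fiorentini-Methfessel construction reduces to a linear fit of the slab total energy versus the number of layers: the slope is the bulk reference energy per layer, and the intercept, divided by the two free surfaces, gives $\gamma_s$. A sketch on synthetic data (the layer energy, surface cell area, and target $\gamma_s$ below are assumed test values):

```python
import numpy as np

A = 5.8          # surface cell area (test value, A^2)
e_bulk = -8.31   # bulk energy per layer (assumed, eV)
gamma = 0.154    # surface energy per area to be recovered (eV/A^2)

N = np.array([8, 10, 12])                    # slab thicknesses used in the text
E_slab = N * e_bulk + 2.0 * A * gamma        # each slab exposes two surfaces

slope, intercept = np.polyfit(N, E_slab, 1)  # intercept = 2 * A * gamma_s
gamma_s = intercept / (2.0 * A)

assert abs(slope - e_bulk) < 1e-8            # bulk reference from the slope
assert abs(gamma_s - gamma) < 1e-8           # surface energy from the intercept
```

The same fit applied to the 16-, 20-, and 24-layer supercells containing two USFs yields $\gamma_u$; note that 1 eV/\AA$^2 \approx 16.02$ J/m$^2$ for conversion to the units of Table~\ref{tabI}.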
Unfortunately, the above described method for computing the formation energies, based entirely on slabs/supercells of different sizes, is not applicable when the chemical concentration profile is no longer homogeneous. Instead, all segregation studies were performed for a slab/supercell of fixed size, and the impact of segregation was interpreted with respect to the homogeneous slab/supercell of the same size. In that way, possible numerical errors associated with the reference system were kept at a minimal level. In particular, the effect of segregation on the USF energy was derived by subtracting the energy of the supercell without the USF from the energy of the supercell with the USF, both with segregation on the slip planes. The surface energy for the chemically inhomogeneous systems was computed following the method proposed in Ref.~\cite{ruban1999surface}, based on the grand canonical ensemble and involving effective chemical potentials. \section{Results: intrinsic material parameters}\label{result} \subsection{Parameters of ferromagnetic bcc Fe} We assess the accuracy of the EMTO method for the present systems by comparing the results obtained for pure Fe with those derived from previous full-potential calculations and the available experimental data. The results are listed in Table~\ref{tabI}. The present lattice parameter $a$ is consistent with those from Refs.~\cite{Caspersen2004,Guo2000}. All theoretically predicted equilibrium lattice parameters are, however, slightly smaller than the experimental value, $2.866$\,\AA~\cite{Rayne1961}, which is due to the weak PBE overbinding. Theory reproduces the experimental magnetic moment accurately. \begin{table*} \caption{Theoretical and experimental lattice parameter $a$ (in units of \AA), magnetic moment $\mu$ (in $\mu_B$), single-crystal elastic constants (in GPa), surface energy $\gamma_s$ (in J/m$^2$) and USF energy $\gamma_u$ (in J/m$^2$) for ferromagnetic bcc Fe. 
The quoted theoretical methods are EMTO: present results; PAW: full-potential projector augmented wave method; PP: ultrasoft non-norm-conserving pseudopotential; FLAPW: all-electron full-potential linear augmented plane wave method; FPLMTO: full-potential linear muffin-tin orbitals method. The low-temperature (4 K) experimental elastic constants are listed for comparison, as well as the semi-empirical surface energy.}\label{tabI} \begin{ruledtabular} \begin{tabular}{lccccccccc} Method& $a$ & $\mu$ & $C_{11}$ & $C_{12}$ & $C_{44}$ & $C^\prime$ & $B$ & $\gamma_s$ & $\gamma_u$ \\ EMTO & 2.838 & 2.21 & 288.9 & 133.7 & 100.0 & 77.6 & 185.4 & 2.47 & 1.08\\ PAW & 2.838\cite{Caspersen2004} & 2.21\cite{Caspersen2004} & 271\cite{Caspersen2004} & 145\cite{Caspersen2004} & 101\cite{Caspersen2004}&63\cite{Caspersen2004} & 172\cite{Caspersen2004}&2.50\cite{Punkkinen2011} & 0.98\cite{Mori2009}\\ PP & 2.848\cite{Vovcadlo1997} & 2.25\cite{Vovcadlo1997} & 289\cite{Vovcadlo1997} & 118\cite{Vovcadlo1997} &115\cite{Vovcadlo1997} & 85.5\cite{Vovcadlo1997}&176\cite{Vovcadlo1997}& \\ FLAPW & 2.84\cite{Guo2000} & 2.17\cite{Guo2000} & 279\cite{Guo2000} & 140\cite{Guo2000} & 99\cite{Guo2000} & 69\cite{Guo2000} & 186\cite{Guo2000} & &\\ FPLMTO & 2.812\cite{Sha2006} & & 303\cite{Sha2006} & 150\cite{Sha2006} &126\cite{Sha2006} & 76.5\cite{Sha2006}&201\cite{Sha2006}& \\ Expt. & 2.866\cite{Rayne1961} & 2.22\cite{Rayne1961} & 243.1\cite{Rayne1961} &138.1\cite{Rayne1961} & 121.9\cite{Rayne1961} & 52.5\cite{Rayne1961} & 173.1\cite{Rayne1961} & 2.41 \cite{Tyson1977}& \\ \end{tabular} \end{ruledtabular} \end{table*} In Table~\ref{tabI}, the present single-crystal elastic constants and the bulk modulus of bcc Fe are compared with data from literature. The EMTO results are consistent with the other theoretical elastic constants, the deviations being typical to the errors associated with such calculations. 
However, compared to the experimental data, EMTO is found to overestimate $C_{11}$ by $\sim 19 \%$ and to underestimate $C_{44}$ by $\sim 18 \%$. The larger $C_{11}$ (and the larger bulk modulus) can be partly accounted for by the underestimated lattice parameter~\cite{Zhang2010}. We notice that the excellent agreement in Ref.~\cite{Caspersen2004} between the theoretical and experimental bulk moduli is somewhat suspicious, since the quoted single-crystal elastic constants result in $187$ GPa for the bulk modulus (compared to $172$ GPa reported in Ref.~\cite{Caspersen2004}), which is close to the present finding and also to that from Ref.~\cite{Guo2000}. The pseudopotential~\cite{Vovcadlo1997} equilibrium lattice parameter is the largest among the theoretical predictions listed in the table, which might explain why the pseudopotential bulk modulus is surprisingly close to the experimental value. The EMTO surface energy is in close agreement with the full-potential result~\cite{Punkkinen2011} and also with the semi-empirical value valid for an ``average'' surface facet~\cite{Tyson1977}. The present USF energy ($1.08$\,J/m$^2$) reproduces the full-potential result from Ref.~\cite{Mori2009} within $\sim 10\%$, which is acceptable taking into account that the present method is based on muffin-tin and full-charge density approximations. We recall that today it is not yet possible to measure the USF energy. \subsection{Equilibrium volumes and magnetic moments of Fe-Ni and Fe-Mn alloys} According to the equilibrium phase diagrams, in ferromagnetic bcc Fe the maximum solubility of Mn is $\sim 3\%$ and that of Ni is $\sim 5.5\%$. Nevertheless, in duplex stainless steels, solid solutions with substantially larger amounts of Mn and Ni may appear as a result of the delicate balance between volume and surface effects. Hence, in the present theoretical study we go well beyond the experimental solubility limits and discuss the trends up to $10\%$ solute atoms in the Fe matrix.
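The internal-consistency check above is plain arithmetic: for a cubic crystal, $B=(C_{11}+2C_{12})/3$ and $C^\prime=(C_{11}-C_{12})/2$. A minimal sketch (the helper names are ours; the input values are the PAW and EMTO constants from Table~\ref{tabI}):

```python
def bulk_modulus(c11, c12):
    """Cubic bulk modulus B = (C11 + 2*C12) / 3, in GPa."""
    return (c11 + 2.0 * c12) / 3.0

def tetragonal_shear(c11, c12):
    """Cubic shear constant C' = (C11 - C12) / 2, in GPa."""
    return (c11 - c12) / 2.0

# PAW constants quoted from Table I: C11 = 271, C12 = 145 GPa
b_paw = bulk_modulus(271.0, 145.0)    # 187 GPa, not the reported 172 GPa

# Present EMTO constants: C11 = 288.9, C12 = 133.7 GPa
b_emto = bulk_modulus(288.9, 133.7)   # ~185.4 GPa, matching Table I
```

The first evaluation reproduces the $187$ GPa figure mentioned in the text, the second the EMTO value of Table~\ref{tabI}.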
The calculated equilibrium lattice parameters for bcc binary Fe$_{1-x}$Mn$_x$ and Fe$_{1-x}$Ni$_x$ alloys are shown in Fig.~\ref{fig2} for $0\le x \le 0.1$. We find that for both binaries the lattice parameters show a weak dependence on the amount of alloying addition $x$. Our results, especially the alloying trends, are in reasonable agreement with the available experimental data~\cite{Sutton1955,Owen1937}. \begin{figure}[b] \includegraphics[scale=0.3]{fig2.eps} \caption{(Color online) Lattice parameters (in units of \AA) of bcc Fe$_{1-x}$Mn$_x$ and Fe$_{1-x}$Ni$_x$ alloys as a function of composition. For comparison, the measured values by Sutton~\cite{Sutton1955} and Owen~\cite{Owen1937} are also included.}\label{fig2} \end{figure} The lattice parameter of bcc Fe increases with Mn addition as $x$ increases from $0$ to $0.05$. When Fe is alloyed with more than 5\,\% Mn, the lattice parameter shows a small negative slope as a function of $x$. Due to the limited solubility of Mn in bcc Fe, the experimental data are restricted to below $\sim 3 \%$ Mn. In contrast to Mn, the lattice parameter of Fe$_{1-x}$Ni$_x$ monotonically increases with the Ni amount. Sutton \emph{et al.}~\cite{Sutton1955} reported that the lattice parameter of Fe$_{1-x}$Ni$_x$ is a linear function of $x$ up to $\sim 5 \%$ Ni with a small slope and remains constant at larger concentrations. This trend is reproduced by the present theory, although the switch from a weak positive slope to a constant value seems to be shifted to larger concentrations (Fig.~\ref{fig2}). The present theory slightly overestimates the observed volume expansion due to alloying. This can have both theoretical and experimental origins. From the side of theory, the exchange-correlation functional (PBE) may produce a noticeable effect since it performs differently for the present metals.
For instance, while PBE underestimates the equilibrium lattice parameter of bcc Fe by $\sim 1.0\%$, it slightly overestimates that of fcc Ni ($\sim 0.3\%$) \cite{Ropo2008,Punkkinen2011}. This error results in $\sim 0.004$ \AA\; ``extra'' increase of the lattice parameter at $10 \%$ Ni compared to the ideal case when the DFT error stays at the same level for different elements. A careful inspection of Fig. \ref{fig2} shows that the above error accounts for about 30\% of the deviation obtained between the theoretical and experimental slopes of $a(x)$ for the Fe-Ni system. Another possible source of error is the neglect of all thermal effects in the present theory. From the experimental point of view, not all measured data points are relevant for the present comparison. That is because both Ni and Mn are fcc stabilizers, and thus a perfect bcc single phase is hard to attain. Moreover, in the process of preparing the samples, different quenching temperatures and various annealing methods lead to subtle changes in the measured lattice parameter. The overall small influence of Mn/Ni alloying addition on the lattice parameter of Fe can be explained in terms of the atomic volumes and the magnetic moments. The atomic volumes of Mn and Ni are very close to that of Fe. Hence, based on the linear rule of mixing, small amounts of Mn and Ni should not affect the volume of Fe to a large extent. On the other hand, the observed small increment can be ascribed to the complex magnetic interaction between the solute atoms and the ferromagnetic Fe matrix. The local magnetic moments in Fe-Mn and Fe-Ni are displayed in Fig.~\ref{fig3} as a function of composition and lattice parameter. These results were extracted from the CPA calculations and the local magnetic moments represent the magnetic moment density integrated within the corresponding Wigner-Seitz cells. Around the equilibrium lattice parameters ($\sim 2.84-2.85$ \AA), the Mn moments are antiparallel to those of Fe.
The magnitude of the Mn local magnetic moment $\mu$(Mn) decreases from $\sim 1.7 \mu_{\rm B}$ to about zero as the amount of Mn increases from zero to $10\%$ (Fig. \ref{fig3}, upper right panel). This trend is consistent with previous findings, namely that above $\sim 10 \%$ Mn, the coupling between Fe and Mn becomes ferromagnetic \cite{Kulikov1997}. At the same time, the magnetic moment of Fe $\mu$(Fe) increases slightly with Mn addition up to $\sim 5\%$ Mn, above which a very weak decrease of $\mu$(Fe) can be observed (Fig. \ref{fig3}, upper left panel). The enhanced $\mu$(Fe) expands the volume as a result of the positive excess magnetic pressure \cite{Punkkinen2011a}. For $x\gtrsim 0.05$, the small negative slope of $\mu$(Fe) leads to vanishing excess magnetic pressure and thus to shrinking volume, in line with Fig. \ref{fig2}. On the other hand, the Ni moments $\mu$(Ni) always couple parallel to the Fe moments (Fig. \ref{fig3}, lower right panel) and Ni addition increases $\mu$(Fe) (Fig. \ref{fig3}, lower left panel). This in turn increases the excess magnetic pressure and yields a monotonically expanding volume with $x$. The electronic structure origin of the alloying-induced enhancement of the Fe magnetic moment for the present binaries is discussed in Ref.~\cite{xiaoqing2014}. \begin{figure} \includegraphics[scale=0.3]{fig3.eps} \caption{(Color online) The map of the local magnetic moments $\mu(x)$ (in units of $\mu_B$) as a function of chemical composition and lattice parameter $a$ (in units of \AA) for Fe and Mn in Fe$_{1-x}$Mn$_x$ (upper panels) and Fe and Ni in Fe$_{1-x}$Ni$_x$ (lower panels).}\label{fig3} \end{figure} In order to gain more insight into the magnetic coupling between Mn and the bcc Fe host, we constructed a $2\times2\times2$ supercell in terms of the conventional bcc unit cell (16-atom supercell) containing one or two Mn impurity atoms, which correspond to $6.25 \%$ Mn and $12.5 \%$ Mn, respectively.
In the case of one isolated impurity atom, Mn couples antiferromagnetically to Fe and the average magnetic moments of Fe and Mn are almost identical to those from the corresponding CPA calculation. In the case of $12.5 \%$ Mn in the supercell, both Mn moments align parallel to the surrounding Fe host irrespective of their relative positions in the supercell. However, the magnitude of their moments scales with the distance between them. If the two Mn atoms are located at nearest-neighbor sites, their local magnetic moments are strongly reduced, which may be interpreted as a result of competing magnetic interactions between Mn-Mn and Mn-Fe. At the same time, the local magnetic moments of the nearest Fe atoms remain almost at the level of pure bcc Fe, similar to the CPA result. These tests, based on supercell calculations, confirm that CPA accurately mimics the behavior of the magnetic moments versus concentration in the dilute limit. \subsection{Elastic parameters} The theoretical single-crystal elastic constants and polycrystalline elastic moduli of the Fe-Ni and Fe-Mn alloys are plotted in Figs.~\ref{fig4} and \ref{fig5} as a function of composition and the corresponding data are listed in Table~\ref{tabII}. The results indicate rather pronounced alloying effects of Mn/Ni on the elastic parameters. We find that Mn and Ni produce similar effects on $C_{11}$ of Fe. Namely, $C_{11}$ decreases with $x$ below $\sim 7\%$ alloying addition, and then increases at larger concentrations. Mn and Ni, however, affect $C_{12}$ differently. Below $\sim 6\%$ Ni, $C_{12}$ remains constant with Ni content, and then strongly increases with further Ni addition. On the other hand, $C_{12}$ drops from $133.7$ to $103.2$ GPa as the Mn concentration increases from zero to $\sim 8\%$. At larger concentrations, the effect of Mn on $C_{12}$ changes sign.
The peculiar concentration dependencies obeyed by $C_{11}$ and $C_{12}$ originate mainly from the non-linear trend of the corresponding bulk modulus shown in Fig.~\ref{fig5}. The two cubic shear elastic constants also exhibit complex trends. $C^{\prime}$ decreases with Ni addition, which means that Ni decreases the mechanical stability of the bcc lattice. At the same time, Mn is found to have a very small impact on the tetragonal shear elastic constant. $C_{44}$ increases with Mn addition, but remains nearly constant upon alloying Fe with Ni. The present trends for the elastic constants are very close to those reported by Zhang \emph{et al.} \cite{Zhang2010} using the same total energy method. The small differences seen at larger concentrations, especially in the case of the bulk modulus, are due to the different numerical parameters used in the two works. \begin{figure}[t] \includegraphics[scale=0.3]{fig4.eps} \caption{(Color online) Single-crystal elastic constants of bcc Fe$_{1-x}$Mn$_x$ and Fe$_{1-x}$Ni$_x$ alloys as a function of composition.}\label{fig4} \end{figure} The theoretical polycrystalline elastic moduli $B$, $E$, and $G$ (Fig.~\ref{fig5}) reproduce reasonably well the experimental trends \cite{Speich1972}, except perhaps for alloys containing about $10\%$ solute. This discrepancy is most likely due to the fact that the Fe-Mn and Fe-Ni binary alloys with $x\gtrsim 0.05$ exist as a mixture of bcc and fcc phases. On the theory side, the lattice parameters of Fe-Ni and Fe-Mn alloys are underestimated as a result of the employed exchange-correlation approximation (Fig. \ref{fig2}), which at least partly explains why theory in general overestimates the elastic moduli. On a qualitative level, we understand the trends of the bulk moduli as the result of the interplay between chemical and volume effects. The calculated bulk moduli of pure Ni and Mn in the bcc lattice ($B$(Mn)$\approx 222$ GPa, $B$(Ni)$\approx 193$ GPa) are both larger than that of Fe.
Hence, at large concentrations the bulk moduli of the binary alloys should eventually increase. On the other hand, at low concentrations (less than $\sim 7\%$), the interaction between the impurity atoms is weak and thus the trend of the bulk modulus is mainly governed by the volume effect. Increasing volume in turn produces a drop in the bulk modulus, in agreement with Fig.~\ref{fig5}. \begin{figure}[t] \includegraphics[scale=0.3]{fig5.eps} \caption{(Color online) Polycrystalline elastic moduli of bcc Fe$_{1-x}$Mn$_x$ and Fe$_{1-x}$Ni$_x$ alloys as a function of composition. The experimental data are from Ref.~\cite{Speich1972}.}\label{fig5} \end{figure} The results for Fe-Ni indicate that the polycrystalline elastic moduli $E$ and $G$ monotonically decrease with solute concentration. For Fe-Mn, the alloying effect can be divided into two parts: when $x$ is less than $\sim 0.05$, $G$ and $E$ remain constant with $x$; when $x$ is larger than $\sim 0.05$, both $G$ and $E$ increase with Mn content. Since the Voigt and Reuss bounds depend only on the single-crystal shear elastic constants, the trends of $G$ in Fig. \ref{fig5} directly emerge from those of $C_{44}$ and $C^\prime$ (Fig. \ref{fig4}). We recall that for isotropic crystals, $G$ reduces exactly to the single-crystal shear elastic constant ($C_{44}=C^\prime$). The Young modulus combines $B$ and the Pugh ratio $B/G$, \emph{viz.} $E=9B/(3B/G+1)$, which is nicely reflected by the trends in Fig. \ref{fig5}. For Fe$_{1-x}$Ni$_x$ with Ni content below $\sim 6\%$, $B/G$ remains nearly constant with $x$ (Table \ref{tabII}), and thus the corresponding $E$ resembles the bulk modulus. At larger Ni concentrations, $B/G$ increases substantially, which explains the continuous decrease of $E$. In the case of Fe-Mn, $B/G$ decreases with Mn addition, which gives a strong positive slope to the Young modulus as compared to that of $B$.
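The relation $E=9B/(3B/G+1)$ can be checked directly against the tabulated moduli; a short sketch (the helper is ours, with pure-Fe and Fe$_{0.9}$Ni$_{0.1}$ input values taken from Table~\ref{tabII}):

```python
def young_modulus(b, g):
    """Polycrystalline Young modulus from B and G: E = 9B / (3B/G + 1), in GPa."""
    return 9.0 * b / (3.0 * b / g + 1.0)

e_fe = young_modulus(185.4, 90.3)    # ~233.1 GPa, pure Fe (Table II)
e_feni = young_modulus(196.9, 81.1)  # ~213.9 GPa, Fe0.9Ni0.1 (Table II)
```

Both evaluations reproduce the tabulated $E$ values within the rounding of the table.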
\begin{figure}[b] \includegraphics[scale=0.3]{fig6.eps} \caption{(Color online) Surface energy and unstable stacking fault energy of bcc Fe$_{1-x}$Mn$_x$ and Fe$_{1-x}$Ni$_x$ alloys as a function of composition. The surface energy corresponds to the $\{110\}$ facet and the USF energy to the $\{110\}\langle111\rangle$ slip system.}\label{fig6} \end{figure} \begin{table*} \caption{Theoretical single-crystal elastic constants $C_{11}(x)$, $C_{12}(x)$, $C_{44}(x)$, $C^\prime(x)$, polycrystalline bulk $B(x)$, shear $G(x)$ and Young $E(x)$ moduli (GPa), Poisson ratio $\nu$, Pugh ductile/brittle ratio $B/G$, Cauchy pressure $(C_{12}-C_{44})$ (GPa) and Zener anisotropy ratio $C_{44}/C^\prime$ for Fe$_{1-x}$Mn$_x$ (upper panel) and Fe$_{1-x}$Ni$_x$ (lower panel) alloys as a function of composition.}\label{tabII} \begin{ruledtabular} \begin{tabular}{lcccccccccccc} $x$ & $B$ & $C^\prime$ & $C_{11}$ & $C_{12}$ & $C_{44}$ & $G$ & $E$ & $\nu$ & $(C_{12}-C_{44})$& $B/G$ & $C_{44}/C^\prime$\\ \hline \multicolumn{12}{c}{Fe$_{1-x}$Mn$_x$}\\ \hline 0& 185.4 & 77.6 & 288.9 & 133.7 & 100.0 & 90.3 & 233.1 & 0.290 & 33.71 & 2.05 & 1.29\\ 0.01& 179.1 & 76.0 & 280.5 & 128.4 & 100.5 & 89.9 & 231.0 & 0.285 & 27.94 & 1.99 & 1.32\\ 0.02& 172.0 & 75.2 & 272.3 & 121.8 & 102.8 & 90.7 & 231.4 & 0.276 & 19.06 & 1.90 & 1.37\\ 0.03& 166.8 & 74.1 & 265.6 & 117.4 & 105.4 & 91.5 & 232.1 & 0.268 & 12.01 & 1.82 & 1.42\\ 0.04& 161.0 & 73.3 & 258.8 & 112.1 & 107.7 & 92.3 & 232.5 & 0.259 & 4.41 & 1.74 & 1.47\\ 0.05& 159.7 & 73.1 & 257.2 & 111.0 & 110.4 & 93.6 & 234.8 & 0.255 & 0.59 & 1.71 & 1.51\\ 0.06& 155.4 & 73.6 & 253.5 & 106.3 & 113.0 & 95.2 & 237.1 & 0.246 & -6.71 & 1.63 & 1.54\\ 0.07& 153.1 & 74.6 & 252.5 & 103.4 & 115.1 & 96.7 & 239.6 & 0.239 &-11.68 & 1.58 & 1.54\\ 0.08& 153.6 & 75.6 & 254.4 & 103.2 & 116.8 & 98.1 & 242.6 & 0.237 &-13.55 & 1.57 & 1.54\\ 0.09& 157.7 & 76.3 & 259.4 & 106.8 & 117.9 & 99.0 & 245.7 & 0.240 &-11.07 & 1.59 & 1.55\\ 0.1& 163.5 & 76.8 & 265.9 & 112.3 & 119.0 & 99.8 & 248.9 & 
0.246 & -6.71 & 1.64 & 1.55\\ \hline \multicolumn{12}{c}{Fe$_{1-x}$Ni$_x$}\\ \hline 0& 185.4 & 77.6 & 288.9 & 133.7 & 100.0 & 90.3 & 233.1 & 0.290 & 33.7 & 2.05 & 1.29\\ 0.01& 183.6 & 74.6 & 283.0 & 133.9 & 97.3 & 87.5 & 226.5 & 0.294 & 36.6 & 2.10 & 1.31\\ 0.02& 181.2 & 72.6 & 278.0 & 132.8 & 96.6 & 86.1 & 223.1 & 0.295 & 36.2 & 2.10 & 1.33\\ 0.03& 177.7 & 70.6 & 271.8 & 130.6 & 96.3 & 85.0 & 220.0 & 0.294 & 34.4 & 2.09 & 1.36\\ 0.04& 174.6 & 68.7 & 266.1 & 128.8 & 96.4 & 84.2 & 217.5 & 0.292 & 32.4 & 2.07 & 1.40\\ 0.05& 173.3 & 66.8 & 262.4 & 128.8 & 97.0 & 83.5 & 215.9 & 0.292 & 31.7 & 2.07 & 1.45\\ 0.06& 173.6 & 64.9 & 260.2 & 130.3 & 98.2 & 83.2 & 215.2 & 0.293 & 32.1 & 2.09 & 1.51\\ 0.07& 176.8 & 62.9 & 260.6 & 134.9 & 99.4 & 82.7 & 214.6 & 0.298 & 35.5 & 2.14 & 1.58\\ 0.08& 181.8 & 60.7 & 262.8 & 141.3 & 100.6 & 82.2 & 214.2 & 0.304 & 40.7 & 2.21 & 1.66\\ 0.09& 188.3 & 58.6 & 266.4 & 149.3 & 102.2 & 81.7 & 214.2 & 0.310 & 47.1 & 2.30 & 1.74\\ 0.1& 196.9 & 56.4 & 272.1 & 159.3 & 103.4 & 81.1 & 213.9 & 0.319 & 55.9 & 2.43 & 1.83\\ \end{tabular} \end{ruledtabular} \end{table*} \subsection{Surface energy and unstable stacking fault energy} The calculated surface energies and USF energies of Fe$_{1-x}$Mn$_x$ and Fe$_{1-x}$Ni$_x$ are shown in Fig.~\ref{fig6} as a function of $x$. Both Mn and Ni are predicted to decrease the surface energy of the $\{110\}$ facet of bcc Fe. Nickel has a stronger effect than Mn. Namely, $10 \%$ Ni addition reduces the surface energy of Fe by $0.17$\,J/m$^2$, which is about $7 \%$ of the surface energy of pure Fe, whereas Mn lowers the surface energy by $0.07$ J/m$^2$ (about $3 \%$). The alloying effect on the surface energy can be understood on a qualitative level using the surface energies calculated for the alloy components \cite{Punkkinen2011}. Nickel has substantially smaller surface energy (considering the close-packed fcc surfaces) than bcc Fe, and thus Ni addition is expected to reduce $\gamma_s$ of Fe. 
The surface energy of $\alpha$-Mn is larger than that of Fe \cite{Punkkinen2011a}. However, when considering the bcc lattice, the surface energy of Mn turns out to be intermediate between those of Ni and Fe, which is nicely reflected by the relative effects of Mn and Ni on $\gamma_s$. The USF energy shows a similar dependence on Ni/Mn alloying as the surface energy; it decreases from $1.08$ to $0.92$ J/m$^2$ upon $10\%$ Ni addition, which is about 15\,\% of the USF energy of pure Fe. Compared to the effect of Ni, $10\%$ Mn produces a smaller change in $\gamma_u$ ($5.6 \%$). Using the same methodology as for Fe and the Fe alloys, we computed the USF energy of hypothetical bcc Mn and bcc Ni for the present slip system and at the volume of bcc Fe. We obtained $0.63$\,J/m$^2$ for Mn and $0.44$\,J/m$^2$ for Ni. These figures are in line with the trends from Fig. \ref{fig6}. The segregation effects on the surface energy and the USF energy are shown in Fig.~\ref{fig7}. We find that both Mn and Ni segregate to the fault plane. Above $\sim 1\%$ solute concentration in bulk Fe, the surface energy and the USF energy decrease slightly with segregation of Ni/Mn to the layer at the planar fault. In this segregation calculation we kept the volume the same as in the bulk (host alloy). Therefore, only the chemical segregation effect is considered and the local volume expansion near the fault plane due to the segregating solute atoms was ignored. The results show that the segregation effect on the surface energy is larger for Ni than for Mn. Furthermore, the surface segregation effect slightly increases with increasing Mn content in the dilute Fe-Mn alloy. For the USF energy, the segregation effect of Mn is similar to that of Ni when the concentrations are small. However, with increasing solute amount $x$, the segregation effect of Mn on the stacking fault energy grows markedly, whereas the effect of Ni remains almost the same.
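The percentage figures quoted in this subsection are plain relative changes with respect to pure Fe; as a sketch (the helper is ours, input values from the text above):

```python
def pct_change(final, initial):
    """Relative change in percent: 100 * (final - initial) / initial."""
    return 100.0 * (final - initial) / initial

usf_change_ni = pct_change(0.92, 1.08)          # ~ -15%: USF energy change at 10% Ni
surf_change_ni = pct_change(2.47 - 0.17, 2.47)  # ~ -7%: surface energy change at 10% Ni
```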
\begin{figure} \includegraphics[scale=0.3]{fig7.eps} \caption{(Color online) Relative change ($-[\gamma(x,y)-\gamma(0,0)]/\gamma(0,0)$) of the USF energy and the surface energy as a function of bulk concentration ($x$) of Mn/Ni (abscissae) and the additional solute concentration ($y$) at the fault plane (ordinates) for Fe$_{1-x}$Mn$_x$ and Fe$_{1-x}$Ni$_x$ ($ 0\leq x \leq 5\% $) alloys.}\label{fig7} \end{figure} The surface energy and the USF energy are primarily determined by the properties of the surface layer and the slip layers, respectively. Segregation changes the concentration of the solute at the surface or slip plane. Because Mn and Ni have lower planar fault energies than Fe (in the bcc structure), both formation energies of the chemically homogeneous Fe alloys are expected to decrease as the Mn or Ni solute concentration increases, in line with the present results. The effect is more pronounced for Ni since its USF/surface energy is smaller than that of Mn. \section{Discussion}\label{discussion} \subsection{Ductile and brittle properties} The fracture behavior of a specific material rests on various conditions, such as the presence of flaws, the mode and magnitude of the applied stress, temperature, strain rate, and alloying elements. Alloying changes the interaction among atoms and has a substantial influence on mechanical properties such as the elastic response or the ductile versus brittle behavior. In the following, the ductility of the dilute Fe-Mn and Fe-Ni binary alloys is addressed using previously established effective theoretical models and phenomenological relationships based on planar fault energies and elastic constants. These models are widely used and often overused in combination with \emph{ab initio} calculations and thus a careful assessment of their performance is of common interest. Here we employ four criteria discussed in the introduction to estimate the effect of Mn/Ni on the ductility of ferromagnetic bcc Fe.
These criteria are based on the Poisson ratio ($\nu$), the Cauchy pressure ($C_{12}-C_{44}$), the Pugh ratio $B/G$, and the Rice ratio $\gamma_s/\gamma_u$. We recall that the first two criteria are closely connected with the Pugh conditions and thus no substantial deviations between them are expected. The present theoretical predictions are shown in Fig.~\ref{fig8}. Both $B/G$ and the Poisson ratio indicate that Mn makes the ferrite Fe-based alloys more brittle. According to the Pugh criterion, about $4 \%$ Mn is needed to drive the Fe alloy from the ductile into the brittle regime. On the other hand, small Ni addition ($\lesssim 6 \%$) keeps the good ductility of Fe, whereas larger amounts of Ni make the Fe-Ni system more ductile. The Cauchy pressure $(C_{12}-C_{44})$ follows very closely the trend of $B/G$ and that of the Poisson ratio. However, the Cauchy pressure becomes negative at a slightly larger concentration ($\sim 6\%$) than the critical value in terms of $B/G$ ($\sim 4\%$). The small difference between the ``predictions'' based on these phenomenological correlations originates from the elastic anisotropy of the present alloys (cf. Eq. \eqref{eq0}). \begin{figure} \includegraphics[scale=0.3]{fig8.eps} \caption{(Color online) Poisson ratio $\nu$, Cauchy pressure $(C_{12}-C_{44})$ (in units of GPa), Pugh ratio $B/G$, and Rice ratio $\gamma_s/\gamma_u$ of bcc Fe$_{1-x}$Mn$_x$ and Fe$_{1-x}$Ni$_x$ alloys as a function of composition.}\label{fig8} \end{figure} According to the theory developed by Rice~\cite{Rice1992}, the ratio $\gamma_s/\gamma_u$ is associated with the fracture behavior. An increase of $\gamma_s/\gamma_u$ favors the creation of dislocations. In such cases, the stress around a crack tip is released by slip of the atomic layers. A decreasing ratio, on the other hand, means that the material will crack by opening new micro surfaces.
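The $\sim 4\%$ Mn threshold quoted for the Pugh criterion can be read off Table~\ref{tabII} directly; a sketch (the $1.75$ boundary is the conventional Pugh ductile/brittle threshold, and the helper function is ours):

```python
# Pugh ratio B/G for Fe(1-x)Mn(x), values copied from Table II
pugh_femn = {0.00: 2.05, 0.01: 1.99, 0.02: 1.90, 0.03: 1.82,
             0.04: 1.74, 0.05: 1.71, 0.06: 1.63, 0.07: 1.58,
             0.08: 1.57, 0.09: 1.59, 0.10: 1.64}

PUGH_THRESHOLD = 1.75  # conventional ductile (above) / brittle (below) boundary

def first_brittle_concentration(pugh, threshold=PUGH_THRESHOLD):
    """Smallest solute fraction whose Pugh ratio falls below the threshold."""
    return min(x for x, bg in pugh.items() if bg < threshold)
```

Scanning the table gives $x=0.04$ as the first composition below the threshold, in line with the statement above.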
Figure~\ref{fig8} shows that for Fe$_{1-x}$Ni$_x$, $\gamma_s/\gamma_u$ increases as a function of $x$, indicating that the system becomes more ductile with Ni addition. Although Ni decreases both $\gamma_s$ and $\gamma_u$ simultaneously, the alloying effect on the USF energy is more pronounced, resulting in an increase of the Rice ratio from 2.27 for pure Fe to 2.47 for Fe$_{0.9}$Ni$_{0.1}$. In the case of Mn addition, we observe relatively small changes in the Rice ratio. Small amounts of Mn make the Fe-Mn alloy slightly more ductile in terms of the Rice parameter (compared to pure Fe), but with increasing Mn content beyond $\sim 6\%$ the $\gamma_s/\gamma_u$ ratio saturates around $2.35$. According to the present theoretical results for the Rice parameter, Ni has a stronger effect than Mn in making the alloy ductile. In addition, more than 6\% addition of Mn makes the bcc phase relatively more brittle. In fact, in terms of the USF energy, one would predict that both Ni and Mn make dislocation formation in Fe easier, the effect of Ni being superior to that of Mn. On the other hand, both elements decrease the surface energy as well, making crack opening more likely. Before turning to the comparison between the Pugh and Rice criteria, we make an observation based on the present segregation studies. Allowing for surface and interface segregation leads to small changes in the USF energy and the surface energy (cf. Section III.D). In terms of the Rice parameter, in alloys containing $5\%$ impurity, a $5\%$ surface/USF segregation increases the $\gamma_s/\gamma_u$ ratio slightly ($\sim 2\%$ for Fe-Ni and $\sim 4\%$ for Fe-Mn). That is because in both binary systems the USF energy is lowered to a larger degree than the surface energy upon segregation (Fig. \ref{fig7}). These changes are far below those associated with the effect of bulk concentration (Fig. \ref{fig8}).
In addition, taking into account the different time scales for atomic diffusion and dislocation movement, we conclude that the segregation effects may safely be omitted from the present discussion. \subsection{Pugh criterion versus Rice criterion} In the case of Fe-Ni alloys, the conclusion drawn from $\gamma_s/\gamma_u$ is consistent with the other three criteria based on $B/G$, the Poisson ratio, and the Cauchy pressure. When we look at the individual trends, however, we find that this consistency is to some extent incidental, especially at large Ni content. Namely, while both $G$ and $\gamma_u$ decrease with Ni addition, the trends for the surface energy and $B$ strongly deviate from each other above $\sim 5 \%$ Ni. At low Ni levels $B$ and $\gamma_s$ follow similar trends, but the relatively large $B$ of Fe$_{0.9}$Ni$_{0.1}$ is not supported by the strongly decreased $\gamma_s$ calculated for this alloy. The reason why $\gamma_s/\gamma_u$ and $B/G$ still predict similar effects (from the Rice and Pugh conditions) is simply the strong decrease calculated for $\gamma_u$ upon Ni addition. In the case of Fe$_{1-x}$Mn$_x$, the conclusions based on the Rice and Pugh criteria contradict each other. While $B/G$ decreases for $x\lesssim 0.08$, $\gamma_s/\gamma_u$ shows a weak increase for these alloys. We find that for this system, $G$ and $\gamma_u$ follow completely different trends as a function of Mn content (Figs. \ref{fig5} and \ref{fig6}), and the deviation between the trends of $\gamma_s$ and $B$ is also pronounced (although at low and intermediate Mn levels both of them show decreasing trends). From these results, we conclude that the two ductility criteria (Pugh and Rice) lead to inconsistent results, a fact that questions their reliability and limits their scope. In the following we make an attempt to understand the origin of this discrepancy.
To this end, we make use of the particular shear elastic constant associated with the present slip system as well as of the concept of theoretical cleavage stress. These two quantities are considered here as possible alternative measures of the material's resistance to dislocation slip and cleavage, respectively, and are expected to give a better estimate of the corresponding effects than the polycrystalline $G$ and $B$ employed in the Pugh criterion. Within the Griffith theory of brittle fracture, the theoretical cleavage stress is often approximated as \begin{eqnarray} \sigma_{cl.}\{lmn\}=\left(\dfrac{E_{lmn}\gamma_{lmn}}{d_{lmn}}\right)^{1/2}, \end{eqnarray} where $\{lmn\}$ stands for the cleavage plane, and $E_{lmn}$, $\gamma_{lmn}$ and $d_{lmn}$ are the corresponding Young modulus, surface energy and interlayer distance, respectively. Irwin and Orowan extended the above equation (modified Griffith equation) by including the plastic work before fracture. Here we neglect this additional term, \emph{i.e.} $\gamma_{lmn}$ represents merely the solid-vacuum interface energy. Using our calculated elastic constants, lattice parameters and surface energies, we computed $\sigma_{cl.}$ for Fe-Mn and Fe-Ni for the $\{110\}$ plane, for which $E_{110}=12C^{\prime}C_{44}B/(C_{11}C_{44}+3C^{\prime}B)$. The alloying-induced changes for $\sigma_{cl.}\{110\}$ are compared to those calculated for the surface energy in Fig.~\ref{fig9}, upper panel. Here the change $\eta_X(x) = [X(x)-X(0)]/X(0)$ ($X(x)$ stands for the cleavage stress or the surface energy of the Fe$_{1-x}$M$_x$ alloy) is expressed relative to the corresponding value in pure Fe, $X(0)$. It is found that $\eta_{\sigma_{cl.}}(x)$ and $\eta_{\gamma_s}(x)$ follow similar trends for Fe-Ni, but strongly deviate for Fe-Mn. Before explaining this deviation in the case of Mn doping, we introduce the shear modulus associated with the present slip system.
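For orientation regarding the magnitudes involved, the modified Griffith formula can be evaluated for pure Fe with the EMTO data of Table~\ref{tabI}; the resulting $\sim 54$ GPa is our own arithmetic (only relative changes are shown in Fig.~\ref{fig9}), and it assumes the bcc $\{110\}$ interlayer spacing $d_{110}=a/\sqrt{2}$:

```python
from math import sqrt

def cleavage_stress_110(c11, c12, c44, b, gamma_s, a):
    """Griffith estimate sigma = sqrt(E_110 * gamma_s / d_110) for a bcc {110} plane.

    Elastic constants and B in GPa, gamma_s in J/m^2, lattice parameter a in
    Angstrom; returns the cleavage stress in GPa.
    """
    cp = (c11 - c12) / 2.0                                   # tetragonal shear C'
    e110 = 12.0 * cp * c44 * b / (c11 * c44 + 3.0 * cp * b)  # Young modulus, GPa
    d110 = a * 1e-10 / sqrt(2.0)                             # interlayer spacing, m
    return sqrt(e110 * 1e9 * gamma_s / d110) / 1e9           # Pa -> GPa

# Pure bcc Fe, EMTO values from Table I
sigma_fe = cleavage_stress_110(288.9, 133.7, 100.0, 185.4, 2.47, 2.838)
```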
\begin{figure} \includegraphics[scale=0.32]{fig9a.eps} \includegraphics[scale=0.32]{fig9b.eps} \caption{(Color online) Relative changes ($\eta$ in $\%$) for the surface energy and the theoretical cleavage stress for the $\{110\}$ surface (upper panel) and for the USF energy and the single-crystal shear elastic modulus associated with the $1/2\langle111\rangle$ slip system (lower panel) for bcc Fe$_{1-x}$Mn$_x$ and Fe$_{1-x}$Ni$_x$ alloys as a function of composition. Fixed-spin results are shown for the surface energy and the USF energy of the Fe-Mn system.}\label{fig9} \end{figure} \begin{figure} \includegraphics[scale=0.32]{fig10.eps} \caption{(Color online) Local magnetic moments of Fe (upper curves) and Mn/Ni (lower curves) for Fe$_{1-x}$Mn$_x$ (left panels) and Fe$_{1-x}$Ni$_x$ (right panels) as a function of layer index. Results are shown for the model systems used for the USF energy (upper panels) and surface energy (lower panels) calculations. Different symbols correspond to various impurity levels ($x$) as shown in the legend.}\label{fig10} \end{figure} In bcc alloys, slip occurs primarily in the $\{110\}$ plane along the $\langle111\rangle$ direction with Burgers vector $1/2\langle111\rangle$. The associated shear modulus can be expressed as \begin{eqnarray} G\{110\}\langle111\rangle=\dfrac{3C_{44}C^\prime}{C^\prime+2C_{44}}. \end{eqnarray} We note that the above modulus in fact expresses the shear resistance for any shear plane $\{lmn\}$ that contains the $\langle111\rangle$ shear direction. In the original Pugh criterion, the averaged shear modulus $G$ is used, which in anisotropic materials can differ substantially from $G\{110\}\langle111\rangle$. The alloying-induced changes for $G\{110\}\langle111\rangle$ are compared to those calculated for the USF energy in Fig.~\ref{fig9}, lower panel. Interestingly, we find that $\eta_{G\{110\}\langle111\rangle}$ and $\eta_{\gamma_u}$ are practically identical for Fe-Ni.
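The anisotropy point is easy to quantify for pure Fe: evaluating the formula above with the Table~\ref{tabI} constants gives a value well below the polycrystalline average $G=90.3$ GPa. A quick sketch (the helper name is ours):

```python
def slip_shear_modulus(c44, cprime):
    """Shear modulus for <111> shear in a cubic crystal: 3*C44*C'/(C' + 2*C44), GPa."""
    return 3.0 * c44 * cprime / (cprime + 2.0 * c44)

g_slip_fe = slip_shear_modulus(100.0, 77.6)  # ~83.9 GPa vs polycrystalline G = 90.3 GPa
```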
Both $G\{110\}\langle111\rangle$ and $\gamma_u$ decrease by $\sim 15\%$ when $10\%$ Ni is added to Fe. Nickel substantially softens the elastic modulus associated with the $\{110\}\langle111\rangle$ shear, which is nicely reflected by the decrease of the energy barrier for the $\{110\}\langle111\rangle$ slip. For Fe-Mn, $\eta_{G\{110\}\langle111\rangle}$ and $\eta_{\gamma_u}$ are also close to each other for Mn content below $\sim 5\%$, but show large deviations at larger concentrations. Adding more Mn to Fe$_{0.95}$Mn$_{0.05}$ further decreases the USF energy, but $G\{110\}\langle111\rangle$ changes slope and shows a weak increase for Fe$_{0.9}$Mn$_{0.1}$ relative to the value for pure Fe. We suggest that the different behaviors obtained for Fe-Ni and Fe-Mn have, to a large extent, a magnetic origin. That is because, in contrast to Ni, Mn is a weakly itinerant magnet and thus any (here structural) perturbation can have a marked impact on its magnetic state. To illustrate that, in Fig.~\ref{fig10} we plotted the local magnetic moments for the unit cells used for the USF energy and surface energy calculations. In the case of Fe-Ni, we see no substantial deviation in the local magnetic moments as we approach the planar fault area. Both Ni and Fe moments near the fault plane remain close to the bulk value. However, for Fe-Mn, the local magnetic moments of Mn next to the planar fault prefer a very strong antiferromagnetic coupling with the Fe matrix, irrespective of the bulk concentration. Although the bulk moments approach zero as the Mn content increases to $10 \%$, the interface Mn moments remain around $-2.2 \mu_{\rm B}$ and those on the surface around $-3.1 \mu_{\rm B}$. This strong antiferromagnetic Fe-Mn coupling near the USF interface and surface indicates an energetically stable configuration that can lower the corresponding formation energy.
Indeed, removing this degree of freedom by fixing all Mn moments to the corresponding bulk value (\emph{i.e.}, modeling a situation similar to the case of the Fe-Ni system) increases the surface energy and USF energy of Fe-Mn. The relative changes of the corresponding $\gamma_s^{\rm FS}$ and $\gamma_u^{\rm FS}$ values (FS stands for fixed-spin) relative to those of pure Fe are shown in Fig.~\ref{fig9}. It is found that $\gamma_u^{\rm FS}$ for Fe$_{0.9}$Mn$_{0.1}$ approaches the USF energy of pure Fe. In fact, constraining the magnetic moments near the planar fault brings the trend followed by $\gamma_u^{\rm FS}$ rather close to that of $G\{110\}\langle111\rangle$ (Fig. \ref{fig9}, lower panel). A similar impact of the constrained magnetic moment is seen for the surface energy as well. Namely, for Fe-Mn $\gamma_s^{\rm FS}$ and the theoretical cleavage stress show similar concentration dependencies (Fig.~\ref{fig9}, upper panel). On this ground, we conclude that the deviations seen between the surface energy and cleavage stress and between the USF energy and the shear modulus in the case of Fe-Mn are due to the stable antiferromagnetic state of Mn near the planar faults. Using the FS results for the Rice parameter, we get a weak decrease of $\gamma_s^{\rm FS}/\gamma_u^{\rm FS}$ above $\sim 5\%$ Mn addition to Fe (shown in Fig.~\ref{fig11}, upper panel). Thus, the fixed-moment result for the Rice parameter is somewhat closer to the original Pugh ratio, both of them predicting enhanced intrinsic brittleness for the Fe-Mn solid solution. This demonstrates that the discrepancy seen between the two criteria (Fig.~\ref{fig8}) is partly due to the weak magnetic behavior of Mn associated with the planar defects but absent in the elastic moduli. We suspect that the situation in non-magnetic alloys could be closer to the case of Fe-Ni, where a good parallelism between the Pugh and Rice conditions is found.
However, further theoretical research is needed before a final verdict can be reached on this question. \begin{figure} \includegraphics[scale=0.32]{fig11.eps} \caption{(Color online) Theoretical Rice parameter and the ratio between the theoretical cleavage stress and the shear modulus associated with the slip system ($\lambda$) for bcc Fe$_{1-x}$Mn$_x$ and Fe$_{1-x}$Ni$_x$ alloys as a function of composition. Fixed-spin results are shown for the Rice parameter of the Fe-Mn system. The upper panel represents $\{110\}$ as the cleavage plane, and the lower panel $\{100\}$ as the cleavage plane.}\label{fig11} \end{figure} Finally, we test the ratio between the cleavage energy and the shear modulus associated with the slip system, $\lambda\{lmn\} \equiv \sigma_{cl.}\{lmn\}/G\{110\}\langle111\rangle$, as a possible alternative measure of the ductile-brittle behavior. In the upper panel of Fig.~\ref{fig11}, we compare the Rice parameters (from Fig.~\ref{fig9}) to $\lambda\{110\}$. For Fe-Ni, we find a good correspondence between the two measures. Namely, both of them increase with Ni addition, indicating enhanced ductility. For Fe-Mn, a slightly larger deviation occurs at large Mn content. As we discussed above, part of this deviation may be ascribed to magnetism and in particular to the stable antiferromagnetic state of Mn around the planar faults. Since in bcc metals cleavage predominantly occurs in the $\{100\}$ plane, in addition to the previously discussed cleavage stress, we also computed $\sigma_{cl.}\{100\}$ using the corresponding surface energy and $E_{100}=6 C^{\prime}B/(C_{11}+C_{12})$. The resulting $\lambda\{100\}$ parameters are shown in the lower panel of Fig.~\ref{fig11} together with the Rice parameters calculated using the surface energy for the $\{100\}$ facet. We find that the two sets of Rice parameters (upper and lower panels) predict rather similar effects.
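The $\lambda$ measure above is straightforward to compute once the elastic constants and the cleavage stress are known. The sketch below combines the quoted $E_{100}$ relation (using the standard cubic-crystal definitions $C'=(C_{11}-C_{12})/2$ and $B=(C_{11}+2C_{12})/3$) with the $\{110\}\langle111\rangle$ shear modulus; all numerical inputs are made-up placeholders, not values from this work.

```python
def youngs_modulus_100(c11, c12):
    """E_100 = 6*C'*B/(C11 + C12) with C' = (C11 - C12)/2 and
    B = (C11 + 2*C12)/3 for a cubic crystal, as quoted in the text."""
    c_prime = (c11 - c12) / 2.0
    bulk = (c11 + 2.0 * c12) / 3.0
    return 6.0 * c_prime * bulk / (c11 + c12)

def shear_modulus_110_111(c44, c_prime):
    # G{110}<111> = 3*C44*C'/(C' + 2*C44), cf. the earlier equation.
    return 3.0 * c44 * c_prime / (c_prime + 2.0 * c44)

# lambda{100} = sigma_cl{100}/G{110}<111>; the cleavage stress below is a
# placeholder number -- only the definition of the ratio is from the text.
c11, c12, c44 = 230.0, 135.0, 117.0   # illustrative elastic constants, GPa
sigma_cl_100 = 30.0                    # illustrative cleavage stress, GPa
lam_100 = sigma_cl_100 / shear_modulus_110_111(c44, (c11 - c12) / 2.0)
```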
The situation for $\lambda$ is very different: $\lambda\{100\}$ monotonically increases with Ni and decreases with Mn addition. The inconsistency between the Rice parameter and $\lambda$ obtained for the $\{100\}$ plane is a consequence of the limited information built in the Rice ratio (merely through the surface energy) compared to the cleavage stress (involving both the Young modulus and the surface energy). We conclude that in contrast to the Rice parameter, the average $\lambda$ (taking into account both cleavage planes) indicates increased brittleness for Fe-Mn and increased ductility for Fe-Ni. For both alloys, the present predictions based on $\lambda$ are in line with the observations~\cite{Wessman2008}. \section{Conclusion}\label{conclusion} We have investigated the effect of Mn and Ni addition on the mechanical properties of ferromagnetic bcc Fe. In Fe-Ni, the lattice parameter increases nearly linearly with alloying, whereas in Fe-Mn, the lattice parameter increases up to $\sim5\,\%$ Mn and then decreases with further Mn addition. For both Fe-Mn and Fe-Ni alloys, the elastic moduli show nonlinear trends as a function of concentration. The surface energy and the unstable stacking fault energy decrease by adding Mn or Ni to Fe. For both planar fault energies, Ni shows a stronger effect than Mn. Segregation seems to have a minor effect on the surface and USF energies for dilute Fe-Ni and Fe-Mn alloys. According to semi-empirical correlations based on the Poisson ratio, the Cauchy pressure, and the Pugh ratio, we have found that Mn makes the bcc Fe-based dilute alloy more brittle and Ni makes it more ductile. However, the study of $\gamma_s/\gamma_u$ for the $\{110\}$ surface and the $\{110\}\langle111\rangle$ slip systems indicates that both binary alloys become more ductile with increasing solute concentration, although the relative fracture behavior is consistent with the other three criteria.
This discrepancy between the Rice and the Pugh criteria is ascribed to the complex magnetic effects around the planar fault, which are however missing from the bulk parameters entering the Pugh/Cauchy criterion. Using the theoretical cleavage stress and the shear modulus associated with the dominant slip system in bcc alloys, we introduce an alternative measure for the ductile-brittle behavior. We find that our $\lambda\{100\}\equiv \sigma_{cl.}\{100\}/G\{110\}\langle111\rangle$ ratio is able to capture the observed alloying effects in the mechanical properties of Fe-rich Fe-Ni and Fe-Mn alloys. The superior performance of $\lambda$ compared to the Rice parameter lies in the additional information built in the theoretical cleavage stress as compared to the surface energy. \emph{Acknowledgements} Work supported by the Swedish Research Council, the Swedish Foundation for Strategic Research, and the China Scholarship Council. The National 973 Project of China (Grant No. 2014CB644001) and the Hungarian Scientific Research Fund (OTKA 84078 and 109570) are also acknowledged for financial support.
\section{Introduction} String theory is argued to be a consistent theory of quantum gravity including gauge interactions. As such, it provides an ideal testing ground for expectations a general quantum-gravity theory should satisfy. However, conversely one can try to identify quantum-gravity features in string theory and conjecture them to hold in general --- this is the approach known as the swampland program. The swampland is defined as the set of consistent effective quantum field theories which cannot be completed into consistent theories of quantum gravity \cite{Vafa:2005ui}. During the last years there has been an immense effort in pursuing the swampland program and in proposing and testing swampland conjectures. It is not possible to give an overview of the corresponding literature here, and we therefore want to refer to the review article \cite{Palti:2019pca}. There are however three conjectures which play a role for this paper: these are the weak gravity conjecture in its original form \cite{ArkaniHamed:2006dz} and the weak gravity conjecture with scalar fields \cite{Palti:2017elp}; the swampland distance conjecture \cite{Ooguri:2006in} in its refined version \cite{Klaewer:2016kiy}; and the emergence proposal \cite{Grimm:2018ohb} which has been further developed in \cite{Palti:2019pca}. These conjectures will be stated in section~\ref{sect_swampconj}. The swampland program typically does not rely on supersymmetry, but many computations in string theory are simplified considerably when some amount of supersymmetry is preserved. A well-established setting is that of type IIB string theory compactified on Calabi-Yau three-folds $\mathcal X$, for which the resulting theory in four dimensions is $\mathcal N=2$ supersymmetric. 
The massless spectrum is determined by the topology of the compact space, in particular, the four-dimensional theory contains one gravity multiplet, $h^{2,1}$ vector multiplets and $h^{1,1}+1$ hypermultiplets where $h^{2,1}$ and $h^{1,1}$ denote the non-trivial Hodge numbers of $\mathcal X$. In such compactifications one obtains a variety of BPS objects which have special properties and features, given for instance by D-branes wrapping cycles of the compact space. The authors of \cite{Grimm:2018ohb} consider such type IIB compactifications and focus on the vector-multiplet sector, which contains closed-string $U(1)$ gauge fields and the complex-structure moduli of $\mathcal X$. The relevant BPS objects are D3-branes wrapping three-cycles of the Calabi-Yau manifold, and they are charged under the $U(1)$s and their mass depends on the complex-structure moduli. For this setting many explicit checks of various swampland conjectures have been performed and the inter-dependence of some of these conjectures has been illustrated. The main question we want to ask in this paper is \textit{What happens to the $\mathcal N=2$ analysis when we perform an orientifold projection?} It is well-known that under an orientifold projection (giving rise to O3- and O7-planes) the $h^{2,1}$ vector multiplets of $\mathcal N=2$ are projected to $h^{2,1}_-$ chiral multiplets and $h^{2,1}_+$ vector multiplets of four-dimensional $\mathcal N=1$ supergravity \cite{Grimm:2004uq}. In particular, we have \eq{ \label{multiplets_split} \renewcommand{\arraystretch}{1.6} \arraycolsep15pt \begin{array}{ccl} \mathcal N=2 && \hfill\mathcal N=1\hfill \\ \hline \multirow{2}{*}{\mbox{$h^{2,1}$ vector multiplets}} & \multirow{2}{*}{$\xrightarrow{\hspace{5pt}{\rm orientifold}\hspace{5pt}}$} & \mbox{$h^{2,1}_-$ chiral multiplets} \\ && \mbox{$h^{2,1}_+$ vector multiplets} \end{array} } where $h^{2,1}=h^{2,1}_-+h^{2,1}_+$. 
Note that the $\mathcal N=1$ chiral multiplets contain the complex-structure moduli and the $\mathcal N=1$ vector multiplets contain the $U(1)$ gauge fields, and that both multiplets are independent of each other. As we will argue, this implies that D3-branes wrapping three-cycles are no longer BPS but split into massive-uncharged and massless-charged states in the four-dimensional theory. Hence, a priori the analysis of \cite{Grimm:2018ohb} no longer applies. We now want to briefly mention some works in the literature which are related to our discussion: \begin{itemize} \item As discussed above, the work of \cite{Grimm:2018ohb} studies swampland conjectures for $\mathcal N=2$ compactifications of type IIB string theory. Here we perform an analysis for the theory obtained after an orientifold projection to $\mathcal N=1$. \item In \cite{Font:2019cxq} the authors investigate (among others) domain walls in $\mathcal N=1$ orientifold compactifications of type II string theory. For the O3-/O7-situation, these domain walls are given by D5-branes wrapping three-cycles of the Calabi-Yau manifold. The mass of these four-dimensional states is determined by the complex-structure moduli and they are charged under closed-string three-form gauge fields, but not under closed-string one-form gauge fields. \end{itemize} This paper is organized as follows: in section~\ref{sec_iiborientifolds} we review type IIB orientifold compactifications with orientifold three- and seven-planes, and in section~\ref{sec_d3} we analyze D3-branes wrapping three-cycles in the compact space. In section~\ref{sect_swampconj} we discuss the weak gravity conjecture, the swampland distance conjecture and the emergence proposal for this setting, and section~\ref{sect_conc} contains our summary and conclusions. We furthermore mention that this paper is based on the master thesis \cite{martin:2019} of one of the authors, where further details and discussions can be found. 
\section{Type IIB orientifolds} \label{sec_iiborientifolds} In this section we briefly review Calabi-Yau orientifold compactifications of type IIB string theory with O3- and O7-planes. We consider the general situation with $h^{2,1}_+\neq0$, leading to closed-string $U(1)$ gauge fields in the four-dimensional effective theory \cite{Grimm:2004uq}. \subsection{Calabi-Yau orientifolds} \label{sec_orientifolds} We start by introducing our notation for the topology of Calabi-Yau orientifolds, and we provide some details on the special geometry of the complex-structure moduli space. \subsubsection*{Topology} We consider a Calabi-Yau three-fold $\mathcal X$, which comes with a holomorphic three-form $\Omega\in H^{3,0}(\mathcal X)$ and a real K\"ahler form $J\in H^{1,1}(\mathcal X)$. To perform an orientifold projection we impose a holomorphic involution $\sigma$ on $\mathcal X$, chosen to act on $\Omega$ and $J$ as \eq{ \label{orient_choice} \sigma^*J=+J\,,\hspace{60pt} \sigma^*\hspace{1pt}\Omega=-\Omega\,. } Since $\sigma$ is an involution, the cohomology groups of the Calabi-Yau three-fold $\mathcal X$ split into even and odd eigenspaces as $H^{p,q}(\mathcal X) = H^{p,q}_+(\mathcal X) \oplus H^{p,q}_-(\mathcal X)$ with dimensions $h^{p,q}_+$ and $h^{p,q}_-$. 
Of interest to us will be the third de-Rham cohomology group $H^3(\mathcal X)= H^{3}_+(\mathcal X) \oplus H^{3}_-(\mathcal X)$, for which we choose a symplectic basis in the following way \begin{align} &\label{basis_01} \arraycolsep2pt \begin{array}{l@{\hspace{40pt}}lcl@{\hspace{40pt}}lcl} \raisebox{-20pt}[0pt][-20pt]{$\displaystyle \{\alpha_I ,\alpha_a, \beta^I,\beta^a \}\,,$} & \displaystyle \int_{\mathcal X} \alpha_{I} \wedge \beta^J &=& \displaystyle \delta_I{}^J \,, & \displaystyle I,J &=& \displaystyle 0,\ldots, h^{2,1}_- \,, \\[16pt] & \displaystyle \int_{\mathcal X} \alpha_{a} \wedge \beta^b &=& \displaystyle \delta_a{}^b \,, & \displaystyle a,b &=& \displaystyle 1,\ldots, h^{2,1}_+ \,, \end{array} \\ \intertext{ with all other pairings vanishing. A corresponding basis of the third homology group $H_3(\mathcal X)= H_{3+}(\mathcal X) \oplus H_{3-}(\mathcal X)$ together with the non-trivial pairings takes the form } &\label{basis_02} \arraycolsep2pt \begin{array}{l@{\hspace{34pt}}lcl@{\hspace{62pt}}lcl} \raisebox{-20pt}[0pt][-20pt]{$\displaystyle \{A^I ,A^a, B_I,B_a \}\,,$} & \displaystyle \int_{A^I} \alpha_{J} &=& \displaystyle \delta^I{}_J \,, & \displaystyle \int_{B_I} \beta^{J} &=& \displaystyle \delta_I{}^J \,, \\[16pt] & \displaystyle \int_{A^a} \alpha_{b} &=& \displaystyle \delta^a{}_b \,, & \displaystyle \int_{B_a} \beta^{b} &=& \displaystyle \delta_a{}^b \,. \end{array} \end{align} \subsubsection*{Special geometry} Next, we consider the special geometry of the complex-structure moduli space. The holomorphic three-form $\Omega$ can be expanded in the symplectic basis \eqref{basis_01} as follows \eq{ \Omega = X^I \alpha_I - F_I\hspace{1pt} \beta^I \,. } Since $\Omega$ is odd under the orientifold projection -- cf.~equation \eqref{orient_choice} -- the coefficients of $\Omega$ in the even basis $\{\alpha_a,\beta^a\}$ are vanishing, that is $X^a=0$ and $F_a\rvert_{X^a=0}=0$. 
We will refer to these conditions also as the orientifold locus and discuss them in more detail below. The complex-structure moduli are coordinates on a projective space and are usually defined as \eq{ z^i = \frac{X^i}{X^0} \,, \hspace{40pt} z^i= u^i + i\hspace{1pt} v^i \,, \hspace{40pt} i = 1, \ldots, h^{2,1}_- \,, } and we note that in our conventions the large-complex-structure limit corresponds to $v^i\to\infty$. For Calabi-Yau compactifications the periods $F_I$ can be defined in terms of a prepotential $F$ which is a holomorphic function of degree two in the fields $X^{\mathsf I} = (X^I,X^a)$. More concretely, the $F_I$ are defined as the derivatives \eq{ F_I = \frac{\partial F}{\partial X^I} \biggr\rvert_{X^a=F_a=0} \,, } which in our present setting have to be evaluated at the orientifold locus. The prepotential $F$ of the $\mathcal N=2$ theory consists of a tree-level part involving the triple intersection numbers $d_{\mathsf{ijk}}$ of the mirror Calabi-Yau manifold and one-loop and non-perturbative corrections. More concretely, we have \cite{Hosono:1994av,Hosono:1994ax} \eq{ \label{prepotential_full} F= \arraycolsep2pt \begin{array}[t]{cl} &\displaystyle \frac{d_{\mathsf{ijk}} X^{\mathsf i}X^{\mathsf j}X^{\mathsf k}}{X^{0}} \\[8pt] +&\displaystyle \frac{1}{2}\hspace{1pt} a_{\mathsf {ij}}X^{\mathsf i}X^{\mathsf j}+b_{\mathsf i}\hspace{1pt} X^{\mathsf i}X^{0} +\frac{1}{2}\hspace{1pt} c\hspace{1pt} (X^{0})^2 + i\hspace{1pt} (X^{0})^2\sum_{k}n_{k}\text{Li}_{3}\Bigl(e^{2\pi i\hspace{1pt} k_{\mathsf i}X^{\mathsf i}/X^{0}}\Bigr) \,, \end{array} } where the indices $\mathsf{i},\mathsf{j},\mathsf{k}=1,\ldots, h^{2,1}$ extend over the even and odd cohomology, $a_{\mathsf{ij}}$ and $b_{\mathsf i}$ are rational real numbers whereas $c$ is purely imaginary.\footnote{These numbers encode the one-loop corrections. 
We note that there is an ambiguity related to them: for certain non-singular cases they have been computed in \cite{Hosono:1994ax} but to our knowledge these are not known for singular configurations.\label{foot_oneloop}} The triple intersection numbers $d_{\mathsf{ijk}}$ and the $a_{\mathsf{ij}}$ are symmetric in their indices, $n_{k}$ are the genus zero Gopakumar-Vafa invariants, ${\rm Li}_3$ denotes the third poly-logarithm and $k_{\mathsf i}$ are the wrapping numbers of world-sheet instantons along two-cycles of the mirror Calabi-Yau. \subsubsection*{Orientifold locus} \label{page_or_loc} From the $\mathcal N=2$ perspective the orientifold locus is determined by $X^a=0$, $F_a=0$ for all $a=1,\ldots, h^{2,1}_+$. The condition $X^a=0$ alone describes a conifold singularity in which an $A$-cycle shrinks to zero size, and the physics of conifold singularities is well understood \cite{Strominger:1995cz}. However, imposing in addition $F_a=0$ means that also the dual $B$-cycle vanishes. We are not aware of work in the literature studying such situations and we continue with a heuristic discussion. Let us then study in more detail the condition $F_a=0$ evaluated at $X^a=0$. From the explicit form of \eqref{prepotential_full} we determine \eq{ \label{period_ll} F_a = X^0 \left[ 3 \hspace{1pt} d_{aij} z^i z^j + a_{ai} z^i + b_a - 2\pi \sum_{k} k_a n_{k}\text{Li}_{2}\Bigl(e^{2\pi i\hspace{1pt} k_{i}z^{i}}\Bigr) \right], } where we recall that $i,j=1,\ldots, h^{2,1}_-$ and $a=1,\ldots, h^{2,1}_+$. The requirement $F_a=0$ for all $z^i$ can be satisfied in the following two ways: \begin{enumerate} \label{page_or_sol} \item A sufficient set of conditions for the periods $F_a$ to vanish for all $z^i$ reads \eq{ \label{or_sol_1} d_{aij} = 0 \,, \hspace{40pt} a_{ai} = 0 \,, \hspace{40pt} b_{a} = 0 \,, \hspace{40pt} k_{a} = 0 \,.
} However, especially the condition $k_a=0$ is rather restrictive since it means that all world-sheet instanton corrections in the orientifold-even sector have to be absent. This in turn implies a discrepancy in our discussion of emergence in section~\ref{emergencesect}. \item A second possibility is to require a splitting of the sum over $k$ in \eqref{period_ll} into purely odd and purely even vectors as $ \sum_{k} = \sum_{k_i} + \sum_{k_a} $. Using then the relation $\text{Li}_2(1) = \pi^2/6$ we find the conditions \eq{ \label{or_sol_2} d_{aij} = 0 \,, \hspace{40pt} a_{ai} = 0 \,, \hspace{40pt} b_{a} = \frac{\pi^3}{3} \sum_{k_a} k_a n_k \,. } It is not clear to us whether the last of these conditions can be satisfied, but we mentioned in footnote~\ref{foot_oneloop} that there is an ambiguity for the one-loop coefficients. The solution \eqref{or_sol_2} however realizes $F_a=0$ with $k_a\neq0$ which is needed for our discussion in section~\ref{emergencesect}. \end{enumerate} \subsection{Effective action} We now turn to the effective four-dimensional action obtained after compactifying type IIB string theory on the Calabi-Yau orientifolds described above. The details of this action can be found in the literature, and here we focus on the sector corresponding to the third cohomology of $\mathcal X$. \subsubsection*{Moduli} We compactify type IIB string theory on a Calabi-Yau three-fold as $\mathbb R^{3,1}\times \mathcal X$ and perform an orientifold projection. The orientifold projection is of the form $\Omega_{\rm P} (-1)^{F_{\rm L}} \sigma$, where $\Omega_{\rm P}$ denotes the world-sheet parity operator and $F_{\rm L}$ is the left-moving fermion number. The holomorphic involution $\sigma$ acts on $\mathcal X$ as in \eqref{orient_choice} and leaves the non-compact four-di\-men\-sional part invariant. Hence, the fixed loci of $\sigma$ correspond to orientifold three- and seven-planes. 
The combination $\Omega_{\rm P} (-1)^{F_{\rm L}}$ acts on the massless bosonic fields in the following way \eq{ \label{orient_signs} \arraycolsep2pt \begin{array}{lclclcl} \arraycolsep1.5pt \displaystyle \Omega_{\rm P}\, (-1)^{F_{\rm L}}\: g &=& + \:g \,, & \hspace{60pt} & \displaystyle \Omega_{\rm P}\, (-1)^{F_{\rm L}}\: B &=& - B \,, \\[6pt] \displaystyle \Omega_{\rm P}\, (-1)^{F_{\rm L}}\: \phi &=& + \:\phi \,, & \qquad & \displaystyle \Omega_{\rm P}\, (-1)^{F_{\rm L}}\: C_{2p} &=& (-1)^{p} \,C_{2p} \,, \end{array} } with $g$ the metric, $B$ the Kalb-Ramond field, $\phi$ the dilaton and $C_{2p}$ the Ramond-Ramond (R-R) potentials in type IIB. The degrees of freedom contained in these fields become fields in the compactified theory, in particular, the deformations of the Calabi-Yau metric are contained in the K\"ahler form $J$ and the holomorphic three-form $\Omega$. For the Ramond-Ramond four-form potential we note that $C_4$ can be expanded as \begin{align} \label{expansion_001} C_4 = C_4^{(0,4)} +{} &C_4^{(1,3)} + C_4^{(2,2)} + C_4^{(4,0)} \,, \\[2pt] \nonumber &\hspace{4pt}\downarrow \\[2pt] \nonumber &C_4^{(1,3)} = U^a\wedge \alpha _a + V_a\wedge \beta^a \,, \end{align} where in $C_4^{(m,n)}$ the superscript $m$ denotes the degree in four dimensions and $n$ the degree on the compact space $\mathcal X$. Note furthermore that $U^a$ and $V_a$ correspond to $U(1)$ vector fields in four dimensions. The four-form \raisebox{0pt}[0pt][0pt]{$C_4^{(4,0)}$} in $\mathbb R^{3,1}$ is not dynamical, and due to the self-duality conditions imposed on $C_4$ half of the degrees of freedom in the expansion \eqref{expansion_001} are removed. Here we choose to remove \raisebox{0pt}[0pt][0pt]{$C_4^{(2,2)}$} and $V_a$ but keep \raisebox{0pt}[0pt][0pt]{$C_4^{(0,4)}$} and $U^a$, respectively. 
The four-dimensional massless fields (in addition to the four-dimensional metric $g_{\mu\nu}$) then take the following schematic form \eq{ \renewcommand{\arraystretch}{1.2} \arraycolsep2pt \begin{array}{l@{\hspace{40pt}}lcl@{\hspace{10pt}}c@{\hspace{10pt}}l} \mbox{complex-structure moduli} & \multicolumn{3}{@{}l}{\Omega} & \longleftrightarrow & H^{3}_-(\mathcal X)\,, \\ \mbox{$U(1)$ vector fields} & \multicolumn{3}{@{}l}{C_4^{(1,3)}} & \longleftrightarrow & H^{3}_+(\mathcal X)\,, \\[6pt] \mbox{K\"ahler moduli} & J &+& C_4^{(0,4)} & \longleftrightarrow & H^{2,2}_+(\mathcal X)\,, \\ \mbox{odd moduli} & B &+& C_2 & \longleftrightarrow & H^{1,1}_-(\mathcal X)\,, \\ \mbox{axio-dilaton} & \phi &+& C_0 & \longleftrightarrow & H^{0,0}_+(\mathcal X)\,. \end{array} } \subsubsection*{Four-dimensional action} Compactifications of type II string theory on Calabi-Yau manifolds lead to a $\mathcal N=2$ supergravity theory in four dimensions, which can be reduced to $\mathcal N=1$ by an orientifold projection. Here we are interested in the action corresponding to the complex-structure moduli $z^i$ and the vector fields $U^a$, which can be brought into the following general form \eq{ \label{action_001} \mathcal S \supset \frac{1}{2}\int_{\mathbb R^{3,1}} \,\Bigl[ \; -\mbox{Re}\hspace{1pt} (f_{ab}) \hspace{1pt} F^a\wedge \star F^b +\mbox{Im}\hspace{1pt} (f_{ab}) \hspace{1pt} F^a\wedge F^b - 2\hspace{1pt} G_{i\overline j} \, dz^i \wedge\star d\overline z{}^{\overline j} \;\Bigr]\,. 
} The K\"ahler metric $G_{i\overline j}$ for the complex-structure moduli $z^i$ appearing in the action \eqref{action_001} is computed from a K\"ahler potential $K_{\rm cs}$ as \eq{ \label{metric_k} G_{i\overline j} = \frac{\partial^2 K_{\rm cs}}{\partial z^i \partial \overline z{}^{\overline j}}\biggr\rvert_{X^c=F_c=0} \,, \hspace{50pt} K_{\rm cs} &= - \log \Bigl[\, - i\int_{\mathcal X} \Omega\wedge \overline\Omega\:\Bigr] \,, } and the $U(1)$ field strength corresponding to $U^a$ is denoted by $F^a = dU^a$. The gauge kinetic function $f_{ab}$ can be expressed in terms of the period matrix $\mathcal N$ as \eq{ \label{period_m_001} &f_{ab} = -i \hspace{1pt}\overline{\mathcal N}_{ab}\,, \\ &\mathcal N_{ab}=\overline{F}_{ab}+2\hspace{1pt} i \, \frac{ {\rm Im}(F_{aI}) X^I \, {\rm Im}(F_{bJ}) X^J}{ X^M \hspace{1pt}{\rm Im}(F_{MN}) X^N}=\mathcal{R}_{ab}+i\,\mathcal{I}_{ab} \,, } which follows from evaluating the $\mathcal N=2$ gauge kinetic function on the orientifold locus $X^a=0$, $F_a=0$ \cite{Grimm:2004uq}. The period matrix in \eqref{period_m_001} is written using second derivatives of the prepotential $F$, that is \eq{ \label{derivatives_001} F_{\mathsf{IJ}} = \frac{\partial^2 F}{\partial X^{\mathsf I}\partial X^{\mathsf J}} \biggr\rvert_{X^c=F_c=0} \,, \hspace{50pt} \mathsf I,\mathsf J = 0, \ldots, h^{2,1}\,. } \subsubsection*{Some expressions} We finally collect some technical results. We first define the inverse of the real part of the gauge kinetic function as the $\mathcal N=2$ expression restricted to the orientifold locus, and we obtain using \eqref{derivatives_001} that \eq{ \bigl[(\mbox{Re}\hspace{1pt} f)^{-1} \bigr]^{ab} = - \bigl[ (\mbox{Im}\hspace{1pt} F)^{-1} \bigr]^{ab}\,. } Second, for the $\mathcal N=2$ setting one usually defines a $(2h^{2,1}+2)\times(2h^{2,1}+2)$ dimensional matrix $\mathcal M$ which corresponds to a metric for the third cohomology. 
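For explicit models it can be convenient to evaluate the period-matrix formula above directly in code. The following numpy sketch computes $\mathcal N$ from the matrix of second derivatives $F_{\mathsf{IJ}}$ and the period vector $X^{\mathsf I}$; the input data here are generic placeholders, not periods of an actual Calabi-Yau geometry.

```python
import numpy as np

def period_matrix(F2, X):
    """N = conj(F2) + 2i * outer(ImF2 @ X, ImF2 @ X) / (X @ ImF2 @ X),
    i.e. the N=2 period matrix quoted above, evaluated on whatever
    locus the inputs F2 (second derivatives of F) and X are taken at."""
    ImF2 = F2.imag
    v = ImF2 @ X                       # Im(F_AI) X^I
    return np.conj(F2) + 2j * np.outer(v, v) / (X @ ImF2 @ X)

# Generic symmetric placeholder data (not from a specific geometry).
F2 = np.array([[1.0 + 2.0j, 0.3 + 0.1j],
               [0.3 + 0.1j, 0.8 + 1.5j]])
X = np.array([1.0, 0.5])
N = period_matrix(F2, X)               # symmetric, like F2 itself
```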
We are interested in the orientifold-even part evaluated on the orientifold locus which takes the form \eq{ \label{matrix_m} \mathcal M = \left( \begin{array}{cc} -\bigl[ ({\rm Im}\hspace{1pt}\mathcal N)^{-1}\bigr]^{ab} & \bigl[ ({\rm Im}\hspace{1pt}\mathcal N)^{-1}( {\rm Re}\hspace{1pt}\mathcal N )\bigr]^a_{\hspace{4pt}b} \\[4pt] \bigl[ ( {\rm Re}\hspace{1pt}\mathcal N )({\rm Im}\hspace{1pt}\mathcal N)^{-1}\bigr]_a^{\hspace{4pt}b} & -\bigl[( {\rm Im}\hspace{1pt}\mathcal N) +( {\rm Re}\hspace{1pt}\mathcal N )({\rm Im}\hspace{1pt}\mathcal N)^{-1}( {\rm Re}\hspace{1pt}\mathcal N)\bigr]_{ab} \end{array} \right), } with $a,b=1,\ldots, h^{2,1}_+$. Note that this matrix is positive definite. The separate blocks of \eqref{matrix_m} can be determined in terms of \eqref{derivatives_001} and take the following form \eq{ \bigl[ ({\rm Im}\hspace{1pt}\mathcal N)^{-1}\bigr]^{ab} & = - \bigl[ ({\rm Im}\hspace{1pt} F)^{-1}\bigr]^{ab} \,, \\ \bigl[ ( {\rm Re}\hspace{1pt}\mathcal N )({\rm Im}\hspace{1pt}\mathcal N)^{-1}\bigr]_a^{\hspace{4pt}b} & = - \bigl[ ({\rm Re}\hspace{1pt} F)({\rm Im}\hspace{1pt} F)^{-1}\bigr]_a^{\hspace{4pt}b} \,, \\ \hspace{-5pt} \bigl[( {\rm Im}\hspace{1pt}\mathcal N) +( {\rm Re}\hspace{1pt}\mathcal N )({\rm Im}\hspace{1pt}\mathcal N)^{-1}( {\rm Re}\hspace{1pt}\mathcal N)\bigr]_{ab} & = - \bigl[ ( {\rm Im}\hspace{1pt} F) +( {\rm Re}\hspace{1pt} F)({\rm Im}\hspace{1pt} F)^{-1}( {\rm Re}\hspace{1pt} F)\bigr]_{ab} \,. \hspace{-20pt} } \section{D3-branes wrapping three-cycles} \label{sec_d3} The main objects of interest for our analysis are D3-branes wrapping three-cycles in the compact geometry, which give rise to particles in the four-dimensional theory. \subsection{D-branes} For our purpose we are looking for objects which are charged under the $U(1)$ gauge fields $U^a$. Since the $U^a$ are contained in the expansion \eqref{expansion_001} of the R-R four-form $C_4$, we are focussing on D-branes. 
\subsubsection*{D3-branes} \label{d3bra} We first want to show that for the setting introduced above, the only objects coupling directly to the closed-string gauge fields are D3-branes wrapping three-cycles in the compact space.\footnote{As discussed for instance in \cite{Jockers:2005pn,Marchesano:2014bia}, the closed-string $U(1)$ gauge fields can also couple to open-string moduli. This situation is however not generic and will therefore be ignored in the following.} To do so, we recall the Chern-Simons part of the D-brane action. With $\mu_p$ its charge, $\Gamma$ the submanifold wrapped by the D-brane, $\mathcal F$ the gauge-invariant open-string field strength and $\hat{\mathcal{A}}(\mathcal R_{T,N})$ the $\hat{\mathcal{A}}$-genera of the tangent- and normal-bundle curvature two-forms, we have \eq{ \mathcal{S}_{{\rm D}p} &\supset -\mu_p \int_{\Gamma} \mbox{ch}\left( \mathcal F\right) \wedge \sqrt{ \frac{\hat{\mathcal A}(\mathcal R_T)}{\hat{\mathcal A}(\mathcal R_N)}} \wedge \bigoplus_q C_q \,. } Since $ \mbox{ch}\left( \mathcal F\right)$ as well as the $\hat{\mathcal{A}}$-terms are even forms, and since on a Calabi-Yau manifold the first and fifth cohomology are trivial, the only possible D-branes which couple to $C_4^{(1,3)}$ (cf.~the expansion in \eqref{expansion_001}) have to wrap (orientifold-even) three-cycles in $\mathcal X$. Furthermore, since $\mathcal F$ is odd under $ \Omega_{\rm P}\, (-1)^{F_{\rm L}}$ the four-dimensional part does not contain $\mathcal F$ and we arrive at a D3-brane wrapping a three-cycle in the compact space $\mathcal X$. That is, we consider D3-branes extending along the time direction in the non-compact four-dimensional space $\mathbb R^{3,1}$ and wrapping three-cycles $\Gamma_3\subset\mathcal X$, where the latter can be expanded in the basis \eqref{basis_02} as follows \eq{ \Gamma_3 = m_I A^I + m_a A^a + n^I B_I + n^a B_a\,.
} \subsubsection*{Supersymmetry} The orientifold three- and seven-planes typically present in the background preserve a particular combination of the $\mathcal N=2$ supercharges in the four-dimensional theory, leading to $\mathcal N=1$. However, D3-branes wrapping three-cycles in the compact space break supersymmetry further. Using the $\kappa$-symmetry formalism for D-branes and following the analysis of \cite{Bergshoeff:1996tu,Bergshoeff:1997kr,Marino:1999af,Kachru:1999vj}, we find the following: \begin{itemize} \label{page_susy} \item A D3-brane wrapping an orientifold-even three-cycle $\Gamma_3\in H_{3+}(\mathcal X)$ breaks supersymmetry completely. The volumes of such three-cycles are expected to vanish, in agreement with the classification of toroidal orientifolds in \cite{Lust:2006zh} where the $h^{2,1}_+$ sector belongs to the twisted sector. Alternatively, this conclusion is reached by projecting the $\mathcal N=2$ expression to $\mathcal N=1$. \item A D3-brane wrapping an orientifold-odd three-cycle $\Gamma_3\in H_{3-}(\mathcal X)$ preserves at most one-half of the $\mathcal N=1$ supersymmetry in four dimensions, that is, such a D3-brane preserves at most two real supercharges. To do so, a calibration condition of the following form has to be satisfied \eq{ \label{calibration_001} \mbox{Re}\bigl(e^{i\hspace{1pt}\theta} \Omega\hspace{1pt}\bigr)\bigr\rvert_{\Gamma_3} = e^{-K_{\rm cs}/2}\,d\mbox{vol}(\Gamma_3) \,, \hspace{40pt} \mbox{Im}\bigl(e^{i\hspace{1pt}\theta} \Omega\hspace{1pt}\bigr)\bigr\rvert_{\Gamma_3} =0\,, } where $\theta\in\mathbb R$ is an arbitrary phase. 
The phase $\theta$ has to be equal for all such D3-branes present in the background.\footnote{ For space-time filling D-branes the angle $\theta$ is typically fixed by comparing with corresponding orientifold planes, but such a mechanism is not available here.} \end{itemize} The important point is that D3-branes wrapping three-cycles in the Calabi-Yau orientifold do not preserve the same supersymmetry as the orientifold three- and seven-planes. In particular, they are not BPS and hence the usual stability arguments do not apply --- in general such states are therefore unstable. We come back to this point in section~\ref{sec_stability} below. Our findings above are consistent with the projection of the $\mathcal N=2$ BPS condition for Calabi-Yau compactifications. More concretely, D3-branes wrapping three-cycles in Calabi-Yau three-folds are known to satisfy the inequality \cite{Becker:1995kb} \eq{ \text{Vol}(\Gamma_{3})\geq e^{K_{\rm cs}/2} \left| \int_{\Gamma_{3}}\Omega\hspace{1pt}\right| , \label{eq:ineqBBS} } where the equality corresponds to BPS states in the $\mathcal N=2$ theory. When projecting this condition to $\mathcal N=1$ via the orientifold projection, we expect corrections to the relation \eqref{eq:ineqBBS}. However, we expect these to be under control in the large complex-structure regime. Furthermore, since these D3-branes wrap topologically non-trivial cycles in the Calabi-Yau orientifold while being non-BPS, the backreaction on the geometry should be taken into account. \subsubsection*{Four-dimensional particles} D3-branes wrapping three-cycles in the compact space can be interpreted as particles in the four-dimensional non-compact theory. The DBI-part of a D-brane action with $\mathcal F=0$ has the form $\mathcal S_{{\rm D}p}\supset - T_p \int_{\Gamma} e^{-\phi}\hspace{1pt} d\mbox{vol}$, where $T_p$ denotes the tension of the brane.
Going then to Einstein frame we find the following action for D3-branes wrapping $\Gamma_3$ \eq{ \label{action_redux_001} \mathcal S_{{\rm D}3} &= - T_3 \int_{\mathbb R \times \Gamma_3} ds\wedge \text{dvol}(\Gamma_3) - \mu_3 \int_{\mathbb R \times \Gamma_3} C_4 \\[4pt] &\sim - m \int_{\mathbb R} ds - \int_{\mathbb R} \bigl(m_a \hspace{1pt} U^a + n^a\hspace{1pt} V_a\bigr) \,, } where $ds$ is the line element of the world-line in four dimensions. We also included electric as well as magnetic couplings of the D3-brane to the gauge fields and combine the corresponding charges into the vector $\mathfrak q=(m_a,n^a)$. Depending on whether the D3-brane wraps an orientifold-even or an orientifold-odd three-cycle, we find the following two classes of particles: \begin{itemize} \item D3-branes wrapping odd three-cycles $\Gamma_3 \in H_{3-}(\mathcal X)$ give rise to massive particles in four dimensions. These are not charged under the $U(1)$ gauge symmetries and therefore are in general unstable. Using \eqref{action_redux_001} and the equality in \eqref{eq:ineqBBS} we can then read off the mass (in units of $\sqrt{8\pi} M_{\rm Pl}$) and charges of these particles as \eq{ \label{d3_odd} m = e^{K_{\rm cs}/2} \Bigl|m_IX^I - n^IF_I \Bigr| + \ldots \,, \hspace{40pt} \mathfrak q = 0 \,, } where $\ldots$ correspond to the aforementioned corrections in the orientifold setting which we expect are subleading in the large complex-structure limit. \label{page_masses_d3} \item D3-branes wrapping even three-cycles $\Gamma_3 \in H_{3+}(\mathcal X)$ give rise to massless four-dimensional particles. This can be explained by noting that explicit constructions of orientifold-even three-cycles in type IIB orientifolds belong to the twisted sector and have vanishing volume \cite{Lust:2006zh}, or alternatively by projecting the $\mathcal N=2$ result.
However, these massless particles are charged under the $U(1)$ gauge symmetries with electric charge $m_a$ and magnetic charge $n^a$, that is \eq{ \label{d3_even} m = 0 \,, \hspace{40pt} \mathfrak q = \binom{m_a}{n^a}\,. } \end{itemize} D3-branes wrapping a general three-cycle can of course be massive as well as charged, but in these cases the $U(1)$ charges and their masses are not related to each other. \subsection{Stability} \label{sec_stability} Above we have argued that D3-branes wrapping three-cycles in the compact space do not preserve the same supersymmetry as the orientifold three- and seven-planes. They are therefore not BPS states of the four-dimensional theory and thus in general unstable. \subsubsection*{D3-branes wrapping orientifold-odd three-cycles} Let us start by briefly recalling the $\mathcal N=2$ situation. In this case, D3-branes wrapping special-Lagrangian three-cycles in the compact space give rise to BPS states of the four-dimensional theory. Such states feel an equilibrium of repulsive (gauge interactions) and attractive (gravitational and scalar interactions) forces, which allows them to be stable. Furthermore, there can be infinite towers of BPS states as they do not feel a force among themselves. Nevertheless, in order to avoid decay at walls of marginal stability the authors of \cite{Grimm:2018ohb} use a monodromy-orbit formalism to find appropriate bound states building these towers. In the $\mathcal N=1$ orientifold setting, as discussed above, the gauge vectors associated to orientifold-odd three-cycles $\Gamma_3 \in H_{3-}(\mathcal X)$ are projected out. The D3-branes wrapping such cycles are massive but uncharged, and therefore feel an attractive potential which usually manifests itself via a tachyonic mode and subsequent tachyon condensation and decay \cite{Sen:1999mg,Sen:1999nx}.
This is reminiscent of the D-brane of the bosonic string \cite{Sen:1999mg,Sen:1999nx}, although in our case the spatial extension of the brane wraps three-cycles of the Calabi-Yau manifold. As a consequence, without a proper analysis taking into account the backreaction onto the geometry the final state is unknown. The same reasoning applies to towers of such states which are again unstable, reflecting the fact that the monodromy-orbit formalism of \cite{Grimm:2018ohb} is not applicable to these uncharged D3-particles. To summarize, D3-branes wrapping orientifold-odd three-cycles of the Calabi-Yau are in general unstable. We make the following remarks: \begin{itemize} \item Around equation~\eqref{calibration_001} we discussed that some D3-brane configurations preserve $\mathcal{N}=1/2$ supersymmetry in four dimensions. These states might have some notion of stability, but a proper analysis is beyond the scope of this work. \item We are not considering non-BPS stability by means of K-theoretic discrete symmetries, since states which are K-theoretically stable are not candidates to build towers of states. In particular, the open-string tachyon for stable non-BPS branes on orientifolds is projected out by world-sheet parity, which can only happen for a single brane, whereas towers of more than one brane are unstable and decay \cite{Sen:1999mg}. \end{itemize} As we have argued, at a generic point in complex-structure moduli space D3-branes wrapping cycles $\Gamma_3 \in H_{3-}(\mathcal X)$ are unstable. However, when approaching the large complex-structure limit their mass and their scalar interactions approach zero and these states become ``asymptotically stable''. In particular, the imbalance of forces gets reduced and backreaction effects become less and less relevant. A motivation for this interpretation is the analysis of emergence for the orientifold-odd sector in section~\ref{emergencesect}.
\subsubsection*{D3-branes wrapping orientifold-even three-cycles} D3-branes wrapping orientifold-even three-cycles $\Gamma_3 \in H_{3+}(\mathcal X)$ in the compact space correspond to massless charged particles in the four-dimensional theory, which in the context of the swampland program have been discussed for instance in \cite{Heidenreich:2017sim,Heidenreich:2019zkl}. Many of these particles are however unstable and can decay into their elementary constituents, since there is no mass associated to them. The stable states correspond to the following fundamental charges (for a fixed $a$): \eq{ \label{states_elem} \arraycolsep2pt \begin{array}{lcl@{\hspace{30pt}}l} (m_a,n^a)&=&(1,0) & \mbox{massless electric particles,} \\[4pt] (m_a,n^a)&=&(0,1) & \mbox{massless magnetic monopoles.} \end{array} } This reasoning is supported by the $\mathcal{N}=2$ perspective, where it was argued in \cite{Strominger:1995cz} that the conifold singularity is resolved by the addition of the electric state $(m_a,n^a)=(1,0)$, while the states $(m_a,n^a)=(\mathbb Z,0)$ should decay to the previous one in order to match the one-loop beta function for the electric gauge coupling. Since the orientifold locus $X^a=0$ resembles a conifold singularity, it is not surprising that we inherit these stable states in the orientifold theory. \subsubsection*{Remark} We close this section by remarking that D3-branes wrapping general three-cycles can of course be massive as well as charged, but in these cases the $U(1)$ charges and their masses are not related to each other. As a consequence, they ultimately decay to the previous cases. \section{Swampland conjectures} \label{sect_swampconj} We now want to test several swampland conjectures for particles in the setting introduced above. These are the weak gravity conjecture, the swampland distance conjecture and the emergence proposal. \subsection{Weak gravity conjecture} We begin our discussion of swampland conjectures with the weak gravity conjecture.
\subsubsection*{Recalling the conjectures} We start by briefly recalling the electric weak gravity conjecture \cite{ArkaniHamed:2006dz} in its modern formulation \cite{Palti:2019pca}: \begin{itemize} \item[] Consider a $U(1)$ gauge theory with gauge coupling $g$ coupled to gravity and described by the following action \begin{equation} \mathcal S= \frac{1}{2} \int_{\mathbb R^{3,1}} \left[ M_{\rm P}^2 \, R\star 1 - \frac{1}{2\hspace{1pt} g^2}\hspace{1pt} F\wedge\star F + \ldots\hspace{1pt} \right], \label{eq:actionWGC} \end{equation} where $R$ denotes the Ricci scalar and $F$ is the $U(1)$ field-strength two-form. Then there exists a particle in the theory with mass $m$ and charge $\mathfrak q$ satisfying the inequality \begin{equation} m \leq \sqrt{2}\hspace{1pt} \mathfrak q\hspace{1pt} g M_{\rm P} \, . \label{eq:eWGC} \end{equation} \end{itemize} This conjecture has been refined in many ways, and for our purposes the weak gravity conjecture with scalar fields will be relevant \cite{Palti:2017elp}. We recall this conjecture as follows: \begin{itemize} \item[] Consider a gravity theory with massless scalar fields $z^i$ and $U(1)$ gauge fields $U^a$ described by the action \eqref{action_001}. Then there should exist a particle with mass $m(z)$ satisfying the bound \begin{equation} \label{conjecture_wg} \mathcal{Q}^2 \geq m^2+G^{i\overline j}(\partial_{z^{i}} m)(\partial_{\overline z^{j}}m) \,, \hspace{40pt} \mathcal Q^2 = \frac{1}{2} \hspace{1pt}\mathcal Q^T \mathcal M \hspace{1pt}\mathcal Q\,, \end{equation} where $\mathcal Q$ denotes the vector $\mathcal Q = (m_I,m_a,n^I,n^a)$, $\mathcal M$ is a positive-definite matrix similar to the one defined in \eqref{matrix_m} and $G^{i\overline j}$ is the inverse of the K\"ahler metric \eqref{metric_k}.
\end{itemize} \subsubsection*{Verifying the conjectures} We now discuss how these conjectures are satisfied in the orientifold setting introduced in the previous section: \begin{itemize} \item For D3-branes wrapping odd three-cycles $\Gamma_3 \in H_{3-}(\mathcal X)$ we recall from \eqref{d3_odd} that their $U(1)$ charges $\mathfrak q=(m_a,n^a)$ are vanishing. These particles do not provide the required charged particle for the weak gravity conjecture \eqref{eq:eWGC}, but one can check that they verify the scalar weak gravity conjecture \eqref{conjecture_wg} without $U(1)$ gauge interactions \cite{Palti:2017elp}. \item D3-branes wrapping even three-cycles $\Gamma_3 \in H_{3+}(\mathcal X)$ are massless and their charges $\mathfrak q$ are given by the wrapping numbers $(m_a,n^a)$. The electric weak gravity conjecture \eqref{eq:eWGC} is therefore verified. Furthermore, using \eqref{d3_even} we find that \eq{ \frac{1}{2}\hspace{1pt} \binom{m}{n}^T \mathcal M\hspace{1pt} \binom{m}{n} \geq 0 \,, } which is satisfied since the matrix $\mathcal M$ is positive definite. \end{itemize} We therefore conclude that the electric weak gravity conjecture \eqref{eq:eWGC} and the weak gravity conjecture with scalar fields \eqref{conjecture_wg} are trivially satisfied. \subsubsection*{Completeness conjecture and tower versions} The statement of the completeness conjecture is that a gravity theory with a gauge symmetry must have states of all possible charges under that gauge symmetry \cite{Polchinski:2003bq}. In our case this conjecture is satisfied by D3-branes wrapping orientifold-even cycles $\Gamma_3\in H_{3+}(\mathcal X)$ with wrapping numbers $(m_a,n^a)$. However, in general these states are not stable against decay to the elementary ones \eqref{states_elem} and hence the completeness conjecture is only satisfied as long as stability is not required.
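The positive-definiteness of $\mathcal M$ invoked above can be illustrated numerically. The following sketch is our own illustration and not the matrix of \eqref{matrix_m} itself: it assumes the standard special-geometry form of $\mathcal M$ built from a sample gauge-kinetic matrix $\mathcal N = R + i\hspace{1pt}I$ with $I$ negative definite, and checks that the resulting quadratic form in the charges is non-negative:

```python
import numpy as np

# Sample gauge-kinetic matrix N = R + i*I (illustrative values only):
# I = Im N must be negative definite, R = Re N symmetric.
I = -np.diag([2.0, 3.0])
R = np.array([[0.0, 1.0], [1.0, 0.0]])
I_inv = np.linalg.inv(I)

# Standard special-geometry form of the charge matrix M (assumed here).
M = np.block([[-(I + R @ I_inv @ R), R @ I_inv],
              [I_inv @ R,            -I_inv   ]])

# M is symmetric positive definite: all eigenvalues are positive ...
assert np.all(np.linalg.eigvalsh(M) > 0)

# ... hence (1/2) Q^T M Q >= 0 for every integer charge vector Q = (m, n).
rng = np.random.default_rng(0)
for _ in range(100):
    Q = rng.integers(-5, 6, size=4).astype(float)
    assert 0.5 * Q @ M @ Q >= 0.0
```

Positive-definiteness follows here from the Schur complement of $\mathcal M$ being $-I>0$, so the check succeeds for any symmetric $R$ and negative-definite $I$.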
The completeness conjecture motivates the tower versions of the weak gravity conjecture \cite{Heidenreich:2015nta,Heidenreich:2016aqi,Andriolo:2018lvp}, which suggests that for the D3-brane particles discussed here the tower versions are satisfied only if stability of the states is relaxed. Furthermore, in \cite{Palti:2019pca} it is argued that avoidance of generalized global symmetries for a $U(1)$ gauge theory coupled to gravity requires at least one charged state. Remarkably, in \cite{Gaiotto:2014kfa} it is claimed that an extension of the previous argument to discrete generalized global symmetries requires all possible charges in order to break any possible discrete symmetry. It would be interesting to explore the compatibility between our results concerning the completeness conjecture and \cite{Gaiotto:2014kfa}. \subsubsection*{Summary and remark} We now want to briefly summarize our findings regarding the weak gravity conjecture and make the following remark: \begin{itemize} \item We have shown that the weak gravity conjectures for particles are verified in type IIB orientifold compactifications with O3- and O7-planes for D3-branes wrapping three-cycles. However, the tower versions cannot be claimed to be verified as long as stability is required. \item Analogously to the type IIB orientifolds with O3-/O7-planes studied here, the same results are obtained for type IIB orientifolds with O5-/O9-planes and for type IIA orientifolds. \item In \cite{Font:2019cxq} it was shown that in the orientifold setting BPS D5-branes wrapping three-cycles verify the weak gravity conjecture as well. These are however extended objects and not particles, and they are charged under a different gauge symmetry. \end{itemize} \subsection{Swampland distance conjecture} We next consider the swampland distance conjecture for type IIB orientifolds with O3-/O7-planes introduced above.
\subsubsection*{Recalling the conjecture} The swampland distance conjecture has been formulated in \cite{Ooguri:2006in} and refined in \cite{Klaewer:2016kiy}. The refined version reads as follows: \begin{itemize} \item[] Consider a theory coupled to gravity with a moduli space $\mathcal M_{\rm mod}$ parametrized by the expectation values of some fields without potential. Let the geodesic distance between any two points $P,Q \in \mathcal{M}_{\rm mod}$ be denoted $d(P, Q)$. If $d(P,Q) \gtrsim M_{\rm P}$, then there exists an infinite tower of states with mass scale $m$ such that \begin{equation} m(Q)<m(P)\,e^{-\mu\frac{d(P,Q)}{M_{\rm P}}}\,, \label{eq:RSDC} \end{equation} where $\mu$ is some positive constant of order one. This statement holds even for fields with a potential, where the moduli space is replaced with the field space in the effective theory. \end{itemize} We now want to verify this conjecture for the setting of the previous section. In particular, we start from a generic point in complex-structure moduli space and approach a large complex-structure point. The results obtained in \cite{Grimm:2018ohb} for the $\mathcal N=2$ theory suggest that also in our $\mathcal N=1$ orientifold setting D3-branes wrapping three-cycles give rise to the infinite tower of states. \begin{itemize} \item We have argued that the mass of D3-branes wrapping even three-cycles $\Gamma_3 \in H_{3+}(\mathcal X)$ vanishes and hence they do not play a role for the swampland distance conjecture. \item For D3-branes wrapping odd three-cycles $\Gamma_3 \in H_{3-}(\mathcal X)$ the $U(1)$ charges are vanishing but they are massive (cf.~equation \eqref{d3_odd}). These states are therefore of interest for the swampland distance conjecture. \end{itemize} \subsubsection*{Geodesic distance} Let us consider a generic point in complex-structure moduli space for which the one-loop and non-perturbative corrections to the prepotential \eqref{prepotential_full} can be ignored.
The K\"ahler metric is computed via \eqref{metric_k}, and the geodesic equation for such K\"ahler geometries reads in general \eq{ \label{geodesic_001} 0 = \ddot z^i + \Gamma^i_{jk} \dot z^j \dot z^k\,, \hspace{50pt} \Gamma^i_{jk} = G^{i\overline m} \partial_j G_{k\overline m} \,, } where a dot indicates a derivative with respect to the parametrization $t$ of the geodesic. The geodesic distance is defined as \eq{ d(P,Q) = \left\lvert\int_{t_1}^{t_2} dt \sqrt{2\hspace{1pt} \dot z^i\, G_{i\overline j}\, \dot{\overline{z}}{}^{\overline j}} \right\rvert \,, \hspace{50pt} \arraycolsep2pt \begin{array}{lcl} z^i(t_1) &=& P \in \mathcal{M}_{\rm mod}\,, \\[4pt] z^i(t_2) &=& Q \in \mathcal{M}_{\rm mod}\,. \end{array} } For simplicity we now restrict ourselves to a situation with one complex-structure modulus, that is $h^{2,1}_-=1$. We use the notation $z^1\equiv z=u+i\hspace{1pt} v$, and the moduli space is a hyperbolic space with metric \eq{ G_{1\overline 1} = \frac{3}{4}\hspace{1pt}\frac{1}{v^2} \,. } The solutions to the geodesic equation \eqref{geodesic_001} are well known, and correspond to lines with constant $u$ and to circles intersecting $v=0$ at a right angle \eq{ \label{geos_004} z_{(1)} = \alpha + i\hspace{1pt} \beta \hspace{1pt} e^{ t} \,, \hspace{50pt} z_{(2)} = \alpha + \frac{\beta}{\cosh(t)}\hspace{1pt} \bigl[ \sinh( t) + i \bigr]\,, } where $\alpha,\beta={\rm const}$ and $\beta>0$. The geodesic distance for these two cases is computed as follows \eq{ \label{geodist} d_{(1)}(P,Q) & = \sqrt{\frac{3}{2}}\,\lvert t_Q-t_P\rvert = \sqrt{\frac{3}{2}} \left\lvert \log \frac{v_Q}{v_P} \right\rvert \,, \\ d_{(2)}(P,Q) &= \sqrt{\frac{3}{2}}\,\lvert t_Q-t_P\rvert\,. } \subsubsection*{Verifying the conjecture} As mentioned above, motivated by the $\mathcal N=2$ results we expect that D3-branes wrapping orientifold-odd three-cycles will provide the tower of states which becomes exponentially light. 
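Before turning to the towers, a quick numerical sanity check (a sketch with illustrative values, not part of the derivation) confirms the distance $d_{(1)}$ in \eqref{geodist} by direct integration along the vertical geodesic, and extracts the decay rates $\mu$ from the leading large-$v$ mass scalings $m\propto v^{-3/2}$ and $m\propto v^{-1/2}$ implied by the prefactor $(8\hspace{1pt} d_{111}v^3)^{-1/2}$ of the mass formulas discussed next:

```python
import numpy as np

# Hyperbolic metric G_{1 1bar} = 3/(4 v^2) on the one-modulus
# complex-structure moduli space, with z = u + i v.
metric = lambda v: 3.0 / (4.0 * v**2)

# Vertical geodesic z(t) = alpha + i * beta * e^t, so v(t) = beta * e^t.
beta, t_P, t_Q = 1.0, 0.0, 5.0
v = lambda t: beta * np.exp(t)

# Distance d = |∫ dt sqrt(2 zdot G zbardot)| along the geodesic.
ts = np.linspace(t_P, t_Q, 100001)
zdot_sq = (beta * np.exp(ts))**2                     # |dz/dt|^2
integrand = np.sqrt(2.0 * metric(v(ts)) * zdot_sq)   # = sqrt(3/2), constant
d_numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ts))

d_closed = np.sqrt(1.5) * abs(np.log(v(t_Q) / v(t_P)))
assert abs(d_numeric - d_closed) < 1e-8              # reproduces d_(1)

# Leading mass scalings m ~ v^{-3/2} and m ~ v^{-1/2}; the decay rate
# follows from writing m(Q) = m(P) exp(-mu * d):
mu_32 = 1.5 * abs(np.log(v(t_Q) / v(t_P))) / d_closed
mu_12 = 0.5 * abs(np.log(v(t_Q) / v(t_P))) / d_closed
assert abs(mu_32 - np.sqrt(3.0 / 2.0)) < 1e-12       # mu = sqrt(3/2)
assert abs(mu_12 - np.sqrt(1.0 / 6.0)) < 1e-12       # mu = sqrt(1/6)
```

Both decay rates are order-one constants, as required by \eqref{eq:RSDC}.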
In our example with $h^{2,1}_-=1$ we see that D3-branes with wrapping numbers $n^0\neq0$ or $n^1\neq0$ do not satisfy the swampland distance conjecture \eqref{eq:RSDC} in the large complex-structure limit $v\to\infty$; they are expected to provide the required states in the small complex-structure limit. We focus here on the large complex-structure limit and D3-branes with wrapping numbers $m_0$ and $m_1$: \begin{itemize} \item We first consider D3-branes satisfying the calibration condition \eqref{calibration_001}. In this case the mass is given by \eq{ m(z)= \frac{1}{\sqrt{8\hspace{1pt} d_{111}v^3}}\, \mbox{Re}\left[ e^{i\hat\theta} \bigl( m_0 + m_1 z + n^0 d_{111} z^3 - 3\hspace{1pt} n^1 d_{111} z^2 \bigr) \right], } with $d_{111}$ the triple intersection number introduced in \eqref{prepotential_full} and where we defined $\hat\theta = \theta + \arg( X^0)$. If we approach the large complex-structure limit via the first geodesic in \eqref{geos_004}, it turns out that there are only two choices for $\hat\theta$ compatible with the calibration condition. The towers satisfying the swampland distance conjecture, together with the corresponding constants $\mu$, are the following \eq{ \arraycolsep2pt \begin{array}{@{}lcl@{\hspace{30pt}}lcl@{\hspace{15pt}}lcl@{\hspace{15pt}}lcl@{\hspace{30pt}}lcl@{}} \hat\theta & = & 0\,, & m_0 & > & 0\,, & m_1 & = & 0\,, & n^a&=&0 \,, & \mu &=& \sqrt{3/2}\,, \\[4pt] \hat\theta & = & \pi/2\,, & m_0 & = & 0\,, & m_1 & > & 0\,, & n^a&=&0 \,, & \mu &=& \sqrt{1/6}\,. \end{array} } \item Next, we relax the calibration condition and consider states satisfying the equality in \eqref{eq:ineqBBS}. These are the states inherited from the $\mathcal N=2$ theory, and their mass is given by \eq{ m(z) = \frac{1}{\sqrt{8\hspace{1pt} d_{111}v^3}}\, \Bigl| m_0 + m_1 z + n^0 d_{111}z^3 -3\hspace{1pt} n^1 d_{111}z^2 \Bigr|\,.
\label{eq:mag} } Approaching the large complex-structure limit again via the first geodesic in \eqref{geos_004}, we see that the tower of states satisfying the swampland distance conjecture is given by \eq{ \arraycolsep2pt \begin{array}{@{}lcl@{\hspace{30pt}}lcl@{\hspace{15pt}}lcl@{\hspace{15pt}}lcl@{\hspace{30pt}}lcl@{}} \hphantom{\hat\theta} & \hphantom{=} & \hphantom{\pi/2\,,} & m_0 & \in & \mathbb Z\,, & m_1 & \in & \mathbb Z\,, & n^a&=&0 \,, & \mu &=& \sqrt{1/6}\,. \end{array} } \item We also want to consider the second geodesic in \eqref{geos_004}, for which the large complex-structure limit is characterized by $\beta\ggg 1$ and $t\to 0$. For this geodesic we do not find any tower of D3-brane states satisfying the swampland distance conjecture. However, since this path is along the boundary of the complex-structure moduli space it is not clear whether the conjecture applies. It would be interesting to study this point further. \end{itemize} To summarize, along the path $u=\textrm{const.}$ and $v\to\infty$ we have verified the swampland distance conjecture for D3-particles in type IIB orientifold compactifications, however, the tower of states is in general not stable. Furthermore, we were not able to verify the swampland distance conjecture for a path along the boundary of complex-structure moduli space. \subsubsection*{Relation between swampland distance and weak gravity conjectures} For $\mathcal N=2$ theories originating from type IIB Calabi-Yau compactifications it was found in \cite{Grimm:2018ohb} that the swampland distance as well as the weak gravity conjecture are verified by BPS D3-branes wrapping special Lagrangian three-cycles. However, for $\mathcal N=1$ theories obtained via an orientifold projection we have noted that the $h^{2,1}$ vector multiplets in $\mathcal N=2$ are projected to $h^{2,1}_-$ chiral and $h^{2,1}_+$ vector multiplets in $\mathcal N=1$ (cf.~equation \eqref{multiplets_split}).
In particular, in the orientifold theory both multiplets are independent of each other, and there exist for instance orientifolds with $h^{2,1}_+=0$. Thus, D3-brane particles which are needed for the swampland distance conjecture are uncharged and trivially do not satisfy the electric weak gravity conjecture \eqref{eq:eWGC} --- and charged D3-brane particles satisfy the weak gravity conjecture but are not needed for the swampland distance conjecture. Hence, the relation between the weak gravity and swampland distance conjectures (and emergence to be discussed below) is determined by the bulk supergravity spectrum. This observation extends to similar cases for which D-branes couple to a bulk spectrum in $\mathcal N=2$ which splits under the orientifold projection into different $\mathcal N=1$ multiplets. For example, in K\"ahler moduli space the same happens for D-branes wrapping two-cycles (e.g.~the D-strings studied in \cite{Font:2019cxq}) coupling to both $J$ and $C_{2}$. It is also worth mentioning that, similarly to the complex-structure moduli, the K\"ahler twisted sector of the examples in \cite{Lust:2006zh} contains the $h_{-}^{1,1}$ cycles, in analogy with the situation studied in this work. \subsection{Emergence proposal} \label{emergencesect} In this section we discuss the emergence proposal for D3-branes wrapping three-cycles in type IIB orientifold compactifications with orientifold three- and seven-planes. \subsubsection*{Recalling the proposal} The emergence proposal has been formulated in \cite{Grimm:2018ohb,Palti:2019pca}, which we recall as follows: \begin{itemize} \item[] The dynamics (kinetic terms) for all fields are emergent in the infrared by integrating out towers of states down from an ultraviolet scale $\Lambda_{\rm UV} $ which is below the Planck scale.
\end{itemize} At a practical level this means that at a UV scale $\Lambda_{\rm UV}$, the renormalization group flow has a boundary condition on all fields forcing them to have vanishing kinetic terms. Let us make the following remarks concerning the emergence proposal: \begin{itemize} \item For towers of states with equidistant mass and charge separation and without taking into account higher-loop corrections, it has been argued in \cite{Palti:2019pca} that imposing emergence of gravity one obtains the species scale $\Lambda_{\rm s}$ \cite{Dvali:2007hz} as the UV scale. \item Imposing emergence of a gauge field and a scalar field (to one-loop order) one recovers the magnetic weak gravity and the swampland distance conjecture, respectively. In \cite{Grimm:2018ohb} this mechanism has been discussed for type IIB Calabi-Yau compactifications with towers of states given by D3-branes wrapping special Lagrangian three-cycles. \item For orientifold compactifications with $\mathcal N=1$ supersymmetry in four dimensions we expect a modified picture. Indeed, the non-renormalization theorem for $\mathcal{N}=2$ changes in the case of $\mathcal{N}=1$ orientifolds as the K\"ahler potential $K_{\rm cs}$ receives loop corrections in terms of Eisenstein series \cite{Haack:2017vko}. These corrections are however subleading and expected to be irrelevant as we approach the large complex-structure limit. Besides, the gauge kinetic functions remain one-loop exact. \end{itemize} \subsubsection*{Emergence for the orientifold-odd third homology} We now analyze the emergence proposal for the orientifold-odd third homology. As discussed above, D3-branes wrapping cycles $\Gamma_3\in H_{3-}(\mathcal X)$ are massive but unstable.
Integrating out these states gives rise to logarithmic corrections to the complex-structure moduli-space metric of the schematic form \eq{ \label{running_1} G_{i\overline j} \bigr\rvert_{\rm IR} = G_{i\overline j} \bigr\rvert_{\rm UV} + c_- \sum_{\alpha} \bigl(\partial_{z^i} m^{(\alpha)}\bigr) \bigl(\partial_{\overline z^j} m^{(\alpha)}\bigr) \log\frac{\Lambda_{\rm UV}}{m^{(\alpha)}} \,, } where $c_-$ is a normalization constant and the sum runs over all D3-brane states with mass below the cutoff scale $\Lambda_{\rm UV}$. Following the argumentation of \cite{Grimm:2018ohb,Palti:2019pca} for the emergence proposal in the $\mathcal N=2$ setting, the metric $G_{i\overline j}\rvert_{\rm UV}$ in the UV vanishes while the sum over logarithmic corrections generates the metric $G_{i\overline j}\rvert_{\rm IR}$ in the IR. It is beyond the scope of this work to check \eqref{running_1} explicitly for our setting, but we would like to make the following comments: \begin{itemize} \item For the orientifold setting studied in this work we have argued that the tower of D3-brane states is in general unstable. In order to verify the emergence proposal we therefore have to integrate out infinitely many unstable states, which are however expected to become asymptotically stable in the large complex-structure limit. \item In \cite{Font:2019cxq} it has been proposed that also four-dimensional domain walls contribute to the running of the moduli-space metric. These domain walls are given by BPS D5-branes wrapping three-cycles in the compact space, but to our knowledge it is not known how to integrate out extended objects at the quantitative level. This ambiguity is partially reflected in the unspecified constant $c_-$ in \eqref{running_1}. \item It is curious to note that in the $\mathcal N=1$ setting we have in general all-order loop corrections to the K\"ahler potential but also that the D3-brane particles being integrated out are not stable.
\end{itemize} We believe these questions deserve further investigation in the future. \subsubsection*{Emergence for the orientifold-even third homology} We now consider D3-branes wrapping orientifold-even three-cycles $\Gamma_3\in H_{3+}(\mathcal X)$ in the compact space. These states are massless but are charged under the closed-string $U(1)$ gauge fields $U^a$, and therefore contribute to one-loop corrections to the gauge kinetic function. In particular, in field theory we have \eq{ \label{running_2} \mbox{Re}\hspace{1pt} f^{(\rm f)}_{ab}\hspace{1pt} \bigr\rvert_{\rm IR} &= \mbox{Re}\hspace{1pt} f^{(\rm f)}_{ab}\hspace{1pt} \bigr\rvert_{\rm UV} + c_+ \lim_{m^{(\alpha)}\to0} \sum_{\alpha} \mathfrak q^{(\alpha)}_a \mathfrak q^{(\alpha)}_b \log\frac{\Lambda_{\rm UV}}{m^{(\alpha)}} \,, } where the sum is over all D3-brane particles labelled by $\alpha$, $\mathfrak q^{(\alpha)}_a$ denote their electric charges and $m^{(\alpha)}$ denote their masses which vanish at the orientifold locus. The constant $c_+$ is a normalization constant, and the superscript indicates that this is a field-theory result. As expected, we obtain logarithmic divergences due to massless particles running in the loop.\footnote{There is an ambiguity in performing the sum over all D3-brane particles with mass below $\Lambda_{\rm UV}$ and taking the limit $m^{(\alpha)}\to 0$ in \eqref{running_2}. When first summing and then taking the limit, one obtains a polynomial divergence.} Let us now turn to the string-theory computation. Using the relation \eqref{period_m_001} and the explicit form of the prepotential \eqref{prepotential_full}, we find at the orientifold locus $X^a=0$ (without imposing $F_a=0$) \eq{ \label{lukas} \mbox{Re}\hspace{1pt} f^{({\rm s})}_{ab} = \mbox{Im}\hspace{1pt} F_{ab} = 6\hspace{1pt} d_{abi} v^i + (2\pi)^2 \sum_k k_a k_b \hspace{1pt} n_k \hspace{1pt} \mbox{Re} \log\left( 1 - e^{2\pi i k_i z^i} \right). 
} Here we used again $v^i = \mbox{Im}\hspace{1pt} z^i$ and the superscript indicates that this is the (effective) string-theory result. Following now the philosophy of the emergence proposal \cite{Grimm:2018ohb,Palti:2019pca}, the gauge-kinetic function should vanish in the ultraviolet and the expression in the infrared is generated by loop corrections. In the present situation we argue as follows: \begin{itemize} \item The gauge-kinetic function in the infrared is given by the expression \eqref{lukas}. Let us then recall our discussion from page~\pageref{page_or_sol} and consider the solution \eqref{or_sol_2} to the orientifold condition $F_a=0$. Reinstating the $X^a$-dependence we have \eq{ \label{lll} \mbox{Re}\hspace{1pt} f^{({\rm s})}_{ab} = 6\hspace{1pt} d_{abi} v^i + (2\pi)^2 \lim_{X^a\to 0}\sum_k k_a k_b \hspace{1pt} n_k \hspace{1pt} \mbox{Re} \log\left( - 2\pi i\hspace{1pt} k_a X^a/X^0 \right) . } Recalling furthermore our results for the masses from page~\pageref{page_masses_d3} and performing a similar analysis for D3-branes wrapping orientifold-even cycles, we find for electrically-charged particles that $m=\lim_{X^a\to 0}e^{K_{\rm cs}/2} | m_a X^a |$ (in units of $M_{\rm Pl}$) with $m_{a}$ given by \eqref{states_elem}. Using this expression in \eqref{lll}, we obtain up to prefactors and regular terms \eq{ \label{das_boot} \mbox{Re}\hspace{1pt} f_{ab}^{({\rm s})} \sim \lim_{m^{(\alpha)}\to0} \sum_{\alpha} k^{(\alpha)}_a k^{(\alpha)}_b \log \frac{M_{\rm s}}{m^{(\alpha)}}\,, } where $M_{\rm s}$ denotes the string scale and where we relabeled the vectors $k$ appearing in the sum by $\alpha$. This expression has the same form as the field-theoretical one-loop correction shown in \eqref{running_2}, and thus it is plausible that the emergence proposal is verified. \item The above-mentioned string-theory result for the gauge-kinetic function is not complete.
Indeed, the results of \cite{Strominger:1995cz} imply that \eqref{lukas} receives corrections in the ultraviolet from D3-branes wrapping once around orientifold-even cycles. Properly taking into account these states cures the logarithmic divergence in the Calabi-Yau moduli space, and hence the gauge-kinetic functions \eqref{lukas} and \eqref{lll} are expected to be finite in a quantum-gravity theory. In particular, in the infrared we expect from string theory a behavior of the form \eq{ \label{ffull} \mbox{Re}\hspace{1pt} f^{({\rm full})}_{ab}\hspace{1pt} \bigr\rvert_{\rm IR} = 6\hspace{1pt} d_{abi} v^i + \mbox{finite}\,, } with the superscript indicating the full expression. \end{itemize} To summarize, we have argued that the emergence proposal for D3-branes wrapping orientifold-even three-cycles can plausibly be verified, though we have presented only a qualitative analysis here and a more thorough check including numerical factors and regular terms would be necessary. \subsubsection*{Remarks and open questions} Let us make the following remarks concerning our discussion of emergence in the orientifold-even sector: \begin{itemize} \item In our analysis we have encountered a difference between the field-theory and the string-theory computation. In particular, the field-theory expression \eqref{running_2} for the gauge-kinetic function is logarithmically divergent whereas the full string-theory expression \eqref{ffull} is expected to be finite. \item The gauge-kinetic function \eqref{ffull} is the expression expected from string theory once the effect of D3-branes wrapping orientifold-even cycles on the moduli-space geometry has been taken into account. For the emergence proposal this gauge coupling in the infrared should be obtained entirely from loop corrections with charged particles running in the loop; however, in our case these charged particles are massless and do not couple to the complex-structure moduli.
It is therefore not clear to us how the $v^i$ as well as $d_{abi}$ dependence in \eqref{ffull} can be obtained. \begin{itemize} \item A potential answer to this question is that there exist particles which couple simultaneously to the $U(1)$ gauge fields and to the complex-structure moduli, with a coupling that depends on $d_{abi}$, such that they generate the linearly divergent term when their effect is taken into account in \eqref{running_2}. However, there are no other known states with this property. \item Another possibility is that the triple intersection numbers $d_{abi}$ vanish. This appears to be too strong a requirement, since in T-dual settings these can be non-vanishing. \item Since D3-branes wrapping orientifold-even three-cycles do not preserve the bulk supersymmetry (see our discussion on page~\pageref{page_susy}), the gauge-kinetic function may receive higher-loop corrections which can generate a term linear in $v^i$. \end{itemize} \item We also note that when taking the large complex-structure limit in \eqref{ffull}, the gauge symmetry becomes global. Although there exists an obstruction against reaching the limit $v^i\to \infty$ due to towers of orientifold-odd D3-branes, it is unexpected to find a global gauge symmetry associated with integrating out states uncharged under it. In particular, this behavior is usually associated with integrating out a tower of states which becomes massless as in \eqref{eq:RSDC} and which is charged under the gauge symmetry. However, in our setting we have not identified a tower of states satisfying both properties. \end{itemize} We leave these questions open here and refer to section~6 of \cite{martin:2019} for a more detailed description and further attempts to resolve them. \section{Summary and conclusions} \label{sect_conc} In this paper we have studied swampland conjectures for type IIB orientifolds with O3- and O7-planes.
We have allowed for orientifold projections with $h^{2,1}_+\neq 0$, which leads to closed-string $U(1)$ gauge fields in four dimensions. The weak gravity conjecture, the swampland distance conjecture and the emergence proposal have been investigated in the context of the $\mathcal N=2$ parent theory in \cite{Grimm:2018ohb}, and here we were interested in the question \textit{What happens to the $\mathcal N=2$ analysis when we perform an orientifold projection?} \subsubsection*{Summary} Let us summarize the main results of our analysis: \begin{itemize} \item The relevant objects for the above-mentioned swampland conjectures in our setting are D3-branes wrapping three-cycles in the compact space. In the $\mathcal N=2$ theory these couple to vector multiplets, and they give rise to four-dimensional particles charged under $U(1)$ gauge fields and with mass depending on the complex-structure moduli. In the orientifold theory the $\mathcal N=2$ vector multiplets split into vector and chiral multiplets of $\mathcal N=1$ supergravity as illustrated in \eqref{multiplets_split}. Correspondingly, D3-branes can be separated into massive-uncharged (orientifold-odd) and massless-charged (orientifold-even) particles in four dimensions. While for $\mathcal N=2$ these states can be BPS and therefore can be stable, for the $\mathcal N=1$ theory they are in general unstable and non-supersymmetric. This suggests that the $\mathcal N=1$ swampland conjectures studied in this paper are satisfied only by states that are in general unstable. \item We have illustrated that the weak gravity conjecture \cite{ArkaniHamed:2006dz} and the weak gravity conjecture with scalar fields \cite{Palti:2017elp} are trivially satisfied in the $\mathcal N=1$ setting. This is true both for D3-branes wrapping orientifold-odd and orientifold-even three-cycles; however, the tower versions can only be satisfied if the stability requirement is relaxed.
\item We have verified the swampland distance conjecture \cite{Ooguri:2006in,Klaewer:2016kiy} for D3-branes wrapping orientifold-odd three-cycles in the Calabi-Yau three-fold. For the case $h^{2,1}_-=1$ we have studied two types of geodesics, where one geodesic along the boundary of the complex-structure moduli space appears to violate the conjecture. We noted furthermore that the massless-charged D3-brane particles do not contribute to the swampland distance conjecture, and that the tower of states is only asymptotically stable. \item For the emergence proposal \cite{Grimm:2018ohb,Palti:2019pca} we have seen that integrating out D3-branes wrapping orientifold-odd three-cycles gives rise to the correct behavior of the K\"ahler metric for the complex-structure moduli. The analysis in the orientifold-even sector was more ambiguous, but we argued that D3-branes wrapping (collapsing) three-cycles can reproduce the leading divergence in the gauge-kinetic function for the closed-string $U(1)$ gauge fields. However, when properly taking into account the effect of such D3-branes on the moduli-space geometry, it is expected that this divergence is removed. Furthermore, there is the open question of how the leading regular term in the gauge-kinetic function can be obtained from loop corrections, and we briefly discussed possible solutions. \end{itemize} \subsubsection*{Open questions and future directions} We comment on open questions and future research directions: \begin{itemize} \item The closed-string gauge theories originating from the orientifold-even sector have been studied mostly from a supergravity point of view and require a better understanding within string theory.
More concretely, the implications of the orientifold locus $X^a=0$, $F_a=0$ for the world-sheet instanton corrections (cf.~page~\pageref{page_or_loc}) need to be clarified, the properties of D3-branes wrapping three-cycles $\Gamma_3 \in H_{3+}(\mathcal X)$ deserve to be studied further, and the emergence proposal for the gauge theories should be made more explicit. To do so a concrete example would be helpful, for which the results in \cite{Lust:2006zh} provide a good starting point. \item While in the $\mathcal N=2$ setting D3-branes wrapping three-cycles in the compact space can be BPS, this property is lost after orientifolding. More concretely, the corresponding states in the four-dimensional theory no longer preserve supersymmetry and we expect corrections to the mass formula for orientifold-odd D3-branes shown in \eqref{d3_odd}, associated with their instability and with backreaction effects onto the geometry. We have argued that these effects should be subleading as we approach the infinite-distance limit and have disregarded them; however, such corrections should be computed explicitly. This is in line with results obtained in \cite{Blumenhagen:2019vgj}, where logarithmic quantum corrections to various swampland conjectures -- including the de-Sitter swampland conjecture -- have been discussed. At the same time, the authors of \cite{Ooguri:2018wrx} have connected the de-Sitter conjecture to the swampland distance and weak gravity conjectures. If these connections survive in the orientifold setting, it would be interesting to see whether through them the work of \cite{Blumenhagen:2019vgj} can shed light on the concrete form of possible modifications to the mass formula \eqref{d3_odd} and to the swampland distance and weak gravity conjectures in our setting. \item In this work we focused on type IIB orientifolds with O3- and O7-planes.
It would be interesting to extend our discussion to type IIA orientifolds, where a similar splitting of the $\mathcal N=2$ vector multiplet occurs \cite{Grimm:2004ua}. From mirror symmetry we expect the mirror towers/D-branes to exhibit a similar behavior, which for the $\mathcal{N}=2$ case was shown in \cite{Corvilain:2018lgw}. In addition, the E2-instantons analyzed in \cite{vittmann:2019} have properties similar to the case we studied, but in the complex-structure moduli space of type IIA. Furthermore, it would be interesting to explore the role of the uncharged towers of extended objects in the setting of \cite{Font:2019cxq}. \item Many studies of the swampland conjectures are based on $\mathcal{N}\geq2$ supergravity theories (see \cite{DallAgata:2020ino} for recent work with extended supersymmetry). However, in order to make contact with phenomenology, theories with $\mathcal{N}=1$ supersymmetry are more suitable. What we find in our work is that many of the connections between the swampland conjectures and the emergence proposal are modified for $\mathcal{N}=1$, and therefore further studies of these connections and their implications for phenomenology would be desirable. \end{itemize} \vskip2em \subsubsection*{Acknowledgments} We would like to thank R.~Blumenhagen, L.~Martucci and E.~Palti for very helpful discussions and we thank D.~L\"ust for support. EP thanks K.~Schalm for hospitality and L.~Plauschinn for enlightening discussions. MER is grateful to H.~Erbin for suggestions on literature concerning tachyon condensation in string field theory \cite{Sen:1999nx} and non-BPS stability \cite{Sen:1999mg}, as well as to H.~\'Asmundsson, D.~Bockisch, P.~Fragkos, D.P.~Lichtig, A.~Makridou, I.~Mayer, S.P.~Mazloumi and J.D.~Sim\~ao for patient and useful discussions, and finally to his family and friends for unconditional support during the preparation of this work. \clearpage \nocite{*}
\section{Introduction} \let\thefootnote\relax\footnote{ F.d.P.C. sponsored by the Department of Defense under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, recommendations, and conclusions are those of the authors and are not necessarily endorsed by the United States Government. Specifically, this work was supported by Information Systems of ASD(R\&E). M.M. and K.D. were supported in part by a Netapp faculty fellowship.} The security of systems is often predicated on a user or application selecting an object, a password or key, from a large list. If an inquisitor who wishes to identify the object in order to gain access to a system can only query each possibility, one at a time, then the number of guesses they must make in order to identify the selected object is likely to be large. If the object is selected uniformly at random using, for example, a cryptographically secure pseudo-random number generator, then the analysis of the distribution of the number of guesses that the inquisitor must make is trivial. Since the earliest days of code-breaking, deviations from perfect uniformity have been exploited. For example, it has long since been known that human-user selected passwords are highly non-uniformly selected, e.g. \cite{malone12}, and this forms the basis of dictionary attacks. In information theoretic security, uniformity of the string source is typically assumed on the basis that the source has been compressed. Recent work has cast some doubt on the appropriateness of that assumption by establishing that fewer queries are required to identify strings chosen from a typical set than one would expect by a na\"ive application of the asymptotic equipartition property. This arises by exploitation of the mild non-uniformity of the distribution of strings conditioned to be in the typical set \cite{Christiansen13a}. 
If the string has not been selected perfectly uniformly, but with a distribution that is known to the inquisitor, then the quantification of security is relatively involved. Assume that a string, $W_1$, is selected stochastically from a finite list, $ {\mathbb{A} } = \{0,\ldots,m-1\}$. An inquisitor who knows the selection probabilities, ${P}(W_1=w)$ for all $w\in {\mathbb{A} }$, is equipped with a method to test one string at a time and develops a strategy, $G: {\mathbb{A} }\mapsto\{1,\ldots,m\}$, that defines the order in which strings are guessed. As the string is stochastically selected, the number of queries, $G(W_1)$, that must be made before it is identified correctly is also a random variable, dubbed guesswork. Analysis of the distribution of guesswork serves as a natural measure of computational security in brute force determination. In a brief paper in 1994, Massey \cite{Massey94} established that if the inquisitor orders his guesses from most likely to least likely, then the Shannon entropy of the random variable $W_1$ bears little relation to the expected guesswork $E(G(W_1))= \sum_{w\in {\mathbb{A} }} G(w) P(W_1=w)$, the average number of guesses required to identify $W_1$. Arikan \cite{Arikan96} established that if a string, $W_k$, is chosen from $ {\mathbb{A} }^k$ with i.i.d. characters, again guessing strings from most likely to least likely, then the moments of the guesswork distribution grow exponentially in $k$ with a rate identified in terms of the R\'enyi entropy of the characters, \begin{align*} \lim_{k\to\infty} \frac 1k \log E(G(W_k)^\alpha) &= (1+\alpha) \log \sum_{w\in {\mathbb{A} }} P(W_1=w)^{1/(1+\alpha)}\\ &= \alpha R\left(\frac{1}{1+\alpha}\right) \text{ for } \alpha>0, \end{align*} where $R((1+\alpha)^{-1})$ is the R\'enyi entropy of $W_1$ with parameter $(1+\alpha)^{-1}$. In particular, the average guesswork grows as the R\'enyi entropy with parameter $1/2$, a value that is lower bounded by Shannon entropy.
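As a numerical illustration of Arikan's asymptotic, the guesswork moments can be computed exactly for short i.i.d. strings and compared with the R\'enyi-entropy limit. The following is a self-contained Python sketch; the Bernoulli$(0.3)$ character distribution and the values of $k$ are illustrative choices, not taken from the references.

```python
import math
from itertools import product

def renyi(p, beta):
    """Rényi entropy (in nats) of a finite pmf p, for beta != 1."""
    return math.log(sum(q ** beta for q in p)) / (1.0 - beta)

def guesswork_moment(p, k, alpha):
    """Exact E[G(W_k)^alpha] for i.i.d. strings of length k over pmf p,
    guessing from most likely to least likely."""
    probs = sorted((math.prod(w) for w in product(p, repeat=k)), reverse=True)
    return sum((i + 1) ** alpha * q for i, q in enumerate(probs))

p = [0.3, 0.7]          # illustrative Bernoulli(0.3) character distribution
alpha = 1.0
limit = (1 + alpha) * math.log(sum(q ** (1 / (1 + alpha)) for q in p))

# Arikan's limit equals alpha * R(1/(1+alpha)):
assert abs(limit - alpha * renyi(p, 1 / (1 + alpha))) < 1e-12

# Finite-k rates sit below the limit and approach it as k grows:
for k in (4, 8, 12):
    rate = math.log(guesswork_moment(p, k, alpha)) / k
    assert 0.0 < rate <= limit
```

The finite-$k$ rates lie below the limit, consistent with the bound $E(G(W_k)^\alpha)\leq\bigl(\sum_w P(W_k=w)^{1/(1+\alpha)}\bigr)^{1+\alpha}$, and creep up towards it as $k$ grows.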
Arikan's result was subsequently extended significantly beyond i.i.d. sources \cite{Malone042,Pfister04,Hanawal11}, establishing its robustness. In the generalized setting, specific R\'enyi entropy, the R\'enyi entropy per character, plays the r\^ole of R\'enyi entropy. In turn, these results have been leveraged to prove that the guesswork process $\{k^{-1}\log G(W_k)\}$ satisfies a Large Deviation Principle (LDP), e.g. \cite{Lewis95,Dembo98}, in broad generality \cite{Christiansen13}. That is, there exists a lower semi-continuous function $I:[0,\log(m)]\mapsto[0,\infty]$ such that for all Borel sets $B$ contained in $[0,\log(m)]$ \begin{align} -\inf_{x\in B^\circ} I(x) &\leq \liminf_{k\to\infty} \frac 1k \log P\left(\frac 1k \log G(W_k) \in B\right) \nonumber\\ &\leq \limsup_{k\to\infty} \frac 1k \log P\left(\frac 1k \log G(W_k) \in B\right) \nonumber\\ &\leq -\inf_{x\in \bar{B}} I(x), \label{eq:LDP} \end{align} where $B^\circ$ denotes the interior of $B$ and $\bar{B}$ denotes its closure. Roughly speaking, this implies $dP(k^{-1} \log G(W_k) \approx x)\approx \exp(-k I(x)) dx$ for large $k$. In \cite{Christiansen13} this LDP, in turn, was used to provide direct estimates on the guesswork probability mass function, $P(G(W_k)=n)$ for $n\in\{1,\ldots,m^k\}$. These deductions, along with others described in Section \ref{sec:brief}, have developed a quantitative framework for the process of brute force guessing a single string. In the present work we address a natural extension in this investigation of brute force searching: the quantification for multi-user systems. We are motivated by both classical systems, such as the brute force entry to a multi-user computer where the inquisitor need only compromise a single account, as well as modern distributed storage services where coded data is kept at distinct sites in a way where, owing to coding redundancy, several, but not all, servers need to be compromised to access the content \cite{oliveira12,Calmon12}.
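The concentration implied by this LDP can be seen in a small Monte Carlo experiment. The sketch below (Python; the Bernoulli parameter, string length and sample count are illustrative assumptions) samples i.i.d. binary strings and evaluates $k^{-1}\log G(W_k)$ under the most-likely-first order, which clusters around the specific Shannon entropy of the source, where the rate function vanishes.

```python
import math
import random
import statistics

def scaled_log_guesswork(k, p, rng):
    """Sample an i.i.d. Bernoulli(p) string (p < 1/2) and return (1/k) log G(W_k).
    Most-likely-first ordering sorts strings by their number of 1s, so the rank
    of a string with t ones is at most sum_{i <= t} C(k, i)."""
    t = sum(rng.random() < p for _ in range(k))
    rank = sum(math.comb(k, i) for i in range(t + 1))
    return math.log(rank) / k

rng = random.Random(7)
k, p = 1000, 0.3
shannon = -(p * math.log(p) + (1 - p) * math.log(1 - p))  # approx. 0.611 nats
samples = [scaled_log_guesswork(k, p, rng) for _ in range(51)]

# k^{-1} log G(W_k) concentrates near the specific Shannon entropy:
assert abs(statistics.median(samples) - shannon) < 0.05
```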
\section{Summary of contribution} Assume that $V$ users select strings independently from $ {\mathbb{A} }^k$. An inquisitor knows the probabilities with which each user selects their string, is able to query the correctness of each (user, string) pair, and wishes to identify any subset of size $U$ of the $V$ strings. The first question that must be addressed is what is the optimal strategy, the ordering in which (user, string) pairs are guessed, for the inquisitor. For the single user system, since the earliest investigations \cite{Massey94,Arikan96,Merhav99,Pliam00} it has been clear that the strategy of ordering guesses from the most to least likely string, breaking ties arbitrarily, is optimal in any reasonable sense. Here we shall give optimality a specific meaning: that the distribution of the number of guesses required to identify the unknown object is stochastically dominated by all other strategies. Amongst other results, for the multi-user guesswork problem we establish the following: \begin{itemize} \item If $U<V$, the existence of optimal guessing strategies, those that are stochastically dominated by all other strategies, is no longer assured. \item By construction, there exist asymptotically optimal strategies as the strings become long. \item For asymptotically optimal strategies, we prove a large deviation principle for their guesswork. The resulting large deviations rate function is, in general, not convex and so this result could not have been established by determining how the moment generating function of the multi-user guesswork distribution scales in string-length. \item The non-convexity of the rate function shows that, if users' string statistics are distinct, there may be no fixed ordering of weakness amongst users. That is, depending on how many guesses are made before the $U$ users' strings are identified, the collection of users whose strings have been identified are likely to be distinct. 
\item If all $V$ strings are chosen with the same statistics, then the rate function is convex and the exponential growth rate of the average guesswork as string-length increases is the specific R\'enyi entropy of the string source with parameter \begin{align*} \frac{V-U+1}{V-U+2} \in\left\{\frac 12,\frac 23,\frac 34,\frac4 5,\frac 56,\ldots\right\}. \end{align*} \item For homogeneous users, from an inquisitor's point of view, there is a law of diminishing returns for the expected guesswork growth rate in excess number of users ($V-U$). \item For homogeneous users, from a designer's point of view, coming full circle to Massey's original observation that Shannon entropy has little quantitative relationship to how hard it is to guess a single string, the specific Shannon entropy of the source is a lower bound on the average guesswork growth rate for all $V$ and $U$. \end{itemize} These results generalize both the original guesswork studies, where $U=V=1$, as well as some of the results in \cite{Merhav99,Hanawal11a} where, as a wiretap model, the case $U=1$ and $V=2$ with one of the strings selected uniformly, is considered and scaling properties of the guesswork moments are established. Interestingly, we shall show that that setting is one where the LDP rate function is typically non-convex, so while results regarding the asymptotic behavior of the guesswork moments can be deduced from the LDP, the reverse is not true. To circumvent the lack of convexity, we prove the main result using the contraction principle, Theorem 4.2.1 \cite{Dembo98}, and the LDP established in \cite{Christiansen13}, which itself relies on earlier results of work referenced above. \begin{figure} \includegraphics[scale=0.46]{u_of_v_ex0} \includegraphics[scale=0.46]{u_of_v_ex1} \caption{Strings created from i.i.d. letters are selected from a binary alphabet with probability $p$ for one character. 
Given that an inquisitor wishes to identify $U$ of $V$ strings, the left panel shows the average exponential guesswork growth rate as a function of $V-U$, the excess number of guessable strings; the right panel shows the theoretically predicted approximate average guesswork for $168$ bit strings, as used in triple DES, as a function of $V-U$, the excess number of guessable strings. } \label{fig:bernoulli_guess} \end{figure} \section{ The impact of the number of users on expected guesswork growth rate, an example } \label{sec:firstexample} As an exemplar that illustrates the reduction in security that comes from having multiple users, in the left panel of Figure \ref{fig:bernoulli_guess} the average guesswork growth rate for an asymptotically optimal strategy is plotted for the simplest case, a binary alphabet with $V$ i.i.d. Bernoulli string sources. The inquisitor is satisfied once $U\leq V$ of the strings have been identified. The x-axis shows the excess number of guessable strings, $V-U$, and the y-axis is the $\log_2$ growth rate of the expected guesswork in string length. If the source is perfectly uniform (i.e. characters are chosen with a Bernoulli $1/2$ process), then the average guesswork growth rate is maximal and unchanging in $V-U$. If the source is not perfectly uniform, then the growth rate decreases as the number of excess guessable strings $V-U$ increases, with a lower bound of the source's Shannon entropy. For a string of length $168$ bits, as used in the triple DES cipher, and a Bernoulli $(0.25)$ source, the right panel in Figure \ref{fig:bernoulli_guess} displays the impact that the change in this exponent has, approximately, on the average number of guesses required to determine $U$ strings. More refined results for a broader class of processes can be found in later sections, including an estimate on the guesswork distribution. The rest of this paper is organized as follows.
In Section \ref{sec:brief}, we begin with a brief overview of results on guesswork that we have not touched on so far. Questions of optimal strategy are considered in Section \ref{sec:strategy}. Asymptotically optimal strategies are established to exist in Section \ref{sec:asymptote} and results for these strategies appear in Section \ref{sec:results}. In Section \ref{sec:mismatch} we present examples where string sources have distinct statistics. In Section \ref{sec:ident} we return to the setting where string sources have identical statistics. Concluding remarks appear in Section \ref{sec:conc}. \section{A brief overview of guesswork} \label{sec:brief} Since Arikan's introduction of the long string length asymptotic, several generalizations of its fundamental assumptions have been explored. Arikan and Boztas \cite{Arikan02} investigate the setting where the truthfulness in response to a query is not certain. Arikan and Merhav \cite{Arikan98} loosen the assumption that the inquisitor needs to determine the string exactly, assuming instead that they only need to identify it within a given distance. That the inquisitor knows the distribution of words exactly is relaxed by Sundaresan \cite{Sundaresan07b}, \cite{Sundaresan06} and by the authors of \cite{Beirami15}. Motivated by a wiretap application, the problem of multiple users was first investigated by Merhav and Arikan \cite{Merhav99} in the $V=2$ and $U=1$ setting, assuming one of the users selects their string uniformly on a reduced alphabet. In \cite{Hayashi06} Hayashi and Yamamoto extend the results in \cite{Merhav99} to the case where there is an additional i.i.d. source correlated to the first, used for coding purposes, while Haroutunian and Ghazaryan \cite{Haroutunian00} extend the results in \cite{Merhav99} to the setting of \cite{Arikan98}. Haroutunian and Margaryan \cite{Haroutunian12} expand on \cite{Merhav99} by adding noise to the original string, altering the distribution of letters.
Hanawal and Sundaresan \cite{Hanawal11a} extend the bounds in \cite{Merhav99} to a pre-limit and to more general sources, showing that they are tight for Markovian and unifilar sources. Sundaresan \cite{Sundaresan07a} uses length functions to identify the link between guesswork and compression. This result is extended by Hanawal and Sundaresan \cite{Hanawal11b} to relate guesswork to the compression of a source over a countably infinite alphabet. In \cite{Christiansen13a} the authors prove that, if the string is conditioned on being an element of a typical set, the expected guesswork grows more slowly than a simple uniform approximation would suggest. In \cite{Christiansen13b} the authors consider the impact of guessing over a noisy erasure channel, showing that it is not the mean noise on the channel that determines the expected guesswork, but instead a quantity determined by its R\'enyi entropy with parameter $1/2$. Finally, we mention that recent work by Bunte and Lapidoth \cite{Bunte14} identifies a distinct operational meaning for R\'enyi entropy in defining a rate region for a scheduling problem. \section{Optimal strategies} \label{sec:strategy} In order to introduce the key concepts used to determine the optimal multi-user guesswork strategy, we first reconsider the optimal guesswork strategy in the single user case, i.e. $U=V=1$. Recall that $ {\mathbb{A} }=\{0,\ldots,m-1\}$ is a finite set. \begin{definition} A single user strategy, $S: {\mathbb{A} }^k\mapsto\{1,\ldots,m^k\}$, is a one-to-one map that determines the order in which guesses are made. That is, for a given strategy $S$ and a given string $w\in {\mathbb{A} }^k$, $S(w)$ is the number of guesses made until $w$ is queried. \end{definition} Let $W_k$ be a random variable taking values in $ {\mathbb{A} }^k$. Assume that its probability mass function, $P(W_k=w)$ for all $w\in {\mathbb{A} }^k$, is known.
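For concreteness, the definition above can be instantiated in a few lines. The following sketch (Python; the alphabet, string length and randomly generated pmf are illustrative) builds the strategy that guesses from most likely to least likely and evaluates $P(S(W_k)\leq n)$, comparing it with an arbitrary competing guessing order.

```python
import random
from itertools import product

def most_likely_first(pmf):
    """A single-user strategy S: A^k -> {1, ..., m^k}, ordering guesses
    from most likely to least likely (ties broken arbitrarily)."""
    order = sorted(pmf, key=pmf.get, reverse=True)
    return {w: i + 1 for i, w in enumerate(order)}

def within_n(strategy, pmf, n):
    """P(S(W_k) <= n): chance the string is identified within n guesses."""
    return sum(p for w, p in pmf.items() if strategy[w] <= n)

rng = random.Random(0)
alphabet, k = (0, 1, 2), 3
weights = [rng.random() for _ in range(len(alphabet) ** k)]
z = sum(weights)
pmf = {w: x / z for w, x in zip(product(alphabet, repeat=k), weights)}

G = most_likely_first(pmf)
order = list(range(1, len(pmf) + 1))
rng.shuffle(order)
S = {w: order[i] for i, w in enumerate(pmf)}   # an arbitrary competing strategy

# G's identification time is stochastically dominated by S's:
assert all(within_n(G, pmf, n) >= within_n(S, pmf, n)
           for n in range(1, len(pmf) + 1))
```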
Since the first results on the topic it has been clear that the best strategy, which we denote $G$, is to guess from most likely to least likely, breaking ties arbitrarily. In particular, $G$ is defined by $G(w)<G(w')$ if $P(W_k=w)>P(W_k=w')$. We begin by assigning optimality a precise meaning in terms of stochastic dominance \cite{Lehmann55,denuit06}. \begin{definition} A strategy $S$ is optimal for $W_k$ if the random variable $S(W_k)$ is stochastically dominated by $S'(W_k)$, for all strategies $S'$. That is, if $P(S(W_k)\leq n) \geq P(S'(W_k)\leq n)$ for all strategies $S'$ and all $n\in\{1,\ldots,m^k\}$. \end{definition} This definition captures the stochastic aspect of guessing by stating that an optimal strategy is one where the identification stopping time is probabilistically smallest. One consequence of this definition that explains its appropriateness is that for any monotone function $\phi:\{1,\ldots,m^k\}\to {\mathbb{R} }$, it is the case that $E(\phi(S(W_k)))\leq E(\phi(S'(W_k)))$ for an optimal $S$ and any other $S'$ (e.g. Proposition 3.3.17, \cite{denuit06}). Thus $S(W_k)$ has the least moments over all guessing strategies. That guessing from most- to least-likely in the single user case is optimal is readily established. \begin{lemma} \label{lem:1opt} If $V=U=1$, the optimal strategies are those that guess from most likely to least likely, breaking ties arbitrarily. \end{lemma} \begin{proof} Consider the strategy $G$ defined above and any other strategy $S$. By construction, for any $n\in\{1,\ldots,m^k\}$ \begin{align*} P(G(W_k)\leq n) &= \sum_{i=1}^n P(G(W_k)=i) \\ & = \max_{w_1,\ldots,w_n} \left(\sum_{i=1}^n P(W_k=w_i)\right)\\ &\geq \sum_{i=1}^n P(S(W_k)=i) = P(S(W_k)\leq n). \end{align*} \end{proof} In the multi-user case, where (user, string) pairs are queried, a strategy is defined by the following.
\begin{definition} A multi-user strategy is a one-to-one map $S:\{1,\ldots,V\}\times {\mathbb{A} }^k\mapsto\{1,\ldots,Vm^k\}$ that orders the guesses of (user, string) pairs. \end{definition} The expression for the number of guesses required to identify $U$ strings is a little involved as we must take into account that we stop making queries about a user once their string has been identified. For a given strategy $S$, let $ {N_S} :\{1,\ldots,V\}\times\{1,\ldots,Vm^k\}\mapsto\{0,1,\ldots,m^k\}$ be defined by \begin{align*} {N_S} (v,n) = |\{w\in {\mathbb{A} }^k:S(v,w)\leq n\}|, \end{align*} which computes the number of queries in the strategy up to $n$ that correspond to user $v$. The number of queries that need to be made if $U$ strings are to be identified is \begin{align*} { T(U,V,\vw)} = {\text{U-min} } \left(S(1,w^{(1)}),\ldots,S(V,w^{(V)})\right), \end{align*} where $ {\text{U-min} } : {\mathbb{R} }^V\to {\mathbb{R} }$ and $ {\text{U-min} } ( {\vec{x}} )$ gives the $U^{\rm th}$ smallest component of $ {\vec{x}} $. The number of guesses required to identify $U$ components of $ {\vec{w}} =(w^{(1)},\ldots,w^{(V)})$ is then \begin{align} \label{eq:GS} {G_S} (U,V, {\vec{w}} )= \sum_{v=1}^V {N_S} \left(v, \min\left(S(v,w^{(v)}),{ T(U,V,\vw)} \right) \right). \end{align} This apparently unwieldy object counts the number of queries made to each user, curtailed either when their string is identified or when $U$ strings of other users are identified. If $U=V$, equation \eqref{eq:GS} simplifies significantly, as $S(v,w^{(v)})\leq{ T(U,V,\vw)} $ for all $v\in\{1,\ldots,V\}$, becoming \begin{align} \label{eq:U=V} {G_S} (V,V, {\vec{w}} )= \sum_{v=1}^V {N_S} \left(v, S(v,w^{(v)}) \right), \end{align} the sum of the number of queries required to identify each individual word. In this case, we have the analogous result to Lemma \ref{lem:1opt}, which is again readily established.
\begin{lemma} \label{lem:UVopt} If $V=U$, the optimal strategies are those that employ individual optimal strategies, but with users selected in any order. \end{lemma} \begin{proof} For any multi-user strategy $S$, equation \eqref{eq:U=V} holds. Consider an element in the sum on the right hand side, $ {N_S} \left(v, S(v,w^{(v)}) \right)$. It can be recognized to be the number of queries made to user $v$ until their string is identified. By Lemma \ref{lem:1opt}, for each user $v$, for any $S$ this stochastically dominates the equivalent single user optimal strategy. Thus the multi-user optimal strategies in this case are the sum of individual user optimal strategies, with users queried in any arbitrary order. \end{proof} The formula \eqref{eq:GS} will be largely side-stepped when we consider asymptotically optimal strategies, but is needed to establish that there is, in general, no stochastically dominant strategy if $V>U$. With $ {\vec{W}} _k=(W_k^{(1)},\ldots,W_k^{(V)})$ being a random vector taking values in $ {\mathbb{A} }^{kV}$ with independent, not necessarily identically distributed, components, we are not guaranteed the existence of an $S$ such that $P( {G_S} (U,V, {\vec{W}} _k)\leq n) \geq ( {G_{S'}} (U,V, {\vec{W}} _k)\leq n)$ for all alternate strategies $S'$. \begin{lemma} If $V>U$, a stochastically dominant strategy does not necessarily exist. \end{lemma} \begin{proof} A counter-example suffices and so let $k=1$, $V=2$, $U=1$ and $ {\mathbb{A} }=\{0,1,2\}$. Let the distributions of $W_1^{(1)}$ and $W_1^{(2)}$ be \begin{center} \begin{tabular}{|c|c|} \hline User 1 & User 2\\ \hline $P(W_1^{(1)}=0)=0.6$ & $P(W_1^{(2)}=0)=0.5$ \\ $P(W_1^{(1)}=1)=0.25$ & $P(W_1^{(2)}=1)=0.4$ \\ $P(W_1^{(1)}=2)=0.15$ & $P(W_1^{(2)}=2)=0.1$\\ \hline \end{tabular} \end{center} If a stochastically dominant strategy exists, its first guess must be user $1$, string $0$, i.e. $S(1,0)=1$, so that $P( {G_S} (1, {\vec{W}} _1)=1) = 0.6$. 
Given this first guess, to maximize $P( {G_S} (1, {\vec{W}} _1)\leq2)$, the second guess must be user $1$, string $1$, $S(1,1)=2$, so that $P( {G_S} (1, {\vec{W}} _1)\leq 2) = 0.85$. An alternate strategy with $S(2,0)=1$ and $S(2,1)=2$, however, gives $P( {G_{S'}} (1, {\vec{W}} _1)=1)=0.5$ and $P( {G_{S'}} (1, {\vec{W}} _1)\leq 2)=0.9$. While $P( {G_S} (1, {\vec{W}} _1)=1)>P( {G_{S'}} (1, {\vec{W}} _1)=1)$, $P( {G_S} (1, {\vec{W}} _1)\leq2)<P( {G_{S'}} (1, {\vec{W}} _1)\leq2)$ and so there is no strategy stochastically dominated by all others in this case. \end{proof} Despite this lack of a universally optimal strategy, we shall show that there is a sequence of random variables that are stochastically dominated by the guesswork of all strategies and, moreover, there exists a strategy with identical performance in Arikan's long string length asymptotic. \begin{definition} A strategy $S$ is asymptotically optimal if $\{k^{-1}\log {G_S} (U,V, {\vec{W}} _k)\}$ satisfies a LDP with the same rate function as a sequence $\{k^{-1}\log \Upsilon(U,V, {\vec{W}} _k)\}$ where $\Upsilon(U,V, {\vec{W}} _k)$ is stochastically dominated by $ {G_{S'}} (U,V, {\vec{W}} _k)$ for all strategies $S'$. \end{definition} Note that $\Upsilon(U,V,\cdot)$ need not correspond to the guesswork of a strategy. \section{An asymptotically optimal strategy} \label{sec:asymptote} Let $\{ {\vec{W}} _k\}$ be a sequence of random strings, with $ {\vec{W}} _k$ taking values in $ {\mathbb{A} }^{kV}$, with independent components, $W_k^{(v)}$, corresponding to strings selected by users $1$ through $V$, although each user's string may not be constructed from i.i.d. letters. For each individual user, $v\in\{1,\ldots,V\}$, let $ {G^{(v)}} $ denote its single-user optimal guessing strategy; that is, guessing from most likely to least likely.
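The quantities in this section can also be checked numerically. The sketch below (Python; users are indexed $0$ and $1$ rather than $1$ and $2$) implements the guesswork count \eqref{eq:GS} directly and reproduces the numbers in the counter-example above, confirming that neither of the two competing opening sequences of guesses leads to a dominant strategy.

```python
from itertools import product

def N_S(S, v, n):
    """N_S(v, n): number of the first n queries addressed to user v."""
    return sum(1 for (u, w), g in S.items() if u == v and g <= n)

def guesswork(S, U, V, words):
    """G_S(U, V, words) per eq. (GS): total queries to identify U of V strings."""
    hits = sorted(S[(v, words[v])] for v in range(V))
    T = hits[U - 1]  # T(U, V, w): the U-th smallest identification index
    return sum(N_S(S, v, min(S[(v, words[v])], T)) for v in range(V))

# Counter-example data (k = 1, V = 2, U = 1, A = {0, 1, 2}):
p = [{0: 0.6, 1: 0.25, 2: 0.15},   # user 1 in the text
     {0: 0.5, 1: 0.4, 2: 0.1}]     # user 2 in the text

def full_strategy(leading):
    """Extend a list of leading (user, string) guesses to a full bijection."""
    rest = [(v, w) for v in range(2) for w in range(3) if (v, w) not in leading]
    return {pair: i + 1 for i, pair in enumerate(leading + rest)}

def cdf(S, n):
    """P(G_S(1, 2, W) <= n) under the product distribution of the two users."""
    return sum(p[0][w0] * p[1][w1]
               for w0, w1 in product(range(3), repeat=2)
               if guesswork(S, 1, 2, (w0, w1)) <= n)

S  = full_strategy([(0, 0), (0, 1)])   # lead with user 1's two most likely strings
Sp = full_strategy([(1, 0), (1, 1)])   # lead with user 2's two most likely strings

assert abs(cdf(S, 1) - 0.60) < 1e-9 and abs(cdf(S, 2) - 0.85) < 1e-9
assert abs(cdf(Sp, 1) - 0.50) < 1e-9 and abs(cdf(Sp, 2) - 0.90) < 1e-9
# S is ahead after one guess, S' after two: neither dominates the other.
```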
We shall show that the following random variable, constructed using the $ {G^{(v)}} $, is stochastically dominated by the guesswork distribution of all strategies: \begin{align} {G_{\text{opt}} } (U,V, {\vec{W}} _k) = {\text{U-min} } \left(G^{(1)}(W^{(1)}_k),\ldots,G^{(V)}(W^{(V)}_k)\right). \label{eq:LB} \end{align} This can be thought of as allowing the inquisitor to query, for each $n$ in turn, the $n^{\rm th}$ most likely string of all users while only accounting for a single guess, and so it does not correspond to an allowable strategy. \begin{lemma} For any strategy $S$ and any $U\in\{1,\ldots,V\}$, $ {G_{\text{opt}} } (U,V, {\vec{W}} _k)$ is stochastically dominated by $ {G_S} (U,V, {\vec{W}} _k)$. That is, for any $U\in\{1,\ldots,V\}$ and any $n\in\{1,\ldots,m^k\}$ \begin{align*} P( {G_{\text{opt}} } (U,V, {\vec{W}} _k)\leq n) \geq P( {G_S} (U,V, {\vec{W}} _k)\leq n). \end{align*} \end{lemma} \begin{proof} Using equation \eqref{eq:GS} and the positivity of its summands, for any strategy $S$ \begin{align*} &G_S(U, V, {\vec{w}} )\\ &\ge {\text{U-min} } ( {N_S} (1, S(1, w^{(1)})),\ldots, {N_S} (V, S(V, w^{(V)}))). \end{align*} As, for each $v\in\{1,\ldots,V\}$, $G^{(v)}(W_k^{(v)})$ is stochastically dominated by the guesswork of any strategy, \begin{align*} P(G^{(v)}(W_k^{(v)})\le n)\ge P( {N_S} (v, S(v, W_k^{(v)}))\le n). \end{align*} Using equation \eqref{eq:LB}, this implies that \begin{align*} &P( {G_{\text{opt}} } (U,V, {\vec{W}} _k)\le n)\\ &\ge P( {\text{U-min} } ( {N_S} (1, S(1, W_k^{(1)})),\ldots, {N_S} (V, S(V, W_k^{(V)})))\le n)\\ &\ge P(G_S(U, V, {\vec{W}} _k)\le n), \end{align*} as required. \end{proof} The strategy we construct to asymptotically meet the performance of this lower bound round-robins the single user optimal strategies.
That is, to query the most likely string of one user followed by the most likely string of a second user and so forth, for each user in a round-robin fashion, before moving to the second most likely string of each user. An upper bound on this strategy's performance is obtained by only stopping at the end of a complete round of queries, even if $U$ strings have already been revealed part-way through a round, which gives \begin{align} V {G_{\text{opt}} } (U,V, {\vec{W}} _k), \label{eq:UB} \end{align} where $ {G_{\text{opt}} } (U,V, {\vec{W}} _k)$ is defined in \eqref{eq:LB}. In large deviations parlance, the stochastic processes $\{k^{-1}\log {G_{\text{opt}} } (U,V, {\vec{W}} _k)\}$ and $\{k^{-1}\log(V {G_{\text{opt}} } (U,V, {\vec{W}} _k))\}$ arising from equations \eqref{eq:LB} and \eqref{eq:UB} are exponentially equivalent, e.g. \cite[Section 4.2.2]{Dembo98}, as $\lim_{k\to\infty} k^{-1}\log V=0$. As a result, if one process satisfies the LDP with a rate function that has compact level sets, then the other does \cite[Theorem 4.2.3]{Dembo98}. Thus if $\{k^{-1}\log {G_{\text{opt}} } (U,V, {\vec{W}} _k)\}$ can be shown to satisfy a LDP, then the round-robin strategy is proved to be asymptotically optimal. \section{Asymptotic performance of optimal strategies} \label{sec:results} We first recall what is known for the single-user setting. For each individual user $v\in\{1,\ldots,V\}$, the specific R\'enyi entropy of the sequence $\{ W^{(v)} _k\}$, should it exist, is defined by \begin{align*} {R^{(v)}} (\beta):= \lim_{k\to\infty} \frac 1k \frac{1}{1-\beta} \log \sum_{w_k\in {\mathbb{A} }^k} P( W^{(v)} _k=w_k)^\beta \end{align*} for $\beta\in(0,1)\cup(1,\infty)$, and for $\beta=1$, \begin{align*} {R^{(v)}} (1)&:=\lim_{\beta\uparrow1} {R^{(v)}} (\beta)\\ &= -\lim_{k\to\infty} \frac1k \sum_{w_k\in {\mathbb{A} }^k} P( W^{(v)} _k=w_k)\log P( W^{(v)} _k=w_k), \end{align*} the specific Shannon entropy.
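As a concrete instance of these definitions, a source emitting i.i.d. Bernoulli($p$) characters has the standard closed form $R(\beta)=\frac{1}{1-\beta}\log\left(p^\beta+(1-p)^\beta\right)$. A minimal Python sketch (illustrative names, natural logarithms) checks the $\beta\to1$ limit and the monotonicity of $\beta\mapsto R(\beta)$ numerically:

```python
import math

def renyi_iid(p, beta):
    """Specific Renyi entropy R(beta) of a source emitting i.i.d.
    Bernoulli(p) characters (natural logarithms)."""
    if beta == 1.0:  # the beta -> 1 limit: specific Shannon entropy
        return -p * math.log(p) - (1 - p) * math.log(1 - p)
    return math.log(p ** beta + (1 - p) ** beta) / (1 - beta)

p = 0.3
shannon = renyi_iid(p, 1.0)

# R(beta) -> R(1) as beta -> 1, and R(beta) is nonincreasing in beta.
assert abs(renyi_iid(p, 1.000001) - shannon) < 1e-4
assert renyi_iid(p, 0.5) > shannon > renyi_iid(p, 2.0)

# For large beta, R(beta) approaches -log max(p, 1-p), the min-entropy.
assert abs(renyi_iid(p, 500.0) + math.log(max(p, 1 - p))) < 1e-2
```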
Should $ {R^{(v)}} (\beta)$ exist for $\beta\in(0,\infty)$, the specific min-entropy is defined as \begin{align*} {R^{(v)}} (\infty)&=\lim_{\beta\to\infty} {R^{(v)}} (\beta) \\ &= -\lim_{k\to\infty}\frac1k \max_{w_k\in {\mathbb{A} }^k} \log P( W^{(v)} _k=w_k), \end{align*} where the limit necessarily exists. The existence of $ {R^{(v)}} (\beta)$ for all $\beta>0$ and its relationship to the scaled Cumulant Generating Function (sCGF) \begin{align} {\Lambda_G^{(v)}} (\alpha) &= \lim_{k\to\infty} \frac 1k \log E(\exp(\alpha\log {G^{(v)}} ( W^{(v)} _k))) \nonumber\\ &= \begin{cases} \displaystyle \alpha {R^{(v)}} \left(\frac{1}{1+\alpha}\right) & \text{ if } \alpha>-1\\ - {R^{(v)}} (\infty) & \text{ if } \alpha\leq-1 \end{cases} \label{eq:sCGF} \end{align} has been established for the single user case for a broad class of character sources that encompasses i.i.d., Markovian and general sofic shifts that admit an entropy condition \cite{Arikan96,Malone042,Pfister04,Hanawal11,Christiansen13}. If, in addition, $ {R^{(v)}} (\beta)$ is differentiable with respect to $\beta$ and has a continuous derivative, it is established in \cite{Christiansen13} that the process $\{k^{-1}\log {G^{(v)}} ( W^{(v)} _k)\}$ satisfies a LDP, i.e. equation \eqref{eq:LDP}, with a convex rate function \begin{align} \label{eq:rf} {\Lambda_G^{(v)}} ^*(x) = \sup_{\alpha\in {\mathbb{R} }}\left(x\alpha- {\Lambda_G^{(v)}} (\alpha)\right). \end{align} In \cite{Christiansen13}, this LDP is used to deduce an approximation to the guesswork distribution, \begin{align} \label{eq:approx1} P( {G^{(v)}} ( W^{(v)} _k)=n) \approx \frac 1n \exp\left(-k {\Lambda_G^{(v)}} ^*\left(\frac 1k\log n\right)\right) \end{align} for large $k$ and $n\in\{1,\ldots,m^k\}$. The following theorem establishes the fundamental analogues of these results for an asymptotically optimal strategy, where user strings may have distinct statistical properties.
\begin{theorem} \label{thm:main} Assume that the components of $\{ {\vec{W}} _k\}$ are independent and that, for each $v\in\{1,\ldots,V\}$, $ {R^{(v)}} (\beta)$ exists for all $\beta>0$, is differentiable and has a continuous derivative, and that equation \eqref{eq:sCGF} holds. Then the process $\{k^{-1}\log {G_{\text{opt}} } (U,V, {\vec{W}} _k)\}$, and thus any asymptotically optimal strategy, satisfies a Large Deviation Principle. Defining \begin{align*} & {\delta^{(v)}} (x)=\begin{cases} {\Lambda_G^{(v)}} ^*(x) & \text{ if }x\le {R^{(v)}} (1)\\ 0 & \text{ otherwise} \end{cases}\\ &\text{ and } {\gamma^{(v)}} (x)=\begin{cases} {\Lambda_G^{(v)}} ^*(x) &\text{ if }x\ge {R^{(v)}} (1)\\ 0 & \text{ otherwise} \end{cases}, \end{align*} the rate function is \begin{align} &\IGopt(U,V,x)=\nonumber\\ &\min_{v_1,\ldots, v_V} \left( {\Lambda_G^{(v_1)}} ^*(x)+\sum_{i=2}^U \delta^{(v_i)}(x) +\sum_{i=U+1}^V {\gamma^{(v_i)}} (x)\right), \label{eq:Iopt} \end{align} where the minimum is taken over all permutations $(v_1,\ldots,v_V)$ of $(1,\ldots,V)$. This rate function is lower semi-continuous and has compact level sets, but may not be convex. The sCGF capturing how the moments scale is \begin{align} \LambdaGopt(U,V,\alpha) &= \lim_{k\to\infty} \frac 1k \log E(\exp(\alpha\log {G_{\text{opt}} } (U,V, {\vec{W}} _k)))\nonumber\\ &= \sup_{x\in[0,\log(m)]}\left(\alpha x -\IGopt(U,V,x)\right). \label{eq:scgf} \end{align} \end{theorem} \begin{proof} Under the assumptions of the theorem, for each $v\in\{1,\ldots,V\}$, $\{k^{-1} \log {G^{(v)}} ( W^{(v)} _k) \}$ satisfies the LDP with the rate function given in equation \eqref{eq:rf}. As users' strings are selected independently, the sequence of vectors \begin{align*} \left\{\left(\frac 1k \log G^{(1)}(W_k^{(1)}), \ldots, \frac 1k \log G^{(V)}(W_k^{(V)}) \right)\right\} \end{align*} satisfies the LDP in $ {\mathbb{R} }^V$ with rate function $I(y^{(1)},\ldots,y^{(V)}) = \sum_{v=1}^V {\Lambda_G^{(v)}} ^*(y^{(v)})$, the sum of the rate functions given in equation \eqref{eq:rf}. Within our setting, the contraction principle, e.g.
\cite[Theorem 4.2.1]{Dembo98}, states that if a sequence of random variables $\{X_n\}$ taking values in a compact subset of $ {\mathbb{R} }^V$ satisfies a LDP with rate function $I: {\mathbb{R} }^V\mapsto[0,\infty]$ and $f: {\mathbb{R} }^V\mapsto {\mathbb{R} }$ is a continuous function, then the sequence $\{f(X_n)\}$ satisfies the LDP with rate function $\inf_{ {\vec{y}} }\{I( {\vec{y}} ):f( {\vec{y}} )=x\}$. Consider first a point with distinct coordinates: assume, without loss of generality, that $ {\vec{x}} \in {\mathbb{R} }^V$ is such that $x^{(1)}<x^{(2)}<\cdots<x^{(V)}$, so that $ {\text{U-min} } ( {\vec{x}} ) = x^{(U)}$, and let $ {\vec{x}} _n=(x^{(1)}_n,\ldots,x^{(V)}_n)\to {\vec{x}} $. Let $\epsilon < \inf\{x^{(v)}-x^{(v-1)}:v\in\{2,\ldots,V\}\}$. There exists $N_\epsilon$ such that $\max_{v=1,\ldots,V} |x^{(v)}_n-x^{(v)}|<\epsilon$ for all $n>N_\epsilon$. Thus for all $v\in\{2,\ldots,V\}$ and all $n>N_\epsilon$, $x^{(v)}_n-x^{(v-1)}_n > x^{(v)}-x^{(v-1)}-\epsilon>0$ and so $| {\text{U-min} } ( {\vec{x}} _n)- {\text{U-min} } ( {\vec{x}} )|=|x_n^{(U)}-x^{(U)}|<\epsilon$. Continuity at points with tied coordinates follows similarly, as the $U^{\rm th}$ smallest coordinate can be written as a composition of continuous $\min$ and $\max$ operations. Hence $ {\text{U-min} } : {\mathbb{R} }^V\to {\mathbb{R} }$ is a continuous function, and the LDP follows from an application of the contraction principle, giving the rate function \begin{align*} \IGopt(U,V,x) = \inf\left\{ \sum_{v=1}^V {\Lambda_G^{(v)}} ^*(y_v): {\text{U-min} } (y_1,\ldots,y_V)=x \right\}. \end{align*} This expression simplifies to that in equation \eqref{eq:Iopt} by elementary arguments. The sCGF result follows from an application of Varadhan's Lemma, e.g. \cite[Theorem 4.3.1]{Dembo98}. \end{proof} The expression for the rate function in equation \eqref{eq:Iopt} lends itself to a useful interpretation. In the long string-length asymptotic, the likelihood that an inquisitor has identified $U$ of the $V$ users' strings after approximately $\exp(kx)$ queries is contributed to by three distinct groups of identifiable users.
For given $x$, the first term identifies the user, $(v_1)$, whose string is the last of the $U$ to be identified. The second summed term is contributed to by the collection of users, $(v_2)$ to $(v_U)$, whose strings have already been identified prior to $\exp(kx)$ queries, while the final summed term corresponds to those users, $(v_{U+1})$ to $(v_V)$, whose strings have not been identified. The reason for using the notation $\IGopt(U,V,\cdot)$ in lieu of $\LambdaGopt^*(U,V,\cdot)$ for the rate function in Theorem \ref{thm:main} is that $\IGopt(U,V,\cdot)$ is not convex in general, which we shall demonstrate by example, and so is not always the Legendre-Fenchel transform of the sCGF $\LambdaGopt(U,V,\cdot)$. Instead \begin{align*} \LambdaGopt^*(U,V,x) = \sup_\alpha\left(\alpha x -\LambdaGopt(U,V,\alpha)\right) \end{align*} forms the convex hull of $\IGopt(U,V,\cdot)$. In particular, this means that we could not have proved Theorem \ref{thm:main} by establishing properties of $\LambdaGopt(U,V,\cdot)$ alone, which was the successful route taken for the $U=V=1$ setting, and instead needed to rely on the LDP proved in \cite{Christiansen13}. Indeed, in the setting considered in \cite{Merhav99, Hanawal11a}, where $U=1$, $V=2$ and one of the strings is chosen uniformly, the authors directly identify $\LambdaGopt(1,2,\alpha)$ for $\alpha>0$, but a full LDP cannot be established from that approach alone as the resulting rate function is not convex. Convexity of the rate function defined in equation \eqref{eq:Iopt} is ensured, however, if all users select strings using the same stochastic properties, whereupon the results in Theorem \ref{thm:main} simplify greatly.
\begin{corollary} \label{cor:same} If, in addition to the assumptions of Theorem \ref{thm:main}, $ {\Lambda_G^{(v)}} (\cdot)= {\Lambda_G} (\cdot)$ for all $v\in\{1,\ldots,V\}$ with corresponding R\'enyi entropy $R(\cdot)$, then the rate function in equation \eqref{eq:Iopt} simplifies to the convex function \begin{align} \LambdaGopt^*(U,V,x) &= \begin{cases} \displaystyle U {\Lambda_G} ^*(x) & \text{ if } x\leq R(1)\\ \displaystyle (V-U+1) {\Lambda_G} ^*(x) & \text{ if } x\geq R(1) \end{cases} \label{eq:rf2} \end{align} where $R(1)$ is the specific Shannon entropy, and the sCGF in equation \eqref{eq:scgf} simplifies to \begin{align} \label{eq:scgf2} \LambdaGopt(U,V,\alpha) &= \begin{cases} \displaystyle U {\Lambda_G} \left(\frac{\alpha}{U}\right) & \text{ if } \alpha\leq0\\ \displaystyle (V-U+1) {\Lambda_G} \left(\frac{\alpha}{V-U+1}\right) & \text{ if } \alpha\geq0. \end{cases} \end{align} In particular, with $\alpha=1$ we have \begin{align} &\lim_{k\to\infty} \frac 1k \log E\left( {G_{\text{opt}} } (U,V, {\vec{W}} _k)\right)\nonumber\\ &= \LambdaGopt(U,V,1)\nonumber\\ &= (V-U+1) {\Lambda_G} \left(\frac{1}{V-U+1}\right)\nonumber\\ &= R\left(\frac{V-U+1}{V-U+2}\right), \label{eq:average} \end{align} where $R((n+1)/(n+2))-R((n+2)/(n+3))$ is a decreasing function of $n\in {\mathbb{N} }$. \end{corollary} \begin{proof} The simplification in equation \eqref{eq:rf2} follows readily from equation \eqref{eq:Iopt}. To establish that $R((n+1)/(n+2))-R((n+2)/(n+3))$ is a decreasing function of $n\in {\mathbb{N} }$, it suffices to establish that $R((x+1)/(x+2))$ is a convex, decreasing function for $x\in {\mathbb{R} }_+$. That $R(x)\downarrow R(1)$ as $x\uparrow1$ is a general property of specific R\'enyi entropy. For convexity, using equation \eqref{eq:average} it suffices to show that $x {\Lambda_G} (1/x)$ is convex for $x>0$.
This can be seen by noting that for any $a\in(0,1)$ and $x_1,x_2>0$, \begin{align*} &(a x_1+(1-a)x_2) {\Lambda_G} \left(\frac{1}{a x_1+(1-a)x_2}\right)\\ &= (a x_1+(1-a)x_2) {\Lambda_G} \left(\eta \frac{1}{x_1}+(1-\eta)\frac{1}{x_2}\right)\\ &\leq a x_1 {\Lambda_G} \left(\frac{1}{x_1}\right) +(1-a)x_2 {\Lambda_G} \left(\frac{1}{x_2}\right), \end{align*} where $\eta = a x_1/(a x_1+(1-a)x_2)\in(0,1)$ and we have used the convexity of $ {\Lambda_G} $. \end{proof} As the growth rate, $R((n+1)/(n+2))-R((n+2)/(n+3))$, is decreasing, there is a law of diminishing returns for the inquisitor where the greatest decrease in the average guesswork growth rate is through the provision of one additional user. From the system designer's point of view, the specific Shannon entropy of the source is a universal lower bound on the exponential growth rate of the expected guesswork that, while not attained for any finite $V-U$, becomes tight as $V-U$ grows. Regardless of whether the rate function $\IGopt(U,V,\cdot)$ is convex, Theorem \ref{lem:approx}, which follows, justifies the approximation \begin{align*} P( {G_{\text{opt}} } (U,V, {\vec{W}} _k)=n) \approx \frac 1n \exp\left(-k\IGopt\left(U,V,\frac 1k\log n\right)\right) \end{align*} for large $k$ and $n\in\{1,\ldots,m^k\}$. It is analogous to that in equation \eqref{eq:approx1}, first developed in \cite{Christiansen13}, but there are additional difficulties that must be overcome to establish it. In particular, if $U=V=1$, the likelihood that the string is identified at each query is a decreasing function of guess number, but this is not true in the more general case. As a simple example, consider $U=V=2$, $ {\mathbb{A} }=\{0,1\}$, strings of length $1$ and strings chosen uniformly. Here the probability that both strings are identified at the first query is $1/4$, while at the second query it is $3/4$. Despite this lack of monotonicity, the approximation still holds in the following sense.
\begin{theorem} \label{lem:approx} Under the assumptions of Theorem \ref{thm:main}, for any $x \in [0, \log m)$ we have \begin{align*} &\lim_{\epsilon \downarrow 0}\liminf_{k\rightarrow \infty} \frac 1k \log \inf_{n \in K_k(x, \epsilon)}P( {G_{\text{opt}} } (U,V, {\vec{W}} _k)=n)\\ &=\lim_{\epsilon \downarrow 0}\limsup_{k\rightarrow \infty} \frac 1k \log \sup_{n \in K_k(x, \epsilon)}P( {G_{\text{opt}} } (U,V, {\vec{W}} _k)=n)\\ &=-I_{ {G_{\text{opt}} } }(U, V, x)-x, \end{align*} where \begin{align*} K_k(x, \epsilon)=\{n:n \in (\exp(k(x-\epsilon)), \exp(k(x+\epsilon)))\} \end{align*} is the collection of guesses made in a log-neighborhood of $x$. \end{theorem} \begin{proof} The proof follows the ideas in \cite[Corollary 4]{Christiansen13}, but with the added difficulties resolved by isolating the last word that is likely to be guessed and leveraging the monotonicity in its individual likelihood of being identified. Noting the definition of $K_k(x, \epsilon)$ in the statement of the theorem, consider for $x\in(0,\log(m))$ \begin{align*} &\sup_{n \in K_k(x, \epsilon)}P( {G_{\text{opt}} } (U,V, {\vec{W}} _k)=n)\nonumber\\ =& \sup_{n \in K_k(x, \epsilon)}\sum_{(v_1, \ldots, v_V)} P( {G^{(v_1)}} (W_k^{(v_1)})=n)\\ &\prod_{i=2}^{U}P( {G^{(v_i)}} (W_k^{(v_i)})\le n) \prod_{i=U+1}^{V}P( {G^{(v_i)}} (W_k^{(v_i)})\ge n)\\ \le& \sup_{n \in K_k(x, \epsilon)} \max_{(v_1, \ldots, v_V)}(V!)P( {G^{(v_1)}} (W_k^{(v_1)})=n)\\ &\prod_{i=2}^{U}P( {G^{(v_i)}} (W_k^{(v_i)})\le n) \prod_{i=U+1}^{V}P( {G^{(v_i)}} (W_k^{(v_i)})\ge n)\\ \le& \sup_{n \in K_k(x, \epsilon)} \max_{(v_1, \ldots, v_V)}(V!)P( {G^{(v_1)}} (W_k^{(v_1)})=n)\\ &\prod_{i=2}^{U} P\left(\frac 1k \log {G^{(v_i)}} (W_k^{(v_i)})\le x+\epsilon\right)\\ &\prod_{i= U+1 }^{V}P\left(\frac 1k \log {G^{(v_i)}} (W_k^{(v_i)})\ge x-\epsilon\right)\nonumber\\ &\le \inf_{n \in K_k(x-2\epsilon, \epsilon)}\max_{(v_1, \ldots, v_V)}(V!)
P\left( {G^{(v_1)}} (W_k^{(v_1)})=n\right)\\ &\prod_{i=2}^{U} P\left(\frac 1k \log {G^{(v_i)}} (W_k^{(v_i)})\le x+\epsilon\right)\\ &\prod_{i=U+1}^{V}P\left(\frac 1k \log {G^{(v_i)}} (W_k^{(v_i)})\ge x-\epsilon\right).\nonumber \end{align*} The first equality holds by definition of $ {G_{\text{opt}} } (U,V,\cdot)$. The first inequality follows from the union bound over all possible permutations of $\{1, \ldots, V\}$. The second inequality utilizes $k^{-1}\log n \in (x-\epsilon, x+\epsilon)$ if $n\in K_k(x, \epsilon)$, while the third inequality uses the monotonically decreasing probabilities in guessing a single user's string. Taking $\lim_{\epsilon \downarrow 0}\limsup_{k\rightarrow \infty}k^{-1}\log$ on both sides of the inequality, interchanging the order of the max and the supremum, using the continuity of $ {\Lambda_G^{(v)}} (\cdot)$ for each $v\in\{1,\cdots,V\}$, and the representation of the rate function $\IGopt(U,V,\cdot)$ in equation \eqref{eq:Iopt}, gives the upper bound \begin{align*} &\lim_{\epsilon \downarrow 0}\limsup_{k\rightarrow \infty}\frac 1k \log \sup_{n \in K_k(x, \epsilon)} P( {G_{\text{opt}} } (U,V, {\vec{W}} _k)=n) \\ &\leq-\IGopt(U, V, x)-x. \end{align*} Considering the least likely guesswork in the ball leads to a matching lower bound. The other case, $x=0$, follows similar logic, leading to the result. \end{proof} We next provide some illustrative examples of what these results imply, returning to using $\log_2$ in figures.
That is, if $U$ strings are recovered after a small number of guesses, they will be from one set of users, but after a number of guesses corresponding to a transition from the initial convexity they will be from another set of users. This is made explicit in the following corollary to Theorem \ref{thm:main}. \begin{corollary} If $\IGopt(U,V,x)$ is not convex in $x$, then there is no single set of users whose strings will be identified in the long string length asymptotic, irrespective of the number of queries made. \end{corollary} \begin{proof} We prove the result by establishing the contrapositive: if a single set of users is always most vulnerable, then $\IGopt(U,V,x)$ is convex. Recall the expression for $\IGopt(U,V,x)$ given in equation \eqref{eq:Iopt}, \begin{align*} &\IGopt(U,V,x)=\\ &\min_{v_1,\ldots, v_V} \left( {\Lambda_G^{(v_1)}} ^*(x)+\sum_{i=2}^U \delta^{(v_i)}(x) +\sum_{i=U+1}^V {\gamma^{(v_i)}} (x)\right). \end{align*} As explained after Theorem \ref{thm:main}, for given $x$ the set of users $\{(v_1),\ldots,(v_U)\}$ corresponds to those users whose strings, on the scale of large deviations, will be identified by the inquisitor after approximately $\exp(kx)$ queries. If this set is unchanging in $x$, i.e. the same set of users is identified irrespective of $x$, then both of the functions \begin{align*} \left( {\Lambda_G^{(v_1)}} ^*(x)+\sum_{i=2}^U \delta^{(v_i)}(x)\right) \text{ and } \sum_{i=U+1}^V {\gamma^{(v_i)}} (x) \end{align*} are sums of functions that are convex in $x$, and so are convex themselves. Thus the sum of them, $\IGopt(U,V,x)$, is convex. \end{proof} This is most readily illustrated by an example that falls within the two-user setting of \cite{Merhav99}, where one string is constructed from uniform i.i.d. bits and the other from non-uniformly selected i.i.d. bytes. \begin{figure} \begin{center} \includegraphics[scale=0.46]{nonconvex.pdf} \end{center} \caption{User 1 picks a uniform bit string. User 2 picks a non-uniform i.i.d. byte string.
The straight line starting at $(0,1)$ displays ${ {\Lambda_G} ^{(1)}}^*(x)$, the large deviations rate function for guessing the uniform bit string. The convex function starting below it is ${ {\Lambda_G} ^{(2)}}^*(x)$, the rate function for guessing the non-uniform byte string. The highlighted line, which is the minimum of the two rate functions until $x=1$ and then $+\infty$ afterwards, displays $I_{ {G_{\text{opt}} } }(1, 2, x)$, as determined by \eqref{eq:Iopt}, the rate function for an inquisitor to guess one of the two strings. Its non-convexity demonstrates that initially it is the bytes that are most likely to be revealed by brute force searching, but eventually it is the uniform bits that are more likely to be identified. The Legendre-Fenchel transform of the scaled cumulant generating function of the guesswork distributions would form the convex hull of the highlighted line and so this could not be deduced by analysis of the asymptotic moments. } \label{fig:nonconvex} \end{figure} Let $ {\mathbb{A} }=\{0,\ldots,7\}$, $U=1$ and $V=2$. Let one character source correspond to the output of a cryptographically secure pseudo-random number generator. That is, despite having a byte alphabet, the source produces perfectly uniform i.i.d. bits, \begin{align*} P(W^{(1)}_1=i) &= \begin{cases} 1/2 & \text{ if } i\in\{0,1\}\\ 0 & \text{ otherwise}. \end{cases} \end{align*} The other source can be thought of as i.i.d. bytes generated by a non-uniform source, \begin{align*} P(W^{(2)}_1=i) &= \begin{cases} 0.55 & \text{ if } i=0\\ 0.1 & \text{ if } i\in\{1,2\}\\ 0.05 & \text{ if } i\in\{3,\ldots,7\}. \end{cases} \end{align*} This models the situation of a piece of data, a string from the second source, being encrypted with a shorter, perfectly uniform key. The inquisitor can reveal the hidden string by guessing either the key or the string. One might suspect that either the key or the string is necessarily more susceptible to being guessed, but the result is more subtle. 
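The non-convexity visible in Figure \ref{fig:nonconvex} can also be checked numerically, by computing each user's rate function as a grid-based Legendre-Fenchel transform of the sCGF in equation \eqref{eq:sCGF}. The following Python sketch is illustrative only (grid resolutions and names are choices, and natural logarithms are used throughout):

```python
import math

def renyi(probs, beta):
    # i.i.d. specific Renyi entropy: R(beta) = log(sum_i p_i^beta)/(1-beta).
    return math.log(sum(p ** beta for p in probs)) / (1 - beta)

def scgf(probs, alpha):
    # Equation (sCGF): alpha*R(1/(1+alpha)) for alpha > -1, -R(inf) below.
    if alpha <= -1:
        return math.log(max(probs))  # equals -R(infinity)
    if alpha == 0:
        return 0.0
    return alpha * renyi(probs, 1 / (1 + alpha))

ALPHAS = [-1 + i / 50 for i in range(0, 501)]  # alpha grid on [-1, 9]

def rate(probs, x):
    # Legendre-Fenchel transform of the sCGF over a finite alpha grid.
    return max(x * a - scgf(probs, a) for a in ALPHAS)

bits = [0.5, 0.5]                          # user 1: uniform key bits
bytes_ = [0.55, 0.1, 0.1] + [0.05] * 5     # user 2: non-uniform bytes

# For U=1, V=2 and x < log 2, both gamma terms vanish, so the combined
# rate function is the pointwise minimum of the two individual ones.
xs = [i * math.log(2) / 200 for i in range(1, 200)]
I_opt = [min(rate(bits, x), rate(bytes_, x)) for x in xs]

# A midpoint convexity violation occurs where the straight bit-string
# rate function crosses the convex byte-string one.
assert any(I_opt[i] > (I_opt[i - 1] + I_opt[i + 1]) / 2 + 1e-9
           for i in range(1, len(I_opt) - 1))
```

Each individual transform is convex by construction, so the detected violation is attributable entirely to taking the pointwise minimum of the two curves.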
Figure \ref{fig:nonconvex} plots the rate functions for guessing each of the user's strings individually, as well as the rate function for guessing one out of two, determined by equation \eqref{eq:Iopt}, which in this case is the minimum of the two rate functions where they are finite. The y-axis is the exponential decay rate, in string length $k$, of the likelihood of identification after approximately $\exp(kx)$ guesses have been made, where $x$ is the value on the x-axis. The rate function reveals that if the inquisitor identifies one of the strings quickly, it will be the non-uniform byte string, but after a certain number of guesses it is the key, the uniform bit string, that is identified. Attempting to obtain this result by taking the Legendre-Fenchel transform of the sCGF identified in \cite{Merhav99} results in the convex hull of this non-convex function, which has no real meaning. This explains the necessity for the distinct proof approach taken here if one wishes to develop estimates on the guesswork distribution rather than its moments. \section{Identical Statistics Examples} \label{sec:ident} When the string statistics of users are asymptotically the same, the resulting multi-user guesswork rate functions are convex by Corollary \ref{cor:same}, and the r\^ole of specific Shannon entropy in analyzing expected multi-user guesswork appears. This is the setting that leads to the results in Section \ref{sec:firstexample} where it is assumed that character statistics are i.i.d., but not necessarily uniform. An alternate means of departure from string-selection uniformity is that the appearance of characters within the string may be correlated. The simplest model of this is where string symbols are governed by a Markov chain with arbitrary starting distribution and transition matrix \begin{align*} \left( \begin{array}{cc} 1-a & a \\ b & 1-b \end{array} \right), \end{align*} where $a,b\in(0,1)$. The specific R\'enyi entropy of this character source can be evaluated, e.g.
\cite{Malone042}, for $\beta\neq1$ to be \begin{align*} R(\beta) =& \frac{1}{1-\beta}\log \left( (1-a)^\beta+(1-b)^\beta \right. \\ & \left.+ \sqrt{((1-a)^\beta-(1-b)^\beta)^2 + 4(ab)^\beta } \right) -\frac{\log 2}{1-\beta} \end{align*} and $R(1)$ is the Shannon entropy \begin{align*} R(1) &= \frac{b}{a+b}H(a)+\frac{a}{a+b}H(b), \end{align*} where $H(a) = -a\log(a)-(1-a)\log(1-a)$. Figure \ref{fig:gap_mark} shows $R(1/2)-R(1)$, the difference between the average guesswork growth rate for a single user system and that for one with an arbitrarily large number of users, as $a$ and $b$ are varied. Heavily correlated sources or those with unlikely characters give the greatest discrepancy in security. \begin{figure} \begin{center} \includegraphics[scale=0.46]{sec_gap_markov} \end{center} \caption{Markovian string source over a binary alphabet $ {\mathbb{A} }=\{0,1\}$ with $a$ being the probability of a 1 after a 0 and $b$ being the probability of a 0 after a 1. The plot shows the difference in average guesswork exponent for a single user system and a system with an arbitrarily large number of users, a measure of computational security reduction. } \label{fig:gap_mark} \end{figure} If $a=b$, then the stationary likelihood a symbol is a $0$ or $1$ is equal, but symbol occurrence is correlated. In that setting, the string source's specific R\'enyi entropy gives for $\beta\neq1$ \begin{align*} R(\beta) =& \frac{1}{1-\beta}\log \left( (1-a)^\beta + a^\beta \right), \end{align*} which is the same as a Bernoulli source with probability $a$ of one character. Thus the results in Section \ref{sec:firstexample} can be re-read with the Bernoulli string source with parameter $p=a$ substituted for a Markovian string source whose stationary distribution gives equal weight to both alphabet letters, but for which character appearance is correlated.
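The Markovian entropy can be cross-checked against the defining limit by summing $P(w_k)^\beta$ with a transfer-matrix recursion: $R(\beta)$ is $\frac{1}{1-\beta}$ times the logarithm of the largest eigenvalue of the $\beta$-tilted transition matrix. A Python sketch (illustrative, natural logarithms, stationary initial distribution assumed):

```python
import math

def renyi_markov(a, b, beta):
    """R(beta) via the largest eigenvalue of the beta-tilted matrix
    [[(1-a)^beta, a^beta], [b^beta, (1-b)^beta]]."""
    t00, t01 = (1 - a) ** beta, a ** beta
    t10, t11 = b ** beta, (1 - b) ** beta
    tr, det = t00 + t11, t00 * t11 - t01 * t10
    lam = (tr + math.sqrt(tr * tr - 4 * det)) / 2
    return math.log(lam) / (1 - beta)

def renyi_finite_k(a, b, beta, k):
    """(1/k)(1/(1-beta)) log sum_w P(w)^beta over length-k binary strings,
    by dynamic programming on the last character."""
    pi0 = b / (a + b)  # stationary probability of state 0
    s0, s1 = pi0 ** beta, (1 - pi0) ** beta
    for _ in range(k - 1):
        s0, s1 = (s0 * (1 - a) ** beta + s1 * b ** beta,
                  s0 * a ** beta + s1 * (1 - b) ** beta)
    return math.log(s0 + s1) / (k * (1 - beta))

# The finite-k average converges to the eigenvalue expression as k grows.
assert abs(renyi_finite_k(0.3, 0.2, 0.5, 500)
           - renyi_markov(0.3, 0.2, 0.5)) < 0.01

# When a = b, the square root collapses and R(beta) reduces to the
# Bernoulli(a) expression log((1-a)^beta + a^beta)/(1-beta), as in the text.
beta = 0.5
assert abs(renyi_markov(0.25, 0.25, beta)
           - math.log(0.75 ** beta + 0.25 ** beta) / (1 - beta)) < 1e-12
```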
\section{Discussion} \label{sec:conc} Since Massey \cite{Massey94} posed the original guesswork problem and Arikan \cite{Arikan96} introduced its long string asymptotic, generalizations have been used to quantify the computational security of several systems, including questions related to lossless compression. Here we have considered what appears to be one of the most natural extensions of that theory, that of multi-user computational security. Because the guesswork rate function is inherently non-convex unless string source statistics are identical across users, this development was not possible prior to the Large Deviation Principle proved in \cite{Christiansen13}. The results therein themselves relied on the earlier work that determined the scaled cumulant generating function of the guesswork for a broad class of processes \cite{Arikan96,Malone042,Pfister04,Hanawal11}. The fact that rate functions can be non-convex encapsulates that distinct subsets of users are likely to be identified depending on how many unsuccessful guesses have been made. As a result, a simple ordering of string guessing difficulty is inappropriate in multi-user systems and suggests that quantification of multi-user computational security is inevitably nuanced. The original analysis of the asymptotic behavior of single user guesswork identified an operational meaning to specific R\'enyi entropy. In particular, the average guesswork grows exponentially in string length with an exponent that is the specific R\'enyi entropy of the character source with parameter $1/2$. When users' string statistics are the same, the generalization to multi-user guesswork identifies a surprising operational r\^ole for specific R\'enyi entropy with parameter $n/(n+1)$ for each $n\in {\mathbb{N} }$ when $n$ is the excess number of strings that can be guessed.
Moreover, while the specific Shannon entropy of the string source was found in the single user problem to have an unnatural meaning as the growth rate of the expected logarithm of the guesswork, in the multi-user system it arises as the universal lower bound on the average guesswork growth rate. For the asymptote at hand, the key message is that there is a law of diminishing returns for an inquisitor as the number of users increases. For a multi-user system designer, in contrast to the single character, single user system introduced in \cite{Massey94}, Shannon entropy is the appropriate measure of expected guesswork for systems with many users. Future work might consider the case where the $V$ strings are not selected independently, as was assumed here, but are instead linear functions of $U$ independent strings. A potential application of such a case, suggested by Erdal Arikan (Bilkent University) in a personal communication, envisages the use of multi-user guesswork to characterize the behavior of parallel concatenated decoders operating on blocks of convolutionally encoded symbols passed through a preliminary algebraic block Maximum Distance Separable (MDS) code, e.g. \cite{shu2004}. The connection between guessing and convolutional codes was first established by Arikan \cite{Arikan96}. Decoding over a channel may, in general, be viewed as guessing a codeword that has been chosen from a list of possible channel input sequences, given the observation of an output sequence formed by corrupting the input sequence according to some probability law used to characterize the channel, e.g. \cite{Christiansen13b}. In sequential decoding of convolutional codes, first proposed by Wozencraft \cite{Wozencraft57}, guessing constitutes an exploration along a decision tree of the possible input sequences that could have led to the observed output sequence, as modeled by Fano \cite{Fano63}.
If the transmitted rate, given by the logarithm of the cardinality of possible codewords, falls below the cut-off rate, then results in \cite{Arikan96} prove that the expected guesswork remains less than exponential in the length of the code. Beyond the cut-off rate, it becomes exponentially large. One may view such a result as justifying the frequent use of the cut-off rate as a practical, engineering characterization of the limitations of block and convolutional codes. Consider now the following construction of a type of concatenated code \cite{shu2004}, which is a slight variant of that proposed by Falconer \cite{Falconer67}. The original data, a stream of i.i.d. symbols, is first encoded using an algebraic block MDS code. For a block MDS code, such as a Reed-Solomon code \cite{shu2004}, over a codeword constituted by a sequence of $V$ symbols, correct reception of any $U$ symbols from the $V$ allows for correct decoding, where the feasibility of a pair of $V$ and $U$ depends on the family of codes. For every $U$ input symbols in the data stream, $V$ symbols are generated by the algebraic block MDS code. Note that these symbols may be selected over a set of large cardinality, for instance by taking each symbol to be a string of bits. As successive input blocks of length $U$ are processed by the block MDS code, these symbols form $V$ separate streams of symbols. Each of these $V$ streams emanating from the algebraic block MDS code is coded using a separate but identical convolutional encoder. The $V$ convolutional codewords thus obtained are dependent, even though any $U$ of them are mutually independent. This dependence is induced by the fact that the $V$ convolutional codewords are created by $U$ original streams that form the input of the block MDS encoder. The $V$ convolutional codewords then constitute the inputs to $V$ mutually independent, Discrete Memoryless Channels (DMCs), all governed by the same probability law.
In Falconer's construct, such parallel DMCs are embodied by time-sharing a single DMC equally. While Falconer envisages independent DMCs governed by a single probability law, as is suitable in the setting of interleaving over a single DMC, we may readily extend the scheme to the case where the parallel DMCs have different behaviors. Such a model is natural in wireless settings where several channels are used in parallel, say over different frequencies. While the behavior of such channels is often well modeled as being mutually independent, and the channels individually are well approximated as being DMCs, the characteristics of the channels, which may vary slowly in time, generally differ considerably from each other at any time. Decoding uses the outputs of the $V$ DMCs as follows. For each DMC, the output is initially individually decoded using sequential decoding so that, in the words of Falconer, "controlled by the Fano algorithm, all $[V]$ sequential decoders simultaneously and independently attempt to proceed along the correct path in their own trees". The dependence among the streams produced by the original application of the block MDS code entails that, when $U$ sequential decoders have each correctly guessed a symbol, the correct guesses determine a block of $U$ original data symbols. The latter are communicated to all remaining $V-U$ sequential decoders, eliminating the need for them to continue producing guesses regarding that block of $U$ original data symbols. The sequential decoders then proceed to continue attempting to decode the next block of $U$ original data symbols. This scheme allows the $U$ most fortunate guesses out of $V$ to dominate the performance of the overall decoder. A sequential decoder that was a laggard for one block of the original $U$ symbols may prove to be a leader for another block of $U$ symbols. 
\section*{Acknowledgments} The authors thank Erdal Arikan (Bilkent University) for informative feedback and for pointing out the relationship between multi-user guesswork and sequential decoding. They also thank the anonymous reviewers for their feedback on the paper.
\section{Introduction} With machine learning applications becoming ubiquitous in modern-day life, there exists an increasing concern about the robustness of the deployed models. Since first reported in \citep{szegedy2013intriguing,goodfellow2014explaining,biggio2013evasion}, these \emph{adversarial attacks} are small perturbations of the input, imperceptible to the human eye, which can nonetheless completely fluster otherwise well-performing systems. Because of clear security implications \citep{darpa}, this phenomenon has sparked an increasing amount of work dedicated to devising defense strategies \citep{metzen2017detecting,gu2014towards,madry2017towards} and correspondingly more sophisticated attacks \citep{carlini2017adversarial,athalye2018obfuscated,tramer2020adaptive}, with each group trying to triumph over the other in an arms-race of sorts. A different line of research attempts to understand adversarial examples from a theoretical standpoint. Some works have focused on giving robustness certificates, thus providing a guarantee to withstand the attack of an adversary under certain assumptions \citep{cohen2019certified,raghunathan2018certified,wong2017provable}. Other works address questions of learnability \citep{shafahi2018adversarial,cullina2018pac,bubeck2018adversarial,tsipras2018robustness} or sample complexity \citep{schmidt2018adversarially,yin2018rademacher,tu2019theoretical}, in the hope of better characterizing the increased difficulty of learning hypotheses that are robust to adversarial attacks. While many of these results are promising, the analysis is often limited to simple models. Here, we strike a better balance by considering a model that involves learning a representation while at the same time giving a precise generalization bound and a robustness certificate. 
In particular, we focus our attention on the adversarial robustness of the supervised sparse coding model \citep{mairal2011task}, or task-driven dictionary learning, consisting of a linear classifier acting on the representation computed via a supervised sparse encoder. We show an interesting interplay between the expressivity and stability of a (supervised) representation map and a notion of margin in the feature space. The idea of employing sparse representations as data-driven features for supervised learning goes back to the early days of deep learning \citep{coates2011importance,kavukcuoglu2010fast,zeiler2010deconvolutional,ranzato2007unsupervised}, and has had a significant impact on applications in computer vision and machine learning \citep{wright2010sparse,henaff2011unsupervised,mairal2008discriminative,mairal2007sparse,gu2014projective}. More recently, new connections between deep networks and sparse representations were formalized by~\cite{papyan2018theoretical}, which further helped derive stability guarantees \citep{papyan2017working}, provide architecture search strategies and analysis \citep{tolooshams2019deep,murdock2020dataless,sulam2019multi}, and other theoretical insights \citep{xin2016maximal,aberdam2019multi,aghasi2020fast,aberdam2020ada,moreau2016understanding}. While some recent work has leveraged the stability properties of these latent representations to provide robustness guarantees against adversarial attacks \citep{romano2019adversarial}, these rely on rather stringent generative model assumptions that are difficult to satisfy and verify in practice. In contrast, our assumptions rely on the existence of a positive \emph{gap} in the encoded features, as proposed originally by \cite{mehta2013sparsity}. This distributional assumption is significantly milder -- it is directly satisfied by making traditional sparse generative model assumptions -- and can be directly quantified from data. 
This work makes two main contributions: The first is a bound on the robust risk of hypotheses that satisfy a mild encoder gap condition, where the adversarial corruptions are bounded in $\ell_2$-norm. Our proof technique follows a standard argument based on a minimal $\epsilon$-cover of the parameter space, dating back to \cite{vapnik1971uniform} and adapted for matrix factorization and dictionary learning problems in \cite{gribonval2015sample}. However, the analysis of the Lipschitz continuity of the adversarial loss with respect to the model parameters is considerably more involved. The increase in the sample complexity is mild, with adversarial corruptions of size $\nu$ manifesting as an additional term of order $\mathcal{O}\left((1+\nu)^2/m\right)$ in the bound, where $m$ is the number of samples, and a minimal encoder gap of $\mathcal{O}(\nu)$ is necessary. Most of our results extend directly to other supervised learning problems (e.g. regression). Our second contribution is a robustness certificate that holds for every hypothesis in the function class for $\ell_2$ perturbations for multiclass classification. In a nutshell, this result guarantees that the label produced by the hypothesis will not change if the encoder gap is \emph{large enough} relative to the energy of the adversary, the classifier margin, and properties of the model (e.g. dictionary incoherence). \section{Preliminaries and Background} \label{sec:Preliminiaries} In this section, we first describe our notation and the learning problem, and then proceed to situate our contribution in relation to prior work. Consider the spaces of inputs, $\mathcal{X}\subseteq {B}_{\mathbb{R}^d}$, i.e. the unit ball in $\mathbb{R}^d$, and labels, $\mathcal{Y}$. Much of our analysis is applicable to a broad class of label spaces, but we will focus on the binary and multi-class classification settings in particular. 
We assume that the data is sampled according to some unknown distribution $P$ over $\mathcal X \times \mathcal Y$. Let $\mathcal{H} = \{f:\mathcal{X}\to\mathcal{Y}'\}$ denote a hypothesis class mapping inputs into some output space $\mathcal{Y}' \subseteq \mathbb{R}$. Of particular interest to us are norm-bounded linear predictors, $f(\cdot)=\langle {\mathbf w}, \cdot \rangle$, parametrized by $d$-dimensional vectors ${\mathbf w}\in \mathcal W = \{{\mathbf w} \in \mathbb{R}^d : \|{\mathbf w}\|_2\leq B\}$. From a learning perspective, we have a considerable understanding of the linear hypothesis class, both in a stochastic non-adversarial setting as well as in an adversarial context~ \citep{charles2019convergence,li2019inductive}. However, from an application standpoint, linear predictors are often too limited, and rarely applied directly on input features. Instead, most state-of-the-art systems involve learning a representation. In general, an \emph{encoder} map $\varphi:\mathcal{X}\to\mathcal{Z} \subseteq \mathbb{R}^p$, parameterized by $\theta$, is composed with a linear function so that $f({\mathbf x}) = \langle {\mathbf w}, \varphi_\theta({\mathbf x}) \rangle$, for ${\mathbf w}\in\mathbb{R}^p$. This description applies to a large variety of popular models, including kernel methods, multilayer perceptrons and deep convolutional neural networks. Herein we focus on an encoder given as the solution to a Lasso problem \citep{tibshirani1996regression}. More precisely, we consider $\varphi_{\mathbf D}({\mathbf x}):\mathbb{R}^d\to \mathbb{R}^p$, defined by \begin{equation} \label{eqn:enc-1} \varphi_{\mathbf D}({\mathbf x}) \coloneqq \arg\min_{\mathbf z} \frac{1}{2} \|{\mathbf x} - {\mathbf D}{\mathbf z}\|^2_2 +\lambda \|{\mathbf z}\|_1. \end{equation} Note that when ${\mathbf D}$ is overcomplete, i.e. $p>d$, this problem is not strongly convex. 
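Since Problem~\eqref{eqn:enc-1} has no closed-form solution in general, the encoder is computed iteratively in practice. A minimal numerical sketch using ISTA (proximal gradient descent) — one of many possible Lasso solvers, and an assumption of this sketch rather than a prescription of the model:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def encode(D, x, lam, n_iter=500):
    """Approximate phi_D(x) = argmin_z 0.5 ||x - D z||_2^2 + lam ||z||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        # gradient step on 0.5 ||x - D z||^2, then soft-thresholding
        z = soft_threshold(z - (D.T @ (D @ z - x)) / L, lam / L)
    return z
```

At a solution, the optimality conditions of \eqref{eqn:enc-1} hold: every correlation $|{\mathbf D}_i^T({\mathbf x}-{\mathbf D}{\mathbf z})|$ is at most $\lambda$, with equality on the support — a fact used repeatedly below.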
Nonetheless, we will assume that the solution to Problem~\ref{eqn:enc-1} is unique\footnote{The solution is unique under mild assumptions \citep{tibshirani2013lasso}, and otherwise our results hold for any solution returned by a deterministic solver.}, and study the hypothesis class given by $ \mathcal{H} = \{ f_{{\mathbf D},{\mathbf w}}({\mathbf x}) = \langle {\mathbf w}, \varphi_{\mathbf D}({\mathbf x}) \rangle : {\mathbf w}\in \mathcal{W}, {\mathbf D} \in \mathcal{D} \}$, where $\mathcal{W} = \{{\mathbf w} \in \mathbb{R}^p : \|{\mathbf w}\|_2\leq B\}$, and $\mathcal{D}$ is the oblique manifold of all matrices with unit-norm columns (or \emph{atoms}); i.e. $\mathcal{D} =\{ {\mathbf D} \in \mathbb{R}^{d\times p} : \|{\mathbf D}_i\|_2 = 1 ~ \forall i \in [p] \}$. While not explicit in our notation, $\enc[{\mathbf D}]{({\mathbf x})}$ depends on the value of $\lambda$. For notational simplicity, we also suppress subscripts $({\mathbf D},{\mathbf w})$ in $f_{{\mathbf D},{\mathbf w}}(\cdot)$ and simply write $f(\cdot)$. We consider a bounded loss function $\ell: \mathcal{Y}\times \mathcal{Y}' \to [0,b]$, with Lipschitz constant $L_\ell$. The goal of learning is to find an $f \in \mathcal{H}$ with minimal risk, or expected loss, $R(f) = \textstyle\mathbb{E}_{({\mathbf x},y)\sim P} \left[ \ell(y,f({\mathbf x})) \right]$. Given a sample $S=\{({\mathbf x}_i,y_i)\}_{i=1}^m$, drawn i.i.d. from $P$, a popular learning algorithm is empirical risk minimization (ERM) which involves finding $f_{{\mathbf D}, {\mathbf w}}$ that solves the following problem: \vspace{-.1cm} \begin{equation} \min_{{\mathbf D},{\mathbf w}}~ \frac{1}{m} \sum_{i=1}^{m} \ell(y_i,f_{{\mathbf D},{\mathbf w}}({\mathbf x}_i)). \nonumber \end{equation}\vspace{-.1cm} \textbf{Adversarial Learning.} In an adversarial setting, we are interested in hypotheses that are robust to adversarial perturbations of inputs. 
We focus on \emph{evasion attacks}, in which an attack is deployed at test time (while the training samples are not tampered with). As a result, a more appropriate loss that incorporates the robustness to such contamination is the robust loss \citep{madry2017towards}, $\tilde{\ell}_\nu(y,f({\mathbf x})) \coloneqq \max_{{\mathbf v}\in\Delta_\nu} ~ \ell(y,f({\mathbf x}+{\mathbf v}))$, where $\Delta_\nu$ is some subset of $\mathbb{R}^d$ that restricts the {power} of the adversary. Herein we focus on $\ell_2$ norm-bounded corruptions, $\Delta _\nu = \{{\mathbf v} \in \mathbb{R}^d ~ : ~ \|{\mathbf v}\|_2\leq \nu \}$, and denote by $\tilde{R}_S(f) = \frac{1}{m} \sum_{i=1}^{m} \tilde{\ell}_\nu(y_i,f({\mathbf x}_i))$ the empirical robust risk of $f$ and $\tilde{R}(f) = \mathbb{E}_{({\mathbf x},y)\sim P}[ \tilde{\ell}_\nu(y,f({\mathbf x}))]$ its population robust risk w.r.t. distribution $P$. \textbf{Main Assumptions.} We make two general assumptions throughout this work. First, we assume that the dictionaries in $\mathcal{D}$ are $s$-incoherent, i.e., they satisfy a restricted isometry property (RIP). More precisely, for any $s$-sparse vector, ${\mathbf z} \in \mathbb R^p$ with $\|{\mathbf z}\|_0 = s$, there exists a minimal constant $\eta_s<1$ so that ${\mathbf D}$ is close to an isometry, i.e. $(1-\eta_s)\|{\mathbf z}\|^2_2 \leq \|{\mathbf D}{\mathbf z}\|^2_2 \leq (1 + \eta_s) \|{\mathbf z}\|^2_2$. Broad classes of matrices are known to satisfy this property (e.g. sub-Gaussian matrices \citep{foucart2017mathematical}), although empirically computing this constant for a fixed (deterministic) matrix is generally intractable. Nonetheless, this quantity can be upper bounded by the correlation between columns of ${\mathbf D}$, either via mutual coherence \citep{donoho2003optimally} or the Babel function \citep{tropp2003improved}, both easily computed in practice. Second, we assume that the map $\enc[{\mathbf D}]$ induces a positive \emph{encoder gap} on the computed features. 
Given a sample ${\mathbf x}\in\mathcal{X}$ and its encoding, $\enc[{\mathbf D}]{({\mathbf x})}$, we denote by $\Lambda^{p-s}$ the collection of index sets of cardinality $(p-s)$, i.e., $\Lambda^{p-s} = \{ \mathcal{I} \subseteq \{1,\dots,p\} : |\mathcal{I}|=p-s \}$. The encoder gap $\tau_s(\cdot)$ induced by $\enc[{\mathbf D}]$ on any sample ${\mathbf x}$ is defined \citep{mehta2013sparsity} as \begin{equation} \tau_s({\mathbf x}) \coloneqq \max_{\mathcal{I}\in\Lambda^{p-s}} \min_{i \in \mathcal{I}} ~\left( \lambda - | \langle {\mathbf D}_i , {\mathbf x} - {\mathbf D}\enc[{\mathbf D}]({\mathbf x}) \rangle | \right). \nonumber \end{equation} {An equivalent and conceptually simpler definition for $\tau_s({\mathbf x})$ is the $(s+1)^{th}$ smallest entry in the vector $\lambda\mathbf{1} - | \langle {\mathbf D} , {\mathbf x} - {\mathbf D}\enc[{\mathbf D}]({\mathbf x}) \rangle |$. Intuitively, this quantity can be viewed as a measure of the maximal energy along any dictionary atom that is not in the support of an input vector.} More precisely, recall from the optimality conditions of Problem \eqref{eqn:enc-1} that $|{\mathbf D}_i^T({\mathbf x}-{\mathbf D}\enc[{\mathbf D}]{({\mathbf x})})|=\lambda$ if $[\enc[{\mathbf D}]{({\mathbf x})}]_i\neq 0$, and $|{\mathbf D}_i^T({\mathbf x}-{\mathbf D}\enc[{\mathbf D}]{({\mathbf x})})|\leq\lambda$ otherwise. Therefore, if $\tau_s$ is large, this indicates that there exists a set $\mathcal{I}$ of $(p-s)$ atoms that are \emph{far} from entering the support of $\enc[{\mathbf D}]{({\mathbf x})}$. If $\enc[{\mathbf D}]({\mathbf x})$ has exactly $k$ non-zero entries, we may choose some $s>k$ to obtain $\tau_s({\mathbf x})$. In general, $\tau_s(\cdot)$ depends on the energy of the residual, ${\mathbf x}-{\mathbf D}\enc[{\mathbf D}]{({\mathbf x})}$, the correlation between the atoms, the parameter $\lambda$, and the cardinality $s$. 
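The $(s+1)^{th}$-smallest-entry characterization makes $\tau_s$ straightforward to compute once a Lasso solution is available from any solver; a minimal sketch (the function name is ours):

```python
import numpy as np

def encoder_gap(D, x, z, lam, s):
    """tau_s(x): the (s+1)-th smallest entry of lam - |D^T (x - D z)|,
    where z = phi_D(x) is the Lasso solution for (D, x, lam)."""
    vals = np.sort(lam - np.abs(D.T @ (x - D @ z)))  # ascending order
    return vals[s]  # 0-indexed position s is the (s+1)-th smallest entry
```

For an orthonormal ${\mathbf D}$ the Lasso solution is simply soft-thresholding of ${\mathbf D}^T{\mathbf x}$, so the gap can be verified by hand: on-support atoms contribute entries equal to $0$ and off-support atoms contribute $\lambda - |{\mathbf x}_i|$.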
In a nutshell, if a dictionary ${\mathbf D}$ provides a quickly decaying approximation error as a function of the cardinality $s$, then a positive encoder gap exists for some $s$. {We consider dictionaries that induce a positive encoder gap in every input sample from a dataset, and define the minimum such margin as $\tau_s^*:=\min_{i\in[m]} \tau_s({\mathbf x}_i) > 0.$} Such a positive encoder gap exists for quite general distributions, such as those of $s$-sparse and approximately sparse signals. However, this definition is more general and it will allow us to avoid making any other stronger distributional assumptions. We now illustrate the encoder gap with both analytic and numerical examples\footnote{Code to reproduce all of our experiments is made available at \href{https://github.com/Sulam-Group/Adversarial-Robust-Supervised-Sparse-Coding}{our github repository.}}. \textbf{Approximate $k$-sparse signals} Consider signals ${\mathbf x}$ obtained as ${\mathbf x} = {\mathbf D}{\mathbf z} + {\mathbf v}$, where ${\mathbf D}\in\mathcal D$, $\|{\mathbf v}\|_2\leq\nu$ and ${\mathbf z}$ is sampled from a distribution of sparse vectors with up to $k$ non-zeros, with $k < \frac{1}{3}\left(1+\frac{1}{\mu({\mathbf D})}\right)$, where $\mu({\mathbf D}) = \max_{i\neq j}|\langle {\mathbf D}_i , {\mathbf D}_j \rangle|$ is the mutual coherence of ${\mathbf D}$. Then, for a particular choice of $\lambda$, we have that $\tau_s({\mathbf x}) > \lambda - \frac{15\mu\nu}{2}, \forall s>k $. This can be shown using standard results in \citep{tropp2006just}; we defer the proof to the Appendix \ref{supp:EncoderGapExample}. Different values of $\lambda$ provide different values of $\tau_s({\mathbf x})$. To illustrate this trade-off, we generate synthetic approximately $k$-sparse signals ($k=15$) from a dictionary with 120 atoms in 100 dimensions and contaminate them with Gaussian noise. 
We then numerically compute the value of $\tau^*_s$ as a function of $s$ for different values of $\lambda$, and present the results in \autoref{fig:encoder_gap_synthetic}. \textbf{Image data} We now demonstrate that a positive encoder gap exists for natural images as well. In \autoref{fig:encoder_gap_mnist} we similarly depict the value of $\tau_s(\cdot)$, as a function of $s$, for an encoder computed on MNIST digits and {CIFAR images} (from a validation set) with learned dictionaries (further details in \autoref{Sec:Experiments}). {In summary, the encoder gap is a measure of the ability of a dictionary to sparsely represent data, and one can induce a larger encoder gap by increasing the regularization parameter or the cardinality $s$. As we will shortly see, this will provide us with a controllable knob in our generalization bound.} \begin{figure} \centering \subcaptionbox{\label{fig:encoder_gap_synthetic}} {\includegraphics[width = .32\textwidth,trim = 30 0 30 20]{figures/encoder_gap_synthetic.pdf}} \subcaptionbox{\label{fig:encoder_gap_mnist}} {\includegraphics[width = .32\textwidth,trim = 30 0 30 20]{figures/encoder_gap_mnist_trans.pdf} } \subcaptionbox{\label{fig:encoder_gap_cifar}} {\includegraphics[width = .32\textwidth,trim = 30 0 30 20]{figures/encoder_gap_cifar_supervised.pdf} } \caption{ Encoder gap, $\tau^*_s$, for synthetic approximately sparse signals (a), MNIST digits (b), and CIFAR10 images (c). \vspace{-.3cm}} \label{fig:Encoder_Gap} \end{figure} \vspace{-5pt} \section{Prior Work} \label{sec:prior_work} Many results exist on the approximation power and stability of Lasso (see \citep{foucart2017mathematical}), which most commonly rely on assuming data is (approximately) $k$-sparse under a given dictionary. As explained in the previous section, we instead follow an analysis inspired by \cite{mehta2013sparsity}, which relies on the encoder gap. 
\cite{mehta2013sparsity} leverage the encoder gap to derive a generalization bound for the supervised sparse coding model in a stochastic (non-adversarial) setting. Their result, which follows a symmetrization technique~\citep{mendelson2004importance}, scales as $\tilde{\mathcal{O}} \big(\sqrt{(dp + \log(1/\delta))/m}\big)$, and requires a minimal number of samples that is $\mathcal O (1/(\tau_s\lambda))$. In contrast, we study generalization in the adversarially robust setting, detailed above. Our analysis is based on an $\epsilon$-cover of the parameter space and on analyzing a local-Lipschitz property of the adversarial loss. The proof of our generalization bound is simpler, and shows a mild deterioration of the upper bound on the generalization gap due to adversarial corruption. Our work is also inspired by the line of work initiated by~\cite{papyan2017convolutional}, which regards the representations computed by neural networks as approximations of those computed by a Lasso encoder across different layers. In fact, a first analysis of adversarial robustness for such a model is presented by \cite{romano2019adversarial}; however, they make strong generative model assumptions and thus their results are not applicable to practical real-data scenarios. Our robustness certificate mirrors the analysis from the former work, though leveraging a new and more general stability bound (\autoref{lemma:stability_of_rep_adversarial}) relying instead on the existence of a positive encoder gap. In a related work, and in the context of neural networks, \cite{cisse2017parseval} propose a regularization term inspired by Parseval frames, with the empirical motivation of improving adversarial robustness. Their regularization term can in fact be related to minimizing the (average) mutual coherence of the dictionaries, which naturally arises as a control for the generalization gap in our analysis. 
Lastly, several works have employed sparsity as a beneficial property in adversarial learning \citep{marzi2018sparsity,demontis2016security}, with little or no theoretical analysis, or in different frameworks (e.g. sparse weights in deep networks \citep{guo2018sparse,balda2019adversarial}, or on different domains \citep{bafna2018thwarting}). Our setting is markedly different from that of \cite{chen2013robust} who study adversarial robustness of Lasso as a sparse predictor directly on input features. In contrast, the model we study here employs Lasso as an encoder with a data-dependent dictionary, on which a linear hypothesis is applied. A few works have recently begun to analyze the effect of learned representations in an adversarial learning setting \citep{ilyas2019adversarial,allen2020feature}. Adding to that line of work, our analysis demonstrates that benefits can be provided by exploiting a trade-off between expressivity and stability of the computed representations, and the classifier margin. \vspace{-10pt} \section{Generalization bound for robust risk} \label{sec:GenBound} In this section, {we present a bound on the robust risk for models satisfying a positive encoder gap}. Recall that given a $b$-bounded loss $\ell$ with Lipschitz constant $L_\ell$, $\tilde{R}_S(f) = \frac{1}{m} \sum_{i=1}^{m} \tilde{\ell}_\nu(y_i,f({\mathbf x}_i))$ is the empirical robust risk, and $\tilde{R}(f) = \mathbb{E}_{({\mathbf x},y)\sim P} \big[ \tilde{\ell}_\nu(y,f({\mathbf x})) \big]$ is the population robust risk w.r.t. distribution $P$. Adversarial perturbations are bounded in $\ell_2$ norm by $\nu$. 
{Our main result below guarantees that if a hypothesis $f_{{\mathbf D},{\mathbf w}}$ is found with a sufficiently large encoder gap, and a large enough training set, its generalization gap is bounded as $\tilde{\mathcal O}\Big(b\sqrt{\frac{ (d+1)p}{m}}\Big)$, where $\tilde{\mathcal O}$ ignores poly-logarithmic factors.} \begin{theorem} \label{thm:generalization} Let $\mathcal W = \{{\mathbf w} \in \mathbb R^p : \|{\mathbf w}\|_2 \leq B \}$, and $\mathcal{D}$ be the set of column-normalized dictionaries with $p$ columns and with RIP constant at most $\eta^*_s$. Let $\mathcal{H} = \{ f_{{\mathbf D},{\mathbf w}}({\mathbf x}) = \langle {\mathbf w}, \varphi_{\mathbf D}({\mathbf x}) \rangle : {\mathbf w}\in \mathcal{W}, {\mathbf D} \in \mathcal{D} \}$. Denote by $\tau^*_s$ the minimal encoder gap over the $m$ samples. Then, with probability at least $1-\delta$ over the draw of the $m$ samples, the generalization gap of any hypothesis $f\in\mathcal H$ that achieves an encoder gap on the samples of $\tau_s^*>2\nu$ satisfies \begin{multline} \left| \tilde{R}_S(f)\! -\! \tilde{R}(f) \right| \!\leq \frac{b}{\sqrt{m}} \left( (d+1) p \log\left(\frac{3m}{2\lambda (1-\eta^*_s)}\right) + p\log(B) + \log{\frac{4}{\delta}} \right)^{\frac{1}{2}} \\ + b\sqrt{\frac{2\log(m/2)+2\log(2/\delta)}{m}} + 12\frac{(1+\nu)^2 L_\ell B \sqrt{s} }{m} , \nonumber \end{multline} as long as $m > \frac{\lambda(1-\eta_s)}{(\tau_s^* - 2\nu)^2} K_\lambda$, where $K_\lambda = \left( 2 \left( 1+\frac{1+\nu}{2\lambda} \right) + \frac{5 (1+\nu)}{\sqrt{\lambda}} \right)^2$. \end{theorem} A few remarks are in order. First, note that adversarial generalization incurs a polynomial dependence on the adversarial perturbation $\nu$. This is mild, especially since it only affects the fast $\mathcal O(1/m)$ term. Second, the bound requires a minimal number of samples. 
Such a requirement is intrinsic to the stability of Lasso (see \autoref{lemma:advers_D_stable} below) and it exists also in the non-adversarial setting \citep{mehta2013sparsity}. In the adversarial case, this requirement becomes more demanding, as reflected by the term $(\tau^*_s-2\nu)$ in the denominator. Moreover, a minimal encoder gap $\tau_s^*>2\nu$ is needed as well. \autoref{thm:generalization} suggests an interesting trade-off. One can obtain a large $\tau^*_s$ by increasing $\lambda$ and $s$ -- as demonstrated in \autoref{fig:Encoder_Gap}. But increasing $\lambda$ may come at the expense of hurting the empirical error, while increasing $s$ makes the term $1-\eta_s$ smaller. {Therefore, if one obtains a model with small training error, along with large $\tau^*_s$ over the training samples for an appropriate choice of $\lambda$ and $s$ while ensuring that $\eta_s$ is bounded away from $1$, then $f_{{\mathbf D},{\mathbf w}}$ is guaranteed to generalize well. Furthermore, note that the excess error depends mildly (poly-logarithmically) on $\lambda$ and $\eta_s$.} Our proof technique is based on a minimal $\epsilon$-cover of the parameter space, and the full proof is included in the Appendix~\ref{supp:Generalization}. Special care is needed to ensure that the encoder gap of the dictionary holds for a sample drawn from the population, as we can only measure this gap on the provided $m$ samples. To address this, we split the data equally into a training set and a development set: the former is used to learn the dictionary, and the latter to provide a high probability bound on the event that $\tau_s({\mathbf x})>\tau^*_s$. This is to ensure that the random samples of the encoder margin are i.i.d. for measure concentration. Ideally, we would like to utilize the entire dataset for learning the predictor; we leave that for future work. 
While most of the techniques we use are standard\footnote{See \citep{seibert2019sample} for a comprehensive review on these tools in matrix factorization problems.}, the Lipschitz continuity of the robust loss function requires a more delicate analysis. For that, we have the following result. \begin{lemma}[Parameter adversarial stability] \label{lemma:advers_D_stable} Let ${\mathbf D},{\mathbf D}' \in \mathcal D$. If $\|{\mathbf D}-{\mathbf D}'\|_2 \leq \epsilon\leq 2 \lambda/(1+\nu)^2$, then \begin{equation} \label{eq:adver_perturb_stability} \max_{{\mathbf v}\in\Delta} \|\enc[{\mathbf D}]({\mathbf x}+{\mathbf v}) - \enc[{\mathbf D}']({\mathbf x}+{\mathbf v}) \|_2 \leq \gamma (1+\nu)^2 \epsilon, \end{equation} with $\gamma = \frac{3}{2} \frac{\sqrt{s}}{\lambda(1-\eta_s)}$, as long as $\tau_s({\mathbf x}) \geq 2 \nu + \sqrt{\epsilon}\left( \sqrt{\frac{25}{\lambda}}(1+\nu) + 2 \left( \frac{(1+\nu)}{\lambda} + 1\right) \right).$ \end{lemma} \autoref{lemma:advers_D_stable} is central to our proof, as it provides a bound on the difference between the features computed by the encoder under model deviations. Note that the condition on the minimal encoder gap, $\tau_s({\mathbf x})$, puts an upper bound on the distance between models ${\mathbf D}$ and ${\mathbf D}'$. This in turn results in the condition imposed on the minimal number of samples in \autoref{thm:generalization}. It is worth stressing that the lower bound on $\tau_s({\mathbf x})$ is on the \emph{unperturbed} encoder gap -- that which can be evaluated on the samples from the dataset, without the need of the adversary. We defer the proof of this Lemma to Appendix~\ref{supp:ParameterAdversarialStability}. \section{Robustness Certificate} \label{sec:certificates} Next, we turn to address an equally important question about robust adversarial learning, that of giving a formal certification of robustness. 
Formally, we would like to guarantee that the output of the trained model, $f_{{\mathbf D},{\mathbf w}}({\mathbf x})$, does not change for norm-bounded adversarial perturbations of a certain size. Our second main result provides such a certificate for the supervised sparse coding model. Here, we consider a multiclass classification setting with $y\in\{1,\dots,K\}$; simplified results for binary classification are included in Appendix \ref{supp:certificate}. The hypothesis class is parameterized as $f_{{\mathbf D},{\mathbf W}}({\mathbf x}) = {\mathbf W}^T\enc[{\mathbf D}]({\mathbf x})$, with ${\mathbf W} = [{\mathbf W}_1, {\mathbf W}_2, \ldots, {\mathbf W}_K] \in\mathbb{R}^{p\times K}$. The multiclass margin of a sample $({\mathbf x},y)$ is defined as follows: \[ \rho_{\mathbf x} = {\mathbf W}^T_{y} \enc[{\mathbf D}]({\mathbf x}) - \max_{j\neq y} {\mathbf W}^T_{j} \enc[{\mathbf D}]({\mathbf x}). \] We show the following result. \begin{theorem}[Robustness certificate for multiclass supervised sparse coding] \label{thm:multiclass_certificate} Let $\rho_{\mathbf x} > 0$ be the multiclass classifier margin of $f_{{\mathbf D},{\mathbf W}}({\mathbf x})$ composed of an encoder with a gap of $\tau_s({\mathbf x})$ and a dictionary, ${\mathbf D}$, with RIP constant $\eta_s<1$. Let $c_{\mathbf W} := \max_{i\neq j}\|{\mathbf W}_i-{\mathbf W}_j\|_2$. 
Then, \begin{equation} \arg\max_{j\in[K]} ~[{\mathbf W}^T \enc[{\mathbf D}]({\mathbf x})]_j\ = \arg\max_{j\in[K]}~ [{\mathbf W}^T \enc[{\mathbf D}]({\mathbf x}+{\mathbf v})]_j,\quad \forall~ {\mathbf v}:\|{\mathbf v}\|_2\leq \nu, \end{equation} so long as $\nu \leq \min\{ \tau_s({\mathbf x})/2 , \rho_{\mathbf x} \sqrt{1-\eta_s} / c_{\mathbf W} \}.$ \end{theorem} \autoref{thm:multiclass_certificate} clearly captures the potential contamination on two flanks: robustness can no longer be guaranteed as soon as the energy of the perturbation is enough to either significantly modify the computed representation \emph{or} to induce a perturbation larger than the classifier margin on the feature space. Proof of \autoref{thm:multiclass_certificate}, detailed in Appendix \ref{supp:certificate}, relies on the following lemma showing that under an encoder gap assumption, the computed features are moderately affected despite adversarial corruptions of the input vector. \begin{lemma}[Stability of representations under adversarial perturbations] \label{lemma:stability_of_rep_adversarial} Let ${\mathbf D}$ be a dictionary with RIP constant $\eta_s$. Then, for any ${\mathbf x} \in \mathcal X$ and its additive perturbation ${\mathbf x}+{\mathbf v}$, for any $\|{\mathbf v}\|_2\leq \nu$, if $\tau_s({\mathbf x}) > 2 \nu$, then we have that \begin{equation} \| \enc[{\mathbf D}]{({\mathbf x})} - \enc[{\mathbf D}]{({\mathbf x}+{\mathbf v})} \|_2 \leq \frac{\nu}{\sqrt{1-\eta_s}}. \end{equation} \end{lemma} An extensive set of results exist for the stability of the solution provided by Lasso relying on generative model assumptions \citep{foucart2017mathematical,elad2010sparse}. The novelty of Lemma~\ref{lemma:stability_of_rep_adversarial} is in replacing such an assumption with the existence of a positive encoder gap on $\enc[{\mathbf D}]({\mathbf x})$. 
Going back to \autoref{thm:multiclass_certificate}, note that the upper bound on $\nu$ depends on the RIP constant $\eta_s$, which is not computable for a given (deterministic) matrix ${\mathbf D}$. Yet, this result can be naturally relaxed by upper bounding $\eta_s$ with measures of correlation between the atoms, such as the mutual coherence. This quantity provides a measure of the worst correlation between two atoms in the dictionary ${\mathbf D}$, and it is defined as $\mu({\mathbf D}) = \max_{i\neq j} | \langle {\mathbf D}_i , {\mathbf D}_j \rangle|$ (for ${\mathbf D}$ with normalized columns). For general (overcomplete and full rank) dictionaries, clearly $0<\mu({\mathbf D})\leq 1$. While conceptually simple, results that use $\mu({\mathbf D})$ tend to be too conservative. Tighter bounds on $\eta_s$ can be provided by the Babel function\footnote{Let $\Lambda$ denote subsets (supports) of $\{1,2,\dots,p\}$. Then, the Babel function is defined as \mbox{$\mu_{(s)} = \max_{\Lambda:|\Lambda|=s} \max_{j\notin \Lambda} \sum_{i\in\Lambda} |\langle {\mathbf D}_i , {\mathbf D}_j \rangle|$.}}, $\mu_{(s)}$, which quantifies the maximum correlation between an atom and \emph{any other} collection of $s$ atoms in ${\mathbf D}$. It can be shown \citep[Chapter 2]{tropp2003improved,elad2010sparse} that $\eta_s\leq \mu_{(s-1)}\leq(s-1)\mu({\mathbf D})$. Therefore, we have the following: \begin{corollary} \label{cor:certificate_babel_multiclass} Under the same assumptions as those in \autoref{thm:multiclass_certificate}, \begin{equation} \arg\max_{j\in[K]} ~[{\mathbf W}^T \enc[{\mathbf D}]({\mathbf x})]_j\ = \arg\max_{j\in[K]}~ [{\mathbf W}^T \enc[{\mathbf D}]({\mathbf x}+{\mathbf v})]_j,\quad \forall~ {\mathbf v}:\|{\mathbf v}\|_2\leq \nu \end{equation} so long as $\nu \leq \min\{ \tau_s({\mathbf x})/2 , \rho_{\mathbf x} \sqrt{1-\mu_{(s-1)}}/c_{\mathbf W} \}$. 
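The Babel function in the footnote is directly computable from the Gram matrix of the dictionary; a minimal sketch (the function name is ours):

```python
import numpy as np

def babel(D, s):
    """Babel function mu_(s): the maximum over atoms j of the sum of the s
    largest values |<D_i, D_j>| over i != j (columns of D assumed unit-norm)."""
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)          # exclude the atom's correlation with itself
    top = -np.sort(-G, axis=0)        # each column sorted in descending order
    return top[:s].sum(axis=0).max()
```

Note that $\mu_{(1)}$ is exactly the mutual coherence $\mu({\mathbf D})$, and the chain $\eta_s\leq \mu_{(s-1)}\leq(s-1)\mu({\mathbf D})$ makes the bound of the corollary below computable in practice.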
\end{corollary} Although the condition on $\nu$ in the corollary above is stricter (and the bound looser), the quantities involved can easily be computed numerically, leading to practically useful bounds, as we see next. \vspace*{-5pt} \section{Experiments} \label{Sec:Experiments} In this section, we illustrate the robustness certificate guarantees on both synthetic and real data, as well as the trade-offs between constants in our sample complexity result. First, we construct samples from a separable binary distribution of $k$-sparse signals. To this end, we employ a dictionary with 120 atoms in 100 dimensions and a mutual coherence of 0.054. Sparse representations ${\mathbf z}$ are constructed by first drawing their support (with cardinality $k$) uniformly at random, and then drawing the non-zero entries from a uniform distribution bounded away from zero. Samples are obtained as ${\mathbf x} = {\mathbf D}{\mathbf z}$, and normalized to unit norm. We finally enforce separability by drawing ${\mathbf w}$ at random from the unit ball, determining the labels as $y=\text{sign}( {\mathbf w}^T\enc[{\mathbf D}]{({\mathbf x})})$, and discarding samples with a margin $\rho$ smaller than a pre-specified amount (0.05 in this case). Because of the separable construction, the accuracy of the resulting classifier is 1. We then attack the obtained model employing the projected gradient descent method \citep{madry2017towards}, and analyze the degradation in accuracy as a function of the energy budget $\nu$. We compare this empirical performance with the bound in \autoref{cor:certificate_babel_multiclass}: given the obtained margin, $\rho$, and the dictionary's Babel function $\mu_{(s)}$, we can compute the maximal certified radius for a sample ${\mathbf x}$ as \begin{equation} \label{eq:computing_bound} \nu({\mathbf x}) = \max_s \min\{ \tau_s({\mathbf x})/2 , \rho_{\mathbf x} \sqrt{1-\mu_{(s-1)}}/c_{\mathbf W}\}.
\end{equation} {For a given dataset, we can compute the minimal certified radius over the samples, $\nu^* = \min_{i\in[n]} \nu({\mathbf x}_i)$.} This is the bound depicted by the vertical line in \autoref{fig:synthetic_separable}. As can be seen, although the bound is somewhat loose, the attacks do not change the label of the samples, thus preserving the accuracy. In non-separable distributions, one may study how the accuracy depends on the \emph{soft margin} of the classifier. In this way, one can determine a target margin that results in, say, 75\% accuracy on a validation set. One can then obtain a corresponding certified radius $\nu^*$ as before, which guarantees that the accuracy will not drop below 75\% as long as $\nu<\nu^*$. This is illustrated in \autoref{fig:synthetic_non_separable}. An alternative way of employing our results from \autoref{sec:certificates} is by studying the \emph{certified accuracy} achieved by the resulting hypothesis. The certified accuracy quantifies the percentage of samples in a test set that are classified correctly while being \emph{certifiable}. In our context, this means that a sample ${\mathbf x}$ achieves a margin of $\rho_{\mathbf x}$, for which a certified radius of $\nu^*$ can be obtained with \autoref{eq:computing_bound}. In this way, one may study how the certified accuracy decreases with increasing $\nu^*$. This analysis lets us compare our bounds with those of other popular certification techniques, such as randomized smoothing \citep{cohen2019certified}. Randomized smoothing provides high-probability robustness guarantees against $\ell_2$ attacks for \emph{any} classifier by composing it with a Gaussian distribution (though other distributions have recently been explored for other $\ell_p$ norms \citep{salman2020black}). In a nutshell, the larger the variance of the Gaussian, the larger the certifiable radius becomes, albeit at the expense of a drop in accuracy. We use the MNIST dataset for this analysis.
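All quantities entering \autoref{eq:computing_bound} are simple to evaluate numerically: $\mu({\mathbf D})$ and the Babel function follow directly from the Gram matrix of ${\mathbf D}$, and the radius is obtained by scanning $s$. A minimal sketch with a random dictionary (the encoder gaps $\tau_s({\mathbf x})$, the margin $\rho_{\mathbf x}$, and the constant $c_{\mathbf W}$ are hypothetical placeholders, and a random ${\mathbf D}$ will not reproduce the coherence of 0.054 quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random normalized dictionary with the dimensions of the synthetic experiment.
D = rng.standard_normal((100, 120))
D /= np.linalg.norm(D, axis=0)

Gram = np.abs(D.T @ D)
np.fill_diagonal(Gram, 0.0)
mu = Gram.max()  # mutual coherence mu(D)

def babel(s):
    """Babel function mu_(s): for each atom, the largest sum of |<D_i, D_j>|
    over any s other atoms; then the worst case over atoms."""
    return np.sort(Gram, axis=0)[-s:, :].sum(axis=0).max()

# Sanity checks: mu_(1) = mu(D) and mu_(s-1) <= (s-1) * mu(D).
assert abs(babel(1) - mu) < 1e-12
assert all(babel(s - 1) <= (s - 1) * mu + 1e-12 for s in range(2, 8))

def certified_radius(tau, rho_x, c_W, s_max=10):
    """nu(x) = max_s min{ tau_s(x)/2, rho_x * sqrt(1 - mu_(s-1)) / c_W }."""
    best = 0.0
    for s in range(2, s_max + 1):
        m = babel(s - 1)
        if m >= 1.0:  # bound on eta_s no longer informative
            break
        best = max(best, min(tau[s] / 2.0, rho_x * np.sqrt(1.0 - m) / c_W))
    return best

tau = {s: 0.05 * s for s in range(2, 11)}  # hypothetical encoder gaps tau_s(x)
nu_x = certified_radius(tau, rho_x=0.05, c_W=1.0)
assert nu_x > 0.0
```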
We train a model with 256 atoms by minimizing the following regularized empirical risk using stochastic gradient descent (employing Adam \citep{kingma2014adam}; the implementation details are deferred to Appendix \ref{supp:numerical}) \begin{equation} \min_{{\mathbf W},{\mathbf D}}~ \frac{1}{m} \sum_{i=1}^m \ell\left(y_i,\langle {\mathbf W},\enc[{\mathbf D}]{({\mathbf x}_i)} \rangle\right) + \alpha \|{\mathbf I} - {\mathbf D}^T{\mathbf D}\|^2_F + \beta \|{\mathbf W}\|^2_F, \end{equation} where $\ell$ is the cross entropy loss. Recall that $\enc[{\mathbf D}]({\mathbf x})$ depends on $\lambda$, and we train two different models with two values for this parameter ($\lambda=0.2$ and $\lambda=0.3$). \begin{figure} \centering \subcaptionbox{\label{fig:synthetic_separable}} {\hspace*{-2pt}\includegraphics[width = .25\textwidth,trim = 0 0 0 0 0]{figures/synthetic_demo_sep_trans_m.pdf}} \subcaptionbox{\label{fig:synthetic_non_separable}} {\hspace*{-12pt}\includegraphics[width = .25\textwidth,trim = 0 0 0 0]{figures/synthetic_demo_non_sep_trans.pdf}} \subcaptionbox{\label{fig:mnist_comparison_a}} {\hspace*{3pt}\includegraphics[width = .25\textwidth,trim = 0 -5 20 30]{figures/CertAcc_L02_trans.pdf}} \subcaptionbox{\label{fig:mnist_comparison_b}} {\hspace*{3pt}\includegraphics[width = .25\textwidth,trim = 0 -5 20 30]{figures/CertAcc_L03_transb.pdf}} \caption{Numerical demonstrations of our results. (a) synthetic separable distribution. (b) synthetic non-separable distribution. (c-d) certified accuracy on MNIST with $\lambda = 0.2$ and $\lambda = 0.3$, respectively, comparing with Randomized Smoothing with different variance levels.\vspace{-8pt}} \label{fig:Certificates} \end{figure} \autoref{fig:mnist_comparison_a} and \ref{fig:mnist_comparison_b} illustrate the certified accuracy on 200 test samples obtained by different degrees of randomized smoothing and by our result. 
While the certified accuracy resulting from our bound is comparable to that by randomized smoothing, the latter provides a certificate by \emph{defending} (i.e. composing it with a Gaussian distribution). In other words, different \emph{smoothed} models have to be constructed to provide different levels of certified accuracy. In contrast, our model is not defended or modified in any way, and the certificate relies solely on our careful characterization of the function class. Since randomized smoothing makes no assumptions about the model, the bounds provided by this strategy rely on the estimation of the output probabilities. This results in a heavy computational burden to provide a high-probability result (a failure probability of 0.01\% was used for these experiments). In contrast, our bound is deterministic and trivial to compute. Lastly, comparing the results in \autoref{fig:mnist_comparison_a} (where $\lambda = 0.2$) and \autoref{fig:mnist_comparison_b} (where $\lambda = 0.3$), we see the trade-off that we alluded to in \autoref{sec:GenBound}: larger values of $\lambda$ allow for larger encoder gaps, resulting in overall larger possible certified radius. In fact, $\lambda$ determines a hard bound on the possible achieved certified radius, given by $\lambda/2$, as per \autoref{eq:computing_bound}. This, however, comes at the expense of reducing the complexity of the representations computed by the encoder $\enc[{\mathbf D}]{({\mathbf x})}$, which impacts the risk attained. \vspace{-5pt} \section{Conclusion} \label{sec:Conclusion} In this paper we study the adversarial robustness of the supervised sparse coding model from two main perspectives: {we provide a bound for the robust risk of any hypothesis that achieves a minimum encoder gap over the samples, as well as a robustness certificate for the resulting end-to-end classifier}. 
Our results describe guarantees relying on the interplay between the computed representations, or features, and the classifier margin. While the model studied is still relatively simple, we envision several ways in which our analysis can be extended to more complex models. First, high dimensional data with shift-invariant properties (such as images) often benefit from convolutional features. Our results do hold for convolutional dictionaries, but the conditions on the mutual coherence may become prohibitive in this setting. An analogous definition of encoder gap in terms of convolutional sparsity \citep{papyan2017working} may provide a solution to this limitation. Furthermore, this analysis could also be extended to sparse models with multiple layers, as in \citep{papyan2017convolutional,sulam2019multi}. {On the other hand, while our result does not provide a uniform learning bound over the hypothesis class, we have found empirically that regularized ERM does indeed return hypotheses satisfying non-trivial encoder gaps. The theoretical underpinning of this phenomenon needs further research.} More generally, even though this work focuses on sparse encoders, we believe similar principles could be generalized to other forms of representations in a supervised learning setting, providing a framework for the principled analysis of adversarial robustness of machine learning models. \section*{Broader Impact} This work contributes to the theoretical understanding of the limitations and achievable robustness guarantees for supervised learning models. Our results can therefore provide tools that could be deployed in sensitive settings where these types of guarantees are a priority. On a broader note, this work advocates for the precise analysis and characterization of the data-driven features computed by modern machine learning models, and we hope our results facilitate their generalization to other more complex models. 
\section*{Acknowledgements} { This research was supported, in part, by DARPA GARD award HR00112020004, NSF BIGDATA award IIS-1546482, NSF CAREER award IIS-1943251 and NSF TRIPODS award CCF-1934979. Jeremias Sulam kindly thanks Aviad Aberdam for motivating and inspiring discussions. Raman Arora acknowledges support from the Simons Institute as part of the program on the Foundations of Deep Learning and the Institute for Advanced Study (IAS), Princeton, NJ, as part of the special year on Optimization, Statistics, and Theoretical Machine Learning. } \medskip \bibliographystyle{plainnat}
\section{Introduction} \label{sec:introduction} Engineered plane-wave refraction was one of the first functionalities demonstrated with a metasurface \cite{Yu2011}. These ultrathin planar devices are comprised of subwavelength polarizable particles (meta-atoms), allowing interaction with applied fields on a subwavelength scale via equivalent boundary conditions \cite{Holloway2012, Tretyakov2015}. It was soon found that to efficiently couple the incident beam towards a given direction in transmission, meta-atoms with both electric and magnetic polarizabilities have to be used \cite{Pfeiffer2013, Monticone2013, Selvanayagam2013}. Nonetheless, these so-called Huygens' metasurfaces (HMSs) feature a symmetric structure \cite{Monticone2013}, which only allows matching the wave impedance of either the incident or refracted waves \cite{Epstein2014_2}. Due to the inevitable mismatch, specular reflections occur, becoming significant for wide-angle refraction \cite{Selvanayagam2013}. This issue was solved in \cite{Wong2016}; by breaking the symmetry of HMS meta-atoms, a metasurface that is impedance matched for both incident and refracted fields was devised. This concept was later generalized in \cite{Epstein2016_3}, showing that \textcolor{black}{the required} asymmetric meta-atoms feature omega-type bianisotropy \textcolor{black}{(this was independently derived in \cite{Asadchy2016})}, exhibiting magnetoelectric coupling in addition to electric and magnetic polarizability. Correspondingly, it was shown that these \textcolor{black}{bianisotropic Huygens' metasurfaces (BHMSs)} can be realized by \emph{asymmetric} cascade of three impedance sheets \cite{Wong2016, Epstein2016_3}. \begin{figure}[t] \centering \includegraphics[width=7.1cm]{HBMS_Simulation.pdf} \caption{(a) Refracting \textcolor{black}{BHMS} (one period) and simulated field distribution $\Re\left\{\left|E_x\left(y,z\right)\right|\right\}$ [a.u.]. 
(b) Meta-atom geometry and parameters.} \label{fig:metasurface_configuration} \end{figure} In this paper, we verify \textcolor{black}{this theoretical concept} by designing, fabricating, and characterizing a \textcolor{black}{BHMS} for reflectionless refraction of a transverse-electric normally-incident beam (${\theta_\mathrm{in}=0}$) towards ${\theta_\mathrm{out}=71.8^{\circ}}$ at ${f\sim20\mathrm{GHz}}$ [Fig. \ref{fig:metasurface_configuration}(a)]. The design \textcolor{black}{followed \cite{Epstein2016_3}}, yielding a printed circuit board (PCB) layout for the \textcolor{black}{BHMS}, verified via full-wave simulations. The fabricated PCB was then characterized in a quasi-optical setup, \textcolor{black}{accurately assessing} specular reflections. To complement this, the horn-illuminated \textcolor{black}{BHMS}' radiation pattern was measured in a far-field chamber, evaluating its refraction properties. The measured results indicate that, indeed, the specular reflections from the \textcolor{black}{BHMS} are negligible, and that the majority of the scattered power (${\sim80\%}$) is coupled to the desirable Floquet-Bloch (FB) mode, propagating towards $71.8^{\circ}$. To the best of our knowledge, this is the first experimental demonstration of such a reflectionless wide-angle refracting metasurface; it verifies the theory as well as demonstrates the viability of PCB \textcolor{black}{BHMS}s for realizing future omega-bianisotropic devices \cite{Asadchy2015,Epstein2016_3,Asadchy2016,Epstein2016_4}. \section{Theory, Design, and Physical Realization} \label{sec:theory_design} The derivation in \cite{Epstein2016_3} formulates the spatially-dependent electric, magnetic, and magnetoelectric responses required for implementing the desirable functionality.
In the specific case of plane-wave refraction, this response can be naturally expressed via generalized scattering matrix $\mathbf{[G]}$ parameters, with the port impedances corresponding to the wave impedances of the incident and refracted modes \cite{Wong2016, Epstein2016_3}. Within this framework, the \textcolor{black}{BHMS} should be composed of meta-atoms at positions $y$, which exhibit unity (generalized) transmission coefficients ${\left|G_{21}\right|=1}$ and linear (generalized) transmission phase ${\angle G_{21}\left(y\right)=-\frac{2\pi}{\lambda}y\Delta_\mathrm{sin}+\xi_\mathrm{out}}$, where $\lambda$ is the wavelength, $\xi_\mathrm{out}$ is a constant phase, and ${\Delta_\mathrm{sin}=\sin\theta_\mathrm{out}-\sin\theta_\mathrm{in}}$. This guarantees perfect impedance matching at both ports, while providing the necessary change in the transverse wavenumber to fully-couple the incident wave to the refracted one. To implement the \textcolor{black}{BHMS}, the cascaded impedance sheet scheme of \cite{Wong2016, Epstein2016_3} was used. Specifically, the \textcolor{black}{BHMS} consists of three copper layers ($1/2\mathrm{oz.}$), defined on two $25\mathrm{mil}$ Rogers RT/duroid 6010 laminates; the latter are bonded using a $2\mathrm{mil}$ Rogers 2929 bondply, yielding an overall metasurface thickness of ${52\mathrm{mil}\approx\lambda/11}$ at the design frequency ${f=20\mathrm{GHz}}$. Every meta-atom has ${\Delta_x\times\Delta_y=\lambda/9.5\times\lambda/9.5\approx1.58\mathrm{mm}\times1.58\mathrm{mm}}$ lateral dimensions, such that the metasurface period ${\lambda/\Delta_\mathrm{sin}}$ contains 10 unit cells. Each unit cell consists of a dogbone, a loaded dipole, and another dogbone, forming the bottom, middle, and top impedance sheets, respectively [Fig. \ref{fig:metasurface_configuration}(b)]. The meta-atom response is controlled by the dogbone arm lengths, $L_m^\mathrm{bot}$ and $L_m^\mathrm{top}$, and the capacitor width $W_e^\mathrm{mid}$. 
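The discretization of the linear phase profile above is easy to verify: with $\theta_\mathrm{in}=0$, $\theta_\mathrm{out}=71.8^\circ$, and 10 cells per period, each meta-atom must impart a $-36^\circ$ step in $\angle G_{21}$, and the resulting cell pitch recovers the quoted $1.58\,\mathrm{mm}$:

```python
import numpy as np

f = 20e9                                  # design frequency, Hz
lam = 3e8 / f * 1e3                       # free-space wavelength, mm (15 mm)

theta_in, theta_out = 0.0, np.deg2rad(71.8)
delta_sin = np.sin(theta_out) - np.sin(theta_in)   # ~0.95

period = lam / delta_sin                  # metasurface period, mm
n_cells = 10
dy = period / n_cells                     # lateral unit-cell size, ~lambda/9.5

# Generalized transmission phase: angle G21(y) = -(2*pi/lam)*y*delta_sin + xi_out,
# so the phase step between adjacent cells is -(2*pi/lam)*dy*delta_sin = -2*pi/10.
phase_step_deg = np.degrees(-2.0 * np.pi / lam * dy * delta_sin)

assert abs(dy - 1.58) < 0.01              # matches the quoted 1.58 mm
assert abs(phase_step_deg + 36.0) < 1e-9  # -360 deg spread over 10 cells
```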
These parameters were initially set for each meta-atom following analytical formulas relating the required \textcolor{black}{BHMS} parameters to the sheet impedances, and fine-tuned via simulations (ANSYS HFSS) to achieve the required overall bianisotropic response \cite{Epstein2016_3}. \begin{figure} \centering \includegraphics[width=7.5cm]{HBMS_design_.pdf} \caption{\textcolor{black}{BHMS} design. (a) Optimized geometrical parameters of the meta-atoms [corresponding to Fig. \ref{fig:metasurface_configuration}(b)]. (b) Designed ($\circ$) and realized ($\ast$) generalized transmission phase. (c) Designed ($\circ$) and realized ($\ast$) generalized transmission (black) and reflection (red) magnitudes.} \label{fig:metasurface_design} \end{figure} The final element geometrical parameters are presented in Fig. \ref{fig:metasurface_design}(a), along with the desirable and achievable \textcolor{black}{responses} [Fig. \ref{fig:metasurface_design}(b)-(c)], which generally agree well (losses cause reduction in $\left|G_{21}\right|$). Due to losses exhibited by the meta-atoms with $\angle G_{21}$ around $0^\circ$, on two occasions we used identical unit cells \textcolor{black}{(cells \{\#1,\#2\} and \{\#3,\#4\})} that implement the average response of two consecutive elements, which improved the overall transmission. One period of the metasurface was simulated under periodic boundary conditions, yielding the field distribution presented in Fig. \ref{fig:metasurface_configuration}(a). Simulations predict that about $28\%$ of the incident power is absorbed in the metasurface. Out of the scattered power, $93\%$ is coupled to the desirable FB mode (transmitted towards $71.8^\circ$), $3\%$ is specularly reflected, and $4\%$ is transmitted towards $-71.8^\circ$. \textcolor{black}{A frequency scan reveals that specular reflections are minimal at $19.8\mathrm{GHz}$; however, the main device characteristics, i.e.
the refraction efficiency and ohmic losses, are similar within the range ${19.8-20.0\mathrm{GHz}}$}. This verifies that the designed \textcolor{black}{BHMS} indeed implements reflectionless wide-angle refraction as prescribed. The relatively high losses probably originate in the resonant nature of the meta-atoms, and could be improved by \textcolor{black}{additional} impedance sheets \cite{Pfeiffer2014_3}. Following the verified design, a metasurface PCB of size ${12''\times18''=30.48\mathrm{cm}\times45.72\mathrm{cm}\approx20\lambda\times30\lambda}$ was fabricated (Fig. \ref{fig:quasi_optical_setup}), containing ${29\times190}$ replicas of the simulated \textcolor{black}{BHMS} period [Fig. \ref{fig:metasurface_configuration}(a)] along the $y$ and $x$ axes, respectively. \section{Experimental Results} \label{sec:results} \subsection{Quasi-optical Specular Reflection Experiment} \label{subsec:quasi_optical} To test the specular reflectionless nature of the metasurface, a quasi-optical experiment was designed. This setup uses a horn (A-info LB-OMT-150220) and a Rexolite lens to focus a Gaussian beam onto the \textcolor{black}{BHMS} (Fig. \ref{fig:quasi_optical_setup}). The horn and the \textcolor{black}{BHMS} are placed at the focal planes of the lens; thus, the wavefront incident upon the \textcolor{black}{BHMS} is planar, allowing the characterization to closely resemble theory and simulations. \begin{figure} \centering \includegraphics[width=7.8cm]{quasiOpticalSetup_.pdf} \caption{Quasi-optical experimental setup. The focal distances from the lens to the horn and from the lens to the \textcolor{black}{BHMS} are 12.5 cm and 29 cm, respectively. \textbf{Inset}: Close-up on the top and bottom faces of the fabricated metasurface.} \label{fig:quasi_optical_setup} \end{figure} The measured and simulated \textcolor{black}{$\left|S_{11}\right|$} versus frequency are presented in Fig. 
\ref{fig:S11_response}, revealing a shift of the resonant frequency from \textcolor{black}{$19.8\mathrm{GHz}$ (simulated) to $20.6\mathrm{GHz}$ (measured), which is attributed to fabrication errors and material uncertainties. The measured S$_{11}$ indicates that less than 0.2\% of the incident power is back-reflected, in agreement with simulations, verifying that the \textcolor{black}{BHMS} is indeed specularly reflectionless.} \begin{figure} \centering \includegraphics[width=7.8cm]{S11BoldInterpFine.pdf} \caption{Measured and simulated normally incident specular reflection.} \label{fig:S11_response} \end{figure} \subsection{Far-field Anechoic Chamber Refraction Experiment} \label{subsec:far_field} As the main feature of the metasurface is its ability for extreme refraction, an experiment was conducted to characterize this effect. To this end, the \textcolor{black}{BHMS} was placed in front of a standard gain horn antenna (Quinstar QWH-KPRS-00)\textcolor{red}{,} and the radiation pattern of the overall system (horn+\textcolor{black}{BHMS}) was measured as a combined antenna under test (AUT). The transmitting horn was placed sufficiently far from the \textcolor{black}{BHMS} to produce a planar wavefront on its ${z\rightarrow0^+}$ face, while the receiving horn was aligned behind the \textcolor{black}{BHMS}, facing its ${z\rightarrow0^-}$ face \textcolor{black}{(see coordinate system in Fig. \ref{fig:metasurface_configuration})}. Absorbers were attached to the back and \textcolor{black}{sides} of the receiving horn to reduce spurious scattering. The AUT was then rotated around its axis, and the gain pattern was measured, expected to yield a maximum around ${\theta_\mathrm{out}=71.8^\circ}$. We note that this test is not ideal and there are clear trade-offs in the experimental setup. One such parameter is the distance of the receiving horn from the \textcolor{black}{BHMS}. 
As the \textcolor{black}{BHMS} was designed to interact with plane waves, the horn should be placed sufficiently far away to match the expected planar wavefront of the refracted wave. However, since the \textcolor{black}{BHMS} has a finite size, if the horn is placed too far, it would not be sufficiently shadowed by the surface. A good compromise was found at a horn-\textcolor{black}{BHMS} separation distance of ${24\mathrm{cm}=16\lambda}$. The measured radiation patterns at $20\mathrm{GHz}$ and $20.6\mathrm{GHz}$ are presented in Fig. \ref{fig:radiation_pattern}. While the entire measured angular range is shown, the most reliable information is obtained between -150$^\circ$ and 150$^\circ$. When the AUT is measured close to $\pm180^\circ$, the receiving horn is partially blocking the line of sight between the transmitter and the metasurface; as the back of the horn has been fitted with absorbers, this would interfere with the measured results. Nonetheless, the quasi-optical measurements complement the radiation pattern for these angles, assessing the specular reflection (Section \ref{subsec:quasi_optical}). \begin{figure} \centering \includegraphics[width=7.8cm]{Modes2Bold.pdf} \caption{Measured AUT radiation patterns at 20.6 GHz (resonant) and 20 GHz (off resonance). The excited FB modes are clearly identified: the 0th transmitted mode (0$^\circ$), the $\pm1$ transmitted modes ($\pm71.8^\circ$), the 0th reflected mode ($\pm180^\circ$) and the $\pm1$ reflected modes ($\pm108.2^\circ$).} \label{fig:radiation_pattern} \end{figure} The radiation patterns in Fig. \ref{fig:radiation_pattern} clearly indicate that the \textcolor{black}{BHMS} implements the desirable refraction functionality. At 20 GHz, away from the \textcolor{black}{(measured)} resonant frequency, all propagating FB modes are excited, corresponding to multiple scattered beams. 
However, at \textcolor{black}{resonance ($20.6\mathrm{GHz}$)}, most of the scattered power is coupled to the designated mode, refracting towards $\approx71.8^\circ$, while the other modes are suppressed. A closer look at the pattern at 20.6 GHz reveals that the gain actually peaks at 62$^\circ$. However, as the gain of finite apertures deteriorates with a $\cos\theta$ factor, we must account for it to properly assess the direction of the main beam. This places the refraction angle at 69$^\circ$, much closer to the designated one; the deviation may originate in fabrication tolerances and alignment errors. The 3dB beamwidth is $21^\circ$, corresponding to an effective aperture length of $20\mathrm{cm}$. Given the receiving horn dimensions and position, the expected $-10\mathrm{dB}$ beam diameter on the \textcolor{black}{BHMS} is ${\approx\!\!23\mathrm{cm}}$ \cite{Goldsmith1998_Chap7_GaussianHorn}, in good agreement. Comparing the power in the refracted beam, calculated by integrating the gain between its nulls, to the overall integrated gain, indicates that approximately $80\%$ of the \emph{scattered} power is coupled to the desirable FB mode. Although this is smaller than the simulated $93\%$, which may be attributed to fabrication and material tolerances, it still demonstrates a reasonable quantitative agreement with the designated device functionality. It should be noted that these efficiencies, corresponding to \emph{realistic} \textcolor{black}{BHMS}s, are considerably higher than the theoretical $73\%$ predicted for an \emph{ideal} \textcolor{black}{non-bianisotropic} HMS implementing the same wide-angle refraction \cite{Selvanayagam2013,Epstein2014_2}; this highlights the crucial role omega-type bianisotropy plays in the face of a significant impedance mismatch.
\textcolor{black}{Figure \ref{fig:radiation_pattern} further indicates that the suppressed scattering at $20.6\mathrm{GHz}$ is mostly an outcome of increased absorption (the peak gain remains similar). Although this implies that losses become much more pronounced at the resonant frequency, the current measurement setup does not allow reliable quantification of this parameter; this is left for future work. } \section{Conclusion} \label{sec:conclusion} We have demonstrated the first PCB \textcolor{black}{BHMS} implementing reflectionless wide-angle refraction. Experimental validation was carried out by combining the results of a quasi-optical setup and radiation pattern measurements. The hybrid approach verifies that, indeed, specular reflections are negligible, and approximately $80\%$ of the \emph{scattered} power is coupled to the designated beam, albeit with considerable losses. Future work will include \textcolor{black}{a more thorough} quantification of losses, and using \textcolor{black}{improved} \textcolor{black}{BHMS} designs to \textcolor{black}{further} enhance the overall efficiency.
\section{Introduction} Quantization of the conductance of a submicron constriction in a two-dimensional electron gas~\cite{Wharam} is described well by the Landauer formula under the assumption of spin degeneracy of one-dimensional single-particle subbands in zero magnetic field.\cite{Glazman,Buttiker,pyshkin,liang} The same approach explains the alternation of zero plateaux and peaks of thermopower (Seebeck coefficient $S$), which obeys the Mott formula, $S^M\propto\partial\ln G(V_g,T)/\partial E_F$.\cite{Streda,Proetto,Molenkamp,Appleyard98,Lunde} However, the dependence $G(V_g)$ of the conductance on the gate voltage exhibits a narrow region of anomalous behavior, the $0.7\cdot 2e^2/h$ plateau.\cite{Thomas} This plateau broadens with an increase in temperature,\cite{Thomas,Thomas98,Kristensen,Cronenwett,Liu,Komijani} can disappear at $T\to 0$,\cite{Thomas98,Kristensen,Cronenwett,Liu,Komijani} but persists even at a complete thermal spread of the conductance quantization steps.\cite{Appleyard,Cronenwett,Liu} The 0.7 conductance anomaly is closely related to the anomalous plateau $S\ne0$ of the thermopower,\cite{Appleyard} which implies violation of the Mott approximation $S\propto \partial\ln G(V_g,T)/\partial V_g$.\cite{Appleyard98} There are dozens of works attempting to explain the 0.7 anomaly (see Refs.~\onlinecite{Sloggett}, \onlinecite{Micolich} and references therein). Numerous scenarios, including spin polarization, the Kondo effect, a Wigner crystal, charge-density waves, and the formation of a quasi-localized state, have been suggested.
Calculations that reproduce the unusual temperature behavior of the 0.7 conductance anomaly have been performed so far only within phenomenological fitting models with spin subbands,\cite{Cheianov} or beyond the Landauer formula.\cite{Sloggett} Although the particular mechanism behind the appearance of the anomalous plateaux in conductance and thermopower remains unclear, their common origin is thought to be the electron-electron interaction, which should manifest itself most effectively at the onset of filling the first subband, where the electron system is one-dimensional. In this work, we start from the standard Landauer approach to the description of conductance and thermopower of a single-mode ballistic quantum wire with spin degeneracy. This approach takes the interaction into account via a $T$-dependent one-dimensional reflecting barrier. First, we show that the appearance of the 0.7 anomaly implies pinning of the barrier height $U$ at a depth of $k_BT$ below the Fermi level $E_F$ and that this pinning yields the plateau $S\ne0$. Next, motivated by the description of Friedel oscillations surrounding a delta-barrier in a one-dimensional electron gas,\cite{Matveev94} we suggest a simple formula which reduces the $T$-dependent part of the correction to the interaction-induced potential to the temperature dependence of the one-dimensional electron density. The behavior of the conductance and Seebeck coefficient calculated with the corrected potential agrees with the published experimental results.
\section{Basic formulas and unusual pinning} The conductance and the Seebeck coefficient of a single-mode ballistic channel can be written within the Landauer approach as follows:\cite{Streda,Proetto,Molenkamp} \begin{equation}\label{eq-1n} \begin{gathered} G=\frac{2e^2}{h} \int_{0}^{\infty} D(E,U(x,V_g))F(\epsilon)dE, \\ S=-\frac{2ek_B}{hG}\int_{0}^{\infty}D(E,U(x,V_g))\epsilon F(\epsilon)dE, \end{gathered} \end{equation} where $D$ is the transmission coefficient, $E$ is the energy of ballistic electrons, $U(x,V_g)$ is the effective $T$-dependent one-dimensional barrier, $\epsilon=(E-E_F)/k_BT$, and $F(\epsilon)=(4k_BT)^{-1}\cosh^{-2}(\epsilon/2)$ is the derivative of the Fermi distribution function with respect to $-E$. There is also the Mott approximation generalized for arbitrary temperatures $T$:\cite{Appleyard,Lunde} \[ S^M=-\frac{\pi^2k_B^2T}{3e}\frac{\partial\ln G(V_g,T)}{\partial E_F}. \] If the barrier $U(x)$ in a one-dimensional channel is sufficiently wide (according to three-dimensional electrostatic calculations of gate-controlled quantum wires, the barrier half-width must be $\stackrel{>}{_{\sim}}200$~nm), the step in the energy dependence $D(E)$ of the transmission coefficient is abrupt and the respective transition is much narrower in the energy $E$ than the thermal energy $k_BT$, at which the 0.7 plateau occurs. This condition is satisfied in many experiments.\cite{Thomas98,Kristensen,Cronenwett,Liu,Komijani} Then, Eq.~\eqref{eq-1n} for $G$ and $S$ and the Mott approximation give \begin{equation}\label{eq-2n} \begin{gathered} G= \frac{2e^2}{h}(1+e^{-\eta})^{-1},\\ S=-\frac{k_B}{e}[(1+e^{-\eta})\ln (1+e^{\eta})-\eta],\\ S^M=-\frac{k_B}{e}\frac{\pi^2}{3}(1+e^{\eta})^{-1}, \end{gathered} \end{equation} where $\eta=(E_F-U)/k_BT$ and $U=U(x=0)$ is the height of the barrier $U(x,T,V_g)$. Clearly, the dependences $G(E_F-U)$ at different $T$ are simply smooth steps of unit height with the fixed common point $G=e^2/h$.
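A direct numerical evaluation of Eq.~\eqref{eq-2n} illustrates the role of the parameter $\eta$: at $\eta=1$, i.e., with the barrier top pinned at a depth of $k_BT$ below $E_F$, the conductance sits at $G\approx0.73\cdot2e^2/h$ and the thermopower at $S\approx-0.8\,k_B/e$. A minimal sketch:

```python
import numpy as np

def G(eta):
    """Conductance in units of 2e^2/h, Eq. (2)."""
    return 1.0 / (1.0 + np.exp(-eta))

def S(eta):
    """Seebeck coefficient in units of k_B/e, Eq. (2)."""
    return -((1.0 + np.exp(-eta)) * np.log1p(np.exp(eta)) - eta)

def S_mott(eta):
    """Generalized Mott approximation in units of k_B/e, Eq. (2)."""
    return -(np.pi ** 2 / 3.0) / (1.0 + np.exp(eta))

# Pinning at eta = (E_F - U)/k_B T = 1 reproduces the anomalous plateaux.
assert abs(G(1.0) - 0.731) < 1e-3      # the ~0.7 * 2e^2/h conductance plateau
assert abs(S(1.0) + 0.796) < 1e-3      # the S ~ -0.8 k_B/e thermopower plateau
assert abs(S_mott(1.0) + 0.885) < 1e-3 # the Mott value, S^M ~ -0.9 k_B/e
```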
Curves $S^M(\eta)$ and $S(\eta)$ are numerically close to each other on the interval $0<|S|<2k_B/e$ (see Appendix \ref{appA}). The values $G\approx0.7\cdot 2e^2/h$ in $G(\eta)$ are not particularly interesting except that they correspond to $\eta\approx 1$. However, in experiments, there appear plateaux of $G(V_g)$ at these values, which implies pinning of $U(V_g)$ at a depth of $k_BT$ below the Fermi level (see Appendix \ref{appA}). According to Eq.\eqref{eq-2n}, the discovered pinning can be expected to give the plateau $S\approx-0.8k_B/e$ ($S^M\approx-k_B/e$) in the curve $S(V_g)$. Appendix \ref{appA} compares the calculated plateaux with the experimental ones,\cite{Appleyard} and shows that the parameter $\eta$ obtained from $G$ is equal to that from $S$, which verifies the applicability of Eq.\eqref{eq-2n} to the experiment. Notice that this pinning differs from the pinning discussed earlier\cite{hansen,Cheianov,Kristensen,Appleyard,Zozoulenko,Pepper} by its unusual temperature dependence and single-channel transmission. The pinning that we detected seems paradoxical and urges us to suggest that a probe ballistic electron at the center of the barrier would ``see'' the potential $U(T, V_g)$, which is different from the potential $V(T, V_g)$ computed self-consistently with the electron density (see Appendix \ref{appB}). In fact, similar to our previous calculations,\cite{pyshkin,liang} we computed three-dimensional electrostatics of single-mode quantum wires using different kinds of self-consistency between the potential and the electron density with\cite{hansen} and without the inclusion of exchange interaction and correlations in the local approximation. These calculations show quite definitely that the one-dimensional electron density $n_c$ in the center of the barrier is almost independent of $T$ and is linear in $V_g$ starting from small $n_{c0}$ values; i.e., the electric capacitance between the gate and the quantum wire is conserved at $G > e^2/h$.
In addition, since the density of states is positive, $d(E_F-V)/dn_c > 0$, and the dependence $V(V_g)$ of the self-consistent barrier height on the gate voltage does not yield pinning even with the inclusion of the exchange-correlation corrections in the local approximation (Appendix \ref{appB}). Therefore, we suggest that the discovered pinning of the reflecting barrier height is due to the nonlocal interaction. \section{Estimation of nonlocal interaction} It is well known in atomic physics and in the physics of metallic surfaces and tunneling gaps between two metals that the potential seen by a probe electron in a low-density region is different from the self-consistent potential found with the inclusion of interaction in the local approximation.\cite{Bardeen,Latter,Binnig,Bertoni} This difference was attributed to nonlocal exchange and correlations, i.e., to the attraction of the electron to an exchange-correlation hole, which remains in a high-density region.\cite{Gunnarsson,Wang} A treatment of this phenomenon for a quantum point contact is currently unavailable, despite the obvious analogy between two metallic bars separated by a tunneling gap and two-dimensional electron gas baths separated by a potential barrier. We suggest that a ballistic electron coming to the barrier region, where the density $n_c$ is low, gets separated from its exchange-correlation hole, which is situated in the region of dense electron gas. As a result, the local description of the correction to the potential becomes inadequate. Although the hole has a complicated shape, it can be associated with an effective center. Then, the decrease in the potential for the ballistic electron in the center of the barrier is $U-V\approx-\gamma e^2/(4\pi\epsilon\epsilon_0 r)$, where $r$ is the distance between the centers of the barrier and the hole, whereas $\gamma\stackrel{<}{_{\sim}}1$ takes into account the shape of the hole and weakly depends on $r$. 
In perturbation theory, we are interested in the small ($T$-dependent) part of the correction: \begin{equation}\label{deltaU} \delta U\approx-[e^2/(4\pi\epsilon\epsilon_0)]\gamma(r(T)^{-1}-r(0)^{-1}), \end{equation} and the correction at $T=0$ is thought to be already included in the independent variable, which is the initial barrier $U_0(x)$. Obviously, $r$ decreases with an increase in $n_c$, until the electron and hole recombine and the local approximation for the interaction term becomes valid in the center of the barrier. According to this tendency and the smallness of the $T$-dependent correction, we can write $\gamma (r(T)^{-1}-r(0)^{-1})\approx (n_c(T)-n_c(0))/(r^*n_c^*)$, where $n_c(T)$ and $n_c(0)$ are found perturbatively from the single-particle wavefunctions in the barrier $U_0(x)$. In a certain range of the barrier height, we can also neglect a change in the positive phenomenological parameter $\gamma r^*n_c^*$. Under these assumptions, Eq.~\eqref{deltaU} is formally a special case of the interaction-induced correction $\delta U(x)\propto-\alpha\delta n(x)$, where $\delta n(x)$ stands for the Friedel density oscillations. This correction results from the calculation of propagation through a delta barrier in a one-dimensional electron system,\cite{Matveev94} in which case $\alpha=\alpha(0)-\alpha(2k_F)$ results from the competition between the exchange ($\alpha(0)$) and direct ($\alpha(2k_F)$) contributions to the interaction. A similar correction was used to simulate multimode quantum wires.\cite{Renard} We can attempt to extend the range of validity of this correction, with the respective change in the meaning of $\alpha$, to the entire first subband of the quantum wire, including the top of the smooth barrier. 
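An order-of-magnitude check of Eq.~\eqref{deltaU} can be made with assumed numbers (a sketch; the GaAs dielectric constant $\epsilon=12.9$, $\gamma\approx1$, and $r\approx1.5/n_c$ are assumptions for illustration, the last motivated by the estimate $r^*n_c^*\approx1.5$ quoted later in the text):

```python
# Coulomb energy scale of Eq. (deltaU): a probe electron separated by r from
# its exchange-correlation hole. Assumed values: GaAs epsilon = 12.9, gamma ~ 1,
# r = 1.5/n_c with a typical n_c = 0.01 nm^-1 in the barrier center.
E2_OVER_4PI_EPS0 = 1.44   # e^2/(4*pi*eps0) in eV*nm
EPS = 12.9                # assumed GaAs dielectric constant
N_C = 0.01                # nm^-1, typical density in the barrier center
r = 1.5 / N_C             # nm, electron-hole separation
dU_meV = 1e3 * E2_OVER_4PI_EPS0 / (EPS * r)   # ~0.7 meV
```

The resulting scale, a fraction of a meV, is comparable to $k_BT$ in the relevant temperature range, so a $T$-dependent shift of $r$ can plausibly move the barrier top by $\sim k_BT$.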
\section{Calculation of 1D electron density} To find this correction perturbatively, we first compute the complete set of wavefunctions for the bare smooth barrier $U_0(x)$ and find the electron density $n(x)$ at a given temperature: \begin{equation}\label{eq-4n} n=\frac{1}{2\pi b} \int_{0}^{\infty}\frac{dE}{(EE_0)^{1/2}} \frac{|\psi_L(x,E)|^2+|\psi_R(x,E)|^2}{1+e^{(E-E_F)/k_BT}}, \end{equation} where $E_0=\hbar^2/2m^{*}b^2$, $b=1$~nm is the length scale, $m^*=0.067m_e$ is the electron effective mass, and $\psi_L(x,E)$ and $\psi_R(x,E)$ are the wavefunctions of electrons incident on the barrier from the left and right, respectively. The amplitude of the incident wave is set to unity. The bare potential is specified as $U_0(x)=V_0/\cosh^2(x/a)$. This form is quite appropriate for the simulation of short ballistic channels, including the escape to two-dimensional reservoirs.\cite{Buttiker,pyshkin} This potential does not yield any features in the transmittance $D(E)$ except a step of unit height.\cite{LL} The values of the parameters were taken to be typical for ballistic quantum wires in a GaAs/AlGaAs two-dimensional electron gas. The calculated dependence $n(x,T)$ is presented in Fig.~\ref{fig1}. \begin{figure}[t] \centerline{\includegraphics*[width=0.98\linewidth]{nxnew}} \caption{\label{fig1} 1D electron density calculated with formula~\eqref{eq-4n} at $E_F=5$~meV for the potential $U_0(x)=V_0/\cosh^2(x/a)$, where the half-width is fixed at $a=200$~nm. Curves for different $V_0$ are offset by 0.01~nm$^{-1}$ for clarity. }\end{figure} \begin{figure}[t] \centerline{\includegraphics*[width=\linewidth]{deltanx_new}} \caption{\label{fig2} Electron density correction $\delta n=n(x)-n_0(x)$ at the same parameters as in Fig.~\ref{fig1}. Curves for different $V_0$ are offset by 0.005~nm$^{-1}$ for clarity. }\end{figure} One can see that the density changes strongly with increasing temperature in the transition from the tunnel regime to the open one. 
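The statement about the transmittance of the $\cosh^{-2}$ barrier can be checked against the exact P\"oschl--Teller result given in Ref.~\onlinecite{LL}. A sketch with the parameters of the text ($a=200$~nm, $m^*=0.067m_e$, energies in meV); the function name and units are illustrative assumptions:

```python
import math

# Exact transmission coefficient D(E) for U0(x) = V0/cosh^2(x/a)
# (Landau & Lifshitz). Units of the text: b = 1 nm, E0 = hbar^2/(2 m* b^2).
HBAR2_2ME = 38.0998            # hbar^2/(2 m_e) in meV nm^2
E0 = HBAR2_2ME / 0.067         # ~569 meV for m* = 0.067 m_e

def D(E_meV, V0_meV, a_nm=200.0):
    ka = a_nm * math.sqrt(E_meV / E0)                        # k a, dimensionless
    s = 0.5 * math.sqrt(4.0 * (V0_meV / E0) * a_nm ** 2 - 1.0)
    num = math.sinh(math.pi * ka) ** 2
    return num / (num + math.cosh(math.pi * s) ** 2)
```

For $a=200$~nm the step of $D(E)$ at $E\approx V_0$ is a fraction of a meV wide and shows no resonant features, consistent with the abrupt-step approximation used above.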
At the lowest temperature there are Friedel oscillations (FOs) in the tunnel regime, while in the open regime they are suppressed. The calculated correction $\delta n(x,T)$ is a wide perturbation of the density across the whole barrier (Figs.~\ref{fig1} and \ref{fig2}). At $V_0>E_F$ one can see a thermally activated increase of $n(x,T)$ at the barrier top; this temperature behavior inverts in the open regime $V_0<E_F$. Details of this temperature behaviour are best seen in $\delta n=n(x)-n_0(x)$, where $n_0(x=0)=n_c(T=0)$ and $n_0(x\ne0)=\langle n(x,T=0)\rangle$ is the density averaged over the Friedel oscillations (Fig.~\ref{fig2}). Most interesting for the analysis of the consequences of Eqs.~\eqref{eq-2n} and \eqref{deltaU} is the dependence of the electron density $n_c$ in the center of the barrier on the bare height $V_0$ at various $T$. Figure~\ref{fig3} shows quite clearly the details of the $T$-dependent behavior of $n_c$ and $\delta n_c$ when $U_0(x)$ is the independent variable. \begin{figure}[b] \centerline{\includegraphics*[width=0.98\linewidth]{nc_col}} \caption{\label{fig3} Electron density $n_c$ in the center of the barrier versus the height $V_0$ of the barrier $U_0(x)=V_0/\cosh^2(x/a)$ calculated according to Eq.~\eqref{eq-4n} with $a=200$~nm and $E_F=5$~meV. }\end{figure} In the experiment, on the contrary, the gate voltage $V_g$ is varied independently and, according to the electrostatic calculation (see Appendix \ref{appB} and Refs.~\onlinecite{pyshkin}, \onlinecite{liang}), there is a linear relation between $n_c$ and $V_g$ above some small $n_{c0}$ value, so that the temperature dependence of $n_c(V_g)$ can be neglected. According to Fig.~\ref{fig3}, this contradiction is resolved under the assumption that the quantity $V_0$ actually depends on $T$ at constant $n_c$ or $V_g$. The relation between $V_g$ and the height of the bare barrier $V_0$ is mediated by the electron wavefunctions and the Fermi distribution. 
This relation and $n(x,T)$ do not depend, in first-order perturbation theory, on the interaction. However, they do contribute to it. \section{Corrected potential} According to Eq.~\eqref{deltaU} and similar to Ref.~\onlinecite{Matveev94}, the $T$-dependent part of the interaction-induced correction to the bare potential $U_0(x)$ was calculated with the use of the phenomenological formula \begin{equation}\label{deltaU-matveev} \delta U(x)=-\alpha \pi\hbar v_F \delta n(x,E_F,T), \end{equation} where $\alpha=\textrm{const}>0$, $(\hbar v_F)^{-1}$~is the one-dimensional density of states far from the barrier, and $\delta n=n(x)-n_0(x)$, with $n_0(x=0)=n_c(T=0)$ and $n_0(x\ne0)=\langle n(x,T=0)\rangle$ the density averaged over the Friedel oscillations. The interaction-corrected potential $U(x)$ at various $V_0$ and $T$ values is shown in Fig.~\ref{fig4}. \begin{figure}[b] \centerline{\includegraphics*[width=0.92\linewidth]{Uxnew}} \caption{\label{fig4} Potential $U_0+\delta U(V_0,T)$ calculated from Eq.~\eqref{deltaU-matveev} with $\alpha=0.2$ for the case of $U_0(x)=V_0/\cosh^2(x/a)$, $a=200$~nm and $V_0=3,4,5,6,7,8$~meV. }\end{figure} At high $V_0$ values and low $T$ values, the penetration of an incident electron into the classically forbidden region of the barrier is very low and the thermal perturbation of the electron density inside the barrier is negligible. Therefore, the potential barrier remains almost unchanged. At $V_0=E_F$, the height of the barrier $U(x)$ is lowered considerably with an increase in temperature. At constant $V_0 < E_F$, the barrier $U(x)$ is raised with $T$, in contrast to the case of $V_0\geq E_F$. The behavior of the barrier height $U$ is shown in more detail in Fig.~\ref{fig5}. For $T=0$ we have $U=V_0$, because $\delta n(x=0)=0$ by definition. 
One can see that $U$ becomes independent \begin{figure}[t] \centerline{\includegraphics*[width=.96\linewidth]{UV0_c}} \caption{\label{fig5} Corrected barrier height versus the original barrier height $V_0$ at the same parameters as in Fig.~\ref{fig4}. }\end{figure} of $V_0$ near $E_F$ at $T>0.1$~meV. There appears a plateau below $E_F$, which becomes broader and deeper with an increase in $T$. The relative correction $\delta U(x)/V_0$ at the parameters specified in the figure caption reaches 10\%. This is close to the limiting value for the present approximation, which requires the correction to be small; this limits the growth of $T$ and $\alpha$ in the model. Figure~\ref{fig5} in combination with Eq.~\eqref{eq-2n} provides a qualitative understanding of the development of the 0.7 conductance anomaly and the thermopower plateau with an increase in temperature. As is seen, the height $U$ of the temperature-dependent barrier near $E_F$ is stabilized at about $T$ below $E_F$. \section{Calculated transport anomalies} The conductance as a function of the barrier height $V_0$ was calculated with the aid of formulas~\eqref{eq-1n}, \eqref{eq-4n}, and \eqref{deltaU-matveev}. The result is shown in Fig.~\ref{fig6}. There is the usual conductance step \begin{figure}[t] \centerline{\includegraphics*[width=0.9\linewidth]{GV0new}} \caption{\label{fig6} (a) Conductance $G(V_0,T)$ calculated for the corrected potential $U(x)=U_0(x)+\delta U(x)$ with $U_0(x)=V_0/\cosh^2(x/a)$, $a=200$~nm and interaction parameter $\alpha=0.2$. (b) $G(V_0,T)$ for $U(x)=U_0(x)$. }\end{figure} of unit height at $T = 0.01$~meV. However, with increasing temperature an additional 0.7 plateau develops at $V_0 = E_F$. The width of these plateaux corresponds well to the temperature. Notice that the height of the corrected barrier is almost unchanged at the lowest temperatures, and only the distant Friedel oscillations can have an influence on transmission. 
Indeed, similar to Ref.~\onlinecite{Matveev94}, scattering off the Friedel oscillations leads to a small decrease in the transmission coefficient at low but finite temperatures and a shift of the conductance step to lower values of $V_0$. For comparison, we show in Fig.~\ref{fig6}b the curves $G(V_0,T)$ calculated for the bare potential $U_0(x)=V_0/\cosh^2(x/a)$. Figure~\ref{fig7} shows that the conductance behavior is somewhat universal for elevated $T$ and $\alpha$, and strongly differs from that for $T\to0$. The conductance $G(T,\alpha)$ for the corrected potential is plotted as a function of $G(T,{\alpha=0})$ in Fig.~\ref{fig7}a; these conductances are related to each other via the common parameter $V_0$. The conductance $G$ calculated at $E_F=V_0$ as a function of the interaction parameter $\alpha$ shows that the height of the $0.7$-plateau saturates with increasing $\alpha$ (Fig.~\ref{fig7}b). \begin{figure}[b] \centerline{\includegraphics*[width=\linewidth]{Galpha03G0}} \caption{\label{fig7} (a) Calculated $G(G_{\alpha=0},T)$ for the same bare potentials as in Fig.~\ref{fig6}a, but for $\alpha=0.3$. (b) Calculated $G(\alpha)$ at $V_0=E_F=5$~meV for the same bare potentials and different $T$: $0.01\leq T\leq 0.51$~meV. The upper curve in (b) represents all the curves for $T=0.11$--$0.51$~meV, as they fit within the indicated error bars. }\end{figure} The dependence of the Seebeck coefficient on $E_F$ and $T$ (Fig.~\ref{fig8}) was computed from Eqs.~\eqref{eq-4n}, \eqref{deltaU-matveev}, and \eqref{eq-1n}. In this case, we used a larger barrier half-width and a lower $\alpha$ value than before. \begin{figure}[t] \centerline{\includegraphics*[width=\linewidth]{SGnew}} \caption{\label{fig8} Calculated thermopower and conductance of the one-dimensional channel versus the Fermi energy $E_F$ at constant $V_0$ (the parameters are indicated in the figure). 
}\end{figure} In agreement with the analysis within approximation~\eqref{eq-2n} and similar to the experiment,\cite{Appleyard} $S(E_F)$ exhibits an anomalous step with a height of $0.9$--$1.0$ in units of $-k_B/e$. As is clearly seen in Fig.~\ref{fig8}, this step is formed with an increase in $T$ simultaneously with the 0.7 conductance anomaly. It is noteworthy that such a combined evolution with temperature has not yet been observed. Thus, we propose the respective experiment as an additional test of the proposed model. Though the dependences $U(V_0)$, $G(V_0)$, $G(E_F)$ and $G(G_{\alpha=0})$ are easy to calculate, they can hardly be measured. However, we can compare with the experiment the respective dependences on the electron density $n_c$ in the center of the barrier; see Fig.~\ref{fig9}. In the figure, dependences (a)--(c) show the \emph{plateaux}, which appear and become more pronounced with an increase of the temperature. \begin{figure}[t] \centerline{\includegraphics*[width=.94\linewidth]{Gnc_EFmU}} \caption{\label{fig9} (a,b) Calculated dependences $E_F-U$ and $\eta(n_c)=(E_F-U)/k_BT$ for $E_F=5$~meV, the interaction parameter $\alpha=0.2$, and the bare potential $U_0(x)=V_0/\cosh^2(x/a)$ with $a=200$~nm. (c,d) Calculated conductance of the one-dimensional channel with the corrected (c) and the bare (d) potential versus $n_c$. }\end{figure} The shape of the computed plateaux is almost the same as that of the experimental ones (see Figs.~\ref{fig11} and \ref{fig12} in Appendix \ref{appA}). The widths of the plateaux in Fig.~\ref{fig9}, $\Delta n_c\approx 0.01$~nm$^{-1}$ and $\Delta V_g\approx0.01$--$0.02$~V, agree with several experiments (Figs.~\ref{fig10}--\ref{fig12}) within small variations of the gate capacitance. Appendix \ref{appB} discusses these variations in more detail. 
If the correction of Eq.~\eqref{deltaU-matveev} is zero ($\alpha=0$), the calculated dependence $G(V_0(n_c,T))$ is almost the same as for the self-consistent potential obtained in the three-dimensional electrostatic calculation (Fig.~\ref{fig13}d). We made a number of simplifying assumptions in our 1D model; therefore, a detailed fit of the experimental data to the calculated curves is hardly appropriate. For example, it was checked that approximation~\eqref{eq-2n} replaces the more general formula~\eqref{eq-1n} in the calculation of the conductance quite well (except at the lowest $k_BT$ values). Thus, the discovered effect is unrelated to the details of the barrier profile $U(x)$ and is induced merely by the dependence of the difference $E_F-U$ on $n_c$; i.e., the correction $\delta U(T)$ in the center of the barrier, where Eq.~\eqref{deltaU} presumably holds, yielding $\alpha>0$, is crucial. It is difficult to find $\alpha$ from theoretical considerations. However, the similarity of the experimental and calculated curves persists under a 50\% variation of $\alpha$ (Fig.~\ref{fig7}). It is noteworthy that the effective value $\alpha=0.2$ corresponds to $r^*n_c^*\approx 1.5$; i.e., the distance $r$ between the probe electron in the center of the barrier and the exchange-correlation hole is $1.5\gamma/n_c$. On the basis of the typical values $n_c\sim0.01$~nm$^{-1}$ (see Figs.~\ref{fig2} and \ref{fig5}), we can conclude that the conditions used to derive Eqs.~\eqref{deltaU} and \eqref{deltaU-matveev} are fulfilled. \section{Conclusion} A simple model of the anomalous plateaux in the conductance and thermopower of one-dimensional ballistic quantum wires has been proposed on the basis of the Landauer approach with spin degeneracy. 
The key points of the model are (i) the pinning of the effective one-dimensional barrier height $U$ at a depth of $k_BT$ below the Fermi level under a change in the one-dimensional density in the center of the barrier or the gate voltage and (ii) the inclusion of all (local and nonlocal) temperature-dependent interaction-induced corrections via the phenomenological formula~\eqref{deltaU-matveev}. \section*{Acknowledgements} This work was supported by the Presidium of the Russian Academy of Sciences, program no. 24, and the Siberian Branch, Russian Academy of Sciences, project no. IP130. We are grateful to Z.D. Kvon, M.V. Budantsev, A.P. Dmitriev, I.V. Gornyi for fruitful discussions, and A. Safonov for translation.
\section{Introduction} If a dynamical system maintains a partial ordering of states along trajectories of the system, it is said to be \emph{monotone} \cite{Hirsch:1983lq,Hirsch:1985fk, Smith:2008fk}. Large classes of physically motivated systems have been shown to be monotone including biological networks \cite{Sontag:2007ad} and transportation networks \cite{Gomes:2008fk, Lovisari:2014yq, coogan2015compartmental}. Monotone systems exhibit structure and ordered behavior that is exploited for analysis and control \emph{e.g.}, \cite{Angeli:2003fv, Angeli:2004qy, Rantzer:2012fj, Dirr:2015rt}. It has been observed that monotone systems are often amenable to separable Lyapunov functions for stability analysis. In particular, classes of monotone systems have been identified that allow for Lyapunov functions that are the sum or maximum of a collection of functions of a scalar argument \cite{Rantzer:2013bf, Dirr:2015rt, Sootla:2016sp}. In the case of linear monotone systems, also called \emph{positive} systems, such sum-separable and max-separable global Lyapunov functions are always possible when the origin is an asymptotically stable equilibrium \cite{Rantzer:2012fj}. On the other hand, if the distance between states along any pair of trajectories is exponentially decreasing, a dynamical system is said to be \emph{contractive} \cite{Pavlov:2004lr, LOHMILLER:1998bf, Sontag:2010fk,Forni:2012qe}. If a contractive system has an equilibrium, then the equilibrium is globally asymptotically stable and a Lyapunov function is given by the distance to the equilibrium. Certain classes of monotone systems have been shown to be also contractive with respect to non-Euclidean norms. For example, \cite{Margaliot:2012hc, Margaliot:2014qv, Raveh:2015wm} study a model for gene translation which is monotone and contractive with respect to a weighted $\ell_1$ norm. A closely related result is obtained for transportation flow networks in \cite{Coogan:2014ph, Como:2015ne}. 
In \cite{Coogan:2014ph}, a Lyapunov function defined as the magnitude of the vector field is used, and in \cite{Como:2015ne}, a Lyapunov function based on the distance of the state to the equilibrium is used. In this paper, we establish sufficient conditions for constructing sum-separable and max-separable Lyapunov functions for monotone systems by appealing to contraction theory. In particular, we study monotone systems that are contractive with respect to a possibly state-dependent, weighted $\ell_1$ norm, which leads to sum-separable Lyapunov functions, or weighted $\ell_\infty$ norm, which leads to max-separable Lyapunov functions. We first provide sufficient conditions establishing contraction for monotone systems in terms of negativity of scaled row or column sums of the Jacobian matrix for the system where the scaling may be state-dependent. In addition to deriving Lyapunov functions that are separable along the state of the system, we also introduce Lyapunov functions that are separable along components of the vector field. This is especially relevant for certain classes of systems such as multiagent control systems or flow networks where it is often more practical to measure velocity or flow rather than position or state. Additionally, we present results of independent interest for proving asymptotic stability and obtaining Lyapunov functions of systems that are \emph{nonexpansive} with respect to a particular vector norm, \emph{i.e.}, the distance between states along any pair of trajectories does not increase. Finally, we draw connections between our results and related results, particularly small-gain theorems for interconnected input-to-state stable (ISS) systems. The present paper significantly generalizes results previously reported in \cite{Coogan:2016kx}, which only considered constant norms over the state-space. Thus, the theorems of \cite{Coogan:2016kx} are presented here as corollaries to our main results. 
In addition to deriving separable Lyapunov functions for a class of monotone systems, we present explicit conditions for establishing contractive properties for nonautonomous, nonlinear systems with respect to state-dependent, non-Euclidean metrics. These conditions require that the matrix measure of a suitably defined generalized Jacobian matrix be uniformly negative definite and rely on the theory of Finsler-Lyapunov functions \cite{Forni:2012qe}. The recent work \cite{Manchester:2017wc} also seeks to determine when a contracting monotone system has a separable contraction metric. Unlike the present work, which focuses on norms that are naturally separable, namely, the $\ell_1$ and $\ell_\infty$ norm, \cite{Manchester:2017wc} considers Riemannian metrics and studies when a contracting system also has a separable Riemannian contraction metric. This paper is organized as follows. Section \ref{sec:notation} defines notation and Section \ref{sec:problem-setup} provides the problem setup. Section \ref{sec:main-results} contains the statements of our main results. Before proving these results, we review contraction theory for general, potentially time-varying nonlinear systems in Section \ref{sec:contr-with-resp}. In applications, it is often the case that the system dynamics are not contractive everywhere, but are nonexpansive with respect to a particular norm, \emph{i.e.}, the distance between any pair of trajectories does not increase for all time. In Section \ref{sec:glob-asympt-stab}, we provide a sufficient condition for establishing global asymptotic stability for nonexpansive systems. In Section \ref{sec:proof-main-result}, we provide the proofs of our main results, and in Section \ref{sec:an-algor-comp}, we use our main results to establish a numerical algorithm for searching for separable Lyapunov functions using sum-of-squares programming. We provide several applications of our results in Section \ref{sec:examples}. 
Next, we discuss how the present work relates to results for interconnected ISS systems and generalized contraction theory in Section \ref{sec:disc-comp-exist}. Section \ref{sec:conclusions} contains concluding remarks. \section{Preliminaries} \label{sec:notation} For $x,y\in\mathbb{R}^n$, we write $x\leq y$ if the inequality holds elementwise, and likewise for $<$, $\geq$ and $>$. Similarly, $x>0$ ($x\geq 0$) means all elements of $x$ are positive (nonnegative), and likewise for $x<0$ or $x\leq 0$. For symmetric $X,Y\in\mathbb{R}^{n\times n}$, $X\succ 0$ (respectively, $X\succeq 0$) means $X$ is positive definite (respectively, semidefinite), and $X\succ Y$ (respectively, $X\succeq Y$) means $X-Y\succ 0$ (respectively, $X-Y\succeq 0$). For $\mathcal{X}\subseteq \mathbb{R}^n$, the matrix-valued function $\Theta:\mathcal{X}\to\mathbb{R}^{n\times n}$ is \emph{uniformly positive definite on $\mathcal{X}$} if $\Theta(x)=\Theta(x)^T$ for all $x\in\mathcal{X}$ and there exists $\alpha >0$ such that $\Theta(x)\succeq \alpha I$ for all $x\in\mathcal{X}$ where $I$ denotes the identity matrix and $T$ denotes transpose. The vector of all ones is denoted by $\mathbf{1}$. For functions of one variable, we denote the derivative with the prime notation $'$. The $\ell_1$ and $\ell_\infty$ norms are denoted by $|\cdot|_1$ and $|\cdot|_\infty$, respectively, that is, $|x|_1=\sum_{i=1}^n|x_i|$ and $|x|_\infty=\max_{i=1,\ldots,n}|x_i|$ for $x\in\mathbb{R}^n$. Let $|\wc|$ be some vector norm on $\mathbb{R}^n$ and let $\|\wc \|$ be its induced matrix norm on $\mathbb{R}^{n\times n}$. The corresponding \emph{matrix measure} of the matrix $A\in\mathbb{R}^{n\times n}$ is defined as (see, \emph{e.g.}, \cite{Desoer:2008bh}) \begin{align} \label{eq:5} \mu(A):= \lim_{h\to 0^+}\frac{\|I+hA\|-1}{h}. \end{align} One useful property of the matrix measure is that $\mu(A)<0$ implies $A$ is Hurwitz \cite{Vidyasagar:2002ly}. 
For the $\ell_1$ norm, the induced matrix measure is given by \begin{align} \label{eq:56} \textstyle \mu_1(A)=\max_{j=1,\ldots,n}\left(A_{jj}+\sum_{i\neq j}|A_{ij}|\right) \end{align} for any $A\in\mathbb{R}^{n\times n}$. Likewise, for the $\ell_\infty$ norm, the induced matrix measure is given by \begin{align} \label{eq:58} \textstyle \mu_\infty(A)=\max_{i=1,\ldots,n}\left(A_{ii}+\sum_{j\neq i}|A_{ij}|\right). \end{align} See, \emph{e.g.}, \cite[Section II.8, Theorem 24]{Desoer:2008bh}, for a derivation of the induced matrix measures for common vector norms. A matrix $A\in\mathbb{R}^{n\times n}$ is \emph{Metzler} if all of its off-diagonal components are nonnegative, that is, $A_{ij}\geq 0$ for all $i\neq j$. When $A$ is Metzler, \eqref{eq:56} and \eqref{eq:58} reduce to \begin{align} \label{eq:10} \mu_1(A)&\textstyle =\max_{j=1,\ldots,n}\sum_{i=1}^nA_{ij},\\ \label{eq:10-2} \mu_\infty(A)&\textstyle =\max_{i=1,\ldots,n}\sum_{j=1}^nA_{ij}, \end{align} that is, $\mu_1(A)$ is the largest column sum of $A$ and $\mu_{\infty}(A)$ is the largest row sum of $A$. \section{Problem Setup} \label{sec:problem-setup} Consider the dynamical system \begin{align} \label{eq:1} \dot{x}=f(x) \end{align} for $x\in\mathcal{X}\subseteq \mathbb{R}^n$ where $f(\cdot)$ is continuously differentiable. Let $f_i(x)$ indicate the $i$th component of $f$ and let $J(x)= \frac{\partial f}{\partial x}(x)$ be the Jacobian matrix of $f$. Throughout, when $\mathcal{X}$ is not an open set (\emph{e.g.}, $\mathcal{X}$ is the positive orthant as in Example \ref{ex:statedep} below), we understand continuous differentiability of a function on $\mathcal{X}$ to mean that the function can be extended to a continuously differentiable function on some open set containing $\mathcal{X}$, and we further consider $\mathcal{X}$ to be equipped with the induced topology. 
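The matrix-measure formulas \eqref{eq:56}--\eqref{eq:10-2} are straightforward to implement; a minimal numerical sketch (the helper names are illustrative, not from the paper):

```python
import numpy as np

# Matrix measures induced by the l1 and l-infinity norms, Eqs. (56) and (58);
# for a Metzler matrix these reduce to the largest column and row sum.
def mu_1(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return max(A[j, j] + sum(abs(A[i, j]) for i in range(n) if i != j)
               for j in range(n))

def mu_inf(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return max(A[i, i] + sum(abs(A[i, j]) for j in range(n) if j != i)
               for i in range(n))
```

For instance, the Metzler matrix $A=\begin{bmatrix}-3&1\\2&-4\end{bmatrix}$ has $\mu_1(A)=-1$ (largest column sum) and $\mu_\infty(A)=-2$ (largest row sum); either negative measure already certifies that $A$ is Hurwitz.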
Denote by $\phi(t,x_0)$ the solution to \eqref{eq:1} at time $t$ when the system is initialized with state $x_0$ at time $0$. We assume that \eqref{eq:1} is forward complete and $\mathcal{X}$ is forward invariant for \eqref{eq:1} so that $\phi(t,x_0)\in \mathcal{X}$ for all $t\geq 0$ and all $x_0\in\mathcal{X}$. Except in Sections \ref{sec:contr-with-resp} and \ref{sec:glob-asympt-stab}, we assume \eqref{eq:1} is monotone \cite{Smith:2008fk, Angeli:2003fv}: \begin{definition} \label{def:mon} The system \eqref{eq:1} is \emph{monotone} if the dynamics maintain a partial order on solutions, that is, \begin{align} \label{eq:3} x_0\leq y_0 \implies \phi(t,x_0)\leq \phi(t,y_0) \quad \forall t\geq 0 \end{align} for any $x_0,y_0\in\mathcal{X}$. \end{definition} In this paper, monotonicity is defined with respect to the positive orthant since inequalities are interpreted componentwise, although it is common to consider monotonicity with respect to other cones \cite{Angeli:2003fv}. The following proposition characterizes monotonicity in terms of the Jacobian matrix $J(x)$. \begin{prop}[{\emph{Kamke Condition}, \cite[Ch. 3.1]{Smith:2008fk}}] \label{prop:mono} Assume $\mathcal{X}$ is convex. Then the system \eqref{eq:1} is monotone if and only if the Jacobian $J(x)$ is Metzler for all $x\in\mathcal{X}$. \end{prop} Here, we are interested in certifying stability of an equilibrium $x^*$ for the dynamics \eqref{eq:1}. To that end, we have the following definition of Lyapunov function, the existence of which implies asymptotic stability of $x^*$. \begin{definition}[{\cite[p. 219]{Sontag:1998cr}}] \label{def:lyap} Let $x^*$ be an equilibrium of \eqref{eq:1}. 
A continuous function $V:\mathcal{X}\to\mathbb{R}$ is a \emph{(local) Lyapunov function} for \eqref{eq:1} with respect to $x^*$ if on some neighborhood $\mathcal{O}$ of $x^*$ the following hold: \begin{enumerate}[label=(\roman*)] \item $V(x)$ is proper at $x^*$, that is, for small enough $\epsilon>0$, $\{x\in\mathcal{X}|V(x)\leq \epsilon\}$ is a compact subset of $\mathcal{O}$; \item $V(x)$ is positive definite on $\mathcal{O}$, that is, $V(x)\geq 0$ for all $x\in\mathcal{O}$ and $V(x)=0$ if and only if $x=x^*$; \item For any $x_0\in \mathcal{O}$, $x_0\neq x^*$, there is some time $\tau>0$ such that $V(\phi(\tau,x_0))<V(x_0)$ and $V(\phi(t,x_0))\leq V(x_0)$ for all $t\in(0,\tau]$. \end{enumerate} Furthermore, $V$ is a global Lyapunov function for \eqref{eq:1} if we may take $\mathcal{O}=\mathcal{X}$ and $V(x)$ is globally proper, that is, for each $L>0$, $\{x\in\mathcal{X}|V(x)\leq L\}$ is compact. \end{definition} In this paper, we are particularly interested in Lyapunov functions defined using nondifferentiable norms, thus we rely on the definition above which only requires $V(x)$ to be continuous. Such nondifferentiable Lyapunov functions will not pose additional technical challenges since we will not rely on direct computation of the gradient of $V(x)$. Instead, we will construct locally Lipschitz continuous Lyapunov functions and bound the time derivative evaluated along trajectories of the system, which exists for almost all time. Nonetheless, classical Lyapunov theory, which verifies condition (iii) of the above definition by requiring $(\partial V/\partial x)\cdot f(x)<0$ for all $x\neq x^*$, is extended to such nondifferentiable Lyapunov functions with the use of generalized derivatives \cite{Clarke:1990kx}. Note that when $\mathcal{X}=\mathbb{R}^n$, a Lyapunov function is globally proper (and hence a global Lyapunov function) if and only if it is radially unbounded \cite[p. 220]{Sontag:1998cr}. 
We call the Lyapunov function $V(x)$ \emph{agent sum-separable} if it decomposes as \begin{align} \label{eq:61} V(x)=\sum_{i=1}^nV_i(x_i,f_i(x)) \end{align} for a collection of functions $V_i$, and \emph{agent max-separable} if it decomposes as \begin{align} \label{eq:62} V(x)=\max_{i=1,\ldots,n}V_i(x_i,f_i(x)). \end{align} If each $V_i$ in \eqref{eq:61} (respectively, \eqref{eq:62}) is a function only of $x_i$, we further call $V(x)$ \emph{state sum-separable} (respectively, \emph{state max-separable}). On the other hand, if each $V_i$ is a function only of $f_i(x)$, we say $V(x)$ is \emph{flow sum-separable} (respectively, \emph{flow max-separable}). Our objective is to construct separable Lyapunov functions for a class of monotone systems. \section{Main Results} \label{sec:main-results} In the main result of this paper, we provide conditions for certifying that a monotone system possesses a globally asymptotically stable equilibrium. These conditions then lead to easily constructed separable Lyapunov functions. As discussed in Section \ref{sec:contr-with-resp}, this result relies on establishing that the monotone system is contractive with respect to a possibly state-dependent norm. We will first state our main result in this section before developing the theory required for its proof in the sequel. \begin{definition} The set $\mathcal{X}\subseteq \mathbb{R}^n$ is \emph{rectangular} if there exists a collection of connected sets (\emph{i.e.}, intervals) $\mathcal{X}_i\subseteq \mathbb{R}$ for $i=1,\ldots, n$ such that $\mathcal{X}=\mathcal{X}_1\times \cdots \times \mathcal{X}_n$. \end{definition} \begin{thm} \label{thm:tvmain1} Let \eqref{eq:1} be a monotone system with rectangular domain $\mathcal{X}=\prod_{i=1}^n\mathcal{X}_i$, and let $x^*\in\mathcal{X}$ be an equilibrium for \eqref{eq:1}. 
If there exists a collection of continuously differentiable functions $\theta_i:\mathcal{X}_i\to\mathbb{R}$ for $i=1,\ldots,n$ such that for some $c>0$, $\theta_i(x_i)\geq c$ for all $x_i\in \mathcal{X}_i$ and for all $i$, and \begin{align} \label{eq:111} \theta(x)^TJ(x)+\dot{\theta}(x)^T&\leq 0 \quad \forall x\in\mathcal{X},\\ \label{eq:114} \theta(x^*)^TJ(x^*)&< 0, \end{align} where $\theta(x)=\begin{bmatrix}\theta_1(x_1) &\theta_2(x_2)&\ldots&\theta_n(x_n)\end{bmatrix}^T$, then $x^*$ is globally asymptotically stable. Furthermore, \begin{align} \label{eq:113} \sum_{i=1}^n\left|\int_{x_i^*}^{x_i}\theta_i(\sigma)d\sigma\right| \end{align} is a global Lyapunov function, and \begin{align} \label{eq:113-2} \sum_{i=1}^n\theta_i(x_i)|f_i(x)| \end{align} is a local Lyapunov function. If \eqref{eq:113-2} is globally proper, then it is also a global Lyapunov function. \end{thm} Above, $\dot{\theta}(x)$ is shorthand for \begin{align} \label{eq:163} \dot{\theta}(x)=\frac{\partial \theta}{\partial x}f(x)= \begin{bmatrix} \theta_1'(x_1)f_1(x)&\ldots&\theta_n'(x_n)f_n(x) \end{bmatrix}^T. \end{align} \begin{thm} \label{thm:tvmain2} Let \eqref{eq:1} be a monotone system with rectangular domain $\mathcal{X}=\prod_{i=1}^n\mathcal{X}_i$, and let $x^*\in\mathcal{X}$ be an equilibrium for \eqref{eq:1}. If there exists a collection of continuously differentiable functions $\omega_i:\mathcal{X}_i\to\mathbb{R}$ for $i=1,\ldots,n$ such that for some $c>0$, $0<\omega_i(x_i)\leq c$ for all $x_i\in\mathcal{X}_i$ and for all $i$, and \begin{align} \label{eq:45} J(x)\omega(x)-\dot{\omega}(x)&\leq 0 \quad \forall x\in\mathcal{X},\\ \label{eq:48} J(x^*)\omega(x^*)&< 0, \end{align} where $\omega(x)=\begin{bmatrix}\omega_1(x_1) &\omega_2(x_2)&\ldots&\omega_n(x_n)\end{bmatrix}^T$, then $x^*$ is globally asymptotically stable.
Furthermore, \begin{align} \label{eq:47} \max_{i=1,\ldots,n}\left|\int_{x_i^*}^{x_i}\frac{1}{\omega_i(\sigma)}d\sigma\right| \end{align} is a global Lyapunov function and \begin{align} \label{eq:47-2} \max_{i=1,\ldots,n}\frac{1}{\omega_i(x_i)}|f_i(x)| \end{align} is a local Lyapunov function. If \eqref{eq:47-2} is globally proper, then it is also a global Lyapunov function. \end{thm} Note that \eqref{eq:113} and \eqref{eq:47} are state-separable Lyapunov functions, while \eqref{eq:113-2} and \eqref{eq:47-2} are agent-separable Lyapunov functions. \begin{example} \label{ex:statedep} Consider the system \begin{align} \label{eq:80} \dot{x}_1&=-x_1+x_2^2\\ \dot{x}_2&=-x_2 \end{align} which is monotone on the invariant domain $\mathcal{X}=(\mathbb{R}_{\geq 0})^2$ with unique equilibrium $x^*=(0,0)$. The Jacobian is given by \begin{align} \label{eq:86} J(x)= \begin{bmatrix} -1&2x_2\\ 0&-1 \end{bmatrix}. \end{align} Let $\theta(x)= \begin{bmatrix} 1&x_2+1 \end{bmatrix}^T$ so that $\dot{\theta}(x)= \begin{bmatrix} 0&-x_2 \end{bmatrix}^T$ and \begin{align} \label{eq:90} \theta(x)^TJ(x)+\dot{\theta}(x)^T&= \begin{bmatrix} -1&-1 \end{bmatrix}\leq 0 \end{align} for all $x\in\mathcal{X}$. Since also $\theta(x^*)^TJ(x^*)= \begin{bmatrix} -1&-1 \end{bmatrix}$, Theorem~\ref{thm:tvmain1} is applicable. Therefore, $x^*=0$ is globally asymptotically stable and, from \eqref{eq:113} and \eqref{eq:113-2}, each of the following is a global Lyapunov function: \begin{align} \label{eq:93} V_1(x)&=x_1+x_2+\frac{1}{2}x_2^2\text{, and}\\ V_2(x)&=\left|{-x_1}+x_2^2\right|+x_2+x_2^2. \end{align} On the other hand, let $\omega(x)= \begin{bmatrix} 2&\frac{1}{1+x_2} \end{bmatrix}^T$ for which $\dot{\omega}(x)=\begin{bmatrix}0&\frac{x_2}{(1+x_2)^2}\end{bmatrix}^T$ and \begin{align} \label{eq:94} J(x)\omega(x)-\dot{\omega}(x)= \begin{bmatrix} -2+\frac{2x_2}{1+x_2}\\ \frac{-1}{1+x_2}-\frac{x_2}{(1+x_2)^2} \end{bmatrix}\leq 0 \end{align} for all $x\in\mathcal{X}$.
Since also $J(x^*)\omega(x^*)=\begin{bmatrix}-2&-1\end{bmatrix}^T$, Theorem~\ref{thm:tvmain2} is applicable. Thus, from \eqref{eq:47} and \eqref{eq:47-2}, each of the following is also a Lyapunov function: \begin{align} \label{eq:95} V_3(x)&=\max\left\{\frac{1}{2}x_1,x_2+\frac{1}{2}x_2^2\right\}{\text{, and}}\\ V_4(x)&=\max\left\{\frac{1}{2}\left|{-x_1}+x_2^2\right|,x_2+x_2^2\right\}. \end{align} $\blacksquare$ \end{example} \begin{remark} Even when \eqref{eq:113-2} or \eqref{eq:47-2} is not globally proper and thus not a global Lyapunov function, it is nonetheless the case that both functions monotonically decrease to zero along any trajectory of the system. \end{remark} We now specialize Theorems \ref{thm:tvmain1} and \ref{thm:tvmain2} to the case where $\theta(x)$ and $\omega(x)$ are independent of $x$, that is, are constant vectors. This special case proves to be especially useful in a number of applications as demonstrated in Section \ref{sec:examples}. \begin{cor} \label{cor:1} Let \eqref{eq:1} be a monotone system with equilibrium $x^*$. Suppose there exists a vector $v>0$ such that $v^TJ(x)\leq 0$ for all $x\in\mathcal{X}$ and {$v^TJ(x^*)<0$. Then $x^*$ is globally asymptotically stable}. Furthermore, \begin{align} \label{eq:17} \sum_{i=1}^nv_i|x_i-x_i^*| \end{align} is a global Lyapunov function and \begin{align} \label{eq:17-2} \sum_{i=1}^nv_i|f_i(x)| \end{align} is a local Lyapunov function. {If, additionally, there exists $c>0$ such that $v^TJ(x)\leq -c\mathbf{1}^T$ for all $x\in\mathcal{X}$, then \eqref{eq:17-2} is also a global Lyapunov function.} \end{cor} \begin{cor} \label{cor:2} Let \eqref{eq:1} be a monotone system with equilibrium $x^*$. Suppose there exists a vector $w>0$ such that $J(x)w\leq 0$ for all $x\in\mathcal{X}$ and {$J(x^*)w<0$. Then $x^*$ is globally asymptotically stable}.
Furthermore, \begin{align} \label{eq:19} \max_{i=1,\ldots,n}\left\{\frac{1}{w_i}|x_i-x_i^*|\right\} \end{align} is a global Lyapunov function and \begin{align} \label{eq:20} \max_{i=1,\ldots,n}\left\{\frac{1}{w_i}|f_i(x)|\right\} \end{align} is a local Lyapunov function. {If, additionally, there exists $c>0$ such that $J(x)w\leq -c\mathbf{1}$ for all $x\in\mathcal{X}$, then \eqref{eq:20} is also a global Lyapunov function.} \end{cor} Note that \eqref{eq:17} and \eqref{eq:19} are state-separable Lyapunov functions while \eqref{eq:17-2} and \eqref{eq:20} are flow-separable Lyapunov functions. {Corollaries \ref{cor:1} and \ref{cor:2} were previously reported in \cite{Coogan:2016kx}}. The following example shows that Corollaries \ref{cor:1} and \ref{cor:2} recover a well-known condition for stability of monotone linear systems. \begin{example}[Linear systems] \label{ex:1} Consider $\dot{x}=Ax$ for $A$ Metzler. Corollaries \ref{cor:1} and \ref{cor:2} imply that if one of the following conditions holds, then the origin is globally asymptotically stable: \begin{align} \label{eq:53} &\text{There exists $v>0$ such that $v^TA<0$, \quad or}\\ \label{eq:53-2}&\text{There exists $w>0$ such that $Aw<0$}. \end{align} If \eqref{eq:53} holds then $\sum_{i=1}^nv_i|x_i|$ and $\sum_{i=1}^n v_i|(Ax)_i|$ are Lyapunov functions, and if \eqref{eq:53-2} holds then $\max_i\{|x_i|/w_i\}$ and $\max_i\{|(Ax)_i|/w_i\}$ are Lyapunov functions where $(Ax)_i$ denotes the $i$th element of $Ax$. \hfill $\blacksquare$ \end{example} In fact, it is well known that $A$ is Hurwitz if and only if either (and, therefore, both) of the two conditions \eqref{eq:53} and \eqref{eq:53-2} hold, as noted in, \emph{e.g.}, \cite[Proposition 1]{Rantzer:2012fj}, and the corresponding state-separable Lyapunov functions of Example \ref{ex:1} are also derived in \cite{Rantzer:2012fj}.
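As a numerical companion to Example \ref{ex:1} (illustrative only; the particular Metzler matrix $A$ and vector $v$ below are our own hypothetical choices, not from the text), one can check condition \eqref{eq:53} directly and observe that both associated sum-separable candidates are nonincreasing along a forward-Euler trajectory:

```python
# Hypothetical Metzler matrix A (nonnegative off-diagonal entries) and
# vector v > 0 with v^T A < 0 componentwise; illustration only.
A = [[-2.0, 1.0],
     [0.5, -2.0]]
v = [1.0, 1.0]

def matvec(x):
    # matrix-vector product A x
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

vTA = [sum(v[i] * A[i][j] for i in range(2)) for j in range(2)]
assert all(entry < 0 for entry in vTA)        # condition v^T A < 0

def V_state(x):
    # sum_i v_i |x_i - x_i^*| with x^* = 0
    return sum(v[i] * abs(x[i]) for i in range(2))

def V_flow(x):
    # sum_i v_i |(A x)_i|
    return V_state(matvec(x))

# forward-Euler simulation of xdot = A x up to t = 2
x, dt = [1.0, -1.0], 1e-3
state_vals, flow_vals = [V_state(x)], [V_flow(x)]
for _ in range(2000):
    fx = matvec(x)
    x = [x[i] + dt * fx[i] for i in range(2)]
    state_vals.append(V_state(x))
    flow_vals.append(V_flow(x))

# both separable candidates are nonincreasing along the trajectory
assert all(b <= a + 1e-9 for a, b in zip(state_vals, state_vals[1:]))
assert all(b <= a + 1e-9 for a, b in zip(flow_vals, flow_vals[1:]))
```

The decrease of both candidates holds even as individual components change sign, which is exactly where the nondifferentiability of the weighted $\ell_1$ norm shows up.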
Thus, Theorems \ref{thm:tvmain1} and \ref{thm:tvmain2}, along with Corollaries \ref{cor:1} and \ref{cor:2}, may be viewed as nonlinear extensions of these results. The proofs of Theorems \ref{thm:tvmain1} and \ref{thm:tvmain2} use contraction-theoretic arguments and, specifically, show that a monotone system satisfying the hypotheses of the theorems is contractive with respect to a suitably defined, state-dependent norm. The proof technique illuminates useful properties of contractive systems that are of independent interest and appear to be novel. These results are presented next, before returning to the proofs of the above theorems. \section{Contraction with respect to state-dependent, non-Euclidean metrics} \label{sec:contr-with-resp} In the following two sections, we develop preliminary results necessary to prove our main results of Section \ref{sec:main-results}. Here, we do not require the monotonicity property. Moreover, we develop our results for potentially time-varying systems, that is, systems of the form $\dot{x}=f(t,x)$. We first provide conditions for establishing that a nonlinear system is \emph{contractive}. A system is contractive with respect to a given metric if the distance between any two trajectories decreases at an exponential rate. Contraction with respect to Euclidean norms with potentially state-dependent weighting matrices is considered in \cite{LOHMILLER:1998bf}. This approach equips the state space with a Riemannian structure. For metrics defined using non-Euclidean norms, which have proven useful in many applications, a Riemannian approach is insufficient, and contraction has been characterized using matrix measures for fixed (\emph{i.e.}, state-independent) norms \cite{Sontag:2010fk}. These approaches were recently unified and generalized in \cite{Forni:2012qe} using the theory of Finsler-Lyapunov functions{, which lifts Lyapunov theory to the tangent bundle}.
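For later reference, the matrix measures induced by the $\ell_1$ and $\ell_\infty$ norms admit the standard closed forms $\mu_1(A)=\max_j\big(a_{jj}+\sum_{i\neq j}|a_{ij}|\big)$ and $\mu_\infty(A)=\max_i\big(a_{ii}+\sum_{j\neq i}|a_{ij}|\big)$. The sketch below (a numerical illustration with a hypothetical nonlinear system of our choosing; not part of the formal development) checks that a uniform bound $\mu_\infty(J(x))\leq -c$ is accompanied by the contraction estimate $|\phi(t,x_0)-\phi(t,y_0)|_\infty\leq e^{-ct}|x_0-y_0|_\infty$ along simulated trajectories.

```python
import math

def mu_1(A):
    # matrix measure induced by the l1 norm: maximize over columns
    n = len(A)
    return max(A[j][j] + sum(abs(A[i][j]) for i in range(n) if i != j)
               for j in range(n))

def mu_inf(A):
    # matrix measure induced by the l-infinity norm: maximize over rows
    n = len(A)
    return max(A[i][i] + sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

# hypothetical contractive system: x1' = -x1 + 0.25*sin(x2), x2' = -x2
def f(x):
    return [-x[0] + 0.25 * math.sin(x[1]), -x[1]]

def jac(x):
    return [[-1.0, 0.25 * math.cos(x[1])], [0.0, -1.0]]

def flow(x, t, dt=1e-3):
    # forward-Euler approximation of phi(t, x)
    x = list(x)
    for _ in range(int(round(t / dt))):
        fx = f(x)
        x = [x[i] + dt * fx[i] for i in range(2)]
    return x

# uniform matrix-measure bound mu_inf(J(x)) <= -c with c = 0.75
c = 0.75
for x2 in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert mu_inf(jac([0.0, x2])) <= -c

# contraction of the l-infinity distance between two trajectories
x0, y0 = [1.0, 0.5], [-0.5, 2.0]
xT, yT = flow(x0, 2.0), flow(y0, 2.0)
d_init = max(abs(x0[i] - y0[i]) for i in range(2))
d_final = max(abs(xT[i] - yT[i]) for i in range(2))
assert d_final <= math.exp(-c * 2.0) * d_init + 1e-6
```

Here $\mu_\infty(J(x))$ is bounded by $-c$ uniformly because the off-diagonal entry of the Jacobian is bounded by $0.25$; the simulated distance indeed respects the exponential envelope.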
Here, {we present conditions that establish contraction }with respect to potentially state-dependent, non-Euclidean norms. {The proofs of these conditions rely on the general theoretical framework of Finsler-Lyapunov theory of \cite{Forni:2012qe}, and thus our results are an application of the main results of \cite{Forni:2012qe}. } To the best of our knowledge, such explicit characterizations of contraction with respect to state-dependent, non-Euclidean norms using matrix measures are not established elsewhere in the literature. \begin{definition} Let $|\cdot|$ be a norm on $\mathbb{R}^n$ and for a {convex} set $\mathcal{X}\subseteq \mathbb{R}^{n}$, let $\Theta:\mathcal{X}\to\mathbb{R}^{n\times n}$ be continuously differentiable and {satisfy $\Theta(x)=\Theta(x)^T\succ 0$ for all $x\in\mathcal{X}$.} {Let $\mathcal{K}\subseteq \mathcal{X}$ be such that any two points in $\mathcal{K}$ can be connected by a smooth curve,} and, for any two points $x,y\in {\mathcal{K}}$, let {$\Gamma_{\mathcal{K}}(x,y)$} be the set of piecewise continuously differentiable curves $\gamma:[0,1]\to {\mathcal{K}}$ connecting $x$ to $y$ {within $\mathcal{K}$} so that $\gamma(0)=x$ and $\gamma(1)=y$. The induced \emph{distance metric} is given by \begin{align} \label{eq:64} d_{{\mathcal{K}}}(x,y)=\inf_{\gamma\in\Gamma_{{\mathcal{K}}}(x,y)}\int_{0}^1|\Theta(\gamma(s)){\gamma'}(s)| ds. \end{align} {When $\mathcal{K}=\mathcal{X}$, we drop the subscript and write $d(x,y)$ for $d_\mathcal{X}(x,y)$.} \end{definition} \begin{remark} \label{rem:fin} {Assuming $\Theta(x)$ is extended to a function defined on an open set containing $\mathcal{X}$}, from a differential geometric perspective, the function $V(x,\delta x):=|\Theta(x)\delta x|$ is a \emph{Finsler function} defined on the tangent bundle {of this open set}, and the distance metric $d(x,y)$ is the \emph{Finsler metric} associated with ${V}$ \cite{Bao:2012mg}. 
\end{remark} In the remainder of this section and in the following section, we study the system \begin{equation} \label{eq:1010} \dot{x}=f(t,x) \end{equation} for $x\in \mathcal{X}\subseteq \mathbb{R}^n$ where {$\mathcal{X}$ is convex}, $f(t,x)$ is differentiable in $x$, and $f(t,x)$ and the Jacobian $J(t,x):= \frac{\partial f}{\partial x}(t,x)$ are continuous in $(t,x)$. {When $\mathcal{X}$ is not an open set, we assume there exists an open set containing $\mathcal{X}$ such that $f(t,\cdot)$ can be extended as a differentiable function on this set and the continuity requirements also hold on this set.} As before, $\phi(t,x_0)$ denotes the solution to \eqref{eq:1010} at time $t$ when the system is initialized with state $x_0$ at time $t=0$. {Given \eqref{eq:1010} and continuously differentiable $\Theta:\mathcal{X}\to\mathbb{R}^{n\times n}$, $\dot{\Theta}(t,x)$ is shorthand for the matrix given elementwise by $[\dot{\Theta}(t,x)]_{ij}=\frac{\partial \Theta_{ij}}{\partial x}(x)f(t,x)$. } \begin{prop} \label{thm:1} Consider the system \eqref{eq:1010} and suppose $\mathcal{K}\subseteq \mathcal{X}$ is forward invariant and {such that any two points in $\mathcal{K}$ can be connected by a smooth curve}. Let $|\cdot|$ be a norm on $\mathbb{R}^n$ with induced matrix measure $\mu(\cdot)$ and let $\Theta:{\mathcal{X}}\to\mathbb{R}^{n\times n}$ be continuously differentiable {and satisfy $\Theta(x)=\Theta(x)^T\succ 0$ for all $x\in\mathcal{X}$.} If there exists $c\geq 0$ such that \begin{align} \label{eq:69} \mu\left(\dot{\Theta}(t,x)\Theta(x)^{-1}+\Theta(x)J(t,x)\Theta(x)^{-1}\right) \leq -c \end{align} for all $x\in\mathcal{K}$ and all $t\geq 0$, then \begin{align} \label{eq:70} d_{{\mathcal{K}}}(\phi(t,y_0),\phi(t,x_0))\leq e^{-ct}d_{{\mathcal{K}}}(y_0,x_0) \end{align} for all $x_0,y_0\in \mathcal{K}$. 
Moreover, if the system is time-invariant so that $\dot{x}=f(x)$, then \begin{align} \label{eq:71} |\Theta(\phi(t,x_0))f(\phi(t,x_0))|\leq e^{-ct}|\Theta(x_0)f(x_0)| \end{align} for all $x_0\in\mathcal{K}$ and all $t\geq 0$. \end{prop} Before proceeding with the proof of Proposition \ref{thm:1}, we first note that when $\Theta(x)\equiv \hat{\Theta}\succ 0$ for some fixed symmetric $\hat\Theta\in\mathbb{R}^{n\times n}$, then $|\cdot|_{\hat{\Theta}}$ defined as $|z|_{\hat{\Theta}}=|\hat\Theta z|$ for all $z$ is a norm and $d(x,y)=|y-x|_{\hat\Theta}$. The corresponding induced matrix measure satisfies $\mu_{\hat{\Theta}}(A)=\mu(\hat{\Theta} A\hat{\Theta}^{-1})$ for all $A$ so that Proposition \ref{thm:1} states: if $\mu_{\hat\Theta}(J(t,x))\leq -c\leq 0$ for all $t$ and $x$, then $|\phi(t,x_0)-\phi(t,y_0)|_{\hat\Theta}\leq e^{-ct}|x_0-y_0|_{\hat\Theta}$ and, in the time-invariant case, $|f(\phi(t,x_0))|_{\hat\Theta}\leq e^{-ct}|f(x_0)|_{\hat\Theta}$, which recovers familiar results for contractive systems with respect to (state-independent) non-Euclidean norms; see, \emph{e.g.}, \cite[Theorem 1]{Sontag:2010fk}, \cite[Proposition 2]{Coogan:2016kx}. Thus, Proposition \ref{thm:1} is an extension of these results to state-dependent, non-Euclidean norms. \begin{proof} Let $V(x,\delta x)=|\Theta(x)\delta x|$. To prove \eqref{eq:70}, we claim that $V(x,\delta x)$ satisfies \begin{align} \label{eq:83} \dot{V}(x, \delta x)\leq -cV(x, \delta x) \end{align} along trajectories of the \emph{variational} dynamics \begin{align} \label{eq:73} \dot{x}&=f(t,x)\\ \label{eq:73-2}\dot{\delta x}&=J(t,x)\delta x \end{align} {when $x(0)\in\mathcal{K}$.} To prove the claim, let $(x(t),\delta x(t))$ be some trajectory of the variational dynamics \eqref{eq:73}--\eqref{eq:73-2} {with $x(t)\in\mathcal{K}$ for all $t\geq 0$}. In the following, we omit dependent variables from the notation when clear and write, \emph{e.g.}, $f$, $J$, and $\Theta$ instead of $f(t,x)$, $J(t,x)$, and $\Theta(x)$. 
We then have{, for almost all $t$,} \begin{align} \label{eq:66} &\dot{V}(x,\delta x)\\ &:=\lim_{h\to 0^+}\frac{V(x(t+h),\delta x(t+h))-V(x(t),\delta x(t))}{h}\\ &=\lim_{h\to 0^+}\frac{V(x+hf,\delta x+hJ \delta x)-V(x,\delta x)}{h}\\ &=\lim_{h\to 0^+}\frac{|\Theta(x+hf)(\delta x+hJ \delta x)|-|\Theta(x)\delta x|}{h}\\ &=\lim_{h\to 0^+}\frac{|(\Theta+h\dot{\Theta})(\delta x+hJ\delta x)|-|\Theta \delta x|}{h}\\ &= \lim_{h\to 0^+}\frac{|(I+h(\Theta J\Theta^{-1}+\dot{\Theta}\Theta^{-1}))(\Theta \delta x)|-|\Theta \delta x|}{h}\\ &\leq \lim_{h\to 0^+}\frac{|I+h(\Theta J\Theta^{-1}+\dot{\Theta}\Theta^{-1})||\Theta \delta x|-|\Theta \delta x|}{h}\\ &=\mu\left(\Theta J\Theta^{-1}+\dot{\Theta}\Theta^{-1}\right)|\Theta \delta x|\\ &\leq -c |\Theta(x) \delta x|, \end{align} and we have proved \eqref{eq:83}. {Recall that we consider $\mathcal{X}\subseteq \mathcal{X}^o$ for some open set $\mathcal{X}^o$ for which $f(t,\cdot)$ and $\Theta(\cdot)$ can be extended to functions with domain $\mathcal{X}^o$. Using the fact that $\mathcal{X}^o$ is a smooth manifold \cite[p. 56]{Boothby:1986dz} and $\mathcal{K}$ is a positively invariant, connected subset of $\mathcal{X}^o$, we apply \cite[Theorem 1]{Forni:2012qe} to conclude that \eqref{eq:83} implies \eqref{eq:70} for all $x_0,y_0\in \mathcal{K}$.} To prove \eqref{eq:71}, now suppose $\dot{x}=f(x)$. Then $\dot{f}(x)=J(x)f(x)$ so that $(x(t),f(x(t)))$ is a trajectory of the variational system for any trajectory $x(t)$ of $\dot{x}=f(x)$. Equation \eqref{eq:71} then follows immediately from \eqref{eq:83}. \end{proof} We note that \eqref{eq:71} can be proved directly by applying a change of coordinates. In particular, for $\dot{x}=f(x)$ and $\Theta(x)$ as in the hypotheses of Proposition \ref{thm:1}, consider the time-varying change of variables $w:=\Theta(x)f(x)$ for which $\dot{w}=\tilde{J}(x) w$ where \begin{align} \label{eq:74} \tilde{J}(x):=\dot{\Theta}(x)\Theta^{-1}(x)+\Theta(x)J(x)\Theta^{-1}(x).
\end{align} By Coppel's Lemma (see, \emph{e.g.}, \cite[Theorem II.8.27]{Desoer:2008bh}), $|w(t)|\leq e^{\int_0^t\mu(\tilde{J}(x(s)))ds} |w(0)|\leq e^{-ct}|w(0)|$ where the second inequality follows from \eqref{eq:69}, providing an alternative interpretation for \eqref{eq:71}. { \begin{remark} From Remark \ref{rem:fin}, $V(x,\delta x)$ given in the proof of Proposition \ref{thm:1} can be regarded as a Finsler function defined on the tangent bundle of {an open set (\emph{i.e.}, manifold) containing } $\mathcal{X}$. It follows that $V(x,\delta x)$ is then a \emph{Finsler-Lyapunov} function as defined in \cite{Forni:2012qe} for $\dot{x}=f(t,x)$, where $\delta x$ is the \emph{virtual displacement} associated with the system. \end{remark} } \begin{definition} A system for which the hypotheses of Proposition \ref{thm:1} hold with $c>0$ (respectively, $c=0$) is \emph{contractive} (respectively, \emph{nonexpansive}) with respect to $|\cdot|$ and $\Theta(x)$ on $\mathcal{K}$. \end{definition} { \begin{remark} \label{rem:invariant} If $\mathcal{K}$ in the statement of Proposition \ref{thm:1} is not forward invariant, then a straightforward modification of the proof implies that the conclusions \eqref{eq:70} and \eqref{eq:71} hold for all $t\geq 0$ such that $\phi(\tau,x_0)\in \mathcal{K}$ and $\phi(\tau,y_0)\in\mathcal{K}$ for all $\tau\in[0,t]$. \end{remark} } \section{A global asymptotic stability result for nonexpansive systems} \label{sec:glob-asympt-stab} When the hypotheses of Proposition \ref{thm:1} are only satisfied with $c=0$, the system is nonexpansive as defined above so that the distance between any pair of trajectories is nonincreasing but not necessarily exponentially decreasing as when the system is contractive, \emph{i.e.}, when $c>0$.
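To make the quantity in \eqref{eq:71} concrete, the sketch below revisits the system of Example \ref{ex:statedep} with the diagonal weight $\Theta(x)=\mathrm{diag}(1,\,1+x_2)$, so that $|\Theta(x)f(x)|_1$ coincides with the function $V_2$ computed there, and checks numerically (forward Euler; illustration only, not part of the formal development) that this weighted quantity is nonincreasing along a trajectory:

```python
# System of the example: x1' = -x1 + x2^2, x2' = -x2 on the nonnegative
# orthant, with state-dependent weight Theta(x) = diag(1, 1 + x2).

def f(x):
    return [-x[0] + x[1] ** 2, -x[1]]

def weighted_norm(x):
    # |Theta(x) f(x)|_1 = |f1(x)| + (1 + x2)|f2(x)|
    fx = f(x)
    return abs(fx[0]) + (1.0 + x[1]) * abs(fx[1])

x, dt = [0.2, 1.5], 1e-3
vals = [weighted_norm(x)]
for _ in range(4000):          # simulate up to t = 4
    fx = f(x)
    x = [x[i] + dt * fx[i] for i in range(2)]
    vals.append(weighted_norm(x))

assert all(b <= a + 1e-9 for a, b in zip(vals, vals[1:]))  # nonincreasing
assert vals[-1] < 0.1 * vals[0]                            # decays toward 0
```

The decrease persists through the kink where $f_1(x)$ changes sign, which a constant-weight Euclidean argument would not capture as directly.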
Nonetheless, if the vector field is periodic and there exists a periodic trajectory that passes through a region in which the contraction property holds locally, then all trajectories entrain, that is, converge, to this periodic trajectory. As a special case, if there exists an open set around an equilibrium in which the contraction property holds locally, then the equilibrium is globally asymptotically stable. We make these statements precise in this section. \begin{figure} \centering {\footnotesize \begin{tikzpicture} [xscale=3, yscale=2,decoration={ markings, mark=at position 0.25 with {\arrow[line width=1pt]{stealth}}}] \draw[black, fill=gray!5, draw=black] (.4,-.5) rectangle (3.1,1.5); \draw[gray, fill=gray!40] plot [smooth cycle, tension=.7,rotate around={15:(1.65,.45)}] coordinates { (.8,.25) (1.5,.9) (2,.4) (1.6,-.3)}; \node[black] at (.5,-.3) {\footnotesize $\mathcal{X}$}; \node[black] at (.85,.8) {\footnotesize $\zeta(t)$}; \node[black] at (2.4,1.3) {\footnotesize $\mu(\tilde{J}(x,t))\leq 0$}; \node[black] at (1.55,-.1) {\footnotesize$\mu(\tilde{J}(x,t))\leq -c$}; \node[black] at (1.6,.15) {\footnotesize $\zeta(t^*)$}; \node[fill=black, circle, inner sep=1pt] (a) at (1.55,.3) {}; \node[fill=black, circle, inner sep=1pt] (b) at ($(1.55,.31)+ (24:1.4)$){}; \draw[dashed, black,postaction={decorate}] plot[smooth cycle, tension=1.4] coordinates {(a) ($(a)+(0,1)$) ($(a)+(-.6,.5)$) }; \draw[black, line width=1pt] (a) to[bend right=20pt] node[below,pos=.5]{$\gamma(s)$}(b); \node[black] at ($(1.55,.31)+ (24:1.4)+(0,-.1)$){\footnotesize $x$}; \draw[->, black, line width=.6pt] ($(a)+(0,.08)$) to[bend right=3pt] +($(5:.22)$); \draw[->, black, line width=.6pt] ($(a)+(0,.08)+(9:.45)$) to[bend left=3pt] +($(13:-.22)$); \end{tikzpicture} } \caption{For a periodic, nonexpansive system with domain $\mathcal{X}$, $\mu(\tilde{J}(x,t))\leq 0$ for all $x,t$. 
If there exists a periodic trajectory $\zeta(t)$ and a time $t^*$ such that $\mu(\tilde{J}(t^*,\zeta(t^*)))<0$, then there exists a neighborhood of $\zeta(t^*)$ and an interval of time during which the distance between any other trajectory and the periodic trajectory strictly decreases. It follows that all trajectories must entrain to the periodic trajectory. } \label{fig:fig1} \end{figure} \begin{thm} \label{thm:2} Consider $\dot{x}=f(t,x)$ for $x\in\mathcal{X}\subseteq \mathbb{R}^n$ where $f(t,x)$ is differentiable in $x$, and $f(t,x)$ and the Jacobian $J(t,x):= \frac{\partial f}{\partial x}(t,x)$ are continuous in $(t,x)$. Assume $\mathcal{X}$ is forward invariant and {convex}. Let $|\cdot|$ be a norm on $\mathbb{R}^n$ with induced matrix measure $\mu(\cdot)$ and let $\Theta:\mathcal{X}\to\mathbb{R}^{n\times n}$ be continuously differentiable and uniformly positive definite {on $\mathcal{X}$}. Suppose $f(t,x)$ is $T$-periodic for some $T> 0$ so that $f(t,x)=f(t+T,x)$ for all $t$ and all $x\in\mathcal{X}${, and let }$\zeta(t)$ be a {$T$-}periodic trajectory of the system {so that $\zeta(t)=\zeta(t+T)$ for all $t$}. Define \begin{align} \label{eq:77} \tilde{J}(t,x):=\dot{\Theta}(t,x)\Theta(x)^{-1}+\Theta(x)J(t,x)\Theta(x)^{-1}. \end{align} If $ \mu(\tilde{J}(t,x))\leq 0 $ for all $x\in\mathcal{X}$ and $t\geq 0$, and there exists a time $t^*$ such that \begin{align} \label{eq:76} \mu(\tilde{J}(t^*,\zeta(t^*)))<0, \end{align} then \begin{align} \label{eq:82} \lim_{t\to\infty} d(\phi(t,x_0),\zeta(t))=0\quad \text{for all $x_0\in\mathcal{X}$} \end{align} and all trajectories entrain to $\zeta(t)$, that is, { \begin{align} \label{eq:7} \lim_{t\to\infty} |\phi(t,x_0)-\zeta(t)|=0\quad \text{for all $x_0\in\mathcal{X}$.} \end{align} } \end{thm} \begin{proof} Without loss of generality, assume $t^*=0$.
Then, condition \eqref{eq:76} and continuity of $J(t,x)$, $\Theta(x)^{-1}$, and $\dot{\Theta}(t,x)$ imply there exists $\epsilon>0$, $c>0$, and $0<\tau\leq T$ such that \begin{align} \label{eq:600} \mu(\tilde{J}(t,y))\leq -c \qquad \forall t\in[0,\tau], \ \forall y\in \mathcal{B}_{2\epsilon}(\zeta(t)) \end{align} where $\mathcal{B}_{2\epsilon}(y)=\{z\in\mathcal{X}:d(y,z){< 2\epsilon}\}$. {Furthermore, with $\mathcal{K}:=\mathcal{B}_{2\epsilon}(y)$, note that $d(y,z)=d_{\mathcal{K}}(y,z)$ for all $z\in\mathcal{B}_{2\epsilon}(y)$.} Define the mapping \begin{align} \label{eq:108} P(\xi)=\phi(T,\xi) \end{align} and observe that $P^k(\xi)=\phi(kT,\xi)$. Let $\zeta^*=\zeta(0)$ and note that $\zeta^*$ is a fixed point of $P$. {First, }consider a point $\xi\in\bar{\mathcal{B}}_\epsilon(\zeta^*){:=\{z\in\mathcal{X}:d(\zeta^*,z)\leq \epsilon\}}$. Let $x(t)=\phi(t,\xi)$ and note that $d(\zeta^* ,P(\xi))=d(\zeta(T) ,x(T))$. We have $ d(\zeta(T),x(T))\leq d(\zeta(\tau), x(\tau))$ since $ \mu(\tilde{J}(t,x))\leq 0 $ for all $x\in\mathcal{X}$ and $t\geq 0$, and, {by Remark \ref{rem:invariant} with $\mathcal{K}=\mathcal{B}_{2\epsilon}(y)$,} \eqref{eq:600} implies \begin{align} \label{eq:1017} d(\zeta(\tau) ,x(\tau))\leq e^{-c\tau}d(\zeta(0) ,x(0)). \end{align} Now consider $\xi\in \mathcal{X}$ such that $d(\zeta^*,\xi)> \epsilon$ and again let $x(t)=\phi(t,\xi)$. Let $\delta=(1-e^{-c\tau})\epsilon/2$ and let $\gamma:[0,1]\to \mathcal{X}$ with $\gamma(0)=\zeta^*$ and $\gamma(1)=\xi$ be such that $\int_0^1|\Theta(\gamma(s))\gamma'(s)|ds\leq d(\zeta^*,\xi)+\delta$. Since $d(\zeta^*,\gamma(0))=0$ and $d(\zeta^*,\gamma(1))>\epsilon$, by continuity of $\gamma(s)$ and $d(\zeta^*,\cdot)$, there exists $s_\epsilon$ such that $d(\zeta^*,\gamma(s_\epsilon))=\epsilon$.
Moreover, \begin{align} \label{eq:78} & d(\zeta^*,\gamma(s_\epsilon))+d(\gamma(s_\epsilon),\xi)\\ &\leq \int_0^{s_\epsilon}|\Theta(\gamma(s))\gamma'(s)|ds+\int_{s_\epsilon}^{1}|\Theta(\gamma(s))\gamma'(s)|ds\\ &\leq d(\zeta^*,\xi)+\delta \end{align} where the first inequality follows from the definition of $d$ and the fact that arc-length is independent of reparameterizations of $\gamma$. Let $\sigma(t)=\phi(t,\gamma(s_\epsilon))$. By Proposition \ref{thm:1}, $\mu(\tilde{J}(t,x))\leq 0$ for all $x\in\mathcal{X}$ and all $t\geq 0$ implies $d(\phi(t,x_0),\phi(t,y_0))\leq d(x_0,y_0)$ for all $x_0,y_0\in\mathcal{X}$. In particular, $d(\sigma(T) ,x(T))\leq d(\sigma(0),x(0))= d(\gamma(s_\epsilon),\xi)\leq d(\zeta^*,\xi)+\delta-\epsilon$. Furthermore, by the same argument as in the preceding case, we have $d(\zeta(T),\sigma(T))\leq e^{-c\tau}d(\zeta^*,\gamma(s_\epsilon))=e^{-c\tau}\epsilon$. Thus, by the triangle inequality, \begin{align} \label{eq:1012} d(\zeta^*,P(\xi))&\leq d(\zeta(T),\sigma(T)) +d(\sigma(T) ,x(T))\\ &\leq d(\zeta^*,\xi)+\delta-(1-e^{-c\tau})\epsilon\\ &=d(\zeta^*,\xi)-\delta. \end{align} We thus have \begin{align} \label{eq:1050} d(\zeta^*,P(\xi))\leq \begin{cases} d(\zeta^*,\xi)-\delta&\text{if }d(\zeta^*,\xi)>\epsilon\\ e^{-c\tau}d(\zeta^*,\xi)&\text{if }d(\zeta^*,\xi)\leq \epsilon. \end{cases} \end{align} It follows that, for all $\xi$, $d(\zeta^*,P^k(\xi))\leq \epsilon$ for some finite $k$ (in particular, for any $k\geq d(\zeta^*,\xi)/\delta$). The second condition of \eqref{eq:1050} ensures $d(\zeta^*,P^k(\xi))\to 0$ as $k\to \infty$ so that \eqref{eq:82} holds. {Moreover, since $\Theta(x)$ is uniformly positive definite on $\mathcal{X}$ by hypothesis, there exists a constant $m>0$ such that $|\Theta(x)z|\geq m|z|$ for any $x, z$. It follows that $d(x,y)\geq \inf_{\gamma\in\Gamma(x,y)}\int_{0}^1m|{\gamma'}(s)| ds=m|y-x|$ where the equality holds because $\mathcal{X}$ is convex. 
Thus, \eqref{eq:82} implies \eqref{eq:7}.} \end{proof} Theorem \ref{thm:2} and its proof are illustrated in Figure \ref{fig:fig1}. Note that, as a consequence of the global entrainment property, {$\zeta(t)$} in the statement of Theorem \ref{thm:2} is the unique periodic trajectory of a system satisfying the hypotheses of the theorem. Theorem \ref{thm:2} is closely related to existing results in the literature, although we believe the generality provided by Theorem \ref{thm:2} is novel. In particular, \cite[Lemma 6]{Lovisari:2014yq} provides a similar result for $\mu(\cdot)$ restricted to the matrix measure induced by the $\ell_1$ norm under the assumption that \eqref{eq:1} is monotone and time-invariant. A similar technique is applied to periodic trajectories of a class of monotone flow networks in \cite[Proposition 2]{Lovisari:2014qv}, but a general formulation is not presented. While Theorem \ref{thm:2} is interesting in its own right, in this paper, {we are primarily interested in obtaining Lyapunov functions for monotone systems. Thus,} our main interest is in the following Corollary, which specializes Theorem \ref{thm:2} to time-invariant systems. \begin{cor} \label{cor:nonexpeq} Consider $\dot{x}=f(x)$ for $x\in\mathcal{X}\subseteq \mathbb{R}^n$ for continuously differentiable $f(x)$. Assume $\mathcal{X}$ is forward invariant and {convex}. Let $|\cdot|$ be a norm on $\mathbb{R}^n$ with induced matrix measure $\mu(\cdot)$ and let $\Theta:\mathcal{X}\to\mathbb{R}^{n\times n}$ be continuously differentiable and uniformly positive definite {on $\mathcal{X}$}. Let $x^*$ be an equilibrium of the system and define \begin{align} \label{eq:77} \tilde{J}(x):=\dot{\Theta}(x)\Theta(x)^{-1}+\Theta(x)J(x)\Theta(x)^{-1} \end{align} where $J(x)=\frac{\partial f}{\partial x}(x)$. 
If \begin{align} \label{eq:79} \mu(\tilde{J}(x))&\leq 0 \quad \text{for all }x\in\mathcal{X},\text{ and}\\ \label{eq:79-2} \mu(\tilde{J}(x^*))&<0, \end{align} then $x^*$ is unique and globally asymptotically stable. Moreover, $d(x,x^*)$ is a global Lyapunov function and $ |\Theta(x)f(x)|$ is a local Lyapunov function. If $ |\Theta(x)f(x)|$ is globally proper, then it is also a global Lyapunov function. \end{cor} \begin{proof} Choose any $T>0$, for which $f(x)$ is then (vacuously) $T$-periodic, and $\zeta(t):=x^*$ is trivially a periodic trajectory so that Theorem \ref{thm:2} applies. Note that we may take $\tau=T$ in the proof of the theorem. {Therefore, $\lim_{t\to\infty}d(\phi(t,x_0),x^*)= 0$ for all $x_0\in\mathcal{X}$. Moreover, as in the proof of Theorem \ref{thm:2}, there exists $m>0$ such that $d(x,y)\geq m|y-x|$ because $\Theta(x)$ is uniformly positive definite and $\mathcal{X}$ is convex so that $d(x,x^*)$ is globally proper. } {We now show that $|\Theta(x)f(x)|$ is proper. Since $\Theta(x)$ is uniformly positive definite, it is sufficient to show $|f(x)|$ is proper. Let $f(x)=J(x^*)(x-x^*)+g(x)$ for appropriately defined $g$ satisfying that for any $\epsilon_1>0$, there exists $\epsilon_2>0$ such that $|g(x)|<\epsilon_1 |x-x^*|$ for all $x$ satisfying $|x-x^*|<\epsilon_2$. Moreover, \eqref{eq:79-2} implies $\tilde{J}(x^*)$ is Hurwitz and since $\tilde{J}(x^*)$ is related to $J(x^*)$ via a similarity transform, it follows that $J(x^*)$ is Hurwitz and thus invertible. Therefore, there exists $\epsilon_3>0$ for which $|J(x^*)(x-x^*)|\geq \epsilon_3 |x-x^*|$ for all $x\in\mathcal{X}$. Then, $|f(x)|\geq |J(x^*)(x-x^*)|-|g(x)|\geq \epsilon_3|x-x^*|-|g(x)|$ for all $x\in\mathcal{X}$.
Choosing $\epsilon_1=\epsilon_3/2$ implies that $|f(x)|\geq \frac{\epsilon_3}{2}|x-x^*|$ for all $x\in\mathcal{X}$ satisfying $|x-x^*|<\epsilon_2$, and thus $|f(x)|$ is proper.} From Proposition \ref{thm:1} with $c=0$, we have that $d(x,x^*)$ and $|\Theta(x)f(x)|$ are nonincreasing by \eqref{eq:70} and \eqref{eq:71}, and both tend to zero along trajectories of the system; thus they both satisfy condition (iii) of Definition \ref{def:lyap}. {Observing that $d(x,x^*)$ and $|\Theta(x)f(x)|$ are both positive definite completes the proof, where we remark that global asymptotic stability of $x^*$ follows from the existence of a global Lyapunov function.} \end{proof} \section{Proof of main result} \label{sec:proof-main-result} We are now in a position to prove our main results, Theorems \ref{thm:tvmain1} and \ref{thm:tvmain2}. We begin with the following proposition, which shows that $d(x,y)$ as in \eqref{eq:64} can be obtained explicitly when $\Theta(x)$ is a diagonal, state-dependent weighting matrix. \begin{prop} Let $\mathcal{X}\subseteq \mathbb{R}^n$ be a rectangular set and for any $x,y\in\mathcal{X}$, let $\Gamma(x,y)$ be the set of piecewise continuously differentiable curves $\gamma:[0,1]\to \mathcal{X}$ connecting $x$ to $y$ {within $\mathcal{X}$} so that $\gamma(0)=x$ and $\gamma(1)=y$. Consider $\Theta:\mathcal{X}\to \mathbb{R}^{n\times n}$ given by $\Theta(x)=\textnormal{diag}\{\theta_1(x_1),\ldots,\theta_n(x_n)\}$ where $\{\theta_i(\cdot)\}_{i=1}^n$ is a collection of nonnegative continuously differentiable functions. Suppose there exists $c>0$ such that $\theta_i(x_i)>c$ for all $x\in\mathcal{X}$ for all $i=1,\ldots,n$. Let $|\cdot|$ be a norm on $\mathbb{R}^n$ and consider the metric $d(x,y)$ given by \eqref{eq:64}. Then \begin{align} \label{eq:65} d(x,y)= \left| \begin{bmatrix} \int_{x_1}^{y_1}\theta_1(\sigma_1)d\sigma_1&\ldots& \int_{x_n}^{y_n}\theta_n(\sigma_n)d\sigma_n \end{bmatrix}^T\right|.
\end{align} \end{prop} \begin{proof} Given any curve $\gamma\in\Gamma(x,y)$, define a new curve $\tilde{\gamma}(s):[0,1]\to\mathbb{R}^n$ as follows. Let $\tilde{\gamma} (0)=x$ and let ${\tilde{\gamma}_i}'(s)=\theta_i(\gamma_i(s)){\gamma'_i}(s)$ for all $i$ so that \begin{align} \label{eq:68} \int_0^1|\Theta(\gamma(s)){\gamma'}(s)| ds=\int_0^1|\tilde{\gamma}'(s)| ds. \end{align} {That is, if we understand the left-hand side of \eqref{eq:68} to be the length of the curve $\gamma$ computed using the metric of \eqref{eq:64}, then this length is equal to the standard arc-length of $\tilde{\gamma}$ using the metric induced by the norm $|\cdot|$.} Observe that, for all $i$, \begin{align} \label{eq:67} \tilde{\gamma}_i(1)-x_i&=\int_0^1\theta_i(\gamma_i(s)){\gamma'_i}(s)ds=\int_{x_i}^{y_i}\theta_i(\sigma )d\sigma. \end{align} Then $z:=\tilde{\gamma}(1)$ is a point depending only on $x$ and $y$ and is independent of the particular curve $\gamma$. {That is, for every $x,y\in\mathcal{X}$, there exists a unique $z\in\mathbb{R}^n$ such that each curve $\gamma$ connecting $x$ to $y$ generates a curve $\tilde{\gamma}$ as defined above connecting $x$ to $z$. Observe that \begin{align} \label{eq:80} \inf_{\tilde{\gamma}\in\Gamma(x,z)}\int_{0}^1\left|\begin{bmatrix}{\tilde\gamma'_1}(s)&\ldots&{\tilde\gamma'_n}(s)\end{bmatrix}^T\right| ds&=|z-x| \end{align} {where the infimum is achieved for} $\tilde{\gamma}(s)=(1-s)x+sz${, and the equality follows since} $\tilde{\gamma}'(s)=z-x$. {Furthermore, this particular minimizing $\tilde{\gamma}$ is generated from the unique curve} $\gamma\in\Gamma(x,y)$ satisfying the decoupled system of differential equations $z_i-x_i=\theta_i(\gamma_i(s))\gamma_i'(s)$ for all $i$. Moreover, $\gamma_i(s)$ is monotonic for all $i$ and all $s\in[0,1]$ since $\gamma_i'(s)$ does not change sign, and thus $\gamma(s)$ is contained in the rectangle defined by the corners $x$ and $y$ so that $\gamma(s)\in\mathcal{X}$ for $s\in[0,1]$. 
It follows that $d(x,y)=|z-x|$, which is equivalent to \eqref{eq:65}. \end{proof} \textbf{Proof of Theorem \ref{thm:tvmain1}.} Let \begin{align} \label{eq:85} \Theta(x):=\textnormal{diag}\{\theta_1(x_1),\ldots,\theta_n(x_n)\} \end{align} and consider the $\ell_1$ norm $|\cdot|_1$ with induced matrix measure $\mu_1(\cdot)$. Define $\tilde{J}(x)$ as in \eqref{eq:77}. Recall ${J}(x)=\frac{\partial f}{\partial x}(x)$ is Metzler since the system is monotone. Then $\tilde{J}(x)$ is also Metzler since $\Theta(x)$ is diagonal and uniformly positive definite so that $\Theta(x)J(x)\Theta(x)^{-1}$ retains the sign structure of $J(x)$ and $\dot{\Theta}(x)\Theta(x)^{-1}$ is a diagonal matrix. Then \begin{align} \label{eq:75} \mu_1(\tilde{J}(x))&=\max_{j=1,\ldots,n}\left(\sum_{i=1}^n\tilde{J}_{ij}(x)\right)\\ \label{eq:75-2}&=\max_{j=1,\ldots,n}\Bigg(\Bigg(\dot{\theta}_j(x) +\sum_{i=1}^n\theta_{i}(x_i){J}_{ij}(x)\Bigg) \theta_j(x_j)^{-1}\Bigg) \end{align} where the first equality follows by \eqref{eq:10} since $\tilde{J}(x)$ is Metzler. It follows from \eqref{eq:111} and \eqref{eq:75-2} that $\mu_1(\tilde{J}(x))\leq 0$ for all $x\in\mathcal{X}$, {and it follows from \eqref{eq:114} and \eqref{eq:75-2} that $\mu_1(\tilde{J}(x^*))<0$ because $\dot{\theta}_j(x^*)=0$ and $\theta_j(x_j^*)^{-1}>0$ for all $j$.} Applying Corollary \ref{cor:nonexpeq} establishes that \eqref{eq:113} is a global Lyapunov function and \eqref{eq:113-2} is a local Lyapunov function. \qed \textbf{Proof of Theorem \ref{thm:tvmain2}.} Let $\theta_i(x_i):=1/\omega_i(x_i)$ and define \begin{align} \label{eq:87} \Omega(x)&:=\textnormal{diag}\{\omega_1(x_1),\ldots,\omega_n(x_n)\}\\ \Theta(x)&:=\textnormal{diag}\{\theta_1(x_1),\ldots,\theta_n(x_n)\}. \end{align} Since $0< \omega_i(x_i)\leq c$ for all $x\in\mathcal{X}$ for $i=1,\ldots,n$, we have $0<c^{-1}\leq \theta_i(x_i)$ for all $x\in\mathcal{X}$ for $i=1,\ldots,n$.
Observe that \begin{align} \label{eq:88} \dot{\theta}_i(x)\theta_i(x_i)^{-1}=\frac{\frac{d\theta_i}{d x_i}(x_i)}{\theta_i(x_i)}f_i(x)&=\frac{-\frac{d\omega_i}{d x_i}(x_i)}{\omega_i(x_i)}f_i(x)\\ \label{eq:88-2}&=-\dot{\omega}_i(x)\omega_i(x_i)^{-1} \end{align} for all $i=1,\ldots,n$ where the second equality is established from the identity \begin{align} \label{eq:6} \frac{d\omega_i}{d x_i}(x_i)=\frac{d}{d x_i}\left(\frac{1}{\theta_i(x_i)}\right)=\frac{-(d\theta_i/dx_i)(x_i)}{(\theta_i(x_i))^2}. \end{align} As in the proof of Theorem \ref{thm:tvmain1}, define $\tilde{J}(x)$ according to \eqref{eq:77}, and now consider the $\ell_\infty$ norm $|\cdot|_\infty$ with induced matrix measure $\mu_\infty(\cdot)$. As before, $\tilde{J}(x)$ is Metzler so that \begin{align} \label{eq:91} & \mu_{\infty}(\tilde{J}(x))=\max_{i=1,\ldots,n}\sum_{j=1}^n\tilde{J}_{ij}(x)\\ &=\max_{i=1,\ldots,n}\Bigg(\dot{\theta}_i(x)\theta_i(x_i)^{-1}+\theta_i(x_i)\sum_{j=1}^nJ_{ij}(x)\theta_j(x_j)^{-1}\Bigg)\\ \label{eq:91-3}&=\max_{i=1,\ldots,n}\Bigg(\omega_i(x_i)^{-1}\Bigg(-\dot{\omega}_i(x)+\sum_{j=1}^nJ_{ij}(x)\omega_j(x_j)\Bigg)\Bigg) \end{align} where the first equality follows from \eqref{eq:10-2} and the last equality follows from \eqref{eq:88}--\eqref{eq:88-2}. It follows from \eqref{eq:45} and \eqref{eq:91-3} that $\mu_\infty(\tilde{J}(x))\leq 0$ for all $x\in\mathcal{X}$, and {it follows from \eqref{eq:48} and \eqref{eq:91-3} that $\mu_\infty(\tilde{J}(x^*))<0$ because $\dot{\omega}_i(x^*)=0$ and $\omega_i(x_i^*)^{-1}>0$ for all $i$.} Applying Corollary \ref{cor:nonexpeq} establishes that \eqref{eq:47} is a global Lyapunov function and \eqref{eq:47-2} is a local Lyapunov function. \qed \textbf{Proofs of Corollaries \ref{cor:1} and \ref{cor:2}.} {We need only show that \eqref{eq:17-2} (respectively, \eqref{eq:20}) is a global Lyapunov function when the stated conditions hold, as the remainder of the claims follow immediately from Theorems \ref{thm:tvmain1} and \ref{thm:tvmain2}.
Below, we show that \eqref{eq:17-2} is globally proper in this case. A symmetric argument establishes that \eqref{eq:20} is globally proper.} To this end, suppose {$v^TJ(x)\leq -c\mathbf{1}^T$ for all $x\in\mathcal{X}$ for some $c>0$.} Then there also exists some $\tilde{c}>0$ such that $v^TJ(x)\leq -\tilde{c} v^T$ for all $x\in \mathcal{X}$. In particular, we take $0<\tilde{c} \leq c/|v|_\infty$. With $\Theta:=\text{diag}\{v\}$, it follows that $\mu_1(\Theta J(x)\Theta^{-1})\leq -\tilde{c}$ for all $x\in\mathcal{X}$. From a slight modification of \cite[Theorem 33, pp. 34--35]{Desoer:2008bh}, we then have that $|\Theta f(x)|_1\geq \tilde{c} |\Theta x|_1$, which implies that $|\Theta f(x)|_1$ is globally proper. For completeness, we repeat this argument here. Taking $x^*=0$ without loss of generality, we have $\Theta f(x)=\int_0^1\Theta J(s x) x \ ds$, so \begin{align} \label{eq:106} |\Theta f(x)|_1&\geq -\mu_1\left(\int_0^1\Theta J(s x )\Theta^{-1} \ ds\right) \left|\Theta x\right|_1\\ &\geq -\left(\int_0^1\mu_1(\Theta J(s x )\Theta^{-1})ds \right)|\Theta x|_1\\ &\geq \tilde{c} |\Theta x|_1 \end{align} where the first inequality follows from the fact that $|Ax|\geq -\mu(A)|x|$ for all $A$ and all $x$ where $|\cdot|$ and $\mu(\cdot)$ are any vector norm and corresponding matrix measure, and the second inequality follows from the fact that $\mu(A+B)\leq \mu(A)+\mu(B)$ for all $A$, $B$ (see \cite{Desoer:2008bh} for a proof of both these facts). Since \eqref{eq:17-2} is equivalent to $|\Theta f(x)|_1$, the proof is complete. \qed \section{An algorithm for computing separable Lyapunov functions} \label{sec:an-algor-comp} In this section, we briefly discuss an efficient and scalable algorithm for computing $\theta(x)$ and $\omega(x)$ in Theorems \ref{thm:tvmain1} and \ref{thm:tvmain2} when each element of $f(x)$ is a polynomial or rational function of $x$. Thus, the proposed approach provides an efficient means for computing sum-separable and max-separable Lyapunov functions of monotone systems.
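The two matrix-measure facts invoked in the preceding proof, namely that $|Ax|\geq -\mu(A)|x|$ for any matrix $A$, and that $\mu_1$ of a Metzler matrix reduces to its largest column sum, are easy to spot-check numerically. A minimal sketch (the helper \texttt{mu\_1} and the test matrices are illustrative and not part of any toolbox):

```python
import numpy as np

def mu_1(A):
    # l1 matrix measure: max over columns j of A[j,j] + sum_{i != j} |A[i,j]|
    n = A.shape[0]
    return max(A[j, j] + sum(abs(A[i, j]) for i in range(n) if i != j)
               for j in range(n))

# Fact 1: |Ax|_1 >= -mu_1(A) |x|_1 for arbitrary A and x.
rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.normal(size=(3, 3))
    x = rng.normal(size=3)
    assert np.linalg.norm(A @ x, 1) >= -mu_1(A) * np.linalg.norm(x, 1) - 1e-9

# Fact 2: for a Metzler matrix (nonnegative off-diagonal entries),
# mu_1 reduces to the plain maximum column sum.
M = np.array([[-3.0, 0.5, 0.2],
              [1.0, -2.0, 0.3],
              [0.4, 0.1, -1.0]])
assert abs(mu_1(M) - M.sum(axis=0).max()) < 1e-12
```

Here \texttt{mu\_1} implements the standard formula $\mu_1(A)=\max_j\big(a_{jj}+\sum_{i\neq j}|a_{ij}|\big)$.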
Our proposed algorithm relies on \emph{sum-of-squares (SOS) programming} \cite{Parrilo:2000fk, Parrilo:2001uq}. A polynomial $s(x)$ is a \emph{sum-of-squares polynomial} if $s(x)=\sum_{i=1}^r(g_i(x))^2$ for some polynomials $g_i(x)$ for $i=1,\ldots,r$. A \emph{SOS feasibility problem} consists of finding a collection of polynomials $p_i(x)$ for $i=1,\ldots,N$ and a collection of SOS polynomials $s_i(x)$ for $i=1,\ldots,M$ such that \begin{align} \label{eq:96} a_{0,j}(x)+\sum_{i=1}^Np_i(x)a_{i,j}(x)+\sum_{i=1}^Ms_i(x)b_{i,j}(x)&\\ \label{eq:96-2}\qquad\text{are SOS polynomials for $j=1,\ldots,J$}& \end{align} for fixed polynomials $a_{0,j}(x)$, $a_{i,j}(x)$, and $b_{i,j}(x)$ for all $i,j$. The polynomials $\{p_i(x)\}_{i=1}^N$ and $\{s_i(x)\}_{i=1}^M$ satisfying \eqref{eq:96}--\eqref{eq:96-2} form a convex set, and by fixing the degree of these polynomials, we arrive at a finite dimensional convex optimization problem. There exist efficient computational toolboxes that convert SOS feasibility problems into standard semidefinite programs (SDPs) \cite{sostools}. SOS programming has led to efficient computational algorithms for a number of controls-related applications such as searching for polynomial Lyapunov functions \cite{Papachristodoulou:2002jk}, underapproximating regions of attraction \cite{Topcu:2007fk}, and safety verification \cite{Coogan:2015dq}. Here, we present a SOS feasibility problem that is sufficient for finding $\theta(x)$ or $\omega(x)$ satisfying Theorem \ref{thm:tvmain1} or \ref{thm:tvmain2}. To that end, recall that in the hypotheses of these theorems, we assume $\mathcal{X}$ is rectangular. In this section, for simplicity, we assume $\mathcal{X}$ is a closed set with nonzero measure so that $\mathcal{X}=\mathbf{cl}\{x: a_i< x_i< b_i \text{ for all $i$}\}$ for appropriately defined $a_i$ and $b_i$ where possibly $a_i=-\infty$ and/or $b_i=\infty$ and where $\mathbf{cl}$ denotes closure.
Equivalently, $\mathcal{X}=\{x:d(x)\geq 0\}$ where $d(x)=\begin{bmatrix}d_1(x_1)&\ldots&d_n(x_n)\end{bmatrix}^T$ and \begin{align} \label{eq:92} d_i(x_i)= \begin{cases} x_i-a_i&\text{if $a_i\neq -\infty$ and $b_i=\infty$}\\ (x_i-a_i)(b_i-x_i)&\text{if $a_i\neq -\infty$ and $b_i\neq \infty$}\\ b_i-x_i&\text{if $a_i= -\infty$ and $b_i\neq \infty$}\\ 0&\text{if $a_i= -\infty$ and $b_i= \infty$}. \end{cases} \end{align} If $\sigma(x)=\begin{bmatrix}\sigma_{1}(x) & \ldots & \sigma_n(x)\end{bmatrix}^T$ where each $\sigma_j(x)$, $j=1,\ldots,n$, is a SOS polynomial, we call $\sigma(x)$ a \emph{SOS $n$-vector}. \begin{prop} Let \eqref{eq:1} be a monotone system with equilibrium $x^*$ and rectangular domain $\mathcal{X}=\{x:d(x)\geq 0\}$ where $d(x)=\begin{bmatrix}d_1(x_1)&\ldots&d_n(x_n)\end{bmatrix}^T$. Suppose each $f_i(x)$ is polynomial. Then the following is a SOS feasibility problem, and a feasible solution provides $\theta(x)$ satisfying the conditions of Theorem \ref{thm:tvmain1}: { \begin{alignat}{2} \nonumber &\textnormal{For some fixed $\epsilon>0$,}\\ \nonumber &\textnormal{Find:}\\ \nonumber &\quad\textnormal{Polynomials $\theta_i(x_i)$, for $i=1,\ldots,n$} \\ \nonumber &\quad\textnormal{SOS polynomials $s_i(x_i)$ for $i=1,\ldots,n$}\\ \nonumber &\quad\textnormal{SOS $n$-vectors $\sigma^{i}(x)$ for $i=1,\ldots,n$}\\ \nonumber &\textnormal{Such that:}\\ \nonumber &\quad (\theta_i(x_i)-\epsilon)-s_i(x_i)d_i(x_i) \quad &&\hspace*{-.5in}\textnormal{is a SOS polynomial}\\ \label{eq:97-3}&&&\hspace*{-.5in}\textnormal{for $i=1,\ldots,n$}\\ \nonumber &\quad -(\theta(x)^TJ(x)+\dot{\theta}(x)^T)_i-\sigma^i(x)^Td(x) \\ \nonumber & &&\hspace*{-.5in}\textnormal{is a SOS polynomial}\\ \label{eq:97-4}& &&\hspace*{-.5in}\textnormal{for $i=1,\ldots,n$}\\ \label{eq:97-2}&\quad -\theta(x^*)^TJ(x^*)-\epsilon\mathbf{1}^T\geq 0 \end{alignat} } where $(\theta(x)^TJ(x)+\dot{\theta}(x)^T)_i$ denotes the $i$-th entry of $\theta(x)^TJ(x)+\dot{\theta}(x)^T$.
\end{prop} \begin{proof} Note that $(\theta(x)^TJ(x)+\dot{\theta}(x)^T)_i$ is a polynomial in $x$ for which the decision variables of the SOS feasibility problem appear linearly. In addition, \eqref{eq:97-2} is a linear constraint on the coefficients of $\theta(x)$. Thus, \eqref{eq:97-3}--\eqref{eq:97-2} is a well-defined SOS feasibility problem. Next, we claim that \eqref{eq:97-3} is sufficient for $\theta(x)\geq \epsilon \mathbf{1}$ for all $x\in\mathcal{X}$. To prove the claim, suppose \eqref{eq:97-3} is satisfied so that $(\theta_i(x_i)-\epsilon)-s_i(x_i)d_i(x_i)$ is a SOS polynomial and consider $x\in \mathcal{X}$ so that $d(x)\geq 0$. Since $s_i(x_i)$ is a SOS polynomial for all $i$, we also have $s_i(x_i)d_i(x_i)\geq 0$ so that $\theta_i(x_i)\geq \epsilon + s_i(x_i)d_i(x_i)\geq \epsilon$. A similar argument shows that if \eqref{eq:97-4} holds for all $i$, then condition (1) of Theorem \ref{thm:tvmain1} holds, that is, \eqref{eq:111} holds for all $x\in \mathcal{X}$. Indeed, suppose $-(\theta(x)^TJ(x)+\dot{\theta}(x)^T)_i-\sigma^i(x)^Td(x)$ is a SOS polynomial and consider $x\in\mathcal{X}$ so that $d(x)\geq 0$. Since $\sigma^i(x)$ is a SOS $n$-vector, $\sigma^i(x)^Td(x)\geq 0$ and thus $-(\theta(x)^TJ(x)+\dot{\theta}(x)^T)_i\geq 0$. Finally, \eqref{eq:97-2} implies \eqref{eq:114}. \end{proof} The technique employed in \eqref{eq:97-3} and \eqref{eq:97-4} to ensure that the conditions of Theorem \ref{thm:tvmain1} hold whenever $d(x)\geq 0$ is similar to the $\mathcal{S}$-procedure used to express constraints on quadratic forms as linear matrix inequalities \cite[p. 23]{Boyd:1994uq} and is common in applications of SOS programming to systems and control theory. \begin{remark} A symmetric proposition holds establishing a SOS feasibility program sufficient for computing $\omega(x)$ that satisfies the conditions of Theorem \ref{thm:tvmain2}. We omit an explicit form for this SOS program due to space constraints.
\end{remark} \addtocounter{example}{-2} \begin{example}[Cont.] We consider the system from Example \ref{ex:statedep} and seek to compute a sum-separable Lyapunov function using the SOS program of Section \ref{sec:an-algor-comp}. We let $\epsilon=0.01$ and search for $\theta_1(x_1)$ and $\theta_2(x_2)$ that are polynomials of degree at most two. We consider SOS polynomials that are 0-th order, that is, all SOS polynomials are considered to be positive constants, which proves sufficient for this example. The SOS program requires 0.26 seconds of computation time and returns \begin{align} \label{eq:98} \theta_1(x_1)&=1.7429\\ \theta_2(x_2)&=x_2^2 + 1.3793x_2 + 1.9503 \end{align} where parameters have been scaled so that the leading coefficient of $\theta_2(x_2)$ is $1$ since scaling $\theta(x)$ by a positive constant does not affect the validity of the resulting SOS program or the conditions of Theorem \ref{thm:tvmain1}. Then, \eqref{eq:113} and \eqref{eq:113-2} give the following sum-separable Lyapunov functions: \begin{align} \label{eq:44} V(x)&=1.7429x_1+\frac{1}{3}x_2^3+\frac{1.3793}{2}x_2^2+1.9503 x_2\\ V(x)&=1.7429 \left|{-x_1}+x_2^2\right| + x_2^3 + 1.3793x_2^2 + 1.9503 x_2. \end{align} $\blacksquare$ \end{example} \addtocounter{example}{2} \section{Applications} \label{sec:examples} In this section, we present several applications of our main results. First, we establish a technical result that will be useful for constructing Lyapunov functions as the limit of a sequence of contraction metrics. \begin{prop} \label{prop:limit} Let $x^*\in \mathcal{X}$ be an equilibrium for \eqref{eq:1}. Suppose there exists a sequence of global Lyapunov functions $V^i:\mathcal{X}\to\mathbb{R}_{\geq 0}$ for \eqref{eq:1} that converges locally uniformly to $V(x):=\lim_{i\to\infty}V^i(x)$. If $V(x)$ is positive definite and globally proper (see Definition \ref{def:lyap}) then $V(x)$ is also a global Lyapunov function for \eqref{eq:1}.
\end{prop} \begin{proof} Note that $x^*$ is globally asymptotically stable since there exists a global Lyapunov function, and consider some $x_0\in \mathcal{X}$. Asymptotic stability of $x^*$ implies there exists a bounded set $\Omega$ for which $\phi(t,x_0)\in\Omega\subseteq \mathcal{X}$ for all $t\geq 0$. For each $i$, $V^i(\phi(t,x_0))$ is nonincreasing in $t$ and $\lim_{t\to\infty}V^i(\phi(t,x_0))=0$. Local uniform convergence establishes that $V(x)$ is continuous, $V(\phi(t,x_0))$ is nonincreasing in $t$, and $\lim_{t\to\infty}V(\phi(t,x_0))=0$, and thus condition (3) of Definition \ref{def:lyap} holds. Under the additional hypotheses of the proposition, we have that $V(x)$ is therefore a global Lyapunov function. \end{proof} Note that a sequence $V^i(x)$ arising from a sequence of weighted contraction metrics, \emph{i.e.}, $V^i(x)=|P_i(x-x^*)|$ or $V^i(x)=|P_if(x)|$ for $P_i$ converging to some nonsingular $P$, satisfies the conditions of Proposition \ref{prop:limit}. The following example is inspired by \cite[Example 3]{Dirr:2015rt}. \begin{example}[Comparison system] Consider the system \begin{align} \label{eq:23} \dot{x}_1&=-x_1+x_1x_2\\ \label{eq:23-2}\dot{x}_2&=-2x_2-x_2^2+\gamma(x_1)^2 \end{align} evolving on $\mathcal{X}=\mathbb{R}_{\geq 0}^2$ where $\gamma:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ is strictly increasing and satisfies $\gamma(0)=0$, $\bar{\gamma}:=\lim_{\sigma\to\infty}\gamma(\sigma)<1$, and $ \gamma'(\sigma)\leq \frac{1}{(1+\sigma)^{2}}$. Consider the change of coordinates $(\eta_1,\eta_2)=(\log(1+x_1),x_2)$ so that \begin{align} \label{eq:27} \dot{\eta}_1&=\frac{1}{1+x_1}(-x_1+x_1x_2) \end{align} where we substitute $(x_1,x_2)=(e^{\eta_1}-1,\eta_2)$. Then \begin{align} \label{eq:28} \dot{\eta}_1\leq -\beta(e^{\eta_1}-1)+\eta_2 \end{align} where $\beta(\sigma)={\sigma}/{(1+\sigma)}$.
Introduce the comparison system \begin{align} \label{eq:29} \dot{\xi}_1&=-\beta(e^{\xi_1}-1)+\xi_2\\ \label{eq:29-2} \dot{\xi}_2&=-2\xi_2-\xi_2^2+\gamma(e^{\xi_1}-1)^2 \end{align} evolving on $\mathbb{R}_{\geq 0}^2$. The comparison principle (see, \emph{e.g.}, \cite{Dirr:2015rt}) ensures that asymptotic stability of the origin for the comparison system \eqref{eq:29}--\eqref{eq:29-2} implies asymptotic stability of the origin of the $(\eta_1,\eta_2)$ system, which in turn establishes asymptotic stability of the origin for \eqref{eq:23}--\eqref{eq:23-2}. The Jacobian of \eqref{eq:29}--\eqref{eq:29-2} is given by \begin{align} \label{eq:30} J(\xi)= \begin{pmatrix} -e^{\xi_1}\beta'(e^{\xi_1}-1)&1\\ 2e^{\xi_1}\gamma(e^{\xi_1}-1)\gamma'(e^{\xi_1}-1)&-2-2\xi_2 \end{pmatrix} \end{align} where $\beta'(\sigma)=\frac{1}{(1+\sigma)^2}$. Let $v=(2\bar{\gamma}+\epsilon,1)^T$ where $\epsilon$ is chosen small enough so that $c_1:=(2\bar{\gamma}+\epsilon-2)<0$. It follows that \begin{align} \label{eq:14} v^TJ(\xi)\leq (-\epsilon e^{-\xi_1},c_1)< 0\quad \forall \xi. \end{align} Applying Corollary \ref{cor:1}, the origins of \eqref{eq:23}--\eqref{eq:23-2} and \eqref{eq:29}--\eqref{eq:29-2} are globally asymptotically stable. Furthermore, we have the following state and flow sum-separable Lyapunov functions for the comparison system \eqref{eq:29}--\eqref{eq:29-2}: \begin{align} \label{eq:50} V_1(\xi)&=(2\bar{\gamma}+\epsilon)\xi_1+\xi_2{\text{, and}} \\ V_2(\xi)&=(2\bar{\gamma}+\epsilon)|\dot{\xi}_1|+|\dot{\xi}_2|. \end{align} Above, we understand $\dot{\xi}_1$ and $\dot{\xi}_2$ to be shorthand for the equalities expressed in \eqref{eq:29}--\eqref{eq:29-2}.
\end{example} \begin{example}[Multiagent system] \label{ex:multiagent} Consider the following system evolving on $\mathcal{X}=\mathbb{R}^3$: \begin{align} \label{eq:31} \dot{x}_1&=-\alpha_1(x_1)+\rho_1(x_3-x_1)\\ \dot{x}_2&=\rho_2(x_1-x_2)+\rho_3(x_3-x_2)\\ \label{eq:31-3}\dot{x}_3&=\rho_4(x_2-x_3) \end{align} where we assume $\alpha_1:\mathbb{R}\to\mathbb{R}$ is strictly increasing and satisfies $\alpha_1(0)=0$ and $\alpha_1'(\sigma)\geq \underline{c}_0$ for some $\underline{c}_0>0$ for all $\sigma$, and each $\rho_i:\mathbb{R}\to\mathbb{R}$ is strictly increasing and satisfies $\rho_i(0)=0$. Furthermore, for $i=1,3$, $\rho'_i(\sigma)\leq \overline{c}_i$ for some $\overline{c}_i>0$ for all $\sigma$, and for $i=2,4$, $\rho'_i(\sigma)\geq \underline{c}_i$ for some $\underline{c}_i>0$ for all $\sigma$. For example, $x_1$, $x_2$, and $x_3$ may be the position of three vehicles, for which the dynamics \eqref{eq:31}--\eqref{eq:31-3} are a rendezvous protocol whereby agent 1 moves towards agent 3 at a rate dependent on the distance $x_3-x_1$ as determined by $\rho_1$, \emph{etc.} Additionally, agent 1 navigates towards the origin according to $-\alpha_1(x_1)$. Computing the Jacobian, we obtain \begin{align} \label{eq:32} \nonumber& J(x)=\\ &\begin{pmatrix} -\alpha_1'(x_1)-\rho_1'(z_{31})&0&\rho_1'(z_{31})\\ \rho_2'(z_{12})&-\rho_2'(z_{12})-\rho_3'(z_{32})&\rho_3'(z_{32})\\ 0&\rho'_4(z_{23})&-\rho'_4(z_{23}) \end{pmatrix} \end{align} where $z_{ij}:= x_i-x_j$. Let $w=(1,1+\epsilon_1,1+\epsilon_1+\epsilon_2)^T$ where $\epsilon_1>0$ and $\epsilon_2>0$ are chosen to satisfy \begin{align} \label{eq:33} \underline{c}_0&> (\epsilon_1+\epsilon_2)\overline{c}_1\quad \text{and}\quad \epsilon_1\underline{c}_2>\epsilon_2\overline{c}_3. \end{align} We then have $ J(x)w\leq c\mathbf{1}$ for all $x$ for $c=\max\{(\epsilon_1+\epsilon_2) \overline{c}_1-\underline{c}_0 ,\epsilon_2\overline{c}_3-\epsilon_1\underline{c}_2,-\epsilon_2\underline{c}_4\}<0$.
Thus, the origin of \eqref{eq:31}--\eqref{eq:31-3} is globally asymptotically stable by Corollary \ref{cor:2}. Furthermore, \begin{align} \label{eq:37} V_1(x)&=\max\{|x_1|,(1+\epsilon_1)^{-1}|x_2|,(1+\epsilon_1+\epsilon_2) ^{-1}|x_3|\},\\ \label{eq:37-2} V_2(x)&=\max\{|\dot{x}_1|,(1+\epsilon_1) ^{-1}|\dot{x}_2|,(1+\epsilon_1+\epsilon_2) ^{-1}|\dot{x}_3|\} \end{align} are state and flow max-separable Lyapunov functions where we interpret $\dot{x}_i$ as shorthand for the equalities expressed in \eqref{eq:31}--\eqref{eq:31-3}. Since we may take $\epsilon_1$ and $\epsilon_2$ arbitrarily small satisfying \eqref{eq:33}, using Proposition \ref{prop:limit} we also have the following choices of Lyapunov functions: \begin{align} \label{eq:49} V_3(x)&=\max\{|x_1|,|x_2|,|x_3|\},\\ \label{eq:49-2} V_4(x)&=\max\{|\dot{x}_1|,|\dot{x}_2|,|\dot{x}_3|\} . \end{align} The flow max-separable Lyapunov functions \eqref{eq:37-2} and \eqref{eq:49-2} are particularly useful for multiagent vehicular networks where it is often easier to measure each agent's velocity rather than absolute position. \end{example} In Example \ref{ex:multiagent}, choosing $w=\mathbf{1}$, we have $J(x)w\leq 0$ for all $x$; however, this is not enough to establish asymptotic stability using Corollary \ref{cor:2}. Informally, choosing $w$ as in the example distributes the extra negativity of $-\alpha_1'(x_1)$ among the rows of $J(x)$. Nonetheless, Proposition \ref{prop:limit} implies that choosing $w=\mathbf{1}$ indeed leads to a valid Lyapunov function. The above example generalizes to systems with many agents interacting via arbitrary directed graphs, as does the principle of distributing extra negativity along diagonal entries of the Jacobian as discussed in Section \ref{sec:disc-comp-exist}.
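The condition $J(x)w\leq c\mathbf{1}<0$ in Example \ref{ex:multiagent} can also be checked numerically for concrete nonlinearities. A minimal sketch, assuming the hypothetical choices $\alpha_1(\sigma)=2\sigma$, $\rho_1=\rho_3=\tanh$, and $\rho_2(\sigma)=\rho_4(\sigma)=\sigma$, so that $\underline{c}_0=2$ and $\overline{c}_1=\overline{c}_3=\underline{c}_2=\underline{c}_4=1$:

```python
import numpy as np

# Hypothetical instance of the multiagent example: alpha_1(s) = 2s,
# rho_1 = rho_3 = tanh, rho_2(s) = rho_4(s) = s. These choices are
# illustrative only; any functions meeting the stated slope bounds work.
eps1, eps2 = 0.2, 0.1   # satisfy 2 > (eps1 + eps2)*1 and eps1*1 > eps2*1
w = np.array([1.0, 1.0 + eps1, 1.0 + eps1 + eps2])
# c = max over the three weighted row-sum bounds; here c = -0.1 < 0.
c = max((eps1 + eps2) * 1 - 2, eps2 * 1 - eps1 * 1, -eps2 * 1)

def jacobian(x):
    sech2 = lambda s: 1.0 / np.cosh(s) ** 2   # derivative of tanh
    d_rho1 = sech2(x[2] - x[0])               # rho_1'(z_31)
    d_rho3 = sech2(x[2] - x[1])               # rho_3'(z_32)
    d_rho2 = d_rho4 = 1.0                     # identity couplings
    d_alpha = 2.0
    return np.array([
        [-d_alpha - d_rho1, 0.0, d_rho1],
        [d_rho2, -d_rho2 - d_rho3, d_rho3],
        [0.0, d_rho4, -d_rho4],
    ])

# Spot-check J(x) w <= c * 1 at many random states.
rng = np.random.default_rng(1)
for _ in range(200):
    x = rng.normal(scale=3.0, size=3)
    assert np.all(jacobian(x) @ w <= c + 1e-12)
```

With $\epsilon_1=0.2$ and $\epsilon_2=0.1$, both inequalities in \eqref{eq:33} hold and the bound evaluates to $c=-0.1$; the third weighted row sum attains this bound exactly.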
\begin{example}[Traffic flow] \label{ex:traffic} A model of traffic flow along a freeway with no onramps is obtained by spatially partitioning the freeway into $n$ segments such that traffic flows from segment $i$ to $i+1$, $x_i\in[0,\bar{x}_i]$ is the density of vehicles occupying link $i$, and $\bar{x}_i$ is the capacity of link $i$. A fraction $\beta_i\in(0,1]$ of the flow out of link $i$ enters link $i+1$. The remaining $1-\beta_i$ fraction is assumed to exit the network via, \emph{e.g.}, unmodeled offramps. Associated with each link is a continuously differentiable \emph{demand} function $D_i:[0,\bar{x}_i]\to\mathbb{R}_{\geq 0}$ that is strictly increasing and satisfies $D_i(0)=0$, and a continuously differentiable \emph{supply} function $S_i:[0,\bar{x}_i]\to\mathbb{R}_{\geq 0}$ that is strictly decreasing and satisfies $S_i(\bar{x}_i)=0$. Flow from segment to segment is restricted by upstream demand and downstream supply, and the change in density of a link is governed by mass conservation: \begin{align} \label{eq:38} \dot{x}_1&= \min\{\delta_1,S_1(x_1)\}-\frac{1}{\beta_1}g_{1}(x_{1},x_{2})\\ \dot{x}_i&= g_{i-1}(x_{i-1},x_i)-\frac{1}{\beta_i}g_{i}(x_{i},x_{i+1}), \quad i=2,\ldots,n-1\\ \label{eq:38-3}\dot{x}_n&=g_{n-1}(x_{n-1},x_n)- D_n(x_n) \end{align} for some $\delta_1>0$ where, for $i=1,\ldots,n-1$, \begin{align} \label{eq:39} g_{i}(x_{i},x_{i+1})=\min\{\beta_iD_i(x_i),S_{i+1}(x_{i+1})\}. \end{align} Let $\delta_i:= \delta_1\prod_{j=1}^{i-1}\beta_j$ for $i=2,\ldots, n$. If $D^{-1}_i(\delta_i)<S^{-1}_i(\delta_i)$ for all $i$, then $\delta_1$ is said to be \emph{feasible} and $x^*_i:=D^{-1}_i(\delta_i)$ constitutes the unique equilibrium.
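Before analyzing the Jacobian, the model can be simulated to confirm convergence to this equilibrium. A minimal sketch with $n=3$ and hypothetical data (linear demand $D_i(x)=2x$ and linear supply $S_i(x)=3(1-x)$ on $[0,1]$, $\beta_i=0.8$, and feasible inflow $\delta_1=0.5$; none of these numbers come from the example itself):

```python
import numpy as np

# Hypothetical 3-link instance of the traffic model: D(x) = 2x, S(x) = 3(1-x),
# beta = 0.8, delta_1 = 0.5. Feasibility holds since D^{-1}(delta_i) = delta_i/2
# is below S^{-1}(delta_i) = 1 - delta_i/3 for every link.
beta = 0.8
delta1 = 0.5
D = lambda x: 2.0 * x
S = lambda x: 3.0 * (1.0 - x)

def vector_field(x):
    g0 = min(delta1, S(x[0]))            # flow into link 1
    g1 = min(beta * D(x[0]), S(x[1]))    # flow from link 1 into link 2
    g2 = min(beta * D(x[1]), S(x[2]))    # flow from link 2 into link 3
    return np.array([g0 - g1 / beta,
                     g1 - g2 / beta,
                     g2 - D(x[2])])

# Forward-Euler integration from an arbitrary initial density profile.
x = np.array([0.9, 0.1, 0.5])
dt = 0.01
for _ in range(10000):
    x = x + dt * vector_field(x)

delta = delta1 * np.array([1.0, beta, beta ** 2])  # delta_i = delta_1 * prod beta_j
x_star = delta / 2.0                               # x_i^* = D^{-1}(delta_i)
assert np.allclose(x, x_star, atol=1e-6)
```

The trajectory settles at the demand-limited equilibrium $x^*=(0.25,\,0.2,\,0.16)$ for this data.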
\begin{figure*} \begin{align} \label{eq:36} J(x)= \begin{pmatrix} \partial_1g_0-\frac{1}{\beta_1}\partial_1g_1&-\frac{1}{\beta_1}\partial_2g_1&0&0&\cdots&0\\ \partial_1 g_1&\partial_2g_1-\frac{1}{\beta_2}\partial_2g_2&-\frac{1}{\beta_2}\partial_3 g_2&0 &\cdots&0\\ 0&\partial_2g_2&\partial_3g_2-\frac{1}{\beta_3}\partial_3g_3&-\frac{1}{\beta_3}\partial_4g_3&&0\\ \vdots&&&&\ddots&\vdots\\ 0&0&\cdots&0&\partial_{n-1}g_{n-1}&\partial_{n}g_{n-1}-\partial_nD_n(x_n) \end{pmatrix} \end{align} \hrule \end{figure*} Let $\partial_i$ denote differentiation with respect to the $i$th component of $x$, that is, $\partial_ig(x):=\frac{\partial g}{\partial x_i}(x)$ for a function $g(x)$. The dynamics \eqref{eq:38}--\eqref{eq:38-3} define a system $\dot{x}=f(x)$ for which $f$ is continuous but only piecewise differentiable. Nonetheless, the results developed above apply for this case, and, in the sequel, we interpret statements involving derivatives to hold wherever the derivative exists. Notice that $\partial_{i}g_i(x_i,x_{i+1})\geq 0$ and $\partial_{i+1}g_i(x_i,x_{i+1})\leq 0$. Define $g_0(x_1):=\min\{\delta_1,S_1(x_1)\} $. The Jacobian, where it exists, is given by \eqref{eq:36}, which is seen to be Metzler. Let \begin{align} \label{eq:40} \tilde{v}=\begin{pmatrix}1,\beta_1^{-1},(\beta_1\beta_2)^{-1},\ldots,(\beta_1\beta_2\cdots\beta_{n-1})^{-1}\end{pmatrix}^T. \end{align} Then $\tilde{v}^TJ(x)\leq 0$ for all $x$. Moreover, there exists $\epsilon=(\epsilon_1,\epsilon_2,\ldots,\epsilon_{n-1},0)$ with $\epsilon_{i}>\epsilon_{i+1}$ for each $i$ such that $v:=\tilde{v}+\epsilon$ satisfies \begin{align} \label{eq:41} v^TJ(x)&\leq 0\quad \forall x\\ \label{eq:41-2} v^TJ(x^*)&<0. \end{align} Such a vector $\epsilon$ is constructed using a technique similar to that used in Example \ref{ex:multiagent}. 
In particular, the sum of the $n$th column of $\text{diag}(\tilde{v})J(x)$ is strictly negative because $-\partial_nD_n(x_n)<0$, and this excess negativity is used to construct $v$ such that \eqref{eq:41}--\eqref{eq:41-2} holds. A particular choice of $\epsilon$ such that \eqref{eq:41}--\eqref{eq:41-2} holds depends on bounds on the derivative of the demand functions $D_i$, but it is possible to choose $\epsilon$ arbitrarily small. Corollary~\ref{cor:1} establishes asymptotic stability, and Proposition \ref{prop:limit} gives the following sum-separable Lyapunov functions: \begin{align} \label{eq:42} V_1(x)&=\sum_{i=1}^n \left(x_i\prod_{j=1}^{i-1}\beta_j^{-1}\right),\\ \label{eq:42-2} V_2(x)&=\sum_{i=1}^n \left(|\dot{x}_i|\prod_{j=1}^{i-1}\beta_j^{-1}\right), \end{align} where we interpret $\dot{x}_i$ according to \eqref{eq:38}--\eqref{eq:38-3}. In traffic networks, it is often easier to measure traffic flow rather than traffic density. Thus, \eqref{eq:42-2} is a practical Lyapunov function indicating that the (weighted) total absolute net flow throughout the network decreases over time. \end{example} In \cite{coogan2015compartmental}, a result similar to that of Example \ref{ex:traffic} is derived for possibly infeasible input flow and traffic flow network topologies where merging junctions with multiple incoming links are allowed. The proof considers a flow sum-separable Lyapunov function similar to \eqref{eq:42-2} and appeals to LaSalle's invariance principle rather than Proposition \ref{prop:limit}. \section{Discussion} \label{sec:disc-comp-exist} {For nonlinear monotone systems with an asymptotically stable equilibrium, it is shown in \cite{Dirr:2015rt} that max-separable Lyapunov functions can always be constructed in compact invariant subsets of the domain of attraction. Such a Lyapunov function is constructed by considering a single dominating trajectory that is (componentwise) greater than all points in this subset.
If there exists a trajectory of the system that converges to the equilibrium in forward time and diverges to infinity in all components in backward time, then this construction leads to a global Lyapunov function. It is also shown in \cite{Sootla:2016sp} that such max-separable Lyapunov functions can be obtained from the leading eigenfunction of the linear, but infinite dimensional, Koopman operator associated with the monotone system. In contrast, Theorem \ref{thm:tvmain2} and Corollary \ref{cor:2} above only provide sufficient conditions for constructing max-separable Lyapunov functions. However, these results offer an alternative construction to those suggested in \cite{Dirr:2015rt, Sootla:2016sp}. In addition, \cite{Dirr:2015rt} provides counterexamples showing that an asymptotically stable monotone system need not admit sum-separable or max-separable Lyapunov functions globally. An important remaining open question is whether monotone systems whose domain satisfies a compact invariance condition necessarily admit sum-separable Lyapunov functions. } \subsection{Relationship to ISS small-gain conditions} \label{sec:relat-iss-small} In this section, we briefly discuss the relationship between the main results of this paper and small-gain conditions for interconnected input-to-state stable (ISS) systems. While there appear to be a number of interesting connections between the results presented here and the extensive literature on networks of ISS systems, see, \emph{e.g.}, \cite{Ito:2012ux,Ito:2013ez} for some recent results, a complete characterization of this relationship is outside the scope of this paper and will be the subject of future research. Nonetheless, we highlight how Theorems \ref{thm:tvmain1} and \ref{thm:tvmain2} provide a Jacobian-based perspective to ISS system analysis.
Consider $N$ interconnected systems with dynamics $\dot{x}_i=f_i(x_1,\ldots,x_N)$ for $x_i\in\mathbb{R}^{n_i}$ and suppose each system satisfies an input-to-state stability (ISS) condition \cite{Sontag:1989fk} whereby there exist ISS Lyapunov functions $V_i$ \cite{Sontag:1995qf} satisfying \begin{align} \label{eq:57} \frac{\partial V_i}{\partial x_i}(x_i)f_i(x)\leq -\alpha_i(V_i(x_i))+\sum_{j\neq i}\gamma_{ij}(V_j(x_j)) \end{align} where each $\alpha_i$ and $\gamma_{ij}$ is a class $\mathcal{K}_\infty$ function\footnote{A continuous function $\alpha:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ is of class $\mathcal{K}_\infty$ if it is strictly increasing, $\alpha(0)=0$, and $\lim_{r\to\infty}\alpha(r)=\infty$.}. We obtain a monotone comparison system \begin{align} \label{eq:51} \dot{\xi}=g(\xi), \qquad g_i(\xi)=-\alpha_i(\xi_i)+\sum_{j\neq i}\gamma_{ij}(\xi_j) \end{align} evolving on $\mathbb{R}^N_{\geq 0}$ for which asymptotic stability of the origin implies asymptotic stability of the original system \cite{Ruffer:2010tw}. It is shown in \cite[Section 4.3]{Dashkovskiy:2011qv} that if $\gamma_{ij}(s)=c_{ij}h_j(s)$ and $\alpha_i(s)=a_ih_i(s)$ for some collection of constants $c_{ij}\geq 0$, $a_i>0$ and $\mathcal{K}_\infty$ functions $h_i$ for all $i,j=1,\ldots,N$, and there exists a vector $v$ such that $v^T(-A+C)<0$ where $A=\text{diag}(a_1,\ldots,a_N)$ and $[C]_{ij}=c_{ij}$, then $V(x)=v^T\begin{bmatrix}V_1(x_1)&\ldots& V_N(x_N)\end{bmatrix}^T$ is an ISS Lyapunov function for the composite system. Indeed, in this case, and considering the comparison system \eqref{eq:51}, we see that \begin{align} \label{eq:86} \frac{\partial g}{\partial \xi}(\xi)= (-A+C)\text{diag}(h'_1(\xi_1),\cdots,h'_N(\xi_N)) \end{align} where $h'_i(\xi_i)\geq 0$ so that, if $v^T(-A+C)<0$, then also $v^T\frac{\partial g}{\partial \xi}(\xi)\leq 0$ for all $\xi$.
If also $h'_i(0)>0$ for all $i$, then $v^T\frac{\partial g}{\partial \xi}(0)< 0$ so that Corollary \ref{cor:1} provides the sum-separable Lyapunov function $v^T\xi$, giving a contraction theoretic interpretation of this result. The case of $N=2$ was first investigated in \cite{Jiang:1996dw} where it is assumed without loss of generality that $a_1=a_2=1$ and it is shown that if $c_{12}c_{21}<1$, then $v_1V_1(x_1)+v_2V_2(x_2)$ is a Lyapunov function for the original system for any $v=\begin{bmatrix}v_1& v_2\end{bmatrix}^T>0$ satisfying $v_1c_{12}<v_2$ and $v_2c_{21}<v_1$. These conditions are equivalent to $v^T(-I+C)<0$. Alternatively, in \cite{Ruffer:2010tw, Dashkovskiy:2010zh}, it is shown that if there exists a function $\rho:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}^N$ with each component $\rho_i$ belonging to class $\mathcal{K}_\infty$ such that $g(\rho(r))< 0$ for all $r>0$, then the origin is asymptotically stable and $V(\xi):=\max_i\{\rho_i^{-1}(\xi_i)\}$ is a Lyapunov function. If the conditions of Corollary \ref{cor:2} hold for the comparison system for some $w$, we may choose $\rho(r)=rw$. Indeed, we have \begin{align} \label{eq:55} g(rw)=\int_0^1\frac{\partial g}{\partial \xi}(\sigma rw) rw\ d \sigma <0 \quad \forall r>0. \end{align} For this case, $V(\xi)=\max_i\{\rho_i^{-1}(\xi_i)\}=\max_i\{\xi_i/w_i\}$, recovering \eqref{eq:19}. \subsection{Generalized contraction and compartmental systems} We now discuss the relationship between the results presented here and additional results for contractive systems in the literature. First, we comment on the relationship between Corollary \ref{cor:nonexpeq} of Theorem \ref{thm:2} as well as Proposition \ref{prop:limit} and a generalization of contraction theory recently developed in \cite{Sontag:2014eu, Margaliot:2015wd} where exponential contraction between any two trajectories is required only after an arbitrarily small amount of time, an arbitrarily small overshoot, or both.
In \cite[Corollary 1]{Margaliot:2015wd}, it is shown that if a system is contractive with respect to a sequence of norms convergent to some norm, then the system is generalized contracting with respect to that norm, a result analogous to Proposition \ref{prop:limit}. In \cite{Margaliot:2015wd}, conditions on the sign structure of the Jacobian are obtained that ensure the existence of such a sequence of weighted $\ell_1$ or $\ell_\infty$ norms. These conditions are a generalization of the technique in Example \ref{ex:multiagent} and Example \ref{ex:traffic} in Section \ref{sec:examples} where small $\epsilon$ is used to distribute excess negativity. Furthermore, it is shown in \cite{Margaliot:2012hc, Margaliot:2014qv} that a ribosome flow model for gene translation is monotone and nonexpansive with respect to a weighted $\ell_1$ norm, and additionally is contracting on a subset of its domain. % Entrainment of solutions is proved by first showing that all trajectories reach the region of exponential contraction. Theorem \ref{thm:2} provides a different approach for studying entrainment by observing that the distance to the periodic trajectory strictly decreases in each period due to a neighborhood of contraction along the periodic trajectory.% Finally, we note that Metzler matrices with nonpositive column sums have also been called \emph{compartmental} \cite{Jacquez:1993uq}. It has been shown that if the Jacobian matrix is compartmental for all $x$, then $V(x)=|f(x)|$ is a nonincreasing function along trajectories of \eqref{eq:1} \cite{Jacquez:1993uq, Maeda:1978fk}; Proposition \ref{thm:1} recovers this observation when considering \eqref{eq:71} with $c=0$, $\Theta(x)\equiv I$, and $|\wc|$ taken to be the $\ell_1$ norm. \section{Conclusions} \label{sec:conclusions} We have investigated monotone systems that are also contracting with respect to a weighted $\ell_1$ norm or $\ell_\infty$ norm. 
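The compartmental observation can be checked on a hypothetical linear system $f(x)=Ax$ with $A$ Metzler and with nonpositive column sums (values chosen for illustration); $V(x)=|f(x)|_1$ is then nonincreasing along trajectories:

```python
import numpy as np

# A hypothetical compartmental matrix: Metzler (nonnegative
# off-diagonal entries) with nonpositive column sums.
A = np.array([[-2.0,  1.0],
              [ 1.0, -1.5]])
assert np.all(A - np.diag(np.diag(A)) >= 0)   # Metzler
assert np.all(A.sum(axis=0) <= 0)             # column sums <= 0

f = lambda x: A @ x

# Euler-integrate and check that V(x) = |f(x)|_1 never increases.
x = np.array([1.0, -2.0])
dt = 1e-3
V_prev = np.abs(f(x)).sum()
for _ in range(5000):
    x = x + dt * f(x)
    V_now = np.abs(f(x)).sum()
    assert V_now <= V_prev + 1e-12
    V_prev = V_now
print("V(x) = |f(x)|_1 is nonincreasing along the trajectory")
```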
In the case of the $\ell_1$ (respectively, $\ell_\infty$) norm, we provided a condition on the weighted column (respectively, row) sums of the Jacobian matrix for ensuring contraction. When the norm is state-dependent, these conditions include an additive term that is the time derivative of the weights. This construction leads to a pair of sum-separable (respectively, max-separable) Lyapunov functions. The first Lyapunov function of the pair is separable along the state of the system while the second is \emph{agent-separable}, that is, each constituent function depends on $f_i(x)$ in addition to $x_i$. When the weighted contractive norm is independent of state, the components of this Lyapunov function only depend on $f_i(x)$ and we say it is \emph{flow-separable}. Such flow-separable Lyapunov functions are especially relevant in applications where it is easier to measure the derivative of the system's state than to measure the state directly. In addition, we provided a computational algorithm to search for separable Lyapunov functions using our main results and sum-of-squares programming, and we demonstrated our results through several examples. We further highlighted some connections to stability results for interconnected input-to-state stable systems. These connections appear to be a promising direction for future work. %
\section{Introduction} Seeding a process consists of providing a signal level larger than the noise level present in the system, from which the process would otherwise grow uncontrollably. % With seeding, the process grows from a known and controllable initial signal. % The final state of the system may be the result of non-linear evolution, but when the process is seeded, the final state is uniquely related to the seed signal both in amplitude and phase. % In the case of self-modulation (SM) of a relativistic (v$_b\cong$ c), charged particle bunch in plasma~\cite{bib:kumar}, the process is the transformation of the long, continuous bunch into a train of shorter micro-bunches through the effect of the periodic transverse wakefields. % The important final parameters of the wakefields are their amplitude and their phase relative to the seed point. % This is key for external injection of a bunch of particles to be accelerated by these wakefields. % This bunch must be short with respect to the wakefields period. % It must be injected in a narrow phase range of accelerating and focusing wakefields, whose extent is on the order of one quarter of the wakefields period (plasma wakefields linear theory~\cite{bib:keinings}). Reproducibility of the wakefields amplitude and phase means that the electron bunch can be injected with a fixed time delay (or phase) with respect to the seed signal. % Reproducibility is necessary for loading of the wakefields and in general to be able to produce a quality accelerated bunch, eventually using feedback systems, as in conventional accelerators. % The period of (linear) wakefields in an initially neutral plasma of uniform electron density n$_{e0}$ is given by the plasma wavelength $\lambda_{pe}=2\pi v_b/\omega_{pe}\cong 2\pi c/\omega_{pe}$. 
% Here $\omega_{pe}=\left(\frac{n_{e0}e^2}{\epsilon_0m_e}\right)^{1/2}$ is the angular plasma electron frequency.\footnote{Here $e$ and $m_e$ are the electron charge and mass, and $\epsilon_0$ is the vacuum permittivity.} The bunch is considered long when it is longer than $\lambda_{pe}$: $\sigma_z\gg\lambda_{pe}$, where $\sigma_z$ is the root-mean-square (RMS) length of a Gaussian bunch. % Micro-bunches are separated by, and are shorter than, $\lambda_{pe}$. % For the bunch to be stable against the current filamentation instability~\cite{bib:CFI} and to effectively drive wakefields, its transverse size must also be smaller than the cold plasma collisionless skin depth $c$/$\omega_{pe}$. % In the following, size comparisons refer to $\lambda_{pe}$ or $c$/$\omega_{pe}$. \section{Self-Modulation Seeding} Methods to provide seed wakefields include: short, intense laser pulse or particle bunch preceding the bunch to self-modulate; sharp rise of the long bunch density; and relativistic ionization front traveling with and within the long bunch. Each method has pros and cons. % In experiments, the seeding method is chosen with respect to practical considerations, e.g., availability of a laser pulse or particle bunch, long bunch parameters, plasma creation process, etc. % Producing a sharp rising edge in the long bunch is practical with low energy particles~\cite{bib:mask}. % Experiments have shown that a 60\,MeV electron bunch drives multiple wakefields periods along itself~\cite{bib:fang}. % These can serve as SM seed wakefields. % However, for bunches of high-energy particles (e.g., 400\,GeV protons), this seeding method requires parameters and means (relative energy spread at the \%-level, magnetic dog-leg or chicane) that are proportional to the particles' inertia ($\gamma$m, $\gamma$ their relativistic factor) and quickly become prohibitive in size and cost. 
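The plasma scales defined above can be evaluated numerically for the density n$_{e0}=7\times10^{14}$\,cm$^{-3}$ used in the simulations later in the text; a minimal sketch:

```python
import math

# Physical constants (SI).
e    = 1.602176634e-19   # elementary charge [C]
m_e  = 9.1093837015e-31  # electron mass [kg]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]
c    = 2.99792458e8      # speed of light [m/s]

n_e0 = 7e14 * 1e6        # plasma electron density, cm^-3 -> m^-3

omega_pe  = math.sqrt(n_e0 * e**2 / (eps0 * m_e))  # plasma frequency [rad/s]
lambda_pe = 2 * math.pi * c / omega_pe             # plasma wavelength [m]
E_WB      = m_e * c * omega_pe / e                 # wave-breaking field [V/m]

print(f"omega_pe  ~ {omega_pe:.2e} rad/s")
print(f"lambda_pe ~ {lambda_pe*1e6:.0f} um")   # ~1260 um
print(f"E_WB      ~ {E_WB/1e9:.2f} GV/m")      # ~2.55 GV/m
```

These values are consistent with the $\sim$1270\,$\mathrm{\mu}$m wakefields period and E$_{WB}$ = 2.55\,GV/m quoted in the simulation section below.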
\subsection{Relativistic Ionization Front Seeding} The relativistic ionization front method requires a short laser pulse co-propagating within the proton bunch, and a low ionization potential gas or vapor to keep the laser pulse intensity relatively low. % In this case, it is the fast creation of plasma and the onset of beam plasma interaction within the bunch that drives the seed wakefields. This method was very successfully used in the AWAKE experiment~\cite{bib:muggli} with a rubidium vapor of density (1-10)$\times$10$^{14}$\,cm$^{-3}$ and with a laser pulse 120\,fs-long ($\lambda_0 = $ 780\,nm) of intensity $I_0\sim$10\,TW/cm$^2$, sufficient to strip each rubidium atom of its first electron~\cite{bib:karl,bib:marlene}. % We note that with these parameters the normalized vector potential of the laser pulse, $a_0\cong8.6\times10^{-10}\lambda_0[\mathrm{\mu m}]\left(I_0[\mathrm{W/cm^2}]\right)^{1/2}$, reaches only $\cong$ 0.01. % This places laser wakefields excitation by the laser pulse in the linear regime. % That means that the longitudinal wakefields amplitude the laser pulse drives is on the order of $\cong\frac{\pi}{4}\frac{a_0^2}{2}E_{WB}$~\cite{bib:esarey}, i.e., 100\,kV/m, where E$_{WB}=m_ec\omega_{pe}/e$ is the cold plasma wave breaking field (n$_{e0}=7\times10^{14}\,\mathrm{cm}^{-3}$). % This amplitude is much smaller than that of the wakefields at the ionization front, which is on the order of MV/m (see below). % Thus SM seeding occurs because of the sudden onset of beam-plasma interaction at the ionization front, not because of the wakefields driven by the laser pulse. % In the relativistic ionization front seeding method, the seed wakefields amplitude can be defined as the wakefields driven by the bunch at the location of the ionizing laser pulse or ionization front. 
% In the case of a long bunch, with low density n$_b\ll n_{e0}$, the seed wakefields amplitude can be calculated from linear plasma wakefields theory~\cite{bib:keinings}, considering that the bunch density does not change over a few wakefields periods behind the ionization front. % In this case, the seed amplitude is $W_{\perp}=2\frac{en_{b\xi_s}}{\epsilon_0k_{pe}^2}\frac{dR}{dr}|_{r=\sigma_r}$, where $R(r)$ reflects the transverse dependency of the wakefields on the bunch transverse profile. The bunch density at seed location $\xi_s$ is $n_{b\xi_s}$. % When $k_{pe}\sigma_r\cong1$, with $k_{pe}=\omega_{pe}/c$, $\frac{dR}{dr}|_{r=\sigma_r}\cong k_{pe}$. For parameters of AWAKE, the seed wakefields amplitude reaches a few MV/m~\cite{bib:marlene} and exceeds that driven by the laser pulse when placed near the bunch peak density location. The longitudinal wakefields amplitude is $W_{\parallel}=\frac{en_{b\xi_s}}{\epsilon_0k_{pe}}R(r)$ and reaches values similar to those of $W_{\perp}$. % This seeding method has a number of favorable characteristics. % First, it only requires a laser pulse short when compared to the wakefields period and intense enough to ionize a gas or vapor. % This is easily satisfied with lasers available today when operating at low plasma density and with a low ionization potential vapor, e.g., alkali metals~\cite{bib:oz}. We note here that ionization occurs on a time scale on the order of the laser field period, much shorter than the pulse duration in most cases. Second, the seed wakefields are driven by the bunch to self-modulate. % That means that there is natural alignment between the seed wakefields and the bunch. % This is important so as not to seed the non-axisymmetric hose instability (HI) mode~\cite{bib:witthum,bib:mathias}. % Third, the seed wakefields naturally have the same transverse structure as that driven by the bunch to self-modulate. % This may not be the case when seed wakefields are generated by a driver preceding the long bunch. 
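For scale, the following sketch estimates $W_{\parallel}$ for proton-bunch parameters like those used in the simulations below (48.1\,nC, $\sigma_z=6$\,cm, $\sigma_r=200\,\mathrm{\mu m}$, n$_{e0}=7\times10^{14}$\,cm$^{-3}$); the on-axis geometric factor $R(0)\sim0.5$ is an assumption made here for illustration only:

```python
import math

# Order-of-magnitude estimate of W_par = (e n_b / (eps0 k_pe)) R.
e, m_e = 1.602176634e-19, 9.1093837015e-31
eps0, c = 8.8541878128e-12, 2.99792458e8

n_e0 = 7e20                                     # m^-3
omega_pe = math.sqrt(n_e0 * e**2 / (eps0 * m_e))
k_pe = omega_pe / c                             # ~5e3 m^-1

# Peak density of a tri-Gaussian bunch: 48.1 nC, sigma_z = 6 cm,
# sigma_r = 200 um.
N = 48.1e-9 / e
sigma_r, sigma_z = 200e-6, 0.06
n_b = N / ((2 * math.pi)**1.5 * sigma_r**2 * sigma_z)

R0 = 0.5                                        # assumed on-axis factor
W_par = e * n_b / (eps0 * k_pe) * R0            # V/m
print(f"W_par ~ {W_par/1e6:.0f} MV/m")          # ~14 MV/m
```

This lands at the same scale as the 13.5\,MV/m seed amplitude quoted for the cut-bunch simulations below.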
% Fourth, a replica of the ionizing laser pulse, thus (perfectly) synchronized with it, can be used to drive an RF gun that provides a synchronized electron bunch for external injection. % The laser pulse serves two purposes that can be decoupled: plasma creation by itself and, in conjunction with the drive bunch, wakefields seeding. Ionization depletes the energy of the laser pulse and thus limits the plasma density/length product that can be created by this ionization method. % This directly limits the maximum energy gain by witness bunch particles externally injected into the wakefields. % To avoid staging and its intricacies, acceleration in a single, preformed and very long plasma may be desirable. Also, as the wakefields phase velocity varies during growth of the SM process~\cite{bib:pukhov}, external injection of electrons to be accelerated at a location downstream from the SM saturation point may be desirable. % This naturally calls for the use of two plasma sections: one for self-modulation of the proton bunch, and one for acceleration of the electron bunch, sections separated by an injection region. % The ionization front seeding method leaves un-modulated the fraction of the long bunch ahead of the seed point, since it travels in neutral vapor or gas. % However, propagation of the bunch with its front un-modulated in a following preformed plasma means that the self-modulation instability (SMI) may grow in the bunch front. % This growth would generate wakefields that would add to the ones driven by the self-modulated back part of the bunch. % Since the SMI in the bunch front is not seeded, the phase of those wakefields is not controlled and may be different from event to event. % By causality, these wakefields would also perturb the self-modulated bunch train itself and probably prevent acceleration and quality preservation of the injected electron bunch. 
% Self-modulation instability of the whole bunch in a preformed plasma, in this case with the ionizing laser pulse placed ahead of the proton bunch, was observed experimentally~\cite{bib:spencer}. % A seeding method that leads to self-modulation of the entire bunch may therefore be required. % \subsection{Electron Bunch Seeding} We consider here self-modulation seeding using an electron bunch. % This is natural since an electron accelerator is necessary for external injection. % Also, driving large amplitude wakefields over a few meters of plasma with a laser pulse may be challenging. % For effective seeding, the electron bunch may have to drive wakefields with the same transverse structure as that driven by the long bunch. % Therefore, the electron bunch may have to have the same transverse size as the un-modulated, long bunch. % This places constraints on the bunch charge since the transverse size of the wakefields and thus of the drive bunch may be quite large at low plasma density, i.e., 200\,$\mathrm{\mu}$m at n$_{e0}=7\times$10$^{14}$\,cm$^{-3}$. % Misalignment between the electron and the long bunches may act as seed for the HI of the long bunch. % This could be somewhat mitigated by making the electron bunch radius larger than that of the long bunch, though global misalignment would persist. % Also, since the long bunch satisfies k$_{pe}\sigma_{rp^+}\cong$1, theory predicts that the electron bunch with k$_{pe}\sigma_{r}>$ 1 is subject to the current filamentation instability (CFI)~\cite{bib:CFI}. % Experiments with a low energy electron bunch showed that CFI only occurred for k$_{pe}\sigma_{r}>$ 2.2, i.e., when at least two filaments can be formed out of the original bunch~\cite{bib:CFIexp}. % The length of the seed bunch must be $<\lambda_{pe}/2$ and its charge sufficient to drive the required seeding amplitude value. % The long bunch has an initial density n$_{b0}$ smaller than that of the plasma (over-dense plasma). 
% Seed wakefields, for example when seeding with an ionization front, are thus in the linear plasma wakefields theory regime (n$_{b0}\ll n_{e0}$~\cite{bib:keinings}). % When using an electron bunch to seed the wakefields, electron bunch parameters, such as density and length, can be chosen independently from those of the proton bunch. % For example, they can be chosen so the bunch drives non-linear wakefields, possibly in the blow-out regime. % The development of seeded self-modulation (SSM) from seed wakefields with an amplitude equal to or larger than that driven by the self-modulated bunch may be quite different from its development from linear seed wakefields. % In particular, the saturation length may be shorter and issues related to wakefields phase velocity during the growth of the SSM mitigated~\cite{bib:pukhov}. Also, the linear transverse seed wakefields are focusing (or null) over most of the bunch length. % These develop into focusing and defocusing wakefields during the growth. % Transverse seed wakefields driven by the electron bunch, i.e., behind the electron bunch placed ahead of the proton bunch, are already periodically focusing and defocusing. % This affects the development of the SM process. % A major issue may also be that, depending on the electron bunch parameters, the seed wakefields can be present over a distance much longer than the growth or saturation length of the SM process. % Though of lower amplitude than that of the fully developed self-modulated wakefields, these can also interfere with acceleration over long distance. % For example, their phase remains constant over the plasma length whereas that of the self-modulated bunch drifts backwards over the growth region~\cite{bib:pukhov}. 
% One solution may be to choose the seed bunch energy so that it is depleted by the seed wakefields (its own) over a length on the order of the SM-process saturation length or on the order of the length necessary for the self-modulating bunch to drive wakefields of amplitude comparable to that of the seed ones. % Transverse evolution of the seed bunch and possible matching to the plasma focusing force are other topics of research. % \section{Seeding Methods Comparison} Using particle-in-cell simulations with OSIRIS~\cite{bib:osiris} in 2D cylindrical coordinates, we briefly compare seeding of the SM process of a proton bunch with a sharp cut, representing seeding by an ionization front, and by a preceding particle bunch. % We choose the cut bunch case as the reference. % The proton bunch has a Gaussian longitudinal profile with a root-mean-square (RMS) length of $\sigma_b=$ 6\,cm (200\,ps duration) and a bi-Gaussian transverse profile with an RMS radius of 200\,$\mathrm{\mu}$m. % Its charge is 48.1\,nC, and it has a relativistic factor of $\gamma_{p^+} = 426$. % Its normalized emittance is 3.6\,mm-mrad. % For this case we place the cut in the center of the proton bunch (the point where its density is highest). % The plasma density is 7$\times$10$^{14}$\,cm$^{-3}$. % This yields a seed amplitude, which we define as the longitudinal wakefields amplitude immediately behind the cut, of 13.5\,MV/m (or 5$\times$10$^{-3}\,$E$_{WB}$, with E$_{WB}$=2.55\,GV/m). We note here that we quote longitudinal wakefields amplitudes because they can always be evaluated on the system axis. % Transverse wakefields amplitudes are zero on axis, and depend on the radius where they are evaluated (usually at $r=\sigma_r$). % However, since the bunch and the micro-bunches radii change along the SM process, quoting such amplitudes becomes ambiguous. 
% The maximum longitudinal electric field along the bunch as a function of propagation in the plasma, together with the position along the bunch where that amplitude is reached, are plotted on Fig.~\ref{fig:Ezmaxofz} as the blue lines. \begin{figure}[ht] \centering \includegraphics[scale=0.5]{maxfield_and_pos3.png} \caption{a) Maximum longitudinal electric field E$_{z,max}$ as a function of position along the plasma for seeding: bunch cut placed in the center of the proton bunch (blue line); bunch cut at three RMS lengths ahead of the bunch center (orange line, labelled as ``no seed"); short particle bunch placed ahead of the proton bunch with $Q$ = 184\,pC (yellow line), $Q$ = 921\,pC (purple line), and $Q$ = 3.69\,nC (green line). b) Position of E$_{z,max}$, with respect to the bunch center ($z_\mathrm{c}$), in terms of the bunch RMS length ($\sigma_\mathrm{b}$). Same color code for the lines. Since all these simulations were performed with the same plasma density and beam sizes, the same numerical parameters are used: $\Delta_r= 0.01\,c/\omega_{pe}$, $\Delta_z=0.015\,c/\omega_{pe}$, $\Delta_t= 0.0074/\omega_{pe}$ and $4 \times 4$ particles per cell for the plasma electrons. % The simulation box size is $7.97\,c/\omega_{pe}$ in radius and $1204.89\,c/\omega_{pe}$ in length for all cases except the cut bunch case, in which it is $746.84\,c/\omega_{pe}$. % } \label{fig:Ezmaxofz} \end{figure} The maximum wakefields amplitude grows from its seed value, saturates around $\mathrm{z}=450$\,cm at $\sim$820\,MV/m (0.32\,E$_{WB}$) and decays after that. The position of this maximum amplitude shifts quickly from the bunch front to approximately two RMS lengths behind that point for the first meter and a half, after which it remains close to one RMS length behind the seeding position, until the maximum field starts decreasing. % In the decreasing portion, the maximum is found around two and a half RMS lengths from the seeding position. 
% We now consider a seed electron bunch that drives wakefields with a constant amplitude, which is equal to that of the cut bunch. % We note that, in this case, the sum of the seed bunch wakefields and the long bunch adiabatic response is equal to the wakefields of the cut bunch at the peak of the bunch current profile (the cut bunch seed point). Also, the seed bunch drives defocusing fields along the proton bunch ahead of the bunch center, a major difference from the ionization front seeding case. % It is clear that the two situations are not identical; however, we consider them sufficiently similar to draw some important conclusions. % The maximum electric field and its position along the bunch for this seeding case are also plotted on Fig.~\ref{fig:Ezmaxofz} as the yellow lines. % The seed bunch we consider has a negative charge, and we increase the mass of its particles by ten orders of magnitude compared to the mass of an electron. % This is a way to avoid transverse evolution and dephasing of the seed bunch along the simulation that would occur with a low energy electron bunch not matched to the plasma focusing force. % The RMS length is chosen to be 300\,$\mathrm{\mu}$m, short when compared to the wakefields period (1270\,$\mathrm{\mu}$m). % The bunch has a Gaussian longitudinal profile and bi-Gaussian transverse profile with a size of 200\,$\mathrm{\mu}$m, equal to that of the proton bunch. % The energy of the seed bunch particles is sufficient to drive a seed wakefields amplitude over the whole propagation distance (10\,m) without dephasing with respect to the wakefields. % The seed bunch is placed 631\,ps (or about three proton bunch RMS lengths) ahead of the proton bunch peak density point. In this case, the wakefields also grow and saturate. % However, they reach a higher peak value at saturation, $\sim$1200\,MV/m (0.47\,E$_{WB}$). Saturation is reached around $\sim$500\,cm. The fields also decrease after saturation. 
% The position of this maximum field follows the same trend as that with cut bunch seeding, but around one RMS length closer to the bunch center. % The evolution of the maximum field is not significantly different between the two cases, especially considering plasma lengths necessary for saturation of fields and distances useful for acceleration, i.e., after saturation. % We now consider the case of seeding with five times the charge of the previous case. % As a result of the charge increase, the seed wakefields amplitude is $\sim$65.2\,MV/m (0.025\,E$_{WB}$), with results plotted as the purple lines on Fig.~\ref{fig:Ezmaxofz}. % The maximum value is similar to that of the previous seeding case, $\sim$1100\,MV/m (0.43\,E$_{WB}$), and is reached a bit earlier, at z = 330\,cm. % However, the field decreases in the same manner as in the previous two cases, and reaches about the same amplitude at 10\,m. % We further use a seed bunch with a charge increased by twenty times compared to the first seed bunch charge, with results plotted on Fig. \ref{fig:Ezmaxofz} as the green lines. % In this case, the seed wakefields amplitude is 298\,MV/m (0.12\,E$_{WB}$). % Since the seed bunch charge density is approximately 17\% of the plasma density, seed wakefields are in the quasi-linear regime, giving an amplitude 19\% higher than that predicted by linear theory. % The maximum longitudinal field is reached even earlier than in the previous cases, at about 200\,cm, and approaches $\sim$1400 MV/m (0.5\,E$_{WB}$), also higher than before. % After the peak, the train of micro-bunches reaches a stable configuration with a maximum wakefields amplitude of $\sim$495\,MV/m (0.19\,E$_{WB}$), only about 200\,MV/m higher than the seed level. % The position of the maximum field remains at half a RMS length in front of the bunch center, starting at about 400\,cm. 
In the high charge seed bunch case, seed wakefields are already large and quickly reach the non-linear regime, as seen in Fig.~\ref{fig:WR20} for the longitudinal wakefields. A large fraction of the wakefields period ($>$50\%, the linear regime value) is defocusing for protons. % This leads to a faster increase and steeper decrease of the maximum wakefields amplitude. % For all cases, except the case with the maximum seed bunch charge, the system keeps evolving, expelling protons over the full plasma length, and producing wakefields of decreasing amplitude. In the case of the largest seed bunch charge (3.69\,nC), protons are expelled fast enough to only leave behind narrow micro-bunches that resonantly drive wakefields. % This produces a system that drives wakefields with constant amplitude and that evolves very slowly ($z>$300\,cm). % The proton bunch charge decreases continuously for the first 500\,cm of the plasma; thereafter it has a value around 10\% of the initial bunch charge. \begin{figure}[ht] \centering \includegraphics[scale=0.25]{WR21Z2.png} \caption{Line-out of the longitudinal wakefields close to the axis at 200\,cm into the plasma. % Wakefields at this point are non-sinusoidal and their amplitude reaches 0.37\,E$_{WB}$, i.e., they are non-linear (seed bunch charge Q=3.69\,nC).} \label{fig:WR20} \end{figure} For completeness, we also performed a simulation using the same conditions as with particle bunch seeding, but without a seed bunch. The proton bunch is cut at three RMS lengths ahead of the bunch center, leading to low seed wakefields of 150\,kV/m amplitude (6$\times$10$^{-5}$\,E$_{WB}$) when compared to any of the other cases of seeding used previously. % The SM evolves more slowly (orange lines on Fig.~\ref{fig:Ezmaxofz}). % The longitudinal fields reach a peak amplitude at around 700\,cm, after which they start decreasing as in the other cases. 
% Numerical studies are ongoing to have an in-depth understanding of the electron bunch seeding process and to determine how a plasma density step placed in the growth region of the SM process can maintain the longitudinal field at approximately its saturation values~\cite{bib:step} even with this seeding method. % \subsection{Electron Bunch Seeding and Acceleration in a Single Plasma} When conceiving of an accelerator based on wakefields driven by a self-modulated long bunch, one would naturally envisage separating the SM from the acceleration process. % The self-modulation would be seeded with an electron bunch for the reasons outlined above. % The electron bunch to be accelerated would be injected after the first short, self-modulation plasma, and on-axis into the long, acceleration plasma~\cite{bib:muggliRun2}. % Since the accelerated bunch must create blow-out and load the wakefields~\cite{bib:veronica}, its wakefields amplitude must be similar to that of the self-modulated proton bunch. % However, in practice, the injection section, between the two plasma sections, must be quite short, a difficult section to design, build and diagnose~\cite{bib:livio}. % One could then envisage simplifying the plasma and injection scheme by injecting all three bunches right at the entrance of a single plasma, as shown on Fig.~\ref{fig:schematic2}. % \begin{figure}[ht] \centering \includegraphics[scale=0.5]{schematic2} \caption{Schematic set-up for injection of the three bunches, short electron seed, long proton and short accelerated electron bunches in a single plasma (not to scale). % The seeding process leads to modulation of the entire proton bunch. % The relative timing of the three bunches is for illustration only. 
% Accelerated electrons go to high-energy physics experiments; protons are dumped.} \label{fig:schematic2} \end{figure} Because the accelerated bunch drives wakefields with amplitudes similar to those of the self-modulated proton bunch~\cite{bib:veronica}, it could remain weakly affected by the proton bunch wakefields, both in the (probable) density ramp at the plasma entrance and in the growing wakefields. % Moreover, its relative timing with respect to the seed electron bunch and proton bunch could be chosen to minimize these effects (if any). % The seed electron bunch would lose most of its energy and thus drive wakefields that would be overwhelmed by those of the self-modulated proton bunch. It would therefore disappear naturally after some distance, as low energy electrons, and not affect the wakefields or the proton bunch after that. % We are currently investigating all these possibilities in numerical simulations. % We are looking for optimum input parameters and ultimate output parameters in terms of charge, relative energy spread, and emittance of the accelerated electron bunch. % We will also investigate tolerances in terms of relative transverse alignment of the three bunches so as not to seed the hose instability of the proton or accelerated electron bunch, and reach good accelerated bunch parameters. % This scheme would allow for the use of a single, pre-ionized plasma that could be made as long as necessary for high-energy physics applications, for example by using a single, very long helicon plasma source~\cite{bib:helicon}. % \section{Summary} We described seeding methods for the self-modulation of a long charged particle bunch in plasma, as they apply in particular to the AWAKE experiment. % We briefly compared relativistic ionization front seeding, emulated in numerical simulations by a cut bunch, with electron bunch seeding in simple cases. 
% We showed that the evolution of the wakefields as well as their parameters can be similar in both seeding schemes. % We showed that the value and position of the peak of the maximum longitudinal electric field can be affected by varying the seed bunch charge and thus the seed wakefields amplitude. % Electron bunch seeding could allow for simplification of the electron injection scheme and allow for a single plasma source to be used. % The work presented here is the seed for more studies to develop a simple concept for accelerating electrons to high-energy for high-energy physics applications. % \section*{References}
\section{INTRODUCTION} \label{sec:introduction} \input{intro.tex} \section{VEHICLE DESIGN} \label{sec:vehicle-design} \input{design.tex} \section{PROPULSION MODEL} \label{sec:propulsion} \input{propulsion.tex} \section{OPTIMIZATION OF THE DESIGN PARAMETERS} \label{sec:design-parameters} \input{optimization.tex} \section{POSITION AND ATTITUDE CONTROL} \label{sec:posit-attit-contr} \input{control.tex} \section{SIMULATION RESULTS} \label{sec:prel-valid} \input{simulation.tex} \section{CONCLUSIONS AND FUTURE WORK} \label{sec:concl-future-work} \input{conclusion.tex} \section*{ACKNOWLEDGMENT} The authors are grateful to André Santos for the CAD modeling of the vehicle and the renderings presented in this paper. This work was supported by the FCT project~[UID/EEA/50009/2013]. \subsection{Problem statement} \label{sec:problem-statement} We will start by considering that each actuation signal is bounded between $-1$ and $1$, that is, \begin{equation} \label{eq:ahcube} -1 \leq u_i \leq 1 \quad \text{for} \quad i=1,\ldots,6 \end{equation} According to \eqref{eq:au}, this hypercube will map onto a 6-dimensional convex polyhedron\footnote{A convex polyhedron is an intersection of a finite number of half-spaces~\cite{jeter86}.} in the $(\bar{F}, \bar{M})$ space. Any other choice of bounds is possible by appropriately scaling constants $K_1$ and $K_2$. However, this assumes that the maximum propeller thrust is symmetric with respect to the direction of rotation. The remaining parameters are the angles $\{\phi_i\}$. We will base our analysis on the optimization of these angles with respect to various criteria. Our goal will be to find the configurations of angles $\{\phi_i\}$ that maximize the range of forces (and torques) over all directions. 
Geometrically, this corresponds to changing $\{\phi_i\}$ such that a ball of nonzero radius can fit inside the 3-dimensional convex polyhedron in the $\bar{F}$ space mapped by the actuation hypercube in~\eqref{eq:ahcube}, while keeping zero torque, $\bar{M}=0$. A similar reasoning applies to the torque space $\bar{M}$, while keeping $\bar{F}=0$. First, we will address the problem of computing the maximum force along a given direction specified as a unit vector $\hat{e}$, while maintaining a zero torque. From~\eqref{eq:au}, and assuming that $\mathbf{A}$ is full rank, we get \begin{equation} \bar{u} = \mathbf{A}^{-1} \left( \begin{array}{c} \bar{F} \\ \bar{M} \end{array} \right) =\mathbf{A}^{-1} \left( \begin{array}{c} F \hat{e} \\ 0 \end{array} \right) \end{equation} where $F>0$ is the force magnitude. For what follows, it will be convenient to express the $\mathbf{A}^{-1}$ matrix as blocks of three dimensional row vectors: \begin{equation} \label{eq:bici} \mathbf{A}^{-1} = \left[ \begin{array}{cc} b_1^T & c_1^T \\ \vdots & \vdots \\ b_6^T & c_6^T \\ \end{array} \right] \end{equation} where $b_i,c_i\in\mathbb{R}^3$. Then, the actuation of the $i$-th propeller is given by \begin{equation} u_i = F b_i^T \hat{e} \end{equation} Since $|u_i|\leq1$, we have $F|b_i^T\hat{e}|\leq1$, and thus $F$ has the upper bound \begin{equation} F \leq \frac{1}{|b_i^T\hat{e}|} \end{equation} Since this inequality has to be satisfied for all propellers $i=1,\ldots,6$, the maximum force $F^\mathrm{max}_{\hat{e}}$ is given by the lowest of these upper bounds \begin{equation} F^\mathrm{max}_{\hat{e}} = \min_i \frac{1}{|b_i^T\hat{e}|} \end{equation} This force is the maximum force along a given direction $\hat{e}$. The maximum force attainable in any direction can be obtained by minimizing this force over all possible directions. 
Since $|b_i^T\hat{e}|\leq\|b_i\|$, this minimum is given by \begin{equation} \label{eq:fmax} F^\mathrm{max} = \min_i \frac{1}{\|b_i\|} \end{equation} The same reasoning can be applied to the torques: consider a torque $\bar{M}=M\hat{e}$ along an arbitrary direction defined by $\hat{e}$, the corresponding actuation with $\bar{F}=0$ is $u_i=M c_i^T \hat{e}$, resulting in the following maximum torque along any direction: \begin{equation} \label{eq:mmax} M^\mathrm{max} = \min_i \frac{1}{\|c_i\|} \end{equation} Now, these maximum forces and torque values depend on the design parameters. In the following we will use an optimization approach to find the values of these parameters that maximize the maximum force and/or torque. We will consider the propellers to be equally distributed radially, that is, \begin{equation} \label{eq:thetas} \theta_i = (i-1) \frac{\pi}{3} \end{equation} and a fixed distance $d$, as well as the constants $K_1$ and $K_2$. All the remaining parameters will be the unknown variables: \begin{equation} \bar{\psi} = ( \phi_1, \ldots, \phi_6, w_1,\ldots, w_6 )^T \end{equation} with the feasibility domain defined by \begin{equation} \Psi = \left\{ \bar{\psi} \::\: |\phi_{1,\ldots,6}| \leq \phi_{max}, w_{1,\ldots,6}\in\{-1,1\} \right\} \end{equation} where $\phi_{max}$ is the maximum allowed deviation from the vertical. As a mechanical constraint to allow obstructionless air flow we considered $\phi_{max}=\pi/3$ in this work. 
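As a quick illustration of~\eqref{eq:fmax} and~\eqref{eq:mmax}, the sketch below computes $F^\mathrm{max}$ and $M^\mathrm{max}$ from a given actuation matrix. The matrix used here would be the vehicle's $\mathbf{A}$; any toy full-rank $6\times 6$ matrix works for the illustration.

```python
import numpy as np

def max_force_torque(A):
    """Given the 6x6 actuation matrix A mapping actuations u in [-1,1]^6
    to the stacked force/torque (F_bar, M_bar), return (F_max, M_max):
    the largest force (resp. torque) magnitude attainable in any
    direction while keeping the other wrench component zero."""
    Ainv = np.linalg.inv(A)
    b = Ainv[:, :3]   # rows b_i^T: actuation per unit force
    c = Ainv[:, 3:]   # rows c_i^T: actuation per unit torque
    # F_max = min_i 1/||b_i|| = 1 / max_i ||b_i||, and likewise for M_max
    F_max = 1.0 / np.linalg.norm(b, axis=1).max()
    M_max = 1.0 / np.linalg.norm(c, axis=1).max()
    return F_max, M_max
```

A sanity check: evaluating the direction-dependent bound $\min_i 1/|b_i^T\hat{e}|$ at the worst direction $\hat{e}=b_k/\|b_k\|$ (for the largest-norm row $b_k$) reproduces $F^\mathrm{max}$ exactly.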
We can restate the maximization of~\eqref{eq:fmax} and~\eqref{eq:mmax} using the epigraph form~\cite{boyd04}, thus getting rid of the maximization of a minimum: \begin{equation} \label{eq:minp} \begin{split} &\text{minimize}\: p \\ &\text{subject to:} \\ &\quad p \geq \|b_i\|^2, i=1,\ldots,6 \end{split} \end{equation} for the force and \begin{equation} \label{eq:minq} \begin{split} &\text{minimize}\: q \\ &\text{subject to:} \\ &\quad q \geq \|c_i\|^2, i=1,\ldots,6 \end{split} \end{equation} for the torque, where $\{b_i\}$ and $\{c_i\}$ depend non-linearly on the parameters $\bar{\psi}$ through the inverse of the actuation matrix $\mathbf{A}$ as~\eqref{eq:bici}. In this form, the optimization variables are augmented with the cost, that is, $\bar{\psi}_p=(p, \bar{\psi})$ for the problem~\eqref{eq:minp} and $\bar{\psi}_q=(q, \bar{\psi})$ for~\eqref{eq:minq}. It can be readily seen that these forms maximize~\eqref{eq:fmax} and~\eqref{eq:mmax}, where the resulting maximum forces and torques can be recovered using $F^{max}=1/\sqrt{p}$ and $M^{max}=1/\sqrt{q}$. \subsection{Multi-criteria optimization} \label{sec:multi-crit-optim} Since we intend to both maximize force and torque, we will make the trade-off between the two explicit by taking a multi-criteria optimization approach: \begin{equation} \label{eq:multi} \begin{split} &\text{minimize}\:(p, q) \\ &\text{subject to:} \\ &\quad p \geq \|b_i\|^2, i=1,\ldots,6 \\ &\quad q \geq \|c_i\|^2, i=1,\ldots,6 \end{split} \end{equation} In this problem, the optimization variables are augmented with both $p$ and $q$, $\bar{\psi}_{pq}=(p, q, \bar{\psi})$, and we have two cost functions, say $J_1(\bar{\psi}_{pq})=p$ and $J_2(\bar{\psi}_{pq})=q$. 
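The epigraph reformulation above can be exercised on a toy one-dimensional min-max problem (purely illustrative, not the actual design problem), using the same COBYLA solver employed later for the design optimization: minimizing $\max(x^2,(x-2)^2)$ becomes minimizing $t$ subject to $t \geq x^2$ and $t \geq (x-2)^2$, with optimum at $x=1$, $t=1$.

```python
import numpy as np
from scipy.optimize import minimize

# Epigraph form of min_x max(x^2, (x-2)^2): variables z = (t, x),
# minimize t subject to t >= x^2 and t >= (x-2)^2.
cons = [
    {"type": "ineq", "fun": lambda z: z[0] - z[1] ** 2},
    {"type": "ineq", "fun": lambda z: z[0] - (z[1] - 2.0) ** 2},
]
res = minimize(lambda z: z[0], x0=np.array([10.0, -1.0]),
               method="COBYLA", constraints=cons)
# res.x should approach (t, x) = (1, 1)
```

The same pattern underlies~\eqref{eq:minp} and~\eqref{eq:minq}, with the quadratics replaced by $\|b_i\|^2$ and $\|c_i\|^2$ as nonlinear functions of the design parameters.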
The solution of this multi-criteria optimization problem is the set $\mathcal{P}$ of non-dominated solutions, defined by: $\bar{\psi}_{pq}^0\in \mathcal{P}$ if and only if there is no $\bar{\psi}_{pq}\in\Psi$ such that $J_i(\bar{\psi})\leq J_i(\bar{\psi}^0)$ for all $i\in\{1,2\}$ and $J_i(\bar{\psi})< J_i(\bar{\psi}^0)$ for at least one $i\in\{1,2\}$. This set is also called the \emph{Pareto optimal set}~\cite{statnikov95}; its image in the \emph{objective space}, the set of all $(J_1(\bar{\psi}),J_2(\bar{\psi}))$ for $\bar{\psi}\in\Psi$, is the Pareto front. Apart from very simple cases, the Pareto optimal set is not trivial to obtain exactly. Thus, we will make a pointwise approximation using the Normal Boundary Intersection (NBI) method~\cite{das98}. This method is guaranteed to obtain Pareto optimal points if the objective space is convex, but it is still capable of obtaining points in ``sufficiently concave'' parts of the objective space~\cite{das98}. The first step of NBI is to obtain the minimizers of each cost function taken individually. These are also called \emph{shadow minima}. Let us start by considering the first minimization problem~\eqref{eq:minp}, where $\bar{\psi}^*_p$ is the minimizer with minimum cost $p^*$. Then, this minimizer both minimizes $p$ in~\eqref{eq:multi} and, together with $q^0=\max_i\,\|c_i\|^2$, is a non-dominated solution of~\eqref{eq:multi}, and thus belongs to its Pareto optimal set. This results from the fact that this $q^0$ is the smallest one that still satisfies the constraints of~\eqref{eq:multi}: any $(p^*,q)$ with $q>q^0$ is dominated by $(p^*,q^0)$. The same reasoning can be applied to~\eqref{eq:minq}, resulting in the minimizer $\bar{\psi}^*_q$, with minimum cost $q^*$, that together with $p^0=\max_i\,\|b_i\|^2$ is also a non-dominated solution of~\eqref{eq:multi}, thus also belonging to its Pareto optimal set.
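For reference, the non-dominance test above translates directly into code; this small helper (hypothetical, not part of the original pipeline) extracts the non-dominated subset of a finite set of cost pairs, with every objective to be minimized:

```python
def non_dominated(points):
    """Return the Pareto-optimal (non-dominated) subset of a list of
    cost tuples, where every objective is to be minimized."""
    pareto = []
    for p in points:
        # p is dominated if some q is no worse in all objectives
        # and strictly better in at least one
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and
            any(qi < pi for qi, pi in zip(q, p))
            for q in points
        )
        if not dominated:
            pareto.append(p)
    return pareto
```

For example, among the cost pairs $(1,5)$, $(2,2)$, $(5,1)$, $(3,3)$, $(4,4)$, the last two are dominated by $(2,2)$.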
On the $(p,q)$ space, these two non-dominated solutions correspond to two extremal points, $(p^*,q^0)$ and $(p^0,q^*)$, of the Pareto optimal set: no feasible solution exists to the left of $p^*$ or below $q^*$. \emph{The application of NBI to a two cost function problem amounts to scanning along a straight line joining $(p^*,q^0)$ and $(p^0,q^*)$, and then, for each point on this straight line, to determine the single non-dominated solution along the orthogonal direction.} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{nbi.pdf} \caption{Illustration of the NBI method: given a point defined by $\lambda\in[0;1]$, along the search line between $(p^*,q^0)$ and $(p^0,q^*)$, the optimization is done along the normal direction spanned by the vector $\bar{n}$, resulting in the $NBI(\lambda)$ intersection point.} \label{fig:nbi} \end{figure} Figure~\ref{fig:nbi} illustrates the NBI method. This straight line can be parametrized by a $\lambda$ value ranging between 0 and 1, resulting in $(1-\lambda)(p^*,q^0)+\lambda\,(p^0,q^*)$. An orthogonal direction to this straight line is spanned by the vector $\bar{n}=(q^0-q^*,p^0-p^*)$. Using again the epigraph form, but now along this vector, we obtain the following constrained optimization problem: \begin{equation} \label{eq:nbi} \begin{aligned} &\text{minimize}\: t \\ &\text{subject to:} \\ &\quad (q^0-q^*)\,t + (1-\lambda) p^* + \lambda p^0 &\geq \|b_i\|^2 \\ &\quad (p^0-p^*)\,t + \lambda q^* + (1-\lambda) q^0 &\geq \|c_i\|^2 \\ &\qquad \text{for } i=1,\ldots,6 \end{aligned} \end{equation} with the augmented vector $\bar{\psi}_t=(t, \bar{\psi})$ as optimization variable.
For a given $\lambda\in[0;1]$, the solution of this optimization problem yields a minimizer $\bar{\psi}^*_t(\lambda)$ from which the corresponding point in the $(p,q)$ space is \begin{equation} NBI(\lambda) = (p_\lambda, q_\lambda), \quad p_\lambda = \max_i \|b_i\|^2, \quad q_\lambda = \max_i \|c_i\|^2 \end{equation} where the norms are evaluated at the minimizer, and from which the maximum values of force and torque can be recovered as mentioned above. Since these problems cannot be solved in closed form, we will make use of numerical optimization methods. The following section presents the numerical results obtained for this problem. \subsection{Numerical results} \label{sec:numerical-results} The optimization problem in~\eqref{eq:nbi} shows some features that make it non-trivial to solve: it is strongly non-convex and mixes continuous and discrete variables. First, we will factor out the discrete part by iteratively trying each combination of $\{w_i\}$ values modulo rotations (also called \emph{orbits}\footnote{For instance, $[1,-1,1,1,1,1]$ and $[1,1,-1,1,1,1]$ belong to the same orbit, and thus it is redundant to try both of them.}): from its $2^6=64$ possible combinations, only 14 correspond to combinations where no pair can be made equal after rotating one of them. Second, we use a random multistart initialization together with a convex optimization algorithm: for each sample drawn uniformly from the $\{\phi_i\::\:|\phi_i| \leq \phi_{max}\}$ cube, we run the Constrained Optimization BY Linear Approximation (COBYLA) algorithm~\cite{powell94}, as implemented in the SciPy optimization package. To make the relation between force and actuation dimensionless, we divided the actuation matrix by $K_1$. This way, the only dependence of this matrix on physical coefficients is through the parameter $d$ and the ratio $K_2/K_1$.
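The reduction from $2^6=64$ sign combinations to 14 rotation orbits can be verified with a short enumeration (an independent check, not the paper's code): each orbit is represented by the lexicographically smallest rotation of its sign vector.

```python
from itertools import product

def rotation_orbits(n=6):
    """Group the 2^n sign vectors {-1,1}^n into orbits under cyclic
    rotation, returning one canonical representative per orbit."""
    reps = set()
    for w in product((-1, 1), repeat=n):
        # canonical form: lexicographically smallest cyclic rotation
        reps.add(min(w[i:] + w[:i] for i in range(n)))
    return sorted(reps)
```

For $n=6$ this yields exactly 14 representatives (the number of binary necklaces of length 6), matching the count used above.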
Using~\eqref{eq:blade}, this ratio can be expressed as \begin{equation} \frac{K_2}{K_1} = \frac{D}{2\pi} \frac{C_P}{C_T} \end{equation} For various small propellers (of about 4'') we found\footnote{We used the UIUC Propeller Data Site, Vol. 2, http://m-selig.ae.illinois.edu/props/propDB.html (retrieved Nov-2015).} this ratio to be approximately~0.01, and thus we used this value to obtain the numerical results presented in this section. For $d$ we used the value from the design presented in Section~\ref{sec:vehicle-design}, that is, $d=0.16$. The extremes of the NBI search line are the shadow minima, \textit{i.e.,} the minima of~\eqref{eq:minp} and~\eqref{eq:minq}. For~1000 random initializations, we obtained the following shadow minima: $(p^*,q^0)=(0.250, 15.01)$ and $(p^0,q^*)=(0.4167,9.728)$. \begin{figure*} \centering \subfloat[]{\includegraphics[height=0.3\linewidth]{nbi_pq.pdf}} \hfill \subfloat[]{\includegraphics[height=0.3\linewidth]{nbi_fml.pdf}} \hfill \subfloat[]{\includegraphics[height=0.3\linewidth]{nbi_FM.pdf}} \caption{Pointwise approximation to the Pareto optimal set using the NBI method: (a)~obtained points in the $(p,q)$ space, (b)~$F^{max}$ and $M^{max}$ as a function of $\lambda$, and (c)~in the $(F^{max},M^{max})$ space.
The dimensions for $F^{max}$ and $M^{max}$ have no physical meaning because of the division of the actuation matrix by~$K_1$, as explained in the text, and thus are to be understood in relative terms only.} \label{fig:nbiout2} \end{figure*} \begin{table*} \centering \begin{tabular}{l|cccccc|rrrrrr|cc} $\lambda$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\phi_4$ & $\phi_5$ & $\phi_6$ & $w_1$ & $w_2$ & $w_3$ & $w_4$ & $w_5$ & $w_6$ & $F^{max}$ & $M^{max}$ \\ \hline 0 & 54.74 &-54.73 & 54.74 &-54.74 & 54.73 &-54.74 &-1 & 1 &-1 & 1 &-1 & 1 & 2.000 & 0.2798 \\ 0.25 & 54.73 &-54.74 & 54.74 &-54.73 & 54.74 &-54.74 &-1 & 1 &-1 & 1 &-1 & 1 & 2.000 & 0.2798 \\ 0.5 & 53.73 &-53.73 & 53.73 &-53.73 & 53.73 &-53.73 &-1 & 1 &-1 & 1 &-1 & 1 & 1.999 & 0.2844 \\ 0.75 & 49.56 &-49.56 & 49.56 &-49.56 & 49.56 &-49.56 &-1 & 1 &-1 & 1 &-1 & 1 & 1.969 & 0.3009 \\ 1 & 38.84 &-38.85 & 38.84 &-38.84 & 38.85 &-38.84 &-1 & 1 &-1 & 1 &-1 & 1 & 1.745 & 0.3206 \\ \hline \end{tabular} \caption{Some of the configurations obtained for 5 equally spaced values of $\lambda$. The values for $\{\phi_i\}$ are shown in degrees. As before, the dimensions for $F^{max}$ and $M^{max}$ have no physical meaning.} \label{tab:configs} \end{table*} With these values, we ran our optimization method for $\lambda$ ranging from~0 to~1 on 0.01 steps, for 1000 random initializations each. The result is a set of points approximating the Pareto optimal set, shown in Figure~\ref{fig:nbiout2}. The (a) plot of this figure suggests a convex Pareto front. For a range of $\lambda$ values from 0 to about 0.4, the $(p,q)$ values are constant. 
As $\lambda$ increases over 0.4, there is a drop in the values of $q$, meaning a slight increase in $M^{max}$. \begin{table} \centering \begin{tabular}{ccccccc} propeller ($i$) & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline $\theta_i$ & 0 & 60 & 120 & 180 & 240 & 300 \\ $\phi_i$ & 55 & -55 & 55 & -55 & 55 & -55 \\ $w_i$ & -1 & 1 & -1 & 1 & -1 & 1 \\ \hline \end{tabular} \caption{Design parameters of the selected solution. Both $\{\theta_i\}$ and $\{\phi_i\}$ are expressed in degrees.} \label{tab:selected} \end{table} Table~\ref{tab:configs} shows some of the optimal configurations obtained for some values of $\lambda$. We decided to prefer maximum force, and thus selected a configuration found in the lower range of $\lambda$ values. We rounded off the angle values to the closest integer degree, resulting in the configuration shown in Table~\ref{tab:selected}. All of the following results shown in this paper employ this selected configuration. \subsection{Individual motion mode} For this validation, the robot holds position at $(0,0,1)$, while the reference is set at $(1,2,4)$. The total movement of the robot is 1, 2 and 3 meters along the X, Y and Z axes. In Figure~\ref{fig:position_errors} we provide the obtained results for position convergence with the proposed controller in~\eqref{eq:fctrl} and~\eqref{eq:pctrl}. Convergence rates are equal across all axes, as expected: they depend only on the gains (equal for all axes) and the mass of the vehicle. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{error-p.pdf} \caption{Translational mode error convergence, from top to bottom: $e_x$ and $e_v$.} \label{fig:position_errors} \end{figure} In the attitude simulation, we start the vehicle at $(0\degree,0\degree,0\degree)$ and the reference is set as $(70\degree,-50\degree,30\degree)$ (both in $XYZ$ Euler angles). The results are shown in Figure~\ref{fig:attitude_errors}. Attitude convergence is approximately the same on all axes.
As it depends only on the gains (equal for all axes) and the inertia of the vehicle (almost diagonal due to its symmetry), this result was also expected. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{error-a.pdf} \caption{Attitude mode error convergence, from top to bottom: $e_R$ and $e_\omega$.} \label{fig:attitude_errors} \end{figure} \subsection{Waypoint navigation and payload transportation} To validate the joint position and attitude control, we used waypoint navigation: we created a virtual path composed of 6 waypoints, varying in both position and attitude. The trajectory followed by the vehicle is represented in Figure~\ref{fig:trajectory}. Plotted are both the X and Y axes of the body frame~$\mathcal{B}$ to represent its attitude. We retrieved the errors across the whole trajectory, for both position and attitude. These are shown in Figure~\ref{fig:nav_errors}. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{3d-trajectory.pdf} \caption{Trajectory followed by the robot without load. Plotted are the X and Y axes of the inertial frame~$\mathcal{I}$.} \label{fig:trajectory} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{robust-noload.pdf} \caption{Position (on top) and attitude (on bottom) errors along the no-load trajectory of Figure \ref{fig:trajectory}.} \label{fig:nav_errors} \end{figure} Then, we added a non-modeled payload to the system: a sphere with 6~kg of mass, about the same mass as the vehicle. Even with such a payload, the controller was able to converge to the required positions, though taking significantly more time. Shown in Figure~\ref{fig:loaded} are screenshots of the vehicle along its 3-waypoint trajectory: 1~meter up along Z and 1~meter right along Y.
The position and attitude errors for this trajectory are plotted in Figure~\ref{fig:nav_errors_load}. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{snapshots.jpg} \caption{Trajectory followed by the robot with a 6~kg non-modeled load. Pictured are six positions of the robot along its trajectory.} \label{fig:loaded} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{robust-load.pdf} \caption{Position (on top) and attitude (on bottom) errors with the non-modeled 6~kg load along the trajectory of Figure \ref{fig:loaded}.} \label{fig:nav_errors_load} \end{figure} A video showing simulations of Space CoBot in V-REP can be found at \url{https://youtu.be/M4kdZjxf6-Q}.
\section{Conclusion} In this paper, we present our WMT21 word-level QE shared task submission based on Levenshtein Transformer training and a two-step finetuning process. We also explore various ways to create synthetic data to build more generalizable systems with limited human annotations. We show that our system outperforms the OpenKiwi+XLM baseline for all language pairs we experimented with. Our official results on the blind test set also demonstrate the competitiveness of our system. We hope that our work can inspire other applications of Levenshtein Transformer beyond the widely studied case of non-autoregressive translation. \section{Experiment} \subsection{Setup} \begin{table*}[] \centering \scalebox{0.9}{\begin{tabular}{@{}llll@{}} \toprule \textbf{Language Pair} & \textbf{Synthesis Method} & \textbf{Data Source} & \textbf{\# Triplets} \\ \midrule English-German & src-mt-tgt & WMT20 en-de parallel data & 44.2M \\ English-German & src-mt1-mt2 & WMT20 en-de parallel data cleaned & 36.1M \\ English-German & bt-rt-tgt & 10M sample from newscrawl2020 & 10.0M \\ English-German & bt-noisy-tgt & 10M sample from newscrawl2020 & 10.0M \\ English-Chinese & src-mt-tgt & shared task en-zh parallel & 20.3M \\ English-Chinese & MVPPE & shared task en-zh parallel & 20.3M \\ Romanian-English & src-rt-ft & shared task ro-en parallel & 3.09M \\ Romanian-English & MVPPE & shared task ro-en parallel & 3.09M \\ Russian-English & src-rt-ft & shared task ru-en parallel & 2.32M \\ Estonian-English & src-rt-ft & shared task et-en parallel & 880K \\ Nepalese-English & src-rt-ft & shared task ne-en parallel & 498K \\ Pashto-English & src-rt-ft & WMT20 Parallel Corpus Filtering Task & 347K \\ \bottomrule \end{tabular}} \caption{Source and statistics of datasets used for our synthetic finetuning experiments.
\label{tab:data}} \end{table*} \begin{figure*} \centering \includegraphics[scale=0.6]{figures/main.pdf} \caption{Target MCC results on \texttt{test20} dataset for all language pairs we submitted systems for (except for ps-en which is not included in \texttt{test20}).\label{fig:main}} \end{figure*} We follow the data split on human post-edited data as determined by the task organizers and use \texttt{test20} as the devtest for our system development purposes. Apart from the human post-edited data provided by the task organizers, we also used some extra data for synthetic finetuning. Table \ref{tab:data} shows the source and statistics of the data used for our synthetic finetuning experiments. For stage 1 LevT translation training, we always use the same parallel data that is used to train the MT systems provided by the shared task. For the only language pair where we applied the \textbf{src-mt1-mt2} synthetic finetuning, we used Facebook's WMT19 winning system \cite{ng-etal-2019-facebook} to generate the higher-quality translation \textbf{mt2}. For both \textbf{src-mt-tgt} and \textbf{src-mt1-mt2}, we used the system provided by the shared task to generate the MT output in the pseudo translation triplet. For the Levenshtein Transformer in \textbf{bt-noisy-tgt}, we used a Levenshtein Transformer model trained from scratch (without M2M initialization). For all the other forward/backward/round-trip translations, we used the M2M-mid (1.2B parameters) model to generate the necessary outputs. All of our experiments use the Adam optimizer \cite{DBLP:journals/corr/KingmaB14} with linear warmup and \texttt{inverse-sqrt} scheduler. For stage 1, we use the same hyperparameters as \citet{DBLP:conf/nips/GuWZ19} for LevT translation training, but use a smaller learning rate of 2e-5 to avoid overfitting for all to-English language pairs.
For stage 2 and beyond, we stick to the learning rate of 2e-5 and perform early-stopping based on the loss function on the development set. We also experiment with the label balancing factors $\sigma=1.0$ and $\sigma=3.0$ for each language pair and pick the one that works best on devtest data. We find the QE task performance to be quite sensitive to $\sigma$, but there is no universally optimal value for all language pairs. Our LevT-QE model is implemented based on Fairseq \cite{DBLP:conf/naacl/OttEBFGNGA19}. We also implemented another TER computation tool\footnote{\texttt{https://github.com/marian-nmt/moses-scorers}} to generate the word-level and subword-level tags that we use as the reference for finetuning, but stick to the original reference tags in the test set for evaluation to avoid potential result mismatch. \subsection{Results} Our pre-submission results on \texttt{test20} devtest data are shown in Figure \ref{fig:main}. We can first observe that for all language pairs except English-Chinese, our LevT QE models perform better than the OpenKiwi-XLM \cite{kepler-etal-2019-openkiwi} baselines that we built on our own.\footnote{We followed their \texttt{xlmroberta.yaml} recipe to build our own baseline systems.} It is not clear why English-Chinese is an outlier, but we suspect it might be related to the fact that the M2M model is not a good translation model on this specific language pair, as reported in \cite{DBLP:journals/corr/abs-2010-11125}. Among the other language pairs, it should be noted that the benefit of LevT is most significant on the language pairs with a large amount of available parallel data. This is intuitive, because the less parallel data we have, the less knowledge we can draw out of the LevT training process. In the extreme case where we have no parallel data at all, all of our knowledge comes from the pre-trained model and human annotation data finetuning, which will reduce to the same model as the baseline.
Finally, our Nelder-Mead ensemble further improves the result by a small but steady margin. We did not have enough time to run a rigorous comparison for all the explored synthetic finetuning methods before the submission deadline (we will include it in our next draft). Our general development results indicate that \textbf{src-mt1-mt2} works significantly better than all the other synthetic finetuning methods, which do not improve the result and may even hurt performance at times. This is different from the experiments shown in \cite{lee-2020-two}, which successfully deployed \textbf{src-mt-tgt}-style synthetic finetuning to improve the performance on the word-level QE task. Note that the limitation of \textbf{src-mt1-mt2} is also pretty clear: it calls for an existing MT system stronger than the one whose translation quality is being estimated, which is why we did not perform similar synthetic finetuning for the language pairs other than en-de, as there are no clearly stronger publicly available MT models for these language pairs. In theory, we can produce stronger translations for that purpose by resorting to an online translation system such as Google/Bing Translate, which another team used in their 2020 participation \cite{wang-etal-2020-hw}, but those models are not publicly available, hence making the submission results hard to reproduce. To verify that \textbf{src-mt1-mt2} does work beyond the en-de language pair, we plan to build contrastive systems that take advantage of the online systems to build synthetic finetuning data. \section{Experiments} \subsection{Data Setup} \paragraph{LevT Training} We used the same parallel data that was used to train the MT system in the shared task, except for the en-de, et-en, and ps-en language pairs. For the en-de language pair, we use the larger parallel data from the WMT20 news translation shared task.
For the et-en language pair, we experiment with augmentation using the News Crawl Estonian monolingual data from 2014 to 2017, inspired by \citet{zhou-keung-2020-improving}. For the ps-en language pair, because there is no MT system provided, we took the data from the WMT20 parallel corpus filtering shared task and applied the baseline LASER filtering method. For the multi-source LevT model, we simply concatenate the data from ro-en, ru-en, et-en (w/o monolingual augmentation) and ne-en. The resulting data scales are summarized in Table \ref{tab:data}. Following the setup in \citet{DBLP:conf/nips/GuWZ19}, we conduct sequence-level knowledge distillation during training for all language pairs except for ne-en and ps-en\footnote{The exception was motivated by the poor quality of the translation we obtained from the M2M-100 model.}. For en-de, the knowledge distillation data is generated by the WMT19 winning submission for that language pair from Facebook \cite{ng-etal-2019-facebook}. For en-zh, we train our own en-zh autoregressive model on the parallel data from the WMT17 news translation shared task. For the other language pairs, we use the decoding output from the M2M-100-mid (1.2B parameters) model to perform knowledge distillation.
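Since the word-level OK/BAD reference tags are central to everything above, a simplified sketch of how such tags can be derived by aligning an MT hypothesis to its post-edit may be useful. Note this uses plain Levenshtein alignment only, whereas the actual TER-based tool also handles phrase shifts, so it is an illustration rather than the exact procedure.

```python
def word_tags(mt, pe):
    """Tag each token of `mt` as 'OK' if a minimal-edit alignment to the
    post-edit `pe` keeps it unchanged, else 'BAD' (substituted/deleted).
    Simplified Levenshtein alignment; full TER also allows shifts."""
    m, n = len(mt), len(pe)
    # DP table of edit distances between prefixes
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i-1][j-1] + (mt[i-1] != pe[j-1])
            d[i][j] = min(sub, d[i-1][j] + 1, d[i][j-1] + 1)
    # backtrace, tagging MT tokens along one optimal alignment
    tags, i, j = [None] * m, m, n
    while i > 0:
        if j > 0 and d[i][j] == d[i-1][j-1] + (mt[i-1] != pe[j-1]):
            tags[i-1] = 'OK' if mt[i-1] == pe[j-1] else 'BAD'
            i, j = i - 1, j - 1
        elif d[i][j] == d[i-1][j] + 1:   # MT token removed in post-edit
            tags[i-1] = 'BAD'
            i -= 1
        else:                            # token inserted in post-edit
            j -= 1
    return tags
```

For example, against the post-edit ``a x c'', the hypothesis ``a b c'' gets tags OK/BAD/OK.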
\begin{table}[h] \centering \scalebox{0.68}{ \begin{tabular}{@{}lllr@{}} \toprule \textbf{Configuration} & \textbf{Stage 2} & \textbf{Stage 3} & \multicolumn{1}{l}{\textbf{Target MCC}} \\ \midrule en-de OpenKiwi & N & default & 0.337 \\ en-de bilingual best & src-mt1-mt2 & $\mu = 1.0$ & 0.500 \\ en-de ensemble & N/A & N/A & 0.504 \\ \midrule en-zh OpenKiwi & N & default & 0.421 \\ en-zh bilingual best & mvppe & $\mu = 1.0$ & 0.459 \\ en-zh ensemble & N/A & N/A & 0.466 \\ \midrule ro-en OpenKiwi & N & default & 0.556 \\ ro-en bilingual best & src-rt-ft & $\mu = 1.0$ & 0.604 \\ ro-en multilingual best & N & $\mu = 1.0$ & 0.612 \\ ro-en ensemble & N/A & N/A & 0.633 \\ \midrule ru-en OpenKiwi & N & default & 0.279 \\ ru-en bilingual best & src-rt-ft & $\mu = 3.0$ & 0.316 \\ ru-en multilingual best & N & $\mu = 3.0$ & 0.339 \\ ru-en ensemble & N/A & N/A & 0.349 \\ \midrule et-en OpenKiwi & N & default & 0.503 \\ et-en bilingual best & N & $\mu = 3.0$ & 0.556 \\ et-en bilingual best (w/ aug) & N & $\mu = 3.0$ & 0.548 \\ et-en multilingual best & N & $\mu = 3.0$ & 0.533 \\ et-en ensemble & N/A & N/A & 0.575 \\ \midrule ne-en OpenKiwi & N & default & 0.664 \\ ne-en bilingual best & N & $\mu = 3.0$ & 0.677 \\ ne-en multilingual best & N & $\mu = 3.0$ & 0.681 \\ ne-en ensemble & N/A & N/A & 0.688 \\ \bottomrule \end{tabular} } \caption{Target MCC results on \texttt{test20} dataset for all language pairs we submitted systems for (except for ps-en which is not included in \texttt{test20}). Stage 2 stands for synthetic finetuning (where N stands for not performing this stage). Stage 3 stands for human annotation finetuning. 
$\mu$ stands for the label balancing factor.\label{tab:main}} \end{table} \begin{table}[h] \centering \scalebox{0.9}{ \begin{tabular}{@{}lrrr@{}} \toprule & \multicolumn{1}{l}{\textbf{Target MCC}} & \multicolumn{1}{l}{\textbf{F1-OK}} & \multicolumn{1}{l}{\textbf{F1-BAD}} \\ \midrule N & 0.489 & 0.955 & 0.533 \\ src-mt-ref & 0.493 & 0.955 & 0.537 \\ src-mt1-mt2 & \textbf{0.500} & 0.956 & \textbf{0.544} \\ bt-rt-tgt & 0.490 & 0.956 & 0.534 \\ src-rt-ft & 0.494 & 0.956 & 0.538 \\ mvppe & \textbf{0.500} & \textbf{0.960} & 0.540 \\ \bottomrule \end{tabular} } \caption{Analysis of different data synthesis methods on en-de language pair. All models here are initialized with M2M-100-small. \label{tab:data-synth-de}} \end{table} \begin{table*}[h] \centering \scalebox{0.9}{ \begin{tabular}{@{}lllrrr@{}} \toprule \textbf{Configuration} & \textbf{Stage 2} & \textbf{Stage 3} & \multicolumn{1}{l}{\textbf{Target MCC}} & \multicolumn{1}{l}{\textbf{F1-OK}} & \multicolumn{1}{l}{\textbf{F1-BAD}} \\ \midrule ro-en multilingual & N & $\mu = 1.0$ & \textbf{0.612} & 0.949 & \textbf{0.659} \\ ro-en multilingual & mvppe & $\mu = 1.0$ & 0.611 & \textbf{0.951} & \textbf{0.659} \\ ro-en multilingual & src-mt1-mt2 (Bing mt2) & $\mu = 1.0$ & 0.585 & 0.936 & 0.630 \\ ro-en bilingual (Bing KD) & N & $\mu = 1.0$ & 0.581 & 0.949 & 0.632 \\ ro-en bilingual (Bing KD) & src-mt1-mt2 (Bing mt2) & $\mu = 1.0$ & 0.568 & 0.938 & 0.619 \\ \midrule et-en bilingual & N & $\mu = 3.0$ & 0.548 & 0.914 & 0.622 \\ et-en bilingual & mvppe & $\mu = 3.0$ & 0.544 & \textbf{0.929} & 0.615 \\ et-en bilingual & src-mt1-mt2 (Bing mt2) & $\mu = 3.0$ & \textbf{0.563} & 0.919 & \textbf{0.634} \\ et-en bilingual (Bing KD) & N & $\mu = 3.0$ & 0.557 & 0.918 & 0.629 \\ et-en bilingual (Bing KD) & src-mt1-mt2 (Bing mt2) & $\mu = 3.0$ & 0.559 & 0.916 & 0.631 \\ \bottomrule \end{tabular} } \caption{Analysis of \texttt{src-mt1-mt2} and \texttt{mvppe} method on ro-en and et-en language pair. 
\label{tab:data-synth-ro-et}} \end{table*} \paragraph{Synthetic Finetuning} We always conduct data synthesis based on the same parallel data that was used to train the LevT translation model. For the only language pair (en-de) where we applied the \texttt{src-mt1-mt2} synthetic finetuning for shared task submission, we again use Facebook's WMT19 winning system \cite{ng-etal-2019-facebook} to generate the higher-quality translation \texttt{mt2}, and the system provided by the shared task to generate the MT output in the pseudo translation triplet \texttt{mt1}. For all other combinations of translation directions, language pairs and MVPPE decoding, we use the M2M-100-mid (1.2B parameters) model. \paragraph{Human Annotation Finetuning} We follow the data split for human post-edited data as determined by the task organizers and use \texttt{test20} as the devtest for our system development purposes. \paragraph{Reference Tag Generation} We implemented another TER computation tool\footnote{\texttt{https://github.com/marian-nmt/moses-scorers}} to generate the word-level and subword-level tags that we use as the reference for finetuning, but stick to the original reference tags in the test set for evaluation to avoid potential result mismatch. \subsection{Model Setup} Our LevT-QE model is implemented based on Fairseq \cite{DBLP:conf/naacl/OttEBFGNGA19}. All of our experiments use the Adam optimizer \cite{DBLP:journals/corr/KingmaB14} with linear warmup and \texttt{inverse-sqrt} scheduler. For stage 1, we use the same hyperparameters as \citet{DBLP:conf/nips/GuWZ19} for LevT translation training, but use a smaller learning rate of 2e-5 to avoid overfitting for all to-English language pairs. For stage 2 and beyond, we stick to the learning rate of 2e-5 and perform early-stopping based on the loss function on the development set.
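The linear-warmup plus \texttt{inverse-sqrt} schedule mentioned above follows the usual Fairseq convention; a sketch is given below. The peak learning rate matches the 2e-5 used here, but the warmup length is an illustrative assumption, not the paper's exact setting.

```python
def inverse_sqrt_lr(step, peak_lr=2e-5, warmup_steps=4000):
    """Linear warmup to peak_lr over warmup_steps updates, then decay
    proportional to 1/sqrt(step) so that the curve is continuous at
    step == warmup_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (warmup_steps / step) ** 0.5
```

At four times the warmup length, the learning rate has halved relative to its peak.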
For stage 3, we also experiment with the label balancing factors $\mu=1.0$ and $\mu=3.0$ for each language pair and pick the one that works best on devtest data, while for stage 2 we keep $\mu = 1.0$ because early experiments indicate that using $\mu = 3.0$ at this stage is not helpful. For pre-submission developments, we built OpenKiwi-XLM baselines \cite{kepler-etal-2019-openkiwi} following their \texttt{xlmroberta.yaml} recipe. Keep in mind that, because this baseline model is initialized with a much smaller XLM-RoBERTa-base model (281M parameters) compared to our M2M-100-small initialization (484M parameters), the performance comparison is not a strict one. \subsection{Devtest Results} Our system development results on \texttt{test20} devtest data are shown in Table \ref{tab:main}\footnote{Note that the results on en-zh also reflect a crucial bug fix on our TER computation tool that we added after the system submission deadline. Hence the results shown here are from a different system than the one in the official shared task results. The bug fix should not affect the results of the other language pairs.}. In all language pairs, our systems outperform the OpenKiwi baseline based upon the pre-trained XLM-RoBERTa-base encoder. Among these language pairs, the benefit of LevT is most significant on the language pairs with a large amount of available parallel data. Such behavior is expected, because the less parallel data we have, the less knowledge we can extract from the LevT training process. Furthermore, the lack of good quality knowledge distillation data in the low-resource language pairs also expands this performance gap. To the best of our knowledge, this is also the first attempt to train non-autoregressive translation systems under low-resource settings, and we hope future explorations in this area can enable us to build a better QE system from LevT.
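The Target MCC reported throughout is the Matthews correlation coefficient computed over binary OK/BAD tag predictions; for reference, a minimal implementation:

```python
import math

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels
    (e.g. OK = 0, BAD = 1). Returns 0.0 on a degenerate
    confusion matrix, following common convention."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Unlike F1, MCC is symmetric in the two classes, which is why it is preferred for the heavily OK-skewed tag distributions in this task.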
In terms of the comparison between multilingual and bilingual models for to-English language pairs, the results are mixed, with the multilingual model performing significantly better for the ru-en language pair but significantly worse for et-en. Finally, our Nelder-Mead ensemble further improves the result by a small but steady margin. \subsection{Analysis} \citet{ding2021levenshtein} already conducted comprehensive ablation studies on techniques such as the LevT training step and the heuristic subword-level reference tags, as well as the effect of various data synthesis methods. In this section, we extend the existing analyses by studying whether synthetic finetuning is still useful with M2M initialization, and whether it is universally helpful across languages. We also examine the effect of the label balancing factor $\mu$ and take a detailed look at the prediction errors. \paragraph{Synthetic Finetuning} We redo the analysis of en-de synthetic finetuning with the smaller sample of 2M parallel sentences from Europarl, as in \citet{ding2021levenshtein}, but with the updated \texttt{test20} test set and models with M2M-100-small initialization. The results largely corroborate the trend in that paper, showing that \texttt{src-mt1-mt2} and \texttt{mvppe} are the two most helpful data synthesis methods. We then extend those two methods to ro-en and et-en, using the up-to-date Bing Translator production model as the stronger MT system (a.k.a. \texttt{mt2}) in the \texttt{src-mt1-mt2} synthetic data. The result is mixed, with \texttt{mvppe} failing to improve performance for both language pairs, and \texttt{src-mt1-mt2} only being helpful for the et-en language pair. We also trained two extra ro-en and et-en LevT models using the respective Bing Translator models to generate the KD data; these neither improve performance on their own nor work better with \texttt{src-mt1-mt2} synthetic data.
We notice that the \texttt{mvppe} synthetic data seems to significantly improve the F1 score of the \texttt{OK} label in general, for which we don't have a good explanation yet. \paragraph{Label Balancing Factor} \begin{table}[] \scalebox{0.9}{ \begin{tabular}{@{}lrrr@{}} \toprule \textbf{Configuration} & \multicolumn{1}{l}{\textbf{Target MCC}} & \multicolumn{1}{l}{\textbf{F1-OK}} & \multicolumn{1}{l}{\textbf{F1-BAD}} \\ \midrule ro-en $\mu = 1.0$ & \textbf{0.612} & \textbf{0.949} & \textbf{0.659} \\ ro-en $\mu = 3.0$ & 0.577 & 0.930 & 0.619 \\ \midrule ru-en $\mu = 1.0$ & 0.267 & \textbf{0.960} & 0.284 \\ ru-en $\mu = 3.0$ & \textbf{0.339} & 0.943 & \textbf{0.390} \\ \midrule et-en $\mu = 1.0$ & 0.478 & \textbf{0.933} & 0.511 \\ et-en $\mu = 3.0$ & \textbf{0.512} & 0.925 & \textbf{0.587} \\ \midrule ne-en $\mu = 1.0$ & 0.660 & \textbf{0.885} & 0.774 \\ ne-en $\mu = 3.0$ & \textbf{0.681} & 0.855 & \textbf{0.788} \\ \bottomrule \end{tabular} } \caption{Analysis of different label balancing factors for to-English language pairs. All results are based on the multilingual model without the synthetic finetuning step. \label{tab:lb-factor}} \end{table} \definecolor{applegreen}{rgb}{0.553, 0.714, 0.0} \definecolor{msgreen}{rgb}{0.490, 0.718, 0.0} \definecolor{bananamania}{rgb}{0.98, 0.91, 0.71} \definecolor{pastelgreen}{rgb}{0.467, 0.867, 0.467} \def\cca#1{ \pgfmathsetmacro\calc{#1*100} \edef\clrmacro{\noexpand\cellcolor{msgreen!\calc}} \clrmacro{#1} } \begin{table*}[] \centering \scalebox{0.6}{ \begin{tabular}{@{}lrrrrrrrrrrrrrrr@{}} \toprule \textbf{Lang.} & \multicolumn{1}{l}{\textbf{Tgt.
MCC}} & \multicolumn{1}{l}{\textbf{MT MCC}} & \multicolumn{3}{l}{\textbf{MT BAD (P/R/F1)}} & \multicolumn{3}{l}{\textbf{MT OK (P/R/F1)}} & \multicolumn{1}{l}{\textbf{GAP MCC}} & \multicolumn{3}{l}{\textbf{GAP BAD (P/R/F1)}} & \multicolumn{3}{l}{\textbf{GAP OK (P/R/F1)}} \\ \midrule en-de & \cca{0.504} & \cca{0.503} & \cca{0.476} & \cca{0.731} & \cca{0.576} & \cca{0.950} & \cca{0.863} & \cca{0.904} & \cca{0.280} & \cca{0.366} & \cca{0.238} & \cca{0.288} & \cca{0.980} & \cca{0.989} & \cca{0.984} \\ en-zh & \cca{0.466} & \cca{0.381} & \cca{0.467} & \cca{0.787} & \cca{0.586} & \cca{0.879} & \cca{0.633} & \cca{0.736} & \cca{0.146} & \cca{0.276} & \cca{0.099} & \cca{0.145} & \cca{0.965} & \cca{0.990} & \cca{0.977} \\ ro-en & \cca{0.612} & \cca{0.645} & \cca{0.729} & \cca{0.709} & \cca{0.719} & \cca{0.922} & \cca{0.929} & \cca{0.926} & \cca{0.164} & \cca{0.411} & \cca{0.073} & \cca{0.125} & \cca{0.973} & \cca{0.997} & \cca{0.985} \\ ru-en & \cca{0.349} & \cca{0.329} & \cca{0.296} & \cca{0.675} & \cca{0.411} & \cca{0.945} & \cca{0.775} & \cca{0.852} & \cca{0.167} & \cca{0.265} & \cca{0.123} & \cca{0.168} & \cca{0.978} & \cca{0.991} & \cca{0.985} \\ et-en & \cca{0.575} & \cca{0.553} & \cca{0.676} & \cca{0.681} & \cca{0.679} & \cca{0.875} & \cca{0.873} & \cca{0.874} & \cca{0.251} & \cca{0.426} & \cca{0.169} & \cca{0.242} & \cca{0.967} & \cca{0.991} & \cca{0.979} \\ ne-en & \cca{0.694} & \cca{0.434} & \cca{0.760} & \cca{0.918} & \cca{0.832} & \cca{0.746} & \cca{0.454} & \cca{0.564} & \cca{0.192} & \cca{0.444} & \cca{0.098} & \cca{0.161} & \cca{0.955} & \cca{0.994} & \cca{0.974} \\ \bottomrule \end{tabular} } \caption{Detailed evaluation metric breakdown of all submitted ensemble system on \texttt{test20} test set.\label{tab:breakdown}} \end{table*} We find the QE task performance to be quite sensitive to the label balancing factor $\mu$, but there is also no universally optimal value for all language pairs. 
Table \ref{tab:lb-factor} shows this behavior for all to-English language pairs. Notice that while in most cases $\mu$ simply controls a trade-off between the performance on the \texttt{OK} and \texttt{BAD} outputs, there are also cases such as ro-en where a certain choice of $\mu$ hurts the performance on both classes. This might be due to a certain label class being particularly hard to fit, thus creating more difficulties with learning when the loss function is designed to skew towards this label class. It should be noted that this label balancing factor does not correlate directly with the ratio of the \texttt{OK} vs. \texttt{BAD} labels in the training set. For example, to obtain the best performance, ne-en requires $\mu = 3.0$ while en-de requires $\mu = 1.0$, although the \texttt{OK} to \texttt{BAD} ratio for ne-en (2.14:1) is much less skewed compared to en-de (10.2:1). \paragraph{Detailed Error Breakdown} We found it hard to develop an intuition for the model performance from the MCC metric alone. To further understand which label categories our models struggle with the most, we break down the target-side metric into a cross product of \{\texttt{MT}, \texttt{GAP}\} tags and \{\texttt{OK}, \texttt{BAD}\} classes and compute precision, recall and F1-score for each category. The breakdown is shown in Table \ref{tab:breakdown}. It can be seen that our model makes the most mistakes in the \texttt{GAP BAD} category, while the category with the fewest mistakes is \texttt{GAP OK}. Also, note that for MT word tags, the models often seem to suffer more from low precision than low recall, while for gaps it is the opposite. Overall, we see that the highest F1 scores we can achieve for detecting bad MT words or gaps are rarely higher than 0.8, which indicates that there should be ample room for improvement.
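For reference, per-category numbers of this kind follow from the textbook confusion-matrix formulas; a minimal, framework-free sketch (not the shared task's official scorer):

```python
import math

def binary_metrics(gold, pred, positive="BAD"):
    """Precision/recall/F1 for one class plus MCC, over OK/BAD tag lists."""
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    tn = sum(g != positive and p != positive for g, p in zip(gold, pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return prec, rec, f1, mcc
```

Restricting `gold`/`pred` to the positions of one tag category (e.g. only gap tags) yields the per-category breakdown.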
It would also be interesting to measure the inter-annotator agreement of these word-level quality labels, in order to get a sense of the human performance we should be aiming for. \section{Introduction\blfootnote{$^*$ Shuoyang Ding had a part-time affiliation with Microsoft at the time of this work.}} In the machine translation (MT) literature, quality estimation (QE) refers to the task of evaluating the translation quality of a system without using a human-generated reference. Such quality labels or scores can be generated at several different granularities. Our participation in the WMT21 quality estimation shared task focuses specifically on \emph{word-level} quality labels (word-level subtask of Task 2), which are helpful for both human \cite{lee-etal-2021-intellicat} and automatic \cite{lee-2020-cross} post-editing of translation outputs. The task asks the participant to predict one binary quality label (\texttt{OK}/\texttt{BAD}) for each target word and each gap between target words, respectively.\footnote{While there is another sub-task for predicting source-side quality labels, we do not participate in that task.} Our approach closely follows our contemporary work \cite{ding2021levenshtein}, which focuses on the en-de and en-zh language pairs tested in the 2020 version of the shared task. The intuition behind our idea is that translation knowledge is very useful for predicting word-level quality labels of translations. However, the use of machine translation models in previous work has been limited mainly by (1) the difficulty of using both the left and right context of an MT word to be evaluated; (2) the difficulty of making the word-level reference labels compatible with subword-level models; and (3) the difficulty of enabling translation models to predict gap labels.
To resolve these difficulties, we resort to the Levenshtein Transformer \cite[LevT,][]{DBLP:conf/nips/GuWZ19}, a model architecture designed for non-autoregressive neural machine translation (NA-NMT). Because of its iterative inference procedure, LevT is capable of performing post-editing on existing translation output even when trained only for translation. To further improve the model performance, we also propose to initialize the encoder and decoder of the LevT model with those from a massively pre-trained multilingual NMT model \cite[M2M-100,][]{DBLP:journals/corr/abs-2010-11125}. Starting from a LevT translation model, we then perform a two-stage finetuning process to adapt the model from translation prediction to quality label prediction, using automatically-generated pseudo-post-editing triplets and human post-editing triplets, respectively. All of our final system submissions are also linear ensembles of several individual models with weights optimized on the development set using the Nelder-Mead method \cite{DBLP:journals/cj/NelderM65}. \section{Method} Our system building pipeline consists of three stages: \begin{itemize} \itemsep -2pt \item \textbf{Stage 1}: Training LevT for translation \item \textbf{Stage 2 (Optional)}: Finetuning LevT on synthetic post-editing triplets \item \textbf{Stage 3}: Finetuning LevT on human post-editing triplets \end{itemize} We start by introducing the architecture we use and then describe each stage of our training process. \subsection{Levenshtein Transformer} The Levenshtein Transformer \cite[LevT,][]{DBLP:conf/nips/GuWZ19} is a non-autoregressive NMT architecture that generates translations by iteratively applying Longest Common Subsequence (LCS) edits \cite{DBLP:journals/csur/Navarro01} on the translation output.
Compared to the standard transformer model, the main difference in terms of the model architecture is that, on top of the transformer decoder blocks, it has two extra prediction heads\footnote{A prediction head is generally implemented as one or more linear layers that take the decoder state as the input.} that predict editing actions for this iteration, namely the \emph{deletion head} $\vec{A}_{del}$ and \emph{mask insertion head} $\vec{A}_{ins}$. A standard transformer model, on the other hand, only has one prediction head that predicts the upcoming word, which we call \emph{word prediction head} $\vec{A}_{w}$. At inference time, LevT takes the source sentence and the previous iteration target sequence (or empty sequence for the first iteration) as the input, and applies edits in the order of deletion $\rightarrow$ mask insertion $\rightarrow$ word insertion to generate the sequence iteratively, as shown in Figure~\ref{fig:ter-exp}b. At training time, the three prediction heads $\vec{A}_w$, $\vec{A}_{del}$ and $\vec{A}_{ins}$ are trained with imitation learning, with the randomly noised target sentence as target-side input and LCS edits as the expert policy. Like many other non-autoregressive translation models \cite[][\emph{inter alia}]{DBLP:conf/iclr/Gu0XLS18,ghazvininejad-etal-2019-mask,lee-etal-2020-iterative}, LevT also requires training on knowledge distillation data to achieve the optimal performance. We refer the readers to Appendix \ref{sec:levT-details} and \citet{DBLP:conf/nips/GuWZ19} for details about the model, training scheme, and translation results. 
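The deletion $\rightarrow$ mask-insertion $\rightarrow$ word-insertion loop can be sketched with stub callbacks standing in for the three prediction heads (all function names are illustrative; the real heads operate on decoder states, not token lists):

```python
MASK = "<mask>"

def levt_decode(src, predict_deletions, predict_insertions, predict_words,
                max_iters=10):
    """Iterative refinement: delete -> insert masks -> fill words, repeated
    until the hypothesis stops changing or max_iters is reached."""
    hyp = []  # the first iteration starts from the empty sequence
    for _ in range(max_iters):
        prev = list(hyp)
        # 1. deletion head: drop tokens flagged for deletion
        flags = predict_deletions(src, hyp)
        hyp = [tok for tok, d in zip(hyp, flags) if not d]
        # 2. mask-insertion head: how many placeholders per gap (len(hyp)+1 gaps)
        counts = predict_insertions(src, hyp)
        out = []
        for i, n in enumerate(counts):
            out.extend([MASK] * n)
            if i < len(hyp):
                out.append(hyp[i])
        hyp = out
        # 3. word head: fill every placeholder
        hyp = [predict_words(src, hyp, i) if tok == MASK else tok
               for i, tok in enumerate(hyp)]
        if hyp == prev:  # converged
            break
    return hyp
```

Seeding `hyp` with an existing MT output instead of the empty sequence is what turns the same loop into a post-editor.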
\subsection{System Building Process} \paragraph{Stage 1: Training LevT for Translation} We largely follow the same procedure as \citet[LevT,][]{DBLP:conf/nips/GuWZ19} to train the LevT translation model, except that we initialize the embedding, the encoder, and the decoder of LevT with those from the small M2M-100 model \cite[418M parameters,][]{DBLP:journals/corr/abs-2010-11125} to take advantage of large-scale pretraining. Because of that, we also use the same sentencepiece model and vocabulary as the M2M-100 model. For to-English language pairs, we explored training a multi-source Levenshtein Transformer model. According to the results on devtest data, this is beneficial for the QE task for ro-en, ru-en and ne-en, but not for the other language pairs. \paragraph{Stage 2: Synthetic Finetuning} During both finetuning stages, we update the model parameters to minimize the NLL loss of word quality labels and gap quality labels, for $\vec{A}_{del}$ and $\vec{A}_{ins}$ respectively. For language pairs with high translation quality, translation errors are often quite scarce, thus creating a skewed label distribution over the \texttt{OK} and \texttt{BAD} labels. To ensure that the model captures both label categories in a balanced manner, we introduce a multiplicative label balance factor $\mu$ to up-weight the NLL loss of the \texttt{BAD} labels. To obtain training targets for finetuning, we need \emph{translation triplet data}, i.e., aligned triplets of source, target, and post-edited segments. Human post-edited data naturally provides all three fields of the triplet, but only comes in a limited quantity. To further help the model generalize, we conduct an extra step of finetuning on synthetic translation triplets, similar to some previous work \cite[][\textit{inter alia}]{lee-2020-two}.
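A toy per-position sketch of this up-weighted NLL, with the balance factor written `mu` (the probabilities are illustrative inputs, not the real training loss over decoder states):

```python
import math

def balanced_nll(p_ok, gold_tags, mu=3.0):
    """NLL over OK/BAD tags where the BAD terms are up-weighted by mu.
    p_ok[i] is the model's probability of the OK label at position i."""
    loss = 0.0
    for p, tag in zip(p_ok, gold_tags):
        if tag == "OK":
            loss += -math.log(p)
        else:  # BAD: up-weighted by the label balance factor
            loss += -mu * math.log(1.0 - p)
    return loss
```

With `mu > 1` the scarce \texttt{BAD} positions contribute proportionally more gradient, counteracting the skewed label distribution.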
We explored six different methods for data synthesis, namely: \begin{enumerate} \item \textbf{src-mt-tgt}: Take the source side of a parallel corpus (\textbf{src}), translate it with an MT model to obtain the MT output (\textbf{mt}), and use the target side of the parallel corpus as the pseudo post-edited output (\textbf{tgt}). \item \textbf{src-mt1-mt2}: Take a corpus in the source language (\textbf{src}) and translate it with two different MT systems that have a clear system-level translation quality ordering. Then, take the inferior MT output as the MT output in the triplet (\textbf{mt1}) and the better one as the pseudo post-edited output in the triplet (\textbf{mt2}). \item \textbf{bt-rt-tgt}: Take a corpus in the target language (\textbf{tgt}), back-translate it into the source language (\textbf{bt}), and then translate again into the target language (\textbf{rt}). We then use \textbf{rt} as the MT output in the triplet and \textbf{tgt} as the pseudo post-edited output in the triplet. \item \textbf{bt-noisy-tgt}: Take a corpus in the target language (\textbf{tgt}) and back-translate it into the source language (\textbf{bt}). We then randomly mask some words in the target and let a LevT translation model complete the translation by referring to \textbf{bt} and the masked \textbf{tgt}, which generates a noisy target sentence (\textbf{noisy}). We then use \textbf{noisy} as the MT output in the triplet and the original target sentence in the corpus (\textbf{tgt}) as the pseudo post-edited output in the triplet. \item \textbf{src-rt-ft}: Take a parallel corpus, translate its source side and use the result as the pseudo post-edited output (\textbf{ft}), and round-trip translate its target side (\textbf{rt}) as the MT output in the translation triplet. \item \textbf{multiview pseudo post-editing (MVPPE)}: This method is inspired by \citet{thompson-post-2020-paraphrase}, which used a multilingual translation system as a zero-shot paraphraser.
We take a parallel corpus and translate the source side (\textbf{src}) with a multilingual translation system (\textbf{mt}) as the MT output in the triplet. We then generate the pseudo-post-edited output by ensembling two different \emph{views} of the same model: (1) using the multilingual translation model as a translation model, with \textbf{src} as the input; (2) using the multilingual translation model as a paraphrasing model, with \textbf{tgt} as the input. The ensemble process is the same as standard MT model ensembles and beam search is performed on top of the ensemble. \end{enumerate} \RestyleAlgo{ruled} \begin{algorithm}[t] \KwIn{subword-level token sequence $\vec{y}^{sw}$, word-level token sequence $\vec{y}^w$, subword-level tag sequence $\vec{q}^{sw}$} \KwOut{word-level tag sequence $\vec{q}^w$} $\vec{q}^w \leftarrow$ []\; \For{each word $w_k$ in $\vec{y}^w$}{ find subword index span $(s_k, e_k)$ in $\vec{y}^{sw}$ that corresponds to $w_k$\; $\vec{q}^{sw}_{k} \leftarrow$ subword-level translation and gap tags within span $(s_k, e_k)$\; $g^{sw}_{s_k} \leftarrow$ subword-level gap tag before span $(s_k, e_k)$\tcp*{$g^{sw}_{s_k-1}\in\vec{q}^{sw}$} \eIf{$\forall\,\vec{q}^{sw}_k$ are \texttt{OK}}{ $\vec{q}^w$ += [$g^{sw}_{s_k}$, \texttt{OK}]\; }{ $\vec{q}^w$ += [$g^{sw}_{s_k}$, \texttt{BAD}]\; } } $\vec{q}^w$ += [$\vec{q}^{sw}$[-1]]\tcp*{add ending gap tag} \Return{$\vec{q}^w$}\; \caption{Conversion of subword-level tags to word-level tags\label{algo1}} \end{algorithm} \paragraph{Stage 3: Human Post-editing Finetuning} We follow the same procedure as the synthetic finetuning stage except that we finetune on the human post-edited triplet dataset provided by the shared task organizers for this stage. 
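Two of the recipes above (\textbf{src-mt1-mt2} and \textbf{bt-rt-tgt}) reduce to simple data plumbing once the MT systems are abstracted as callables; a minimal sketch (the translation callables are stand-ins for real MT systems):

```python
def make_src_mt1_mt2(sources, weak_mt, strong_mt):
    """src-mt1-mt2: the weaker system's output plays the MT field,
    the stronger system's output plays the pseudo post-edit field."""
    return [(s, weak_mt(s), strong_mt(s)) for s in sources]

def make_bt_rt_tgt(targets, back_translate, forward_translate):
    """bt-rt-tgt: round-trip tgt -> bt -> rt; rt is the MT field and
    the original tgt is the pseudo post-edit."""
    triplets = []
    for tgt in targets:
        bt = back_translate(tgt)   # target -> source
        rt = forward_translate(bt) # source -> target again
        triplets.append((bt, rt, tgt))
    return triplets
```

Each triplet is then TER-aligned to produce OK/BAD reference tags, exactly as for human post-edited data.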
\subsection{Compatibility With Subwords} \RestyleAlgo{ruled} \begin{algorithm}[htbp] \KwIn{subword-level token sequence $\vec{y}^{sw}$, word-level token sequence $\vec{y}^w$, naive subword-level tag sequence $\vec{q}^{sw}$, word-level tag sequence $\vec{q}^{w}$} \KwOut{heuristic subword-level tag sequence $\widetilde{\vec{q}^{sw}}$} $\widetilde{\vec{q}^{sw}} \leftarrow$ []\; \For{each word $w_k$ in $\vec{y}^w$}{ find subword index span $(s_k, e_k)$ in $\vec{y}^{sw}$ that corresponds to $w_k$\; $\vec{q}^{sw}_k \leftarrow$ subword-level translation and gap tags within span $(s_k, e_k)$\; $t^w_k \leftarrow$ word-level translation tag for $w_k$\tcp*{$t^w_k\in\vec{q}^w$} $g^w_k \leftarrow$ word-level gap tag before $w_k$\tcp*{$g^w_k\in\vec{q}^w$} $\widetilde{\vec{q}^{sw}}$ += [$g^w_k$]\tcp*{copy left gap tag} $n$ = $\left|\vec{q}^{sw}_k\right|$\tcp*{\# subwords for $w_k$} \uIf{$t^w_k$ is \texttt{OK}}{ \tcc{word is \texttt{OK}, so all subwords and gaps between them should be \texttt{OK}} $\widetilde{\vec{q}^{sw}}$ += [\texttt{OK}] * $(2n-1)$ } \uElseIf{$\exists\,$\texttt{BAD} in $\vec{q}^{sw}_k$}{ \tcc{no conflict between subword-level tag and word-level tag -- copy $\vec{q}^{sw}_k$ as-is} $\widetilde{\vec{q}^{sw}}$ += $\vec{q}^{sw}_k$\; } \uElse{ \tcc{subword-level tag disagrees with word-level tag -- force it as all-\texttt{BAD} to guarantee perfect conversion} $\widetilde{\vec{q}^{sw}}$ += [\texttt{BAD}] * $(2n-1)$ } } $\widetilde{\vec{q}^{sw}}$ += [$\vec{q}^{w}$[-1]]\tcp*{add ending gap tag} \Return{$\widetilde{\vec{q}^{sw}}$}\; \caption{Construction of heuristic subword-level tags\label{algo2}} \end{algorithm} To the best of our knowledge, previous work on word-level quality estimation either builds models that directly output word-level tags~\cite{lee-2020-two,hu-etal-2020-niutrans-system,moura-etal-2020-ist} or uses simple heuristics to re-assign word-level tags to the first subword token~\cite{wang-etal-2020-hw}.
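A simplified Python rendering of the interpolation idea in Algorithm \ref{algo2}, with gap tags omitted for brevity (`spans` maps each word to its subword index range; all names are illustrative):

```python
def heuristic_subword_tags(word_tags, naive_subword_tags, spans):
    """Interpolate word-level and naive subword-level references so that
    collapsing the result back to word level (word is BAD iff any of its
    subwords is BAD) reproduces the word-level reference exactly."""
    out = []
    for tag, (s, e) in zip(word_tags, spans):
        sub = naive_subword_tags[s:e]
        if tag == "OK":
            out.extend(["OK"] * len(sub))   # word OK => all subwords OK
        elif "BAD" in sub:
            out.extend(sub)                 # no conflict: copy as-is
        else:
            out.extend(["BAD"] * len(sub))  # conflict: force all BAD
    return out
```

The third branch is what guarantees the lossless round trip: whenever the naive subword reference would under-report a \texttt{BAD} word, the whole span is forced to \texttt{BAD}.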
Since LevT predicts edits on a subword level starting from translation training, we need to: (1) for inference, convert subword-level tags predicted by the model to word-level tags for evaluation, and (2) for both finetuning stages, build subword-level reference tags. For inference, the conversion can be easily done by heuristics, as shown in Algorithm \ref{algo1}. For finetuning, a \emph{naive subword-level tag reference} can be built by running TER alignments on MT and post-edited text after subword tokenization. However, a preliminary analysis shows that such a reference introduces a 10\% error after being converted back to word level. Hence, we introduce another heuristic to create \emph{heuristic subword-level tag references}, shown in Algorithm \ref{algo2}. The high-level idea is to interpolate the word-level and the naive subword-level references to ensure that the interpolated subword-level tag reference can be perfectly converted back to the word-level references. \subsection{Ensemble} For each binary label prediction made by the model, the model will give a score $p(\texttt{OK})$, which is translated into a binary label in post-processing. To ensemble predictions from $k$ models $p_1(\texttt{OK}), p_2(\texttt{OK}), \dots, p_k(\texttt{OK})$, we perform a linear combination of the scores for each label: $$p_E(\texttt{OK}) = \lambda_1 p_1(\texttt{OK}) + \lambda_2 p_2(\texttt{OK}) + \dots + \lambda_k p_k(\texttt{OK})$$ To determine the optimal interpolation weights, we optimize towards target-side MCC on the development set. Because the target-side MCC computation is not implemented in a way such that gradient information can be easily obtained, we experimented with two gradient-free optimization methods: the Powell method \cite{DBLP:journals/cj/Powell64} and the Nelder-Mead method \cite{DBLP:journals/cj/NelderM65}, both as implemented in \texttt{SciPy} \cite{2020SciPy-NMeth}.
We found that the Nelder-Mead method tends to find a better optimum on the development set while also leading to better performance on the devtest dataset (not involved in optimization). Hence, we use the Nelder-Mead optimizer for all of our final submissions with ensembles. \section{Method} Our system building pipeline consists of three stages: \begin{itemize} \itemsep -2pt \item \textbf{Stage 1}: Training LevT for translation \item \textbf{Stage 2 (Optional)}: Finetuning LevT on synthetic post-editing triplets \item \textbf{Stage 3}: Finetuning LevT on human post-editing triplets \end{itemize} \paragraph{Stage 1: Training LevT for Translation} We largely follow the same procedure as \citet[LevT,][]{DBLP:conf/nips/GuWZ19} to train the LevT translation model, except that we initialize the embedding, the encoder, and the decoder of LevT with those from the M2M-100-small model \cite[418M parameters,][]{DBLP:journals/corr/abs-2010-11125} to take advantage of large-scale pretraining. Because of that, we also use the same sentencepiece model and vocabulary as the M2M-100 model. For to-English language pairs, we explored training a multi-source LevT model. According to the results on devtest data, this is beneficial for the QE task for ro-en, ru-en and ne-en, but not for the other language pairs. \paragraph{Stage 2: Synthetic Finetuning} During both finetuning stages, we update the model parameters to minimize the NLL loss of word quality labels and gap quality labels, for the deletion and insertion head, respectively. To obtain training targets for finetuning, we need \emph{translation triplet data}, i.e., the aligned triplet of source, target, and post-edited segments. Human post-edited data naturally provides all three fields of the triplet, but only comes in a limited quantity.
To further help the model generalize, we conduct an extra step of finetuning on synthetic translation triplets, similar to some previous work \cite[][\textit{inter alia}]{lee-2020-two}. We explored five different methods for data synthesis, namely: \begin{enumerate}[leftmargin=0.5cm] \item \texttt{src-mt-tgt}: Take the source side of a parallel corpus (\texttt{src}), translate it with an MT model to obtain the MT output (\texttt{mt}), and use the target side of the parallel corpus as the pseudo post-edited output (\texttt{tgt}). \item \texttt{src-mt1-mt2}: Take a corpus in the source language (\texttt{src}) and translate it with two different MT systems that have a clear system-level translation quality ordering. Then, take the worse MT output as the MT output in the triplet (\texttt{mt1}) and the better one as the pseudo post-edited output in the triplet (\texttt{mt2}). \item \texttt{bt-rt-tgt}: Take a corpus in the target language (\texttt{tgt}), back-translate it into the source language (\texttt{bt}), and then translate again into the target language (\texttt{rt}). We then use \texttt{rt} as the MT output in the triplet and \texttt{tgt} as the pseudo post-edited output in the triplet. \item \texttt{src-rt-ft}: Take a parallel corpus, translate its source side and use the result as the pseudo post-edited output (\texttt{ft}), and round-trip translate its target side (\texttt{rt}) as the MT output in the translation triplet. \item \texttt{Multi-view Pseudo Post-Editing (MVPPE)}: Same as \citet{ding2021levenshtein}, we take a parallel corpus and translate the source side (\texttt{src}) with a multilingual translation system (\texttt{mt}) as the MT output in the triplet.
We then generate the pseudo-post-edited output by ensembling two different \emph{views} of the same model: (1) using the multilingual translation model as a translation model, with \texttt{src} as the input; (2) using the multilingual translation model as a paraphrasing model, with \texttt{tgt} as the input. The ensemble process is the same as ensembling standard MT models, and we perform beam search on top of the ensemble. Unless otherwise specified, we use the same ensembling weights of $\lambda_t = 2.0$ and $\lambda_p = 1.0$ as \citet{ding2021levenshtein}. \end{enumerate} \paragraph{Stage 3: Human Post-editing Finetuning} We follow the same procedure as stage 2, except that we finetune on the human post-edited dataset provided by the shared task organizers for this stage. \begin{table*}[] \centering \scalebox{0.9}{ \begin{tabular}{@{}lll@{}} \toprule Language Pair & Data Source & Sentence Pairs \\ \midrule English-German & WMT20 en-de parallel data & 44.2M \\ English-Chinese & shared task en-zh parallel & 20.3M \\ Romanian-English & shared task ro-en parallel & 3.09M \\ Russian-English & shared task ru-en parallel & 2.32M \\ Estonian-English & shared task et-en parallel & 880K \\ Estonian-English & shared task et-en parallel + NewsCrawl 14-17 & 3.42M \\ Nepalese-English & shared task ne-en parallel & 498K \\ Pashto-English & WMT20 Parallel Corpus Filtering Task & 347K \\ \bottomrule \end{tabular} } \caption{Source and statistics of parallel datasets used in our experiments. \label{tab:data}} \end{table*} \paragraph{Compatibility With Subwords} As pointed out before, since LevT predicts edits on a subword-level starting from translation training, we must construct reference tags that are compatible with the subword segmentation done for both the MT and the post-edited output. 
Specifically, we need to: (1) for inference, convert subword-level tags predicted by the model to word-level tags for evaluation, and (2) for both finetuning stages, build subword-level reference tags. We follow the same heuristic subword-level tag reference construction procedure as \citet{ding2021levenshtein}, which was shown to be helpful for task performance. \paragraph{Label Imbalance} Like several previous works \cite{lee-2020-cross,wang-etal-2020-hw-tscs,moura-etal-2020-ist}, we also observed that translation errors are often quite scarce, thus creating a skewed label distribution over the \texttt{OK} and \texttt{BAD} labels. Since it is critical for the model to reliably predict both classes of labels, we introduce an extra hyperparameter $\mu$ in the loss function that allows us to upweight the words that are classified with \texttt{BAD} tags in the reference. $$\mathcal{L} = \mathcal{L}_{OK} + \mu \mathcal{L}_{BAD}$$ \paragraph{Ensemble} For each binary label prediction made by the model, the model will give a score $p(\texttt{OK})$, which is translated into a binary label in post-processing. To ensemble predictions from $k$ models $p_1(\texttt{OK}), p_2(\texttt{OK}), \dots, p_k(\texttt{OK})$, we perform a linear combination of the scores for each label: $$p_E(\texttt{OK}) = \lambda_1 p_1(\texttt{OK}) + \lambda_2 p_2(\texttt{OK}) + \dots + \lambda_k p_k(\texttt{OK})$$ To determine the optimal interpolation weights, we optimize towards target-side MCC on the development set. Because the target-side MCC computation is not implemented in a way such that gradient information can be easily obtained, we experimented with two gradient-free optimization methods: the Powell method \cite{DBLP:journals/cj/Powell64} and the Nelder-Mead method \cite{DBLP:journals/cj/NelderM65}, both as implemented in \texttt{SciPy} \cite{2020SciPy-NMeth}.
We found that the Nelder-Mead method finds a better optimum on the development set while also leading to better performance on the devtest dataset (not involved in optimization). Hence, we use the Nelder-Mead optimizer for all of our final submissions with ensembles. We set the initial points of the Nelder-Mead optimization to be the vertices of the standard simplex in the $k$-dimensional space, with $k$ being the number of models. We find that it is critical to build ensembles from models that yield diverse yet high-quality outputs. Specifically, we notice that ensembles of multiple checkpoints from a single experimental run are not helpful. Hence, for each language pair, we select 2-8 models with different training configurations that also have the highest performance to build our final ensemble model for submission.
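The gradient-free weight search can be sketched with \texttt{SciPy}'s Nelder-Mead implementation (a minimal sketch; the uniform starting point is a simplification of the simplex-vertex seeding described above, and `dev_metric` stands in for the target-side MCC scorer):

```python
import numpy as np
from scipy.optimize import minimize

def optimize_ensemble(model_scores, dev_metric, x0=None):
    """Tune linear ensemble weights over k models' p(OK) scores by
    maximizing a dev-set metric with the gradient-free Nelder-Mead method.
    model_scores: (k, n) array of per-position p(OK) from each model;
    dev_metric: callable on the combined scores, higher is better."""
    k = model_scores.shape[0]
    if x0 is None:
        x0 = np.full(k, 1.0 / k)  # uniform start (simplified seeding)
    res = minimize(lambda w: -dev_metric(w @ model_scores), x0,
                   method="Nelder-Mead")
    return res.x
```

The combined scores `res.x @ model_scores` are then thresholded into \texttt{OK}/\texttt{BAD} labels in post-processing.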
\section{Introduction} Boolean functions are widespread in mathematics and computer science and can describe yes-no voter systems, hardware circuits, and predicates \citep{o2014analysis}. A Boolean function is a function from \ensuremath{\Varid{n}} bits to one bit, for example majority (\ensuremath{\Varid{maj}_{\Varid{n}}}), which returns the value that has majority among the \ensuremath{\Varid{n}} inputs for odd \ensuremath{\Varid{n}}. We are interested in the cost of evaluating Boolean functions: in the context of vote-counting after an election the cost is the number of votes we need to count before we know the outcome for certain. \subsection{Vote counting example} In US elections a presidential candidate can lose even if they win the popular vote. One reason for this is that the outcome is not directly determined by the majority, but rather by majority iterated twice.\footnote{The actual presidential election is a direct majority vote among the electors who are not formally bound by their state's outcome.} Our running example is a much simplified case: consider 3 states with 3 voters in each. \[\majbrace{\majbrace{\val{(1,1)}}{\val{(1,2)}}{\val{(1,3)}}{m_1 = \ensuremath{\Varid{maj}_{3}\;(\mathbin{...})}}} {\majbrace{\val{(2,1)}}{\val{(2,2)}}{\val{(2,3)}}{m_2 = \ensuremath{\Varid{maj}_{3}\;(\mathbin{...})}}} {\majbrace{\val{(3,1)}}{\val{(3,2)}}{\val{(3,3)}}{m_3 = \ensuremath{\Varid{maj}_{3}\;(\mathbin{...})}}}{\ensuremath{\Varid{maj}_{3}}(m_1,m_2,m_3)} \] We first compute the majority in each ``state'' of three bits, and then the majority of $m_1$, $m_2$, and $m_3$.
For example we see here $\ensuremath{\mathbf{0}},\ensuremath{\mathbf{1}},\ensuremath{\mathbf{0}}$ which gives $m_1 = \ensuremath{\mathbf{0}}$, then $\ensuremath{\mathbf{1}},\ensuremath{\mathbf{0}},\ensuremath{\mathbf{1}}$ which gives $m_2 = \ensuremath{\mathbf{1}}$, and $\ensuremath{\mathbf{0}},\ensuremath{\mathbf{1}},\ensuremath{\mathbf{0}}$ again which gives $m_3 = \ensuremath{\mathbf{0}}$. The final majority is \ensuremath{\mathbf{0}}: \[\majbrace{\majbrace{\ensuremath{\mathbf{0}}}{\ensuremath{\mathbf{1}}}{\ensuremath{\mathbf{0}}}{m_1 = \ensuremath{\mathbf{0}}}} {\majbrace{\ensuremath{\mathbf{1}}}{\ensuremath{\mathbf{0}}}{\ensuremath{\mathbf{1}}}{m_2 = \ensuremath{\mathbf{1}}}} {\majbrace{\ensuremath{\mathbf{0}}}{\ensuremath{\mathbf{1}}}{\ensuremath{\mathbf{0}}}{m_3 = \ensuremath{\mathbf{0}}}}{\ensuremath{\Varid{maj}_{3}} = \ensuremath{\mathbf{0}}} \] \noindent But if we switch the first and 8th bit (perhaps through gerrymandering) we get another example with the changed bits marked in red: \[\majbrace{\majbrace{\textcolor{red}{\ensuremath{\mathbf{1}}}}{\ensuremath{\mathbf{1}}}{\ensuremath{\mathbf{0}}}{m_1 = \textcolor{red}{\ensuremath{\mathbf{1}}}}} {\majbrace{\ensuremath{\mathbf{1}}}{\ensuremath{\mathbf{0}}}{\ensuremath{\mathbf{1}}}{m_2 = \ensuremath{\mathbf{1}}}} {\majbrace{\ensuremath{\mathbf{0}}}{\textcolor{red}{\ensuremath{\mathbf{0}}}}{\ensuremath{\mathbf{0}}}{m_3 = \ensuremath{\mathbf{0}}}}{\ensuremath{\Varid{maj}_{3}} = \textcolor{red}{\ensuremath{\mathbf{1}}}} \] This changes $m_1$ from \ensuremath{\mathbf{0}}{} to \ensuremath{\mathbf{1}}{} without affecting $m_2$, or $m_3$. But now the two-level majority is changed to \ensuremath{\mathbf{1}}, just from the switch of two bits. Both examples have four \ensuremath{\mathbf{1}}{}'s and five \ensuremath{\mathbf{0}}{}'s but the result is different based on the positioning of the bits. 
In our case the two-level majority is \ensuremath{\mathbf{1}}{} even though there are fewer \ensuremath{\mathbf{1}}{}'s than \ensuremath{\mathbf{0}}{}'s. This means that the \ensuremath{\mathbf{0}}{}'s ``lose'' even though they won the ``popular vote''. \subsection{Related work} We use binary decision trees to describe the evaluation order of Boolean functions. The depth of the decision tree corresponds to the number of votes needed to know the outcome for certain. This is called deterministic complexity. Another well-known notion is randomized complexity, and the complexity bounds of iterated majority have been studied in \cite{landau2006lower,leonardos2013improved,magniez2016improved}. Iterated majority on two levels corresponds to the Boolean function for US elections as described above, and we are particularly interested in this function. Other relevant concepts are certificate complexity, the degree of a Boolean function, and communication complexity \citep{buhrman2002complexity}. Complexity measures related specifically to circuits are circuit complexity, additive complexity, and multiplicative complexity \citep{wegener1987complexity}. Thus, there are many competing complexity measures, but we focus on level-\ensuremath{\Varid{p}}-complexity --- a function of the probability that a bit is 1 \citep{garban2014noise}. Level-\ensuremath{\Varid{p}}-complexity is more complicated than deterministic complexity but is easier to compute than other more general complexity measures like full randomized complexity. Moreover, level-\ensuremath{\Varid{p}}-complexity has many interesting properties, as can be seen in \citep{jansson2022level}. This paper presents a purely functional library for computing level-\ensuremath{\Varid{p}}-complexity of Boolean functions in general, and for two-level iterated three-bit majority in particular. The implementation is in Haskell, but the approach should carry over to other languages.
\subsection{Motivation} \begin{figure}[tbp] \centering \begin{tikzpicture} \node at (0,0){\includegraphics[width=0.7\textwidth]{plots/4polys.pdf}}; \node at (-0.5,-0.5){% \begin{minipage}{0.3\textwidth}% \tiny \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{7}{@{}>{\hspre}l<{\hspost}@{}}% \column{9}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[7]{}\Varid{sim}_{5}\;[\mskip1.5mu \Varid{x}_{1},\Varid{x}_{2},\Varid{x}_{3},\Varid{x}_{4},\Varid{x}_{5}\mskip1.5mu]\mathrel{=}{}\<[E]% \\ \>[7]{}\hsindent{2}{}\<[9]% \>[9]{}\neg \;(\Varid{same}\;[\mskip1.5mu \Varid{x}_{1},\Varid{x}_{2},\Varid{x}_{3}\mskip1.5mu]){}\<[E]% \\ \>[7]{}\hsindent{2}{}\<[9]% \>[9]{}\mathrel{\vee}\Varid{same}\;[\mskip1.5mu \Varid{x}_{4},\Varid{x}_{5}\mskip1.5mu]{}\<[E]% \ColumnHook \end{hscode}\resethooks \end{minipage} }; \end{tikzpicture} \caption{The four polynomials computed by \ensuremath{\Varid{genAlgThinMemo}\;\mathrm{5}\;\Varid{sim}_{5}}.} \label{fig:4polys} \end{figure} To get a feeling for what the end result will look like we start with two examples which will be explained in detail later: the level-\ensuremath{\Varid{p}}-complexity of 2-level iterated majority \ensuremath{\Varid{maj}_{3}^2} and of a 5-bit function we call \ensuremath{\Varid{sim}_{5}}, defined in Fig.~\ref{fig:4polys}\footnote{The function \ensuremath{\Varid{sim}_{5}} is referred to as $f_{\!AC}$ in \citep{jansson2022level}.}. The complexity is a piecewise polynomial function of the probability \ensuremath{\Varid{p}} and \ensuremath{\Varid{sim}_{5}} is the smallest arity Boolean function we have found which has more than one polynomial piece contributing to the complexity. Polynomials are represented by their coefficients: for example, \ensuremath{\Conid{P}\;[\mskip1.5mu \mathrm{5},\mathbin{-}\mathrm{8},\mathrm{8}\mskip1.5mu]} represents \(5-8x+8x^2\). 
The function \ensuremath{\Varid{genAlgThinMemo}} uses thinning and memoization to generate a set of minimal cost polynomials. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{11}{@{}>{\hspre}l<{\hspost}@{}}% \column{25}{@{}>{\hspre}l<{\hspost}@{}}% \column{30}{@{}>{\hspre}l<{\hspost}@{}}% \column{31}{@{}>{\hspre}l<{\hspost}@{}}% \column{49}{@{}>{\hspre}l<{\hspost}@{}}% \column{67}{@{}>{\hspre}c<{\hspost}@{}}% \column{67E}{@{}l@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{ps5}\mathrel{=}\Varid{genAlgThinMemo}\;\mathrm{5}\;{}\<[25]% \>[25]{}\Varid{sim}_{5}{}\<[31]% \>[31]{}\mathbin{::}\Conid{Set}\;(\Conid{Poly}\;\mathbb{Q}){}\<[E]% \\ \>[B]{}\Varid{check5}\mathrel{=}{}\<[11]% \>[11]{}\Varid{ps5}\doubleequals\Varid{fromList}\;[\mskip1.5mu {}\<[30]% \>[30]{}\Conid{P}\;[\mskip1.5mu \mathrm{2},\mathrm{6},\mathbin{-}\mathrm{10},\mathrm{8},\mathbin{-}\mathrm{4}\mskip1.5mu],{}\<[49]% \>[49]{}\Conid{P}\;[\mskip1.5mu \mathrm{4},\mathbin{-}\mathrm{2},\mathbin{-}\mathrm{3},\mathrm{8},\mathbin{-}\mathrm{2}\mskip1.5mu],{}\<[E]% \\ \>[30]{}\Conid{P}\;[\mskip1.5mu \mathrm{5},\mathbin{-}\mathrm{8},\mathrm{9},\mathrm{0},\mathbin{-}\mathrm{2}\mskip1.5mu],{}\<[49]% \>[49]{}\Conid{P}\;[\mskip1.5mu \mathrm{5},\mathbin{-}\mathrm{8},\mathrm{8}\mskip1.5mu]{}\<[67]% \>[67]{}\mskip1.5mu]{}\<[67E]% \ColumnHook \end{hscode}\resethooks The graph, in Fig.~\ref{fig:4polys}, shows that different polynomials dominate in different intervals. \ensuremath{\Conid{P}_{1}} is best near the end-points, but \ensuremath{\Conid{P}_{4}} is best near \ensuremath{\Varid{p}\mathrel{=}\mathrm{1}\mathbin{/}\mathrm{2}} (despite being really bad near the endpoints). The level-\ensuremath{\Varid{p}}-complexity is the piecewise polynomial minimum, a combination of \ensuremath{\Conid{P}_{1}} and \ensuremath{\Conid{P}_{4}}. 
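The dominance pattern can be spot-checked with exact rational arithmetic. The sketch below is ours, not the library's; \ensuremath{\Varid{evalP}} is a throwaway helper evaluating a coefficient list at a point, and we assume the labels $P_1,\ldots,P_4$ follow the order listed in \ensuremath{\Varid{check5}}:

```haskell
-- Evaluate a coefficient list [a0, a1, ...] at p, by Horner's rule.
evalP :: [Rational] -> Rational -> Rational
evalP cs p = foldr (\c acc -> c + p * acc) 0 cs

-- The four polynomials from check5, taken in the listed order P1 .. P4.
p1, p2, p3, p4 :: [Rational]
p1 = [2, 6, -10, 8, -4]
p2 = [4, -2, -3, 8, -2]
p3 = [5, -8, 9, 0, -2]
p4 = [5, -8, 8]
```

At the endpoints $P_1$ gives 2 while the others give 4 or 5, and at $p = 1/2$ the values are $13/4$, $25/8$, $25/8$ and $3$, so $P_4$ is smallest there, consistent with Fig.~\ref{fig:4polys}.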
This computation can be done by exhaustive search over the \ensuremath{\mathrm{54192}} different decision trees and \ensuremath{\mathrm{39}} resulting polynomials, but for more complex Boolean functions the doubly exponential growth makes that impractical. For our running example, \ensuremath{\Varid{maj}_{3}^2}, a crude estimate indicates we would have $10^{111}$ decision trees to search and very many polynomials. Thus the computation would be intractable if it were not for the combination of thinning, memoization, and symbolic comparison of polynomials. Thanks to symmetries in the problem there turns out to be just one dominating polynomial: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{11}{@{}>{\hspre}l<{\hspost}@{}}% \column{25}{@{}>{\hspre}l<{\hspost}@{}}% \column{30}{@{}>{\hspre}l<{\hspost}@{}}% \column{31}{@{}>{\hspre}l<{\hspost}@{}}% \column{60}{@{}>{\hspre}c<{\hspost}@{}}% \column{60E}{@{}l@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{ps9}\mathrel{=}\Varid{genAlgThinMemo}\;\mathrm{9}\;{}\<[25]% \>[25]{}\Varid{maj}_{3}^2{}\<[31]% \>[31]{}\mathbin{::}\Conid{Set}\;(\Conid{Poly}\;\mathbb{Q}){}\<[E]% \\ \>[B]{}\Varid{check9}\mathrel{=}{}\<[11]% \>[11]{}\Varid{ps9}\doubleequals\Varid{fromList}\;[\mskip1.5mu {}\<[30]% \>[30]{}\Conid{P}\;[\mskip1.5mu \mathrm{4},\mathrm{4},\mathrm{6},\mathrm{9},\mathbin{-}\mathrm{61},\mathrm{23},\mathrm{67},\mathbin{-}\mathrm{64},\mathrm{16}\mskip1.5mu]{}\<[60]% \>[60]{}\mskip1.5mu]{}\<[60E]% \ColumnHook \end{hscode}\resethooks The graph, shown later in Fig.~\ref{fig:itermajalgs2}, shows that only 4 bits are needed in the limiting cases of \ensuremath{\Varid{p}\mathrel{=}\mathrm{0}} or \ensuremath{\Varid{p}\mathrel{=}\mathrm{1}} and that a bit more than 6 bits are needed in the maximum at \ensuremath{\Varid{p}\mathrel{=}\mathrm{1}\mathbin{/}\mathrm{2}}. 
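One way to reproduce the crude estimate: if no subfunction becomes constant before all bits are set, an \ensuremath{\Varid{n}}-bit function has $t(n) = n \cdot t(n-1)^2$ decision trees (\ensuremath{\Varid{n}} choices of root bit, then independent choices for the two subtrees). This recurrence is our own reconstruction of the estimate, not code from the library:

```haskell
-- Upper-bound count of decision trees for an n-bit function,
-- assuming no subfunction becomes constant early:
-- n root choices, then independent 0- and 1-subtrees.
trees :: Int -> Integer
trees 0 = 1
trees n = fromIntegral n * trees (n - 1) ^ 2
```

`trees 9` has 112 decimal digits, i.e. on the order of $10^{111}$ trees for a 9-bit function, which is why plain exhaustive search for \ensuremath{\Varid{maj}_{3}^2} is hopeless.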
\subsection{Contributions} \label{sec:aim} This paper presents a Haskell library for computing level-\ensuremath{\Varid{p}}-complexity of Boolean functions in general, and for \ensuremath{\Varid{maj}_{3}^2} in particular. The level-\ensuremath{\Varid{p}}-complexity of \ensuremath{\Varid{maj}_{3}^2} was conjectured in \citet{jansson2022level}, but could not be proven because it was hard to generate all possible decision trees. This paper fills that gap, by showing that the conjecture is false and by computing the true level-\ensuremath{\Varid{p}}-complexity of \ensuremath{\Varid{maj}_{3}^2}. The strength of our implementation is that it calculates the level-$p$-complexity of Boolean functions quickly and correctly, in contrast to tedious calculations by hand. Our specification uses exhaustive search and considers all possible candidates (decision trees). Some partial candidates dominate (many) others, which may be discarded. Thinning \citep{bird_gibbons_2020} is an algorithmic design technique which maintains a small set of partial candidates which provably dominate all other candidates. We hope that one contribution of this paper is an interesting example of how a combination of algorithmic techniques can be used to make the intractable tractable. The code in this paper is available on GitHub\footnote{The paper repository is at \url{https://github.com/juliajansson/BoFunComplexity}.} and uses packages from \citet{JanssonIonescuBernardyDSLsofMathBook2022}. \section{Background} \label{sec:Background} To explain what level-\ensuremath{\Varid{p}}-complexity of Boolean functions means we introduce some background about Boolean functions, decision trees, cost, and complexity. \subsection{Boolean functions} A Boolean function \ensuremath{\Varid{f}\mathop{:}\mathbb{B}^{\Varid{n}}\,\to\,\mathbb{B}} is a function from \ensuremath{\Varid{n}} Boolean inputs to one Boolean output. The Boolean input type \ensuremath{\mathbb{B}} could be \ensuremath{\{\mskip1.5mu \Conid{False},\Conid{True}\mskip1.5mu\}}, \ensuremath{\{\mskip1.5mu \Conid{F},\Conid{T}\mskip1.5mu\}} or \ensuremath{\{\mskip1.5mu \mathrm{0},\mathrm{1}\mskip1.5mu\}} and from now on we use \ensuremath{\mathbf{0}}{} for false and \ensuremath{\mathbf{1}}{} for true in our notation. The simplest examples of Boolean functions are the two constant functions, returning \ensuremath{\ensuremath{\mathbf{0}}{}} or \ensuremath{\ensuremath{\mathbf{1}}{}} for every input. The usual logical gates like \ensuremath{\Varid{and}} and \ensuremath{\Varid{or}} are very common Boolean functions. Another example is the dictator function (also known as first projection), which is defined as \ensuremath{\ensuremath{\Varid{dict}_{\Varid{n}}}\;[\mskip1.5mu \Varid{x}_{1},\mathbin{...},\Varid{x}_n\mskip1.5mu]\mathrel{=}\Varid{x}_{1}} when the dictator is bit~\ensuremath{\mathrm{1}}. A naive implementation of Boolean functions could be as functions \ensuremath{\Varid{f}\mathop{:}[\mskip1.5mu \mathbb{B}\mskip1.5mu]\,\to\,\mathbb{B}}, but that turns out to be inefficient. Instead we use Binary Decision Diagrams (\ensuremath{\Conid{BDD}}s) \citep{Bryant_BDD_1986} as implemented in Masahiro Sakai's excellent Hackage package\footnote{\url{https://github.com/msakai/haskell-decision-diagrams}}.
In the complexity computation, we only need two operations on Boolean functions, which we capture in the following type class interface: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{12}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\mathbf{class}\;\Conid{BoFun}\;\Varid{bf}\;\mathbf{where}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\Varid{isConst}{}\<[12]% \>[12]{}\mathbin{::}\Varid{bf}\,\to\,\Conid{Maybe}\;\mathbb{B}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\Varid{setBit}{}\<[12]% \>[12]{}\mathbin{::}\Conid{Index}\,\to\,\mathbb{B}\,\to\,\Varid{bf}\,\to\,\Varid{bf}{}\<[E]% \\[\blanklineskip]% \>[B]{}\mathbf{type}\;\Conid{Index}\mathrel{=}\mathbb{N}{}\<[E]% \ColumnHook \end{hscode}\resethooks The use of a type class here means we keep the interface to the BDD implementation minimal, which makes proofs easier and gives better feedback from the type system. The first method, \ensuremath{\Varid{isConst}\;\Varid{f}}, returns \ensuremath{\Conid{Just}\;\Varid{b}} iff the function \ensuremath{\Varid{f}} is constant and always returns \ensuremath{\Varid{b}\mathbin{::}\mathbb{B}}. The second method, \ensuremath{\Varid{setBit}\;\Varid{i}\;\Varid{b}\;\Varid{f}}, restricts a Boolean function (on \ensuremath{\Varid{n}\mathbin{+}\mathrm{1}} bits) by setting its \ensuremath{\Varid{i}}th bit to \ensuremath{\Varid{b}}. The result is a ``subfunction'' on the remaining \ensuremath{\Varid{n}} bits, abbreviated $\fixatill{i}{b}{f}$, and illustrated in Figure \ref{fig:subf}.
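To make the interface concrete, here is a deliberately naive instance for the list-based representation dismissed above as inefficient: it decides constness by brute force over all $2^n$ inputs. The class is restated with \ensuremath{\Conid{Int}} indices so the sketch compiles on its own; all names here are ours, not the library's:

```haskell
type B = Bool

-- The interface from the text, restated so this sketch is standalone.
class BoFun bf where
  isConst :: bf -> Maybe B
  setBit  :: Int -> B -> bf -> bf

-- Naive representation: arity plus semantic function.
data Naive = Naive Int ([B] -> B)

allInputs :: Int -> [[B]]
allInputs 0 = [[]]
allInputs n = [b : bs | b <- [False, True], bs <- allInputs (n - 1)]

instance BoFun Naive where
  -- Decide constness by enumerating all 2^n inputs (hence "naive").
  isConst (Naive n f)
    | and vals     = Just True
    | all not vals = Just False
    | otherwise    = Nothing
    where vals = map f (allInputs n)
  -- Fix bit i (1-indexed), yielding a subfunction on n-1 bits.
  setBit i b (Naive n f) =
    Naive (n - 1) (\xs -> f (take (i - 1) xs ++ [b] ++ drop (i - 1) xs))

and2 :: Naive
and2 = Naive 2 and
```

With this instance, \ensuremath{\Varid{setBit}\;\mathrm{1}\;\ensuremath{\mathbf{0}}{}\;\Varid{and}_{2}} is constant \ensuremath{\mathbf{0}}{} while \ensuremath{\Varid{setBit}\;\mathrm{1}\;\ensuremath{\mathbf{1}}{}\;\Varid{and}_{2}} is not constant, matching the behaviour described next.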
\begin{figure}[tbp] \centering \begin{forest} for tree = minimum height=1ex, text depth = 0.25ex, anchor = north, edge = {-Stealth}, s sep = 3em, l sep = 2em }, EL/.style = before typesetting nodes= where n=1{edge label/.wrap value={node[pos=0.5,anchor=east]{#1}} {edge label/.wrap value={node[pos=0.5,anchor=west]{#1}} } [\ensuremath{\Varid{f}}, [, edge label = {node[midway,anchor = south] {1}} [\ensuremath{\fixatill{\mathrm{1}}{\ensuremath{\mathbf{0}}{}}{\Varid{f}}}, EL = \ensuremath{\ensuremath{\mathbf{0}}{}}] [\ensuremath{\fixatill{\mathrm{1}}{\ensuremath{\mathbf{1}}{}}{\Varid{f}}}, EL = \ensuremath{\ensuremath{\mathbf{1}}{}}] ] [, EL = 2 [\ensuremath{\fixatill{\mathrm{2}}{\ensuremath{\mathbf{0}}{}}{\Varid{f}}}, EL = \ensuremath{\ensuremath{\mathbf{0}}{}}] [\ensuremath{\fixatill{\mathrm{2}}{\ensuremath{\mathbf{1}}{}}{\Varid{f}}}, EL = \ensuremath{\ensuremath{\mathbf{1}}{}}, s sep = 1em [, EL = 1, s sep = 1ex [$\ldots$, EL = \ensuremath{\ensuremath{\mathbf{0}}{}}] [$\ldots$, EL = \ensuremath{\ensuremath{\mathbf{1}}{}}] ] [, EL = 2, s sep = 1ex [$\ldots$, EL = \ensuremath{\ensuremath{\mathbf{0}}{}}] [$\ldots$, EL = \ensuremath{\ensuremath{\mathbf{1}}{}}] ] [,no edge,EL = $\ldots$] [,no edge, EL = {}] [, EL = $n$, s sep = 1ex [$\ldots$, EL = \ensuremath{\ensuremath{\mathbf{0}}{}}] [$\ldots$, EL = \ensuremath{\ensuremath{\mathbf{1}}{}}] ] ] ] [,no edge,EL = $\ldots$] [,no edge] [, edge label = {node[midway,anchor = south]{$n+1$}} [\ensuremath{\fixatill{\Varid{n}\mathbin{+}\mathrm{1}}{\ensuremath{\mathbf{0}}{}}{\Varid{f}}}, EL = \ensuremath{\ensuremath{\mathbf{0}}{}}] [\ensuremath{\fixatill{\Varid{n}\mathbin{+}\mathrm{1}}{\ensuremath{\mathbf{1}}{}}{\Varid{f}}}, EL = \ensuremath{\ensuremath{\mathbf{1}}{}}] ] ] \end{forest} \caption{The tree of subfunctions of a Boolean function \ensuremath{\Varid{f}}. For brevity \ensuremath{\Varid{setBit}\;\Varid{i}\;\Varid{b}\;\Varid{f}} is denoted \ensuremath{\fixatill{\Varid{i}}{\Varid{b}}{\Varid{f}}}. 
This tree structure is also the call-graph for our generation of decision trees. Note that this is related to, but not the same as, the decision trees.} \label{fig:subf} \end{figure} As an example, for the function \ensuremath{\Varid{and}_{2}} we have that \ensuremath{\Varid{setBit}\;\Varid{i}\;\ensuremath{\mathbf{0}}{}\;\Varid{and}_{2}\mathrel{=}\Varid{const}\;\ensuremath{\mathbf{0}}{}} and \ensuremath{\Varid{setBit}\;\Varid{i}\;\ensuremath{\mathbf{1}}{}\;\Varid{and}_{2}\mathrel{=}\Varid{id}}. For \ensuremath{\Varid{and}_{2}} we get the same result for \ensuremath{\Varid{i}\mathrel{=}\mathrm{1}} or \ensuremath{\mathrm{2}}, but for the dictator function the result depends on whether we pick the dictator index or not. We get \ensuremath{\Varid{setBit}\;\mathrm{1}\;\Varid{b}\;\ensuremath{\Varid{dict}_{\Varid{n}\mathbin{+}\mathrm{1}}}\mathrel{=}\ensuremath{\Varid{const}_{\Varid{n}}}\;\Varid{b}}, since the result of the dictator function is already decided. Otherwise, if \ensuremath{\Varid{i}\not\doubleequals\mathrm{1}}, we get \ensuremath{\Varid{setBit}\;\Varid{i}\;\Varid{b}\;\ensuremath{\Varid{dict}_{\Varid{n}\mathbin{+}\mathrm{1}}}\mathrel{=}\ensuremath{\Varid{dict}_{\Varid{n}}}} irrespective of the value of \ensuremath{\Varid{b}} since only the value of the dictator bit matters. This behaviour is shown in Figure \ref{fig:dict}.
\begin{figure}[htbp] \centering \begin{forest} for tree = minimum height=1ex, text depth = 0.25ex, anchor = north, edge = {-Stealth}, s sep = 1em, l sep = 2em }, EL/.style = before typesetting nodes= where n=1{edge label/.wrap value={node[pos=0.5,anchor=east]{#1}} {edge label/.wrap value={node[pos=0.5,anchor=west]{#1}} } [\ensuremath{\ensuremath{\Varid{dict}_{\Varid{n}\mathbin{+}\mathrm{1}}}}, [, edge label = {node[midway,anchor = south] {1}} [\ensuremath{\ensuremath{\Varid{const}_{\Varid{n}}}\;\ensuremath{\mathbf{0}}{}}, EL = \ensuremath{\ensuremath{\mathbf{0}}{}}] [\ensuremath{\ensuremath{\Varid{const}_{\Varid{n}}}\;\ensuremath{\mathbf{1}}{}}, EL = \ensuremath{\ensuremath{\mathbf{1}}{}}] ] [, EL = 2 [\ensuremath{\ensuremath{\Varid{dict}_{\Varid{n}}}}, EL = \ensuremath{\ensuremath{\mathbf{0}}{}}] [\ensuremath{\ensuremath{\Varid{dict}_{\Varid{n}}}}, EL = \ensuremath{\ensuremath{\mathbf{1}}{}}] ] [,no edge,EL = $\ldots$] [,no edge] [, edge label = {node[midway,anchor = south]{$n+1$}} [\ensuremath{\ensuremath{\Varid{dict}_{\Varid{n}}}}, EL = \ensuremath{\ensuremath{\mathbf{0}}{}}] [\ensuremath{\ensuremath{\Varid{dict}_{\Varid{n}}}}, EL = \ensuremath{\ensuremath{\mathbf{1}}{}}] ] ] \end{forest} \caption{The tree of subfunctions of the \ensuremath{\ensuremath{\Varid{dict}_{\Varid{n}\mathbin{+}\mathrm{1}}}} function.} \label{fig:dict} \end{figure} \subsection{Decision trees} Consider a decision tree that picks the \ensuremath{\Varid{n}} bits of a Boolean function \ensuremath{\Varid{f}} in a deterministic way depending on the values of the bits picked further up the tree. Decision trees are referred to as algorithms in \citep{jansson2022level}, \citep{garban2014noise} and \citep{landau2006lower}. Given a natural number \ensuremath{\Varid{n}} and a Boolean function \ensuremath{\Varid{f}}, a decision tree \ensuremath{\Varid{t}} describes one way to evaluate the function \ensuremath{\Varid{f}}. 
The Haskell datatype is as follows: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\mathbf{data}\;\Conid{DecTree}\mathrel{=}\Conid{Res}\;\mathbb{B}\mid \Conid{Pick}\;\Conid{Index}\;\Conid{DecTree}\;\Conid{DecTree}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\mathbf{deriving}\;(\Conid{Eq},\Conid{Ord},\Conid{Show}){}\<[E]% \ColumnHook \end{hscode}\resethooks Part of the ``rules of the game'' in the mathematical literature is that you must return a \ensuremath{\Conid{Res}}ult if the function is constant and you may only \ensuremath{\Conid{Pick}} an index once. We can capture most of these rules with a dependently typed version of the \ensuremath{\Conid{DecTree}} datatype (here expressed in \ensuremath{\Conid{Agda}} syntax). Here we use two type indices: \ensuremath{\Varid{t}\mathop{:}\Conid{DecTree}\;\Varid{n}\;\Varid{f}} is a decision tree for the Boolean function \ensuremath{\Varid{f}}, of arity \ensuremath{\Varid{n}}. The \ensuremath{\Conid{Res}} constructor may only be used for constant functions (but for any arity), while \ensuremath{\Conid{Pick}\;\Varid{i}} takes two subtrees for Boolean functions of arity \ensuremath{\Varid{n}} to a tree of arity \ensuremath{\Varid{suc}\;\Varid{n}\mathrel{=}\Varid{n}\mathbin{+}\mathrm{1}}.
\begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{9}{@{}>{\hspre}c<{\hspost}@{}}% \column{9E}{@{}l@{}}% \column{12}{@{}>{\hspre}l<{\hspost}@{}}% \column{15}{@{}>{\hspre}c<{\hspost}@{}}% \column{15E}{@{}l@{}}% \column{18}{@{}>{\hspre}l<{\hspost}@{}}% \column{40}{@{}>{\hspre}l<{\hspost}@{}}% \column{62}{@{}>{\hspre}l<{\hspost}@{}}% \column{88}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\mathbf{data}\;\Conid{DecTree}{}\<[15]% \>[15]{}\mathop{:}{}\<[15E]% \>[18]{}(\Varid{n}\mathop{:}\mathbb{N})\,\to\,(\Varid{f}\mathop{:}\Conid{BoolFun}\;\Varid{n})\,\to\,\Conid{Set}\;\mathbf{where}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\Conid{Res}{}\<[9]% \>[9]{}\mathop{:}{}\<[9E]% \>[40]{}(\Varid{b}\mathop{:}\mathbb{B})\,\to\,{}\<[62]% \>[62]{}\Conid{DecTree}\;\Varid{n}\;(\ensuremath{\Varid{const}_{\Varid{n}}}\;\Varid{b}){}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\Conid{Pick}{}\<[9]% \>[9]{}\mathop{:}{}\<[9E]% \>[12]{}\{\mskip1.5mu \Varid{f}\mathop{:}\Conid{BoolFun}\;(\Varid{suc}\;\Varid{n})\mskip1.5mu\}\,\to\,{}\<[40]% \>[40]{}(\Varid{i}\mathop{:}\Conid{Fin}\;(\Varid{suc}\;\Varid{n}))\,\to\,{}\<[62]% \>[62]{}\Conid{DecTree}\;\Varid{n}\;(\Varid{setBit}\;\Varid{i}\;\ensuremath{\mathbf{0}}{}\;\Varid{f})\,\to\,{}\<[E]% \\ \>[62]{}\Conid{DecTree}\;\Varid{n}\;(\Varid{setBit}\;\Varid{i}\;\ensuremath{\mathbf{1}}{}\;{}\<[88]% \>[88]{}\Varid{f})\,\to\,{}\<[E]% \\ \>[62]{}\Conid{DecTree}\;(\Varid{suc}\;\Varid{n})\;\Varid{f}{}\<[E]% \\[\blanklineskip]% \>[B]{}\Varid{setBit}\mathop{:}\Conid{Fin}\;(\Varid{suc}\;\Varid{n})\,\to\,\mathbb{B}\,\to\,\Conid{BoolFun}\;(\Varid{suc}\;\Varid{n})\,\to\,\Conid{BoolFun}\;\Varid{n}{}\<[E]% \ColumnHook \end{hscode}\resethooks Note that the dependently typed version of \ensuremath{\Varid{setBit}} clearly indicates that the resulting function
\ensuremath{\Varid{g}\mathrel{=}(\Varid{setBit}\;\Varid{i}\;\Varid{b}\;\Varid{f})\mathop{:}\Conid{BoolFun}\;\Varid{n}} has arity one less than that of \ensuremath{\Varid{f}\mathop{:}\Conid{BoolFun}\;(\Varid{suc}\;\Varid{n})}. This helps maintain the invariant that each input bit may only be picked once. We use the Haskell versions, but the Agda versions capture the invariants better. We can use these rules backwards to generate all possible decision trees for a certain function. If the function is constant, returning \ensuremath{\Varid{b}\mathop{:}\mathbb{B}}, we immediately know that the only decision tree allowed is \ensuremath{\Conid{Res}\;\Varid{b}}. If it is not constant, we pick any index \ensuremath{\Varid{i}}, any decision tree \ensuremath{\Varid{t}_{0}} for the subfunction \ensuremath{\Varid{setBit}\;\Varid{i}\;\ensuremath{\mathbf{0}}{}\;\Varid{f}}, and any decision tree \ensuremath{\Varid{t}_{1}} for the subfunction \ensuremath{\Varid{setBit}\;\Varid{i}\;\ensuremath{\mathbf{1}}{}\;\Varid{f}}, recursively.
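Run backwards, these rules give a naive enumerator for all decision trees. The sketch below is standalone and uses our own list-based function representation rather than the library's BDDs; indices are renumbered after each \ensuremath{\Varid{setBit}}, matching the Agda typing:

```haskell
data DecTree = Res Bool | Pick Int DecTree DecTree
  deriving (Eq, Ord, Show)

-- Arity plus semantic function; constness decided by brute force.
data Naive = Naive Int ([Bool] -> Bool)

allInputs :: Int -> [[Bool]]
allInputs 0 = [[]]
allInputs n = [b : bs | b <- [False, True], bs <- allInputs (n - 1)]

isConst :: Naive -> Maybe Bool
isConst (Naive n f)
  | and vals     = Just True
  | all not vals = Just False
  | otherwise    = Nothing
  where vals = map f (allInputs n)

setBit :: Int -> Bool -> Naive -> Naive
setBit i b (Naive n f) =
  Naive (n - 1) (\xs -> f (take (i - 1) xs ++ [b] ++ drop (i - 1) xs))

-- All decision trees of a function: Res if constant, otherwise every Pick.
genAll :: Naive -> [DecTree]
genAll fun@(Naive n _) = case isConst fun of
  Just b  -> [Res b]
  Nothing -> [ Pick i t0 t1
             | i  <- [1 .. n]
             , t0 <- genAll (setBit i False fun)
             , t1 <- genAll (setBit i True  fun) ]

maj3 :: Naive
maj3 = Naive 3 (\bs -> 2 * length (filter id bs) > 3)
```

For \ensuremath{\Varid{maj}_{3}} this yields 12 trees (3 root choices, then 2 for each of the and-like and or-like subfunctions); the count of 54192 quoted earlier for \ensuremath{\Varid{sim}_{5}} shows how quickly plain enumeration becomes hopeless.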
\begin{figure}[htbp] \centering \begin{forest} for tree = draw, rounded corners, top color=white, bottom color=blue!20, font = \ttfamily, minimum height=1ex, text depth = 0.25ex, anchor = north, edge = {-Stealth}, s sep = 2em, l sep = 2em }, EL/.style = before typesetting nodes= where n=1{edge label/.wrap value={node[pos=0.5,anchor=east]{#1}} {edge label/.wrap value={node[pos=0.5,anchor=west]{#1}} } [$x_1$, root [$x_3$, EL=\ensuremath{\mathbf{0}} [\ensuremath{\mathbf{0}}, EL=\ensuremath{\mathbf{0}}, sharp corners] [$x_2$, EL=\ensuremath{\mathbf{1}} [\ensuremath{\mathbf{0}}, EL=\ensuremath{\mathbf{0}}, sharp corners] [\ensuremath{\mathbf{1}}, EL=\ensuremath{\mathbf{1}}, sharp corners]]] [$x_2$, EL=\ensuremath{\mathbf{1}} [$x_3$, EL=\ensuremath{\mathbf{0}} [\ensuremath{\mathbf{0}}, EL=\ensuremath{\mathbf{0}}, sharp corners] [\ensuremath{\mathbf{1}}, EL=\ensuremath{\mathbf{1}}, sharp corners]] [\ensuremath{\mathbf{1}}, EL=\ensuremath{\mathbf{1}}, sharp corners]] ] \end{forest} \caption{An example of a decision tree for \ensuremath{\Varid{maj}_{3}}. % The root node branches on the value of bit 1. % If it is \ensuremath{\mathbf{0}}{}, it picks bit 3, while if it is \ensuremath{\mathbf{1}}{}, it picks bit 2. % It then picks the last remaining bit if necessary. } \label{fig:Algex} \end{figure} An example of a decision tree for the majority function \ensuremath{\Varid{maj}_{3}} on three bits is defined by the expression \ensuremath{\Varid{ex1}} visualised in Figure~\ref{fig:Algex}. 
\begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{15}{@{}>{\hspre}l<{\hspost}@{}}% \column{36}{@{}>{\hspre}l<{\hspost}@{}}% \column{56}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{ex1}\mathrel{=}\Conid{Pick}\;\mathrm{1}\;{}\<[15]% \>[15]{}(\Conid{Pick}\;\mathrm{3}\;(\Conid{Res}\;\ensuremath{\mathbf{0}}{})\;{}\<[36]% \>[36]{}(\Conid{Pick}\;\mathrm{2}\;(\Conid{Res}\;\ensuremath{\mathbf{0}}{})\;(\Conid{Res}\;\ensuremath{\mathbf{1}}{})))\;{}\<[E]% \\ \>[15]{}(\Conid{Pick}\;\mathrm{2}\;(\Conid{Pick}\;\mathrm{3}\;(\Conid{Res}\;\ensuremath{\mathbf{0}}{})\;(\Conid{Res}\;\ensuremath{\mathbf{1}}{}))\;{}\<[56]% \>[56]{}(\Conid{Res}\;\ensuremath{\mathbf{1}}{})){}\<[E]% \ColumnHook \end{hscode}\resethooks We will define several functions as folds over \ensuremath{\Conid{DecTree}}. To do that we introduce a type class \ensuremath{\Conid{TreeAlg}} (for ``Tree Algebra''), which collects the two methods \ensuremath{\Varid{res}} and \ensuremath{\Varid{pic}}; the fold uses these in place of the constructors \ensuremath{\Conid{Res}} and \ensuremath{\Conid{Pick}}.
\begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{24}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\mathbf{class}\;\Conid{TreeAlg}\;\Varid{a}\;\mathbf{where}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\Varid{res}\mathbin{::}\mathbb{B}\,\to\,\Varid{a}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\Varid{pic}\mathbin{::}\Conid{Index}\,\to\,\Varid{a}\,\to\,\Varid{a}\,\to\,\Varid{a}{}\<[E]% \\[\blanklineskip]% \>[B]{}\Varid{foldDT}\mathbin{::}\Conid{TreeAlg}\;\Varid{a}\Rightarrow \Conid{DecTree}\,\to\,\Varid{a}{}\<[E]% \\ \>[B]{}\Varid{foldDT}\;(\Conid{Res}\;\Varid{b}){}\<[24]% \>[24]{}\mathrel{=}\Varid{res}\;\Varid{b}{}\<[E]% \\ \>[B]{}\Varid{foldDT}\;(\Conid{Pick}\;\Varid{i}\;\Varid{t}_{0}\;\Varid{t}_{1}){}\<[24]% \>[24]{}\mathrel{=}\Varid{pic}\;\Varid{i}\;(\Varid{foldDT}\;\Varid{t}_{0})\;(\Varid{foldDT}\;\Varid{t}_{1}){}\<[E]% \ColumnHook \end{hscode}\resethooks The \ensuremath{\Conid{TreeAlg}} class is used to define our decision trees but also for several other purposes. (In the implementation we additionally require some total order on \ensuremath{\Varid{a}} to enable efficient set computations.) 
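Any \ensuremath{\Conid{TreeAlg}} instance gives a new fold for free. As a toy illustration (our own example, not from the library), here is a leaf-counting algebra, with the class and \ensuremath{\Varid{foldDT}} restated so the sketch is standalone:

```haskell
data DecTree = Res Bool | Pick Int DecTree DecTree

class TreeAlg a where
  res :: Bool -> a
  pic :: Int -> a -> a -> a

foldDT :: TreeAlg a => DecTree -> a
foldDT (Res b)        = res b
foldDT (Pick i t0 t1) = pic i (foldDT t0) (foldDT t1)

-- A toy algebra: count the Res leaves of a decision tree.
newtype LeafCount = LC Int deriving (Eq, Show)

instance TreeAlg LeafCount where
  res _ = LC 1
  pic _ (LC a) (LC b) = LC (a + b)

-- The example tree ex1 from the text (False for 0, True for 1).
ex1 :: DecTree
ex1 = Pick 1 (Pick 3 (Res False) (Pick 2 (Res False) (Res True)))
             (Pick 2 (Pick 3 (Res False) (Res True)) (Res True))

leaves :: DecTree -> LeafCount
leaves = foldDT
```

Here `leaves ex1` counts the six leaves visible in Figure~\ref{fig:Algex}.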
We see that our decision tree type is the initial algebra of \ensuremath{\Conid{TreeAlg}} and that we can reimplement a generic version of \ensuremath{\Varid{ex1}} which can be instantiated to any \ensuremath{\Conid{TreeAlg}} instance: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{14}{@{}>{\hspre}l<{\hspost}@{}}% \column{34}{@{}>{\hspre}l<{\hspost}@{}}% \column{53}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\mathbf{instance}\;\Conid{TreeAlg}\;\Conid{DecTree}\;\mathbf{where}\;\Varid{res}\mathrel{=}\Conid{Res};\Varid{pic}\mathrel{=}\Conid{Pick};{}\<[E]% \\[\blanklineskip]% \>[B]{}\Varid{ex1}\mathbin{::}\Conid{TreeAlg}\;\Varid{a}\Rightarrow \Varid{a}{}\<[E]% \\ \>[B]{}\Varid{ex1}\mathrel{=}\Varid{pic}\;\mathrm{1}\;{}\<[14]% \>[14]{}(\Varid{pic}\;\mathrm{3}\;(\Varid{res}\;\ensuremath{\mathbf{0}}{})\;{}\<[34]% \>[34]{}(\Varid{pic}\;\mathrm{2}\;(\Varid{res}\;\ensuremath{\mathbf{0}}{})\;(\Varid{res}\;\ensuremath{\mathbf{1}}{})))\;{}\<[E]% \\ \>[14]{}(\Varid{pic}\;\mathrm{2}\;(\Varid{pic}\;\mathrm{3}\;(\Varid{res}\;\ensuremath{\mathbf{0}}{})\;(\Varid{res}\;\ensuremath{\mathbf{1}}{}))\;{}\<[53]% \>[53]{}(\Varid{res}\;\ensuremath{\mathbf{1}}{})){}\<[E]% \ColumnHook \end{hscode}\resethooks \subsection{Cost} The cost of a given \ensuremath{\Varid{x}\in \mathbb{B}^{\Varid{n}}} for some decision tree \ensuremath{\Varid{t}} is the length of the path from root to leaf when the input to \ensuremath{\Varid{f}} is \ensuremath{\Varid{x}}. 
This computation can be defined as an instance of \ensuremath{\Conid{TreeAlg}}: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{24}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\mathbf{type}\;\Conid{CostFun}\mathrel{=}\mathbb{B}^{\Varid{n}}\,\to\,\Conid{Int}{}\<[E]% \\[\blanklineskip]% \>[B]{}\mathbf{instance}\;\Conid{TreeAlg}\;\Conid{CostFun}\;\mathbf{where}\;\Varid{res}\mathrel{=}\Varid{resC};\Varid{pic}\mathrel{=}\Varid{pickC}{}\<[E]% \\[\blanklineskip]% \>[B]{}\Varid{resC}\mathbin{::}\mathbb{B}\,\to\,\Conid{CostFun}{}\<[E]% \\ \>[B]{}\Varid{resC}\;\Varid{b}\mathrel{=}\Varid{const}\;\mathrm{0}{}\<[E]% \\[\blanklineskip]% \>[B]{}\Varid{pickC}\mathbin{::}\Conid{Index}\,\to\,\Conid{CostFun}\,\to\,\Conid{CostFun}\,\to\,\Conid{CostFun}{}\<[E]% \\ \>[B]{}\Varid{pickC}\;\Varid{i}\;\Varid{c}_{0}\;\Varid{c}_{1}\mathrel{=}\lambda \Varid{t}\,\to\,{}\<[24]% \>[24]{}\mathrm{1}\mathbin{+}\mathbf{if}\;\Varid{index}\;\Varid{t}\;\Varid{i}\;\mathbf{then}\;\Varid{c}_{1}\;\Varid{t}\;\mathbf{else}\;\Varid{c}_{0}\;\Varid{t}{}\<[E]% \\[\blanklineskip]% \>[B]{}\Varid{cost}\mathbin{::}\Conid{DecTree}\,\to\,\Conid{CostFun}{}\<[E]% \\ \>[B]{}\Varid{cost}\mathrel{=}\Varid{foldDT}{}\<[E]% \ColumnHook \end{hscode}\resethooks We get that \ensuremath{\Varid{cost}\;\Varid{ex1}\;[\mskip1.5mu \ensuremath{\mathbf{1}}{},\ensuremath{\mathbf{0}}{},\ensuremath{\mathbf{1}}{}\mskip1.5mu]} is \ensuremath{\mathrm{3}}, while \ensuremath{\Varid{cost}\;\Varid{ex1}\;[\mskip1.5mu \ensuremath{\mathbf{1}}{},\ensuremath{\mathbf{1}}{},\ensuremath{\mathbf{0}}{}\mskip1.5mu]} is \ensuremath{\mathrm{2}}, as can be seen in Figure \ref{fig:Algex}. Taking the maximum of the cost over all \ensuremath{\Varid{x}\in \mathbb{B}^{\Varid{n}}} gives us the depth of the decision tree \ensuremath{\Varid{t}}. This can also be defined as an instance of \ensuremath{\Conid{TreeAlg}}. 
\begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\mathbf{type}\;\Conid{MaxCost}\mathrel{=}\mathbb{N}{}\<[E]% \\ \>[B]{}\Varid{pickMC}\;\Varid{i}\;\Varid{m}_{1}\;\Varid{m}_{2}\mathrel{=}\mathrm{1}\mathbin{+}\Varid{max}\;\Varid{m}_{1}\;\Varid{m}_{2}{}\<[E]% \\ \>[B]{}\mathbf{instance}\;\Conid{TreeAlg}\;\Conid{MaxCost}\;\mathbf{where}\;\Varid{res}\mathrel{=}\Varid{const}\;\mathrm{0};\Varid{pic}\mathrel{=}\Varid{pickMC}{}\<[E]% \\[\blanklineskip]% \>[B]{}\Varid{maxCost}\mathbin{::}\Conid{DecTree}\,\to\,\Conid{MaxCost}{}\<[E]% \\ \>[B]{}\Varid{maxCost}\mathrel{=}\Varid{foldDT}{}\<[E]% \ColumnHook \end{hscode}\resethooks By evaluating \ensuremath{\Varid{maxCost}\;\Varid{ex1}} we get that the maximum cost is \ensuremath{\mathrm{3}} for this example. Another kind of cost is \emph{expected} cost where we let the bits be independent and identically distributed. We use the distribution $\pi_p$ for the input bits which means that they are i.i.d.\ with Bernoulli distribution with parameter $p \in [0,1]$ \citep{garban2014noise}. As for the other cost notions, expected cost is also implemented as an instance of \ensuremath{\Conid{TreeAlg}}. 
\begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{62}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\mathbf{type}\;\Conid{ExpCost}\;\Varid{a}\mathrel{=}\Conid{Poly}\;\Varid{a}{}\<[E]% \\ \>[B]{}\mathbf{instance}\;\Conid{Ring}\;\Varid{a}\Rightarrow \Conid{TreeAlg}\;(\Conid{ExpCost}\;\Varid{a})\;\mathbf{where}\;\Varid{res}\mathrel{=}\Varid{resPoly};{}\<[62]% \>[62]{}\Varid{pic}\mathrel{=}\Varid{pickPoly}{}\<[E]% \\[\blanklineskip]% \>[B]{}\Varid{expCost}\mathbin{::}\Conid{Ring}\;\Varid{a}\Rightarrow \Conid{DecTree}\,\to\,\Conid{Poly}\;\Varid{a}{}\<[E]% \\ \>[B]{}\Varid{expCost}\mathrel{=}\Varid{foldDT}{}\<[E]% \ColumnHook \end{hscode}\resethooks Note that the expected cost of any decision tree for a Boolean function of \ensuremath{\Varid{n}} bits will always be a polynomial. We represent polynomials as lists of coefficients: \ensuremath{\Conid{P}\;[\mskip1.5mu \mathrm{1},\mathrm{2},\mathrm{3}\mskip1.5mu]} represents \ensuremath{\lambda \Varid{p}\,\to\,\mathrm{1}\mathbin{+}\mathrm{2}\mathbin{*}\Varid{p}\mathbin{+}\mathrm{3}\mathbin{*}\ensuremath{\Varid{p}^{\mathrm{2}}}}. The polynomial implementation relies heavily on material from \cite{JanssonIonescuBernardyDSLsofMathBook2022}. This includes the polynomial ring operations (\ensuremath{(\mathbin{+})}, \ensuremath{(\mathbin{-})}, \ensuremath{(\mathbin{*})}), \ensuremath{\Varid{gcd}}, \ensuremath{\Varid{divMod}}, symbolic derivative, and ordering. 
The \ensuremath{\Varid{res}} and \ensuremath{\Varid{pic}} functions are as follows: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{resPoly}\mathbin{::}\Conid{Ring}\;\Varid{a}\Rightarrow \mathbb{B}\,\to\,\Varid{a}{}\<[E]% \\ \>[B]{}\Varid{resPoly}\;\kern0.06em \vbox{\hrule\@width.5em} \Varid{b}\mathrel{=}\Varid{zero}{}\<[E]% \\[\blanklineskip]% \>[B]{}\Varid{pickPoly}\mathbin{::}\Conid{Ring}\;\Varid{a}\Rightarrow \Conid{Index}\,\to\,\Conid{Poly}\;\Varid{a}\,\to\,\Conid{Poly}\;\Varid{a}\,\to\,\Conid{Poly}\;\Varid{a}{}\<[E]% \\ \>[B]{}\Varid{pickPoly}\;\kern0.06em \vbox{\hrule\@width.5em} \Varid{i}\;\Varid{p}_{0}\;\Varid{p}_{1}\mathrel{=}\Varid{one}\mathbin{+}(\Varid{one}\mathbin{-}\Varid{xP})\mathbin{*}\Varid{p}_{0}\mathbin{+}\Varid{xP}\mathbin{*}\Varid{p}_{1}{}\<[E]% \ColumnHook \end{hscode}\resethooks Here \ensuremath{\Varid{zero}\mathrel{=}\Conid{P}\;[\mskip1.5mu \mskip1.5mu]} and \ensuremath{\Varid{one}\mathrel{=}\Conid{P}\;[\mskip1.5mu \mathrm{1}\mskip1.5mu]} represent \ensuremath{\Varid{const}\;\mathrm{0}} and \ensuremath{\Varid{const}\;\mathrm{1}} respectively while \ensuremath{\Varid{xP}\mathrel{=}\Conid{P}\;[\mskip1.5mu \mathrm{0},\mathrm{1}\mskip1.5mu]} is ``the polynomial \ensuremath{\Varid{p}}''. For \ensuremath{\Varid{pickPoly}\;\kern0.06em \vbox{\hrule\@width.5em} \;\Varid{p}_{0}\;\Varid{p}_{1}} we first have to pick one bit and then if this bit is \ensuremath{\ensuremath{\mathbf{0}}{}} (with probability $\mathbb{P}(\val{i} = \ensuremath{\mathbf{0}}) = (1-p)$) we get $p_0$ which is the polynomial for this case. If the bit is instead \ensuremath{\ensuremath{\mathbf{1}}{}} (with probability $\mathbb{P}(\val{i} = \ensuremath{\mathbf{1}}) = p$) we get $p_1$. The expected cost of the decision tree \ensuremath{\Varid{ex1}} is $2 + 2p - 2p^2$. 
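To make the three cost notions above concrete, here is a small self-contained sketch in plain Haskell. It is not the paper's exact lhs2TeX module: the tree type, the coefficient-list polynomials (without normalisation of trailing zeros), and the example tree \ensuremath{\Varid{majTree}} are simplified stand-ins of our own.

```haskell
-- Self-contained sketch of the three cost notions. The names DecTree, cost,
-- maxCost and expCost mirror the text; Poly is a simplified stand-in for the
-- paper's polynomial type, and majTree is a hypothetical example tree.
type Index = Int

data DecTree = Res Bool | Pick Index DecTree DecTree

-- cost t x: number of bits queried when running t on input x (1-based indices)
cost :: DecTree -> [Bool] -> Int
cost (Res _)        _ = 0
cost (Pick i t0 t1) x = 1 + cost (if x !! (i - 1) then t1 else t0) x

-- maxCost t: the depth of the tree, i.e. the worst-case number of queries
maxCost :: DecTree -> Int
maxCost (Res _)        = 0
maxCost (Pick _ t0 t1) = 1 + max (maxCost t0) (maxCost t1)

-- Polynomials in p as coefficient lists: P [1,2,3] denotes 1 + 2p + 3p^2
newtype Poly = P [Rational] deriving Show

addP :: Poly -> Poly -> Poly
addP (P as) (P bs) = P (go as bs)
  where go (x:xs) (y:ys) = x + y : go xs ys
        go xs     []     = xs
        go []     ys     = ys

mulP :: Poly -> Poly -> Poly
mulP (P [])     _ = P []
mulP (P (a:as)) q = addP (scaleP a q) (shiftP (mulP (P as) q))
  where scaleP c (P bs) = P (map (c *) bs)
        shiftP (P bs)   = P (0 : bs)

zeroP, oneP, xP :: Poly
zeroP = P []
oneP  = P [1]
xP    = P [0, 1]

-- expCost t: expected number of queries as a polynomial in p
expCost :: DecTree -> Poly
expCost (Res _)        = zeroP
expCost (Pick _ t0 t1) =
  oneP `addP` ((oneP `subP` xP) `mulP` expCost t0) `addP` (xP `mulP` expCost t1)
  where subP r (P bs) = addP r (P (map negate bs))

-- a decision tree for the 3-bit majority function (hypothetical example)
majTree :: DecTree
majTree = Pick 1 (Pick 2 (Res False) (Pick 3 (Res False) (Res True)))
                 (Pick 2 (Pick 3 (Res False) (Res True)) (Res True))
```

Here `maxCost majTree` is 3, and the coefficients of `expCost majTree` denote $2 + 2p - 2p^2$ (up to trailing zero coefficients, since this toy representation does not normalise), matching the polynomial computed for \ensuremath{\Varid{maj}_{3}} later in the paper.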
\subsection{Complexity} Now that we have introduced some notions of cost of decision trees, we can introduce the complexity of a Boolean function, which is the minimum of the cost over all decision trees for the given function. Using \ensuremath{\Varid{maxCost}} we specify the concept of deterministic complexity as \ensuremath{\Conid{D}\;(\Varid{f})\mathrel{=}\min_{\Varid{t}}\;(\Varid{maxCost}\;\Varid{t})} where \ensuremath{\Varid{t}} ranges over all the decision trees for the function \ensuremath{\Varid{f}}. The type is \ensuremath{\Conid{D}\mathop{:}\Conid{BoFun}\;\Varid{bf}\Rightarrow \Varid{bf}\,\to\,\mathbb{N}} and to calculate it we need to generate all the decision trees. The level-\ensuremath{\Varid{p}}-complexity is defined using \ensuremath{\Varid{expCost}} as $D_p(f) = \ensuremath{\min_{\Varid{t}}\;(\Varid{evalPoly}\;(\Varid{expCost}\;\Varid{t})\;\Varid{p})}$. Thus, for each probability \ensuremath{\Varid{p}} and Boolean function \ensuremath{\Varid{f}}, the minimum over all the decision trees gives the smallest expected cost. If we flip the argument order, we can see that $D_p(f)$ takes a Boolean function \ensuremath{\Varid{f}} to a function from \ensuremath{\Varid{p}} to the smallest expected cost. As \ensuremath{\Varid{expCost}} always returns a polynomial, the level-\ensuremath{\Varid{p}}-complexity is a continuous, piecewise polynomial function of \ensuremath{\Varid{p}}. In our implementation, we do not implement a special type for piecewise polynomials; we simply represent them as sets of polynomials and leave the final minimum to the surrounding code. More about the implementation is explained in Section~\ref{sec:method}.
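As a tiny illustration of that final minimisation step, the following sketch (with assumed names of our own; the real implementation minimises over sets of polynomials rather than plain functions) evaluates each candidate cost polynomial at a fixed $p$ and takes the minimum:

```haskell
-- Hypothetical helper: given the expected-cost polynomials of all decision
-- trees for f (here represented as plain functions of p), the
-- level-p-complexity at a fixed p is their pointwise minimum.
levelPAt :: Double -> [Double -> Double] -> Double
levelPAt p costs = minimum [c p | c <- costs]

-- two candidate cost functions, e.g. 2 + 2p - 2p^2 and the constant 2.25
candidates :: [Double -> Double]
candidates = [\p -> 2 + 2*p - 2*p^2, const 2.25]
```

At $p = 0.5$ the constant candidate wins (2.25 versus 2.5), while at $p = 0.1$ the first polynomial is smaller; this is exactly the kind of crossover that makes the level-\ensuremath{\Varid{p}}-complexity piecewise polynomial.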
\subsection{Examples of Boolean functions and their costs} \label{sec:ex} For the constant functions, we already know the result so \ensuremath{\Varid{maxCost}\;(\Conid{Res}\;\Varid{b})\mathrel{=}\mathrm{0}} and \ensuremath{\Varid{expCost}\;(\Conid{Res}\;\Varid{b})\mathrel{=}\Varid{zero}} and then $D(\ensuremath{\ensuremath{\Varid{const}_{\Varid{n}}}}) = D_p(\ensuremath{\ensuremath{\Varid{const}_{\Varid{n}}}}) = 0$. For the dictator function, there is only one minimizing decision tree irrespective of input: the one that picks bit 1 first. After asking the first bit the function is reduced to the constant function as can be seen in Figure \ref{fig:dict} and we get the optimal decision tree \ensuremath{\Varid{optTree}\mathrel{=}\Conid{Pick}\;\mathrm{1}\;(\Conid{Res}\;\ensuremath{\mathbf{0}}{})\;(\Conid{Res}\;\ensuremath{\mathbf{1}}{})}. The results are \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{10}{@{}>{\hspre}l<{\hspost}@{}}% \column{19}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{maxCost}\;{}\<[10]% \>[10]{}\Varid{optTree}{}\<[19]% \>[19]{}\mathrel{=}\mathrm{1}\mathbin{+}\Varid{max}\;\mathrm{0}\;\mathrm{0}\mathrel{=}\mathrm{1}{}\<[E]% \\ \>[B]{}\Varid{expCost}\;{}\<[10]% \>[10]{}\Varid{optTree}{}\<[19]% \>[19]{}\mathrel{=}\Varid{one}\mathbin{+}(\Varid{one}\mathbin{-}\Varid{xP})\mathbin{*}\Varid{zero}\mathbin{+}\Varid{xP}\mathbin{*}\Varid{zero}\mathrel{=}\Varid{one}\ .{}\<[E]% \ColumnHook \end{hscode}\resethooks This then gives $D(\ensuremath{\ensuremath{\Varid{dict}_{\Varid{n}}}}) = 1$, and similarly $D_p(\ensuremath{\ensuremath{\Varid{dict}_{\Varid{n}}}}) = 1$. 
The parity function is \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{count}\mathbin{::}\Conid{Eq}\;\Varid{a}\Rightarrow \Varid{a}\,\to\,[\mskip1.5mu \Varid{a}\mskip1.5mu]\,\to\,\Conid{Int}{}\<[E]% \\ \>[B]{}\Varid{count}\;\Varid{x}\mathrel{=}\Varid{length}\mathbin{\circ}\Varid{filter}\;(\Varid{x}\doubleequals){}\<[E]% \\[\blanklineskip]% \>[B]{}\Varid{par}_{\!\Varid{n}}\mathbin{::}\mathbb{B}^{\Varid{n}}\,\to\,\mathbb{B}{}\<[E]% \\ \>[B]{}\Varid{par}_{\!\Varid{n}}\mathrel{=}\Varid{odd}\mathbin{\circ}\Varid{count}\;\ensuremath{\mathbf{1}}{}{}\<[E]% \ColumnHook \end{hscode}\resethooks In this case all bits have to be picked to determine the parity, regardless of input. For example, if we first ask one bit to determine \ensuremath{\Varid{par}_{\!\Varid{n}\mathbin{+}\mathrm{1}}}, then we are left with two subtrees: \ensuremath{\Varid{t}_{0}} for \ensuremath{\Varid{par}_{\!\Varid{n}}} and \ensuremath{\Varid{t}_{1}} for \ensuremath{\neg \;\Varid{par}_{\!\Varid{n}}} as seen in Figure \ref{fig:par}. 
\begin{figure}[tbp] \centering \begin{forest} for tree = minimum height=1ex, text depth = 0.25ex, anchor = north, edge = {-Stealth}, s sep = 1em, l sep = 2em }, EL/.style = before typesetting nodes= where n=1{edge label/.wrap value={node[pos=0.5,anchor=east]{#1}} {edge label/.wrap value={node[pos=0.5,anchor=west]{#1}} } [\ensuremath{\Varid{par}_{\!\Varid{n}\mathbin{+}\mathrm{1}}}, [, edge label = {node[midway,anchor = south] {1}} [\ensuremath{\Varid{par}_{\!\Varid{n}}}, EL = \ensuremath{\ensuremath{\mathbf{0}}{}}] [\ensuremath{\neg \;\Varid{par}_{\!\Varid{n}}}, EL = \ensuremath{\ensuremath{\mathbf{1}}{}}] ] [, EL = 2 [\ensuremath{\Varid{par}_{\!\Varid{n}}}, EL = \ensuremath{\ensuremath{\mathbf{0}}{}}] [\ensuremath{\neg \;\Varid{par}_{\!\Varid{n}}}, EL = \ensuremath{\ensuremath{\mathbf{1}}{}}] ] [,no edge,EL = $\ldots$] [,no edge] [, edge label = {node[midway,anchor = south]{$n+1$}} [\ensuremath{\Varid{par}_{\!\Varid{n}}}, EL = \ensuremath{\ensuremath{\mathbf{0}}{}}] [\ensuremath{\neg \;\Varid{par}_{\!\Varid{n}}}, EL = \ensuremath{\ensuremath{\mathbf{1}}{}}] ] ] \end{forest} \caption{The recursive structure of the parity function (\ensuremath{\Varid{par}_{\!\Varid{n}}}). 
The pattern repeats all the way down to \ensuremath{\Varid{par}_{\!\mathrm{0}}\mathrel{=}\Varid{const}\;\ensuremath{\mathbf{0}}{}}.} \label{fig:par} \end{figure} Recursively, this gives \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{10}{@{}>{\hspre}l<{\hspost}@{}}% \column{26}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{maxCost}\;{}\<[10]% \>[10]{}(\Conid{Pick}\;\Varid{i}\;\Varid{t}_{0}\;\Varid{t}_{1}){}\<[26]% \>[26]{}\mathrel{=}\mathrm{1}\mathbin{+}\Varid{max}\;(\Varid{maxCost}\;\Varid{t}_{0})\;(\Varid{maxCost}\;\Varid{t}_{1})\mathrel{=}\mathrm{1}\mathbin{+}\Varid{max}\;\Varid{n}\;\Varid{n}\mathrel{=}\mathrm{1}\mathbin{+}\Varid{n}{}\<[E]% \\ \>[B]{}\Varid{expCost}\;{}\<[10]% \>[10]{}(\Conid{Pick}\;\Varid{i}\;\Varid{t}_{0}\;\Varid{t}_{1}){}\<[26]% \>[26]{}\mathrel{=}\Varid{one}\mathbin{+}(\Varid{one}\mathbin{-}\Varid{xP})\mathbin{*}(\Varid{expCost}\;\Varid{t}_{0})\mathbin{+}\Varid{xP}\mathbin{*}(\Varid{expCost}\;\Varid{t}_{1}){}\<[E]% \\ \>[26]{}\mathrel{=}\Varid{one}\mathbin{+}(\Varid{one}\mathbin{-}\Varid{xP})\mathbin{*}\Varid{n}\mathbin{+}\Varid{xP}\mathbin{*}\Varid{n}\mathrel{=}\mathrm{1}\mathbin{+}\Varid{n}{}\<[E]% \ColumnHook \end{hscode}\resethooks Thus, $D(\ensuremath{\Varid{par}_{\!\Varid{n}}}) = D_p(\ensuremath{\Varid{par}_{\!\Varid{n}}}) = n$. This can also be seen by comparing Figure \ref{fig:dict} with Figure \ref{fig:par}: the minimum depth of the dictator tree is 1, while the minimum depth of the parity tree is $n$.
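The claim that every decision tree for \ensuremath{\Varid{par}_{\!\Varid{n}}} queries all $n$ bits can be checked on a small standalone sketch (simplified types of our own; \ensuremath{\Varid{mkPar}} builds the obvious full tree, with an accumulator for the parity of the bits fixed so far):

```haskell
-- Sketch: a decision tree for n-bit parity, built recursively. The
-- accumulator acc tracks the parity of the bits already fixed; indices are
-- relative to the remaining bits, so picking bit 1 at every level is fine.
type Index = Int
data DecTree = Res Bool | Pick Index DecTree DecTree

mkPar :: Int -> Bool -> DecTree
mkPar 0 acc = Res acc
mkPar n acc = Pick 1 (mkPar (n - 1) acc) (mkPar (n - 1) (not acc))

maxCost :: DecTree -> Int
maxCost (Res _)        = 0
maxCost (Pick _ t0 t1) = 1 + max (maxCost t0) (maxCost t1)
```

`maxCost (mkPar n False)` is $n$ for every $n$, matching $D(\mathit{par}_n) = n$; and since every leaf sits at depth $n$, the expected cost is the constant polynomial $n$ as well.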
We now introduce the Boolean function \ensuremath{\Varid{same}}, which checks whether all bits are equal: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{same}\mathbin{::}\mathbb{B}^{\Varid{n}}\,\to\,\mathbb{B}{}\<[E]% \\ \>[B]{}\Varid{same}\;\Varid{bs}\mathrel{=}\Varid{and}\;\Varid{bs}\mathrel{\vee}\neg \;(\Varid{or}\;\Varid{bs}){}\<[E]% \ColumnHook \end{hscode}\resethooks Using \ensuremath{\Varid{same}} we construct a very specific function of 5 bits, where we first split the bits into two groups, one with the first three bits and the second with the last two bits. On the first group, called \ensuremath{\Varid{as}}, we check if the bits are not the same, and on the second group, called \ensuremath{\Varid{cs}}, we check if the bits are the same. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{sim}_{5}\mathbin{::}\mathbb{B}^{\mathrm{5}}\,\to\,\mathbb{B}{}\<[E]% \\ \>[B]{}\Varid{sim}_{5}\;\Varid{bs}\mathrel{=}\neg \;(\Varid{same}\;\Varid{as})\mathrel{\vee}\Varid{same}\;\Varid{cs}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\mathbf{where}\;(\Varid{as},\Varid{cs})\mathrel{=}\Varid{splitAt}\;\mathrm{3}\;\Varid{bs}{}\<[E]% \ColumnHook \end{hscode}\resethooks The point of this function is to illustrate a special case where the best decision tree depends on \ensuremath{\Varid{p}}, so that the level-\ensuremath{\Varid{p}}-complexity consists of several different polynomials. This computation is shown in Section \ref{sec:fAC}. One of the major goals of this paper was to calculate the level-\ensuremath{\Varid{p}}-complexity of the 9-bit iterated majority function, called \ensuremath{\Varid{maj}_{3}^2}. When extending the majority function to \ensuremath{\Varid{maj}_{3}^2}, we use \ensuremath{\Varid{maj}_{3}} inside \ensuremath{\Varid{maj}_{3}}.
\begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{10}{@{}>{\hspre}l<{\hspost}@{}}% \column{17}{@{}>{\hspre}l<{\hspost}@{}}% \column{23}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{maj}_{3}^2\mathbin{::}\mathbb{B}^{\mathrm{9}}\,\to\,\mathbb{B}{}\<[E]% \\ \>[B]{}\Varid{maj}_{3}^2\;\Varid{bs}\mathrel{=}\Varid{maj}_{3}\;[\mskip1.5mu \Varid{maj}_{3}\;\Varid{bs}_{1},\Varid{maj}_{3}\;\Varid{bs}_{2},\Varid{maj}_{3}\;\Varid{bs}_{3}\mskip1.5mu]{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\mathbf{where}\;{}\<[10]% \>[10]{}(\Varid{bs}_{1},{}\<[17]% \>[17]{}\Varid{rest}{}\<[23]% \>[23]{})\mathrel{=}\Varid{splitAt}\;\mathrm{3}\;\Varid{bs}{}\<[E]% \\ \>[10]{}(\Varid{bs}_{2},{}\<[17]% \>[17]{}\Varid{bs}_{3}{}\<[23]% \>[23]{})\mathrel{=}\Varid{splitAt}\;\mathrm{3}\;\Varid{rest}{}\<[E]% \\[\blanklineskip]% \>[B]{}\Varid{maj}_{\Varid{n}}\mathbin{::}\mathbb{B}^{\Varid{n}}\,\to\,\mathbb{B}{}\<[E]% \\ \>[B]{}\Varid{maj}_{\Varid{n}}\;\Varid{bs}\mathrel{=}\Varid{count}\;\ensuremath{\mathbf{1}}{}\;\Varid{bs}\geq \Varid{count}\;\ensuremath{\mathbf{0}}{}\;\Varid{bs}{}\<[E]% \ColumnHook \end{hscode}\resethooks It is hard to calculate $D_p(\ensuremath{\Varid{maj}_{3}^2})$ by hand because there are very many different decision trees, and this motivated our Haskell implementation of the calculations explained in Section~\ref{sec:method}. \section{Computing the level-$p$-complexity} \label{sec:method} In this section, we explain in more detail the process of generating decision trees, memoization, thinning, and comparing polynomials. \subsection{Generating decision trees} The decision trees of a function \ensuremath{\Varid{f}} can be described in terms of the decision trees for the immediate subfunctions ($\fixatill{i}{b}{f} = \ensuremath{\Varid{setBit}\;\Varid{i}\;\Varid{b}\;\Varid{f}}$) for different \ensuremath{\Varid{i}\mathbin{::}\Conid{Index}} and \ensuremath{\Varid{b}\mathbin{::}\mathbb{B}}.
Given the Boolean function \ensuremath{\Varid{f}}, if $f$ is constant (returning \ensuremath{\Varid{b}}), we return the singleton set \ensuremath{\{\mskip1.5mu \Varid{res}\;\Varid{b}\mskip1.5mu\}}. Otherwise, \ensuremath{\Varid{f}}'s decision trees are generated recursively by asking each bit $i$, and generating the decision trees for the subfunctions $\fixatill{i}{\ensuremath{\mathbf{0}}}{f}$ and $\fixatill{i}{\ensuremath{\mathbf{1}}}{f}$. The recursive step is shown in the formula below: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{56}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\ensuremath{\Varid{genAlg}_{\Varid{n}\mathbin{+}\mathrm{1}}}\;\Varid{f}\mathrel{=} \{\mskip1.5mu \Varid{pic}\;\Varid{i}\;\Varid{t}_{0}\;\Varid{t}_{1}\mid \Varid{i}\leftarrow \{\mskip1.5mu \mathrm{1}\mathinner{\ldotp\ldotp}\Varid{n}\mskip1.5mu\},{}\<[56]% \>[56]{}\Varid{t}_{0}\leftarrow \ensuremath{\Varid{genAlg}_{\Varid{n}}}\;\fixatill{\Varid{i}}{\ensuremath{\mathbf{0}}{}}{\Varid{f}},\Varid{t}_{1}\leftarrow \ensuremath{\Varid{genAlg}_{\Varid{n}}}\;\fixatill{\Varid{i}}{\ensuremath{\mathbf{1}}{}}{\Varid{f}}\mskip1.5mu\}{}\<[E]% \ColumnHook \end{hscode}\resethooks The complexity computation starts from a Boolean function \ensuremath{\Varid{f}\mathop{:}\Conid{BoolFun}\;\Varid{n}}, and generates many decision trees for it. There are two top-level cases: either the function \ensuremath{\Varid{f}} is constant (and returns \ensuremath{\Varid{b}\mathop{:}\mathbb{B}}), in which case there is only one decision tree: \ensuremath{\Varid{res}\;\Varid{b}}; or the function \ensuremath{\Varid{f}} still depends on some of the input bits.
In the latter case, for each index \ensuremath{\Varid{i}\mathop{:}\Conid{Fin}\;\Varid{n}} we can generate two ``subfunctions'' \ensuremath{\Varid{f}_{0}\;\Varid{i}\mathrel{=}\Varid{setBit}\;\Varid{i}\;\ensuremath{\mathbf{0}}{}\;\Varid{f}} and \ensuremath{\Varid{f}_{1}\;\Varid{i}\mathrel{=}\Varid{setBit}\;\Varid{i}\;\ensuremath{\mathbf{1}}{}\;\Varid{f}}. Now, if we recursively generate a decision tree \ensuremath{\Varid{t}_{0}} for \ensuremath{\Varid{f}_{0}\;\Varid{i}} and \ensuremath{\Varid{t}_{1}} for \ensuremath{\Varid{f}_{1}\;\Varid{i}}, we can combine them into a bigger decision tree using \ensuremath{\Varid{pic}\;\Varid{i}\;\Varid{t}_{0}\;\Varid{t}_{1}}. We then do this for all combinations of \ensuremath{\Varid{i}}, \ensuremath{\Varid{t}_{0}}, and \ensuremath{\Varid{t}_{1}}. \begin{figure}[bp] \centering \begin{forest} for tree = anchor = north, edge = {-Stealth}, s sep = 1em, l sep = 2em }, EL/.style = before typesetting nodes= where n=1{edge label/.wrap value={node[pos=0.5,anchor=east]{#1}} {edge label/.wrap value={node[pos=0.5,anchor=west]{#1}} } [\genAlgNode{3}{\ensuremath{\Varid{maj}_{3}}}{\{2 + 2p - 2p^2\}}, [\genAlgNode{2}{\ensuremath{\Varid{and}_{2}}}{\{1+ p\}}, EL = \ensuremath{\ensuremath{\mathbf{0}}{}} [\genAlgNode{1}{\ensuremath{\Varid{const}\;\ensuremath{\mathbf{0}}{}}}{\{0\}}, EL = \ensuremath{\ensuremath{\mathbf{0}}{}}] [\genAlgNode{1}{\ensuremath{\Varid{id}}}{\{1\}}, EL = \ensuremath{\ensuremath{\mathbf{1}}{}} ] ] [\genAlgNode{2}{\ensuremath{\Varid{or}_{2}}}{\{2 -p\}}, EL = \ensuremath{\ensuremath{\mathbf{1}}{}} [\genAlgNode{1}{\ensuremath{\Varid{id}}}{\{1\}}, EL = \ensuremath{\ensuremath{\mathbf{0}}{}} ] [\genAlgNode{1}{\ensuremath{\Varid{const}\;\ensuremath{\mathbf{1}}{}}}{\{0\}}, EL = \ensuremath{\ensuremath{\mathbf{1}}{}}] ] ] \end{forest} \caption{A simplified computation tree of \ensuremath{\ensuremath{\Varid{genAlg}_{\mathrm{3}}}\;\Varid{maj}_{3}}.
Each node shows the input and output of the local call to \ensuremath{\ensuremath{\Varid{genAlg}_{\cdot }}}. As all the functions involved are ``symmetric'' in the index (\ensuremath{\Varid{setBit}\;\Varid{i}\;\Varid{b}\;\Varid{f}\doubleequals\Varid{setBit}\;\Varid{j}\;\Varid{b}\;\Varid{f}} for all \ensuremath{\Varid{i}} and \ensuremath{\Varid{j}}) we only show edges for \ensuremath{\ensuremath{\mathbf{0}}{}} and \ensuremath{\ensuremath{\mathbf{1}}{}} from each level.} \label{fig:alg} \end{figure} We would like to enumerate the cost polynomials of all the decision trees of a particular Boolean function (\ensuremath{\Varid{n}\mathrel{=}\mathrm{9}}, \ensuremath{\Varid{f}\mathrel{=}\Varid{maj}_{3}^2} is our main goal). Without taking symmetries into account, there are \ensuremath{\mathrm{2}\mathbin{*}\Varid{n}} immediate subfunctions $\fixatill{i}{b}{f}$ and if $T_g$ is the cardinality of the enumeration for subfunction $g$, we have that \nopagebreak $$T_{\ensuremath{\Varid{f}}} = \sum_{i=1}^n T_{\fixatill{i}{\ensuremath{\mathbf{0}}}{f}}*T_{\fixatill{i}{\ensuremath{\mathbf{1}}}{f}}$$ These numbers can be very large if we count all decision trees, but if we only care about their cost polynomials, many decision trees collapse to the same polynomial, making the counts more manageable (though still potentially very large). Even the total number of subfunctions encountered (the number of recursive calls) can be quite large. If all the \ensuremath{\mathrm{2}\mathbin{*}\Varid{n}} immediate subfunctions are different, and if all of them would generate \ensuremath{\mathrm{2}\mathbin{*}(\Varid{n}\mathbin{-}\mathrm{1})} different subfunctions in turn, the number of subfunctions would be $2^n * n!$. But in practice many subfunctions will be the same. When computing the polynomials for the 9-bit function \ensuremath{\Varid{maj}_{3}^2}, for example, only \ensuremath{\mathrm{215}} distinct subfunctions are encountered.
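The recurrence for $T_f$ can be run directly, memoising on the truth table of each subfunction so that repeated subfunctions are counted only once. The following standalone sketch (with an assumed list-of-bits representation of Boolean functions, not the paper's \ensuremath{\Conid{BoFun}} class) does exactly that:

```haskell
import qualified Data.Map.Strict as M

-- All n-bit inputs, first bit varying slowest
inputs :: Int -> [[Bool]]
inputs 0 = [[]]
inputs k = [b : bs | b <- [False, True], bs <- inputs (k - 1)]

-- Fix bit i (1-based) of an n-bit function to b, yielding an (n-1)-bit function
fixBit :: Int -> Bool -> ([Bool] -> Bool) -> ([Bool] -> Bool)
fixBit i b f bs = f (take (i - 1) bs ++ b : drop (i - 1) bs)

-- Number of decision trees for an n-bit function, following
-- T_f = sum_i T_{f[i:=0]} * T_{f[i:=1]}, memoised on (arity, truth table)
countTrees :: Int -> ([Bool] -> Bool) -> Integer
countTrees n0 f0 = fst (go n0 f0 M.empty)
  where
    go n f memo =
      case M.lookup key memo of
        Just v  -> (v, memo)
        Nothing
          | all (== head tt) tt ->            -- constant: a single Res leaf
              (1, M.insert key 1 memo)
          | otherwise ->
              let (v, memo') = foldl step (0, memo) [1 .. n]
              in (v, M.insert key v memo')
      where
        tt  = map f (inputs n)
        key = (n, tt)
        step (acc, m) i =
          let (v0, m0) = go (n - 1) (fixBit i False f) m
              (v1, m1) = go (n - 1) (fixBit i True  f) m0
          in (acc + v0 * v1, m1)
```

For the 3-bit majority function this yields 12 trees; the memo table is what keeps the recursion from revisiting the same subfunction over and over, in the same spirit as the memoization used in our implementation.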
As a smaller example, for the 3-bit majority function \ensuremath{\Varid{maj}_{3}}, choosing \ensuremath{\Varid{i}\mathrel{=}\mathrm{1},\mathrm{2},} or \ensuremath{\mathrm{3}} gives exactly the same subfunctions. Figure~\ref{fig:alg} illustrates a simplified call graph of \ensuremath{\ensuremath{\Varid{genAlg}_{\mathrm{3}}}\;\Varid{maj}_{3}} and the results (the expected cost polynomials) for the different subfunctions. In this case all the sets are singletons, but that is very unusual for more realistic Boolean functions. It would take too long to compute all polynomials for the 9-bit function \ensuremath{\Varid{maj}_{3}^2}, but there are 21 distinct 7-bit subfunctions, and the first of them already has \ensuremath{\mathrm{18021}} polynomials. Thus we can expect billions of polynomials for \ensuremath{\Varid{maj}_{3}^2}, and this means we need to look at ways to keep only the most promising candidates at each level. This leads us to the algorithmic design technique of thinning. \subsection{Thinning} The general shape of the specification has two phases: ``generate all candidates'' followed by ``pick the best one(s)''. The first phase is recursive and we would like to push as much as possible of ``pick the best'' into the recursive computation. In the extreme case of a greedy algorithm, we can thin the intermediate sets all the way down to singletons, but even if the sets are a bit bigger than that we can still reduce the computation cost significantly. A good (but abstract) reference for thinning is the Algebra of Programming book \cite[Chapter 8]{bird_algebra_1997} and more concrete references are the corresponding developments in Agda \citep{DBLP:journals/jfp/MuKJ09} and Haskell \citep{bird_gibbons_2020}.
We are looking for a ``smallest'' polynomial, but we only have a preorder, not a total order, which means that we may need to keep a set of incomparable candidates (elements \ensuremath{\Varid{x}\not\doubleequals\Varid{y}} for which neither \ensuremath{\Varid{x}\prec\Varid{y}} nor \ensuremath{\Varid{y}\prec\Varid{x}}). We start from a strict preorder \ensuremath{(\prec)\mathop{:}\Varid{a}\,\to\,\Varid{a}\,\to\,\Conid{Prop}} (an irreflexive and transitive relation). You can think of \ensuremath{\Conid{Prop}} as \ensuremath{\mathbb{B}} because we only work with decidable relations and finite sets in this application. As we are looking for minima, we say that \ensuremath{\Varid{x}} \emph{dominates} \ensuremath{\Varid{y}} if \ensuremath{\Varid{x}\prec\Varid{y}}. In our case we will use it for polynomials, but the theory works more generally. We lift the order relation to sets in two steps. First, \ensuremath{\Varid{ys}\mathrel{\dot{\prec}}\Varid{x}} means that \ensuremath{\Varid{ys}} \emph{dominates} \ensuremath{\Varid{x}}: some element in \ensuremath{\Varid{ys}} is smaller than \ensuremath{\Varid{x}}. If this holds, there is no need to add \ensuremath{\Varid{x}} to \ensuremath{\Varid{ys}} because we already have at least one better element in \ensuremath{\Varid{ys}}. Then \ensuremath{\Varid{ys}\mathrel{\ddot{\prec}}\Varid{xs}} means that \ensuremath{\Varid{ys}} dominates all of \ensuremath{\Varid{xs}}.
\begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}(\mathrel{\dot{\prec}})\mathop{:}\Conid{Set}\;\Varid{a}\,\to\,\Varid{a}\,\to\,\Conid{Prop}{}\<[E]% \\ \>[B]{}\Varid{ys}\mathrel{\dot{\prec}}\Varid{x}\mathrel{=}\exists\,\Varid{y}\in \Varid{ys}.\,\,\Varid{y}\prec\Varid{x}{}\<[E]% \\[\blanklineskip]% \>[B]{}(\mathrel{\ddot{\prec}})\mathop{:}\Conid{Set}\;\Varid{a}\,\to\,\Conid{Set}\;\Varid{a}\,\to\,\Conid{Prop}{}\<[E]% \\ \>[B]{}\Varid{ys}\mathrel{\ddot{\prec}}\Varid{xs}\mathrel{=}\forall\,\Varid{x}\in \Varid{xs}.\,\,\Varid{ys}\mathrel{\dot{\prec}}\Varid{x}{}\<[E]% \ColumnHook \end{hscode}\resethooks Finally, we combine subset and domination into the thinning relation: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Conid{Thin}\;\Varid{ys}\;\Varid{xs}\mathrel{=}(\Varid{ys}\subseteq\Varid{xs})\mathrel{\wedge}\Varid{ys}\mathrel{\ddot{\prec}}(\Varid{xs}\mathbin{\char92 \char92 }\Varid{ys}){}\<[E]% \ColumnHook \end{hscode}\resethooks We will use this relation in the specification of our efficient computation to ensure that the small set of polynomials computed still ``dominates'' the big set of all the polynomials generated by \ensuremath{\ensuremath{\Varid{genAlg}_{\Varid{n}}}\;\Varid{f}}. But first we introduce the helper function \ensuremath{\Varid{thin}\mathop{:}\Conid{Set}\;\Varid{a}\,\to\,\Conid{Set}\;\Varid{a}}, which aims at removing some elements while still keeping the minima in the set. It has to refine the relation \ensuremath{\Conid{Thin}}, which means that if \ensuremath{\Varid{ys}\mathrel{=}\Varid{thin}\;\Varid{xs}} then \ensuremath{\Varid{ys}} must be a subset of \ensuremath{\Varid{xs}} (\ensuremath{\Varid{ys}\subseteq\Varid{xs}}) and \ensuremath{\Varid{ys}} must dominate the rest of \ensuremath{\Varid{xs}} (\ensuremath{\Varid{ys}\mathrel{\ddot{\prec}}(\Varid{xs}\mathbin{\char92 \char92 }\Varid{ys})}).
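With lists standing in for sets and a decidable strict preorder, the two lifted relations become one-liners. In this sketch the names dotPrec and ddotPrec (for $\dot{\prec}$ and $\ddot{\prec}$) and the example preorder precPair are our own:

```haskell
-- dotPrec prec ys x : some element of ys dominates x        (ys ≺̇ x)
dotPrec :: (a -> a -> Bool) -> [a] -> a -> Bool
dotPrec prec ys x = any (\y -> prec y x) ys

-- ddotPrec prec ys xs : ys dominates every element of xs    (ys ≺̈ xs)
ddotPrec :: (a -> a -> Bool) -> [a] -> [a] -> Bool
ddotPrec prec ys xs = all (dotPrec prec ys) xs

-- example: the strict product preorder on pairs, a stand-in for the
-- pointwise preorder on polynomials
precPair :: (Int, Int) -> (Int, Int) -> Bool
precPair (a, b) (c, d) = a <= c && b <= d && (a, b) /= (c, d)
```

For instance `[(1,1)]` dominates `[(2,2),(3,1)]` under precPair, but not `[(0,5)]`, since `(1,1)` and `(0,5)` are incomparable.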
A trivial (but useless) implementation would be \ensuremath{\Varid{thin}\mathrel{=}\Varid{id}}, and any implementation which removes some ``dominated'' elements could be helpful. The best we can hope for is that \ensuremath{\Varid{thin}} gives us a set of only incomparable elements. If \ensuremath{\Varid{thin}} compares all pairs of elements, it can compute a smallest thinning. In general that may not be needed (and a linear-time greedy approximation is good enough), but in some settings almost any algorithmic cost which can reduce the intermediate sets will pay off. We use the following greedy version (inspired by \citet{bird_gibbons_2020}) as one of the methods of the class \ensuremath{\Conid{Thinnable}}: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{thin}\mathbin{::}\Conid{Thinnable}\;\Varid{a}\Rightarrow \Conid{Set}\;\Varid{a}\,\to\,\Conid{Set}\;\Varid{a}{}\<[E]% \\ \>[B]{}\Varid{thin}\mathrel{=}\Varid{foldl}\;\Varid{thinStep}\;\emptyset{}\<[E]% \\ \>[B]{}\Varid{thinStep}\mathbin{::}\Conid{Thinnable}\;\Varid{a}\Rightarrow \Conid{Set}\;\Varid{a}\,\to\,\Varid{a}\,\to\,\Conid{Set}\;\Varid{a}{}\<[E]% \\ \>[B]{}\Varid{thinStep}\;\Varid{ys}\;\Varid{x}\mathrel{=}\mathbf{if}\;\Varid{ys}\mathrel{\dot{\prec}}\Varid{x}\;\mathbf{then}\;\Varid{ys}\;\mathbf{else}\;\Varid{insert}\;\Varid{x}\;\Varid{ys}{}\<[E]% \ColumnHook \end{hscode}\resethooks It starts from an empty set and considers one element \ensuremath{\Varid{x}} at a time. If the set \ensuremath{\Varid{ys}} collected thus far already dominates \ensuremath{\Varid{x}}, it is kept unchanged; otherwise \ensuremath{\Varid{x}} is inserted. (The optimal version also removes from \ensuremath{\Varid{ys}} all elements dominated by \ensuremath{\Varid{x}}.) It is easy to prove that \ensuremath{\Varid{thin}} implements the specification \ensuremath{\Conid{Thin}}.
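A list-based version of this greedy step (our own standalone sketch, not the paper's class method) looks as follows:

```haskell
-- Greedy thinning over lists: keep x only if no element collected so far
-- dominates it. (The optimal variant would additionally delete elements of
-- the accumulator that x dominates; this greedy sketch does not.)
thinBy :: (a -> a -> Bool) -> [a] -> [a]
thinBy prec = foldl step []
  where step ys x = if any (\y -> prec y x) ys then ys else ys ++ [x]

-- strict product preorder on pairs, as a stand-in for the polynomial preorder
precPair :: (Int, Int) -> (Int, Int) -> Bool
precPair (a, b) (c, d) = a <= c && b <= d && (a, b) /= (c, d)
```

For example, `thinBy precPair [(2,3),(1,1),(3,0),(2,2)]` keeps `[(2,3),(1,1),(3,0)]`: the late element `(2,2)` is dominated by `(1,1)` and dropped, while `(2,3)`, although dominated by the later `(1,1)`, survives because the greedy step never revisits the accumulator.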
Now we have what we need to specify when an efficient \ensuremath{\ensuremath{\Varid{genAlgT}_{\!\Varid{n}}}\;\Varid{f}} computation is correct. Our specification (\ensuremath{\Varid{spec}\;\Varid{n}\;\Varid{f}}) states a relation between a (very big) set \ensuremath{\Varid{xs}\mathrel{=}\ensuremath{\Varid{genAlg}_{\Varid{n}}}\;\Varid{f}} and a smaller set \ensuremath{\Varid{ys}\mathrel{=}\ensuremath{\Varid{genAlgT}_{\!\Varid{n}}}\;\Varid{f}} we get by applying thinning at each recursive step. We want to prove that \ensuremath{\Varid{ys}\subseteq\Varid{xs}} and \ensuremath{\Varid{ys}\mathrel{\ddot{\prec}}(\Varid{xs}\mathbin{\char92 \char92 }\Varid{ys})} because then we know we have not lost any of the candidates for minimality. \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{15}{@{}>{\hspre}l<{\hspost}@{}}% \column{20}{@{}>{\hspre}l<{\hspost}@{}}% \column{24}{@{}>{\hspre}l<{\hspost}@{}}% \column{40}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{spec}\;\Varid{n}\;\Varid{f}\mathrel{=}{}\<[15]% \>[15]{}\mathbf{let}\;{}\<[20]% \>[20]{}\Varid{xs}{}\<[24]% \>[24]{}\mathrel{=}\ensuremath{\Varid{genAlg}_{\Varid{n}}}\;{}\<[40]% \>[40]{}\Varid{f}{}\<[E]% \\ \>[20]{}\Varid{ys}{}\<[24]% \>[24]{}\mathrel{=}\ensuremath{\Varid{genAlgT}_{\!\Varid{n}}}\;{}\<[40]% \>[40]{}\Varid{f}{}\<[E]% \\ \>[15]{}\mathbf{in}\;{}\<[20]% \>[20]{}(\Varid{ys}\subseteq\Varid{xs})\mathrel{\wedge}(\Varid{ys}\mathrel{\ddot{\prec}}(\Varid{xs}\mathbin{\char92 \char92 }\Varid{ys})){}\<[E]% \ColumnHook \end{hscode}\resethooks We can first take care of the simplest case (for any \ensuremath{\Varid{n}}). If the function \ensuremath{\Varid{f}} is constant (returning some \ensuremath{\Varid{b}\mathop{:}\mathbb{B}}), both \ensuremath{\Varid{xs}} and \ensuremath{\Varid{ys}} will be the singleton set containing \ensuremath{\Varid{res}\;\Varid{b}}. Thus both properties trivially hold. 
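The specification can be exercised on a small case. The sketch below uses assumed simplified representations throughout, and thins with the total order ``smaller worst-case depth'' instead of the polynomial preorder of the paper; it generates all trees with genAlg, a thinned set with genAlgT, and lets us check that the minimum is preserved:

```haskell
type Index = Int
data DecTree = Res Bool | Pick Index DecTree DecTree

maxCost :: DecTree -> Int
maxCost (Res _)        = 0
maxCost (Pick _ t0 t1) = 1 + max (maxCost t0) (maxCost t1)

-- an n-bit Boolean function as arity plus a function on bit lists
data BoFun = BF Int ([Bool] -> Bool)

isConst :: BoFun -> Maybe Bool
isConst (BF n f) = case map f (inputs n) of
    v : vs | all (== v) vs -> Just v
    _                      -> Nothing
  where inputs 0 = [[]]
        inputs k = [b : bs | b <- [False, True], bs <- inputs (k - 1)]

setBit' :: Index -> Bool -> BoFun -> BoFun      -- fix bit i (1-based) to b
setBit' i b (BF n f) =
  BF (n - 1) (\bs -> f (take (i - 1) bs ++ b : drop (i - 1) bs))

genAlg :: BoFun -> [DecTree]                    -- all decision trees
genAlg f@(BF n _) = case isConst f of
  Just b  -> [Res b]
  Nothing -> [ Pick i t0 t1 | i  <- [1 .. n]
                            , t0 <- genAlg (setBit' i False f)
                            , t1 <- genAlg (setBit' i True  f) ]

genAlgT :: BoFun -> [DecTree]                   -- same, thinned at each level
genAlgT f@(BF n _) = case isConst f of
  Just b  -> [Res b]
  Nothing -> thin [ Pick i t0 t1 | i  <- [1 .. n]
                                 , t0 <- genAlgT (setBit' i False f)
                                 , t1 <- genAlgT (setBit' i True  f) ]
  where prec t u  = maxCost t < maxCost u
        thin      = foldl step []
        step ys x = if any (\y -> prec y x) ys then ys else ys ++ [x]

maj3 :: BoFun
maj3 = BF 3 (\bs -> length (filter id bs) >= 2)
```

For maj3, genAlg produces 12 trees, all of depth 3, so nothing is thinned away and both sets have minimum depth 3; on a skewed function such as the 2-bit dictator, genAlgT keeps only the depth-1 tree. With the polynomial preorder in place of depth, the same scheme preserves the minimal expected-cost polynomials, which is exactly what the specification demands.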
We then proceed by induction on \ensuremath{\Varid{n}} to prove \ensuremath{\Conid{S}_{\Varid{n}}\mathrel{=}\forall\,\Varid{f}\mathop{:}\Conid{BoolFun}\;\Varid{n}.\,\,\Varid{spec}\;\Varid{n}\;\Varid{f}}. In the base case \ensuremath{\Varid{n}\mathrel{=}\mathrm{0}} the function is necessarily constant, and we have already covered that above. In the inductive step case, assume the induction hypothesis \ensuremath{\Conid{IH}\mathrel{=}\Conid{S}_{\Varid{n}}} and prove \ensuremath{\Conid{S}_{\Varid{n}\mathbin{+}\mathrm{1}}} for a function \ensuremath{\Varid{f}\mathop{:}\Conid{BoolFun}\;(\Varid{n}\mathbin{+}\mathrm{1})}. We have already covered the constant function case, so we focus on the main recursive clause of the definitions of \ensuremath{\ensuremath{\Varid{genAlg}_{\Varid{n}}}\;\Varid{f}} and \ensuremath{\ensuremath{\Varid{genAlgT}_{\!\Varid{n}}}\;\Varid{f}}: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{19}{@{}>{\hspre}l<{\hspost}@{}}% \column{31}{@{}>{\hspre}l<{\hspost}@{}}% \column{43}{@{}>{\hspre}l<{\hspost}@{}}% \column{62}{@{}>{\hspre}l<{\hspost}@{}}% \column{66}{@{}>{\hspre}l<{\hspost}@{}}% \column{83}{@{}>{\hspre}l<{\hspost}@{}}% \column{112}{@{}>{\hspre}l<{\hspost}@{}}% \column{129}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\ensuremath{\Varid{genAlg}_{\Varid{n}\mathbin{+}\mathrm{1}}}\;{}\<[19]% \>[19]{}\Varid{f}\mathrel{=}{}\<[31]% \>[31]{}\{\mskip1.5mu \Varid{pic}\;\Varid{i}\;\Varid{x}_{0}\;{}\<[43]% \>[43]{}\Varid{x}_{1}\mid \Varid{i}\leftarrow [\mskip1.5mu \mathrm{1}\mathinner{\ldotp\ldotp}\Varid{n}\mskip1.5mu],{}\<[62]% \>[62]{}\Varid{x}_{0}{}\<[66]% \>[66]{}\leftarrow \ensuremath{\Varid{genAlg}_{\Varid{n}}}\;{}\<[83]% \>[83]{}\fixatill{\Varid{i}}{\ensuremath{\mathbf{0}}{}}{\Varid{f}},\Varid{x}_{1}{}\<[112]% \>[112]{}\leftarrow \ensuremath{\Varid{genAlg}_{\Varid{n}}}\;{}\<[129]% \>[129]{}\fixatill{\Varid{i}}{\ensuremath{\mathbf{1}}{}}{\Varid{f}}\mskip1.5mu\}{}\<[E]% \\ 
\>[B]{}\ensuremath{\Varid{genAlgT}_{\!\Varid{n}\mathbin{+}\mathrm{1}}}\;{}\<[19]% \>[19]{}\Varid{f}\mathrel{=}\Varid{thin}\;{}\<[31]% \>[31]{}\{\mskip1.5mu \Varid{pic}\;\Varid{i}\;\Varid{y}_{0}\;{}\<[43]% \>[43]{}\Varid{y}_{1}\mid \Varid{i}\leftarrow [\mskip1.5mu \mathrm{1}\mathinner{\ldotp\ldotp}\Varid{n}\mskip1.5mu],{}\<[62]% \>[62]{}\Varid{y}_{0}{}\<[66]% \>[66]{}\leftarrow \ensuremath{\Varid{genAlgT}_{\!\Varid{n}}}\;{}\<[83]% \>[83]{}\fixatill{\Varid{i}}{\ensuremath{\mathbf{0}}{}}{\Varid{f}},\Varid{y}_{1}{}\<[112]% \>[112]{}\leftarrow \ensuremath{\Varid{genAlgT}_{\!\Varid{n}}}\;{}\<[129]% \>[129]{}\fixatill{\Varid{i}}{\ensuremath{\mathbf{1}}{}}{\Varid{f}}\mskip1.5mu\}{}\<[E]% \ColumnHook \end{hscode}\resethooks All subfunctions \ensuremath{\fixatill{\Varid{i}}{\Varid{b}}{\Varid{f}}\mathop{:}\Conid{BoolFun}\;\Varid{n}} used in the recursive calls satisfy the induction hypothesis: \ensuremath{\Varid{spec}\;\Varid{n}\;\fixatill{\Varid{i}}{\Varid{b}}{\Varid{f}}}. If we name the sets involved in these hypotheses \ensuremath{\fixatill{\Varid{i}}{\Varid{b}}{\Varid{xs}}} and \ensuremath{\fixatill{\Varid{i}}{\Varid{b}}{\Varid{ys}}} we can thus assume \ensuremath{\fixatill{\Varid{i}}{\Varid{b}}{\Varid{ys}}\subseteq\fixatill{\Varid{i}}{\Varid{b}}{\Varid{xs}}} and \ensuremath{\fixatill{\Varid{i}}{\Varid{b}}{\Varid{ys}}\mathrel{\ddot{\prec}}(\fixatill{\Varid{i}}{\Varid{b}}{\Varid{xs}}\mathbin{\char92 \char92 }\fixatill{\Varid{i}}{\Varid{b}}{\Varid{ys}})}. First, the subset property: we want to prove that \ensuremath{\ensuremath{\Varid{genAlgT}_{\!\Varid{n}\mathbin{+}\mathrm{1}}}\;\Varid{f}\subseteq\ensuremath{\Varid{genAlg}_{\Varid{n}\mathbin{+}\mathrm{1}}}\;\Varid{f}}, or equivalently, \ensuremath{\forall\,\Varid{y}.\,\,(\Varid{y}\in \ensuremath{\Varid{genAlgT}_{\!\Varid{n}\mathbin{+}\mathrm{1}}}\;\Varid{f})\Rightarrow (\Varid{y}\in \ensuremath{\Varid{genAlg}_{\Varid{n}\mathbin{+}\mathrm{1}}}\;\Varid{f})}. 
Let \ensuremath{\Varid{y}\in \ensuremath{\Varid{genAlgT}_{\!\Varid{n}\mathbin{+}\mathrm{1}}}\;\Varid{f}}. We know from the specification of \ensuremath{\Varid{thin}} and the definition of \ensuremath{\ensuremath{\Varid{genAlgT}_{\!\Varid{n}\mathbin{+}\mathrm{1}}}\;\Varid{f}} that \ensuremath{\Varid{y}\mathrel{=}\Varid{pic}\;\Varid{i}\;\Varid{y}_{0}\;\Varid{y}_{1}} for some \ensuremath{\Varid{y}_{0}\in \fixatill{\Varid{i}}{\mathrm{0}}{\Varid{ys}}} and \ensuremath{\Varid{y}_{1}\in \fixatill{\Varid{i}}{\mathrm{1}}{\Varid{ys}}}. The subset part of the induction hypothesis gives us that \ensuremath{\Varid{y}_{0}\in \fixatill{\Varid{i}}{\mathrm{0}}{\Varid{xs}}} and \ensuremath{\Varid{y}_{1}\in \fixatill{\Varid{i}}{\mathrm{1}}{\Varid{xs}}}. Thus we can see from the definition of \ensuremath{\ensuremath{\Varid{genAlg}_{\Varid{n}\mathbin{+}\mathrm{1}}}\;\Varid{f}} that \ensuremath{\Varid{y}\in \ensuremath{\Varid{genAlg}_{\Varid{n}\mathbin{+}\mathrm{1}}}\;\Varid{f}}. Now for the ``domination'' property we need to show that \ensuremath{\forall\,\Varid{x}\in \Varid{xs}\mathbin{\char92 \char92 }\Varid{ys}.\,\,\Varid{ys}\mathrel{\dot{\prec}}\Varid{x}} where \ensuremath{\Varid{xs}\mathrel{=}\ensuremath{\Varid{genAlg}_{\Varid{n}\mathbin{+}\mathrm{1}}}\;\Varid{f}} and \ensuremath{\Varid{ys}\mathrel{=}\ensuremath{\Varid{genAlgT}_{\!\Varid{n}\mathbin{+}\mathrm{1}}}\;\Varid{f}}. Let \ensuremath{\Varid{x}\in \Varid{xs}\mathbin{\char92 \char92 }\Varid{ys}}. Given the definition of \ensuremath{\Varid{xs}} it must be of the form \ensuremath{\Varid{x}\mathrel{=}\Varid{pic}\;\Varid{i}\;\Varid{x}_{0}\;\Varid{x}_{1}} where \ensuremath{\Varid{x}_{0}\in \fixatill{\Varid{i}}{\ensuremath{\mathbf{0}}{}}{\Varid{xs}}} and \ensuremath{\Varid{x}_{1}\in \fixatill{\Varid{i}}{\ensuremath{\mathbf{1}}{}}{\Varid{xs}}}. 
The (second part of the) induction hypothesis provides the existence of \ensuremath{\Varid{y}_{\Varid{b}}\in \fixatill{\Varid{i}}{\Varid{b}}{\Varid{ys}}} such that \ensuremath{\Varid{y}_{\Varid{b}}\prec\Varid{x}_{\Varid{b}}}. From these \ensuremath{\Varid{y}_{\Varid{b}}} we can build \ensuremath{\Varid{y'}\mathrel{=}\Varid{pic}\;\Varid{i}\;\Varid{y}_{0}\;\Varid{y}_{1}} as a candidate element to ``dominate'' \ensuremath{\Varid{xs}}. We can now show that \ensuremath{\Varid{y'}\prec\Varid{x}} by polynomial algebra: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{7}{@{}>{\hspre}l<{\hspost}@{}}% \column{16}{@{}>{\hspre}c<{\hspost}@{}}% \column{16E}{@{}l@{}}% \column{17}{@{}>{\hspre}c<{\hspost}@{}}% \column{17E}{@{}l@{}}% \column{20}{@{}>{\hspre}l<{\hspost}@{}}% \column{23}{@{}>{\hspre}l<{\hspost}@{}}% \column{26}{@{}>{\hspre}c<{\hspost}@{}}% \column{26E}{@{}l@{}}% \column{30}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[3]{}\Varid{true}{}\<[E]% \\ \>[B]{}\implies\mbox{\onelinecomment Follows from the induction hypothesis}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}(\Varid{y}_{0}\prec\Varid{x}_{0}){}\<[17]% \>[17]{}\mathrel{\wedge}{}\<[17E]% \>[23]{}(\Varid{y}_{1}\prec\Varid{x}_{1}){}\<[E]% \\ \>[B]{}\implies\mbox{\onelinecomment In the interval \ensuremath{(\mathrm{0},\mathrm{1})} both \ensuremath{\mathrm{1}\mathbin{-}\Varid{xP}} and \ensuremath{\Varid{xP}} are positive}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\mathrm{1}\mathbin{+}(\mathrm{1}\mathbin{-}\Varid{xP})\mathbin{*}\Varid{y}_{0}\mathbin{+}\Varid{xP}\mathbin{*}\Varid{y}_{1}{}\<[26]% \>[26]{}\prec{}\<[26E]% \>[30]{}\mathrm{1}\mathbin{+}(\mathrm{1}\mathbin{-}\Varid{xP})\mathbin{*}\Varid{x}_{0}\mathbin{+}\Varid{xP}\mathbin{*}\Varid{x}_{1}{}\<[E]% \\ \>[B]{}\Leftrightarrow\mbox{\onelinecomment Def. 
of \ensuremath{\Varid{pic}} for polynomials}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% \>[3]{}\Varid{pic}\;\Varid{i}\;\Varid{y}_{0}\;\Varid{y}_{1}{}\<[16]% \>[16]{}\prec{}\<[16E]% \>[20]{}\Varid{pic}\;\Varid{i}\;\Varid{x}_{0}\;\Varid{x}_{1}{}\<[E]% \\ \>[B]{}\Leftrightarrow\mbox{\onelinecomment Def. of \ensuremath{\Varid{y'}} and \ensuremath{\Varid{x}}}{}\<[E]% \\ \>[B]{}\hsindent{7}{}\<[7]% \>[7]{}\Varid{y'}\prec\Varid{x}{}\<[E]% \ColumnHook \end{hscode}\resethooks We are not quite done, because \ensuremath{\Varid{y'}} may not be in \ensuremath{\Varid{ys}}. It is clear from the definition of \ensuremath{\ensuremath{\Varid{genAlgT}_{\!\Varid{n}\mathbin{+}\mathrm{1}}}\;\Varid{f}} that \ensuremath{\Varid{y'}} is in the set \ensuremath{\Varid{ys'}} sent to \ensuremath{\Varid{thin}}, but it may be ``thinned away''. But, either \ensuremath{\Varid{y'}\in \Varid{ys}\mathrel{=}\Varid{thin}\;\Varid{ys'}}, in which case we take the final \ensuremath{\Varid{y}\mathrel{=}\Varid{y'}}, or there exists another \ensuremath{\Varid{y}\in \Varid{ys}} such that \ensuremath{\Varid{y}\prec\Varid{y'}}, and then we get \ensuremath{\Varid{y}\prec\Varid{x}} by transitivity. To sum up, we have now proved that we can push a powerful \ensuremath{\Varid{thin}} step into the recursive enumeration of all cost polynomials in such a way that any minimum is guaranteed to reside in the much smaller set of polynomials thus computed. \subsection{Memoization} \label{sec:memo} The call graph of \ensuremath{\ensuremath{\Varid{genAlgT}_{\!\Varid{n}}}\;\Varid{f}} is the same as the call graph of \ensuremath{\ensuremath{\Varid{genAlg}_{\Varid{n}}}\;\Varid{f}} and, as mentioned above, it can be exponentially big. Thus, even though thinning helps in making the intermediate sets exponentially smaller, we still have one source of exponential computational complexity to tackle.
Fortunately, the same subfunctions often appear in many different nodes and this means we can save a significant amount of computation time using memoization. The classical example of memoization is the Fibonacci function. Naively computing \ensuremath{\Varid{fib}\;(\Varid{n}\mathbin{+}\mathrm{2})\mathrel{=}\Varid{fib}\;(\Varid{n}\mathbin{+}\mathrm{1})\mathbin{+}\Varid{fib}\;\Varid{n}} leads to exponential growth in the number of function calls. But if we fill in a table indexed by \ensuremath{\Varid{n}} with already computed results we can compute \ensuremath{\Varid{fib}\;\Varid{n}} in linear time. Similarly, here we ``just'' need to tabulate the result of the calls to \ensuremath{\ensuremath{\Varid{genAlg}_{\Varid{n}}}\;\Varid{f}} so as to avoid recomputation. The challenge is that the input we need to tabulate is now a Boolean function, which is not as nicely structured as a natural number index. Fortunately, thanks to \citet{DBLP:journals/jfp/Hinze00a}, Elliott, and others we have generic Trie-based memo functions only a hackage library away\footnote{Available on hackage as the \href{https://hackage.haskell.org/package/MemoTrie}{MemoTrie} Haskell package.}. The \ensuremath{\Conid{MemoTrie}} library provides the \ensuremath{\Conid{Memoizable}} class and suitable instances and helper functions for most types. We only need to provide a \ensuremath{\Conid{Memoizable}} instance for \ensuremath{\Conid{BDD}}s, and we do this using \ensuremath{\Varid{inSig}} and \ensuremath{\Varid{outSig}} from the \ensuremath{\Conid{BDD}} package (decision-diagrams). They expose the top-level structure of a \ensuremath{\Conid{BDD}}: \ensuremath{\Conid{Sig}\;\Varid{bf}} is isomorphic to \ensuremath{\Conid{Either}\;\mathbb{B}\;(\Conid{Index},\Varid{bf},\Varid{bf})} where \ensuremath{\Varid{bf}\mathrel{=}\Conid{BDDFun}}. We define our top-level function \ensuremath{\Varid{genAlgThinMemo}} by applying memoization to \ensuremath{\ensuremath{\Varid{genAlgT}_{\!\cdot }}}.
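The tabulation idea can be illustrated with a minimal, self-contained sketch in plain Haskell (an illustration only, independent of the \ensuremath{\Conid{MemoTrie}} machinery used in the library):

```haskell
-- A minimal illustration of memoization: fibNaive makes exponentially many
-- calls, while fibMemo backs the same recursion with a lazily built table.
fibNaive :: Int -> Integer
fibNaive n | n < 2     = fromIntegral n
           | otherwise = fibNaive (n - 1) + fibNaive (n - 2)

fibMemo :: Int -> Integer
fibMemo = (table !!)
  where
    table = map go [0 ..]                        -- lazy table indexed by n
    go n | n < 2     = fromIntegral n
         | otherwise = fibMemo (n - 1) + fibMemo (n - 2)
```

For \ensuremath{\Varid{genAlgThinMemo}} the table is indexed by (a trie over) the Boolean function itself rather than by a number, but the principle is the same.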
\subsection{Comparing polynomials} As argued above, the key to an efficient computation of the best cost polynomials is to compare polynomials as soon as possible and throw away those which are ``uniformly worse''. The specification of \ensuremath{\Varid{p}\prec\Varid{q}} is \ensuremath{\Varid{p}\;\Varid{x}\leq \Varid{q}\;\Varid{x}} for all \ensuremath{\mathrm{0}\leq \Varid{x}\leq \mathrm{1}} and \ensuremath{\Varid{p}\;\Varid{x}\mathbin{<}\Varid{q}\;\Varid{x}} for some \ensuremath{\mathrm{0}\mathbin{<}\Varid{x}\mathbin{<}\mathrm{1}}. Note that \ensuremath{(\prec)} is a strict pre-order --- if the polynomials cross, neither is ``uniformly worse'' and we keep both. If we have two polynomials \ensuremath{\Varid{p}} and \ensuremath{\Varid{q}}, we want to know if \ensuremath{\Varid{p}\leq \Varid{q}} for all inputs in the interval \ensuremath{[\mskip1.5mu \mathrm{0},\mathrm{1}\mskip1.5mu]}. Equivalently, we need to check if \ensuremath{\mathrm{0}\leq \Varid{q}\mathbin{-}\Varid{p}} in that interval. As the difference is also a polynomial, we can focus our attention on locating polynomial roots in the unit interval.
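As a first intuition, the specification can be approximated by sampling. The following rough numeric oracle is an illustration only (not part of the library): it can miss tangencies and near-crossings, which is exactly why the exact, algebraic comparison via root counting is developed below.

```haskell
-- A rough sampled approximation of the ordering specification (illustration
-- only): check p x <= q x on a grid over [0,1], with strictness somewhere.
-- The exact root-counting comparison replaces this in the library.
dominatesApprox :: (Double -> Double) -> (Double -> Double) -> Bool
dominatesApprox p q = all (\x -> p x <= q x) grid && any (\x -> p x < q x) grid
  where grid = [fromIntegral i / 100 | i <- [0 .. 100 :: Int]]
```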
\begin{figure}[htbp] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width = \textwidth]{plots/noroot.pdf} \caption{No root.} \label{fig:noroot} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width = \textwidth]{plots/singleroot.pdf} \caption{Single root.} \label{fig:singleroot} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width = \textwidth]{plots/doubleroot.pdf} \caption{Double root.} \label{fig:doubleroot} \end{subfigure} \caption{To compare two polynomials \ensuremath{\Varid{p}} and \ensuremath{\Varid{q}} we use root counting for \ensuremath{\Varid{q}\mathbin{-}\Varid{p}} and there are three main cases to consider.} \label{fig:roots} \end{figure} If there are no roots (Fig.~\ref{fig:noroot}) in the unit interval, the polynomial stays on ``one side of zero'' and we just need to check the sign of the polynomial at any point. If there is at least one single-root (Fig.~\ref{fig:singleroot}), the original polynomials cross and we return \ensuremath{\Conid{Nothing}}. Similarly for triple-roots or roots of any odd order. Finally, if the polynomial only has roots of even order (some double-roots, or quadruple-roots, etc.\ as in Fig.~\ref{fig:doubleroot}) the polynomial stays on one side of zero, and we can check a few points to see what side that is. (If the number of distinct roots is \ensuremath{\Varid{r}} we check up to \ensuremath{\Varid{r}\mathbin{+}\mathrm{1}} points to make sure at least one will be non-zero and thus tell us on which side of zero the polynomial lies.)
Thus, the top-level of the polynomial partial order implementation is as follows: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{3}{@{}>{\hspre}l<{\hspost}@{}}% \column{10}{@{}>{\hspre}l<{\hspost}@{}}% \column{12}{@{}>{\hspre}l<{\hspost}@{}}% \column{18}{@{}>{\hspre}l<{\hspost}@{}}% \column{42}{@{}>{\hspre}l<{\hspost}@{}}% \column{57}{@{}>{\hspre}c<{\hspost}@{}}% \column{57E}{@{}l@{}}% \column{61}{@{}>{\hspre}l<{\hspost}@{}}% \column{70}{@{}>{\hspre}l<{\hspost}@{}}% \column{89}{@{}>{\hspre}l<{\hspost}@{}}% \column{101}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{cmpPoly}\mathbin{::}(\Conid{Ord}\;\Varid{a},\Conid{Field}\;\Varid{a})\Rightarrow \Conid{Poly}\;\Varid{a}\,\to\,\Conid{Poly}\;\Varid{a}\,\to\,\Conid{Maybe}\;\Conid{Ordering}{}\<[E]% \\ \>[B]{}\Varid{cmpPoly}\;\Varid{p}\;\Varid{q}\mathrel{=}\Varid{cmpZero}\;(\Varid{q}\mathbin{-}\Varid{p}){}\<[E]% \\[\blanklineskip]% \>[B]{}\Varid{cmpZero}\mathbin{::}(\Conid{Ord}\;\Varid{a},\Conid{Field}\;\Varid{a})\Rightarrow \Conid{Poly}\;\Varid{a}\,\to\,\Conid{Maybe}\;\Conid{Ordering}{}\<[E]% \\ \>[B]{}\Varid{cmpZero}\;\Varid{p}{}\<[12]% \>[12]{}\mid \Varid{isZero}\;\Varid{p}{}\<[57]% \>[57]{}\mathrel{=}{}\<[57E]% \>[61]{}\Conid{Just}\;\Conid{EQ}{}\<[E]% \\ \>[12]{}\mid \Varid{all}\;\Varid{even}\;(\Varid{numRoots'}\;\Varid{p}){}\<[57]% \>[57]{}\mathrel{=}{}\<[57E]% \>[61]{}\mathbf{if}\;{}\<[70]% \>[70]{}\Varid{any}\;(\mathrm{0}\mathbin{<})\;\Varid{vals}\;{}\<[89]% \>[89]{}\mathbf{then}\;\Conid{Just}\;\Conid{LT}{}\<[E]% \\ \>[61]{}\mathbf{else}\;\mathbf{if}\;{}\<[70]% \>[70]{}\Varid{any}\;(\mathrm{0}\mathbin{>})\;\Varid{vals}\;{}\<[89]% \>[89]{}\mathbf{then}\;\Conid{Just}\;\Conid{GT}{}\<[E]% \\ \>[61]{}\mathbf{else}\;\Conid{Just}\;\Conid{EQ}{}\<[E]% \\ \>[12]{}\mid \Varid{otherwise}{}\<[57]% \>[57]{}\mathrel{=}{}\<[57E]% \>[61]{}\Conid{Nothing}{}\<[101]% \>[101]{}\mbox{\onelinecomment incomparable}{}\<[E]% \\ \>[B]{}\hsindent{3}{}\<[3]% 
\>[3]{}\mathbf{where}\;{}\<[10]% \>[10]{}\Varid{r}{}\<[18]% \>[18]{}\mathrel{=}\Varid{length}\;(\Varid{numRoots'}\;\Varid{p}){}\<[42]% \>[42]{}\mbox{\onelinecomment the number of distinct roots}{}\<[E]% \\ \>[10]{}\Varid{rp2}{}\<[18]% \>[18]{}\mathrel{=}\Varid{fromIntegral}\;(\Varid{r}\mathbin{+}\mathrm{2}){}\<[E]% \\ \>[10]{}\Varid{points}{}\<[18]% \>[18]{}\mathrel{=}[\mskip1.5mu \Varid{i}\mathbin{/}\Varid{rp2}\mid \Varid{i}\leftarrow \Varid{take}\;(\Varid{r}\mathbin{+}\mathrm{1})\;(\Varid{iterate}\;(\Varid{one}\mathbin{+})\;\Varid{one})\mskip1.5mu]{}\<[E]% \\ \>[10]{}\Varid{vals}{}\<[18]% \>[18]{}\mathrel{=}\Varid{map}\;(\Varid{evalP}\;\Varid{p})\;\Varid{points}{}\<[E]% \ColumnHook \end{hscode}\resethooks To make this work, we ``just'' need to implement the root-counting functions \ensuremath{\Varid{numRoots}} and \ensuremath{\Varid{numRoots'}}: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{numRoots}\mathbin{::}(\Conid{Ord}\;\Varid{a},\Conid{Field}\;\Varid{a})\Rightarrow \Conid{Poly}\;\Varid{a}\,\to\,\Conid{Int}{}\<[E]% \\ \>[B]{}\Varid{numRoots}\mathrel{=}\Varid{sum}\mathbin{\circ}\Varid{numRoots'}{}\<[E]% \\[\blanklineskip]% \>[B]{}\Varid{numRoots'}\mathbin{::}(\Conid{Ord}\;\Varid{a},\Conid{Field}\;\Varid{a})\Rightarrow \Conid{Poly}\;\Varid{a}\,\to\,[\mskip1.5mu \Conid{Int}\mskip1.5mu]{}\<[E]% \ColumnHook \end{hscode}\resethooks The second function computes real root multiplicities: \ensuremath{\Varid{numRoots'}\;\Varid{p}\mathrel{=}[\mskip1.5mu \mathrm{1},\mathrm{3}\mskip1.5mu]} means \ensuremath{\Varid{p}} has one single and one triple root in the open interval \ensuremath{(\mathrm{0},\mathrm{1})}. 
From this we get that \ensuremath{\Varid{p}} has \ensuremath{\mathrm{2}\mathrel{=}\Varid{length}\;[\mskip1.5mu \mathrm{1},\mathrm{3}\mskip1.5mu]} distinct real roots and \ensuremath{\mathrm{4}\mathrel{=}\Varid{sum}\;[\mskip1.5mu \mathrm{1},\mathrm{3}\mskip1.5mu]} real roots if we count multiplicities. We will not provide all the code here, because that would take us too far from the main topic of the paper, but we will illustrate the main algorithms and concepts. \subsection{Isolating real roots and Descartes' rule of signs} First out is Yun's algorithm \citep{10.1145/800205.806320} for square-free factorisation: given a polynomial \ensuremath{\Varid{p}} it computes a list of polynomial factors \(p_i\), each of which only has single-roots, and such that \(p = C \prod_{i} {p_i}^i\). Note the exponent \ensuremath{\Varid{i}}: the factor \ensuremath{\Varid{p}_{2}}, for example, appears squared in \ensuremath{\Varid{p}}. If \ensuremath{\Varid{p}} only has single-roots, the list from Yun's algorithm has just one element, \ensuremath{\Varid{p}_{1}}, but in any case we get a finite list of polynomials, each of which is ``square-free''.% \footnote{Yun's algorithm is built around repeated computation of the polynomial greatest common divisor of \ensuremath{\Varid{p}} and its derivative, \ensuremath{\Varid{p'}}. % See the associated code for the details.} Second in line is Descartes' rule of signs, which can be used to bound the number of real zeros of a polynomial. It tells us that the number of positive real zeros of a polynomial \ensuremath{\Varid{f}} equals the number of sign changes in its coefficient sequence, or is less than that by an even number. Together with some polynomial transformations, this is used to count the zeroes in the interval \([0,1)\). If the rule gives zero or one, we are done: we have isolated an interval \([0,1)\) with either no root or exactly one root.
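The counting core of the rule is elementary. A hedged sketch (coefficients listed constant term first; the interval transformations and square-free preprocessing of the full algorithm are omitted):

```haskell
-- Descartes' rule of signs, counting part only: the number of positive real
-- roots equals the number of sign changes among the nonzero coefficients,
-- or is smaller than that by an even number.  Coefficients are listed from
-- the constant term upwards.
signChanges :: [Rational] -> Int
signChanges coeffs = length (filter (< 0) (zipWith (*) nz (tail nz)))
  where nz = filter (/= 0) coeffs   -- adjacent products < 0 mark sign changes
```

For example, \(x^2 - 3x + 2 = (x-1)(x-2)\) has coefficient list \([2,-3,1]\) with two sign changes, matching its two positive roots.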
For our use case we don't need to know the actual root, just whether it exists in the interval or not. If the rule gives more than one, we don't quite know the exact number of roots yet (only an upper bound). In that case we subdivide the interval into the lower \([0,1/2)\) and upper \([1/2, 1)\) halves. Fortunately, the polynomial coefficients can be transformed to make the domain the unit interval again so that we can call ourselves recursively. After a finite number of steps, this bisection terminates and we get a list of disjoint isolating intervals where we know there is exactly one root in each. Combining Yun and Descartes, we implement our ``root counter'', and thus our partial order on polynomials. \section{Results} Using the method from the previous section we can now calculate the level-$p$-complexity of Boolean functions with our function \ensuremath{\Varid{genAlgThinMemo}}. First we return to our example from the beginning (\ensuremath{\Varid{sim}_{5}}), where we get several polynomials which are optimal in different intervals. Then we calculate the level-$p$-complexity for \ensuremath{\Varid{maj}_{3}^2}, which turns out to be lower than the result proposed in \citep{jansson2022level}, showing that our method improves on it. \subsection{Level-$p$-complexity for \ensuremath{\Varid{sim}_{5}}} \label{sec:fAC} When we run \ensuremath{\Varid{genAlgThinMemo}\;\mathrm{5}\;\Varid{sim}_{5}} it returns a set of four polynomials: \begin{align*} \{&P_1(p) = 2 + 6 p - 10 p^2 + 8 p^3 - 4 p^4, &P_2(p) &= 4 - 2 p - 3 p^2 + 8 p^3 - 2 p^4,\\ &P_3(p) = 5 - 8 p + 9 p^2 - 2 p^4,&P_4(p) &= 5 - 8 p + 8 p^2 \} \end{align*} We do not compute their intersection points, only the fact that they do intersect in the unit interval. The four polynomials were shown already in Fig.~\ref{fig:4polys}.
The level-$p$-complexity for \ensuremath{\Varid{sim}_{5}} is the pointwise minimum of these four costs, a piecewise polynomial with two different polynomial pieces: $D_p(\ensuremath{\Varid{sim}_{5}}) = P_4(p)$ for $p \in [\approx0.356,\approx0.644]$ and $D_p(\ensuremath{\Varid{sim}_{5}}) = P_1(p)$ in the rest of the unit interval. As seen in Figure \ref{fig:ACDp}, the level-$p$-complexity has two maxima. \begin{figure}[htbp] \centering \includegraphics[width = 0.6\textwidth]{plots/ACDp.pdf} \caption{Level-\ensuremath{\Varid{p}}-complexity of \ensuremath{\Varid{sim}_{5}}, where the dots show the intersections of the costs of the decision trees.} \label{fig:ACDp} \end{figure} \begin{comment} \TODO{more of a discussion point, rewrite and move to conclusions with references to this section, or consider structure} This example illustrates limitations of the method when it comes to intersections of the polynomials. In this case our method returns a set of four polynomials, and it does not specify their intersection points, just that they do intersect in the interval. Mathematica is then used to calculate the level-$p$-complexity from these four polynomials. It would be desirable to extend our method to include this calculation, especially since we are already calculating roots and intersections, just not precisely enough. However, it is not strictly needed since Mathematica does this in a simple way.
$$D_p(f) = \begin{cases} P_1(p)= 2 + 6 p - 10 p^2 + 8 p^3 - 4 p^4, &p \in [0,0.356]\\ P_4(p) = 5 - 8 p + 8 p^2, &p \in [0.356,0.644]\\ P_1(p)= 2 + 6 p - 10 p^2 + 8 p^3 - 4 p^4, &p \in [0.644,1] \end{cases}$$ \end{comment} \subsection{Level-$p$-complexity for \ensuremath{\Varid{maj}_{3}^2}} Running \ensuremath{\Varid{genAlgThinMemo}\;\mathrm{9}\;\Varid{maj}_{3}^2} we get \ensuremath{\{\mskip1.5mu \Conid{P}\;[\mskip1.5mu \mathrm{4},\mathrm{4},\mathrm{6},\mathrm{9},\mathbin{-}\mathrm{61},\mathrm{23},\mathrm{67},\mathbin{-}\mathrm{64},\mathrm{16}\mskip1.5mu]\mskip1.5mu\}}, which means that the expected cost (\(P_*\)) of the best decision tree (\ensuremath{\ensuremath{\Conid{T}_{\mathbin{*}}}}) is $$P_*(p) = 4 + 4 p + 6 p^2 + 9 p^3 - 61 p^4 + 23 p^5 + 67 p^6 - 64 p^7 + 16 p^8\,.$$ This can be compared to the decision tree (that we call \ensuremath{\ensuremath{\Conid{T}_{\Varid{t}}}}) conjectured in \citep{jansson2022level} to be the best. Its expected cost is slightly higher (thus worse): $$P_t(p) = 4 + 4 p + 7 p^2 + 6 p^3 - 57 p^4 + 20 p^5 + 68 p^6 - 64 p^7 + 16 p^8\,.$$ The expected costs for decision trees \ensuremath{\ensuremath{\Conid{T}_{\mathbin{*}}}} and \ensuremath{\ensuremath{\Conid{T}_{\Varid{t}}}} can be seen in Figure~\ref{fig:itermajalgs2}. \begin{figure} \centering \includegraphics[width = 0.8\textwidth]{plots/itermajalgs2.pdf} \caption{Expected costs of the two different decision trees. % Because they are very close we also show their difference in Fig.~\ref{fig:itermajalgsdiff2}.} \label{fig:itermajalgs2} \end{figure} Comparing the two polynomials using \ensuremath{\Varid{cmpPoly}\;\ensuremath{\Conid{P}_{\mathbin{*}}}\;\ensuremath{\Conid{P}_{\Varid{t}}}} shows that the new one has strictly lower expected cost than the one from the thesis. The difference, which factors to exactly \(p^2(1-p)^2(1-p+p^2)\), is illustrated in Fig.~\ref{fig:itermajalgsdiff2}, and we note that it is non-negative in the whole interval. 
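The stated factorisation is easy to double-check with exact integer arithmetic. A small sketch (coefficient lists written lowest degree first; this ad-hoc representation is for illustration, not the library's \ensuremath{\Conid{Poly}} type):

```haskell
-- Check that P_t - P_* = p^2 (1-p)^2 (1 - p + p^2), using coefficient lists
-- written lowest degree first (an ad-hoc representation for this check).
mulP :: [Integer] -> [Integer] -> [Integer]
mulP xs ys = [ sum [ x * y | (i, x) <- zip [0 ..] xs
                           , (j, y) <- zip [0 ..] ys
                           , i + j == k ]
             | k <- [0 .. length xs + length ys - 2] ]

diffP :: [Integer]
diffP = zipWith (-) [4, 4, 7, 6, -57, 20, 68, -64, 16]   -- P_t
                    [4, 4, 6, 9, -61, 23, 67, -64, 16]   -- P_*

factored :: [Integer]
factored = foldl1 mulP [[0,0,1], [1,-2,1], [1,-1,1]]  -- p^2, (1-p)^2, 1-p+p^2
```

The product expands to \(p^2 - 3p^3 + 4p^4 - 3p^5 + p^6\), matching the coefficient-wise difference of the two cost polynomials above.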
\begin{figure} \centering \includegraphics[width = 0.9\textwidth]{plots/itermajalgsdiff2.pdf} \caption{Difference between the expected costs of \ensuremath{\ensuremath{\Conid{T}_{\Varid{t}}}} and \ensuremath{\ensuremath{\Conid{T}_{\mathbin{*}}}}.} \label{fig:itermajalgsdiff2} \end{figure} The value of the polynomials at the endpoints is 4, and the maximum of \ensuremath{\ensuremath{\Conid{P}_{\mathbin{*}}}} is $\approx6.14$, compared to $\approx6.19$ for \ensuremath{\ensuremath{\Conid{P}_{\Varid{t}}}}. The conjecture in \citep{jansson2022level} is thus false and the correct formula for the level-$p$-complexity of \ensuremath{\Varid{maj}_{3}^2} is \ensuremath{\ensuremath{\Conid{P}_{\mathbin{*}}}}. At the time of publication of \citep{jansson2022level} it was believed that sifting through all the possible decision trees would be intractable. Fortunately, using a combination of thinning, memoization, and exact comparison of polynomials, it is now possible to compute the correct complexity in less than a second on the author's laptop. \section{Conclusions} This paper describes a Haskell library for computing level-$p$-complexity of Boolean functions, and applies it to two-level iterated majority (\ensuremath{\Varid{maj}_{3}^2}). The problem specification is straightforward: generate all possible decision trees, compute their expected cost polynomials, and select the best ones. The implementation is more of a challenge because of two sources of exponential computational cost: an exponential growth in the set of decision trees and an exponential growth in the size of the recursive call graph (the collection of subfunctions). The library uses thinning to tackle the first and memoization to handle the second source of inefficiency.
In combination with efficient data structures (binary decision diagrams for the Boolean function input, sets of polynomials for the output) this enables computing the level-\ensuremath{\Varid{p}}-complexity for our target example \ensuremath{\Varid{maj}_{3}^2} in less than a second. From the mathematics point of view the strength of the methods used in this paper to compute the level-\ensuremath{\Varid{p}}-complexity is that we can get a correct result for something which is very hard to calculate by hand. From a computer science point of view the paper is an instructive example of how a combination of algorithmic and symbolic tools can tame a doubly exponential computational cost. The library uses type-classes for separation of concerns: the actual implementation type for Boolean functions (the input) is abstracted over by the \ensuremath{\Conid{BoFun}} class; and the corresponding type for the output is modelled by the \ensuremath{\Conid{TreeAlg}} class. We also use our own class \ensuremath{\Conid{Thinnable}} for thinning (and pre-orders), and the \ensuremath{\Conid{Memoizable}} class from hackage. This means that our main function has the following type: \begin{hscode}\SaveRestoreHook \column{B}{@{}>{\hspre}l<{\hspost}@{}}% \column{20}{@{}>{\hspre}l<{\hspost}@{}}% \column{E}{@{}>{\hspre}l<{\hspost}@{}}% \>[B]{}\Varid{genAlgThinMemo}\mathbin{::}{}\<[20]% \>[20]{}(\Conid{BoFun}\;\Varid{bf},\Conid{Memoizable}\;\Varid{bf},\Conid{TreeAlg}\;\Varid{a},\Conid{Thinnable}\;\Varid{a})\Rightarrow {}\<[E]% \\ \>[20]{}\mathbb{N}\,\to\,\Varid{bf}\,\to\,\Conid{Set}\;\Varid{a}{}\<[E]% \ColumnHook \end{hscode}\resethooks All the Haskell code is available on GitHub\footnote{The paper repository is at \url{https://github.com/juliajansson/BoFunComplexity}.} and parts of it have been reproduced in Agda to check some of the stronger invariants.
One direction of future work is to complete the Agda formalisation so that we can provide a formally verified library, perhaps helped by \citet{swierstra_2022, 10.1145/3547636}. The polynomials we compute are pairwise incomparable in the pre-order and, together with the thinning relation, this means that we actually compute what economics calls a Pareto front: a set of solutions where no objective can be improved without sacrificing at least one other objective. It would be interesting to explore this in more detail and to see what the overlap is between thinning as an algorithm design method and different concepts of optimality from economics. The computed level-$p$-complexity for \ensuremath{\Varid{maj}_{3}^2} is better than the result conjectured in \citep{jansson2022level}, and the library allows easy exploration of other Boolean functions. In the future it would be interesting to try to compute the level-\ensuremath{\Varid{p}}-complexity of iterated majority on 3 levels (27 bits). \section*{Acknowledgments} The authors would like to extend their gratitude to Tim Richter and Jeremy Gibbons for taking the time to give valuable feedback on the first draft. The work presented in this paper heavily relies on free software, among others on GHC, Agda, Haskell, git, Emacs, \LaTeX\ and on the Ubuntu operating system, Mathematica, and Visual Studio Code. It is our pleasure to thank all developers of these excellent products. \subsection*{Conflicts of Interest} None.
\section{Introduction} \subsection{Main results} In this paper we study the initial value problem (IVP) \begin{equation}\label{NLS} \left\{\begin{aligned} &{\rm i} \partial_t u+\partial_{xx} u+P*u+f(u,u_x,u_{xx})=0, \quad u=u(t,x), \quad x\in \mathbb{T},\\ &u(0,x)=u_0(x) \end{aligned}\right. \end{equation} where $\mathbb{T}:=\mathds{R}/2\pi\mathds{Z}$, the nonlinearity $f$ is in $C^{\infty}(\mathds{C}^3;\mathds{C})$ in the \emph{real sense} (i.e. $f(z_1,z_2,z_3)$ is $C^{\infty}$ as a function of ${\rm Re}(z_i)$ and ${\rm Im}(z_i)$ for $i=1,2,3$) vanishing at order $2$ at the origin, the potential $P(x)=\sum_{j\in\mathds{Z}} \hat{p}(j) \frac{e^{{\rm i} jx}}{{\sqrt{2\pi}}}$ is a function in $C^1(\mathbb{T};\mathds{C})$ with real Fourier coefficients $\hat{p}(j)\in\mathds{R}$ for any $j\in\mathds{Z}$ and $P*u$ denotes the convolution between $P$ and $u=\sum_{j\in \mathds{Z}}\hat{u}(j)\frac{e^{{\rm i} jx}}{\sqrt{2\pi}}$ \begin{equation}\label{convpotential} P*u(x):=\int_{\mathbb{T}}P(x-y)u(y)dy=\sum_{j\in\mathds{Z}} \hat{p}(j)\hat{u}(j) e^{{\rm i} j x}. \end{equation} Our aim is to prove the local existence, uniqueness and regularity of the classical solution of \eqref{NLS} on Sobolev spaces \begin{equation}\label{SpaziodiSobolev} H^{s}:=H^{s}(\mathbb{T};\mathds{C})=\left\{u(x)=\sum_{k\in\mathds{Z}}\hat{u}(k)\frac{e^{{\rm i} kx}}{\sqrt{2\pi}}\; : \; \|u\|^{2}_{H^s}:=\sum_{j\in \mathds{Z}}\langle j\rangle^{2 s}|\hat{u}(j)|^{2}<\infty \right\}, \end{equation} where $\langle j\rangle:=\sqrt{1+|j|^{2}}$ for $j\in \mathds{Z}$, for $s$ large enough. Similar problems have been studied in the case $x\in\mathds{R}^n$, $n\geq 1$.
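Let us record an elementary remark (not stated explicitly above, but consistent with \eqref{convpotential} and \eqref{SpaziodiSobolev}): the convolution term acts diagonally on the Fourier side, with $\widehat{P*u}(j)=\sqrt{2\pi}\,\hat{p}(j)\hat{u}(j)$, hence it is a bounded linear operator on every $H^{s}$:
\begin{equation*}
\|P*u\|_{H^{s}}^{2}=2\pi\sum_{j\in\mathds{Z}}\langle j\rangle^{2s}|\hat{p}(j)|^{2}|\hat{u}(j)|^{2}\leq 2\pi\Big(\sup_{j\in\mathds{Z}}|\hat{p}(j)|\Big)^{2}\|u\|_{H^{s}}^{2},
\end{equation*}
where $\sup_{j\in\mathds{Z}}|\hat{p}(j)|\leq\|P\|_{L^{1}(\mathbb{T})}/\sqrt{2\pi}<\infty$ since $P\in C^{1}(\mathbb{T};\mathds{C})$.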
For $x\in\mathds{R}$, the paper \cite{Pop1} considered the fully nonlinear Schr\"odinger-type equation ${\rm i} \partial_t u= F(t,x,u,u_x,u_{xx})$; there it was shown that the IVP associated to this equation is locally well posed in time in $H^{\infty}(\mathds{R};\mathds{C})$ (where $H^{\infty}(\mathds{R};\mathds{C})$ denotes the intersection of all Sobolev spaces $H^{s}(\mathds{R};\mathds{C})$, $s\in \mathds{R}$) if the function $F$ satisfies some suitable ellipticity hypotheses. Concerning the $n$-dimensional case, the IVP for quasi-linear Schr\"odinger equations has been studied in \cite{KPV1} in the Sobolev spaces $H^{s}(\mathds{R}^n;\mathds{C})$ with $s$ sufficiently large. Here the key ingredient used to prove energy estimates is a Doi-type lemma which involves pseudo-differential calculus for symbols defined on the Euclidean space $\mathds{R}^n$. Coming back to the case $x\in\mathbb{T}$ we mention \cite{BHM}. In this paper it is shown that if $s$ is large enough and if the size of the initial datum $u_0$ is sufficiently small, then \eqref{NLS} is well posed in the Sobolev space $H^{s}(\mathbb{T})$ if $P=0$ and $f$ is \emph{Hamiltonian} (in the sense of Hypothesis \ref{hyp1}). The proof is based on a Nash-Moser-H\"ormander implicit function theorem, and the required energy estimates are obtained by means of a procedure of reduction of the equation to constant coefficients (as done in \cite{F}, \cite{FP}). We remark that, even for the short time behavior of the solutions, there are deep differences between the problem \eqref{NLS} with periodic boundary conditions ($x\in\mathbb{T}$) and \eqref{NLS} with $x\in\mathds{R}$. Indeed, Christ proved in \cite{Cris} that the following family of problems \begin{equation}\label{christ-eq} \left\{ \begin{aligned} & \partial_t u+{\rm i} u_{xx}+u^{p-1}u_x=0\\ & u(0,x)=u_0(x) \end{aligned}\right.
\end{equation} is ill-posed in all Sobolev spaces $H^s(\mathbb{T})$ for any integer $p\geq 2$, while it is well-posed in $H^s(\mathds{R})$ for $p\geq 3$ and $s$ sufficiently large. The ill-posedness of \eqref{christ-eq} is very strong; indeed, in \cite{Cris} it has been shown that its solutions exhibit the following norm inflation phenomenon: for any $\varepsilon>0$ there exists a solution $u$ of \eqref{christ-eq} and a time $t_{\varepsilon}\in(0,\varepsilon)$ such that \begin{equation*} \norm{u_0}{H^s}\leq\varepsilon \quad {\rm{and}} \quad \norm{u(t_{\varepsilon})}{H^s}>\varepsilon^{-1}. \end{equation*} The examples exhibited in \cite{Cris} suggest that some assumptions on the nonlinearity $f$ in \eqref{NLS} are needed. In this paper we prove local well-posedness for \eqref{NLS} in two cases. The first one is the \emph{Hamiltonian} case. We assume that equation \eqref{NLS} can be written in the complex Hamiltonian form \begin{equation}\label{mega3} \partial_t u={\rm i} \nabla_{\bar{u}}{\mathcal H}(u), \end{equation} with Hamiltonian function \begin{equation}\label{HAMILTONIANA} {\mathcal H}(u)=\int_{\mathbb{T}}-|u_{x}|^{2} +(P*u) \bar{u}+ F(u,u_{x}) dx, \end{equation} for some real valued function $F\in C^{\infty}(\mathds{C}^2;\mathds{R})$ and where $\nabla_{\bar{u}}:=(\nabla_{{\rm Re}(u)}+{\rm i} \nabla_{{\rm Im}(u)})/2$ and $\nabla$ denotes the $L^{2}(\mathbb{T};\mathds{R})$ gradient. Note that the assumption $\hat{p}(j)\in\mathds{R}$ implies that the Hamiltonian $\int_{\mathbb{T}}(P*u) \bar{u}dx$ is real valued. We denote by $\partial_{z_i}:=(\partial_{{\rm Re}({z}_i)}-{\rm i} \partial_{{\rm Im}(z_i)})/2$ and $\partial_{\bar{z}_i}:=(\partial_{{\rm Re}({z}_i)}+{\rm i} \partial_{{\rm Im}({z}_i)})/2$ for $i=1,2$ the Wirtinger derivatives. We assume the following.
\begin{hyp}[{\bf Hamiltonian structure}]\label{hyp1} We assume that the nonlinearity $f$ in equation \eqref{NLS} has the form \begin{equation}\label{NLS5} \begin{aligned} f(z_1,z_2,z_{3})&=(\partial_{\bar{z}_1}F)(z_1,z_2)-\Big((\partial_{z_1\bar{z}_2}F)(z_1,z_2)z_2+\\ &(\partial_{\bar{z}_1\bar{z}_2}F)(z_1,z_2)\bar{z}_2+ (\partial_{z_2\bar{z}_2}F)(z_1,z_2)z_3+(\partial_{\bar{z}_2\bar{z}_2}F)(z_1,z_2)\bar{z}_3\Big), \end{aligned} \end{equation} where $F$ is a real valued $C^{\infty}$ function (in the real sense) defined on $\mathds{C}^2$ vanishing at $0$ at order $3$. \end{hyp} Under the hypothesis above, equation \eqref{NLS} is \emph{quasi-linear} in the sense that the nonlinearity depends linearly on the variable $z_3$. We remark that Hyp. \ref{hyp1} implies that the nonlinearity $f$ in \eqref{NLS} has the Hamiltonian form \begin{equation*} f(u,u_x,u_{xx})=(\partial_{\bar{z}_1}F)(u,u_{x})- \frac{d}{dx}[(\partial_{\bar{z}_{2}}F)(u,u_{x})]. \end{equation*} The second case is the \emph{parity preserving} case. \begin{hyp}[{\bf Parity preserving structure}]\label{hyp2} Consider the equation \eqref{NLS}. Assume that $f$ is a $C^{\infty}$ function in the real sense defined on $\mathds{C}^{3}$ and that it vanishes at order $2$ at the origin. Assume $P$ has real Fourier coefficients. Assume moreover that $f$ and $P$ satisfy the following: \begin{enumerate} \item $f(z_1,z_2,z_3)=f(z_1,-z_2,z_3)$; \item $(\partial_{z_3}f)(z_1,z_2,z_3)\in \mathds{R}$; \item $P(x)=\sum_{j\in\mathds{Z}}\hat{p}(j) e^{{\rm i} j x}$ is such that $\hat{p}(j)=\hat{p}(-j)\in\mathds{R}$ (this means that $P(x)=P(-x)$). \end{enumerate} \end{hyp} Note that item 1 in Hyp. \ref{hyp2} implies that if $u(x)$ is even in $x$ then $f(u,u_x,u_{xx})$ is even in $x$; item 3 implies that if $u(x)$ is even in $x$ so is $P*u$. Therefore the space of functions even in $x$ is invariant for \eqref{NLS}.
We assume item 2 to avoid parabolic terms in the nonlinearity, so that \eqref{NLS} is a \emph{Schr\"odinger-type} equation; note that in this case the equation may be \emph{fully-nonlinear}, i.e. the dependence on the variable $z_{3}$ is not necessarily linear. In order to treat initial data of arbitrary size we shall assume also the following \emph{ellipticity condition}. \begin{hyp}[{\bf Global ellipticity}]\label{hyp3} We assume that there exist constants $\mathtt{c_1},\,\mathtt{c_2}>0$ such that the following holds. If $f$ in \eqref{NLS} satisfies Hypothesis \ref{hyp1} (i.e. has the form \eqref{NLS5}) then \begin{equation}\label{constraint} \begin{aligned} &1-\partial_{z_2}\partial_{\bar{z}_2}F(z_1,z_2)\geq \mathtt{c_1},\\ &\big((1-\partial_{z_2}\partial_{\bar{z}_2}F)^{2}-|\partial_{\bar{z}_2}\partial_{\bar{z}_2}F|^{2}\big)(z_1,z_2)\geq \mathtt{c_2}\\ \end{aligned} \end{equation} for any $(z_1,z_2)$ in $\mathbb{C}^2$. If $f$ in \eqref{NLS} satisfies Hypothesis \ref{hyp2} then \begin{equation}\label{constraint2} \begin{aligned} &1+\partial_{z_3}f(z_1,z_2,z_3)\geq \mathtt{c_1},\\ &\big((1+\partial_{z_3}f)^{2}-|\partial_{\bar{z}_{3}}f|^{2}\big)(z_1,z_2,z_3)\geq\mathtt{c_2} \end{aligned} \end{equation} for any $(z_1,z_2,z_3)$ in $\mathbb{C}^3$. \end{hyp} The main result of the paper is the following. \begin{theo}[{\bf Local existence}]\label{teototale} Consider equation \eqref{NLS}, assume Hypothesis \ref{hyp1} (respectively Hypothesis \ref{hyp2}) and Hypothesis \ref{hyp3}. Then there exists $s_0>0$ such that for any $s\geq s_0 $ and for any $u_0$ in $H^{s}(\mathbb{T};\mathds{C})$ (respectively any $u_0$ even in $x$ in the case of Hyp. \ref{hyp2}) there exists $T>0$, depending only on $\|u_0\|_{H^{s}}$, such that the equation \eqref{NLS} with initial datum $u_0$ has a unique classical solution $u(t,x)$ (resp. $u(t,x)$ even in $x$) such that $$ u(t,x) \in C^{0}\Big([0,T); H^{s}(\mathbb{T})\Big)\bigcap C^{1}\Big([0,T); H^{s-2}(\mathbb{T})\Big).
$$ Moreover there is a constant $C>0$ depending on $\norm{u_0}{H^{s_0}}$ and on $\norm{P}{C^1}$ such that $$ \sup_{t\in [0,T)}\|u(t,\cdot)\|_{H^{s}}\leq C\|u_0\|_{H^{s}}. $$ \end{theo} We make some comments about Hypotheses \ref{hyp1}, \ref{hyp2} and \ref{hyp3}. We remark that the class of Hamiltonian equations satisfying Hyp. \ref{hyp1} is different from the parity preserving one satisfying Hyp. \ref{hyp2}. For instance the equation \begin{equation}\label{manuela} \partial_t u={\rm i}\Big[(1+|u|^2)u_{xx}+u_x^2\bar{u}+(u-\bar{u})u_x\Big] \end{equation} has the form \eqref{mega3} with Hamiltonian function \begin{equation*} \mathcal{H}=\int_{\mathbb{T}}-|u_x|^2(1+|u|^2)+|u|^2(u_x+\bar{u}_x)dx, \end{equation*} but does not have the parity preserving structure (in the sense of Hyp. \ref{hyp2}). On the other hand the equation \begin{equation}\label{manuela1} \partial_tu={\rm i}(1+|u|^2)u_{xx} \end{equation} has the parity preserving structure but is not Hamiltonian with respect to the symplectic form $(u,v)\mapsto {\rm Re}\int_{\mathbb{T}} {\rm i} u\bar{v}d x$. To check this fact one can reason as done in the appendix of \cite{ZGY}. Both the examples \eqref{manuela} and \eqref{manuela1} satisfy the ellipticity Hypothesis \ref{hyp3}. Furthermore there are examples of equations that satisfy Hyp. \ref{hyp1} or Hyp. \ref{hyp2} but do not satisfy Hyp. \ref{hyp3}, for instance \begin{equation}\label{manuela2} \partial_tu={\rm i}(1-|u|^2)u_{xx}. \end{equation} The equation \eqref{manuela2} has the parity preserving structure and it has the form \eqref{NLS} with $P\equiv 0$ and $f(u,u_x,u_{xx})=-|u|^2u_{xx}$; therefore such an $f$ violates \eqref{constraint2} for $|u|\geq 1.$ Nevertheless we are able to prove local existence for equations with this kind of nonlinearity if the size of the initial datum is sufficiently small; indeed, since $f$ in \eqref{NLS} is a $C^{\infty}$ function vanishing at the origin, conditions \eqref{constraint2} in the case of Hyp.
\ref{hyp2} and \eqref{constraint} in the case of Hyp. \ref{hyp1} are always locally fulfilled for $|u|$ small enough. More precisely we have the following theorem. \begin{theo}[{\bf Local existence for small data}]\label{teototale1} Consider equation \eqref{NLS} and assume only Hypothesis \ref{hyp1} (respectively Hypothesis \ref{hyp2}). Then there exists $s_0>0$ such that for any $s\geq s_0$ there exists $r_0>0$ such that, for any $0\leq r\leq r_0$, the thesis of Theorem \ref{teototale} holds for any initial datum $u_0$ in the ball of radius $r$ of $H^{s}(\mathbb{T};\mathds{C})$ centered at the origin. \end{theo} \vspace{1em} Our method requires a high regularity of the initial datum. In the rest of the paper we have not been sharp in quantifying the minimal value of $s_0$ in Theorems \ref{teototale} and \ref{teototale1}. The reason why we need such regularity is to perform suitable changes of coordinates and to have a symbolic calculus at a sufficient order, which requires smoothness of the functions of the phase space. The convolution potential $P$ in equation \eqref{NLS} is motivated by possible future applications. For instance the potential $P$ can be used, as an external parameter, to modulate the linear frequencies with the aim of studying the long time stability of the small amplitude solutions of \eqref{NLS} by means of Birkhoff Normal Forms techniques. For semilinear NLS-type equations this has been done in \cite{BG}. As far as we know there are no results regarding quasi-linear NLS-type equations. For quasi-linear equations we quote \cite{Delort-2009}, \cite{Delort-Sphere} for the Klein-Gordon equation and \cite{maxdelort} for the capillary water waves equation. \subsection{Functional setting and ideas of the proof} \vspace{0.5em} Here we introduce the phase space of functions and we give some ideas of the proof.
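On the Fourier side the $H^s(\mathbb{T})$ norm reads $\|u\|_{H^s}^2=\sum_j\langle j\rangle^{2s}|\hat u(j)|^2$. A minimal numerical sketch (with hypothetical sample coefficients) of this norm and of the log-convexity (interpolation) inequality $\|u\|_{H^{s}}\leq \|u\|_{H^{s_0}}^{\theta}\|u\|_{H^{s_1}}^{1-\theta}$ for $s=\theta s_0+(1-\theta)s_1$, which is used later in the proof of Lemma \ref{unasbarretta}:

```python
import numpy as np

def h_norm(u_hat, s):
    """H^s(T) norm from Fourier coefficients u_hat[i] at modes j = -N..N."""
    N = (len(u_hat) - 1) // 2
    j = np.arange(-N, N + 1)
    return np.sqrt(np.sum((1.0 + j ** 2) ** s * np.abs(u_hat) ** 2))

N = 40
j = np.arange(-N, N + 1)
u_hat = 1.0 / (1.0 + np.abs(j)) ** 3     # hypothetical sample coefficients
s0, s1, theta = 1.0, 3.0, 0.4
s = theta * s0 + (1.0 - theta) * s1
lhs = h_norm(u_hat, s)                   # interpolated norm
rhs = h_norm(u_hat, s0) ** theta * h_norm(u_hat, s1) ** (1.0 - theta)
```

The inequality `lhs <= rhs` is exactly H\"older's inequality applied to the weights $\langle j\rangle^{2s}=(\langle j\rangle^{2s_0})^{\theta}(\langle j\rangle^{2s_1})^{1-\theta}$.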
It is useful for our purposes to work on the product space $H^s\times H^s:=H^{s}(\mathbb{T};\mathds{C})\times H^{s}(\mathbb{T};\mathds{C})$, in particular we will often use its subspace \begin{equation}\label{Hcic} \begin{aligned} &{\bf{ H}}^s:={\bf{ H}}^s(\mathbb{T},\mathds{C}^2):=\big(H^{s}\times H^{s}\big)\cap \mathcal{U},\\ &\mathcal{U}:=\{(u^{+},u^{-})\in L^{2}(\mathbb{T};\mathds{C})\times L^{2}(\mathbb{T};\mathds{C})\; : \; u^{+}=\overline{u^{-}}\},\\ \end{aligned} \end{equation} endowed with the product topology. On ${\bf{ H}}^0$ we define the scalar product \begin{equation}\label{comsca} (U,V)_{{\bf{ H}}^0}:=\int_{\mathbb{T}}U\cdot\overline{V}dx. \end{equation} We introduce also the following subspaces of $H^s$ and of ${\bf{H}}^{s}$ made of functions even in $x\in\mathbb{T}$ \begin{equation}\label{spazipari} \begin{aligned} H^{s}_{e}&:=\{u\in H^{s}\; : \; u(x)=u(-x)\},\qquad {\bf{ H}}_{e}^s&:=(H^{s}_{e}\times H^{s}_{e})\cap {\bf{H}}^{0}. \end{aligned} \end{equation} We define the operators $\lambda[\cdot]$ and $\bar{\lambda}[\cdot]$ by linearity as \begin{equation}\label{NLS1000} \begin{aligned} \lambda [e^{{\rm i} jx}]&:= \lambda_{j} e^{{\rm i} jx}, \qquad \lambda_{j}:=({\rm i} j)^{2}+\hat{p}(j), \quad \;\; j\in \mathds{Z},\\ \bar{\lambda}[e^{{\rm i} jx }]&:= \lambda_{-j}e^{{\rm i} jx}, \end{aligned} \end{equation} where $\hat{p}(j)$ are the Fourier coefficients of the potential $P$ in \eqref{convpotential}. Let us introduce the following matrices \begin{equation}\label{matrici} E:=\left(\begin{matrix} 1 &0\\ 0 &-1\end{matrix}\right), \quad J:=\left(\begin{matrix} 0 &1\\ -1 &0\end{matrix}\right), \quad \mathds{1}:=\left(\begin{matrix} 1 &0\\ 0 &1\end{matrix}\right), \end{equation} and set \begin{equation}\label{DEFlambda} \Lambda U:=\left(\begin{matrix} \lambda [u] \\ \overline{\lambda}\; [\bar{u}]\end{matrix}\right), \qquad \; \forall \;\; U=(u,\bar{u})\in {\bf H}^{s}.
\end{equation} We denote by $\mathfrak{P}$ the linear operator on ${\bf{H}}^{s}$ defined by \begin{equation}\label{convototale} \mathfrak{P}[U]:=\left(\begin{matrix} P*u\\ \bar{P}*\bar{u} \end{matrix}\right), \quad U=(u,\bar{u})\in {\bf{H}}^s, \end{equation} where $P*u$ is defined in \eqref{convpotential}. With this formalism we have that the operator $\Lambda$ in \eqref{DEFlambda} and \eqref{NLS1000} can be written as \begin{equation}\label{DEFlambda2} \Lambda:=\left( \begin{matrix} \partial_{xx} & 0 \\0 & \partial_{xx} \end{matrix} \right)+\mathfrak{P}. \end{equation} It is useful to rewrite the equation \eqref{NLS} as the equivalent system \begin{equation}\label{NLSnaif} \begin{aligned} &\partial_{t}U={\rm i} E\Lambda U+\mathtt{F}(U), \qquad \mathtt{F}(U):=\left( \begin{matrix} f(u,u_x,u_{xx})\\ \overline{f(u,u_x,u_{xx})}\end{matrix} \right), \end{aligned} \end{equation} where $U=(u,\bar{u})$. The first step is to rewrite \eqref{NLSnaif} as a paradifferential system by using the paralinearization formula of Bony (see for instance \cite{Metivier}, \cite{Tay-Para}). In order to do that, we will introduce rigorously classes of symbols in Section \ref{capitolo3}; here we follow the approach used in \cite{maxdelort}. Roughly speaking we shall deal with functions $\mathbb{T}\times \mathds{R}\ni (x,\xi)\mapsto a(x,\xi)$ with limited smoothness in $x$ satisfying, for some $m\in \mathds{R}$, the following estimate \begin{equation}\label{falsisimboli} |\partial_{\xi}^{\beta}a(x,\xi)|\leq C_{\beta}\langle \xi\rangle^{m-\beta}, \;\; \forall \; \beta\in \mathds{N}, \end{equation} where $\langle \xi\rangle :=\sqrt{1+|\xi|^{2}}$. These functions will have limited smoothness in $x$ because they will depend on $x$ through the dynamical variable $U$ which is in ${\bf{H}}^{s}(\mathbb{T})$ for some $s$.
From the symbol $a(x,\xi)$ one can define the \emph{paradifferential} operator ${\rm Op}^{\mathcal{B}}(a(x,\xi))[\cdot]$, acting on periodic functions of the form $u(x)=\sum_{j\in\mathds{Z}}\hat{u}(j)\frac{e^{{\rm i} j x}}{\sqrt{2\pi}}$, in the following way: \begin{equation}\label{sanbenedetto} {\rm Op}^{\mathcal{B}}(a(x,\xi))[u]:=\frac{1}{2\pi}\sum_{k\in \mathds{Z}}e^{{\rm i} k x}\left( \sum_{j\in \mathds{Z}}\chi\left(\frac{k-j}{\langle j\rangle}\right)\hat{a}(k-j,j)\hat{u}(j) \right), \end{equation} where $\hat{a}(k,j)$ is the $k$-th Fourier coefficient of the $2\pi$-periodic function $x\mapsto a(x,j)$, and where $\chi(\eta)$ is a $C^{\infty}_0$ function supported in a sufficiently small neighborhood of the origin. With this formalism \eqref{NLSnaif} is equivalent to the paradifferential system \begin{equation}\label{falsaparali} \partial_{t}U= {\rm i} E\mathcal{G}(U)[U]+\mathcal{R}(U), \end{equation} where ${\mathcal G}(U)[\cdot]$ is \begin{equation}\label{sanfrancesco} \begin{aligned} {\mathcal G}(U)[\cdot]&:=\left(\begin{matrix} {\rm Op}^{\mathcal{B}}(({\rm i}\xi)^{2}+a(x,\xi))[\cdot] & {\rm Op}^{\mathcal{B}}(b(x,\xi))[\cdot] \\ {\rm Op}^{\mathcal{B}}(\overline{b(x,-\xi)})[\cdot] & {\rm Op}^{\mathcal{B}}(({\rm i}\xi)^{2}+\overline{a(x,-\xi)})[\cdot] \end{matrix} \right),\\ a(x,\xi)&:=a(U;x,\xi)=\partial_{u_{xx}}f ({\rm i}\xi)^{2}+\partial_{u_x}f ({\rm i}\xi)+\partial_{u}f,\\ b(x,\xi)&:=b(U;x,\xi)=\partial_{\bar{u}_{xx}}f ({\rm i}\xi)^{2}+\partial_{\bar{u}_x}f ({\rm i}\xi)+\partial_{\bar{u}}f,\\ \end{aligned} \end{equation} and where $\mathcal{R}(U)$ is a smoothing operator \[ \mathcal{R}(\cdot) : {\bf{H}}^{s}\to {\bf{H}}^{s+\rho}, \] for any $s>0$ large enough and $\rho\sim s$. Note that the symbols in \eqref{sanfrancesco} are of order $2$, i.e. they satisfy \eqref{falsisimboli} with $m=2$.
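To fix ideas, the quantization \eqref{sanbenedetto} can be implemented directly on Fourier coefficients. The sketch below uses a rough indicator-function cutoff in place of the smooth bump $\chi$ and a hypothetical, self-consistent Fourier normalization (plain exponentials, no $1/\sqrt{2\pi}$ factors), so it is only an illustration of the formula, not the operator used in the proofs. For an $x$-independent symbol it reduces, as expected, to the Fourier multiplier $a(\xi)$.

```python
import numpy as np

def para_op(a, u_hat, N, delta=0.1):
    """Sketch of Op^B(a)u on Fourier coefficients u_hat: {j: coeff}.

    a(x, xi) must be vectorized in x.  The smooth cutoff chi is replaced
    here by the indicator of |eta| <= delta (a simplification); the
    conventions for the Fourier coefficients are hypothetical."""
    M = 8 * N                                   # x-grid used to compute hat{a}
    x = 2 * np.pi * np.arange(M) / M
    out = {}
    for k in range(-N, N + 1):
        s = 0j
        for j in range(-N, N + 1):
            if abs(k - j) <= delta * np.sqrt(1 + j * j):   # chi((k-j)/<j>)
                # (k-j)-th Fourier coefficient of x -> a(x, j)
                a_hat = np.mean(a(x, j) * np.exp(-1j * (k - j) * x))
                s += a_hat * u_hat.get(j, 0.0)
        out[k] = s
    return out

# x-independent symbol a(x, xi) = i*xi acts as d/dx on u = e^{ix} + 2 e^{3ix}:
res = para_op(lambda x, xi: 1j * xi * np.ones_like(x), {1: 1.0, 3: 2.0}, N=5)
```

Here `res[k]` equals ${\rm i}k\,\hat u(k)$, i.e. the coefficients of $u_x$, confirming the multiplier behaviour on this example.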
One of the most important properties of paradifferential operators is the following: if $U$ is sufficiently regular, namely $U\in {\bf{H}}^{s_0}$ with $s_0$ large enough, then $\mathcal{G}(U)[\cdot]$ extends to a bounded linear operator from ${\bf{H}}^{s}$ to ${\bf{H}}^{s-2}$ for any $s$ in $\mathds{R}$. This paralinearization procedure will be discussed in detail in Section \ref{PARANLS}, in particular in Lemma \ref{paralinearizza} and Proposition \ref{montero}. Since equation \eqref{NLS} is quasi-linear the proofs of Theorems \ref{teototale}, \ref{teototale1} do not rely on direct fixed point arguments; these arguments are used to study the local theory for the semi-linear equations (i.e. when the nonlinearity $f$ in \eqref{NLS} depends only on $u$). The local theory for semi-linear Schr\"odinger-type equations is, nowadays, well understood; for a complete overview we refer to \cite{caze}. Our approach is based on the following quasi-linear iterative scheme (a similar one is used for instance in \cite{ABK}). We consider the sequence of linear problems \begin{equation}\label{sequenz0} \mathcal{A}_0:=\left\{ \begin{aligned} &\partial_{t}U_0-{\rm i} E\partial_{xx}U_0=0 ,\\ &U_0(0)=U^{(0)} \end{aligned}\right. \end{equation} and for $n\geq1$ \begin{equation}\label{sequenzn} \mathcal{A}_n:=\left\{ \begin{aligned} &\partial_{t}U_n-{\rm i} E{\mathcal G}(U_{n-1})[U_{n}]-\mathcal{R}(U_{n-1})=0 ,\\ &U_n(0)=U^{(0)} \end{aligned}\right. \end{equation} where $U^{(0)}(x)=(u_0(x),\overline{u_0}(x))$ with $u_0(x)$ given in \eqref{NLS}. The goal is to show that there exists $s_0>0$ such that for any $s\geq s_0$ the following facts hold: \begin{enumerate} \item the iterative scheme is well-defined, i.e.
there is $T>0$ such that for any $n\geq0$ there exists a unique solution $U_{n}$ of the problem $\mathcal{A}_{n}$ which belongs to the space $C^{0}([0,T); {\bf{H}}^{s})$; \item the sequence $\{U_{n}\}_{n\geq0}$ is bounded in $C^{0}([0,T); {\bf{H}}^{s})$; \item $\{U_n\}_{n\geq0}$ is a Cauchy sequence in $C^{0}([0,T); {\bf{H}}^{s-2})$. \end{enumerate} From these properties it follows that the limit function $U$ belongs to the space $L^{\infty}([0,T); {\bf{H}}^{s})$. In the final part of Section \ref{local} we show that actually $U$ is a \emph{classical} solution of \eqref{NLS}, namely $U$ solves \eqref{NLSnaif} and it belongs to $C^{0}([0,T);{\bf{H}}^{s})$. Therefore the key point is to obtain energy estimates for the linear problem in the unknown $V$ \begin{equation}\label{seqseqseq} \left\{ \begin{aligned} &\partial_{t}V-{\rm i} E{\mathcal G}(U)[V]-\mathcal{R}(U)=0 ,\\ &V(0)=U^{(0)} \end{aligned}\right. \end{equation} where $\mathcal{G}$ is given in \eqref{sanfrancesco}, $U=U(t,x)$ is a fixed function defined for $t\in[0,T]$, $T>0$, regular enough, and $\mathcal{R}(U)$ is regarded as an inhomogeneous forcing term. Note that the regularity in time and space of the coefficients of the operators $\mathcal{G}, \mathcal{R}$ depends on the regularity of the function $U$. Our strategy is to perform a paradifferential change of coordinates $W:=\Phi(U)[V]$ such that the system \eqref{seqseqseq} in the new coordinates reads \begin{equation}\label{seqseqseq2} \left\{ \begin{aligned} &\partial_{t}W-{\rm i} E\widetilde{\mathcal{G}}(U)[W]-\widetilde{\mathcal{R}}(U)=0 ,\\ &W(0)=\Phi(U^{(0)})[U^{(0)}] \end{aligned}\right. \end{equation} where the operator $\widetilde{\mathcal{G}}(U)[\cdot]$ is self-adjoint with constant coefficients in $x\in \mathbb{T}$ and $\widetilde{\mathcal{R}}(U)$ is a bounded term.
More precisely we show that the operator $\widetilde{{\mathcal G}}(U)[\cdot]$ has the form \begin{equation}\label{scopo} \begin{aligned} \widetilde{{\mathcal G}}(U)[\cdot]&:=\left(\begin{matrix} {\rm Op}^{\mathcal{B}}(({\rm i}\xi)^{2}+m(U;\xi))[\cdot] & 0 \\ 0 & {\rm Op}^{\mathcal{B}}(({\rm i}\xi)^{2}+m(U;\xi))[\cdot] \end{matrix} \right),\\ m(U;\xi)&:=m_{2}(U)({\rm i}\xi)^{2}+m_{1}(U)({\rm i}\xi)\in \mathds{R},\\ \end{aligned} \end{equation} with $m(U;\xi)$ real valued and independent of $x\in \mathbb{T}$. Since the symbol $m(U;\xi)$ is real valued, the linear operator ${\rm i} E \widetilde{\mathcal{G}}(U)$ generates a well defined flow on $L^2\times L^2$; since it also has constant coefficients in $x$, it generates a flow on $H^{s}\times H^s$ for $s\geq 0$. This idea of conjugation to constant coefficients up to bounded remainders has been developed in order to study the linearized equations associated with quasi-linear systems in the context of Nash-Moser iterative schemes. For instance we quote the papers \cite{BBM}, \cite{BBM1} on the KdV equation, \cite{FP}, \cite{FP2} on the NLS equation and \cite{IPT}, \cite{BM1}, \cite{BBHM}, \cite{alaz-baldi-periodic} on the water waves equation, in which such techniques are used to study the existence of periodic and quasi-periodic solutions. Here, dealing with the paralinearized equation \eqref{falsaparali}, we adapt the changes of coordinates performed, for instance, in \cite{FP} to the paradifferential context, following the strategy introduced in \cite{maxdelort} for the water waves equation.
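The quasi-linear iterative scheme \eqref{sequenz0}-\eqref{sequenzn} can be illustrated on a toy ODE analogue (purely illustrative, not the PDE scheme itself): for $u'=a(u)u$, $u(0)=u_0$, the linearized iterates $u_n'=a(u_{n-1})u_n$ have the explicit solution $u_0\exp\big(\int_0^t a(u_{n-1})\,ds\big)$, and for $T$ small the successive differences contract, exactly as in items 1-3 above.

```python
import numpy as np

def iterate(a, u0, T=0.5, m=2000, steps=6):
    """Linearized iterates for the toy problem u' = a(u) u, u(0) = u0:
    u_n' = a(u_{n-1}) u_n, solved exactly as u0 * exp(int_0^t a(u_{n-1}))."""
    t = np.linspace(0.0, T, m)
    dt = t[1] - t[0]
    us = [np.full(m, u0)]            # u_0: solution of the trivial problem
    for _ in range(steps):
        g = a(us[-1])
        # cumulative trapezoid rule for int_0^t a(u_{n-1}) ds
        integral = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * dt)))
        us.append(u0 * np.exp(integral))
    return us

us = iterate(np.sin, u0=1.0)
gaps = [np.max(np.abs(us[n + 1] - us[n])) for n in range(len(us) - 1)]
```

The sequence `gaps` of sup-norm differences decreases geometrically, the toy counterpart of the Cauchy property in item 3.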
\vspace{1em} {\bf Comments on Hypotheses \ref{hyp1}, \ref{hyp2} and \ref{hyp3}.} \vspace{0.5em} Consider the following linear system \begin{equation}\label{IVPlineare} \begin{aligned} \partial_{t}V-{\rm i} E \mathfrak{L}(x)\partial_{xx} V=0 ,\\ \end{aligned} \end{equation} where ${\mathfrak L}(x)$ is the matrix of non-constant coefficients \begin{equation}\label{isabella} {\mathfrak L}(x):=\left( \begin{matrix} 1+a_{2}(x) & b_{2}(x) \\ \overline{b_{2}(x)} & 1+a_{2}(x) \end{matrix} \right), \quad a_{2} \in C^{\infty}(\mathbb{T};\mathds{R}), \;\; b_{2}\in C^{\infty}(\mathbb{T};\mathds{C}). \end{equation} Here we explain how to diagonalize and conjugate to constant coefficients the system \eqref{IVPlineare} at the highest order; we also discuss the role of Hypotheses \ref{hyp1}, \ref{hyp2} and \ref{hyp3}. The analogous analysis for the paradifferential system \eqref{seqseqseq} is performed in Section \ref{descent1}. {\emph{First step: diagonalization at the highest order.}} We want to transform \eqref{IVPlineare} into the system \begin{equation}\label{IVPlineare2} \begin{aligned} &\partial_{t}V_{1}={\rm i} E \left(A^{(1)}_2(x)\partial_{xx} V_1+A_{1}^{(1)}(x)\partial_{x}V_1+A_0^{(1)}(x)V_1\right) ,\\ \end{aligned} \end{equation} where $A_{1}^{(1)}(x),A_0^{(1)}(x)$ are $2\times2$ matrices of functions, and $A_{2}^{(1)}(x)$ is the diagonal matrix of functions \[ A_{2}^{(1)}(x)=\left( \begin{matrix} 1+a^{(1)}_{2}(x) & 0 \\ 0 & 1+a_{2}^{(1)}(x) \end{matrix} \right), \] for some real valued function $a_{2}^{(1)}(x)\in C^{\infty}(\mathbb{T};\mathds{R})$. See Section \ref{secondord} for the paradifferential linear system \eqref{seqseqseq}. The matrix $E{\mathfrak L}(x)$ can be diagonalized through a regular transformation if the determinant of ${\mathfrak L}(x)$ is strictly positive, i.e. there exists $c>0$ such that \begin{equation}\label{condlineare} {\rm det}\Big({\mathfrak L}(x)\Big)=(1+a_{2}(x))^{2}-|b_{2}(x)|^{2}\geq c, \end{equation} for any $x\in \mathbb{T}$.
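This diagonalization condition can be checked numerically: $E{\mathfrak L}(x)$ is trace-free, so at each point its eigenvalues are $\pm\sqrt{(1+a_2(x))^2-|b_2(x)|^2}$, real and separated exactly when the determinant of ${\mathfrak L}(x)$ is strictly positive. A sketch with hypothetical sample values of the coefficients at a fixed point $x$:

```python
import numpy as np

a2 = 0.5                 # hypothetical sample values of a_2(x), b_2(x)
b2 = 0.3 + 0.4j
E = np.array([[1.0, 0.0], [0.0, -1.0]])
L = np.array([[1.0 + a2, b2], [np.conj(b2), 1.0 + a2]])
eigs = np.sort_complex(np.linalg.eigvals(E @ L))
# trace(E L) = 0, so the eigenvalues are +/- sqrt(det L)
expected = np.sqrt((1.0 + a2) ** 2 - abs(b2) ** 2)
```

With these values $\det\mathfrak{L}=2$, so the eigenvalues are $\pm\sqrt{2}$, matching the numerical spectrum.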
Note that $E{\mathfrak L}(x)$ is trace-free, hence its eigenvalues are $\lambda_{1,2}(x)=\pm \sqrt{{\rm det}\,{\mathfrak L}(x)}$. Let $\Phi_1(x)$ be the matrix of functions such that \[ \Phi_1(x)\big(E{\mathfrak L}(x)\big)\Phi_1^{-1}(x)=EA_{2}^{(1)}(x), \] where $(1+a_{2}^{(1)}(x))$ is the positive eigenvalue of $E{\mathfrak L}(x)$. One obtains the system \eqref{IVPlineare2} by setting $V_1:=\Phi_1(x)V$. Note that condition \eqref{condlineare} is the transposition at the linear level of the second inequality in \eqref{constraint} or \eqref{constraint2}. Note also that if $\|a_{2}\|_{L^{\infty}},\|b_{2}\|_{L^{\infty}}\leq r$ then condition \eqref{condlineare} is automatically fulfilled for $r$ small enough. {\emph{Second step: reduction to constant coefficients at the highest order.}} In order to understand the role of the first bound in conditions \eqref{constraint} and \eqref{constraint2} we perform a further step in which we reduce the system \eqref{IVPlineare2} to \begin{equation}\label{IVPlineare3} \begin{aligned} &\partial_{t}V_{2}={\rm i} E \left(A^{(2)}_2\partial_{xx} V_2+A_{1}^{(2)}(x)\partial_{x}V_2+A_{0}^{(2)}(x)V_2\right) ,\\ \end{aligned} \end{equation} where $A_{1}^{(2)}(x),A_{0}^{(2)}(x)$ are $2\times2$ matrices of functions, and \[ A_{2}^{(2)}=\left( \begin{matrix} m_2 & 0 \\ 0 & m_2 \end{matrix} \right), \] for some constant $m_2\in \mathds{R}$, $m_2>0$. See Section \ref{ridu2} for the reduction of the paradifferential linear system \eqref{seqseqseq}. In order to do this we use the torus diffeomorphism $x\to x+\beta(x)$ for some periodic function $\beta(x)$ with inverse given by $y\to y+\gamma(y)$ with $\gamma(y)$ periodic in $y$. We define the linear operator $(Au)(x):=u(x+\beta(x))$; such an operator is invertible with inverse given by $(A^{-1}v)(y)=v(y+\gamma(y))$.
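Anticipating the formulas \eqref{monastero} below, the two properties required of this diffeomorphism can be verified numerically: with that choice of $m_2$ the function $\sqrt{m_2/(1+a_2^{(1)})}-1$ has zero mean, so $\partial_y^{-1}$ is applicable, and $(1+a_2^{(1)}(y))(1+\gamma_y(y))^2\equiv m_2$. A sketch with a hypothetical coefficient $a_2^{(1)}$:

```python
import numpy as np

M = 1024
y = 2 * np.pi * np.arange(M) / M
a21 = 0.5 + 0.2 * np.cos(y)              # hypothetical coefficient a_2^(1)(y)

# m_2 from (monastero): (2*pi / int_T dy/sqrt(1+a21))^2 = (1/mean(1/sqrt(1+a21)))^2
m2 = (1.0 / np.mean(1.0 / np.sqrt(1.0 + a21))) ** 2
gamma_y = np.sqrt(m2 / (1.0 + a21)) - 1.0    # = gamma'(y)

mean_gy = np.mean(gamma_y)                   # must vanish for d_y^{-1} to apply
coeff = (1.0 + a21) * (1.0 + gamma_y) ** 2   # must be identically m2
```

Both identities hold up to rounding, confirming that the normalization of $m_2$ is forced precisely by the zero-mean requirement.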
This change of coordinates transforms \eqref{IVPlineare2} into \eqref{IVPlineare3} where \begin{equation} A_{2}^{(2)}(x)= \left( \begin{matrix} A[(1+a_{2}^{(1)}(y))(1+\gamma_{y}(y))^{2}] & 0 \\ 0 & A[(1+a_{2}^{(1)}(y))(1+\gamma_{y}(y))^{2}] \end{matrix} \right). \end{equation} Then the highest order coefficient does not depend on $y\in\mathbb{T}$ if \[ (1+a_{2}^{(1)}(y))(1+\gamma_{y})^{2}=m_2, \] with $m_2\in \mathds{R}$ independent of $y$. This equation can be solved by setting \begin{equation}\label{monastero} \begin{aligned} m_{2}&:=\left[2\pi \left(\int_{\mathbb{T}}\frac{1}{\sqrt{1+a_{2}^{(1)}(x)}}d x\right)^{-1}\right]^{2},\\ \gamma(y)&:=\partial_{y}^{-1}\left(\sqrt{\frac{m_2}{1+a_{2}^{(1)}(y)}}-1\right), \end{aligned} \end{equation} where $\partial_{y}^{-1}$ is the Fourier multiplier with symbol $1/({\rm i}\xi)$, hence it is defined only on zero mean functions. This justifies the choice of $m_2$. Note that $m_2$, $\gamma$ in \eqref{monastero} are well-defined if $(1+a_{2}^{(1)}(x))$ is real and strictly positive for any $x\in \mathbb{T}$. This is the first condition in \eqref{constraint} and \eqref{constraint2}. {\emph{Third step: reduction at lower orders.}} One can show that it is always possible to conjugate system \eqref{IVPlineare3} to a system of the form \begin{equation}\label{esempio} \partial_{t}V_3={\rm i} E \left(A^{(3)}_2\partial_{xx} V_3+A_{1}^{(3)}\partial_{x}V_3+A_{0}^{(3)}(x)V_3\right), \end{equation} where $A_{2}^{(3)}\equiv A_{2}^{(2)}$ and \[ A_{1}^{(3)}:=\left(\begin{matrix}m_{1} & 0\\ 0 & \overline{m_1} \end{matrix} \right), \] with $m_{1}\in \mathds{C}$ and where $A_{0}^{(3)}(x)$ is a matrix of functions, up to bounded operators. See Sections \ref{diago1}, \ref{ridu1} for the analogous reduction for the paradifferential linear system \eqref{seqseqseq}. It turns out that no extra hypotheses are needed to perform this third step. We have obtained that the unbounded term on the r.h.s.
of \eqref{esempio} is a pseudo-differential operator with constant coefficients and symbol $m(\xi):=m_2({\rm i}\xi)^{2}+m_1({\rm i}\xi)$. This is not enough to get energy estimates because the operator $A_{2}^{(3)}\partial_{xx}+A_{1}^{(3)}\partial_{x}$ is not self-adjoint since the symbol $m(\xi)$ is not \emph{a priori} real valued. This example gives the idea that the global ellipticity hypothesis Hyp. \ref{hyp3} (or the smallness of the initial datum) is needed to conjugate the highest order term of $\mathcal{G}$ in \eqref{seqseqseq} to a diagonal and constant coefficient operator. Of course there are no \emph{a priori} reasons to conclude that $\widetilde{{\mathcal G}}$ is self-adjoint. This operator is self-adjoint if and only if its symbol $m(U;\xi)$ in \eqref{scopo} is real valued for any $\xi\in \mathds{R}$. The Hamiltonian hypothesis \ref{hyp1} implies that $m_1(U)$ in \eqref{scopo} is purely imaginary, while the parity preserving assumption \ref{hyp2} guarantees that $m_1(U)\equiv0$. Indeed it is shown in Lemma \ref{struttura-ham-para} that if $f$ is Hamiltonian (i.e. satisfies Hypothesis \ref{hyp1}) then the operator ${\mathcal G}(U)[\cdot]$ is formally self-adjoint w.r.t. the scalar product of $L^{2}\times L^{2}$. In our reduction procedure we use transformations which preserve this structure. On the other hand, in the case that $f$ is parity preserving (i.e. satisfies Hyp. \ref{hyp2}), in Lemma \ref{struttura-rev-para} it is shown that the operator ${\mathcal G}(U)[\cdot]$ maps even functions into even functions if $U$ is even in $x\in \mathbb{T}$. In this case we apply only transformations which preserve the parity of the functions. An operator of the form $\widetilde{\mathcal{G}}$ as in \eqref{scopo} preserves the subspace of even functions only if $m_1(U)=0$. \section*{Acknowledgements} We warmly thank Prof. Massimiliano Berti for the interesting discussions and for the very useful advice.
\section{Linear operators} We define some special classes of linear operators on spaces of functions. \begin{de}\label{barrato} Let $A : H^{s}\to H^{s'}$, for some $s,s'\in \mathds{R}$, be a linear operator. We define the operator $\overline{A}$ as \begin{equation}\label{barrato2} \overline{A}[h]:= \overline{A[\bar{h}]}, \qquad h\in H^{s}. \end{equation} \end{de} \begin{de}[{\bf Reality preserving}]\label{realpre} Let $A, B : H^{s}\to H^{s'}$, for some $s,s'\in \mathds{R}$, be linear operators. We say that a matrix of linear operators $\mathfrak{F}$ is \emph{reality} preserving if it has the form \begin{equation}\label{barrato4} \mathfrak{F}:=\left(\begin{matrix} A & B \\ \overline{B} & \overline{A}\end{matrix}\right), \end{equation} for $A$ and $B$ linear operators. \end{de} \begin{rmk}\label{operatoripreserving1} Given $s,s'\in \mathds{R}$, one can easily check that a \emph{reality} preserving linear operator $\mathfrak{F} $ of the form \eqref{barrato4} is such that \begin{equation}\label{mappare} \mathfrak{F} : {\bf{H}}^{s}\to {\bf{H}}^{s'}. \end{equation} \end{rmk} Given an operator $\mathfrak{F}$ of the form \eqref{barrato4} we denote by $\mathfrak{F}^*$ its adjoint with respect to the scalar product \eqref{comsca} \begin{equation*} (\mathfrak{F}U,V)_{{\bf{ H}}^0}=(U,\mathfrak{F}^{*}V)_{{\bf{ H}}^0}, \quad \forall\,\, U,\, V\in {\bf{ H}}^s. \end{equation*} One can check that \begin{equation}\label{natale} \mathfrak{F}^*:=\left(\begin{matrix} A^* & \overline{B}^* \\ {B}^* & \overline{A}^*\end{matrix}\right), \end{equation} where $A^*$ and $B^*$ are respectively the adjoints of the operators $A$ and $B$ with respect to the complex scalar product on $L^{2}(\mathbb{T};\mathds{C})$ \[ (u,v)_{L^{2}}:=\int_{\mathbb{T}}u\cdot \bar{v}dx, \quad u,v\in L^{2}(\mathbb{T};\mathds{C}). \] \begin{de}[{\bf Self-adjointness}]\label{selfi} Let $\mathfrak{F}$ be a reality preserving linear operator of the form \eqref{barrato4}.
We say that $\mathfrak{F}$ is \emph{self-adjoint} if $A,A^*,B,B^* : H^{s}\to H^{s'}$, for some $s,s'\in \mathds{R}$ and \begin{equation}\label{calu} A^{*}=A,\;\; \;\; \overline{B}=B^{*}. \end{equation} \end{de} We have the following definition. \begin{de}[{\bf Parity preserving}]\label{revmap} Let $A : H^{s}\to H^{s'}$, for some $s,s'\in \mathds{R}$, be a linear operator. We say that $A$ is \emph{parity} preserving if \begin{equation}\label{revmap2000} A : H^{s}_{e}\to H^{s'}_{e}, \end{equation} i.e. it maps even functions into even functions of $x\in \mathbb{T}$. Let $\mathfrak{F} :{\bf{H}}^{s} \to {\bf{H}}^{s'} $ be a reality preserving operator of the form \eqref{barrato4}. We say that $\mathfrak{F}$ is parity preserving if the operators $A,B$ are parity preserving operators. \end{de} \begin{rmk}\label{operatoripreserving2} Given $s,s'\in \mathds{R}$, let $\mathfrak{F} :{\bf{H}}^{s} \to {\bf{H}}^{s'} $ be a reality and parity preserving operator of the form \eqref{barrato4}. One can check that \begin{equation}\label{revmap2} \mathfrak{F} : {\bf H}_{e}^{s}\to {\bf H}_{e}^{s'}. \end{equation} \end{rmk} We note that $\Lambda$ in \eqref{DEFlambda} has the following properties: \begin{itemize} \item the operator $\Lambda$ is reality preserving (according to Def. \ref{realpre}); \item the operator $\Lambda$ is self-adjoint according to Definition \ref{selfi} since the coefficients $\hat{p}(j)$ for $j\in \mathds{Z}$ are real; \item under the parity preserving assumption Hyp. \ref{hyp2} the operator $\Lambda$ is parity preserving according to Definition \ref{revmap}, since $\hat{p}(j)=\hat{p}(-j)$ for $j\in \mathds{Z}$.
\end{itemize} \paragraph{Hamiltonian and parity preserving vector fields.} Let $\mathfrak{F}$ be a reality preserving, self-adjoint (or parity preserving respectively) operator as in \eqref{barrato4} and consider the linear system \begin{equation}\label{linearsystem} \partial_t U={\rm i} E\mathfrak{F}U, \end{equation} on ${\bf{H}}^{s}$ where $E$ is given in \eqref{matrici}. We want to analyze how the properties of the system \eqref{linearsystem} change under conjugation through maps $$ \Phi : {\bf{H}}^{s}\to {\bf{H}}^{s}, $$ which are reality preserving. We have the following lemma. \begin{lemma}\label{lemmalemma2} Let ${{\mathcal X}} : {\bf{H}}^{s}\to {\bf{H}}^{s-m}$, for some $m\in \mathds{R}$ and $s>0$, be a reality preserving, self-adjoint operator according to Definitions \ref{realpre} and \ref{selfi}, and assume that its flow \begin{equation}\label{system15} \partial_{\tau}\Phi^{\tau}={\rm i} E{{\mathcal X}} \Phi^{\tau}, \qquad \Phi^{0}=\mathds{1}, \end{equation} satisfies the following. The map $\Phi^{\tau}$ is a continuous function in $\tau\in[0,1]$ with values in the space of bounded linear operators from ${\bf{H}}^s$ to ${\bf{H}}^s$ and $\partial_{\tau}\Phi^{\tau}$ is continuous as well in $\tau\in[0,1]$ with values in the space of bounded linear operators from ${\bf{H}}^s$ to ${\bf{H}}^{s-m}$. Then the map $\Phi^{\tau}$ satisfies the condition \begin{equation}\label{symsym10} (\Phi^{\tau})^{*}(-{\rm i} E )\Phi^{\tau}=- {\rm i} E. \end{equation} \end{lemma} \begin{proof} First we note that the adjoint operator $(\Phi^{\tau})^{*}$ satisfies the equation $\partial_{\tau}(\Phi^{\tau})^{*}=(\Phi^{\tau})^{*}{\mathcal X}(-{\rm i} E)$. Therefore one can note that $$ \partial_{\tau}\Big[ (\Phi^{\tau})^{*}(-{\rm i} E )\Phi^{\tau} \Big]=0, $$ which implies $(\Phi^{\tau})^{*}(-{\rm i} E )\Phi^{\tau}=(\Phi^{0})^{*}(-{\rm i} E )\Phi^{0}=-{\rm i} E$.
\end{proof} \begin{lemma}\label{lemmalemma} Consider a reality preserving, self-adjoint linear operator $\mathfrak{F}$ (i.e. which satisfies \eqref{barrato4} and \eqref{calu}) and a reality preserving map $\Phi$. Assume that $\Phi$ satisfies condition \eqref{symsym10} and consider the system \begin{equation}\label{system} \partial_t W={\rm i} E \mathfrak{F} W, \qquad W\in {\bf H}^{s}. \end{equation} By setting $V=\Phi W$ one has that the system \eqref{system} reads \begin{equation}\label{system22} \partial_t V={\rm i} E {\mathcal Y} V, \end{equation} \begin{equation}\label{system2} {\mathcal Y}:=-{\rm i} E\Phi({\rm i} E)\mathfrak{F}\Phi^{-1}-{\rm i} E(\partial_{t}\Phi)\Phi^{-1}, \end{equation} and ${\mathcal Y}$ is self-adjoint, i.e. it satisfies conditions \eqref{calu}. \end{lemma} \begin{proof} One applies the change of coordinates and gets the form in \eqref{system2}. We prove separately that each term of ${\mathcal Y}$ is self-adjoint. Note that by \eqref{symsym10} one has that $(-{\rm i} E )\Phi=(\Phi^{*})^{-1}(-{\rm i} E)$, hence $ -{\rm i} E\Phi({\rm i} E)\mathfrak{F}\Phi^{-1}=(\Phi^{*})^{-1} \mathfrak{F} \Phi^{-1} $. Then \begin{equation}\label{system12} \Big( (\Phi^{*})^{-1} \mathfrak{F} \Phi^{-1} \Big)^{*}=(\Phi^{-1})^{*} \mathfrak{F} [(\Phi^{*})^{-1}]^{*}, \end{equation} since $\mathfrak{F}$ is self-adjoint. Moreover we have that $(\Phi^{-1})^{*}=(\Phi^{*})^{-1}$. Indeed again by \eqref{symsym10} one has that $$ \Phi^{-1}=({\rm i} E)\Phi^{*}(-{\rm i} E), \qquad (\Phi^{-1})^{*}=({\rm i} E) \Phi (-{\rm i} E), \quad \Phi^{*}=(-{\rm i} E)\Phi^{-1}({\rm i} E). $$ Hence one has \begin{equation}\label{system13} (\Phi^{-1})^{*}\Phi^{*}=({\rm i} E) \Phi (-{\rm i} E)(-{\rm i} E)\Phi^{-1}({\rm i} E)=-({\rm i} E)({\rm i} E)=\mathds{1}. \end{equation} Then by \eqref{system12} we conclude that $-{\rm i} E\Phi({\rm i} E)\mathfrak{F}\Phi^{-1}$ is self-adjoint. Let us study the second term of \eqref{system2}.
First note that \begin{equation}\label{palomba} \partial_{t}[\Phi^{*}]=-(\Phi^{*})(-{\rm i} E)(\partial_{t}\Phi)\Phi^{-1}({\rm i} E), \qquad (\partial_{t}\Phi)^{*}=\Phi^{*}({\rm i} E)(\partial_{t}(\Phi^{*}))^{*}\Phi^{-1}({\rm i} E) \end{equation} then \begin{equation}\label{system14} \Big( (-{\rm i} E)(\partial_{t}\Phi)(\Phi^{-1}) \Big)^{*}=(\Phi^{-1})^{*}(\partial_{t}\Phi)^{*}({\rm i} E)=(-{\rm i} E)(\partial_{t}(\Phi^{*}))^{*}\Phi^{-1}. \end{equation} By \eqref{palomba} we have $\partial_{t}(\Phi^{*})=(\partial_{t}\Phi)^{*}$, hence we get the result. \end{proof} \begin{lemma}\label{revmap100} Consider a reality and parity preserving linear operator $\mathfrak{F}$ (i.e. \eqref{barrato4} and \eqref{revmap2} hold) and a map $\Phi$ as in \eqref{barrato4} which is parity preserving (see Def. \ref{revmap}). Consider the system \begin{equation}\label{system100} \partial_t W={\rm i} E \mathfrak{F} W, \qquad W\in {\bf H}^{s}. \end{equation} By setting $V=\Phi W$ one has that the system \eqref{system100} reads \begin{equation}\label{system22100} \partial_t V={\rm i} E {\mathcal Y} V, \end{equation} \begin{equation}\label{system2100} {\mathcal Y}:=-{\rm i} E\Phi({\rm i} E)\mathfrak{F}\Phi^{-1}-{\rm i} E(\partial_{t}\Phi)\Phi^{-1}, \end{equation} and ${\mathcal Y}$ is reality preserving and parity preserving, i.e. satisfies conditions \eqref{barrato4} and \eqref{revmap2}. \end{lemma} \begin{proof} It follows straightforwardly from Definitions \ref{revmap} and \ref{realpre}. \end{proof} \setcounter{equation}{0} \section{Paradifferential calculus}\label{capitolo3} \subsection{Classes of symbols} We introduce some notation.
If $K\in\mathbb{N}$, $I$ is an interval of $\mathbb{R}$ containing the origin and $s\in\mathbb{R}^{+}$, we denote by $C^K_{*\mathbb{R}}(I,{\bf{H}}^{s}(\mathbb{T},\mathds{C}^2))$, sometimes by $C^K_{*\mathbb{R}}(I,{\bf{H}}^{s})$, the space of continuous functions $U$ of $t\in I$ with values in ${\bf{H}}^{s}(\mathbb{T},\mathds{C}^2)$, which are $K$-times differentiable and such that the $k$-th derivative is continuous with values in ${\bf{H}}^{s-2k}(\mathbb{T},\mathds{C}^2)$ for any $0\leq k\leq K$. We endow the space $C^K_{*\mathbb{R}}(I,{\bf{H}}^{s})$ with the norm \begin{equation}\label{spazionorm} \sup_{t\in I}\norm{U(t,\cdot)}{K,s}, \quad \mbox {where} \quad \norm{U(t,\cdot)}{K,s}:=\sum_{k=0}^{K}\norm{\partial_t^k U(t,\cdot)}{{\bf{H}}^{s-2k}}. \end{equation} Moreover if $r\in\mathbb{R}^{+}$ we set \begin{equation}\label{pallottola} B_{s}^K(I,r):=\set{U\in C^K_{*\mathbb{R}}(I,{\bf{H}}^{s}):\, \sup_{t\in I}\norm{U(t,\cdot)}{K,s}<r}. \end{equation} \begin{de}[\bf{Symbols}]\label{nonomorest} Let $m\in\mathbb{R}$, $K'\leq K$ in $\mathbb{N}$, $r>0$. We denote by $\Gamma^m_{K,K'}[r]$ the space of functions $(U;t,x,\xi)\mapsto a(U;t,x,\xi)$, defined for $U\in B_{\sigma_0}^K(I,r)$, for some large enough $\sigma_0$, with complex values, such that for any $0\leq k\leq K-K'$ and any $\sigma\geq \sigma_0$ there is $0<r(\sigma)<r$ such that for any $U\in B_{\sigma_0}^K(I,r(\sigma))\cap C^{k+K'}_{*\mathbb{R}}(I,{\bf{H}}^{\sigma})$ and any $\alpha, \beta \in\mathbb{N}$, with $\alpha\leq \sigma-\sigma_0$, one has \begin{equation}\label{simbo} \asso{\partial_t^k\partial_x^{\alpha}\partial_{\xi}^{\beta}a(U;t,x,\xi)}\leq C \norm{U}{k+K',\sigma}\langle\xi\rangle^{m-\beta}, \end{equation} for some constant $C=C(\sigma, \norm{U}{k+K',\sigma_0})$ depending only on $\sigma$ and $\norm{U}{k+K',\sigma_0}$. \end{de} \begin{rmk}\label{notazionetempo} In the rest of the paper the time $t$ will be treated as a parameter.
In order to simplify the notation we shall write $a(U;x,\xi)$ instead of $a(U;t,x,\xi)$. On the other hand we will emphasize the $x$-dependence of a symbol $a$. We shall denote by $a(U;\xi)$ only those symbols which are independent of the variable $x\in \mathbb{T}$. \end{rmk} \begin{rmk}\label{differenzaclassidisimbo} If one compares the latter definition of the class of symbols with the one given in Section $2$ of \cite{maxdelort}, one notes that there the authors are more precise about the expression of the constant $C$ on the r.h.s. of \eqref{simbo}. First of all, we do not need such precision since we only want to study the local theory. Secondly, their classes are modeled in order to work in a small neighborhood of the origin. \end{rmk} \begin{lemma}\label{unasbarretta} Let $a\in \Gamma^{m}_{K,K'}[r]$ and $U\in B_{\sigma_0}^{K}(I,r)$ for some $\sigma_0$. One has that \begin{equation}\label{Normaunabarra2} \sup_{\xi\in\mathds{R}}\langle\xi\rangle^{-m}\|a(U;\cdot,\xi)\|_{K-K',s}\leq C \norm{U}{K,s+\sigma_0+1}, \end{equation} for any $s\geq 0$. \end{lemma} \begin{proof} Assume that $s \in \mathds{N}$. We have \begin{equation}\label{Normaunabarra} \begin{aligned} \|a(U;x,\xi)\|_{K-K',s}&\leq C_1 \sum_{k=0}^{K-K'}\sum_{j=0}^{s-2k} \|\partial_{t}^{k}\partial_{x}^{j}a(U; \cdot,\xi)\|_{L^{\infty}}\\ &\leq C_2 \langle \xi\rangle^{m}\sum_{k=0}^{K-K'}\|U\|_{k+K',s+\sigma_0}, \end{aligned} \end{equation} where $C_{1},C_{2}>0$ depend only on $s,K$ and $\norm{U}{k+K',\sigma_0}$, and where we used formula \eqref{simbo} with $\sigma=s+\sigma_0$. Equation \eqref{Normaunabarra} implies \eqref{Normaunabarra2} for $s\in \mathds{N}$. The general case $s\in \mathds{R}_{+}$ follows by using the log-convexity of the Sobolev norm, writing $s=[s]\tau+(1-\tau)(1+[s])$ where $[s]$ is the integer part of $s$ and $\tau\in[0,1]$. \end{proof} We define the following special subspace of $\Gamma^0_{K,K'}[r]$. \begin{de}[\bf{Functions}]\label{apeape} Let $K'\leq K$ in $\mathbb{N}$, $r>0$.
We denote by ${\mathcal F}_{K,K'}[r]$ the subspace of $\Gamma^0_{K,K'}[r]$ made of those symbols which are independent of $\xi$. \end{de} \subsection{Quantization of symbols}\label{opmulti2} Given a smooth symbol $(x,\xi) \mapsto a(x,\xi)$, we define, for any $\sigma\in [0,1]$, the quantization of the symbol $a$ as the operator acting on functions $u$ as \begin{equation}\label{operatore} {\rm Op}_{\sigma}(a(x,\xi))u=\frac{1}{2\pi}\int_{\mathds{R}\times\mathds{R}}e^{{\rm i}(x-y)\xi}a(\sigma x+(1-\sigma)y,\xi)u(y)dy d\xi. \end{equation} This definition is meaningful in particular if $u\in C^{\infty}(\mathbb{T})$ (identifying $u$ with a $2\pi$-periodic function). By decomposing $u$ in Fourier series as $u=\sum_{j\in\mathds{Z}}\hat{u}(j)\frac{e^{{\rm i} jx}}{\sqrt{2\pi}}$, we may calculate the oscillatory integral in \eqref{operatore}, obtaining \begin{equation}\label{bambola} {\rm Op}_{\sigma}(a)u:=\frac{1}{2\pi}\sum_{k\in \mathds{Z}}\left(\sum_{j\in\mathds{Z}}\hat{a}\big(k-j,(1-\sigma)k+\sigma j\big)\hat{u}(j)\right)e^{{\rm i} k x}, \quad \forall\;\; \sigma\in[0,1], \end{equation} where $\hat{a}(k,\xi)$ is the $k^{th}$ Fourier coefficient of the $2\pi$-periodic function $x\mapsto a(x,\xi)$. In the paper we shall use two particular quantizations: \paragraph{Standard quantization.} We define the standard quantization by specifying formula \eqref{bambola} for $\sigma=1$: \begin{equation}\label{bambola2} {\rm Op}(a)u:={\rm Op}_{1}(a)u=\frac{1}{2\pi}\sum_{k\in \mathds{Z}}\left(\sum_{j\in\mathds{Z}}\hat{a}\big(k-j, j\big)\hat{u}(j)\right)e^{{\rm i} k x}; \end{equation} \paragraph{Weyl quantization.} We define the Weyl quantization by specifying formula \eqref{bambola} for $\sigma=\frac{1}{2}$: \begin{equation}\label{bambola202} {\rm Op}^{W}(a)u:={\rm Op}_{\frac{1}{2}}(a)u=\frac{1}{2\pi}\sum_{k\in \mathds{Z}}\left(\sum_{j\in\mathds{Z}}\hat{a}\big(k-j, \frac{k+j}{2}\big)\hat{u}(j)\right)e^{{\rm i} k x}.
\end{equation} Moreover one can transform a symbol from one quantization to the other by using the formula \begin{equation}\label{bambola5} {\rm Op}(a)={\rm Op}^{W}(b), \qquad {\rm where} \quad \hat{b}(j,\xi)=\hat{a}(j,\xi-\frac{j}{2}). \end{equation} In order to define operators starting from the classes of symbols introduced before, we reason as follows. Let $n\in \mathds{Z}$; we define the projector on the $n$-th Fourier mode as \begin{equation}\label{Fou} \left(\Pi_{n} u\right)(x):=\hat{u}({n})\frac{e^{{\rm i} n x}}{\sqrt{2\pi}}; \quad u(x)=\sum_{n\in\mathds{Z}}\hat{u}(n)\frac{e^{{\rm i} nx}}{\sqrt{2\pi}}. \end{equation} For $U\in B^K_s(I,r)$ (as in Definition \ref{nonomorest}), a symbol $a$ in $ \Gamma^{m}_{K,K'}[r]$, and $v\in C^{\infty}(\mathbb{T},\mathds{C})$ we define \begin{equation}\label{quanti2} {\rm Op}(a(U;x,j))[v]:=\sum_{k\in \mathds{Z}}\left(\sum_{j\in \mathds{Z}}\Pi_{k-j}a(U;x,j)\Pi_{j}v \right). \end{equation} Equivalently one can define ${\rm Op}^{W}(a)$ according to \eqref{bambola202}. We want to define a \emph{paradifferential} quantization. First we give the following definition. \begin{de}[{\bf Admissible cut-off functions}]\label{cutoff1} We say that a function $\chi\in C^{\infty}(\mathbb{R}\times\mathbb{R};\mathbb{R})$ is an admissible cut-off function if it is even with respect to each of its arguments and there exists $\delta>0$ such that \begin{equation*} \rm{supp}\, \chi \subset\set{(\xi',\xi)\in\mathbb{R}\times\mathbb{R}; |\xi'|\leq\delta \langle\xi\rangle},\qquad \chi\equiv 1\,\,\, \rm{ for } \,\,\, |\xi'|\leq\frac{\delta}{2} \langle\xi\rangle. \end{equation*} We assume moreover that \begin{equation*} |\partial_{\xi}^{\alpha}\partial_{\xi'}^{\beta}\chi(\xi',\xi)|\leq C_{\alpha,\beta}\langle\xi\rangle^{-\alpha-\beta},\,\,\forall \alpha, \,\beta\in\mathds{N}.
\end{equation*} \end{de} An example of a function satisfying the conditions above, which will be extensively used in the rest of the paper, is $\chi(\xi',\xi):=\widetilde{\chi}(\xi'/\langle\xi\rangle)$, where $\widetilde{\chi}$ is a function in $C_0^{\infty}(\mathds{R};\mathds{R})$ having a small enough support and equal to one in a neighborhood of zero. For any $a\in C^{\infty}(\mathbb{T})$ we shall use the following notation \begin{equation}\label{pseudoD} (\chi(D)a)(x)=\sum_{j\in\mathds{Z}}\chi(j)\Pi_{j}{a}. \end{equation} \begin{prop}[{\bf Regularized symbols}] Fix $m\in \mathds{R}$, $p,K,K'\in \mathds{N}$, $K'\leq K$ and $r>0$. Consider $a\in {\Gamma}^{m}_{K,K'}[r]$ and $\chi$ an admissible cut-off function according to Definition \ref{cutoff1}. Then the function \begin{equation}\label{nsym2} a_{\chi}(U;x,\xi) := \sum_{n\in \mathds{Z}}\chi\left(n,\xi \right) \Pi_{n}a(U;x,\xi) \end{equation} belongs to ${\Gamma}^{m}_{K,K'}[r]$. \end{prop} For the proof we refer the reader to the remark after Definition $2.2.2$ in \cite{maxdelort}. We define the Bony quantization in the following way. Given an admissible cut-off function $\chi$ and a symbol $a$ in the class $ {\Gamma}^{m}_{K,K'}[r]$, we set \begin{equation}\label{boninon} \textrm{Op}^{{\mathcal B}}(a(U;x,j))[v]:= \textrm{Op}(a_{\chi}(U;x,j))[v], \end{equation} where $a_{\chi}$ is defined in \eqref{nsym2}. Analogously we define the Bony-Weyl quantization \begin{equation}\label{bweylq} {\rm Op}^{\mathcal{B}W}(b(U;x,j))[v]:= {\rm Op}^{W}(b_{\chi}(U;x,j))[v]. \end{equation} The definition of the operators ${\rm Op}^{{\mathcal B}}(b)$ and ${\rm Op}^{\mathcal{B}W}(b)$ is independent of the choice of the cut-off function $\chi$, modulo the smoothing operators that we define now. \begin{de}[\bf{Smoothing remainders}]\label{nonomoop} Let $K'\leq K\in\mathbb{N}$, $\rho\geq0$ and $r>0$.
We define the class of remainders $\mathcal{R}^{-\rho}_{K,K'}[r]$ as the space of maps $(V,u)\mapsto R(V)u$ defined on $B^K_{s_0}(I,r)\times C^K_{*\mathbb{R}}(I,H^{s_0}(\mathbb{T},\mathds{C}))$ which are linear in the variable $u$ and such that the following holds true. For any $s\geq s_0$ there exists a constant $C>0$ and $r(s)\in]0,r[$ such that for any $V\in B^K_{s_0}(I,r)\cap C^K_{*\mathbb{R}}(I,H^{s}(\mathbb{T},\mathds{C}^2))$, any $u\in C^K_{*\mathbb{R}}(I,H^{s}(\mathbb{T},\mathds{C}))$, any $0\leq k\leq K-K'$ and any $t\in I$ the following estimate holds true \begin{equation}\label{porto20} \norm{\partial_t^k\left(R(V)u\right)(t,\cdot)}{H^{s-2k+\rho}}\leq \sum_{k'+k''=k}C\Big[\norm{u}{k'',s}\norm{V}{k'+K',s_0}+\norm{u}{k'',s_0}\norm{V}{k'+K',s}\Big], \end{equation} where $C=C(s,\norm{V}{k'+K',s_0})$ is a constant depending only on $s$ and $\norm{V}{k'+K',s_0}$. \end{de} \begin{lemma}\label{equiv} Consider $\chi_{1}$ and $\chi_2$ admissible cut-off functions. Fix $m\in\mathds{R}$, $r>0$, $K'\leq K\in\mathbb{N}$. Then for $a\in {\Gamma}^{m}_{K,K'}[r]$, we have ${\rm Op}(a_{\chi_1}-a_{\chi_{2}})\in {\mathcal R}^{-\rho}_{K,K'}[r]$ for any $\rho\in \mathds{N}$. \end{lemma} For the proof we refer the reader to the remark after the proof of Proposition $2.2.4$ in \cite{maxdelort}. Now we state a proposition describing the action of paradifferential operators defined in \eqref{boninon} and in \eqref{bweylq}. \begin{prop}[{\bf Action of paradifferential operators}]\label{boni2} Let $r>0$, $m\in\mathbb{R},$ $K'\leq K\in\mathbb{N}$ and consider a symbol $a\in\Gamma^{m}_{K,K'}[r]$. There exists $s_0>0$ such that for any $U\in B^{K}_{s_0}(I,r)$, the operator ${\rm Op}^{\mathcal{B}W}(a(U;x,\xi))$ extends, for any $s\in\mathbb{R}$, as a bounded operator from the space $C^{K-K'}_{*\mathbb{R}}(I,H^{s}(\mathbb{T},\mathds{C}))$ to $C^{K-K'}_{*\mathbb{R}}(I,H^{s-m}(\mathbb{T},\mathds{C}))$. 
Moreover there is a constant $C>0$ depending on $s$ and on the constant in \eqref{simbo} such that \begin{equation}\label{paraparaest} \|{\rm Op}^{\mathcal{B}W}(\partial_{t}^{k}a(U;x,\cdot))\|_{{\mathcal L}(H^{s},H^{s-m})}\leq C \|U\|_{k+K',s_0}, \end{equation} for $k\leq K-K'$, so that \begin{equation}\label{paraest} \norm{{\rm Op}^{\mathcal{B}W}(a(U;x,\xi))(v)}{K-K',{s-m}} \leq C \norm{U}{{K,{s_0}}}\norm{v}{K-K',s}, \end{equation} for any $v\in C^{K-K'}_{*\mathbb{R}}(I,H^{s}(\mathbb{T},\mathds{C}))$. \end{prop} For the proof we refer to Proposition 2.2.4 in \cite{maxdelort}. \begin{rmk}\label{ronaldo10} Actually the estimates \eqref{paraparaest} and \eqref{paraest} follow from \[ \norm{{\rm Op}^{\mathcal{B}W}(a(U;x,\xi))(v)}{K-K',{s-m}} \leq C_1 \sup_{\xi\in\mathds{R}}\langle\xi\rangle^{-m}\|a(U;\cdot,\xi)\|_{K-K',s_0}\|v\|_{K-K',s}, \] where $C_1>0$ is some constant depending only on $s,s_0$, together with Lemma \ref{unasbarretta}. \end{rmk} \begin{rmk}\label{ronaldo2} We remark that Proposition \ref{boni2} (whose proof is given in \cite{maxdelort}) applies if $a$ satisfies \eqref{simbo} with $|\alpha |\leq 2$ and $\beta=0$. Moreover, by following the same proof, one can show that \begin{equation} \|{\rm Op}^{W}(\partial_{t}^{k}a_{\chi}(U;x,\cdot))\|_{{\mathcal L}(H^{s},H^{s-m})}\leq C \|U\|_{k+K',s_0}, \end{equation} if $\chi(\eta,\xi)$ is supported for $|\eta|\leq \delta \langle\xi\rangle$ for $\delta>0$ small. Note that this is slightly different from Definition \ref{cutoff1} of admissible cut-off functions since we are not requiring that $\chi\equiv1$ for $|\eta|\leq \frac{\delta}{2}\langle\xi\rangle$. \end{rmk} \begin{rmk}\label{inclusione-nei-resti} Note that, if $m<0$, and $a\in \Gamma^{m}_{K,K'}[r]$, then estimate \eqref{paraparaest} implies that the operator ${\rm Op}^{\mathcal{B}W}(a(U;x,\xi))$ belongs to the class of smoothing operators ${\mathcal R}^{m}_{K,K'}[r]$.
\end{rmk} We consider paradifferential operators of the form: \begin{equation}\label{prodotto} \quad {\rm Op}^{\mathcal{B}W}(A(U;x,\xi)):={\rm Op}^{\mathcal{B}W}\left(\begin{matrix} {a}(U;x,\xi) & {b}(U;x,\xi)\\ {\overline{b(U;x,-\xi)}} & {\overline{a(U;x,-\xi)}} \end{matrix} \right) :=\left(\begin{matrix} {\rm Op}^{\mathcal{B}W}({a}(U;x,\xi)) & {\rm Op}^{\mathcal{B}W}({b}(U;x,\xi))\\ {\rm Op}^{\mathcal{B}W}({\overline{b(U;x,-\xi)}}) & {\rm Op}^{\mathcal{B}W}({\overline{a(U;x,-\xi)}}) \end{matrix} \right), \end{equation} where $a$ and $b$ are symbols in $\Gamma^{m}_{K,K'}[r]$ and $U$ is a function belonging to $B^{K}_{s_0}(I,r)$ for some $s_0$ large enough. Note that the matrix of operators in \eqref{prodotto} is of the form \eqref{barrato4}. Moreover it is self-adjoint (see \eqref{calu}) if and only if \begin{equation}\label{quanti801} a(U;x,\xi)=\overline{a(U;x,\xi)}\,,\quad b(U;x,-\xi)= b(U;x,\xi), \end{equation} indeed conditions \eqref{calu} on these operators read \begin{equation}\label{megaggiunti} \left({\rm Op}^{\mathcal{B}W}(a(U;x,\xi))\right)^{*}={\rm Op}^{\mathcal{B}W}\left(\overline{a(U;x,\xi)}\right)\, ,\quad \overline{{\rm Op}^{\mathcal{B}W}(b(U;x,\xi))}={\rm Op}^{\mathcal{B}W}\left(\overline{b(U;x,-\xi)}\right). \end{equation} Analogously, given $R_{1}$ and $R_{2}$ in ${\mathcal R}^{-\rho}_{K,K'}[r]$, one can define a reality preserving smoothing operator on ${\bf{H}}^s(\mathbb{T},\mathds{C}^2)$ as follows \begin{equation}\label{vinello} R(U)[\cdot]:=\left(\begin{matrix} R_{1}(U)[\cdot] & R_{2}(U)[\cdot] \\ \overline{R_{2}}(U)[\cdot] & \overline{R_{1}}(U)[\cdot] \end{matrix} \right). \end{equation} We use the following notation for matrix of operators. \begin{de}[{\bf Matrices}]\label{matrixmatrix} We denote by $\Gamma^{m}_{K,K'}[r]\otimes{\mathcal M}_2(\mathds{C})$ the matrices $A(U;x,\xi)$ of the form \eqref{prodotto} whose components are symbols in the class $\Gamma^{m}_{K,K'}[r]$.
In the same way we denote by $ {\mathcal R}^{-\rho}_{K,K'}[r]\otimes{\mathcal M}_2(\mathds{C})$ the operators $R(U)$ of the form \eqref{vinello} whose components are smoothing operators in the class ${\mathcal R}^{-\rho}_{K,K'}[r]$. \end{de} \begin{rmk}\label{compsimb} An important class of \emph{parity preserving} maps according to Definition \ref{revmap} is the following. Consider a matrix of symbols $C(U;x,\xi)$ in $\Gamma^m_{K,K'}[r]\otimes{\mathcal M}_2(\mathds{C})$ with $m\in \mathds{N}$ and $U$ even in $x$. If \begin{equation}\label{compmatr} C(U;x,\xi)=C(U;-x,-\xi), \end{equation} then one can check that ${\rm Op}^{\mathcal{B}W}(C(U;x,\xi))$ preserves the subspace of even functions. Moreover consider the system \[ \left\{\begin{aligned} &\partial_{\tau}\Phi^{\tau}(U)[\cdot]={\rm Op}^{\mathcal{B}W}(C(U;x,\xi))\Phi^{\tau}(U)[\cdot],\\ &\Phi^{0}(U)=\mathds{1}. \end{aligned}\right. \] If the flow $\Phi^{\tau}$ is well defined for $\tau\in [0,1]$, then it defines a family of \emph{parity preserving} maps according to Def. \ref{revmap}. \end{rmk} \subsection{Symbolic calculus} We define the following differential operator \begin{equation}\label{cancello} \sigma(D_{x},D_{\xi},D_{y},D_{\eta})=D_{\xi}D_{y}-D_{x}D_{\eta}, \end{equation} where $D_{x}:=\frac{1}{{\rm i}}\partial_{x}$ and $D_{\xi},D_{y},D_{\eta}$ are similarly defined. If $a$ is a symbol in $\Gamma^{m}_{K,K'}[r]$, $b$ is a symbol in $\Gamma^{m'}_{K,K'}[r]$ and $U\in B^K_{s_0}(I,r)$ with $s_0$ large enough, we define \begin{equation}\label{sbam8} (a\sharp b)_{\rho}(U;x,\xi):=\sum_{\ell=0}^{\rho-1}\frac{1}{ \ell!}\tonde{\frac{i}{2}\sigma(D_x,D_{\xi},D_{y},D_{\eta})}^{\ell}\left[a(U;x,\xi)b(U;y,\eta)\right]_{|_{y=x, \,\eta=\xi}}, \end{equation} modulo symbols in $\Gamma^{m+m'-\rho}_{K,K'}[r]$. Assume also that the $x$-Fourier transforms $\hat{a}(\eta,\xi)$, $\hat{b}(\eta,\xi)$ are supported for $|\eta|\leq \delta \langle\xi \rangle$ for small enough $\delta>0$.
Then we define \begin{equation}\label{sbam8infinito} (a\sharp b)(x,\xi):= \frac{1}{4\pi^{2}}\int_{\mathds{R}^{2}}e^{{\rm i} x (\xi^* + \eta^*)}\hat{a}(\eta^*,\xi+\frac{\xi^*}{2})\hat{b}(\xi^*,\xi-\frac{\eta^*}{2}) d\xi^* d\eta^*. \end{equation} Thanks to the hypothesis on the support of the $x$-Fourier transforms of $a$ and $b$, this integral is well defined as a distribution in $(\xi^*,\eta^*)$ acting on the $C^{\infty}$-function $(\xi^*,\eta^*)\mapsto e^{{\rm i} x(\xi^*+\eta^*)}$. Lemma 2.3.4 in \cite{maxdelort} guarantees that, according to the notation above, one has \begin{equation}\label{ronaldo} {\rm Op}^{\mathcal{B}W}(a)\circ{\rm Op}^{\mathcal{B}W}(b)={\rm Op}^{W}(c), \quad c(x,\xi):=(a_{\chi}\sharp b_{\chi})(x,\xi), \end{equation} where $a_{\chi}$ and $b_{\chi}$ are defined in \eqref{nsym2}. We state here a proposition asserting that $(a\sharp b)_{\rho}$ is the symbol of the composition up to smoothing operators. \begin{prop}[{\bf Composition of Bony-Weyl operators}]\label{componiamoilmondo} Let $a$ be a symbol in $\Gamma^{m}_{K,K'}[r]$ and $b$ a symbol in $\Gamma^{m'}_{K,K'}[r]$. If $U\in B^K_{s_0}(I,r)$ with $s_0$ large enough, then \begin{equation}\label{sharp} {\rm Op}^{\mathcal{B}W}(a(U;x,\xi))\circ{\rm Op}^{\mathcal{B}W}(b(U;x,\xi))-{\rm Op}^{\mathcal{B}W}((a\sharp b)_{\rho}(U;x,\xi)) \end{equation} belongs to the class $\mathcal{R}^{-\rho+m+m'}_{K,K'}[r]$. \end{prop} For the proof we refer to Proposition 2.3.2 in \cite{maxdelort}. In the following we will need to compose smoothing operators and paradifferential ones; the next proposition asserts that the outcome is another smoothing operator. \begin{prop}\label{componiamoilmare} Let $a$ be a symbol in $\Gamma^{m}_{K,K'}[r]$ with $m\geq 0$ and $R$ be a smoothing operator in $\mathcal{R}^{-\rho}_{K,K'}[r]$.
If $U$ belongs to $B^{K}_{s_0}(I,r)$ with $s_0$ large enough, then the composition operators \begin{equation*} {\rm Op}^{\mathcal{B}W}(a(U;x,\xi))\circ R(U)[\cdot]\,, \quad R(U) \circ {\rm Op}^{\mathcal{B}W}(a(U;x,\xi))[\cdot] \end{equation*} belong to the class $\mathcal{R}^{-\rho+m}_{K,K'}[r]$. \end{prop} For the proof we refer to Proposition 2.4.2 in \cite{maxdelort}. We can compose smoothing operators with smoothing operators as well. \begin{prop} Let $R_1$ be a smoothing operator in $\mathcal{R}^{-\rho_1}_{K,K'}[r]$ and $R_2$ in $\mathcal{R}^{-\rho_2}_{K,K'}[r]$. If $U$ belongs to $B^{K}_{s_0}(I,r)$ with $s_0$ large enough, then the operator $R_1(U)\circ R_{2}(U)[\cdot]$ belongs to the class $\mathcal{R}^{-\rho}_{K,K'}[r]$, where $\rho=\min(\rho_1,\rho_2)$. \end{prop} We need also the following. \begin{lemma}\label{est-prod} Fix $K,K'\in \mathds{N}$, $K'\leq K$ and $r>0$. Let $\{c_i\}_{i\in \mathds{N}}$ be a sequence in ${\mathcal F}_{K,K'}[r]$ such that for any $i\in\mathds{N}$ \begin{equation}\label{puta} \asso{\partial_t^k\partial_x^{\alpha}c_i(U;x)}\leq M_i \norm{U}{k+K',s_0}, \end{equation} for any $0\leq k\leq K-K'$ and $|\alpha|\leq 2$ and for some $s_0>0$ large enough. Then for any $s\geq s_0$ and any $0\leq k\leq K-K'$ there exists a constant $C>0$ (independent of $n$) such that for any $n\in\mathds{N}$ \begin{equation}\label{prodotti1} \norm{\partial_t^k\left[{\rm Op}^{\mathcal{B}W}\Big(\prod_{i=1}^nc_i(U;x)\Big)h\right]}{H^{s-2k}}\leq C^{n}\prod_{i=1}^n M_i \sum_{k_1+k_2=k}\norm{U}{k_1+K',s_0}^{n}\norm{h}{k_2,s}, \end{equation} for any $h\in C^{K-K'}_{*\mathds{R}}(I,H^{s}(\mathbb{T};\mathds{C}))$. Moreover there exists $\widetilde{C}$ such that \begin{equation}\label{prodotti} \|{\rm Op}^{\mathcal{B}W}\Big(\prod_{i=1}^n c_i\Big)h\|_{K-K',s}\leq \widetilde{C}^{n} \prod_{i=1}^nM_i\|U\|_{K,s_0}^{n}\|h\|_{K-K',s}, \end{equation} for any $h\in C^{K-K'}_{*\mathds{R}}(I,H^{s}(\mathbb{T};\mathds{C}))$.
\end{lemma} \begin{proof} Let $\chi$ be an admissible cut-off function and set $b(U;x,\xi):=(\prod_{i=1}^nc_i(U;x))_{\chi}$. By the Leibniz rule and interpolation one can prove that \begin{equation}\label{est-prod1} |\partial_t^k\partial_x^{\alpha}\partial_{\xi}^{\beta}b(U;x,\xi)|\leq C^{n}\norm{U}{k+K',s_0}^n\prod_{i=1}^n M_i \end{equation} for any $0\leq k\leq K-K'$, $\alpha\leq 2 $, any $\xi\in\mathds{R}$, where the constant $C$ is independent of $n$. Denoting by $\widehat{b}(U;\ell,\xi)=\widehat{b}(\ell,\xi)$ the $\ell^{th}$ Fourier coefficient of the function $b(U;x,\xi)$, from \eqref{est-prod1} with $\alpha=2$ one deduces the following decay estimate \begin{equation}\label{est-prod2} |\partial_t^k\widehat{b}(\ell,\xi)|\leq C^{n} \norm{U}{k+K',s_0}^n\prod_{i=1}^n M_i\langle\ell\rangle^{-2}. \end{equation} With this setting one has \begin{equation*} \begin{aligned} {\rm Op}^{\mathcal{B}W}\Big(\prod_{i=1}^n&c_i(U;x)\Big)h={\rm Op}^{W}(b(U;x,\xi))h\\ &=\frac{1}{2\pi}\sum_{\ell\in\mathds{Z}}\left(\sum_{n'\in\mathds{Z}}\widehat{b}\Big(\ell-n',\frac{\ell+n'}{2}\Big)\widehat{h}(n')\right)e^{{\rm i}\ell x}, \end{aligned}\end{equation*} where the sum is restricted to the set of indices such that $|\ell-n'|\leq\delta\frac{|\ell+n'|}{2}$ with $0<\delta<1$ (which implies that $\ell\sim n'$).
Let $0\leq k\leq K-K'$; one has \begin{equation*}\begin{aligned} &\norm{\partial_t^k\left[{\rm Op}^{\mathcal{B}W}\Big(\prod_{i=1}^nc_i(U;x)\Big)h\right]}{H^{s-2k}}^2\\ \leq &C^{n}\sum_{k_1+k_2=k}\sum_{\ell\in\mathds{Z}}\langle \ell\rangle^{2(s-2k)}\left|\sum_{n'\in\mathds{Z}}\partial_t^{k_1}\left(\widehat{b}\Big(\ell-n',\frac{\ell+n'}{2}\Big)\right)\partial_t^{k_2}\Big(\widehat{h}(n')\Big)\right|^2 \\ \leq&C^{n}\prod_{i=1}^n M_i^2 \sum_{k_1+k_2=k}\norm{U}{k_1+K',s_0}^{2n}\sum_{\ell\in\mathds{Z}}\left(\sum_{n'\in\mathds{Z}}\langle\ell-n'\rangle^{-2}\langle n'\rangle^{s-2k}\left|\partial_t^{k_2}\widehat{h}(n')\right|\right)^2, \end{aligned}\end{equation*} where in the last passage we have used \eqref{est-prod2} and that $\ell\sim n'$. By using Young's inequality for sequences one can continue the chain of inequalities above and finally obtain \eqref{prodotti1}. The estimate \eqref{prodotti} follows by summing over $0\leq k\leq K-K'$. \end{proof} \begin{prop}\label{diff-prod-est} Fix $K,K'\in \mathds{N}$, $K'\leq K$ and $r>0$. Let $\{c_i\}_{i\in \mathds{N}}$ be a sequence in ${\mathcal F}_{K,K'}[r]$ satisfying the hypotheses of Lemma \ref{est-prod}. Then the operator \begin{equation}\label{QNN} Q^{(n)}_{c_1,\ldots,c_n}:={\rm Op}^{\mathcal{B}W}(c_{1})\circ\cdots\circ{\rm Op}^{\mathcal{B}W}(c_{n})-{\rm Op}^{\mathcal{B}W}(c_1\cdots c_n) \end{equation} belongs to the class ${\mathcal R}^{-\rho}_{K,K'}[r]$ for any $\rho\geq 0$. More precisely there exists $s_0>0$ such that for any $s\geq s_0$ the following holds.
For any $0\leq k\leq K-K'$ and any $\rho\geq0$ there exists a constant $\mathtt{C}>0$ (depending on $\norm{U}{K,s_0}$, $s,s_0,\rho,k$ and independent of $n$) such that \begin{equation}\label{qn} \norm{\partial_t^k\left(Q^{(n)}_{c_1,\ldots,c_n} [h]\right)}{s+\rho-2k}\leq \mathtt{C}^{n}\mathtt{M}\sum_{k_1+k_2=k}\left(\|U\|_{K'+k_1,s_0}^{n}\|h\|_{k_2,s}+ \|U\|_{K'+k_1,s_0}^{n-1}\|h\|_{k_2,s_0}\|U\|_{K'+k_1,s}\right), \end{equation} for any $n\geq1$, any $h$ in $C^K_{*\mathbb{R}}(I,H^s(\mathbb{T},\mathds{C}))$, any $U\in C^K_{*\mathbb{R}}(I,{\bf{H}}^s)\cap B^K_s(I,r)$ and where $\mathtt{M}=M_1\cdots M_n$ (see \eqref{puta}). \end{prop} \begin{proof} We proceed by induction. The case $n=1$ is trivial. Let us study the case $n=2$. Since $c_1,c_2$ belong to ${\mathcal F}_{K,K'}[r]$, one has $c_1 \cdot c_2=(c_1\sharp c_{2})_{\rho}$ for any $\rho>0$. Then by \eqref{ronaldo} there exists an admissible cut-off function $\chi$ such that \begin{equation}\label{ronaldo3} \begin{aligned} {\rm Op}^{\mathcal{B}W}(c_1)&\circ{\rm Op}^{\mathcal{B}W}(c_2)-{\rm Op}^{\mathcal{B}W}(c_1\cdot c_2)= {\rm Op}^{\mathcal{B}W}(c_1)\circ{\rm Op}^{\mathcal{B}W}(c_2)-{\rm Op}^{\mathcal{B}W}( (c_1\sharp c_2)_{\rho} )\\ &={\rm Op}^{W}((c_1)_{\chi}\sharp (c_{2})_{\chi})-{\rm Op}^{W}((c_1\sharp c_{2})_{\rho,\chi})= {\rm Op}^{W}(r_1)+{\rm Op}^{W}(r_2), \end{aligned} \end{equation} where \begin{equation} \begin{aligned} r_{1}(x,\xi)&=(c_1)_{\chi}\sharp (c_{2})_{\chi}-((c_1)_{\chi}\sharp (c_{2})_{\chi})_{\rho},\\ r_2(x,\xi)&=((c_1)_{\chi}\sharp (c_{2})_{\chi})_{\rho}-(c_1\sharp c_{2})_{\rho,\chi}. \end{aligned} \end{equation} Then, by Lemma $2.3.3$ in \cite{maxdelort} and \eqref{puta}, one has that $r_1$ satisfies the bound \begin{equation}\label{ronaldo4} |\partial_{t}^{k}\partial_{x}^{\ell}r_1(U;x,\xi)|\leq \widetilde{C}M_1 M_2\langle\xi\rangle^{-\rho+\ell}\|U\|_{k+K',s_0}^{2} \end{equation} for any $|\ell|\leq 2$ and some universal constant $\widetilde{C}>0$ depending only on $s,s_0,\rho$.
Therefore Proposition \ref{boni2} and Remark \ref{ronaldo2} imply that \begin{equation}\label{ronaldo5} \norm{ {\rm Op}^{W}(\partial_t^kr_1(U;x,\cdot) ) }{{\mathcal L}(H^s, H^{s+\rho-2})}\leq \widetilde{C}M_1 M_2\|U\|_{k+K',s_0}^{2}, \end{equation} for $\widetilde{C}>0$ possibly larger than the one in \eqref{ronaldo4}, but still depending only on $k,s, s_0,\rho$. From the bound \eqref{ronaldo5} one deduces the estimate \eqref{qn} for some $\mathtt{C}\geq 2\widetilde{C}$. One can argue in the same way to estimate the term ${\rm Op}^{W}(r_2)$ in \eqref{ronaldo3}. Assume now that \eqref{qn} holds for $j\leq n-1$, with $n\geq3$. We have that \begin{equation} \begin{aligned} {\rm Op}^{\mathcal{B}W}(c_{1})\circ\cdots\circ{\rm Op}^{\mathcal{B}W}(c_{n})=\big({\rm Op}^{\mathcal{B}W}(c_1\cdots c_{n-1})+Q_{n-1}\big)\circ {\rm Op}^{\mathcal{B}W}(c_n), \end{aligned} \end{equation} where $Q_{n-1}$ satisfies condition \eqref{qn} with $n-1$ in place of $n$. For the term ${\rm Op}^{\mathcal{B}W}(c_1\cdots c_{n-1})\circ{\rm Op}^{\mathcal{B}W}(c_n)$ one has to argue as done in the case $n=2$. Consider the term $Q_{n-1}\circ {\rm Op}^{\mathcal{B}W}(c_n)$ and let $C>0$ be the universal constant given by Lemma \ref{est-prod}.
Using the inductive hypothesis on $Q_{n-1}$ and estimate \eqref{prodotti1} in Lemma \ref{est-prod} (in the case $n=1$) we have \begin{equation*} \begin{aligned} &\|\partial_{t}^{k}\big(Q_{n-1}\circ{\rm Op}^{\mathcal{B}W}(c_n) h\big)\|_{s+\rho-2k} \leq \mathtt{K}\mathtt{C}^{n-1}M_1\cdots M_{n-1}\sum_{k_1+k_2=k}\sum_{j_1+j_2=k_{2}} CM_{n}\|U\|^{n-1}_{K'+k_1,s_0}\|U\|_{K'+j_1,s_0}\|h\|_{j_2,s}\\ &\qquad+ \mathtt{K}\mathtt{C}^{n-1}M_1\cdots M_{n-1}\sum_{k_1+k_2=k}\sum_{j_1+j_2=k_{2}} CM_{n}\|U\|^{n-2}_{K'+k_1,s_0}\|U\|_{K'+k_1,s}\|U\|_{K'+j_1,s_0}\|h\|_{j_2,s_0}\\ &\qquad\leq \mathtt{K}\mathtt{M}\mathtt{C}^{n-1}C \sum_{k_1=0}^{k}\sum_{j_1=0}^{k-k_1} \|U\|^{n}_{K'+k_1+j_1,s_0}\|h\|_{k-k_1-j_1,s}\\ &\qquad+ \mathtt{K}\mathtt{M}\mathtt{C}^{n-1}C \sum_{k_1=0}^{k}\sum_{j_1=0}^{k-k_1} \|U\|^{n-1}_{K'+k_1+j_1,s_0}\|U\|_{K'+k_1+j_1,s}\|h\|_{k-k_1-j_1,s_0}\\ &\qquad\leq \mathtt{K}\mathtt{M}\mathtt{C}^{n-1}C \sum_{m=0}^{k} (\|U\|^{n}_{K'+m,s_0}\|h\|_{k-m,s}+ \|U\|^{n-1}_{K'+m,s_0}\|U\|_{K'+m,s}\|h\|_{k-m,s_0})(m+1), \end{aligned} \end{equation*} for a constant $\mathtt{K}$ depending only on $k$. This implies \eqref{qn} by choosing $\mathtt{C}> (k+1)C\mathtt{K}$. \end{proof} \begin{coro}\label{esponanziale} Fix $K,K'\in \mathds{N}$, $K'\leq K$ and $r>0$. Let $s(U;x)$ and $z(U;x)$ be symbols in the class ${\mathcal F}_{K,K'}[r]$. Consider the following two matrices \begin{equation} S(U;x):=\left(\begin{matrix}s(U;x) & 0\\ 0 & \overline{s(U;x)}\end{matrix}\right),\quad Z(U;x):=\left(\begin{matrix}0 & z(U;x)\\ \overline{z(U;x)} & 0\end{matrix}\right)\in {\mathcal F}_{K,K'}[r]\otimes{\mathcal M}_{2}(\mathds{C}).
\end{equation} Then one has \begin{equation*} \begin{aligned} &\exp\left\{{\rm Op}^{\mathcal{B}W}(S(U;x))\right\}-{\rm Op}^{\mathcal{B}W}\left(\exp\left\{S(U;x)\right\}\right) \in {\mathcal R}^{-\rho}_{K,K'}[r]\otimes{\mathcal M}_2(\mathds{C}), \\ &\exp\left\{{\rm Op}^{\mathcal{B}W}(Z(U;x))\right\}-{\rm Op}^{\mathcal{B}W}\left(\exp\left\{Z(U;x)\right\}\right) \in {\mathcal R}^{-\rho}_{K,K'}[r]\otimes{\mathcal M}_2(\mathds{C}), \end{aligned} \end{equation*} for any $\rho\geq 0$. \end{coro} \begin{proof} Let us prove the result for the matrix $S(U;x)$. Since $s(U;x)$ belongs to ${\mathcal F}_{K,K'}[r]$, there exists $s_0>0$ such that, if $U\in B^K_{s_0}(I,r)$, then there is a constant $\mathtt{N}>0$ such that $$\asso{\partial_t^k\partial_x^{\alpha}s(U;x)}\leq \mathtt{N} \norm{U}{k+K',s_0},$$ for any $0\leq k\leq K-K'$ and $|\alpha|\leq 2$. By definition one has \begin{equation*} \begin{aligned} \exp&\Big({\rm Op}^{\mathcal{B}W}(S(U;x))\Big)=\sum_{n=0}^{\infty}\frac{\big({\rm Op}^{\mathcal{B}W}(S(U;x))\big)^n}{n!}\\ &=\sum_{n=0}^{\infty}\frac{1}{n!}\left(\begin{matrix} \big({\rm Op}^{\mathcal{B}W}(s(U;x))\big)^n & 0\\ 0 & \big({\rm Op}^{\mathcal{B}W}(\overline{s(U;x)})\big)^n \end{matrix}\right), \end{aligned} \end{equation*} on the other hand \begin{equation*} \begin{aligned} {\rm Op}^{\mathcal{B}W}&\Big(\exp\big(S(U;x)\big)\Big)=\sum_{n=0}^\infty\frac{1}{n!}{\rm Op}^{\mathcal{B}W}\left(\begin{matrix} \big[s(U;x)\big]^n & 0\\ 0 & \big[\overline{s(U;x)}\big]^n \end{matrix}\right)\\ &=\sum_{n=0}^\infty\frac{1}{n!}\left(\begin{matrix} {\rm Op}^{\mathcal{B}W}\big(\big[s(U;x)\big]^n\big) & 0\\ 0 &{\rm Op}^{\mathcal{B}W} \big(\big[\overline{s(U;x)}\big]^n\big) \end{matrix}\right). \end{aligned} \end{equation*} We argue componentwise.
Let $h$ be a function in $C^K_{*\mathbb{R}}(I,H^s(\mathbb{T},\mathds{C}))$, then using Proposition \ref{diff-prod-est}, one has \begin{equation*} \begin{aligned} & \norm{\sum_{n=0}^{\infty}\frac{1}{n!}\partial_t^k\Big(\big[{\rm Op}^{\mathcal{B}W}(s(U;x))\big]^n[h]-{\rm Op}^{\mathcal{B}W}\big(s(U;x)^n\big)[h]\Big)}{s+\rho-2k}\leq\\ & \sum_{n=1}^{\infty}\frac{\mathtt{C}^n\mathtt{N}^n}{n!}\sum_{k_1+k_2=k} \left(\norm{U}{K'+k_1,s_0}^n\norm{h}{k_2,s}+\norm{U}{K'+k_1,s_0}^{n-1}\norm{h}{k_2,s_0}\norm{U}{K'+k_1,s}\right) \leq\\ & \sum_{k_1+k_2=k}\left(\norm{U}{K'+k_1,s_0}\norm{h}{k_2,s}+\norm{U}{K'+k_1,s}\norm{h}{k_2,s_0}\right)\sum_{n=1}^{\infty} \frac{\mathtt{C}^n\mathtt{N}^n}{n!}\norm{U}{K'+k_1,s_0}^{n-1}. \end{aligned}\end{equation*} Therefore we have proved \eqref{porto20} with constant $$C=\sum_{n=1}^{\infty} \frac{\mathtt{C}^n\mathtt{N}^n}{n!}\norm{U}{K'+k_1,s_0}^{n-1}=\frac{\exp(\mathtt{C}\mathtt{N} \norm{U}{K'+k_1,s_0})-1}{\norm{U}{K'+k_1,s_0}}.$$ For the other nonzero component of the matrix the argument is the same. In order to simplify the notation, set $z=z(U;x)$ and $\overline{z}=\overline{z(U;x)}$. Since $Z^{2n}=|z|^{2n}\mathds{1}$ and $Z^{2n+1}=|z|^{2n}Z$, for the matrix $Z(U;x)$ one has, by definition, \begin{equation*} {\rm Op}^{\mathcal{B}W}\left(\exp(Z(U;x))\right)={\rm Op}^{\mathcal{B}W}\left(\sum_{n=0}^{\infty}\left(\begin{matrix} \frac{|z|^{2n}}{(2n)!} & \frac{|z|^{2n}}{(2n+1)!}\,z\\ \frac{|z|^{2n}}{(2n+1)!}\,\overline{z} & \frac{|z|^{2n}}{(2n)!} \end{matrix}\right)\right). \end{equation*} On the other hand, setting $A^n_{z,\bar{z}}=\big({\rm Op}^{\mathcal{B}W}(z)\circ{\rm Op}^{\mathcal{B}W}(\bar{z})\big)^n$ and $B^n_{z,\bar{z}}=A^n_{z,\bar{z}}\circ{\rm Op}^{\mathcal{B}W}(z)$, one has \begin{equation*} \exp\left({\rm Op}^{\mathcal{B}W}(Z(U;x))\right)=\sum_{n=0}^{\infty}\left(\begin{matrix} \frac{A^n_{z,\bar{z}}}{(2n)!} & \frac{B^n_{z,\bar{z}}}{(2n+1)!}\\ \frac{\overline{B^n_{z,\bar{z}}}}{(2n+1)!} & \frac{\overline{A^n_{z,\bar{z}}}}{(2n)!} \end{matrix}\right).
\end{equation*} Therefore one can study each component of the matrix $\exp\left({\rm Op}^{\mathcal{B}W}(Z(U;x))\right)-{\rm Op}^{\mathcal{B}W}\left(\exp{Z(U;x)}\right)$ in the same way as done in the case of the matrix $S(U;x)$. \end{proof} \section{Paralinearization of the equation}\label{PARANLS} In this section we give a paradifferential formulation of the equation \eqref{NLS}. In order to paralinearize the equation \eqref{NLS} we need to ``double'' the variables. We consider a system of equations for the variables $(u^+,u^-)$ in $H^{s}\times H^{s}$ which is equivalent to \eqref{NLS} if $u^+=\bar{u}^-$. More precisely we give the following definition. \begin{de}\label{vectorNLS} Let $f$ be the $C^{\infty}(\mathds{C}^3;\mathds{C})$ function in the equation \eqref{NLS}. We define the ``vector'' NLS as \begin{equation}\label{sistemaNLS} \begin{aligned} &\partial_t U={\rm i} E\left[\Lambda U +\mathtt{F}(U,U_x,U_{xx})\right], \quad U\in H^{s}\times H^{s}, \\ &\mathtt{F}(U,U_x,U_{xx}):= \left(\begin{matrix} f_1(U,{ U}_{x},{ U}_{xx})\\ {{f_2({U },{U}_{x},{U}_{xx})}} \end{matrix} \right), \end{aligned} \end{equation} where \[ \mathtt{F}(Z_1,Z_{2},Z_{3})=\left(\begin{matrix} f_1(z_{1}^{+},z_{1}^{-},z_{2}^{+},z_{2}^{-},z_{3}^{+},z_{3}^{-})\\ f_2(z_{1}^{+},z_{1}^{-},z_{2}^{+},z_{2}^{-},z_{3}^{+},z_{3}^{-})\end{matrix}\right), \quad Z_{i}=\left(\begin{matrix} z_{i}^{+} \\ z_{i}^{-} \end{matrix}\right), \quad i=1,2,3, \] extends $(f,\overline{f})$ in the following sense. The functions $f_{i}$ for $i=1,2$ are $C^{\infty}$ on $\mathds{C}^{6}$ (in the real sense).
Moreover one has the following: \begin{equation}\label{sistNLS1} \begin{aligned} \left(\begin{matrix} f_1(z_1,\bar{z}_1,z_2,\bar{z}_2,z_{3},\bar{z}_{3}) \\ f_2(z_1,\bar{z}_1,z_2,\bar{z}_2,z_{3},\bar{z}_{3}) \end{matrix} \right)=\left(\begin{matrix} f(z_1,z_2,z_{3}) \\ \overline{f(z_1,z_2,z_{3})} \end{matrix} \right), \end{aligned} \end{equation} and \begin{equation}\label{sistNLS2} \begin{aligned} & \partial_{z^{+}_{3}}f_{1}=\partial_{z^{-}_{3}}f_2, \quad \partial_{z^{+}_{i}}f_1=\overline{\partial_{z^{-}_{i}}f_{2}}, \;\; i=1,2,\;\;\; \partial_{z^{-}_{i}}f_1=\overline{\partial_{z^{+}_{i}}f_2}, \quad i=1,2,3 \\ &\partial_{\overline {z^{+}_{i}}}f_{1} =\partial_{\overline {z^{+}_{i}}}f_{2}=\partial_{\overline {z^{-}_{i}}}f_{1} =\partial_{\overline {z^{-}_{i}}}f_{2}=0 \;\; \end{aligned} \end{equation} where $ \partial_{\overline {z^{\sigma}_{j}}}= \partial_{{\rm Re}\,z^{\sigma}_j}+ {\rm i} \partial_{{\rm Im}\,z^{\sigma}_j}, \;\; \sigma=\pm.$ \end{de} \begin{rmk} In the case that $f$ has the form \[ f(z_1,z_2,z_3)=C z_{1}^{\alpha_1}{\bar{z}}_{1}^{\beta_1}z_{2}^{\alpha_2}\bar{z}_{2}^{\beta_{2}} \] for some $C\in\mathds{C}$, $\alpha_{i},\beta_i\in \mathds{N}$ for $i=1,2$, a possible extension is the following: \[ \begin{aligned} &f_{1}(z^{+}_1,z^{-}_1,z^{+}_2,z^{-}_2)=C (z^{+}_1)^{ \alpha_1}(z^{-}_1)^{\beta_1}(z^{+}_2)^{\alpha_2}(z^{-}_2)^{\beta_2},\\ & f_{2}(z^{+}_1,z^{-}_1,z^{+}_2,z^{-}_2)=\overline{C} (z^{-}_1)^{ \alpha_1}(z^{+}_1)^{\beta_1}(z^{-}_2)^{\alpha_2}(z^{+}_2)^{\beta_2}. 
\end{aligned} \] \end{rmk} \begin{rmk} Using \eqref{sistNLS1} one deduces the following relations between the derivatives of $f$ and those of $f_j$, $j=1,2$: \begin{equation}\label{derivate} \begin{aligned} &\partial_{z_i}f(z_1,z_2,z_3)=(\partial_{z_i^+}f_1)(z_1,\bar{z}_1,z_2,\bar{z}_2,z_3,\bar{z}_3),\\ &\partial_{\bar{z}_i}f(z_1,z_2,z_3)=(\partial_{z_i^-}f_1)(z_1,\bar{z}_1,z_2,\bar{z}_2,z_3,\bar{z}_3),\\ & \overline{\partial_{\bar{z}_i}f(z_1,z_2,z_3)}={(\partial_{z_i^+}f_2)(z_1,\bar{z}_1,z_2,\bar{z}_2,z_3,\bar{z}_3)},\\ & \overline{\partial_{{z}_i}f(z_1,z_2,z_3)}={(\partial_{z_i^-}f_2)(z_1,\bar{z}_1,z_2,\bar{z}_2,z_3,\bar{z}_3)}. \end{aligned} \end{equation} \end{rmk} In the rest of the paper we shall use the following notation. Given a function $g(z_{1}^{+},z_{1}^{-},z_{2}^{+},z_{2}^{-},z_{3}^{+},z_{3}^{-})$ defined on $\mathds{C}^{6}$ which is differentiable in the real sense, we shall write \begin{equation}\label{notazione} \begin{aligned} &(\partial_{\partial_{x}^{i}u}g)(u,\bar{u},u_{x},\bar{u}_{x},u_{xx},\bar{u}_{xx}):=(\partial_{z_{i+1}^{+}}g)(u,\bar{u},u_{x},\bar{u}_{x},u_{xx},\bar{u}_{xx}),\\ &(\partial_{\overline{\partial_{x}^{i}u}}g)(u,\bar{u},u_{x},\bar{u}_{x},u_{xx},\bar{u}_{xx}):=(\partial_{z_{i+1}^{-}}g)(u,\bar{u},u_{x},\bar{u}_{x},u_{xx},\bar{u}_{xx}),\quad i=0,1,2. \end{aligned} \end{equation} By Definition \ref{vectorNLS} one has that equation \eqref{NLS} is equivalent to the system \eqref{sistemaNLS} on the subspace ${\bf{H}}^{s}$. We state the Bony paralinearization lemma, which is adapted to our case from Lemma 2.4.5 of \cite{maxdelort}. \begin{lemma}[\bf{Bony paralinearization of the composition operator}]\label{paralinearizza} Let $f$ be a complex-valued function of class $C^{\infty}$ in the real sense, defined in a ball of radius $r>0$ centered at $0$ in $\mathds{C}^6$ and vanishing at $0$ at order $2$.
There exist a $1\times 2$ matrix of symbols $q\in\Gamma^2_{K,0}[r]$ and a $1\times 2$ matrix of smoothing operators $Q(U)\in{\mathcal R}^{-\rho}_{K,0}[r]$, for any $\rho$, such that \begin{equation}\label{finalmentesiparalinearizza1} f(U,U_x,U_{xx})={\rm Op}^{\mathcal{B}W}(q(U,U_x,U_{xx};x,\xi))[U]+ Q(U)U. \end{equation} Moreover the symbol $q(U;x,\xi)$ has the form \begin{equation}\label{PPPPPP1} q(U;x,\xi):=d_{2}(U;x)({\rm i}\xi)^{2}+d_1(U;x)({\rm i}\xi)+d_{0}(U;x), \end{equation} where $d_{j}(U;x)$ are $1\times2$ matrices of symbols in $ {\mathcal F}_{K,0}[r]$, for $j=0,1,2$. \end{lemma} \begin{proof} By the paralinearization formula of Bony, we know that \begin{equation}\label{finalmentesiparalinearizza2} f(U,U_{x},U_{xx})=T_{D_U f} U+T_{D_{U_{x}}f} U_x+ T_{D_{U_{xx}}f}U_{xx}+R_0(U)U, \end{equation} where $R_0(U)$ satisfies estimates \eqref{porto20} and where \begin{equation*} \begin{aligned} &T_{D_U f} U=\frac{1}{2\pi}\int e^{{\rm i}(x-y)\xi} \chi(\langle\xi\rangle^{-1} D)[c_U(U;x,\xi)]U(y)dyd\xi,\\ &T_{D_{U_x} f} U_{x}=\frac{1}{2\pi}\int e^{{\rm i}(x-y)\xi} \chi(\langle\xi\rangle^{-1} D)[c_{U_x}(U;x,\xi)]U(y)dyd\xi,\\ &T_{D_{U_{xx}} f}U_{xx}=\frac{1}{2\pi}\int e^{{\rm i}(x-y)\xi} \chi(\langle\xi\rangle^{-1} D)[c_{U_{xx}}(U;x,\xi)]U(y)dyd\xi, \end{aligned}\end{equation*} with \begin{equation}\label{quaderno} \begin{aligned} &c_{U}(U;x,\xi)= D_U f,\\ &c_{U_x}(U;x,\xi)=D_{U_x} f({\rm i}\xi),\\ &c_{U_{xx}}(U;x,\xi)=D_{U_{xx}} f({\rm i}\xi)^2, \end{aligned}\end{equation} for some $\chi\in C^{\infty}_0(\mathbb{R})$ with small enough support and equal to $1$ close to $0$. Using \eqref{bambola5} we define the $x$-periodic functions $b_i(U;x,\xi)$, for $i=0,1,2$, through their Fourier coefficients \begin{equation}\label{diego} \hat{b}_i(U;n,\xi):= \hat{c}_{U_i}(U;n,\xi-n/2), \end{equation} where $U_{i}:=\partial_{x}^{i}U$.
In the same way we define the function $d_{i}(U;x,\xi)$, for $i=0,1,2$, as \begin{equation}\label{armando} \hat{d}_i(U;n,\xi):= \chi\tonde{n\langle\xi-n/2\rangle^{-1}} \hat{c}_{U_i}(U;n,\xi-n/2). \end{equation} We have that $T_{D_U f} U={\rm{Op}}^{W}(d_0(U;x,\xi))U$. We observe the following \begin{equation}\label{mara} \hat{d}_0(U;n,\xi)=\chi\tonde{n\langle\xi\rangle^{-1}}\widehat{D_Uf}(n)+\tonde{\chi\tonde{n\langle\xi-n/2\rangle^{-1}}-\chi\tonde{n\langle\xi\rangle^{-1}}}\widehat{D_Uf}(n), \end{equation} therefore, if the support of $\chi$ is small enough, thanks to Lemma \ref{equiv} we obtain \begin{equation}\label{ciaone} T_{D_U f} U={\rm Op}^{\mathcal{B}W}(b_0(U;x,\xi))U+R_{1}(U)U, \end{equation} for some smoothing remainder $R_1(U)$. Reasoning in the same way we get \begin{equation}\label{ciaone1} \begin{aligned} & T_{D_{U_x} f} U_{x}={\rm Op}^{\mathcal{B}W}\big(b_1(U;x,\xi)\big)U+R_{2}(U)U,\\ & T_{D_{U_{xx}} f} U_{xx}={\rm Op}^{\mathcal{B}W}\big(b_2(U;x,\xi)\big)U+R_{3}(U)U. \end{aligned} \end{equation} The lemma is proved by defining $Q(U)=\sum_{k=0}^{3}R_{k}(U)$ and $q(U;x,\xi)=b_2(U;x,\xi)+b_1(U;x,\xi)+b_0(U;x,\xi)$. Note that the symbol $q$ has the form \eqref{PPPPPP1} by \eqref{quaderno} and formula \eqref{bambola5}. \end{proof} We have the following proposition. \begin{prop}[\bf{Paralinearization of the system}]\label{montero} There exist a matrix $A(U;x,\xi)$ in $\Gamma^2_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ and a smoothing operator $R$ in ${\mathcal R}^{-\rho}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C})$, for any $K, r>0$ and $\rho\geq0$, such that the system \eqref{sistemaNLS} is equivalent to \begin{equation}\label{6.666para} \partial_t U:={\rm i} E\Big[\Lambda U +{\rm Op}^{\mathcal{B}W}(A(U;x,\xi))[U] +R(U)[U]\Big], \end{equation} on the subspace $\mathcal{U}$ (see \eqref{Hcic} and Def. \ref{vectorNLS}) and where $\Lambda$ is defined in \eqref{DEFlambda} and \eqref{NLS1000}.
Moreover the operator $R(U)[\cdot]$ has the form \eqref{vinello} and the matrix $A$ has the form \eqref{prodotto}, i.e. \begin{equation}\label{matriceA} A(U;x,\xi) :=\left( \begin{matrix} a(U;x,\xi) & b(U;x,\xi)\\ \overline{b(U;x,-\xi)} & \overline{a(U;x,-\xi)} \end{matrix} \right) \in \Gamma^{2}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C}), \end{equation} with $a$, $b$ in $\Gamma^{2}_{K,0}[r]$. In particular we have that \begin{equation}\label{PPPPA} A(U;x,\xi)=A_{2}(U;x)({\rm i}\xi)^{2}+A_{1}(U;x)({\rm i}\xi)+A_0(U;x), \quad A_{i}\in {\mathcal F}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C}), \;\; i=0,1,2. \end{equation} \end{prop} \begin{proof} The functions $f_{1},f_{2}$ in \eqref{sistemaNLS} satisfy the hypotheses of Lemma \ref{paralinearizza} for any $r>0$. Hence the result follows by setting $q(U;x,\xi)=:(a(U;x,\xi),b(U;x,\xi))$. \end{proof} In the following we study some properties of the system in \eqref{6.666para}. We first prove some lemmata which translate the Hamiltonian Hyp. \ref{hyp1}, the parity-preserving Hyp. \ref{hyp2} and the global ellipticity Hyp. \ref{hyp3} into the paradifferential setting. \begin{lemma}[{\bf Hamiltonian structure}]\label{struttura-ham-para} Assume that $f$ in \eqref{NLS} satisfies Hypothesis \ref{hyp1}. Consider the matrix $A(U;x,\xi)$ in \eqref{matriceA} given by Proposition \ref{montero}. Then the term \[ A_{2}(U;x)({\rm i}\xi)^{2}+A_{1}(U;x)({\rm i}\xi) \] in \eqref{PPPPA} satisfies conditions \eqref{quanti801}. More explicitly one has \begin{equation}\label{matriceAA} A_{2}(U;x) :=\left( \begin{matrix} a_{2}(U;x) & b_2(U;x)\\ \overline{b_2(U;x)} & {a_2(U;x)} \end{matrix} \right), \quad A_{1}(U;x) :=\left( \begin{matrix} a_{1}(U;x) & 0\\ 0 & \overline{a_1(U;x)} \end{matrix} \right), \end{equation} with $a_{2},a_1,b_2\in {\mathcal F}_{K,0}[r]$ and $a_{2}(U;x)\in \mathds{R}$.
\end{lemma} \begin{proof} Recalling the notation introduced in \eqref{notazione} we shall write \begin{equation}\label{derivate2} \partial_{\partial_{x}^{i}u}f:=\partial_{z_{i+1}^{+}}f_1, \quad \partial_{\overline{\partial_{x}^{i}u}}f:=\partial_{z_{i+1}^{-}}f_1, \quad i=0,1,2, \end{equation} when restricted to the real subspace $\mathcal{U}$ (see \eqref{Hcic}). Using conditions \eqref{sistNLS1}, \eqref{sistNLS2} and \eqref{derivate} one has that \begin{equation}\label{computer} \begin{aligned} &\left(\begin{matrix} f(u,u_x,u_{xx}) \\ \overline{f(u,u_x,u_{xx}) } \end{matrix} \right)= \left( \begin{matrix} f_{1}(U,U_x,U_{xx})\\ {f_2(U,U_x,U_{xx})} \end{matrix} \right)\\ &\qquad ={\rm Op}^{\mathcal{B}}\left[\left( \begin{matrix} \partial_{u_{xx}}f & \partial_{\bar{u}_{xx}}f \\ \overline{\partial_{\bar{u}_{xx}}f} & \overline{\partial_{u_{xx}}f} \end{matrix}\right)({\rm i}\xi)^{2}\right]U+ {\rm Op}^{\mathcal{B}}\left[\left( \begin{matrix} \partial_{u_x}f & \partial_{\bar{u}_x}f \\ \overline{\partial_{\bar{u}_x}f} & \overline{\partial_{u_x}f} \end{matrix}\right)({\rm i}\xi)\right]U+R(U)[U] \end{aligned} \end{equation} where $R(U)$ belongs to ${\mathcal R}^{0}_{K,0}[r]$. By Hypothesis \ref{hyp1} we have that \begin{equation}\label{5} \begin{aligned}&\partial_{u_{xx}}f= -\partial_{u_{x}\bar{u}_{x}}F,\\ &\partial_{\bar{u}_{xx}}f=-\partial_{\bar{u}_{x}\bar{u}_{x}}F,\\ &\partial_{u_{x}}f=- \frac{\rm d}{{\rm d}x}\left[\partial_{u_{x}\bar{u}_{x}}F\right]-\partial_{u\bar{u}_{x}}F +\partial_{u_{x}\bar{u}}F,\\ &\partial_{\bar{u}_{x}}f=- \frac{\rm d}{{\rm d}x}\left[\partial_{\bar{u}_{x}\bar{u}_{x}}F\right]. \end{aligned} \end{equation} We now pass to the Weyl quantization in the following way. Set \[ c(x,\xi)=\partial_{u_{xx}}f(x)({\rm i}\xi)^{2}+\partial_{u_{x}}f(x)({\rm i}\xi). 
\] Passing to the Fourier side we have that \[ \widehat{c}(j,\xi-\frac{j}{2})=\widehat{(\partial_{u_{xx}}f)}(j)({\rm i}\xi)^{2}+\Big[\widehat{(\partial_{u_{x}}f)}(j)-({\rm i} j)\widehat{(\partial_{u_{xx}}f)}(j)\Big]({\rm i}\xi) +\Big[ \frac{({\rm i} j)^{2}}{4}\widehat{(\partial_{u_{xx}}f)}(j)-\frac{({\rm i} j)}{2}\widehat{(\partial_{u_{x}}f)}(j) \Big], \] therefore by using formula \eqref{bambola5} we have that ${\rm Op}^{\mathcal{B}}(c(x,\xi))={\rm Op}^{\mathcal{B}W}(a(x,\xi))$, where \[ a(x,\xi)=\partial_{u_{xx}}f(x)({\rm i}\xi)^{2}+[\partial_{u_{x}}f(x)-\frac{{\rm d}}{{\rm d}x}(\partial_{u_{xx}}f)]({\rm i}\xi)+\frac{1}{4}\frac{{\rm d}^2}{{\rm d}x^2}(\partial_{u_{xx}}f)-\frac{1}{2}\frac{{\rm d}}{{\rm d}x}(\partial_{u_x}f). \] Using the relations in \eqref{5} we obtain a matrix $A$ as in \eqref{matriceAA}, and in particular we have \begin{equation}\label{quantiselfi} a_{2}(U;x)= -\partial_{u_{x}\bar{u}_{x}}F, \quad a_{1}(U;x)=-\partial_{u\bar{u}_{x}}F +\partial_{u_{x}\bar{u}}F, \quad b_{2}(U;x)=-\partial_{\bar{u}_{x}\bar{u}_{x}}F. \end{equation} Since $F$ is real, $a_{2}$ is real and $a_{1}$ is purely imaginary. This implies conditions \eqref{quanti801}. \end{proof} \begin{lemma}[{\bf Parity preserving structure}]\label{struttura-rev-para} Assume that $f$ in \eqref{NLS} satisfies Hypothesis \ref{hyp2}. Consider the matrix $A(U;x,\xi)$ in \eqref{matriceA} given by Proposition \ref{montero}.
One has that $A(U;x,\xi)$ has the form \eqref{PPPPA} where \begin{equation}\label{matriceAAA} \begin{aligned} &A_{2}(U;x) :=\left( \begin{matrix} a_{2}(U;x) & b_2(U;x)\\ \overline{b_2(U;x)} & {a_2(U;x)} \end{matrix} \right), \quad A_{1}(U;x) :=\left( \begin{matrix} a_{1}(U;x) & b_{1}(U;x)\\ \overline{b_{1}(U;x)} & \overline{a_1(U;x)} \end{matrix} \right),\\ & A_{0}(U;x) :=\left( \begin{matrix} a_{0}(U;x) & b_0(U;x)\\ \overline{b_0(U;x)} & \overline{a_0(U;x)} \end{matrix}\right), \end{aligned} \end{equation} with $a_{2},b_{2},a_1,b_1, a_0, b_0\in {\mathcal F}_{K,0}[r]$ such that, for $U$ even in $x$, the following holds: \begin{subequations}\label{simbolirev} \begin{align} & a_{2}(U;x)=a_{2}(U;-x), \quad b_{2}(U;x)=b_{2}(U;-x),\\ & a_{1}(U;x)=-a_{1}(U;-x), \quad b_{1}(U;x)=-b_{1}(U;-x),\\ & a_{0}(U;x)=a_{0}(U;-x), \quad b_{0}(U;x)=b_{0}(U;-x), \quad U\in {\bf H}^{s}_{e}, \end{align} \end{subequations} and \begin{equation}\label{simbolirev2} a_{2}(U;x)\in \mathds{R}. \end{equation} The matrix $R(U)$ in \eqref{6.666para} is parity preserving according to Definition \ref{revmap}. \end{lemma} \begin{proof} Using the same notation introduced in the proof of Lemma \ref{struttura-ham-para} (recall \eqref{derivate}) we have that formula \eqref{computer} holds. Under the Hypothesis \ref{hyp2} one has that the functions $\partial_u f, \partial_{\overline{u}}f, \partial_{u_{xx}}f, \partial_{\bar{u}_{xx}}f$ are \emph{even} in $x$ while $\partial_{u_{x}}f, \partial_{\bar{u}_{x}}f$ are \emph{odd} in $x$. 
Passing to the Weyl quantization by formula \eqref{bambola5} we get \begin{equation}\label{quantiselfi2} \begin{aligned} &a_{2}(U;x)= \partial_{u_{xx}}f, \\ &a_{1}(U;x)=\partial_{u_{x}}f-\partial_{x}(\partial_{u_{xx}}f), \\ & a_0(U;x)=\partial_{u}f+\frac14 \partial_x^2(\partial_{u_{xx}}f)-\frac12\partial_x(\partial_{u_x}f), \end{aligned} \begin{aligned} &b_{2}(U;x)=\partial_{\bar{u}_{xx}}f,\\ &b_{1}(U;x)=\partial_{\bar{u}_{x}}f-\partial_{x}(\partial_{\bar{u}_{xx}}f),\\ &b_0(U;x)=\partial_{\bar{u}}f+\frac14 \partial_x^2(\partial_{\bar{u}_{xx}}f)-\frac12\partial_x(\partial_{\bar{u}_x}f), \end{aligned} \end{equation} which imply conditions \eqref{simbolirev}, while \eqref{simbolirev2} is implied by item $2$ of Hypothesis \ref{hyp2}. The term $R$ is parity preserving by difference. \end{proof} \begin{lemma}[\bf Global ellipticity]\label{simboli-ellittici} Assume that $f$ in \eqref{NLS} satisfies Hyp. \ref{hyp1} (respectively Hyp. \ref{hyp2}). If $f$ satisfies also Hyp. \ref{hyp3} then the matrix $A_2(U;x)$ in \eqref{matriceAA} (resp. in \eqref{matriceAAA}) is such that \begin{equation}\label{determinante} \begin{aligned} & 1+a_{2}(U;x)\geq \mathtt{c_1},\\ & (1+a_{2}(U;x))^{2}-|b_{2}(U;x)|^{2}\geq \mathtt{c_2}>0, \end{aligned} \end{equation} where $\mathtt{c_1}$ and $\mathtt{c_2}$ are the constants given in \eqref{constraint} and \eqref{constraint2}. \end{lemma} \begin{proof} It follows from \eqref{quantiselfi} in the case of Hyp. \ref{hyp1} and from \eqref{quantiselfi2} in the case of Hyp. \ref{hyp2}. \end{proof} \begin{lemma}[{\bf Lipschitz estimates}]\label{stimelip-dei-simboli} Fix $r>0$, $K>0$ and consider the matrices $A$ and $R$ given in Proposition \ref{montero}. Then there exists $s_0>0$ such that for any $s\geq s_0$ the following holds true.
For any $U,V\in C_{*\mathds{R}}^{K}(I;{\bf{H}}^{s})\cap B^K_{s_0}(I,r)$ there are constants $C_1>0$ and $C_2>0$, depending on $s$, $\|U\|_{K,s_0}$ and $\|V\|_{K,s_0}$, such that for any $H\in C_{*\mathds{R}}^{K}(I;{\bf{H}}^{s})$ one has \begin{equation}\label{nave77} \|{\rm Op}^{\mathcal{B}W}(A(U;x,\xi))[H]-{\rm Op}^{\mathcal{B}W}(A(V;x,\xi))[H]\|_{{K},{s-2}}\leq C_{1} \|H\|_{{K},{s}}\|U-V\|_{{K},{s_0}}, \end{equation} \begin{equation}\label{nave101} \|R(U)[U]-R(V)[V]\|_{{K},{s+\rho}}\leq C_2 (\| U\|_{{K},{s}}+\| V\|_{{K},{s}})\|U-V\|_{{K},s}, \end{equation} for any $\rho\geq0$. \end{lemma} \begin{proof} We prove bound \eqref{nave77} on each component of the matrix $A$ in \eqref{matriceA} in the case that $f$ satisfies Hyp. \ref{hyp2}. The Hamiltonian case of Hyp. \ref{hyp1} follows by using the same arguments. From the proof of Lemma \ref{struttura-rev-para} we know that the symbol $a(U;x,\xi)$ of the matrix in \eqref{matriceA} is such that $a(U;x,\xi)=a_{2}(U;x)({\rm i}\xi)^{2}+a_{1}(U;x)({\rm i}\xi)+a_{0}(U;x)$, where $a_{i}(U;x)$ for $i=0,1,2$ are given in \eqref{quantiselfi2}. By Remark \ref{ronaldo10} there exists $s_0>0$ such that for any $s\geq s_0$ one has \begin{equation}\label{vialli} \|{\rm Op}^{\mathcal{B}W}\big((a_{2}(U;x)-a_{2}(V;x))({\rm i} \xi)^{2}\big)h\|_{K,s-2}\leq C \sup_{\xi}\langle\xi\rangle^{-2}\|(a_{2}(U;x)-a_{2}(V;x))({\rm i} \xi)^{2}\|_{K,s_0}\|h\|_{K,s}, \end{equation} with $C$ depending on $s,s_0$.
Let $U,V\in C_{*\mathds{R}}^{K}(I;{\bf{H}}^{s})\cap B^K_{s_0+2}(I,r)$. By the mean value theorem, recalling the relations in \eqref{derivate}, \eqref{notazione} and \eqref{derivate2}, one has that \begin{equation}\label{peruzzi} \begin{aligned} \big(a_{2}(U;x)-a_{2}(V;x)\big)({\rm i}\xi)^{2}&=\big((\partial_{u_{xx}}f_1)(U,U_x,U_{xx})- (\partial_{u_{xx}}f_{1})(V,V_x,V_{xx})\big)({\rm i}\xi)^{2}\\ &=(\partial_U\partial_{u_{xx}}f_1)(W^{(0)},U_x,U_{xx})(U-V)({\rm i}\xi)^{2}+\\ &+ (\partial_{U_{x}}\partial_{u_{xx}}f_1)(V,W^{(1)},U_{xx})(U_x-V_x)({\rm i}\xi)^{2}+\\ &+(\partial_{U_{xx}}\partial_{u_{xx}}f_1)(V,V_x,W^{(2)})(U_{xx}-V_{xx})({\rm i}\xi)^{2} \end{aligned} \end{equation} where $W^{(j)}=\partial_{x}^{j}V+t_{j}(\partial_{x}^{j}U-\partial_{x}^{j}V)$, for some $t_{j}\in [0,1]$ and $j=0,1,2$. Hence, for instance, the first summand of \eqref{peruzzi} can be estimated as follows \begin{equation} \begin{aligned} \sup_{\xi}\langle\xi\rangle^{-2}&\|(\partial_U\partial_{u_{xx}}f_1)(W^{(0)},U_x,U_{xx})(U-V)({\rm i}\xi)^{2}\|_{K,s_0}\\ &\leq C_1\|U-V\|_{K,s_0}\sup_{U,V\in B_{s_0+2}^{K}(I,r)}\|(\partial_U\partial_{u_{xx}}f_1)(W^{(0)},U_x,U_{xx})\|_{K,s_0}\\ &\leq C_2 \|U-V\|_{K,s_0}, \end{aligned} \end{equation} where $C_1$ depends on $s_0$ and $C_2$ depends only on $s_0$ and $\|U\|_{K,s_0+2},\|V\|_{K,s_0+2}$, and where we have used Moser-type estimates on composition operators on $H^{s}$, since $f_1$ belongs to $C^{\infty}(\mathbb{C}^6;\mathbb{C})$. We refer the reader to Lemma $A.50$ of \cite{FP} for a complete statement (see also \cite{Ba2}, \cite{Moser-Pisa-66}). The other terms in the r.h.s. of \eqref{peruzzi} can be treated in the same way. Hence from \eqref{vialli} and the discussion above we have obtained \begin{equation} \|{\rm Op}^{\mathcal{B}W}\big((a_{2}(U;x)-a_{2}(V;x))({\rm i} \xi)^{2}\big)h\|_{K,s-2}\leq C \|U-V\|_{K,s_0+2} \| h\|_{K,s}, \end{equation} with $C$ depending on $s$ and $\|U\|_{K,s_0+2},\|V\|_{K,s_0+2}$.
One argues exactly as above for the lower order terms $a_1(U;x)({\rm i}\xi)$ and $a_0(U;x)$ of $a(U;x,\xi)$. In the same way one is able to prove the estimate \begin{equation} \begin{aligned} \|{\rm Op}^{\mathcal{B}W}\big((b(U;x,\xi)-b(V;x,\xi))\big)\bar{h}\|_{K,s-2}\leq C \|U-V\|_{K,s_0+2} \| \bar{h}\|_{K,s}. \end{aligned} \end{equation} Thus \eqref{nave77} is proved after renaming $s_0$ as $s_0+2$. In order to prove \eqref{nave101} we show that the operator ${\rm d}_{U}(R(U)U)[\cdot]$ belongs to the class ${\mathcal R}^{-\rho}_{K,K'}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ for any $\rho\geq 0$ (where ${\rm d}_{U}(R(U)U)[\cdot]$ denotes the differential of $R(U)[U]$ w.r.t. the variable $U$). We recall that the operator $R$ in \eqref{6.666para} is of the form \[ R(U)[\cdot]:=\left(\begin{matrix} Q(U)[\cdot] \\ \overline{Q(U)}[\cdot] \end{matrix} \right), \] where $Q(U)[\cdot]$ is the $1\times 2$ matrix of smoothing operators in \eqref{finalmentesiparalinearizza1} with $f$ given in \eqref{NLS}. We claim that ${\rm d}_{U}(Q(U)U)[\cdot]$ is a $1\times 2$ matrix of smoothing operators in ${\mathcal R}^{-\rho}_{K,0}[r]$. By Lemma \ref{paralinearizza} we know that $Q(U)[\cdot]=R_0(U)+\sum_{j=1}^{3}R_{j}(U)$, where $R_0$ is a $1\times 2$ matrix of smoothing operators coming from the Bony paralinearization formula (see \eqref{finalmentesiparalinearizza2}), while $R_{j}$, for $j=1,2,3$, are the $1\times 2$ matrices of smoothing operators in \eqref{ciaone} and \eqref{ciaone1}. One can prove the claim for the terms $R_{j}$, $j=1,2,3$, by arguing as done in the proof of \eqref{nave77}. Indeed we know the explicit paradifferential structure of these remainders.
For instance, by \eqref{diego}, \eqref{armando}, \eqref{mara} and \eqref{ciaone} we have that \begin{equation} R_{1}(U)[\cdot]:={\rm Op}^{W}\Big( k(x,\xi)\Big)[\cdot], \end{equation} where $k(x,\xi)=\sum_{j\in\mathds{Z}}\hat{k}(j,\xi)e^{{\rm i} jx}$ and \[ \hat{k}(j,\xi)=\tonde{\chi\tonde{j\langle\xi-j/2\rangle^{-1}}-\chi\tonde{j\langle\xi\rangle^{-1}}}\widehat{D_Uf}(j) \] (see formula \eqref{mara}). The remainders $R_2,R_{3}$ have similar expressions. It remains to prove the claim for the term $R_0$. Recalling \eqref{quaderno} we set \[ c(U;x,\xi):=c_{U}(U;x,\xi)+c_{U_x}(U;x,\xi)+c_{U_{xx}}(U;x,\xi). \] Using this notation, formula \eqref{finalmentesiparalinearizza2} reads \begin{equation}\label{finalmentesiparalinearizza4} f(u,u_x,u_{xx})=f_1(U,U_{x},U_{xx})={\rm Op}^{\mathcal{B}}(c(U;x,\xi))U+R_0(U)U. \end{equation} Differentiating \eqref{finalmentesiparalinearizza4} we get \begin{equation}\label{finalmentesiparalinearizza5} {\rm d}_U(f_1(U,U_{x},U_{xx}))[H]={\rm Op}^{\mathcal{B}}(c(U;x,\xi))[H]+{\rm Op}^{\mathcal{B}}(\partial_{U}c(U;x,\xi)\cdot H)[U]+{\rm d}_{U}(R_0(U)[U])[H]. \end{equation} The l.h.s. of \eqref{finalmentesiparalinearizza5} is nothing but \[ \partial_{U}f_1(U,U_{x},U_{xx})\cdot H+\partial_{U_{x}}f_1(U,U_{x},U_{xx})\cdot H_{x}+\partial_{U_{xx}}f_1(U,U_{x},U_{xx})\cdot H_{xx}=: G(U,H). \] By applying the Bony paralinearization formula to $G(U,H)$ (as a function of the six variables $U,U_{x},U_{xx}$, $H,H_{x},H_{xx}$) we get \begin{equation}\label{finalmentesiparalinearizza6} \begin{aligned} G(U,H)&= {\rm Op}^{\mathcal{B}}( \partial_{U}G(U,H))[U]+ {\rm Op}^{\mathcal{B}}( \partial_{U_{x}}G(U,H))[U_{x}]+ {\rm Op}^{\mathcal{B}}( \partial_{U_{xx}}G(U,H))[U_{xx}]\\ &+ {\rm Op}^{\mathcal{B}}( \partial_{H}G(U,H))[H]+ {\rm Op}^{\mathcal{B}}( \partial_{H_x}G(U,H))[H_{x}]+ {\rm Op}^{\mathcal{B}}( \partial_{H_{xx}}G(U,H))[H_{xx}]+ R_{4}(U)[H], \end{aligned} \end{equation} where $R_{4}(U)[\cdot]$ satisfies estimates \eqref{porto20} for any $\rho\geq 0$.
By \eqref{quaderno} and \eqref{finalmentesiparalinearizza6} we have that \eqref{finalmentesiparalinearizza5} reads \begin{equation}\label{finalmentesiparalinearizza7} {\rm d}_{U}(R_0(U)U)[H]=R_{4}(U)[H]. \end{equation} Therefore ${\rm d}_{U}(R_0(U)U)[\cdot]$ is a $1\times2$ matrix of operators in the class ${\mathcal R}^{-\rho}_{K,0}[r]$ for any $\rho\geq 0$. \end{proof} \setcounter{equation}{0} \section{Regularization}\label{descent1} We consider the system \begin{equation}\label{sistemainiziale} \begin{aligned} \partial_t V={\rm i} E\Big[\Lambda V&+{\rm Op}^{\mathcal{B}W}(A(U;x,\xi))[V]+R_1^{(0)}(U)[V]+R_2^{(0)}(U)[U]\Big], \\ & U\in B^K_{s_0}(I,r)\cap C^K_{*\mathbb{R}}(I,{\bf{H}}^{s}(\mathbb{T},\mathds{C}^2)), \end{aligned} \end{equation} for some $s_0$ large, $s\geq s_0$ and where $\Lambda$ is defined in \eqref{NLS1000}. The operators $R^{(0)}_1(U)$ and $R^{(0)}_2(U)$ are in the class ${\mathcal R}^{-\rho}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ for some $\rho\geq0$ and they have the reality preserving form \eqref{vinello}. The matrix $A(U;x,\xi)$ satisfies the following. \begin{const}\label{Matriceiniziale} The matrix $A(U;x,\xi)$ belongs to $ \Gamma^{2}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C}) $ and has the following properties: \begin{itemize} \item $A(U;x,\xi)$ is \emph{reality preserving}, i.e. has the form \eqref{prodotto}; \item the components of $A(U;x,\xi)$ have the form \begin{equation}\label{simbolidiA} \begin{aligned} & a(U;x,\xi)=a_{2}(U;x)({\rm i}\xi)^{2}+a_{1}(U;x)({\rm i}\xi),\\ & b(U;x,\xi)=b_{2}(U;x)({\rm i}\xi)^{2}+b_{1}(U;x)({\rm i}\xi), \end{aligned} \end{equation} for some $a_{i}(U;x),b_{i}(U;x)$ belonging to ${\mathcal F}_{K,0}[r]$ for $i=1,2$. \end{itemize} \end{const} In addition to Constraint \ref{Matriceiniziale} we assume that the matrix $A$ satisfies one of the following two hypotheses. \begin{hyp}[{\bf Self-adjoint}]\label{ipoipo} The operator ${\rm Op}^{\mathcal{B}W}(A(U;x,\xi))$ is self-adjoint according to Definition \ref{selfi}, i.e.
the matrix $A(U;x,\xi)$ satisfies conditions \eqref{quanti801}. \end{hyp} \begin{hyp}[{\bf Parity preserving}]\label{ipoipo2} The operator ${\rm Op}^{\mathcal{B}W}(A(U;x,\xi))$ is parity preserving according to Definition \ref{revmap}, i.e. the matrix $A(U;x,\xi)$ satisfies the conditions \begin{equation}\label{ipoipo3} A(U;x,\xi)=A(U;-x,-\xi), \qquad a_{2}(U;x)\in \mathds{R}. \end{equation} The function $P$ in \eqref{convpotential} is such that $\hat{p}(j)=\hat{p}(-j)$ for $j\in \mathds{Z}$. \end{hyp} Finally we need the following \emph{ellipticity condition}. \begin{hyp}[{\bf Ellipticity}]\label{ipoipo4} There exist $\mathtt{c}_1, \mathtt{c}_2>0$ such that the components of the matrix $A(U;x,\xi)$ satisfy the condition \begin{equation}\label{benigni} \begin{aligned} & 1+a_{2}(U;x)\geq \mathtt{c_1},\\ & (1+a_{2}(U;x))^{2}-|b_{2}(U;x)|^{2}\geq \mathtt{c_2}>0, \end{aligned} \end{equation} for any $U\in B^K_{s_0}(I,r)\cap C^K_{*\mathbb{R}}(I,{\bf{H}}^{s}(\mathbb{T},\mathds{C}^2))$. \end{hyp} The goal of this section is to transform the linear paradifferential system \eqref{sistemainiziale} into a constant-coefficient one up to a bounded remainder. The following result is the core of our analysis. \begin{theo}[{\bf Regularization}]\label{descent} Fix $K\in \mathds{N}$ with $K\geq 4$, $r>0$. Consider the system \eqref{sistemainiziale}. There exists $s_0>0$ such that for any $s\geq s_0$ the following holds. Fix $U$ in $B^K_{s_0}(I,r)\cap C^K_{*\mathbb{R}}(I,{\bf{H}}^{s}(\mathbb{T},\mathds{C}^2))$ (resp. $U\in B^K_{s_0}(I,r)\cap C^K_{*\mathbb{R}}(I,{\bf{H}}_{e}^{s}(\mathbb{T},\mathds{C}^2))$ ) and assume that the system \eqref{sistemainiziale} has the following structure: \begin{itemize} \item the operators $R_{1}^{(0)}$, $R_{2}^{(0)}$ belong to the class ${\mathcal R}^{-\rho}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C})$; \item the matrix $A(U;x,\xi)$ satisfies Constraint \ref{Matriceiniziale}; \item the matrix $A(U;x,\xi)$ satisfies Hypothesis \ref{ipoipo} (resp.
together with $P$ satisfies Hyp. \ref{ipoipo2}); \item the matrix $A(U;x,\xi)$ satisfies Hypothesis \ref{ipoipo4}. \end{itemize} Then there exists an invertible map (resp. an invertible and {parity preserving} map) $$ \Phi=\Phi(U) : C^{K-4}_{*\mathbb{R}}(I,{\bf{H}}^{s}(\mathbb{T},\mathds{C}^2))\to C^{K-4}_{*\mathbb{R}}(I,{\bf{H}}^{s}(\mathbb{T},\mathds{C}^2)), $$ with \begin{equation}\label{stimona} \|(\Phi(U))^{\pm1}V\|_{K-4,s}\leq \|V\|_{K-4,s}(1+C\|U\|_{K,s_0}), \end{equation} for a constant $C>0$ depending on $s$, $\|U\|_{K,s_0}$ and $\|P\|_{C^{1}}$, such that the following holds. There exist operators ${R}_{1}(U),{R}_2(U)$ in ${\mathcal R}^{0}_{K,4}[r]\otimes{\mathcal M}_{2}(\mathds{C})$, and a diagonal matrix $L(U)$ in $\Gamma^{2}_{K,4}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ of the form \eqref{prodotto} satisfying condition \eqref{quanti801} and independent of $x\in \mathbb{T}$, such that by setting $W=\Phi(U)V$ the system \eqref{sistemainiziale} reads \begin{equation}\label{sistemafinale} \partial_t W={\rm i} E\Big[\Lambda W+{\rm Op}^{\mathcal{B}W}(L(U;\xi))[W]+R_{1}(U)[W]+R_{2}(U)[U]\Big]. \end{equation} \end{theo} \begin{rmk} Note that, under Hypothesis \ref{ipoipo2}, if the term $R_1^{(0)}(U)[V]+R_2^{(0)}(U)[U]$ in \eqref{sistemainiziale} is \emph{parity preserving}, according to Definition \ref{revmap}, then the flow of the system \eqref{sistemainiziale} preserves the subspace of even functions. Since the map $\Phi(U)$ in Theorem \ref{descent} is \emph{parity preserving}, Lemma \ref{revmap100} implies that the flow of the system \eqref{sistemafinale} preserves the same subspace as well. \end{rmk} The proof of Theorem \ref{descent} is divided into four steps which are performed in the remaining part of the section. We first explain our strategy and set some notation.
We consider the system \eqref{sistemainiziale} \begin{equation}\label{sistemaini1} V_{t}={\mathcal L}^{(0)}(U)[V]:={\rm i} E\Big[\Lambda V+{\rm Op}^{\mathcal{B}W}(A(U;x,\xi))[V]+R_1^{(0)}(U)[V]+R_2^{(0)}(U)[U]\Big]. \end{equation} The idea is to construct several maps \[ \Phi_{i}[\cdot]:=\Phi_{i}(U)[\cdot] : C^{K-(i-1)}_{*\mathbb{R}}(I,{\bf{H}}^{s}(\mathbb{T}))\to C^{K-(i-1)}_{*\mathbb{R}}(I,{\bf{H}}^{s}(\mathbb{T})), \] for $i=1,\ldots,4$, which conjugate the system ${\mathcal L}^{(i-1)}(U)$ to ${\mathcal L}^{(i)}(U)$, with ${\mathcal L}^{(0)}(U)$ in \eqref{sistemaini1} and \begin{equation}\label{sistemaiesimo} {\mathcal L}^{(i)}(U)[\cdot]:={\rm i} E\Big[\Lambda+{\rm Op}^{\mathcal{B}W}(L^{(i)}(U;\xi))[\cdot]+{\rm Op}^{\mathcal{B}W}(A^{(i)}(U;x,\xi))[\cdot]+R_{1}^{(i)}(U)[\cdot]+R_{2}^{(i)}(U)[U]\Big], \end{equation} where $R^{(i)}_1$ and $R^{(i)}_{2}$ belong to ${\mathcal R}^{0}_{K,i}[r]\otimes{\mathcal M}_{2}(\mathds{C})$, $L^{(i)}$ belongs to $\Gamma^{2}_{K,i}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ and is moreover diagonal, self-adjoint and independent of $x\in \mathbb{T}$, and finally $A^{(i)}$ is in $\Gamma^{2}_{K,i}[r]\otimes{\mathcal M}_{2}(\mathds{C})$. As we will see, the idea is to choose $\Phi_{i}$ in such a way that $A^{(i)}$ is a matrix of symbols of order less than or equal to the order of $A^{(i-1)}$. We now prove a lemma in which we study the conjugate of the convolution operator. \begin{lemma}\label{Convocoj} Let $Q_{1},Q_{2}$ be operators in the class ${\mathcal R}^{0}_{K,K'}[r]\otimes {\mathcal M}_{2}{(\mathds{C})}$ and let $P : \mathbb{T}\to \mathds{R}$ be a continuous function. Consider the operator $\mathfrak{P}$ defined in \eqref{convototale}. Then there exists $R$ belonging to ${\mathcal R}^{0}_{K,K'}[r]\otimes {\mathcal M}_{2}{(\mathds{C})}$ such that \begin{equation}\label{convoluzionetot} (\mathds{1}+Q_{1}(U))\circ \mathfrak{P}\circ (\mathds{1}+Q_{2}(U))[\cdot] =\mathfrak{P}[\cdot]+R(U)[\cdot].
\end{equation} Moreover if $P$ is even in $x$ and the operators $Q_1(U)$ and $Q_2(U)$ are parity-preserving then the operator $R(U)$ is parity preserving according to Definition \ref{revmap}. \end{lemma} \begin{proof} By linearity it is enough to show that the terms \[ Q_{1}(U)\circ\mathfrak{P}\circ (\mathds{1}+Q_{2}(U))[h] ,\quad (\mathds{1}+Q_{1}(U))\circ\mathfrak{P} \circ Q_{2}(U)[h] , \quad Q_{1}(U)\circ\mathfrak{P}\circ Q_{2}(U)[h] \] belong to ${\mathcal R}^{0}_{K,K'}[r]\otimes {\mathcal M}_{2}{(\mathds{C})}$. Note that, for any $0\leq k\leq K-K'$, \begin{equation}\label{stimaVVV} \|\partial_{t}^{k}(P*h)\|_{H^{s-2k}}\leq C \|\partial_{t}^{k}h\|_{H^{s-2k}}, \end{equation} for some $C>0$ depending only on $\|P\|_{L^{\infty}}$. Estimate \eqref{stimaVVV} and the estimate \eqref{porto20} on $Q_1$ and $Q_{2}$ imply the claim. If $P$ is even in $x$ then the convolution operator with kernel $P$ is a parity preserving operator according to Definition \ref{revmap}. Therefore if in addition $Q_1(U)$ and $Q_2(U)$ are parity preserving, so is $R(U)$. \end{proof} \subsection{Diagonalization of the second order operator}\label{secondord} Consider the system \eqref{sistemainiziale} and assume the Hypotheses of Theorem \ref{descent}. The matrix $A(U;x,\xi)$ satisfies conditions \eqref{simbolidiA}, therefore it can be written as \begin{equation}\label{espansionediA} A(U;x,\xi):=A_{2}(U;x)({\rm i} \xi)^{2}+A_{1}(U;x)({\rm i} \xi), \end{equation} with $A_{i}(U;x)$ belonging to ${\mathcal F}_{K,0}[r]\otimes {\mathcal M}_{2}(\mathds{C})$ and satisfying either Hyp. \ref{ipoipo} or Hyp. \ref{ipoipo2}. In this section, by exploiting the structure of the matrix $A_{2}(U;x)$, we show that it is possible to diagonalize the matrix $E(\mathds{1}+A_{2})$ through a change of coordinates which is a multiplication operator. We have the following lemma.
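Note that, recalling $E=\left(\begin{smallmatrix}1 & 0\\ 0 & -1\end{smallmatrix}\right)$ and that $A_{2}(U;x)$ has real diagonal entries $a_{2}(U;x)$ and off-diagonal entries $b_{2}(U;x)$, $\overline{b_{2}(U;x)}$, the matrix $E(\mathds{1}+A_{2}(U;x))$ has zero trace and determinant $|b_{2}(U;x)|^{2}-(1+a_{2}(U;x))^{2}$; its eigenvalues are therefore \[ \pm\lambda(U;x), \qquad \lambda(U;x):=\sqrt{(1+a_{2}(U;x))^{2}-|b_{2}(U;x)|^{2}}, \] and the ellipticity condition \eqref{benigni} guarantees that $\lambda(U;x)\geq \sqrt{\mathtt{c}_{2}}>0$, so that the eigenvalues are real, distinct and bounded away from zero, which makes a smooth pointwise diagonalization possible.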
\begin{lemma}\label{step1}
Under the Hypotheses of Theorem \ref{descent} there exists $s_0>0$ such that for any $s\geq s_0$ there exists an invertible map (resp. an invertible and parity preserving map)
$$
\Phi_{1}=\Phi_{1}(U) : C^{K}_{*\mathbb{R}}(I,{\bf{H}}^{s})\to C^{K}_{*\mathbb{R}}(I,{\bf{H}}^{s}),
$$
with
\begin{equation}\label{stimona1}
\|(\Phi_1(U))^{\pm1}V\|_{K,s}\leq \|V\|_{K,s}(1+C\|U\|_{K,s_0}),
\end{equation}
where $C$ depends only on $s$ and $\|U\|_{K,s_0}$, such that the following holds. There exists a matrix $A^{(1)}(U;x,\xi)$ satisfying Constraint \ref{Matriceiniziale} and Hyp. \ref{ipoipo} (resp. Hyp. \ref{ipoipo2}) of the form
\begin{equation}\label{gorilla}
\begin{aligned}
A^{(1)}(U;x,\xi)&:=A_{2}^{(1)}(U;x)({\rm i} \xi)^{2}+A_{1}^{(1)}(U;x)({\rm i}\xi),\\
A_{2}^{(1)}(U;x)&:= \left(\begin{matrix} {a}_{2}^{(1)}(U;x)& 0\\ 0 & {{a_{2}^{(1)}(U;x)}} \end{matrix} \right)\in {\mathcal F}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C}),\\
A^{(1)}_{ 1}(U;x)&:= \left(\begin{matrix} {a}_{1}^{(1)}(U;x) & {b}_{1}^{(1)}(U;x)\\ {\overline{b_{1}^{(1)}(U;x)}} & {\overline{a_{1}^{(1)}(U;x)}} \end{matrix} \right) \in {\mathcal F}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})
\end{aligned}
\end{equation}
and operators ${R}^{(1)}_{1}(U), \, {R}^{(1)}_2(U)$ in ${\mathcal R}^{0}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ such that, by setting $V_1=\Phi_1(U)V$, the system \eqref{sistemainiziale} reads
\begin{equation}\label{sistemafinale1}
\partial_t V_{1}={\rm i} E\Big[\Lambda V_1+{\rm Op}^{\mathcal{B}W}(A^{(1)}(U;x,\xi))[V_{1}]+R^{(1)}_{1}(U)[V_{1}]+R^{(1)}_{2}(U)[U]\Big].
\end{equation}
Moreover there exists a constant $\mathtt{k}>0$ such that
\begin{equation}\label{elly2}
1+a_2^{(1)}(U;x)\geq \mathtt{k}.
\end{equation}
\end{lemma}
\begin{proof}
Let us consider a symbol $z(U;x)$ in the class ${\mathcal F}_{K,0}[r]$ and set
\begin{equation}\label{generatore}
Z(U;x):=\left(\begin{matrix}0 & z(U;x)\\ \overline{z(U;x)} & 0\end{matrix}\right)\in {\mathcal F}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C}).
\end{equation}
Let $\Phi_{1}^{\tau}(U)[\cdot]$ be the solution at time $\tau\in[0,1]$ of the system
\begin{equation}\label{generatore2}
\left\{\begin{aligned}
&\partial_{\tau}\Phi_{1}^{\tau}(U)[\cdot]={\rm Op}^{\mathcal{B}W}(Z(U;x))\Phi_{1}^{\tau}(U)[\cdot],\\
&\Phi_1^{0}(U)[\cdot]=\mathds{1}[\cdot].
\end{aligned}\right.
\end{equation}
Since ${\rm Op}^{\mathcal{B}W}(Z(U;x))$ is a bounded operator on ${\bf{H}}^{s}$, by the standard theory of ODEs on Banach spaces the flow $\Phi_1^{\tau}$ is well defined; moreover, by Proposition \ref{boni2}, one gets
\begin{equation}\label{eneest}
\begin{aligned}
\partial_{\tau}\|\Phi_{1}^{\tau}(U)V\|^{2}_{{\bf{H}}^{s}}&\leq 2\|\Phi_{1}^{\tau}(U)V\|_{{\bf{H}}^{s}}\|{\rm Op}^{\mathcal{B}W}(Z(U;x))\Phi_{1}^{\tau}(U)V\|_{{\bf{H}}^{s}}\\
&\leq \|\Phi_{1}^{\tau}(U)V\|^{2}_{{\bf{H}}^{s}}C\|U\|_{{\bf{H}}^{s_0}},
\end{aligned}
\end{equation}
hence one obtains
\begin{equation}
\|\Phi_{1}^{\tau}(U)[V]\|_{{\bf{H}}^{s}}\leq \|V\|_{{\bf{H}}^s}(1+C\|U\|_{{\bf{H}}^{s_0}}),
\end{equation}
where $C>0$ depends only on $\|U\|_{{\bf{H}}^{s_0}}$. The latter estimate implies \eqref{stimona1} for $K=0$. By differentiating in $t$ the equation \eqref{generatore2} we note that
\begin{equation}
\partial_{\tau}\partial_{t}\Phi_{1}^{\tau}(U)[\cdot]={\rm Op}^{\mathcal{B}W}(Z(U;x))\partial_{t}\Phi_{1}^{\tau}(U)[\cdot]+{\rm Op}^{\mathcal{B}W}(\partial_{t}Z(U;x))\Phi_{1}^{\tau}(U)[\cdot].
\end{equation}
Now note that, since $Z$ belongs to the class ${\mathcal F}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C})$, one has that $\partial_{t}Z$ is in ${\mathcal F}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$.
By performing an energy type estimate as in \eqref{eneest} one obtains
\[
\|\Phi_{1}^{\tau}(U)[V]\|_{C^{1}{\bf{H}}^{s}}\leq \|V\|_{C^{1}{\bf{H}}^s}(1+C\|U\|_{C^{1}{\bf{H}}^{s_0}}),
\]
which implies \eqref{stimona1} with $K=1$. Iterating the reasoning above $K$ times one gets the bound \eqref{stimona1}. By using Corollary \ref{esponanziale} one gets that
\begin{equation}\label{exp}
\Phi_{1}^{\tau}(U)[\cdot]=\exp\{\tau{\rm Op}^{\mathcal{B}W}(Z(U;x))\}[\cdot]={\rm Op}^{\mathcal{B}W}(\exp\{\tau Z(U;x)\})[\cdot]+Q_1^{\tau}(U)[\cdot],
\end{equation}
with $Q_1^{\tau}$ belonging to ${\mathcal R}^{-\rho}_{K,0}[r]\otimes{\mathcal M}_{2}({\mathds{C}})$ for any $\rho>0$ and any $\tau\in[0,1]$. We now set $\Phi_{1}(U)[\cdot]:=\Phi_{1}^{\tau}(U)[\cdot]_{|_{\tau=1}}$. In particular we have
\begin{equation}\label{iperbolici}
\begin{aligned}
\Phi_1(U)[\cdot]&={\rm Op}^{\mathcal{B}W} (C(U;x))[\cdot]+Q_{1}^{1}(U)[\cdot],\\
C(U;x):=\exp\{Z(U;x)\}&=\left( \begin{matrix}c_1(U;x)& c_{2}(U;x) \\ \overline{c_{2}(U;x)} & c_{1}(U;x) \end{matrix} \right), \quad C(U;x)-\mathds{1}\in {\mathcal F}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C}),
\end{aligned}\end{equation}
where
\begin{equation}\label{iperbolici2}
c_1(U;x):=\cosh(|z(U;x)|) , \qquad c_{2}(U;x):=\frac{z(U;x)}{|z(U;x)|}\sinh(|z(U;x)|).
\end{equation}
Note that the function $c_2(U;x)$ above is not singular; indeed
\begin{equation*}
\begin{aligned}
c_{2}(U;x)&=\frac{z(U;x)}{|z(U;x)|}\sinh(|z(U;x)|)=\frac{z(U;x)}{|z(U;x)|}\sum_{k=0}^{\infty}\frac{|z(U;x)|^{2k+1}}{(2k+1)!}\\
&=z(U;x)\sum_{k=0}^{\infty}\frac{\big(z(U;x)\overline{z(U;x)}\big)^k}{(2k+1)!}.
\end{aligned}
\end{equation*}
We note moreover that for any $x\in \mathbb{T}$ one has $\det(C(U;x))=1$, hence its inverse $C^{-1}(U;x)$ is well defined.
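The determinant identity can be checked directly from \eqref{iperbolici2}, using the hyperbolic relation $\cosh^{2}-\sinh^{2}=1$:
\begin{equation*}
\det(C(U;x))=c_{1}(U;x)^{2}-c_{2}(U;x)\overline{c_{2}(U;x)}=\cosh^{2}(|z(U;x)|)-\sinh^{2}(|z(U;x)|)=1.
\end{equation*}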
In particular, by Propositions \ref{componiamoilmondo} and \ref{componiamoilmare}, we note that \begin{equation}\label{inversaC} {\rm Op}^{\mathcal{B}W}(C^{-1}(U;x))\circ \Phi_1=\mathds{1}+\tilde{Q}({U}),\qquad \tilde{Q}\in {\mathcal R}^{-\rho}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C}), \end{equation} for any $\rho>0$, since the expansion of $(C^{-1}(U;x)\sharp C(U;x))_{\rho}$ (see formula \eqref{sharp}) is equal to $C^{-1}(U;x)C(U;x)$ for any $\rho$. This implies that \begin{equation}\label{iperbolici3} (\Phi_1(U))^{-1}[\cdot]={\rm Op}^{\mathcal{B}W}( C^{-1}(U;x))[\cdot]+Q_{2}(U)[\cdot], \end{equation} for some $Q_{2}(U)$ in the class ${\mathcal R}^{-\rho}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ for any $\rho>0$. By setting $V_{1}:=\Phi_1(U)[V]$ the system \eqref{sistemainiziale} in the new coordinates reads \begin{equation}\label{expexpexp} \begin{aligned} (V_{1})_{t}&=\Phi_1(U)\Big( {\rm i} E(\Lambda+{\rm Op}^{\mathcal{B}W}(A(U;x,\xi)) )\Phi_1^{-1}(U) \Big)V_{1}+(\partial_{t}\Phi_{1}(U))\Phi_1^{-1}(U)V_{1}+\\ &+\Phi_{1}(U)({\rm i} E)R_1^{(0)}(U)\Phi_1^{-1}(U)[V_1]+ \Phi_1(U)({\rm i} E)R_2^{(0)}(U)[U]\\ &= {\rm i} \Phi_1(U)\Big[E\mathfrak{P}[\Phi_1^{-1}(U)[V_1]] \Big]+ {\rm i}\Phi_1(U) E{\rm Op}^{\mathcal{B}W}\big((\mathds{1}+A_{2}(U;x))({\rm i}\xi)^{2} \big)\Phi_1^{-1}(U)[V_1]+\\ &+{\rm i}\Phi_1(U) E{\rm Op}^{\mathcal{B}W}\big(A_{1}(U;x)({\rm i}\xi) \big)\Phi_1^{-1}(U)[V_1] +(\partial_{t}\Phi_{1})\Phi_1^{-1}(U)V_{1}+\\ &+\Phi_{1}(U)({\rm i} E)R_1^{(0)}(U)\Phi_1^{-1}(U)[V_1]+ \Phi_1(U)({\rm i} E)R_2^{(0)}(U)[U], \end{aligned} \end{equation} where $\mathfrak{P}$ is defined in \eqref{convototale}. We have that \[ \Phi_1(U)\circ E=E\circ{\rm Op}^{\mathcal{B}W}\left(\begin{matrix} c_1(U;x)& -c_2(U;x) \\ - \overline{c_2(U;x)} & c_1(U;x) \end{matrix} \right), \] up to remainders in ${\mathcal R}^{-\rho}_{K,0}[r]\otimes{\mathcal M}_{2}({\mathds{C}})$, where $c_{i}(U;x)$, $i=1,2$, are defined in \eqref{iperbolici2}. 
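The matrix identity behind this relation can be checked directly: since $E$ is the diagonal matrix ${\rm diag}(1,-1)$, one has
\begin{equation*}
C(U;x)E=\left(\begin{matrix} c_1(U;x) & c_2(U;x)\\ \overline{c_{2}(U;x)} & c_{1}(U;x)\end{matrix}\right)\left(\begin{matrix} 1 & 0\\ 0 & -1 \end{matrix}\right)=E\left(\begin{matrix} c_{1}(U;x) & -c_{2}(U;x)\\ -\overline{c_{2}(U;x)} & c_{1}(U;x)\end{matrix}\right),
\end{equation*}
and the smoothing remainders come from the composition of the corresponding paradifferential operators.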
Since the matrix $C(U;x)-\mathds{1} \in {\mathcal F}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ (see \eqref{iperbolici}), by Lemma \ref{Convocoj} one has that
\[
\Phi_1(U)\circ E \mathfrak{P}\circ\Phi_1^{-1}(U)[V_1] =E \mathfrak{P}[V_1]+Q_{3}(U)[V_1],
\]
where $Q_3(U)$ belongs to ${\mathcal R}^{0}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C})$. The term $(\partial_{t}\Phi_1)$ is ${\rm Op}^{\mathcal{B}W}(\partial_{t}C(U;x))$ plus a remainder in the class ${\mathcal R}^{0}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$. Note that, since $(C(U;x)-\mathds{1})$ belongs to the class $\Gamma^{0}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C})$, one has that $\partial_{t}C(U;x)$ is in $\Gamma^{0}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$. Therefore, by the composition Propositions \ref{componiamoilmondo} and \ref{componiamoilmare}, Remark \ref{inclusione-nei-resti}, and the discussion above, there exist operators $R^{(1)}_{1},R^{(1)}_{2}$ belonging to ${\mathcal R}^{0}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ such that
\begin{equation}\label{nuovosistema}
\begin{aligned}
(V_{1})_{t}&={\rm i} E \mathfrak{P}V_1+{\rm i} {\rm Op}^{\mathcal{B}W}\big(C(U;x)E(\mathds{1}+A_{2}(U;x))C^{-1}(U;x)({\rm i}\xi)^{2}\big)V_{1}+ {\rm i} E{\rm Op}^{\mathcal{B}W}(A_{1}^{(1)}(U;x) ({\rm i}\xi))V_{1}+\\
&\quad+{\rm i} E\Big(R_{1}^{(1)}(U)[V_{1}]+R_{2}^{(1)}(U)[U]\Big),
\end{aligned}
\end{equation}
where
\begin{equation}\label{primoordine}
\begin{aligned}
A_{1}^{(1)}(U;x)&:=EC(U;x)E(\mathds{1}+A_{2}(U;x))\partial_{x}C^{-1}(U;x)-(\partial_{x}(C)(U;x))E(\mathds{1}+A_{2}(U;x))C^{-1}(U;x)\\
&+EC(U;x)A_{1}(U;x)C^{-1}(U;x),
\end{aligned}
\end{equation}
with $A_{1}(U;x),A_{2}(U;x)$ defined in \eqref{espansionediA}. Our aim is to find a symbol $z(U;x)$ such that the matrix of symbols $C(U;x) E(\mathds{1}+A_{2}(U;x))C^{-1}(U;x)$ is diagonal. We reason as follows.
One can note that the eigenvalues of $E(\mathds{1}+A_{2}(U;x))$ are
\[
\lambda^{\pm}:=\pm \sqrt{(1+a_{2}(U;x))^{2}-|b_{2}(U;x)|^{2}}.
\]
We define the symbols
\begin{equation}\label{nuovadiag}
\begin{aligned}
\lambda_{2}^{(1)}(U;x)&:=\lambda^{+},\\
a_{2}^{(1)}(U;x)&:=\lambda_{2}^{(1)}(U;x)-1\in {\mathcal F}_{K,0}[r].
\end{aligned}
\end{equation}
The symbol $\lambda_{2}^{(1)}(U;x)$ is well defined and satisfies \eqref{elly2} thanks to Hypothesis \ref{ipoipo4}. The matrix of the normalized eigenvectors associated to the eigenvalues of $E(\mathds{1}+A_{2}(U;x))$ is
\begin{equation}\label{transC}
\begin{aligned}
S(U;x)&:=\left(\begin{matrix} {s}_1(U;x) & {s}_2(U;x)\\ {\overline{s_2(U;x)}} & {{s_1(U;x)}} \end{matrix} \right),\\
s_{1}(U;x)&:=\frac{1+a_{2}(U;x)+\lambda_{2}^{(1)}(U;x)}{\sqrt{2\lambda_{2}^{(1)}(U;x)\big(1+a_{2}(U;x)+ \lambda_{2}^{(1)}(U;x)\big) }},\\
s_{2}(U;x)&:=\frac{-b_{2}(U;x)}{\sqrt{2\lambda_{2}^{(1)}(U;x)\big(1+a_{2}(U;x)+\lambda_{2}^{(1)}(U;x)\big) }}.
\end{aligned}
\end{equation}
Note that $1+a_{2}(U;x)+\lambda_{2}^{(1)}(U;x) \geq \mathtt{c_1}+\sqrt{\mathtt{c_2}}>0$ by \eqref{benigni}, hence one can check that $S(U;x)-\mathds{1} \in {\mathcal F}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C})$. In particular the matrix $S$ is invertible and one has
\begin{equation}\label{diag1}
S^{-1}(U;x)\big[E (\mathds{1}+A_{2}(U;x))\big] S(U;x)=E \left(\begin{matrix}1+a_{2}^{(1)}(U;x) & 0\\ 0 & 1+a_{2}^{(1)}(U;x) \end{matrix} \right).
\end{equation}
We choose $z(U;x)$ in such a way that $C^{-1}(U;x)=S(U;x)$. Hence we have to solve the equations
\begin{equation}\label{losappaimorisolvere???}
\cosh(|z(U;x)|)=s_1(U;x), \quad \frac{z(U;x)}{|z(U;x)|}\sinh(|z(U;x)|)=-s_{2}(U;x).
\end{equation}
Concerning the first one we note that $s_1$ satisfies
\[
(s_1(U;x))^{2}-1=\frac{|b_2(U;x)|^{2}}{2\lambda_{2}^{(1)}(U;x)(1+a_{2}(U;x)+\lambda_{2}^{(1)}(U;x))}\geq0;
\]
indeed we recall that $1+a_{2}(U;x)+\lambda_{2}^{(1)}(U;x) \geq \mathtt{c_1}+\sqrt{\mathtt{c_2}}>0$ by \eqref{benigni}. Therefore
\[
|z(U;x)|:= {\rm arccosh}(s_1(U;x))=\ln\Big(s_1(U;x)+\sqrt{(s_1(U;x))^{2}-1}\Big)
\]
is well defined. For the second equation one observes that
\[
\frac{\sinh(|z(U;x)|)}{|z(U;x)|}=\sum_{k\geq0}\frac{(z(U;x)\bar{z}(U;x))^{k}}{(2k+1)!}\geq 1,
\]
hence we set
\begin{equation}\label{definizioneC}
z(U;x):=-s_{2}(U;x)\frac{|z(U;x)|}{\sinh(|z(U;x)|)},
\end{equation}
where $|z(U;x)|$ is the function determined above. We set
\begin{equation}\label{nuovosis2}
\begin{aligned}
A^{(1)}(U;x,\xi)&:=A_{2}^{(1)}(U;x)({\rm i}\xi)^{2}+A_{1}^{(1)}(U;x)({\rm i}\xi),\\
A_{2}^{(1)}(U;x)&:= \left(\begin{matrix}a_{2}^{(1)}(U;x) & 0\\ 0 & a_{2}^{(1)}(U;x) \end{matrix} \right),
\end{aligned}
\end{equation}
where $a_{2}^{(1)}(U;x)$ is defined in \eqref{nuovadiag} and $A_{1}^{(1)}(U;x)$ is defined in \eqref{primoordine}. Equation \eqref{diag1}, together with \eqref{nuovosistema} and \eqref{nuovosis2}, implies that \eqref{sistemafinale1} holds. By construction one has that the matrix $A^{(1)}(U;x,\xi)$ satisfies Constraint \ref{Matriceiniziale}. It remains to show that $A^{(1)}(U;x,\xi)$ satisfies either Hyp. \ref{ipoipo} or Hyp. \ref{ipoipo2}. If $A(U;x,\xi)$ satisfies Hyp. \ref{ipoipo2} then we have that ${a_{2}^{(1)}}(U;x)$ in \eqref{nuovadiag} is real. Moreover by construction $S(U;x)$ in \eqref{transC} is even in $x$, therefore by Remark \ref{compsimb} we have that the map $\Phi_{1}(U)$ in \eqref{exp} is parity preserving according to Definition \ref{revmap}. This implies that the matrix $A^{(1)}(U;x,\xi)$ satisfies Hyp. \ref{ipoipo2}. Let us consider the case when $A(U;x,\xi)$ satisfies Hyp. \ref{ipoipo}.
One can check, by an explicit computation, that the map $\Phi_1(U)$ in \eqref{exp} is such that
\begin{equation}\label{quasisimplettica}
\Phi_{1}^{*}(U)(-{\rm i} E )\Phi_{1}(U)=(-{\rm i} E)+\tilde{R}(U),
\end{equation}
for some smoothing operator $\tilde{R}(U)$ belonging to ${\mathcal R}^{-\rho}_{K,0}[r]\otimes{\mathcal M}_{2}(\mathds{C})$. In other words, up to a $\rho$-smoothing operator, the map $\Phi_1(U)$ satisfies conditions \eqref{symsym10}. By following the proof of Lemma \ref{lemmalemma} essentially word by word one obtains that, up to a smoothing operator in the class ${\mathcal R}^{-\rho}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$, the operator ${\rm Op}^{\mathcal{B}W}(A^{(1)}(U;x,\xi))$ in \eqref{sistemafinale1} is self-adjoint. This implies that the matrix $A^{(1)}(U;x,\xi)$ satisfies Hyp. \ref{ipoipo}. This concludes the proof.
\end{proof}
\subsection{Diagonalization of the first order operator}\label{diago1}
In the previous Section we conjugated system \eqref{sistemainiziale} to \eqref{sistemafinale1}, where the matrix $A^{(1)}(U;x,\xi)$ has the form
\begin{equation}\label{espansionediA1}
A^{(1)}(U;x,\xi)=A_{2}^{(1)}(U;x)({\rm i}\xi)^{2}+A_{1}^{(1)}(U;x)({\rm i}\xi),
\end{equation}
with $A_{i}^{(1)}(U;x)$ belonging to ${\mathcal F}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ and where $A_{2}^{(1)}(U;x)$ is diagonal. In this Section we show that, since the matrices $A_{i}^{(1)}(U;x)$ satisfy Hyp. \ref{ipoipo} (respectively Hyp. \ref{ipoipo2}), it is possible to diagonalize the term $A_{1}^{(1)}(U;x)$ as well, through a change of coordinates which is the identity plus a smoothing term. This is the content of the following lemma.
\begin{lemma}\label{step2}
If the matrix $A^{(1)}(U;x,\xi)$ in \eqref{sistemafinale1} satisfies Hypothesis \ref{ipoipo} (resp., together with $P$, satisfies Hyp. \ref{ipoipo2}) then there exists $s_0>0$ (possibly larger than the one in Lemma \ref{step1}) such that for any $s\geq s_0$ there exists an invertible map (resp.
an invertible and parity preserving map)
$$
\Phi_{2}=\Phi_{2}(U) : C^{K-1}_{*\mathbb{R}}(I,{\bf{H}}^{s})\to C^{K-1}_{*\mathbb{R}}(I,{\bf{H}}^{s}),
$$
with
\begin{equation}\label{stimona2}
\|(\Phi_{2}(U))^{\pm1}V\|_{K-1,s}\leq \|V\|_{K-1,s}(1+C\|U\|_{K,s_0}),
\end{equation}
where $C>0$ depends only on $s$ and $\|U\|_{K,s_0}$, such that the following holds. There exists a matrix $A^{(2)}(U;x,\xi)$ satisfying Constraint \ref{Matriceiniziale} and Hyp. \ref{ipoipo} (resp. Hyp. \ref{ipoipo2}) of the form
\begin{equation}\label{gorilla2}
\begin{aligned}
A^{(2)}(U;x,\xi)&:=A_{2}^{(2)}(U;x)({\rm i} \xi)^{2}+A_{1}^{(2)}(U;x)({\rm i} \xi),\\
A_{2}^{(2)}(U;x)&:=A_{2}^{(1)}(U;x),\\
A_{1}^{(2)}(U;x)&:= \left(\begin{matrix} {a}_{1}^{(2)}(U;x) & 0\\ 0 & \overline{{a_{1}^{(2)}(U;x)}} \end{matrix} \right)\in {\mathcal F}_{K,2}[r]\otimes{\mathcal M}_{2}(\mathds{C}),
\end{aligned}
\end{equation}
and operators ${R}^{(2)}_{1}(U),{R}^{(2)}_2(U)$ in ${\mathcal R}^{0}_{K,2}[r]\otimes{\mathcal M}_{2}(\mathds{C})$, such that by setting $V_{2}=\Phi_2(U)V_{1}$ the system \eqref{sistemafinale1} reads
\begin{equation}\label{sistemafinale2}
\partial_t V_{2}={\rm i} E\Big[\Lambda V_{2}+{\rm Op}^{\mathcal{B}W}(A^{(2)}(U;x,\xi))[V_{2}]+R^{(2)}_{1}(U)[V_{2}]+R^{(2)}_{2}(U)[U]\Big].
\end{equation}
\end{lemma}
\begin{proof}
We recall that by Lemma \ref{step1} we have
\begin{equation*}
\begin{aligned}
A^{(1)}(U;x,\xi):=\left( \begin{matrix} a^{(1)}(U;x,\xi) & b^{(1)}(U;x,\xi)\\ \overline{b^{(1)}(U;x,-\xi)} & \overline{a^{(1)}(U;x,-\xi)} \end{matrix} \right).
\end{aligned}
\end{equation*}
Moreover by \eqref{gorilla} we can write
\[
\begin{aligned}
a^{(1)}(U;x,\xi)&=a_{2}^{(1)}(U;x)({\rm i}\xi)^{2}+a_{1}^{(1)}(U;x)({\rm i}\xi),\\
b^{(1)}(U;x,\xi)&=b_{1}^{(1)}(U;x)({\rm i}\xi),
\end{aligned}
\]
with $a_{2}^{(1)}(U;x),a_{1}^{(1)}(U;x),b_{1}^{(1)}(U;x)\in {\mathcal F}_{K,1}[r]$. In the case that $A^{(1)}(U;x,\xi)$ satisfies Hyp. \ref{ipoipo}, we can note that $b_{1}^{(1)}(U;x)\equiv0$.
Hence it is enough to choose $\Phi_{2}(U)\equiv\mathds{1}$ to obtain the thesis. On the other hand, if $A^{(1)}(U;x,\xi)$ satisfies Hyp. \ref{ipoipo2}, we reason as follows. Let us consider a symbol $d(U;x,\xi)$ in the class $\Gamma^{-1}_{K,1}[r]$ and define
\begin{equation}\label{mappa3}
\begin{aligned}
&D(U;x,\xi):=\left( \begin{matrix} 0&d(U;x,\xi)\\ \overline{d(U;x,-\xi)} & 0 \end{matrix} \right)\in \Gamma^{-1}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C}).
\end{aligned}
\end{equation}
Let $\Phi_{2}^{\tau}(U)[\cdot]$ be the flow of the system
\begin{equation}
\left\{\begin{aligned}
&\partial_{\tau}\Phi_{2}^{\tau}(U)={\rm Op}^{\mathcal{B}W}(D(U;x,\xi))\Phi_{2}^{\tau}(U),\\
&\Phi_{2}^{0}(U)=\mathds{1}.
\end{aligned}\right.
\end{equation}
Reasoning as done for the system \eqref{generatore2}, one has that there exists a unique family of invertible bounded operators on ${\bf{H}}^{s}$ satisfying
\begin{equation}\label{stimona200}
\|(\Phi_{2}^{\tau}(U))^{\pm1}V\|_{K-1,s}\leq \|V\|_{K-1,s}(1+C\|U\|_{K,s_0}),
\end{equation}
where $C>0$ depends only on $s$ and $\|U\|_{K,s_0}$, for any $\tau\in [0,1]$. The operator $W^{\tau}(U)[\cdot]:=\Phi_{2}^{\tau}(U)[\cdot]-(\mathds{1}+\tau{\rm Op}^{\mathcal{B}W}(D(U;x,\xi)))$ solves the following system:
\begin{equation}\label{sistW}
\left\{\begin{aligned}
&\partial_{\tau}W^{\tau}(U)={\rm Op}^{\mathcal{B}W}(D(U;x,\xi))W^{\tau}(U)+\tau {\rm Op}^{\mathcal{B}W}(D(U;x,\xi))\circ{\rm Op}^{\mathcal{B}W}(D(U;x,\xi)),\\
&W^{0}(U)=0.
\end{aligned}\right.
\end{equation}
Therefore, by the Duhamel formula, one can check that $W^{\tau}(U)$ is a smoothing operator in the class ${\mathcal R}^{-2}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ for any $\tau\in[0,1]$. We set $\Phi_{2}(U)[\cdot]:=\Phi_{2}^{\tau}(U)[\cdot]_{|_{\tau=1}}$; by the discussion above there exists $Q(U)$ in ${\mathcal R}^{-2}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ such that
\[
\Phi_{2}(U)[\cdot]=\mathds{1}+{\rm Op}^{\mathcal{B}W}(D(U;x,\xi))+Q(U).
\]
Since $\Phi_{2}^{-1}(U)$ exists, by symbolic calculus it is easy to check that there exists $\tilde{Q}(U)$ in ${\mathcal R}^{-2}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ such that
\[
\Phi^{-1}_{2}(U)[\cdot]=\mathds{1}-{\rm Op}^{\mathcal{B}W}(D(U;x,\xi))+\tilde{Q}(U).
\]
We set $V_{2}:=\Phi_{2}(U)[V_{1}]$, therefore the system \eqref{sistemafinale1} in the new coordinates reads
\begin{equation}\label{nuovosist}
\begin{aligned}
(V_2)_{t}&=\Phi_{2}(U){\rm i} E\Big(\Lambda+{\rm Op}^{\mathcal{B}W}(A^{(1)}(U;x,\xi))+R_{1}^{(1)}(U)\Big)(\Phi_{2}(U))^{-1}[V_{2}]+\\
&+\Phi_{2}(U){\rm i} ER_{2}^{(1)}(U)[U]+(\partial_{t}\Phi_2(U))(\Phi_{2}(U))^{-1}[V_{2}].
\end{aligned}
\end{equation}
The summand $\Phi_{2}(U){\rm i} ER_{2}^{(1)}(U)[\cdot] $ belongs to the class ${\mathcal R}^{0}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ by the composition Propositions \ref{componiamoilmondo} and \ref{componiamoilmare}. Since $\partial_{t}D(U;x,\xi)$ belongs to $\Gamma^{-1}_{K,2}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ and $\partial_t Q$ is in ${\mathcal R}^{-2}_{K,2}[r]\otimes{\mathcal M}_{2}(\mathds{C})$, the last summand in \eqref{nuovosist} belongs to ${\mathcal R}^{0}_{K,2}[r]\otimes{\mathcal M}_{2}(\mathds{C})$. We now study the first summand. First we note that $\Phi_{2}(U){\rm i} ER_{1}^{(1)}(U)\Phi_{2}^{-1}(U)$ is a bounded remainder in ${\mathcal R}^{0}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$. It remains to study the term
\[
{\rm i} \Phi_{2}(U)\Big[E\mathfrak{P}\big(\Phi_{2}^{-1}(U)[V_2]\big) \Big]+ {\rm i} \Phi_{2}(U)\Big[ {\rm Op}^{\mathcal{B}W}\big(E(\mathds{1}+A_{2}^{(1)}(U;x))({\rm i}\xi)^{2}+EA_{1}^{(1)}(U;x)({\rm i}\xi) \big) \Big]\Phi_{2}^{-1}(U)[V_2],
\]
where $\mathfrak{P}$ is defined in \eqref{convototale}. The first term is equal to ${\rm i} E(\mathfrak{P}V_2)$ up to a bounded term in ${\mathcal R}^{0}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ by Lemma \ref{Convocoj}.
The second is equal to
\begin{equation}
\begin{aligned}
&{\rm i} {\rm Op}^{\mathcal{B}W}\big(E(\mathds{1}+A_{2}^{(1)}(U;x))({\rm i}\xi)^{2}+EA_{1}^{(1)}(U;x)({\rm i}\xi) \big)+\\
&+\Big[ {\rm Op}^{\mathcal{B}W}(D(U;x,\xi)), {\rm i} E{\rm Op}^{\mathcal{B}W}\big((\mathds{1}+A_{2}^{(1)}(U;x))({\rm i}\xi)^{2}\big) \Big]
\end{aligned}
\end{equation}
modulo bounded terms in ${\mathcal R}^{0}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$. By using formula \eqref{sbam8} one gets that the commutator above is equal to ${\rm Op}^{\mathcal{B}W}(M(U;x,\xi))$ with
\begin{equation}
\begin{aligned}
M(U;x,\xi)&:=\left(\begin{matrix} 0& m(U;x,\xi)\\ {\overline{m(U;x,-\xi)}} &0 \end{matrix} \right),\\
m(U;x,\xi)&:=-2d(U;x,\xi)(1+a_{2}^{(1)}(U;x))({\rm i}\xi)^{2},
\end{aligned}
\end{equation}
up to terms in ${\mathcal R}^{0}_{K,1}[r]\otimes{\mathcal M}_{2}(\mathds{C})$. Therefore the system obtained after the change of coordinates reads
\begin{equation}
(V_{2})_{t}={\rm i} E\Big[\Lambda V_{2}+{\rm Op}^{\mathcal{B}W}(A^{(2)}(U;x,\xi))[V_{2}]+Q_1(U)[V_{2}]+Q_{2}(U)[U]\Big],
\end{equation}
where $Q_1(U)$ and $Q_2(U)$ are bounded terms in ${\mathcal R}^{0}_{K,2}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ and the new matrix $A^{(2)}(U;x,\xi)$ is
\begin{equation}\label{riassunto}
\left(\begin{matrix} {a}_{2}^{(1)}(U;x) & 0\\ 0 & \overline{{a_{2}^{(1)}(U;x)}} \end{matrix} \right)({\rm i}\xi)^2+ \left(\begin{matrix} {a}_{1}^{(1)}(U;x) & b_1^{(1)}(U;x)\\ \overline{b_1^{(1)}(U;x)} & \overline{{a_{1}^{(1)}(U;x)}} \end{matrix} \right)({\rm i}\xi)+M(U;x,\xi).
\end{equation}
Hence the elements on the diagonal are not affected by the change of coordinates. Our aim is now to choose $d(U;x,\xi)$ in such a way that the symbol
\begin{equation}\label{leespansioni-belle}
b_{1}^{(1)}(U;x)({\rm i}\xi)+m(U;x,\xi)=b_{1}^{(1)}(U;x)({\rm i}\xi)-2d(U;x,\xi)(1+a_{2}^{(1)}(U;x))({\rm i}\xi)^{2}
\end{equation}
belongs to $\Gamma^{0}_{K,2}[r]$.
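At the level of principal symbols the expression of $m(U;x,\xi)$ can be recovered from a direct matrix computation: writing $\lambda:=(1+a_{2}^{(1)}(U;x))({\rm i}\xi)^{2}$, which is real since $a_{2}^{(1)}(U;x)$ is real in this case, and using $E={\rm diag}(1,-1)$, one has
\begin{equation*}
\big[D(U;x,\xi),{\rm i} E\lambda\big]={\rm i}\lambda\big(D(U;x,\xi)E-ED(U;x,\xi)\big)={\rm i}\lambda\left(\begin{matrix} 0 & -2d(U;x,\xi)\\ 2\overline{d(U;x,-\xi)} & 0 \end{matrix} \right)={\rm i} E M(U;x,\xi);
\end{equation*}
the lower order terms produced by the symbolic calculus in \eqref{sbam8} contribute to the bounded remainders.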
We split the symbol in \eqref{leespansioni-belle} into low and high frequencies: let $\varphi(\xi)$ be a function in $C^{\infty}_0(\mathds{R};\mathds{R})$ such that $\rm{supp}(\varphi)\subset [-1,1]$ and $\varphi\equiv 1$ on $[-1/2,1/2]$. Trivially one has that $\varphi(\xi)(b_{1}^{(1)}(U;x)({\rm i}\xi)+m(U;x,\xi))$ is a symbol in $\Gamma^0_{K,1}[r]$, so it is enough to solve the equation
\begin{equation}\label{monk}
\big(1-\varphi(\xi)\big)\left[b_{1}^{(1)}(U;x)({\rm i}\xi)-2d(U;x,\xi)\left(1+a_{2}^{(1)}(U;x)\right)({\rm i}\xi)^{2}\right]=0.
\end{equation}
Hence we choose the symbol $d$ as
\begin{equation}\label{simbolod}
\begin{aligned}
&d(U;x,\xi)=\left(\frac{b_{1}^{(1)}(U;x)}{2(1+a_{2}^{(1)}(U;x))} \right)\cdot\gamma{(\xi)},\\
&\gamma(\xi)=\left\{
\begin{aligned}
&\frac{1}{{\rm i}\xi} &&\text{if } |\xi|\geq \tfrac12, \\
&\text{an odd continuation of class } C^{\infty} &&\text{if } |\xi|< \tfrac12.
\end{aligned}\right.
\end{aligned}
\end{equation}
Clearly the symbol $d(U;x,\xi)$ in \eqref{simbolod} belongs to $\Gamma^{-1}_{K,1}[r]$, hence the map $\Phi_{2}(U)$ in \eqref{mappa3} is well defined and estimate \eqref{stimona2} holds. It is evident that, after the choice of the symbol in \eqref{simbolod}, the matrix $A^{(2)}(U;x,\xi)$ is
\begin{equation}
\left(\begin{matrix} {a}_{2}^{(1)}(U;x) & 0\\ 0 & \overline{{a_{2}^{(1)}(U;x)}} \end{matrix} \right)({\rm i}\xi)^2+\left(\begin{matrix} {a}_{1}^{(1)}(U;x) & 0\\ 0 & \overline{{a_{1}^{(1)}(U;x)}} \end{matrix} \right)({\rm i}\xi).
\end{equation}
The symbol $d(U;x,\xi)$ satisfies $d(U;x,\xi)=d(U;-x,-\xi)$ because $b_1^{(1)}(U;x)$ is odd in $x$ and $a_2^{(1)}(U;x)$ is even in $x$; therefore, by Remark \ref{compsimb}, the map $\Phi_2(U)$ is \emph{parity preserving}.
\end{proof}
\subsection{Reduction to constant coefficients 1: paracomposition}\label{ridu2}
Consider the diagonal matrix of functions $A_{2}^{(2)}(U;x)\in {\mathcal F}_{K,2}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ defined in \eqref{gorilla2}.
In this section we shall reduce the operator ${\rm Op}^{\mathcal{B}W}(A_2^{(2)}(U;x)({\rm i}\xi)^{2})$ to a constant-coefficient one, up to bounded terms (see \eqref{gorilla3}). For this purpose we shall use a paracomposition operator (in the sense of Alinhac \cite{AliPARA}) associated to the diffeomorphism $x\mapsto x+\beta{(x)}$ of $\mathbb{T}$. We follow Section 2.5 of \cite{maxdelort} and in particular we shall use their alternative definition of paracomposition operator. Consider a real symbol $\beta(U;x)$ in the class $\mathcal{F}_{K,K'}[r]$ and the map
\begin{equation}\label{diffeo1}
\Phi_U: x\mapsto x+\beta(U;x).
\end{equation}
We state the following.
\begin{lemma}\label{LEMMA252}
Let $0\leq K'\leq K$ be in $\mathds{N}$, $r>0$ and $\beta(U;x)\in \mathcal{F}_{K,K'}[r]$ for $U$ in the space $C^{K}_{*\mathds{R}}(I,{\bf{H}}^{s_0})$. If $s_0$ is sufficiently large, $\beta$ is $2\pi$-periodic in $x$ and satisfies
\begin{equation}\label{conddiffeo}
1+ \beta_{x}(U;x)\geq \Theta>0, \quad x\in \mathds{R},
\end{equation}
for some constant $\Theta$ depending on $\sup_{t\in I}\norm{U(t)}{{\bf{H}}^{s_0}}$, then the map $\Phi_{U}$ in \eqref{diffeo1} is a diffeomorphism from $\mathbb{T}$ to itself, and its inverse may be written as
\begin{equation}\label{diffeo2}
(\Phi_U)^{-1}: y\mapsto y+\gamma(U;y)
\end{equation}
for some $\gamma$ in $\mathcal{F}_{K,K'}[r]$.
\end{lemma}
\begin{proof}
Under condition \eqref{conddiffeo} there exists $\gamma(U; y)$ such that
\begin{equation}\label{conddiffeo2}
x+\beta(U;x)+\gamma(U;x+\beta(U;x))=x, \quad \;\; x\in \mathds{R}.
\end{equation}
One can prove the bound \eqref{simbo} on the function $\gamma(U;y)$ by differentiating in $x$ the equation \eqref{conddiffeo2} and using that $\beta(U;x)$ is a symbol in ${\mathcal F}_{K,K'}[r]$.
\end{proof}
\begin{rmk}
The Lemma above is very similar to Lemma $2.5.2$ of \cite{maxdelort}, where the authors use a smallness assumption on $r$ to prove the result.
Here this assumption is replaced by \eqref{conddiffeo}, in order to treat initial conditions of large size.
\end{rmk}
\begin{rmk}\label{pathdiffeo}
By Lemma \ref{LEMMA252} one has that $x\mapsto x+\tau\beta(U;x)$ is a diffeomorphism of $\mathbb{T}$ for any $\tau\in [0,1]$. Indeed
\begin{equation*}
1+\tau\beta_x({U;x})=1-\tau+\tau(1+\beta_x({U;x}))\geq (1-\tau)+\tau\Theta\geq\min\{1,\Theta\}>0,
\end{equation*}
for any $\tau\in [0,1]$. Hence \eqref{conddiffeo} holds true with $\Theta$ replaced by $\min\{1,\Theta\}$ and Lemma \ref{LEMMA252} applies.
\end{rmk}
With the aim of simplifying the notation we set $\beta{(x)}:=\beta(U;x)$, $\gamma(y):=\gamma(U;y)$ and we define the following quantities
\begin{equation}\label{bbb}
\begin{aligned}
B(\tau;x,\xi)=B(\tau,U;x,\xi)&:=-{\rm i} b(\tau;x)({\rm i} \xi), \\
b(\tau;x)&:=\frac{\beta(x)}{(1+\tau \beta_{x}(x))}.
\end{aligned}
\end{equation}
Then one defines the paracomposition operator associated to the diffeomorphism \eqref{diffeo1} as $\Omega_{B(U)}(1)$, where $\Omega_{B(U)}(\tau)$ is the flow of the linear paradifferential equation
\begin{equation}\label{flow}
\left\{
\begin{aligned}
&\frac{d}{d\tau}\Omega_{B(U)}(\tau)={\rm i} {\rm Op}^{\mathcal{B}W}{(B(\tau;x,\xi))}\Omega_{B(U)}(\tau),\\
&\Omega_{B(U)}(0)=\rm{id}.
\end{aligned}\right.
\end{equation}
We now state a Lemma asserting that the problem \eqref{flow} is well posed and that its solution is a one-parameter family of bounded operators on $H^s$; this is one of the main properties of a paracomposition operator. For the proof of the result we refer to Lemma $2.5.3$ in \cite{maxdelort}.
\begin{lemma}\label{torodiff}
Let $0\leq K'\leq K$ be in $\mathds{N}$, $r>0$ and $\beta(U;x)\in \mathcal{F}_{K,K'}[r]$ for $U$ in the space $C^{K}_{*\mathds{R}}(I,{\bf{H}}^{s})$. The system \eqref{flow} has a unique solution defined for $\tau\in[-1,1]$.
Moreover for any $s$ in $\mathbb{R}$ there exists a constant $C_s>0$ such that for any $U$ in $B^K_{s_0}(I,r)$ and any $W$ in $H^s$
\begin{equation}\label{flow2}
C_{s}^{-1}\|W\|_{H^{s}}\leq \|\Omega_{B(U)}(\tau)W\|_{H^{s}} \leq C_{s} \|W\|_{H^{s}}, \quad \forall\; \tau\in[-1,1],
\end{equation}
and
\begin{equation}\label{flow3}
\|\Omega_{B(U)}(\tau)W\|_{K-K', s}\leq (1+C\|U\|_{K,s_0})\|W\|_{K-K',s},
\end{equation}
where $C>0$ is a constant depending only on $s$ and $\|U\|_{K,s_0}$.
\end{lemma}
\begin{rmk}
As pointed out in Remark \ref{differenzaclassidisimbo}, our classes of symbols are slightly different from the ones in \cite{maxdelort}. For this reason the authors in \cite{maxdelort} are more precise about the constant $C$ in \eqref{flow3}. However the proof can be adapted in a straightforward way.
\end{rmk}
\begin{rmk}\label{pathdi}
In the following we shall study how a symbol $a(U;x,\xi)$ changes under conjugation through the flow $\Omega_{B(U)}(\tau)$ introduced in Lemma \ref{torodiff}. In order to do this we shall apply Theorem $2.5.8$ in \cite{maxdelort}. Such a result requires that $x\mapsto x+\tau \beta(U;x)$ is a path of diffeomorphisms for $\tau\in [0,1]$. In \cite{maxdelort} this fact is achieved by using the smallness of $r$; here it is implied by Remark \ref{pathdiffeo}.
\end{rmk}
We now study how the convolution operator $P*$ changes under the flow $\Omega_{B(U)}(\tau)$ introduced in Lemma \ref{torodiff}.
\begin{lemma}\label{Convocoj2}
Let $P : \mathbb{T}\to \mathds{R}$ be a $C^{1}$ function, let us define $P_{*}[h]=P*h$ for $h\in H^{s}$, where $*$ denotes the convolution of functions, and set $\Phi(U)[\cdot]:=\Omega_{B(U)}(\tau)_{|_{\tau=1}}$. There exists $R$ belonging to ${\mathcal R}^{0}_{K,K'}[r]$ such that
\begin{equation}\label{convoluzionetot1000}
\Phi(U)\circ P_{*}\circ \Phi^{-1}(U)[\cdot] =P_{*}[\cdot]+R(U)[\cdot].
\end{equation}
Moreover if $P(x)$ is even in $x$ and $\Phi(U)$ is parity preserving according to Definition \ref{revmap} then the remainder $R(U)$ in \eqref{convoluzionetot1000} is parity preserving.
\end{lemma}
\begin{proof}
Using equation \eqref{flow} and estimate \eqref{paraparaest} one has that, for $0\leq k\leq K-K'$, the following holds true
\begin{equation}\label{biascica}
\|\partial_{t}^{k}\big(\Phi^{\pm1}(U)-{\rm Id}\big) h\|_{H^{s-1-2k}}\leq \sum_{k_1+k_{2}=k} C \|U\|_{K'+k_1,s_0}\|h\|_{k_2,s},
\end{equation}
where $C>0$ depends only on $\|U\|_{K,s_0}$ and ${\rm Id}$ is the identity map on $H^{s}$. Therefore we can write
\begin{equation}\label{duccio}
\Phi(U)\Big[ P*\big[\Phi^{-1}(U)h\big] \Big]=P*h+\Big((\Phi(U)-{\rm Id})(P*h)\Big)+\Phi(U)\Big[P*\Big((\Phi^{-1}(U)-{\rm Id})h\Big)\Big].
\end{equation}
Using estimate \eqref{biascica} and the fact that the function $P$ is of class $C^1(\mathbb{T})$, we can estimate the last two summands on the r.h.s. of \eqref{duccio} as follows:
\begin{equation*}
\begin{aligned}
&\norm{\partial_{t}^{k}(\Phi(U)-{\rm{Id}})(P*h)}{H^{s-2k}}\leq \sum_{k_1+k_{2}=k}C\norm{U}{K'+k_1,s_0}\norm{P*h}{k_2,s+1}\leq \sum_{k_1+k_{2}=k}C\norm{U}{K'+k_1,s_0}\norm{h}{k_2,s},\\
&\norm{\partial_{t}^{k}\Big(\Phi(U)\left[P*\left((\Phi^{-1}(U)-{\rm{Id}})h\right)\right]\Big)}{H^{s-2k}} \leq \sum_{k_1+k_2=k}C\norm{U}{K'+k_1,s_0} \norm{(\Phi^{-1}(U)-{\rm Id})h}{k_2,s-1}\\
&\qquad \qquad \leq \sum_{k_1+k_2=k}C\norm{U}{K'+k_1,s_0} \norm{h}{k_2,s},
\end{aligned}
\end{equation*}
for $0\leq k\leq K-K'$, where $C$ is a constant depending on $\norm{P}{C^1}$ and $\norm{U}{K,s_0}$. Hence they belong to the class ${\mathcal R}^{0}_{K,K'}[r]$. Finally if $P(x)$ is even in $x$ then the operator $P_{*}$ is parity preserving according to Definition \ref{revmap}; therefore if in addition $\Phi(U)$ is parity preserving, so is $R(U)$ in \eqref{convoluzionetot1000}.
\end{proof}
We are now in position to prove the following.
\begin{lemma}\label{step3} If the matrix $A^{(2)}(U;x,\xi)$ in \eqref{sistemafinale2} satisfies Hyp. \ref{ipoipo} (resp., together with $P$, satisfies Hyp. \ref{ipoipo2}) then there exists $s_0>0$ (possibly larger than the one in Lemma \ref{step2}) such that for any $s\geq s_0$ there exists an invertible map (resp. an invertible and {parity preserving} map) $$ \Phi_{3}=\Phi_{3}(U) : C^{K-2}_{*\mathbb{R}}(I,{\bf{H}}^{s}(\mathbb{T},\mathds{C}^2))\to C^{K-2}_{*\mathbb{R}}(I,{\bf{H}}^{s}(\mathbb{T},\mathds{C}^2)), $$ with \begin{equation}\label{stimona3} \|(\Phi_{3}(U))^{\pm1}V\|_{K-2,s}\leq \|V\|_{K-2,s}(1+C\|U\|_{K,s_0}), \end{equation} where $C>0$ depends only on $s$ and $\|U\|_{K,s_0}$, such that the following holds. There exists a matrix $A^{(3)}(U;x,\xi)$ satisfying Constraint \ref{Matriceiniziale} and Hyp. \ref{ipoipo} (resp. Hyp. \ref{ipoipo2}) of the form \begin{equation}\label{gorilla3} \begin{aligned} A^{(3)}(U;x,\xi)&:=A_{2}^{(3)}(U)({\rm i} \xi)^{2}+A_{1}^{(3)}(U;x)({\rm i} \xi),\\ A_{2}^{(3)}(U)&:=\left(\begin{matrix} a_{2}^{(3)}(U) & 0 \\ 0 & a_{2}^{(3)}(U) \end{matrix} \right), \quad a_{2}^{(3)}\in {\mathcal F}_{K,3}[r], \quad {\rm independent\,\, of} \; x\in \mathbb{T}, \\ A_{1}^{(3)}(U;x)&:= \left(\begin{matrix} {a}_{1}^{(3)}(U;x) & 0\\ 0 & \overline{{a_{1}^{(3)}(U;x)}} \end{matrix} \right)\in {\mathcal F}_{K,3}[r]\otimes{\mathcal M}_{2}(\mathds{C}),\\ \end{aligned} \end{equation} and operators ${R}^{(3)}_{1}(U),{R}^{(3)}_2(U)$ in ${\mathcal R}^{0}_{K,3}[r]\otimes{\mathcal M}_{2}(\mathds{C})$, such that by setting $V_{3}=\Phi_{3}(U)V_{2}$ the system \eqref{sistemafinale2} reads \begin{equation}\label{sistemafinale3} \partial_t V_{3}={\rm i} E\Big[\Lambda V_{3}+{\rm Op}^{\mathcal{B}W}(A^{(3)}(U;x,\xi))[V_{3}]+R^{(3)}_{1}(U)[V_{3}]+R^{(3)}_{2}(U)[U]\Big]. \end{equation} \end{lemma} \begin{proof} Let $\beta(U;x)$ be a real symbol in $\mathcal{F}_{K,2}[r]$, to be chosen later, such that condition \eqref{conddiffeo} holds.
Moreover, let $\gamma(U;x)$ be the symbol such that \eqref{conddiffeo2} holds. In accordance with the hypotheses of Lemma \ref{torodiff}, consider the system \begin{equation}\label{felice} \begin{aligned} \dot{W}={\rm i} E M W,\quad W(0)=\mathds{1}, \quad M:={\rm Op}^{\mathcal{B}W} \left(\begin{matrix} B(\tau,x,\xi) & 0\\ 0 & \overline{B(\tau,x,-\xi)} \end{matrix}\right), \end{aligned} \end{equation} where $B$ is defined in \eqref{bbb}. Note that $\overline{B(\tau,x,-\xi)}=-B(\tau, x, \xi)$. By Lemma \ref{torodiff} the flow exists and is bounded on ${\bf{H}}^s(\mathbb{T},\mathds{C}^2)$ and moreover \eqref{stimona3} holds. We want to conjugate the system \eqref{sistemafinale2} through the map $\Phi_3(U)[\cdot]=W(1)[\cdot]$. Set $V_3=\Phi_3(U)V_2$. The system in the new coordinates reads \begin{equation}\label{chi} \begin{aligned} \frac{d}{dt}V_3= \Phi_3(U)\Big[{\rm i} E \mathfrak{P}\big[\Phi_3^{-1}(U)V_3\big]\Big]&+ ( \partial_{t}\Phi_{3}(U))\Phi_{3}^{-1}(U)[V_3] \\ &+\Phi_3(U)\Big[{\rm i} E{\rm Op}^{\mathcal{B}W}((\mathds{1}+A_2^{(2)}(U;x))({\rm i}\xi)^2)\Big] \Phi_{3}^{-1}(U)[V_{3}]\\ &+\Phi_3(U)\Big[{\rm i} E{\rm Op}^{\mathcal{B}W}(A_1^{(2)}(U;x)({\rm i}\xi))\Big] \Phi_{3}^{-1}(U)[V_{3}]\\ &+\Phi_3(U)\Big[{\rm i} ER_1^{(2)}(U)\Big] \Phi_{3}^{-1}(U)[V_{3}]+\Phi_3(U){\rm i} ER_{2}^{(2)}(U)[U], \end{aligned} \end{equation} where $\mathfrak{P}$ is defined in \eqref{convototale}. We now discuss each term in \eqref{chi}. The first one, by Lemma \ref{Convocoj2}, is equal to ${\rm i} E \mathfrak{P}V_3$ up to a bounded remainder in the class ${\mathcal R}^{0}_{K,2}[r]\otimes{\mathcal M}_{2}(\mathds{C})$. The last two terms also belong to the latter class because the map $\Phi_3$ is a bounded operator on ${\bf{H}}^s$.
For the term $(\partial_{t}\Phi_{3}(U))\Phi_{3}^{-1}(U)[V_3]$ we apply Proposition $2.5.9$ of \cite{maxdelort} and we obtain that \begin{equation} (\partial_{t}\Phi_{3}(U))\Phi_{3}^{-1}(U)[V_3]= {\rm Op}^{\mathcal{B}W}\left(\begin{matrix}e(U;x)({\rm i}\xi) & 0 \\ 0 & \overline{e(U;x)}({\rm i}\xi)\end{matrix}\right)[V_3]+\tilde{R}(U)[V_3], \end{equation} where $\tilde{R}$ belongs to ${\mathcal R}^{-1}_{K,3}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ and $e(U;x)$ is a symbol in ${\mathcal F}_{K,3}[r]$ such that ${\rm{Re}}(e(U;x))=0$. It remains to study the conjugate of the paradifferential terms in \eqref{chi}. We note that \[ \begin{aligned} \Phi_3(U)&\Big[{\rm i} E{\rm Op}^{\mathcal{B}W}((\mathds{1}+A_2^{(2)}(U;x))({\rm i}\xi)^2)\Big] \Phi_{3}^{-1}(U)[V_{3}] +\Phi_3(U)\Big[{\rm i} E{\rm Op}^{\mathcal{B}W}(A_1^{(2)}(U;x)({\rm i}\xi))\Big] \Phi_{3}^{-1}(U)[V_{3}]\\ &=\left(\begin{matrix} T & 0 \\ 0 & \overline{T} \end{matrix} \right) \end{aligned} \] where $T$ is the operator \begin{equation}\label{egoego} T=\Omega_{B(U)}(1){\rm Op}^{\mathcal{B}W}\Big((1+a_{2}^{(2)}(U;x))({\rm i}\xi)^2+a_{1}^{(2)}(U;x)({\rm i}\xi)\Big)\Omega^{-1}_{B(U)}(1). \end{equation} The Paracomposition Theorem $2.5.8$ in \cite{maxdelort}, which can be used thanks to Remarks \ref{pathdiffeo} and \ref{pathdi}, guarantees that \begin{equation} T={\rm Op}^{\mathcal{B}W} ( \tilde{a}_{2}^{(3)}(U;x,\xi)+a_{1}^{(3)}(U;x)({\rm i}\xi))[\cdot] \end{equation} up to a bounded term in ${\mathcal R}^0_{K,3}[r]$, where \begin{equation}\label{sol} \begin{aligned} \tilde{a}_{2}^{(3)}(U;x,\xi)&= \big(1+a^{(2)}_2(U;y)\big)\big(1+\gamma_{y}(1,y)\big)^2_{|_{y=x+\beta(x)}}({\rm i}\xi)^2, \\ {a}_{1}^{(3)}(U;x)&=a^{(2)}_1(U;y)\big(1+\gamma_{y}(1,y)\big)_{|_{y=x+\beta(x)}}.
\end{aligned} \end{equation} Here $\gamma(1,x)=\gamma(\tau,x)_{|_{\tau=1}}=\gamma(U;\tau,x)_{|_{\tau=1}}$ with \[ y=x+\tau\beta(U;x)\; \Leftrightarrow x=y+\gamma(\tau,y), \;\; \tau\in [0,1], \] where $x+\tau\beta(U;x)$ is the path of diffeomorphisms given by Remark \ref{pathdiffeo}. By Lemma 2.5.4 of Section 2.5 of \cite{maxdelort} one has that the new symbols $\tilde{a}_{2}^{(3)}(U;x,\xi), a_{1}^{(3)}(U;x)$ defined in \eqref{sol} belong to the classes $\Gamma^2_{K,3}[r]$ and ${\mathcal F}_{K,3}[r]$ respectively. At this point we want to choose the symbol $\beta(x)$ in such a way that $\tilde{a}^{(3)}_2(U;x,\xi)$ does not depend on $x$. One can proceed as follows. Let $a_{2}^{(3)}(U)$ be an $x$-independent function to be chosen later; one would like to solve the equation \begin{equation}\label{mgrande} \big(1+a_2^{(2)}(U;y)\big)\big(1+\gamma_{y}(1,y)\big)^2_{|_{y=x+\beta(x)}}({\rm i}\xi)^2=(1+a_{2}^{(3)}(U))({\rm i}\xi)^2. \end{equation} The solution of this equation is given by \begin{equation}\label{mgrande2} \gamma(U;1,y)=\partial_y^{-1}\left(\sqrt{\frac{1+a_{2}^{(3)}(U)}{1+a^{(2)}_2(U;y)}}-1\right). \end{equation} In principle this solution is just formal because the operator $\partial_y^{-1}$ is defined only for functions with zero mean, therefore we have to choose $a_{2}^{(3)}(U)$ in such a way that \begin{equation}\label{mgrande3} \int_{\mathbb{T}}\left(\sqrt{\frac{1+a_{2}^{(3)}(U)}{1+a^{(2)}_2(U;y)}}-1\right)dy=0, \end{equation} which means \begin{equation}\label{felice6} 1+a_{2}^{(3)}(U):=\left[2\pi \left(\int_{\mathbb{T}}\frac{1}{\sqrt{1+a^{(2)}_{2}(U;y)}}dy \right)^{-1}\right]^{2}. \end{equation} Note that everything is well defined thanks to the positivity of $1+a_{2}^{(2)}$. Indeed $a_{2}^{(2)}=a_{2}^{(1)}$ by \eqref{gorilla2}, and $a_{2}^{(1)}$ satisfies \eqref{elly2}. In particular, every denominator in \eqref{mgrande2}, \eqref{mgrande3} and \eqref{felice6} stays far away from $0$.
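For the reader's convenience we sketch why the choice \eqref{felice6} is forced. Since $a_{2}^{(3)}(U)$ does not depend on $y$ and $\int_{\mathbb{T}}dy=2\pi$, condition \eqref{mgrande3} is equivalent to
\[
\sqrt{1+a_{2}^{(3)}(U)}\int_{\mathbb{T}}\frac{dy}{\sqrt{1+a^{(2)}_{2}(U;y)}}=2\pi,
\]
and squaring this identity gives precisely \eqref{felice6}.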
Note that $\gamma(U;y)$ belongs to ${\mathcal F}_{K,2}[r]$ and so does $\beta(U;x)$ by Lemma \ref{LEMMA252}. By using \eqref{conddiffeo2} one can deduce that \begin{equation} 1+\beta_x(U;x)=\frac{1}{1+\gamma_y(U;1,y)} \end{equation} where \begin{equation} 1+\gamma_{y}(U;1,y)=2\pi \left(\int_{\mathbb{T}}\frac{1}{\sqrt{1+a^{(2)}_{2}(U;y)}}dy\right)^{-1}\frac{1}{\sqrt{1+a_2^{(2)}(U;y)}}, \end{equation} thanks to \eqref{mgrande2} and \eqref{felice6}. Since the matrix $A_2^{(2)}$ satisfies Hypothesis \ref{ipoipo4} one has that there exists a universal constant $\mathtt{c}>0$ such that $1+a_2^{(2)}(U;y)\geq\mathtt{c}$. Therefore one has \[ \begin{aligned} 1+\beta_x({U;x})=\frac{1}{1+\gamma_y(U;1,y)}&\geq \sqrt{\mathtt{c}}\frac{1}{2\pi}\int_{\mathbb{T}}\frac{1}{\sqrt{1+a_{2}^{(2)}(U;y) }}dy\\ &\geq\frac{1}{2\pi}\frac{\sqrt{\mathtt{c}}}{1+C\|U\|_{0,s_0}}:=\Theta>0, \end{aligned} \] for some $C$ depending only on $\|U\|_{K,s_0}$, where we used the fact that $a_{2}^{(2)}(U;y)$ belongs to the class ${\mathcal F}_{K,2}[r]$ (see Def. \ref{apeape}). This implies that $\beta(U;x)$ satisfies condition \eqref{conddiffeo}. We have written system \eqref{sistemafinale2} in the form \eqref{sistemafinale3} with matrices defined in \eqref{gorilla3}. It remains to show that the new matrix $A^{(3)}(U;x,\xi)$ satisfies either Hyp. \ref{ipoipo} or \ref{ipoipo2}. If $A^{(2)}(U;x,\xi)$ is selfadjoint, i.e. satisfies Hypothesis \ref{ipoipo}, then one has that the matrix $A^{(3)}(U;x,\xi)$ is selfadjoint as well, thanks to the fact that the map $W(1)$ satisfies the hypotheses (condition \eqref{symsym10}) of Lemma \ref{lemmalemma}, by using Lemma \ref{lemmalemma2}. In the case that $A^{(2)}(U;x,\xi)$ is parity preserving, i.e. satisfies Hypothesis \ref{ipoipo2}, then $A^{(3)}(U;x,\xi)$ has the same properties for the following reasons. The symbols $\beta(U;x)$ and $\gamma(U;x)$ are odd in $x$ if the function $U$ is even in $x$.
Hence the flow map $W(1)$ defined by equation \eqref{felice} is parity preserving. Moreover the matrix $A^{(3)}(U;x,\xi)$ satisfies Hypothesis \ref{ipoipo2} by explicit computation. \end{proof} \subsection{Reduction to constant coefficients 2: first order terms }\label{ridu1} Lemmata \ref{step1}, \ref{step2}, \ref{step3} guarantee that one can conjugate the system \eqref{sistemainiziale} to the system \eqref{sistemafinale3} in which the matrix $A^{(3)}(U;x,\xi)$ (see \eqref{gorilla3}) has the form \begin{equation}\label{espansionediA3} A^{(3)}(U;x,\xi)=A_{2}^{(3)}(U)({\rm i}\xi)^{2}+A_{1}^{(3)}(U;x)({\rm i}\xi), \end{equation} where the matrices $A_{2}^{(3)}(U)$ and $A_{1}^{(3)}(U;x)$ are diagonal and belong to ${\mathcal F}_{K,3}[r]\otimes{\mathcal M}_{2}(\mathds{C})$. Moreover $A_{2}^{(3)}(U)$ does not depend on $x\in \mathbb{T}$. In this Section we show how to eliminate the $x$-dependence of the symbol $A_{1}^{(3)}(U;x)$ in \eqref{gorilla3}. We prove the following. \begin{lemma}\label{step4} If the matrix $A^{(3)}(U;x,\xi)$ in \eqref{sistemafinale3} satisfies Hyp. \ref{ipoipo} (resp., together with $P$, satisfies Hyp. \ref{ipoipo2}) then there exists $s_0>0$ (possibly larger than the one in Lemma \ref{step3}) such that for any $s\geq s_0$ there exists an invertible map (resp. an invertible and {parity preserving} map) $$ \Phi_{4}=\Phi_{4}(U) : C^{K-3}_{*\mathbb{R}}(I,{\bf{H}}^{s}(\mathbb{T},\mathds{C}^2))\to C^{K-3}_{*\mathbb{R}}(I,{\bf{H}}^{s}(\mathbb{T},\mathds{C}^2)), $$ with \begin{equation}\label{stimona4} \|(\Phi_{4}(U))^{\pm1}V\|_{K-3,s}\leq \|V\|_{K-3,s}(1+C\|U\|_{K,s_0}), \end{equation} where $C>0$ depends only on $s$ and $\|U\|_{K,s_0}$, such that the following holds.
There exists a matrix $A^{(4)}(U;\xi)$ independent of $x\in\mathbb{T}$ of the form \begin{equation}\label{gorilla4} \begin{aligned} A^{(4)}(U;\xi)&:= \left(\begin{matrix} {a}_{2}^{(3)}(U) & 0\\ 0 & \overline{{a_{2}^{(3)}(U)}} \end{matrix} \right) ({\rm i} \xi)^{2}+ \left(\begin{matrix} {a}_{1}^{(4)}(U) & 0\\ 0 & \overline{{a_{1}^{(4)}(U)}} \end{matrix} \right) ({\rm i} \xi),\\ \end{aligned} \end{equation} where $a_{2}^{(3)}(U)$ is defined in \eqref{gorilla3} and $a_1^{(4)}(U)$ is a symbol in $\mathcal{F}_{K,4}[r]$, independent of $x$, which is purely imaginary in the case of Hyp. \ref{ipoipo} (resp. is equal to $0$). There are operators ${R}^{(4)}_{1}(U),{R}^{(4)}_2(U)$ in ${\mathcal R}^{0}_{K,4}[r]\otimes{\mathcal M}_{2}(\mathds{C})$, such that by setting $V_{4}=\Phi_{4}(U)V_{3}$ the system \eqref{sistemafinale3} reads \begin{equation}\label{sistemafinale4} \partial_t V_{4}={\rm i} E\Big[\Lambda V_{4}+{\rm Op}^{\mathcal{B}W}(A^{(4)}(U;\xi))[V_{4}]+R^{(4)}_{1}(U)[V_{4}]+R^{(4)}_{2}(U)[U]\Big]. \end{equation} \end{lemma} \begin{proof} Consider a symbol $s(U;x)$ in the class ${\mathcal F}_{K,3}[r]$ and define \[ S(U;x):=\left( \begin{matrix} s(U;x) & 0\\ 0 & \overline{s(U;x)} \end{matrix} \right). \] Let $\Phi_{4}^{\tau}(U)[\cdot]$ be the flow of the system \begin{equation}\label{ordine1} \left\{\begin{aligned} &\partial_{\tau}\Phi_{4}^{\tau}(U)[\cdot]={\rm Op}^{\mathcal{B}W}(S(U;x))\Phi_{4}^{\tau}(U)[\cdot]\\ &\Phi_{4}^{0}(U)[\cdot]=\mathds{1}. \end{aligned}\right. \end{equation} Again one can reason as done for the system \eqref{generatore2} to check that there exists a unique family of invertible bounded operators on ${\bf{H}}^{s}$ satisfying \begin{equation}\label{stimona2000} \|(\Phi_{4}^{\tau}(U))^{\pm1}V\|_{K-3,s}\leq \|V\|_{K-3,s}(1+C\|U\|_{K,s_0}) \end{equation} for some $C>0$ depending on $s$ and $\|U\|_{K,s_0}$, uniformly in $\tau\in [0,1]$.
We set \begin{equation}\label{defmappa4} \Phi_{4}(U)[\cdot]=\Phi_{4}^{\tau}(U)[\cdot]_{|_{\tau=1}}=\exp\{{\rm Op}^{\mathcal{B}W}(S(U;x))\}. \end{equation} By Corollary \ref{esponanziale} we get that there exists $Q(U)$ in the class of smoothing remainders ${\mathcal R}^{-\rho}_{K,3}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ for any $\rho>0$ such that \begin{equation}\label{fluxapprox} \Phi_{4}(U)[\cdot]:={\rm Op}^{\mathcal{B}W}(\exp\{S(U;x)\})[\cdot]+Q(U)[\cdot]. \end{equation} Since $\Phi_{4}^{-1}(U)$ exists, by symbolic calculus it is easy to check that there exists $\tilde{Q}(U)$ in ${\mathcal R}^{-\rho}_{K,3}[r]\otimes{\mathcal M}_{2}(\mathds{C})$ such that \[ \Phi^{-1}_{4}(U)[\cdot]={\rm Op}^{\mathcal{B}W}(\exp\{-S(U;x)\})[\cdot]+\tilde{Q}(U)[\cdot]. \] We set $G(U;x)=\exp\{S(U;x)\}$ and $V_4=\Phi_4(U)V_3$. Then the system \eqref{sistemafinale3} becomes \begin{equation}\label{nuovosist600} \begin{aligned} (V_4)_{t}&=\Phi_{4}(U){\rm i} E\Big(\Lambda+{\rm Op}^{\mathcal{B}W}(A^{(3)}(U;x,\xi))+R_{1}^{(3)}(U)\Big)(\Phi_{4}(U))^{-1}[V_{4}]+\\ &+\Phi_{4}(U){\rm i} ER_{2}^{(3)}(U)[U]+{\rm Op}^{\mathcal{B}W}(\partial_{t}G(U;x))(\Phi_{4}(U))^{-1}[V_{4}]. \end{aligned} \end{equation} Recalling that $\Lambda =\mathfrak{P}+\frac{d^2}{dx^2}$ (see \eqref{DEFlambda2}) we note that by Lemma \ref{Convocoj} the term ${\rm i} \Phi_{4}(U)\big[E\mathfrak{P}\big(\Phi_{4}^{-1}(U)[V_4]\big)\big]$ is equal to ${\rm i} E \mathfrak{P}V_4$ up to a remainder in ${\mathcal R}^0_{K,4}[r]\otimes{\mathcal M}_2(\mathds{C})$. Secondly we note that the operator \begin{equation}\label{pezzibbd} \hat{Q}(U)[\cdot]:=\Phi_{4}(U) {\rm i} ER_{1}^{(3)}(U) \Phi_{4}^{-1}(U)[\cdot]+\Phi_{4}(U) {\rm i} ER_{2}^{(3)}(U)[U]+{\rm Op}^{\mathcal{B}W}(\partial_{t}G(U;x))\circ\Phi_{4}^{-1}(U)[ \cdot] \end{equation} belongs to the class of operators ${\mathcal R}^{0}_{K,4}[r]\otimes{\mathcal M}_{2}(\mathds{C})$.
This follows by applying Propositions \ref{componiamoilmondo}, \ref{componiamoilmare}, Remark \ref{inclusione-nei-resti} and the fact that $\partial_t G(U;x)$ is a matrix in ${\mathcal F}_{K,4}[r]\otimes{\mathcal M}_2(\mathds{C})$. It remains to study the term \begin{equation}\label{ferretti} \Phi_{4}(U){\rm i} E\Big({\rm Op}^{\mathcal{B}W}\big((\mathds{1}+A_2^{(3)}(U))({\rm i}\xi)^2\big)+{\rm Op}^{\mathcal{B}W}(A^{(3)}_1(U;x)({\rm i}\xi))\Big)(\Phi_{4}(U))^{-1}. \end{equation} By using formula \eqref{sbam8} and Remark \ref{inclusione-nei-resti} one gets that, up to a remainder in ${\mathcal R}^{0}_{K,4}[r]\otimes{\mathcal M}_2(\mathds{C})$, the term in \eqref{ferretti} is equal to \begin{equation}\label{boris} {\rm i} E{\rm Op}^{\mathcal{B}W}\big((\mathds{1}+A_2^{(3)}(U))({\rm i}\xi)^2\big)+{\rm i} E{\rm Op}^{\mathcal{B}W}\left( \begin{matrix} r(U;x)({\rm i}\xi) & 0\\ 0 & \overline{r(U;x)}({\rm i}\xi) \end{matrix} \right) \end{equation} where \begin{equation}\label{nonmifermo} \begin{aligned} r(U;x):=a_{1}^{(3)}(U;x)+2(1+a_{2}^{(3)}(U))\partial_{x}s(U;x). \end{aligned} \end{equation} We look for a symbol $s(U;x)$ such that the term of order one has constant coefficients in $x$. This requires solving the equation \begin{equation}\label{cancella1} a_{1}^{(3)}(U;x)+2(1+a_{2}^{(3)}(U))\partial_{x}s(U;x)=a_{1}^{(4)}(U), \end{equation} for some symbol $a_{1}^{(4)}(U)$ constant in $x$ to be chosen. Equation \eqref{cancella1} is equivalent to \begin{equation}\label{cancella2} \partial_{x}s(U;x)=\frac{-a_{1}^{(3)}(U;x)+a_{1}^{(4)}(U)}{2(1+a_{2}^{(3)}(U))}. \end{equation} We choose the constant $a_{1}^{(4)}(U)$ as \begin{equation}\label{sceltaA1} a_{1}^{(4)}(U):=\frac{1}{2\pi}\int_{\mathbb{T}}a_{1}^{(3)}(U;x)dx, \end{equation} so that the r.h.s. of \eqref{cancella2} has zero average; hence the solution of \eqref{cancella2} is given by \begin{equation}\label{solcancella} s(U;x):=\partial_{x}^{-1}\left(\frac{-a_{1}^{(3)}(U;x)+a_{1}^{(4)}(U)}{2(1+a_{2}^{(3)}(U))}\right).
\end{equation} It is easy to check that $s(U;x)$ belongs to ${\mathcal F}_{K,4}[r]$. Using equation \eqref{nonmifermo} we get \eqref{sistemafinale4} with $A^{(4)}(U;\xi)$ as in \eqref{gorilla4}. It remains to prove that the constant $a_{1}^{(4)}(U)$ in \eqref{sceltaA1} has the properties claimed in the statement. On the one hand, if $A^{(3)}(U;x,\xi)$ satisfies Hyp. \ref{ipoipo}, the coefficient $a_{1}^{(3)}(U;x)$ must be purely imaginary, hence the constant $a_{1}^{(4)}(U)$ in \eqref{sceltaA1} is purely imaginary. On the other hand, if $A^{(3)}(U;x,\xi)$ satisfies Hyp. \ref{ipoipo2}, we note that the function $a_{1}^{(3)}(U;x)$ is \emph{odd} in $x$. This means that the constant $a_{1}^{(4)}(U)$ in \eqref{sceltaA1} is zero. Moreover the symbol $s(U;x)$ in \eqref{solcancella} is even in $x$, hence the map $\Phi_{4}(U)$ in \eqref{ordine1} is parity preserving according to Def. \ref{revmap} thanks to Remark \ref{compsimb}. This concludes the proof. \end{proof} \begin{proof}[{\bf Proof of Theorem \ref{descent}}] It is enough to choose $\Phi(U):=\Phi_4(U)\circ\cdots\circ\Phi_1(U)$. The estimates \eqref{stimona} follow by collecting the bounds \eqref{stimona1}, \eqref{stimona2}, \eqref{stimona3} and \eqref{stimona4}. We define the matrix of symbols $L(U;\xi)$ as \begin{equation} L(U;\xi):=\left(\begin{matrix}{\mathtt{m}}(U,\xi) & 0\\0 & \mathtt{m}(U,-\xi)\end{matrix}\right), \quad \mathtt{m}(U,\xi):=a_{2}^{(3)}(U)({\rm i}\xi)^{2}+a_{1}^{(4)}(U)({\rm i}\xi), \end{equation} where the coefficients $a_{2}^{(3)}(U),a_{1}^{(4)}(U)$ are $x$-independent (see \eqref{gorilla4}). One concludes the proof by setting $R_{1}(U):=R_{1}^{(4)}(U)$ and $R_{2}(U):=R_{2}^{(4)}(U)$. \end{proof} An important consequence of Theorem \ref{descent} is that system \eqref{sistemainiziale} admits a unique regular solution. More precisely we have the following. \begin{prop}\label{energia} Let $s_0$ be given by Theorem \ref{descent} with $K=4$.
For any $s\geq s_0+2$ let $U=U(t,x)$ be a function in $ B^4_{s}([0,T),\theta)$ for some $T>0$, $r>0$, $\theta\geq r$ with $\|U(0,\cdot)\|_{{\bf{H}}^{s}}\leq r$ and consider the system \begin{equation}\label{energia11} \left\{\begin{aligned} &\partial_t V={\rm i} E\Big[\Lambda V+{\rm Op}^{\mathcal{B}W}(A(U;x,\xi))[V]+R_1^{(0)}(U)[V]+R_2^{(0)}(U)[U]\Big],\\ & V(0,x)=U(0,x)\in{\bf{H}}^s, \end{aligned}\right. \end{equation} where the matrix $A(U;x,\xi)$, the operators $R_1^{(0)}(U)$ and $R_{2}^{(0)}(U)$ satisfy the hypotheses of Theorem \ref{descent}. Then the following holds true. \begin{itemize} \item[$(i)$] There exists a unique solution $\psi_{U}(t)U(0,x)$ of the system \eqref{energia11} defined for any $t\in[0,T)$ such that \begin{equation}\label{energia2} \norm{\psi_U(t)U(0,x)}{4,s}\leq\mathtt{C}\norm{U(0,x)}{{\bf{H}}^s}(1+t\mathtt{C}\norm{U}{4,s})e^{t\mathtt{C}\norm{U}{4,s}}+t\mathtt{C}\norm{U}{4,s}e^{t\mathtt{C}\norm{U}{4,s}}+\mathtt{C}, \end{equation} where $\mathtt{C}$ is a constant depending on $s,r$, $\sup_{t\in[0,T]}\norm{U}{4,s-2}$ and $\|P\|_{C^{1}}$. \item[$(ii)$] In the case that $U$ is even in $x$, the matrix $A(U;x,\xi)$ and the operator $\Lambda$ satisfy Hyp. \ref{ipoipo2}, the operator $R_1^{(0)}(U)[\cdot]$ is parity preserving according to Def. \ref{revmap} and $R_{2}^{(0)}(U)[U]$ is even in $x$, then the solution $\psi_{U}(t)U(0,x)$ is even in $x\in \mathbb{T}$. \end{itemize} \end{prop} \begin{proof} We apply Theorem \ref{descent} to system \eqref{energia11}, defining $W=\Phi(U)V$. The system in the new coordinates reads \begin{equation}\label{modificato} \left\{ \begin{aligned} &\partial_{t}W-{\rm i} E\Big[\Lambda W+{\rm Op}^{\mathcal{B}W}(L(U;\xi))W+{R}_1(U)W+R_{2}(U)[U]\Big]=0 \\ &W(0,x)=\Phi(U(0,x))U(0,x):=W^{(0)}(x), \end{aligned}\right.
\end{equation} where $L(U;\xi)$ is a diagonal, self-adjoint matrix in $\Gamma^2_{4,4}[r]\otimes\mathcal{M}_2(\mathds{C})$ with coefficients constant in $x$, and $R_1(U), R_2(U)$ are in $\mathcal{R}^0_{4,4}[r]\otimes\mathcal{M}_2(\mathds{C})$. Therefore the flow of the linear problem \begin{equation} \left\{ \begin{aligned} &\partial_{t}W-{\rm i} E\Big[\Lambda + {\rm Op}^{\mathcal{B}W}(L(U;\xi))\Big]W=0 \\ &W(0,x)=W^{(0)}(x), \end{aligned}\right. \end{equation} is well defined as long as $U$ is well defined; moreover, it is an isometry of ${\bf{H}}^s$. We denote by $\psi_L^t$ the flow at time $t$ of such equation. Then one can define the operator \begin{equation}\label{mappacontr} T_{W^{(0)}}(W)(t,x)=\psi_L^t \big(W^{(0)}(x)\big)+\psi_L^t\int_0^t (\psi_L^s)^{-1}{\rm i} E\Big({R}_1(U(s,x))W(s,x)+R_{2}(U(s,x))U(s,x)\Big)ds. \end{equation} Thanks to \eqref{stimona} and the hypothesis on $U(0,x)$ one has that $\norm{W^{(0)}}{{\bf{H}}^s}\leq (1+c r)r$ for some constant $c>0$ depending only on $r$. In order to construct a fixed point for the operator $ T_{W^{(0)}}(W)$ in \eqref{mappacontr} we consider the sequence of approximations defined as follows: \[ \left\{ \begin{aligned} W_0(t,x)&=\psi_{L}^tW^{(0)}(x), \\ W_{n}(t,x)&=T_{W^{(0)}}(W_{n-1})(t,x), \qquad n\geq 1, \end{aligned} \right. \] for $t\in [0,T)$. For the rest of the proof we will denote by $\mathtt{C}$ any constant depending on $r$, $s$, $\sup_{t\in[0,T)}\norm{U(t,\cdot)}{4,s-2}$ and $\norm{P}{C^1}$. Using estimates \eqref{porto20} one gets for $n\geq 1$ \[ \|(W_{n+1}-W_{n})(t,\cdot)\|_{{\bf{H}}^{s}}\leq \mathtt{C}\norm{U(t,\cdot)}{{\bf{H}}^s}\int_{0}^{t}\|(W_{n}-W_{n-1})(\tau,\cdot)\|_{{\bf{H}}^{s}}d\tau.
\] Arguing by induction on $n$, one deduces \begin{equation}\label{referee} \|(W_{n+1}-W_{n})(t,\cdot)\|_{{\bf{H}}^{s}}\leq \frac{({\mathtt{C}}\norm{U(t,\cdot)}{{\bf{H}}^s})^{n}t^n}{n!}\|(W_{1}-W_{0})(t,\cdot)\|_{{\bf{H}}^{s}}, \end{equation} which implies that $W(t,x)=\sum_{n=1}^{\infty}(W_{n+1}-W_{n})(t,x)+W_0(t,x)$ is a fixed point of the operator in \eqref{mappacontr} belonging to the space $C^{0}_{*\mathbb{R}}([0,T);{\bf{H}}^{s}(\mathbb{T};\mathds{C}^2))$. Therefore by Duhamel's principle the function $W$ is the unique solution of the problem \eqref{modificato}. Moreover, by using \eqref{porto20}, we have that the following inequality holds true \begin{equation*} \norm{W_1(t,\cdot)-W_{0}(t,\cdot)}{{\bf{H}}^s}\leq t \mathtt{C}\big(\norm{U}{{\bf{H}}^s}\norm{W^{(0)}}{{\bf{H}}^s}+\norm{U}{{\bf{H}}^{s-2}}\norm{U}{{\bf{H}}^s}\big), \end{equation*} from which, together with estimates \eqref{referee}, one deduces that \begin{equation*} \begin{aligned} \norm{W(t,\cdot)}{{\bf{H}}^s}&\leq \sum_{n=0}^{\infty}\norm{(W_{n+1}-W_n)(t,\cdot)}{{\bf{H}}^s}+\norm{W^{(0)}}{{\bf{H}}^s} \\ &\leq \norm{W^{(0)}}{{\bf{H}}^s}\Big(1+t\mathtt{C}\norm{U}{{\bf{H}}^s}\sum_{n=0}^{\infty}\frac{(t\mathtt{C}\norm{U}{{\bf{H}}^s})^n}{n!}\Big)+t\mathtt{C}\norm{U}{{\bf{H}}^s}\sum_{n=0}^{\infty}\frac{(t\mathtt{C}\norm{U}{{\bf{H}}^s})^n}{n!}\\ &=\norm{W^{(0)}}{{\bf{H}}^s}\big(1+t\mathtt{C}\norm{U}{{\bf{H}}^s}e^{t\mathtt{C}\norm{U}{{\bf{H}}^s}}\big)+t\mathtt{C}\norm{U}{{\bf{H}}^s}e^{{t\mathtt{C}\norm{U}{{\bf{H}}^s}}}\\ &\leq \mathtt{C}\norm{U(0,\cdot)}{{\bf{H}}^s}\big(1+t\mathtt{C}\norm{U}{{\bf{H}}^s}e^{t\mathtt{C}\norm{U}{{\bf{H}}^s}}\big)+t\mathtt{C}\norm{U}{{\bf{H}}^s}e^{{t\mathtt{C}\norm{U}{{\bf{H}}^s}}}. \end{aligned} \end{equation*} Applying the inverse transformation $V=\Phi^{-1}(U)W$ and using \eqref{stimona} we find a solution $V$ of the problem \eqref{energia11} such that \begin{equation*}
\norm{V}{{\bf{H}}^s}\leq\mathtt{C}\norm{U(0,\cdot)}{{\bf{H}}^s}\big(1+t\mathtt{C}\norm{U}{{\bf{H}}^s}e^{t\mathtt{C}\norm{U}{{\bf{H}}^s}}\big)+t\mathtt{C}\norm{U}{{\bf{H}}^s}e^{{t\mathtt{C}\norm{U}{{\bf{H}}^s}}}. \end{equation*} We now prove a similar estimate for $\partial_{t}V$. More precisely one has \begin{equation}\label{derivata1} \begin{aligned} \|\partial_{t}V\|_{{\bf{H}}^{s-2}}&\leq \|\Lambda V+{\rm Op}^{\mathcal{B}W}(A(U;x,\xi))V\|_{{\bf{H}}^{s-2}}+\|{R}_1^{(0)}(U)V\|_{{\bf{H}}^{s-2}} +\|{R}_2^{(0)}(U)U\|_{{\bf{H}}^{s-2}}\\ &\leq\mathtt{C}\norm{V}{{\bf{H}}^s}+\mathtt{C}\norm{V}{{\bf{H}}^{s-2}}+\mathtt{C}\\ &\leq\mathtt{C}\norm{U(0,x)}{{\bf{H}}^s}(1+t\mathtt{C}\norm{U}{4,s})e^{t\mathtt{C}\norm{U}{4,s}}+t\mathtt{C}\norm{U}{4,s}e^{t\mathtt{C}\norm{U}{4,s}}+\mathtt{C}, \end{aligned} \end{equation} where we used estimates \eqref{paraest} and \eqref{porto20}. By differentiating equation \eqref{energia11} and arguing as done in \eqref{derivata1} one can bound the terms $\|\partial_{t}^{k}V\|_{{\bf{H}}^{s-2k}}$, for $2\leq k\leq 4$, and hence obtain \eqref{energia2}. In the case that $U$ is even in $x$, $\Lambda, A(U;x,\xi)$ satisfy Hyp. \ref{ipoipo2}, $R_1^{(0)}(U)[\cdot]$ is parity preserving according to Def. \ref{revmap} and $R_{2}^{(0)}(U)[U]$ is even in $x$, we have, by Theorem \ref{descent}, that the map $\Phi(U)$ is parity preserving. Hence the flow of the system \eqref{modificato} preserves the subspace of even functions. This follows by Lemma \ref{revmap100}. Hence the solution of \eqref{energia11} defined as $V=\Phi^{-1}(U)W$ is even in $x$. This concludes the proof. \end{proof} \begin{rmk}\label{stimaprecisa2} In the notation of Prop. \ref{energia} the following holds true.
\begin{itemize} \item If $R_{2}^{(0)}\equiv 0$ in \eqref{energia11}, then the estimate \eqref{energia2} may be improved as follows: \begin{equation}\label{energia2tris} \norm{\psi_U(t)U(0,x)}{4,s}\leq\mathtt{C}\norm{U(0,x)}{{\bf{H}}^s}(1+t\mathtt{C}\norm{U}{4,s})e^{t\mathtt{C}\norm{U}{4,s}}. \end{equation} This follows straightforwardly from the proof of Prop. \ref{energia}. \item If $R_{2}^{(0)}\equiv R_1^{(0)}\equiv 0$ then the flow $\psi_{U}(t)$ of \eqref{energia11} is invertible and $(\psi_{U}(t))^{-1}U(0,x)$ satisfies an estimate similar to \eqref{energia2tris}. To see this one proceeds as follows. Let $\Phi(U)[\cdot]$ be the map given by Theorem \ref{descent} and set $\Gamma(t):=\Phi(U)\psi_U(t)$. Thanks to Theorem \ref{descent}, $\Gamma(t)$ is the flow of the linear para-differential equation \begin{equation*} \left\{\begin{aligned} &\partial_t \Gamma(t)={\rm i} E\big[\Lambda+{\rm Op}^{\mathcal{B}W}(L(U;\xi))\big]\Gamma(t)+R(U)\Gamma(t),\\ &\Gamma(0)={\rm Id}, \end{aligned}\right. \end{equation*} where $R(U)$ is a remainder in $\mathcal{R}^0_{K,4}[r]$ and ${\rm Op}^{\mathcal{B}W}(L(U;\xi))$ is diagonal, self-adjoint and has constant coefficients in $x$. Then, if $\psi_L(t)$ is the flow generated by ${\rm i} E\big[\Lambda+{\rm Op}^{\mathcal{B}W}(L(U;\xi))\big]$ (which exists and is an isometry of ${\bf{H}}^s$), we have that $\Gamma(t)=\psi_L(t)\circ F(t)$, where $F(t)$ solves the Banach space ODE \begin{equation*} \left\{\begin{aligned} &\partial_tF(t)=\big((\psi_L(t))^{-1}R(U)\psi_L(t)\big) F(t),\\ &F(0)={\rm{Id}}. \end{aligned}\right. \end{equation*} To see this one has to use the fact that the operators ${\rm i} E\big[\Lambda+{\rm Op}^{\mathcal{B}W}(L(U;\xi))\big]$ and $\psi_L(t)$ commute. Standard theory of ODEs in Banach spaces implies that $F(t)$ exists and is invertible, therefore $\psi_U(t)$ is invertible as well and $(\psi_U(t))^{-1}=(F(t))^{-1}\circ(\psi_L(t))^{-1}\circ\Phi(U)$.
To deduce the estimate satisfied by $(\psi_U(t))^{-1}$ one has to use \eqref{porto20} to control the contribution coming from $R(U)$, the fact that $\psi_L(t)$ is an isometry and \eqref{stimona}. \end{itemize} \end{rmk} \setcounter{equation}{0} \section{Local Existence}\label{local} In this Section we prove Theorem \ref{teototale}. By previous discussions we know that \eqref{NLS} is equivalent to the system \eqref{6.666para} (see Proposition \ref{montero}). Our method relies on an iterative scheme. Namely we introduce the following sequence of linear problems. Let $U^{(0)}\in {\bf{H}}^{s}$ be such that $\|U^{(0)}\|_{{\bf{H}}^s}\leq r$ for some $r>0$. For $n=0$ we set \begin{equation}\label{rondine0} \mathcal{A}_0:=\left\{ \begin{aligned} &\partial_{t}U_0-{\rm i} E\Lambda U_0=0,\\ &U_0(0)=U^{(0)} \end{aligned}\right. \end{equation} The solution of this problem exists and is unique, defined for any $t\in \mathds{R}$ by standard linear theory; the flow is a group of isometries of ${\bf{H}}^s$ (its $k$-th derivative is a group of isometries of ${\bf{H}}^{s-2k}$), hence $U_0$ satisfies $\norm{U_0}{4,s}\leq r$ for any $t\in\mathbb{R}$. For $n\geq1$, assuming $U_{n-1}\in B^K_{s_0}(I,r)\cap C^K_{*\mathbb{R}}(I,H^{s}(\mathbb{T},\mathds{C}^2))$ for some $s_{0},K>0$ and $s\geq s_0$, we define the Cauchy problem \begin{equation}\label{rondinen} \mathcal{A}_n:=\left\{ \begin{aligned} &\partial_{t}U_n-{\rm i} E\Big[\Lambda U_n+{\rm Op}^{\mathcal{B}W}(A(U_{n-1};x,\xi))U_n+R(U_{n-1})[U_{n-1}]\Big]=0 ,\\ &U_n(0)=U^{(0)} \end{aligned}\right. \end{equation} where the matrix of symbols $A(U;x,\xi)$ and the operator $R(U)$ are defined in Proposition \ref{montero} (see \eqref{6.666para}). One has to show that each problem $\mathcal{A}_{n}$ admits a unique solution $U_{n}$ defined for $t\in I$. We use Proposition \ref{energia} in order to prove the following lemma. \begin{lemma}\label{esistenzaAN} Let $f$ be a $C^{\infty}$ function from $\mathbb{C}^3$ to $\mathbb{C}$ satisfying Hyp.
\ref{hyp1} (resp. Hyp. \ref{hyp2}). Let $r>0$ and consider $U^{(0)}$ in the ball of radius $r$ of ${\bf{H}}^s$ (resp. of ${\bf{H}}^s_{e}$) centered at the origin. Consider the operators $\Lambda, R(U)$ and the matrix of symbols $A(U;x,\xi)$ given by Proposition \ref{montero} with $K=4$, $\rho=0$. If $f$ satisfies Hyp. \ref{hyp3}, or $r$ is sufficiently small, then there exists $s_0>0$ such that for all $s\geq s_0$ the following holds. There exists a time $T$ and a constant $\theta$, both of them depending on $r$ and $s$, such that for any $n\geq0$ one has: \begin{description} \item [$(S1)_{n}$] for $0\leq m\leq n$ there exists a function $U_{m}$ with \begin{equation}\label{uno} U_{m}\in B_{s}^{4}([0,T),\theta ), \end{equation} which is the unique solution of the problem $\mathcal{A}_{m}$; in the case of the parity preserving Hypothesis \ref{hyp2} the functions $U_{m}$ for $0\leq m\leq n$ are even in $x\in \mathbb{T}$; \item[$(S2)_{n}$] for $0\leq m\leq n$ one has \begin{equation}\label{due} \norm{U_{m} -U_{m-1} }{4,s'}\leq 2^{-m} r,\quad s_0\leq s'\leq s-2, \end{equation} where $U_{-1}:=0$. \end{description} \end{lemma} \begin{proof} We argue by induction. Properties $(S1)_0$ and $(S2)_0$ are true thanks to the discussion following equation \eqref{rondine0}. Suppose that $(S1)_{n-1}$, $(S2)_{n-1}$ hold with a constant $\theta=\theta(s,r,\|P\|_{C^{1}})\gg1$ and a time $T=T(s,r,\|P\|_{C^{1}},\theta)\ll 1$. We show that $(S1)_{n}$, $(S2)_{n}$ hold with the same constants $\theta$ and $T$. Hypothesis \ref{hyp1} together with Lemma \ref{struttura-ham-para} (resp. Hyp. \ref{hyp2} together with Lemma \ref{struttura-rev-para}) implies that the matrix $A(U;x,\xi)$ satisfies Hyp. \ref{ipoipo} (resp. Hyp. \ref{ipoipo2}) and Constraint \ref{Matriceiniziale}. Hypothesis \ref{hyp3} together with Lemma \ref{simboli-ellittici} (or $r$ small enough) implies that $A(U;x,\xi)$ satisfies also Hypothesis \ref{ipoipo4}. Therefore the hypotheses of Theorem \ref{descent} are fulfilled.
In particular, in the case of Hyp. \ref{hyp2}, Lemma \ref{struttura-rev-para} guarantees also that the matrix of operators $R(U)[\cdot]$ is parity preserving according to Def. \ref{revmap}. Moreover by \eqref{uno}, we have that $\|U_{n-1}\|_{4,s}\leq \theta $, hence the hypotheses of Proposition \ref{energia} are fulfilled by system \eqref{rondinen} with $R_{1}^{(0)}=0$, $R_{2}^{(0)}=R$, $U\rightsquigarrow U_{n-1}$ and $V\rightsquigarrow U_{n}$ in \eqref{energia11}. We note that, by $(S2)_{n-1}$, the constant $\mathtt{C}$ in \eqref{energia2} does not depend on $\theta$, but only on $r>0$. Indeed \eqref{due} implies \begin{equation}\label{666beast} \norm{U_{n-1}}{4,s-2}\leq \sum_{m=0}^{n-1}\norm{U_{m} -U_{m-1} }{4,s-2}\leq r\sum_{m=0}^{n-1}\frac{1}{2^{m}}\leq 2r, \quad \forall \; t\in [0,T]. \end{equation} Proposition \ref{energia} provides a solution $U_{n}(t)$ defined for $t\in [0,T]$. By \eqref{energia2} one has that \begin{equation}\label{energia2000} \norm{U_{n}(t)}{4,s} \leq\mathtt{C}\norm{U^{(0)}}{{\bf{H}}^s}(1+t\mathtt{C}\norm{U_{n-1}}{4,s})e^{t\mathtt{C}\norm{U_{n-1}}{4,s}} +t\mathtt{C}\norm{U_{n-1}}{4,s}e^{t\mathtt{C}\norm{U_{n-1}}{4,s}}+\mathtt{C}, \end{equation} where $\mathtt{C}$ is a constant depending on $\|U_{n-1}\|_{4,s-2}$, $r$, $s$ and $\|P\|_{C^{1}}$, hence, thanks to \eqref{666beast}, it depends only on $r$, $s$, $\|P\|_{C^{1}}$. We deduce that, if \begin{equation}\label{pane} T\mathtt{C}\theta \ll 1, \qquad \theta> 2e\mathtt{C}r+e+\mathtt{C}, \end{equation} then $\|U_{n}\|_{4,s}\leq \theta$. If $A(U_{n-1};x,\xi)$ and $\Lambda$ satisfy Hyp. \ref{ipoipo2} and $R(U_{n-1})$ is parity preserving, then the solution $U_{n}$ is even in $x\in \mathbb{T}$. Indeed by the inductive hypothesis $U_{n-1}$ is even, hence item $(ii)$ of Proposition \ref{energia} applies. This proves $(S1)_n$. \smallskip Let us check $(S2)_n$.
Setting $V_n=U_n-U_{n-1}$ we have that \begin{equation}\label{eqdiff} \left\{\begin{aligned} &\partial_{t}V_n-{\rm i} E\Big[\Lambda V_n +{\rm Op}^{\mathcal{B}W}(A(U_{n-1};x,\xi))V_n + f_n\Big]=0,\\ &V_n(0)=0, \end{aligned}\right. \end{equation} where \begin{equation}\label{eqdiff2} f_n:={\rm Op}^{\mathcal{B}W}\Big(A(U_{n-1};x,\xi)-A(U_{n-2};x,\xi)\Big)U_{n-1}+R(U_{n-1})U_{n-1}-R(U_{n-2})U_{n-2}. \end{equation} Note that, by \eqref{nave77}, \eqref{nave101}, we have \begin{equation}\label{eqdiff3} \begin{aligned} \|f_{n}\|_{4,s'}&\leq\norm{{\rm Op}^{\mathcal{B}W}\Big(A(U_{n-1};x,\xi)-A(U_{n-2};x,\xi)\Big)U_{n-1}}{4,s'}+\norm{R(U_{n-1})U_{n-1}-R(U_{n-2})U_{n-2}}{4,s'}\\ &\leq C\Big[\norm{V_{n-1}}{4,s_0}\norm{U_{n-1}}{4,s'+2}+(\norm{U_{n-1}}{4,s'}+\norm{U_{n-2}}{4,s'})\norm{V_{n-1}}{4,s'}\Big]\\ &\leq C\Big(\norm{U_{n-1}}{4,s'+2}+\norm{U_{n-2}}{4,s'+2}\Big)\norm{V_{n-1}}{4,s'}, \end{aligned} \end{equation} where $C>0$ depends only on $s$, $\|U_{n-1}\|_{4,s_0}$, $\|U_{n-2}\|_{4,s_0}$. Recalling the estimate \eqref{666beast}, we conclude that the constant $C$ in \eqref{eqdiff3} depends only on $s,r$. The system \eqref{eqdiff} with $f_n=0$ has the form \eqref{energia11} with $R_2^{(0)}=0$ and $R_1^{(0)}=0$. Let $\psi_{U_{n-1}}(t)$ be the flow of system \eqref{eqdiff} with $f_n=0$, which is given by Proposition \ref{energia}. The Duhamel formulation of \eqref{eqdiff} is \begin{equation} V_{n}(t)=\psi_{U_{n-1}}(t)\int_0^t (\psi_{U_{n-1}}(\tau))^{-1}{\rm i} E f_{n}(\tau)d\tau. \end{equation} Then, using the inductive hypothesis \eqref{uno}, inequality \eqref{energia2tris} and the second item of Remark \ref{stimaprecisa2}, we get \begin{equation} \|V_n\|_{4,s'}\leq \theta \mathtt{K}_1 T \|V_{n-1}\|_{4,s'}, \quad \forall \; t\in [0,T], \end{equation} where $\mathtt{K}_1>0$ is a constant depending on $r$, $s$ and $\|P\|_{C^{1}}$. If $\mathtt{K}_1 \theta T\leq1/2$ then we have $\|V_n\|_{4,s'}\leq 2^{-n}r$ for any $t\in [0,T)$, which is $(S2)_n$.
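The mechanism behind $(S2)_n$ is a plain contraction: each iterate solves a linear problem driven by the previous iterate, and for $T$ small the sup-norms of the differences $V_n$ decay geometrically. The following toy numerical sketch (our own illustration on the scalar ODE $u'=-u^2$ with linearized iterates $u_n'=-u_{n-1}u_n$, not the paraproduct system above) shows the same geometric decay:

```python
import numpy as np

# Toy analogue of the iterative scheme: the quadratic ODE u' = -u^2,
# u(0) = u0, is replaced by the linearized problems u_n' = -u_{n-1} u_n,
# whose exact solution is u_n(t) = u0 * exp(-int_0^t u_{n-1}(s) ds).
T, u0, M = 0.2, 1.0, 500      # a short time T plays the role of T*C*theta << 1
t = np.linspace(0.0, T, M)
dt = t[1] - t[0]

u_prev = np.zeros(M)          # analogue of U_{-1} := 0
diffs = []                    # sup-norms of V_n = u_n - u_{n-1}
for n in range(8):
    # trapezoid rule for int_0^t u_{n-1}(s) ds
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (u_prev[1:] + u_prev[:-1]) * dt)))
    u_next = u0 * np.exp(-integral)
    diffs.append(np.max(np.abs(u_next - u_prev)))
    u_prev = u_next

ratios = [diffs[k + 1] / diffs[k] for k in range(len(diffs) - 1)]
# each ratio stays well below 1: geometric decay, as in (S2)_n
```

For short times the contraction factor scales like $T$, mirroring the smallness condition $\mathtt{K}_1\theta T\leq 1/2$.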
\end{proof} We are now in a position to prove Theorem \ref{teototale}. \begin{proof}[{\bf Proof of Theorem \ref{teototale}}] Consider the equation \eqref{NLS}. By Lemma \ref{paralinearizza} we know that \eqref{NLS} is equivalent to the system \eqref{6.666para}. Since $f$ satisfies Hyp. \ref{hyp1} (resp. Hyp. \ref{hyp2}) and Hyp. \ref{hyp3}, then Lemmata \ref{struttura-ham-para} (resp. \ref{struttura-rev-para}) and \ref{simboli-ellittici} imply that the matrix $A(U;x,\xi)$ satisfies Constraint \ref{Matriceiniziale} and Hypothesis \ref{ipoipo} (resp. Hypothesis \ref{ipoipo2}, and $R(U)$ is parity preserving according to Definition \ref{revmap}). According to this setting consider the problem $\mathcal{A}_{n}$ in \eqref{rondinen}. By Lemma \ref{esistenzaAN} we know that the sequence $U_n$ defined by \eqref{rondinen} converges strongly to a function $U$ in $C_{*\mathbb{R}}^0 ([0,T),{\bf{H}}^{{s'}})$ for any ${s'}\leq s-2$ and, up to subsequences, \begin{equation}\label{debolesol} \begin{aligned} &U_{n}(t) \rightharpoonup U(t), \;\;\; {\rm in } \;\;\; {\bf{H}}^s,\\ &\partial_{t}U_{n}(t) \rightharpoonup \partial_{t}U(t) , \;\;\; {\rm in } \;\;\; {\bf{H}}^{s-2}, \end{aligned} \end{equation} for any $t\in [0,T)$; moreover the function $U$ is in $L^{\infty}([0,T),{\bf{H}}^s)\cap{\rm Lip}([0,T),{\bf{H}}^{s-2})$. In order to prove that $U$ solves \eqref{6.666para} it is enough to show that \begin{equation*} \norm{{\rm Op}^{\mathcal{B}W}(A(U_{n-1};x,\xi))U_n+R(U_{n-1})[U_{n-1}]-{\rm Op}^{\mathcal{B}W}(A(U;x,\xi))U -R(U)[U]}{{\bf H}^{s-2}} \end{equation*} goes to $0$ as $n$ goes to $\infty$.
Using \eqref{nave77} and \eqref{paraest} we obtain \begin{equation*} \begin{aligned} &\|{\rm Op}^{\mathcal{B}W}(A(U_{n-1};x,\xi))U_n-{\rm Op}^{\mathcal{B}W}(A(U;x,\xi))U\|_{{\bf H}^{s-2}} \leq\\ & \|{\rm Op}^{\mathcal{B}W}(A(U_{n-1};x,\xi)-A(U;x,\xi))U_n\|_{{\bf H}^{s-2}}+ \|{\rm Op}^{\mathcal{B}W}(A(U;x,\xi))(U-U_n)\|_{{\bf H}^{s-2}}\leq \\ &C\Big(\norm{U-U_n}{{\bf H}^{s-2}}\norm{U}{{\bf H}^{s_0}}+\norm{U-U_{n-1}}{{\bf H}^{s_0}}\norm{U_n}{{\bf H}^{s}}\Big), \end{aligned} \end{equation*} which tends to $0$ since $s-2\geq s'$. In order to show that $R(U_{n-1})[U_{n-1}]$ tends to $R(U)[U]$ in ${\bf H}^{s-2}$ it is enough to use \eqref{nave101}. Using equation \eqref{6.666para} and the discussion above, the solution $U$ has the following regularity: \begin{equation}\label{regolarita} \begin{aligned} U\in B^{4}_{s'}([0,T);\theta )\cap L^{\infty}&([0,T),{\bf{H}}^s)\cap {\rm Lip}([0,T),{\bf{H}}^{s-2}), \quad \forall \; s_0\leq s'\leq s-2,\\ &\norm{U}{L^{\infty}([0,T),{\bf{H}}^s)}\leq\theta , \end{aligned} \end{equation} where $\theta$ and $s_0$ are given by Lemma \ref{esistenzaAN}. We show that $U$ actually belongs to $C^{0}_{*\mathbb{R}}([0,T), {\bf{H}}^{s})$. Let us consider the problem \begin{equation}\label{morteneran} \left\{ \begin{aligned} &\partial_{t}V-{\rm i} E\Big[\Lambda V+{\rm Op}^{\mathcal{B}W}(A(U;x,\xi))V+R(U)[U]\Big]=0 ,\\ &V(0)=U^{(0)},\quad U^{(0)}\in {\bf H}^{s}, \end{aligned}\right. \end{equation} where the matrices $A$ and $R$ are defined in Proposition \ref{montero} (see \eqref{6.666para}) and $U$ is defined in \eqref{debolesol} (hence satisfies \eqref{regolarita}). Theorem \ref{descent} applies to system \eqref{morteneran} and provides a map \begin{equation}\label{regolarita2} \Phi(U)[\cdot] : C^{0}_{*\mathbb{R}}([0,T),{\bf{H}}^{s'}(\mathbb{T},\mathds{C}^2))\to C^{0}_{*\mathbb{R}}([0,T),{\bf{H}}^{s'}(\mathbb{T},\mathds{C}^2)), \end{equation} which satisfies \eqref{stimona} with $K=4$ and $s'$ as in \eqref{regolarita}.
One has that the function $W:=\Phi(U)[U]$ solves the problem \begin{equation}\label{modificato1} \left\{ \begin{aligned} &\partial_{t}W-{\rm i} E\Big[\Lambda +{\rm Op}^{\mathcal{B}W}(L(U;\xi))\Big]W+R_2(U)[U]+{R}_1(U)W=0 \\ &W(0)=\Phi(U^{(0)})U^{(0)}:=W^{(0)}, \end{aligned}\right. \end{equation} where $L(U)$ is a diagonal, self-adjoint matrix of symbols in $\Gamma^{2}_{K,4}[\theta ]$ with coefficients constant in $x$, and $R_1(U), {R}_2(U)$ are matrices of bounded operators (see eq. \eqref{sistemafinale}). We prove that $W$ is weakly continuous in time with values in ${\bf{H}}^{s}$. First of all note that $U\in C^{0}([0,T); {\bf{H}}^{s'})$ with $s'$ given in \eqref{regolarita}, therefore $W$ belongs to the same space thanks to \eqref{regolarita2}. Moreover $W$ is in $L^{\infty}([0,T),{\bf{H}}^s)$ (again by \eqref{regolarita} and \eqref{regolarita2}). Consider a sequence $\tau_{n}$ converging to $\tau$ as $n\to\infty$. Let $\phi\in {\bf{H}}^{-s}$ and $\phi_{\varepsilon}\in C^{\infty}_{0}(\mathbb{T};\mathds{C}^2)$ be such that $\|\phi-\phi_{\varepsilon}\|_{{\bf{H}}^{-s}}\leq \varepsilon$. Then we have \begin{equation} \begin{aligned} \left|\int_{\mathbb{T}}\big(W(\tau_{n})-W(\tau) \big) \phi d x\right|&\leq \left|\int_{\mathbb{T}}\big(W(\tau_{n})-W(\tau) \big) \phi_{\varepsilon} d x\right|+\left|\int_{\mathbb{T}}(W(\tau_{n})-W(\tau) ) (\phi-\phi_\varepsilon) d x\right|\\ &\leq \|W(\tau_n)-W(\tau)\|_{{\bf{H}}^{s'}}\|\phi_\varepsilon\|_{{\bf{H}}^{-s'}}+ \|W(\tau_n)-W(\tau)\|_{{\bf{H}}^{s}}\|\phi-\phi_\varepsilon\|_{{\bf{H}}^{-s}}\\ &\leq C \varepsilon+2\|W\|_{L^{\infty}{\bf{H}}^{s}}\varepsilon \end{aligned} \end{equation} for $n$ sufficiently large, where $s'\leq s-2$ as above. Therefore $W$ is weakly continuous in time with values in ${\bf{H}}^s$. In order to prove that $W$ is in $C^0_{*\mathds{R}}([0,T),{\bf{H}}^s)$, we show that the map $t\mapsto \norm{W(t)}{{\bf{H}}^s}$ is continuous on $[0,T)$.
We introduce, for $0<\epsilon\leq 1$, the Friedrichs mollifier $J_{\epsilon}:=(1-\epsilon\partial_{xx})^{-1}$ and the Fourier multiplier $\Lambda^{s}:=(1-\partial_{xx})^{s/2}$. Using the equation \eqref{modificato1} and estimates \eqref{porto20} one gets \begin{equation}\label{energia1} \frac{d}{dt}\norm{\Lambda^s J_{\epsilon}W(t)}{{\bf{H}}^0}^2\leq C\Big[ \norm{U(t)}{{\bf{H}}^s}^2\norm{W(t)}{{\bf{H}}^s}+\norm{W(t)}{{\bf{H}}^s}^2\norm{U(t)}{{\bf{H}}^s}\Big], \end{equation} where the right hand side is independent of $\epsilon$ and the constant $C$ depends on $s$ and $\norm{U}{{\bf{H}}^{s_0}}$. Moreover, since $U, W$ belong to $L^{\infty}([0,T),{\bf{H}}^s)$, the right hand side of inequality \eqref{energia1} is bounded from above by a constant independent of $t$. Therefore the function $t\mapsto \norm{J_{\epsilon}W(t)}{{\bf{H}}^s}$ is Lipschitz continuous in $t$, uniformly in $\epsilon$. As $J_{\epsilon}W(t)$ converges to $W(t)$ in the ${\bf{H}}^s$-norm, the function $t\mapsto \norm{W(t)}{{\bf{H}}^s}$ is Lipschitz continuous as well. Therefore $W$ belongs to $C^0_{*\mathds{R}}([0,T),{\bf{H}}^s)$ and so does $U$. To recover the regularity of $\frac{d}{dt}U$ one may use equation \eqref{6.666para}. Let us show uniqueness. Suppose that there are two solutions $U$ and $V$ in $C^0_{*\mathds{R}}([0,T),{\bf{H}}^s)$ of the problem \eqref{6.666para}. Set $H:=U-V$; then $H$ solves the problem \begin{equation}\label{ultimo} \left\{ \begin{aligned} &\partial_{t}H-{\rm i} E\Big[\Lambda H +{\rm Op}^{\mathcal{B}W}(A(U;x,\xi))[H]+R(U)[H]\Big]+{\rm i} E F=0 \\ &H(0)=0, \end{aligned}\right. \end{equation} where $$F:={\rm Op}^{\mathcal{B}W}\big(A(U;x,\xi)-A(V;x,\xi)\big)V+\big(R(U)-R(V)\big)[V].$$ Thanks to estimates \eqref{nave77} and \eqref{nave101} we have the bound \begin{equation}\label{stimetta} \norm{F}{{\bf{H}}^{s-2}}\leq C \norm{H}{{\bf{H}}^{s-2}}\Big(\norm{U}{{\bf{H}}^s}+\norm{V}{{\bf{H}}^s}\Big).
\end{equation} By Proposition \ref{energia}, using the Duhamel principle and \eqref{stimetta}, it is easy to show the following: \[ \|H(t)\|_{{\bf{H}}^{s-2}}\leq C(r) \int_{0}^{t}\|H(\sigma)\|_{{\bf{H}}^{s-2}}d\sigma. \] Thus, by Gronwall's Lemma, $H$ vanishes for almost every $t$ in $[0,T)$. By continuity one obtains uniqueness. \end{proof} \begin{proof}[{\bf Proof of Theorem \ref{teototale1}}] The proof is the same as that of Theorem \ref{teototale}; one only has to note that the matrix $A(U;x,\xi)$ satisfies Hypothesis \ref{ipoipo4} thanks to the smallness of the initial datum instead of Hyp. \ref{hyp3}. \end{proof}
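The Gronwall step used in the uniqueness proof can be illustrated numerically: if $0\leq h(t)\leq C\int_0^t h(\sigma)\,d\sigma$ on $[0,T]$, iterating the inequality gives $h\leq \sup h\cdot (Ct)^n/n!\to 0$, hence $h\equiv 0$. A small sketch of this iteration (our own illustration; $C$, $T$ and the grid are arbitrary choices):

```python
import numpy as np

# If 0 <= h(t) <= C * int_0^t h(s) ds on [0,T], applying the operator
# (K h)(t) = C * int_0^t h(s) ds repeatedly to an arbitrary starting
# bound gives h <= sup(h) * (C t)^n / n!, whose sup-norm collapses.
C, T, M = 10.0, 1.0, 2000
t = np.linspace(0.0, T, M)
dt = t[1] - t[0]

h = np.ones(M)                # arbitrary initial bound with sup = 1
sups = []
for n in range(40):
    # trapezoid rule for C * int_0^t h(s) ds
    h = C * np.concatenate(
        ([0.0], np.cumsum(0.5 * (h[1:] + h[:-1]) * dt)))
    sups.append(h.max())
# sups[n] ~ (C*T)**(n+1) / (n+1)!  -> 0, even though C*T = 10 > 1
```

The sup-norms first grow (the factorial has not yet caught up with $(CT)^n$) and then collapse to zero, which is exactly why the integral inequality forces $H\equiv 0$.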
\section{Introduction} Image alignment is a fundamental research problem that has been studied for decades and applied in various applications~\cite{BrownL03,zaragoza2013projective,guo2016joint,WronskiGEKKLLM19,liu2013bundled}. Commonly adopted registration methods include homography~\cite{gao2011constructing}, mesh-based deformation~\cite{zaragoza2013projective,LiuTYSZ16}, and optical flow~\cite{dosovitskiy2015FlowNet,EpicFlow_2015}. These methods rely on image contents for registration, often requiring rich textures~\cite{lin2017direct,zhang2020content} and similar illumination conditions~\cite{sun2018pwc} for good results. In contrast, gyroscopes can be used to align images without looking at image contents~\cite{karpenko2011digital}. The gyroscope in a mobile phone provides the camera 3D rotations, which can be converted into homographies given the camera intrinsic parameters~\cite{karpenko2011digital, hartley2003multiple}. In this way, the rotational motions can be compensated. We refer to this as gyro image alignment. One drawback is that translations cannot be handled by the gyro. Fortunately, rotational motions are prominent compared with translational motions~\cite{shan2007rotational}, especially when filming scenes or objects that are not close to the camera~\cite{liu2017hybrid}. Compared with image-based methods, gyro-based methods are attractive. First, they are independent of image contents, which largely improves robustness. Second, gyros are widely available and can be easily accessed on our daily mobiles. Many methods have built their applications on gyros~\cite{huang2018online,guse2012gesture,zaki2020study}. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{images/Teaser.pdf} \end{center} \caption{(a) inputs without alignment, (b) gyroscope alignment on a non-OIS camera, (c) gyroscope alignment on an OIS camera, and (d) our method on an OIS camera.
We replace the red channel of one image with that of the other image, so misaligned pixels are visualized as colored ghosts. The same visualization is applied throughout the paper.} \label{fig:teaser} \end{figure} On the other hand, smartphone cameras keep evolving, and the optical image stabilizer (OIS) has become more and more popular, promising less blurry images and smoother videos. It compensates for 2D pan and tilt motions of the imaging device through lens mechanics~\cite{chiu2007optimal,yeom2009optical}. However, OIS terminates the possibility of image registration by gyros, because the homography derived from the gyros no longer corresponds to the captured images, which have been adjusted by OIS by unknown quantities in unknown directions. One may try to read the pans and tilts from the camera module. However, this is not easy, as the OIS module is bound to the camera sensor, and accessing it requires assistance from professionals of the manufacturers~\cite{koo2009optical}. In this work, we propose a deep learning method that compensates for the OIS motion without knowing its readings, such that the gyro can be used for image alignment on OIS-equipped cell-phones. Fig.~\ref{fig:teaser} shows an alignment example. Fig.~\ref{fig:teaser} (a) shows two input images. Fig.~\ref{fig:teaser} (b) is the gyro alignment produced by a non-OIS camera. As seen, the images can be well aligned with no OIS interference. Fig.~\ref{fig:teaser} (c) is the gyro alignment produced by an OIS camera. Misalignments can be observed due to the OIS motion. Fig.~\ref{fig:teaser} (d) represents our OIS-compensated result. We denote two frames as $I_a$ and $I_b$, and the gyro motion between them as $G_{ab}$. The real motion (after OIS adjustment) between the two frames is $G'_{ab}$. We want to find a mapping function that transforms $G_{ab}$ to $G'_{ab}$: \begin{equation} \small G'_{ab} = f(G_{ab}). \end{equation} We propose to train a supervised convolutional neural network for this mapping.
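For reference, the ghosting visualization described in the caption of Fig.~\ref{fig:teaser} can be reproduced in a few lines (a minimal sketch assuming RGB channel order; the function name is ours):

```python
import numpy as np

def ghost_visualization(img_a, img_b):
    """Visualize misalignment between two RGB images (H x W x 3, uint8):
    the red channel of img_a is replaced by that of img_b, so pixels that
    are well aligned keep natural colors while misaligned regions show
    red/cyan 'ghosts'.  Assumes channel order R, G, B."""
    vis = img_a.copy()
    vis[..., 0] = img_b[..., 0]
    return vis

# identical images -> no ghosts (the visualization equals the input)
a = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
assert np.array_equal(ghost_visualization(a, a), a)
```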
To achieve this, we record videos and their gyros as training data. The input motion $G_{ab}$ can be obtained directly from the gyro readings. However, obtaining the ground-truth labels for $G'_{ab}$ is non-trivial. We propose to estimate the real motion from the captured images. If we estimated a homography between them, translations would be included, which is inappropriate for rotation-only gyros. The ground-truth should merely contain rotations between $I_a$ and $I_b$. Therefore, we estimate a fundamental matrix and decompose it for the rotation matrix~\cite{hartley2003multiple}. However, cell-phone cameras are rolling shutter (RS) cameras, where different rows of pixels have slightly distinct rotation matrices. In this work, we propose a Fundamental Mixtures model that estimates an array of fundamental matrices for the RS camera, such that rotational motions can be extracted as the ground-truth. In this way, we can learn the mapping function. For evaluations, we capture a testing dataset with various scenes, where we manually mark point correspondences for quantitative metrics. According to our experiments, our network can accurately recover the mapping, achieving gyro alignments comparable to those of non-OIS cameras. In summary, our contributions are: \begin{itemize} \item We raise a new problem, compensating OIS motions for gyro image alignment on cell-phones. To the best of our knowledge, the problem is not explored yet, but important to many image and video applications. \item We propose a solution that learns the mapping function between gyro motions and real motions, where a Fundamental Mixtures model is proposed under the RS setting for the real motions. \item We propose a dataset for the evaluation. Experiments show that our method performs comparably to non-OIS cameras and outperforms image-based methods in challenging cases.
\end{itemize} \section{Related Work} \subsection{Image Alignments} Homography~\cite{gao2011constructing}, mesh-based~\cite{zaragoza2013projective}, and optical flow~\cite{sun2018pwc} methods are the most commonly adopted motion models, which align images at a global, middle, and pixel level, respectively. They are often estimated by matching image features or by optimizing a photometric loss~\cite{lin2017direct}. Apart from classical hand-crafted features, such as SIFT~\cite{Lowe04}, SURF~\cite{BayTG06}, and ORB~\cite{RubleeRKB11}, deep features have been proposed for improved robustness, e.g., LIFT~\cite{YiTLF16} and SOSNet~\cite{TianYFWHB19}. Registration can also be realized by deep learning directly, such as deep homography estimation~\cite{le2020deep,zhang2020content}. In general, without extra sensors, these methods align images based on the image contents. \subsection{Gyroscopes} Gyroscopes are important for estimating camera rotations during mobile capturing. The fusion of gyroscope and visual measurements has been widely applied in various applications, including but not limited to image alignment and video stabilization~\cite{karpenko2011digital}, image deblurring~\cite{mustaniemi2019gyroscope}, simultaneous localization and mapping (SLAM)~\cite{huang2018online}, gesture-based user authentication on mobile devices~\cite{guse2012gesture}, and human gait recognition~\cite{zaki2020study}. On mobiles, one important issue is the synchronization between the timestamps of gyros and video frames, which requires gyro calibration~\cite{jia2013online}. In this work, we access the gyro data at the Hardware Abstraction Layer (HAL) of the Android architecture~\cite{siddha2012hardware} to achieve accurate synchronization. \subsection{Optical Image Stabilizer} Optical Image Stabilizer (OIS) has been around commercially since the mid-90s~\cite{sato1993control} and has become more and more popular in our daily cell-phones.
Both image capturing and video recording can benefit from OIS, producing results with less blur and improved stability~\cite{koo2009optical}. It works by controlling the path of the image through the lens and onto the image sensor: camera shake is measured by sensors such as the gyroscope, and the lens is moved horizontally or vertically by electromagnetic motors to counteract the shake~\cite{chiu2007optimal,yeom2009optical}. Once a mobile is equipped with OIS, it cannot be turned off easily~\cite{nasiri2012optical}. On one hand, OIS is good for daily users. On the other hand, it is not friendly to mobile developers who need gyros to align images. In this work, we enable gyro image alignment on OIS cameras. \section{Algorithm} \begin{figure*}[t] \begin{center} \includegraphics[width=1\textwidth]{images/method_pipeline_v14.pdf} \end{center} \caption{The overview of our algorithm, which includes (a) the gyro-based flow estimator, (d) the fundamental-based flow estimator, and (b) the neural network predicting an output flow. For each pair of frames $I_a$ and $I_b$, the homography array is computed using the gyroscope readings from $t_{I_{a}}$ to $t_{I_{b}}$ and converted into the source motion $G_{ab}$ as the network input. On the other side, we estimate a Fundamental Mixtures model to produce the target flow $F_{ab}$ as the guidance. The network is then trained to produce the output $G'_{ab}$.} \label{fig:pipeline} \end{figure*} Our method is built upon convolutional neural networks. It takes a gyro-based flow $G_{ab}$ from the source frame $I_a$ to the target frame $I_b$ as input, and produces the OIS-compensated flow $G'_{ab}$ as output. Our pipeline consists of three modules: a gyro-based flow estimator, a Fundamental Mixtures flow estimator, and a fully convolutional network that compensates the OIS motion. Fig.~\ref{fig:pipeline} illustrates the pipeline.
First, the gyro-based flows are generated according to the gyro readings (Fig.~\ref{fig:pipeline} (a) and Sec.~\ref{sec:gyroflow}); then they are fed into a network to produce the OIS-compensated flows $G'_{ab}$ as output (Fig.~\ref{fig:pipeline} (b) and Sec.~\ref{sec:network}). To obtain the ground-truth rotations, we propose a Fundamental Mixtures model, so as to produce the Fundamental Mixtures flows $F_{ab}$ (Fig.~\ref{fig:pipeline} (d) and Sec.~\ref{sec:fundamental}) as the guidance to the network (Fig.~\ref{fig:pipeline} (c)). During inference, the Fundamental Mixtures model is not required: the gyro readings are converted into gyro-based flows and fed to the network for compensation. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{images/gyro_ts_v5.pdf} \end{center} \caption{Illustration of rolling shutter frames. $t_{I_a}$ and $t_{I_b}$ are the frame starting times. $t_s$ is the camera readout time and $t_f$ denotes the frame period ($t_f > t_s$). $t_a(i)$ and $t_b(i)$ represent the starting time of patch $i$ in $I_a$ and $I_b$. } \label{fig:gyro_ts} \end{figure} \subsection{Gyro-Based Flow}\label{sec:gyroflow} We compute rotations by compounding gyro readings, which consist of angular velocities and timestamps. In particular, we read them from the HAL of the Android architecture for accurate synchronization. The rotation vector $n=\left(\omega_{x}, \omega_{y}, \omega_{z}\right) \in \mathbb{R}^{3}$ is computed from the gyro readings between frames $I_a$ and $I_b$~\cite{karpenko2011digital}. The rotation matrix $R(t)\in SO(3)$ can be produced according to the Rodrigues formula~\cite{dai2015euler}. If the camera is global shutter, the homography is modeled as: \begin{equation} \small \mathbf{H}(t)=\mathbf{K} \mathbf{R}(t) \mathbf{K}^{-1}\label{eq:globalH}, \end{equation} \noindent where $K$ is the intrinsic camera matrix and $R(t)$ denotes the camera rotation from $I_a$ to $I_b$. In an RS camera, every row of the image is exposed at a slightly different time.
Therefore, Eq.(\ref{eq:globalH}) is not applicable, since every row of the image has a slightly different rotation matrix. In practice, assigning each row of pixels its own rotation matrix is unnecessary. We group several consecutive rows into a row patch and assign each patch a rotation matrix. Fig.~\ref{fig:gyro_ts} shows an example. Let $t_s$ denote the camera readout time, i.e., the duration between the exposure of the first row and the last row of pixels. Then \begin{equation} \small t_{a}(i)=t_{I}+t_{s} \frac{i}{N}, \end{equation} \noindent where $t_{a}(i)$ denotes the start of the exposure of the $i$-$th$ patch in $I_a$ as shown in Fig.~\ref{fig:gyro_ts}, $t_{I}$ denotes the starting timestamp of the corresponding frame, and $N$ denotes the number of patches per frame. The starting time of the same patch in the next frame is: \begin{equation} \small t_{b}(i)=t_{a}(i)+t_{f}, \end{equation} \noindent where $t_{f}=1/FPS$ is the frame period. The homography between the $i$-th patch of frame $I_a$ and that of $I_b$ can then be modeled as: \begin{equation} \small \mathbf{H}=\mathbf{K} \mathbf{R}\left(t_{b}\right) \mathbf{R}^{\top}\left(t_{a}\right) \mathbf{K}^{-1}, \end{equation} where $\small\mathbf{R}\left(t_{b}\right) \mathbf{R}^{\top}\left(t_{a}\right)$ can be computed by accumulating rotation matrices from $t_a$ to $t_b$. In our implementation, we divide the image into $6$ patches, which yields a homography array containing $6$ homographies (one per horizontal patch) between two consecutive frames. We convert the homography array into a flow field~\cite{mustaniemi2019gyroscope} so that it can be fed as input to a convolutional neural network. For every pixel $p$ in $I_a$, we have: \begin{equation} \small \mathbf{p}^{\prime}=\mathbf{H}(t) \mathbf{p}, \quad \mathbf{(u,v)}=\mathbf{p}^{\prime}- \mathbf{p}, \label{eq:homo2flow} \end{equation} \noindent and computing this offset for every pixel produces the gyro-based flow $G_{ab}$.
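The chain from a gyro rotation to a dense flow field can be sketched concretely (our own illustration for a single patch only; the intrinsics below are made-up numbers, and the paper uses $6$ patches with per-patch rotations):

```python
import numpy as np

def rodrigues(axis_angle):
    """Rotation matrix from a rotation vector via the Rodrigues formula."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-12:
        return np.eye(3)
    k = axis_angle / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def gyro_flow(omega, dt, Kmat, H_img, W_img):
    """Single-patch sketch: a constant angular velocity `omega` over `dt`
    gives R, the homography H = K R K^{-1}, and the per-pixel offsets
    (u, v) = H p - p form the gyro-based flow."""
    H = Kmat @ rodrigues(np.asarray(omega) * dt) @ np.linalg.inv(Kmat)
    ys, xs = np.mgrid[0:H_img, 0:W_img]
    p = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    q = H @ p.astype(float)
    q = q[:2] / q[2]
    return (q - p[:2]).T.reshape(H_img, W_img, 2)

# made-up intrinsics for a 360x270 frame; zero rotation -> zero flow
Kmat = np.array([[500., 0., 180.], [0., 500., 135.], [0., 0., 1.]])
flow = gyro_flow([0.0, 0.0, 0.0], 1 / 30, Kmat, 8, 8)
assert np.allclose(flow, 0.0)
```

In the per-patch version, each horizontal patch uses its own $\mathbf{R}(t_b)\mathbf{R}^{\top}(t_a)$, and the resulting six flows are stitched row-wise.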
\subsection{Fundamental Mixtures}\label{sec:fundamental} Before introducing our Fundamental Mixtures model, we briefly review the estimation of the fundamental matrix. If the camera is global-shutter, all rows of a frame are imaged simultaneously. Let $p_1$ and $p_2$ be the projections of a 3D point $X$ in the first and second frame, $p_1=P_{1} X$ and $p_2=P_{2} X$, where $P_{1}$ and $P_{2}$ are the projection matrices. The fundamental matrix satisfies the equation~\cite{hartley2003multiple}: \begin{equation} \small \mathbf{p}_{1}^{T} \mathbf{F} \mathbf{p}_{2}=0, \label{F-matrix} \end{equation} where $p_1=(x_1, y_1, 1)^{T}$ and $p_2=\left(x_1^{\prime}, y_1^{\prime}, 1\right)^{T}$. Let $\mathbf{f}$ be the 9-element vector made up of the entries of $F$; then Eq.(\ref{F-matrix}) can be written as: \begin{equation} \small \left(x_1^{\prime} x_1, x_1^{\prime} y_1, x_1^{\prime}, y_1^{\prime} x_1, y_1^{\prime} y_1, y_1^{\prime}, x_1, y_1, 1\right) \mathbf{f}=0, \label{dlt_F} \end{equation} which, given $n$ correspondences, yields a set of linear equations: \begin{equation} \small \begin{aligned} &A \mathbf{f}= \left[\begin{array}{ccc} x_{1}^{\prime}p_1^{T} & y_{1}^{\prime}p_1^{T} & p_1^{T} \\ \vdots & \vdots & \vdots \\ x_{n}^{\prime}p_n^{T} & y_{n}^{\prime}p_n^{T} & p_n^{T} \end{array}\right] \mathbf{f} = 0. \end{aligned} \end{equation} Using at least 8 matching points yields a homogeneous linear system, which can be solved under the constraint $\|\mathbf{f}\|_{2}=1$ using the Singular Value Decomposition (SVD) $A=U D V^{\top}$, where the last column of $V$ is the solution~\cite{hartley2003multiple}. In the case of an RS camera, the projection matrices $P_{1}$ and $P_{2}$ vary across rows instead of being frame-global, so Eq.(\ref{F-matrix}) does not hold. Therefore, we introduce the Fundamental Mixtures model, which assigns a fundamental matrix to each row patch. We detect FAST features~\cite{trajkovic1998fast} and track them by KLT~\cite{shi1994good} between frames.
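A minimal version of the single-matrix solve reviewed above can be sketched as follows (our own illustration: no coordinate normalization, outlier rejection, or rank-2 enforcement; with the row ordering of the stacked system, the encoded constraint reads $\mathbf{p}_2^{T}F\,\mathbf{p}_1=0$ for a row-major $\mathbf{f}$):

```python
import numpy as np

def fundamental_8pt(p1, p2):
    """DLT solve of the stacked epipolar constraints: one row per match,
    and the right singular vector of the smallest singular value is the
    unit-norm minimizer of ||A f||.  p1, p2: (n, 2) coordinates, n >= 8.
    A sketch only: no normalization, RANSAC, or rank-2 enforcement."""
    x, y = p1[:, 0], p1[:, 1]
    xp, yp = p2[:, 0], p2[:, 1]
    A = np.stack([xp * x, xp * y, xp, yp * x, yp * y, yp,
                  x, y, np.ones_like(x)], axis=1)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)   # row-major F with ||f||_2 = 1

# sanity check on a synthetic pure-translation pair (R = I, F = [t]_x)
rng = np.random.default_rng(0)
tvec = np.array([0.3, -0.2, 0.1])
F_true = np.array([[0, -tvec[2], tvec[1]],
                   [tvec[2], 0, -tvec[0]],
                   [-tvec[1], tvec[0], 0]])
X = rng.uniform(1.0, 3.0, (12, 3))    # 3D points in front of the camera
p1 = X[:, :2] / X[:, 2:3]             # first view, P1 = [I | 0]
X2 = X + tvec                         # second view, P2 = [I | t]
p2 = X2[:, :2] / X2[:, 2:3]
F_est = fundamental_8pt(p1, p2)       # recovers F_true up to scale/sign
```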
We modify the detection threshold to obtain uniform feature distributions~\cite{grundmann2012calibration,guo2016joint}. To model RS effects, we divide a frame into $N$ patches, resulting in $N$ unknown fundamental matrices $F_{i}$ to be estimated per frame. If we estimated each fundamental matrix independently, discontinuities between neighboring patches would be unavoidable. We therefore propose to smooth neighboring matrices during the estimation, as shown in Fig.~\ref{fig:pipeline} (d), where a point $p_1$ not only contributes to its own patch but also influences nearby patches, weighted by distance. The fundamental matrix for a point $p_1$ is the mixture: \begin{equation} \small F({p_1}) =\sum_{i=1}^{N} F_{i} w_{i}(p_1), \end{equation} \noindent where $w_{i}(p_1)$ is a Gaussian weight whose mean is the center of the $i$-th patch and whose standard deviation is $\sigma = 0.001\, h$, with $h$ the frame height. To fit the Fundamental Mixtures given a pair of matching points $\left(p_{1}, p_{2}\right)$, we rewrite Eq.(\ref{F-matrix}) as: \begin{equation} \small 0=\mathbf{p}_{1}^{T} \mathbf{F_{p_1}} \mathbf{p}_{2}=\sum_{i=1}^{N} w_{i}(p_1) \cdot \mathbf{p}_{1}^{T} \mathbf{F_i} \mathbf{p}_{2}, \label{p1Fp2} \end{equation} where $\mathbf{p}_{1}^{T} \mathbf{F}_{i} \mathbf{p}_{2}$ can be transformed into: \begin{equation} \small A_{p_1}^{i} f_{i} = \left(\begin{array}{ccc} x_{1}^{\prime}p_1^{T} & y_{1}^{\prime}p_1^{T} & p_1^{T} \end{array}\right) f_{i}, \label{dlt_ak} \end{equation} where $f_{i}$ denotes the vector formed by concatenating the columns of $F_{i}$. Combining Eq.(\ref{p1Fp2}) and Eq.(\ref{dlt_ak}) yields a $1 \times 9N$ linear constraint: \begin{equation} \small \underbrace{\left(w_{1}(p_1) A_{p_1}^{1} \ldots w_{N}(p_1) A_{p_1}^{N}\right)}_{A_{p_1}} \underbrace{\left(\begin{array}{c} f_{1} \\ \vdots \\ f_{N} \end{array}\right)}_{f}=A_{p_1} f=0.
\label{svd_f_mix} \end{equation} Aggregating the linear constraints $A_{p_j}$ of all matched point pairs yields a homogeneous linear system $A\mathbf{f}=0$ that can be solved under the constraint $\|\mathbf{f}\|_{2}=1$ via SVD. If the number of feature points in one patch is fewer than $8$, Eq.(\ref{svd_f_mix}) is under-constrained. Therefore, for robustness, we add the regularization constraint $\lambda\left\|A_{p}^{i}-A_{p}^{i-1}\right\|_{2}=0$ to the homogeneous system, with $\lambda = 1$. \subsubsection{Rotation-Only Homography} Given the fundamental matrix $F_i$ and the camera intrinsic $K$, we can compute the essential matrix $E_i$ of the $i$-th patch: $\mathbf{E_i}=\mathbf{K}^{T} \mathbf{F_i} \mathbf{K}.$ The essential matrix $E_i$~\cite{hartley2003multiple} can be decomposed into camera rotations and translations, of which only the rotations $R_i$ are retained. We use $R_i$ to form a rotation-only homography similar to Eq.(\ref{eq:globalH}) and convert the homography array into a flow field as in Eq.(\ref{eq:homo2flow}). We call this flow field the Fundamental Mixtures flow $F_{ab}$. Note that $R_i$ is spatially smooth, since $F_i$ is, and so is $F_{ab}$. \subsection{Network Structure}\label{sec:network} The architecture of the network, shown in Fig.~\ref{fig:pipeline}, utilizes a UNet~\cite{ronneberger2015u} backbone consisting of a series of convolutional and downsampling layers with skip connections. The input to the network is the gyro-based flow $G_{ab}$ and the ground-truth target is the Fundamental Mixtures flow $F_{ab}$. Our network produces an optical flow of size $H \times W \times 2$ that compensates the OIS motion between $G_{ab}$ and $F_{ab}$. Besides, the network is fully convolutional and accepts inputs of arbitrary sizes.
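To make the Fundamental Mixtures assembly of Sec.~\ref{sec:fundamental} concrete, the Gaussian patch weighting and one $1\times 9N$ constraint row can be sketched as follows (our own illustration: the weights are normalized here, and a broader $\sigma$ than the paper's $0.001\,h$ is used so that several patches receive visible weight):

```python
import numpy as np

def patch_weights(y, height, N, sigma_scale=0.2):
    """Gaussian weights w_i(p): a point at image row y contributes to all
    N row patches, weighted by the distance to each patch center.
    NOTE: sigma_scale and the normalization are illustrative choices;
    the paper states sigma = 0.001 * frame height."""
    centers = (np.arange(N) + 0.5) * height / N
    w = np.exp(-0.5 * ((y - centers) / (sigma_scale * height)) ** 2)
    return w / w.sum()

def mixture_row(p1, p2, height, N):
    """One 1 x 9N row of the Fundamental Mixtures system: the single-F
    DLT row, replicated and scaled by w_i(p1) for each per-patch F_i."""
    x, y = p1
    xp, yp = p2
    base = np.array([xp * x, xp * y, xp, yp * x, yp * y, yp, x, y, 1.0])
    w = patch_weights(y, height, N)
    return np.concatenate([wi * base for wi in w])

# one matched point pair in a 270-row frame with N = 6 patches
row = mixture_row((10.0, 135.0), (12.0, 130.0), height=270, N=6)
assert row.shape == (6 * 9,)
```

Stacking such rows over all matches (plus the smoothness regularizer) gives the system whose SVD null vector holds all $N$ per-patch fundamental matrices at once.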
Our network is trained on $9k$ rich-texture frames with a resolution of $360\times270$ pixels for $1k$ iterations by an Adam optimizer~\cite{kingma2014adam} with $l_{r}=1.0 \times 10^{-4}$, $\beta_1=0.9$, $\beta_2=0.999$. The batch size is $8$, and every $50$ epochs the learning rate is reduced by $20\%$. The entire training process costs about $50$ hours. The implementation is in PyTorch and the network is trained on one NVIDIA RTX 2080 Ti. \section{Experimental Results} \subsection{Dataset} Previously, some dedicated datasets were designed to evaluate homography estimation~\cite{zhang2020content} or image deblurring with artificially generated gyroscope-frame pairs~\cite{mustaniemi2019gyroscope}, but none of them combine real gyroscope readings with corresponding video frames. We therefore propose a new dataset and benchmark, GF4. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{images/dataset_v2.pdf} \end{center} \caption{A glance at our evaluation dataset. Our dataset contains 4 categories: regular (RE), low-texture (LT), low-light (LL), and moving-foreground (MF). Each category contains $350$ pairs, a total of $1400$ pairs, with synchronized gyroscope readings.} \label{fig:dataset} \end{figure} \noindent\textbf{Training Set} To train our network, we record a set of videos with their gyroscope readings using a hand-held cellphone. We choose scenes with rich textures so that sufficient feature points can be detected to calculate the Fundamental Mixtures model. The videos last $300$ seconds, yielding $9{,}000$ frames in total. Note that the scene type is not important as long as it provides enough features for the Fundamental Mixtures estimation. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{images/data_annotation_v4.pdf} \end{center} \caption{We mark the correspondences manually in our evaluation set for quantitative metrics. For each pair, we mark $6 \sim 8$ point matches.
} \vspace{-5pt} \label{fig:annotation} \end{figure} \noindent\textbf{Evaluation Set} For the evaluation, we capture scenes of different types to compare with image-based registration methods. Our dataset contains $4$ categories of frame-gyroscope pairs: regular (RE), low-texture (LT), low-light (LL), and moving-foreground (MF). Each category contains $350$ pairs, so there are $1400$ pairs in the dataset. We show some examples in Fig.~\ref{fig:dataset}. For quantitative evaluation, we manually mark $6\sim8$ point correspondences per pair, distributed uniformly over the frames. Fig.~\ref{fig:annotation} shows some examples. \subsection{Comparisons with non-OIS camera} Our purpose is to enable gyro image alignment on OIS cameras. Therefore, we compare our method with non-OIS cameras. In general, our method should perform as well as non-OIS cameras if the OIS motion is compensated successfully. For the comparison, ideally we should use one camera with OIS turned on and off. However, the OIS cannot be turned off easily. Therefore, we use two cell-phones with similar camera intrinsics, one with OIS and one without, and capture the same scene twice with similar motions. Fig.~\ref{fig:qualitycompare_ois} shows some examples. Fig.~\ref{fig:qualitycompare_ois} (a) shows the input frames. Fig.~\ref{fig:qualitycompare_ois} (b) shows the gyro alignment on a non-OIS camera. As seen, the images can be well aligned. Fig.~\ref{fig:qualitycompare_ois} (c) shows the gyro alignment on an OIS camera. Due to the OIS interference, the images cannot be aligned directly using the gyro. Fig.~\ref{fig:qualitycompare_ois} (d) shows our results. With OIS compensation, the images can be well aligned on OIS cameras. \begin{table}[t] \small \centering \resizebox*{0.45 \textwidth}{!} { \begin{tabular}{lccc} \toprule &Non-OIS Camera & OIS Camera & Ours\\ \midrule Geometry Distance &0.688 & 1.038 & 0.709 \\ \bottomrule \end{tabular} } \caption{Comparisons with non-OIS camera.
} \vspace{-10pt} \label{tab:non-ois} \end{table} We also compute quantitative results. Similarly, we mark the ground truth for evaluation. The average geometry distance between the warped points and the manually labeled GT points is computed as the error metric (the lower the better). Table~\ref{tab:non-ois} shows the results. Our result $0.709$ is comparable with the non-OIS camera's $0.688$ (slightly worse), while no compensation yields $1.038$, which is much higher. \subsection{Comparisons with Image-based Methods} \begin{figure*} \begin{center} \includegraphics[width=1\textwidth]{images/ois_on_off_gyro_align_v8.pdf} \end{center} \vspace{-10pt} \caption{Comparisons with non-OIS cameras. (a) input two frames. (b) gyro alignment results on the non-OIS camera. (c) gyro alignment results on the OIS camera. (d) our OIS compensation results. Without OIS compensation, clear misalignment can be observed in (c), whereas our method solves this problem and is comparable with the non-OIS results in (b).} \label{fig:qualitycompare_ois} \end{figure*} \begin{figure*}[h] \begin{center} \includegraphics[width=1\linewidth]{images/quality_exp_v8.pdf} \end{center} \caption{Comparisons with image-based methods. We compare with SIFT~\cite{Lowe04} + RANSAC~\cite{FischlerB81}, Meshflow~\cite{LiuTYSZ16}, and the recent deep homography~\cite{zhang2020content} method. We show examples covering all scenes in our evaluation dataset. Our method can align images robustly while image-based methods contain some misaligned regions. } \label{fig:qualitycompare} \end{figure*} Although it is somewhat unfair to compare with image-based methods since we adopt additional hardware, we wish to demonstrate the importance and robustness of gyro-based alignment, and to highlight the value of enabling this capability on OIS cameras.
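The geometry-distance metric above can be sketched as follows; this is a simplified stand-in for illustration (the point lists and their format are assumptions, not the paper's exact evaluation code):

```python
import math

def geometry_distance(warped_pts, gt_pts):
    """Average Euclidean distance between warped points and manually labeled
    ground-truth points (lower is better)."""
    assert len(warped_pts) == len(gt_pts) > 0
    total = sum(math.dist(p, q) for p, q in zip(warped_pts, gt_pts))
    return total / len(warped_pts)

# made-up example: each warped point is 1 px away from its GT point
warped = [(10.0, 20.0), (31.0, 40.0)]
gt = [(10.0, 21.0), (30.0, 40.0)]
print(geometry_distance(warped, gt))  # (1.0 + 1.0) / 2 = 1.0
```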
\begin{table*}[h] \small \centering \resizebox*{1.0 \textwidth}{!} { \begin{tabular}{>{\arraybackslash}p{5cm} >{\centering\arraybackslash}p{2.4cm} >{\centering\arraybackslash}p{2.4cm} >{\centering\arraybackslash}p{2.4cm} >{\centering\arraybackslash}p{2.4cm} >{\centering\arraybackslash}p{2.4cm}} \toprule 1) &RE& LT& LL& MF& Avg \\ \midrule 2) $\mathcal{I}_{3 \times 3}$ & 7.098(+2785.37\%) & 7.055(+350.80\%) & 7.035(+519.55\%) & 7.032(+767.08\%) & 7.055(+313.97\%)\\ \midrule 3) SIFT~\cite{Lowe04} + RANSAC~\cite{FischlerB81} & 0.340(+38.21\%) & 6.242(+298.85\%) & 2.312(+103.58\%) & 1.229(+51.54\%) & 2.531(+48.49\%) \\ 4) SIFT~\cite{Lowe04} + MAGSAC~\cite{barath2019magsac} &\textcolor[rgb]{1,0,0}{0.213(\textminus13.41\%)} & 5.707(+264.66\%) & 2.818(+148.17\%) & \textcolor[rgb]{0,0,1}{0.811(+0.00\%)} & 2.387(+40.08\%) \\ 5) ORB~\cite{rublee2011orb} + RANSAC~\cite{FischlerB81} & 0.653(+165.45\%) & 6.874(+339.23\%) & \textcolor[rgb]{0,0,1}{1.136(+0.00\%)} & 2.27(+179.28\%) & 2.732(+60.30\%) \\ 6) ORB~\cite{rublee2011orb} + MAGSAC~\cite{barath2019magsac} &0.919(+273.58\%) & 6.859(+338.27\%) & 1.335(+17.60\%) & 2.464(+203.82\%) & 2.894(+69.83\%) \\ 7) SOSNet~\cite{tian2019sosnet} + RANSAC~\cite{FischlerB81} & \textcolor[rgb]{0,0,1}{0.246(+0.00\%)} & 5.946(+279.94\%) & 1.977(+74.11\%) & 0.907(+11.84\%) & 2.269(+33.14\%) \\ 8) SOSNet~\cite{tian2019sosnet} + MAGSAC~\cite{barath2019magsac} & 0.309(+25.61\%) & 5.585(+256.87\%) & 1.972(+73.67\%) & 1.142(+40.81\%) & 2.252(+32.14\%) \\ 9) SURF~\cite{BayTG06} + RANSAC~\cite{FischlerB81} & 0.343(+39.43\%) & 3.161(+101.98\%) & 2.213(+94.89\%) & 1.420(+75.09\%) & 1.784(+4.69\%)\\ 10) SURF~\cite{BayTG06} + MAGSAC~\cite{barath2019magsac} & 0.307(+24.80\%) & 3.634(+132.20\%) & 2.246(+97.78\%) & 1.267(+56.23\%) & 1.863(+9.34\%)\\ \midrule 11) MeshFlow~\cite{LiuTYSZ16} & 0.843(+242.68\%) & 7.042(+349.97\%) & 1.729(+52.27\%) & 1.109(+36.74\%) & 2.681(+57.30\%)\\ \midrule 12) Deep Homography~\cite{zhang2020content} &
1.342(+445.53\%) & \textcolor[rgb]{0,0,1}{1.565(+0.00\%)} & 2.253(+98.41\%) & 1.657(+104.32\%) & \textcolor[rgb]{0,0,1}{1.704(+0.00\%)} \\ \midrule 13) Ours & 0.609(+147.56\%) &\textcolor[rgb]{1,0,0}{1.01(\textminus35.27\%)} &\textcolor[rgb]{1,0,0}{0.637(\textminus43.90\%)} &\textcolor[rgb]{1,0,0}{0.736(\textminus9.25\%)} &\textcolor[rgb]{1,0,0}{0.749(\textminus56.07\%)}\\ \bottomrule \end{tabular} } \caption{Quantitative comparisons on the evaluation dataset. The best performance is marked in red and the second-best is in blue. } \label{table:quality} \end{table*} \subsubsection{Qualitative Comparisons} First, we compare our method with a frequently used traditional feature-based algorithm, SIFT~\cite{Lowe04} with RANSAC~\cite{FischlerB81}, which computes a global homography, and another feature-based algorithm, Meshflow~\cite{LiuTYSZ16}, which deforms a mesh for non-linear motion representation. Moreover, we compare our method with the recent deep homography method~\cite{zhang2020content}. Fig.~\ref{fig:qualitycompare} (a) shows a regular example where all the methods work well. In Fig.~\ref{fig:qualitycompare} (b), SIFT+RANSAC fails to find a good solution, as does Deep Homography, while Meshflow works well. One possible reason is that a single homography cannot cover the large depth variations. Fig.~\ref{fig:qualitycompare} (c) illustrates a moving-foreground example in which SIFT+RANSAC and Meshflow do not work well, as few features are detected on the background, whereas Deep Homography and our method align the background successfully. A similar example is shown in Fig.~\ref{fig:qualitycompare} (d), where SIFT+RANSAC and Deep Homography fail. Meshflow works on this example as sufficient features are detected in the background. In contrast, our method can still align the background without any difficulty, because we do not need the image contents for registration.
Fig.~\ref{fig:qualitycompare} (e) is an example of a low-light scene, and Fig.~\ref{fig:qualitycompare} (f) is a low-texture scene. All the image-based methods fail as no high-quality features can be extracted, whereas our method is robust. \subsubsection{Quantitative Comparisons} We also compare our method with other feature-based methods quantitatively, using the geometry distance. For the feature descriptors, we choose SIFT~\cite{Lowe04}, ORB~\cite{rublee2011orb}, SOSNet~\cite{TianYFWHB19}, and SURF~\cite{BayTG06}. For the outlier rejection algorithms, we choose RANSAC~\cite{FischlerB81} and MAGSAC~\cite{barath2019magsac}. The errors for each category are shown in Table~\ref{table:quality}, followed by the overall averaged error, where $\mathcal{I}_{3 \times 3}$ refers to a $3 \times 3$ identity matrix as a reference. In particular, feature-based methods sometimes crash; when the error is larger than the $\mathcal{I}_{3 \times 3}$ error, we set the error equal to the $\mathcal{I}_{3 \times 3}$ error. Regarding the motion model, methods $3$) to $10$) and $12$) use a single homography, $11$) is mesh-based, and $13$) uses a homography array. In Table~\ref{table:quality}, we mark the best performance in red and the second-best in blue. As shown, except against feature-based methods in RE scenes, our method outperforms the others in all categories. This is reasonable because, in regular (RE) scenes, a set of high-quality features is detected, which allows feature-based methods to output a good solution. In contrast, gyroscopes can only compensate for rotational motions, which decreases scores to some extent. For the remaining scenes, our method beats the others, with an average error lower than the $2$nd best by $56.07\%$. Especially for low-light (LL) scenes, our method achieves an error at least $43.9\%$ lower than the $2$nd best.
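The failure-handling rule above, capping any method's error at the $\mathcal{I}_{3 \times 3}$ baseline, can be sketched as follows (the identity-error value is taken from Table~\ref{table:quality}; the sample errors are made up for illustration):

```python
def clamp_to_identity(errors, identity_error):
    """Cap per-pair errors at the identity-homography baseline: any result
    worse than 'no alignment at all' is treated as a crash/failure case."""
    return [min(e, identity_error) for e in errors]

print(clamp_to_identity([0.34, 9.80, 2.31], 7.055))  # [0.34, 7.055, 2.31]
```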
\subsection{Ablation Studies} \subsubsection{Fully Connected Neural Network} \begin{figure}[h] \begin{center} \includegraphics[width=1\linewidth]{images/h4pt_pipeline_v5.pdf} \end{center} \caption{Regression of the homography array using the fully connected network. For each pair of frames $I_a$ and $I_b$, a homography array is computed by using gyro readings, which is fed to the network. On the other side, a Fundamental Mixtures model is produced as targets to guide the training process.} \label{fig:h4pt} \end{figure} Our network is fully convolutional: we convert gyroscope data into homography arrays, and then into flow fields used as image input to the network. However, another option is to feed the homography arrays to the network directly. Similarly, on the other side, the Fundamental Mixtures are converted into rotation-only homography arrays and then used as guidance. Fig.~\ref{fig:h4pt} shows the pipeline, where we test two homography representations, including the $3 \times 3$ homography matrix elements and the $H_{4pt}$ representation from~\cite{detone2016deep}, which represents a homography by 4 motion vectors. The network is fully connected and an L2 loss is adopted for the regression. We adopt the same training data as described above. Neither representation converges, although $H_{4pt}$ is slightly better than directly regressing matrix elements. There may exist other representations or network structures that work as well or even better than our current proposal. Here, as a first attempt, we have proposed a working pipeline and leave such improvements as future work. \subsubsection{Global Fundamental vs. Mixtures} To verify the effectiveness of our Fundamental Mixtures model, we compare it with a global fundamental matrix. Here, we choose the evaluation dataset of the regular scenes to alleviate the feature problem.
We estimate a global fundamental matrix and the Fundamental Mixtures, then convert each to rotation-only homographies. Finally, we align the images with the rotation-only homographies accordingly. An array of homographies from the Fundamental Mixtures produces an error of \textbf{0.451}, which is better than the error of \textbf{0.580} produced by a single homography from a global fundamental matrix. This indicates that the Fundamental Mixtures model is effective in the case of RS cameras. Moreover, we generate GT with the two methods and train our network on each, respectively. As shown in Table~\ref{table:GT_F_Fmix}, the network trained on the Fundamental Mixtures-based GT outperforms the one trained on the global fundamental matrix, which demonstrates the effectiveness of our Fundamental Mixtures. \begin{table}[h] \small \centering \resizebox{0.48 \textwidth}{!} {\begin{tabular}{lccccc} \toprule Ground Truth & RE & LT & LL & MF & Avg \\ \midrule Global Fundamental & 0.930 &1.189 & 0.769 & 0.917 & 0.951 \\ Fundamental Mixtures & 0.609 & 1.010 & 0.637 & 0.736 & 0.749 \\ \bottomrule \end{tabular}} \caption{The performance of networks trained on two different GT.} \label{table:GT_F_Fmix} \end{table} \subsubsection{Backbone} \begin{table}[h] \centering \small \begin{tabular}{lccccc} \toprule &RE& LT& LL& MF& Avg \\ \midrule R2UNet\cite{alom2018recurrent} &0.713 &1.006 &0.652 &0.791 &0.791\\ AttUNet\cite{oktay2018attention} &0.896 &1.058 &0.752 &0.993 &0.925\\ R2AttUNet\cite{alom2018recurrent} &0.651 &1.014 &0.668 &0.722 &0.764\\ \midrule Ours &0.609 & 1.010 &0.637 &0.736 &0.749\\ \bottomrule \end{tabular} \caption{The performance of networks with different backbones.} \label{tab:backbone} \end{table} We choose UNet~\cite{ronneberger2015u} as our network backbone and also test several other variants~\cite{alom2018recurrent,oktay2018attention}. Except for AttUNet~\cite{oktay2018attention}, the performances are similar, as shown in Table~\ref{tab:backbone}.
\section{Conclusion} We have presented a DeepOIS pipeline for the compensation of OIS motions for gyroscope image registration. We captured training data consisting of video frames and their gyro readings with an OIS camera, and then calculated the ground-truth motions with our proposed Fundamental Mixtures model under the setting of rolling shutter cameras. For the evaluation, we manually marked point correspondences on our captured dataset for quantitative metrics. The results show that our compensation network works well when compared with non-OIS cameras and outperforms other image-based methods. In summary, a new problem is proposed, and we show that it is solvable by learning the OIS motions, such that the gyroscope can be used for image registration on OIS cameras. We hope our work can inspire more research in this direction.
\section{Introduction} The Koopman operator is an infinite-dimensional operator that governs the evolution of scalar observations in the state space of a nonlinear dynamical system, as shown in Equation \ref{eqn.koopman}. It has been proven \cite{koopman1931hamiltonian} that there exists a Koopman operator $\mathcal{K}$ that advances measurement functions as a linear operator in an infinite-dimensional space. \begin{equation}\label{eqn.koopman} \begin{gathered} \mathcal{K} g = g \circ \boldsymbol{F} \end{gathered} \end{equation} where $\boldsymbol{F}$ represents the dynamics that map the state of the system forward in time, and $g$ is an observable of the system in the infinite-dimensional space. For practical use, existing approaches attempt to compute a finite-dimensional approximation of this operator. Dynamic Mode Decomposition (DMD) has been proven to be an effective way to find reduced-order models representing higher-dimensional complex systems \cite{williams2015data, williams2016extending, williams2014kernel, koopman1931hamiltonian}. For extensions of Koopman operator methods to controlled dynamical systems, refer to \cite{korda2018linear, proctor2016dynamic, kaiser2017data, kaiser2020data, broad2018learning, you2018deep,ma2019optimal}. However, the use of the Koopman operator as a linear predictor has been hindered by its computational complexity. A major concern is that, as the dimension of the state space increases, numerical methods for approximating the Koopman operator incur a rapidly growing polynomial time complexity in computing a rich set of basis functions. Another route to a Koopman operator approximation is through Deep Neural Networks (DNN). A DNN provides tremendous capacity to store and map lower-dimensional measurements to higher-dimensional lifted representations. Fruitful research results have been obtained in applying DNNs to the linear embedding of nonlinear dynamics \cite{lusch2018deep}.
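Equation \ref{eqn.koopman} can be made concrete with a one-dimensional toy example (an illustration only, not part of any method in this paper): for the linear map $F(x) = \lambda x$, the observable $g(x) = x^2$ satisfies $\mathcal{K}g = g \circ F = \lambda^2 g$, i.e., $g$ is a Koopman eigenfunction with eigenvalue $\lambda^2$.

```python
lam = 0.5
F = lambda x: lam * x      # dynamics: x_{t+1} = F(x_t)
g = lambda x: x ** 2       # scalar observable

# Koopman action: (K g)(x) = g(F(x)); for this pair, K g = lam^2 * g,
# so applying the dynamics to the observable is a *linear* rescaling.
x = 1.7
print(g(F(x)))             # equals lam**2 * g(x)
print(lam ** 2 * g(x))
```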
Deep Learning of Koopman Representation for Control (DKRC) exemplifies a recent research trend of using a DNN as a linear embedding tool to learn nonlinear dynamics and control the system based on the learned dynamics. In this framework, optimal control is achieved by combining a global model of the system dynamics with a local model for local replanning. The key feature of DKRC is that it extends the use of DNNs to the finite-dimensional representation of the Koopman operator in a controlled dynamical system setting. It benefits from a data-driven learning algorithm that can automatically search for a rich set of suitable basis functions ($\psi(x)$) to construct the approximated linear model in the lifted space \cite{Han:2020f}. We can then rewrite Equation \ref{eqn.koopman} as Equation \ref{eqn.koopman_psi}: \begin{equation}\label{eqn.koopman_psi} \begin{gathered} \psi(x_{t+1}) = \mathcal{K} \psi(x_t) \end{gathered} \end{equation} In this paper, we implement Model Predictive Control (MPC) after learning the dynamics of the lifted high-dimensional linear system with DKRC. Other methods, such as regression or DAGGER (with iLQR or MCTS for planning), have been introduced once a learned model is available. However, those methods suffer from distribution mismatch problems or from the poor performance of open-loop control in stochastic domains. We propose using MPC iteratively, which provides robustness to small model errors thanks to closed-loop control. As the authors mention in Ref.~\cite{psrl}, if the system dynamics were linear and the cost function convex, the global optimization problem solved by MPC would be convex; in this situation, convex programming techniques offer theoretical guarantees of convergence to an optimal solution. MPC would therefore certainly perform better in a linear world.
MPC represents the state of the art in the practice of real-time optimal control \cite{ddmpc}. MPC is an online optimization method that seeks optimal state solutions at each time step under certain defined constraints. We only take the first value of the planned control sequence at each time step. MPC has been proven to be tolerant of modeling errors in Adaptive Cruise Control (ACC) simulations \cite{acc} \cite{cprm}. To solve the convex problem, we use CVXPY \cite{cvxpy} in this study. Deep Deterministic Policy Gradient (DDPG) is a model-free, off-policy algorithm using DNN function approximators that can learn policies in high-dimensional, continuous action spaces \cite{ddpgo}. DDPG is a combination of the deterministic policy gradient approach and insights from the success of the Deep Q Network (DQN) \cite{dqno}. DQN is a policy optimization method that achieves an optimal policy by taking observations as input and updating the policy through back-propagated gradients. However, DQN can only handle discrete and low-dimensional action spaces, limiting its applicability, since many control problems have high-dimensional observation spaces and continuous control requirements. The innovation of DDPG is that it extends DQN to the continuous control domain and to higher-dimensional systems; it has been claimed to robustly solve more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion, and car driving \cite{ddpgo}. In addition, DDPG has been demonstrated to be sample-efficient compared to the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which is also a black-box optimization method widely used for robot learning \cite{ddpgc}. Since DKRC and DDPG are both capable of solving dynamic system control problems with high-dimensional, continuous control spaces, a direct comparison is desirable, obtained by constructing the two algorithms from the ground up and applying them to the same tasks.
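The receding-horizon idea above (plan over a short horizon, apply only the first control, re-plan) can be sketched for a scalar linear system. This toy brute-force version only stands in for the CVXPY-based convex solver used in this work; the dynamics, horizon, and cost weights are chosen purely for illustration:

```python
import itertools

def mpc_first_control(x, A=1.0, B=1.0, horizon=3, candidates=(-1.0, 0.0, 1.0)):
    """Brute-force receding-horizon control for x_{t+1} = A*x + B*u:
    enumerate short control sequences, pick the cheapest, return its FIRST move."""
    best_u0, best_cost = None, float("inf")
    for seq in itertools.product(candidates, repeat=horizon):
        z, cost = x, 0.0
        for u in seq:
            z = A * z + B * u
            cost += z * z + 0.1 * u * u   # quadratic state + control cost
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# closed loop: re-plan at every step, apply only the first planned control
x = 3.0
for _ in range(5):
    x = x + mpc_first_control(x)
print(x)  # driven to the origin
```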
We also compare DKRC to the analytical model obtained by the classic Euler-Lagrange linearization method. The comparisons are examined in the inverted pendulum environment and the Lunar Lander continuous control environment in OpenAI Gym \cite{brockman2016openai}. This paper is organized into five sections. In Section \Romannum{2}, we briefly introduce the algorithms we use to achieve the comparison results. In Section \Romannum{3}, we set up the optimization problems in our simulation environment. In Section \Romannum{4}, we present the comparison results between the model-based and model-free approaches. Section \Romannum{5} concludes. \section{Algorithm} In this section we briefly introduce the DDPG and DKRC algorithms. \subsection{Deep Koopman Representation for Control} Deep Koopman Representation for Control (DKRC) is a model-based control method which transforms a nonlinear system into a high-dimensional linear system with a neural network and deploys model-based control approaches such as model predictive control (MPC). DKRC benefits from the massive number of parameters of the neural network. By utilizing a neural network as a lifting function, we obtain a robust transformation from a nonlinear system to a lifted linear system as required. This paper also presents an auto-decoder neural network \cite{dec} to map the planned states of MPC in the lifted state space back to the non-lifted state space, verifying the DKRC control beyond the above comparison.
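As a fixed-dictionary stand-in for the learned lifting function, the following sketch shows the shape of the map $x \mapsto \psi_N(x)$; DKRC learns this map with a neural network instead of hand-picking it, and the monomial basis here is only an illustration:

```python
def lift(x1, x2):
    """Hand-picked polynomial dictionary lifting a 2-D state to N = 6 observables;
    DKRC replaces this fixed choice with a trained encoder network."""
    return [1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2]

z = lift(0.5, -1.0)
print(len(z))  # 6
print(z)       # [1.0, 0.5, -1.0, 0.25, -0.5, 1.0]
```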
A schematic diagram of the DKRC framework is shown in Figure \ref{fig:DKRC}: \begin{figure}[ht] \centering \includegraphics[width=0.65\textwidth]{Attachments/Alg/DKRC_schematics.pdf} \caption{Schematics of DKRC framework}\label{fig:DKRC} \end{figure} The implementation of DKRC can be divided into the following steps: \begin{enumerate} \item Build a neural network $\psi_{N}(x|\theta)$, where $N$ is the dimension to which we lift, $x_t$ is the state-space observation of the dynamical system at time $t$, and $\theta^\psi$ represents the parameters of the neural network. We decouple the optimal control problem into two function modules. The DKRC module handles the mapping from the state space to the system in the higher-dimensional space, which exhibits linear system behavior. We then feed the mapped states $z_t$ to the controller, which uses state-of-the-art planning algorithms to construct a cost-to-go function and plan optimal control for the system. These state-of-the-art control algorithms, such as iLQG and MPC, typically cannot work at their best efficiency when the system is highly nonlinear. The DKRC algorithm provides promising results in expanding the success of these conventional control algorithms into the field of challenging control problems in nonlinear dynamical systems. \item The DKRC algorithm achieves the lifting from the low-dimensional space to the high-dimensional space using the recent success of deep neural networks in the control field. We use an autoencoder neural network structure as the backbone of DKRC. The encoder part of the autoencoder network serves as the lifting function that maps the low-dimensional state observations to high-dimensional Koopman representations. Considering that DKRC executes optimal MPC control in a lifted high-dimensional state space, it is also interesting to know how DKRC makes control plans in that space.
To verify DKRC's planning behavior in the high-dimensional state space, we build a decoder neural network that maps the lifted space back to the original space. The schematic of the relationship between the DKRC encoder and decoder is shown in Figure \ref{fig:decoder}. In this example, we have three state-space observations, which we lift to a Koopman representation of dimension eight (the intermediate result in the center hidden layer). The decoder network maps the lifted states back to the original state space of dimension three as the resulting output. In this way, we can verify that our approach behaves consistently in mapping between the low- and high-dimensional spaces, which is vital for assessing its reliability and avoiding the complete black-box behavior of a simple feed-forward neural network without validation. \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{Attachments/decoder.png} \caption{Schematics of the autoencoder neural network structure of DKRC}\label{fig:decoder} \end{figure} \item To tune the parameters of the autoencoder network ($\theta^\psi$), we carefully design and closely monitor loss functions during the training process. Two loss functions are used in this task. We define the first loss function, $L1$, in Equation \ref{eqn.loss1}, \begin{equation}\label{eqn.loss1} \begin{gathered} L1(\theta)=\frac{1}{L-1}\sum_{t=0}^{L-1}\parallel \psi(x_{t+1};\theta)-K*\psi(x_t;\theta)\parallel\\ K=\psi(x_{t+1};\theta)*\psi(x_t;\theta)^\dagger \end{gathered} \end{equation} \item The $\dagger$ sign represents the pseudo-inverse operation. By minimizing this loss function, we ensure that the Koopman operator relation in Equations \ref{eqn.koopman} and \ref{eqn.koopman_psi} is strictly enforced. Since we are interested in the dynamical system with control, we seek to identify the linearized system with $A$ and $B$ coefficient matrices.
During the model training iterations, we can obtain the matrices by minimizing Equation \ref{eqn.ab} \begin{equation}\label{eqn.ab} \begin{gathered} M = [A,B] = \arg\min_{A,B}{|| \psi_{N}(X_{t+1}|\theta) - {A}\psi_{N}{({X_t}}|\theta) - {B}{U_t} ||}\\ \end{gathered} \end{equation} For larger data sets with $K\gg N$, instead of solving the least-squares problem associated with Equation \ref{eqn.ab} directly, it is beneficial to solve a slightly modified normal equation as defined in Equation \ref{eqn.absol} \begin{equation}\label{eqn.absol} \begin{gathered} \boldsymbol{V} = M \boldsymbol{G} \end{gathered} \end{equation} where,\ $$\boldsymbol{G} = {\begin{bmatrix} \psi_{N}(X_t|\theta)\\ U_t\\\end{bmatrix}}{\begin{bmatrix} \psi_{N}(X_t|\theta)\\ U_t\\\end{bmatrix}}^{T}, \ \boldsymbol{V} = \psi_{N}(X_{t+1}|\theta){\begin{bmatrix} \psi_{N}(X_t|\theta)\\ U_t\\\end{bmatrix}}^{T}$$\\ Any solution to Equation \ref{eqn.absol} is a solution to Equation \ref{eqn.ab}. The sizes of the matrices $\boldsymbol{G}$ and $\boldsymbol{V}$ are $(N+m)\times (N+m)$ and $N\times (N+m)$ respectively, hence independent of the number of samples ($K$) in the data set \cite{lpfnd}.\\ We can also define a second loss function $L2(\theta)$ in Equation \ref{eqn.loss2}, which ensures a controllable lifted linear state space. \begin{equation}\label{eqn.loss2} \begin{gathered} L2(\theta) = (N-rank(controllability(A,B))) \end{gathered} \end{equation} \item After training converges, we obtain a neural network model $\psi_{N}(x|\theta)$ that is capable of lifting the state space to a high-dimensional space, together with the learned dynamic model coefficients $A$, $B$, and $C$.
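The identification step in Equations \ref{eqn.ab} and \ref{eqn.absol} can be sketched in the scalar case $z_{t+1} = a z_t + b u_t$, where the normal equations reduce to a $2 \times 2$ linear solve (a toy illustration with made-up data, not the paper's implementation):

```python
def identify_ab(z, u):
    """Least-squares fit of z_{t+1} = a*z_t + b*u_t: the scalar analogue of
    Eq. (ab), solved via the 2x2 normal equations (Cramer's rule)."""
    Szz = sum(zt * zt for zt in z[:-1])
    Suu = sum(ut * ut for ut in u)
    Szu = sum(zt * ut for zt, ut in zip(z[:-1], u))
    Vz = sum(zn * zt for zn, zt in zip(z[1:], z[:-1]))
    Vu = sum(zn * ut for zn, ut in zip(z[1:], u))
    det = Szz * Suu - Szu * Szu
    a = (Vz * Suu - Vu * Szu) / det
    b = (Vu * Szz - Vz * Szu) / det
    return a, b

# data generated from z_{t+1} = 0.9*z_t + 0.5*u_t is recovered exactly
z, u = [1.0], [1.0, -1.0, 2.0]
for ut in u:
    z.append(0.9 * z[-1] + 0.5 * ut)
a, b = identify_ab(z, u)
print(round(a, 6), round(b, 6))  # 0.9 0.5
```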
The $C$ matrix is necessary for the later controller cost function design and is defined as $C = X_t*\psi_{N}(X_t|\theta)^\dagger$ \item Finally, we implement Model Predictive Control (MPC) or a Linear-Quadratic Regulator (LQR), with a cost function of the form $J(V) =\sum_t \psi_{N}(X_t|\theta)^\top C^\top Q C \psi_{N}(X_t|\theta)+u_t^\top R u_t$, \\ where $Q_{lifted} = C^{T} Q C$ \end{enumerate} To summarize the above discussion and the information depicted in Figure \ref{fig:DKRC}, the algorithm is listed below: \DontPrintSemicolon \begin{algorithm}[ht] \SetAlgoLined \KwIn{observations: x, control: u} \KwOut{Planned trajectory and optimal control inputs: ($z_{plan}$, $v_{plan}$)} \begin{itemize} \item Initialization \begin{enumerate} \item Set goal position $x^*$ \item Build Neural Network: $\psi_N(x_t;\theta)$ \item Set $z(x_t;\theta) = \psi_N(x_t;\theta) - \psi_N(x^*;\theta)$ \end{enumerate} \item Steps \begin{enumerate} \item Set $K=z(x_{t+1};\theta)*z(x_t;\theta)^\dagger$ \item Set the first loss function $L1$\\ $L1(\theta)=\frac{1}{L-1}\sum_{t=0}^{L-1}\parallel z(x_{t+1};\theta)-K*z(x_t;\theta)\parallel$ \item Set the second loss function $L2$\\ $[A,B]={\bf z}_{t+1} \begin{bmatrix} {\bf z}_{t} \\ U \end{bmatrix} \begin{pmatrix} \begin{bmatrix} {\bf z}_{t}\ U \end{bmatrix} \begin{bmatrix} {\bf z}_{t} \\ U \end{bmatrix} \end{pmatrix}^\dagger$\\ $L2(\theta) = (N-rank(controllability(A,B)))+||A||_1+||B||_1$ \item Train the neural network, updating the complete loss function\\ $L(\theta)=L1(\theta)+L2(\theta)$ \item After convergence, we obtain the system identification matrices A, B, C\\ $C=\boldsymbol X_t*\psi_N(\boldsymbol X_t)^\dagger$ \item Apply LQR or MPC control with constraints \end{enumerate} \end{itemize} \caption{Deep Koopman Representation for Control (DKRC)} \end{algorithm} \subsection{Deep Deterministic Policy Gradient} Deep Deterministic Policy Gradient (DDPG) is a reinforcement learning algorithm based on an actor-critic framework.
DDPG and its variants have been demonstrated to be very successful in designing optimal continuous control for many dynamical systems. However, compared to the above model-based optimal control, DDPG employs a more black-box-style neural network structure that outputs control directly based on learned DNN parameters. DDPG and its variants typically require a significant amount of training data and a well-designed reward function to achieve good performance without learning an explicit model of the system dynamics. A schematic diagram of the DDPG framework is shown in Figure \ref{fig:DDPG}. \begin{figure}[ht] \centering \includegraphics[width=0.65\textwidth]{Attachments/Alg/DDPG_schematics.pdf} \caption{Schematics of DDPG framework}\label{fig:DDPG} \end{figure} The control goal of the DDPG algorithm is to find a control action $u$ that maximizes the rewards (Q-values) evaluated at the current time step. The training goal of the algorithm is to learn neural network parameters for the networks described below so that rewards are maximized on each sampled batch, while the policy remains deterministic. The training of the DDPG network then follows a gradient ascent update at each iteration, as defined in Equation \ref{eqn.DDPG_grad} \begin{equation}\label{eqn.DDPG_grad} \begin{gathered} {\nabla_{\theta^{\mu}}}J \approx \mathbb{E}[\nabla_{\theta^{\mu}}Q(x,u|\theta^Q)|x=x_t, u=\mu(x_t|\theta^\mu)] \end{gathered} \end{equation} In Equation \ref{eqn.DDPG_grad}, two separate networks are involved: the value function network $\theta^Q$ and the policy network $\theta^\mu$. To approximate the gradient from these two networks, we design the training process around one replay buffer and four neural networks: Actor, Critic, Target Actor, and Target Critic. At the sample-gathering stage, we use a replay buffer to store samples. The DDPG replay buffer is essentially a buffered list storing a stack of training samples $(x_t, u_t, r_t, x_{t+1})$.
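A minimal replay buffer along these lines can be written with the standard library alone (the capacity and batch size below are illustrative, not the values used in the experiments):

```python
import random
from collections import deque

class ReplayBuffer:
    """FIFO buffer of (x_t, u_t, r_t, x_{t+1}) transitions with uniform sampling."""
    def __init__(self, capacity=10000):
        self.buf = deque(maxlen=capacity)

    def store(self, x, u, r, x_next):
        self.buf.append((x, u, r, x_next))

    def sample(self, batch_size):
        # uniform sampling without replacement within one batch
        return random.sample(list(self.buf), batch_size)

rb = ReplayBuffer(capacity=100)
for t in range(150):               # oldest samples are evicted once capacity is hit
    rb.store(t, 0.0, 1.0, t + 1)
print(len(rb.buf))                 # 100
batch = rb.sample(8)
print(len(batch))                  # 8
```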
During training, samples are drawn from the replay buffer in batches, which enables training with batch normalization \cite{replayb}. For each stored sample vector, $t$ is the current time step, $x_t$ is the collection of state observations at time step $t$, $r_t$ is the numerical reward at the current state, $u_t$ is the action at time step $t$, and $x_{t+1}$ is the state at the next time step $t+1$ resulting from taking action $u_t$ at state $x_t$. The replay buffer ensures independent and efficient sampling \cite{replaye}. As discussed above, the DDPG framework is a model-free reinforcement learning architecture, which means it does not seek an explicit model of the dynamical system. Instead, we use a value function network, also called the Critic network $Q(s,a|{\theta^{Q}})$, to approximate the value of a state-action pair. A trained Critic network estimates the expected future reward of taking action $u_t$ at state $x_t$. If we take the gradient of the change in the updated reward, we can use that gradient to update our policy network, also called the Actor network. At the end of training, we obtain an Actor network capable of designing optimal control based on past experience. Since there are two components in the formulation of the reward gradient in Equation \ref{eqn.DDPG_grad}, we can apply the chain rule to evaluate the gradients from the Actor and Critic networks separately. The total gradient is then defined as the mean of the gradient over the sampled mini-batch, where $N$ is the size of the training mini-batch taken from the replay buffer $R$, as shown in Equation \ref{eqn.policy}. Silver et al. proved in Ref.~\cite{silver2014deterministic} that Equation \ref{eqn.policy} is the policy gradient that guides the DDPG model in searching for a policy network that yields the maximum expected reward.
\begin{equation}\label{eqn.policy} \begin{gathered} {\nabla_{\theta^{\mu}}}J \approx \frac{1}{N} \sum_t [\nabla_u Q(x,u|\theta^Q)|_{x=x_t, u=\mu(x_t)} \nabla_{\theta^{\mu}} \mu(x|\theta^\mu)|_{x_t}] \end{gathered} \end{equation} In this case, the dynamical responses of the system are built into the modeling of the Critic network and are thus modeled only indirectly. The accuracy of the predicted reward values is the only training criterion in this process, signifying the major difference between model-free reinforcement learning and a model-based optimal control method such as DKRC. Some of the behavioral differences observed later in the model-based vs. model-free comparison stem from this fundamental difference. We separate the training of DDPG into the training of the Actor and Critic networks. The Actor, or policy network, is a simple neural network with weights $\theta^\mu$, which takes states as input and outputs controls based on a trained policy $\mu(x|{\theta^{\mu}})$. The Actor is updated using the sampled policy gradient generated from the Critic network, as proposed in Ref. \cite{pga}. The Critic, or value function network, is a neural network with weights $\theta^Q$, which takes a state-action pair $(x_t,u_t)$ as input. We define a temporal difference error term, $e_t$ (TD error), to track the difference between the current output and a target Critic network evaluated at the future state-action pair in the next time step. The total loss of the Critic network over a mini-batch is defined as $L$ in Equation \ref{eqn.error_loss}, which is based on Q-learning, a widely used off-policy algorithm \cite{qlr}.
\begin{equation}\label{eqn.error_loss} \begin{gathered} e_t = (r_t + \gamma Q^{'}(x_{t+1}, \mu^{'}(x_{t+1}|{\theta^{\mu^{'}}})| \theta^{Q^{'}})) - Q(x_t,u_t|\theta^{Q})\\ L = \frac{1}{N} \sum_t{e_t}^2 \end{gathered} \end{equation} In the Critic loss function in Equation \ref{eqn.error_loss}, the terms with a prime symbol $(')$ are the Actor/Critic models from the target networks. The target Actor $\mu^{'}$ and target Critic $Q^{'}$ of DDPG are used to keep the computation stable during the training process. These two target networks are time-delayed copies of the actual Actor/Critic networks in training, which absorb only a small portion of the new information from the current iteration. The update scheme of the two target networks is defined in Equation \ref{eqn.target_nn}, where $\tau$ is the target-network update-rate hyperparameter, set to a small value ($\tau \ll 1$) to improve the stability of learning. \begin{equation}\label{eqn.target_nn} \begin{gathered} {\theta^{Q^{'}}} \leftarrow \tau \theta^Q + (1-\tau)\theta^{Q^{'}}\\ {\theta^{\mu^{'}}} \leftarrow \tau \theta^{\mu} + (1-\tau)\theta^{\mu^{'}}\\ \end{gathered} \end{equation} To train DDPG, we first initialize the four networks ($Q(s,a|\theta^Q), \mu(s|\theta^\mu), Q^{'}(s,a|\theta^{Q^{'}}), \mu^{'}(s|\theta^{\mu^{'}})$) and the replay buffer $R$. At each time step we select an action from the Actor, execute the action $a_t$, observe the reward $r_t$ and the new state $s_{t+1}$, and store $(s_t, a_t, r_t, s_{t+1})$ in the buffer $R$. We then sample a training mini-batch at random from $R$, update the Critic by Equation \ref{eqn.error_loss}, update the Actor by Equation \ref{eqn.policy}, and update the target Actor and Critic by Equation \ref{eqn.target_nn}, repeating until the loss converges.
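The soft target-network update of Equation \ref{eqn.target_nn} and the bootstrapped target inside Equation \ref{eqn.error_loss} can be sketched on plain parameter lists as follows (a minimal sketch; in practice these updates run over framework tensors rather than Python lists):

```python
def soft_update(target_params, source_params, tau=0.001):
    """theta_target <- tau * theta_source + (1 - tau) * theta_target (Eq. eqn.target_nn)."""
    return [tau * s + (1.0 - tau) * t
            for t, s in zip(target_params, source_params)]

def td_target(r_t, q_next, gamma=0.9):
    """One-step bootstrapped target r_t + gamma * Q'(x_{t+1}, mu'(x_{t+1})) from Eq. eqn.error_loss."""
    return r_t + gamma * q_next
```

With $\tau \ll 1$ the target parameters track the trained networks with a long time constant, which is what keeps the regression target in the Critic loss slowly varying.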
\bigskip \section{Experiment Setup} To obtain benchmark comparison results, we use the classic 'Pendulum-v0' OpenAI Gym environment \cite{brockman2016openai} to examine the behavior of controllers built with DDPG and DKRC on the inverted pendulum problem. The OpenAI Gym is a toolkit designed for developing and comparing reinforcement learning algorithms \cite{rl}. The inverted pendulum swing-up problem is a classic problem in the control literature; the goal is to swing the pendulum up and make it stay upright, as depicted in Figure \ref{fig:env_intro}. Although both approaches are data-driven in nature, and thus versatile in deployment, we stick with this classic control problem because its well-documented analytical solutions enable later comparisons. As will be shown in the next section, the DKRC approach not only completes the control task using learned dynamics that directly resemble the analytical solution, but can also explain the system in the lifted dimensions through Hamiltonian energy levels. \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{Attachments/env_intro/pendulum.png} \caption{Environment visualization}\label{fig:env_intro} \end{figure} \subsection{Problem set-up} A visualization of the simulation environment is shown in Figure \ref{fig:env_intro}. The simple system has two states $x = [\theta,\dot{\theta}]$ and one continuous control moment at the joint, which must satisfy $-2\leq u\leq2$. This limit on the control magnitude is applied to make the control design more difficult: with the default physical parameters, the maximum allowed moment is not sufficient to raise the pendulum to the upright position in a single move. The control strategy therefore needs to learn how to build momentum by switching the moment direction at the right time.
The OpenAI Gym defines the observations and control in Equation \ref{eqn.state_space_pen}.\\ \begin{equation}\label{eqn.state_space_pen} \begin{gathered} \boldsymbol{\chi}=[\cos{\theta}, \sin{\theta}, \dot{\theta}], \quad \text{where } \theta\in[-\pi,\pi],\ \dot{\theta}\in[-8,8] \\ \boldsymbol{U}=[u], \quad \text{where } u\in[-2,2] \end{gathered} \end{equation}\\ The cost function is defined in Equation \ref{eqn.reward}. It tracks the current state ($\theta$) and the control input ($u$). The environment also limits the rotational speed of the up-swing motion by including a $\dot{\theta}^2$ term in the cost function. By minimizing this cost function, we ask for the minimum control input that brings the pendulum to the upright position with minimal kinetic energy. In the comparison, we use Equation \ref{eqn.reward} directly in the controller design for DKRC, whereas for the DDPG training we negate the cost function and transform it into a negative reward function, in which 0 is the highest achievable reward. By using this simple simulation environment, we ensure the two approaches are compared on the same basis, with the same reward/cost function definition. \begin{equation}\label{eqn.reward} \begin{gathered} \mathrm{cost} = \theta^2 + 0.1\dot{\theta}^2 + 0.001u^2 \end{gathered} \end{equation} The dynamical system of the swing-up pendulum can be solved analytically by the Euler–Lagrange method \cite{ele}; its governing equation is shown in Equation \ref{eqn.lagrange}, where $\theta = 0$ is the upright position, $g$ is the gravitational acceleration, $m$ is the mass of the pendulum, and $l$ is the length of the pendulum. \begin{equation}\label{eqn.lagrange} \begin{gathered} \ddot\theta = -\frac{3g}{2l}\sin(\theta + \pi) + \frac{3}{ml^2}u \end{gathered} \end{equation} By converting the second-order ordinary differential equation (ODE) in Equation \ref{eqn.lagrange} to a first-order ODE, we obtain Equation \ref{eqn.ODE}.
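Under the dynamics of Equation \ref{eqn.lagrange}, one simulation step can be sketched in plain Python (a minimal sketch using the default parameters $m=1$, $l=1$, $g=10$, $dt=0.05$; the torque and speed limits follow the bounds in Equation \ref{eqn.state_space_pen}):

```python
import math

def pendulum_step(theta, theta_dot, u, m=1.0, l=1.0, g=10.0, dt=0.05):
    """One semi-implicit Euler step of the swing-up pendulum dynamics."""
    u = max(-2.0, min(2.0, u))  # torque limit from the environment
    theta_ddot = -3.0 * g / (2.0 * l) * math.sin(theta + math.pi) \
                 + 3.0 / (m * l ** 2) * u
    theta_dot = theta_dot + theta_ddot * dt
    theta_dot = max(-8.0, min(8.0, theta_dot))  # speed limit from the environment
    theta = theta + theta_dot * dt
    return theta, theta_dot
```

Iterating this step with $u$ chosen by the controller reproduces the environment's open-loop behavior, e.g. a small displacement from upright accelerates away from the goal unless countered by torque.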
\begin{equation}\label{eqn.ODE} \begin{gathered} \frac{d}{dt}\begin{bmatrix} \cos\theta\\ \sin\theta\\ \dot\theta \end{bmatrix} = \begin{bmatrix} 0 & -1 & 0\\ 1 & 0 & 0\\ 0 & \frac{3g}{2l} & 0 \end{bmatrix}\begin{bmatrix} \cos\theta\\ \sin\theta\\ \dot\theta \end{bmatrix}+ \begin{bmatrix} 0\\ 0\\ \frac{3}{ml^2} \end{bmatrix}u\\ \end{gathered} \end{equation} We use the default physical parameters defined in the OpenAI Gym, i.e., $m=1$, $l=1$, $g=10.0$, $dt=0.05$. We discretize the model using the zero-order hold (ZOH) method \cite{zoh} with sampling period $dt$. This yields the linearized discrete-time system in Equation \ref{eqn.zoh} with observed states $y_t$. The numerical values of the system matrices for the default problem setting are listed in Equation \ref{eqn.zoh_mat}. \begin{equation}\label{eqn.zoh} \begin{gathered} x_{t+1} = {A_d}{x_t} + {B_d}{u_t}\\ y_t = C{x_t} \end{gathered} \end{equation} with\\ \begin{equation}\label{eqn.zoh_mat} \begin{gathered} A_d = \begin{bmatrix} 0.9988 & -0.04998 & 0\\ 0.04998 & 0.9988 & 0.05\\ 0.01875 & 0.7497 & 1 \end{bmatrix}\ B_d = \begin{bmatrix} 0\\ 0\\ 0.15 \end{bmatrix}\\ C = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} \end{gathered} \end{equation} We use $A_d$ and $B_d$ as the benchmark when comparing the dynamic models learned by DKRC against the Euler-Lagrange linearization in later sections. \subsection{Parameters of DDPG and DKRC} Table \ref{my-label} lists the parameters we use to obtain the DDPG and DKRC solutions. Both methods are trained on NVIDIA Tesla V100 GPUs on an NVIDIA DGX-2 supercomputer cluster.
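The ZOH discretization producing $A_d$ and $B_d$ in Equation \ref{eqn.zoh} can be sketched via the standard trick of exponentiating the augmented matrix that stacks $A$ and $B$ (a minimal NumPy sketch with a truncated Taylor-series matrix exponential, adequate for small $\|A\|\,dt$; a production implementation would use \texttt{scipy.signal.cont2discrete}):

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Truncated Taylor series for the matrix exponential (fine for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k   # accumulates M^k / k!
        out = out + term
    return out

def zoh_discretize(A, B, dt):
    """Zero-order-hold discretization: x_{t+1} = A_d x_t + B_d u_t."""
    n, m = B.shape
    aug = np.zeros((n + m, n + m))
    aug[:n, :n] = A
    aug[:n, n:] = B
    phi = expm_taylor(aug * dt)   # top rows of exp(aug*dt) hold [A_d, B_d]
    return phi[:n, :n], phi[:n, n:]
```

The identity used here is exact: the top block rows of $\exp([A\,B;\,0\,0]\,dt)$ are precisely $A_d = e^{A\,dt}$ and $B_d = \int_0^{dt} e^{A\tau}\,d\tau\, B$.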
\begin{table}[ht] \centering \caption{Parameters of different methods} \label{my-label} \begin{tabular}{ |c|c|c| } \hline Method & Parameter & Value \\ \hline {} & {Buffer size} & 1e6 \\ {} & {Batch size} & 64 \\ {} & {$\gamma$ (Discount factor)} & 0.9 \\ DDPG & {$\tau$ (Target Network Update rate)} & 0.001 \\ {} & {Target \& Actor learning rate} & 1e-3 \\ {} & {Target \& Critic learning rate} & 1e-2 \\ {} & {Training epochs} & 5e4 \\ \hline DKRC & {Lift dimension} & 8 \\ {} & {Training epochs} & 70 \\ \hline \end{tabular} \end{table} The result of the DDPG solution is an Actor neural network with optimal parameters for the nonlinear pendulum system, $\mu(x;\theta^{*})$. The result of the DKRC solution consists of the system matrices $A_{lift}$, $B_{lift}$, $C$ of the lifted space, and a lift neural network $\psi_{N}(x;\theta)$ for observations of the unknown dynamical system. \bigskip \section{Results and Comparison} \subsection{Control Strategies of DDPG and DKRC} We present results from DKRC vs. DDPG for five initialization configurations of the problem. We can choose the starting position as well as an initial disturbance in the form of a starting angular velocity of the pendulum. In this study, we initialize the pendulum at five different positions:\ $\theta_0 = \pi$ (lowest position), \ $\theta_0 = \frac{\pi}{2}$ (left horizontal position), \ $\theta_0 = -\frac{\pi}{2}$ (right horizontal position), and $\theta_0 = \pm \frac{\pi}{18}$ (close to the upright position). The initial angular velocity at each initial position is $\dot{\theta}_0 = 1\,\mathrm{rad/s}$ (clockwise). For better visualization, we map the angle from $[-\pi, \pi]$ to $[0, 2\pi]$, so that the upright (goal) position is always at $\theta=0$ and $\dot{\theta} = 0$. The results are shown in Figures \ref{fig:traj_lowes}--\ref{fig:traj_ru}.
\begin{figure}[!htbp] \centering \includegraphics[width=0.4\textwidth]{Attachments/beha/low0.png} \includegraphics[width=0.4\textwidth]{Attachments/beha/low1.png} \caption{Trajectories of Swinging Pendulum implementing DDPG model (left) vs. DKRC model (right) initialized at the lowest position}\label{fig:traj_lowes} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.4\textwidth]{Attachments/beha/lh0.png} \includegraphics[width=0.4\textwidth]{Attachments/beha/lh1.png} \caption{Trajectories of Swinging Pendulum implementing DDPG model (left) vs. DKRC model (right) initialized at the left horizontal position}\label{fig:traj_lh} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.4\textwidth]{Attachments/beha/rh0.png} \includegraphics[width=0.4\textwidth]{Attachments/beha/rh1.png} \caption{Trajectories of Swinging Pendulum implementing DDPG model (left) vs. DKRC model (right) initialized at the right horizontal position}\label{fig:traj_rh} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.4\textwidth]{Attachments/beha/lu0.png} \includegraphics[width=0.4\textwidth]{Attachments/beha/lu1.png} \caption{Trajectories of Swinging Pendulum implementing DDPG model (left) vs. DKRC model (right) initialized at the close left upright position}\label{fig:traj_lu} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.4\textwidth]{Attachments/beha/ru0.png} \includegraphics[width=0.4\textwidth]{Attachments/beha/ru1.png} \caption{Trajectories of Swinging Pendulum implementing DDPG model (left) vs. DKRC model (right) initialized at the close right upright position}\label{fig:traj_ru} \end{figure} Figures \ref{fig:traj_lowes}--\ref{fig:traj_ru} show that DDPG and DKRC adopt similar control strategies for most initial positions. DDPG needs less time than DKRC to arrive at the goal position when the pendulum is initialized on the right side.
DDPG tends to use smaller control torques because its reward function penalizes the control torque. Still, DDPG never reaches an exactly upright position: at the final position, a non-zero control input is always required to sustain a small displacement from the goal. In contrast, DKRC achieves the goal position precisely with much less training time than DDPG. Once a proper dynamical system model can be learned directly from data, it makes more sense to execute control with a model-based controller design such as MPC. Another way to show the differences in control strategies is to plot the measured trajectories during repeated tests. In Figure \ref{fig:deployment}, we run 50 pendulum games, for a total of 10000 time steps, using the solutions of both methods (DDPG \& DKRC). \begin{figure}[ht] \centering \includegraphics[width=0.42\textwidth]{Attachments/50games/DDPGtheta.png} \includegraphics[width=0.42\textwidth]{Attachments/50games/dkrctheta.png} \caption{50 pendulum games data recorded using DDPG model (left) vs. DKRC model (right), color mapped by energy}\label{fig:deployment} \end{figure} In this comparison, we plot the measurements of $\dot{\theta}$ vs. $\theta$ in 2D, colored by the magnitude of the cost function defined in Equation \ref{eqn.reward}. The goal is to arrive at the goal position ($\dot{\theta}=0$ and $\theta=0$, the center of each plot) as quickly as possible. Figure \ref{fig:deployment} shows that DDPG tends to drive the states into ``pre-designed'' patterns and executes similar control strategies across the 50 games; the data points in the left subplot therefore appear fewer than those in the right. DKRC, on the other hand, exhibits varied control behavior due to the local replanning performed by MPC, which generates multiple distinct trajectories solving the 50 games under different random initializations.
In this comparison, we illustrate that DDPG is indeed deterministic, which is a good indicator of the system's reliability. However, the robustness of the system also benefits from a local replanner that remains available under different initializations or disturbances: the ``pre-designed'' patterns are learned from past experience and therefore cannot guarantee a viable solution for unforeseen situations as we move to more complex systems. By transforming the observed states, we can also show the relationship between the designed control and the energy levels of the system. Our control goal corresponds to reaching the lowest energy level of the system, in terms of the lowest magnitudes of both kinetic and potential energy. In Figure \ref{fig:pendulum_energy}, we present the Hamiltonian energy-level plots with respect to the measured states ($\theta$, $\cos(\theta)$, $\sin(\theta)$, and $\dot{\theta}$) for the forced frictionless pendulum system, each colored on the same energy-level scale. The energy term used for the forced pendulum is defined as $\frac{1}{2}\dot{\theta}^2+\cos(\theta)+u$. \begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{Attachments/50games/ddpgenergy.png} \includegraphics[width=0.49\textwidth]{Attachments/50games/DKRC_energy.png} \caption{DDPG(left) vs. DKRC(right): Recorded Observations \& Energy}\label{fig:pendulum_energy} \end{figure} Figure \ref{fig:pendulum_energy} shows that DKRC's trajectories are concentrated in lower-energy areas compared to DDPG's, meaning that DKRC tends to minimize the energy directly. This behavior resembles many design strategies used in classic energy-based controller design. \subsection{Control Visualization using Decoder Neural Network} Another benefit of a model-based controller design is the interpretability of the system. Since our MPC controller implements receding horizon control, we can preview the designed trajectory.
By deploying one pendulum game, we obtain one planned trajectory for the linear system in the lifted space. To compare the planned trajectory with the measured trajectory in the state space, we utilize the decoder neural network obtained during training to map the trajectory from the higher-dimensional space (8 dimensions for the inverted pendulum) back to the lower-dimensional space (2 dimensions). The comparison between the recovered trajectory plan and the actual trajectory is presented in Figure \ref{fig:mpctraj}. In this figure, we again map $\theta$ to the range $[0, 2\pi]$ for visualization purposes; the goal position is still at $\dot{\theta}=0$ and $\theta=0$. In this example case, we initialize the pendulum close to the goal position but give it a moderate initial angular velocity pointing away from the goal. DKRC plans a simple trajectory with continuous control. During the process, we execute MPC multiple times and use feedback measurements to improve the designed trajectory. The planned trajectory is closely followed except at certain locations near the $0$ and $\pi$ positions; this outlier behavior comes from the neural network's treatment of the discontinuity and does not degrade control performance on the overall problem. It is worth noticing that, to arrive at this result, we learn the unknown nonlinear dynamics with a purely data-driven approach and go through an encoding-decoding process to recover the planned trajectory. These results suggest that the Koopman representation for nonlinear control can help with system interpretability, which is currently an active research area.
\begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{Attachments/mpctraj.png} \caption{DKRC's planned trajectory and actually executed trajectory in Inverted pendulum}\label{fig:mpctraj} \end{figure} To verify that DKRC remains valid in a more complex environment than the inverted pendulum, we also deploy it in the Lunar Lander continuous-control environment of the OpenAI Gym. An illustration of the Lunar Lander environment is shown in Figure \ref{fig:envll}. The control goal is to guide the lunar lander to the landing zone as smoothly as possible \cite{Han:2020f}. The system is again unknown and must be learned from data. We implement the DKRC framework with MPC for trajectory planning, and the result is shown in Figure \ref{fig:llmpctraj}. \begin{figure}[ht] \centering \includegraphics[width=0.46\textwidth]{Attachments/env_intro/lunar.png} \caption{Lunar Lander Environment}\label{fig:envll} \end{figure} The actual trajectory measured in state space is shown as hollow red circles. The planned trajectory is colored by the distance from the originally planned location. In the regions where the actual state is in the immediate vicinity of the originally planned location (dark blue), the actual trajectory follows the plan precisely. We implement finite-horizon control during each MPC planning phase, planning and executing control several steps beyond the current state, as displayed in lighter green. The actual trajectory slowly deviates from the planned trajectory as it drifts away from the originally planned location. This behavior is expected, since we rely on open-loop control during those finite-horizon plans. The actual trajectory and the projected trajectory merge again once the next round of MPC control is executed.
In this figure, we demonstrate that the deviation between the planned and measured trajectories comes from the open-loop planning, rather than from errors introduced by passing the state inputs through the encoder-decoder neural network. The proposed structure is capable of recovering the designed strategy in the higher-dimensional space and improving the system's interpretability. \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{Attachments/decoderll.png} \caption{DKRC's planned trajectory and actual executed trajectory in Lunar Lander, color mapped by the distance between the planned point and the executed point}\label{fig:llmpctraj} \end{figure} \subsection{Robustness Comparison} To compare the robustness of the two methods, we introduce noise into the state measurements and observe the control outcome of DDPG and DKRC. The noise multiplies the states by a noise ratio randomly selected from the range $[0.6, 1]$ during deployment. In this test, we assume the dynamics learned by DDPG and DKRC are not affected by the noise; only the observations during deployment are affected, representing a cyber-security attack during the operation phase. The new state inputs in this scenario are $x \leftarrow x \cdot \mathrm{noise}$. The results for five repeated games are shown in Figure \ref{fig:stable}. \begin{figure}[ht] \centering \includegraphics[width=0.48\textwidth]{Attachments/robust/ddpg.png} \includegraphics[width=0.48\textwidth]{Attachments/robust/dkrc.png} \caption{Five pendulum games with input states noise using DDPG (left) vs. DKRC (right) solutions}\label{fig:stable} \end{figure} Each same-colored line in a subplot represents measurements from a single game. Even with a high noise ratio, DKRC succeeds in the control task in three out of five games in Figure \ref{fig:stable}, whereas DDPG fails in every game.
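The multiplicative measurement noise described above can be sketched as follows (a minimal sketch; the function name is our own, and the noise-ratio range $[0.6, 1]$ follows the experiment setup):

```python
import random

def corrupt_observation(x, low=0.6, high=1.0, rng=random):
    """Scale each measured state by a random ratio, modeling a sensor attack."""
    return [xi * rng.uniform(low, high) for xi in x]
```

Each state component is scaled independently per time step, so the corruption cannot be removed by a single gain correction.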
In this test, we demonstrate that DKRC is capable of designing a control strategy from noisy data and continuously adjusting the control through the feedback loop, so that it remains robust in a noisy environment. This robustness is another advantage of DKRC over a purely neural-network-based control system, which relies heavily on state measurement accuracy. \subsection{Learned dynamics of DKRC compared to Euler-Lagrange analytical solution} To illustrate the validity of the dynamics learned by DKRC, we present benchmark comparisons between the proposed DKRC framework and the analytical solution of the Euler-Lagrange method. As previously shown in Equations \ref{eqn.zoh} and \ref{eqn.zoh_mat}, we obtained the system matrices for the linearized system using the ZOH method. We make the same linearity assumption for the system in the lifted space learned by DKRC. For comparison across different embedding dimensions, we pick the sub-matrices corresponding to the top $n$ eigenvalues of the $A$ and $B$ obtained through DKRC. For the inverted pendulum problem, we collect $K$ time steps of data (only $K=2000$ data points are needed) to obtain the Koopman representation of the system. The resulting lifted dimension is $N=8$ ($K \gg {N}$).
The learned dynamics $A, B$ of the lifted linear system $\psi_{N}(x|\theta)$ are shown in the following matrices:\\ \\ $A_{DKRC} = \scriptsize{\begin{bmatrix} \begin{array}{ccc|ccccc} 1.01 & -0.06 & -0.01 & -0.08 & -0.08 & 0.06 & -0.01 & -0.01 \\ 0.00 & 0.98 & -0.04 & 0.02 & 0.06 & -0.07 & 0.01 & 0.02 \\ 0.06 & 0.05 & 0.93 & 0.06 & 0.00 & 0.00 & -0.03 & 0.02\\ \hline 0.03 & -0.00 & -0.01 & 0.97 & -0.04 & 0.05 & -0.01 & -0.01\\ 0.11 & 0.02 & -0.03 & 0.01 & 0.98 & 0.05 & -0.07 & 0.01\\ 0.04 & 0.05 & 0.01 & 0.02 & 0.01 & 1.01 & -0.05 & 0.04\\ 0.01 & 0.014 & 0.02 & 0.01 & 0.08 & -0.04 & 0.96 & 0.04\\ -0.00 & -0.01 & -0.02 & -0.00 & 0.08 & -0.08 & -0.00 & 1.00 \end{array} \end{bmatrix}} \ B_{DKRC} = \scriptsize{\begin{bmatrix} 0.00014\\ 0.00018\\ -0.00024\\ 0.00017\\ 0.00021\\ 0.00038\\ 0.00008\\ -0.0001 \end{bmatrix}}$\\ \\ To measure the similarity between $A_d$ (in Equation \ref{eqn.zoh_mat}) and the $A_{DKRC}$ we achieved, we adopt the Pearson correlation coefficient (PCC) method \cite{pcc} in Equation \ref{eqn.pcc}, where a bar, e.g. $\bar{A}$, denotes the mean of the entries of that matrix. The result $r(A,B)\in[-1,1]$ measures the correlation between the two matrices: they are more similar the closer $r$ is to 1.\\ \begin{equation}\label{eqn.pcc} \begin{gathered} r(A,B) = \frac{\sum_m \sum_n(A_{mn}-\bar{A})(B_{mn}-\bar{B})}{\sqrt{(\sum_m \sum_n(A_{mn}-\bar{A})^2)(\sum_m \sum_n(B_{mn}-\bar{B})^2)}} \end{gathered} \end{equation}\\ To compare the similarity of two matrices with different dimensions, we extract the top-left $3\times3$ block of $A_{DKRC}$. We achieve a correlation score of $r(A_{d}, A_{DKRC})=0.8926$, which means the lifted linear system of DKRC is very similar to the analytical system model obtained by the Euler-Lagrange method. This high correlation score indicates that the data-driven Koopman representation can reveal the intrinsic dynamics purely from data samples.
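Equation \ref{eqn.pcc} can be sketched directly in NumPy (a minimal sketch; \texttt{np.corrcoef} on the flattened matrices would give the same value):

```python
import numpy as np

def matrix_pcc(A, B):
    """Pearson correlation coefficient between two same-shaped matrices."""
    a = np.asarray(A, dtype=float).ravel() - np.mean(A)  # A_mn - mean(A)
    b = np.asarray(B, dtype=float).ravel() - np.mean(B)  # B_mn - mean(B)
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))
```

Because the PCC is invariant to scale and offset, it compares the structure of the two matrices rather than their absolute magnitudes, which is what makes it suitable for comparing a learned model against an analytical one.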
On a separate note, we do not expect the two results to be exactly the same: by lifting the system to a higher-dimensional space, DKRC's neural networks have more neurons in which to store system information that is not captured in the previous comparison. In addition, we also deploy the linearized model obtained by the Euler-Lagrange method, with MPC, in the same OpenAI Gym environment to examine the effectiveness of direct linearization without lifting. Unlike DKRC or DDPG, which work from any arbitrary initial configuration, the control designed by the Euler-Lagrange method only works when the pendulum's initial position lies within $[-23.4^{\circ}, 23.4^{\circ}]$ and the initial angular-velocity disturbance is small. A sample comparison between Euler-Lagrange MPC and DKRC is shown in Figure \ref{fig:lec}: DKRC needs only around $20\%$ of the time required by the Euler-Lagrange linearization to make the pendulum converge to the upright position. \begin{figure}[ht] \centering \includegraphics[width=0.48\textwidth]{Attachments/LE/dkrcandll.png} \includegraphics[width=0.48\textwidth]{Attachments/LE/ll.png} \caption{One pendulum game using DKRC model (left) vs. Euler-Lagrange method (right); Both tests initialized at $\theta_0 = \frac{\pi}{18}$, $\dot{\theta_0} = 0.5$}\label{fig:lec} \end{figure} \section{Conclusions} We provide a systematic discussion of two different data-driven control frameworks: Deep Deterministic Policy Gradient (DDPG) and Deep Learning of Koopman Representation for Control (DKRC). Both methods are data-driven and adopt neural networks as their central architecture, but the controller design is model-free for DDPG, whereas DKRC takes a model-based approach. Our experiments in a simple swing-up pendulum environment demonstrate the different solutions reached by DKRC and DDPG.
The DKRC method achieves the same control goal as effectively as the DDPG method but requires far fewer training epochs and training samples (70 epochs vs. $5\times10^4$ epochs). Due to its physics-model-based nature, DKRC provides better model interpretability and robustness, both of which are critical for real-world engineering applications. \section*{Acknowledgment} Y. Han would like to acknowledge research support from ONR award N00014-19-1-2295. \bigskip \bibliographystyle{unsrt}
\section{Experiments} \label{experiments} In this section, we evaluate our \TAAD method along with the \tfa modules on the \multisports and \ucftwofour datasets. We start by defining the metrics used in \refsec{subsec:metrics} and the motion-category classification in \refsec{subsec:motion_classifcation}. First, we study the impact of different \tfa modules under different motion conditions in \refsec{subsec:tfa}. Secondly, we compare our \TAAD method with \sotalong methods in \refsec{subsec:comparisons}. We then discuss the baseline model and the impact of the tracker on it in \refsec{subsec:baseline_exp}. We finish with a discussion in \refsec{subsec:discussion}. \subsection{Metrics} \label{subsec:metrics} We report metrics that measure our detector's performance at both the frame and video level, computing frame and video \maplong (\map), denoted \fmap{} and \vmap{} respectively. These metrics are common in action detection works~\cite{kalogeiton2017action,weinzaepfel2015learning,li2020actionsas}. A detection is correct if and only if its \ioulong (\iou) with a ground-truth box or tube, for frame and video metrics respectively, is larger than a given threshold (e.g. $0.5$) and the predicted label matches the ground-truth one. From this, we can compute the \aplong (\ap) for each class and then take the mean across classes to get the desired \map metric. Tube overlap is measured by the \spatiotemporal-\iou proposed by~\cite{weinzaepfel2015learning}; similar to~\cite{li2021multisports}, we use the evaluation code from ACT\footnote{\url{https://github.com/vkalogeiton/caffe/tree/act-detector}}. \subsection{Motion categories} \label{subsec:motion_classifcation} We split actions into three motion categories: large, medium and small. Computing per-motion-category metrics requires labelling the ground-truth action tubes. We start this process by computing the \iou between pairs of boxes separated by frame offsets of $[4,8,16,24,36]$ in a sliding-window fashion.
We average these five \iou values to obtain a final \iou value as a measure of motion speed. We then split the dataset into three equal-sized bins, so that each bin contains action instances of a different relative speed. We can then assign a motion label to each instance based on its \iou and the split boundaries of each dataset: \begin{equation} \text{\multisports} = \begin{cases} \text{Large} , & \quad \text{\iou} \in [0.00, 0.21] \\ \text{Medium} , & \quad \text{\iou} \in [0.21, 0.51] \\ \text{Small} , & \quad \text{\iou} \in [0.51, 1.00] \end{cases} \\ \end{equation} \begin{equation} \text{\ucftwofour} = \begin{cases} \text{Large} , & \quad \text{\iou} \in [0.00, 0.49] \\ \text{Medium} , & \quad \text{\iou} \in [0.49, 0.66] \\ \text{Small} , & \quad \text{\iou} \in [0.66, 1.00] \end{cases} \end{equation} Given these labels, we can compute \ap metrics per motion category. There are two options for these metrics. The first is to compute \ap for large, medium and small motions per action class and then average across actions; we call this metric \emph{\motionmap}. The alternative is to ignore action classes and compute \ap for large, medium and small motions irrespective of the action, which we call \emph{\motionap}. This essentially measures action detection accuracy \wrt motion speed, irrespective of class. We compute the metrics at both the per-frame and the per-video level, as above. Video metrics are denoted with a \emph{video} prefix. We will release the code for training and testing our \taad network, along with evaluation scripts for \motionap{} and \motionmap. \input{tabtex/TAB_motion_speed} \subsection{Motion-wise (main) results} \label{subsec:tfa} In line with the main objective of this work, we first study how the cuboid-aware baseline compares against track-based \taad under significant motion. We compare different choices for temporal feature aggregation.
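The motion-labelling procedure above can be sketched as follows (a minimal sketch; the box format $[x_1, y_1, x_2, y_2]$ and the helper names are our own, and the thresholds shown are the \multisports split boundaries):

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes in [x1, y1, x2, y2] format."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def motion_label(tube, offsets=(4, 8, 16, 24, 36), bounds=(0.21, 0.51)):
    """Average IoU between boxes `o` frames apart; low IoU means large motion."""
    ious = []
    for o in offsets:
        pairs = [(tube[t], tube[t + o]) for t in range(len(tube) - o)]
        if pairs:
            ious.append(sum(box_iou(a, b) for a, b in pairs) / len(pairs))
    speed = sum(ious) / len(ious)
    if speed < bounds[0]:
        return "large"
    return "medium" if speed < bounds[1] else "small"
```

A static actor yields an average \iou near 1 and is labelled small-motion, while a fast-moving actor whose boxes stop overlapping at even the shortest offset is labelled large-motion.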
In \cref{tab:splits}, we measure the frame and video \motionmap{} for models trained with different \tfa{s} on \multisports and \ucftwofour. Pooling features along tracks, instead of neighbouring frames, even with a relatively simple pooling strategy, \ie Max-Pool over the \spatiotemporal dimensions, results in stronger action detectors, with 5.7\% and 5.8\% frame and video \map boosts on \multisports. More involved feature aggregation strategies, such as the temporal convolution blocks (\tcn) or the \aspp variant, lead to further gains. Note that the biggest improvements on \multisports occur in the large-motion category, +8.4\% \motionmap{}, with smaller gains for medium (+3.9\%) and small (+5.5\%) motions. \input{tabtex/TAB_motion_ap} \cref{tab:motion_ap} contains \motionap{} results on \multisports for different choices of \tfa modules. It is clear that \taad combined with any of the \tfa modules leads to large performance gains. Performance on larger motions benefits the most, followed by medium and small motions: for example, the \aspp module helps more with large motions ($+7.9$) than with small motions ($+4.5$). We observe the same trend in \cref{tab:splits}, both for frame and video \motionmap. The above results show that, for the baseline method, there is a large gap between performance on large-motion and small-motion action instances. \taad combined with any \tfa module reduces this discrepancy and improves overall performance on both datasets. \input{tabtex/TAB_Multisports} \input{tabtex/TAB_UCF101-24} \subsection{Comparison to the State-of-the-art} \label{subsec:comparisons} We compare our proposed detector with the state-of-the-art on \multisports and \ucftwofour, considering approaches that evaluate at both the frame and tube level, unlike approaches~\cite{pan2021actor,tang2020asynchronous} which solely focus on frame-level evaluation.
It is important to note that, similar to the baseline, \taad does not make use of any spatial context modelling, so that the measured gains can be attributed to track-aware feature aggregation rather than to additional \spatiotemporal context modelling modules~\cite{tuberZhao2022}. We report frame and video \map for different methods in \cref{tab:multisports}, namely the \slowfast variants from the original \multisports paper, Ning \etal's~\cite{ning2021person} Person-Context Cross-Attention Modelling network, our improved baseline, and three versions of our model: one with \maxpool along the temporal dimensions, the \aspp variant, and the temporal convolutional network (\tcn). \cref{tab:multisports} contains the results of these experiments, where we clearly see the benefit of using tracks for action detection. The addition of feature pooling along tracks, even with the simpler \maxpool version, outperforms our improved baseline by 4.3\% frame \map. Better temporal fusion strategies, \ie \aspp and \tcn, bring further benefits. As a result, we set a new \sotalong on the \multisports dataset. Note that all of our \tfa modules add less than 1G FLOPs ($<2\%$) to the total computation of the whole network. Finally, we compare our proposed \taad model on the older \ucftwofour dataset in \cref{tab:ucf24}. Our model outperforms most existing methods, with the exception of TubeR~\cite{tuberZhao2022} and MOC~\cite{li2020actionsas}. We think the reason is that TubeR makes use of a set-prediction framework~\cite{carion2020end} with a transformer head (plus 3 layers for each of the transformer encoder and decoder) on top of a CNN backbone (CSN-152); additionally, it uses actor context modelling similar to~\cite{pan2021actor}. Also, I3D-based TubeR needs $132M$ FLOPs, much higher than the $97M$ needed by the SlowFastR5-TCN based \taad. MOC, in turn, uses an optical-flow stream as an additional input and DLA-34~\cite{yu2018deep} as its backbone network.
Note that our goal here is to analyse and improve action detection performance across different actor motion speeds. That is why we do not make use of any spatial attention or context modelling~\cite{tuberZhao2022} between actors. These are certainly very interesting topics, and orthogonal to our proposed approach. Nevertheless, in a fair comparison to the baseline, our network consistently shows improvement in all metrics on both datasets. Additionally, the low quality of the proposals given by \yolovfive on \ucftwofour also hampers final performance. We report the corresponding \yolovfive+DeepSort metrics in \cref{tab:trackers_recall}. Fine-tuning the detector on each dataset is a necessary step, especially on \ucftwofour, where the video quality is worse than on \multisports. \subsection{Building a Strong Baseline on \multisports} \label{subsec:baseline_exp} \input{tabtex/TAB_baseline} Here, we investigate the effect of our proposed changes on the performance of the baseline action detector. \cref{tab:baseline_progression} contains the \fmap{@0.5} values, computed on \multisports, for our re-implementation of the \resnet \slowfast network, the addition of the background negative frames, the conversion from multi-label to multiclass classification, and finally the addition of the \fpn. Each component improves the performance of the detector, leading to a much stronger baseline. \input{tabtex/TAB_detector_recall} \qheading{Tracker as filtering module:} Using a tracker as a post-processing step for action detection has many advantages, which we demonstrate in all the above tables, including \cref{tab:ucf24}, where we get a substantial improvement in \fmap{}, labelled as ``Baseline + tracks''. First, the tracker helps filter out high-scoring false-positive person detections that appear spuriously for a few frames, which reduces the load on person detection bounding-box thresholding.
Most current \sota methods use a relatively high threshold to filter out unwanted false-positive person detections, e.g. \pyslowfast~\cite{fan2020pyslowfast} uses $0.8$ and mmaction2~\cite{mmaction2} uses $0.9$. However, strict thresholds can also eliminate some crucial true positives. In contrast to standard methods, we use a relatively liberal threshold value ($0.05$) for our track-based method. Secondly, using a good tracker greatly simplifies tube construction. Trackers are specifically designed to solve the linking problem, removing the need for the greedy linking algorithms used in previous work \cite{singh2017online, kalogeiton2017action,li2020actionsas}. The performance gains, both in \vmap{} and \fmap{}, obtained by the ``Baseline + tracks'' rows of \cref{tab:multisports} and \cref{tab:ucf24} over the ``Baseline'' row, clearly demonstrate this. \subsection{Discussion} \label{subsec:discussion} In this work, our main objective is to study action detection under large motion. The experiments on \multisports and \ucftwofour, see \cref{tab:splits,tab:motion_ap}, demonstrate that \taad, \ie utilizing track information for feature aggregation, improves performance across the board. Nevertheless, this does not mean that there is no room for improvement. Our method is sensitive to the performance of the tracker, since tracking is the first step of our pipeline. Using a better state-of-the-art tracker and person detector, such as the ones employed by other contemporary methods \cite{pan2021actor,tang2020asynchronous,ning2021person,kopuklu2019you}, should further boost performance, especially on \ucftwofour, where \yolovfive struggles. Further, we can improve action detection performance by combining \taad with spatial/actor context modelling~\cite{tuberZhao2022,pan2021actor}, long-term temporal context~\cite{tang2020asynchronous}, a transformer head~\cite{tuberZhao2022}, or stronger backbones~\cite{liu2022video,li2022mvitv2}.
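The track-based filtering described above can be sketched as follows; the data structures, helper names and the matching threshold are illustrative assumptions, not the exact pipeline code. A detection passes the liberal score threshold only if it also overlaps a track box in the same frame, so spurious high-scoring detections without a temporally consistent track are dropped.

\begin{lstlisting}[language=Python,caption={Sketch of track-based filtering of person detections (illustrative).},captionpos=b]
def iou(a, b):
    # IoU of two boxes in (x1, y1, x2, y2) format
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def filter_by_tracks(dets, track_boxes, score_thr=0.05, match_iou=0.5):
    # dets: list of (frame_idx, box, score);
    # track_boxes: dict mapping frame_idx -> list of track boxes
    kept = []
    for frame_idx, box, score in dets:
        if score < score_thr:
            continue  # liberal threshold, unlike the usual 0.8-0.9
        if any(iou(box, tb) >= match_iou
               for tb in track_boxes.get(frame_idx, [])):
            kept.append((frame_idx, box, score))  # temporally consistent
    return kept
\end{lstlisting}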
\input{figtex/FIG_large_visuals_three} One could argue that our definition of motion categories is not precise. Unlike object size categories in \mscoco~\cite{lin2014microsoft}, however, motion categories are not easy to define. Apart from complex camera motion (including zoom, translation and rotation), which is quite common, and quick actor motion, both of which we show in \cref{fig:reasons}, special care has to be taken to avoid mislabelling cyclic motions. \multisports, for example, contains multiple actions, \eg in aerobics, where the actor starts and ends at the same position, which would result in a high \iou between the boxes and thus lead to an erroneous small-motion label. To solve this problem, we use an average of \iou{s} computed at different frame offsets. While our motion labelling scheme is not perfect, visual examples show that it correlates with motion speed. Lastly, in Fig.~\ref{fig:large_motion_visual} we show examples where the baseline fails to detect action tubes but \taad is able to detect them. We provide more qualitative examples in the \mbox{\textbf{Sup.~Mat.}}\xspace to illustrate this point. \section{Conclusion} \label{sec:conclusion} In this work, we analyse and identify three coarse motion categories in action detection datasets. We observe that existing action detection methods struggle in the presence of large motions, \eg motion due to fast actor movement or large camera motion. To remedy this, we introduce \taadlong (\taad), a method that utilizes actor tracks to address this problem. \taad aggregates information along actor tracks, rather than over a cuboid made from proposal boxes. We evaluate the proposed method on two datasets, \multisports and \ucftwofour. \multisports is the ideal benchmark for this task, thanks to its large number of instances with fast-paced actions.
\taad not only bridges the performance gap between motion categories, but also sets a new state-of-the-art for \multisports by beating last year's challenge winner by a large margin. \section{Introduction} \label{sec:intro} \Spatiotemporal action detection, which classifies and localises actions in space and time, is gaining attention, thanks to the AVA~\cite{gu2018ava} and UCF24~\cite{soomro2012ucf101} datasets. However, most of the current state-of-the-art works~\cite{li2020actionsas,singh2017online,feichtenhofer2019slowfast,pan2021actor,tuberZhao2022} focus on pushing action detection performance, usually by complex context modelling~\cite{tuberZhao2022,pan2021actor,tang2020asynchronous}, larger backbone networks~\cite{feichtenhofer2020x3d,li2022mvitv2,liu2022video}, or by incorporating an optical flow~\cite{zhao2019dance,singh2017online} stream. These methods use cuboid-aware temporal pooling for feature aggregation. In this work, we aim to study cuboid-aware action detection under varying degrees of action instance motion using the \multisports~\cite{li2021multisports} dataset, which contains instances with large motions, unlike AVA~\cite{gu2018ava}, as shown in \cref{fig:dataset_movement_stats_cumulative}. \input{figtex/FIG_cumulative_IoUs} \input{figtex/FIG_reasons_for_large_motions} \input{figtex/FIG_motion_types} Large object motion can occur for various reasons, e.g., fast camera motion, fast action, body shape deformation due to pose change, or mixed camera and action motions. These reasons are depicted in \cref{fig:reasons}. Furthermore, the speed of motions within an action class can vary because of a mixture of the above reasons and the nature of the action type, e.g., pose-based or interaction-based actions. Any of these reasons can cause sub-optimal feature aggregation and lead to errors in action classification.
We propose to split actions into three categories: large-motion, medium-motion, and small-motion, as shown in \cref{fig:dataset_movement_stats_cumulative,fig:movement_types}. The distinction is based on the \iou of boxes of the same actor over time, which we can compute using the ground-truth tubes of the actors. We propose to study the performance on different motion categories of a baseline cuboid-aware method, without other bells and whistles like context features~\cite{pan2021actor,ning2021person,tang2020asynchronous} or long-term features \cite{wu2019long,tang2020asynchronous}, because large motion happens quickly, in a small time window, as seen in \cref{fig:dataset_movement_stats_cumulative} and \ref{fig:reasons}. In large-motion cases the \iou is small (\cref{fig:movement_types}~(a)); as a result, a 3D cuboid-aware feature extractor is unable to capture features centred on the actor's location throughout the action duration. To handle the large-motion case, we propose to track the actor over time and extract features using \toialignlong{ }(\toialign), resulting in \taadlong (\taad). Further, we study different types of feature aggregation modules on top of TOI-Aligned features for our proposed \taad network, as shown in \cref{fig:main_figure}. To this end, we make the following contributions: (a) we are the first to systematically study action detection under large motion, using evaluation metrics for each type of motion, similar to object detection studies on \mscoco~\cite{lin2014microsoft} based on object sizes; (b) we propose to use tube/track-aware feature aggregation modules to handle large motions, and we show that this type of module achieves substantial improvements over the baseline, especially for instances with large motion; (c) in the process, we set a new state-of-the-art for the \multisports dataset by beating last year's challenge winner by a substantial margin.
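The motion-category split based on \iou{s} of an actor's ground-truth boxes, averaged over several frame offsets to guard against cyclic actions, can be sketched as follows; the offsets and the two thresholds are illustrative assumptions, not the paper's exact values.

\begin{lstlisting}[language=Python,caption={Sketch of IoU-based motion labelling (illustrative thresholds).},captionpos=b]
def iou(a, b):
    # IoU of two boxes in (x1, y1, x2, y2) format
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def motion_category(tube, offsets=(4, 8, 16), large_thr=0.3, small_thr=0.6):
    # tube: list of per-frame boxes of one actor (a ground-truth tube).
    # Averaging IoUs over several offsets avoids mislabelling cyclic
    # actions that return to their start position.
    ious = [iou(tube[t], tube[t + off])
            for off in offsets
            for t in range(len(tube) - off)]
    mean_iou = sum(ious) / len(ious) if ious else 1.0
    if mean_iou < large_thr:
        return "large"
    return "medium" if mean_iou < small_thr else "small"
\end{lstlisting}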
\section{Related Work} \label{sec:related_work} Action recognition~\cite{carreira2017quo,wang2018nonlocal,feichtenhofer2019slowfast,weinzaepfel2021mimetics,feichtenhofer2020x3d,singh2019recurrent,liu2022video,li2022mvitv2} models provide strong video representation models. However, action recognition as a problem is not as rich as action detection, where local motion in the video needs to be understood more precisely. Thus, action detection is the more relevant problem for understanding actions under large motion. We are particularly interested in the \spatiotemporal~\emph{action detection} problem \cite{Georgia-2015a,gu2018ava,girdhar2018better,wu2019long,tuberZhao2022}, where an action instance is defined as a set of linked bounding boxes over time, called an action tube. Recent advancements in online action detection~\cite{soomro2016predicting,singh2017online,behl2017incremental,kalogeiton2017action,li2020actionsas,yang2019step} have led to performance levels very competitive with (generally more accurate) offline action detection methods~\cite{gu2018ava,wang2018nonlocal,saha2016deep,saha2017amtnet,peng2016eccv,zhao2019dance,singh2018tramnet,singh2018predicting,van2015apt} on the \ucftwofour~\cite{soomro2012ucf101} dataset. \ucftwofour has been a major benchmark for \spatiotemporal action detection (i.e. action tube detection), rather than \ava~\cite{gu2018ava}. The former is well suited for action tube detection research, as it provides dense action tube annotations, where every frame of the untrimmed videos is annotated (unlike \ava~\cite{gu2018ava}, in which videos are only annotated at one frame per second). More recently, Li~\etal~\cite{li2021multisports} proposed the \multisports dataset, which resolves two main problems of the \ucftwofour dataset. Firstly, it has more fine-grained action classes. Secondly, it has multiple actors performing multiple types of action in the same video.
As a result, the \multisports dataset is comparable to \ava in terms of diversity and scale. Moreover, the \multisports dataset is densely annotated (every frame, at a rate of 25 frames per second), which makes it ideal for studying action under large motion, as shown in Fig.~\ref{fig:dataset_movement_stats_cumulative}. At the same time, there have been many interesting papers~\cite{feichtenhofer2019slowfast,feichtenhofer2020x3d,tang2020asynchronous,pan2021actor,chen2021watch} that focus on keyframe-based action detection on \ava \cite{gu2018ava}. \ava has been helpful in pushing action detection research on three fronts. Firstly, backbone model representations are much better now thanks to works like~\cite{feichtenhofer2019slowfast,feichtenhofer2020x3d,liu2022video,wang2018nonlocal,chen2021watch}. Secondly, long-term feature banks (LFB)~\cite{wu2019long} came to the fore~\cite{zhang2019structured,tang2020asynchronous,pan2021actor}, capturing some temporal context, but without temporal associations between actors. Thirdly, interactions between actors and objects have been studied~\cite{pan2021actor,zhang2019structured,tang2020asynchronous,ning2021person}. Once again, the problem we want to study is action detection under large motion, which happens quickly, at a small temporal scale. All the above methods use cuboid-aware pooling for local feature aggregation, which, as we will show, is not ideal when the motion is quick and large. As a result, we adopt the \slowfast~\cite{feichtenhofer2019slowfast} network as our baseline network for its simplicity and \spatiotemporal representational power. Also, it has been used as the baseline for \multisports~\cite{li2021multisports} and as a basic building block in many other works on \ucftwofour. The work of Weinzaepfel~\etal~\cite{weinzaepfel2015learning} is the first to use tracking for action detection. That said, their goal was different from ours.
They used a tracker to solve the linking problem in the tube generation part, where action classification was done on a frame-by-frame basis, given the bounding-box proposals from tracks. We, on the other hand, propose action detection by pooling features from within entire tracks. Gabriellav2~\cite{dave2022gabriellav2} is another method that makes use of tracking, to solve the problem of temporal detection of co-occurring activities, but it relies on background subtraction, which would fail in challenging in-the-wild videos. Singh~\etal~\cite{singh2018tramnet}, Li~\etal~\cite{li2020actionsas} and Zhao~\etal~\cite{tuberZhao2022} are the only works generating flexible micro-tube proposals without the help of tracking. However, these approaches are limited to a small number of frames (2-10), without the possibility of scaling to larger time windows of 1-2 seconds, as that would require multi-frame tube anchors/queries to regress box coordinates over a large number of frames together, and their performance drops after a few frames. \section{Methodology} \label{sec:method} \input{figtex/FIG_tracking_action_detector} In this section, we describe the proposed method to handle actions with large motions, which we call \taadlong (\taad). We start by tracking actors in the video, using the tracker described in Section~\ref{subsec:tracker}. At the same time, we use a neural network designed for video recognition, \slowfast \cite{feichtenhofer2019slowfast}, to extract features from each clip. Using the track boxes and video features, we pool per-frame features with a \roialign operation \cite{he2017mask}. Afterwards, a \tfalong (\tfa) module receives the per-track features and computes a single feature vector, from which a classifier predicts the final action label. \refFig{fig:main_figure} illustrates each step of our proposed approach. \subsection{Baseline Action Detector} \label{subsec:backbone} We select a \slowfast \cite{feichtenhofer2019slowfast} network as our video backbone.
The first reason for this choice is that its performance is still competitive with larger-scale transformer models, such as \videoswin \cite{liu2022video} or \mvit \cite{fan2021multiscale,li2022mvitv2}, on the task of \spatiotemporal action detection. Furthermore, \slowfast is more computationally efficient than the transformer alternatives, with a cost of 65.7 \gflops compared to at least 88 for \videoswin \cite{liu2022video} and 170 for \mvit \cite{fan2021multiscale}, and offers features at two different temporal scales. Having different temporal scales is important, especially since we aim at handling fast and/or large motions, where a smaller scale is necessary. Finally, \slowfast is the default backbone network of choice for the \multisports and \ucftwofour datasets, which are the main benchmarks in this work, facilitating comparisons with existing work. We implement our baseline using \pyslowfast~\cite{fan2020pyslowfast} with a \resnet~\cite{he2016deep} based \slowfast~\cite{feichtenhofer2019slowfast} architecture, building upon the works of Feichtenhofer \etal \cite{feichtenhofer2019slowfast} and Li \etal \cite{li2021multisports}. First, we add background frames (+bg-frames), i.e. frames with erroneous detections from our detector, \yolovfive, as extra negative samples for training the action detector. Next, we replace the multi-label classifier with a multiclass classifier, switching from a binary cross entropy per class to a cross entropy loss (CE-loss). Finally, we also add a downward \fpn block (see Sup.Mat. for details). Through these changes, we aim to build the strongest possible baseline. \subsection{Tracker} \label{subsec:tracker} We employ a class-agnostic version of \tracker~\cite{yolov5deepsort2020} as our tracker, which is based on \yolovfive \cite{yolov5,redmon2016you} and TorchReID~\cite{torchreid}. We fine-tune the medium-size version of \yolovfive as the detection model for the ``person'' class.
A pretrained OsNet-x0-25~\cite{zhou2021osnet} is used as the re-identification (ReID) model. As we will show in the experiments section, a tracker with high recall, \ie a small number of missing associations, is key for improving action tube detection performance. We will also show that fine-tuning the detector is a necessary step, particularly for \ucftwofour, where the quality and resolution of the videos are low. The tracker can also be used as a bounding-box proposal filtering module. Sometimes the detector produces multiple high-scoring detections which are spurious and lead to false positives; these detections do not match any of the generated tracks because they are not temporally consistent. The proposals generated from tracks can be used with the baseline methods at test time; as a result, this helps improve the performance of the baseline method. \subsection{Temporal Feature Aggregation} \label{subsec:tpa} \qheading{\toialignlong{ }(\toialign):} The \slowfast video backbone processes the input clip and produces a $T \times H \times W$ feature tensor, while our tracker returns an array of size $N_t \times T \times 4$ that contains the boxes around the subjects. An \roialign \cite{he2017mask} operation takes these two arrays as input and produces a feature array of size $N_t \times T \times H \times W$, \ie one feature tube per track. If the length of a track is smaller than the length of the input clip, we replicate the last available bounding box in the temporal direction; this occurs in around $3\%$ of input clips in the \multisports dataset. \qheading{Feature aggregation:} In order to predict the label of a bounding box in a key-frame, we need to aggregate features across space and time. We first apply average pooling over the spatial dimensions of the features extracted by \toialign; the \tfalong role is then performed by one of the following variants: \setlist{itemsep=0pt} \begin{enumerate} \item Max-pooling over the temporal axis (\maxpool).
\item A sequence of temporal convolutions (\tcn). \item A temporal variant of Atrous Spatial Pyramid Pooling (\aspp) \cite{deeplabv3plus2018}. We modify \detectron{'s} \cite{wu2019detectron2} ASPP implementation, replacing 2D with 1D convolutions. \end{enumerate} We also tried temporal versions of the \convnext~\cite{liu2022convnet} and \videoswin~\cite{liu2022video} blocks; however, these resulted in unstable training, even with tuning of learning rates and other hyperparameters. In our experiments, we only use one layer of temporal convolution for our \tcn module; adding more layers did not help. See the \mbox{\textbf{Sup.~Mat.}}\xspace for more details. \subsection{Tube Construction} \label{subsec:tube_con} Video-level tube detection requires the construction of action tubes from per-frame detections. This process is split into two steps~\cite{saha2016deep}: first, proposals are linked to form tube hypotheses (\ie action-tracks); second, these hypotheses are trimmed to the part of the track where the action is present. One can think of these two steps as a tracking step plus a temporal (start and end time) action detection step. The majority of the existing action tube detection methods \cite{li2021multisports,singh2022road,li2020actionsas,singh2018tramnet} use the greedy proposal linking algorithm first proposed in~\cite{singh2017online,kalogeiton2017action} for the first step. For the baseline approach, we use the same tube linking method from~\cite{singh2017online}. For our method (\taad), we already have tracks, so the linking step is already complete. The temporal trimming of action-tracks is performed using label-smoothing optimisation~\cite{saha2016deep}, as in many previous works~\cite{kalogeiton2017action,li2020actionsas}; specifically, we use the class-wise temporal trimming implementation provided by~\cite{singh2017online}.
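The \toialign and \maxpool aggregation steps described above can be sketched as follows. This is a minimal NumPy sketch, assuming nearest-neighbour cropping and per-region average pooling in place of the bilinear \roialign used in practice; short tracks are padded by replicating the last box, as in our pipeline.

\begin{lstlisting}[language=Python,caption={Simplified sketch of TOI-Align followed by the MaxPool TFA variant.},captionpos=b]
import numpy as np

def toi_align_maxpool(feats, track):
    # feats: C x T x H x W feature tensor from the video backbone.
    # track: list of per-frame boxes (normalised x1, y1, x2, y2).
    C, T, H, W = feats.shape
    # replicate the last available box if the track is shorter than the clip
    boxes = list(track) + [track[-1]] * (T - len(track))
    per_frame = np.empty((T, C), dtype=feats.dtype)
    for t, (x1, y1, x2, y2) in enumerate(boxes[:T]):
        r0 = int(y1 * H)
        r1 = max(r0 + 1, int(round(y2 * H)))
        c0 = int(x1 * W)
        c1 = max(c0 + 1, int(round(x2 * W)))
        # spatial average pool over the cropped region -> one C-vector
        per_frame[t] = feats[:, t, r0:r1, c0:c1].mean(axis=(1, 2))
    # temporal max-pool (the MaxPool TFA variant) -> one C-dim feature
    return per_frame.max(axis=0)
\end{lstlisting}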
\subsection{Datasets} We evaluate our idea on two densely annotated datasets (\multisports~\cite{li2021multisports} and \ucftwofour \cite{soomro2012ucf101}) with frame- and tube-level evaluation metrics for action detection, unlike \ava \cite{gu2018ava}, which is sparsely annotated and mostly used for frame-level action detection. \qheading{\multisports~\cite{li2021multisports}} is built from 4 sports categories, collecting $3200$ video clips annotated at $25$ \fps, with $37701$ annotated action-tube instances and $902$k bounding boxes. Although it contains $66$ action classes, we follow the official evaluation protocol\footnote{\url{https://github.com/MCG-NJU/MultiSports/}} that uses $60$ classes. Due to the fine granularity of the action labels, the length of each action segment in a clip is short, with an average tube length of $24$ frames (one second), while the average video length is $750$ frames. Each video is annotated with multiple instances of multiple action classes, with well defined temporal boundaries. \multisports contains action instances with large motions around actors, as shown in \reffig{fig:dataset_movement_stats_cumulative}. \qheading{\ucftwofour \cite{soomro2012ucf101}} consists of $3207$ videos annotated at $25$ \fps, with $24$ action classes from different sports and $4458$ action-tube instances with $560$k bounding boxes. Videos are untrimmed in nature, with an average video length of $170$ frames and an average action tube length of $120$ frames. The disadvantages of \ucftwofour are the presence of only one action class per video and the low quality of its images, due to compression and the small resolution of $320 \times 240$ pixels. Even though \ucftwofour has less diversity, less motion, fewer classes and more labelling noise compared to \multisports, it is still useful for evaluating action detection performance, thanks to its temporally dense annotation.
\subsection{Implementation Details} \label{subsec:training_hparams} We use 32 frames as input with a sampling rate of 2, which corresponds to more than 2 seconds of video. We use Slowfast-R50-$8\times8$~\cite{feichtenhofer2019slowfast}, meaning a speed ratio $\alpha=8$ and channel ratio $\beta=1/8$. We use stochastic gradient descent (SGD) to optimise the weights, with a learning rate of $0.05$ and a batch size of $32$ on $4$ Titan X GPUs. We use 1 epoch to warm up the learning rate linearly, followed by a cosine learning rate schedule \cite{cosine_lr}, with a final learning rate of $0.0005$, for a total of 5 epochs. Note that we only train for $3$ epochs on \ucftwofour. We use the frame-level proposals released by \cite{li2021multisports} with the \multisports dataset for fair comparison. More details can be found in the \mbox{\textbf{Sup.~Mat.}}\xspace. \section{Overview} \label{experiments} Here, we provide additional details about certain parts of the main paper. First, we show the architecture of the backbone network, with the feature pyramid network (FPN) structure incorporated into the \slowfast~\cite{feichtenhofer2019slowfast} network, in \reffig{fig:backbone}. Second, we show the structure of the \tfa modules used in our \taad model in \refsec{sec:tfa_structure}. Then, we present frame-level \motionmap and \motionap on individual time scales in \refsec{sec:indiv_scale}. Finally, we show visual results of detected action-tube instances under different motion types in \refsec{subsec:visuals}.
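The linear-warmup-plus-cosine learning-rate schedule described in the implementation details above can be sketched per epoch as follows; the actual \pyslowfast scheduler operates per iteration and may differ in detail, so this is an illustrative approximation with the rates and epoch counts stated above.

\begin{lstlisting}[language=Python,caption={Sketch of the warmup + cosine learning-rate schedule (illustrative).},captionpos=b]
import math

def lr_at_epoch(epoch, base_lr=0.05, final_lr=0.0005,
                warmup_epochs=1, total_epochs=5):
    # linear warmup for the first epoch(s)
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    # cosine decay from base_lr down to final_lr over the remaining epochs
    progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return final_lr + 0.5 * (base_lr - final_lr) * (1 + math.cos(math.pi * progress))
\end{lstlisting}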
\input{figtex/FIG_slowfast_backbone} \input{text_supp_mat/01_tfa} \section{Individual time scales results} \label{sec:indiv_scale} Frame-level \motionmap and \motionap{} results on individual time scales are shown in the following tables: \begin{enumerate}[label=(\arabic*)] \item \motionap{} on \multisports in \reftab{tab:motion_ap_ms} \item \motionap{} on \ucftwofour in \reftab{tab:motion_ap_ucf24} \item \motionmap{} on \multisports in \reftab{tab:motion_map_ms} \item \motionmap{} on \ucftwofour in \reftab{tab:motion_map_ucf24} \end{enumerate} \input{tabtex/TAB_supp_motionAP_indivScaleMS} \input{tabtex/TAB_supp_motionAP_indivScaleUCF24} \input{tabtex/TAB_supp_motion-mAP_indivScaleUCF24} \input{tabtex/TAB_supp_motion-mAP_indivScaleMS} The main take-away from these tables is that the results on individual time scales are consistent with the average over all time scales. Through the \motionap{} metric, we show that large-motion action instances are harder to detect than medium-motion instances, which in turn are harder to detect than small-motion instances; in other words, performance of large-motion $<$ medium-motion $<$ small-motion. This result is consistent across both our benchmarks, \ie \multisports and \ucftwofour. Such a pattern is desirable and intuitive to understand. It is missing in \motionmap{} because some classes may have no (or very few) ground-truth instances in one or two motion categories, resulting in very small or zero \map values. Since medium is the middle motion category, more classes have at least some instances with medium motion; hence there are fewer classes with zero \motionap{}, resulting in a higher mean AP (\ie \motionmap{}) for the medium motion type. \section{Motion-wise visual results} \label{subsec:visuals} In this section we show visual results obtained using our baseline and \taad models. We discuss some interesting observations in the figure captions. The figures are best viewed in colour.
The qualitative results contain the following scenarios: \begin{enumerate}[label=(\arabic*)] \item \lmotion due to fast execution of actions in \reffig{fig:large_motion_action}. \item \lmotion due to fast camera motion in \reffig{fig:large_motion_camera_plus}. \item \mmotion action instances in \reffig{fig:medium_motion}. \item \smotion action instances in \reffig{fig:small_motion}. \item An action instance where \taad fails due to a tracking error, shown in \reffig{fig:tracking_error}. \end{enumerate} In all the captions, ``Overlap'' denotes the spatiotemporal overlap of the detected tube with the ground-truth tube, as defined by Weinzaepfel \etal \cite{weinzaepfel2015learning}. Ground-truth boxes and frames (dot at the bottom of the frame) are shown in green, while the detected track is shown in red. We use ``baseline+tracks'' in these figures as the ``baseline'' method. Since all the methods use the same set of tracks, a red box is used to annotate track boxes. Each method's score has a separate colour, described in the sub-caption. \input{figtex_supp_mat/FIG_large_action_motion} \input{figtex_supp_mat/FIG_large_camera_motion_plus} \input{figtex_supp_mat/FIG_medium_motion} \input{figtex_supp_mat/FIG_small_motion} \input{figtex_supp_mat/Fig_tracking_error} \section{Structure of \tfalong Modules} \label{sec:tfa_structure} \refListing{lst:maxpool}, \ref{lst:tcn}, and \ref{lst:aspp} contain the \mbox{PyTorch}\xspace implementations of our MaxPool, \tcn and \aspp \tfa modules, used with our \taadlong (\taad) method. In addition to these blocks, we also tried 1D-ConvNeXt~\cite{liu2022convnet} and 1D-\swin~\cite{liu2022video} blocks, built similarly to the 1D-\aspp module (see Listing-Fig.~\ref{lst:aspp}), but observed that training was unstable and that the final performance was worse. For example, 1D-ConvNeXt could only reach up to 44\% \fmap{}, compared to 53.3\% using the MaxPool module (Listing-Fig.~\ref{lst:maxpool}).
In all the listings that follow, $C$ is the number of channels and $T$ is the number of input frames. \begin{figure*} \begin{lstlisting}[caption={MaxPool module with input feature of size $T \times C$ and output of $1 \times C$.},captionpos=b,label={lst:maxpool}] { (tube_temporal_pool): AdaptiveMaxPool1d(output_size=1) } \end{lstlisting} \end{figure*} \begin{figure*} \begin{lstlisting}[caption={TCN module with input feature of size $T \times C$ and output of $1 \times C$.},captionpos=b,label={lst:tcn}] { (TCN): Conv1d(576, 576, kernel_size=3, stride=1, padding=2, dilation=2) (tube_temporal_pool): AdaptiveMaxPool1d(output_size=1) } \end{lstlisting} \end{figure*} \begin{figure*} \begin{lstlisting}[caption={1D-ASPP~\cite{deeplabv3plus2018} module with input feature of size $T \times C$ and output of $1 \times C$.}, captionpos=b, label={lst:aspp}] { (ASPP): ASPP1D( (convs): ModuleList( (0): Conv1d(576, 256, kernel_size=1, stride=1) (1): Sequential( (0): Conv1d(256, 576, kernel_size=1, stride=1) (1): ReLU() ) (2): ASPPConv1D( (0): Conv1d(256, 576, kernel_size=3, stride=1, padding=1) (1): ReLU() ) (3): ASPPConv1D( (0): Conv1d(256, 576, kernel_size=3, stride=1, padding=3, dilation=3) (1): ReLU() ) (4): ASPPConv1D( (0): Conv1d(256, 576, kernel_size=3, stride=1, padding=5, dilation=5) (1): ReLU() ) (5): ASPPPooling1D( (0): AdaptiveAvgPool1d(output_size=1) (1): Conv1d(256, 576, kernel_size=1, stride=1) (2): ReLU() ) ) (project): Sequential( (0): Conv1d(2880, 576, kernel_size=1, stride=1, bias=False) (1): ReLU() ) ) (tube_temporal_pool): AdaptiveMaxPool1d(output_size=1) } \end{lstlisting} \end{figure*}
\section{Introduction} \label{sec-intro} Massive graphs arise in many applications, \emph{e.g.,}\xspace network traffic data, social networks, citation graphs, and transportation networks. These graphs are highly dynamic: network traffic data averages about $10^9$ packets per hour per router for large ISPs~\cite{DBLP:journals/pvldb/GuhaM12}; Twitter sees 100 million users log in daily, with around 500 million tweets per day~\cite{twitterstat}. \begin{example} \label{exam:graph} We consider a graph stream\footnote{Without loss of generality, we use a directed graph for illustration. Our method can also work on undirected graphs.} as a sequence of elements $(x,y; t)$, which indicates that the edge $(x,y)$ is encountered at time $t$. A graph stream $\langle(a, b; t_1), (a, c; t_2), \cdots (b, a; t_{14})\rangle$ is depicted in Fig.~\ref{fig:graph}, where all timestamps are omitted for simplicity. Each edge is associated with a weight, which is $1$ by default. \end{example} Queries over big graph streams have many important applications, \emph{e.g.,}\xspace monitoring cyber security attacks and traffic networks. Unfortunately, in a recent survey of businesses analyzing big data streams~\cite{Vitria}, although 41\% of respondents stated that the ability to analyze and act on streaming data in minutes was critical, 67\% also admitted that they do not have the infrastructure to support that goal. The situation is more complicated and challenging when handling graph streams. It poses unique space and time constraints on storing, summarizing, maintaining and querying graph streams, due to their sheer volume, high velocity, and the complication of various queries. In fact, for the applications of data streams, fast approximate answers are often preferred over exact answers. Hence, sketch synopses have been widely studied for estimation over data streams.
Although they allow false positives, the space savings normally prevail over this drawback, when the probability of an error is sufficiently low. There exist many sketches:~AMS~\cite{DBLP:journals/jcss/AlonMS99}, Lossy Counting~\cite{DBLP:conf/vldb/MankuM02}, {\sf CountMin}\xspace~\cite{DBLP:journals/jal/CormodeM05}, Bottom-$k$~\cite{DBLP:journals/pvldb/CohenK08} and {\sf gSketch}\xspace~\cite{DBLP:journals/pvldb/ZhaoAW11}. They have also been used in commercial tools, \emph{e.g.,}\xspace~{\sf CountMin}\xspace is in Cisco OpenSOC\footnote{http://opensoc.github.io/} cyber security analytics framework. \begin{figure}[t] \begin{minipage}{\columnwidth} \centerline{ \xymatrixcolsep{0.6in} \xymatrix{ & b \ar@/^/[dl]^1 \ar[dd]^1 \ar[r]^1 \ar[ddr]^(.5)1 & d \ar[dr]^1 & \\ a \ar@/^/[ur]^1 \ar[dr]_1 & & e \ar[u]_(0.2)1 \ar[d]^1 \ar[ul]_(.3)1 & g \ar[llu]_(.4)1 \\ & c \ar[ur]^1 \ar[r]_1 & f \ar[llu]_(.7)1 & } } \caption{A sample graph stream} \label{fig:graph} \end{minipage} \begin{minipage}{\columnwidth} \vspace{4ex} \centering \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{$h(\cdot)$} & 5 & 5 & 3 & 1 \\ \cline{2-5} \multicolumn{1}{c}{\color{white} {\Large I} } & \multicolumn{1}{c}{$ab, ac, ed, eb, ef$} & \multicolumn{1}{c}{$bc, bd, ba, bf, fa$} & \multicolumn{1}{c}{$ce, cf, gb$} & \multicolumn{1}{c}{$dg$}\\ \end{tabular} \caption{A hash-based sketch} \label{fig:cm} \end{minipage} \begin{minipage}{\columnwidth} \vspace{4ex} \centerline{ \xymatrix{ & II (bf) \ar@/^/[dl]_2 \ar@/^1pc/[dd]^(.7){1} \ar@/_/[dr]^1 \ar@(ur,ul)[]^1 & \\ I (ae) \ar@/^/[ur]^3 \ar@/_/[dr]^1 \ar@/^/[rr]^1 & & III(cg) \ar@/^/[ll]_1 \ar@/_/[ul]_2 \\ & IV(d) \ar@/_/[ur]^1 & } } \caption{An example of our proposed sketch} \label{fig:sys} \end{minipage} \end{figure} \eat{ \begin{figure}[t] \begin{minipage}{\columnwidth} \centerline{ \xymatrix{ & 2 (b) \ar[dd]^{t_3} \ar[r]^{t_4} & 4(d) \ar[r]^{t_{10}} & 7(g) \ar[d]^{t_{11}} \\ 1 (a) \ar[ur]^{t_1} \ar[dr]^{t_2} & & 5(e) \ar[u]^{t_7} \ar[d]^{t_8} 
\ar[r]^{t_9} & 8(b) \ar[ld]^{t_{12}} \ar[d]^{t_{14}} \\ & 3(c) \ar[ur]^{t_5} \ar[r]^{t_6} & 6(f) \ar[r]^{t_{13}} & 9(a) } } \caption{An example graph stream} \label{fig:graph} \end{minipage} \begin{minipage}{\columnwidth} \vspace{4ex} \centering \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{$h(\cdot)$} & 5 & 5 & 3 & 1 \\ \cline{2-5} \multicolumn{1}{c}{\color{white} {\Large I} } & \multicolumn{1}{c}{$ab, ac, ed, eb, ef$} & \multicolumn{1}{c}{$bc, bd, ba, bf, fa$} & \multicolumn{1}{c}{$ce, cf, gb$} & \multicolumn{1}{c}{$dg$}\\ \end{tabular} \caption{A hash-based sketch} \label{fig:cm} \end{minipage} \begin{minipage}{\columnwidth} \vspace{4ex} \centerline{ \xymatrix{ & II (bf) \ar@/^/[dl]_2 \ar@/^1pc/[dd]^(.7){1} \ar@/_/[dr]^1 \ar@(ur,ul)[]^1 & \\ I (ae) \ar@/^/[ur]^3 \ar@/_/[dr]^1 \ar@/^/[rr]^1 & & III(cg) \ar@/^/[ll]_1 \ar@/_/[ul]_2 \\ & IV(d) \ar@/_/[ur]^1 & } } \caption{A sample of {\sf gLava}\xspace sketch} \label{fig:sys} \end{minipage} \end{figure} } \begin{example} \label{exam:gs} Consider the graph stream in Example~\ref{exam:graph} and Fig.~\ref{fig:graph}. {\sf CountMin}\xspace treats each element in the stream independently, and maps it to one of $w$ hash buckets, using the pair of node labels as the hash key. Assume there are $w=4$ hash buckets. The set of values to be hashed is $\{ab, ac, bc, \cdots\}$, where the value $ab$ is the concatenation of the two node labels of the element $(a, b;t_1)$. A hash function is used to put these values in 4 buckets, \emph{i.e.,}\xspace $h(\cdot)\rightarrow [1, 4]$, as shown in Fig.~\ref{fig:cm}. The first bucket aggregates the weights of values starting with $a$ and $e$, and similarly for the other buckets. Note that, for simplicity, we use only one hash function for illustration. In fact, {\sf CountMin}\xspace uses $d$ pairwise independent hash functions to alleviate the problem of hash key collisions. {\sf CountMin}\xspace, and its variant {\sf gSketch}\xspace, can be used for estimating certain types of graph queries.
(a) {\em Edge query}. It can estimate the weight of particular edges: $5$ for $ab$, and $1$ for $dg$. (b) {\em Aggregate subgraph query.} It can answer queries like: {\em What is the aggregated weight for a graph with two edges $(a, c)$ and $(c, e)$}? It will give $5+3=8$ as the estimated weight for the graph. \end{example} Example~\ref{exam:gs} shows the applicability of {\sf CountMin}\xspace to some query estimation over graph streams. However, the main weakness of these (and other) sketch synopses is that they are element (or edge) independent. That is, they fall short of maintaining the connections between streaming elements, and thus fail in estimating many important queries, such as node monitoring, path queries (\emph{e.g.,}\xspace reachability) and more complicated graph analytics (see Section~\ref{subsec-queries} for a detailed discussion). \stitle{Challenges.} Designing a graph sketch to support a wide range of applications requires satisfying the following constraints. (1) {\em Space constraint:} a {\em sublinear} upper bound is needed. (2) {\em Time constraint:} {\em linear} construction time is required. Noticeably, this is stronger than merely requiring a constant number of passes over the stream. (3) {\em Maintenance constraint:} maintaining it under one element insertion/deletion should take {\em constant} time. (4) {\em Element connectivity:} the {\em connectivity} between elements should be maintained. To this end, we present {\sf gLava}\xspace, a novel generalized graph sketch to meet the above required constraints. Instead of treating elements (edges) in a graph stream independently as in prior art, the key idea of {\sf gLava}\xspace is to compress a graph stream based on a finer-grained item, the node in a stream element. \begin{example} \label{exam:sys} Again, consider the graph stream in Fig.~\ref{fig:graph}. Our proposed sketch is shown in Fig.~\ref{fig:sys}.
For each edge $(x, y; t)$, {\sf gLava}\xspace uses a hash function to map each node label to 4 node buckets, \emph{i.e.,}\xspace~$h'(\cdot)\rightarrow[1,4]$. Node $I$ is the summary of two node labels $a$ and $e$, assuming $h'(a) = 1$ and $h'(e) =1$. The other compressed nodes are computed similarly. The edge weight denotes the aggregated weights from stream elements, \emph{e.g.,}\xspace the number $3$ from node $I$ to $II$ means that there are three elements $(x, y; t)$ where the label of $x$ (resp. $y$) is $a$ or $e$ (resp. $b$ or $f$). \end{example} \etitle{Remark.} (1) It is easy to see that the estimation of edge frequencies, as supported by {\sf CountMin}\xspace, can be easily achieved by {\sf gLava}\xspace. Better still, the idea of using multiple hash functions to alleviate hash collisions can be easily applied to {\sf gLava}\xspace. (2) {\sf gLava}\xspace is represented as a graph, which captures not only the connections inside elements, but also the links across elements. These make it an ideal sketch to support a much wider range of applications, compared with prior art (see Section~\ref{subsec-queries} for a discussion). \stitle{Contributions.} This paper presents a novel graph sketch for supporting a wide range of graph stream applications. \vspace{1.2ex}\noindent(1) We introduce {\sf gLava}\xspace, a novel graph sketch (Section~\ref{subsec-gmodel}). As shown in Fig.~\ref{fig:sys}, the proposed sketch naturally preserves the graphical connections of the original graph stream, which makes it a better fit than traditional data stream sketches to support analytics over graph streams. We further categorize its supported graph queries (Section~\ref{subsec-queries}). \vspace{1.2ex}\noindent(2) We describe algorithms to process various graph analytics on top of {\sf gLava}\xspace (Section~\ref{sec-algs}).
The general purpose is, instead of proposing new algorithms, to show that {\sf gLava}\xspace can easily be used to support many graph queries, and that {\em off-the-shelf} graph algorithms can be seamlessly integrated. \vspace{1.2ex}\noindent(3) We perform theoretical analysis to establish error bounds for basic queries, specifically, edge frequency estimation and point query estimation (Section~\ref{sec-theory}). \vspace{1.2ex}\noindent(4) We describe implementation details (Section~\ref{sec-implementation}). This is important to ensure that the new sketch can be constructed and maintained under the hard time/space constraints to support streaming applications. Moreover, we propose to use non-square matrices, using the same space, to improve the accuracy of estimation. \vspace{1ex} In the history of graph problems over streams, unfortunately, most results have shown that a large amount of space is required for {\em complicated} graph problems~\cite{DBLP:books/crc/aggarwal2014, DBLP:journals/sigmod/McGregor14}. The present study fills a gap in the literature by analyzing various graph problems in a small amount of space. We thus contend that {\sf gLava}\xspace will shed new light on graph stream management. \stitle{Organization.} Section~\ref{sec-related} discusses related work. Section~\ref{sec-gapollo} defines the new sketch and the queries to be solved. Section~\ref{sec-algs} describes algorithms using {\sf gLava}\xspace. Section~\ref{sec-theory} presents theoretical analysis. Section~\ref{sec-implementation} describes implementation details. Finally, Section~\ref{sec-conclusion} concludes the paper with a summary of our findings. \section{Related Work} \label{sec-related} We categorize related work as follows. \etitle{Sketch synopses.} Given a data stream, the aim of sketch synopses is to apply (linear) projections of the data into lower dimensional spaces that preserve the salient features of the data.
A considerable amount of literature has been published on general data streams, such as AMS~\cite{DBLP:journals/jcss/AlonMS99}, Lossy Counting~\cite{DBLP:conf/vldb/MankuM02}, {\sf CountMin}\xspace~\cite{DBLP:journals/jal/CormodeM05} and Bottom-$k$~\cite{DBLP:journals/pvldb/CohenK08}. The work {\sf gSketch}\xspace~\cite{DBLP:journals/pvldb/ZhaoAW11} improves {\sf CountMin}\xspace for graph streams, by assuming that data samples or query samples are given. Bloom filters have been widely used in a variety of network problems (see~\cite{DBLP:journals/im/BroderM03} for a survey). There are also sketches that maintain counters only for nodes, \emph{e.g.,}\xspace using a heap for maintaining the nodes with the largest degrees for heavy hitters~\cite{DBLP:conf/pods/CormodeM05}. As remarked earlier, {\sf gLava}\xspace is more general, since it maintains connections inside and across elements, {\em without} assuming any sample data or query is given. None of the existing sketches maintains both node and edge information. \etitle{Graph summaries.} Summarizing graphs has been widely studied. The most prolific area is web graph compression. The papers~\cite{DBLP:conf/dcc/AdlerM01,DBLP:conf/dcc/SuelY01} encode Web pages with similar adjacency lists using reference encoding, so as to reduce the number of bits needed to encode a link. The work~\cite{DBLP:conf/icde/RaghavanG03} groups Web pages based on a combination of their URL patterns and $k$-means clustering. The paper~\cite{DBLP:conf/sigmod/FanLWW12} compresses graphs based on specific types of queries. There are also many clustering-based methods from the data mining community (see \emph{e.g.,}\xspace~\cite{DBLP:books/ph/JainD88}), with the basic idea of grouping {\em similar} nodes together. These data structures are designed for less dynamic graphs, and are not suitable for graph stream scenarios.
\etitle{Graph pattern matching over streams.} There have been several works on matching graph patterns over graph streams, based on either the semantics of subgraph isomorphism~\cite{DBLP:conf/icde/WangC09, DBLP:conf/edbt/ChoudhuryHCAF15, DBLP:conf/icde/GaoZZY14a} or graph simulation~\cite{DBLP:journals/pvldb/SongGCW14}. The work~\cite{DBLP:conf/icde/WangC09} assumes that queries are given, and builds a node-neighbor tree to filter false candidate results. The paper~\cite{DBLP:conf/icde/GaoZZY14a} leverages a distributed graph processing framework, Giraph, to approximately evaluate graph queries. The work~\cite{DBLP:conf/edbt/ChoudhuryHCAF15} uses the subgraph distributional statistics collected from the graph streams to optimize graph query evaluation. The paper~\cite{DBLP:journals/pvldb/SongGCW14} uses filtering methods to find data that potentially matches a specific type of queries, namely {\em degree-preserving dual simulation with timing constraints}. Firstly, all the above algorithms are designed for a certain type of graph queries. Secondly, most of them assume the presence of queries, so that they can build indices to accelerate query processing. In contrast, {\sf gLava}\xspace aims at summarizing graph streams in a generalized way, so as to support various types of queries, {\em without} any assumption about queries. \etitle{Graph stream algorithms.} There has also been work on algorithms over graph streams (see~\cite{DBLP:journals/sigmod/McGregor14} for a survey). This includes the problems of connectivity~\cite{DBLP:journals/tcs/FeigenbaumKMSZ05}, trees~\cite{targan1983}, spanners~\cite{DBLP:journals/talg/Elkin11}, sparsification~\cite{DBLP:journals/mst/KelnerL13}, and counting subgraphs, \emph{e.g.,}\xspace triangles~\cite{DBLP:conf/icalp/BravermanOV13, DBLP:conf/kdd/TsourakakisKMF09}. However, they mainly focus on theoretical study of the best approximation bounds, mostly in $O(n\,\mathrm{polylog}\,n)$ space, with one to multiple passes over the data stream.
In fact, {\sf gLava}\xspace is a friend of these algorithms, instead of a competitor. As will be seen later (Section~\ref{sec-algs}), {\sf gLava}\xspace can treat existing algorithms as {\em black-boxes} to help solve existing problems. \etitle{Distributed graph systems.} Many distributed graph computing systems have been proposed to conduct data processing and data analytics on massive graphs, such as Pregel~\cite{DBLP:conf/sigmod/MalewiczABDHLC10}, Giraph\footnote{http://giraph.apache.org}, GraphLab~\cite{DBLP:journals/pvldb/LowGKBGH12}, Power-Graph~\cite{DBLP:conf/osdi/GonzalezLGBG12} and GraphX~\cite{DBLP:conf/osdi/GonzalezXDCFS14}. They have been proven efficient on static graphs, but are not ready for doing analytics over big graph streams with real-time response. In fact, they are complementary to and can be used by {\sf gLava}\xspace in distributed settings. \section{gLava and Supported Queries} \label{sec-gapollo} We first define graph streams (Section~\ref{subsec-gs}) and state the studied problem (Section~\ref{subsec-ps}). We then introduce our proposed graph sketch model (Section~\ref{subsec-gmodel}). Finally, we discuss the queries supported by our new sketch (Section~\ref{subsec-queries}). \subsection{Graph Streams} \label{subsec-gs} A {\em graph stream} is a sequence of elements $e = (x, y; t)$, where $x, y$ are node identifiers (labels) and edge $(x, y)$ is encountered at time-stamp $t$. Such a stream, \[ G=\langle e_1, e_2, \cdots, e_m \rangle \] \ni naturally defines a graph $G=(V, E)$, where $V$ is a set of nodes and $E=\{e_1, \cdots, e_m\}$. We write $\omega(e_i)$ for the weight of the edge $e_i$, and $\omega(x, y)$ for the aggregated edge weight from node $x$ to node $y$. We call $m$ the {\em size} of the graph stream, denoted by $|G|$. Intuitively, the node label, being treated as an identifier, uniquely identifies a node, which could be, \emph{e.g.,}\xspace IP addresses in network traffic data or user IDs in social networks.
Note that, in graph terminology, a graph stream is a {\em multigraph}, where each edge can occur many times, \emph{e.g.,}\xspace one IP address can send multiple packets to another IP address. We are interested in properties of the underlying graph. This causes a main challenge with graph streams: one normally does not have enough space to record the edges that have been seen so far. Summarizing such graph streams in one pass is important to many applications. Moreover, we do not explicitly differentiate whether $(x, y)$ is an ordered pair or not. In other words, our approach applies naturally to both directed and undirected graphs. For instance, in network traffic data, a stream element arrives in the form $(192.168.29.1, 192.168.29.133, 62, 105.12)$\footnote{We omit port numbers and protocols for simplicity.}, where node labels $192.168.29.1$ and $192.168.29.133$ are IP addresses, $62$ is the number of bytes sent {\em from} $192.168.29.1$ {\em to} $192.168.29.133$ in this captured packet (\emph{i.e.,}\xspace the weight of the directed edge), and $105.12$ is the time in seconds at which this edge arrived, relative to when the server started to capture data. Please see Fig.~\ref{fig:graph} for a sample graph stream. \subsection{Problem statement} \label{subsec-ps} The problem of {\em summarizing graph streams} is, given a graph stream $G$, to design another data structure $S_G$ from $G$, such that: \begin{enumerate} \item $|S_G| \ll |G|$: the size of $S_G$ is far less than that of $G$, preferably in sublinear space. \item The time to construct $S_G$ from $G$ is linear. \item The update cost of $S_G$ for each edge insertion/deletion is constant. \end{enumerate} Intuitively, a graph stream summary has to be built and maintained in real time, so as to deal with big volume and high velocity graph stream scenarios.
In fact, {\sf CountMin}\xspace~\cite{DBLP:journals/jal/CormodeM05} and its variant {\sf gSketch}\xspace~\cite{DBLP:journals/pvldb/ZhaoAW11} satisfy the above conditions (see Example~\ref{exam:gs} for more details). Unfortunately, as discussed earlier, {\sf CountMin}\xspace and {\sf gSketch}\xspace can support only limited types of graph queries. \subsection{The gLava Model} \label{subsec-gmodel} \stitle{The graph sketch.} A {\em graph sketch} is a graph $S_G({\cal V}, {\cal E})$, where ${\cal V}$ denotes the set of vertices and ${\cal E}$ its edges. For a vertex $v\in {\cal V}$, we simply treat its label as its node identifier (the same as in the graph stream model). Each edge $e$ is associated with a weight, denoted as $\omega(e)$. In generating the above graph sketch $S_G$ from a graph $G$, we first set the number of nodes in the sketch, \emph{i.e.,}\xspace~let $|{\cal V}| = w$. For an edge $(x, y;t)$ in $G$, we use a hash function $h$ to map the label of each node to a value in $[1, w]$, and the aggregated edge weight is calculated correspondingly. Please refer to Fig.~\ref{fig:sys} as an example, where we set $w=4$. \etitle{Discussion about edge weight.} The edge weight for an edge $e$ in the graph sketch is computed by an {\em aggregation} function of all edge weights that are mapped to $e$. Such an aggregation function could be $\at{min}(\cdot)$, $\at{max}(\cdot)$, $\at{count}(\cdot)$, $\at{average}(\cdot)$, $\at{sum}(\cdot)$ or other functions. In this paper, we use $\at{sum}(\cdot)$ by default to explain our method. The other aggregation functions can be similarly applied. In practice, which aggregation function to use is determined by applications. For a more general setting that requires maintaining multiple aggregation functions in a graph sketch, we may extend our model to have multiple edge weights, \emph{e.g.,}\xspace~$\omega_1(e), \cdots ,\omega_n(e)$, with each $\omega_i(e)$ corresponding to an aggregation function.
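As a concrete illustration of this construction, the following minimal Python sketch builds a single $w \times w$ graph sketch with $\at{sum}(\cdot)$ aggregation. The class and function names are our own, and \texttt{zlib.crc32} with a seed stands in for a properly chosen hash function; this is an illustrative sketch, not the paper's implementation.

```python
# Illustrative single-sketch construction (not the paper's code):
# hash each node label into one of w buckets and aggregate edge
# weights in a w x w adjacency matrix using sum().
import zlib

def node_hash(label, seed, w):
    # Stand-in for a pairwise-independent hash h : labels -> [1, w].
    return zlib.crc32(f"{seed}:{label}".encode()) % w

class GraphSketch:
    def __init__(self, w, seed=0):
        self.w, self.seed = w, seed
        self.M = [[0] * w for _ in range(w)]  # aggregated edge weights

    def add_edge(self, x, y, weight=1):
        # Constant-time update per stream element (x, y; t).
        i = node_hash(x, self.seed, self.w)
        j = node_hash(y, self.seed, self.w)
        self.M[i][j] += weight  # sum() aggregation

    def edge_weight(self, x, y):
        i = node_hash(x, self.seed, self.w)
        j = node_hash(y, self.seed, self.w)
        return self.M[i][j]

# One pass over a toy stream (timestamps omitted).
S = GraphSketch(w=4)
for x, y in [("a", "b"), ("a", "b"), ("e", "f")]:
    S.add_edge(x, y)
# Hash collisions can only inflate the estimate, never deflate it.
assert S.edge_weight("a", "b") >= 2
```

Note that each update touches exactly one matrix cell, which is what makes the constant-time maintenance constraint attainable.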
\etitle{Remark.} One may observe that the graph sketch model is basically the same as the graph stream model, with the main difference that the time-stamps are not maintained. This makes it very useful and applicable in many scenarios when querying a graph stream for a given time window. In other words, for a graph analytics method $M$ that needs to run on a streaming graph $G$, denoted as $M(G)$, one can run it on its sketch $S_G$, \emph{i.e.,}\xspace~$M(S_G)$, to get an estimated result, without modifying the method $M$. \begin{example} \label{exam:gsedge} Consider the graph stream in Fig.~\ref{fig:graph} and its sketch in Fig.~\ref{fig:sys}. Assume that query $Q_1$ is to estimate the aggregated weight of edges from $b$ to $c$. In Fig.~\ref{fig:sys}, one can map $b$ to node $II$, $c$ to node $III$, and get the estimated weight $1$ from edge $(II, III)$, which is precise. Now consider $Q_2$, which is to compute the aggregated weight from $g$ to $b$. One can locate the edge $(III, II)$, and the estimated result is $2$, which is not accurate, since the real weight of $(g, b)$ in Fig.~\ref{fig:graph} is $1$. \end{example} The above result is expected, since given the compression, no hash function can ensure that the estimation on the sketch is always precise. Along the same lines as {\sf CountMin}\xspace~\cite{DBLP:journals/jal/CormodeM05}, we use multiple independent hash functions to reduce the probability of hash collisions.
\begin{figure}[t] \hspace*{2ex} \begin{minipage}{0.4\columnwidth} \centerline{ \xymatrixcolsep{0.1in} \xymatrix{ & II (bc) \ar@/^/[dl]_3 \ar@/^1pc/[dd]^(.7){1} \ar@/_/[dr]^1 \ar@(ur,ul)[]^1 & \\ I (af) \ar@(u,l)[]^1 \ar@/^/[ur]^2 & & III(dg) \ar@/_/[ul]_1 \ar@(u,r)[]^1 \\ & IV(e) \ar@/_/[ur]^1 \ar[ul]^1 \ar@/^/[uu]^(.3){1} & \\ } } \centerline{ \xymatrixrowsep{0.2in} \xymatrix{ & \txt{(a) Sketch $S_1$} & } } \end{minipage} \hspace*{6ex} \begin{minipage}{0.4\columnwidth} \centerline{ \xymatrixcolsep{0.1in} \xymatrix{ & ii (cd) \ar@/^1pc/[dd]^(.4){2} \ar[dr]^1 & \\ i (ab) \ar[ur]^3 \ar@(ru,lu)[]^2 \ar@/_/[dr]_1 & & iii(g) \ar@/^/[ll]_1 \\ & iv(ef) \ar@/_/[ul]^2 \ar@(dr,dl)[]_1 \ar@/^/[uu]^(.6){1} & } } \centerline{ \xymatrixrowsep{0.2in} \xymatrix{ & \txt{(b) Sketch $S_2$} & } } \end{minipage} \caption{A {\sf gLava}\xspace sketch with 2 hash functions} \label{fig:2hash} \end{figure} \stitle{The {\sf gLava}\xspace model.} A {\sf gLava}\xspace sketch is a set of graph sketches $\{S_1({\cal V}_1, {\cal E}_1), \cdots, S_d({\cal V}_d, {\cal E}_d)\}$. Here, we use $d$ hash functions $h_1, \cdots, h_d$, where $h_i$ ($i\in[1, d]$) is used to generate $S_i$. Also, $h_1, \cdots, h_d$ are chosen uniformly at random from a pairwise-independent family (see Section~\ref{subsec-pairwise} for more details). \begin{example} \label{exam:2hash} Figure~\ref{fig:2hash} shows another two sketches for Fig.~\ref{fig:graph}. Again, consider the query $Q_2$ in Example~\ref{exam:gsedge}. Using $S_1$ in Fig.~\ref{fig:2hash} (a), one can locate edge $(III, II)$ for $(g, b)$, which gives $1$. Similarly, $S_2$ in Fig.~\ref{fig:2hash} (b) will also give $1$ from edge $(iii, i)$, where $g$ (resp. $b$) maps to $iii$ (resp. $i$). The minimum of the above two outputs is $1$, which is correct. \end{example} Example~\ref{exam:2hash} shows that using multiple hash functions can indeed improve the accuracy of estimation. 
\subsection{Supported Graph Queries} \label{subsec-queries} As remarked in Section~\ref{subsec-gmodel}, for any graph analytics method $M$ to run over $G$, \emph{i.e.,}\xspace~$M(G)$, it is possible to run $M$ on each sketch directly and individually, and then merge the results as: $\tilde{M}(G) = \Gamma(M(S_1), \cdots, M(S_d))$, where $\tilde{M}(G)$ denotes the estimated result over $G$, and $\Gamma(\cdot)$ an aggregation function (\emph{e.g.,}\xspace~\at{min}, \at{max}, \at{conjunction}) to merge the results returned from the $d$ sketches. Whilst the exercise in this section only substantiates several types of graph queries to be addressed in this work, it is evident that the power of {\sf gLava}\xspace is far beyond what is listed below. \stitle{Edge frequency.} Given two node labels $a$ and $b$, we write $f_e(a, b)$ to denote the exact weight from an $a$-node to a $b$-node. We write $\tilde{f_e}(a, b)$ for the estimated weight from a sketch. One application of such queries, taking social networks for example, is to estimate the communication frequency between two specific friends. \stitle{Point queries.} For a directed graph, given a node label $a$, we study a boolean query $f_v(a, \leftarrow) > \theta$ (resp. $f_v(a, \rightarrow) < \theta$), which is to monitor whether the aggregated edge weight {\em to} (resp. {\em from}) a node with label $a$ is above (resp. below) a given threshold $\theta$ in the graph stream $G$. For an undirected graph, we write $f_v(a, \perp) > \theta$ and $f_v(a, \perp) < \theta$ correspondingly. Similarly, we use $\tilde{f_v}(a, \leftarrow), \tilde{f_v}(a, \rightarrow), \tilde{f_v}(a, \perp)$ for the estimated numbers using sketches. One important application of such queries is detecting DoS (Denial-of-Service) attacks in cyber security, which typically flood a target source (\emph{i.e.,}\xspace a computer) with massive external communication requests.
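The map-and-merge pattern $\tilde{M}(G) = \Gamma(M(S_1), \cdots, M(S_d))$ above can be phrased as a single higher-order function. The Python fragment below is a schematic illustration with made-up names, not an interface from the paper:

```python
def estimate(method, sketches, merge=min):
    """Run an off-the-shelf graph-analytics method M on each of the d
    sketches and merge the per-sketch results with an aggregation
    function (min by default, as used for frequency-style queries)."""
    return merge(method(S) for S in sketches)

# Toy demo: pretend each "sketch" already carries its per-sketch result.
per_sketch_results = [3, 1, 2]            # M(S_1), M(S_2), M(S_3)
assert estimate(lambda r: r, per_sketch_results) == 1
# A boolean query (e.g. reachability) merges with conjunction instead:
assert estimate(lambda r: r, [True, True, False], merge=all) is False
```

The point of the abstraction is that $M$ stays a black box: only the merge function $\Gamma$ changes with the query type.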
\stitle{Path queries.} Given two node labels $a$ and $b$, a (boolean) reachability query $r(a, b)$ is to tell whether they are connected. Also, we write $\tilde{r}(a, b)$ for the estimated reachability. One important monitoring task, for the success of multicast deployment in the Internet, is to verify the availability of service in the network, which is usually referred to as {\em reachability monitoring}. Another reachability application that needs to consider edge weights is IP routing, which is to determine the path of data flows in order to travel across multiple networks. \stitle{Aggregate subgraph query.} The {\em aggregate subgraph query} is considered in~{\sf gSketch}\xspace~\cite{DBLP:journals/pvldb/ZhaoAW11}. It is to compute the aggregate weight of the constituent edges of a sub-graph $Q = \{(x_1,y_1), \cdots, (x_k,y_k)\}$, denoted by $f(Q) = \Omega (f_e(x_1, y_1), \cdots, f_e(x_k,y_k))$. Here, the function $\Omega(\cdot)$ is to merge the weights from all $f_e(x_i, y_i)$ for $i\in[1, k]$. We write $\tilde{f}(Q)$ for the estimation over sketches. Note that {\sf gSketch}\xspace simply merges all estimated weights in $G$, \emph{e.g.,}\xspace the sum of all weights, even if one edge is missing (\emph{i.e.,}\xspace~$\tilde{f_e}(x_i, y_i)=0$ for some edge $e_i$). However, we use a different query semantics in this work: if $\tilde{f_e}(x_i, y_i)=0$, the estimated aggregate weight should be $0$, since the query graph $Q$ does not have an exact match. We use this revised semantics since it is a more practical setting. \begin{example} \label{exam:subgraph} Consider a subgraph with two edges as $Q: \{(a, b), (a, c)\}$. The query $Q_3: \tilde{f}(Q)$ is to estimate the aggregate weight of $Q$. The precise answer is $2$, which is easy to check from Fig.~\ref{fig:graph}. \end{example} \etitle{Extensions.} We consider an extension of the above aggregate subgraph query, which allows a wildcard $*$ in the node labels that are being queried.
More specifically, for the subgraph query $Q = \{(x_1,y_1), \cdots, (x_k,y_k)\}$, each $x_i$ or $y_i$ is either a constant value, or a wildcard $*$ (\emph{i.e.,}\xspace matching any label). A further extension is to {\em bind} wildcards to the same node, by using a subscript to a wildcard as $*_j$. That is, two wildcards with the same subscript are enforced to be mapped to the same node. \begin{example} \label{exam:wildcard} A subgraph query $Q_4: \tilde{f}(\{(a, b), (b, c), (c, a)\})$ is to estimate the triangle, \emph{i.e.,}\xspace a 3-clique with three vertices labeled as $a$, $b$, and $c$, respectively. Another subgraph query $Q_5: \tilde{f}(\{(*, b), (b, c), (c, *)\})$ is to estimate paths that start at any node with an edge to $b$, and end at any node with an edge from $c$, if the edge $(b, c)$ exists in the graph. If one wants to count the common neighbors of $(b, c)$, the query $Q_6: \tilde{f}(\{(*_1, b), (b, c), (c, *_1)\})$ can enforce such a condition, which again is a case of counting triangles. \end{example} The extension is clearly more general, with the purpose of covering more useful queries in practice. Unfortunately, {\sf gSketch}\xspace cannot support such extensions.
\begin{table}[t] \begin{center} \begin{tabular}{| c | c | } \hline \cellcolor{black}{\textcolor{white}{\at{Symbols}}} & \cellcolor{black}{\textcolor{white}{\at{Notations}}} \\ \hline $G$, $S_G$ & a graph stream, and a graph sketch \\ \hline $\omega(e)$ & weight of the edge $e$ \\ \hline $f_e(a,b)$ & edge weight \\ \hline $f_v(a, \leftarrow)$ & node in-flow weight (directed graphs) \\ \hline $f_v(a, \rightarrow)$ & node out-flow weight (directed graphs) \\ \hline $f_v(a, \perp)$ & node flow weight (undirected graphs) \\ \hline $r(a, b)$ & whether $b$ is reachable from $a$ \\ \hline $f(Q)$ & weight of subgraph $Q$ \\ \hline \end{tabular} \end{center} \vspace{-2.5ex} \caption{Notations}\label{tbl:notation} \end{table} \etitle{Summary of notations.} The notations of this paper are summarized in Table~\ref{tbl:notation}; they apply to both directed and undirected graphs, unless specified otherwise. \section{Query Processing} \label{sec-algs} We consider a graph stream $G$ and $d$ graph sketches $\{S_1, \cdots, S_d\}$ of $G$, where $S_i$ is constructed using a hash function $h_i(\cdot)$. In the following, we discuss how to process different types of queries. Again, we assume that $\at{sum}(\cdot)$ is the default aggregation function. Also, to facilitate the discussion, we assume that an adjacency matrix is used for storing {\sf gLava}\xspace. \subsection{Edge Query} \label{subsec:edgeq} Evaluating $\tilde{f_e}(a,b)$, the edge weight of $(a, b)$, is straightforward. It is to find the estimated edge weight $\omega_i(h_i(a), h_i(b))$ from each sketch and then use a corresponding function $\Gamma(\cdot)$ to merge them, as follows: \[ \tilde{f_e}(a,b) = \Gamma(\omega_1(h_1(a), h_1(b)), \cdots, \omega_d(h_d(a), h_d(b))) \] In this case, the function $\Gamma(\cdot)$ is to take the minimum.
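An end-to-end sketch of this edge-query estimator, with $\Gamma = \at{min}$ over $d$ seeded hash functions, can look as follows. This is illustrative Python under our own naming, with \texttt{zlib.crc32} standing in for the hash family:

```python
# Build d adjacency matrices (one per hash function h_1..h_d) and
# answer an edge query by taking the minimum over the d estimates.
import zlib

def h(label, seed, w):
    return zlib.crc32(f"{seed}:{label}".encode()) % w

def build(stream, w, d):
    M = [[[0] * w for _ in range(w)] for _ in range(d)]
    for x, y, wt in stream:          # one pass, O(d) work per element
        for i in range(d):
            M[i][h(x, i, w)][h(y, i, w)] += wt
    return M

def edge_query(M, a, b, w):
    # f~_e(a,b) = min_i omega_i(h_i(a), h_i(b)): O(d) time and space.
    return min(Mi[h(a, i, w)][h(b, i, w)] for i, Mi in enumerate(M))

M = build([("a", "b", 5), ("d", "g", 1), ("a", "c", 2)], w=4, d=3)
assert edge_query(M, "a", "b", w=4) >= 5   # never under-estimates
```

Taking the minimum keeps the estimate one-sided: each per-sketch count over-counts by its collisions only, so the smallest count is the tightest available bound.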
\stitle{Complexity.} It is easy to see that estimating the aggregate weight of an edge query is in $O(d)$ time and $O(d)$ space, where $d$ is a constant. \subsection{Point Queries} \label{subsec:nodeq} Here, we only discuss the case $\tilde{f_v}(a, \leftarrow) > \theta$. That is, given a new element $(x, y; t)$, estimate in real-time whether the aggregate weight to node $a$ is above the threshold $\theta$. The other cases, \emph{i.e.,}\xspace~$\tilde{f_v}(a, \leftarrow) < \theta$, $\tilde{f_v}(a, \rightarrow) > \theta$, $\tilde{f_v}(a, \rightarrow) < \theta$, $\tilde{f_v}(a, \perp) > \theta$ and $\tilde{f_v}(a, \perp) < \theta$, can be processed similarly. We use the following strategy to monitor $\tilde{f_v}(a, \leftarrow) > \theta$, given an incoming edge $e: (x, y; t)$. Note that if $y \ne a$, we simply update all $d$ sketches in constant time, since this is not an edge to $a$. Next, we only describe the case of an edge $e: (x, a; t)$. \etitle{Step 1.} [Estimate current in-flow.] We write $\tilde{f_v^i}(a, \leftarrow)$ for the estimated in-flow from the $i$-th sketch. $\tilde{f_v^i}(a, \leftarrow)$ can be computed by first locating the column in the adjacency matrix corresponding to label $a$ (\emph{i.e.,}\xspace~$h_i(a)$), and then summing up the values in that column, \emph{i.e.,}\xspace $\tilde{f_v^i}(a, \leftarrow) = \sum_{j=1}^w {\cal M}_i[j][h_i(a)]$. Here, $w$ is the width of the adjacency matrix ${\cal M}_i$. Then, \[ \tilde{f_v}(a, \leftarrow) = \Gamma(\tilde{f_v^1}(a, \leftarrow), \cdots, \tilde{f_v^d}(a, \leftarrow)) \] In this case, the function $\Gamma(\cdot)$ is to take the minimum. \etitle{Step 2.} [Monitor $e$.] We send an alarm only if $\tilde{f_v}(a, \leftarrow) + \omega(e) > \theta$. \etitle{Step 3.} [Update all sketches.] Update all $d$ sketches by aggregating the new edge weight $\omega(e)$. \stitle{Complexity.} It is easy to see that Step~1 takes $O(d+w)$ time and $O(d)$ space, where both $d$ and $w$ are constants.
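Steps 1 and 2 above can be sketched as follows, with the column sum $\sum_{j} {\cal M}_i[j][h_i(a)]$ computed per sketch and merged by $\at{min}$. The function name \texttt{monitor\_inflow} and the matrix layout are our illustrative choices, not the paper's code; Step 3 (updating the sketches) is the same aggregation used at construction time and is omitted.

```python
import zlib

def h(label, seed, w):
    return zlib.crc32(f"{seed}:{label}".encode()) % w

def monitor_inflow(M, a, edge_weight, theta):
    """Given d adjacency matrices M and an incoming edge (x, a; t),
    alarm when the estimated in-flow to a, plus the new edge weight,
    exceeds theta."""
    w = len(M[0])
    est = min(sum(M[i][j][h(a, i, w)] for j in range(w))  # column sum
              for i in range(len(M)))
    return est + edge_weight > theta

# Tiny hand-built instance: d = 1, w = 2, every column sums to 7,
# so the estimate is 7 whichever bucket "a" hashes to.
M = [[[3, 3], [4, 4]]]
assert monitor_inflow(M, "a", edge_weight=1, theta=7) is True
assert monitor_inflow(M, "a", edge_weight=1, theta=10) is False
```

A practical implementation would maintain per-column running sums alongside the matrix, reducing Step~1 from $O(d+w)$ to $O(d)$ at the cost of $O(dw)$ extra counters.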
\subsection{Path Queries} \label{subsec:pathq} We consider the reachability query $\tilde{r}(a, b)$, which is to estimate whether $b$ is reachable from $a$. For such queries, we treat any off-the-shelf algorithm $\at{reach}(x, y)$ as a {\em black-box} and show our strategy. \etitle{Step 1.} [Map.] We invoke $\at{reach}_i(h_i[a], h_i[b])$ on the $i$-th sketch (for $i\in[1, d]$), to decide whether the mapped node $h_i[b]$ is reachable from the mapped node $h_i[a]$. \etitle{Step 2.} [Reduce.] We merge the individual results as follows: \[ \tilde{r}(a, b) = \at{reach}_1(h_1[a], h_1[b]) \wedge \cdots \wedge \at{reach}_d(h_d[a], h_d[b]) \] That is, the estimated result is \mbox{\em true}~only if the mapped nodes are reachable in {\em all} $d$ sketches. The complexity of the above strategy is determined by the algorithm $\at{reach}()$. \subsection{Aggregate Subgraph Query} \label{subsec:subgraphq} We next consider the aggregate subgraph query $\tilde{f}(Q)$, which is to compute the aggregate weight of the constituent edges of a sub-graph $Q$. The process is similar to that of the above path queries, using any existing algorithm $\at{subgraph}(Q)$. \etitle{Step 1.} [Map.] We first invoke $\at{subgraph}(Q)$ at each sketch to find subgraph matches, and calculate the aggregate weight, denoted by $\at{weight}\xspace_i(Q)$ for the $i$-th sketch. \etitle{Step 2.} [Reduce.] We merge the individual results as follows: \[ \tilde{f}(Q) = \at{min}(\at{weight}\xspace_1(Q), \cdots, \at{weight}\xspace_d(Q)) \] Note that running a graph algorithm on a sketch is only applicable to {\sf gLava}\xspace. It is not applicable to {\sf gSketch}\xspace, since {\sf gSketch}\xspace by nature is an array of frequency counters, without maintaining the graphical structure as {\sf gLava}\xspace does. Also, if $\at{weight}\xspace_i(Q)$ from some sketch indicates that no subgraph match exists, we can terminate the whole process early, which provides opportunities for optimization.
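For the path-query strategy above, any reachability routine can serve as the black box; in the following illustrative sketch (our own code, under assumed names) a plain BFS plays that role, followed by the conjunction merge of Step 2:

```python
import zlib
from collections import deque

def h(label, seed, w):
    return zlib.crc32(f"{seed}:{label}".encode()) % w

def reach(Mi, s, t):
    # Off-the-shelf BFS on one sketch, treated as a black box.
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return True
        for v in range(len(Mi)):
            if Mi[u][v] > 0 and v not in seen:
                seen.add(v)
                q.append(v)
    return False

def reach_query(M, a, b, w):
    # r~(a,b) = reach_1(..) AND ... AND reach_d(..): true only if the
    # mapped nodes are reachable in all d sketches.
    return all(reach(Mi, h(a, i, w), h(b, i, w)) for i, Mi in enumerate(M))

# One sketch with directed edges 0 -> 1 -> 2.
M = [[[0, 1, 0], [0, 0, 1], [0, 0, 0]]]
assert reach(M[0], 0, 2) is True
assert reach(M[0], 2, 0) is False
```

Because compression only merges nodes, a path in the stream always survives in every sketch; the conjunction therefore never reports \mbox{\em false}~for a truly reachable pair, and extra hash collisions can only introduce false positives.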
\stitle{Optimization.} Recall that a subgraph $Q$ is defined by its constituent edges $\{(x_1,y_1), \cdots, (x_k,y_k)\}$ (Section~\ref{subsec-queries}). An alternative way of estimating the aggregate subgraph query is to first compute the minimum estimate for each constituent edge and then sum them up. We denote this approach by $\tilde{f'}(Q)$ as follows: \[ \tilde{f'}(Q) = \sum_{i=1}^k \tilde{f_e}(x_i, y_i) \] It is easy to see that $\tilde{f'}(Q) \leq \tilde{f}(Q)$. Recall that there are two extensions of the subgraph queries (Section~\ref{subsec-queries}). For the first extension, where a wildcard $*$ is used, the above optimization can be applied. For instance, the edge frequency $\tilde{f_e}(x, *)$ for a directed graph is indeed $\tilde{f_v}(x, \rightarrow)$. For the second extension, where multiple wildcards $*_i$ are bound to the same node, this optimization cannot be applied. \section{Error Bounds} \label{sec-theory} We study the error bounds for two basic types of queries: edge queries and point queries. Although {\sf gLava}\xspace is more general, we show that theoretically, it has the same error bounds as {\sf CountMin}\xspace. \subsection{Edge Frequency} Our proof for the error bound of edge frequency queries is an adaptation of the proof used for {\sf CountMin}\xspace in Sec.~4.1 of \cite{DBLP:journals/jal/CormodeM05}. \begin{theorem} The estimation $\tilde{f_e}(j, k)$ of the cumulative edge weight of the edge $(j,k)$ has the following guarantees: $f_e(j, k) \leq \tilde{f_e}(j, k)$; and, with probability at least $1 - \delta$, $\tilde{f_e}(j, k) \le f_e(j, k) + \epsilon * n$. Here, $f_e(j, k)$ is the exact answer to the cumulative edge weight of the edge $(j, k)$, $n$ denotes the number of nodes, and the error in answering a query is within a factor of $\epsilon$ with probability $1-\delta$\footnote{The parameters $\epsilon$ and $\delta$ are usually set by the user.}. 
\end{theorem} \begin{proof} Consider a query for the edge $(j, k)$. By construction, for each hash function $h_i$, the weight $\omega(e)$ of any stream edge $e: (j, k; t)$ is added to $\kw{count}\xspace(h_i(j),h_i(k))$. Therefore, the exact answer $f_e(j, k)$ is less than or equal to $\min_i (\kw{count}\xspace(h_i(j),h_i(k)))$ for $i\in[1, d]$, where $d$ is the number of hash functions. Consider two edges $(j, k)$ and $(l, m)$. We define an indicator variable $I_{i,j,k,l,m}$, which is 1 if there is a collision between the two distinct edges, \emph{i.e.,}\xspace $((j, k) \neq (l, m)) \wedge (h_i(j)=h_i(l)) \wedge (h_i(k)=h_i(m))$, and 0 otherwise. By pairwise independence of the hash functions, \mat{4ex}{ $E(I_{i,j,k,l,m})$ \= $= Pr[h_i(j)=h_i(l) \wedge h_i(k)=h_i(m)]$ \\ \> $\leq (1/\at{range}(h_i))^2=\epsilon'^2/e^2$ \textcolor{white}{\huge P} } \ni where $e$ is the base of the natural logarithm, and $\at{range}(h_i)$ is the number of hash buckets of function $h_i$. Define the variable $X_{i,j,k}$ (random over the choices of $h_i$) to accumulate the weight of the edges colliding with the edge $(j, k)$, formalized as $X_{i,j,k}=\Sigma_{l=1 \ldots n,m=1 \ldots n} I_{i,j,k,l,m} f_e(l,m)$, where $n$ is the number of nodes in the graph. Since all edge weights $f_e(l,m)$ are non-negative in this case, $X_{i,j,k}$ is non-negative. We write $\kw{count}\xspace[i,h_i(j),h_i(k)]$ for the count in the hash bucket relative to hash function $h_i$. By construction, $\kw{count}\xspace[i,h_i(j),h_i(k)] = f_e(j,k) + X_{i,j,k}$. So clearly, $\min_i (\kw{count}\xspace[i,h_i(j),h_i(k)]) \geq f_e(j, k)$. \mat{4ex}{ \=$E(X_{i,j,k})$ \\ \>\hspace{4ex}\=$=E(\sum_{l=1 \ldots n, m=1 \ldots n}I_{i,j,k,l,m}f_e(l,m))$ \textcolor{white}{\huge P} \\ \>\>$= \sum_{l=1 \ldots n, m=1 \ldots n}E(I_{i,j,k,l,m})\cdot f_e(l,m) \leq (\epsilon'/e)^2* n$ \textcolor{white}{\huge P} } \ni by pairwise independence of $h_i$, and linearity of expectation. Let $\epsilon'^2 = \epsilon$. 
By the Markov inequality, \mat{4ex}{ \= $Pr[\tilde{f_e}{(j,k)} > f_e(j,k) + \epsilon * n]$ \= \\ \>\hspace{4ex}\= $=Pr[\forall_i. \at{count}[i, h_i(j), h_i(k)] > f_e(j,k) + \epsilon * n]$ \textcolor{white}{\huge P}\\ \>\> $=Pr[\forall_i. f_e(j,k)+X_{i,j,k}>f_e(j,k)+ \epsilon * n]$ \textcolor{white}{\huge P}\\ \>\> $\leq Pr[\forall_i. X_{i,j,k} > e * E(X_{i,j,k})] < e^{-d} \leq \delta$ \textcolor{white}{\huge P} } \end{proof} Our algorithm generates the same number of collisions and the same error bounds under the same probabilistic guarantees as {\sf CountMin}\xspace for edge frequency queries. \subsection{Point Queries} We first discuss the query for node out-degree, \emph{i.e.,}\xspace $f_v(a, \rightarrow)$. Consider the stream of edges $e: (a, *; t)$, \emph{i.e.,}\xspace edges from node $a$ to any other node indicated by a wildcard $*$. Drop the destination (\emph{i.e.,}\xspace the wildcard) of each edge. The stream now becomes a stream of tuples $(a, \omega(t))$ where $\omega(t)$ is the edge weight from node $a$ at time $t$. When a query is posed to find the out-degree of node $a$, {\sf CountMin}\xspace~\cite{DBLP:journals/jal/CormodeM05} returns the minimum of the weights in the different hash buckets as the estimation of the flow out of node $a$. If the number of unique neighbors (\emph{i.e.,}\xspace connected using one hop, ignoring the weights) is sought, we adapt the procedure above by replacing $\omega(t)$ with 1 for all nodes. Clearly, by construction, the answer obtained is an over-estimation of the true out-degree for two reasons: (a) collisions in the hash-buckets, and (b) self-collisions, \emph{i.e.,}\xspace we do not know whether an edge has been seen previously and thus count an outgoing edge again even if it has been seen before. The case for in-degree point queries is similar. To compute the in-degree of a node, we simply adapt the stream to create a one-dimensional stream as in the case above. 
Drop the source of each edge to create the stream. The total in-flow and the number of neighbors with in-links to a node can then be estimated as in the out-degree case outlined above. The error estimates for point queries (see Sec.~4.1 of \cite{DBLP:journals/jal/CormodeM05}) hold for these cases. \begin{lemma} \label{degree} The estimated out-degree (in-degree) is within a factor of $\epsilon$ with probability $\delta$ if we use $d=\lceil \ln(1/\delta) \rceil$ rows of pair-wise independent hash functions and $w=\lceil e/\epsilon \rceil$. \end{lemma} \begin{figure}[t] \hspace*{2ex} \begin{minipage}{0.25\columnwidth} \centerline{ \xymatrixcolsep{0.1in} \xymatrix{ a \ar[d] \\ b \\ } } \centerline{ \xymatrixrowsep{0.2in} \xymatrix{ & \txt{(a) Edge} & } } \end{minipage} \hspace*{-2ex} \begin{minipage}{0.25\columnwidth} \centerline{ \xymatrixcolsep{0.1in} \xymatrix{ a \ar[d] & \\ b \ar[r] & c \\ } } \centerline{ \xymatrixrowsep{0.2in} \xymatrix{ & \txt{(b) Path} & } } \end{minipage} \hspace*{-2ex} \begin{minipage}{0.25\columnwidth} \centerline{ \xymatrixcolsep{0.1in} \xymatrix{ & a \ar[d]\ar[dl]\ar[dr] & \\ b & c & d \\ } } \centerline{ \xymatrixrowsep{0.2in} \xymatrix{ & \txt{(c) Star} & } } \end{minipage} \hspace*{-2ex} \begin{minipage}{0.25\columnwidth} \centerline{ \xymatrixcolsep{0.1in} \xymatrix{ a \ar[d] & \\ b \ar[r] & c \ar[ul] \\ } } \centerline{ \xymatrixrowsep{0.2in} \xymatrix{ & \txt{(d) Cycle} & } } \end{minipage} \caption{Studied cases} \label{fig:2hash} \end{figure} \section{Implementation Details} \label{sec-implementation} In this section, we first discuss the data structures used to implement {\sf gLava}\xspace (Section~\ref{subsec-adjacent}). We then introduce the definition of pairwise independent hash functions (Section~\ref{subsec-pairwise}). 
We also discuss its potential extension in a distributed environment (Section~\ref{subsec:distributed}). \subsection{Adjacency Matrix and Its Extension} \label{subsec-adjacent} Using hash-based methods, it is evident that only one pass over the graph stream is required to construct/update a {\sf gLava}\xspace. However, whether a {\sf gLava}\xspace can be constructed and updated in linear time depends on the data structures used. For example, an adjacency list may not be a good fit, since searching for a specific edge does not take constant time. \subsubsection{Adjacency Matrix} \label{subsubsec-matrix} An adjacency matrix is a means of representing which vertices of a graph are adjacent to which others. \begin{example} \label{exam:matrix} Consider the sketch $S_1$ in Fig.~\ref{fig:2hash}~(a) for example. Its adjacency matrix is shown in Fig.~\ref{fig:matrix}. \end{example} Example~\ref{exam:matrix} showcases a directed graph. In the case of an undirected graph, it will be a symmetric matrix. \begin{figure}[t] \centering \begin{tabular}{|c|c|c|c|c|} \multicolumn{1}{c}{{\color{white} {\Huge j}} $\at{from}\setminus\at{to}$} & \multicolumn{1}{c}{$I(af)$} & \multicolumn{1}{c}{$II(bc)$} & \multicolumn{1}{c}{$III(dg)$} & \multicolumn{1}{c}{$IV(e)$}\\ \cline{2-5} \multicolumn{1}{c|}{$I(af)$} & 1 & 2 & 0 & 0 \\ \cline{2-5} \multicolumn{1}{c|}{$II(bc)$} & 3 & 1 & 1 & 1 \\ \cline{2-5} \multicolumn{1}{c|}{$III(dg)$} & 0 & 1 & 1 & 0 \\ \cline{2-5} \multicolumn{1}{c|}{$IV(e)$} & 1 & 1 & 1 & 0 \\ \cline{2-5} \end{tabular} \caption{The adjacency matrix of $S_1$} \label{fig:matrix} \end{figure} \stitle{Construction.} Consider a graph stream $G=\langle e_1, e_2, $ $\cdots, e_m \rangle$ where $e_i = (x_i, y_i; t_i)$. Given a number of hash buckets $w$ and a hash function $h(\cdot) \rightarrow [1, w]$, we use the following strategy. \etitle{Step 1.} [Initialization.] Construct a $w\times w$ matrix ${\cal M}$, with all values initialized to be $0$. \etitle{Step 2.} [Insertion of $e_i$.] 
Compute $h(x_i)$ and $h(y_i)$. Increase the value of ${\cal M}[h(x_i)][h(y_i)]$ by $\omega(e_i)$, the weight of $e_i$. In the above strategy, step 1 takes constant time to allocate the matrix. Step 2 takes constant time for each $e_i$. Hence, the time complexity is $O(m)$, where $m$ is the number of edges in $G$. The space used is $O(w^2)$. \stitle{Deletions.} Insertions have been discussed in step 2 above. To delete an edge $e_i$ that is no longer of interest (\emph{e.g.,}\xspace out of a certain time window), we simply decrease the value of ${\cal M}[h(x_i)][h(y_i)]$ by $\omega(e_i)$ in $O(1)$ time. Alternatively, one may consider using an adjacency hash-list, which maintains, for each vertex, a list of its adjacent nodes in a hash table. Given an edge $e_i: (x_i, y_i; t_i)$, two hash operations are needed: the first is to locate $x_i$, and the second is to find $y_i$ in $x_i$'s hash-list. Afterwards, it updates the corresponding edge weight. An adjacency list is known to be suitable when the graph is sparse. However, for the compressed graphs in our case, as will be shown later in the experiments, most sketches are relatively dense, which makes the adjacency matrix the default data structure for managing our graph sketches. \subsubsection{Using Non-Square Matrices} \label{subsubsec-matrix2} When using a classical square matrix for storing a sketch, we have an $n * n$ matrix. Consider all edges from node $a$, such as $(a, *)$. Using any hash function will inevitably hash all of these edges to the same row. For example, in Fig.~\ref{fig:matrix}, all edges $(a, *)$ will be hashed to the first row of the matrix. When there is only one hash function to use, due to the lack of {\em a priori} knowledge of the data distribution, it is hard to decide the right shape of a matrix. However, the application of $d$ sketches provides us with an opportunity to heuristically reduce the chance of combined hash collisions. 
The basic idea is, instead of using an $n * n$ matrix with one hash function, to use an $m * p$ matrix with two hash functions: $h_1(\cdot) \rightarrow [1, m]$ on the {\em from} nodes and $h_2(\cdot) \rightarrow [1, p]$ on the {\em to} nodes. \begin{figure}[t] \centering \begin{tabular}{|c|c|c|} \multicolumn{1}{c}{{\color{white} {\Huge j}} $\at{from}\setminus\at{to}$} & \multicolumn{1}{c}{$i(abcd)$} & \multicolumn{1}{c}{$ii(efg)$} \\ \cline{2-3} \multicolumn{1}{c|}{$I(a)$} & 2 & 0 \\ \cline{2-3} \multicolumn{1}{c|}{$II(b)$} & 3 & 1 \\ \cline{2-3} \multicolumn{1}{c|}{$III(c)$} & 0 & 2 \\ \cline{2-3} \multicolumn{1}{c|}{$IV(d)$} & 0 & 1 \\ \cline{2-3} \multicolumn{1}{c|}{$V(e)$} & 2 & 1 \\ \cline{2-3} \multicolumn{1}{c|}{$VI(f)$} & 1 & 0 \\ \cline{2-3} \multicolumn{1}{c|}{$VII(g)$} & 1 & 0 \\ \cline{2-3} \end{tabular} \caption{A non-square matrix} \label{fig:matrix2} \end{figure} \begin{example} \label{exam:matrix2} Consider the graph stream in Fig.~\ref{fig:graph}. Assume that we use two hash functions: $h_1(\cdot) \rightarrow [1, 7]$ and $h_2(\cdot) \rightarrow [1, 2]$. The non-square matrix is shown in Fig.~\ref{fig:matrix2}. \end{example} In practice, when we can generate multiple sketches, we heuristically use matrices $n * n$, $2n * n/2$, $n/2 * 2n$, $4n * n/4$, $n/4 * 4n$, etc. That is, we use matrices with the same size but different shapes. \subsection{Pairwise Independent Hash Functions} \label{subsec-pairwise} Here, we only borrow the definition of pairwise independent hash functions\footnote{\url{http://people.csail.mit.edu/ronitt/COURSE/S12/handouts/lec5.pdf}}, while relying on existing tools to implement them. 
A family of functions ${\cal H} = \{h | h(\cdot) \rightarrow [1, w]\}$ is called a {\em family of pairwise independent hash functions} if for any two distinct hash keys $x_i \neq x_j$, and any $k, l \in [1, w]$, \[ \at{Pr}_{h \leftarrow {\cal H}}[h(x_i) = k \wedge h(x_j) = l] = 1 / w^2 \] Intuitively, when using multiple hash functions to construct sketches, the hash functions used should be pairwise independent in order to reduce the probability of hash collisions. Please refer to~\cite{DBLP:journals/jal/CormodeM05} for more details. \subsection{Discussion: Distributed Environment} \label{subsec:distributed} In this paper, we mainly discuss how to implement {\sf gLava}\xspace in a centralized environment. However, it is easy to observe that {\sf gLava}\xspace can also be applied in a distributed environment, since the construction and maintenance of each sketch is independent of the others. Assuming that we have $d$ sketches on one computing node, when $m$ nodes are available, we can use $d\times m$ pairwise independent hash functions, which may significantly reduce the probability of errors. \section{Conclusion} \label{sec-conclusion} We have proposed a new graph sketch for summarizing graph streams. We have demonstrated its wide applicability to many emerging applications. \bibliographystyle{abbrv}
\section{Introduction} \label{main} Within the last decade many fully 3d exact coherent states of the Navier-Stokes equations have been identified. They are important for understanding the transition to turbulence in various shear flows, for determining the thresholds for the transition, and for characterizing the subsequent evolution towards fully developed turbulence \cite{Eckhardt2007,Eckhardt2008b}. Coherent structures provide a scaffold for the turbulent time evolution because they are embedded in the turbulent dynamics \cite{Kawahara2001} and form a network of heteroclinic connections \cite{Halcrow2009}. Tracking exact coherent states also provided insights into the formation of the chaotic saddles and transient turbulence in plane Couette, pipe, and plane Poiseuille flow (PPF) \cite{Kreilos2012,Avila2013,Zammert2015}. Special coherent structures, so-called edge states \cite{Skufca2006}, are important for the transition to turbulence since their stable manifold separates the state space into a part with turbulence and another one where initial conditions relaminarize directly. Among the various shear flows that have been studied, PPF is special because it also has a linear instability of the laminar profile which triggers Tollmien-Schlichting (TS) waves \cite{Henningson}. The extent to which they contribute to the observed transition then depends on the order of appearance of the different states. As we will see, some states are \textit{harbingers} that appear well below the experimentally observed transition, and others are \textit{latecomers}, appearing well above the transition. Harbingers prepare the experimentally observed transition to turbulent dynamics, and latecomers add additional states and degrees of freedom at higher Reynolds numbers. While critical Reynolds numbers for linear instabilities of the laminar profile are well defined and unique, this is not the case for the subcritical transitions to be discussed here. 
For the subcritical case, it does make a difference whether the system is run under conditions of constant mass flux or prescribed pressure or perhaps constant energy input \cite{Yosuke2013}. All exact coherent states have a fixed relation between pressure drop and mean flow rate, but the quest for the bifurcation point focuses on different projections and hence gives different critical Reynolds numbers depending on whether the flow rate or the pressure drop or some other quantity is kept constant. The setting of operating conditions also affects the choice of Reynolds numbers. To be specific, we define \begin{equation} Re_{B}=\frac{3 U_{B}d }{ 2\nu}, \label{Re_B} \end{equation} for the case of constant mean bulk velocity $U_{B}$. The other parameters are $d$, half the channel width, and $\nu$, the kinematic viscosity. If the pressure gradient is fixed, we define the pressure-based Reynolds number, \begin{equation} Re_{P}=\frac{U_{cl,P}d}{\nu}, \label{Re_P} \end{equation} where $U_{cl,P}=d^2 (dP/dx) / 2\nu$ is the laminar centerline velocity for this value of the pressure gradient. The numerical factors in (\ref{Re_B}) are chosen such that the two Reynolds numbers agree for the case of a laminar profile, $Re_B=Re_P$. For 3d coherent states they are usually different. Therefore, even if the laminar state coincides, the state space of the system at different operating conditions may be different, because in one case some exact coherent structures might already exist that are not yet present in the other case. For direct numerical simulations we used the \textit{Channelflow} code (www.channelflow.org) \cite{J.F.Gibson2012}. Exact coherent structures were identified and continued in Reynolds number using the Newton hookstep \cite[see e.g.][]{Viswanath2007} and continuation \cite[see e.g.][]{Dijkstra2013} methods included in the \textit{Channelflow} package. 
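The agreement of the two Reynolds number definitions for the laminar profile follows from $U_{cl}=\tfrac{3}{2}U_B$ for a parabolic profile. A quick numerical check (the values of $d$, $\nu$ and the pressure gradient below are arbitrary illustrative choices):

```python
# Check that Re_B and Re_P agree for a laminar parabolic profile.
d = 1.0       # half channel width
nu = 1e-3     # kinematic viscosity
dPdx = 2e-3   # magnitude of the streamwise pressure gradient (arbitrary)

# laminar centerline velocity for this pressure gradient
U_cl = d**2 * dPdx / (2 * nu)
# bulk velocity of the parabolic profile: U_B = (2/3) U_cl
U_B = 2 * U_cl / 3

Re_P = U_cl * d / nu
Re_B = 3 * U_B * d / (2 * nu)
```

The factor $3/2$ in the definition of $Re_B$ exactly compensates the ratio $U_{cl}/U_B$ of the parabolic profile, so the two numbers coincide in the laminar case; for 3d coherent states this relation no longer holds and the two definitions give different values.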
\section{Spatially extended and localized Tollmien-Schlichting waves} The laminar state of PPF has a subcritical instability. The exact values for the critical Reynolds number and wavelength had remained unclear \cite{Heisenberg1924,Lin1945,Thomas1953} until the issue was finally settled by Orszag \cite{Orszag1971}, who calculated a critical Reynolds number of $5772.22$ for a critical wavelength of $1.96\pi$ using spectral methods. Chen \& Joseph \cite{Chen1973} then demonstrated the existence of two-dimensional travelling wave solutions, so-called finite amplitude TS-waves, bifurcating subcritically from the laminar flow. These two-dimensional exact coherent structures were later studied by Zahn et al. \cite{Zahn1974}, Jimenez \cite{Jimenez1990a} and Soibelman \& Meiron \cite{Soibelman1991}. Estimates for the lowest Reynolds number of the turning points were first given by Grohne \cite{Grohne1969} and Zahn et al. \cite{Zahn1974}. Using a truncation after one or two modes they found that the traveling waves appear slightly above $Re=2700$. Their results were qualitatively confirmed in later studies using a higher number of modes \cite{Herbert76}. In our numerical simulations we use the \textit{channelflow} code with resolution $N_{x} \times N_{y} \times N_{z}=80 \times 97 \times 4$. The streamwise and spanwise resolution is chosen sufficiently fine to ensure that a further increase does not change the obtained Reynolds numbers. The code is set up to solve the fully 3d problem; however, it can be reduced to an effectively 2d code with the choice of $N_z=4$, since all the modes with $k_z\ne0$ enter with zero amplitude. We therefore can stay within the same algorithmic framework and code and can exploit all the modules of \textit{channelflow}. 
We identify the traveling waves that bifurcate from the laminar flow (referred to as $TW_{TS}$ in the following) using a Newton method and continue them in Reynolds number with the continuation methods within \textit{channelflow}. Figure \ref{fig_BifTSwaves}a) shows the bifurcation diagram versus streamwise wavelength. The stability curve for the laminar profile is the same for the constant pressure gradient and constant mass flux case, and has a minimum at $Re=5772$ and a wavelength of $6.16$. The bifurcation is subcritical and reaches to lower Reynolds numbers, but these points then depend on the operating conditions: the minimal values are $Re_{B} = 2609$ and $Re_{P} = 2941$. The critical wavelengths are $4.65$ and $4.81$, respectively, and thus smaller than the one for the linear instability of the laminar profile. The flow field of $TW_{TS}$ at the minimal Reynolds number $Re_{B}=2609$ is visualized in figure \ref{fig_BifTSwaves}b). \begin{figure}[t]\vspace*{4pt} \centering \includegraphics[]{fig/fig_ReBifVsLx.pdf}\includegraphics[]{fig/fig_TWTS.pdf} \caption{Tollmien-Schlichting waves in PPF. (a) Instability of the laminar profile (red) and existence regions for spatially extended TS waves. The green and blue lines show the Reynolds number of the turning point vs. streamwise wavelength $L_{x}$ for the constant mean flow (green) and constant pressure gradient (blue), respectively. The minima of the curves are marked by a red circle: $Re=5772$ and $\lambda=6.16$ for the linear instability of the laminar profile, $Re_B= 2609$ and $\lambda=4.65$ for constant mass flux and $Re_P=2941$ and $\lambda=4.81$ for constant pressure drop. (b) Visualization of the TS wave for the minimal Reynolds number at constant mass flux. The direction of the flow is indicated by the black arrow. An iso-surface $0.01$ of the Q-vortex criterion is shown in yellow. Iso-surfaces $\pm0.12$ of the streamwise velocity (deviation from laminar) are shown in blue and red. 
}\vspace*{-6pt} \label{fig_BifTSwaves} \end{figure} In a study on two-dimensional PPF, Jimenez \cite{Jimenez1990a} discovered streamwise modulated packets of TS-waves. They have a turning point significantly lower than the spatially extended traveling wave $TW_{TS}$ \cite{Price1993}. In even longer domains, they turn into fully localized states that are relative periodic orbits, generated out of subharmonic instabilities of the spatially extended TS waves \cite{Drissi1999,Mellibovsky2015}. Figure \ref{fig_BiflocTSwaves}a) shows a bifurcation diagram for the modulated TS-waves and the spatially extended traveling wave from which they bifurcate. The ordinate of the diagram is the amplitude of the flow field, \begin{equation} a(\textbf{u}) = \sqrt{\frac{1}{2L_{x}L_{z}}\int \textbf{u}^{2} dx dy dz}, \end{equation} where $L_{x}$ and $L_{z}$ are the streamwise and spanwise wavelengths of the computational domain. The spatially extended TS-wave shown in the figure has a wavelength $\lambda_{0}=2\pi$ and the localized and modulated TS-waves are created in Hopf-bifurcations that correspond to streamwise wavelengths $L_{x}$ between $4$ and $16$ times $\lambda_{0}$. Just above their bifurcations all solutions are modulated TS-waves, but if the wavelength of the modulation is sufficiently long they become increasingly more localized with decreasing $Re_{B}$. E.g., for the short modulation wavelength $L_{x}=6 \lambda_{0}$, the solution is modulated but extends across the domain, whereas for $L_{x}=16 \lambda_{0}$ it becomes well localized at low Reynolds numbers. The lowest Reynolds numbers for the turning point are $Re_{B}=2335$ and $Re_{P}=2372$ and are achieved for the localized TS-wave with modulation wavelength $16\lambda_{0}$. Further increasing the modulation wavelength does not change the localized state and does not result in significant changes of the Reynolds number for the turning point. 
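The amplitude $a(\textbf{u})$ defined above can be evaluated numerically on a discretized velocity field. A minimal sketch, assuming a grid that is uniform in all three directions with $y\in[-1,1]$ (a simplified trapezoidal quadrature, not the spectral evaluation used by \textit{channelflow}):

```python
import numpy as np

def amplitude(u, v, w, Lx, Lz, y):
    """a(u) = sqrt( 1/(2 Lx Lz) * int (u^2 + v^2 + w^2) dx dy dz )."""
    q = u**2 + v**2 + w**2                  # |u|^2 on the (x, y, z) grid
    Nx, Ny, Nz = q.shape
    dx, dz = Lx / Nx, Lz / Nz               # uniform spacings in the periodic directions
    wy = np.full(Ny, y[1] - y[0])           # trapezoidal weights for uniform y
    wy[0] *= 0.5
    wy[-1] *= 0.5
    integral = (q * wy[None, :, None]).sum() * dx * dz
    return float(np.sqrt(integral / (2 * Lx * Lz)))
```

With the wall-normal extent equal to $2$, the prefactor $1/(2L_xL_z)$ makes $a$ the root-mean-square velocity, so a spatially constant field of magnitude $c$ has $a=c$.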
The spatially extended states as well as their localized counterparts are extremely unstable to three-dimensional disturbances. However, the lower branches are stable against super-harmonic two-dimensional disturbances. The superharmonic bifurcations of the spatially extended TS-waves were investigated in detail by Casas and Jorba \cite{Casas2012}, who found that the upper branch of $TW_{TS}$ undergoes Hopf bifurcations creating periodic states for the case of constant pressure gradient as well as for constant mass flux. The lower branch of the streamwise localized TS-wave has only one unstable eigenvalue in the two-dimensional subspace and thus it is the edge state of the system \cite{Skufca2006,Kreilos2012}. The upper branches undergo further bifurcations, leading to a chaotic temporal evolution. For the localized TS-wave, the upper branch undergoes a Hopf bifurcation and adds another frequency at $Re_{B}=4470$. Further bifurcations of this quasi-periodic state add additional frequencies leading to a chaotic state. In figure \ref{fig_BifDiag_LocTS32pi}a) a bifurcation diagram that includes the localized TS-wave and the bifurcating states is given. The Hopf bifurcation of the upper branch is marked in the figure by a red square. For $Re_{B}>4493$ the chaotic state becomes unstable and spontaneous transitions to a spatially extended chaotic state are possible. An example of a trajectory which spends several thousand time units in the vicinity of the localized chaotic state before it switches to the spatially extended state is shown in figure \ref{fig_BifDiag_LocTS32pi}b). \begin{figure}[t]\vspace*{4pt} \centering \includegraphics[]{fig/fig_BifDiagTSwaves.pdf}\includegraphics[]{fig/fig_LocTS.pdf} \caption{Localized TS-waves. (a) Bifurcations off the extended state for different domain widths. 
The black line shows the spatially extended TS-wave with streamwise wavelength $\lambda_{0}=2\pi$ and the colored lines show bifurcating periodic orbits corresponding to streamwise wavelengths $n\lambda_{0}$, where $n$ is an integer between $2$ and $16$. The squares indicate the bifurcations of the extended state, and the circles the minimal critical Reynolds numbers for the different domains, from which a global minimum of $Re_B=2335$ is estimated. (b) Localized TS-wave with $L_{x}=16\lambda_{0}$ at the minimum $Re_{B}=2335$. Isosurfaces of $Q=0.01$ are shown in yellow, isosurfaces of $u=0.1$ and $u=-0.1$ in red and blue.}\vspace*{-6pt} \label{fig_BiflocTSwaves} \end{figure} \begin{figure} \centering \includegraphics[]{fig/fig_ReBif_LocTS.pdf}\includegraphics[]{fig/fig_TrajR4496.pdf} \caption{Bifurcations of localized TS-waves. (a) The upper branch of the localized TS-wave undergoes a Hopf bifurcation at $Re_B=4470$ (red square) to a state with two frequencies. The states are indicated by the minima in $a$, which for a periodic orbit give one point, and for the quasiperiodic one a line (see inset). The turning point is marked by a red dot, the bifurcation point by a red disk. (b) For even higher Reynolds numbers, the localized state undergoes a transition to a spatially extended state, here illustrated with the time trace of $a(t)$ for a state with $Re_B=4496$. The sharp increase at times $O(4000)$ marks the transition from the localized to an extended state. \label{fig_BifDiag_LocTS32pi}} \end{figure} \section{Three-dimensional travelling waves} The first three-dimensional solutions for PPF were described by Ehrenstein \& Koch \cite{Ehrensteint1991}. They found three-dimensional traveling waves below the saddle-node point of the two-dimensional TS-waves and studied their dependence on the streamwise and spanwise wavelength. However, their calculations were strongly truncated and attempts to reproduce their results with better resolution failed \cite{Nagata2013a}. 
Subsequent studies identified many other three-dimensional exact solutions for PPF \cite{Waleffe2001,Nagata2013a,Gibson2014,Zammert2014a,Zammert2014b,Rawat2014}. Waleffe \cite{Waleffe2003} and Nagata \& Deguchi \cite{Nagata2013a} studied the dependence of particular solutions on the streamwise and spanwise wavelengths. They give as estimates for the lowest critical Reynolds numbers for the appearance of a traveling wave the values $Re_{B}=615$ for $L_{z}=1.55\pi$ and $L_{x}=1.14\pi$ in the case of constant mass flux, and $Re_{P}=805.5$ for $L_{x}=1.504\pi$, $L_{z}=1.156\pi$ in the case of constant pressure drop. In this section, we investigate a special three-dimensional coherent structure, obtained by tracking the edge state \cite{Skufca2006,Schneider2007} in appropriately chosen computational domains. We use a numerical resolution of $N_{x}\times N_{y} \times N_{z}=32 \times 65 \times 64$ for small domains and increase it to $N_{x}\times N_{y} \times N_{z}=80 \times 65 \times 112$ for a domain with $L_{x}=L_{z}=4\pi$. For sufficiently large domains the edge state of PPF is a traveling wave \cite{Zammert2014b} that is symmetric with respect to the center-plane, \begin{equation} s_{y}: [u,v,w](x,y,z)=[u,-v,w](x,-y,z), \end{equation} and obeys a shift-and-reflect symmetry in addition, \begin{equation} s_{z}\tau_{x}(\lambda_{x}/2): [u,v,w](x,y,z)=[u,v,-w](x+\lambda_{x}/2,y,-z). \end{equation} To obtain the optimal wavelengths for this travelling wave, referred to as $TW_{E}$ in the following, we studied the dependence of the critical Reynolds number on the streamwise and spanwise wavelength. In a straightforward scan of the wavelength domain with a step width of $0.1\pi$ and $0.05\pi$ in the spanwise and streamwise direction, respectively, we obtained the results shown in figure \ref{fig_ReMinTW}. The optimal Reynolds number is marked by a star. 
As indicated by the white lines in figure \ref{fig_ReMinTW}, the lowest Reynolds numbers for the turning points are achieved when spanwise and streamwise wavelengths are of comparable size, as long as the wavelengths are small enough that the states are not localized in the spanwise direction. In wide domains, for sufficiently large $L_z$, the solution becomes localized in the spanwise direction, and the turning point depends on $L_{x}$ only. The lowest bulk Reynolds number for the turning point is $315.8$, achieved for $L_{x}=2.9\pi$ and a width $L_{z}= 3.05\pi$. A visualization of the flow structures for these critical parameter values is given in figure \ref{fig_YZplaneTW}. The lowest value of the pressure-based Reynolds number is $Re_{P}=339.1$, which is also achieved for $L_{x}=2.9\pi$ and $L_{z}=3.05\pi$. Both minimal Reynolds numbers are much lower than those reported in previous studies \cite{Waleffe2003,Nagata2013a}. The optimal state $TW_E$ may be compared to the coherent states in plane Couette flow. There, the lowest critical Reynolds number for the appearance of exact coherent structures is $127.705$ \cite{Clever1997,Waleffe2003}, based on half the channel height, and half the velocity difference between the plates. PPF is like two plane Couette flows stacked on top of each other, though with different boundary conditions (free slip instead of rigid) at the interface. For the Reynolds numbers one then has to take into account that the mean velocity in PPF corresponds to the full velocity difference between the plates in plane Couette flow, and half the channel width corresponds to the full gap. Therefore, when comparing Reynolds numbers, the ones from the usual definition of plane Couette flow have to be multiplied by four. Using this redefinition of the Reynolds number, the minimal Reynolds number of $Re=127.705$ for plane Couette flow corresponds to $Re_{B}=510$ for PPF. 
The proper steps for this comparison were taken by Waleffe \cite{Waleffe2003}, who implemented a homotopy between the two flows, including the boundary conditions. He finds an optimal Reynolds number $Re_{B}\approx 642$. Moreover, the optimal wavelengths for his state are $L_x=1.86\pi $ and $L_z= 0.74$, much shorter and narrower than for the state described here (see figure \ref{fig_ReMinTW}). Therefore, for the time being, the travelling wave $TW_E$ described here is the lowest lying critical state for plane Poiseuille flow. \begin{figure}[t]\vspace*{4pt} \centering \includegraphics[]{fig/fig_ReMinBif_TWE_v2.pdf} \\ \includegraphics[]{fig/fig_ReMinBif_TWE_ReP.pdf} \caption{Wavelength dependence of the critical Reynolds numbers for the traveling wave $TW_E$ for (a) constant mean flow and (b) constant pressure drop. The Reynolds number is color-coded and the optimal values are marked by a star. They are $Re_B=315.8$ for constant mean flow, and $Re_{P}=339.1$ for constant pressure drop. In both cases the corresponding wavelengths are $L_{x}=2.9\pi$ and $L_{z}=3.05\pi$. The white line marks the optimal $L_x$ for a given $L_z$. }\vspace*{-6pt} \label{fig_ReMinTW} \end{figure} \begin{figure}[t]\vspace*{4pt} \centering \includegraphics[]{fig/fig_TWE_Remin.pdf} \caption{Instantaneous snapshot of $TW_{E}$ for $L_{x}=2.9\pi$ and $L_{z}=3.05\pi$ at the turning point at $Re_{B}=315.8$. Iso-surfaces of $Q=0.01$ are shown in yellow, isosurfaces of $u=0.1$, $-0.25$ of the streamwise velocity component (deviation from the laminar profile) in red and blue. The direction of the flow is indicated by the black arrow.}\vspace*{-6pt} \label{fig_YZplaneTW} \end{figure} \begin{figure} \centering \includegraphics[]{fig/fig_BifDiag_TWE.pdf} \caption{Bifurcation diagram of $TW_{E}$ for the spatial wavelengths leading to the lowest bulk Reynolds number for the bifurcation point. 
For Reynolds numbers where the wave has only one unstable direction ($Re_{B}>490$) a solid red line is used, while if the wave has more unstable directions a dashed line is used. The turning point is marked by a red dot. Red squares mark the positions of Hopf bifurcations and triangles those of pitchfork bifurcations. The black dots are obtained from maxima and minima during the time-evolution. They indicate further bifurcations and the appearance of temporal chaos. \label{fig_BifDiag_TWE}} \end{figure} For the combination of wavelengths giving the lowest value of $Re_{B}$ for the turning point, the bifurcation diagram of $TW_{E}$ is shown in figure \ref{fig_BifDiag_TWE}. In a subspace with the symmetries $s_{y}$ and $s_{z}\tau_{x}(\lambda_{x}/2)$ the upper branch of $TW_{E}$ is stable for $Re_{B}<328.6$. At this Reynolds number the traveling wave undergoes a Hopf bifurcation, creating a periodic orbit which is stable in the symmetric subspace. In further bifurcations a chaotic attractor is created, which remains stable in the subspace. The attractor is visualized in figure \ref{fig_BifDiag_TWE} by plotting minima and maxima of $a(\vec{u})$ for a trajectory on the attractor. \section{Streamwise localized three-dimensional periodic orbits} In spatially extended computational domains the edge state $TW_E$ undergoes long-wavelength instabilities creating modulated and localized exact solutions \cite{Melnikov2013}. In particular, a subcritical long-wavelength instability of $TW_{E}$ at high Reynolds numbers creates a streamwise localized periodic orbit, henceforth referred to as $PO_{E}$, which is an edge state in long computational domains \cite{Zammert2014b} and appears at low Reynolds numbers in a saddle-node bifurcation. The dependence of the Reynolds number of this saddle-node bifurcation on the spanwise wavelength is shown in figure \ref{fig_POE}a). The lowest Reynolds number for the turning point is achieved for a spanwise wavelength of $1.75\pi$.
For this value of $L_{z}$ the bifurcation point lies at $Re_{B}=1018.5$ and $Re_{P}=1023.18$, respectively. For large values of $L_{z}$ the orbit $PO_{E}$ also becomes localized in the spanwise direction \cite{Zammert2014b}, but for these spanwise wavelengths the bifurcation point lies at a much higher Reynolds number. A visualization of $PO_{E}$ for the minimal bulk Reynolds number is shown in figure \ref{fig_POE}b). While the localized TS waves, which also arise out of a subharmonic instability of a spatially extended state, exist at lower Reynolds numbers than the corresponding spatially extended solutions, the localized orbit $PO_{E}$ appears at a much higher Reynolds number than its corresponding spatially extended solution $TW_{E}$. \begin{figure} \centering \includegraphics[]{fig/fig_ReBif_LocPO.pdf}\includegraphics[]{fig/fig_POE.pdf} \caption{The localized periodic orbit $PO_{E}$. (a) Reynolds number of the bifurcation point of $PO_{E}$ in dependence on the spanwise wavelength $L_{z}$. Note that while the critical Reynolds numbers again differ between constant mean flow and constant pressure drop, the optimal wavelengths coincide. (b) Instantaneous snapshot of $PO_{E}$ for $L_{z}=1.75\pi$ and $Re_{B}=Re_{B,min}(PO_{E})=1018.5$. Isosurfaces of $Q=0.002$ are shown in yellow and isosurfaces $u=0.075$, $-0.15$ of the streamwise velocity component (deviation from the laminar profile) in red and blue. The direction of the flow is indicated by the black arrow. In the center-plane the streamwise velocity is color-coded from blue to red. } \label{fig_POE} \end{figure} \section{Summary and conclusion} The different states and their critical Reynolds numbers and wavelengths are summarized in table \ref{tab1}. It is interesting to see that for the TS-waves, the localized structures appear before the extended ones, whereas for the 3d states the localized ones have a higher critical Reynolds number.
Moreover, all TS waves appear well above the experimentally observed onset of turbulence, near $Re_B\approx 1000$ \cite{Henningson}. \begin{table} \tbl{Critical values for the different bifurcations in PPF. The columns give the critical Reynolds numbers $Re_{B}$, $Re_{P}$ and $Re_{\tau}$ and the associated optimal wavelengths for exact coherent structures and the laminar profile.} { \begin{tabular}{cccc|cccc} \toprule & $Re_{B,min}$ & $L_{x}$ & $L_{z}$ & $Re_{P,min}$& $Re_{\tau,min}$ & $L_{x}$ & $L_{z}$ \\ \colrule $TW_{TS}$ & $2610$ & $1.48\pi$ & - &2941& 76.7 &$1.53\pi$&- \\ Localized TS & 2334 & - & - &2373 & 68.9 &- &- \\ $TW_{E}$ & 315.8 & $2.9\pi$ &$3.05\pi$ &339.1 &26.04 &$2.9\pi$& $3.05\pi$\\ $PO_{E}$ & 1018.5 & - &$1.75\pi$& 1023.18 &45.23 & - & $1.75\pi$\\ \colrule Laminar state & 5772.22 & $1.96\pi$&- &5772.22&196.98&$1.96\pi$& - \\ \botrule \end{tabular} } \label{tab1} \end{table} The traveling wave $TW_{E}$ appears at very low Reynolds number, well below the experimentally observed onset of turbulence. The bifurcation diagram in figure \ref{fig_BifDiag_TWE} shows the usual increase in complexity and the presence of a crisis bifurcation, in which the attractor turns into a repellor and the dynamics becomes transient. In addition to the states discussed here, there are a variety of bifurcations in which spanwise localized and doubly-localized solutions branch off from $TW_E$. All of these states, as well as exact coherent structures with different flow fields, contribute to the temporal evolution and to the network that forms with increasing Reynolds number and that can then carry the turbulent dynamics. Very little is known about the lowest possible Reynolds number for the appearance of coherent structures, and there are hardly any methods for determining them reliably \cite{Pausch2014a,Chernyshenko:2014je,Huang:2015jp}. However, the state $TW_E$ has the potential to be the lowest possible state in PPF, by analogy to the lowest lying state in plane Couette flow.
Identification of lower lying exact coherent structures in either flow should therefore also have implications for the other flow. Among the exact coherent structures, $TW_{E}$ is a harbinger for the occurrence of turbulence, whereas the two-dimensional TS-waves are latecomers. They do not contribute to the subcritical transition at low Reynolds numbers, but are related to a secondary path to turbulence in PPF that can be realized if special precautions are taken to prevent the faster transition via 3d structures. \vspace{0.5cm} This work was supported in part by the DFG within FOR 1182. \bibliographystyle{tJOT}
\section{Introduction} \label{intro} The \textit{Ricci flow} evolution equation, given by the set of partial differential equations \begin{eqnarray}\label{in1} \frac{\partial\tilde{g}_{\mu\nu}}{\partial t}=-2R_{\mu\nu}, \end{eqnarray} was introduced by Hamilton \cite{ham1} and used to prove the sphere theorem, with \(\tilde{g}_{\mu\nu}\) denoting a ``time''-dependent metric of a manifold with Ricci curvature \(R_{\mu\nu}\). Self-similar solutions to \eqref{in1} are the Einstein metrics, while fixed points are Ricci flat manifolds. The flow \eqref{in1} has proved useful in many applications \cite{df1,df2}, including famously aiding the proof of the Poincar\'e conjecture by Perelman \cite{gp1,gp2,gp3}. A limitation of \eqref{in1} is that it is not geometrical, in the sense that it depends on the choice of the coordinate parametrising the flow. This led to the modification of \eqref{in1}, known as the \textit{Hamilton-DeTurck} flow, given by \begin{eqnarray}\label{in2} \frac{\partial\tilde{g}_{\mu\nu}}{\partial t}=-2R_{\mu\nu}+\mathcal{L}_{\tilde{X}}\tilde{g}_{\mu\nu}, \end{eqnarray} where \(\tilde{X}\) is a smooth vector field which generates the change of coordinates along the flow (necessarily a diffeomorphism), and \(\mathcal{L}_{\tilde{X}}\) is the Lie derivative along the vector field \(\tilde{X}\). The extra term in \eqref{in2} involving the Lie derivative leaves \eqref{in2} invariant under changes of the parameter along the flow. Self-similar solutions to \eqref{in2} then solve \begin{eqnarray}\label{in3} R_{\mu\nu}+\left(\frac{1}{2}\mathcal{L}_{\tilde{X}}-\varrho\right)\tilde{g}_{\mu\nu}=0, \end{eqnarray} where \(\varrho\in\mathbb{R}\). The equation \eqref{in3} is the \textit{Ricci soliton} equation, and naturally generalizes Einstein manifolds. These objects therefore have use in the study of the Ricci geometric flow. There is a wealth of literature on Ricci flows, with most work dedicated to the so-called gradient Ricci solitons.
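In the special case where the vector field is the gradient of a smooth potential, \(\tilde{X}=\nabla f\), the Lie derivative reduces to a Hessian, \(\frac{1}{2}\mathcal{L}_{\tilde{X}}\tilde{g}_{\mu\nu}=\nabla_{\mu}\nabla_{\nu}f\), and \eqref{in3} takes the familiar gradient Ricci soliton form \begin{eqnarray*} R_{\mu\nu}+\nabla_{\mu}\nabla_{\nu}f=\varrho\,\tilde{g}_{\mu\nu}, \end{eqnarray*} the soliton being called shrinking, steady or expanding according to whether \(\varrho>0\), \(\varrho=0\) or \(\varrho<0\).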
The geometry of these objects has been extensively studied, even more so in the case of three-dimensional Riemannian manifolds, where various classification schemes have been provided. In contrast to the \(3\)-dimensional Riemannian case, the Lorentzian case and the case of embedded hypersurfaces of Lorentzian manifolds are less studied, though some interesting results in this regard have been obtained (see \cite{tom1,eric1} and references therein). In dimension two, Hamilton's cigar soliton \cite{ham1} is the only complete steady gradient Ricci soliton with positive curvature (also see \cite{bern1} for more discussion on the classification of \(2\)-dimensional complete Ricci solitons). Complete classification has been provided in dimension three for the case in which the Ricci soliton is shrinking. Due to results by Ivey \cite{iv1}, Perelman \cite{gp1}, Cao \textit{et al.} \cite{cao1} and others, it is known that shrinking Ricci solitons in \(3\) dimensions are quotients of the \(3\)-sphere \(\mathbb{S}^3\), the cylinder \(\mathbb{R}\times\mathbb{S}^2\) or the Gaussian gradient Ricci soliton on \(\mathbb{R}^3\). Bryant \cite{bry1} constructed a steady rotationally symmetric gradient Ricci soliton, and Brendle \cite{bre1} showed that this soliton is the only non-flat \(\kappa\)-noncollapsed steady Ricci soliton in dimension three. More recently, classification of the expanding case has been considered under certain integral assumptions on the scalar curvature \cite{cat1,der1} (also, see \cite{oc1,ma1,pet1,wal1,pet2} and associated references for additional results on Ricci solitons). The subject of the geometry of hypersurfaces is an extensively studied area in (pseudo-)Riemannian geometry. Besides purely mathematical interest, hypersurfaces play very fundamental roles in various areas of theoretical physics, and many applications can be found especially in Einstein's theory of General Relativity.
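To fix ideas, the cigar can be written explicitly: on \(\mathbb{R}^2\) the metric and potential \begin{eqnarray*} g=\frac{dx^2+dy^2}{1+x^2+y^2},\qquad f=-\log\left(1+x^2+y^2\right), \end{eqnarray*} satisfy the steady (\(\varrho=0\)) gradient soliton equation \(R_{\mu\nu}+\nabla_{\mu}\nabla_{\nu}f=0\); this metric is complete, has positive curvature decaying to zero at infinity, and is asymptotic to a cylinder.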
Cauchy surfaces, for example, are used to formulate Einstein's equations as an initial value problem. Another area of prominence is black hole horizons (or more generally \textit{marginally trapped tubes}), where these hypersurfaces ``separate'' the black hole region from external observers. The geometry and topology of these hypersurfaces have been extensively studied under varying assumptions on the spacetime, albeit from different perspectives. In fact, there are nice classification results by Hall and Capocci \cite{hac} and Sousa \textit{et al.} \cite{sea} for \(3\)-dimensional spacetimes, which apply to embedded hypersurfaces. In principle then, the study of Ricci soliton structures on black hole horizons could provide a geometric classification of horizons, which motivates this work. Indeed, one may infer other geometrical properties of hypersurfaces with a Ricci soliton structure, including restrictions placed on the spacetimes in which they are embedded. \subsection{Objective of paper} Effectively employed to study problems in cosmology, astrophysics and perturbation theory (see \cite{pg1,cc1,crb1,gbc1} and references therein), the \(1+1+2\) covariant formalism will be used here to study Ricci soliton structures on a specified class of hypersurfaces embedded in spacetimes admitting a \(1+1+2\) decomposition, as a first step to potential future applications to the study of the geometry of black hole horizons. We will begin by using the \(1+1+2\) covariant approach to study some general properties of the class of hypersurfaces on which we wish to investigate Ricci soliton structure, working from the point of view of the soliton equation (without focusing on the general properties and dynamics of the Ricci flow equation). In most cases studied in the literature, the spacetime is known; a choice of hypersurface is then made and studied as a proper subspace of the spacetime.
Here, we shall proceed by first prescribing a form of the Ricci tensor for the hypersurfaces, and working out some of the possible restrictions on these hypersurfaces. (In this case the results will be applicable to classes of spacetimes admitting hypersurfaces with the prescribed form of the Ricci curvature.) The most general form of the Ricci tensor for embedded hypersurfaces in \(1+1+2\) decomposed spacetimes is worked out, and the conditions reducing the general case to that which is considered throughout this work are specified. We then investigate Ricci soliton structure on these surfaces and see how the nature of the soliton constrains the geometry of the hypersurfaces, as well as physical quantities specifying the hypersurfaces. \subsection{Outline of paper} This work has the following outline: in Section \ref{soc1} we present a brief introduction to the formalism of the \(1+1+2\) spacetime decomposition. Section \ref{soc4} presents the form of the Ricci tensor for the class of hypersurfaces to be investigated throughout this work. The associated curvature quantities are then written in terms of the \(1+1+2\) covariant quantities, and the Gauss-Codazzi equations explicitly specified. In Section \ref{soc5} we give a characterisation of the hypersurfaces and study the various constraints induced by properties of the curvature quantities. In Section \ref{soc6} we present a detailed investigation of the case when the considered hypersurfaces admit a Ricci soliton structure. Section \ref{soc7} considers the results from the previous sections in the context of a well-known class of spacetimes, the locally rotationally symmetric class II spacetimes. Finally, we conclude in Section \ref{soc8} with a summary and discussion of the results obtained in this work. \section{\(1+1+2\) spacetime decomposition}\label{soc1} In this section we introduce the \(1+1+2\) covariant splitting of spacetime.
We will provide enough details so that those not very familiar with the formalism find it easy to follow the rest of the paper. A great deal of excellent literature exists detailing this approach and its applications to relativistic astrophysics and cosmology; the interested reader is referred to \cite{cc1} and references therein. The procedure for implementing the \(1+1+2\) spacetime decomposition starts with the splitting of the spacetime in the following manner: choose a unit tangent vector field along the observer's congruence, usually denoted \(u^{\mu}\), satisfying \(u_{\mu}u^{\mu}=-1\). This choice of vector field induces a split of the \(4\)-dimensional metric \(g_{\mu\nu}\) as \begin{eqnarray}\label{micel1} h_{\mu\nu}=g_{\mu\nu}+u_{\mu}u_{\nu}, \end{eqnarray} where the tensor \(h_{\mu\nu}\) projects vectors and tensors orthogonal to \(u^{\mu}\) onto the \(3\)-space resulting from the \(1+3\) splitting. This projector is the first fundamental form of the \(3\)-space. The splitting introduces two derivatives from the full covariant derivative \(\nabla_{\mu}\) of the spacetime: \begin{enumerate} \item \textbf{The derivative along the \(u^{\mu}\) direction}: for any tensor \(S^{\mu\dots \nu}_{\ \ \ \ \mu'\dots \nu'}\) one has the derivative \begin{eqnarray}\label{micel2} \dot{S}^{\mu\dots \nu}_{\ \ \ \ \mu'\dots \nu'}=u^{\sigma}\nabla_{\sigma}S^{\mu\dots \nu}_{\ \ \ \ \mu'\dots \nu'}; \end{eqnarray} \item \textbf{The fully orthogonally projected derivative via \(\bf{h^{\mu\nu}}\) on all indices}: for any tensor \(S^{\mu\dots \nu}_{\ \ \ \ \mu'\dots \nu'}\) one has the derivative \begin{eqnarray}\label{micel3} D_{\sigma}S^{\mu\dots \nu}_{\ \ \ \ \mu'\dots \nu'}=h^{\mu}_{\ \gamma}h^{\gamma'}_{\ \mu'}\dots h^{\nu}_{\ \delta}h^{\delta'}_{\ \nu'}h^{\varrho}_{\ \sigma}\nabla_{\varrho}S^{\gamma\dots \delta}_{\ \ \ \ \gamma'\dots \delta'}.
\end{eqnarray} \end{enumerate} The first derivative is usually associated with the observer's time, and is called the covariant `\textit{time}' derivative; for simplicity it is usually called the dot derivative. The second derivative is called the `\(D\)' derivative. The next step in the splitting is to make a choice of a normal direction, denoted \(e^{\mu}\), which splits the \(3\)-space, is orthogonal to \(u^{\mu}\), and satisfies \(e_{\mu}e^{\mu}=1\). This leads to the further splitting of the spacetime metric \(g_{\mu\nu}\) as \begin{eqnarray}\label{micel4a} N_{\mu\nu}=g_{\mu\nu}+u_{\mu}u_{\nu}-e_{\mu}e_{\nu}, \end{eqnarray} where the tensor \(N_{\mu\nu}\) projects vectors and tensors orthogonal to both \(u^{\mu}\) and \(e^{\mu}\) onto \(2\)-surfaces (referred to as the sheet). Two further derivatives are also introduced: \begin{enumerate} \item \textbf{The derivative along the \(e^{\mu}\) direction}: for any \(3\)-tensor \(S^{\mu\dots \nu}_{\ \ \ \ \mu'\dots \nu'}\) one has the derivative \begin{eqnarray}\label{micel100w} \hat{S}^{\mu\dots \nu}_{\ \ \ \ \mu'\dots \nu'}=e^{\sigma}\nabla_{\sigma}S^{\mu\dots \nu}_{\ \ \ \ \mu'\dots \nu'}; \end{eqnarray} \item \textbf{The fully projected spatial derivative on the \(2\)-sheet via \(\bf{N^{\mu\nu}}\) on all indices}: for any \(3\)-tensor \(S^{\mu\dots \nu}_{\ \ \ \ \mu'\dots \nu'}\) one has the derivative \begin{eqnarray}\label{micel101w} \delta_{\sigma}S^{\mu\dots \nu}_{\ \ \ \ \mu'\dots \nu'}=N^{\mu}_{\ \gamma}N^{\gamma'}_{\ \mu'}\dots N^{\nu}_{\ \delta}N^{\delta'}_{\ \nu'}N^{\varrho}_{\ \sigma}D_{\varrho}S^{\gamma\dots \delta}_{\ \ \ \ \gamma'\dots \delta'}. \end{eqnarray} \end{enumerate} The first case is usually called the `\textit{hat}' derivative, and the second the `\textit{delta}' derivative.
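Combining \eqref{micel1} and \eqref{micel4a}, the full spacetime metric is recovered from the pair \((u^{\mu},e^{\mu})\) and the sheet projector as \begin{eqnarray*} g_{\mu\nu}=-u_{\mu}u_{\nu}+e_{\mu}e_{\nu}+N_{\mu\nu},\qquad N^{\mu}_{\ \mu}=2,\qquad N_{\mu\nu}u^{\nu}=N_{\mu\nu}e^{\nu}=0, \end{eqnarray*} so that any tensor may be decomposed into scalars along \(u^{\mu}\) and \(e^{\mu}\) together with parts lying in the sheet.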
The volume element of the \(2\)-surfaces resulting from the further splitting of the \(3\)-space, is the Levi-Civita tensor \begin{eqnarray*} \varepsilon_{\mu\nu}\equiv\varepsilon_{\mu\nu\delta}e^{\delta}=u^{\gamma}\eta_{\gamma\mu\nu\delta}e^{\delta}, \end{eqnarray*} so that contracting \(\varepsilon_{\mu\nu}\) with \(u^{\nu}\) or \(e^{\nu}\) gives zero. The tensor \(\varepsilon_{\mu\nu}\) also satisfies the additional relations: \begin{eqnarray*} \begin{split} \varepsilon_{\mu\nu\delta}&=e_{\mu}\varepsilon_{\nu\delta}-e_{\nu}\varepsilon_{\mu\delta}+e_{\delta}\varepsilon_{\mu\nu},\\ \varepsilon_{\mu\nu}\varepsilon^{\delta\gamma}&=2N_{[\mu}^{\ \delta}N_{\nu]}^{\ \gamma},\\ \varepsilon_{\mu}^{\ \sigma}\varepsilon_{\sigma\nu}&=N_{\mu\nu},\\ \varepsilon_{\mu\nu}\varepsilon^{\mu\nu}&=2, \end{split} \end{eqnarray*} where the square brackets denote the usual antisymmetrization. Now, let \(S^{\mu}\) be a \(3\)-vector. Then \(S^{\mu}\) may be irreducibly split as \begin{eqnarray}\label{micel4} S^{\mu}=\bf{S}e^{\mu}+\bf{S}^{\mu}, \end{eqnarray} where \(\bf{S}\) is the scalar associated to \(S^{\mu}\) that lies along \(e^{\mu}\), and \(\bf{S}^{\mu}\) lies in the sheet orthogonal to \(e^{\mu}\). Notice that \eqref{micel4} implies the following: \begin{eqnarray}\label{micel5} \bf{S}\equiv S_{\mu}e^{\mu}\quad\mbox{and}\quad\bf{S}^{\mu}\equiv S_{\nu}N^{\mu\nu}\equiv S^{\bar{\mu}}, \end{eqnarray} where the overbar indicates the index projected by \(N^{\mu\nu}\). 
In a similar manner, projected, symmetric and trace-free tensors \(S_{\mu\nu}\) can be irreducibly split as \begin{eqnarray}\label{micel6} S_{\mu\nu}=S_{\langle\mu\nu\rangle}=\bf{S}\left(e_{\mu}e_{\nu}-\frac{1}{2}N_{\mu\nu}\right)+2\bf{S}_{(\mu}e_{\nu)}+\bf{S}_{\mu\nu}, \end{eqnarray} where we have \begin{eqnarray*} \begin{split} \bf{S}&\equiv e^{\mu}e^{\nu}S_{\mu\nu}=-S_{\mu\nu}N^{\mu\nu},\\ \bf{S}_{\mu}&\equiv e^{\delta}S_{\nu\delta}N_{\mu}^{\ \nu}=\bf{S}_{\bar{\mu}},\\ \bf{S}_{\mu\nu}&\equiv S_{\langle\mu\nu\rangle}\equiv\left(N_{(\mu}^{\ \delta}N_{\nu)}^{\ \gamma}-\frac{1}{2}N_{\mu\nu}N^{\delta\gamma}\right)S_{\delta\gamma}, \end{split} \end{eqnarray*} with the round brackets denoting symmetrization. Angle brackets are used here to denote the projected, symmetric and trace-free parts of a tensor. Now we can write down the definition of various scalars, vectors and tensors that will appear in the rest of the paper. We have \begin{eqnarray*} \begin{split} \hat{e}_{\mu}&=e^{\nu}D_{\nu}e_{\mu}\equiv a_{\mu},\\ \dot{e}_{\mu}&=\mathcal{A}u_{\mu}+\alpha_{\mu},\\ \hat{u}_{\mu}&=\left(\frac{1}{3}\Theta+\Sigma\right)e_{\mu}+\Sigma_{\mu}+\varepsilon_{\mu\nu}\Omega^{\nu},\\ \dot{u}_{\mu}&=\mathcal{A}e_{\mu}+\mathcal{A}_{\mu}. \end{split} \end{eqnarray*} Here the scalar \(\mathcal{A}\) is the acceleration (thought of as the radial component of the acceleration of the unit timelike vector \(u^{\mu}\)). The vector \(a^{\mu}\) is interpreted as the acceleration of \(e^{\mu}\); \(\Sigma\) denotes the scalar associated to the projected, symmetric and trace-free shear tensor \(\sigma_{\mu\nu}\equiv D_{\langle\mu}u_{\nu\rangle}=\Sigma\left(e_{\mu}e_{\nu}-\frac{1}{2}N_{\mu\nu}\right)+2\Sigma_{(\mu}e_{\nu)}+\Sigma_{\mu\nu}\); \(\Theta\equiv D_{\mu}u^{\mu}\) is the expansion; \(\Omega^{\mu}\) is the part of the rotation vector \(\omega^{\mu}=\Omega e^{\mu}+\Omega^{\mu}\) lying in the sheet orthogonal to \(e^{\mu}\); and \(\alpha_{\mu}\equiv \dot{e}_{\bar{\mu}}\).
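As a quick consistency check on \eqref{micel6}, each term on the right-hand side is trace-free with respect to \(g^{\mu\nu}\): \begin{eqnarray*} g^{\mu\nu}\left(e_{\mu}e_{\nu}-\frac{1}{2}N_{\mu\nu}\right)=1-\frac{1}{2}\cdot 2=0, \end{eqnarray*} while the cross term \(\bf{S}_{(\mu}e_{\nu)}\) is traceless because the sheet vector \(\bf{S}_{\mu}\) is orthogonal to \(e^{\mu}\), and \(\bf{S}_{\mu\nu}\) is trace-free by construction.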
We also have the following quantities \begin{eqnarray*} \begin{split} \rho&=T_{\mu\nu}u^{\mu}u^{\nu},\quad\mbox{energy density}\\ p&=\frac{1}{3}h^{\mu\nu}T_{\mu\nu},\quad\mbox{isotropic pressure}\\ \phi&\equiv\delta_{\mu}e^{\mu},\quad\mbox{sheet expansion}\\ \xi&\equiv\frac{1}{2}\varepsilon^{\mu\nu}\delta_{\mu}e_{\nu},\quad\mbox{sheet/spatial twist}\\ \zeta_{\mu\nu}&\equiv\delta_{\langle\mu}e_{\nu\rangle},\quad\mbox{shear of \(e_{\mu}\)}\\ q_{\mu}&= -h^{\sigma}_{\ \mu}T_{\sigma\nu}u^{\nu}=Qe_{\mu}+Q_{\mu},\quad\mbox{heat flux}\\ \pi_{\mu\nu}&\equiv\Pi\left(e_{\mu}e_{\nu}-\frac{1}{2}N_{\mu\nu}\right)+2\Pi_{(\mu}e_{\nu)}+\Pi_{\mu\nu},\quad\mbox{anisotropic stress}\\ E_{\mu\nu}&\equiv\mathcal{E}\left(e_{\mu}e_{\nu}-\frac{1}{2}N_{\mu\nu}\right)+2\mathcal{E}_{(\mu}e_{\nu)}+\mathcal{E}_{\mu\nu},\quad\mbox{electric Weyl}\\ H_{\mu\nu}&\equiv\mathcal{H}\left(e_{\mu}e_{\nu}-\frac{1}{2}N_{\mu\nu}\right)+2\mathcal{H}_{(\mu}e_{\nu)}+\mathcal{H}_{\mu\nu},\quad\mbox{magnetic Weyl}. \end{split} \end{eqnarray*} The full covariant derivatives of the vectors \(u^{\mu}\) and \(e^{\mu}\) are given respectively by \begin{subequations} \begin{align} \nabla_{\mu}u_{\nu}&=-u_{\mu}\left(Ae_{\nu}+A_{\nu}\right)+\left(\frac{1}{3}\Theta+\Sigma\right)e_{\mu}e_{\nu}+\left(\Sigma_{\nu}+\varepsilon_{\nu\delta}\Omega^{\delta}\right)e_{\mu}+\left(\Sigma_{\mu}-\varepsilon_{\mu\delta}\Omega^{\delta}\right)e_{\nu}+\Omega\varepsilon_{\mu\nu}+\Sigma_{\mu\nu}\nonumber\\ &+\frac{1}{2}\left(\frac{2}{3}\Theta-\Sigma\right)N_{\mu\nu},\label{micel20}\\ \nabla_{\mu}e_{\nu}&=-Au_{\mu}u_{\nu}-u_{\mu}\alpha_{\nu}+\left(\frac{1}{3}\Theta+\Sigma\right)e_{\mu}u_{\nu}+\left(\Sigma_{\mu}-\varepsilon_{\mu\delta}\Omega^{\delta}\right)u_{\nu}+e_{\mu}a_{\nu}+\frac{1}{2}\phi N_{\mu\nu}+\xi\varepsilon_{\mu\nu}+\zeta_{\mu\nu}.\label{mice21} \end{align} \end{subequations} \section{The curvature and some related tensors}\label{soc4} This section presents the tensors to be utilised in this work, as well as equations
obtained from contractions of some key identities. We will not attempt to obtain all of the field equations here; for this, we refer the reader to \cite{cc1} for the detailed derivation of all of the field equations from the \(1+1+2\) splitting. Indeed, the functions \(\lambda\) and \(\beta\) appearing in the form \eqref{pa1} of the Ricci tensor defined below will shortly be written down in terms of the \(1+1+2\) quantities. As shall be seen, the embedded hypersurfaces to be considered in this work are characterised by just a few scalars, and we are interested in how constraints on these scalars determine the local dynamics, as well as the geometry, of these hypersurfaces. In this work we are interested in spacetimes where the Ricci tensor of embedded \(3\)-manifolds (throughout this work, embedded \(3\)-manifolds will be understood from context to be horizons) assumes the general form \begin{eqnarray}\label{pa1} \begin{split} ^3R_{\mu\nu}&=\lambda e_{\mu}e_{\nu}+\beta N_{\mu\nu}\\ &=\left(\lambda-\beta\right)e_{\mu}e_{\nu}+\beta h_{\mu\nu}, \end{split} \end{eqnarray} in which case we may write the trace-free part of \eqref{pa1} as \begin{eqnarray}\label{pak1} ^3S_{\mu\nu}=\left(\lambda-\beta\right)\left(e_{\mu}e_{\nu}-\frac{1}{3}h_{\mu\nu}\right). \end{eqnarray} (The \(3\)-dimensional curvatures will be labeled with an upper left superscript 3.)
We have the Riemann curvature tensor as \begin{eqnarray}\label{pa2} \begin{split} ^3R_{\mu\nu\delta\gamma}&={}^3R_{\mu\delta}h_{\nu\gamma}-{}^3R_{\mu\gamma}h_{\delta\nu}+{}^3R_{\nu\gamma}h_{\mu\delta}-{}^3R_{\delta\nu}h_{\mu\gamma}-\frac{^3R}{2}\left(h_{\mu\delta}h_{\nu\gamma}-h_{\mu\gamma}h_{\delta\nu}\right)\\ &=\left(N_{\delta\nu}+e_{\delta}e_{\nu}\right)\biggl[\frac{\lambda}{2}N_{\gamma\mu}+\left(\beta-\frac{\lambda}{2}\right)e_{\gamma}e_{\mu}\biggr] -\left(N_{\gamma\nu}+e_{\gamma}e_{\nu}\right)\biggl[\frac{\lambda}{2}N_{\delta\mu}+\left(\beta-\frac{\lambda}{2}\right)e_{\delta}e_{\mu}\biggr]\\ &+\left(N_{\delta\mu}+e_{\delta}e_{\mu}\right)\left(\beta N_{\gamma\nu}+\lambda e_{\gamma}e_{\nu}\right) -\left(N_{\gamma\mu}+e_{\gamma}e_{\mu}\right)\left(\beta N_{\delta\nu}+\lambda e_{\delta}e_{\nu}\right), \end{split} \end{eqnarray} where \(h_{\mu\nu}\) is the metric induced from the ambient spacetime on the embedded \(3\)-manifold, and \(^3R\) is the scalar curvature, given by \begin{eqnarray}\label{pa3} ^3R=\lambda+2\beta. \end{eqnarray} Let us note here the well known fact that in dimension \(3\) the Weyl tensor vanishes identically. We shall also assume smoothness of the functions \(\lambda\) and \(\beta\), which are functions of the covariant geometric and matter variables.
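As a check, taking the trace of \eqref{pa1} with \(h^{\mu\nu}\), and using \(e_{\mu}e^{\mu}=1\) together with \(N^{\mu}_{\ \mu}=2\), recovers \eqref{pa3}: \begin{eqnarray*} ^3R=h^{\mu\nu}\left(\lambda e_{\mu}e_{\nu}+\beta N_{\mu\nu}\right)=\lambda+2\beta. \end{eqnarray*}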
The Cotton tensor, fully projected to the hypersurface, is given by \begin{eqnarray}\label{pa4} \begin{split} C_{\mu\nu\delta}&=D_{\delta}\left(^3R_{\mu\nu}\right)-D_{\nu}\left(^3R_{\mu\delta}\right)+\frac{1}{4}\left(D_{\nu}\left(^3R\right)h_{\mu\delta}-D_{\delta}\left(^3R\right)h_{\mu\nu}\right)\\ &=2\left(\lambda-\beta\right)\left(e_{\mu}D_{[\delta}e_{\nu]}+e_{[\nu}D_{\delta]}e_{\mu}\right)+2\left[e_{\mu}e_{[\nu}D_{\delta]}+\frac{1}{4}h_{\mu[\delta}D_{\nu]}\right]\lambda+2\left[\left(h_{\mu[\nu}-e_{\mu}e_{[\nu}\right)D_{\delta]}+\frac{1}{2}h_{\mu[\delta}D_{\nu]}\right]\beta\;, \end{split} \end{eqnarray} where we have used the fully orthogonally projected derivative \(D\) (this is the compatible covariant derivative on the hypersurfaces under consideration), and \(C_{\mu\nu\delta}\) is antisymmetric in \(\nu\) and \(\delta\). In dimension \(3\), the Cotton tensor can be presented as a tensor density in the form \begin{eqnarray}\label{pa6} \begin{split} C_{\mu}^{\ \nu}&=D_{\delta}\left(^3R_{\sigma\mu}-\frac{1}{4}\left(^3R\right)h_{\sigma\mu}\right)\varepsilon^{\delta\sigma\nu}\\ &=\biggl[\left(\lambda-\beta\right)\left(e_{\sigma}D_{\delta}e_{\mu}+e_{\mu}D_{\delta}e_{\sigma}\right)+\left(e_{\sigma}e_{\mu}-\frac{1}{4}h_{\sigma\mu}\right)D_{\delta}\lambda-\left(e_{\sigma}e_{\mu}-\frac{1}{2}h_{\sigma\mu}\right)D_{\delta}\beta\biggr]\varepsilon^{\delta\sigma\nu}\;, \end{split} \end{eqnarray} sometimes referred to as the Cotton-York tensor. \subsection{The Ricci tensor for general hypersurfaces in $1+1+2$ spacetimes} In principle, one may assume any form of the Ricci tensor on a general hypersurface and study physics on it. However, it may not necessarily be the case that there exists a spacetime in which this hypersurface is embedded. In this section, we provide a minimum set of conditions which are to be satisfied for a hypersurface with Ricci tensor of the form \eqref{pa1} to be embedded in a \(1+1+2\) decomposed spacetime.
Let \(M\) be a spacetime admitting a \(1+1+2\) decomposition, and denote by \(\Xi\subset M\) a codimension \(1\) embedded submanifold of \(M\) (from now onwards hypersurfaces will be denoted by \(\Xi\)). In general, one writes the curvature of \(M\) as (see \cite{gfr1} and references therein) \begin{eqnarray}\label{3.1} \begin{split} R^{\mu\nu}_{\ \ \delta\sigma}&=4u^{[\mu}u_{[\delta}\left(E^{\nu]}_{\ \sigma]}-\frac{1}{2}\pi^{\nu]}_{\ \sigma]}\right)+4\bar{h}^{[\mu}_{\ [\delta}\left(E^{\nu]}_{\ \sigma]}+\pi^{\nu]}_{\ \sigma]}\right)\\ &+2\varepsilon^{\mu\nu\gamma}u_{[\delta}\left(H_{\sigma]\gamma}+\frac{1}{2}\varepsilon_{\sigma]\gamma\varrho}q^{\varrho}\right)+2\varepsilon_{\delta\sigma\gamma}u^{[\mu}\left(H^{\nu\gamma}+\frac{1}{2}\varepsilon^{\nu]\gamma\varrho}q_{\varrho}\right)+\frac{2}{3}\left(\rho+3p-2\Lambda\right)u^{[\mu}u_{[\delta}\bar{h}^{\nu]}_{\ \sigma]}+\frac{2}{3}\left(\rho+\Lambda\right) \bar{h}^{\mu}_{\ [\delta}\bar{h}^{\nu}_{\ \sigma]}, \end{split} \end{eqnarray} where \(\Lambda\) is the cosmological constant and \(\bar{h}^{\mu\nu}=g^{\mu\nu}+u^{\mu}u^{\nu}\). One may write \eqref{3.1} in its full \(1+1+2\) form by decomposing \(E_{\mu\nu},H_{\mu\nu}\) and \(\pi_{\mu\nu}\) using \eqref{micel6}, and \(q_{\mu}\) using \eqref{micel4}. We define the curvature quantities on \(\Xi\) as follows \cite{ge2}: define a normal to \(\Xi\) as \begin{eqnarray}\label{3.2} n^{\mu}=a_1u^{\mu}+a_2e^{\mu}. \end{eqnarray} (Whether \(n^{\mu}\) is spacelike or timelike will place some constraints on \(a_1\) and \(a_2\).) The first fundamental form of \(\Xi\) is given by \begin{eqnarray}\label{3.3} h_{\mu\nu}=g_{\mu\nu}\mp n_{\mu}n_{\nu}=N_{\mu\nu}-\left(1\pm a_1^2\right)u_{\mu}u_{\nu}+\left(1\mp a_2^2\right)e_{\mu}e_{\nu}\mp 2a_1a_2u_{(\mu}e_{\nu)}, \end{eqnarray} where the choice of the ``$-$'' or ``$+$'' sign depends on whether \(\Xi\) is \textit{timelike} or \textit{spacelike}, respectively.
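Since \(u_{\mu}u^{\mu}=-1\), \(e_{\mu}e^{\mu}=1\) and \(u_{\mu}e^{\mu}=0\), the normalisation of the normal \eqref{3.2} constrains the coefficients \(a_1\) and \(a_2\): \begin{eqnarray*} n_{\mu}n^{\mu}=-a_1^2+a_2^2=\pm 1, \end{eqnarray*} with the upper sign (spacelike normal) when \(\Xi\) is timelike, and the lower sign (timelike normal) when \(\Xi\) is spacelike, consistent with the sign choices in \eqref{3.3}.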
Note the relationship between the tensors \(\bar{h}_{\mu\nu}\) and \(h_{\mu\nu}\) here: \begin{eqnarray}\label{yes} \bar{h}_{\mu\nu}=h_{\mu\nu}\Big|_{a_1=\pm 1;\ a_2=0}. \end{eqnarray} (In this case it is the ``$-$'' sign that is chosen for \(\left(1\pm a_1^2\right)\).) We also note that \begin{eqnarray*} \left(E_{\mu\nu},H_{\mu\nu},\pi_{\mu\nu}\right)u^{\mu}=\left(E_{\mu\nu},H_{\mu\nu},\pi_{\mu\nu}\right)\bar{h}^{\mu\nu}=0. \end{eqnarray*} The second fundamental form is then calculated as \begin{eqnarray}\label{3.4} \begin{split} \chi_{\mu\nu}&=h^{\delta}_{\ (\mu}h^{\sigma}_{\ \nu)}\nabla_{\delta}n_{\sigma}\\ &=Z_1u_{\mu}u_{\nu}-Z_2e_{\mu}e_{\nu}\mp Z_3u_{(\mu}e_{\nu)}+\frac{1}{2}\left(a_1\left(\frac{2}{3}\Theta-\Sigma\right)+a_2\phi\right)N_{\mu\nu}\\ &+a_1\Sigma_{\mu\nu}+a_2\zeta_{\mu\nu}+\left(1\pm a_1^2\right)Z'_{\mu\nu}+\left(1\mp a_2^2\right)\left[a_1Z_{\mu\nu}+a_2\bar{Z}_{\mu\nu}\right], \end{split} \end{eqnarray} where we have defined the quantities \begin{subequations} \begin{align} Z_1&=\left(1\pm a_1^2\right)\left(a_1a_2\dot{a}_2-a_1a_2\hat{a}_1-a_2A-\left(1\pm a_1^2\right)\dot{a}_1\right)+a_1a_2^2\left(a_1\hat{a}_2-\left(\frac{1}{3}\Theta+\Sigma\right)\right),\label{3.20a}\\ Z_2&=\left(1\mp a_2^2\right)\left(a_1a_2\dot{a}_2-a_1a_2\hat{a}_1-a_1\left(\frac{1}{3}\Theta+\Sigma\right)-\left(1\mp a_2^2\right)\hat{a}_2\right)+a_1^2a_2\left(a_2\dot{a}_1+A\right),\label{3.21b}\\ Z_3&=\left(1\pm a_1^2\right)\left(\left(1\mp a_2^2\right)\left(\hat{a}_1-\dot{a}_2\right)-2a_1a_2\dot{a}_1-a_1A\right)+\left(1\mp a_2^2\right)\biggl(a_2\left(\frac{1}{3}\Theta+\Sigma\right)-2a_1a_2\hat{a}_2\biggr)\nonumber\\ &+a_1^2a_2\left(a_2\left(\dot{a}_2-\hat{a}_1\right)-\left(\frac{1}{3}\Theta+\Sigma\right)\right),\label{3.22}\\ Z_{\mu\nu}&=2\left(\Sigma_{(\mu}-\varepsilon_{\delta(\mu}\Omega^{\delta}+a_2a_{(\mu}\right)e_{\nu)},\label{3.23}\\ \bar{Z}_{\mu\nu}&=2\left(\Sigma_{(\mu}+\varepsilon_{\delta(\mu}\Omega^{\delta}+\delta_{(\mu}a_2\right)e_{\nu)},\label{3.24}\\
Z'_{\mu\nu}&=2\left(u_{(\nu}\delta_{\mu)}a_1-u_{(\mu}\alpha_{\nu)}-a_1u_{(\mu}A_{\nu)}\right).\label{3.25} \end{align} \end{subequations} From \eqref{3.1}, we have the Ricci curvature on \(M\) as \begin{eqnarray}\label{3.5} R_{\mu\nu}=\pi_{\mu\nu}+2q_{(\mu}u_{\nu)}-\frac{1}{2}\left(\rho+3p-2\Lambda\right)\left(\frac{1}{3}\bar{h}_{\mu\nu}-u_{\mu}u_{\nu}\right)+\frac{2}{3}\left(\rho+\Lambda\right)\bar{h}_{\mu\nu}. \end{eqnarray} The scalar curvature is therefore \begin{eqnarray}\label{3.6} R=\rho-3p+4\Lambda. \end{eqnarray} The relationship between the curvature of \(\Xi\), \(^3R^{\mu}_{\ \nu\delta\sigma}\), and the curvature of \(M\), \(R^{\mu}_{\ \nu\delta\sigma}\), is given by \cite{ge2} \begin{eqnarray}\label{3.7} ^3R^{\mu}_{\ \nu\delta\sigma}=R^{\bar{\mu}}_{\ \bar{\nu}\bar{\delta}\bar{\sigma}}h^{\mu}_{\ \bar{\mu}}h^{\bar{\nu}}_{\ \nu}h^{\bar{\delta}}_{\ \delta}h^{\bar{\sigma}}_{\ \sigma}\pm \chi^{\mu}_{\ \delta}\chi_{\nu\sigma} \mp \chi^{\mu}_{\ \sigma}\chi_{\nu\delta}, \end{eqnarray} where, from now on, we label all curvature quantities associated to \(\Xi\) with an upper left superscript \(3\). Contracting \eqref{3.7}, we obtain \begin{eqnarray}\label{3.8} ^3R_{\mu\nu}&=\underline{R}_{\mu\nu}\mp \bar{R}_{\mu\nu}\pm\chi\,\chi_{\mu\nu} \mp \underline{\chi}_{\mu\nu}, \end{eqnarray} with the associated scalar curvature \begin{eqnarray}\label{3.9} ^3R&=\left(R\pm\chi^2\right)\mp\left(R_*+\bar{\chi}\right).
\end{eqnarray} where we have defined \begin{subequations} \begin{align} \chi&=\chi^{\mu}_{\ \mu}=-Z_1-Z_2+a_1\left(\frac{2}{3}\Theta-\Sigma\right)+a_2\phi,\label{3.10}\\ \bar{\chi}&=\chi_{\mu\nu}\chi^{\mu\nu}=Z_1^2+Z_2^2-Z_3^2+a_1^2\left(\frac{1}{2}\left(\frac{2}{3}\Theta-\Sigma\right)^2+2\Omega^2\right)+a_2^2\left(\frac{1}{2}\phi^2+2\xi^2\right)+2a_1^2\left(a_{\mu}\Sigma^{\mu}+\varepsilon_{\mu\nu}\left(\Sigma^{\mu}+a^{\mu}\right)\Omega^{\nu}\right)\nonumber\\ &+\left(1\mp a_2^2\right)^2\biggl[\left(a_1^2+a_2^2\right)\left(\Sigma_{\mu}\Sigma^{\mu}+\Omega_{\mu}\Omega^{\mu}\right)+a_2^2\biggl(a_1^2a_{\mu}a^{\mu}+\delta_{\mu}a_2\delta^{\mu}a_2\biggr)+2a_2^2\left(\delta_{\mu}a_2\Sigma^{\mu}-\varepsilon_{\mu\nu}\left(\Sigma^{\mu}+\delta^{\mu}a_2\right)\Omega^{\nu}\right)\biggr]\nonumber\\ &+2a_1a_2\left(\frac{1}{2}\phi\left(\frac{2}{3}\Theta-\Sigma\right)+2\Omega\xi\right)-\left(1\pm a_1^2\right)^2\biggl(\delta_{\mu}a_1\delta^{\mu}a_1+\alpha_{\mu}\alpha^{\mu}+a_1^2A_{\mu}A^{\mu}+2a_1\alpha_{\mu}A^{\mu}\biggr)+a_1^2\Sigma_{\mu\nu}\Sigma^{\mu\nu}\nonumber\\ &+a_2^2\zeta_{\mu\nu}\zeta^{\mu\nu},\label{3.11}\\ \underline{\chi}_{\mu\nu}&=\chi^{\delta}_{\ \mu}\chi_{\delta\nu}=\left[-Z_1^2\mp Z_3^2+\left(1\pm a_1^2\right)^2\delta_{\sigma}a_1\delta^{\sigma}a_1\right]u_{\mu}u_{\nu}+\biggl[Z_2^2\pm\frac{1}{4}Z_3^2+a_2^2\left(1\pm a_2^2\right)^2V\biggr]e_{\mu}e_{\nu}\nonumber\\ &+2\left(1\pm a_1^2\right)\left(\alpha_{(\nu}+a_1A_{(\nu}\right)\left(Z_1u_{\mu)}+\frac{1}{2}Z_3e_{\mu)}\right)+2\left(1\pm a_1^2\right)\left(a_1\Sigma_{\delta(\mu}+a_2\zeta_{\delta(\mu}\right)u_{\nu)}\delta^{\delta}a_1\nonumber\\ &+2\left[\pm\frac{1}{2}Z_3\left(Z_1+Z_2\right)+a_2\left(1\pm a_1^2\right)\left(1\mp a_2^2\right)\delta_{\delta}a_1\left(\Sigma^{\delta}-\varepsilon^{\delta\sigma}\Omega_{\sigma}+\delta^{\delta}a_2\right)\right]u_{(\mu}e_{\nu)}\nonumber\\ &+\left(1\pm a_1^2\right)^2\left(a_1^2A_{\mu}A_{\nu}-\alpha_{\mu}\alpha_{\nu}-2a_1\alpha_{(\mu}A_{\nu)}\right)+a_1\left(1\mp
a_2^2\right)\left(\Sigma_{\nu}-\varepsilon_{\delta(\nu}\Omega^{\delta}+a_2a_{(\nu}\right)\biggl(\frac{1}{2}Z_3u_{\mu)}-Z_2e_{\mu)}\biggr)\nonumber\\ &+\frac{1}{2}\left(a_1\left(\frac{2}{3}\Theta-\Sigma\right)+a_2\phi\right)V_{\mu\nu}+2a_2^2\left(1\mp a_2^2\right)\left(\Sigma^{\delta}-\varepsilon^{\delta\sigma}\Omega_{\sigma}+\delta^{\delta}a_2\right)\zeta_{\delta(\mu}e_{\nu)}+\left(a_1\Omega+a_2\xi\right)\bar{V}_{\mu\nu}\nonumber\\ &+2a_1a_2\left[\zeta^{\delta}_{(\mu}\Sigma_{\nu)\delta}+\left(1\mp a_2^2\right)\left(\Sigma^{\delta}-\varepsilon^{\delta\sigma}\Omega_{\sigma}+\delta^{\delta}a_2\right)\Sigma_{\delta(\mu}e_{\nu)}\right],\label{3.12}\\ \underline{R}_{\mu\nu}&=R_{\bar{\nu}\bar{\sigma}}h^{\bar{\nu}}_{\ \mu}h^{\bar{\sigma}}_{\ \nu}=\frac{1}{2}\left(\rho-p-\Pi+2\Lambda\right)N_{\mu\nu}+\frac{1}{2}\biggl[\left(1\pm a_1^2\right)\biggl(\left(1\pm a_1^2\right)\left(\rho+3p-2\Lambda\right)\mp a_1a_2Q\biggr)\nonumber\\ &\mp a_1a_2\left(\left(1\pm a_1^2\right)Q\mp a_1a_2\left(\rho-p+2\Pi+2\Lambda\right)\right)\biggr]u_{\mu}u_{\nu}+\left(Q_{(\mu}+2\Pi_{(\mu}\right)\left[\mp a_1a_2u_{\nu)}+\left(1\mp a_2^2\right)e_{\nu)}\right]\nonumber\\ &+2\left[\left(1\mp a_2^2\right)u_{(\mu}\mp a_1a_2 e_{(\mu}\right]\Pi_{\nu)}+\frac{1}{2}\biggl[\left(1\mp a_2^2\right)\left(\left(1\mp a_2^2\right)\left(\rho-p+2\Pi+2\Lambda\right)\pm a_1a_2Q\right)\nonumber\\ &+2\left[\left(1\pm a_1^2\right)u_{(\mu}\pm a_1a_2 e_{(\mu}\right]Q_{\nu)}\pm a_1a_2\left(\left(1\mp a_2^2\right)Q\pm a_1a_2\left(\rho+3p-2\Lambda\right)\right)\biggr]e_{\mu}e_{\nu}\nonumber\\ &+\biggl[\left(1\mp a_2^2\right)\biggl(\left(1\pm a_1^2\right)Q\mp a_1a_2\left(\rho-p+2\Pi+2\Lambda\right)\biggr)\pm a_1a_2\left(\left(1\pm a_1^2\right)\left(\rho+3p-2\Lambda\right)\mp a_1a_2Q\right)\biggr]u_{(\mu}e_{\nu)},\label{3.13}\\ \bar{R}_{\mu\nu}&=R^{\bar{\mu}}_{\ \bar{\nu}\bar{\delta}\bar{\sigma}}n_{\bar{\mu}}n^{\bar{\delta}}h^{\bar{\nu}}_{\ \nu}h^{\bar{\sigma}}_{\ 
\mu}=\frac{1}{2}\biggl[\left(a_1a_2Q+\left(a_2^2-1\right)\mathcal{E}+\frac{1}{2}\left(a_2^2+1\right)\Pi\right)+\frac{1}{3}\left(\rho+3p-2\Lambda\right)\biggr]N_{\mu\nu}\nonumber\\ &+a_2^2\left[\left(\mathcal{E}-\frac{1}{2}\Pi\right)+\frac{1}{6}\left(\rho+3p-2\Lambda\right)\right]u_{\mu}u_{\nu}+a_1^2\biggl[\left(\mathcal{E}-\frac{1}{2}\Pi\right)+\frac{1}{6}\left(\rho+3p-2\Lambda\right)\biggr]e_{\mu}e_{\nu}\nonumber\\ &+2a_1a_2\left[\left(\mathcal{E}-\frac{1}{2}\Pi\right)+\frac{1}{6}\left(\rho+3p-2\Lambda\right)\right]u_{(\mu}e_{\nu)}+\frac{1}{4}a_2^2\biggl(\mathcal{E}_{\mu\nu}+\frac{1}{2}\Pi_{\mu\nu}\biggr)\nonumber\\ &+2\left[a_1a_2\left(\mathcal{E}_{(\nu}-\frac{1}{2}\Pi_{(\nu}\right)-\frac{1}{2}a_2^2\left(1\pm a_1^2\right)\left(\mathcal{H}^{\delta}\varepsilon_{\delta(\nu}-\frac{1}{2}Q_{(\nu}\right)\mp a_1^2a_2^2\mathcal{H}^{\delta}\varepsilon_{\delta(\nu}\right]u_{\mu)}\nonumber\\ &+2\left[a_1^2\left(\mathcal{E}_{(\nu}-\frac{1}{2}\Pi_{(\nu}\right)\mp a_1a_2^3\left(\mathcal{H}^{\delta}\varepsilon_{\delta(\nu}-\frac{1}{2}Q_{(\nu}\right) + a_1a_2\left(1\mp a_2^2\right)\mathcal{H}^{\delta}\varepsilon_{\delta(\nu}\right]e_{\mu)}\nonumber\\ &+2\left[a_1a_2\left(\mathcal{E}_{(\mu}-\frac{1}{2}\Pi_{(\mu}\right)+\frac{1}{2}a_2^2\left(1+a_1^2\pm a_1^2\right)\left(\mathcal{H}^{\delta}\varepsilon_{\delta(\mu}-\frac{1}{2}Q_{(\mu}\right)\right]u_{\nu)}\nonumber\\ &+2\left[a_1^2\left(\mathcal{E}_{(\mu}-\frac{1}{2}\Pi_{(\mu}\right)+\frac{1}{2}a_1a_2\left(1-a_2^2\mp a_2^2\right)\left(\mathcal{H}^{\delta}\varepsilon_{\delta(\mu}-\frac{1}{2}Q_{(\mu}\right)\right]e_{\nu)},\label{3.14}\\ R_*&=R_{\mu\nu}n^{\mu}n^{\nu}=\frac{1}{2}a_2^2\left(\rho+p+2\Lambda\right)-\frac{1}{2}a_1^2\left(\rho+3p-2\Lambda\right)+\frac{1}{2}a_2Q\left(3a_2-a_1\right)+a_2^2\Pi+a_1^2\left(\mathcal{E}_{\mu\nu}-\frac{1}{2}\Pi_{\mu\nu}\right),\label{3.15} \end{align} \end{subequations} with \begin{subequations} \begin{align} 
V&=\Sigma_{\mu}\Sigma^{\mu}+\Omega_{\mu}\Omega^{\mu}+\delta_{\mu}a_1\delta^{\mu}a_1-2\Sigma_{\mu}\varepsilon^{\mu\nu}\Omega_{\nu},\label{3.16}\\ V_{\mu\nu}&=\frac{1}{2}\left(a_1\left(\frac{2}{3}\Theta-\Sigma\right)+a_2\phi\right)N_{\mu\nu}+2a_2\left(1\mp a_2^2\right)\left(\Sigma_{(\mu}+\Sigma_{\delta(\mu}\Omega^{\delta}+\delta_{(\mu}\right)e_{\nu)}+2\left(a_1\Sigma_{\mu\nu}+a_2\zeta_{\mu\nu}\right),\label{3.17}\\ \bar{V}_{\mu\nu}&=2\left(a_1\Omega+a_2\xi\right)N_{\mu\nu}+2\varepsilon_{\delta(\mu}\biggl[a_2\left(1\mp a_2^2\right)\left(\Sigma^{\delta}-\varepsilon^{\delta\sigma}\Omega_{\sigma}+\delta^{\delta}a_2\right)e_{\nu)}+\left(1\pm a_1^2\right)u_{\nu)}\delta^{\delta}a_1+a_1\Sigma_{\nu)}^{\ \delta}+a_2\zeta_{\nu)}^{\ \delta}\biggr].\label{3.18} \end{align} \end{subequations} Now, notice first of all that \eqref{pa1} has no mixed term or \(u_{\mu}u_{\nu}\) term. Also, there are no terms constructed from product of \(2\)-vectors or tensor quantities. The choice of the first fundamental form is just \(\bar{h}_{\mu\nu}=g_{\mu\nu}+u_{\mu}u_{\nu}\), so that \(a_2=0\) and \(a_1=1\) in which case \(h^{\mu\nu}\) and \(\bar{h}^{\mu\nu}\) coincide. In this case, it is easy to see that \(Z_1=Z_3=0\) and \(Z_2=-\left((1/3)\Theta+\Sigma\right)\). The \(u_{\mu}u_{\nu}\) and mixed terms are identically zero. In addition, the following condition on the hypersurface is required for \eqref{3.8} to reduce to \eqref{pa1}: \begin{eqnarray}\label{3.19} \begin{split} 0&=4\mathcal{E}_{(\mu}u_{\nu)}+\left(Q_{(\mu}+2\Pi_{\mu}\right)e_{\nu)}+\left(\frac{2}{3}\Theta-\Sigma\right)\Sigma_{\mu\nu}+2\Omega\varepsilon_{\delta(\mu}\Sigma_{\nu)}^{\ \delta}\\ &-\Theta\left[\Sigma_{\mu\nu}+2\left(\Sigma_{(\mu}-\varepsilon_{\delta(\mu}\Omega^{\delta}\right)e_{\nu)}\right]. 
\end{split} \end{eqnarray} The scalars \(\lambda\) and \(\beta\) in \eqref{pa1} can now be explicitly written as \begin{subequations} \begin{align} \lambda&=\frac{2}{3}\left(\rho+\Lambda\right)+\mathcal{E}+\frac{1}{2}\Pi-\left(\frac{1}{3}\Theta+\Sigma\right)\left(\frac{2}{3}\Theta-\Sigma\right),\label{3.20}\\ \beta&=\frac{2}{3}\rho-\frac{1}{2}\left(\mathcal{E}+\frac{1}{2}\Pi\right)-\frac{1}{2}\left(\frac{2}{3}\Theta-\Sigma\right)\left(\frac{2}{3}\Theta+\frac{1}{2}\Sigma\right)+2\Omega^2.\label{3.21} \end{align} \end{subequations} Perhaps the best-known class of spacetimes having spacelike hypersurfaces with a Ricci tensor of this form is the LRS class II (see \cite{gbc1}). These are the observers' rest spaces, which play a fundamental role in obtaining exact solutions to Einstein's field equations. \section{Characterization, the equations and constraints}\label{soc5} In this section we provide a characterization of locally symmetric hypersurfaces in spacetimes admitting a \(1+1+2\) decomposition, with Ricci tensor of the form \eqref{pa1}. We then obtain additional equations and constraints that aid further analysis of, and impose restrictions on, these hypersurfaces. The condition of local symmetry of Riemannian manifolds is given by the vanishing of the first covariant derivative of the curvature tensor: \begin{eqnarray}\label{panel1} D_{\sigma}\left(^3R_{\mu\nu\delta\gamma}\right)=0. \end{eqnarray} The properties of locally symmetric Riemannian spaces \cite{al1,kn1,st1} are a well-studied subject, and complete classification schemes have been provided. The Lorentzian cases have been studied as well \cite{st1,hda1,cdc1}, with classifications provided up to the \(2\)-symmetric and semi-symmetric cases relatively recently by Senovilla \cite{js1}.
Riemannian manifolds satisfying \eqref{panel1} have been shown to satisfy the bi-implication \begin{eqnarray}\label{panel2} D_{\sigma_1}\ldots D_{\sigma_k}\left(^3R_{\mu\nu\delta\gamma}\right)=0\iff D_{\sigma}\left(^3R_{\mu\nu\delta\gamma}\right)=0, \end{eqnarray} where \(D_{\sigma_k}\) denotes the \(k^{th}\) covariant derivative. Hence the Cotton tensor vanishes, \(C_{\mu\nu\delta}=0\), in which case the hypersurfaces we are considering here are conformally flat, and therefore both \(\mathcal{E}\) and \(\mathcal{H}\) are zero (of course, this is well known). Contracting \eqref{pa4} with \(e^{\delta}\varepsilon^{\mu\nu}\), \(e^{\delta}N^{\mu\nu}\) and \(e^{\delta}e^{\mu}e^{\nu}\) gives, respectively, \begin{subequations} \begin{align} 0&=\left(\lambda-\beta\right)\xi,\label{panew1}\\ 0&=\left(\lambda-\beta\right)\phi+\frac{1}{2}\left(\hat{\lambda}-2\hat{\beta}\right),\label{panew2}\\ 0&=\left(\hat{\lambda}-2\hat{\beta}\right),\label{panew3} \end{align} \end{subequations} which reduce to the set \begin{subequations} \begin{align} 0&=\left(\lambda-\beta\right)\xi,\label{pane1}\\ 0&=\left(\lambda-\beta\right)\phi.\label{pane2} \end{align} \end{subequations} Hence, we shall consider the following configurations: \begin{subequations} \begin{align} \lambda-\beta&=0;\qquad\mbox{(Einstein manifold)}\qquad\mbox{or}\label{pane3}\\ \phi=\xi&=0\qquad\left(\mbox{assuming}\quad\lambda-\beta\neq 0\right),\label{pane4} \end{align} \end{subequations} where the condition \(\lambda-\beta=0\) is simply \begin{eqnarray}\label{ddx} \frac{2}{3}\Lambda+\frac{3}{4}\Pi-\frac{3}{4}\Sigma\left(\frac{2}{3}\Theta-\Sigma\right)-2\Omega^2=0. \end{eqnarray} We therefore have that the set \begin{eqnarray}\label{der} \mathcal{D}:=\left\lbrace\lambda,\beta,\xi,\phi\right\rbrace, \end{eqnarray} characterizes embedded locally symmetric \(3\)-manifolds in \(4\)-dimensional spacetimes with Ricci tensor of the form \eqref{pa1}. It is also clear that the considered hypersurfaces are flat only in the Einstein case with \(\lambda=\beta=0\).
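The Einstein condition \(\lambda-\beta=0\) can be expanded symbolically from \eqref{3.20} and \eqref{3.21}; the following is a minimal sympy sketch (the Python symbol names are ours), setting \(\mathcal{E}=0\) as implied by conformal flatness:

```python
import sympy as sp

# Covariant scalars entering \eqref{3.20} and \eqref{3.21}
rho, Lam, E, Pi, Theta, Sigma, Omega = sp.symbols('rho Lambda E Pi Theta Sigma Omega')

lam = sp.Rational(2, 3)*(rho + Lam) + E + Pi/2 \
    - (Theta/3 + Sigma)*(sp.Rational(2, 3)*Theta - Sigma)
beta = sp.Rational(2, 3)*rho - sp.Rational(1, 2)*(E + Pi/2) \
    - sp.Rational(1, 2)*(sp.Rational(2, 3)*Theta - Sigma)*(sp.Rational(2, 3)*Theta + Sigma/2) \
    + 2*Omega**2

# With E = 0 (conformal flatness), lambda - beta collapses to a ddx-type condition
diff = sp.expand((lam - beta).subs(E, 0))
target = sp.Rational(2, 3)*Lam + sp.Rational(3, 4)*Pi \
    - sp.Rational(3, 4)*Sigma*(sp.Rational(2, 3)*Theta - Sigma) - 2*Omega**2
assert sp.simplify(diff - target) == 0
```

Note that the energy density \(\rho\) drops out of the difference, so the Einstein condition constrains only the cosmological constant, the anisotropic stress, the shear--expansion product and the rotation.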
This allows us to state the first useful proposition: \begin{proposition}\label{prop1} Let \(\left(M,g_{\mu\nu}\right)\) be a \(4\)-dimensional spacetime and let \(\Xi\) be a locally symmetric embedded \(3\)-manifold with induced metric \(h_{\mu\nu}\), and with Ricci tensor of the form \eqref{pa1}. Then \(\Xi\) is either \begin{enumerate} \item an Einstein space; or \item non-twisting with vanishing sheet expansion. \end{enumerate} \end{proposition} Both cases of Proposition \ref{prop1} are important and will be given due consideration. For example, Ricci solitons, which correspond to self-similar solutions of the Ricci flow evolution equation, can be seen as `\textit{perturbations}' of Einstein spaces. On the other hand, the class of \textit{locally rotationally symmetric spacetimes}, which contains many physically relevant spherically symmetric spacetimes in general relativity, has important subclasses that are non-twisting. In the context of marginally trapped tubes, however (these generalize the boundaries of black holes), hypersurfaces in case \(2.\) of Proposition \ref{prop1} are minimal. Now, using the property of the vanishing of the divergence of \eqref{pa6}, we contract \eqref{pa6} by \(e^{\eta}h_{\eta}^{\mu}D_{\nu}\) and obtain \begin{eqnarray}\label{pane7} 2\xi\hat{\beta}+J\left(\lambda-\beta\right)=0, \end{eqnarray} where we have defined the operator \(J=a_{\mu}\varepsilon^{\mu\nu}\delta_{\nu}\). The contracted Bianchi identity on \(\Xi\), \(D^{\mu}\left(^3R_{\mu\nu}\right)=0\), obtained from \eqref{panel1}, can be expressed as \begin{eqnarray}\label{panel4} \left(\lambda-\beta\right)\left(a_{\nu}+\phi e_{\nu}\right)+e_{\nu}\hat{\lambda}+\delta_{\nu}\beta=0, \end{eqnarray} which, upon contracting with \(e^{\nu}\), gives \begin{eqnarray}\label{panel5} \left(\lambda-\beta\right)\phi+\hat{\lambda}=0.
\end{eqnarray} In any case the first term on the left hand side will vanish by Proposition \ref{prop1}, and hence we must have that \(\hat{\lambda}=0\) which would imply that \(\hat{\beta}=0\) by \eqref{panew3}. In this case we have that \eqref{pane7} reduces to \begin{eqnarray}\label{fray} J\left(\lambda-\beta\right)=0. \end{eqnarray} Since \(u^{\mu}R_{\mu\nu\delta\gamma}=0\), the tensor \(D_{\mu}D_{\nu}u_{\delta}\) is symmetric in the \(\mu\) and \(\nu\) indices by the Ricci identities for \(u^{\mu}\). We note that \begin{eqnarray} \begin{split} D_{\mu}u_{\nu}&=\left(\frac{1}{3}\Theta+\Sigma\right)e_{\mu}e_{\nu}+\frac{1}{2}\left(\frac{2}{3}\Theta-\Sigma\right)N_{\mu\nu}+\left(\Sigma_{\nu}+\varepsilon_{\nu\sigma}\Omega^{\sigma}\right)e_{\mu}+\left(\Sigma_{\mu}-\varepsilon_{\mu\sigma}\Omega^{\sigma}\right)e_{\nu}+\Omega\varepsilon_{\mu\nu}+\Sigma_{\mu\nu}. \end{split} \end{eqnarray} Hence, contracting \(D_{\mu}D_{\nu}u_{\delta}\) with \(\varepsilon^{\mu\nu}\) gives zero. Explicitly, we write this as the equation \begin{eqnarray}\label{panel10} \begin{split} 0&=\left[3\Sigma\xi+\varepsilon^{\nu\sigma}D_{\nu}\left(\Sigma_{\sigma}-\varepsilon_{\sigma\delta}\Omega^{\delta}\right)\right]e_{\mu}+2\xi\left(\Sigma_{\mu}+\varepsilon_{\mu\sigma}\Omega^{\sigma}\right)+\frac{1}{2}\varepsilon^{\sigma}_{\mu}\delta_{\sigma}\left(\frac{2}{3}\Theta-\Sigma\right)\\ &+\varepsilon^{\sigma\nu}\left(\Omega D_{\sigma}\varepsilon_{\nu\mu}+D_{\sigma}\Sigma_{\nu\mu}\right)+\delta_{\mu}\Omega+\left(\Sigma_{\nu}-\varepsilon_{\nu\sigma}\Omega^{\sigma}\right)\left(\frac{1}{2}\phi\varepsilon^{\nu}_{\mu}-\xi N^{\nu}_{\mu}+\varepsilon^{\sigma\nu}\zeta_{\sigma\mu}\right), \end{split} \end{eqnarray} and upon contracting \eqref{panel10} with \(e^{\mu}\) and simplifying gives the following expression: \begin{eqnarray}\label{peen1} 3\Sigma\xi+\Omega 
\phi-\delta_{\mu}\Omega^{\mu}+\varepsilon^{\mu\nu}\left[\delta_{\mu}\Sigma_{\nu}+\Sigma^{\sigma}_{\nu}\left(\delta_{\mu}e_{\sigma}+\delta_{\{\mu}e_{\sigma\}}\right)\right]&=0. \end{eqnarray} Now, the Gauss and Codazzi embedding equations to be satisfied by a properly embedded hypersurface, with Ricci tensor of the form \eqref{pa1}, in the ambient spacetime are explicitly given by \begin{subequations} \begin{align} 0&=\left[\left(\lambda-\beta\right)+\frac{3}{2}\left(\frac{1}{3}\Theta\Sigma-\mathcal{E}-\frac{1}{2}\Pi\right)\right]\left(e_{\mu}e_{\nu}-\frac{1}{3}h_{\mu\nu}\right)+\frac{1}{3}\Theta\Sigma_{\mu\nu}+\mathcal{E}_{\mu\nu}\nonumber\\ &+2\left(\frac{1}{3}\Theta\Sigma_{(\mu}+\mathcal{E}_{(\mu}+\frac{1}{2}\Pi_{(\mu}\right)e_{\nu)}+\frac{1}{3}h_{\mu\nu}\left(\frac{3}{2}\Sigma^2+2\Sigma_{\mu}\Sigma^{\mu}+\Sigma_{\mu\nu}\Sigma^{\mu\nu}\right),\label{new1}\\ 0&=\lambda+2\beta+\frac{2}{3}\Theta^2-\left(\frac{3}{2}\Sigma^2+2\Sigma_{\mu}\Sigma^{\mu}+\Sigma_{\mu\nu}\Sigma^{\mu\nu}\right)-2\rho,\label{new2}\\ 0&=-\frac{3}{2}e_{(\mu}\varepsilon_{\ \nu)}^{\sigma}\delta_{\sigma}\Sigma-\frac{3}{2}\left(\xi\Sigma-\mathcal{H}\right) \left(h_{\mu\nu}-e_{\mu}e_{\nu}\right)+\xi\Sigma_{(\mu}e_{\nu)}+\varepsilon_{\sigma\delta(\mu}D^{\sigma}\Sigma_{\nu)}^{\delta}\nonumber\\ &-\varepsilon_{\sigma(\mu}D^{\sigma}\Sigma_{\nu)}+\Sigma_{\sigma}\left(\varepsilon^{\ \sigma}_{(\mu}a_{\nu)}+\frac{1}{2}\phi e_{(\mu}\varepsilon_{\nu)}^{\ \sigma}\right)+\frac{1}{2}\left(\Sigma_{\delta}\varepsilon^{\ \delta}_{\sigma(\mu}-3\varepsilon_{\sigma(\mu}\right)\zeta^{\ \sigma}_{\nu)}\nonumber\\ &+\left(\varepsilon_{\sigma\delta(\mu}D^{\sigma}\Sigma^{\delta}+\mathcal{H}_{(\mu}\right)e_{\nu)}+e_{(\nu}\Sigma_{\mu)},\label{new3}\\ 0&=\left[\frac{3}{2}\Sigma\phi+\frac{3}{4}\hat{\Sigma}-\frac{2}{3}\hat{\Theta}-Q+\left(\delta_{\mu}-2a_{\mu}\right)\Sigma^{\mu}\right]e_{\mu}+\frac{3}{2}\left(\Sigma a_{\mu}+\phi\Sigma_{\mu}\right)\nonumber\\ 
&+\hat{\Sigma}_{\mu}-Q_{\mu}+\left(\xi\varepsilon_{\sigma\mu}+\zeta_{\sigma\mu}\right)\Sigma^{\sigma}-\frac{3}{4}\delta_{\mu}\Sigma-\frac{2}{3}\delta_{\mu}\Theta.\label{new4} \end{align} \end{subequations} Let us take the trace of \eqref{new1} and \eqref{new3}, as well as contract them with \(e^{\mu}e^{\nu}\). We obtain the set \begin{subequations} \begin{align} 0&=\frac{3}{2}\Sigma^2+2\Sigma_{\mu}\Sigma^{\mu}+\Sigma_{\mu\nu}\Sigma^{\mu\nu},\label{exe1}\\ 0&=3\left(\mathcal{H}-\xi\Sigma\right)-\varepsilon^{\mu\nu}\left(\delta_{\mu}-a_{\mu}\right)\Sigma_{\nu},\label{exe2}\\ 0&=\frac{2}{3}\left[\left(\lambda-\beta\right)+\frac{1}{2}\left(\frac{1}{3}\Theta\Sigma-\frac{1}{2}\Pi\right)\right]+\frac{1}{3}\left[\frac{3}{2}\Sigma^2+2\Sigma_{\mu}\Sigma^{\mu}+\Sigma_{\mu\nu}\Sigma^{\mu\nu}\right],\label{exe3}\\ 0&=\Sigma_{\mu\sigma}\varepsilon^{\mu\nu}\delta_{\nu}e^{\sigma},\label{exe4} \end{align} \end{subequations} (keep in mind that \(\mathcal{H}\) is zero everywhere since the hypersurface is conformally flat), and upon contracting \eqref{new4} with \(e^{\mu}\) we get \begin{eqnarray}\label{exe5} 0=\frac{3}{2}\Sigma\phi+\frac{3}{4}\hat{\Sigma}-\frac{2}{3}\hat{\Theta}-Q+\left(\delta_{\mu}-2a_{\mu}\right)\Sigma^{\mu}. \end{eqnarray} Substituting \eqref{exe1} into \eqref{new2}, we have that \begin{eqnarray}\label{exe6} \frac{2}{3}\Theta^2=2\rho-\left(\lambda+2\beta\right), \end{eqnarray} and hence \begin{eqnarray}\label{exe7} \lambda+2\beta=\ ^3R\leq2\rho. \end{eqnarray} Indeed, for finite \(\rho\), the scalar curvature is then bounded, which is suggestive of compactness of the hypersurface. Furthermore, if \(^3R\) is required to be non-negative, then \eqref{exe7} forces the energy density \(\rho\) to be non-negative, and \(\rho=0\) implies \(^3R=0\).
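The identification \(\lambda+2\beta=\ ^3R\) used in \eqref{exe7} is just the trace of a Ricci tensor of the assumed form \(^3R_{\mu\nu}=\beta h_{\mu\nu}+\left(\lambda-\beta\right)e_{\mu}e_{\nu}\) (cf. the comparison with \eqref{pin36} later); a quick sympy check in an orthonormal frame (our notation):

```python
import sympy as sp

lam, beta = sp.symbols('lambda beta')
e = sp.Matrix([1, 0, 0])   # unit vector e^mu in an orthonormal frame on the 3-manifold
h = sp.eye(3)              # induced metric h_{mu nu} in that frame

# Ricci tensor of the assumed form: beta*h + (lambda - beta) e e
Ric = beta*h + (lam - beta)*(e*e.T)
assert sp.simplify(Ric.trace() - (lam + 2*beta)) == 0   # ^3R = lambda + 2 beta
```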
We can use \eqref{3.20} and \eqref{3.21} to substitute into \eqref{exe6} and show that the cosmological constant is proportional to the square of the shear scalar \(\Sigma\), and must be negative in the cases considered in this work. In particular, we have that \(\Lambda=-(9/4)\Sigma^2\). Therefore, the cosmological constant vanishes if and only if the hypersurface is non-shearing. Clearly, requiring \(^3R\) to be non-negative forces the energy density \(\rho\) to be non-negative on the hypersurfaces as well. Given a spacetime of the type considered in this work, choosing a hypersurface in the spacetime (this implies specifying the Ricci tensor on the hypersurface, which in turn implies specifying \(\lambda\) and \(\beta\)) will present additional constraints on the hypersurface. So, for example, consider the class II of locally rotationally symmetric spacetimes, for which \cite{gbc1} \begin{subequations} \begin{align} \lambda&=-\left(\hat{\phi}+\frac{1}{2}\phi^2\right),\label{exe8}\\ \beta&=-\left[\frac{1}{2}\left(\hat{\phi}+\phi^2\right)-K\right],\label{exe9} \end{align} \end{subequations} with \(K\) the Gaussian curvature of the \(2\)-surfaces in the spacetime, given by \begin{eqnarray}\label{exe10} K=\frac{1}{3}\rho-\frac{1}{2}\Pi+\frac{1}{4}\phi^2-\frac{1}{4}\left(\frac{2}{3}\Theta-\Sigma\right)^2, \end{eqnarray} where we have set \(\mathcal{E}=0\). Substituting \eqref{exe8} and \eqref{exe9} into \eqref{exe6} and \eqref{exe3}, and comparing the results, we obtain the equation \begin{eqnarray}\label{exe11} \frac{1}{3}\Theta\left(\Theta+\frac{1}{2}\Sigma\right)=\rho+\frac{3}{2}\hat{\phi}+\frac{1}{4}\phi^2+\frac{1}{4}\Pi, \end{eqnarray} which is the additional constraint we seek. We also stress that additional constraints may be obtained by taking the dot derivatives of the scalar equations obtained from the Gauss-Codazzi embedding equations.
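The quoted proportionality \(\Lambda=-(9/4)\Sigma^2\) can be reproduced symbolically by substituting \eqref{3.20} and \eqref{3.21} into \eqref{exe6}; the sketch below assumes, in addition to \(\mathcal{E}=0\), a non-rotating hypersurface (\(\Omega=0\)), and the symbol names are ours:

```python
import sympy as sp

rho, Lam, Pi, Theta, Sigma = sp.symbols('rho Lambda Pi Theta Sigma')

# \eqref{3.20} and \eqref{3.21} with E = 0 (conformal flatness) and Omega = 0
lam = sp.Rational(2, 3)*(rho + Lam) + Pi/2 \
    - (Theta/3 + Sigma)*(sp.Rational(2, 3)*Theta - Sigma)
beta = sp.Rational(2, 3)*rho - Pi/4 \
    - sp.Rational(1, 2)*(sp.Rational(2, 3)*Theta - Sigma)*(sp.Rational(2, 3)*Theta + Sigma/2)

# \eqref{exe6}: (2/3) Theta^2 = 2 rho - (lambda + 2 beta); solve for Lambda
sol = sp.solve(sp.Eq(sp.Rational(2, 3)*Theta**2, 2*rho - (lam + 2*beta)), Lam)
assert len(sol) == 1
assert sp.simplify(sol[0] + sp.Rational(9, 4)*Sigma**2) == 0   # Lambda = -(9/4) Sigma^2
```

Both \(\rho\) and \(\Theta\) cancel from the resulting relation, leaving the cosmological constant tied to the shear alone under the stated assumptions.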
As a limiting case, in shear-free spacetimes (\(\Lambda\) is necessarily zero), the constraints are greatly simplified. In particular, the rotation satisfies \(\delta_{\mu}\Omega^{\mu}-\phi\Omega=0\), \(\lambda=(1/4)\Pi\), and the heat flux satisfies \(Q=-\left(2/3\right)\hat{\Theta}\). Hence, for the shear-free case we can make the following observation: \begin{remark} A hypersurface with Ricci tensor \eqref{pa1}, in a shear-free spacetime admitting a \(1+1+2\) decomposition, is radiating if the expansion decreases along \(e^{\mu}\), is absorbing radiation if the expansion increases along \(e^{\mu}\), and neither radiates nor absorbs radiation if the expansion is constant along \(e^{\mu}\). In the increasing and decreasing cases we make the assumption that the geometry in the vicinity of the hypersurface is smooth. \end{remark} The above remark is relevant to the study of horizon dynamics of black holes, particularly in astrophysical and cosmological settings where, for example, in-falling radiation across the horizon increases the horizon area, while a radiating black hole has decreasing horizon area. Now, the local symmetry condition also implies that \(^3R_{\mu\nu}\) is Codazzi (by the second contracted Bianchi identity), and hence \begin{eqnarray}\label{panel123} D_{\sigma}\left(^3R_{\mu\nu}\right)-D_{\mu}\left(^3R_{\sigma\nu}\right)=0. \end{eqnarray} Writing \eqref{panel123} explicitly, and contracting with \(e^{\sigma}N^{\mu\nu},e^{\sigma}u^{\mu}u^{\nu},u^{\sigma}e^{\mu}e^{\nu}\), we obtain the following system of equations \begin{subequations} \begin{align} 2\hat{\beta}-\left(\lambda-\beta\right)\phi&=0,\label{pene1}\\ \dot{\beta}&=0,\label{pene2}\\ \dot{\lambda}-2\dot{\beta}&=0,\label{pene4} \end{align} \end{subequations} with \eqref{pene1} being identically satisfied, as was shown previously. We also have that \(\dot{\lambda}=\dot{\beta}=0\) from \eqref{pene2} and \eqref{pene4}.
The function \(\lambda\), using the Ricci identities for \(e^{\mu}\), can be written as \begin{eqnarray}\label{panel125} -e^{\mu}e^{\gamma}N^{\nu\delta}\left(^3R_{\mu\nu\delta\gamma}\right)=\lambda=-2e^{\mu}N^{\nu\delta}D_{[\mu}D_{\nu]}e_{\delta}, \end{eqnarray} where the term on the right-hand side can be written entirely in terms of the covariant variables. The local symmetry condition further implies that \begin{eqnarray}\label{mich1} D_{\mu}\left(^3R\right)=0, \end{eqnarray} which implies \(\dot{\lambda}+2\dot{\beta}=0\), \(\hat{\lambda}+2\hat{\beta}=0\) and \(\delta^2\left(\lambda+2\beta\right)=0\); that is, \(\dot{R}=\hat{R}=0\) and \(\delta^2\left(\lambda+2\beta\right)=0\), the first two of which are already satisfied. The covariant derivative of \(\lambda\) (using the left-hand side of \eqref{panel125}) simplifies as \begin{eqnarray}\label{panel127} \begin{split} D_{\sigma}\lambda&=-e^{\mu}e^{\gamma}N^{\nu\delta}D_{\sigma}\left(^3R_{\mu\nu\delta\gamma}\right)\\ &=0, \end{split} \end{eqnarray} by \eqref{panel1}. We consequently have that \(\delta^2\lambda=0\Longrightarrow\delta^2\beta=0\). Indeed, \eqref{pane7} is always satisfied. Hence, the metric \(h_{\mu\nu}\) is of constant scalar curvature. We shall now make brief statements on the two cases of Proposition \ref{prop1} individually. \subsection{The Einstein case} Suppose we have that \(\lambda-\beta=0\) (with \(\xi\neq0,\phi\neq0\)). By using the vanishing of the dot derivatives of \(\lambda\) and \(\beta\), one can take derivatives of the scalar equations obtained from the Gauss-Codazzi embedding equations and show that the following equation has to be satisfied on the hypersurface: \begin{eqnarray}\label{ei1} \Theta^2\dot{\Sigma}=0, \end{eqnarray} so that either the expansion vanishes or the shear is constant along \(u^{\mu}\).
If the expansion \(\Theta\) vanishes, then from \eqref{exe6} the function \(\lambda\) has to be proportional to the energy density \(\rho\); in particular, \(\lambda=(2/3)\rho\). We also have that the anisotropic stress \(\Pi\) vanishes, and the heat flux satisfies \begin{eqnarray}\label{ei2} Q=\frac{3}{2}\left(\Sigma\phi+\frac{1}{2}\hat{\Sigma}\right)+\left(\delta_{\mu}-2a_{\mu}\right)\Sigma^{\mu}. \end{eqnarray} Clearly, in the shear-free case the hypersurface then models a conformally flat perfect fluid. Note that throughout we also require \eqref{exe2} to be satisfied. The condition \(\dot{\Sigma}=0\), which we will not discuss here, can be used to obtain additional constraints on the hypersurface from the field equations. \subsection{Case of vanishing twist and sheet expansion} On the other hand, let us assume that the hypersurfaces are not Einstein and that \(\xi=\phi=0\). The following three constraints are required to be satisfied: \begin{subequations} \begin{align} 0&=\varepsilon^{\mu\nu}\left(\delta_{\mu}-a_{\mu}\right)\Sigma_{\nu},\label{ei3}\\ \delta_{\mu}\Omega^{\mu}&=\varepsilon^{\mu\nu}\left[\delta_{\mu}\Sigma_{\nu}+\Sigma_{\nu}^{\sigma}\left(\delta_{\mu}e_{\sigma}+\delta_{\{\mu}e_{\sigma\}}\right)\right],\label{ei4}\\ Q&=\frac{3}{4}\hat{\Sigma}-\frac{2}{3}\hat{\Theta}+\left(\delta_{\mu}-2a_{\mu}\right)\Sigma^{\mu}.\label{ei5} \end{align} \end{subequations} \section{The hypersurfaces admitting a Ricci soliton structure}\label{soc6} We now proceed to consider the cases for which the hypersurfaces considered in this work admit a Ricci soliton structure. We recall the definition of a Ricci soliton. \begin{definition}\label{deef1} A Riemannian manifold \(\left(\Xi,h_{\mu\nu},\varrho,X^{\mu}\right)\) is called a Ricci soliton if there exists a vector field \(X^{\mu}\) and a real scalar \(\varrho\in \mathbb{R}\) such that \begin{eqnarray}\label{pin22} ^3R_{\mu\nu}=\varrho h_{\mu\nu}-\frac{1}{2}\mathcal{L}_{X}h_{\mu\nu},
\end{eqnarray} where \(\mathcal{L}_{X}\) denotes the Lie derivative along the vector field \(X^{\mu}\). \end{definition} A Ricci soliton \(\left(\Xi,h_{\mu\nu},\varrho,X^{\mu}\right)\) is said to be shrinking, steady or expanding if \(\varrho>0\), \(\varrho=0\) or \(\varrho<0\), respectively. The vector field \(X^{\mu}\) is sometimes referred to as the \textit{soliton field}, and we will use this terminology in the rest of the work. With respect to the hypersurfaces considered in this work, it is clear that any Ricci soliton would be trivial, since the hypersurfaces are conformally flat. This follows from a well-known result due to Ivey \cite{iv1}, Perelman \cite{gp1}, Petersen and Wylie \cite{pet1}, and Catino and Mantegazza \cite{cat1}, among others, which can be formulated as\\ \ \\ \textit{Any nontrivial homogeneous Riemannian Ricci soliton must be non-compact, non-conformally flat, expanding and non-gradient.} \\ \ \\ Here, by non-trivial it is meant that the Ricci soliton is neither an Einstein space, nor the product of an Einstein and a (pseudo-)Euclidean space. An immediate result is the following \begin{corollary}\label{theo2} Let \(M\) be a \(4\)-dimensional spacetime, and let \(\Xi\) be a locally symmetric embedded \(3\)-manifold in \(M\), with Ricci tensor of the form \eqref{pa1}, and scalar curvature \(^3R\geq 0\). If \(\Xi\) admits a Ricci soliton structure, then either \(\Xi\) is locally an Einstein \(3\)-space, or \(\Xi\) is locally isometric to \begin{eqnarray}\label{eei1} \mathcal{M}_E\times\mathbb{R}, \end{eqnarray} where \(\mathcal{M}_E\) denotes a \(2\)-dimensional Einstein manifold. \end{corollary} Of course, any Ricci soliton of geometry \eqref{eei1}, as it pertains to this work, must have vanishing sheet expansion and twist. That a Ricci soliton is trivial by no means implies that it is uninteresting; in fact, these objects have a very rich structure and have been extensively studied.
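Looking ahead to the pinching estimate \eqref{pin45}, the norms involved reduce to scalar algebra for a Ricci tensor of the form \(\beta h_{\mu\nu}+\left(\lambda-\beta\right)e_{\mu}e_{\nu}\); a minimal sympy sketch (our notation) of the two scalar inequalities the estimate produces:

```python
import sympy as sp

lam, beta = sp.symbols('lambda beta')
e = sp.Matrix([1, 0, 0])
h = sp.eye(3)
Ric = beta*h + (lam - beta)*(e*e.T)   # in this frame: diag(lambda, beta, beta)

R = Ric.trace()                                                 # ^3R = lambda + 2 beta
norm2 = sum(Ric[i, j]**2 for i in range(3) for j in range(3))   # |^3R_{mu nu}|^2

# Lower bound: (1/3) R^2 - |Ric|^2 = -(2/3)(lambda - beta)^2 <= 0 (always satisfied)
assert sp.simplify(sp.Rational(1, 3)*R**2 - norm2 + sp.Rational(2, 3)*(lam - beta)**2) == 0
# Upper bound: |Ric|^2 - R^2 = -2 beta (beta + 2 lambda)
assert sp.simplify(norm2 - R**2 + 2*beta*(beta + 2*lam)) == 0
```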
We will explore some of their properties with regard to our covariant approach. Now, the use of the local symmetry condition on \eqref{pin22} gives \begin{eqnarray}\label{pin35} D_{\sigma}\mathcal{L}_{X}h_{\mu\nu}=2D_{\sigma}D_{(\mu}X_{\nu)}=0. \end{eqnarray} Upon comparing \eqref{pin22} with \eqref{pa1}, we see that the equations to be solved by the hypersurfaces under consideration are \begin{subequations} \begin{align} D_{(\mu}X_{\nu)}&=-\left[\left(\beta-\varrho\right)h_{\mu\nu}+\left(\lambda-\beta\right)e_{\mu}e_{\nu}\right],\label{pin36}\\ D_{\sigma}D_{(\mu}X_{\nu)}&=0.\label{pin37} \end{align} \end{subequations} It is seen that \eqref{pin36} implies \eqref{pin37}, so it suffices to solve \eqref{pin36}. By choosing the general form of the vector field on the hypersurface as \begin{eqnarray}\label{pin38} X^{\mu}=\alpha e^{\mu}+m^{\mu}, \end{eqnarray} confined to the hypersurface, where \(m^{\mu}\) is the component of \(X^{\mu}\) lying in the \(2\)-sheet and \(\alpha\in C^{\infty}\left(\Xi\right)\), we can expand \eqref{pin36} and contract with \(h^{\mu\nu},e^{\mu}e^{\nu}\) and \(u^{\mu}e^{\nu}\) to get the following set of equations: \begin{subequations} \begin{align} \hat{\alpha}+\delta_{\mu}m^{\mu}&=3\varrho+\lambda+2\beta,\label{pin39}\\ \hat{\alpha}+e_{\mu}\widehat{m^{\mu}}&=\varrho-\lambda,\label{pin40}\\ \dot{\alpha}&=0.\label{pin41} \end{align} \end{subequations} We shall focus on the case where the vector field \(X^{\mu}\) is parallel to \(e^{\mu}\), i.e. \(m^{\mu}=0\). Subtracting \eqref{pin40} from \eqref{pin39}, we obtain \begin{eqnarray}\label{pin44} \varrho=-\lambda-\beta. \end{eqnarray} In the case of vanishing scalar curvature, we have that \(\varrho=\beta\), and hence the nature of the soliton is entirely specified by \(\beta\). Now, suppose \(^3R>0\).
Then from the estimate \cite{ham2} \begin{eqnarray}\label{pin45} \frac{1}{3}\left(^3R\right)^2\leq |^3R_{\mu\nu}|^2\leq \left(^3R\right)^2, \end{eqnarray} it is straightforward to show that \(\beta\geq0\). To see this, note that the above estimate reduces to the two inequalities \begin{eqnarray*} \begin{split} -\frac{2}{3}\left(\lambda-\beta\right)^2\leq0,\\ -2\beta\left(\beta+2\lambda\right)\leq0, \end{split} \end{eqnarray*} the first of which is of course satisfied. The second can be rewritten as \begin{eqnarray}\label{xxa1} -4\beta\left(^3R-\frac{3}{2}\beta\right)\leq0. \end{eqnarray} Therefore, if \(\beta<0\), then one should have \begin{eqnarray*} ^3R\leq\frac{3}{2}\beta\Longrightarrow\ ^3R<0, \end{eqnarray*} contradicting \(^3R\) being strictly positive. Hence, we must have that \(\beta\geq0\). If \(\beta=0\), then the soliton is necessarily expanding since \(^3R=\lambda>0\). Consider the case \(\beta>0\). From \eqref{pin44}, whether the Ricci soliton is steady, shrinking or expanding will depend on the sign of the sum \(\lambda+\beta\): the Ricci soliton is steady, shrinking or expanding if \(\lambda+\beta=0\), \(\lambda+\beta<0\) or \(\lambda+\beta>0\), respectively. Notice that \begin{eqnarray}\label{yoyo} \begin{split} ^3R=\lambda+2\beta&\geq0\\ \Longrightarrow\ \ \left(\lambda+\beta\right)+\beta&\geq0\\ \Longrightarrow\ \ -\varrho+\beta&\geq0\\ \Longrightarrow\ \ \varrho&\leq\beta, \end{split} \end{eqnarray} with equality holding if and only if \(^3R=0\). Indeed, if \(^3R>0\), then we have that \(\varrho<\beta\) (\(\varrho=\beta\Longrightarrow\ ^3R=0\)). From \eqref{exe7}, this gives the following bound on the scalar \(\varrho\): \(\beta-2\rho\leq\varrho\leq\beta\). Therefore, whenever \(\beta\geq 2\rho\), we have \(\varrho\geq0\) and the soliton is non-expanding; this is consistent with the energy density \(\rho\) being non-negative, which is desirable from a physical point of view. It follows that, for \(^3R=0\), the following hold: \begin{itemize} \item[1.]
Steady implies \(\Xi\) is flat (\(\lambda=\beta=0\)); \item[2.] Shrinking implies \(\lambda<0\); \item[3.] Expanding implies \(\lambda>0\). \end{itemize} Explicitly, write \eqref{pin44} as \begin{eqnarray}\label{yop1} \varrho=\left(\frac{2}{3}\Theta-\Sigma\right)\left(\frac{2}{3}\Theta+\frac{5}{4}\Sigma\right)-2\Omega^2-\frac{1}{4}\Pi+\frac{2}{3}\left(\rho+\Lambda\right). \end{eqnarray} The condition for the hypersurface to be of Einstein type can be expressed as \begin{eqnarray}\label{yop2} \frac{3}{4}\Sigma\left(\frac{2}{3}\Theta-\Sigma\right)-2\Omega^2+\frac{1}{4}\Pi=0. \end{eqnarray} So, for example, whenever \(X^{\mu}\) is a Killing vector for the metric \(h_{\mu\nu}\) of \(\Xi\) and \(\Xi\) is of Einstein type, the following holds on \(\Xi\): \begin{eqnarray}\label{yop3} \left(\frac{2}{3}\Theta-\Sigma\right)\left(\frac{2}{3}\Theta+\frac{1}{2}\Sigma\right)-\frac{1}{2}\Pi+\frac{2}{3}\left(\rho+\Lambda\right)=0. \end{eqnarray} Indeed, it follows that, if a hypersurface \(\Xi\) admits a Ricci soliton structure and on \(\Xi\) we have \begin{eqnarray}\label{yop4} \frac{3}{4}\Sigma\left(\frac{2}{3}\Theta-\Sigma\right)-2\Omega^2+\frac{1}{4}\Pi\neq0, \end{eqnarray} then condition \(2.\) of Proposition \ref{prop1} holds, and \(\Xi\) has geometry \(\mathcal{M}_{E}\times\mathbb{R}\). For example, consider a shear-free spacetime with vanishing anisotropic stress. The condition \eqref{yop4} then reduces to the requirement that the hypersurface must rotate. Also, since \(^3R\leq2\rho\), the following inequality must be satisfied on \(\Xi\): \begin{eqnarray}\label{fgh} \Omega^2\leq \frac{1}{3}\Theta^2. \end{eqnarray} Consider the case that the spacetime is expansion-free. Then the hypersurface cannot possibly rotate. The converse need not hold: if the hypersurface is non-rotating, a non-zero expansion \(\Theta^2>0\) is still allowed. Now let us return to the system \eqref{pin39} to \eqref{pin41}.
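Before doing so, we record a mechanical check of the scalar algebra above. The sketch below, in exact rational arithmetic, verifies the two reductions of the estimate \eqref{pin45}, the rewriting \eqref{xxa1}, and the identity chain for \(\varrho-\lambda\) that appears in \eqref{pin49}. The only input beyond the text is the eigenvalue reading of the Ricci tensor implied by \eqref{pin36}: one eigenvalue \(\lambda\) along \(e^{\mu}\) and a double eigenvalue \(\beta\) on the sheet, so that \(^3R=\lambda+2\beta\) and \(|^3R_{\mu\nu}|^2=\lambda^2+2\beta^2\) (an assumption made here only for the purposes of the check).

```python
from fractions import Fraction as F
import random

random.seed(0)

def check(lam, beta):
    # Eigenvalue data: lam along e^mu, double eigenvalue beta on the sheet.
    R3 = lam + 2 * beta            # scalar curvature ^3R
    R_sq = lam**2 + 2 * beta**2    # |^3R_{mu nu}|^2

    # Lower bound of (pin45) reduces to -(2/3)(lam-beta)^2 <= 0
    assert R_sq - F(1, 3) * R3**2 == F(2, 3) * (lam - beta) ** 2
    # Upper bound reduces to -2*beta*(beta+2*lam) <= 0, i.e. (xxa1)
    assert R3**2 - R_sq == 2 * beta * (beta + 2 * lam)
    assert 2 * beta * (beta + 2 * lam) == 4 * beta * (R3 - F(3, 2) * beta)

    # (pin44): the soliton constant, and the chain rewriting rho - lam
    # used in (pin49)
    rho_ = -(lam + beta)
    assert rho_ - lam == -(R3 + (lam - beta)) == -2 * (R3 - F(3, 2) * beta)

for _ in range(100):
    check(F(random.randint(-9, 9)), F(random.randint(-9, 9)))
print("all identities verified")
```

In particular, for vanishing scalar curvature the chain gives \(\varrho-\lambda=3\beta\), which is the coefficient appearing later in the solution for \(\alpha\).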
For the solution of \(\alpha\), we can directly integrate \eqref{pin39} to obtain (we take the constant of integration to be zero) \begin{eqnarray}\label{pin49} \begin{split} \alpha&=\left(\varrho-\lambda\right)\chi\\ &=-\left[^3R+\left(\lambda-\beta\right)\right]\chi\\ &=-2\left(^3R-\frac{3}{2}\beta\right)\chi, \end{split} \end{eqnarray} where \(\chi\) parametrizes integral curves of \(e^{\mu}\). In many instances when dealing with spacetimes of physical interest, the parameter \(\chi\) can be identified with the radial coordinate. We will consider the interval \(0<\chi<\infty\). Indeed, for the Einstein case, the vector field \(X^{\mu}\) will point opposite the unit direction \(e^{\mu}\) (\(\alpha\neq0\Longrightarrow\ ^3R>0\Longrightarrow\) \(\Xi\) is non-flat). Of course, this does not accommodate the steady case. In the non-Einstein case, for vanishing scalar curvature one has \(\alpha=3\beta\chi\), and since \(\alpha\neq0\), the soliton field points in the unit direction \(e^{\mu}\) if and only if the soliton is shrinking, and points opposite \(e^{\mu}\) if and only if the soliton is expanding. (Clearly \(^3R=0\) does not accommodate the steady case here as well, since otherwise we would have \(\alpha=0\).) Furthermore, it is not difficult to see that the Einstein case will necessarily have \(\varrho<0\). Consider the case of positive scalar curvature. Recall that \(\beta\geq0\) in this case. If \(\beta=0\) we have an expanding soliton with soliton field pointing opposite \(e^{\mu}\). If \(\beta>0\), then \eqref{xxa1} gives \(^3R\geq\frac{3}{2}\beta\), so that \(\alpha\leq0\) by \eqref{pin49}: we again have an expanding soliton with the soliton field pointing opposite \(e^{\mu}\). This leads us to state the following result: \begin{proposition}\label{porp1} Let \(M\) be a spacetime admitting a \(1+1+2\) decomposition, and let \(\Xi\) be a locally symmetric embedded 3-manifold in \(M\) with Ricci tensor of the form \eqref{pa1} with \(^3R\geq0\), admitting a Ricci soliton structure.
Suppose the soliton field is non-trivial and parallel to the unit direction \(e^{\mu}\). If \(\Xi\) is Einstein, then \(\Xi\) is expanding opposite the direction of \(e^{\mu}\). If \(\Xi\) is non-Einstein and the scalar curvature vanishes, then \begin{itemize} \item \(\Xi\) is shrinking in the direction of \(e^{\mu}\) for \(\beta>0\); or \item \(\Xi\) is expanding opposite the direction of \(e^{\mu}\) for \(\beta<0\). \end{itemize} Otherwise, if the scalar curvature is strictly positive, then \begin{itemize} \item \(\Xi\) is expanding opposite the direction of \(e^{\mu}\) for \(\beta=0\); or \item \(\Xi\) is expanding opposite the direction of \(e^{\mu}\) for \(\beta>0\). \end{itemize} \end{proposition} One may consider more general cases of \eqref{pin38} in which \(X^{\mu}\) also has a component along the \(u^{\mu}\) direction. Consider the vector field \begin{eqnarray}\label{pin50} X^{\mu}=\gamma u^{\mu}+\alpha e^{\mu} + m^{\mu}. \end{eqnarray} Using \eqref{pin36} and contracting with \(h^{\mu\nu},e^{\mu}e^{\nu},u^{\mu}e^{\nu}\) and \(u^{\mu}u^{\nu}\) we obtain the following set of equations: \begin{subequations} \begin{align} \dot{\gamma}+\Theta\gamma+\hat{\alpha}+\delta_{\mu}m^{\mu}&=3\varrho+\lambda+2\beta,\label{pin51}\\ \hat{\alpha}+\left(\frac{1}{3}\Theta+\Sigma\right)\gamma+e_{\mu}\widehat{m^{\mu}}&=\varrho-\lambda,\label{pin52}\\ \dot{\alpha}-\hat{\gamma}&=0,\label{pin53}\\ \dot{\gamma}&=0.\label{pin54} \end{align} \end{subequations} As before, if we consider the case of vanishing sheet component of \(X^{\mu}\), then comparing \eqref{pin54}, \eqref{pin52}, and \eqref{pin51}, \(\gamma\) can explicitly be written as \begin{eqnarray}\label{pin55} \left(\frac{2}{3}\Theta-\Sigma\right)\gamma=2\left(\varrho+\lambda+\beta\right).
\end{eqnarray} Interestingly, this implies that if \(N^{\mu\nu}D_{\mu}u_{\nu}=(2/3)\Theta-\Sigma\) vanishes, then whether the Ricci soliton is an expander, a shrinker or steady does not depend on the choice of the component along \(u^{\mu}\). The analysis then follows as in the previous case, with the solution for the component acquiring an additional term in \(\alpha\). Consequently, for hypersurfaces of the form \(\chi=X\left(\tau\right)\) (with \(\tau\) parametrizing integral curves of \(u^{\mu}\)), if the component along \(u^{\mu}\) is non-vanishing and the Ricci soliton is foliated by \(2\)-surfaces, this leads to an existence result with implications for black holes in spacetimes. Specifically, the Ricci soliton necessarily admits a marginally trapped tube structure, which generalizes the notion of black hole boundaries. These are hypersurfaces foliated by \(2\)-surfaces on which the trace of the second fundamental form with respect to the tangent to outgoing null geodesics (called the outgoing null expansion, denoted \(\theta^+\)) vanishes \cite{ash1,ash2,ash3,ak1,ib1,boo2,ibb1,ib3,shef1,rit1,shef2}. In the case of spacetimes admitting the splitting considered here, this is given by \cite{shef1,shef2} \begin{eqnarray} \theta^+=\frac{1}{\sqrt{2}}\left(\frac{2}{3}\Theta-\Sigma+\phi\right). \end{eqnarray} It is also easily seen from \eqref{pin55} that, if \(\gamma\neq0\), then \begin{eqnarray}\label{pin56} \frac{2}{3}\dot{\Theta}-\dot{\Sigma}=0. \end{eqnarray} The function \(\alpha\) may now be solved for a given \(\gamma\). Notice that in the case \(\alpha=0\) (or constant), \(\gamma\) must be constant, and the analysis follows just as in the previous cases. Let us now consider a special case where the vector field \(X^{\mu}\) is a generator of symmetries on \(\Xi\) and the ambient spacetime.
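For reference, the elimination behind \eqref{pin55} is short enough to record explicitly: substituting \(\dot{\gamma}=0\) from \eqref{pin54} into \eqref{pin51} (with \(m^{\mu}=0\)) and subtracting \eqref{pin52} gives

```latex
% (pin51) with \dot{\gamma}=0 and m^{\mu}=0, and (pin52):
\begin{align*}
\Theta\gamma+\hat{\alpha} &= 3\varrho+\lambda+2\beta,\\
\hat{\alpha}+\left(\tfrac{1}{3}\Theta+\Sigma\right)\gamma &= \varrho-\lambda,\\
\Longrightarrow\quad
\left(\Theta-\tfrac{1}{3}\Theta-\Sigma\right)\gamma &= 2\varrho+2\lambda+2\beta,
\end{align*}
```

which is \(\left(\tfrac{2}{3}\Theta-\Sigma\right)\gamma=2\left(\varrho+\lambda+\beta\right)\). Taking the dot derivative of this relation, with \(\dot{\gamma}=0\) and the eigenvalues covariantly constant, is what yields \eqref{pin56} for \(\gamma\neq0\).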
\subsection{\(X^{\mu}\) is a conformal Killing vector for the induced metric on the hypersurface} Suppose \(X^{\mu}\) is a conformal Killing vector (CKV) for the metric on the hypersurface \(h_{\mu\nu}\). Then there exists some smooth function \(\Psi\) on the hypersurface such that \begin{eqnarray}\label{pin57} \mathcal{L}_Xh_{\mu\nu}=2\Psi h_{\mu\nu}. \end{eqnarray} Taking the derivative of \eqref{pin57} and noting that \(D_{\sigma}\mathcal{L}_Xh_{\mu\nu}=0\), we have \begin{eqnarray}\label{pin58} h_{\mu\nu}D_{\sigma}\Psi=0, \end{eqnarray} and hence \(\Psi\) must be constant, in which case \(X^{\mu}\) is a homothetic Killing vector (HKV). The associated conformal factor \(\Psi\) can be found by setting \begin{eqnarray}\label{pin59} \Psi h_{\mu\nu}=-\left[\left(\beta-\varrho\right)h_{\mu\nu}+\left(\lambda-\beta\right)e_{\mu}e_{\nu}\right]. \end{eqnarray} Taking the trace of \eqref{pin59} as well as contracting with \(e^{\mu}e^{\nu}\) gives respectively \begin{subequations} \begin{align} \Psi&=-\frac{1}{3}\left(^3R-3\varrho\right),\label{pin60}\\ \Psi&=-\left(\lambda-\varrho\right),\label{pin6001} \end{align} \end{subequations} which upon equating gives \begin{eqnarray}\label{pin61} \frac{2}{3}\left(\lambda-\beta\right)=0. \end{eqnarray} Therefore the hypersurface must be of Einstein type. As expected, the vanishing of the derivative of \eqref{pin60} gives a constant scalar curvature. Notice that from \eqref{pin44} one sees that \(\varrho=-\left(\lambda+\beta\right)=-2\beta\), which gives the conformal factor as \(\Psi=-3\beta=-\ ^3R\). Also, \(\varrho=-2\beta\Longrightarrow\varrho\leq0\). This allows us to state the following \begin{proposition} Let \(M\) be a spacetime admitting a \(1+1+2\) decomposition, and let \(\Xi\) be a locally symmetric embedded 3-manifold in \(M\) with Ricci tensor of the form \eqref{pa1}, which admits a Ricci soliton structure.
If the associated soliton field \(X^{\mu}=\gamma u^{\mu}+\alpha e^{\mu}\) is a conformal Killing vector for the induced metric on \(\Xi\), then \(X^{\mu}\) is a homothetic Killing vector with associated conformal factor given by \begin{eqnarray}\label{pinew1} \Psi=-3\left[2\rho-\frac{2}{3}\Theta^2+\Sigma^2+2\Omega^2\right], \end{eqnarray} and \(\Xi\) is a non-shrinking Ricci soliton of Einstein type. Furthermore, \(\Xi\) is steady if and only if \(\Xi\) is flat. \end{proposition} The above result also agrees with the well-known fact that if the soliton field is a Killing field for the metric on \(\Xi\), then \(h_{\mu\nu}\) is an Einstein metric. \subsection{\(X^{\mu}\) is a conformal Killing vector for both metrics on the hypersurface and the ambient spacetime} Let \(X^{\mu}\) be a CKV for both the metric on the hypersurface and that on the ambient spacetime. Denote by \(\Psi\) and \(\bar{\Psi}\) the associated conformal factors respectively. The systems to be solved simultaneously are \eqref{pin57} and \begin{eqnarray}\label{pin62} \mathcal{L}_Xg_{\mu\nu}=2\bar{\Psi} g_{\mu\nu}.
\end{eqnarray} We can expand \eqref{pin62} as \begin{eqnarray}\label{pin63} \left(u_{(\mu}\nabla_{\nu)}+\nabla_{(\mu}u_{\nu)}\right)\gamma+\left(e_{(\mu}\nabla_{\nu)}+\nabla_{(\mu}e_{\nu)}\right)\alpha=\bar{\Psi}g_{\mu\nu}, \end{eqnarray} (we are assuming here again that the vector \(X^{\mu}\) has no component lying in the \(2\)-sheet) from which we obtain the following set of equations: \begin{subequations} \begin{align} \left(\frac{2}{3}\Theta-\Sigma\right)\gamma+\phi\alpha&=2\bar{\Psi},\label{pin64}\\ \dot{\gamma}+\mathcal{A}\alpha&=\bar{\Psi},\label{pin65}\\ \hat{\alpha}+\left(\frac{1}{3}\Theta+\Sigma\right)\gamma&=\bar{\Psi},\label{pin66}\\ \dot{\alpha}-\hat{\gamma}+\left(\frac{1}{3}\Theta+\Sigma\right)\left(\gamma-\alpha\right)&=0.\label{pin67} \end{align} \end{subequations} We state and prove the following \begin{proposition}\label{prop4} Let \(M\) be a spacetime admitting a \(1+1+2\) decomposition, and let \(\Xi\) be a locally symmetric embedded 3-manifold in \(M\) with Ricci tensor of the form \eqref{pa1}, which admits a Ricci soliton structure. If the associated soliton field \(X^{\mu}=\gamma u^{\mu}+\alpha e^{\mu}\) is a conformal Killing vector for both \(h_{\mu\nu}\) and \(g_{\mu\nu}\), then either \begin{enumerate} \item \(\frac{1}{3}\Theta+\Sigma=0\); or \item \(X^{\mu}\) is null, in which case, if \(X^{\mu}\) is a Killing vector for the metric \(g_{\mu\nu}\), then \(\Xi\) is flat, \(X^{\mu}\) is a Killing vector for the metric \(h_{\mu\nu}\), and the acceleration \(\mathcal{A}\) must vanish on \(\Xi\). And if \(\Xi\) is foliated by \(2\)-surfaces, then the Ricci soliton has the structure of a marginally trapped tube.
\end{enumerate} \end{proposition} \begin{proof} Comparing \eqref{pin53} to \eqref{pin67}, \eqref{pin54} to \eqref{pin65}, and subtracting \eqref{pin66} from \eqref{pin52}, we obtain respectively \begin{subequations} \begin{align} \left(\gamma-\alpha\right)\left(\frac{1}{3}\Theta+\Sigma\right)&=0,\label{pin70}\\ \mathcal{A}\alpha&=\bar{\Psi},\label{pin71}\\ \varrho-\lambda&=\bar{\Psi}.\label{pin72} \end{align} \end{subequations} From \eqref{pin70}, either \((1/3)\Theta+\Sigma=0\) or \(\gamma-\alpha=0\). Let us assume that \((1/3)\Theta+\Sigma\neq0\) and that \(\gamma-\alpha=0\). If \(X^{\mu}\) is a Killing vector for \(g_{\mu\nu}\), \(\bar{\Psi}=0\), and since \(\bar{\Psi}=\Psi\) (by \eqref{pin6001} and \eqref{pin72}), \(\Psi=0\) and \(X^{\mu}\) is a Killing vector for \(h_{\mu\nu}\). From \eqref{pin71} we have \begin{eqnarray}\label{pin75} \mathcal{A}\alpha=0. \end{eqnarray} Since \(\alpha\neq0\) (\(\alpha=0\Longrightarrow\gamma=0\Longrightarrow X^{\mu}=0\)), we must have that \(\mathcal{A}=0\). To show that \(\Xi\) is flat, first notice that as \(X^{\mu}\) is a KV for \(h_{\mu\nu}\), we know that \(\Xi\) is Einstein. From \eqref{pin72}, we have that \(\varrho=\lambda=\beta\), which upon comparing to \eqref{pin44} gives \(\lambda+2\beta=\ ^3R=0\). Hence, \(\beta=\lambda=0\Longrightarrow\ ^3R_{\mu\nu\delta\gamma}=0\). Now, the equation \eqref{pin64} can be written as \begin{eqnarray}\label{pin73} \left(\frac{2}{3}\Theta-\Sigma+\phi\right)\gamma=0. \end{eqnarray} Again, \(\gamma\neq0\) and hence we must have \begin{eqnarray}\label{pin77} \frac{2}{3}\Theta-\Sigma+\phi=0, \end{eqnarray} in which case \(2\)-surfaces in \(\Xi\) are marginally trapped. Therefore, as \(\Xi\) is foliated by \(2\)-surfaces, we have that \(\Xi\) has the structure of a marginally trapped tube. 
\end{proof} The following corollary follows from Proposition \ref{prop4}: \begin{corollary}\label{cor2} Let \(M\) be a spacetime admitting a \(1+1+2\) decomposition, and let \(\Xi\) be a locally symmetric embedded 3-manifold in \(M\) with Ricci tensor of the form \eqref{pa1} with scalar curvature \(^3R\geq0\). Assume that \(\Xi\) admits a Ricci soliton structure with the soliton field \(X^{\mu}\) (we assume this vector has no component lying in the sheet) being a CKV for both \(h_{\mu\nu}\) and \(g_{\mu\nu}\). If \(X^{\mu}\) is not null, then for simultaneous non-vanishing of the expansion and rotation on \(\Xi\), the anisotropic stress cannot be zero. \end{corollary} \begin{proof} Since \(\gamma\neq\alpha\), we must have \((1/3)\Theta+\Sigma=0\). Direct substitution of \((1/3)\Theta+\Sigma=0\) into \eqref{yop2} (noting that \(\Xi\) is Einstein) gives \begin{eqnarray}\label{cole1} \Pi=\Theta^2+8\Omega^2. \end{eqnarray} The result then follows if \(\Omega\) and \(\Theta\) are not simultaneously zero. \end{proof} Under the assumptions of Proposition \ref{prop4}, hypersurfaces on which \((1/3)\Theta+\Sigma=0\) with the soliton field non-null can admit a non-flat Ricci soliton structure. The proposition below gives a non-trivial case where the soliton field can be explicitly found. \begin{proposition}\label{cor3} Let \(M\) be a spacetime admitting a \(1+1+2\) decomposition, and let \(\Xi\) be a locally symmetric embedded 3-manifold in \(M\) with Ricci tensor of the form \eqref{pa1} with \(^3R\geq0\). Assume that \(\Xi\) admits a Ricci soliton structure with soliton field \(X^{\mu}=\gamma u^{\mu}+\alpha e^{\mu}\) being a CKV for both \(h_{\mu\nu}\) and \(g_{\mu\nu}\), and suppose \(X^{\mu}\) is non-null with the \(u^{\mu}\) component constant along \(e^{\mu}\).
If the acceleration \(\mathcal{A}\) is covariantly constant and non-vanishing, then the components \(\alpha\) and \(\gamma\) have the general solutions \begin{subequations} \begin{align} \alpha&=h\left(\tau\right)e^{-\mathcal{A}\tau},\label{dcf1}\\ \gamma&=-\mathcal{A}\int h\left(\tau\right)e^{-\mathcal{A}\tau}d\tau,\label{dcf2} \end{align} \end{subequations} for an arbitrary function \(h\left(\tau\right)\), with the solutions subject to \begin{eqnarray}\label{dds} \phi h\left(\tau\right)e^{-\mathcal{A}\tau}=\mathcal{A}\Theta\int h\left(\tau\right)e^{-\mathcal{A}\tau}d\tau. \end{eqnarray} \end{proposition} \begin{proof} Since \(\gamma\neq\alpha\), we must have \((1/3)\Theta+\Sigma=0\). Noting \(\bar{\Psi}=0\), from combining \eqref{pin66} and \eqref{pin67} we obtain the linear first order partial differential equation \begin{eqnarray}\label{gfe} \dot{\alpha}+\hat{\alpha}-\mathcal{A}\alpha=0. \end{eqnarray} The above equation can be solved to give the general solution \begin{eqnarray}\label{gfepi1} \alpha=h\left(\chi-\tau\right)e^{-\mathcal{A}\tau}. \end{eqnarray} From \eqref{pin65} we have that \begin{eqnarray}\label{gfe2} \gamma=-\mathcal{A}\int h\left(\chi-\tau\right)e^{-\mathcal{A}\tau}d\tau. \end{eqnarray} However, by assumption, \(\gamma\) is independent of the parameter \(\chi\). Hence, setting \(h\left(\chi-\tau\right)=h\left(\tau\right)\) we obtain \eqref{dcf1} and \eqref{dcf2}. One can obtain \eqref{dds} by substituting \eqref{dcf1} and \eqref{dcf2} into \eqref{pin64} as well as using the fact that \((1/3)\Theta+\Sigma=0\). \end{proof} It immediately follows that \begin{corollary} Let \(M\) be an expansion-free spacetime admitting a \(1+1+2\) decomposition, and let \(\Xi\) be a locally symmetric embedded 3-manifold in \(M\) with Ricci tensor of the form \eqref{pa1} with \(^3R\geq0\).
Assume that \(\Xi\) admits a Ricci soliton structure with soliton field \(X^{\mu}=\gamma u^{\mu}+\alpha e^{\mu}\) being a CKV for both \(h_{\mu\nu}\) and \(g_{\mu\nu}\), and suppose \(X^{\mu}\) is non-null with the \(u^{\mu}\) component constant along \(e^{\mu}\). If the acceleration \(\mathcal{A}\) is covariantly constant and non-vanishing, then the sheet expansion \(\phi\) must vanish. \end{corollary} \begin{proof} If the assumptions herein hold, then we have the solutions \eqref{dcf1} and \eqref{dcf2} for the components along \(e^{\mu}\) and \(u^{\mu}\) respectively, subject to \eqref{dds}. But the spacetime is expansion-free, and hence from \eqref{dds} we have \begin{eqnarray}\label{dds1} \phi h\left(\tau\right)e^{-\mathcal{A}\tau}=0. \end{eqnarray} Therefore, we have that either \(\phi=0\) or \(h\left(\tau\right)=0\). We rule out the latter as otherwise this would give \(\alpha=\gamma=0\Longrightarrow X^{\mu}=0\), and hence the result follows. \end{proof} If the function \(h\left(\tau\right)\) is strictly positive, then the converse of the above corollary also holds, i.e. under the assumptions of the corollary the vanishing of the sheet expansion implies the spacetime is expansion-free. \section{Application to general locally rotationally symmetric spacetimes}\label{soc7} We consider spacetimes with local rotational symmetry \cite{cc1,gbc1}. \begin{definition} A \textbf{locally rotationally symmetric (LRS)} spacetime is a spacetime in which at each point \(p\in M\), there exists a continuous isotropy group generating a multiply transitive isometry group on \(M\) (see \cite{crb1,gbc1} and associated references).
The general metric of LRS spacetimes is given by \begin{eqnarray}\label{jan29191} \begin{split} ds^2&=-A^2d\tau^2 + B^2d\chi^2 + F^2 dy^2 + \left[\left(F\bar{D}\right)^2+ \left(Bh\right)^2 - \left(Ag\right)^2\right]dz^2+ 2\left(A^2gd\tau - B^2hd\chi\right)dz, \end{split} \end{eqnarray} where \(A^2,B^2,F^2\) are functions of \(\tau\) and \(\chi\), and \(\bar{D}^2\) is a function of \(y\) and \(k\). The scalar \(k\) can take the values \(-1,0,+1\) and fixes the geometry of the \(2\)-surfaces: \(k=-1\) corresponds to a hyperbolic \(2\)-surface, \(k=0\) to the \(2\)-plane, and \(k=+1\) to a spherical \(2\)-surface. The quantities \(g,h\) are functions of \(y\). \end{definition} For the case \(g=h=0\) one has the LRS II class, a generalization of spherically symmetric solutions of the Einstein field equations (EFEs). Some other well-known solutions of the LRS class include G\"{o}del's rotating solution, the Kantowski-Sachs models and various Bianchi models. In these spacetimes, all vector and tensor quantities vanish. From \eqref{exe1}, we see that \(\Sigma=0\) (this can also be obtained from the fact that these spacetimes have vanishing cosmological constant, and since the cosmological constant is proportional to the square of the shear, the shear must vanish), and therefore all considerations in this section will be shear-free. (We have also noted the conformal flatness of the hypersurfaces and have set the magnetic and electric Weyl scalars to zero.) The field equations for these spacetimes can be obtained from the Ricci identities for the vector fields \(u^{\mu}\) and \(e^{\mu}\), as well as the contracted Bianchi identities.
They are given by \begin{itemize} \item \textit{Evolution} \begin{subequations} \begin{align} \frac{2}{3}\dot{\Theta}&=\mathcal{A}\phi- \frac{2}{9}\Theta^2 - 2\Omega^2 - \frac{1}{2}\Pi - \frac{1}{3}\left(\rho+3p\right),\label{subbe1}\\ \dot{\phi}&=\frac{2}{3}\Theta\left(\mathcal{A}-\frac{1}{2}\phi\right) +2\xi\Omega+ Q,\label{subbe2}\\ -\frac{1}{3}\dot{\rho}+\frac{1}{2}\dot{\Pi}&=-\frac{1}{6}\Theta\Pi+\frac{1}{2}\phi Q+\frac{1}{3}\Theta\left(\rho+p\right),\label{subbe3}\\ \dot{\xi}&=-\frac{1}{3}\Theta\xi + \left(\mathcal{A}-\frac{1}{2}\phi\right)\Omega,\label{sube3}\\ \dot{\Omega}&=\mathcal{A}\xi-\frac{2}{3}\Theta\Omega,\label{sube4} \end{align} \end{subequations} \item \textit{Propagation} \begin{subequations} \begin{align} \frac{2}{3}\hat{\Theta}&=2\xi\Omega + Q,\label{subbe4}\\ \hat{\phi}&=-\frac{1}{2}\phi^2 +\frac{2}{9}\Theta^2+2\xi^2-\frac{2}{3}\rho-\frac{1}{2}\Pi,\label{subbe5}\\ -\frac{1}{3}\hat{\rho}+\frac{1}{2}\hat{\Pi}&=-\frac{3}{4}\phi\Pi-\frac{1}{3}\Theta Q,\label{subbe6}\\ \hat{\xi}&=-\phi\xi + \frac{1}{3}\Theta\Omega,\label{sube9}\\ \hat{\Omega}&=\left(\mathcal{A}-\phi\right)\Omega,\label{sube10} \end{align} \end{subequations} \item \textit{Evolution/Propagation} \begin{subequations} \begin{align} \hat{\mathcal{A}}-\dot{\Theta}&=-\left(\mathcal{A}+\phi\right)\mathcal{A}-\frac{1}{3}\Theta^2-2\Omega^2+\frac{1}{2}\left(\rho+3p\right),\label{subbe7}\\ \dot{\rho}+\hat{Q}&=-\Theta\left(\rho+p\right)-\left(2\mathcal{A}+\phi\right)Q,\label{subbe8}\\ \dot{Q}+\hat{p}+\hat{\Pi}&=-\left(\mathcal{A}+\frac{3}{2}\phi\right)\Pi-\frac{4}{3}\Theta Q-\left(\rho+p\right)\mathcal{A}.\label{subbe9} \end{align} \end{subequations} \item \textit{Constraints} \begin{subequations} \begin{align} 0&=\Omega Q,\label{gb1}\\ 0&=\left(\rho+p-\frac{1}{2}\Pi\right)\Omega+Q\xi,\label{gb2}\\ 0&=\left(2\mathcal{A}-\phi\right)\Omega.\label{gb3} \end{align} \end{subequations} \end{itemize} Furthermore, for an arbitrary scalar \(\psi\) in a locally rotationally symmetric spacetime, the
commutation relation for the dot and hat derivatives is given by \cite{cc1} \begin{eqnarray}\label{ghh1} \hat{\dot{\psi}}-\dot{\hat{\psi}}=-\mathcal{A}\dot{\psi}+\left(\frac{1}{3}\Theta+\Sigma\right)\hat{\psi}. \end{eqnarray} From \eqref{gb1} we have that either \(\Omega=0\) or \(Q=0\). We start by assuming that \(\Omega=0\) and \(Q\neq0\). Then, from \eqref{gb2} we must have \(\xi=0\), and hence the hypersurfaces to be considered have LRS II symmetries, and therefore we will treat them as embedded in the solutions of the LRS II class. We consider this below. \subsection{LRS II class with non-vanishing heat flux: $\Omega=\xi=0$ and $Q\neq0$} Firstly, the form of the Ricci tensor on the hypersurfaces can be expressed as \begin{eqnarray}\label{mj1} ^3R_{\mu\nu}=-\left(\hat{\phi}+\frac{1}{2}\phi^2\right)e_{\mu}e_{\nu}-\frac{1}{2}\left(\hat{\phi}+\phi^2-2K\right)N_{\mu\nu}, \end{eqnarray} so we have \begin{subequations} \begin{align} \lambda&=-\left(\hat{\phi}+\frac{1}{2}\phi^2\right),\label{mj2}\\ \beta&=-\frac{1}{2}\left(\hat{\phi}+\phi^2-2K\right),\label{mj3} \end{align} \end{subequations} where \begin{eqnarray}\label{mj4} K=\frac{1}{3}\rho-\frac{1}{2}\Pi+\frac{1}{4}\phi^2-\frac{1}{9}\Theta^2, \end{eqnarray} is the Gaussian curvature of the \(2\)-surfaces. The dot and hat derivatives of the Gaussian curvature are respectively given by \begin{subequations} \begin{align} \dot{K}&=-\frac{2}{3}\Theta K,\label{mj5}\\ \hat{K}&=-\phi K.\label{mj6} \end{align} \end{subequations} Using the fact that \(\dot{\lambda}=\dot{\beta}=\hat{\lambda}=\hat{\beta}=0\), we arrive at the following constraint equation \begin{eqnarray}\label{mj7} -\left(\frac{2}{3}\Theta+\phi\right)K=\frac{1}{2}\phi\left(\dot{\phi}+\hat{\phi}\right). \end{eqnarray} Notice that, whenever \(\phi\) vanishes, the constraint \eqref{mj7} says that either the immersed \(2\)-surfaces in the hypersurface are \(2\)-planes with \(K=0\) or the hypersurface is expansion-free.
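Since the reductions in this subsection are pure algebra in the covariant scalars, they can be checked mechanically. The sketch below verifies, in exact rational arithmetic, that \eqref{mj2} and \eqref{mj3} give \(\lambda-\beta=-\left(\tfrac{1}{2}\hat{\phi}+K\right)\) (so the Einstein condition \(\lambda=\beta\) is exactly \(K=-\tfrac{1}{2}\hat{\phi}\)), and that setting \(\phi=0\) in \eqref{mj7} leaves \(\Theta K=0\). Treating \(\phi,\hat{\phi},\dot{\phi},K,\Theta\) as independent numbers is an assumption made purely for the algebraic check.

```python
from fractions import Fraction as F
import random

random.seed(1)

for _ in range(100):
    phi, phi_hat, K, Theta = (F(random.randint(-9, 9)) for _ in range(4))

    # (mj2), (mj3): the two eigenvalues of the Ricci tensor (mj1)
    lam = -(phi_hat + F(1, 2) * phi**2)
    beta = -F(1, 2) * (phi_hat + phi**2 - 2 * K)

    # lam - beta = -(phi_hat/2 + K), so the Einstein condition
    # lam = beta holds exactly when K = -phi_hat/2
    assert lam - beta == -(F(1, 2) * phi_hat + K)

    # With phi = 0 the constraint (mj7) collapses to Theta*K = 0
    phi0 = F(0)
    phi_dot = F(random.randint(-9, 9))
    lhs = -(F(2, 3) * Theta + phi0) * K
    rhs = F(1, 2) * phi0 * (phi_dot + phi_hat)
    assert rhs == 0 and lhs == -F(2, 3) * Theta * K

print("ok")
```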
Furthermore, from \eqref{exe11} we have that \begin{eqnarray}\label{mj8} \frac{1}{3}\Theta^2=\rho+\frac{3}{2}\hat{\phi}+\frac{1}{4}\phi^2+\frac{1}{4}\Pi, \end{eqnarray} which simplifies to \begin{eqnarray}\label{mj9} \phi^2+\Pi=0. \end{eqnarray} (Notice how this forces the anisotropic stress to be non-positive.) For the LRS II class of spacetimes, \(\mathcal{H}=0\) and \(\xi=0\) by definition, and hence \eqref{exe2} is automatically satisfied. We shall now consider both cases of Proposition \ref{prop1}. \subsubsection{Case \(1\): The Einstein case} Let us assume that the hypersurface is Einstein. Consideration was given to this in Section \ref{soc5}. We show that case 1. implies case 2. of Proposition \ref{prop1}. Since the hypersurface is Einstein, using \eqref{mj2} and \eqref{mj3}, the condition \(\lambda-\beta=0\) reads \begin{eqnarray}\label{mj10} K=-\frac{1}{2}\hat{\phi}. \end{eqnarray} Comparing \eqref{mj4} and \eqref{subbe5}, and using \eqref{mj9}, we get \(\phi^2=0\), which gives \(K=0\); therefore all \(2\)-surfaces as subspaces of the hypersurface are planes. \subsubsection{Case \(2\): The case of vanishing sheet expansion} If we assume that the hypersurface is not of Einstein type, then \(\phi=0\).
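Before proceeding, note that the computation in Case 1 above can also be verified mechanically: combining the Gaussian curvature \eqref{mj4} with the propagation equation \eqref{subbe5} (with \(\xi=0\)) gives \(K+\tfrac{1}{2}\hat{\phi}=-\tfrac{3}{4}\Pi\) identically, so the Einstein condition \eqref{mj10} forces \(\Pi=0\), and then \eqref{mj9} forces \(\phi=0\). A minimal check in exact rational arithmetic, treating \(\rho,\Pi,\phi,\Theta\) as independent numbers (an assumption made only for the algebra):

```python
from fractions import Fraction as F
import random

random.seed(2)

for _ in range(100):
    rho, Pi, phi, Theta = (F(random.randint(-9, 9)) for _ in range(4))

    # Gaussian curvature of the 2-surfaces, (mj4)
    K = F(1, 3) * rho - F(1, 2) * Pi + F(1, 4) * phi**2 - F(1, 9) * Theta**2
    # Propagation equation (subbe5) with xi = 0 (LRS II)
    phi_hat = (-F(1, 2) * phi**2 + F(2, 9) * Theta**2
               - F(2, 3) * rho - F(1, 2) * Pi)

    # Identically K + phi_hat/2 = -(3/4)*Pi, so K = -phi_hat/2 (mj10)
    # forces Pi = 0, and then phi^2 + Pi = 0 (mj9) forces phi = 0.
    assert K + F(1, 2) * phi_hat == -F(3, 4) * Pi

print("ok")
```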
If the Gaussian curvature \(K\) is zero, then the hypersurface is Einstein. The set of field equations \eqref{subbe1} to \eqref{subbe9} reduces to (we use here the fact that \(\phi=0\ \ \mbox{implies}\ \ \Pi=0\), as well as \(Q=(2/3)\hat{\Theta}\)) \begin{subequations} \begin{align} \dot{\Theta}&=-\frac{1}{3}\Theta^2-\frac{1}{2}\left(\rho+3p\right),\label{mj11}\\ \hat{\Theta}&=0\iff Q=0,\label{mj12}\\ \dot{\rho}&=-\Theta\left(\rho+p\right),\label{mj13}\\ \hat{\rho}&=0,\label{mj14}\\ \hat{\mathcal{A}}&=-\left(\mathcal{A}^2+\frac{2}{3}\Theta^2\right),\label{mj15}\\ \hat{p}&=-\mathcal{A}\left(\rho+p\right),\label{mj16} \end{align} \end{subequations} coupled with the constraints \begin{subequations} \begin{align} \mathcal{A}\Theta&=0,\label{mj17}\\ \rho&=\frac{1}{3}\Theta^2.\label{mj18} \end{align} \end{subequations} If \(K\neq0\), then \(\Theta=0\). But then both \(\rho\) and \(p\) are zero, and hence \begin{eqnarray} \lambda-\beta=-K=0. \end{eqnarray} Therefore the hypersurface is flat. This allows us to make the following remark: \begin{remark} Any locally symmetric hypersurface in the LRS II class of spacetimes, with Ricci tensor of form \eqref{pa1}, is necessarily flat. \end{remark} It indeed follows that any Ricci soliton structure admitted by the hypersurface is steady. We saw that \(\Omega=0\ \ \mbox{implies}\ \ \xi=0\) if we assume \(Q\neq0\). Now let us assume that \(Q=0\) and \(\Omega\neq0\). For this we may consider two scenarios: the case where \(\Omega\neq0\) but \(\xi=0\) (these are the LRS I class of spacetimes, which are stationary and inhomogeneous), or the more general case for which \(\Omega\) and \(\xi\) are non-zero. \subsection{LRS I with vanishing heat flux: $\xi=0,\Omega\neq0$ and $Q=0$} For these spacetimes the dot derivatives of all scalars vanish (it was shown in \cite{ssgos1} that an arbitrary scalar \(\psi\) in general LRS spacetimes satisfies \(\dot{\psi}\Omega=\hat{\psi}\xi\), and hence for \(\xi=0\) and \(\Omega\neq0\) one has \(\dot{\psi}=0\)).
Now, from \eqref{sube4} we have that \begin{eqnarray} 0=\Theta\Omega\label{mj19}. \end{eqnarray} From \eqref{mj19} it is clear that \(\Theta\) must be zero (since \(\Omega\neq0\)). But an easy check shows that \eqref{fgh} cannot hold in this case, and hence these hypersurfaces cannot exist under the assumptions of this work. \subsection{LRS with vanishing heat flux: $\xi\neq0,\Omega\neq0$ and $Q=0$} Let us now consider the case where both the rotation and the spatial twist are simultaneously non-vanishing, noting the vanishing of the heat flux. If the anisotropic stress is zero then at least one of \(\Omega\) or \(\xi\) should vanish \cite{ve1}, and as such we can begin by assuming that \(\Pi\neq0\). Noting the expression \(\dot{\psi}\Omega=\hat{\psi}\xi\) we have the following set of equations \cite{ssgos1} \begin{subequations} \begin{align} 0&=\left(\Omega^2-\Sigma^2\right)+\frac{1}{3}\rho-\frac{1}{9}\Theta^2+\frac{1}{4}\Pi+\frac{1}{2}\mathcal{A}\phi,\label{mj20}\\ 0&=-\left(\Omega^2-\Sigma^2\right)+\frac{1}{6}\left(\rho+3p\right)+\frac{1}{9}\Theta^2+\frac{1}{4}\Pi-\frac{1}{2}\mathcal{A}\phi,\label{mj21}\\ 0&=\rho+p+\Pi.\label{mj22} \end{align} \end{subequations} However, from \eqref{gb2} we have that \begin{eqnarray}\label{mj23} \rho+p=\frac{1}{2}\Pi, \end{eqnarray} which gives \(\Pi=0\) by comparing to \eqref{mj22}. Hence, it follows that for these hypersurfaces to exist at least one of \(\Omega\) or \(\xi\) has to vanish. Therefore the only possibility for which the hypersurface can exist under the assumptions of this work is if \(\Omega=\xi=0\), in which case it was shown to be flat. We can summarize the result as follows: \begin{proposition} Let \(M\) be a locally rotationally symmetric spacetime. Any locally symmetric hypersurface in \(M\) orthogonal to the fluid velocity is necessarily flat. And if the hypersurface admits a Ricci soliton structure, the soliton is steady with the components of the soliton field being constants.
\end{proposition} While the applications here do not yield non-flat geometries, a more general form of the Ricci tensor, when applied to the class of solutions in this section, should demonstrate the applicability of the results we have obtained throughout this work. \section{Summary and discussion}\label{soc8} This work evolved out of an interest in employing the \(1+1+2\) and \(1+3\) covariant formalisms in the study of Ricci soliton structures on embedded hypersurfaces in spacetimes. As the geometry of Ricci solitons is well understood, with a wealth of literature on the subject, there is potential application to the geometric classification of black hole horizons. In this work, we have carried out a detailed study of a particular class of locally symmetric embedded hypersurfaces in spacetimes with non-negative scalar curvature, admitting a \(1+1+2\) spacetime decomposition. This formalism has the advantage of bringing out the intricate details of the covariant quantities specifying the spacetimes (or subsets thereof). As a first step, we prescribed the form of the Ricci tensor on the hypersurfaces and computed the associated curvature quantities. We computed the Ricci tensor for general hypersurfaces in \(1+1+2\) spacetimes and then specified the conditions under which the Ricci tensor for the general case reduces to that of the specified case we considered in this work. First, we provided a characterisation of the hypersurfaces being considered. The locally symmetric condition implies conformal flatness. It is shown that a locally symmetric hypersurface embedded in a \(1+1+2\) spacetime, specified by \eqref{pa1}, is either an Einstein space or is non-twisting with vanishing sheet expansion. Properties of these cases were then briefly considered. The check for whether a hypersurface is Einstein or not reduces to a simple equation in a few of the matter and geometric variables.
In particular, it reduces to whether or not \begin{eqnarray*} \frac{3}{4}\Sigma\left(\frac{2}{3}\Theta-\Sigma\right)-2\Omega^2+\frac{1}{4}\Pi=0 \end{eqnarray*} is satisfied. It was also shown that the components of the Ricci tensor are covariantly constant, i.e. the hypersurface is of constant scalar curvature, and that the scalar curvature is bounded above by the energy density. Specifically, it was demonstrated that the scalar curvature has the upper bound \(^3R\leq2\rho\), and hence \(0\leq\ ^3R\ \leq2\rho\). In essence we deal with metrics of bounded constant scalar curvature. We then went on to consider the case in which a hypersurface admits a Ricci soliton structure. Solutions were considered for the case where the soliton field is parallel to the preferred spatial direction, as well as those also having components along the unit tangent direction orthogonal to the hypersurface. The nature of the soliton is determined by the eigenvalues of the Ricci tensor. This in turn determines the direction of the soliton field if the soliton field has a component only along one of the preferred unit directions. We further considered the case in which the soliton field is a conformal Killing vector field for the induced metric on the hypersurface. It was shown that in this case the soliton field is a homothetic Killing vector for the induced metric on the hypersurface, and that the hypersurface is of Einstein type. In the case that the Ricci scalar is strictly positive, the Ricci soliton is classified. If the soliton field extends as a conformal Killing vector field to the metric of the ambient spacetime, then it was demonstrated that the quantity \((1/3)\Theta+\Sigma\) either vanishes or otherwise the soliton field is null. If the soliton field is a Killing vector field, then the soliton was shown to be flat. The flat geometry in this case is a consequence of the soliton field being a Killing vector field, a well-known fact.
Otherwise, one could possibly have non-flat examples. As another result for the case with \((1/3)\Theta+\Sigma=0\), it was shown that if the hypersurface is simultaneously rotating and expanding, the anisotropic stress cannot vanish on the hypersurface. A non-trivial case (in the sense that the type of soliton field is not specified and the hypersurface is not necessarily flat) was provided. It then followed that the sheet expansion necessarily vanishes on the soliton. As one would expect, once a hypersurface admits a Ricci soliton structure, the geometry of the hypersurface is restricted, and even more so when the choice of the soliton field is specified. We emphasize that, if one can find the soliton field and the constant specifying the nature of the soliton, the soliton equations still have to be checked for consistency against the field equations on the hypersurface when studying the existence of a Ricci soliton structure on embedded hypersurfaces. In general, this might not be possible; it becomes possible once the spacetime or class of spacetimes is specified. A simple application was carried out for spacetimes with a high degree of symmetry, those exhibiting local rotational symmetry (LRS spacetimes). It turns out that the upper bound on the scalar curvature, expressed as a bound on the ratio of the rotation to the expansion scalar, places very strong constraints on the class of hypersurfaces considered in this work that can be admitted by LRS spacetimes. In particular, it was shown that in this class of spacetimes all hypersurfaces of the type considered in this work are flat, and can be admitted by these spacetimes only if both rotation and spatial twist vanish simultaneously. If they do admit a Ricci soliton structure, the soliton will be steady, with the components of the soliton field being constants. In subsequent works we seek to apply our approach to Ricci soliton structures on more general hypersurfaces in \(1+1+2\) decomposed spacetimes.
Other possible extensions of this work could include studying Ricci soliton structures on general Lorentzian manifolds in the \(1+3\) covariant setting. \section*{Acknowledgements} AS acknowledges that this work was supported by the IBS Center for Geometry and Physics, Pohang University of Science and Technology, Grant No. IBS-R003-D1, and the First Rand Bank, through the Department of Mathematics and Applied Mathematics, University of Cape Town, South Africa. PKSD acknowledges support from the First Rand Bank, South Africa. RG acknowledges support for this research from the National Research Foundation of South Africa.
\section{Introduction and definitions} Consider the projective space $\mbox{\rm PG}(3,q)$. It is well known that a line of $\mbox{\rm PG}(3,q)$ is the smallest blocking set with relation to the planes of $\mbox{\rm PG}(3,q)$. It is also well known that any blocking set $\mathcal{B}$ with relation to the planes, such that $|\mathcal{B}| < q+\sqrt{q}+1$, contains a line (\cite{Bruen1970}). Consider now any symplectic polarity $\varphi$ of $\mbox{\rm PG}(3,q)$. The points of $\mbox{\rm PG}(3,q)$, together with the totally isotropic lines with relation to $\varphi$, constitute the generalized quadrangle $\mbox{\rm W}(3,q)$. If $\mathcal{B}$ is a blocking set with relation to the planes of $\mbox{\rm PG}(3,q)$, then $\mathcal{B}$ is a set of points of $\mbox{\rm W}(3,q)$ such that on any point of $\mbox{\rm W}(3,q)$ there is at least one line of $\mbox{\rm W}(3,q)$ meeting $\mathcal{B}$ in at least one point. Dualizing to the generalized quadrangle $\mbox{\rm Q}(4,q)$, we find a set $\mathcal{L}$ of lines of $\mbox{\rm Q}(4,q)$ such that every line of $\mbox{\rm Q}(4,q)$ meets at least one line of $\mathcal{L}$. Together with the known bounds on blocking sets of $\mbox{\rm PG}(2,q)$, we observe the following proposition. \begin{Pro}\label{pro:q4q} Suppose that $\mathcal{L}$ is a set of lines of $\mbox{\rm Q}(4,q)$ with the property that every line of $\mbox{\rm Q}(4,q)$ meets at least one line of $\mathcal{L}$. If $|\mathcal{L}|$ is smaller than the size of a non-trivial blocking set of $\mbox{\rm PG}(2,q)$, then $\mathcal{L}$ contains the pencil of $q+1$ lines through a point of $\mbox{\rm Q}(4,q)$ or $\mathcal{L}$ contains a regulus contained in $\mbox{\rm Q}(4,q)$. \end{Pro} This proposition motivates the study of small sets of generators of three particular finite classical polar spaces, meeting every generator. 
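As a quick sanity check (a sketch using only standard properties of $\mbox{\rm Q}(4,q)$, not needed in the sequel), one can verify directly that both configurations in Proposition~\ref{pro:q4q} indeed meet every line:

```latex
% Sketch: both configurations in Proposition~\ref{pro:q4q} block every line.
\begin{compactenum}[\rm (i)]
\item Let $\mathcal{L}$ be the pencil of the $q+1$ lines of
  $\mbox{\rm Q}(4,q)$ through a point $P$. A line on $P$ belongs to
  $\mathcal{L}$, while a line not on $P$ is concurrent with exactly one
  line on $P$, by the defining property of a generalized quadrangle.
\item Let $\mathcal{L}$ be a regulus. Its $q+1$ mutually skew lines lie
  on a hyperbolic quadric $\mathcal{Q}^+ = H \cap \mbox{\rm Q}(4,q)$,
  with $H$ a hyperplane, and they cover all $(q+1)^2$ points of
  $\mathcal{Q}^+$. Any line of $\mbox{\rm Q}(4,q)$ meets $H$ in at least
  one point, which is a point of $\mathcal{Q}^+$ and hence lies on a
  line of $\mathcal{L}$.
\end{compactenum}
```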
In this section, we define generalized quadrangles and describe briefly the finite classical polar spaces, and we state the main theorems to be proved in the paper. A (finite) \emph{generalized quadrangle} (GQ) is an incidence structure $\S=(\P,\G,\I)$ in which $\mathcal{P}$ and $\mathcal{G}$ are disjoint non-empty sets of objects called {\em points} and {\em lines} (respectively), and for which $\mathrel{\mathrm I} \subseteq (\mathcal{P} \times \mathcal{G}) \cup (\mathcal{G} \times \mathcal{P})$ is a symmetric point-line incidence relation satisfying the following axioms: \begin{compactenum}[\rm (i)] \item Each point is incident with $1+t$ lines $(t \geq 1)$ and two distinct points are incident with at most one line. \item Each line is incident with $1+s$ points $(s \geq 1)$ and two distinct lines are incident with at most one point. \item If $X$ is a point and $l$ is a line not incident with $X$, then there is a unique pair $(Y,m) \in \mathcal{P} \times \mathcal{G}$ for which $X \mathrel{\mathrm I} m \mathrel{\mathrm I} Y \mathrel{\mathrm I} l$. \end{compactenum} The integers $s$ and $t$ are the {\em parameters} of the GQ and $\mathcal{S}$ is said to have {\em order} $(s,t)$. If $\S=(\P,\G,\I)$ is a GQ of order $(s,t)$, we say that $\S'=(\P',\G',\I')$ is a {\em subquadrangle} of order $(s',t')$ if and only if $\mathcal{P}' \subseteq \mathcal{P}$, $\mathcal{G}' \subseteq \mathcal{G}$, and $\S'=(\P',\G',\I')$ is a generalized quadrangle with $\mathrel{\mathrm I}'$ the restriction of $\mathrel{\mathrm I}$ to $\mathcal{P}' \times \mathcal{G}'$. The {\em finite classical polar spaces} are the geometries consisting of the totally isotropic, respectively, totally singular, subspaces of non-degenerate sesquilinear, respectively, non-degenerate quadratic forms on a projective space $\mbox{\rm PG}(n,q)$.
So these geometries are the non-singular symplectic polar spaces $\mbox{\rm W}(2n+1,q)$, the non-singular parabolic quadrics $\mbox{\rm Q}(2n,q)$, $n\geq 2$, the non-singular elliptic and hyperbolic quadrics $\mbox{\rm Q}^-(2n+1,q)$, $n\geq 2$, and $\mbox{\rm Q}^+(2n+1,q)$, $n\geq 1$, respectively, and the non-singular hermitian varieties $\mbox{\rm H}(d,q^2)$, $d\geq 3$. For $q$ even, the parabolic polar space $\mbox{\rm Q}(2n,q)$ is isomorphic to the symplectic polar space $\mbox{\rm W}(2n-1,q)$. For our purposes, it is sufficient to recall that every non-singular parabolic quadric in $\mbox{\rm PG}(2n,q)$ can, up to a coordinate transformation, be described as the set of projective points satisfying the equation $X_0^2+X_1X_2+\ldots+X_{2n-1}X_{2n} = 0$. Every non-singular elliptic quadric of $\mbox{\rm PG}(2n+1,q)$ can, up to a coordinate transformation, be described as the set of projective points satisfying the equation $g(X_0,X_1)+X_2X_3+\ldots+X_{2n}X_{2n+1} = 0$, with $g(X_0,X_1)$ an irreducible homogeneous quadratic polynomial over $\mbox{\rm GF}(q)$. Finally, the hermitian variety $\mbox{\rm H}(n,q^2)$ can, up to a coordinate transformation, be described as the set of projective points satisfying the equation $X_0^{q+1}+X_1^{q+1}+\ldots + X_n^{q+1} = 0$.
These are the non-singular parabolic quadrics $\mbox{\rm Q}(4,q)$, the non-singular elliptic quadrics $\mbox{\rm Q}^-(5,q)$, the non-singular hyperbolic quadrics $\mbox{\rm Q}^+(3,q)$, the non-singular hermitian varieties $\mbox{\rm H}(3,q^2)$ and $\mbox{\rm H}(4,q^2)$, and the symplectic generalized quadrangles $\mbox{\rm W}(3,q)$ in $\mbox{\rm PG}(3,q)$. The GQs $\mbox{\rm Q}(4,q)$ and $\mbox{\rm W}(3,q)$ are dual to each other, and both have order $(q,q)$. The GQs $\mbox{\rm Q}(4,q)$ and $\mbox{\rm W}(3,q)$ are self-dual if and only if $q$ is even. Finally, the GQs $\mbox{\rm H}(3,q^2)$ and $\mbox{\rm Q}^-(5,q)$ are also dual to each other, and have respective order $(q^2,q)$ and $(q,q^2)$. The GQ $\mbox{\rm H}(4,q^2)$ has order $(q^2,q^3)$, and the GQ $\mbox{\rm Q}^+(3,q)$ has order $(q,1)$. By taking hyperplane sections in the ambient projective space, it is clear that $\mbox{\rm Q}^+(3,q)$ is a subquadrangle of $\mbox{\rm Q}(4,q)$, that $\mbox{\rm Q}(4,q)$ is a subquadrangle of $\mbox{\rm Q}^-(5,q)$, and that $\mbox{\rm H}(3,q^2)$ is a subquadrangle of $\mbox{\rm H}(4,q^2)$. These well-known facts can be found in e.g. \cite{PT2009}. Consider a finite classical polar space $\mathcal{S}$ of rank $r \geq 2$. A set $\mathcal{L}$ of generators of $\mathcal{S}$ is called a {\em generator blocking set} if it has the property that every generator of $\mathcal{S}$ meets at least one element of $\mathcal{L}$ non-trivially. We generalize this definition to non-classical GQs, and we say that $\mathcal{L}$ is a generator blocking set of a GQ $\mathcal{S}$ if $\mathcal{L}$ has the property that every line of $\mathcal{S}$ meets at least one element of $\mathcal{L}$. Clearly, for finite classical generalized quadrangles, both definitions coincide. Suppose that $\mathcal{L}$ is a generator blocking set of a finite classical polar space, respectively a GQ.
We call an element $\pi$ of $\mathcal{L}$ {\em essential} if and only if there exists a generator, respectively line, of $\mathcal{S}$ not in $\mathcal{L}$, meeting no element of $\mathcal{L} \setminus \{\pi\}$. We call $\mathcal{L}$ {\em minimal} if and only if all of its elements are essential. A {\em spread} of a finite classical polar space is a set $\mathcal{C}$ of generators such that every point is contained in exactly one element of $\mathcal{C}$. Hence the generators in the set $\mathcal{C}$ are pairwise disjoint. A {\em cover} is a set $\mathcal{C}$ of generators such that every point is contained in at least one element of $\mathcal{C}$. Hence a spread is a cover consisting of pairwise disjoint generators. From the definitions, it follows that spreads and covers are particular examples of generator blocking sets. In this paper, we will study small generator blocking sets of the polar spaces $\mbox{\rm Q}(2n,q)$, $\mbox{\rm Q}^-(2n+1,q)$ and $\mbox{\rm H}(2n,q^2)$, $n \geq 2$, all of rank $n$. The following theorems, inspired by Proposition~\ref{pro:q4q}, will be proved in Section~\ref{sec:rank2}. \begin{Th}\label{th:rank2} Let $\mathcal{L}$ be a generator blocking set of a finite generalized quadrangle of order $(s,t)$, with $|\mathcal{L}| = t+1$. Then $\mathcal{L}$ is the pencil of $t+1$ lines through a point, or $t \geq s$ and $\mathcal{L}$ is a spread of a subquadrangle of order $(s,t/s)$. \end{Th} \begin{Th}\label{th:rank2_gap} \begin{compactenum}[\rm (a)] \item Let $\mathcal{L}$ be a generator blocking set of $\mbox{\rm Q}^-(5,q)$, with $|\mathcal{L}| = q^2 + \delta + 1$. If $\delta \le \frac{1}{2}(3q-\sqrt{5q^2+2q+1})$, then $\mathcal{L}$ contains the pencil of $q^2+1$ generators through a point or $\mathcal{L}$ contains a cover of $\mbox{\rm Q}(4,q)$ embedded as a hyperplane section in $\mbox{\rm Q}^-(5,q)$. \item Let $\mathcal{L}$ be a generator blocking set of $\mbox{\rm H}(4,q^2)$, with $|\mathcal{L}| = q^3 + \delta + 1$. 
If $\delta < q-3$, then $\mathcal{L}$ contains the pencil of $q^3+1$ generators through a point. \end{compactenum} \end{Th} Section~\ref{sec:rankn} is devoted to a generalization of Proposition~\ref{pro:q4q} and Theorem~\ref{th:rank2_gap} to finite classical polar spaces of any rank. \section{Generalized quadrangles}\label{sec:rank2} In this section, we study minimal generator blocking sets $\mathcal{L}$ of GQs of order $(s,t)$. After general observations and the proof of Theorem~\ref{th:rank2}, we devote two subsections to the particular cases $\mathcal{S}=\mbox{\rm Q}^-(5,q)$ and $\mathcal{S}=\mbox{\rm H}(4,q^2)$. We recall that for a GQ $\S=(\P,\G,\I)$ of order $(s,t)$, $|\mathcal{P}| = (st+1)(s+1)$ and $|\mathcal{G}| = (st+1)(t+1)$, see e.g. \cite{PT2009}. If $P$ is a point of $\mathcal{S}$, then we denote by $P^\perp$ the set of all points of $\mathcal{S}$ collinear with $P$. By definition, $P \in P^\perp$. For a classical GQ $\mathcal{S}$ with point set $\mathcal{P}$, we have $P^\perp = \pi \cap \mathcal{P}$, with $\pi$ the tangent hyperplane to $\mathcal{S}$ in the ambient projective space at the point $P$ \cite{Hirschfeld,PT2009}. Therefore, when $P$ is a point of a classical GQ $\mathcal{S}$, we also use the notation $P^\perp$ for the tangent hyperplane $\pi$. From the context, it will always be clear whether $P^\perp$ refers to the point set or to the tangent hyperplane. We denote by $\mathcal{M}$ the set of points of $\mathcal{P}$ covered by the lines of $\mathcal{L}$. Suppose that $\mathcal{P} \neq \mathcal{M}$, and consider a point $P \in \mathcal{P} \setminus \mathcal{M}$. Since a GQ does not contain triangles, different lines on $P$ meet different lines of $\mathcal{L}$. As every point lies on $t+1$ lines, this implies that $|\mathcal{L}| \geq t+1$, so we may write $|\mathcal{L}| = t+1+\delta$ with $\delta \ge 0$. For each point $P \in \mathcal{M}$, we define $w(P)$ as the number of lines of $\mathcal{L}$ on $P$.
Also, we define \[ W := \sum_{P \in \mathcal{M}} (w(P)-1), \] then clearly $|\mathcal{M}| = |\mathcal{L}|(s+1)-W$. We denote by $b_i$ the number of lines of $\mathcal{G} \setminus \mathcal{L}$ that meet exactly $i$ lines of $\mathcal{L}$, $0 \leq i$. Derived from this notation, we denote by $b_i(P)$ the number of lines on $P \not \in \mathcal{M}$ that meet exactly $i$ lines of $\mathcal{L}$, $1\leq i$. Remark that there is no a priori upper bound on the number of lines of $\mathcal{L}$ that meet a line of $\mathcal{G} \setminus \mathcal{L}$. In the next lemmas however, we will search for completely covered lines not in $\mathcal{L}$, and therefore we denote by $\tilde{b_i}$ the number of lines of $\mathcal{G} \setminus \mathcal{L}$ that contain exactly $i$ covered points, $0 \leq i \leq s+1$, and we denote by $\tilde{b}_i(P)$ the number of lines on $P \not \in \mathcal{M}$ containing exactly $i$ covered points, $0 \leq i \leq s+1$. \begin{Le}\label{Qminus_basic}\label{H_basic} Suppose that $\delta < s - 1$. \begin{compactenum}[\rm (a)] \item Let the point $X \in \mathcal{P} \setminus \mathcal{M}$. Then $\sum_ib_i(X)(i-1)=\delta$ and \[ \sum_{P \in X^\perp \cap \mathcal{M}}(w(P) - 1) \leq \delta. \] \item A line not contained in $\mathcal{M}$ can meet at most $\delta+1$ lines of $\mathcal{L}$. In particular, $\tilde{b}_i = b_i=0$ for $i=0$ and for $\delta+1<i<s+1$. \item \[ \sum_{i=2}^{\delta+1}\tilde{b}_i(i-1) \leq \sum_{i=2}^{\delta+1}b_i(i-1). \] \item If $P_0$ is a point of $\mathcal{M}$ that lies on a line $l$ meeting $\mathcal{M}$ only in $P_0$, then \[ \sum_{P \in \mathcal{M} \setminus P_0^\perp}(w(P)-1)\le\delta s. \] \item \[ (s-\delta)\sum_{i=1}^{\delta+1}b_i(i-1)\le (st-t-\delta)(s+1)\delta+W\delta. \] \item If not all lines on a point $P$ belong to $\mathcal{L}$, then at most $\delta+1$ lines on $P$ belong to $\mathcal{L}$, and less than $\frac{t}{s}+1$ lines on $P$ not in $\mathcal{L}$ are completely contained in $\mathcal{M}$. 
\end{compactenum} \end{Le} \begin{proof} \begin{compactenum}[\rm (a)] \item Consider a point $X \in \mathcal{P} \setminus \mathcal{M}$. Each of the $t+1$ lines on $X$ meets a line of $\mathcal{L}$, and every line of $\mathcal{L}$ meets exactly one of these $t+1$ lines. Hence \[ |X^\perp\cap \mathcal{M}| \ge t+1 =\sum_ib_i(X) \,. \] Furthermore, \[ \sum_{P\in X^\perp\cap \mathcal{M}}w(P) = \sum_ib_i(X)i=|{\mathcal{L}}|=t+1+\delta \,. \] Both assertions follow immediately. \item Since every line of $\mathcal{S}$ meets a line of $\mathcal{L}$, it follows that $\tilde{b}_0=b_0=0$. Consider any line $l \not \in \mathcal{L}$ containing a point $P \not \in \mathcal{M}$. The $t$ lines different from $l$ on $P$ are blocked by at least $t$ lines of $\mathcal{L}$ not meeting $l$. So at most $|\mathcal{L}| - t = \delta + 1$ lines of $\mathcal{L}$ can meet $l$. \item Consider a line $l$ containing $i$ covered points with $0 < i \leq \delta + 1$. Then $l$ must meet at least $i$ lines of $\mathcal{L}$, and, by (b), at most $\delta +1$ lines of $\mathcal{L}$. On the left hand side, this line is counted exactly $i-1$ times, on the right hand side this line is counted at least $i-1$ times. This gives the inequality. \item Each point $P$, with $P\not \in P_0^\perp$, is collinear to exactly one point $X\neq P_0$ of $l$. For $X \in l, X \neq P_0$, the inequality of (a) gives $\sum_{P\in X^\perp \cap \mathcal{M}}(w(P)-1)\leq \delta$. Summing over the $s$ points on $l$ different from $P_0$ gives the expression. \item It follows from (b) that every line with a point not in $\mathcal{M}$ has at least $s-\delta$ points not in $\mathcal{M}$. Taking the sum over all points $P$ not in $\mathcal{M}$ and using the equality of (a), one finds \[ \sum_{i=1}^{\delta+1}b_i(s-\delta)(i-1) \le \sum_{P\not\in\mathcal{M}}\sum_{i=1}^{\delta+1}b_i(P)(i-1) = (|\mathcal{P}|-|\mathcal{M}|)\delta. \] As $|\mathcal{M}|=|\mathcal{L}|(s+1)-W$, the assertion follows. 
\item Suppose that the point $P$ lies on exactly $x\ge 1$ lines that are not elements of $\mathcal{L}$. It is not possible that all these $x$ lines are contained in $\mathcal{M}$, since this would require $xs$ lines of $\mathcal{L}$ that are not on $P$, and then $|\mathcal{L}|\ge t+1-x+xs\ge t+s$, a contradiction with $\delta < s-1$. Thus we find a point $P_0\in P^\perp\setminus\mathcal{M}$. Then the $t$ lines on $P_0$, different from $\langle P, P_0 \rangle$ must be blocked by a line of $\mathcal{L}$ not on $P$, hence at most $\delta+1$ lines of $\mathcal{L}$ can contain $P$. If $y$ lines on $P$ do not belong to $\mathcal{L}$, but are completely contained in $\mathcal{M}$, then at least $1+ys$ lines contained in $\mathcal{L}$ meet the union of these $y$ lines, so $1+ys \leq |\mathcal{L}| = t+1+\delta$, so $y < \frac{t}{s}+1$ as $\delta < s$. \end{compactenum} \end{proof} \begin{Le}\label{le:twopencil} Suppose that $\delta=0$. If two lines of $\mathcal{L}$ meet, then $\mathcal{L}$ is a pencil of $t+1$ lines through a point $P$. \end{Le} \begin{proof} The lemma follows immediately from Lemma~\ref{Qminus_basic}~(f). \end{proof} \begin{Le}\label{le:pencil_sub} Suppose that $\delta=0$. If $\mathcal{L}$ is not a pencil, then $t \geq s$ and $\mathcal{L}$ is a spread of a subquadrangle of order $(s,t/s)$. \end{Le} \begin{proof} We may suppose that $\mathcal{L}$ is not a pencil, so that the lines of $\mathcal{L}$ are pairwise skew by Lemma \ref{le:twopencil}. Consider the set $\mathcal{G}'$ of all lines completely contained in $\mathcal{M}$. The set $\mathcal{G}'$ contains at least all the elements of $\mathcal{L}$, so $\mathcal{G}'$ is not empty. If $l\in\mathcal{G}'$ and $P\in\mathcal{M}$ not on $l$, then there is a unique line $g \in \mathcal{G}$ on $P$ meeting $l$. As this line contains already two points of $\mathcal{M}$, it is contained in $\mathcal{M}$ by Lemma~\ref{Qminus_basic} (b), that is $g \in \mathcal{G}'$. 
This shows that $(\mathcal{M}, \mathcal{G}')$ is a GQ of some order $(s, t')$ and hence it has $(t's + 1)(s + 1)$ points. As $|\mathcal{M}| = (t + 1)(s + 1)$, then $t's = t$, that is $t' = t/s$ and hence $t \geq s$. \end{proof} This lemma proves Theorem~\ref{th:rank2}. \subsection{The case $\mathcal{S} = \mbox{\rm Q}^-(5,q)$} In this subsection, $\mathcal{S} = \mbox{\rm Q}^-(5,q)$, so $(s,t) = (q,q^2)$, and $|\mathcal{L}|=q^2+1+\delta$. We suppose that $\mathcal{L}$ contains no pencil and we will show for small $\delta$ that $\mathcal{L}$ contains a cover of a parabolic quadric $\mbox{\rm Q}(4,q) \subseteq \mathcal{S}$. The set $\mathcal{M}$ of covered points blocks all the lines of $\mbox{\rm Q}^-(5,q)$. An easy counting argument shows that $|\mathcal{M}| \geq q^3+1$ (in fact, it follows from \cite{M1998} that $|\mathcal{M}| \geq q^3+q$, but we will not use this stronger lower bound). Thus $W = |\mathcal{L}|(q+1) - |\mathcal{M}| \leq (q+1)(q+\delta)$. \begin{Le}\label{Wbound} If $\delta\le\frac{q-1}{2}$, then $W\le \delta(q+2)$. \end{Le} \begin{proof} Denote by $\mathcal{B}$ the set of all lines not in $\mathcal{L}$, meeting exactly $i$ lines of $\mathcal{L}$ for some $i$, with $2\le i\le \delta+1$. We count the number of pairs $(l,m)$, $l \in \mathcal{L}$, $m \in \mathcal{B}$, $l$ meets $m$. The number of these pairs is $\sum_{i=2}^{\delta+1} b_i i$. It follows from Lemma~\ref{Qminus_basic} (e), $W \leq (q+1)(q+\delta)$, and $\delta \leq \frac{q-1}{2}$, that \begin{eqnarray*} \sum_{i=2}^{\delta+1}b_ii &\le &2\sum_{i=1}^{\delta+1}b_i(i-1)\le 2\cdot \frac{(q^3-q^2-\delta)(q+1)\delta+W\delta}{q-\delta} \\ &\le& 2\frac{(q+1)\delta(q^3-q^2+q)}{q-\delta} \le 2(q-1)(q^3-q^2+q)=:c \end{eqnarray*} Hence, some line $l$ of $\mathcal{L}$ meets at most $\lfloor c/|\mathcal{L}|\rfloor$ lines of $\mathcal{B}$. Denote by $\mathcal{B}_1$ the set of lines not in $\mathcal{L}$ that meet exactly one line of $\mathcal{L}$. 
If a point $P$ does not lie on a line of $\mathcal{B}_1$, then it lies on at least $q^2-q-\delta$ lines of $\mathcal{B}$ (by Lemma~\ref{Qminus_basic} (f) and since $\mathcal{L}$ contains no pencil). As $\delta \le \frac{q-1}{2}$, then $c/|\mathcal{L}|< 2(q^2-q-\delta)$, so at most one point of $l$ can have this property. Thus $l$ has $x\ge q$ points $P_0$ that lie on a line of $\mathcal{B}_1$, so $l$ is the only line of $\mathcal{L}$ meeting such a line. Apply Lemma~\ref{Qminus_basic}~(d) on these $x$ points. As every point not on $l$ is collinear with at most one of these $x$ points, it follows that \[ \sum_{P\in\mathcal{M}\setminus l}(w(P)-1)\le\frac{x\delta q}{x-1}\le \frac{\delta q^2}{q-1}<\delta(q+1)+1\,. \] All but at most one point of $l$ lie on a line of $\mathcal{B}_1$, so $l$ is the only line of $\mathcal{L}$ on these points. One point of $l$ can be contained in more than one line of $\mathcal{L}$, but then it is contained in at most $\delta+1$ lines of $\mathcal{L}$ by Lemma~\ref{Qminus_basic}~(f). Hence $\sum_{P\in l}(w(P)-1)\le \delta$, and therefore $W\le\delta(q+2)$. \end{proof} \begin{Le}\label{bound_on_bqplus1} If $\delta\le\frac{q-1}{2}$, then \[ \tilde{b}_{q+1}\ge q^3+q-\delta-\frac{(q^3+q^2-q\delta-q+1)\delta}{q-\delta}. \] \end{Le} \begin{proof} We count the number of incident pairs $(P,l)$, $P \in \mathcal{M}$ and $l$ a line of $\mbox{\rm Q}^-(5,q)$, to see \[ |\mathcal{M}|(q^2+1) = |\mathcal{L}|(q+1) + \sum_{i=1}^{q+1}\tilde{b}_i i \,. \] As $\mbox{\rm Q}^-(5,q)$ has $(q^2+1)(q^3+1)=|\mathcal{L}|+\sum_{i=1}^{q+1}\tilde{b}_i$ lines, then \begin{eqnarray*} |\mathcal{L}|q+\sum_{i=1}^{q+1}\tilde{b}_i(i-1)&=& |\mathcal{L}|(q+1)+\sum_{i=1}^{q+1}\tilde{b}_ii-(q^2+1)(q^3+1) \\ &=&|\mathcal{M}|(q^2+1)-(q^2+1)(q^3+1) \\ &=& (q^2+1)(q+1)(q+\delta)-W(q^2+1). \\ &\ge & (q^2+1)(q+1)q-\delta(q^2+1), \end{eqnarray*} where we used $W \leq \delta(q+2)$ from Lemma~\ref{Wbound}. 
From Lemmas \ref{Qminus_basic}~(c) and (e) and $W \leq \delta(q+2)$, we have \[ (q-\delta)\sum_{i=2}^{\delta+1}\tilde{b}_i(i-1) \leq (q-\delta)\sum_{i=2}^{\delta+1}b_i(i-1)\le (q^3-q^2)(q+1)\delta+\delta^2. \] Together this gives \[ (|\mathcal{L}|+\tilde{b}_{q+1})q\ge (q^2+1)(q+1)q-\delta(q^2+1)-\frac{(q^3-q^2)(q+1)\delta+\delta^2}{q-\delta}. \] Using $|\mathcal{L}|=q^2+1+\delta$, the assertion follows. \end{proof} \begin{Le}\label{le:analyse} If $\delta\le \frac12(3q-\sqrt{5q^2+2q+1})$, then $|\mathcal{L}|(|\mathcal{L}|-1)\delta<\tilde{b}_{q+1}(q+1)q$. \end{Le} \begin{proof} First note that the upper bound on $\delta$ implies that $\delta\le\frac{1}{2}(q-1)$. Using the lower bound for $\tilde b_{q+1}$ from the previous lemma we find \begin{align*} & 2(q-\delta)\left(\tilde b_{q+1}(q+1)q-|\mathcal{L}|(|\mathcal{L}|-1)\delta\right) \\ & \ge 2q^4\cdot g(\delta) +(q-1-2\delta)(-2\delta^2q^2+\delta^2q+3q^4+3q^3+2q^2+q)\\ & +2\delta^4+2\delta^3+q\delta^2+3q^2\delta^2+q+q^2+3q^3+\frac{5}{2} q^4 \,, \end{align*} with \[ g(\delta):=(q^2-\frac{1}{2}q-\frac{1}{4}-3q\delta+\delta^2). \] The smaller zero of $g$ is $\delta_1=\frac12(3q-\sqrt{5q^2+2q+1})$. Hence, if $\delta\le\delta_1$, then $\delta\le \frac12(q-1)$ and $g(\delta)\ge 0$, and therefore $|\mathcal{L}|(|\mathcal{L}|-1)\delta<\tilde b_{q+1}(q+1)q$. \end{proof} \begin{Le}\label{le:q3q} If $\delta \le \frac{1}{2}(3q-\sqrt{5q^2+2q+1})$, then there exists a hyperbolic quadric $\mbox{\rm Q}^+(3,q)$ contained in $\mathcal{M}$. \end{Le} \begin{proof} Count triples $(l_1,l_2,g)$, where $l_1,l_2$ are skew lines of $\mathcal{L}$ and $g \not \in \mathcal{L}$ is a line meeting $l_1$ and $l_2$ and being completely contained in $\mathcal{M}$. Then \[ |\mathcal{L}|(|\mathcal{L}|-1)z\ge \tilde{b}_{q+1}(q+1)q\,, \] where $z$ is the average number of transversals contained in $\mathcal{M}$ and not contained in $\mathcal{L}$, of two skew lines of $\mathcal{L}$. By Lemma~\ref{le:analyse}, we find that $z > \delta$. 
Hence, we find two skew lines $l_1,l_2\in\mathcal{L}$ such that $\delta+1$ of their transversals are contained in $\mathcal{M}$. The lines $l_1$ and $l_2$ generate a hyperbolic quadric $\mbox{\rm Q}^+(3,q)$ contained in $\mbox{\rm Q}^-(5,q)$, denoted by $\mathcal{Q}^+$. If some point $P$ of $\mathcal{Q}^+$ is not contained in $\mathcal{M}$, then the line on it meeting $l_1,l_2$ has at least two points in $\mathcal{M}$ and the second line of $\mathcal{Q}^+$ on it has at least $\delta+1$ points in $\mathcal{M}$. This is not possible (cf. Lemma~\ref{Qminus_basic} (a)). Hence $\mathcal{Q}^+$ is contained in $\mathcal{M}$. \end{proof} \begin{Le}\label{le:q4q} If $\delta \le \frac{1}{2}(3q-\sqrt{5q^2+2q+1})$, then $\mathcal{M}$ contains a parabolic quadric $\mbox{\rm Q}(4,q)$. \end{Le} \begin{proof} Lemma~\ref{le:q3q} shows that $\mathcal{M}$ contains a hyperbolic quadric $\mbox{\rm Q}^+(3,q)$, which will be denoted by $\mathcal{Q}^+$. We also know that $|\mathcal{M}|=|\mathcal{L}|(q+1)-W\ge q^3+q^2+q+1-\delta$ by Lemma~\ref{Wbound}. There are $q+1$ hyperplanes through $\mathcal{Q}^+$, necessarily intersecting $\mbox{\rm Q}^-(5,q)$ in parabolic quadrics $\mbox{\rm Q}(4,q)$. Hence there exists a parabolic quadric $Q(4,q)$, denoted by $\mathcal{Q}$, containing $\mathcal{Q}^+$ such that \[ c:=|(\mathcal{Q}\setminus\mathcal{Q}^+)\cap\mathcal{M}|\ge \frac{|\mathcal{M}|-(q+1)^2}{q+1}>q^2-q-1\,. \] Hence, $c\ge q^2-q$. From now on we mean in this proof by a hole of $\mathcal{Q}$ a point of $\mathcal{Q}$ that is not in $\mathcal{M}$. Each of the $q^3-q-c$ holes of $\mathcal{Q}$ can be perpendicular to at most $\delta$ points of $(\mathcal{Q}\setminus\mathcal{Q}^+)\cap\mathcal{M}$ (cf. Lemma~\ref{Qminus_basic} (a)). Thus we find a point $P\in (\mathcal{Q}\setminus\mathcal{Q}^+)\cap\mathcal{M}$ that is perpendicular to at most \[ \frac{(q^3-q-c)\delta}{c}\le q\delta \] holes of $\mathcal{Q}$. 
The point $P$ lies on $q+1$ lines of $\mathcal{Q}$ and if such a line is not contained in $\mathcal{M}$, then it contains at least $q-\delta$ holes of $\mathcal{Q}$ (cf. Lemma~\ref{Qminus_basic} (b)). Thus the number of lines of $\mathcal{Q}$ on $P$ that are not contained in $\mathcal{M}$ is at most $q\delta/(q-\delta)$. The hypothesis on $\delta$ guarantees that this number is less than $q+1-\delta$. Thus, $P$ lies on at least $r\ge \delta+1$ lines of $\mathcal{Q}$ that are contained in $\mathcal{M}$. These meet $\mathcal{Q}^+$ in $r$ points of the conic $C:=P^\perp\cap \mathcal{Q}^+$. Denote this set of $r$ points by $C'$. Assume that $\mathcal{Q}\setminus P^\perp$ contains a hole $R$. For $X\in C'$, the hole $R$ has a unique neighbor $Y$ on the line $PX$; if this is not the point $X$, then the line $RY$ has at least two points in $\mathcal{M}$, namely $Y$ and the point $RY\cap \mathcal{Q}^+$. So if $R^\perp \cap C' = \emptyset$, then there are at least $r \ge \delta + 1$ lines on the hole $R$ with at least two points in $\mathcal{M}$. This contradicts Lemma~\ref{Qminus_basic}~(a). Therefore $|R^\perp\cap C'|\ge r-\delta\ge 1$. As every point of $C'$ lies on $q+1$ lines of $\mathcal{Q}$, two of which are in $\mathcal{Q}^+$ and one other is contained in $\mathcal{M}$, then every point of $C'$ has at most $(q-2)q$ neighbors in $\mathcal{Q}$ that are holes. Counting pairs $(X,R)$ of perpendicular points $X\in C'$ and holes $R\in\mathcal{Q}\setminus P^\perp$, it follows that $\mathcal{Q}\setminus P^\perp$ contains at most $r(q-2)q/(r-\delta)\le (\delta+1)q(q-2)$ holes. Since $P^\perp\cap\mathcal{Q}$ contains at most $q\delta$ holes, we see that $\mathcal{Q}$ has at most $q\delta+(\delta+1)q(q-2)$ holes. As $\delta\le (q-1)/2$, this number is less than $\frac{1}{2}q(q^2-1)$. Hence, $c > |\mathcal{Q}|-|\mathcal{Q}^+|-\frac12q(q^2-1)=\frac12q(q^2-1)$. It follows that $P$ is perpendicular to at most \[ \frac{(q^3-q-c)\delta}{c}<\delta \] holes of $\mathcal{Q}$.
This implies that all $q+1$ lines of $\mathcal{Q}$ on $P$ are contained in $\mathcal{M}$. Then every hole of $\mathcal{Q}$ must be collinear with at least $q+1-\delta$ points, and thus with all points, of the conic $C$. Apart from $P$, there is only one such point in $\mathcal{Q}$, so $\mathcal{Q}$ has at most one hole. Then Lemma~\ref{Qminus_basic}~(a) shows that $\mathcal{Q}$ has no hole. \end{proof} \begin{Le}\label{le:q5qfinal} If $\mathcal{M}$ contains a parabolic quadric $\mbox{\rm Q}(4,q)$, denoted by $\mathcal{Q}$, and $|\mathcal{L}|\le q^2+q$, then $\mathcal{L}$ contains a cover of $\mathcal{Q}$. \end{Le} \begin{proof} Consider a point $P \in \mathcal{Q}$. As $|P^\perp \cap \mathcal{Q}|=q^2+q+1$, some line of $\mathcal{L}$ must contain two points of $P^\perp \cap \mathcal{Q}$. Then this line is contained in $\mathcal{Q}$ and contains $P$. \end{proof} In this subsection we assumed that $\mathcal{L}$ contains no pencil. The assumption that $\delta \le \frac{1}{2}(3q-\sqrt{5q^2+2q+1})$ then implies that $\mathcal{L}$ contains a cover of a $\mbox{\rm Q}(4,q)\subseteq \mbox{\rm Q}^-(5,q)$. Hence, we may conclude the following theorem. \begin{Th}\label{pr:q5qresult} If $\mathcal{L}$ is a generator blocking set of $\mbox{\rm Q}^-(5,q)$, $|\mathcal{L}|=q^2+1+\delta$, $\delta \le \frac{1}{2}(3q-\sqrt{5q^2+2q+1})$, then $\mathcal{L}$ contains a pencil of $q^2+1$ lines through a point or $\mathcal{L}$ contains a cover of an embedded $\mbox{\rm Q}(4,q) \subset \mbox{\rm Q}^-(5,q)$. \end{Th} \subsection{The case $\mathcal{S}=\mbox{\rm H}(4,q^2)$} In this subsection, $\mathcal{S} = \mbox{\rm H}(4,q^2)$, so $(s,t) = (q^2,q^3)$. We suppose that $\mathcal{L}$ contains no pencil and that $|\mathcal{L}|=q^3+1+\delta$, and we show that this implies that $\delta \geq q-3$. The set $\mathcal{M}$ of covered points must block all the lines of $\mbox{\rm H}(4,q^2)$.
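The counts in this subsection repeatedly use the sizes of $\mbox{\rm H}(4,q^2)$. As a quick reference (these follow from the general GQ formulas recalled at the beginning of this section, with $(s,t)=(q^2,q^3)$):

```latex
% Sizes of H(4,q^2), viewed as a GQ of order (s,t) = (q^2,q^3):
\[
|\mathcal{P}| = (s+1)(st+1) = (q^2+1)(q^5+1), \qquad
|\mathcal{G}| = (t+1)(st+1) = (q^3+1)(q^5+1),
\]
% with t+1 = q^3+1 lines through every point and s+1 = q^2+1 points on
% every line.
```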
It follows from \cite{DBM2005} that $|\mathcal{M}| \geq q^5+q^2$, and hence $W = |\mathcal{L}|(q^2+1) - |\mathcal{M}| \leq (q^2+1)(q+\delta)$. \begin{Le}\label{WboundH} If $\delta < q-1$, then $W\le \delta(q^2+3)$. \end{Le} \begin{proof} Denote by $\mathcal{B}$ the set of all lines not in $\mathcal{L}$, meeting exactly $i$ lines of $\mathcal{L}$ for some $i$, with $2\le i\le \delta+1$. We count the number of pairs $(l,m)$ with $l \in \mathcal{L}$, $m \in \mathcal{B}$, and $l$ meeting $m$. The number of these pairs is $\sum_{i=2}^{\delta +1}b_i i$. It follows from Lemma~\ref{H_basic} (e), $W \leq (q^2+1)(q+\delta)$, and $\delta < q-1$, that \begin{eqnarray*} \sum_{i=2}^{\delta+1}b_i i & \le & 2\sum_{i=1}^{\delta+1}b_i(i-1) \le \frac{2(q^5-q^3-\delta)(q^2+1)\delta+2W\delta}{q^2-\delta} \\ &\le&\frac{2(q^2+1)\delta(q^5-q^3+q)}{q^2-\delta} \le 2(q^6+1)=:c\,. \end{eqnarray*} Hence, some line $l$ of $\mathcal{L}$ meets at most $\lfloor c/|\mathcal{L}|\rfloor$ lines of $\mathcal{B}$. Denote by $\mathcal{B}_1$ the set of lines not in $\mathcal{L}$ that meet exactly one line of $\mathcal{L}$. If a point $P$ does not lie on a line of $\mathcal{B}_1$, then it lies on at least $q^3-q-\delta$ lines of $\mathcal{B}$ (by Lemma~\ref{Qminus_basic} (f) and since $\mathcal{L}$ contains no pencil). As $\delta < q-1$, we have $c/|\mathcal{L}|< 3(q^3-q-\delta)$, so at most two points of $l$ can have this property. Thus $l$ has $x\ge q^2-1$ points that lie on a line of $\mathcal{B}_1$, so $l$ is the only line of $\mathcal{L}$ meeting such a line. Apply Lemma~\ref{H_basic}~(d) to these $x$ points. As every point not on $l$ is collinear with at most one of these $x$ points, it follows that \begin{eqnarray*} \sum_{P\notin l}(w(P)-1) &\le& \frac{x\delta q^2}{x-1}\le \delta (q^2+1)+\frac{2\delta}{q^2-2} < \delta(q^2+1)+1. \end{eqnarray*} Hence, $\sum_{P\notin l}(w(P)-1) \le \delta(q^2+1)$.
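The two closing estimates of the first display rest on the factorization $(q^2+1)(q^5-q^3+q)=q(q^6+1)$ together with $\delta q\le q^2-\delta$ for $\delta\le q-2$. A quick editorial Python check of both facts (not part of the proof):

```python
# Editorial check of the last two estimates in the display:
# (q^2+1)(q^5-q^3+q) factors as q(q^6+1), and for delta <= q-2 we have
# delta*q <= q^2 - delta, so the whole expression is at most 2(q^6+1).
for q in range(2, 100):
    assert (q**2 + 1) * (q**5 - q**3 + q) == q * (q**6 + 1)
    for delta in range(q - 1):  # delta = 0, ..., q-2
        lhs = 2 * (q**2 + 1) * delta * (q**5 - q**3 + q)
        assert lhs <= 2 * (q**6 + 1) * (q**2 - delta), (q, delta)
print("estimates verified for 2 <= q < 100")
```

The factorization is the identity $(a+1)(a^2-a+1)=a^3+1$ with $a=q^2$, after pulling out a factor $q$ from the second bracket.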
All but at most two points of $l$ lie on a line of $\mathcal{B}_1$, so $l$ is the only line of $\mathcal{L}$ on these at least $q^2-1$ points. At most two points of $l$ can be contained in more than one line of $\mathcal{L}$, but each such point is contained in at most $\delta+1$ lines of $\mathcal{L}$ by Lemma~\ref{Qminus_basic}~(f). Hence $\sum_{P\in l}(w(P)-1) \le 2\delta$, and therefore $W\le\delta(q^2+3)$. \end{proof} \begin{Le}\label{bound_on_bq2plus1} If $\delta\le q-2$, then \[ \tilde{b}_{q^2+1}\ge q^4+q-\delta-\frac{(q^5+2q^3-2q\delta -q+2)\delta}{q^2-\delta}. \] \end{Le} \begin{proof} We count the number of incident pairs $(P,l)$, $P \in \mathcal{M}$ and $l$ a line of $\mbox{\rm H}(4,q^2)$, to see \[ |\mathcal{M}|(q^3+1) = |\mathcal{L}|(q^2+1) + \sum_{i=1}^{q^2+1}\tilde{b}_i i \,. \] As $\mbox{\rm H}(4,q^2)$ has $(q^3+1)(q^5+1)=|\mathcal{L}| + \sum_{i=1}^{q^2+1}\tilde{b}_i$ lines, then \begin{eqnarray*} |\mathcal{L}|q^2+\sum_{i=1}^{q^2+1}\tilde{b}_i(i-1)&=& |\mathcal{L}|(q^2+1)+\sum_{i=1}^{q^2+1}\tilde{b}_ii-(q^3+1)(q^5+1) \\ & = & |\mathcal{M}|(q^3+1)-(q^5+1)(q^3+1) \\ & = & (q^3+1)(q^3+q^2+\delta(q^2+1)) - W(q^3+1) \\ & \ge & (q^3+1)(q+1)q^2 - 2\delta(q^3+1). \end{eqnarray*} From Lemmas \ref{Qminus_basic} (c) and (e) and Lemma~\ref{WboundH}, we have \[ (q^2-\delta)\sum_{i=2}^{\delta+1}\tilde{b}_i(i-1) \leq (q^2-\delta)\sum_{i=2}^{\delta+1}b_i(i-1)\le (q^5-q^3)(q^2+1)\delta+2\delta^2. \] Together this gives \[ (|\mathcal{L}|+\tilde{b}_{q^2+1})q^2\ge (q^3+1)(q+1)q^2-2\delta(q^3+1)-\frac{(q^5-q^3)(q^2+1)\delta+2\delta^2}{q^2-\delta}. \] Using $|\mathcal{L}|=q^3+1+\delta$, the assertion follows. \end{proof} \begin{Le}\label{le:analyse2} If $\delta \le q-4$, then $|\mathcal{L}|(|\mathcal{L}|-1)3q < \tilde{b}_{q^2+1}(q^2+1)q^2$. 
\end{Le} \begin{proof} First note that by the assumption on $\delta$, we may use the lower bound for $\tilde b_{q^2+1}$ from the previous lemma, and so we find \begin{align*} & (q^2-\delta)\left(\tilde b_{q^2+1}(q^2+1)q^2-|\mathcal{L}|(|\mathcal{L}|-1)3q\right) \\ & \ge (q-4-\delta)(q^6-\delta)(q^3+q^2+5q+5\delta+21) + r(q,\delta)\,, \end{align*} with \begin{align*} r(q,\delta) & = (81+33\delta+5\delta^2)q^6+(1-2\delta+2\delta^2)q^5+(\delta+7\delta^2)q^4\\ &+(-2\delta^2-6\delta)q^3-\delta q^2+(\delta+3\delta^2+3\delta^3)q-84\delta -41\delta^2-5\delta^3\,. \end{align*} Since $r(q,\delta) \ge 0$ if $\delta \le q-4$, the lemma follows. \end{proof} \begin{Le}\label{le:h4qfinal} If $\mathcal{L}$ contains no pencil, then $\delta \geq q-3$. \end{Le} \begin{proof} Assume that $\delta < q-3$. Consider a hermitian variety $\mbox{\rm H}(3,q^2)$, denoted by $\mathcal{H}$, contained in $\mbox{\rm H}(4,q^2)$. A cover of $\mathcal{H}$ contains at least $q^3+q$ lines by \cite{M1998}, so $\mathcal{H}$ contains at least one hole $P$. Of all lines through $P$ in $\mbox{\rm H}(4,q^2)$, $q^3-q$ are not contained in $\mathcal{H}$. They must all meet a line of $\mathcal{L}$, so at most $q+1+\delta$ lines of $\mathcal{L}$ can be contained in $\mathcal{H}$. Hence, at most $|\mathcal{L}|+(q+1+\delta)q^2=2q^3+q^2+1+\delta(q^2+1)<(q^2+1)(2q+\delta+1)$ points of $\mathcal{H}$ are covered. Now count triples $(l_1,l_2,g)$, where $l_1,l_2$ are skew lines of $\mathcal{L}$ and $g \not \in \mathcal{L}$ is a line meeting $l_1$ and $l_2$ and completely contained in $\mathcal{M}$. Then \[ |\mathcal{L}|(|\mathcal{L}|-1)z\ge \tilde{b}_{q^2+1}(q^2+1)q^2\,, \] where $z$ is the average number of transversals, contained in $\mathcal{M}$ but not belonging to $\mathcal{L}$, of two skew lines of $\mathcal{L}$. By Lemma~\ref{le:analyse2}, we find that $z > 3q$. So there exist skew lines $l_1$ and $l_2$ in $\mathcal{L}$ such that at least $z$ transversals to both lines are contained in $\mathcal{M}$.
These transversals are pairwise skew, so the $\mbox{\rm H}(3,q^2)$ induced in the $3$-space generated by $l_1$ and $l_2$ contains at least $z(q^2+1)\ge 3q(q^2+1)>(q^2+1)(2q+\delta+1)$ points of $\mathcal{M}$. This is a contradiction. \end{proof} We have shown that $\delta \geq q-3$ if $\mathcal{L}$ contains no pencil. Note that we have no result for $q \in \{2,3\}$. Hence, we have proved the following result. \begin{Th}\label{pr:h4q2result} If $\mathcal{L}$ is a generator blocking set of $\mbox{\rm H}(4,q^2)$, $q>3$, $|\mathcal{L}|=q^3+1+\delta$, $\delta < q - 3$, then $\mathcal{L}$ contains a pencil of $q^3+1$ lines through a point. \end{Th} \section{Polar spaces of higher rank}\label{sec:rankn} Consider two point sets $V$ and $\mathcal{B}$ in a projective space, $V \cap \mathcal{B} = \emptyset$. The {\em cone with vertex $V$ and base $\mathcal{B}$}, denoted by $V\mathcal{B}$, is the set of points that lie on a line connecting a point of $V$ with a point of $\mathcal{B}$. If $\mathcal{B}$ is empty, then the cone is just the set $V$. In this section, we denote a polar space of rank $r$ by $\mathcal{S}_{r}$, and the parameters $(s,t)$ always refer to $(q,q)$, $(q,q^2)$, and $(q^2,q^3)$ for the polar spaces $\mbox{\rm Q}(2n,q)$, $\mbox{\rm Q}^-(2n+1,q)$, and $\mbox{\rm H}(2n,q^2)$, respectively. From now on, the term {\em polar space} always refers to a finite classical polar space. Consider a point $P$ in a polar space $\mathcal{S}$. If $\mathcal{S}$ is determined by a polarity $\phi$ of the ambient projective space, which is true for all polar spaces except for $\mbox{\rm Q}(2n,q)$, $q$ even, then $P^\perp$ denotes the hyperplane $P^\phi$. The set $P^\perp \cap \mathcal{S}$ is exactly the set of points of $\mathcal{S}$ collinear with $P$, including $P$. For any point set $A$ of the ambient projective space, we define $A^\perp := \langle A \rangle^\phi$.
For $\mathcal{S}=\mbox{\rm Q}(2n,q)$, $q$ even, $P$ a point of $\mathcal{S}$, $P^\perp$ denotes the tangent hyperplane to $\mathcal{S}$ at $P$. For any point set $A$ containing at least one point of $\mathcal{S}$, we define the notation $A^\perp$ as \[ A^\perp := \bigcap_{X \in A \cap \mathcal{S}} X^\perp\,. \] Using this notation, we can formulate the following property. Consider any polar space $\mathcal{S}_n$ of rank $n$, and any subspace $\pi$ of dimension $l \leq n-1$, completely contained in $\mathcal{S}_n$. Then $\pi^\perp \cap \mathcal{S}_n = \pi\mathcal{S}_{n-l-1}$, a cone with vertex $\pi$ whose base $\mathcal{S}_{n-l-1}$ is a polar space of the same type and of rank $n-l-1$ \cite{Hirschfeld,HT1991}. A minimal generator blocking set of $\mathcal{S}_{n}$, $n \geq 3$, can be constructed in a cone as follows. Consider an $(n-3)$-dimensional subspace $\pi_{n-3}$ completely contained in $\mathcal{S}_n$; then $\pi_{n-3}^\perp \cap \mathcal{S}_{n} = \pi_{n-3}\mathcal{S}_2$. If $\mathcal{L}$ is a minimal generator blocking set of $\mathcal{S}_2$, then $\mathcal{L}$ consists of lines. Each element of $\mathcal{L}$ spans together with $\pi_{n-3}$ a generator of $\mathcal{S}_n$, and these $|\mathcal{L}|$ generators of $\mathcal{S}_n$ constitute a minimal generator blocking set of $\mathcal{S}_n$ of size $|\mathcal{L}|$. Using the smallest generator blocking sets of the polar spaces of rank 2 mentioned above, we obtain examples of the same size in general rank, listed in Table~\ref{tab:ex}. The notation $\pi_i$ refers to an $i$-dimensional subspace. When the cone is $\pi_i B$, the example consists of the generators through the vertex $\pi_i$, contained in the cone $\pi_iB$ and meeting the base of the cone in the elements of the base set; the size of the example equals the size of the base set. We will call $\pi_i$ the {\em vertex} of the generator blocking set. \begin{table}[h!]
\begin{center} \begin{tabular}{|l|l|l|l|l|} \hline polar space & $(s,t)$ & cone & base set & dimension\\ \hline $\mbox{\rm Q}(2n,q)$ & $(q,q)$ & $\pi_{n-2}\mbox{\rm Q}(2,q)$ & $\mbox{\rm Q}(2,q)$ & $n+1$\\ & & $\pi_{n-3}\mbox{\rm Q}^+(3,q)$ & a spread of $\mbox{\rm Q}^+(3,q)$ & $n+1$\\ \hline $\mbox{\rm Q}^-(2n+1,q)$ & $(q,q^2)$ & $\pi_{n-2}\mbox{\rm Q}^-(3,q)$ & $\mbox{\rm Q}^-(3,q)$ & $n+2$\\ & & $\pi_{n-3}\mbox{\rm Q}(4,q)$ & a cover of $\mbox{\rm Q}(4,q)$ & $n+2$\\ \hline $\mbox{\rm H}(2n,q^2)$ & $(q^2,q^3)$ & $\pi_{n-2}\mbox{\rm H}(2,q^2)$ & $\mbox{\rm H}(2,q^2)$ & $n+1$\\ \hline \end{tabular} \caption{Small examples in rank $n$}\label{tab:ex} \end{center} \end{table} The natural question is whether these examples are the smallest ones. The answer is yes, and the following theorem, proved by induction on $n$, gives slightly more information. \begin{Th}\label{th:rankn} \begin{compactenum}[\rm (a)] \item Let $\mathcal{L}$ be a generator blocking set of $\mbox{\rm Q}(2n,q)$, with $|\mathcal{L}| = q + 1 + \delta$. Let $\epsilon$ be such that $q+1+\epsilon$ is the size of the smallest non-trivial blocking set in $\mbox{\rm PG}(2,q)$. If $\delta < \min\{\frac{q-1}{2}, \epsilon\}$, then $\mathcal{L}$ contains one of the two examples listed in Table~\ref{tab:ex} for $\mbox{\rm Q}(2n,q)$. \item Let $\mathcal{L}$ be a generator blocking set of $\mbox{\rm Q}^-(2n+1,q)$, with $|\mathcal{L}| = q^2 + 1 + \delta$. If $\delta \le \frac{1}{2}(3q-\sqrt{5q^2+2q+1})$, then $\mathcal{L}$ contains one of the two examples listed in Table~\ref{tab:ex} for $\mbox{\rm Q}^-(2n+1,q)$. \item Let $\mathcal{L}$ be a generator blocking set of $\mbox{\rm H}(2n,q^2)$, $q>3$, with $|\mathcal{L}| = q^3 + 1 + \delta$. If $\delta < q-3$, then $\mathcal{L}$ contains the example listed in Table~\ref{tab:ex} for $\mbox{\rm H}(2n,q^2)$. \end{compactenum} \end{Th} \subsection{Preliminaries} The following technical lemma will be useful.
\begin{Le}\label{le:cases} \begin{compactenum}[\rm (a)] \item If a quadric $\pi_{n-4}\mbox{\rm Q}^+(3,q)$ or $\pi_{n-3}\mbox{\rm Q}(2,q)$ in $\mbox{\rm PG}(n,q)$ is covered by generators, then for any hyperplane $T$ of $\mbox{\rm PG}(n,q)$, at least $q-1$ of the generators in the cover are not contained in $T$. \item If a quadric $\pi_{n-4}\mbox{\rm Q}(4,q)$ or $\pi_{n-3}\mbox{\rm Q}^-(3,q)$ in $\mbox{\rm PG}(n+1,q)$ is covered by generators, then for any hyperplane $T$, at least $q^2-q$ of the generators in the cover are not contained in $T$. \item If a hermitian variety $\pi_{n-3}\mbox{\rm H}(2,q^2)$ in $\mbox{\rm PG}(n,q^2)$ is covered by generators, then for any hyperplane $T$ of $\mbox{\rm PG}(n,q^2)$, at least $q^3-q$ of the generators in the cover are not contained in $T$. \end{compactenum} \end{Le} \begin{proof} \begin{compactenum}[\rm (a)] \item This is clear if $T$ does not contain the vertex of the quadric (i.e.\ the subspace $\pi_{n-4}$ or $\pi_{n-3}$, respectively). If $T$ contains the vertex, then going to the quotient space of the vertex, it is sufficient to handle the cases $\mbox{\rm Q}(2,q)$ and $\mbox{\rm Q}^+(3,q)$. The case $\mbox{\rm Q}(2,q)$ is degenerate but obvious, since any line contains at most two points of $\mbox{\rm Q}(2,q)$. So suppose that $C$ is a cover of $\mbox{\rm Q}^+(3,q) \subset \mbox{\rm PG}(3,q)$; then $T$ is a plane. If $T \cap \mbox{\rm Q}^+(3,q)$ contains lines, then it contains exactly two lines of $\mbox{\rm Q}^+(3,q)$. Since at least $q+1$ lines are required to cover $\mbox{\rm Q}^+(3,q)$, at least $q-1$ lines in $C$ do not lie in $T$. \item Again, we only have to consider the case that $T$ contains the vertex, and so it is sufficient to consider the two cases $\mbox{\rm Q}^-(3,q)$ and $\mbox{\rm Q}(4,q)$ in the quotient geometry of the vertex. For $\mbox{\rm Q}^-(3,q)$, the assertion is obvious. Suppose finally that $C$ is a cover of $\mbox{\rm Q}(4,q)\subset \mbox{\rm PG}(4,q)$. Then $T$ has dimension three.
If $T\cap \mbox{\rm Q}(4,q)$ contains lines at all, then $T\cap \mbox{\rm Q}(4,q)$ is a hyperbolic quadric $\mbox{\rm Q}^+(3,q)$ or a cone over a conic $\mbox{\rm Q}(2,q)$. As these can be covered by $q+1$ lines and since a cover of $\mbox{\rm Q}(4,q)$ needs at least $q^2+1$ lines, the assertion is obvious also in this case. \item Now we only have to handle the case $\mbox{\rm H}(2,q^2)$. Since all lines of $\mbox{\rm PG}(2,q^2)$ contain at most $q+1$ points of $\mbox{\rm H}(2,q^2)$, the assertion is obvious. \end{compactenum} \end{proof} From now on, we always assume that $\mathcal{S}_{n} \in \{\mbox{\rm Q}(2n,q), \mbox{\rm Q}^-(2n+1,q), \mbox{\rm H}(2n,q^2)\}$. In this section, $\mathcal{L}$ denotes a generator blocking set of size $|\mathcal{L}| = t+1+\delta$ of a polar space $\mathcal{S}_{n}$. Section~\ref{sec:rank2} was devoted to the case $n=2$ of Theorem~\ref{th:rankn}~(b)~and~(c); the case $n=2$ of Theorem~\ref{th:rankn}~(a) is Proposition~\ref{pro:q4q}. The case $n=2$ serves as the induction basis. The induction hypothesis is that if $\mathcal{L}$ is a generator blocking set of $\mathcal{S}_{n}$ of size $t+1+\delta$, with $\delta < \delta_0$, then $\mathcal{L}$ contains one of the examples listed in Table~\ref{tab:ex}. The number $\delta_0$ can be derived from the case $n=2$ in Theorem~\ref{th:rankn}. The polar space $\mathcal{S}_{n}$ has $\mbox{\rm PG}(2n+e,s)$ as ambient projective space. Here $e = 1$ if $\mathcal{S}_{n} = \mbox{\rm Q}^-(2n+1,q)$, and $e=0$ otherwise. Call a point $P$ of $\mathcal{S}_{n}$ a {\em hole} if it is not covered by a generator of $\mathcal{L}$. If $P$ is a hole, then $P^\perp$ meets every generator of $\mathcal{L}$ in an $(n-2)$-dimensional subspace. In the polar space $\mathcal{S}_{n-1}$, which is induced in the quotient space of $P$ by projecting from $P$, these $(n-2)$-dimensional subspaces induce a generator blocking set $\mathcal{L}'$, $|\mathcal{L}'| \le |\mathcal{L}|$.
Applying the induction hypothesis, $\mathcal{L}'$ contains one of the examples of $\mathcal{S}_{n-1}$ described in Table~\ref{tab:ex}, living in dimension $n+e$; we will denote this example by $\mathcal{L}^P$. Hence, the $(n+1+e)$-space on $P$ containing the $(n-2)$-dimensional subspaces that are projected from $P$ on the elements of $\mathcal{L}^P$, is a cone with vertex $P$ and base the $(n+e)$-dimensional subspace containing a minimal generator blocking set of $\mathcal{S}_{n-1}$ described in Table~\ref{tab:ex}. We denote this $(n+1+e)$-space on $P$ by $S_P$. \begin{Le}\label{fourspace} Consider a polar space $\mathcal{S}_{n} \in \{\mbox{\rm Q}(2n,q),\mbox{\rm Q}^-(2n+1,q),\mbox{\rm H}(2n,q^2)\}$, and a generator blocking set $\mathcal{L}$ of size $t+1+\delta$. If $P$ is a hole and $T$ is an $(n+e)$-dimensional subspace on $P$ contained in $S_P$, then at least $t-\frac{t}{s}$ generators of $\mathcal{L}$ meet $S_P$ in an $(n-2)$-dimensional subspace not contained in $T$. \end{Le} \begin{proof} This assertion follows by going to the quotient space of $P$, and using Lemma~\ref{le:cases} and the induction hypothesis of this section. \end{proof} We recall the following facts from \cite{HT1991}. Consider a quadric $\mathcal{Q}$ in a projective space $\mbox{\rm PG}(n,q)$. An $i$-dimensional subspace $\pi_i$ of $\mbox{\rm PG}(n,q)$ intersects $\mathcal{Q}$ in a possibly degenerate quadric $\mathcal{Q}'$. If $\mathcal{Q}'$ is degenerate, then $\pi_i \cap \mathcal{Q} = \mathcal{Q}' = R \mathcal{Q}''$, where $R$ is a subspace completely contained in $\mathcal{Q}$, and where $\mathcal{Q}''$ is a non-singular quadric. We call $R$ the {\em radical} of $\mathcal{Q}'$. Clearly, all generators of $\mathcal{Q}'$ contain $R$. We recall that $\mathcal{Q}''$ does not necessarily have the same type as $\mathcal{Q}$. Consider a hermitian variety $\mathcal{H}$ in a projective space $\mbox{\rm PG}(n,q^2)$.
An $i$-dimensional subspace $\pi_i$ of $\mbox{\rm PG}(n,q^2)$ intersects $\mathcal{H}$ in a possibly degenerate hermitian variety $\mathcal{H}'$. If $\mathcal{H}'$ is degenerate, then $\pi_i \cap \mathcal{H} = \mathcal{H}' = R \mathcal{H}''$, where $R$ is a subspace completely contained in $\mathcal{H}$, and $\mathcal{H}''$ is a non-singular hermitian variety. We call $R$ the {\em radical} of $\mathcal{H}'$. Clearly, all generators of $\mathcal{H}'$ contain $R$. \begin{Le}\label{fivespace} Let $\mathcal{L}$ be a minimal generator blocking set of size $t+1+\delta$ of $\mathcal{S}_{n}$. If an $(n+1+e)$-dimensional subspace $\Pi$ of $\mbox{\rm PG}(2n+e,s)$ contains more than $\frac{t}{s}+1+\delta$ generators of $\mathcal{L}$, then $\mathcal{L}$ is one of the examples listed in Table~\ref{tab:ex}. \end{Le} \begin{proof} First we show that $\Pi$ is covered by the generators of $\mathcal{L}$. Assume not, and let $P$ be a hole of $\Pi$. If $\Pi\cap \mathcal{S}_{n}$ is degenerate, then its radical is contained in all generators of $\Pi\cap \mathcal{S}_{n}$, so $P$ is not in the radical. Hence, $P^\perp\cap \Pi$ has dimension $n+e$ and thus $S_P\cap \Pi$ has dimension at most $n+e$. Lemma~\ref{fourspace} shows that at least $t-\frac{t}{s}$ generators of $\mathcal{L}$ meet $S_P$ in an $(n-2)$-subspace that is not contained in $\Pi$. Hence, $\Pi$ contains at most $\frac{t}{s}+1+\delta$ generators of $\mathcal{L}$. This contradiction shows that $\Pi$ is covered by the generators of $\mathcal{L}$. The subspace $\Pi$ is an $(n+1+e)$-dimensional subspace containing generators of $\mathcal{S}_{n}$.
This leaves a restricted number of possibilities for the structure of $\Pi \cap \mathcal{S}_{n}$: $\Pi \cap \mathcal{S}_{n} \in \{\pi_{n-3}\mbox{\rm Q}^+(3,q), \pi_{n-2}\mbox{\rm Q}(2,q)\}$ when $\mathcal{S}_n = \mbox{\rm Q}(2n,q)$, $\Pi \cap \mathcal{S}_{n} \in \{\pi_{n-4}\mbox{\rm Q}^+(5,q), \pi_{n-3}\mbox{\rm Q}(4,q), \pi_{n-2}\mbox{\rm Q}^-(3,q)\}$ when $\mathcal{S}_n = \mbox{\rm Q}^-(2n+1,q)$, and $\Pi\cap \mathcal{S}_{n} \in \{\pi_{n-3} \mbox{\rm H}(3,q^2), \pi_{n-2}\mbox{\rm H}(2,q^2)\}$ when $\mathcal{S}_{n} = \mbox{\rm H}(2n,q^2)$. {\bf Case 1:} $\Pi\cap \mathcal{S}_{n} = \pi_{n-2} \mathcal{S}_{1}$ ($\mathcal{S}_{1}=\mbox{\rm Q}(2,q), \mbox{\rm Q}^-(3,q)$, or $\mbox{\rm H}(2,q^2)$). \\ A generator of $\mathcal{L}$ contained in $\Pi$ contains the vertex $\pi_{n-2}$. If one of the $t+1$ generators on $\pi_{n-2}$ is not contained in $\mathcal{L}$, then at least $s$ generators of $\mathcal{L}$ are required to cover its points outside of $\pi_{n-2}$. Hence, if $x$ of the $t+1$ generators on $\pi_{n-2}$ are not contained in $\mathcal{L}$, then $|\mathcal{L}|\ge t+1-x+xs$. Since $|\mathcal{L}| = t+1+\delta$, with $\delta < s-1$, this implies $x=0$. So $\mathcal{L}$ contains the pencil of generators of $\pi_{n-2}\mathcal{S}_{1}$, and by the minimality of $\mathcal{L}$, it is equal to this pencil. {\bf Case 2:} $\Pi\cap \mathcal{S}_{n} \in \{\pi_{n-3}\mbox{\rm Q}^+(3,q), \pi_{n-3} \mbox{\rm Q}(4,q)\}$. \\ Recall that $\Pi \cap \mathcal{S}_{n} = \pi_{n-3}\mbox{\rm Q}^+(3,q)$ when $\mathcal{S}_n = \mbox{\rm Q}(2n,q)$ and then $(s,t) = (q,q)$, and that $\Pi \cap \mathcal{S}_{n} = \pi_{n-3}\mbox{\rm Q}(4,q)$ when $\mathcal{S}_n = \mbox{\rm Q}^-(2n+1,q)$ and then $(s,t) = (q,q^2)$. All generators of $\mathcal{L}$ contained in $\Pi$ must contain the vertex $\pi_{n-3}$. 
We will show that the generators of $\mathcal{L}$ contained in $\Pi$ already cover $\Pi\cap \mathcal{S}_{n}$; then $\mathcal{L}$ contains (by minimality) no further generator and thus $\mathcal{L}$ is one of the two examples. Assume that some point $P$ of $\Pi\cap\mathcal{S}_n$ does not lie on any generator of $\mathcal{L}$ contained in $\Pi$. As all generators of $\mathcal{L}$ contained in $\Pi$ contain the vertex $\pi_{n-3}$, the point $P$ is not in this vertex. Hence, $P^\perp\cap \Pi\cap \mathcal{S}_{n}$ is a pencil of $\frac{t}{s}+1$ generators $g_0,\dots,g_{\frac{t}{s}}$ on the subspace $\pi_{n-2}=\erz{P,\pi_{n-3}}$. None of the generators $g_i$ is contained in $\mathcal{L}$. Therefore, at least $s+1$ generators of $\mathcal{L}$ are required to cover each $g_i$. One such generator of $\mathcal{L}$ may contain the vertex $\pi_{n-2}$ and then counts once for every generator $g_i$, but this still leaves at least $(\frac{t}{s}+1)s+1$ generators in $\mathcal{L}$ necessary to cover all the generators $g_i$. But $|\mathcal{L}|<t+s$, a contradiction. {\bf Case 3:} $\Pi \cap \mathcal{S}_{n} \in \{\pi_{n-4}\mbox{\rm Q}^+(5,q), \pi_{n-3}\mbox{\rm H}(3,q^2)\}$, and we will show that this case is impossible.\\ Recall that $\Pi \cap \mathcal{S}_{n} = \pi_{n-4}\mbox{\rm Q}^+(5,q)$ when $\mathcal{S}_n = \mbox{\rm Q}^-(2n+1,q)$ and then $(s,t) = (q,q^2)$, and that $\Pi \cap \mathcal{S}_{n} = \pi_{n-3}\mbox{\rm H}(3,q^2)$ when $\mathcal{S}_n = \mbox{\rm H}(2n,q^2)$ and then $(s,t) = (q^2,q^3)$. In both cases, $\frac{t}{s} = q$. Denote by $V$ the vertex of $\Pi\cap \mathcal{S}_n$. All generators of $\mathcal{L}$ contained in $\Pi$ must contain the vertex $V$. We will show that the generators of $\mathcal{L}$ contained in $\Pi$ already cover $\Pi\cap \mathcal{S}_{n}$. Assume that some point $P$ of $\Pi\cap\mathcal{S}_n$ does not lie on any generator of $\mathcal{L}$ contained in $\Pi$. As all generators of $\mathcal{L}$ contained in $\Pi$ contain the vertex $V$, the point $P$ is not in $V$.
When $\mathcal{S}_n = \mbox{\rm Q}^-(2n+1,q)$, then $P^\perp\cap \Pi\cap \mathcal{S}_{n}$ contains $2(q+1)$ generators on the subspace $\pi = \erz{P,V}$. None of these generators is contained in $\mathcal{L}$. These $2(q+1)$ generators split into two classes, corresponding to the two classes of generators of the hyperbolic quadric $\mbox{\rm Q}^+(3,q)$, the base of the cone $\pi \mbox{\rm Q}^+(3,q) = P^\perp\cap \Pi\cap \mathcal{S}_{n}$. Consider one such class of generators, denoted by $g_0,\dots,g_{q}$. When $\mathcal{S}_n = \mbox{\rm H}(2n,q^2)$, then $P^\perp\cap \Pi\cap \mathcal{S}_{n}$ contains $q+1$ generators on the subspace $\pi = \erz{P,V}$, and none of these generators is contained in $\mathcal{L}$. Also denote these generators by $g_0,\dots,g_{q}$. So in both cases we consider $\frac{t}{s}+1=q+1$ generators $g_0,\dots,g_{q}$ on the subspace $\pi = \erz{P,V}$, not contained in $\mathcal{L}$. Consider now any generator $g_i$; then at least $s+1$ generators of $\mathcal{L}$ are required to cover $g_i$. One such generator of $\mathcal{L}$ may contain the vertex $\pi$ and then counts once for every generator $g_i$, but this still leaves at least $(\frac{t}{s}+1)s+1$ generators in $\mathcal{L}$ necessary to cover all the generators $g_i$. But $|\mathcal{L}|<t+s$, a contradiction. Hence, in the quotient geometry of the vertex $V$, the generators of $\mathcal{L}$ contained in $\Pi$ induce either a cover of $\mbox{\rm Q}^+(5,q)$, which has size at least $q^2+q$ (see \cite{ES2001}), or a cover of $\mbox{\rm H}(3,q^2)$, which has size at least $q^3+q^2$ (see \cite{M1998}). In both cases, this is a contradiction with the assumed upper bound on $|\mathcal{L}|$. \end{proof} \subsection{The polar spaces $\mbox{\rm Q}^-(2n+1,q)$ and $\mbox{\rm H}(2n,q^2)$} This subsection is devoted to the proof of Theorem~\ref{th:rankn}~(b)~and~(c). \begin{Le}\label{conic} Suppose that $\mathcal{C}$ is a line cover of $\mbox{\rm Q}(4,q)$ with $q^2+1+\delta$ lines.
Then each conic and each line of $\mbox{\rm Q}(4,q)$ meets at most $(\delta+1)(q+1)$ lines of $\mathcal{C}$. \end{Le} \begin{proof} If $w(P)+1$ is defined as the number of lines of $\mathcal{C}$ on a point $P$, then the sum of the weights $w(P)$ over all points of $\mbox{\rm Q}(4,q)$ is $\delta(q+1)$, since the $q^2+1+\delta$ lines of $\mathcal{C}$ carry $(q^2+1+\delta)(q+1)$ point-line incidences while $\mbox{\rm Q}(4,q)$ has $(q^2+1)(q+1)$ points. Hence, a conic can meet at most $(\delta+1)(q+1)$ lines of $\mathcal{C}$, and the same holds for lines. \end{proof} \begin{Le}\label{le:general1} Suppose that $\mathcal{S}_{n} \in \{\mbox{\rm Q}^-(2n+1,q),\mbox{\rm H}(2n,q^2)\}$, $n \geq 3$. Suppose that $\mathcal{L}$ is a minimal generator blocking set of size $t+1+\delta$ of $\mathcal{S}_{n}$, $\delta < \delta_0$. If there exists a hole $P$ that projects $\mathcal{L}$ on a generator blocking set containing a minimal generator blocking set of $\mathcal{S}_{n-1}$ that has a non-trivial vertex, then $\mathcal{L}$ is one of the examples in Table~\ref{tab:ex}. \end{Le} \begin{proof} Let $P$ be a hole that projects $\mathcal{L}$ on an example with a vertex $\alpha$. Hence, there exists a line $l$ on $P$ in $S_P$ meeting at least $t+1$ of the generators of $\mathcal{L}$, and the vertex of $S_P$ equals $\erz{P,\alpha}$. We have $l^\perp\cap \mathcal{S}_{n}=l \mathcal{S}_{n-2}$, hence the number of planes completely contained in $\mathcal{S}_{n}$ on the line $l$ equals $|\mathcal{P}_{n-2}|$ (where $\mathcal{P}_{n-2}$ denotes the point set of $\mathcal{S}_{n-2}$). Suppose that a generator $g$ of $\mathcal{L}$ meets such a plane $\pi$ in a line; then this line intersects $l$ in a point $P'\neq P$. But then $l^\perp \cap g$ has dimension $n-2$, so the number of lines on $P'$ contained in $l^\perp \cap g$ equals $\theta_{n-3}$, and so $\theta_{n-3}$ planes of $\mathcal{S}_{n}$ on $l$ meet $g$ in a line. Denote by $\lambda$ the number of planes on $l$ contained completely in the vertex of $S_P$. Then $\lambda$ equals the number of points in a hyperplane of $\alpha$; when $\alpha$ is a point, $\lambda = 0$.
Then there are $|\mathcal{P}_{n-2}| -\lambda$ planes on $l$, completely contained in $\mathcal{S}_n$, but not contained in the vertex of $S_P$. Consequently, we find such a plane $\pi$ meeting the vertex of $S_P$ only in $l$, and meeting at most $m := |\mathcal{L}|\cdot\theta_{n-3}/(|\mathcal{P}_{n-2}|-\lambda)$ generators of $\mathcal{L}$ in a line. A calculation shows that $m < 2$ if $n \geq 3$. Hence, of the at least $t+1$ generators of $\mathcal{L}$ that meet $l$, at most one meets $\pi$ in a line, and the at most $\delta$ generators of $\mathcal{L}$ that do not meet $l$ can meet $\pi$ in at most one point. Hence, $\pi$ contains a hole $Q$ not on $l$. At least $t+1$ generators of $\mathcal{L}$ meet $S_P$ in an $(n-2)$-dimensional subspace and meet the line $l$, and at least $t+1$ generators of $\mathcal{L}$ meet $S_Q$ in an $(n-2)$-dimensional subspace. Hence, at least $2(t+1)-|\mathcal{L}| = t+1-\delta$ generators of $\mathcal{L}$ meet both $S_P$ and $S_Q$ in an $(n-2)$-dimensional subspace, and meet the line $l$. Denote by $l_Q$ the projection of the line $l$ from $Q$. Suppose that the projection of $\mathcal{L}$ from $Q$ contains a generator blocking set with a non-trivial vertex $\alpha'$. It is not possible that $l_Q$ is contained in $\alpha'$, since then all elements of $\mathcal{L}$ meeting $S_Q$ in an $(n-2)$-dimensional subspace would meet $\pi$ in a line, a contradiction to $m < 2$. The base of $\mathcal{L}^Q$ is either a parabolic quadric $\mbox{\rm Q}(4,q)$, an elliptic quadric $\mbox{\rm Q}^-(3,q)$, or a hermitian curve $\mbox{\rm H}(2,q^2)$. In the latter two cases, since neither $\mbox{\rm Q}^-(3,q)$ nor $\mbox{\rm H}(2,q^2)$ contains lines, $l_Q$ is not contained in the base of $\mathcal{L}^Q$. Suppose now that the base of $\mathcal{L}^Q$ is a parabolic quadric $\mbox{\rm Q}(4,q)$, and that this base contains the line $l_Q$. The $t+1-\delta$ generators of $\mathcal{L}$ meeting both $S_P$ and $S_Q$ in an $(n-2)$-dimensional subspace all meet $l$.
These $t+1-\delta$ generators are projected on generators of $\mathcal{L}^Q$, meeting the base of $\mathcal{L}^Q$ in a cover. Hence, in the quotient geometry of the vertex of $\mathcal{L}^Q$, $l_Q$ is now a line of $\mbox{\rm Q}(4,q)$ meeting at least $t+1-\delta=q^2+1-\delta$ lines of a cover of $\mbox{\rm Q}(4,q)$, a contradiction with Lemma~\ref{conic}, since $t+1-\delta > (\delta+1)(q+1)$ if $\delta_0 \leq q/2$. We conclude that the line $l_Q$ is neither contained in the vertex of $\mathcal{L}^Q$ nor in the base of $\mathcal{L}^Q$. (This also excludes the possibility that $\mathcal{L}^Q$ has a trivial vertex, which is only possible for $n=3$ and $\mathcal{S}_n = \mbox{\rm Q}^-(7,q)$.) Hence, $l_Q$ is a line meeting $\alpha'$ and the base of $\mathcal{L}^Q$, and there exists a line $l'\neq l$ in $\pi$ connecting $Q$ with a point of $\alpha'$. The $t+1-\delta$ generators of $\mathcal{L}$ meeting both $S_P$ and $S_Q$ in an $(n-2)$-dimensional subspace also meet $l'$ in a point. At most one of these generators meets $\pi$ in a line, so at least $t-\delta$ of these generators are projected from the different points $P$ and $Q$ on generators through a common point; hence, before projection, these $t-\delta$ generators of $\mathcal{L}$ must meet in the common point $X := l \cap l'$. Now consider a hole $R$ not in $X^\perp$. Then $S_R$ meets at least $(t-\delta+t+1)-(t+1+\delta) = t-2\delta$ of the generators on $X$ in an $(n-2)$-subspace. These generators are therefore contained in $T:=\erz{S_R,X}$. Finally, consider a hole $R'$ not in $T$ and not in $X^\perp$. Then at least $t-3\delta > \frac{t}{s}+1+\delta$ of the generators that contain $X$ and are contained in $T$ meet $S_{R'}$ in an $(n-2)$-subspace. These generators lie therefore in $\erz{S_{R'}\cap T,X}$, which has dimension $n+1+e$. Now Lemma~\ref{fivespace} completes the proof. \end{proof} \begin{Co}\label{co:easierones} Theorem~\ref{th:rankn} (c) is true for $\mbox{\rm H}(2n,q^2)$, $n \ge 3$.
\end{Co} \begin{proof} Theorem~\ref{pr:h4q2result} guarantees that the assumption of Lemma~\ref{le:general1} is true for $\mathcal{S}_{n}=\mbox{\rm H}(2n,q^2)$ and $n=3$. Theorem~\ref{th:rankn} (c) then follows from the induction hypothesis. \end{proof} We may now assume that $\mathcal{S}_{n} = \mbox{\rm Q}^-(2n+1,q)$, $n=3$, and that the projection of $\mathcal{L}$ from every hole contains a generator blocking set with a trivial vertex, i.e. a cover of $Q(4,q)$. As $n=3$, then $\mathcal{L}$ is a set of planes. \begin{Le}\label{sixspace} If a hyperplane $T$ contains more than $q+1+3\delta$ elements of $\mathcal{L}$, then $\mathcal{L}$ is one of the two examples in $\mbox{\rm Q}^-(7,q)$ from Table~\ref{tab:ex}. \end{Le} \begin{proof} Denote by $\mathcal{L}'$ the set of the generators of $\mathcal{L}$ that are contained in $T$. If $P$ is a hole outside of $T$, then $S_P$ meets all except at most $\delta$ planes of $\mathcal{L}$ in a line, and hence more than $q+1+2\delta$ of these planes are contained in $T$. Recall that $S_P$ is a cone with vertex $P$ over $S_P\cap T$, and $S_P\cap T$ has dimension 4. Note that $P^\perp \cap \mbox{\rm Q}^-(7,q) = P\mathcal{Q}_5$ with $\mathcal{Q}_5$ an elliptic quadric $\mbox{\rm Q}^-(5,q)$, and we may suppose that $\mathcal{Q}_5 \subseteq T$. Denote by $\mathcal{Q}_4$ the parabolic quadric $\mbox{\rm Q}(4,q)$ contained in $\mathcal{Q}_5$ such that $S_P = P\mathcal{Q}_4$, then $T \cap S_P \cap \mbox{\rm Q}^-(7,q) = \mathcal{Q}_4$. Consider any point $Q \in (\mbox{\rm Q}^-(7,q) \cap P^\perp) \setminus (S_P \cup \mathcal{Q}_5)$. Clearly $W := Q^\perp \cap T \cap S_P$ meets $\mbox{\rm Q}^-(7,q)$ in an elliptic quadric $\mbox{\rm Q}^-(3,q)$. There are $(q^4-q^2)(q-1)$ points like $Q$, and at most $(q^2-q)(q+1)$ of them are covered by elements of $\mathcal{L}$, since we assumed that more than $q+1+3\delta$ elements of $\mathcal{L}$ are contained in $T$. 
So at least $q^5-q^4-2q^3+q^2+q > 0$ points of $(\mbox{\rm Q}^-(7,q) \cap P^\perp) \setminus (S_P \cup \mathcal{Q}_5)$ are holes and have the property that $W:=Q^\perp\cap T\cap S_P$ meets $\mbox{\rm Q}^-(7,q)$ in an elliptic quadric $\mbox{\rm Q}^-(3,q)$. As before, $S_Q\cap T$ has dimension four and meets at least $|\mathcal{L}'|-\delta$ planes of $\mathcal{L}'$ in a line. Then at least $|\mathcal{L}'|-2\delta$ planes of $\mathcal{L}'$ meet $S_P\cap T$ and $S_Q\cap T$ in a line. As $S_P\cap S_Q\cap T\subseteq W$ does not contain singular lines, it follows that these $|\mathcal{L}'|-2\delta$ planes of $\mathcal{L}'$ are contained in the subspace $H:=\erz{S_P\cap T,S_Q\cap T}$. We have $W\cap \mbox{\rm Q}^-(7,q)=\mbox{\rm Q}^-(3,q)$, so in the quotient geometry of $P$, the $|\mathcal{L}'|-2\delta$ planes induce $|\mathcal{L}'|-2\delta$ lines all meeting this $\mbox{\rm Q}^-(3,q)$. Now $\mathcal{L}$ is projected from $P$ on a cover of a parabolic quadric $\mbox{\rm Q}(4,q)$ with at most $q^2+1+\delta$ lines. Then $|\mathcal{L}'|-2\delta$ lines of the cover must meet more than $q+1$ points of this elliptic quadric $\mbox{\rm Q}^-(3,q)$. It follows that $S_Q\cap T$ contains more than $q+1$ points of the elliptic quadric $\mbox{\rm Q}^-(3,q)$ in $W$ and hence $W\subseteq S_Q$. Then $S_P\cap T$ and $S_Q\cap T$ meet in $W$, so the subspace $H$ they generate has dimension five. As $|\mathcal{L}'|-2\delta>q+1+\delta$ planes of $\mathcal{L}$ lie in $H$, Lemma~\ref{fivespace} completes the proof. \end{proof} \begin{Le}\label{le:q7q} Suppose that $\mathcal{L}$ is a minimal generator blocking set of size $t+1+\delta$ of $\mbox{\rm Q}^-(7,q)$, $\delta < \delta_0$. If there exists a hole $P$ that projects $\mathcal{L}$ on a generator blocking set containing a cover of $\mbox{\rm Q}(4,q)$, then $\mathcal{L}$ is one of the examples in Table~\ref{tab:ex}. \end{Le} \begin{proof} Consider a hole $P$. Then $S_P \cap \mbox{\rm Q}^-(7,q) = P\mbox{\rm Q}(4,q)$.
Denote the base of this cone by $\mathcal{Q}_4$. The assumption of the lemma is that $\mathcal{L}^P$ is a minimal cover $\mathcal{C}$ of $\mathcal{Q}_4$. Consider a point $X \in \mathcal{Q}_4$ contained in exactly one line of $\mathcal{C}$. Then $X^\perp \cap \mathcal{Q}_4 = X\mbox{\rm Q}(2,q)$, and each line on $X$ is covered completely, so $X^\perp \cap \mathcal{Q}_4$ meets at least $q^2+1$ lines of $\mathcal{C}$. The lines of $\mathcal{C}$ are projections from $P$ of the intersections of elements of $\mathcal{L}$ with the subspace $S_P$, call $\mathcal{C}'$ this set of intersections that is projected on $\mathcal{C}$. Thus the line $h=PX$ of $S_P$ on $P$ meets exactly one line of $\mathcal{C}$ and $h^\perp \cap S_P \cap \mbox{\rm Q}^-(7,q) = h\mbox{\rm Q}(2,q)$ meets at least $q^2+1$ lines of $\mathcal{C}'$. At most $\delta$ elements of $\mathcal{L}$ are possibly not intersecting $S_P$ in an element of $\mathcal{C}'$, so we find a hole $Q$ on $h$ with $Q\neq P$. There are at least $q^2+1$ elements in $\mathcal{C}'$, so at least $q^2+1-\delta$ elements come from planes $\pi\in\mathcal{L}$ with $\pi\cap Q^\perp\subset S_Q$. For each such element, its intersection with $h\mbox{\rm Q}(2,q)$ lies in $S_Q$. Thus either $S_P\cap S_Q=h^\perp\cap S_P$ or $S_P\cap S_Q$ is a $3$-dimensional subspace of $h^\perp\cap S_P$ that contains a cone $Y\mbox{\rm Q}(2,q)$. In the second case, the vertex $Y$ must be the point $Q$ (as $Q\in S_Q$); but then projecting from $Q$ we see a cover of $\mbox{\rm Q}(4,q)$ containing a conic meeting at least $q^2+1-\delta$ of the lines of the cover. In this situation, Lemma~\ref{conic} gives $q^2+1-\delta\le (\delta+1)(q+1)$, that is $\delta>q-3$, a contradiction. Hence, $S_P\cap S_Q$ has dimension four, so $T=\erz{S_P,S_Q}$ is a hyperplane. At least $q^2$ planes of $\mathcal{L}$ meet $S_P$ in a line that is not contained in $S_P\cap S_Q$. At least $q^2-\delta$ of these also meet $S_Q$ in a line and hence are contained in $T$. 
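We note that the contradiction with Lemma~\ref{conic} obtained above rests on elementary arithmetic:
\[
q^2+1-\delta\le(\delta+1)(q+1)\;\Longleftrightarrow\;q^2-q\le\delta(q+2)\;\Longleftrightarrow\;\delta\ge\frac{q^2-q}{q+2}=q-3+\frac{6}{q+2}>q-3.
\]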
It follows from $\delta < q/2$ that $q^2-\delta > q+1+3\delta$, and then Lemma~\ref{sixspace} completes the proof. \end{proof} \begin{Co}\label{co:harderone} Theorem~\ref{th:rankn} (b) is true for $\mbox{\rm Q}^-(2n+1,q)$, $n \ge 3$. \end{Co} \begin{proof} Theorem~\ref{pr:q5qresult} guarantees that for $\mathcal{S}_{n}=\mbox{\rm Q}^-(7,q)$ and $n=3$, the assumption of either Lemma~\ref{le:general1} or Lemma~\ref{le:q7q} is true. Hence, Theorem~\ref{th:rankn} (b) follows for $n=3$. But then the assumption of Lemma~\ref{le:general1} is true for $\mathcal{S}_{n}=\mbox{\rm Q}^-(2n+1,q)$ and $n=4$, and then Theorem~\ref{th:rankn} (b) follows from the induction hypothesis. \end{proof} \subsection{The polar space $\mbox{\rm Q}(2n,q)$} This subsection is devoted to the proof of Theorem~\ref{th:rankn}~(a). Lemma~\ref{le:general1} can also be translated to this case, but only for a bad upper bound on $\delta$. Therefore we treat the polar space $\mbox{\rm Q}(2n,q)$ separately. Recall that for $\mbox{\rm Q}(2n,q)$, $\delta_0 = \mathrm{min}\{\frac{q-1}{2}, \epsilon\}$, with $\epsilon$ such that $q+1+\epsilon$ is the size of the smallest non-trivial blocking set of $\mbox{\rm PG}(2,q)$. We suppose that $\mathcal{L}$ is a generator blocking set of $\mbox{\rm Q}(2n,q)$, $n \geq 3$, of size $q+1+\delta$, $\delta < \delta_0$. Recall that $\mathcal{L}^R$ is the minimal generator blocking set of $\mbox{\rm Q}(2n-2,q)$ contained in the projection of $\mathcal{L}$ from a hole $R$. So when $n=3$, it is possible that $\mathcal{L}^R$ is a generator blocking set of $\mbox{\rm Q}(4,q)$ with a trivial vertex. For the Lemmas~\ref{le:blocking}, \ref{le:proj:regulus}, and~\ref{le:q6qregulus}, the assumption is that $n=3$, and that for any hole $R$, $\mathcal{L}^R$ has a trivial vertex, i.e. $\mathcal{L}^R$ is a regulus. So let $R$ be a hole such that $\mathcal{L}^R$ is a regulus. 
Let $g_{i}$, $i=1,\ldots,q+1+\delta$, be the elements of $\mathcal{L}$ and denote by $l_i$ the intersection $R^\perp\cap g_i$. At least $q+1$ of the lines $l_i$ are projected on the lines of the regulus $\mathcal{L}^R$. We denote the $q+1$ lines of the regulus $\mathcal{L}^R$ by $\tilde{l}_i$, $i=1,\ldots,q+1$. The lines of the opposite regulus of $\mathcal{L}^R$ are denoted by $\tilde{m}_i$, $i=1,\ldots,q+1$. \begin{Le}\label{le:blocking} Suppose that $\tilde{m}_j$ is a line of the opposite regulus and that $B_j$ is the set of points that are the intersection of the lines $l_i$ with $\erz{R,\tilde{m}_j}$. Then $B_j$ contains a line. \end{Le} \begin{proof} Since at least $q+1$ lines $l_i$ must meet $\erz{R,\tilde{m}_j}$ in a point, $|B_j| \geq q+1$. We show that $B_j$ is a blocking set in $\erz{R,\tilde{m}_j}$. Assume that a line $k$ in $\erz{R,\tilde{m}_j}$ is disjoint from $B_j$ and take a point $R'$ on $k$, then $R'$ is a hole. By the assumption made before this lemma, $\mathcal{L}^{R'}$ is also a generator blocking set with a trivial vertex, i.e. a regulus $\mathcal{R}'$. Consider now the plane $\pi := \langle R, k \rangle$. The plane $\pi$ is contained in $S_R$. If the plane $\pi$ is also contained in $S_{R'}$, then it is projected from $R'$ on a line of $\mathcal{R}'$ or of the opposite regulus of $\mathcal{R}'$; in both cases it is projected on a covered point of $\mathcal{R}'$, and hence the line $k$ must contain an element of $B_j$, a contradiction. So the plane $\pi$ is not contained in $S_{R'}$. There are at least $q+1$ elements of $\mathcal{L}$ that meet $S_{R'}$ in a line; such a line is projected from $R'$ on a line of $\mathcal{R}'$. No two lines that are projected on two different lines of $\mathcal{R}'$ can meet $\pi$ in the same point.
Hence, of the at least $q+1$ elements of $\mathcal{L}$ that are projected from $R'$ on $\mathcal{R}'$, at most one can meet $\pi$ in a point, since otherwise $\pi$ is projected from $R'$ on a line of the opposite regulus of $\mathcal{R}'$, but then the plane $\pi$ would be contained in $S_{R'}$. But then at most $\delta+1$ elements of $\mathcal{L}$ can meet $\pi$ in a point, a contradiction with $|B_j| \geq q+1$. \end{proof} We denote the line contained in the set $B_j$ by $m_j$, and so $m_j$ is projected from $R$ on $\tilde{m}_j$. Now we consider again the hole $R$ and the regulus $\mathcal{L}^R$. \begin{Le}\label{le:proj:regulus} The generator blocking set $\mathcal{L}^R$ arises as the projection from $R$ of a regulus, of which the lines are contained in the elements of $\mathcal{L}$. \end{Le} \begin{proof} An element $g_i \in \mathcal{L}$ that is projected from $R$ on the line $\tilde{m}_j$ must meet the plane $\langle R,\tilde{m}_j \rangle$ in a line. But an element $g_i \in \mathcal{L}$ cannot meet a plane $\langle R,\tilde{l}_i\rangle$ and a plane $\langle R, \tilde{m}_j \rangle$ in a line, since then $g_i$ would be a generator of $\mbox{\rm Q}(6,q)$ contained in $R^\perp$ not containing $R$, a contradiction. So at most $\delta$ elements of $\mathcal{L}$ meet $S_R$ in a line that is projected on a line $\tilde{m}_j$. Hence, at least $q+1-\delta$ planes $\erz{R,\tilde{m}_j}$ do not contain a line $l_i$, so, by Lemma~\ref{le:blocking}, there are at least $q+1-\delta$ lines $m_j \subseteq B_j$ not coming from the intersection of an element of $\mathcal{L}$ and $S_R$, that are projected on a line of the opposite regulus of $\mathcal{L}^R$. Number these $n \ge q+1-\delta$ lines from $1$ to $n$. Suppose that $l_1,l_2,\ldots,l_{q+1}$ are transversal to $m_1$. Since $\delta\le \frac{q-1}{2}$, a second transversal $m_2$ has at least $\frac{q+3}{2}$ common transversals with $m_1$. 
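One way to see this count is by inclusion-exclusion: each of $m_1$ and $m_2$ is transversal to at least $q+1$ of the at most $q+1+\delta$ lines $l_i$, so the number of their common transversals among the $l_i$ is at least
\[
(q+1)+(q+1)-(q+1+\delta)=q+1-\delta\ge q+1-\frac{q-1}{2}=\frac{q+3}{2}.
\]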
So we find lines $l_1,\ldots,l_{\frac{q+3}{2}}$ lying in the same 3-space $\erz{m_1,m_2}$. A third line $m_j$, $j \neq 1,2$, has at least 2 common transversals with $m_1$ and $m_2$, so all transversals $m_j$ lie in $\erz{m_1,m_2}$. Suppose that we find at most $q$ lines $l_1,\ldots,l_q$ which are transversal to $m_1,\ldots,m_{q+1-\delta}$. Then $q+1-\delta$ remaining points on the lines $m_j$ must be covered by the $\delta+1$ remaining lines $l_i$, so $\delta +1 \geq q+1-\delta$, a contradiction with the assumption on $\delta$. So we find a regulus of lines $l_1,\ldots,l_{q+1}$ that is projected on $\mathcal{L}^R$ from $R$. \end{proof} \begin{Le}\label{le:q6qregulus} The set $\mathcal{L}$ contains $q+1$ generators through a point $P$, which are projected from $P$ on a regulus. \end{Le} \begin{proof} Consider the hole $R$. By Lemma~\ref{le:proj:regulus}, $R^\perp$ contains a regulus $\mathcal{R}_1$ of $q+1$ lines $l_i$ contained in planes of $\mathcal{L}$. Denote the $3$-dimensional space containing $\mathcal{R}_1$ by $\pi_3$. Consider any hole $R' \in \mbox{\rm Q}(6,q) \setminus \pi_3^\perp$. By the assumption made before Lemma~\ref{le:blocking} and Lemma~\ref{le:proj:regulus}, $R'$ gives rise to a regulus $\mathcal{R}_2$ of $q+1$ lines contained in planes of $\mathcal{L}$. Since $R' \in \mbox{\rm Q}(6,q) \setminus \pi_3^\perp$, $\mathcal{R}_1 \neq \mathcal{R}_2$. Hence, at least $\frac{q+3}{2}$ planes of $\mathcal{L}$ contain a line of both $\mathcal{R}_1$ and $\mathcal{R}_2$ and in at most one plane, the reguli $\mathcal{R}_1$ and $\mathcal{R}_2$ can share the same line. The reguli $\mathcal{R}_1$ and $\mathcal{R}_2$ define a $4$- or $5$-dimensional space $\Pi$. If $\Pi$ is $4$-dimensional, then $\Pi \cap \mbox{\rm Q}(6,q) = \erz{P,\mathcal{Q}}$, for some point $P$ and some hyperbolic quadric $\mbox{\rm Q}^+(3,q)$, denoted by $\mathcal{Q}$. For $\mathcal{Q}$ we may choose the hyperbolic quadric containing $\mathcal{R}_1$. 
There are at least $\frac{q+1}{2}$ planes of $\mbox{\rm Q}(6,q)$, completely contained in $\Pi$, containing a line of $\mathcal{R}_1$ and a different line of $\mathcal{R}_2$. These planes are necessarily planes of $\mathcal{L}$. Consider now a plane $\pi_2$ of $\mbox{\rm Q}(6,q)$, completely contained in $\Pi$, only containing a line of $\mathcal{R}_1$ and not containing a different line of $\mathcal{R}_2$. If $\pi_2$ is not a plane of $\mathcal{L}$, it contains a hole $Q$. Then $Q^\perp$ intersects the at least $\frac{q+1}{2}$ planes of $\mathcal{L}$ on $P$ in a line, and the projection of these at least $\frac{q+1}{2}$ lines from $Q$ is one line $l$. If this line $l$ belongs to $\mathcal{L}^Q$, then at least $q$ more elements of $\mathcal{L}$ are projected from $Q$ on the $q$ other elements of $\mathcal{L}^Q$, hence, $q+\frac{q+1}{2} \le q+1+\delta$, a contradiction with $\delta < \frac{q-1}{2}$. Hence, $\pi_2$ is a plane of $\mathcal{L}$, and $\mathcal{L}$ contains $q+1$ generators of $\mbox{\rm Q}(6,q)$ through $P$, which are projected from $P$ on a regulus. If $\Pi$ is $5$-dimensional, then its intersection with $\mbox{\rm Q}(6,q)$ is a cone $P\mathcal{Q}$, $\mathcal{Q}$ a parabolic quadric $\mbox{\rm Q}(4,q)$, or a hyperbolic quadric $\mbox{\rm Q}^+(5,q)$. If $\Pi \cap \mbox{\rm Q}(6,q) = P\mbox{\rm Q}(4,q)$, then the base $\mathcal{Q}$ can be chosen in such a way that $\mathcal{R}_1 \subset \mathcal{Q}$. But then the same arguments as in the case that $\Pi$ is $4$-dimensional apply, and the lemma follows. So assume that $\Pi \cap \mbox{\rm Q}(6,q) = \mbox{\rm Q}^+(5,q)$. Consider again the at least $\frac{q+1}{2}$ planes $\pi^1,\ldots,\pi^n$ of $\mathcal{L}$ containing a line of $\mathcal{R}_1$ and a different line of $\mathcal{R}_2$. Then half of these planes lie in the same equivalence class and so intersect mutually in a point.
We can assume that the two planes $\pi^1$ and $\pi^2$ intersect in a point $P$, hence, $\langle \pi^1,\pi^2\rangle$ is a $4$-dimensional space necessarily intersecting $\mbox{\rm Q}(6,q)$ in a cone $P\mathcal{Q}$, $\mathcal{Q}$ a hyperbolic quadric $\mbox{\rm Q}^+(3,q)$. Clearly, since two different lines of $\mathcal{R}_1$ span $\langle \mathcal{R}_1 \rangle$ (and two different lines of $\mathcal{R}_2$ span $\langle \mathcal{R}_2 \rangle$), the reguli $\mathcal{R}_1, \mathcal{R}_2 \subseteq \langle \pi^1,\pi^2\rangle$. But since the planes $\pi^3,\ldots,\pi^n$ contain a different line from $\mathcal{R}_1$ and $\mathcal{R}_2$, these at least $\frac{q+1}{2}$ planes of $\mathcal{L}$ are completely contained in $\langle \pi^1,\pi^2 \rangle$. But then again the same arguments as in the case that $\Pi$ is $4$-dimensional apply, and the lemma follows. \end{proof} From now on we assume that $n \geq 3$, and that there exists a hole $R$ such that $\mathcal{L}^R$ has a non-trivial vertex $\alpha$. This means that also for $n=3$, this vertex is non-trivial. This assumption will be in use for Lemmas~\ref{le:nicepoint}, \ref{le:bigvertex}, \ref{le:special_generator}, \ref{le:anja_argument}, and Corollary~\ref{co:nice_in_vertex}. Remark that the induction hypothesis is also used. We will call the $(n-2)$-dimensional subspace $\langle R,\alpha\rangle$ the {\em vertex of $S_R$}. A {\em nice point} is a point that lies in at least $q-\delta$ elements of $\mathcal{L}$. In the next lemma, for $X$ a hole, we denote by $\bar{\mathcal{L}}^X$ the set of generators of $\mathcal{L}$ that are projected from $X$ on the elements of $\mathcal{L}^X$. Hence, the generators of $\bar{\mathcal{L}}^X$ intersect $X^\perp$ in $(n-2)$-dimensional subspaces. \begin{Le}\label{le:nicepoint} Call $\alpha$ the vertex of $\mathcal{L}^R$. Then there exists a nice point $N$ on every line through $R$ meeting $\alpha$.
\end{Le} \begin{proof} Let $l$ be a line on $R$ projecting to a point of $\alpha$, and consider the planes of $\mbox{\rm Q}(2n,q)$ on $l$. Consider any generator $g \in \mathcal{L}$. Suppose that $g$ meets two planes $\pi^1$ and $\pi^2$ on $l$ in a line different from $l$. Then in the quotient geometry of $l$, i.e. $l^\perp \cap \mbox{\rm Q}(2n,q) = \mbox{\rm Q}(2n-4,q)$, the two planes $\pi^1$ and $\pi^2$ are two points contained in the generator $l^\perp \cap g$, which is an $(n-3)$-dimensional subspace. Hence, any generator $g \in \mathcal{L}$ meets at most $\theta_{n-3}$ planes through $l$ in a line different from $l$. If $g$ meets two planes $\pi^1$ and $\pi^2$ on $l$ in only one point not on $l$, then in the quotient geometry of $l$, the two planes $\pi^1$ and $\pi^2$ are again two points contained in the generator $l^\perp \cap g$. Hence, any generator $g \in \mathcal{L}$ meets at most $\theta_{n-3}$ planes through $l$ in exactly one point not on $l$. Finally, if a generator $g \in \mathcal{L}$ meets a plane $\pi^1$ in a line different from $l$ and a plane $\pi^2$ in a point not on $l$, then $g$ meets also $\pi^2$ in a line different from $l$, since by the assumption, $g$ also contains a point of $l$. Hence, for $g\in\mathcal{L}$, $l\not\subseteq g$ implies that $g$ can meet at most $\theta_{n-3}$ of these planes in one or more points outside of $l$. As $l$ lies in $\theta_{2n-5}\ge \theta_{n-3}(q+1)>\frac12|{\cal L}|\theta_{n-3}$ planes of $\mbox{\rm Q}(2n,q)$, we can choose a plane $\pi$ on $l$ such that at most one generator of $\cal L$ meets $\pi$ in a line different from $l$ or in exactly one point of $\pi\setminus l$. Let $Q\in\pi\setminus l$ be on no generator of $\cal L$. Also, if there is a generator in $\cal L$ meeting $\pi\setminus l$ in a single point $T$, then choose $Q$ in such a way that this point $T$ does not lie on the line $QR$. 
If the generator blocking set ${\cal L}^Q$ in the quotient of $Q$ has a non-trivial vertex, then $\pi$ is not a plane of this vertex, since otherwise all the generators of $\bar{\cal L}^Q$ would meet $\pi$ in a line different from $l$, but this is a contradiction with the choice of $\pi$. Since $\bar{\cal L}^Q$ and $\bar{\cal L}^R$ share at least $q+1-\delta$ generators, then $q+1-\delta$ generators of $\bar{\cal L}^Q$ meet $l$, and at most one of these contains a point of $\pi\setminus l$. Hence, we find $q-\delta$ generators in $\bar{\cal L}^Q \cap \bar{\cal L}^R$, each of them meeting $\pi$ in one point, which is on $l$. If the generators of $\bar{\cal L}^Q$ are projected from $Q$ on a generator blocking set with an $(n-3)$-dimensional vertex (and base a conic $\mbox{\rm Q}(2,q)$), then points in different generators of ${\cal L}^Q$ are collinear only if they are in the vertex of the cone. But the points of the $q-\delta$ generators on $l$ are collinear after projection from $Q$. Hence, if two points of these $q-\delta$ generators on $l$ are different, then $l$ is projected from $Q$ on a line of the vertex of $\mathcal{L}^Q$, so $\pi$ is a plane in the vertex of $S_Q$, a contradiction. So the $q-\delta$ generators meeting $l$ in a point all meet $l$ in the same point $X$, and we are done. Now assume that the generators of $\bar{\cal L}^Q$ are projected from $Q$ on a generator blocking set with an $(n-4)$-dimensional vertex, and base a regulus $\mathcal{R}$. Assume that $l$ has no nice point, then at least two of the $q-\delta$ generators do not meet $l$ in a common point. Then $l$ is skew to the vertex of the cone, since otherwise all the generators of $\bar{\cal L}^Q$ would meet $\pi$ in a line different from $l$, but this is a contradiction with the choice of $\pi$. Hence, $l$ is projected from the vertex of $S_Q$ on a line of the regulus $\mathcal{R}$ or on a line of the opposite regulus $\mathcal{R}'$. 
But a line of $\mathcal{R}$ meets exactly one line of ${\cal L}^Q$, so $l$ must be projected from the vertex of $S_Q$ on a line of the opposite regulus $\mathcal{R}'$. This means that each line of $\pi$ on $Q$ is met by a generator of $\bar{\cal L}^Q$ in a single point. This applies to the line $QR$, so some generator of $\cal L$ meets $\pi$ in a point, which lies on the line $QR$. This is a contradiction with the choice of $Q$ inside $\pi$. \end{proof} \begin{Co}\label{co:nice_in_vertex} If $R$ is a hole and $N\in R^\perp$ a nice point, then $N$ lies in the vertex of $S_R$. \end{Co} \begin{proof} A nice point lies in at least $q-\delta$ generators of $\mathcal{L}$ and at least $q-2\delta \geq 2$ of these must belong to $\mathcal{L}^R$. As two elements of $\mathcal{L}^R$ necessarily meet in a point of the vertex of $S_R$, the assertion follows. \end{proof} \begin{Le}\label{le:bigvertex} Let $n \ge 4$. If $\beta$ denotes the subspace generated by all nice points, then $\dim(\beta)\ge n-3$. \end{Le} \begin{proof} Suppose that $R$ is a hole. If $n \ge 4$, then by the induction hypothesis, the vertex of $\mathcal{L}^R$ has dimension at least $n-4$. Hence, using Lemma~\ref{le:nicepoint}, the nice points generate a subspace $\gamma$ of dimension at least $n-4$. Suppose that $\dim(\gamma)=n-4$, then $\dim(\gamma^\perp)=n+3 < 2n$, and so we find a hole $P \not \in \gamma^\perp$. Consider this hole $P$, then the same argument gives us a subspace $\gamma'$ spanned by nice points in $P^\perp$ of dimension at least $n-4$, different from $\gamma$. So $\dim(\beta)\ge n-3$. \end{proof} \begin{Le}\label{le:special_generator} There exists a hole $R$ and a generator $g$ on the vertex of $S_R$ such that $g$ meets exactly one element of $\mathcal{L}$ in an $(n-2)$-dimensional subspace and such that all other elements of $\mathcal{L}$ do not meet $g$ or meet $g$ only in points of the vertex of $S_R$. \end{Le} \begin{proof} First let $n=3$.
By the assumption, there exists a hole $R$ such that $\mathcal{L}^R$ has a non-trivial vertex, which is a point $X$. So the vertex of $S_R$ is the line $RX$ and has dimension $n-2$. Now let $n\geq 4$. By Lemma~\ref{le:bigvertex}, we find a subspace $\gamma$ of dimension $n-3$ spanned by nice points. Consider a hole $R \in \gamma^\perp$. Clearly, the vertex of $S_R$ will be spanned by the projection of $\gamma$ from $R$ and $R$, so has dimension $n-2$. So for $n\geq 3$, we always find a hole $R$ such that the vertex $V$ of $S_R$ has dimension $n-2$, and $V = \langle R,\pi_{n-3}\rangle$, $\pi_{n-3}$ the vertex of $\mathcal{L}^R$. As $\mathcal{L}^R$ consists of the $q+1$ generators of a cone $\alpha \mbox{\rm Q}(2,q)$, points in different elements of $\mathcal{L}^R$ are collinear only when they are contained in $\pi_{n-3}$. So the projection from $R$ of any $(n-2)$-dimensional intersection $\pi_i$ of an element of $\mathcal{L}$ and $S_R$ meets at most one element of $\mathcal{L}^R$ outside of the vertex $\pi_{n-3}$. Hence, before projection, no element of $\mathcal{L}$ meets two generators of $\mbox{\rm Q}(2n,q)$ on $V$ in points outside of $V$. Also, at least $q+1$ elements of $\mathcal{L}$ meet $S_R$ in an $(n-2)$-dimensional subspace that is projected from $R$ on an element of $\mathcal{L}^R$. So at most $\delta$ elements of $\mathcal{L}$ can meet a generator on $V$ in points outside of $V$, and thus we find a generator of $\mbox{\rm Q}(2n,q)$ on $V$ only meeting elements of $\mathcal{L}$ in points of $V$. \end{proof} \begin{Le}\label{le:anja_argument} Let $n\geq 3$. There exists an $(n-3)$-dimensional subspace contained in at least $q$ elements of $\mathcal{L}$. \end{Le} \begin{proof} Consider the special hole $R$ from Lemma~\ref{le:special_generator}. Call again $V=\erz{R,\pi_{n-3}}$ the vertex of $S_R$, with $\pi_{n-3}$ the vertex of $\mathcal{L}^R$. Denote the elements of $\mathcal{L}$ intersecting $S_R$ in an $(n-2)$-dimensional subspace by $g_i$.
By Lemma~\ref{le:special_generator}, we find a generator $g$ on $V$ intersected by a unique element $g_1$ of $\mathcal{L}$ in an $(n-2)$-dimensional subspace, and intersected by further elements $g_i$ of $\mathcal{L}$ in at most $(n-3)$-dimensional subspaces contained in $V$. So we find a hole $Q \neq R$, $Q \in g \setminus V$. Clearly, at least $q-\delta$ elements of $\mathcal{L}$ that meet $S_R$ in an $(n-2)$-dimensional subspace, also meet $S_Q$ in an $(n-2)$-dimensional subspace and are projected on elements of $\mathcal{L}^Q$. Consider now the hole $Q$, and suppose that $\mathcal{L}^Q$ is a cone $\pi_{n-4}\mathcal{R}$, $\mathcal{R}$ a regulus. The generator $g_1$ is projected from $Q$ on a subspace $\tilde{g}_1$ not in $\mathcal{L}^Q$, since $\tilde{g}_1$ meets at least $q-\delta$ of the projected spaces $g_i$, $i \neq 1$, in an $(n-3)$-dimensional space, which has larger dimension than the vertex of $\mathcal{L}^Q$. But $\tilde{g}_1$ lies in $\pi_{n-4}\mathcal{R}$, since it intersects at least $q-\delta$ spaces $g_i$ in an $(n-3)$-dimensional subspace. Hence, $\tilde{g}_1$ meets the $q+1$ elements of $\mathcal{L}^Q$ in different $(n-3)$-spaces and is completely covered. So the projection of $R$ from $Q$ is covered by elements of $\mathcal{L}^Q$, and hence, the line $l=\langle R,Q\rangle$ must meet an element of $\mathcal{L} \setminus\{g_1\}$, a contradiction. So $\mathcal{L}^Q$ is a cone $\pi'_{n-3}\mbox{\rm Q}(2,q)$. It follows that $\tilde{g}_1 \in \mathcal{L}^Q$, so $\pi'_{n-3} \subset \tilde{g}_1$, and $g_1$ and $V$ are projected from $Q$ on $\tilde{g}_1$. Before projection from $R$, the elements $g_i$ meet $V$ in $(n-3)$-dimensional subspaces contained in $V$. The subspace $\pi'_{n-3}$ lies in the projection from $Q$ of elements of $\mathcal{L}$ meeting $\langle \pi'_{n-3},Q\rangle$ in an $(n-3)$-dimensional subspace.
But the choice of $g$ implies that there is a unique element of $\mathcal{L}$ meeting $\langle \pi'_{n-3},Q\rangle$ in an $(n-3)$-dimensional subspace and in points outside of $V$ (the element meeting $g$ in $g_1$), so at least $q$ other elements of $\mathcal{L}$ intersect $V$ in the same $(n-3)$-dimensional subspace. \end{proof} The following lemma in fact summarizes Lemmas~\ref{le:nicepoint}, \ref{le:bigvertex}, \ref{le:special_generator}, \ref{le:anja_argument}, and Corollary~\ref{co:nice_in_vertex}. The condition on $\delta$ enables the use of the induction hypothesis. \begin{Le}\label{le:general3} Let $n \geq 3$. Suppose that $\mathcal{L}$ is a minimal generator blocking set of size $q+1+\delta$ of $\mbox{\rm Q}(2n,q)$, $\delta \leq \delta_0$. If there exists a hole $R$ that projects $\mathcal{L}$ on a generator blocking set containing a minimal generator blocking set of $\mbox{\rm Q}(2n-2,q)$ that has a non-trivial vertex, then $\mathcal{L}$ is a generator blocking set of $\mbox{\rm Q}(2n,q)$ listed in Table~\ref{tab:ex}. \end{Le} \begin{proof} By Lemma~\ref{le:anja_argument}, we can find an $(n-3)$-dimensional subspace $\alpha$ of $\mbox{\rm Q}(2n,q)$ that is contained in at least $q$ elements of $\mathcal{L}$. Consider now a hole $H \not \in \alpha^\perp$. Then $H^\perp \cap \alpha^\perp$ is an $(n+1)$-dimensional space containing at least $q-\delta$ intersections of $H^\perp$ with elements of $\mathcal{L}$ on $\alpha$ through the $(n-4)$-dimensional subspace $H^\perp \cap \alpha$. Since $S_H$ is $(n+1)$-dimensional, these $q-\delta$ $(n-2)$-dimensional subspaces lie in the $n$-dimensional space $S_H \cap \alpha^\perp$. Hence, we find in the $(n+1)$-dimensional space $\langle \alpha,S_H\cap \alpha^\perp \rangle$ at least $q-\delta > \delta +2$ elements of $\mathcal{L}$. Lemma~\ref{fivespace} assures that $\mathcal{L}$ is one of the generator blocking sets of $\mbox{\rm Q}(2n,q)$ listed in Table~\ref{tab:ex}.
\end{proof} Finally, we can prove Theorem~\ref{th:rankn} (a). \begin{Le}\label{co:q2nhigh} Theorem~\ref{th:rankn} (a) is true for $\mbox{\rm Q}(2n,q)$, $n\geq 3$. \end{Le} \begin{proof} Proposition~\ref{pro:q4q} assures that the assumptions of either Lemma~\ref{le:q6qregulus} or Lemma~\ref{le:general3} with $n=3$ are true. Hence, Theorem~\ref{th:rankn} (a) follows for $n=3$. But then the assumption of Lemma~\ref{le:general3} is true for $\mbox{\rm Q}(2n,q)$ and $n=4$, and then Theorem~\ref{th:rankn} (a) follows by induction. \end{proof} \section{Remarks} We already mentioned that a maximal partial spread is in fact a special generator blocking set. The results of Theorem~\ref{th:rankn} imply an improvement of the lower bound on the size of maximal partial spreads in the polar spaces $\mbox{\rm Q}^-(2n+1,q)$, $\mbox{\rm Q}(2n,q)$, and $\mbox{\rm H}(2n,q^2)$ when the rank is at least $3$. In Table~\ref{tab:spreads}, we summarize the known lower bounds on the size of small maximal partial spreads of polar spaces. The results for $\mbox{\rm Q}^+(2n+1,q)$, $\mbox{\rm W}(2n+1,q)$ and $\mbox{\rm H}(2n+1,q^2)$ are proved in \cite{KMS}.
\begin{table} \begin{center} \begin{tabular}{|c|l|} \hline Polar space & Lower bound\\ \hline $\mbox{\rm Q}^-(2n+1,q)$ & $n\geq 3: q^2+\frac{1}{2}(3q-\sqrt{5q^2+2q+1})$\\ \hline $\mbox{\rm Q}^+(4n+3,q)$ & $n \geq 1$, $q\geq 7: 2q+1$\\ \hline $\mbox{\rm Q}(2n,q)$ & $n\geq 3: q+1+\delta_0$, with $\delta_0=\mathrm{min}\{\frac{q-1}{2}, \epsilon\}$, \\ & $\epsilon$ such that $q+1+\epsilon$ is the size of the smallest non-trivial \\ & blocking set in $\mbox{\rm PG}(2,q)$.\\ \hline $\mbox{\rm W}(2n+1,q)$ & $n\geq 2$, $q\geq 5: 2q+1$\\ \hline $\mbox{\rm H}(2n,q^2)$ & $n\geq 3: q^3+q-2$\\ \hline $\mbox{\rm H}(2n+1,q^2)$ & $q\geq 13$ and $n\geq 2: 2q+3$\\ \hline \end{tabular} \caption{Bounds on the size of small maximal partial spreads}\label{tab:spreads} \end{center} \end{table} One can wonder what happens with generator blocking sets of the polar spaces $\mbox{\rm Q}^+(2n+1,q)$, $\mbox{\rm W}(2n+1,q)$, $q$ odd, and $\mbox{\rm H}(2n+1,q^2)$. Unfortunately, the approach presented in Section~\ref{sec:rank2} fails for these polar spaces, which makes the overall approach of this paper unusable for them in higher rank. In \cite{BSS:2010}, an overview of the size of the smallest non-trivial blocking sets of $\mbox{\rm PG}(2,q)$ is given. When $q$ is a prime, then $\epsilon = \frac{q+1}{2}$. So when $q$ is a prime, the condition on $\delta$ in the case of generator blocking sets of $\mbox{\rm Q}(2n,q)$, $n \ge 3$, drops to $\delta < \frac{q-1}{2}$. \section*{Acknowledgements} The research of the first author was also supported by a postdoctoral research contract on the research project {\em Incidence Geometry} of the Special Research Fund ({\em Bijzonder Onderzoeksfonds}) of Ghent University. The research of the second author was supported by a research grant of the Research Council of Ghent University.
The first and second author thank the Research Foundation Flanders (Belgium) (FWO) for a travel grant and thank Klaus Metsch for his hospitality during their stay at the Mathematisches Institut of the Universit\"at Gie\ss{}en.
\section{Introduction} There has been a surging interest in applying machine learning techniques to the control of dynamical systems with continuous action spaces (see e.g.,~\cite{duan2016benchmarking, recht2019tour}). An increasing body of recent studies has started to address theoretical and practical aspects of deploying learning-based control policies in dynamical systems~\citep{recht2019tour}. An extended review of related work is given in \preprintswitch{Appendix~A of \cite{ARXIV}}{\cref{app:related_work}}\preprintswitch{\footnote{The report \cite{ARXIV} also contains technical proofs and complementary discussion.}}{}. For data-driven reinforcement learning (RL) control, the existing algorithmic frameworks can be broadly divided into two categories: (a) model-based RL, in which an agent first fits a model for the system dynamics from observed data and then uses this model to design a policy using either the certainty equivalence principle~\citep{aastrom2013adaptive,tu2018gap,mania2019certainty} or classical robust control tools~\citep{zhou1996robust, dean2019sample,tu2017non}; and (b) model-free RL, in which the agent attempts to learn an optimal policy directly from the data without explicitly building a model for the system~\citep{fazel2018global, malik2018derivative,furieri2019learning}. Another interesting line of work formulates model-free control by using past trajectories to predict future trajectories based on the so-called \emph{fundamental lemma}~\citep{coulson2019data, berberich2019robust,de2019formulas}. For both model-based and model-free methods, it is critical to establish formal guarantees on their sample efficiency, stability and robustness.
Recently, the Linear Quadratic Regulator (LQR), one of the most well-studied optimal control problems, has been adopted as a benchmark to understand how machine learning interacts with continuous control~\citep{dean2019sample,tu2018gap,mania2019certainty, fazel2018global, recht2019tour,dean2018regret,malik2018derivative}. It was shown that the simple certainty equivalent model-based method requires asymptotically less samples than model-free policy gradient methods for LQR~\citep{tu2018gap}. Besides, the certainty equivalent control~\citep{mania2019certainty} (scaling as ${\mathcal{O}}(N^{-1})$, where $N$ is the number of samples) is more sample-efficient than robust model-based methods (scaling as ${\mathcal{O}}(N^{-1/2})$) that account for uncertainty explicitly~\citep{dean2019sample}. In this paper, we take a step further towards a theoretical understanding of model-based learning methods for Linear Quadratic Gaussian (LQG) control. As one of the most fundamental control problems, LQG deals with partially observed linear dynamical systems driven by additive white Gaussian noises~\citep{zhou1996robust}. As a significant challenge compared to LQR, the internal system states cannot be directly measured for learning and control purposes. When the system model is known, LQG admits an elegant closed-form solution, combining a Kalman filter together with an LQR feedback gain~\citep{bertsekas2011dynamic,zhou1996robust}. For unknown dynamics, however, much fewer results are available for the achievable closed-loop performance. One natural solution is the aforementioned \emph{certainty equivalence principle}: collect some data of the system evolution, fit a model, and then solve the original LQG problem by treating the fitted model as the truth~\citep{aastrom2013adaptive}. 
It has been recently proved in~\cite{mania2019certainty} that the certainty equivalence principle enjoys a good statistical rate for the sub-optimality gap, which scales as the \emph{square} of the model estimation error. However, this procedure does not come with a robust stability guarantee, and it might fail to stabilize the system when the number of samples is not sufficiently large. The sample complexity of Kalman filtering has also been recently characterized in \cite{tsiamis2020sample}. Leveraging recent advances in control synthesis~\citep{furieri2019input} and non-asymptotic system identification~\citep{oymak2019non,tu2017non,sarkar2019finite,zheng2020non}, we establish {an end-to-end sample-complexity result} for learning LQG controllers that robustly stabilize the true system with high probability. In particular, our contribution is a novel tractable robust control synthesis procedure, whose sub-optimality can be tightly bounded as a function of the model uncertainty. By incorporating a non-asymptotic $\mathcal{H}_\infty$ bound on the system estimation error, we establish an end-to-end sample complexity bound for learning robust LQG controllers. \cite{dean2019sample} performed a similar analysis for learning LQR controllers with full state measurements. In contrast, our method handles noisy output measurements without reconstructing an internal state-space representation of the system. Despite the challenge of hidden states, for open-loop stable systems, our method achieves the same scaling for the sub-optimality gap as~\cite{dean2019sample}, that is, $\mathcal{O}\left(\epsilon\right)$, where $\epsilon$ is the model uncertainty level. Specifically, the highlights of our work include: \begin{itemize}[leftmargin=*] \setlength\itemsep{0em} \item Our design methodology is suitable for general multiple-input multiple-output (MIMO) LTI systems that are open-loop stable.
Based on a recent control tool, called Input-Output Parameterization (IOP)~\citep{furieri2019input}, we derive a new convex parameterization of robustly stabilizing controllers. Any feasible solution from our procedure corresponds to a controller that is robust against model uncertainty. Our framework directly targets a class of general LQG problems, going beyond the recent results~\citep{dean2019sample,boczar2018finite} that are built on the system-level parameterization (SLP)~\citep{wang2019system}. \item We quantify the performance degradation of the robust LQG controller, scaling \emph{linearly} with the model error, which is consistent with~\cite{dean2019sample,boczar2018finite}. Our analysis requires a few involved bounding arguments in the IOP framework~\citep{furieri2019input} due to the absence of direct state measurements. We note that this linear scaling is inferior to that of the simple certainty equivalence controller~\citep{mania2019certainty}, for which the performance degradation scales as the \emph{square} of the parameter errors for both LQR and LQG, but which comes without guarantees on robust stability against model errors. This brings an interesting trade-off between optimality and robustness, which is also observed in the LQR case~\citep{dean2019sample}. \vspace{-3pt} \end{itemize} The rest of this paper is organized as follows. We introduce Linear Quadratic Gaussian (LQG) control for unknown systems and overview our contributions in \cref{section:problemstatement}. In \cref{section:robustIOP}, we first leverage the IOP framework to develop a robust controller synthesis procedure taking into account estimation errors explicitly, and then derive our main sub-optimality result. This enables our end-to-end sample complexity analysis discussed in \cref{section:performance}. We conclude this paper in \cref{section:Conclusions}.
Proofs are postponed to the appendices\preprintswitch{ of~\cite{ARXIV}.}{.} \textit{Notation.} We use lower and upper case letters (\emph{e.g.}, $x$ and $A$) to denote vectors and matrices, and lower and upper case boldface letters (\emph{e.g.}, $\mathbf{x}$ and $\mathbf{G}$) to denote signals and transfer matrices, respectively. Given a stable transfer matrix $\mathbf{G} \in \mathcal{RH}_{\infty}$, where $\mathcal{RH}_{\infty}$ denotes the subspace of stable transfer matrices, we denote its $\mathcal{H}_{\infty}$ norm by $\|\mathbf{G}\|_{\infty}:= \sup_{\omega} {\sigma}_{\max} (\mathbf{G}(e^{j\omega}))$. \section{Problem Statement and Our Contributions} \label{section:problemstatement} \subsection{LQG formulation} We consider the following \emph{partially observed} output feedback system \vspace{-3pt} \begin{equation} \label{eq:dynamic} \begin{aligned} x_{t+1} &= A_\star x_t+B_\star u_t, \\ y_t &= C_\star x_t + v_t, \\ u_t &= \pi(y_t,\ldots,y_0)+w_t\,, \end{aligned} \end{equation} where $x_t \in \mathbb{R}^n$ is the state of the system, $u_t\in \mathbb{R}^m$ is the control input, $\pi(\cdot)$ is an output-feedback control policy, $y_t\in \mathbb{R}^p$ is the observed output, and $w_t\in \mathbb{R}^m, v_t\in \mathbb{R}^p$ are zero-mean Gaussian noises with covariances $\sigma_w^2 I$ and $\sigma_v^2 I$, respectively. The setup in~\eqref{eq:dynamic} is convenient from an external input-output perspective, where the noise $w_t$ affects the input $u_t$ directly.\footnote{Letting $\hat{w}_t = B_\star w_t$,~\eqref{eq:dynamic} is an instance of the classical LQG formulation where the process noise $\hat{w}_t$ has covariance $\sigma_w^2B_\star B_\star^\mathsf{T}$. The setting~\eqref{eq:dynamic} enables a concise closed-loop representation in~\eqref{eq:responses_maintext}, facilitating the suboptimality analysis in the IOP framework.
We leave the general case with unstructured covariance for future work.} This setup was also considered in~\cite{tu2017non,boczar2018finite,zheng2019systemlevel}. When $C_\star = I$ and $v_t = 0$, \emph{i.e.}, when the state $x_t$ is directly measured, the system is called \emph{fully observed}. Throughout this paper, we make the following assumption. \begin{assumption} \label{assumption:stabilizability} $(A_\star, B_\star)$ is stabilizable and $(C_\star, A_\star)$ is detectable. \end{assumption} The classical Linear Quadratic Gaussian (LQG) control problem is defined as \begin{equation} \label{eq:LQG} \begin{aligned} \min_{u_0,u_1,\ldots} \quad & \lim_{T \rightarrow \infty}\mathbb{E}\left[ \frac{1}{T} \sum_{t=0}^T \left(y_t^{{\mathsf T}} Q y_t + u_t^{{\mathsf T}} R u_t\right)\right] \\ \st \quad & ~\eqref{eq:dynamic}, \end{aligned} \end{equation} where $Q$ and $R$ are positive definite. Without loss of generality and for notational simplicity, we assume that $Q=I_p$, $R=I_m$, $\sigma_w=\sigma_v = 1$. When the dynamics~\eqref{eq:dynamic} are known, this problem has a well-known closed-form solution obtained by solving two algebraic Riccati equations~\citep{zhou1996robust}. The optimal solution is $u_t = K \hat{x}_t$, where $K$ is a fixed $m \times n$ matrix and $\hat{x}_t$ is the state estimate computed from the observations $y_0, \ldots, y_t$ via the Kalman filter.
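The two-Riccati-equation construction above can be sketched numerically. The snippet below is an illustrative sketch only (not the paper's code): it computes an LQR gain and a predictor-form Kalman gain for a hypothetical toy plant, assuming \texttt{scipy} is available; the matrices $A$, $B$, $C$ are made up for the example.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical open-loop-stable toy plant (rho(A) < 1).
A = np.array([[0.6, 0.2], [0.0, 0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q_x = C.T @ C          # output cost y'y = x'C'Cx
R = np.eye(1)          # input cost u'u
W = B @ B.T            # process noise covariance (noise enters through B)
V = np.eye(1)          # measurement noise covariance

# Control Riccati equation -> LQR gain K, so that A + BK is stable.
P = solve_discrete_are(A, B, Q_x, R)
K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Filter Riccati equation on the dual system -> predictor Kalman gain L,
# so that A - LC is stable.
S = solve_discrete_are(A.T, C.T, W, V)
L = A @ S @ C.T @ np.linalg.inv(C @ S @ C.T + V)

print(float(np.max(np.abs(np.linalg.eigvals(A + B @ K)))))  # < 1
print(float(np.max(np.abs(np.linalg.eigvals(A - L @ C)))))  # < 1
```

Both closed-loop spectral radii come out strictly below one, reflecting the separation structure of the LQG solution.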
Compactly, the optimal controller to~\eqref{eq:LQG} can be written in the form of a transfer function $ \mathbf{u}(z) = \mathbf{K}(z) \mathbf{y}(z), $ with $\mathbf{K}(z) = C_k(zI - A_k)^{-1}B_k + D_k$, where $z\in\mathbb{C}$, $\mathbf{u}(z)$ and $\mathbf{y}(z)$ are the $z$-transforms of the input $u_t$ and output $y_t$, and the transfer function $\mathbf{K}(z)$ has a state-space realization expressed as \begin{equation} \label{eq:dyController} \begin{aligned} \xi_{t+1}&=A_k\xi_t+B_ky_t,\\ u_t&=C_k\xi_t+D_ky_t, \end{aligned} \end{equation} where $\xi_t \in \mathbb{R}^q$ is the controller internal state, and $A_k, B_k, C_k, D_k$ depend on the system matrices $A_\star, B_\star, C_\star$ and the solutions to the algebraic Riccati equations. We refer interested readers to~\cite{zhou1996robust,bertsekas2011dynamic} for more details. Throughout the paper, we make another assumption, which is required in both the plant estimation algorithms~\citep{oymak2019non} and the robust controller synthesis phase. \begin{assumption} \label{assumption:open_loop_stability} The plant dynamics are open-loop stable, \emph{i.e.}, $\rho(A_\star) < 1$, where $\rho(\cdot)$ denotes the spectral radius. \end{assumption} \subsection{LQG for unknown dynamics} In the case where the system dynamics $A_\star, B_\star, C_\star$ are unknown, one natural idea is to conduct experiments to estimate $\hat{A}, \hat{B}, \hat{C}$~\citep{ljung2010perspectives} and to design the corresponding controller based on the estimated dynamics, which is known as \emph{certainty equivalence control}. When the estimation is accurate enough, the certainty equivalent controller leads to good closed-loop performance~\citep{mania2019certainty}. However, the certainty equivalent controller does not take into account estimation errors, which might lead to instability in practice.
It is desirable to explicitly incorporate the estimation errors \begin{equation} \label{eq:estimation_error_state_space} \|\hat{A} - A_\star\|,\quad \|\hat{B} - B_\star\|, \quad \|\hat{C} - C_\star\|, \end{equation} into the controller synthesis, and this requires novel tools from robust control. Unlike the fully observed LQR case~\citep{dean2019sample}, in the partially observed case of \eqref{eq:dynamic}, it remains unclear how to directly incorporate state-space model errors~\eqref{eq:estimation_error_state_space} into robust controller synthesis. Besides, the state-space realization of a partially observed system is not unique, and different realizations from the estimation procedure might have an impact on the controller synthesis. Instead of the state-space form~\eqref{eq:dynamic}, the system dynamics can be described uniquely in the frequency domain in terms of the transfer function as $$ \mathbf{G}_\star(z) = C_\star (zI - A_\star)^{-1} B_\star\,, $$ where $z\in\mathbb{C}$. Based on an estimated model $\hatbf{G}$ and an upper bound $\epsilon$ on its estimation error $\|\bm{\Delta}\|_\infty :=\|\mathbf{G}_\star-\hatbf{G}\|_{\infty}$, we consider a robust variant of the LQG problem that seeks to minimize the worst-case LQG performance of the closed-loop system \begin{equation} \label{eq:robustLQG} \begin{aligned} \min_{\mathbf{K}} \sup_{\|\mathbf{\Delta}\|_{\infty} < \epsilon} \quad & \lim_{T \rightarrow \infty}\mathbb{E}\left[ \frac{1}{T} \sum_{t=0}^T \left(y_t^{{\mathsf T}} Q y_t + u_t^{{\mathsf T}} R u_t\right)\right], \\ \st \quad & \mathbf{y} = (\hat{\mathbf{G}} + \mathbf{\Delta}) \mathbf{u} + \mathbf{v}\,, \quad \mathbf{u} = \mathbf{K} \mathbf{y} + \mathbf{w}, \end{aligned} \end{equation} where $\mathbf{K}$ is a proper transfer function. When $\bm{\Delta} = 0$, \eqref{eq:robustLQG} recovers the standard LQG formulation \eqref{eq:LQG}. 
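The $\mathcal{H}_\infty$ norm used to measure the uncertainty $\|\bm{\Delta}\|_\infty$ can be approximated by gridding the unit circle, following the definition $\|\mathbf{G}\|_{\infty} = \sup_{\omega} \sigma_{\max}(\mathbf{G}(e^{j\omega}))$ in the Notation paragraph. The sketch below is an illustrative frequency-gridding approximation (a lower bound; it is not the estimation machinery used in the paper), checked on a scalar example where the norm is known in closed form.

```python
import numpy as np

def hinf_norm(A, B, C, n_grid=2000):
    """Grid approximation of sup_w sigma_max(C (e^{jw} I - A)^{-1} B)."""
    n = A.shape[0]
    worst = 0.0
    for w in np.linspace(0.0, np.pi, n_grid):
        # Frequency response G(e^{jw}) of the state-space model (A, B, C).
        G = C @ np.linalg.solve(np.exp(1j * w) * np.eye(n) - A, B)
        worst = max(worst, np.linalg.svd(G, compute_uv=False)[0])
    return worst

# Scalar sanity check: G(z) = cb/(z - a) with 0 < a < 1 peaks at w = 0,
# where |G(1)| = cb/(1 - a).
a, b, c = 0.5, 1.0, 2.0
A = np.array([[a]]); B = np.array([[b]]); C = np.array([[c]])
print(hinf_norm(A, B, C))   # ~ cb/(1 - a) = 4.0
```

A frequency grid only lower-bounds the true supremum; for the scalar example the peak at $\omega = 0$ lies on the grid, so the value is exact.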
Additionally, note that the cost of \eqref{eq:robustLQG} is finite if and only if $\mathbf{K}$ stabilizes all plants $\mathbf{G}$ such that $\mathbf{G} = \hatbf{G} + \bm{\Delta}$ and $\norm{\bm{\Delta}}_\infty<\epsilon$. \subsection{Our contributions} Although classical approaches exist to compute controllers that stabilize all plants $\hatbf{G} + \bm{\Delta}$ with $\norm{\bm{\Delta}}_\infty \leq \epsilon$~\citep{zhou1996robust}, these methods typically do not quantify the closed-loop LQG performance degradation in terms of the uncertainty size $\epsilon$. In this paper, we exploit the recent IOP framework~\citep{furieri2019input} to develop a tractable inner approximation of \eqref{eq:robustLQG} (see \cref{theo:mainRobust}). In our main result, \cref{theo:suboptimality}, we show that if $\epsilon$ is small enough, the suboptimality performance gap is bounded as \begin{equation*} \frac{\hat{J} - J_\star}{J_\star} \leq M \epsilon\,, \end{equation*} where $J_\star$ is the globally optimal LQG cost to~\eqref{eq:LQG}, $\hat{J}$ is the LQG cost when applying the robust controller from our procedure to the true plant $\mathbf{G}_{\star}$, $\epsilon > \|\mathbf{G}_\star-\hat{\mathbf{G}}\|_\infty$ is an upper bound on the estimation error, and $M$ is a constant that depends explicitly on the true dynamics $\mathbf{G}_\star$, its estimation $\hatbf{G}$, and the true LQG controller $\mathbf{K}_{\star}$; see \eqref{eq:suboptimality} for a precise expression.
Adapting recent non-asymptotic estimation results from input-output trajectories~\citep{oymak2019non,sarkar2019finite,tu2017non,zheng2020non}, we derive an end-to-end sample complexity of learning LQG controllers as \begin{equation*} \frac{\hat{J} - J_\star}{J_\star} \leq {\mathcal{O}}\left(\frac{T}{\sqrt{N}}+\rho(A_\star)^T\right)\,, \end{equation*} with high probability provided $N$ is sufficiently large, where $T$ is the length of the finite impulse response (FIR) model estimation, and $N$ is the number of samples in the input-output trajectories (see \cref{corollary:samplecomplexity}). When the true plant $\mathbf{G}_{\star}$ is an FIR, the sample complexity scales as ${\mathcal{O}}\left(N^{-1/2}\right)$. If $\mathbf{G}_\star$ is unstable, the residual $\bm{\Delta} = \mathbf{G}_\star-\hatbf{G}$ might be unstable as well, and thus $\normHinf{\bm{\Delta}}=\infty$. In contrast, $\bm{\Delta}$ is always a stable residual when $\mathbf{G}_\star$ is stable, and thus $\normHinf{\bm{\Delta}}$ is finite. Also, it is hard to utilize a single trajectory for identifying unstable systems. Finally, unstable residuals in the equality constraints of IOP pose a challenge for controller implementation; we refer to Section 6 of \cite{zheng2019systemlevel} for details. We therefore leave the case of unstable systems for future work. % % \section{Robust controller synthesis} \label{section:robustIOP} We first derive a tractable convex approximation of~\eqref{eq:robustLQG} using the recent input-output parameterization (IOP) framework~\citep{furieri2019input}. This allows us to compute a robust controller using convex optimization. We then provide sub-optimality guarantees in terms of the uncertainty size $\epsilon$.
{The overall principle is parallel to that of~\cite{dean2019sample} for the LQR case.} \subsection{An equivalent IOP reformulation of \eqref{eq:robustLQG}} Similar to the Youla parameterization~\citep{youla1976modern} and the SLP~\citep{wang2019system}, the IOP framework~\citep{furieri2019input} focuses on the \emph{system responses} of a closed-loop system. In particular, given an arbitrary control policy $\mathbf{u} = \mathbf{K} \mathbf{y}$, straightforward calculations show that the closed-loop responses from the noises $\mathbf{v}, \mathbf{w}$ to the output $\mathbf{y}$ and control action $\mathbf{u}$ are \begin{equation} \label{eq:responses_maintext} \begin{bmatrix} \mathbf{y} \\ \mathbf{u} \end{bmatrix} = \begin{bmatrix}(I-\mathbf{G}_\star\mathbf{K})^{-1}&(I-\mathbf{G}_\star\mathbf{K})^{-1}\mathbf{G}_\star\\\mathbf{K}(I-\mathbf{G}_\star\mathbf{K})^{-1}&(I-\mathbf{K}\mathbf{G}_\star)^{-1}\end{bmatrix} \begin{bmatrix} \mathbf{v} \\ \mathbf{w} \end{bmatrix}. \end{equation} To ease the notation, we can define the closed-loop responses as \begin{equation} \label{eq:YUWZ_main_text} \begin{bmatrix} \mathbf{Y} & \mathbf{W}\\\mathbf{U}&\mathbf{Z}\end{bmatrix} := \begin{bmatrix}(I-\mathbf{G}_\star\mathbf{K})^{-1}&(I-\mathbf{G}_\star\mathbf{K})^{-1}\mathbf{G}_\star\\\mathbf{K}(I-\mathbf{G}_\star\mathbf{K})^{-1}&(I-\mathbf{K}\mathbf{G}_\star)^{-1}\end{bmatrix}. \end{equation} Given any stabilizing $\mathbf{K}$, we can write $\mathbf{K}=\mathbf{UY}^{-1}$ and the square root of the LQG cost in~\eqref{eq:LQG} as \begin{equation} \label{eq:cost_definition_convex_v2} J(\mathbf{G}_{\star},\mathbf{K})=\left\|\begin{bmatrix}\mathbf{Y}&\mathbf{W}\\ \mathbf{U}&\mathbf{Z}\end{bmatrix}\right\|_{\mathcal{H}_2}, \end{equation} with the closed-loop responses $(\mathbf{Y},\mathbf{U},\mathbf{W},\mathbf{Z})$ defined in~\eqref{eq:YUWZ_main_text}; see~\cite{furieri2019input,zheng2019equivalence} for more details on IOP. 
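As a quick sanity check of the algebra behind~\eqref{eq:YUWZ_main_text}, one can freeze a single frequency and verify numerically that the four closed-loop responses satisfy the affine achievability identities $[I \;\; -\mathbf{G}][\mathbf{Y} \; \mathbf{W}; \mathbf{U} \; \mathbf{Z}] = [I \;\; 0]$ and $[\mathbf{Y} \; \mathbf{W}; \mathbf{U} \; \mathbf{Z}][-\mathbf{G}; I] = [0; I]$. The matrices below are random real stand-ins for frequency-response samples (true responses are complex, but the algebra is identical); this is an illustrative check, not part of the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
p, m = 2, 3
# Frozen frequency-response samples of plant G (p x m) and controller K (m x p).
G = rng.standard_normal((p, m))
K = 0.1 * rng.standard_normal((m, p))   # small gain keeps I - GK invertible

Y = np.linalg.inv(np.eye(p) - G @ K)    # (I - GK)^{-1}
W = Y @ G                               # (I - GK)^{-1} G
U = K @ Y                               # K (I - GK)^{-1}
Z = np.linalg.inv(np.eye(m) - K @ G)    # (I - KG)^{-1}

# Affine achievability constraints (cf. the theorem's constraints):
# [I  -G] [Y W; U Z] = [I 0]   and   [Y W; U Z] [-G; I] = [0; I]
blk = np.block([[Y, W], [U, Z]])
lhs1 = np.hstack([np.eye(p), -G]) @ blk
lhs2 = blk @ np.vstack([-G, np.eye(m)])
print(np.allclose(lhs1, np.hstack([np.eye(p), np.zeros((p, m))])))  # True
print(np.allclose(lhs2, np.vstack([np.zeros((p, m)), np.eye(m)])))  # True
```

The identities hold for any $\mathbf{G}$, $\mathbf{K}$ making $I - \mathbf{G}\mathbf{K}$ invertible, which is exactly why they can serve as convex constraints on $(\hatbf{Y},\hatbf{W},\hatbf{U},\hatbf{Z})$.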
We present a full equivalence derivation of~\eqref{eq:cost_definition_convex_v2} in \preprintswitch{\cite[Appendix~G]{ARXIV}}{\cref{app:h2norm}}. Our first result is a reformulation of the robust LQG problem~\eqref{eq:robustLQG} in the IOP framework. \begin{myTheorem} \label{pr:robust_LQG} The robust LQG problem~\eqref{eq:robustLQG} is equivalent to \begin{align} \label{eq:robustLQG_YUWZ} \min_{\hat{\mathbf{Y}}, \hat{\mathbf{W}}, \hat{\mathbf{U}}, \hat{\mathbf{Z}}} \; \max_{\|\mathbf{\Delta}\|_\infty< \epsilon} \quad & J(\mathbf{G}_{\star},\mathbf{K}) = \left\|\begin{bmatrix}\hat{\mathbf{Y}}(I-\mathbf{\Delta}\hat{\mathbf{U}})^{-1}&\hat{\mathbf{Y}}(I-\mathbf{\Delta}\hat{\mathbf{U}})^{-1}(\hat{\mathbf{G}}+\mathbf{\Delta})\\\hat{\mathbf{U}}(I-\mathbf{\Delta}\hat{\mathbf{U}})^{-1}&(I-\hat{\mathbf{U}}\mathbf{\Delta})^{-1}\hat{\mathbf{Z}}\end{bmatrix}\right\|_{\mathcal{H}_2} \\ \st \quad & \begin{bmatrix} I&-\hat{\mathbf{G}} \end{bmatrix}\begin{bmatrix} \hat{\mathbf{Y}} & \hat{\mathbf{W}}\\\hat{\mathbf{U}} & \hat{\mathbf{Z}} \end{bmatrix}=\begin{bmatrix} I&0 \end{bmatrix}, \label{eq:ach_hat1}\\ & \begin{bmatrix} \hat{\mathbf{Y}} & \hat{\mathbf{W}}\\\hat{\mathbf{U}} & \hat{\mathbf{Z}} \end{bmatrix} \begin{bmatrix} -\hat{\mathbf{G}}\\I \end{bmatrix}=\begin{bmatrix} 0\\I \end{bmatrix}, \label{eq:ach_hat2}\\ & \hat{\mathbf{Y}}, \hat{\mathbf{W}}, \hat{\mathbf{U}}, \hat{\mathbf{Z}}\in \mathcal{RH}_\infty, \, \|\hat{\mathbf{U}}\|_{\infty} \leq \frac{1}{\epsilon}, \nonumber \end{align} where the optimal robust controller is recovered from the optimal $\hatbf{U}$ and $\hatbf{Y}$ as $\mathbf{K} = \hat{\mathbf{U}}\hat{\mathbf{Y}}^{-1}.$ \end{myTheorem} The proof relies on a novel robust variant of the IOP for parameterizing robustly stabilizing controllers in a convex way. We provide the detailed proof and a review of the IOP framework in \preprintswitch{\cite[Appendix~C]{ARXIV}}{\cref{app:robuststability}}.
Here, it is worth noting that the feasible set in \eqref{eq:robustLQG_YUWZ} is convex in the decision variables $(\hatbf{Y},\hatbf{W},\hatbf{U},\hatbf{Z})$, which represent four closed-loop maps on the estimated plant $\hat{\mathbf{G}}$. Using the small-gain theorem \citep{zhou1996robust}, the convex requirement $\|\hat{\mathbf{U}}\|_{\infty} \leq \frac{1}{\epsilon}$ ensures that any controller $\mathbf{K} = \hat{\mathbf{U}}\hat{\mathbf{Y}}^{-1}$, with $\hatbf{Y},\hatbf{U}$ feasible for \eqref{eq:robustLQG_YUWZ}, stabilizes the real plant $\mathbf{G}_\star$ for all $\bm{\Delta}$ such that $\|\bm{\Delta}\|_\infty < \epsilon$. Due to the uncertainty $\bm{\Delta}$, the cost in~\eqref{eq:robustLQG_YUWZ} is nonconvex in the decision variables. We therefore proceed with deriving an upper-bound on the functional $ J(\mathbf{G}_\star, \mathbf{K})$, which will be exploited to derive a quasi-convex approximation of the robust LQG problem~\eqref{eq:robustLQG}. \subsection{Upper bound on the non-convex cost in~\eqref{eq:robustLQG_YUWZ}} It is easy to derive (see \preprintswitch{Appendix~B of \cite{ARXIV}}{\cref{app:inequalities}}) that \begin{equation} \label{eq:LQGbound_step1} \begin{aligned} J(\mathbf{G}_\star,\mathbf{K})^2 = \|\hat{\mathbf{Y}}(I-\mathbf{\Delta}\hat{\mathbf{U}})^{-1}&\|_{\mathcal{H}_2}^2 + \|\hat{\mathbf{U}}(I-\mathbf{\Delta}\hat{\mathbf{U}})^{-1}\|_{\mathcal{H}_2}^2 \\ & + \|(I-\hat{\mathbf{U}}\mathbf{\Delta})^{-1}\hat{\mathbf{Z}}\|_{\mathcal{H}_2}^2 + \|\hat{\mathbf{Y}}(I-\mathbf{\Delta}\hat{\mathbf{U}})^{-1}(\hat{\mathbf{G}}+\mathbf{\Delta}) \|_{\mathcal{H}_2}^2. \end{aligned} \end{equation} {Similarly to \cite{dean2019sample} for the LQR case, it is relatively easy to bound the first three terms on the right hand side of~\eqref{eq:LQGbound_step1} using small-gain arguments. However, dealing with outputs makes it challenging to bound the last term. 
The corresponding result is summarized in the following proposition (see \preprintswitch{Appendix~D.1 of \cite{ARXIV}}{\cref{app:upperbound_lastterm}} for proof).} \begin{myProposition} \label{prop:LQGcostupperbound_step1} If $\|\hat{\mathbf{U}}\|_{\infty} < \frac{1}{\epsilon}, \|\mathbf{\Delta}\|_\infty < \epsilon$ and $\hat{\mathbf{G}}\in \mathcal{RH}_{\infty}$, then, we have \begin{equation} \label{eq:LQGbound_step3} \begin{aligned} \|\hat{\mathbf{Y}}(I-\mathbf{\Delta}\hat{\mathbf{U}})^{-1}(\hat{\mathbf{G}}+\mathbf{\Delta})\|_{\mathcal{H}_2} & \leq \frac{\|\hat{\mathbf{W}}\|_{\mathcal{H}_2} + \epsilon \|\hat{\mathbf{Y}}\|_{\mathcal{H}_2} (2 + \|\hat{\mathbf{U}}\|_{\infty}\|\hat{\mathbf{G}}\|_\infty )}{1 - \epsilon \|\hat{\mathbf{U}}\|_{\infty}},\\ \end{aligned} \end{equation} where $\hat{\mathbf{W}} = \hat{\mathbf{Y}}\hat{\mathbf{G}}$. \end{myProposition} We are now ready to present an upper bound on the LQG cost. The proof is based on \cref{prop:LQGcostupperbound_step1} and basic inequalities; see \preprintswitch{\cite[Appendix~D.2]{ARXIV}}{\cref{app:prop_LQGcostupperbound}}. 
\begin{myProposition} \label{prop:LQGcostupperbound} If $\|\hat{\mathbf{U}}\|_{\infty} \leq \frac{1}{\epsilon}, \|\mathbf{\Delta}\|_\infty < \epsilon$ and $\hat{\mathbf{G}}\in \mathcal{RH}_{\infty}$, the robust LQG cost in~\eqref{eq:robustLQG_YUWZ} is upper bounded by \begin{equation} \label{eq:cost_upperbound_nonconvex} J(\mathbf{G}_\star,\mathbf{K}) \leq \frac{1}{1 - \epsilon \|\hat{\mathbf{U}}\|_{\infty}} \left\|\begin{bmatrix} \sqrt{1 + h(\epsilon,\|\hat{\mathbf{U}}\|_{\infty})} \hat{\mathbf{Y}} & \hat{\mathbf{W}}\\ \hat{\mathbf{U}}& \hat{\mathbf{Z}}\end{bmatrix}\right\|_{\mathcal{H}_2}, \end{equation} where $\hat{\mathbf{Y}}, \hat{\mathbf{W}}, \hat{\mathbf{U}}, \hat{\mathbf{Z}}$ satisfy the constraints in~\eqref{eq:robustLQG_YUWZ}, and the factor $h(\epsilon,\|\hat{\mathbf{U}}\|_{\infty})$ is defined as \begin{equation} \label{eq:factor_h} h(\epsilon,\|\hat{\mathbf{U}}\|_{\infty}) := \epsilon\|\hat{\mathbf{G}}\|_{\infty} (2 + \|\hat{\mathbf{U}}\|_{\infty}\|\hat{\mathbf{G}}\|_\infty) + \epsilon^2 (2 + \|\hat{\mathbf{U}}\|_{\infty}\|\hat{\mathbf{G}}\|_\infty)^2. \end{equation} \end{myProposition} \subsection{Quasi-convex formulation} Building on the LQG cost upper bound \eqref{eq:cost_upperbound_nonconvex}, we derive our first main result on a tractable approximation of \eqref{eq:robustLQG_YUWZ}. The proof is reported in \preprintswitch{\cite[Appendix~D.3]{ARXIV}}{\cref{app:th_approximation}}. 
\begin{myTheorem} \label{theo:mainRobust} Given $\hat{\mathbf{G}} \in \mathcal{RH}_\infty$, a model estimation error $\epsilon$, and any constant {$\alpha>0$}, the robust LQG problem~\eqref{eq:robustLQG_YUWZ} is upper bounded by the following problem \begin{equation} \label{eq:robustLQG_convex} \begin{aligned} \min_{\gamma \in[0,{1}/{\epsilon})} \frac{1}{1-\epsilon \gamma} \;\; \min_{\hat{\mathbf{Y}}, \hat{\mathbf{W}}, \hat{\mathbf{U}}, \hat{\mathbf{Z}}} \; &\left\|\begin{bmatrix} \sqrt{1 + h(\epsilon, \alpha)} \hat{\mathbf{Y}} & \hat{\mathbf{W}}\\ \hat{\mathbf{U}}& \hat{\mathbf{Z}}\end{bmatrix}\right\|_{\mathcal{H}_2} \\ \st \quad & \eqref{eq:ach_hat1}-\eqref{eq:ach_hat2},~ \hat{\mathbf{Y}}, \hat{\mathbf{W}}, \hat{\mathbf{Z}}\in \mathcal{RH}_\infty, \, \|\hat{\mathbf{U}}\|_{\infty} \leq \min\left(\gamma,\alpha\right), \end{aligned} \end{equation} where $h(\epsilon,\alpha) = \epsilon\|\hat{\mathbf{G}}\|_{\infty} (2 + \alpha\|\hat{\mathbf{G}}\|_\infty) + \epsilon^2 (2 + \alpha\|\hat{\mathbf{G}}\|_\infty)^2$. \end{myTheorem} The hyper-parameter $\alpha$ in~\eqref{eq:robustLQG_convex} plays two important roles: 1) \emph{robust stability}: the resulting controller is guaranteed to be robustly stabilizing against model estimation errors up to $\frac{1}{\alpha}$, {thus suggesting $\alpha < \frac{1}{\epsilon}$, as we will clarify later}; 2) \emph{quasi-convexity}: the inner optimization problem in~\eqref{eq:robustLQG_convex} is convex for fixed $\gamma$, and the outer optimization is quasi-convex with respect to $\gamma$, which can effectively be solved using the golden section search. \begin{myRemark}[Numerical computation] \begin{enumerate} \item The inner optimization in~\eqref{eq:robustLQG_convex} is convex but infinite dimensional.
A practical numerical approach is to apply a finite impulse response (FIR) truncation on the decision variables $\hat{\mathbf{Y}}, \hat{\mathbf{U}}, \hat{\mathbf{W}}, \hat{\mathbf{Z}}$, which leads to a finite dimensional convex semidefinite program (SDP) for each fixed value of $\gamma$; see \preprintswitch{Appendix~B of \cite{ARXIV}}{\cref{app:inequalities}}. The degradation in performance decays exponentially with the FIR horizon \citep{dean2019sample}. \item Since $\hat{\mathbf{G}} \in \mathcal{RH}_\infty$ is stable, the IOP framework is numerically robust \citep{zheng2019systemlevel}, \emph{i.e.}, the resulting controller $\mathbf{K} = \hat{\mathbf{U}}\hat{\mathbf{Y}}^{-1}$ is stabilizing even when numerical solvers induce small computational residues in~\eqref{eq:robustLQG_convex}. \end{enumerate} \end{myRemark} \subsection{Sub-optimality guarantee} Our second main result offers a sub-optimality guarantee on the performance of the robust controller synthesized using the robust IOP framework~\eqref{eq:robustLQG_convex} in terms of the estimation error $\epsilon$. The proof is reported in \preprintswitch{\cite[Appendix~E]{ARXIV}}{\cref{app:theo:suboptimality}}. \begin{myTheorem} \label{theo:suboptimality} Let $\mathbf{K}_\star$ be the optimal LQG controller in~\eqref{eq:LQG}, and the corresponding closed-loop responses be $\mathbf{Y}_\star, \mathbf{U}_\star, \mathbf{W}_\star,\mathbf{Z}_\star$. Let $\hat{\mathbf{G}}$ be the plant estimation with error $\|\mathbf{\Delta}\|_\infty < \epsilon$, where $\mathbf{\Delta} = \mathbf{G}_\star - \hat{\mathbf{G}}$. Suppose that $\epsilon < \frac{1}{5\|\mathbf{U}_\star\|_\infty}$, and choose the constant hyper-parameter $\alpha \in \left[\frac{ \sqrt{2}\|\mathbf{U}_\star\|_\infty}{1-\epsilon \|\mathbf{U}_\star\|_\infty}, \frac{1}{\epsilon}\right)$. We denote the optimal solution to~\eqref{eq:robustLQG_convex} as $\gamma_\star, \hat{\mathbf{Y}}_\star, \hat{\mathbf{U}}_\star, \hat{\mathbf{W}}_\star, \hat{\mathbf{Z}}_\star$. 
Then, the controller ${\mathbf{K}} =\hat{\mathbf{U}}_\star\hat{\mathbf{Y}}_\star^{-1}$ stabilizes the true plant $\mathbf{G}_\star$ and the relative LQG error is upper bounded by \begin{equation} \label{eq:suboptimality} \frac{J(\mathbf{G}_\star, {\mathbf{K}})^2 - J(\mathbf{G}_\star, {\mathbf{K}}_\star)^2}{J(\mathbf{G}_\star, {\mathbf{K}}_\star)^2} \leq 20 \epsilon \|\mathbf{U}_\star\|_\infty + h(\epsilon,\alpha)+g(\epsilon, \|\mathbf{U}_\star\|_\infty), \end{equation} where $ h(\cdot , \cdot)$ is defined in~\eqref{eq:factor_h} and \begin{equation} \label{eq:constant_g} \begin{aligned} g(\epsilon,\|\mathbf{U}_\star\|_{\infty}) = \epsilon\|{\mathbf{G}}_\star\|_{\infty} (2 + \|\mathbf{U}_\star\|_{\infty}\|{\mathbf{G}}_\star\|_\infty) + \epsilon^2 (2 + \|\mathbf{U}_\star\|_{\infty}\|{\mathbf{G}}_\star\|_\infty)^2. \end{aligned} \end{equation} \end{myTheorem} \cref{theo:suboptimality} shows that the relative error in the LQG cost grows as $\mathcal{O}(\epsilon)$ as long as $\epsilon$ is sufficiently small, and in particular $\epsilon < \frac{1}{5\|\mathbf{U}_\star\|_\infty}$. Previous results in~\cite{dean2019sample} proved a similar convergence rate of $\mathcal{O}(\epsilon)$ for LQR using a robust synthesis procedure based on the SLP~\citep{wang2019system}. Our robust synthesis procedure using the IOP framework extends~\cite{dean2019sample} to a class of LQG problems. Note that our bound is valid for open-loop stable plants, while the method~\cite{dean2019sample} works for all systems at the cost of requiring direct state observations. Similar to~\cite{dean2019sample} and related work, the hyper-parameters $\epsilon$ and $\alpha$ have to be tuned in practice without knowing $\mathbf{U}_\star$. 
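The outer one-dimensional search over $\gamma$ in \cref{theo:mainRobust} can be carried out by golden-section search, as mentioned earlier. The sketch below is illustrative only: the function \texttt{inner} is a hypothetical stand-in for the optimal value of the inner convex program (which in practice is an SDP solved per $\gamma$), so only the quasi-convex search logic is demonstrated.

```python
import numpy as np

def golden_section_min(f, lo, hi, tol=1e-8):
    """Minimize a unimodal f on [lo, hi] by golden-section search."""
    invphi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c               # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d               # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Hypothetical stand-in for the inner optimal value as a function of gamma;
# the outer factor 1/(1 - eps*gamma) matches the theorem's objective.
eps = 0.1
inner = lambda g: 1.0 + (g - 3.0) ** 2
outer = lambda g: inner(g) / (1.0 - eps * g)
g_star = golden_section_min(outer, 0.0, 1.0 / eps - 1e-6)
print(round(g_star, 3))   # ~ 2.929 for this toy objective
```

For this toy objective the minimizer can be checked by hand (setting the derivative of \texttt{outer} to zero gives $\gamma^* = 3 - (\sqrt{50} - 7) \approx 2.929$), confirming the search.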
\begin{myRemark}[Optimality versus Robustness] Note that recent results in~\cite{mania2019certainty} show that the certainty equivalent controller achieves a better sub-optimality scaling of $\mathcal{O}(\epsilon^2)$ for both fully observed LQR and partially observed LQG settings, at the cost of a much stricter requirement on the admissible uncertainty $\epsilon$. Quoting from \cite{mania2019certainty}, ``\emph{the price of obtaining a faster rate for LQR is that the certainty equivalent controller becomes less robust to model uncertainty.}'' Our result in \cref{theo:suboptimality} shows that this trade-off may hold true for the LQG problem as well. \preprintswitch{}{ A classical result from~\cite{doyle1978guaranteed} states that there is no \emph{a priori} guaranteed gain margin for the optimal LQG controller $\mathbf{K}_\star$, which means that $\mathbf{K}_\star$ might fail to stabilize a plant obtained by a small perturbation of $\mathbf{G}_\star$. In contrast, any feasible solution of~\eqref{eq:robustLQG_convex} robustly stabilizes any system $\mathbf{G}$ with $\|\mathbf{G} - \hat{\mathbf{G}}\|_\infty < \epsilon$. Thus, especially when an accurate model is unavailable, it might be better to trade some performance for robustness by directly accounting for $\epsilon$ when solving~\eqref{eq:robustLQG_convex}. } \end{myRemark} \preprintswitch{}{\begin{myRemark}[Open-loop stability] We note that the upper bound on the suboptimality gap in~\eqref{eq:suboptimality} depends explicitly on the optimal closed-loop performance $\|\mathbf{U}_\star\|_\infty$, the open-loop plant dynamics $\|\mathbf{G}_\star\|_\infty$, as well as the estimated plant dynamics $\|\hat{\mathbf{G}}\|_\infty$. If the open-loop system is close to the stability margin, as measured by a larger value of $\|\mathbf{G}_\star\|_\infty$, then the sub-optimality gap might be larger using our robust controller synthesis procedure.
This suggests that it might be harder to design a robust controller with a small suboptimality gap if the open-loop system is closer to instability. This difficulty is also reflected in the plant estimation procedure; see~\cite{oymak2019non,simchowitz2019learning,zheng2020non} for more discussions. It would be very interesting to establish a lower bound for the robust synthesis procedure, which would allow us to establish fundamental limits with respect to system instability. \end{myRemark}} \section{Sample complexity} \label{section:performance} Based on our main results in the previous section, here we discuss how to estimate the plant $\mathbf{G}_\star$, provide a non-asymptotic $\mathcal{H}_\infty$ bound on the estimation error, and finally establish an end-to-end sample complexity of learning LQG controllers. By \cref{assumption:open_loop_stability}, \emph{i.e.}, $\mathbf{G}_\star \in \mathcal{RH}_\infty$, we can write \begin{equation} \label{eq:plantFIR} \mathbf{G}_\star(z) = \sum_{i=0}^\infty \frac{1}{z^i}G_{\star,i} =\sum_{i=0}^{T-1} \frac{1}{z^i}G_{\star,i} + \sum_{i=T}^\infty \frac{1}{z^i}G_{\star,i}, \end{equation} where $G_{\star,i} \in \mathbb{R}^{p \times m}$ denotes the $i$-th spectral component. Given the state-space representation~\eqref{eq:dynamic}, we have $ G_{\star,0}=0\,, G_{\star,i} = C_\star A^{i-1}_\star B_\star,~ \forall i\geq 1\,. $ As $\rho(A_\star) < 1$, the spectral components $G_{\star,i}$ decay exponentially. Thus, we can use a finite impulse response (FIR) truncation of order $T$ for $\mathbf{G}_\star$: $$ G_\star = \begin{bmatrix} 0 & C_\star B_\star & \cdots & C_\star A_\star^{T-2} B_\star\end{bmatrix} \in \mathbb{R}^{p \times Tm}. $$ Many recent algorithms have been proposed to estimate $G_\star$, e.g.,~\cite{oymak2019non,tu2017non,sarkar2019finite,zheng2020non}, and these algorithms differ in their estimation setups; see \cite{zheng2020non} for a comparison. All of them can be used to establish an end-to-end sample complexity.
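The exponential decay of the spectral components $G_{\star,i} = C_\star A_\star^{i-1} B_\star$, which justifies the FIR truncation, is easy to check numerically. The plant below is an illustrative toy example (not from the paper); the tail sum serves as a rough proxy for the truncation error.

```python
import numpy as np

# Illustrative stable plant: rho(A) = 0.8 < 1.
A = np.array([[0.8, 0.1], [0.0, 0.7]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Spectral components G_i = C A^{i-1} B for i = 1, ..., T-1.
T = 30
coeffs = []
Ak = np.eye(2)
for i in range(1, T):
    coeffs.append(np.linalg.norm(C @ Ak @ B, 2))  # ||G_i||, Ak = A^{i-1}
    Ak = Ak @ A

tail = sum(coeffs[15:])   # rough truncation-error proxy beyond 16 taps
print(round(coeffs[0], 3), coeffs[20] < coeffs[0], tail < 0.5)
```

Here $\|G_i\|$ shrinks roughly like $\rho(A)^i$, so the tail of the series, and hence the FIR truncation error, decays exponentially in the truncation length $T$.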
Here, we discuss a recent ordinary least-squares (OLS) algorithm~\citep{oymak2019non}. This OLS estimator is based on collecting a trajectory $\{y_t,u_t\}_{t=0}^{T+N-1}$, where $u_t$ is zero-mean Gaussian with covariance $\sigma_u^2I$ for every $t$. The OLS algorithm details are provided in \preprintswitch{\citet[Appendix~F.1]{ARXIV}}{\cref{app:theo_FIR_identification_procedure}}. From the OLS solution $\hat{G}$, we form the estimated plant \begin{equation} \label{eq:FIRestimation} \hat{\mathbf{G}} := \sum_{k=0}^{T-1} \frac{1}{z^k} \hat{G}_k . \end{equation} To bound the $\mathcal{H}_\infty$ norm of the estimation error $\mathbf{\Delta} := \mathbf{G}_\star - \hat{\mathbf{G}}$, we define $ \Phi(A_\star) = \sup_{\tau \geq 0} \; \frac{\|A_\star^\tau\|}{\rho(A_\star)^\tau}. $ We assume $\Phi(A_\star)$ is finite~\citep{oymak2019non}. We start from \cite[Theorem 3.2]{oymak2019non} to derive two new corollaries. The proofs rely on connecting the spectral norm of $\hat{G}-G_\star$ with the $\mathcal{H}_\infty$ norm of $\hat{\mathbf{G}} - \mathbf{G}_{\star}$; see \preprintswitch{\citet[Appendix~F.2]{ARXIV}}{\cref{app:theo_FIR_identification_bound}} for details. \begin{myCorollary}\label{co:FIRestimation} Under the OLS estimation setup in Theorem 3.2 of \cite{oymak2019non}, {with high probability}, the FIR estimation $\hat{\mathbf{G}}$ in~\eqref{eq:FIRestimation} satisfies \begin{equation} \label{eq:sysID_bound} {\|\mathbf{G}_\star - \hat{\mathbf{G}}\|_{{\infty}}\leq \frac{R_w + R_v + R_e}{\sigma_u}\sqrt{\frac{T}{{N}}} + \Phi(A_\star) \norm{C_\star} \norm{B_\star} \frac{\rho(A_\star)^T}{1-\rho(A_\star)}}, \end{equation} where $N$ is the length of one input-output trajectory, and $R_w, R_v, R_e \in \mathbb{R}$ are problem-dependent constants\footnote{See~\cite{oymak2019non} for the precise formulas and probabilistic guarantees. Note that the dynamics in~\eqref{eq:dynamic} are slightly different from~\cite{oymak2019non}, with an extra matrix $B_\star$ in front of $w_t$.
By replacing the matrix $F$ in~\cite{oymak2019non} with $G_\star$, all the analysis and bounds stay the same.}. \end{myCorollary} \begin{myCorollary} \label{corollary:FIRestimation} Fix an $\epsilon > 0$. Let the FIR truncation length satisfy \begin{equation} \label{eq:FIRlength} T > \frac{1}{\log \left(\rho(A_\star)\right)} \log \frac{\epsilon (1 - \rho(A_\star))}{2 \Phi(A_\star)\|C_\star\|\|B_\star\|}. \end{equation} Under the OLS estimation setup in Theorem 3.2 of \cite{oymak2019non}, and further letting \begin{equation} \label{eq:Bound_N_v1} N > \max\left\{\frac{4T}{\sigma_u^2 \epsilon^2}(R_w+R_v+R_e)^2, cTm \log^2{(2Tm)}\log^2{(2Nm)} \right\}, \end{equation} with high probability, the FIR estimate $\hat{\mathbf{G}}$ in~\eqref{eq:FIRestimation} satisfies $\|\mathbf{G}_\star - \hat{\mathbf{G}}\|_\infty < \epsilon$. \end{myCorollary} The lower bound on the FIR length $T$ in~\eqref{eq:FIRlength} guarantees that the FIR truncation error is less than $\epsilon/2$, while the lower bound on $N$ in~\eqref{eq:Bound_N_v1} ensures that the estimation error of the FIR part is also less than $\epsilon/2$, thus leading to the desired bound with high probability. We note that the terms $R_w, R_v, R_e$ depend on the system dimensions and on the FIR length as ${\mathcal{O}}(\sqrt{T}(p+m+n))$. \cref{corollary:FIRestimation} states that the number of time steps to achieve identification error $\epsilon$ in $\mathcal{H}_\infty$ norm scales as ${\mathcal{O}}(T^2/\epsilon^2)$. We finally give an end-to-end guarantee by combining \cref{corollary:FIRestimation} with \cref{theo:suboptimality}: \begin{myCorollary} \label{corollary:samplecomplexity} {Let $\mathbf{K}_\star$ be the optimal LQG controller for~\eqref{eq:LQG}, and the corresponding closed-loop responses be $\mathbf{Y}_\star, \mathbf{U}_\star, \mathbf{W}_\star,\mathbf{Z}_\star$.
Choose an estimation error $0 < \epsilon < \frac{1}{5\|\mathbf{U}_\star\|_\infty}$, and the hyper-parameter $\alpha \in \left[\frac{ \sqrt{2}\|\mathbf{U}_\star\|_\infty}{1-\epsilon \|\mathbf{U}_\star\|_\infty}, \frac{1}{\epsilon}\right)$. Estimate $\hat{\mathbf{G}}$ in~\eqref{eq:FIRestimation} with a trajectory of length $N$ in~\eqref{eq:Bound_N_v1} and an FIR truncation length $T$ in~\eqref{eq:FIRlength}. Then, with high probability, the robust controller $\mathbf{K}$ from~\eqref{eq:robustLQG_convex} yields a relative error in the LQG cost satisfying~\eqref{eq:suboptimality}.} \end{myCorollary} Since the bound on the trajectory length $N$ in~\eqref{eq:Bound_N_v1} scales as $\tilde{\mathcal{O}}(\epsilon^{-2})$ (where the logarithmic factor comes from the FIR length $T$), the suboptimality gap in the LQG cost roughly scales as $\tilde{\mathcal{O}}\left(\frac{1}{\sqrt{N}}\right)$ when the FIR length $T$ is chosen large enough accordingly. In particular, when the true plant is FIR of order $\overline{T}$ and $T\geq \overline{T}$, by combining \cref{co:FIRestimation} with \cref{theo:suboptimality}, we see that with high probability, the suboptimality gap behaves as $$ \frac{J(\mathbf{G}_\star, {\mathbf{K}})^2 - J(\mathbf{G}_\star, {\mathbf{K}}_\star)^2}{J(\mathbf{G}_\star, {\mathbf{K}}_\star)^2} \sim \mathcal{O}\left(\frac{1}{\sqrt{N}}\right)\,. $$ Despite the additional difficulty of hidden states, our sample complexity result is on the same level as that obtained in~\cite{dean2019sample}, where a robust SLP procedure is used to design a robust LQR controller with full observations. \section{Conclusion and future work} \label{section:Conclusions} We have developed a robust controller synthesis procedure for partially observed LQG problems, by combining non-asymptotic identification methods with IOP for robust control.
Our procedure is consistent with the idea of Coarse-ID control in~\cite{dean2019sample}, and extends the results in~\cite{dean2019sample} from fully observed state-feedback systems to partially observed output-feedback systems that are open-loop stable. One interesting future direction is to extend these results to open-loop unstable systems. We note that non-asymptotic identification of partially observed open-loop unstable systems is challenging in itself; see~\cite{zheng2020non} for a recent discussion. It is also non-trivial to derive a robust synthesis procedure with guaranteed performance, and using a pre-stabilizing controller might be useful~\citep{simchowitz2020improper,zheng2019systemlevel}. Finally, extending our results to an online adaptive setting and performing regret analysis are exciting future directions as well.
\section{Introduction} Improving earlier work of Littlewood, in 1972 Levinson~\cite{lev} proved that \begin{equation} \label{lev} |\zeta(1+it)| \geq e^\gamma \log_2 t + \mathcal{O}(1) \end{equation} for arbitrarily large $t$.\footnote{Throughout this paper, we write $\zeta(s)$ for the Riemann zeta function and $\log_j$ for the $j$-th iterated logarithm.} This was further improved by Granville and Soundara\-rajan~\cite{gse}, who in 2006 established the lower bound $$ \max_{t \in [0,T]} |\zeta(1+it)| \geq e^\gamma \left( \log_2 T + \log_3 T - \log_4 T + \mathcal{O} (1) \right) $$ for arbitrarily large $T$. Their result also gives bounds for the measure of those $t \in [0,T]$ for which $|\zeta(1+it)|$ is of this size. Also in~\cite{gse}, Granville and Soundararajan state the conjecture that actually \begin{equation} \label{conj} \max_{t \in [T,2T]} |\zeta(1+it)| = e^\gamma \left( \log_2 T + \log_3 T + C_1 + o (1) \right), \end{equation} and even give a conjectural value of the constant $C_1$.\\ The proofs of Levinson and of Granville and Soundararajan rely on estimates for high moments of the zeta function and on Diophantine approximation arguments, respectively. Using a different method, the so-called \emph{resonance method}, Hilberdink~\cite{hilb} re-established~\eqref{lev}. This method can be traced back to a paper of Voronin \cite{voron}, but it was developed independently and significantly refined by Hilberdink and by Soundararajan \cite{sound} about 10 years ago. Roughly speaking, the basic principle of this method is to find a function $R(t)$ such that $$ I_1 := \int_0^T \zeta (\sigma + it) |R(t)|^2 ~dt $$ is ``large'', whereas $$ I_2:=\int_0^T |R(t)|^2 ~dt $$ is ``small''. Then the quotient $|I_1|/I_2$ is a lower bound for the maximal value of $|\zeta(\sigma+it)|$ in the range $t \in [0,T]$.
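Indeed, taking absolute values inside the integral gives
\[
|I_1| \,\leq\, \int_0^T |\zeta(\sigma+it)|\, |R(t)|^2 \, dt \,\leq\, \Big( \max_{t \in [0,T]} |\zeta(\sigma+it)| \Big) \int_0^T |R(t)|^2 \, dt \,=\, \Big( \max_{t \in [0,T]} |\zeta(\sigma+it)| \Big)\, I_2,
\]
so that $\max_{t \in [0,T]} |\zeta(\sigma+it)| \geq |I_1|/I_2$.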
The ``resonator'' $R(t)$ is chosen as a Dirichlet polynomial, and experience shows that it is often suitable to choose a function $R$ with multiplicative coefficients which can be written as a finite Euler product. This method can be implemented relatively easily if the length of $R$ is bounded by a small power of $T$, since then $I_2$ is the square-integral of a sum of essentially orthogonal terms. However, it is desirable to be able to control significantly longer resonator functions as well, and in~\cite{aist} a method was developed which allows the implementation of ``long'' resonators of length roughly $T^{\log_2 T}$. For $\sigma \in (1/2,1)$ this method made it possible to recapture Montgomery's~\cite{MR0460255} lower bounds for extreme values of $|\zeta (\sigma + it)|$ by means of the resonance method, while on the critical line Bondarenko and Seip~\cite{bs1} used a ``long resonator'' to obtain lower bounds for extreme values of $|\zeta(1/2+it)|$ which even surpassed those established before by different methods.\footnote{From the work of Bondarenko and Seip it is also visible why the ``long resonator'' does not give any essential improvement of Montgomery's results in the case $\sigma \in (1/2,1)$, other than better values for the involved constants. See~\cite{bs1,bs2}.} In the present paper we will adapt the ``long resonator'' argument to the case $\sigma=1$, and prove the following theorem. \begin{theorem} \label{th1} There is a constant $C$ such that $$ \max_{t \in [\sqrt{T},T]} |\zeta(1+it)| \geq e^\gamma \left( \log_2 T + \log_3 T - C \right) $$ for all sufficiently large $T$. \end{theorem} Note that our theorem is in accordance with the conjecture of Granville and Soundararajan in equation~\eqref{conj}. However, the conjecture is much stronger than the theorem. First of all, our theorem only gives a lower bound with error $\mathcal{O}(1)$, while the conjecture gives an asymptotic equality with error $o(1)$.
Thus a proof of the full conjecture would also require a vast improvement of the upper bound $|\zeta(1+it)| \ll (\log t)^{2/3}$ of Vinogradov. Additionally, in the conjecture the range for $t$ is $[T,2T]$, while our theorem requires the larger range $[\sqrt{T},T]$. The requirement for such a longer range is typical for applications of the ``long resonator'', and appears in~\cite{aist} and~\cite{bs1} as well. The range $[\sqrt{T},T]$ in our theorem could be reduced to $[T^{\theta},T]$ for $\theta<1$, at the expense of replacing $C$ by some other constant $C(\theta)$.\\ Before turning to the proof of Theorem~\ref{th1}, we comment on the difficulties when extending the ``long resonator'' method to the case $\sigma=1$. Two key ingredients of the resonance method for a ``long resonator'' are \emph{positivity} and \emph{sparsity}. ``Positivity'' means the introduction of an additional function having non-negative Fourier transform in $I_1$, which ensures that $I_1$ is a sum of non-negative terms, while ``sparsity'' means that nearby frequencies in the resonator function are merged in such a way that one obtains a function having a ``quasi-orthogonality'' property which allows to control $I_2$. For details see~\cite{aist,bs1} and the very recent paper~\cite{bs2}. The ``long resonator'' has only been implemented for $\sigma$ in the range $[1/2,1)$ so far, since the extension of the method to $\sigma=1$ meets serious technical difficulties. The main problem is that the ``sparsification'' of the resonator cannot be carried out to such an extent as to obtain a truly orthogonal sum, but one rather ends up with a function whose square-integral can be estimated only up to multiplicative errors of logarithmic order; if one tried to continue thinning out the resonator function to obtain precise control of $I_2$, then from some point on this would cause a significant loss in $I_1$ instead. 
Note that these errors do not play a role in the case $\sigma \in [1/2,1)$, where they are negligible in comparison to the main terms, whereas in the case $\sigma=1$ we want to obtain a very precise result where only multiplicative errors of order $(1 + 1/\log_2 T)$ are allowed.\\ Thus for the case $\sigma=1$ it is necessary to devise a novel variant of the ``long resonator'' technique, which will be done in the present paper. The method is genuinely different from those used in~\cite{aist,bs1}, where the ``sparsification'' of the resonator function played a key role. In the present paper we will avoid this sparsity requirement, and actually we make no attempt at all to control the size of $I_2$. Instead we use a self-similarity property of the resonator function, which is due to its construction in a completely multiplicative way. \section{Proof of Theorem~\ref{th1}} We will use the following approximation formula for the zeta function, which appears in the first lines of the proof of Theorem 2 in~\cite{gse}. \begin{lemma} \label{lemma1} Define $\zeta(s;Y) := \prod_{p \leq Y} \left(1 - p^{-s} \right)^{-1}$. Let $T$ be large, and set $Y = \exp((\log T)^{10})$. Then for $T^{1/10} \leq |t| \leq T$ we have $$ \zeta(1+it) = \zeta(1+it;Y) \left( 1 + \mathcal{O} \left( \frac{1}{\log T} \right) \right). $$ \end{lemma} Instead of this approximation of $\zeta$ by an Euler product we could also use the classical approximation by a Dirichlet polynomial (\cite[Theorem 4.11]{titch}), which was used in~\cite{aist,bs1,hilb}. However, the approximation by an Euler product is more convenient for us, since the resonator will also be defined as an Euler product.\\ Assume that $T$ is ``large'', and set $Y = \exp((\log T)^{10})$. By Lemma~\ref{lemma1} it suffices to prove Theorem \ref{th1} for $\zeta(1+it;Y)$ instead of $\zeta(1+it)$. Set $X = (\log T) (\log_2 T) / 6$, and for primes $p \leq X$ set $$ q_p = \left ( 1 - \frac{p}{X} \right). 
$$ This choice of ``weights'' is inspired by those used in the proof of Theorem 2.3 (case $\alpha=1$) in~\cite{hilb}. We set $q_1=1$ and $q_p =0$ for $p > X$, and extend our definition in a completely multiplicative way such that we obtain weights $q_n$ for all $n \geq 1$. Now we define $$ R(t) = \prod_{p \leq X} \left(1 - q_p p^{it} \right)^{-1}. $$ Then \begin{eqnarray} \log (|R(t)|) & \leq & \sum_{p \leq X} \left( \log X - \log p \right) \nonumber\\ & = & \pi(X) \log X - \vartheta(X), \end{eqnarray} where $\pi$ is the prime-counting function and $\vartheta$ is the first Chebyshev function. It is well-known that by partial summation one has $$ \pi(X) \log X - \vartheta(X) = \int_2^X \frac{\pi(t)}{t}~ dt = (1 + o(1)) \frac{X}{\log X}, $$ and thus we have \begin{equation} \label{R_est} |R(t)|^2 \leq T^{1/3 + o(1)} \end{equation} by our choice of $X$.\\ We can write $R(t)$ as a Dirichlet series in the form \begin{equation} \label{rdi} R(t) = \sum_{n=1}^\infty q_n n^{it}, \end{equation} and accordingly $$ |R(t)|^2 = \sum_{m,n = 1}^\infty q_m q_n \left(\frac{m}{n}\right)^{it}. $$ Note that all the weights $q_n,~n \geq 1,$ are non-negative reals. Set $\Phi(t)= e^{-t^2}$. By our choice of $Y$ we have $$ |\zeta(1+it;Y)| \ll \log Y \ll (\log T)^{10}. $$ Thus, using~\eqref{R_est}, we obtain \begin{equation} \label{err1} \left| \int_{|t| \geq T} \zeta(1+it;Y) |R(t)|^2 \Phi\left(\frac{\log T}{T} t\right) dt \right| \ll 1, \end{equation} and \begin{equation} \label{err2} \left| \int_{|t| \leq \sqrt{T}} \zeta(1+it;Y) |R(t)|^2 \Phi\left(\frac{\log T}{T} t\right) dt \right| \ll T^{5/6 + o(1)}. \end{equation} Using~\eqref{rdi} we can write \begin{eqnarray*} I_2 & := & \int_{-\infty}^\infty |R(t)|^2 \Phi\left(\frac{\log T}{T} t\right) dt \\ & = & \sum_{m,n = 1}^\infty \int_{-\infty}^{\infty} q_m q_n \left(\frac{m}{n}\right)^{it} \Phi\left(\frac{\log T}{T} t\right)~dt. 
\end{eqnarray*} We have the lower bound \begin{equation} \label{err5} \int_{\sqrt{T}}^T |R(t)|^2 \Phi\left(\frac{\log T}{T} t\right) dt \gg T^{1+o(1)}, \end{equation} which follows from $q_1=1$ and the positivity of the Fourier transform of $\Phi$, together with estimates similar to \eqref{err1} and \eqref{err2} to restrict to the desired range. Again using the fact that $\Phi$ has a positive Fourier transform we have \begin{eqnarray} & & \int_{-\infty}^\infty \zeta(1+it;Y) |R(t)|^2 \Phi\left(\frac{\log T}{T} t\right) dt \nonumber\\ & \geq & \int_{-\infty}^\infty \zeta(1+it;X) |R(t)|^2 \Phi\left(\frac{\log T}{T} t\right) dt. \label{xy} \end{eqnarray} Here we reduced the range of primes in $\zeta(1+it;~\cdot~)$ from $Y$ to $X$, so that the same primes appear in the definitions of $\zeta(1+it;~\cdot~)$ and $R$, respectively. Writing $$ \zeta(1+it;X) = \sum_{k=1}^\infty a_k k^{-it} $$ for appropriate coefficients $(a_k)_{k \geq 1}$, where $a_k \in \{0,1/k\},~k \geq 1$, we have \begin{eqnarray*} I_1& := & \int_{-\infty}^\infty \zeta(1+it;X) |R(t)|^2 \Phi\left(\frac{\log T}{T} t\right) dt \\ & = & \sum_{k=1}^\infty a_k \sum_{m,n = 1}^\infty \int_{-\infty}^{\infty} k^{-it} q_m q_n \left(\frac{m}{n}\right)^{it} \Phi\left(\frac{\log T}{T} t\right)~dt. \end{eqnarray*} Note that we can freely interchange the order of summations and integration, since everything is absolutely convergent. Assume $k$ to be fixed. 
Then, again using the fact that $\Phi$ has a positive Fourier transform, we have \begin{eqnarray*} & & \sum_{m,n = 1}^\infty \int_{-\infty}^{\infty} k^{-it} q_m q_n \left(\frac{m}{n}\right)^{it} \Phi\left(\frac{\log T}{T} t\right) dt \\ & \geq & \sum_{n = 1}^\infty ~\sum_{\substack{1 \leq m < \infty, \\ k | m }} \int_{-\infty}^{\infty} k^{-it} q_m q_n \left(\frac{m}{n}\right)^{it} \Phi\left(\frac{\log T}{T} t\right)dt \\ & = & \sum_{n = 1}^\infty ~\sum_{r=1}^\infty \int_{-\infty}^{\infty} k^{-it} \underbrace{q_{rk}}_{=q_r q_k} q_n \left(\frac{rk}{n}\right)^{it} \Phi\left(\frac{\log T}{T} t\right) dt \\ & = & q_k \underbrace{\sum_{n = 1}^\infty ~\sum_{r=1}^\infty \int_{-\infty}^{\infty} q_{r} q_n \left(\frac{r}{n}\right)^{it} \Phi\left(\frac{\log T}{T} t\right)~dt.}_{=I_2} \end{eqnarray*} Thus we have \begin{eqnarray} \frac{I_1}{I_2} & \geq & \sum_{k=1}^\infty a_k q_k \nonumber\\ & = & \prod_{p \leq X} \left(1 - q_p p^{-1}\right)^{-1} \nonumber\\ & = & \left(\prod_{p \leq X} \left(1 - p^{-1} \right)^{-1} \right) \left( \prod_{p \leq X} \frac{p-1}{p-q_p} \right). \label{i1i2} \end{eqnarray} For the first product we have \begin{equation} \label{firstp} \prod_{p \leq X} \left(1 - p^{-1} \right)^{-1} = e^\gamma \log X + \mathcal{O}(1) = e^\gamma (\log_2 T + \log_3 T) + \mathcal{O}(1) \end{equation} by Mertens' theorem. For the second product we have \begin{eqnarray*} - \log \left(\prod_{p \leq X} \frac{p-1}{p-q_p} \right) & = & - \sum_{p \leq X} \log \left(1 - \frac{p}{p+(p-1)X} \right) \\ & \ll & \sum_{p \leq X} \frac{1}{X} \\ & \ll & \frac{1}{\log X}. \end{eqnarray*} Thus together with~\eqref{i1i2} and~\eqref{firstp} we have \begin{eqnarray*} \frac{I_1}{I_2} \geq e^\gamma (\log_2 T + \log_3 T) + \mathcal{O} (1). 
\end{eqnarray*} From~\eqref{err1}--\eqref{xy} we deduce that also $$ \frac{\left| \int_{\sqrt{T}}^T \zeta(1+it;Y) |R(t)|^2 \Phi\left(\frac{\log T}{T} t\right) dt \right|}{\int_{\sqrt{T}}^T |R(t)|^2 \Phi\left(\frac{\log T}{T} t\right) dt} \geq e^\gamma \left(\log_2 T + \log_3 T \right) + \mathcal{O}(1). $$ Thus we have $$ \max_{t \in [\sqrt{T},T]} |\zeta(1+it;Y)| \geq e^\gamma \left(\log_2 T + \log_3 T\right) + \mathcal{O}(1). $$ As noted at the beginning of the proof, by Lemma~\ref{lemma1} this estimate remains valid if we replace $\zeta(1+it;Y)$ by $\zeta(1+it)$. This proves Theorem~\ref{th1}.\\ In conclusion, we add some remarks on the method used in the present paper. As noted in the introduction, earlier implementations of the resonance method relied on a combination of ``positivity'' and ``sparsity'' properties. In the present paper we show that it is possible to leave out the sparsity requirement, at least in one particular instance. One could also obtain the results from~\cite{aist} for the case $\sigma \in (1/2,1)$ using the method from the present paper without problems. However, it does not seem that the results from~\cite{bs1} for the case $\sigma=1/2$ could also be obtained using our argument, since the resonator function there necessarily has a more complicated structure (which is not of a simple Euler product form), while our argument relies on the fact that one can use a resonator which has completely multiplicative coefficients (which is essential for the argument, since any other coefficients would destroy the required self-similarity property).\\ We want to emphasize that the only real restriction on the size of the resonator in our argument is~\eqref{R_est}, which gives an upper bound for $|R(t)|$. 
This is different from earlier versions of the ``long resonator'' argument, where bounds on the cardinality of the support of $R(t)$ (that is, on the number of non-zero coefficients in the Dirichlet series representation) are required.\\ Another remark is that while the sparsity requirement may be unnecessary (at least in some cases), the positivity requirement still plays a crucial role in our argument. This prevents a possible generalization of the method to the case of functions whose Dirichlet series representation does not only contain non-negative real numbers as coefficients. This topic is also discussed in~\cite{MR3554732} in some detail. In particular, we have not been able to obtain a result for extreme values of $1/\zeta$ beyond those mentioned in \cite{gse}.\\
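As a purely numerical illustration (no part of the proof, and with a hypothetical small cutoff $X = 1000$ rather than the $X$ of the proof), one can check the two quantitative inputs used above: the bound $\log |R(t)| \leq \pi(X) \log X - \vartheta(X)$ behind \eqref{R_est}, and Mertens' product $\prod_{p \leq X} (1-1/p)^{-1} \approx e^\gamma \log X$ from \eqref{firstp}:

```python
import math

# Numerical illustration with a hypothetical cutoff X = 1000.

def primes_up_to(X):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (X + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(X ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, X + 1, p):
                sieve[q] = False
    return [p for p in range(2, X + 1) if sieve[p]]

X = 1000
ps = primes_up_to(X)

# pi(X) log X - vartheta(X), the upper bound for log |R(t)|,
# compared with its asymptotic value X / log X:
log_R_bound = len(ps) * math.log(X) - sum(math.log(p) for p in ps)
asymptotic = X / math.log(X)

# Mertens' third theorem: prod_{p <= X} (1 - 1/p)^{-1} ~ e^gamma log X.
prod = 1.0
for p in ps:
    prod *= 1.0 / (1.0 - 1.0 / p)
gamma = 0.5772156649015329
mertens_ratio = prod / (math.exp(gamma) * math.log(X))
```

At $X = 1000$ the Mertens ratio is already within a few percent of $1$, while $\pi(X)\log X - \vartheta(X)$ is still noticeably larger than $X/\log X$, reflecting the slow convergence of the $o(1)$ term in that estimate.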
\section{Our Approach} We propose $\mathtt{MEDL\_CVAE}$ to address two challenges in multi-entity dependence learning. The first challenge is the noisy and potentially multi-modal responses. For example, consider our species distribution modeling application. One bird watcher can make slightly different observations during two consecutive visits to the same forest location. He may be able to detect a song bird during one visit but not the other if, for example, the bird does not sing both times. This suggests that, even under the very best effort of bird watchers, there is still noise inherently associated with the observations. Another, perhaps more complicated phenomenon is the multi-modal response, which results from intertwined ecological processes such as mutualism and commensalism. Consider territorial species such as the Red-winged and Yellow-headed Blackbirds, both of which live in open marshes in the Northwestern United States. However, the Yellowheads tend to chase the Redwings out of their territories. As a result, a bird watcher would see either Red-winged or Yellow-headed Blackbirds at an open marsh, but seldom both of them. This suggests that, conditioned on an open marsh environment, there are two possible modes in the response: seeing the Red-winged but not the Yellow-headed, or seeing the Yellow-headed but not the Red-winged. The second challenge comes from the incorporation of rich contextual information such as remote sensing imagery, which provides a detailed feature description of the underlying environment, especially in conjunction with the flexibility of deep neural networks. Nevertheless, previous multi-entity models, such as in \cite{chen2016deep,Guo2011MultiCDN,Sutton2012CRF}, rely on sampling approaches to estimate the partition function during training. It is difficult to incorporate such a sampling process into the back-propagation of deep neural networks.
This limitation poses serious challenges to taking advantage of the rich contextual information. To address the aforementioned two challenges, we propose a conditional generating model, which makes use of hidden variables to represent the noisy and multi-modal responses. $\mathtt{MEDL\_CVAE}$ incorporates this model into an end-to-end training pipeline using a conditional variational autoencoder, which optimizes a variational lower bound of the conditional likelihood function. \begin{figure}[t] \begin{minipage}[t]{0.45\linewidth} \includegraphics[width=1.0\linewidth]{figs/noise_gen_model} \captionsetup{labelformat=empty} \caption{(a) conditional generating process} \end{minipage}% \begin{minipage}[t]{0.54\linewidth} \centering \includegraphics[width=1.0\linewidth]{figs/network} \captionsetup{labelformat=empty} \caption{(b) neural network architecture} \end{minipage} \caption{(a) Our proposed conditional generating process. Given contextual features $x_i$ such as satellite images, we use hidden variables $z_i$ conditionally generated based on $x_i$ to capture noisy and multi-modal responses. The response $y_i$ depends on both contextual information $x_i$ and hidden variables $z_i$. See the main text for details. (b) Overview of the neural network architecture of $\mathtt{MEDL\_CVAE}$ for both training and testing stages. $\oplus$ denotes a concatenation operator.} \label{fig:method} \end{figure} \subsubsection{Conditional Generating Process} Unlike classic approaches such as probit models \cite{chib1998analysis}, which have a single mode, we use a conditional generating process, which models noisy and multi-modal responses using additional hidden variables. The generating process is depicted in Figure~\ref{fig:method}. In the generating process, we are given contextual features $x_i$, which, for example, contain a satellite image. Then we assume a set of hidden variables $z_i$, which are generated based on a normal distribution conditioned on the values of $x_i$.
The binary response variables $y_{i,j}$ are drawn from a Bernoulli distribution, whose parameters depend on both the contextual features $x_i$ and hidden variables $z_i$. The complete generating process is: \begin{align} x_i &: \mbox{contextual features},\\ z_i|x_i &\sim N\left(\mu_d(x_i), \Sigma_d(x_i)\right),\\ y_{i,j}|z_i,x_i &\sim \mbox{Bernoulli}\left(p_j(z_i,x_i)\right). \end{align} Here, $\mu_d(x_i)$, $\Sigma_d(x_i)$ and $p_j(z_i,x_i)$ are general functions depending on $x_i$ and $z_i$, which are modeled as deep neural networks in our application and learned from data. We denote the parameters in these neural networks as $\theta_d$. The machine learning problem is to find the best parameters that maximize the conditional likelihood $\prod_{i} Pr(y_i|x_i)$. This generating process is able to capture noisy and potentially multi-modal distributions. Consider the Red-winged and the Yellow-headed Blackbird example. We use $y_{i,1}$ to denote the occurrence of Red-winged Blackbird and $y_{i,2}$ to denote the occurrence of Yellow-headed Blackbird. Conditioned on the same environmental context $x_i$ of an open marsh, the outputs $(y_{i,1}=0, y_{i,2}=1)$ and $(y_{i,1}=1, y_{i,2}=0)$ should both have high probability. Therefore, there are two modes in the probability distribution. Notice that it is very difficult to describe this case in any probabilistic model that assumes a single mode. Our generating process provides the flexibility to capture multi-modal distributions of this type. The high-level idea is similar to mixture models, where we use hidden variables $z_i$ to denote which mode the actual probabilistic outcome is in. For example, we can have $z_i|x_i\sim N(0,I)$ and two functions $p_1(z)$, $p_2(z)$, where half of the $z_i$ values are mapped to $(p_1=0,p_2=1)$ and the other half to $(p_1=1,p_2=0)$.
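This two-mode behavior is easy to simulate. The sketch below is a toy stand-in, with hand-picked functions replacing the learned networks $\mu_d$, $\Sigma_d$, $p_j$ (nothing here is from the actual model):

```python
import random

# Toy sketch of the generating process: z | x ~ N(0, 1) stands in for the
# learned prior N(mu_d(x), Sigma_d(x)), and p(z, x) maps half of the z-space
# to (p_1, p_2) = (1, 0) and the other half to (0, 1), mimicking the
# Red-winged / Yellow-headed Blackbird example.

random.seed(0)

def sample_response(x):
    z = random.gauss(0.0, 1.0)                        # z_i | x_i
    p1, p2 = (1.0, 0.0) if z >= 0.0 else (0.0, 1.0)   # p_j(z_i, x_i)
    y1 = 1 if random.random() < p1 else 0             # y_{i,1} ~ Bernoulli(p_1)
    y2 = 1 if random.random() < p2 else 0             # y_{i,2} ~ Bernoulli(p_2)
    return (y1, y2)

samples = [sample_response(x=None) for _ in range(2000)]
counts = {(1, 0): samples.count((1, 0)), (0, 1): samples.count((0, 1))}
```

Both modes occur with frequency close to $1/2$, whereas a single-mode model with independent Bernoulli outputs fit to the same data would assign probability about $1/4$ to each of the never-observed outcomes $(0,0)$ and $(1,1)$.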
Figure~\ref{fig:method} provides an example, where the $z_i$ values in the region with a yellow background are mapped to one value, and the remaining values are mapped to the other value. In this way, the model will produce both outcomes $(y_{i,1}=0, y_{i,2}=1)$ and $(y_{i,1}=1, y_{i,2}=0)$ with high probability. \subsubsection{Conditional Variational Autoencoder} Our training algorithm is to maximize the conditional likelihood $Pr(y_i|x_i)$. Nevertheless, a direct method would result in the following optimization problem: \begin{small} \begin{equation} \max_{\theta_d} \sum_i \log Pr(y_i|x_i) = \sum_i \log \int Pr(y_i|x_i,z_i) Pr(z_i|x_i) \mbox{d} z_i\nonumber \end{equation} \end{small} which is intractable because of a hard integral inside the logarithmic function. Instead, we turn to maximizing a variational lower bound of the conditional log likelihood. To do this, we use a variational function family $Q(z_i|x_i, y_i)$ to approximate the posterior $Pr(z_i | x_i, y_i)$. In practice, $Q(z_i|x_i,y_i)$ is modeled using a conditional normal distribution: \begin{equation} Q(z_i|x_i,y_i) = N(\mu_e(x_i, y_i), \Sigma_e(x_i, y_i)). \end{equation} Here, $\mu_e(x_i, y_i)$ and $\Sigma_e(x_i, y_i)$ are general functions, and are modeled using deep neural networks whose parameters are denoted as $\theta_e$. We assume $\Sigma_e$ is a diagonal matrix in our formulation. Following similar ideas in \cite{kingma2013auto,kingma2014semi}, we can prove the following variational equality: \begin{small} \begin{align} &\log Pr(y_i | x_i) - D\left[Q(z_i|x_i,y_i) || Pr(z_i | x_i, y_i)\right]\label{eq:vari}\\ =&\mathtt{E}_{z_i\sim Q(z_i | x_i, y_i)}\left[\log Pr(y_i| z_i, x_i)\right] - D\left[Q(z_i|x_i, y_i) || Pr(z_i | x_i)\right] \nonumber \end{align} \end{small} On the left-hand side, the first term is the conditional likelihood, which is our objective function.
The second term is the Kullback-Leibler (KL) divergence, which measures how close the variational approximation $Q(z_i|x_i,y_i)$ is to the true posterior likelihood $Pr(z_i|x_i, y_i)$. Because $Q$ is modeled using a neural network, which captures a rich family of functions, we assume that $Q(z_i|x_i, y_i)$ approximates $Pr(z_i|x_i, y_i)$ well, and therefore the second KL term is almost zero. Since the KL divergence is always non-negative, the right-hand side of Equation~\ref{eq:vari} is a tight lower bound of the conditional likelihood, which is known as the variational lower bound. We therefore directly maximize this value and the training problem becomes: \begin{align} \max_{\theta_d, \theta_e} ~~~~&\sum_i \mathtt{E}_{z_i\sim Q(z_i | x_i, y_i)}\left[\log Pr(y_i| z_i, x_i)\right] - \nonumber\\ &D\left[Q(z_i|x_i, y_i) || Pr(z_i | x_i)\right].\label{eq:obj} \end{align} The first term of the objective function in Equation~\ref{eq:obj} can be directly formalized as two neural networks concatenated together -- one encoder network and the other decoder network, following the reparameterization trick, which is used to backpropagate the gradient inside neural nets. At a high level, suppose $r\sim N(0, I)$ are samples from the standard Gaussian distribution; then $z_i\sim Q(z_i | x_i, y_i)$ can be generated from a ``recognition network'', which is part of the ``encoder network'': $z_i \leftarrow \mu_e(x_i, y_i) + \Sigma_e(x_i, y_i) r$. The ``decoder network'' takes the input of $z_i$ from the encoder network and feeds it to the neural network representing the function $Pr(y_i|z_i, x_i) = \prod_{j=1}^l \left(p_j(z_i,x_i)\right)^{y_{i,j}} \left(1-p_j(z_i,x_i)\right)^{1-y_{i,j}}$ together with $x_i$. The second KL divergence term can be calculated in closed form. The entire neural network structure is shown in Figure \ref{fig:method}.
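For concreteness, these two computational ingredients, the reparameterized sample and the closed-form KL term, can be sketched as follows (a minimal stand-alone version for diagonal Gaussians; the function names are ours, not from the paper's code):

```python
import math

# (1) Reparameterization trick: z = mu + sigma * r with r ~ N(0, I), so the
#     sample is a deterministic function of (mu, sigma) and gradients can
#     flow through the encoder parameters.
def reparameterize(mu, sigma, r):
    return [m + s * ri for m, s, ri in zip(mu, sigma, r)]

# (2) Closed-form KL divergence D[ N(mu_q, diag sig_q^2) || N(mu_p, diag sig_p^2) ]
#     between the recognition distribution Q(z|x,y) and the prior Pr(z|x).
def kl_diag_gaussians(mu_q, sig_q, mu_p, sig_p):
    return sum(
        math.log(sp / sq) + (sq ** 2 + (mq - mp) ** 2) / (2.0 * sp ** 2) - 0.5
        for mq, sq, mp, sp in zip(mu_q, sig_q, mu_p, sig_p)
    )
```

The KL term vanishes exactly when the recognition and prior distributions coincide, and is positive otherwise, which is what makes the right-hand side of Equation~\ref{eq:vari} a lower bound.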
We refer to $Pr(z|x)$ as the \textit{prior network}, $Q(z|x,y)$ as the \textit{recognition network} and $Pr(y|x,z)$ as the \textit{decoder network}. These three networks are all multi-layer fully connected neural networks. The fourth network, the \textit{feature network}, composed of multi-layer convolutional or fully connected layers, extracts high-level features from the contextual source. All four neural networks are trained simultaneously using stochastic gradient descent. \section{Conclusion} In this paper, we propose $\mathtt{MEDL\_CVAE}$ for multi-entity dependence learning, which encodes a conditional multivariate distribution as a generating process. As a result, the variational lower bound of the joint likelihood can be optimized via a conditional variational auto-encoder and trained end-to-end on GPUs. Tested on two real-world applications in computational sustainability, $\mathtt{MEDL\_CVAE}$ captures rich dependency structures, scales better than previous methods, and further improves the joint likelihood by taking advantage of very rich context information that is beyond the capacity of previous methods. Future directions include exploring the connection between the current formulation of $\mathtt{MEDL\_CVAE}$, based on deep neural nets, and the classic multivariate response models in statistics. \section{Experiments} \subsection{Datasets and Implementation Details} We evaluate our method on two computational sustainability related datasets. The first one is a crowd-sourced bird observation dataset collected from the \textit{eBird} citizen science project \cite{munson2012ebird}. Each record in this dataset is referred to as a checklist, in which the bird observer reports all the species he or she detects together with the time and the geographical location of an observational session.
Crossed with the National Land Cover Dataset (NLCD) \cite{homer2015completion}, we get a 15-dimensional feature vector for each location which describes the nearby landscape composition with 15 different land types such as water, forest, etc. In addition, to take advantage of rich external context information, we also collect satellite images for each observation site by matching the longitude and latitude of the observational site to Google Earth\footnote{https://www.google.com/earth/}. As the upper part of Figure \ref{fig:hist} shows, the satellite images of different geographical locations are quite different. Therefore these images contain rich geographical information. Each image covers an area of 12.3$\mbox{km}^2$ near the observation site. For the use of training and testing, we transform all this data into the form ($x_i,y_i$), where $x_i$ denotes the contextual features including NLCD and satellite images and $y_i$ denotes the multi-species distribution. The whole dataset contains all the checklists from the Bird Conservation Region (BCR) 13 \cite{us2000north} in the last two weeks of May from 2004 to 2014, which has 50,949 observations in total. Since May is a migration season and many non-native birds fly over BCR 13, this dataset provides a good opportunity to study these migratory birds. We choose the top 100 most frequently observed birds as the target species, which cover over 95\% of the records in our dataset. A simple mutual information analysis reveals rich correlation structure among these species. \begin{figure}[tb] \centering \includegraphics[width=0.75\linewidth]{figs/hist_2} \includegraphics[width=0.75\linewidth]{figs/amazon.png} \caption{(Top) Three satellite images (left) of different landscapes contain rich geographical information. We can also see that the histograms of RGB channels for each image (middle) contain useful information and are good summary statistics.
(Bottom) Examples of sample satellite image chips and their corresponding landscape composition.} \label{fig:hist} \end{figure} Our second application is the Amazon rainforest landscape analysis\footnote{https://www.kaggle.com/c/planet-understanding-the-amazon-from-space} derived from Planet's full-frame analytic scene products. Each sample in this dataset contains a satellite image chip covering a ground area of 0.9 $\mbox{km}^2$. The chips were analyzed using the Crowd Flower\footnote{https://www.crowdflower.com/} platform to obtain ground-truth composition of the landscape. There are 17 composition label entities; they represent a reasonable subset of phenomena of interest in the Amazon basin and can broadly be broken into three groups: atmospheric conditions, common land cover phenomena and rare land use phenomena. Each chip has one or more atmospheric label entities and zero or more common and rare label entities. Sample chips and their composition are demonstrated in the lower part of Figure \ref{fig:hist}. There exist rich correlations between label entities; for instance, agriculture has a high probability of co-occurring with water and cultivation. We randomly choose 34,431 samples for training, validation and testing. The details of the two datasets are listed in Table \ref{table:dataset}. We propose two different neural network architectures for the \textit{feature network} to extract useful features from satellite images: a multi-layer fully connected neural network (MLP) and a convolutional neural network (CNN). We also rescale images into different resolutions: Image64 for 64*64 pixels and Image256 for 256*256 pixels. In addition, we experiment with summary statistics such as the histograms of an image's RGB channels (upper part of Figure \ref{fig:hist}) to describe an image (denoted as Hist).
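The Hist encoding just mentioned can be sketched in a few lines of numpy: each RGB band is reduced to a normalized histogram over $b$ equal-width intensity bins, which is invariant to pixel permutations. The function name and bin layout below are illustrative choices, not the paper's exact preprocessing.

```python
import numpy as np

def rgb_histogram(image, b=64):
    """Encode an RGB image as a d x b matrix H (d = 3 bands).

    Row i holds the fraction of band i's pixels that fall in each of
    b equal-width intensity bins, so H[i, j] is the percentage of
    pixels of band i in range section j. Illustrative sketch only.
    """
    d = image.shape[-1]                      # number of bands (3 for RGB)
    H = np.empty((d, b))
    for i in range(d):
        counts, _ = np.histogram(image[..., i], bins=b, range=(0, 256))
        H[i] = counts / counts.sum()         # normalize: rows sum to 1
    return H

# A 64x64 RGB image becomes a 3 x 64 summary for b = 64 (Hist64).
img = np.random.randint(0, 256, size=(64, 64, 3))
H = rgb_histogram(img, b=64)
```

With $b=128$ the same function yields the Hist128 encoding.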
Inspired by \cite{you2017deep}, which assumes that permutation invariance holds and that only the numbers of pixels of each type in an image (pixel counts) are informative, we transform each image into a matrix $\mathbf{H}\in\mathbb{R}^{d\times b}$, where $d$ is the number of bands and $b$ is the number of discrete range sections; thus $H_{i,j}$ indicates the percentage of pixels in the range of section $j$ for band $i$. We use RGB so $d=3$. We utilize histogram models with two different $b$ settings, Hist64 for $b=64$ and Hist128 for $b=128$. All training and testing of our proposed $\mathtt{MEDL\_CVAE}$ is performed on one NVIDIA Quadro 4000 GPU with 8GB memory. The whole training process lasts 300 epochs, using a batch size of 512 and the Adam optimizer \cite{kingma2014adam} with a learning rate of $10^{-4}$, and utilizing batch normalization \cite{ioffe2015batch}, a 0.8 dropout rate \cite{srivastava2014dropout} and early stopping to accelerate the training process and to prevent overfitting. \begin{table}[] \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \centering \begin{tabular}{l|c|c|c} \hline \textbf{Dataset} & \textbf{Training Set Size} & \textbf{Test Set Size} & \textbf{\# Entities} \\ \hline \textit{eBird} & 45855 & 5094 & 100 \\ \textit{Amazon} & 30383 & 4048 & 17 \\ \hline \end{tabular} \caption{Statistics of the \textit{eBird} and the \textit{Amazon} datasets.} \label{table:dataset} \end{table} \subsection{Experimental Results} \begin{table}[htb!] \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \centering \begin{tabular}{l|c|c} \hline \textbf{Method} & \textbf{Neg.
JLL} & \tabincell{c}{\textbf{Time} \\ \textbf{(min)}} \\ \hline NLCD+MLP & 36.32 & 2 \\ Image256+ResNet50 & 34.16 & 5.3 hrs \\ NLCD+Image256+ResNet50 & 34.48 & 5.7 hrs\\ NLCD+Hist64+MLP & 34.97 & 3\\ NLCD+Hist128+MLP & 34.62 & 4\\ NLCD+Image64+MLP & 33.73 & 9\\ \hline NLCD+MLP+PCC & 35.99 & 21 \\ NLCD+Hist128+MLP+PCC & 35.07 & 33\\ NLCD+Image64+MLP+PCC & 34.48 & 53\\ NLCD+DMSE & 30.53 & 20 hrs\\ \hline NLCD+MLP+$\mathtt{MEDL\_CVAE}$ & 30.86 & 9 \\ NLCD+Hist64+MLP+$\mathtt{MEDL\_CVAE}$ & 28.86 & 20\\ NLCD+Hist128+MLP+$\mathtt{MEDL\_CVAE}$ & 28.71 & 22\\ NLCD+Image64+MLP+$\mathtt{MEDL\_CVAE}$ & \textbf{28.24} & 48\\ \hline \end{tabular} \caption{Negative joint log-likelihood and training time of models assuming independence (first section), previous multi-entity dependence models (second section) and our $\mathtt{MEDL\_CVAE}$ (third section) on the $\textit{eBird}$ test set. $\mathtt{MEDL\_CVAE}$ achieves lower negative log-likelihood than other methods with the same feature network structure and context input while taking much less training time. Our model is also the only joint model (second and third sections) that achieves its best log-likelihood taking images as inputs, while the other models must rely on summary statistics to get good but suboptimal results within a reasonable time limit.} \label{table:results_ebird} \end{table} We compare the proposed $\mathtt{MEDL\_CVAE}$ with two different groups of baseline models. The first group is \textit{models assuming independence structures among entities}; i.e., the distributions of all entities are independent of each other conditioned on the feature vector. Within this group, we have tried models with different feature inputs, including models with the highly advanced deep neural net structure ResNet \cite{he2016identity}. The second group is \textit{previously proposed multi-entity dependence models}, which have the ability to capture correlations among entities.
Within this group, we compare with the recently proposed Deep Multi-Species Embedding (DMSE) \cite{chen2016deep}. This model is closely related to CRFs, representing a wide class of energy based approaches. Moreover, it further improves classic energy models by taking advantage of the flexibility of deep neural nets to obtain useful feature descriptions. Nevertheless, its training process uses classic MCMC sampling approaches and therefore cannot be fully integrated on GPUs. We also compare with Probabilistic Classifier Chains (PCC) \cite{Dembczynski2010PCC}, which is a representative approach among a series of models proposed in multi-label classification. Our baselines and $\mathtt{MEDL\_CVAE}$ are all trained using different feature network architectures as well as satellite imagery with different resolutions and encodings. We use the Negative Joint Distribution Log-likelihood (Neg. JLL) as the main indicator of a model's performance: $-\frac{1}{N}\sum\limits_{i=1}^{N}\log Pr(y_i|x_i)$, where $N$ is the number of samples in the test set. For $\mathtt{MEDL\_CVAE}$ models, $Pr(y_i|x_i)=\mathtt{E}_{z_i\sim Pr(z_i | x_i)}\left[ Pr(y_i| z_i, x_i)\right]$. We obtain 10,000 samples from $Pr(z_i | x_i)$ to estimate the expectation. We also double-checked that the estimate is close to, and consistent with, the variational lower bound in Equation~\ref{eq:vari}. The sampling process can be performed on GPUs within a couple of minutes. The experimental results on the $\textit{eBird}$ and $\textit{Amazon}$ datasets are shown in Tables \ref{table:results_ebird} and \ref{table:results_amazon}, respectively. \begin{table}[htb!] \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \centering \begin{tabular}{l|c} \hline \textbf{Method} & \textbf{Neg.
JLL} \\ \hline Image64+MLP & 2.83 \\ Hist128+MLP & 2.44 \\ Image64+CNN & 2.16 \\ \hline Image64+MLP+PCC & 2.95 \\ Hist128+MLP+PCC & 2.60 \\ Image64+CNN+PCC & 2.45 \\ \hline Image64+MLP+$\mathtt{MEDL\_CVAE}$ & 2.37 \\ Hist128+MLP+$\mathtt{MEDL\_CVAE}$ & 2.09 \\ Image64+CNN+$\mathtt{MEDL\_CVAE}$ & \textbf{2.03} \\ \hline \end{tabular} \caption{Performance of baseline models and $\mathtt{MEDL\_CVAE}$ on the \textit{Amazon} dataset. Our method clearly outperforms models assuming independence (first section) and previous multi-entity dependence models (second section) with various types of context input and feature network structures.} \label{table:results_amazon} \end{table} We can observe that: (1) {\bf MEDL\_CVAE significantly outperforms all independent models} given the same feature network (CNN or MLP) and context information (Image or Hist), even if we use highly advanced deep neural net structures such as ResNet in the independent models. This shows that our method is able to capture rich dependency structures among entities, therefore outperforming approaches that assume independence among entities. (2) Compared with previous multi-entity dependence models, {\bf MEDL\_CVAE trains in an order of magnitude less time}. Using the low-dimensional context information NLCD, which is a 15-dimensional vector, PCC's training needs nearly twice the time of $\mathtt{MEDL\_CVAE}$ and DMSE needs over 130 times as long (20 hours). (3) In each model group in Table~\ref{table:results_ebird}, it is clear that adding satellite images improves the performance, which shows that {\bf rich context is informative.} (4) Only our model $\mathtt{MEDL\_CVAE}$ is able to take full advantage of the rich context in satellite images. Other models, such as DMSE, already suffer from a long training time with low-dimensional feature inputs such as NLCD, and cannot scale to using satellite images.
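The Neg. JLL numbers in Tables \ref{table:results_ebird} and \ref{table:results_amazon} rest on the Monte Carlo estimate $Pr(y|x)\approx\frac{1}{S}\sum_s Pr(y|x,z_s)$ with $z_s\sim Pr(z|x)$. The following numpy sketch illustrates that estimate with stand-in networks: a standard Gaussian in place of the learned prior network and a fixed logistic map in place of the decoder network (both hypothetical, for illustration only).

```python
import numpy as np

rng = np.random.default_rng(0)

def neg_joint_ll(x, y, W, n_samples=10_000, z_dim=8):
    """Estimate -log Pr(y|x) = -log E_{z ~ Pr(z|x)}[ Pr(y|x,z) ].

    Stand-ins: Pr(z|x) is a standard Gaussian and the decoder maps
    (x, z) to per-entity Bernoulli probabilities through a fixed
    weight matrix W; in the real model both are learned networks.
    """
    z = rng.standard_normal((n_samples, z_dim))   # z_s ~ Pr(z|x)
    logits = z @ W + x.sum()                      # toy decoder
    p = 1.0 / (1.0 + np.exp(-logits))             # Pr(y_j = 1 | x, z)
    # joint probability of the observed binary vector y under each z
    joint = np.prod(np.where(y == 1, p, 1.0 - p), axis=1)
    return -np.log(joint.mean())

x = np.array([0.2, -0.1])                 # toy context features
y = np.array([1, 0, 1, 0, 0])             # observed entity indicators
W = 0.1 * rng.standard_normal((8, 5))     # hypothetical decoder weights
nll = neg_joint_ll(x, y, W)
```

Averaging this quantity over a test set gives the Neg. JLL metric.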
It should be noted that NLCD+Image64+MLP+$\mathtt{MEDL\_CVAE}$ achieves much better performance with only 1/25 of the training time of DMSE. PCC needs less training time than DMSE but doesn't perform well on joint likelihood. It is clear that, {\bf due to the end-to-end training process on GPUs, our method is able to take advantage of rich context input to further improve multi-entity dependence modeling}, which is beyond the capacity of previous models. \begin{figure}[tb] \centering \includegraphics[width=0.7\linewidth]{figs/ebird_ap} \caption{Precision-Recall curves for several multi-entity dependence models on the $\textit{ebird}$ dataset (better viewed in color). $\mathtt{MEDL\_CVAE}$ utilizing images as input outperforms other joint methods without imagery.} \label{fig:ebird_ap} \end{figure} To further demonstrate $\mathtt{MEDL\_CVAE}$'s modeling power, we plot the precision-recall curves shown in Figure \ref{fig:ebird_ap} for all dependency models on the $\textit{ebird}$ dataset. The precision and recall are defined on the marginals to predict the occurrence of individual species and are averaged over all 100 species in the dataset. As we can see, our method outperforms the other models after taking rich context information into account. \begin{figure}[t] \begin{minipage}[t]{0.55\linewidth} \includegraphics[width=1.0\linewidth]{figs/bird_embed} \captionsetup{labelformat=empty} \caption{(a) bird embedding} \end{minipage}% \begin{minipage}[t]{0.44\linewidth} \centering \includegraphics[width=1.0\linewidth]{figs/amazon_loc} \captionsetup{labelformat=empty} \caption{(b) satellite image embedding} \end{minipage} \caption{(a) Visualization of the vectors inside the decoder network's last fully connected layer gives a reasonable embedding of multiple bird species when our model is trained on the \textit{eBird} dataset. Birds living in the same habitats are clustered together.
(b) Visualization of the posterior $z\sim Q(z|x,y)$ gives a good embedding of landscape composition when our model is trained on the \textit{Amazon} dataset. Pictures with similar landscapes are clustered together.} \label{fig:embed} \end{figure} \subsubsection{Latent Space and Hidden Variables Analysis} In order to qualitatively confirm that our $\mathtt{MEDL\_CVAE}$ learns useful dependence structure among entities, we analyze the latent space formed by the hidden variables in the neural network. Inspired by \cite{chen2016deep}, each vector of the decoder network's last fully connected layer can be treated as an embedding showing the relationships among species. Figure \ref{fig:embed} visualizes the embedding using t-SNE \cite{maaten2008visualizing}. We can observe that birds of the same category or with similar environmental preferences cluster together. In addition, previous work \cite{kingma2013auto} has shown that the $\textit{recognition network}$ in a variational auto-encoder is able to cluster high-dimensional data. Therefore we conjecture that the posterior of $z$ from the $\textit{recognition network}$ should carry meaningful information on the cluster groups. Figure \ref{fig:embed} visualizes the posterior $z\sim Q(z|x,y)$ in 2D space using t-SNE on the $\textit{Amazon}$ dataset. We can see that satellite images of similar landscape composition also cluster together. \section{Introduction} Learning the dependencies among multiple entities is an important problem with many real-world applications. For example, in the sustainability domain, the spatial distribution of one species depends on other species due to their interactions in the form of mutualism, commensalism, competition and predation \cite{MacKenzie2004cooccurrence}. In natural language processing, the topics of an article are often correlated \cite{nam2014large}. In computer vision, an image may have multiple correlated tags \cite{wang2016cnn}.
The key challenge behind dependency learning is to capture correlation structures embedded among exponentially many outcomes. One classic approach is Conditional Random Fields (CRF) \cite{Lafferty2001CRF}. However, to handle the intractable partition function resulting from multi-entity interactions, CRFs have to incorporate approximate inference techniques such as contrastive divergence \cite{hinton2002training}. In the related domain of multi-label classification, the classifier chains (CC) approach \cite{Read09ClassifierChain} decomposes the joint likelihood into a product of conditionals and reduces a multi-label classification problem to a series of binary prediction problems. However, as pointed out by \cite{Dembczynski2010PCC}, finding the joint mode of CC is also intractable, and to date only approximate search methods are available \cite{dembczynski2012label}. \begin{figure}[tb] \centering \includegraphics[width=0.6\linewidth]{figs/intro_pic} \caption{Two computational sustainability related applications for $\mathtt{MEDL\_CVAE}$. The first application is to study the interactions among bird species in the crowdsourced \textit{eBird} dataset and environmental covariates including those from satellite images. The second application is to tag satellite images with a few potentially overlapping landscape categories and track the human footprint in the Amazon rainforest. } \label{fig:intro} \end{figure} The availability of rich contextual information, such as millions of high-resolution satellite images, together with recent developments in deep learning, creates both opportunities and challenges for multi-entity dependence learning. In terms of opportunities, rich contextual information creates the possibility of improving predictive performance, especially when it is combined with highly flexible deep neural networks.
The challenge, however, is to design a nimble scheme that can both tightly integrate with deep neural networks and capture correlation structures among exponentially many outcomes. Deep neural nets are commonly used to extract features from contextual information sources, and can effectively use highly parallel infrastructure such as GPUs. However, classical approaches for structured output, such as sampling, approximate inference and search methods, typically cannot be easily parallelized. Our contribution is {\bf an end-to-end approach to multi-entity dependence learning based on a conditional variational auto-encoder, which handles high-dimensional output spaces effectively, and can be tightly integrated with deep neural nets to take advantage of rich contextual information}. Specifically, (i) we propose a novel \emph{generating process} to encode the conditional multivariate distribution in multi-entity dependence learning, in which we bring in a set of hidden variables to capture the randomness in the joint distribution. (ii) The novel conditional generating process allows us to work with imperfect data, capturing noisy and potentially \emph{multi-modal responses}. (iii) The generating process also allows us to encode the entire problem via \emph{a conditional variational auto-encoder}, {tightly integrated with deep neural nets and implemented end-to-end on GPUs}. Using this approach, we are able to leverage rich contextual information to enhance the performance of $\mathtt{MEDL}$ in a way that is beyond the capacity of previous methods. We apply our \underline{M}ulti-\underline{E}ntity \underline{D}ependence \underline{L}earning via \underline{C}onditional \underline{V}ariational \underline{A}uto-\underline{e}ncoder ($\mathtt{MEDL\_CVAE}$) approach to {\bf two sustainability related real-world applications} \cite{gomes2009computational}.
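The conditional variational lower bound that this auto-encoder optimizes, $\mathbb{E}_{Q(z|x,y)}[\log Pr(y|x,z)] - \mathrm{KL}(Q(z|x,y)\,\|\,Pr(z|x))$, can be sketched with diagonal Gaussian prior and recognition distributions, for which the KL term is closed-form. This is an illustrative numpy sketch with a stand-in decoder, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)

def cvae_lower_bound(y, mu_q, logvar_q, mu_p, logvar_p, decode):
    """One-sample estimate of the conditional variational lower bound
    E_{q(z|x,y)}[log Pr(y|x,z)] - KL(q(z|x,y) || Pr(z|x)),
    with q and the prior taken as diagonal Gaussians. `decode` stands
    in for the decoder network and returns Bernoulli probabilities.
    """
    # reparameterization trick: z = mu + sigma * eps keeps z differentiable
    eps = rng.standard_normal(mu_q.shape)
    z = mu_q + np.exp(0.5 * logvar_q) * eps
    p = decode(z)
    recon = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    # closed-form KL divergence between two diagonal Gaussians
    kl = 0.5 * np.sum(logvar_p - logvar_q
                      + (np.exp(logvar_q) + (mu_q - mu_p) ** 2)
                        / np.exp(logvar_p) - 1.0)
    return recon - kl

z_dim, l = 4, 6
decode = lambda z: 1.0 / (1.0 + np.exp(-z.sum() * np.ones(l)))  # toy decoder
y = np.array([1, 0, 1, 1, 0, 0])
elbo = cvae_lower_bound(y, np.zeros(z_dim), np.zeros(z_dim),
                        np.zeros(z_dim), np.zeros(z_dim), decode)
```

In the full model the Gaussian parameters are produced by the recognition and prior networks from $(x,y)$ and $x$ respectively, and the bound is maximized by stochastic gradient descent.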
In the first application, we study the interactions among multiple bird species with crowdsourced \textit{eBird} data and environmental covariates including those from satellite images. Studying how species distributions change over time, an important sustainable development indicator, helps us understand the effects of climate change and conservation strategies. In our second application, we use high-resolution satellite imagery to study multi-dimensional landscape composition and track the human footprint in the Amazon rainforest. See Figure \ref{fig:intro} for an overview of the two problems. Both applications study the correlations of multiple entities and use satellite images as rich context information. We are able to show that our $\mathtt{MEDL\_CVAE}$ (i) {\bf captures rich correlation structures among entities}, therefore {\bf outperforming approaches that assume independence among entities} given contextual information; (ii) {\bf trains in an order of magnitude less time than previous methods} because the full pipeline is implemented on GPUs; (iii) achieves a better joint likelihood by incorporating deep neural nets to {\bf take advantage of rich context information, namely satellite images}, which is beyond the capacity of previous methods. \section{Preliminaries} We consider modeling the dependencies among multiple entities on problems with rich contextual information. Our dataset consists of tuples $D=\{(x_i, y_i)| i=1,\ldots,N\}$, in which $x_i = (x_{i,1},\ldots, x_{i,k}) \in \mathcal{R}^k$ is a high-dimensional contextual feature vector, and $y_i = (y_{i,1},\ldots,y_{i,l})\in \{0,1\}^l$ is a sequence of $l$ indicator variables, in which $y_{i,j}$ represents whether the $j$-th entity is observed in an environment characterized by covariates $x_i$.
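In this notation, training maximizes the conditional joint log-likelihood $\sum_{i} \log Pr(y_i|x_i)$ over the tuples in $D$. A minimal numpy sketch of that objective, with a hypothetical stand-in model in place of a learned $Pr(y|x)$:

```python
import numpy as np

def joint_log_likelihood(model_log_prob, X, Y):
    """Conditional joint log-likelihood sum_i log Pr(y_i | x_i) over a
    dataset of (x_i, y_i) tuples. `model_log_prob` is any callable
    returning log Pr(y|x); the one used below is a toy stand-in.
    """
    return sum(model_log_prob(y, x) for x, y in zip(X, Y))

# Toy stand-in: l independent Bernoulli(0.5) entities, so every binary
# vector of length l has log-probability l * log(0.5).
l = 3
toy = lambda y, x: l * np.log(0.5)
X = np.zeros((4, 2))                                   # N = 4, k = 2
Y = np.array([[1, 0, 1], [0, 0, 1], [1, 1, 0], [0, 1, 1]])
ll = joint_log_likelihood(toy, X, Y)
```

Any model that captures dependencies among the $y_{i,j}$ must assign joint log-probabilities that do not factor over entities, which is exactly what the toy stand-in fails to do.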
The problem is to learn a conditional joint distribution $Pr(y | x)$ which maximizes the conditional joint log likelihood over $N$ data points: $$\sum\limits_{i=1}^{N} \log Pr(y_i | x_i).$$ Multi-entity dependence learning is a general problem with many applications. For example, in our species distribution application, where we would like to model the relationships of multiple bird species, $x_i$ is the vector of environmental covariates of the observational site, which includes a remote sensing picture, the National Land Cover Dataset (NLCD) values \cite{homer2015completion}, etc. $y_i = (y_{i,1},\ldots,y_{i,l})$ is a sequence of binary indicator variables, where $y_{i,j}$ indicates whether species $j$ is detected in the observational session of the $i$-th data point. In our application to analyze landscape composition, $x_i$ is the feature vector made up of the satellite image of the given site, and $y_i$ includes multiple indicator variables, such as the atmospheric conditions (clear or cloudy) and land cover phenomena (agriculture or forest) of the site. Our problem is to capture rich correlations between entities. For example, in our species distribution modeling application, the distributions of multiple species are often correlated, due to their interactions such as cooperation and competition for shared resources. As a result, we often cannot assume that the probabilities of each entity's existence are mutually independent given the feature vector, i.e., \begin{equation} Pr(y_i|x_i) \neq \prod_{j=1}^l Pr(y_{i,j}|x_i). \label{eq:ind} \end{equation} See Figure \ref{fig:panther} for a specific instance. As a baseline, we call a model that makes the independence assumption on the right-hand side of Equation~(\ref{eq:ind}) an independent probabilistic model. \begin{figure}[tb] \centering \includegraphics[width=0.6\linewidth]{figs/panther} \caption{Leopards and dholes both live on steppes. Therefore, the probability that each animal occupies a steppe is high.
However, due to the competition between the two species, the probability of their co-existence is low.} \label{fig:panther} \end{figure} \section{Related Work} Multi-entity dependence learning has been studied extensively for prediction problems under the names of multi-label classification and structured prediction. Our applications, on the other hand, focus more on probabilistic modeling than on classification. Along this line of research, early methods include k-nearest neighbors \cite{ZhouZhiHua2005KNearestMultiLabel} and dimension reduction \cite{ZhouZhihua2010MultiLabelDimensionReduct,Li2016MultiLabelBernM}. \textbf{Classifier Chains} (CC) First proposed by \cite{Read09ClassifierChain}, the CC approach decomposes the joint distribution into the product of a series of conditional probabilities. Therefore the multi-label classification problem is reduced to $l$ binary classification problems. As noted by \cite{Dembczynski2010PCC}, CC takes a greedy approach to find the joint mode and the result can be arbitrarily far from the true mode. Hence, Probabilistic Classifier Chains (PCC) were proposed, which replace the greedy strategy with exhaustive search \cite{Dembczynski2010PCC}, $\epsilon$-approximate search \cite{dembczynski2012label}, beam search \cite{kumar2013beam} or A* search \cite{mena2015using}. To address the issue of error propagation in CC, the Ensemble of Classifier Chains (ECC) \cite{liu2015optimality} averages several predictions by different chains to improve the prediction. \textbf{Conditional Random Field} (CRF) \cite{Lafferty2001CRF} offers a general framework for structured prediction based on undirected graphical models. When used in multi-label classification, CRF suffers from the problem of computational intractability. To remedy this issue, \cite{ZhouZhihua2011MultiLabelCRF} applied ensemble methods and \cite{deng2014large} proposed a special CRF for problems involving specific hierarchical relations.
In addition, \cite{Guo2011MultiCDN} proposed using Conditional Dependency Networks, although their method also depends on Gibbs sampling for approximate inference. \textbf{Ecological Models} Species distribution modeling has been studied extensively in ecology, and \cite{elith2009species} presented a nice survey. For single species models, \cite{Phillips2004Maxent} proposed max-entropy methods to deal with presence-only data. By taking imperfect detection into account, \cite{MacKenzie2004cooccurrence} proposed occupancy models, which were further improved with a stronger version of statistical inference \cite{Hutchinson2011}. Species distribution models have been extended to capture population dynamics using cascade models \cite{sheldon2011collective} and non-stationary predictor response models \cite{fink2013adaptive}. Multi-species interaction models were also proposed \cite{Yu2011multispecies,harris2015generating}. Recently, the Deep Multi-Species Embedding (DMSE) \cite{chen2016deep} used a probit model coupled with a deep neural net to capture inter-species correlations. This approach is closely related to CRF and also requires MCMC sampling during training.
\section{Introduction} In the nonlinear filtering problem one observes a system whose state is known to follow a given stochastic differential equation. The observations that have been made contain an additional noise term, so one cannot hope to know the true state of the system. However, one can reasonably ask what is the probability density over the possible states. When the observations are made in continuous time, the probability density follows a stochastic partial differential equation known as the Kushner--Stratonovich equation. This can be seen as a generalization of the Fokker--Planck equation that expresses the evolution of the density of a diffusion process. Thus the problem we wish to address boils down to finding approximate solutions to the Kushner--Stratonovich equation. For a quick introduction to the filtering problem see Davis and Marcus (1981) \cite{davis81b}. For a more complete treatment from a mathematical point of view see Lipster and Shiryayev (1978) \cite{LipShi}. See Jazwinski (1970) \cite{jazwinski70a} for a more applied perspective. For recent results see the collection of papers \cite{crisan}. The main idea we will employ is inspired by the differential geometric approach to statistics developed in \cite{amari85a} and \cite{pistonesempi}. One thinks of the probability distribution as evolving in an infinite dimensional space ${\cal P}$ which is in turn contained in some Hilbert space $H$. One can then think of the Kushner--Stratonovich equation as defining a vector field in ${\cal P}$: the integral curves of the vector field should correspond to the solutions of the equation. To find approximate solutions to the Kushner--Stratonovich equation one chooses a finite dimensional submanifold $M$ of $H$ and approximates the probability distributions as points in $M$. At each point of $M$ one can use the Hilbert space structure to project the vector field onto the tangent space of $M$. 
One can now attempt to find approximate solutions to the Kushner--Stratonovich equations by integrating this vector field on the manifold $M$. This mental image is slightly inaccurate. The Kushner--Stratonovich equation is a stochastic PDE rather than a PDE, so one should imagine some kind of stochastic vector field rather than a smooth vector field. Thus in this approach we hope to approximate the infinite dimensional stochastic PDE by solving a finite dimensional stochastic ODE on the manifold. Note that our approximation will depend upon two choices: the choice of manifold $M$ and the choice of Hilbert space structure $H$. In this paper we will consider two possible choices for the Hilbert space structure: the direct $L^2$ metric on the space of probability distributions, and the Hilbert space structure associated with the Hellinger distance and the Fisher information metric. Our focus will be on the direct $L^2$ metric since projection using the Hellinger distance has been considered before. As we shall see, the choice of the ``best'' Hilbert space structure is determined by the manifold one wishes to consider --- for manifolds associated with exponential families of distributions the Hellinger metric leads to the simplest equations, whereas the direct $L^2$ metric works well with mixture distributions. We will write down the stochastic ODE determined by this approach when $H=L^2$ and show how it leads to a numerical scheme for finding approximate solutions to the Kushner--Stratonovich equations in terms of a mixture of normal distributions. We will call this scheme the {\em $L^2$ normal mixture projection filter} or simply the L2NM projection filter. The stochastic ODE for the Hellinger metric was considered in \cite{BrigoPhD}, \cite{brigo98} and \cite{brigo99}. In particular a precise numerical scheme is given in \cite{BrigoPhD} for finding solutions by projecting onto an exponential family of distributions.
We will call this scheme the {\em Hellinger exponential projection filter} or simply the HE projection filter. We will compare the results of a {\ttfamily C++ } implementation of the L2NM projection filter with a number of other numerical approaches, including the HE projection filter and the optimal filter. We can measure the goodness of our filtering approximations thanks to the geometric structure and, in particular, the precise metrics we are using on the spaces of probability measures. What emerges is that the two projection methods produce excellent results for a variety of filtering problems. The results appear similar for both projection methods; which gives more accurate results depends upon the problem. As we shall see, however, the L2NM projection approach can be implemented more efficiently. In particular one needs to perform numerical integration as part of the HE projection filter algorithm, whereas all integrals that occur in the L2NM projection can be evaluated analytically. We also compare the L2NM filter to a particle filter with the best possible combination of particles with respect to the L{\'e}vy metric. Introducing the L{\'e}vy metric is needed because particle densities do not compare well with smooth densities when using $L^2$ induced metrics. We show that, given the same number of parameters, the L2NM may outperform a particle-based system. The paper is structured as follows: In Section \ref{sec:nonlinfp} we introduce the nonlinear filtering problem and the infinite-dimensional stochastic PDE (SPDE) that solves it. In Section \ref{sec:SMan} we introduce the geometric structure we need to project the filtering SPDE onto a finite dimensional manifold of probability densities. In Section \ref{sec:PF} we perform the projection of the filtering SPDE according to the L2NM framework and also recall the HE based framework.
In Section \ref{sec:NImp} we briefly discuss the numerical implementation, while in Section \ref{sec:Soft} we discuss in detail the software design for the L2NM filter. In Section \ref{NumRes} we look at numerical results, whereas in Section \ref{sec:Part} we compare our outputs with a particle method. Section \ref{sec:Conc} concludes the paper. \section{ The non-linear filtering problem\\ with continuous-time observations}\label{sec:nonlinfp} In the non-linear filtering problem the state of some system is modelled by a process $X$ called the signal. This signal evolves over time $t$ according to an It\^{o} stochastic differential equation (SDE). We measure the state of the system using some observation $Y$. The observations are not accurate: there is a noise term. So the observation $Y$ is related to the signal $X$ by a second equation. \begin{equation} \label{Lanc1-1} \begin{array}{rcl} dX_t &=& f_t(X_t)\,dt + \sigma_t(X_t)\,dW_t, \ \ X_0, \\ \\ dY_t &=& b_t(X_t)\,dt + dV_t, \ \ Y_0 = 0\ . \end{array} \end{equation} In these equations the unobserved state process $\{ X_t, t \geq 0 \}$ takes values in ${\mathbb R}^n$, the observation $\{ Y_t, t\geq 0 \}$ takes values in ${\mathbb R}^d$ and the noise processes $\{ W_t, t\geq 0\}$ and $\{ V_t, t\geq 0\}$ are two Brownian motions. The nonlinear filtering problem consists in finding the conditional probability distribution $\pi_t$ of the state $X_t$ given the observations up to time $t$ and the prior distribution $\pi_0$ for $X_0$. Let us assume that $X_0$ and the two Brownian motions are independent. Let us also assume that the covariance matrix for $V_t$ is invertible. We can then assume without any further loss of generality that its covariance matrix is the identity.
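For intuition, the signal and observation pair of equation~(\ref{Lanc1-1}) can be simulated with an Euler--Maruyama discretization. The scalar coefficients below (linear drift, constant diffusion, cubic sensor $b(x)=x^3$) are only an illustrative choice, not a problem treated at this point in the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(f, sigma, b, x0, T=1.0, n=1000):
    """Euler-Maruyama discretization of the scalar signal/observation pair
        dX = f(X) dt + sigma(X) dW,    dY = b(X) dt + dV,   Y_0 = 0,
    with independent Brownian increments dW, dV of variance dt.
    """
    dt = T / n
    X = np.empty(n + 1)
    Y = np.empty(n + 1)
    X[0], Y[0] = x0, 0.0
    for k in range(n):
        dW = np.sqrt(dt) * rng.standard_normal()
        dV = np.sqrt(dt) * rng.standard_normal()
        X[k + 1] = X[k] + f(X[k]) * dt + sigma(X[k]) * dW
        Y[k + 1] = Y[k] + b(X[k]) * dt + dV
    return X, Y

# Illustrative coefficients: mean-reverting signal, cubic observation.
X, Y = simulate(f=lambda x: -0.5 * x, sigma=lambda x: 0.4,
                b=lambda x: x ** 3, x0=1.0)
```

A filtering algorithm sees only the path $Y$ and must reconstruct the conditional density of $X_t$, which is what the projection filters below approximate.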
We introduce a variable $a_t$ defined by: \[ a_t = \sigma_t \sigma_t^T \] With these preliminaries, and a number of rather more technical conditions which we will state shortly, one can show that $\pi_t$ satisfies a stochastic PDE called the Kushner--Stratonovich equation. This states that for any compactly supported test function $\phi$ defined on ${\mathbb R}^{n}$ \begin{equation} \label{FKK} \pi_t(\phi) = \pi_0(\phi) + \int_0^t \pi_s({\cal L}_s \phi)\,ds + \sum_{k=1}^d \int_0^t [\pi_s(b_s^k\, \phi) - \pi_s(b_s^k)\, \pi_s(\phi)]\, [dY_s^k-\pi_s(b_s^k)\,ds]\ , \end{equation} where for all $t \geq 0$, the backward diffusion operator ${\cal L}_t$ is defined by \begin{displaymath} {\cal L}_t = \sum_{i=1}^n f_t^i\, \frac{\partial}{\partial x_i} + \inverse{2} \sum_{i,j=1}^n a_t^{ij}\, \frac{\partial^2}{\partial x_i \partial x_j}\ . \end{displaymath} Equation~(\ref{FKK}) involves the derivatives of the test function ${\phi}$ because of the expression ${\cal L}_s \phi$. We assume now that $\pi_t$ can be represented by a density $p_t$ with respect to the Lebesgue measure on ${\mathbb R}^n$ for all time $t \geq 0$ and that we can replace the term involving ${\cal L}_s\phi$ with a term involving its formal adjoint ${\cal L}^*$. Thus, proceeding formally, we find that $p_t$ obeys the following It\^o-type stochastic partial differential equation (SPDE): \[ {\mathrm d} p_t = {\cal L}^*_t p_t {\mathrm d} t + \sum_{k=1}^d p_t [ b_t^k - E_{p_t} \{ b_t^k \} ][ {\mathrm d} Y_t^k - E_{p_t} \{b_t^k \} {\mathrm d} t ] \] where $E_{p_t}\{ \cdot \}$ denotes the expectation with respect to the probability density $p_t$ (equivalently the conditional expectation given the observations up to time $t$). The forward diffusion operator ${\cal L}^*_t$ is defined by: \begin{displaymath} {\cal L}_t^* \phi = -\sum_{i=1}^n \frac{\partial}{\partial x_i} [ f_t^i \phi ] + \inverse{2} \sum_{i,j=1}^n \frac{\partial^2}{\partial x_i \partial x_j}[ a_t^{ij} \phi ].
\end{displaymath} This equation is written in It\^{o} form. When working with stochastic calculus on manifolds it is necessary to use Stratonovich SDE's rather than It\^{o} SDE's. This is because one does not in general know how to interpret the second order terms that arise in It\^{o} calculus in terms of manifolds. The interested reader should consult \cite{elworthy82a}. A straightforward calculation yields the following Stratonovich SPDE: \begin{displaymath} dp_t = {\cal L}_t^\ast\, p_t\,dt - \inverse{2}\, p_t\, [\vert b_t \vert^2 - E_{p_t}\{\vert b_t \vert^2\}] \,dt + \sum_{k=1}^d p_t\, [b_t^k-E_{p_t}\{b_t^k\}] \circ dY_t^k\ . \end{displaymath} We have indicated that this is the Stratonovich form of the equation by the presence of the symbol `$\circ$' in between the diffusion coefficient and the Brownian motion of the SDE. We shall use this convention throughout the rest of the paper. In order to simplify notation, we introduce the following definitions~: \begin{equation} \label{coeff} \begin{array}{rcl} \gamma_t^0(p) &:=& \inverse{2}\, [\vert b_t \vert^2 - E_p\{\vert b_t \vert^2\}]\ p, \\ \\ \gamma_t^k(p) &:=& [b_t^k - E_p\{b_t^k\}] p \ , \end{array} \end{equation} for $k = 1,\cdots,d$. The Stratonovich form of the Kushner--Stratonovich equation now reads \begin{equation}\label{KSE:str} dp_t = {\cal L}_t^\ast\, p_t\,dt - \gamma_t^0(p_t)\,dt + \sum_{k=1}^d \gamma_t^k(p_t) \circ dY_t^k\ . \end{equation} Thus, subject to the assumption that a density $p_t$ exists for all time and assuming the necessary decay condition to ensure that replacing ${\cal L}$ with its formal adjoint is valid, we find that solving the non-linear filtering problem is equivalent to solving this SPDE. Numerically approximating the solution of equation~(\ref{KSE:str}) is the primary focus of this paper. For completeness we review the technical conditions required in order for equation~(\ref{FKK}) to follow from~(\ref{Lanc1-1}).
\begin{itemize} \item[(A)] Local Lipschitz continuity~: for all $R > 0$, there exists $K_R > 0$ such that \begin{displaymath} \vert f_t(x) - f_t(x') \vert \leq K_R\, \vert x-x' \vert \hspace{1cm}\mbox{and}\hspace{1cm} \Vert a_t(x) - a_t(x') \Vert \leq K_R\, \vert x-x' \vert\ , \end{displaymath} for all $t\geq 0$, and for all $x,x'\in B_R$, the ball of radius $R$. \item[(B)] Non--explosion~: there exists $K > 0$ such that \begin{displaymath} x^T f_t(x) \leq K\, (1+\vert x \vert^2) \hspace{1cm}\mbox{and}\hspace{1cm} \mbox{trace}\; a_t(x) \leq K\, (1+\vert x \vert^2)\ , \end{displaymath} for all $t\geq 0$, and for all $x\in {\mathbb R}^n$. \item[(C)] Polynomial growth~: there exist $K > 0$ and $r \geq 0$ such that \begin{displaymath} \vert b_t(x) \vert \leq K\, (1+\vert x \vert^r)\ , \end{displaymath} for all $t\geq 0$, and for all $x\in {\mathbb R}^n$. \end{itemize} Under assumptions~(A) and~(B), there exists a unique solution $\{X_t\,,\,t\geq 0\}$ to the state equation, see for example~\cite{khasminskii}, and $X_t$ has finite moments of any order. Under the additional assumption~(C) the following {\em finite energy} condition holds \begin{displaymath} E \int_0^T \vert b_t(X_t) \vert^2\,dt < \infty\ , \hspace{1cm}\mbox{for all $T\geq 0$}. \end{displaymath} Since the finite energy condition holds, it follows from Fujisaki, Kallianpur and Kunita~\cite{fujisaki72a} that $\{\pi_t\,,\,t\geq 0\}$ satisfies the Kushner--Stratonovich equation~(\ref{FKK}). \section{Statistical manifolds }\label{sec:SMan} \subsection{ Families of distributions } As discussed in the introduction, the idea of a projection filter is to approximate solutions to the Kushner--Stratonovich equation~(\ref{FKK}) using a finite dimensional family of distributions. \begin{example} A {\em normal mixture} family contains distributions given by: \[ p = \sum_{i=1}^m \lambda_i \frac{1}{\sigma_i \sqrt{ 2 \pi}} \exp\left(\frac{-(x-\mu_i)^2}{2 \sigma_i^2}\right) \] with $\lambda_i >0 $ and $\sum \lambda_i = 1$.
It is a $3m-1$ dimensional family of distributions. \end{example} \begin{example} A {\em polynomial exponential family} contains distributions given by: \[ p = \exp( \sum_{i=0}^m a_i x^i ) \] where $a_0$ is chosen to ensure that the integral of $p$ is equal to $1$. To ensure the convergence of the integral we must have that $m$ is even and $a_m < 0$. This is an $m$ dimensional family of distributions. Polynomial exponential families are a special case of the more general notion of an exponential family, see for example \cite{amari85a}. \end{example} A key motivation for considering these families is that one can reproduce many of the qualitative features of distributions that arise in practice using these distributions. For example, consider the qualitative specification: the distribution should be bimodal with peaks near $-1$ and $1$ with the peak at $-1$ twice as high and twice as wide as the peak near $1$. One can easily write down a distribution of approximately this form using a normal mixture. To find a similar exponential family, one seeks a polynomial with: local maxima at $-1$ and $1$; with the maximum values at these points differing by $\log(2)$; with second derivative at $1$ equal to twice that at $-1$. These conditions give linear equations in the polynomial coefficients. Using degree $6$ polynomials it is simple to find solutions meeting all these requirements. A specific numerical example of a polynomial meeting these requirements is plotted in Figure~\ref{fig:bimodalsextic}. The associated exponential distribution is plotted in Figure~\ref{fig:bimodalexponential}.
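In either family the normalising constant must typically be computed numerically. A minimal Python sketch for the polynomial exponential case (our own helper, not part of the engine described later; Simpson's rule on a window outside which the mass is assumed negligible):

```python
import math

def normalising_constant(a, lo=-10.0, hi=10.0, n=2000):
    """Return a_0 such that p(x) = exp(a_0 + sum_{i>=1} a_i x^i) has
    integral 1; `a` lists (a_1, ..., a_m), with m even and a_m < 0."""
    def q(x):  # unnormalised density, i.e. p without the a_0 term
        return math.exp(sum(c * x ** (i + 1) for i, c in enumerate(a)))
    h = (hi - lo) / n          # composite Simpson's rule (n even)
    s = q(lo) + q(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * q(lo + k * h)
    integral = s * h / 3.0
    return -math.log(integral)
```

For the Gaussian special case $p = \exp(a_0 - x^2/2)$ this recovers $a_0 = -\frac{1}{2}\log(2\pi)$.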
\begin{figure}[htp] \begin{centering} \includegraphics{bimodalsextic} \caption{ $y = -18.98-13.15 x+23.54 x^2+25.43 x^3+13.96 x^4-12.63 x^5-17.15 x^6$} \label{fig:bimodalsextic} \end{centering} \end{figure} \begin{figure}[htp] \begin{centering} \includegraphics{bimodalexponential} \caption{ $y = \exp(-18.98-13.15 x+23.54 x^2+25.43 x^3+13.96 x^4-12.63 x^5-17.15 x^6)$} \label{fig:bimodalexponential} \end{centering} \end{figure} We see that normal mixtures and exponential families have a broadly similar power to describe the qualitative shape of a distribution using only a small number of parameters. Our hope is that by approximating the probability distributions that occur in the Kushner--Stratonovich equation by elements of one of these families we will be able to derive a low dimensional approximation to the full infinite dimensional stochastic partial differential equation. \subsection{Two Hilbert spaces of probability distributions} We have given direct parameterisations of our families of probability distributions and thus we have implicitly represented them as finite dimensional manifolds. In this section we will see how families of probability distributions can be thought of as being embedded in a Hilbert space and hence they inherit a manifold structure and metric from this Hilbert space. There are two obvious ways of thinking of embedding a probability density function on ${\mathbb R}^n$ in a Hilbert space. The first is to simply assume that the probability density function is square integrable and hence lies directly in $L^2({\mathbb R}^n)$. The second is to use the fact that a probability density function lies in $L^1({\mathbb R}^n)$ and is non-negative almost everywhere. Hence $\sqrt{p}$ will lie in $L^2({\mathbb R}^n)$. For clarity we will write $L^2_D({\mathbb R}^n)$ when we think of $L^2({\mathbb R}^n)$ as containing densities directly. The $D$ stands for direct. 
We write ${\cal D} \subset L^2_D({\mathbb R}^n)$ where ${\cal D}$ is the set of square integrable probability densities (functions with integral $1$ which are positive almost everywhere). Similarly we will write $L^2_H({\mathbb R}^n)$ when we think of $L^2({\mathbb R}^n)$ as being a space of square roots of densities. The $H$ stands for Hellinger (for reasons we will explain shortly). We will write ${\cal H}$ for the subset of $L^2_H$ consisting of square roots of probability densities. We now have two possible ways of formalizing the notion of a family of probability distributions. In the next section we will define a smooth family of distributions to be either a smooth submanifold of $L^2_D$ which also lies in ${\cal D}$ or a smooth submanifold of $L^2_H$ which also lies in ${\cal H}$. Either way the families we discussed earlier will give us finite dimensional families in this more formal sense. The Hilbert space structures of $L^2_D$ and $L^2_H$ allow us to define two notions of distance between probability distributions which we will denote $d_D$ and $d_H$. Given two probability distributions $p_1$ and $p_2$, each embedding gives an injection $\iota$ into $L^2$, so one defines the distance to be the norm of $\iota(p_1) - \iota(p_2)$. So given two probability densities $p_1$ and $p_2$ on ${\mathbb R}^n$ we can define: \begin{eqnarray*} d_H(p_1,p_2) &=& \left( \int (\sqrt{p_1} - \sqrt{p_2})^2 {\mathrm d} \mu \right)^{\frac{1}{2}} \\ d_D(p_1,p_2) &=& \left( \int (p_1 - p_2)^2 {\mathrm d} \mu \right)^{\frac{1}{2}}. \end{eqnarray*} Here ${\mathrm d} \mu$ is the Lebesgue measure. $d_H$ defines the {\em Hellinger distance} between the two distributions, which explains our use of $H$ as a subscript. We will write $\langle \cdot, \cdot \rangle_H$ for the inner product associated with $d_H$ and $\langle \cdot, \cdot \rangle_D$ or simply $\langle \cdot, \cdot \rangle$ for the inner product associated with $d_D$.
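Both distances are easy to evaluate numerically. The sketch below (our own helpers, again using Simpson's rule on a truncated domain) computes $d_H$ and $d_D$ for densities on ${\mathbb R}$; for two Gaussians with means $\mu_i$ and variances $v_i$ the Hellinger distance has the closed form $d_H^2 = 2 - 2\,(2\sqrt{v_1 v_2}/(v_1+v_2))^{1/2} e^{-(\mu_1-\mu_2)^2/(4(v_1+v_2))}$, which makes a convenient check.

```python
import math

def gaussian(mu, v):
    """Gaussian density with mean mu and variance v."""
    return lambda x: math.exp(-(x - mu) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def _quad(f, lo=-20.0, hi=20.0, n=4000):
    """Composite Simpson's rule on [lo, hi] (n even)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3.0

def d_H(p1, p2):  # Hellinger distance: L2 norm of sqrt(p1) - sqrt(p2)
    return math.sqrt(_quad(lambda x: (math.sqrt(p1(x)) - math.sqrt(p2(x))) ** 2))

def d_D(p1, p2):  # direct L2 distance between the densities themselves
    return math.sqrt(_quad(lambda x: (p1(x) - p2(x)) ** 2))
```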
In this paper we will consider the projection of the conditional density of the true state of the system given the observations (which is assumed to lie in ${\cal D}$ or ${\cal H}$) onto a submanifold. The notion of projection only makes sense with respect to a particular inner product structure. Thus we can consider projection using $d_H$ or projection using $d_D$. Each has advantages and disadvantages. The most notable advantage of the Hellinger metric is that the $d_H$ metric can be defined independently of the Lebesgue measure and its definition can be extended to define the distance between measures without density functions (see Jacod and Shiryaev~\cite{jacod87a} or Hanzon~\cite{hanzon89a}). In particular the Hellinger distance is independent of the choice of parameterization for ${\mathbb R}^n$. This is a very attractive feature in terms of the differential geometry of our set-up. Despite the significant theoretical advantages of the $d_H$ metric, the $d_D$ metric has an obvious advantage when studying mixture families: it comes from an inner product on $L^2_D$ and so commutes with addition on $L^2_D$. So it should be relatively easy to calculate with the $d_D$ metric when adding distributions as happens in mixture families. As we shall see in practice, when one performs concrete calculations, the $d_H$ metric works well for exponential families and the $d_D$ metric works well for mixture families. While the $d_H$ metric leads to the Fisher Information and to an equivalence with Assumed Density Filters when used on exponential families, see \cite{brigo99}, the $d_D$ metric for simple mixture families is equivalent to a Galerkin method, see for example \cite{brigo12}. \subsection{The tangent space of a family of distributions} To make our notion of smooth families precise we need to explain what we mean by a smooth map into an infinite dimensional space.
Let $U$ and $V$ be Hilbert spaces and let $f:U \to V $ be a continuous map ($f$ need only be defined on some open subset of $U$). We say that $f$ is Fr\'echet differentiable at $x$ if there exists a bounded linear map $A:U \to V$ satisfying: \[ \lim_{h \to 0} \frac{\| f(x+h) - f(x) - A h \|_V}{\|h\|_U} = 0\ . \] If $A$ exists it is unique; we denote it by ${\mathrm D}f(x)$ and call it the Fr\'echet derivative of $f$ at $x$. It is the best linear approximation to $f$ at $x$ in the sense of minimizing the norm on $V$. This allows us to define a smooth map $f:U \to V$ defined on an open subset of $U$ to be an infinitely Fr\'echet differentiable map. We define an {\em immersion} of an open subset of ${\mathbb R}^n$ into $V$ to be a map such that ${\mathrm D}f(x)$ is injective at every point where $f$ is defined. The latter condition ensures that the best linear approximation to $f$ is a genuinely $n$-dimensional map. Given an immersion $f$ defined on a neighbourhood of $x$, we can think of the vector subspace of $V$ given by the image of ${\mathrm D}f(x)$ as representing the tangent space at $x$. To make these ideas more concrete, let us suppose that $p(\theta)$ is a probability distribution depending smoothly on some parameter $\theta = (\theta_1,\theta_2,\ldots,\theta_m) \in U$ where $U$ is some open subset of ${\mathbb R}^m$. The map $\theta \to p(\theta)$ defines a map $i:U \to {\cal D}$.
At a given point $\theta \in U$ and for a vector $h=(h_1,h_2,\ldots,h_m) \in {\mathbb R}^m$ we can compute the Fr\'echet derivative to obtain: \[ {\mathrm D} i (\theta) h = \sum_{i=1}^m \frac{\partial p}{\partial \theta_i} h_i \] So we can identify the tangent space at $\theta$ with the following subspace of $L^2_D$: \begin{equation} \label{basisForD} \span \{ \frac{\partial p}{\partial \theta_1}, \frac{\partial p}{\partial \theta_2}, \ldots, \frac{\partial p}{\partial \theta_m} \} \end{equation} We can formally define a smooth $m$-dimensional family of probability distributions in $L^2_D$ to be an immersion of an open subset of ${\mathbb R}^m$ into ${\cal D}$. Equivalently it is a smoothly parameterized probability distribution $p$ such that the above vectors in $L^2$ are linearly independent. We can define a smooth $m$-dimensional family of probability distributions in $L^2_H$ in the same way. This time let $q(\theta)$ be a square root of a probability distribution depending smoothly on $\theta$. The tangent vectors in this case will be the partial derivatives of $q$ with respect to $\theta$. Since one normally prefers to work in terms of probability distributions rather than their square roots we use the chain rule to write the tangent space as: \begin{equation} \label{basisForH} \span \{ \frac{1}{2 \sqrt{p}} \frac{\partial p}{\partial \theta_1}, \frac{1}{2 \sqrt{p}} \frac{\partial p}{\partial \theta_2}, \ldots, \frac{1}{2 \sqrt{p}} \frac{\partial p}{\partial \theta_m} \} \end{equation} We have defined a family of distributions in terms of a single immersion $f$ into a Hilbert space $V$. In other words we have defined a family of distributions in terms of a specific parameterization of the image of $f$. It is tempting to try and phrase the theory in terms of the image of $f$.
To this end, one defines an {\em embedded submanifold} of $V$ to be a subspace of $V$ which is covered by immersions $f_i$ from open subsets of ${\mathbb R}^n$ where each $f_i$ is a homeomorphism onto its image. With this definition, we can state that the tangent space of an embedded submanifold is independent of the choice of parameterization. One might be tempted to talk about submanifolds of the space of probability distributions, but one should be careful. The spaces ${\cal H}$ and ${\cal D}$ are not open subsets of $L^2_H$ and $L^2_D$ and so do not have any obvious Hilbert-manifold structure. To see why, consider Figure~\ref{fig:perturbednormal} where we have perturbed a probability distribution slightly by subtracting a small delta-like function. \begin{figure}[htp] \begin{centering} \includegraphics{perturbednormal} \caption{An element of $L^2$ arbitrarily close to the normal distribution but not in ${\cal H}$} \label{fig:perturbednormal} \end{centering} \end{figure} \subsection{The Fisher information metric} Given two tangent vectors at a point to a family of probability distributions we can form their inner product using $\langle \cdot, \cdot \rangle_H$. This defines a so-called {\em Riemannian metric} on the family. With respect to a particular parameterization $\theta$ we can compute the inner product of the $i^{th}$ and $j^{th}$ basis vectors given in equation~\ref{basisForH}. We call this quantity $\frac{1}{4} g_{ij}$.
\begin{eqnarray*} \frac{1}{4} g_{ij}( \theta) & := & \langle \frac{1}{2 \sqrt{p}} \frac{ \partial p}{\partial \theta_i}, \frac{1}{2 \sqrt{p}} \frac{ \partial p}{\partial \theta_j} \rangle_H \\ & = & \frac{1}{4} \int \frac{1}{p} \frac{ \partial p}{ \partial \theta_i} \frac{ \partial p}{ \partial \theta_j} {\mathrm d} \mu \\ & = & \frac{1}{4} \int \frac{ \partial \log p}{ \partial \theta_i} \frac{ \partial \log p}{ \partial \theta_j} p {\mathrm d} \mu \\ & = & \frac{1}{4} E_p( \frac{ \partial \log p}{\partial \theta_i} \frac{ \partial \log p}{\partial \theta_j}) \end{eqnarray*} Up to the factor of $\frac{1}{4}$, this last formula is the standard definition of the Fisher information matrix. So our $g_{ij}$ is the Fisher information matrix. We can now interpret this matrix as the Fisher information metric and observe that, up to the constant factor, it is the metric induced by the Hellinger inner product. See \cite{amari85a}, \cite{murray93a} and \cite{aggrawal74a} for a more in-depth study of this differential geometric approach to statistics. \begin{example}The Gaussian family of densities can be parameterized using parameters mean $\mu$ and variance $v$. With this parameterization the Fisher metric is given by: \[ g( \mu, v) = \frac{1}{v} \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1/(2v) \end{array} \right] \] \end{example} The representation of the metric as a matrix depends heavily upon the choice of parameterization for the family. \begin{example}The Gaussian family may be considered as a particular exponential family with parameters $\theta_1$ and $\theta_2$ given by: \[ p(x,\theta) = \exp( \theta_1 x + \theta_2 x^2 - \psi(\theta) ) \] where $\psi(\theta)$ is chosen to normalize $p$.
It follows that: \[ \psi(\theta) = \frac{1}{2} \log \left( \frac{\pi}{-\theta_2} \right) - \frac{{\theta_1}^2}{ 4 \theta_2} \] This is related to the familiar parameterization in terms of $\mu$ and $v$ by: \[ \mu = -\theta_1/(2 \theta_2), \quad v = \sigma^2 = -1/(2 \theta_2) \] One can compute the Fisher information metric relative to the parameterization $(\theta_1,\theta_2)$ to obtain: \[ g(\theta) = \left[ \begin{array}{cc} -1/(2 \theta_2) & \theta_1/(2 \theta_2^2) \\ \theta_1/(2 \theta_2^2) & 1/(2 \theta_2^2) - \theta_1^2/(2 \theta_2^3) \end{array} \right] \] \end{example} The particular importance of the metric structure for this paper is that it allows us to define orthogonal projection of $L^2_H$ onto the tangent space. Suppose that one has $m$ linearly independent vectors $w_i$ spanning some subspace $W$ of a Hilbert space $V$. By linearity, one can write the orthogonal projection onto $W$ as: \[ \Pi(v) = \sum_{i=1}^m [\sum_{j=1}^m A^{ij} \langle v, w_j \rangle ] w_i \] for some appropriately chosen constants $A^{ij}$. Since $\Pi$ acts as the identity on $w_i$ we see that $A^{ij}$ must be the inverse of the matrix $A_{ij}=\langle w_i, w_j \rangle$. We can apply this to the basis given in equation~\ref{basisForH}. Defining $g^{ij}$ to be the inverse of the matrix $g_{ij}$ we obtain the following formula for projection, using the Hellinger metric, onto the tangent space of a family of distributions: \begin{eqnarray*} \Pi_H(v) = \sum_{i=1}^m \left[ \sum_{j=1}^m 4 g^{ij} \langle v, \frac{1}{2 \sqrt{p}} \frac{ \partial p}{ \partial \theta_j} \rangle_H \right] \frac{1}{2 \sqrt{p}} \frac{ \partial p}{ \partial \theta_i} \end{eqnarray*} \subsection{The direct $L^2$ metric} The ideas from the previous section can also be applied to the direct $L^2$ metric. This gives a different Riemannian metric on the manifold. We will write $h=h_{ij}$ to denote the $L^2$ metric when written with respect to a particular parameterization.
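The Gaussian example for the Fisher metric can be checked numerically by evaluating the defining integral $g_{ij} = \int \frac{1}{p}\frac{\partial p}{\partial \theta_i}\frac{\partial p}{\partial \theta_j}\,{\mathrm d}\mu$ by quadrature. A minimal sketch (our own helper, not part of the engine; the partial derivatives of the Gaussian density are written out analytically):

```python
import math

def fisher_metric(mu, v, lo=-20.0, hi=20.0, n=4000):
    """Simpson's-rule quadrature for g_ij on the Gaussian family
    parameterised by theta = (mu, v); expected value: diag(1/v, 1/(2v^2))."""
    def p(x):
        return math.exp(-(x - mu) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
    def dp(x):  # (dp/dmu, dp/dv), computed analytically
        return (p(x) * (x - mu) / v,
                p(x) * ((x - mu) ** 2 / (2 * v ** 2) - 1 / (2 * v)))
    h = (hi - lo) / n
    g = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            def f(x, i=i, j=j):
                return dp(x)[i] * dp(x)[j] / p(x)
            s = f(lo) + f(hi)
            for k in range(1, n):
                s += (4 if k % 2 else 2) * f(lo + k * h)
            g[i][j] = s * h / 3.0
    return g
```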
\begin{example} In coordinates $\mu$, $\nu$, the $L^2$ metric on the Gaussian family is: \[ h( \mu, \nu) = \frac{1}{4 \nu \sqrt{ \nu \pi}} \left[ \begin{array}{cc} 1 & 0 \\ 0 & \frac{3}{8 \nu} \end{array} \right] \] \end{example} We can obtain a formula for projection in $L^2_D$ with respect to the direct $L^2$ metric, using the basis given in equation~\ref{basisForD}. We write $h^{ij}$ for the matrix inverse of $h_{ij}$. \begin{eqnarray} \label{l2projectionformula} \Pi_D(v) = \sum_{i=1}^m \left[ \sum_{j=1}^m h^{ij} \left\langle v, \frac{ \partial p}{ \partial \theta_j} \right\rangle_D \right] \frac{ \partial p}{ \partial \theta_i}. \end{eqnarray} \section{The projection filter }\label{sec:PF} Given a family of probability distributions parameterized by $\theta$, we wish to approximate an infinite dimensional solution to the non-linear filtering SPDE using elements of this family. Thus we take the Kushner--Stratonovich equation~\ref{KSE:str}, view it as defining a stochastic vector field in ${\cal D}$ and then project that vector field onto the tangent space of our family. The projected equations can then be viewed as giving a stochastic differential equation for $\theta$. In this section we will write down these projected equations explicitly. Let $\theta \to p(\theta)$ be the parameterization for our family. A curve $t \to \theta( t)$ in the parameter space corresponds to a curve $t \to p( \cdot , \theta(t))$ in ${\cal D}$. For such a curve, the left hand side of the Kushner--Stratonovich equation~\ref{KSE:str} can be written: \begin{eqnarray*} d_t p(\cdot,\theta(t)) & = & \sum_{i=1}^m \frac{\partial p(\cdot,\theta(t))}{\partial \theta_i} d_t \theta_i(t) \\ & = & \sum_{i=1}^m v_i {\mathrm d} \theta_i \end{eqnarray*} where we write $v_i=\frac{\partial p}{\partial \theta_i}$. The $\{v_i\}$ form a basis for the tangent space of the manifold at $\theta(t)$.
Given the projection formula in equation~\ref{l2projectionformula}, we can project the terms on the right hand side onto the tangent space of the manifold using the direct $L^2$ metric as follows: \begin{eqnarray*} \Pi_D^{\theta}[ {\cal L}^* p ] & = & \sum_{i=1}^m \left[ \sum_{j=1}^m h^{ij} \langle {\cal L}^* p , v_j \rangle \right] v_i \\ & = & \sum_{i=1}^m \left[ \sum_{j=1}^m h^{ij} \langle p , {\cal L} v_j \rangle \right] v_i \\ \Pi_D^{\theta} [ \gamma^k( p ) ] & = &\sum_{i=1}^m \left[\sum_{j=1}^m h^{ij} \langle \gamma^k( p ), v_j \rangle \right] v_i \end{eqnarray*} Thus if we take the $L^2$ projection of each side of equation~(\ref{KSE:str}) we obtain: \[ \sum_{i=1}^m v_i {\mathrm d} \theta^i = \sum_{i=1}^m \left[ \sum_{j=1}^m h^{ij} \left\{ \langle p, {\cal L}v_j \rangle {\mathrm d} t - \langle \gamma^0(p), v_j \rangle {\mathrm d} t + \sum_{k=1}^d \langle \gamma^k(p), v_j \rangle \circ {\mathrm d} Y^k \right\} \right] v_i \] Since the $v_i$ form a basis of the tangent space, we can equate the coefficients of $v_i$ to obtain: \begin{equation} \label{KSE:l2projected} {\mathrm d} \theta^i = \sum_{j=1}^m h^{ij} \left\{ \langle p(\theta), {\cal L}v_j \rangle {\mathrm d} t - \langle \gamma^0(p(\theta)), v_j \rangle {\mathrm d} t + \sum_{k=1}^d \langle \gamma^k(p(\theta)), v_j \rangle \circ {\mathrm d} Y^k \right\}. \end{equation} This is the promised finite dimensional stochastic differential equation for $\theta$ corresponding to $L^2$ projection. If preferred, one could instead project the Kushner--Stratonovich equation using the Hellinger metric.
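Numerically, every projection above reduces to the same linear-algebra step: form the Gram matrix of the tangent vectors, form the inner products of the vector being projected against each basis vector, and apply the inverse Gram matrix. A toy sketch in a finite dimensional inner product space (our own illustrative helpers; two spanning vectors so the Gram matrix can be inverted explicitly):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project2(v, w1, w2, inner=dot):
    """Pi(v) = sum_i [sum_j A^{ij} <v, w_j>] w_i, where A^{ij} is the
    inverse of the Gram matrix A_ij = <w_i, w_j> (m = 2 for brevity)."""
    g11, g12, g22 = inner(w1, w1), inner(w1, w2), inner(w2, w2)
    det = g11 * g22 - g12 * g12  # Gram matrix is invertible iff w1, w2 independent
    b1, b2 = inner(v, w1), inner(v, w2)
    c1 = (g22 * b1 - g12 * b2) / det
    c2 = (g11 * b2 - g12 * b1) / det
    return [c1 * a + c2 * b for a, b in zip(w1, w2)]
```

Projecting $v = (2,3,5)$ onto the span of $(1,0,0)$ and $(1,1,0)$, for instance, gives $(2,3,0)$, and $\Pi$ acts as the identity on the spanning vectors themselves.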
This yields the following stochastic differential equation derived originally in \cite{BrigoPhD}: \begin{equation} \label{KSE:hellingerProjected} {\mathrm d} \theta_i = \sum_{j=1}^m g^{ij} \left( \langle \frac{{\cal L}^* p}{p}, v_j \rangle {\mathrm d} t - \langle \frac{1}{2} |b|^2, v_j \rangle {\mathrm d} t + \sum_{k=1}^d \langle b^k, v_j \rangle \circ {\mathrm d} Y^k \right) \end{equation} Note that the inner products in this equation are the direct $L^2$ inner products: we are simply using the $L^2$ inner product notation as a compact notation for integrals. \section{Numerical implementation }\label{sec:NImp} Equations~\ref{KSE:l2projected} and \ref{KSE:hellingerProjected} both give finite dimensional stochastic differential equations that we hope will approximate well the solution to the full Kushner--Stratonovich equation. We wish to solve these finite dimensional equations numerically and thereby obtain a numerical approximation to the non-linear filtering problem. Because we are solving a low dimensional system of equations we hope to end up with a more efficient scheme than a brute-force finite difference approach. A finite difference approach can also be seen as a reduction of the problem to a finite dimensional system. However, in a finite difference approach the finite dimensional system still has a very large dimension, determined by the number of grid points into which one divides ${\mathbb R}^n$. By contrast the finite dimensional manifolds we shall consider will be defined by only a handful of parameters. 
\section{Software design }\label{sec:Soft} The specific solution algorithm will depend upon numerous choices: whether to use $L^2$ or Hellinger projection; which family of probability distributions to choose; how to parameterize that family; the representation of the functions $f$, $\sigma$ and $b$; how to perform the integrations which arise from the calculation of expectations and inner products; the numerical method selected to solve the finite dimensional equations. To test the effectiveness of the projection idea, we have implemented a {\ttfamily C++ } engine which performs the numerical solution of the finite dimensional equations and allows one to make various selections from the options above. Currently our implementation is restricted to the case of the direct $L^2$ projection for a $1$-dimensional state $X$ and $1$-dimensional noise $W$. However, the engine does allow one to experiment with various manifolds, parameterizations and functions $f$, $\sigma$ and $b$. We use object oriented programming techniques in order to allow this flexibility. Our implementation contains two key classes \class{FunctionRing} and \class{Manifold}. To perform the computation, one must choose a data structure to represent elements of the function space. However, the most effective choice of representation depends upon the family of probability distributions one is considering and the functions $f$, $\sigma$ and $b$. Thus the {\ttfamily C++ } engine does not manipulate the data structure directly but instead works with the functions via the \class{FunctionRing} interface. A UML (Unified Modelling Language \cite{UML}) outline of the \class{FunctionRing} interface is given in table~\ref{UML:FunctionRing}.
\begin{table}[htp] \begin{centering} {\sffamily \begin{tabular}{|l|} \hline FunctionRing \\ \hline + add( $f_1$ : Function, $f_2$ : Function ) : Function \\ + multiply( $f_1$ : Function, $f_2$ : Function ) : Function \\ + multiply( $s$ : Real, $f$ : Function ) : Function \\ + differentiate( $f$ : Function ) : Function \\ + integrate( $f$ : Function ) : Real \\ + evaluate( $f$ : Function ) : Real \\ + constantFunction( $s$ : Real ) : Function \\ \hline \end{tabular} } \caption{UML for the \class{FunctionRing} interface} \label{UML:FunctionRing} \end{centering} \end{table} \begin{table}[htp] \begin{centering} {\sffamily \begin{tabular}{|l|} \hline Manifold \\ \hline + getRing() : FunctionRing \\ + getDensity( $\theta$ ) : Function \\ + computeTangentVectors( $\theta$ : Point ) : Function* \\ + updatePoint( $\theta$ : Point, $\Delta \theta$ : Real* ) : Point \\ + finalizePoint( $\theta$ : Point ) : Point \\ \hline \end{tabular} } \caption{UML for the \class{Manifold} interface} \label{UML:Manifold} \end{centering} \end{table} The other key abstraction is the \class{Manifold}. We give a UML representation of this abstraction in table~\ref{UML:Manifold}. For readers unfamiliar with UML, we remark that the $*$ symbol can be read ``list''. For example, the computeTangentVectors function returns a list of functions. The \class{Manifold} uses some convenient internal representation for a point, the most obvious representation being simply the $m$-tuple $(\theta_1, \theta_2, \ldots \theta_m)$. On request the \class{Manifold} is able to provide the density associated with any point represented as an element of the \class{FunctionRing}. In addition the \class{Manifold} can compute the tangent vectors at any point. The \method{computeTangentVectors} method returns a list of elements of the \class{FunctionRing} corresponding to each of the vectors $v_i = \frac{\partial p}{\partial \theta_i}$ in turn. 
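The engine itself is written in {\ttfamily C++}, but the role of the \class{FunctionRing} interface is easy to illustrate. Below is a hypothetical Python sketch of one possible implementation, in which a Function is simply a list of samples on a uniform grid (the class and method names are ours, not the engine's; the real engine hides the representation behind the interface precisely so that better choices can be substituted):

```python
class GridFunctionRing:
    """Toy realisation of the FunctionRing interface: a Function is a list
    of samples on a uniform grid over [lo, hi]. The scalar overload of
    multiply in the UML is named `scale` here, since Python lacks
    overloading; `sample` is a convenience constructor."""
    def __init__(self, lo=-10.0, hi=10.0, n=2001):
        self.lo, self.hi, self.n = lo, hi, n
        self.h = (hi - lo) / (n - 1)
        self.grid = [lo + i * self.h for i in range(n)]
    def add(self, f1, f2):
        return [a + b for a, b in zip(f1, f2)]
    def multiply(self, f1, f2):
        return [a * b for a, b in zip(f1, f2)]
    def scale(self, s, f):
        return [s * a for a in f]
    def differentiate(self, f):
        # central differences in the interior, one-sided at the ends
        d = [(f[i + 1] - f[i - 1]) / (2 * self.h) for i in range(1, self.n - 1)]
        return [(f[1] - f[0]) / self.h] + d + [(f[-1] - f[-2]) / self.h]
    def integrate(self, f):
        # trapezoidal rule over the grid
        return self.h * (sum(f) - 0.5 * (f[0] + f[-1]))
    def constant_function(self, s):
        return [s] * self.n
    def sample(self, fn):
        return [fn(x) for x in self.grid]
```

With this representation, inner products such as $\langle f_1, f_2 \rangle_D$ are obtained by composing \method{multiply} and \method{integrate}, which is exactly how the engine uses the interface.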
If the point is represented as a tuple $\theta=(\theta_1, \theta_2, \ldots \theta_n)$, the method \method{updatePoint} simply adds the components of the tuple $\Delta \theta$ to each of the components of $\theta$. If a different internal representation is used for the point, the method should make the equivalent change to this internal representation. The \method{finalizePoint} method is called by our algorithm at the end of every time step. At this point the \class{Manifold} implementation can choose to change its parameterization for the state. Thus the \method{finalizePoint} allows us (in principle at least) to use a more sophisticated atlas for the manifold than just a single chart. One should not draw too close a parallel between these computing abstractions and similarly named mathematical abstractions. For example, the space of objects that can be represented by a given \class{FunctionRing} does not need to form a differential ring despite the \method{differentiate} method. This is because the \method{differentiate} function will not be called infinitely often by the algorithm below, so the functions in the ring do not need to be infinitely differentiable. Similarly the \method{finalizePoint} method allows the \class{Manifold} implementation more flexibility than simply changing chart. From one time step to the next it could decide to use a completely different family of distributions. The interface even allows the dimension to change from one time step to the next. We do not currently take advantage of this possibility, but adaptively choosing the family of distributions would be an interesting topic for further research. \subsection{Outline of the algorithm} The {\ttfamily C++ } engine is initialized with a \class{Manifold} object, a copy of the initial \class{Point} and \class{Function} objects representing $f$, $\sigma$ and $b$. At each time point the engine asks the manifold to compute the tangent vectors given the current point.
Using the multiply and integrate functions of the class \class{FunctionRing}, the engine can compute the inner products of any two functions, hence it can compute the metric matrix $h_{ij}$. Similarly, the engine can ask the manifold for the density function given the current point and can then compute ${\cal L}^*p$. Proceeding in this way, all the coefficients of ${\mathrm d} t$ and $\circ {\mathrm d} Y$ in equation~\ref{KSE:l2projected} can be computed at any given point in time. Were equation~\ref{KSE:l2projected} an It\^o SDE one could now numerically estimate $\Delta \theta$, the change in $\theta$ over a given time interval $\Delta$ in terms of $\Delta$ and $\Delta Y$, the change in $Y$. One would then use the \method{updatePoint} method to compute the new point and then one could repeat the calculation for the next time interval. In other words, were equation~\ref{KSE:l2projected} an It\^o SDE we could numerically solve the SDE using the Euler scheme. However, equation~\ref{KSE:l2projected} is a Stratonovich SDE so the Euler scheme is no longer valid. Various numerical schemes for solving stochastic differential equations are considered in \cite{burrageburragetian} and \cite{kloedenplaten}. One of the simplest is the Stratonovich--Heun method described in \cite{burrageburragetian}. Suppose that one wishes to solve the SDE: \[ {\mathrm d} y_t = f(y_t) {\mathrm d} t + g( y_t) \circ {\mathrm d} W_t \] The Stratonovich--Heun method generates an estimate for the solution $y_n$ at the $n$-th time interval using the formulae: \begin{eqnarray*} Y_{n+1} & = & y_n + f(y_n) \Delta + g(y_n) \Delta W_n \\ y_{n+1} & = & y_n + \frac{1}{2} (f(y_n) + f(Y_{n+1})) \Delta + \frac{1}{2}( g(y_n) + g(Y_{n+1})) \Delta W_n \end{eqnarray*} In these formulae $\Delta$ is the size of the time interval and $\Delta W_n$ is the change in $W$. One can think of $Y_{n+1}$ as being a prediction and the value $y_{n+1}$ as being a correction.
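In code the scheme is only a few lines. A minimal sketch for a scalar equation (our own helper, not the engine's routine; the Brownian increments are passed in explicitly so that deterministic sanity checks are possible):

```python
import math

def stratonovich_heun(f, g, y0, T, n, dW):
    """Predictor-corrector (Stratonovich--Heun) scheme for
    dy = f(y) dt + g(y) o dW, where dW[k] is the Brownian increment
    supplied for step k."""
    dt = T / n
    y = y0
    for k in range(n):
        pred = y + f(y) * dt + g(y) * dW[k]  # prediction Y_{n+1}
        y = y + 0.5 * (f(y) + f(pred)) * dt \
              + 0.5 * (g(y) + g(pred)) * dW[k]  # correction y_{n+1}
    return y

# With all increments zero the scheme reduces to the classical Heun method
# for ODEs; for dy = -y dt the error at t = 1 is second order in dt:
y_check = stratonovich_heun(lambda y: -y, lambda y: 0.0, 1.0, 1.0, 100, [0.0] * 100)
err = abs(y_check - math.exp(-1.0))
```

A second deterministic check uses $f \equiv 0$, $g(y) = y$: in Stratonovich calculus the ordinary chain rule holds, so the solution is $y_0 e^{W_t}$, and feeding in a fixed sequence of increments summing to $W_T$ reproduces $e^{W_T}$ to second order.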
Thus this scheme is a direct translation of the standard Euler--Heun scheme for ordinary differential equations. We can use the Stratonovich--Heun method to numerically solve equation~\ref{KSE:l2projected}. Given the current value $\theta_n$ for the state, compute an estimate for $\Delta \theta_n$ by replacing ${\mathrm d} t$ with $\Delta$ and ${\mathrm d} W$ with $\Delta W$ in equation~\ref{KSE:l2projected}. Using the \method{updateState} method compute a prediction $\Theta_{n+1}$. Now compute a second estimate for $\Delta \theta_n$ using equation~\ref{KSE:l2projected} in the state $\Theta_{n+1}$. Pass the average of the two estimates to the \method{updateState} function to obtain the new state $\theta_{n+1}$. At the end of each time step, the method \method{finalizeState} is called. This gives the manifold implementation the opportunity to perform checks such as validation of the state, to correct the normalization and, if desired, to change the representation it uses for the state. One small observation worth making is that equation~\ref{KSE:l2projected} contains the term $h^{ij}$, the inverse of the matrix $h_{ij}$. However, it is not necessary to actually calculate the matrix inverse in full. It is better numerically to multiply both sides of equation~\ref{KSE:l2projected} by the matrix $h_{ij}$ and then compute ${\mathrm d} \theta$ by solving the resulting linear equations directly. This is the approach taken by our algorithm. As we have already observed, there is a wealth of choices one could make for the numerical scheme used to solve equation~\ref{KSE:l2projected}; we have simply selected the most convenient. The existing \class{Manifold} and \class{FunctionRing} implementations could be used directly by many of these schemes --- in particular those based on Runge--Kutta schemes. In principle one might also consider schemes that require explicit formulae for higher derivatives such as $\frac{\partial^2 p}{\partial \theta_i \partial \theta_j}$.
In this case one would need to extend the manifold abstraction to provide this information. Similarly one could use the same concepts in order to solve equation~\ref{KSE:hellingerProjected} where one uses the Hellinger projection. In this case the \class{FunctionRing} would need to be extended to allow division. This would in turn complicate the implementation of the integrate function, which is why we have not yet implemented this approach. \subsection{Implementation for normal mixture families} Let ${\cal R}$ denote the space of functions which can be written as finite linear combinations of terms of the form: \[ \pm x^n e^{ a x^2 + b x + c} \] where $n$ is a non-negative integer and $a$, $b$ and $c$ are constants. ${\cal R}$ is closed under addition, multiplication and differentiation, so it forms a differential ring. We have written an implementation of \class{FunctionRing} corresponding to ${\cal R}$. Although the implementation is mostly straightforward some points are worth noting. Firstly, we store elements of our ring in memory as a collection of tuples $(\pm, a, b, c, n)$. Although one can write: \[ \pm x^n e^{ a x^2 + b x + c } = q x^n e^{a x^2 + b x} \] for appropriate $q$, the use of such a term in computer memory should be avoided as it will rapidly lead to significant rounding errors. A small amount of care is required throughout the implementation to avoid such rounding errors. Secondly, let us consider explicitly how to implement integration for this ring. Let us define $u_n$ to be the integral of $x^n e^{-x^2}$. Using integration by parts one has: \[ u_n:= \int_{-\infty}^{\infty} x^n e^{-x^2} {\mathrm d} x = \frac{n-1}{2} \int_{-\infty}^{\infty} x^{n-2} e^{-x^2} {\mathrm d} x = \frac{n-1}{2} u_{n-2} \] Since $u_0 = \sqrt{\pi}$ and $u_1 = 0$ we can compute $u_n$ recursively. Hence we can analytically compute the integral of $p(x) e^{-x^2}$ for any polynomial $p$. By substitution, we can now integrate $p(x-\mu) e^{-(x-\mu)^2}$ for any $\mu$.
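The recursion for $u_n$ translates directly into code. The following is a minimal sketch (the function name `u` is our own choice); odd moments vanish by symmetry, and even moments follow from $u_0 = \sqrt{\pi}$:

```cpp
#include <cmath>

// Gaussian moment integrals u_n = \int_{-inf}^{inf} x^n e^{-x^2} dx, computed
// with the recursion u_n = ((n-1)/2) u_{n-2}, starting from u_0 = sqrt(pi),
// u_1 = 0, as described above.
double u(int n) {
    if (n % 2 == 1) return 0.0;                 // odd moments vanish by symmetry
    double result = std::sqrt(std::acos(-1.0)); // u_0 = sqrt(pi)
    for (int k = 2; k <= n; k += 2)
        result *= (k - 1) / 2.0;                // u_k = ((k-1)/2) u_{k-2}
    return result;
}
```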
By completing the square we can analytically compute the integral of $p(x) e^{a x^2 + b x + c}$ so long as $a<0$. Putting all this together one has an algorithm for analytically integrating the elements of ${\cal R}$. Let ${\cal N}^{i}$ denote the space of probability distributions that can be written as $\sum_{k=1}^i c_k e^{a_k x^2 + b_k x}$ for some real numbers $a_k$, $b_k$ and $c_k$ with $a_k<0$. Given a smooth curve $\gamma(t)$ in ${\cal N}^{i}$ we can write: \begin{equation*} \gamma(t) = \sum_{k=1}^i c_k(t) e^{a_k(t) x^2 + b_k(t) x}. \end{equation*} We can then compute: \begin{eqnarray*} \frac{ {\mathrm d} \gamma }{{\mathrm d} t} & = & \sum_{k=1}^i \left( \left( \frac{ {\mathrm d} a_k}{ {\mathrm d} t} x^2 + \frac{ {\mathrm d} b_k}{ {\mathrm d} t} x \right) c_k e^{a_k x^2 + b_k x} + \frac{ {\mathrm d} c_k}{ {\mathrm d} t} e^{a_k x^2 + b_k x} \right) \\ & \in & {\cal R} \end{eqnarray*} We deduce that the tangent vectors of any smooth submanifold of ${\cal N}^{i}$ must also lie in ${\cal R}$. In particular this means that our implementation of \class{FunctionRing} will be sufficient to represent the tangent vectors of any manifold consisting of finite normal mixtures. Combining these ideas we obtain the main theoretical result of the paper. \begin{theorem} Let $\theta$ be a parameterization for a family of probability distributions all of which can be written as a mixture of at most $i$ Gaussians. Let $f$, $a=\sigma^2$ and $b$ be functions in the ring ${\cal R}$. In this case one can carry out the direct $L^2$ projection algorithm for the problem given by equation~(\ref{Lanc1-1}) using analytic formulae for all the required integrations. \end{theorem} Although the condition that $f$, $a$ and $b$ lie in ${\cal R}$ may seem somewhat restrictive, when this condition is not met one could use Taylor expansions to find approximate solutions. 
Although the choice of parameterization does not affect the choice of \class{FunctionRing}, it does affect the numerical behaviour of the algorithm. In particular if one chooses a parameterization with domain a proper subset of ${\mathbb R}^m$, the algorithm will break down the moment the point $\theta$ leaves the domain. With this in mind, in the numerical examples given later in this paper we parameterize normal mixtures of $k$ Gaussians with a parameterization defined on the whole of ${\mathbb R}^{3k-1}$. We describe this parameterization below. Label the parameters $\xi_i$ (with $1\leq i \leq k-1$), $x_1$, $y_i$ (with $2\leq i \leq k$) and $s_i$ (with $1 \leq i \leq k$). This gives a total of $3k-1$ parameters. So we can write \[ \theta=(\xi_1,\ldots, \xi_{k-1}, x_1, y_2, \ldots, y_{k}, s_1, \ldots, s_{k}) \] Given a point $\theta$ define variables as follows: \begin{eqnarray*} \lambda_1 & = & \logit^{-1}(\xi_1) \\ \lambda_i & = & \logit^{-1}(\xi_i) (1 - \lambda_1 - \lambda_2 - \ldots - \lambda_{i-1}) \qquad (2 \leq i \leq k-1) \\ \lambda_k & = & 1 - \lambda_1 - \lambda_2 - \ldots - \lambda_{k-1} \\ x_i & = & x_{i-1} + e^{y_i} \qquad (2 \leq i \leq k) \\ \sigma_i &= & e^{s_i} \end{eqnarray*} where the $\logit$ function sends a probability $p \in [0,1]$ to its log odds, $\ln(p/(1-p))$. We can now write the density associated with $\theta$ as: \begin{equation*} p(x) = \sum_{i=1}^k \lambda_i \frac{1}{\sigma_i \sqrt{2 \pi}} \exp( -\frac{(x - x_i)^2}{2 \sigma_i^2} ) \end{equation*} We do not claim this is the best possible choice of parameterization, but it certainly performs better than some more na\"ive parameterizations with bounded domains of definition. We will call the direct $L^2$ projection algorithm onto the normal mixture family given with this parameterization the {\em L2NM projection filter}.
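The map from the unconstrained parameter vector $\theta$ to the mixture weights, means and standard deviations can be sketched as follows. The helper `decode` and the struct `MixtureParams` are hypothetical names introduced for illustration; the formulae are those given above:

```cpp
#include <cmath>
#include <vector>

// Decoded mixture parameters: weights lambda_i, means x_i, std devs sigma_i.
struct MixtureParams {
    std::vector<double> lambda, mean, sigma;
};

double invLogit(double xi) { return 1.0 / (1.0 + std::exp(-xi)); }

// theta = (xi_1..xi_{k-1}, x_1, y_2..y_k, s_1..s_k), 3k-1 parameters in all.
MixtureParams decode(const std::vector<double>& theta, int k) {
    MixtureParams m;
    double remaining = 1.0;
    for (int i = 0; i < k - 1; ++i) {       // stick-breaking weights lambda_i
        double l = invLogit(theta[i]) * remaining;
        m.lambda.push_back(l);
        remaining -= l;
    }
    m.lambda.push_back(remaining);          // lambda_k = 1 - sum of the others
    double x = theta[k - 1];                // x_1
    m.mean.push_back(x);
    for (int i = 2; i <= k; ++i) {          // x_i = x_{i-1} + e^{y_i}, so means increase
        x += std::exp(theta[k - 2 + i]);
        m.mean.push_back(x);
    }
    for (int i = 0; i < k; ++i)             // sigma_i = e^{s_i} > 0
        m.sigma.push_back(std::exp(theta[2 * k - 1 + i]));
    return m;
}
```

By construction the weights are positive and sum to one, the means are ordered, and the standard deviations are positive, for any $\theta \in {\mathbb R}^{3k-1}$.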
\subsection{Comparison with the Hellinger exponential (HE) projection algorithm} A similar algorithm is described in \cite{brigo98,brigo99} for projection using the Hellinger metric onto an exponential family. We refer to this as the {\em HE projection filter}. It is worth highlighting the key differences between our algorithm and the exponential projection algorithm described in \cite{brigo98}. \begin{itemize} \item In \cite{brigo98} only the special case of the cubic sensor was considered. It was clear that one could in principle adapt the algorithm to cope with other problems, but there remained symbolic manipulation that would have to be performed by hand. Our algorithm automates this process by using the \class{FunctionRing} abstraction. \item When one projects onto an exponential family, the stochastic term in equation~(\ref{KSE:hellingerProjected}) simplifies to a term with constant coefficients. This means it can be viewed equally well as either an It\^o or Stratonovich SDE. The practical consequence of this is that the HE algorithm can use the Euler--Maruyama scheme rather than the Stratonovich--Heun scheme to solve the resulting stochastic differential equations. Moreover in this case the Euler--Maruyama scheme coincides with the generally more precise Milstein scheme. \item In the case of the cubic sensor, the HE algorithm requires one to numerically evaluate integrals such as: \[ \int_{-\infty}^{\infty} x^n \exp( \theta_1 + \theta_2 x + \theta_3 x^2 + \theta_4 x^4) {\mathrm d} x \] where the $\theta_i$ are real numbers. Performing such integrals numerically considerably slows the algorithm. In effect one ends up using a rather fine discretization scheme to evaluate the integral and this somewhat offsets the hoped-for advantage over a finite difference method. \end{itemize} \section{Numerical Results }\label{NumRes} In this section we compare the results of using the direct $L^2$ projection filter onto a mixture of normal distributions with other numerical methods.
In particular we compare it with: \begin{enumerate} \item A finite difference method using a fine grid which we term the {\em exact filter}. Various convergence results are known (\cite{kushner} and \cite{kushnerHuang}) for this method. In the simulations shown below we use a grid with $1000$ points on the $x$-axis and $5000$ time points. In our simulations we could not visually distinguish the resulting graphs when the grid was refined further, justifying us in considering this to be extremely close to the exact result. The precise algorithm used is as described in the section on ``Partial Differential Equations Methods'' in chapter 8 of Bain and Crisan~\cite{crisan10}. \item The {\em extended Kalman filter} (EK). This is a somewhat heuristic approach to solving the non-linear filtering problem but one which works well so long as one assumes the system is almost linear. It is implemented essentially by linearising all the functions in the problem and then using the exact Kalman filter to solve this linear problem --- the details are given in \cite{crisan10}. The EK filter is widely used in applications and so provides a standard benchmark. However, it is well known that it can give wildly inaccurate results for non-linear problems, so it should be unsurprising to see that it performs badly for most of the examples we consider. \item The HE projection filter. In fact we have implemented a generalization of the algorithm given in \cite{BrigoPhD} that can cope with filtering problems where $b$ is an arbitrary polynomial, $\sigma$ is constant and $f=0$. Thus we have been able to examine the performance of the exponential projection filter over a slightly wider range of problems than have previously been considered. \end{enumerate} To compare these methods, we have simulated solutions of equation~\ref{Lanc1-1} for various choices of $f$, $\sigma$ and $b$.
We have also selected a prior probability distribution $p_0$ for $X$ and then compared the numerical estimates for the probability distribution $p$ at subsequent times given by the different algorithms. In the examples below we have selected a fixed value for the initial state $X_0$ rather than drawing at random from the prior distribution. This should have no more impact upon the results than does the choice of seed for the random number generator. Since each of the approximate methods can only represent certain distributions accurately, we have had to use different prior distributions for each algorithm. To compare the two projection filters we have started with a polynomial exponential distribution for the prior and then found a nearby mixture of normal distributions. This nearby distribution was found using a gradient search algorithm to minimize the numerically estimated $L^2$ norm of the difference of the normal and polynomial exponential distributions. As indicated earlier, polynomial exponential distributions and normal mixtures are qualitatively similar, so the prior distributions we use are close for each algorithm. For the extended Kalman filter, one has to approximate the prior distribution with a single Gaussian. We have done this by moment matching. Inevitably this does not always produce satisfactory results. For the exact filter, we have used the same prior as for the $L^2$ projection filter. \subsection{The linear filter} The first test case we have examined is the linear filtering problem. In this case the probability density will be a Gaussian at all times --- hence if we project onto the two-dimensional family consisting of all Gaussian distributions there should be no loss of information. Thus both projection filters should give exact answers for linear problems. This is indeed the case, and gives some confidence in the correctness of the computer implementations of the various algorithms.
\subsection{The quadratic sensor} The second test case we have examined is the {\em quadratic sensor}. This is problem~\ref{Lanc1-1} with $f=0$, $\sigma=c_1$ and $b(x)=c_2 x^2$ for some positive constants $c_1$ and $c_2$. In this problem the non-injectivity of $b$ tends to cause the distribution at any time to be bimodal. To see why, observe that the sensor provides no information about the sign of $x$; once the state of the system has passed through $0$, we expect the probability density to become approximately symmetrical about the origin. Since we expect the probability density to be bimodal for the quadratic sensor, it makes sense to approximate the distribution with a linear combination of two Gaussian distributions. In Figure~\ref{quadraticSensorTimePoints} we show the probability density as computed by three of the algorithms at 10 different time points for a typical quadratic sensor problem. To reduce clutter we have not plotted the results for the exponential filter. The prior exponential distribution used for this simulation was $p(x)=\exp( 0.25 -x^2 + x^3 -0.25 x^4 )$. The initial state was $X_0 = 0$ and $Y_0=0$. As one can see, the probability densities computed using the exact filter and the L2NM filter become visually indistinguishable when the state moves away from the origin. The extended Kalman filter is, as one would expect, completely unable to cope with these bimodal distributions. In this case the extended Kalman filter is simply representing the larger of the two modes. \begin{figure}[htp] \begin{centering} \includegraphics{examples/QuadraticSensorExample1-timeSlices} \end{centering} \caption{Estimated probability densities at $10$ time points for the problem $b(x)=x^2$} \label{quadraticSensorTimePoints} \end{figure} In Figure~\ref{quadraticSensorResiduals} we have plotted the {\em $L^2$ residuals} for the different algorithms when applied to the quadratic sensor problem.
We define the $L^2$ residual to be the $L^2$ norm of the difference between the exact filter distribution and the estimated distribution. \begin{equation*} \hbox{$L^2$ residual} = \left( \int | p_{\hbox{exact}} - p_{\hbox{approx}} |^2 {\mathrm d} \mu \right)^{\frac{1}{2}} \end{equation*} As can be seen, the L2NM projection filter outperforms the HE projection filter when applied to the quadratic sensor problem. Notice that the $L^2$ residuals are initially small for both the HE and the L2NM filter. The superior performance of the L2NM projection filter in this case stems from the fact that one can more accurately represent the distributions that occur using the normal mixture family than using the polynomial exponential family. If preferred, one could define a similar notion of residual using the Hellinger metric. The results would be qualitatively similar. One interesting feature of Figure~\ref{quadraticSensorResiduals} is that the error remains bounded in size even though one might expect the error to accumulate over time. This suggests that the arrival of new measurements is gradually correcting for the errors introduced by the approximation. \begin{figure}[htp] \begin{centering} \includegraphics{examples/QuadraticSensorExample1-residuals} \caption{$L^2$ residuals for the problem $b(x)=x^2$} \label{quadraticSensorResiduals} \end{centering} \end{figure} \subsection{The cubic sensor} A third test case we have considered is the {\em general cubic sensor}. In this problem one has $f=0$, $\sigma=c_1$ for some constant $c_1$ and $b$ is some cubic function. The case when $b$ is a multiple of $x^3$ is called the {\em cubic sensor} and was used as the test case for the exponential projection filter using the Hellinger metric considered in \cite{BrigoPhD}. It is of interest because it is the simplest case where $b$ is injective but where it is known that the problem cannot be reduced to a finite dimensional stochastic differential equation \cite{HaMaSu}.
It is known from earlier work that the exponential filter gives excellent numerical results for the cubic sensor. Our new implementations allow us to examine the general cubic sensor. In Figure~\ref{cubicSensorTimePoints}, we have plotted example probability densities over time for the problem with $f=0$, $\sigma=1$ and $b=x^3 -x$. With two turning points for $b$, this problem is very far from linear. As can be seen in Figure~\ref{cubicSensorTimePoints}, the L2NM projection remains close to the exact distribution throughout. A mixture of only two Gaussians is enough to approximate quite a variety of differently shaped distributions with perhaps surprising accuracy. As expected, the extended Kalman filter gives poor results until the state moves to a region where $b$ is injective. The results of the exponential filter have not been plotted in Figure~\ref{cubicSensorTimePoints} to reduce clutter. It gave similar results to the L2NM filter. The prior polynomial exponential distribution used for this simulation was $p(x)=\exp( 0.5 x^2 -0.25 x^4 )$. The initial state was $X_0 = 0$, which is one of the modes of the prior distribution. The initial value for $Y_0$ was taken to be $0$. \begin{figure}[htp] \begin{centering} \includegraphics{examples/FullCubicSensor-timeSlices} \end{centering} \caption{Estimated probability densities at $10$ time points for the problem $b(x)=x^3-x$} \label{cubicSensorTimePoints} \end{figure} One new phenomenon that occurs when considering the cubic sensor is that the algorithm sometimes abruptly fails. This is true for both the L2NM projection filter and the HE projection filter. To show the behaviour over time more clearly, in Figure~\ref{cubicSensorMeansAndSds} we have shown a plot of the mean and standard deviation as estimated by the L2NM projection filter against the actual mean and standard deviation. We have also indicated the true state of the system. The mean for the L2NM filter drops to $0$ at approximately time $7$.
It is at this point that the algorithm has failed. \begin{figure}[htp] \begin{centering} \includegraphics{examples/FullCubicSensor-meanAndSd} \caption{Estimates for the mean and standard deviation for the problem $b(x)=x^3-x$} \label{cubicSensorMeansAndSds} \end{centering} \end{figure} What has happened is that as the state has moved to a region where the sensor is reasonably close to being linear, the probability distribution has tended to a single normal distribution. Such a distribution lies on the boundary of the family consisting of a mixture of two normal distributions. As we approach the boundary, $h_{ij}$ ceases to be invertible, causing the failure of the algorithm. Analogous phenomena occur for the exponential filter. The result of running numerous simulations suggests that the HE filter is rather less robust than the L2NM projection filter. The typical behaviour is that the exponential filter maintains a very low residual right up until the point of failure. The L2NM projection filter on the other hand tends to give slightly inaccurate results shortly before failure and can often correct itself without failing. This behaviour can be seen in Figure~\ref{cubicSensorResiduals}. In this figure, the residual for the exponential projection remains extremely low until the algorithm fails abruptly --- this is indicated by the vertical dashed line. The L2NM filter on the other hand deteriorates from time $6$ but only fails at time $7$. \begin{figure}[htp] \begin{centering} \includegraphics{examples/FullCubicSensor-residuals} \caption{$L^2$ residuals for the problem $b(x)=x^3-x$} \label{cubicSensorResiduals} \end{centering} \end{figure} The $L^2$ residuals of the L2NM method are rather large between times $6$ and $7$, but note that the accuracy of the estimates for the mean and standard deviation in Figure~\ref{cubicSensorMeansAndSds} remains reasonable throughout this time.
To understand this, note that for two normal distributions with means a distance $x$ apart, the $L^2$ distance between the distributions increases as the standard deviations of the distributions drop. Thus the increase in $L^2$ residuals between times $6$ and $7$ is to a large extent due to the drop in standard deviation between these times. As a result, one may feel that the $L^2$ residual does not capture precisely what it means for an approximation to be ``good''. In the next section we will show how to measure residuals in a way that corresponds more closely to the intuitive idea of two distributions having visually similar distribution functions. In practice one's definition of a good approximation will depend upon the application. Although one might argue that the filter is in fact behaving reasonably well between times $6$ and $7$, it does ultimately fail. There is an obvious fix for failures like this. When the current point is sufficiently close to the boundary of the manifold, simply approximate the distribution with an element of the boundary. In other words, approximate the distribution using a mixture of fewer Gaussians. Since this means moving to a lower dimensional family of distributions, the numerical implementation will be more efficient on the boundary. This will provide a temporary fix for the failure of the algorithm, but it raises another problem: as the state moves back into a region where the problem is highly non-linear, how can one decide how to leave the boundary and start adding additional Gaussians back into the mixture? We hope to address this question in a future paper. \section{ Comparison with Particle Methods }\label{sec:Part} Particle methods approximate the probability density $p$ using discrete measures of the form: \[ \sum_i a_i(t) \delta_{v_i(t)} \] These measures are generated using a Monte Carlo method.
The measure can be thought of as the empirical distribution associated with randomly located particles at positions $v_i(t)$ and of stochastic mass $a_i(t)$. Particle methods are currently some of the most effective numerical methods for solving the filtering problem. See \cite{crisan10} and the references therein for details of specific particle methods and convergence results. The first issue in comparing projection methods with particle methods is that, since a particle approximation is a linear combination of Dirac masses, one can only expect it to converge weakly to the exact solution. In particular the $L^2$ metric and the Hellinger metric are both inappropriate measures of the residual between the exact solution and a particle approximation. Indeed the $L^2$ distance is not defined and the Hellinger distance will always take the value $\sqrt{2}$. To combat this issue, we will measure residuals using the L{\'e}vy metric. If $p$ and $q$ are two probability measures on ${\mathbb R}$ and $P$ and $Q$ are the associated cumulative distribution functions then the L{\'e}vy metric is defined by: \[ d_L(p,q) = \inf \{ \epsilon: P(x-\epsilon) - \epsilon \leq Q(x) \leq P(x + \epsilon) + \epsilon \quad \forall x\} \] This can be interpreted geometrically as the size of the largest square with sides parallel to the coordinate axes that can be inserted between the completed graphs of the cumulative distribution functions (the completed graph of the distribution function is simply the graph of the distribution function with vertical line segments added at discontinuities). The L{\'e}vy metric can be seen as a special case of the L{\'e}vy--Prokhorov metric, which can be used to measure the distance between measures on a general metric space. For Polish spaces, the L{\'e}vy--Prokhorov metric metrises the weak convergence of probability measures \cite{billingsley}. Thus the L{\'e}vy metric provides a reasonable measure of the residual of a particle approximation.
We will call residuals measured in this way L{\'e}vy residuals. A second issue in comparing projection methods with particle methods is deciding how many particles to use for the comparison. A natural choice is to compare a projection method onto an $m$-dimensional manifold with a particle method that approximates the distribution using $\lceil (m+1)/2 \rceil$ particles. In other words, equate the dimension of the families of distributions used for the approximation. A third issue is deciding which particle method to choose for the comparison from the many algorithms that can be found in the literature. We can work around this issue by calculating the best possible approximation to the exact distribution that can be made using $\lceil (m+1)/2 \rceil$ Dirac masses. This approach will substantially underestimate the L{\'e}vy residual of a particle method: since these are Monte Carlo methods, large numbers of particles would be required in practice. \begin{figure}[htp] \begin{centering} \includegraphics{examples/QuadraticSensorExample1-prokhorovResiduals} \caption{L{\'e}vy residuals for the problem $b(x)=x^2$} \label{levyResiduals} \end{centering} \end{figure} In Figure \ref{levyResiduals} we have plotted bounds on the L{\'e}vy residuals for the two projection methods for the quadratic sensor. Since mixtures of two normal distributions lie in a $5$ dimensional family, we have compared these residuals with the best possible L{\'e}vy residual for a mixture of three Dirac masses. To compute the L{\'e}vy residual between two functions we have first approximated the cumulative distribution functions using step functions. We have used the same grid for these steps as we used to compute our ``exact'' filter. We have then used a brute force approach to compute a bound on the size of the largest square that can be placed between these step functions.
Thus if we have used a grid with $n$ points to discretize the $x$-axis, we will need to make $n^2$ comparisons to estimate the L{\'e}vy residual. More efficient algorithms are possible, but this approach is sufficient for our purposes. The maximum accuracy of the computation of the L{\'e}vy metric is constrained by the grid size used for our ``exact'' filter. Since the grid size in the $x$ direction for our ``exact'' filter is $0.01$, our estimates for the projection residuals are bounded below by $0.02$. The computation of the minimum residual for a particle filter is a little more complex. Let $\hbox{minEpsilon}(F,n)$ denote the minimum L{\'e}vy distance between a distribution with cumulative distribution $F$ and a distribution of $n$ particles. Let $\hbox{minN}(F,\epsilon)$ denote the minimum number of particles required to approximate $F$ with a residual of less than $\epsilon$. If we can compute $\hbox{minN}$ we can use a line search to compute $\hbox{minEpsilon}$. To compute $\hbox{minN}(F,\epsilon)$ for an increasing step function $F$ with $F(-\infty)=0$ and $F(\infty)=1$, one needs to find the minimum number of steps in a similar increasing step function $G$ that is never further than $\epsilon$ away from $F$ in the $L^{\infty}$ metric. One constructs candidate step functions $G$ by starting with $G(-\infty)=0$ and then moving along the $x$-axis, adding in additional steps as required to remain within a distance $\epsilon$. An optimal $G$ is found by adding in steps as late as possible and, when adding a new step, making it as high as possible. In this way we can compute $\hbox{minN}$ and $\hbox{minEpsilon}$ for step functions $F$. We can then compute bounds on these values for a given distribution by approximating its cumulative density function with a step function.
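The greedy construction of $G$ described above can be sketched as follows. The helper `minN` is our own illustrative simplification: it tracks only the current level of $G$ along the grid of sampled values of $F$, adds a step only when forced to (as late as possible), makes the new step as high as allowed, and caps $G$ at $1$ since $G$ is a CDF:

```cpp
#include <algorithm>
#include <vector>

// Greedy computation of minN(F, eps): the minimum number of steps in an
// increasing step function G with |G - F| <= eps everywhere, for an increasing
// step function F sampled on a grid (F(-infty) = 0, F(+infty) = 1).
int minN(const std::vector<double>& F, double eps) {
    int steps = 0;
    double g = 0.0;                       // current level of G; G(-infty) = 0
    for (double f : F) {
        if (f - g > eps) {                // G has fallen more than eps below F
            g = std::min(f + eps, 1.0);   // add a step, as high as allowed
            ++steps;
        }
    }
    return steps;
}
```

Because $F$ is increasing, a level set this way never overshoots $F + \epsilon$ at later grid points, so each step is forced and the count is minimal on the sampled grid. A line search over $\epsilon$ then yields $\hbox{minEpsilon}$.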
As can be seen, the exponential and mixture projection filters have similar accuracy as measured by the L{\'e}vy residual, and it is impossible to match this accuracy using a model containing only $3$ particles. \section{Conclusions and Future Research}\label{sec:Conc} Projection onto a family of normal mixtures using the $L^2$ metric allows one to approximate the solutions of the non-linear filtering problem with surprising accuracy using only a small number of component distributions. In this regard it behaves in a very similar fashion to the projection onto an exponential family using the Hellinger metric that has been considered previously. The L2NM projection filter has one important advantage over the HE projection filter: for problems with polynomial coefficients, all required integrals can be calculated analytically. Problems with more general coefficients can be addressed using Taylor series. One expects this to translate into a better performing algorithm --- particularly if the approach is extended to higher dimensional problems. We tested both filters against the optimal filter in simple but interesting systems, and we provided a metric to compare the performance of each filter with the optimal one. We also tested both filters against a particle method, showing that with the same number of parameters the L2NM filter outperforms the best possible particle method in the L{\'e}vy metric. We designed a software structure and populated it with models that make the L2NM filter quite appealing from a numerical and computational point of view. Areas of future research that we hope to address include: the relationship between the projection approach and existing numerical approaches to the filtering problem; the convergence of the algorithm; improving the stability and performance of the algorithm by adaptively changing the parameterization of the manifold; numerical simulations in higher dimensions.
\section{Introduction} \label{section:introduction} The objective of survival analysis is to estimate the hitting time of one or more events in the presence of censored data. In healthcare, such events can include time of death, onset of a disease, time of organ failure, etc. The ability to predict the time or probability of certain events happening to a patient is a valuable asset for medical professionals and clinical decision makers, as it enables them to manage clinical resources more efficiently, make better informed decisions on the treatment they offer to patients, find the best organ donor-recipient matches, etc. Electronic health records (EHRs) often have characteristics that pose several challenges to the development of reliable survival models. Some EHRs are longitudinal, i.e., multiple observations per covariate per patient over time are recorded. The survival model must be able to process such measurements and learn from their sequential temporal trends. Medical records, especially longitudinal observations, tend to be highly sparse. Therefore, any reliable survival model must effectively handle missing values, even if the missing rate is extremely high. Dealing with right censored data is another complication for survival analysis models. Right censoring often occurs when medical centers lose track of a patient after a certain time, called the censoring time. Survival models must take censored data into account during the training phase. In addition, the presence of competing risk events is another challenge that survival models need to deal with. Having a long-tail distribution is another characteristic of many medical datasets. In such skewed datasets, samples with shorter event times form the bulk of the distribution, while samples with longer event times make up only a small portion of the dataset. Capturing the distribution of such skewed datasets is another challenge for survival and regression models.
In fact, many existing survival models cannot accurately predict probability distribution functions (PDFs) with a long time span from medical datasets. The major challenge of developing survival analysis models is, however, the non-existence of ground truth for the probability distribution of risk events. Medical records contain the time and type of events that happened to patients; the underlying distribution of time-to-event, however, is unknown. This makes developing statistical models or supervised training of machine learning models for survival analysis difficult. Survival models can be divided into two main categories: parametric and non-parametric. Parametric survival models assume a certain stochastic distribution for the dataset and try to estimate the parameters of the assumed distribution. Non-parametric survival models, on the other hand, do not assume any prior distribution for the events. Instead, they try to estimate distributions purely based on the observed relationship between the covariates and the time of events in the dataset. To address the aforementioned challenges of developing survival analysis models and to alleviate the shortcomings of the existing survival models, we propose Survival Sequence-to-Sequence (Survival Seq2Seq). Survival Seq2Seq is a non-parametric multi-event deep model, capable of processing longitudinal measurements with very high missing rates. The accuracy of our model in predicting event times, as well as the quality of its generated PDFs, exceeds that of existing survival models. In addition, Survival Seq2Seq performs superbly on skewed datasets. The superiority of our model is backed by the results obtained by training Survival Seq2Seq on synthetic and medical datasets, which are provided in the later sections of this paper. 
Our proposed Survival Seq2Seq model has the following key features: \begin{itemize} \item The first layer of the recurrent neural network (RNN)-based encoder network of Survival Seq2Seq is made of Gated Recurrent Unit with Decay (GRU-D) \cite{grud} cells. GRU-D cells offer superior performance in imputing not-missing-at-random values. Taking advantage of GRU-D, Survival Seq2Seq can effectively handle the high missing rates that commonly occur in medical datasets. \item The decoder network of our model is a recurrent network, which can generate substantially smoother PDFs compared to other non-parametric survival models. Since a recurrent decoder has fewer trainable parameters than a decoder made of dense layers, Survival Seq2Seq suffers less from overfitting than other non-parametric models that use Multi-Layer Perceptrons (MLPs) in their decoders. \item We have enhanced the typical loss function used for training non-parametric survival models by improving its ranking loss term. The improved ranking loss helps the model to better rank samples with longer event times. \item Our proposed model can be effectively trained on datasets with a long-tailed distribution, which is a common characteristic of healthcare datasets. This means that Survival Seq2Seq can accurately predict longer event times as well as shorter ones. \end{itemize} \section*{Generalizable Insights about Deep Survival Models in the Context of Healthcare} \label{section:healthcareApplications} The goal of survival analysis is to provide a hazard or, reversely, a survival function for a medical event for a patient, given some clinical observations. The hazard function, or equivalently the hazard rate, expresses the risk of the occurrence of a medical event such as death or organ failure over time. 
While survival analysis has a long history in healthcare and non-healthcare applications using Cox-based proportional hazards (CPH) models, recent publications show the superior performance of deep learning models for predicting hazard functions. This means that the occurrence rate of medical events can be predicted more accurately over time. Consequently, healthcare providers can take preemptive actions more accurately and adequately. As an example, the time of death of potential donors can be predicted via the hazard function, which allows procurement teams to make a timely attempt. Moreover, a deployed tool based on an accurate hazard function can improve matchmaking and the outcome of transplants, leading to shorter wait lists and waiting times, improved longevity of the organs after transplantation, less need for re-transplantation, and longer survival of recipients with a higher quality of life. Currently, medical decision-making tools based on manual calculations are very complex and not fully supported by solid evidence. For example, it is challenging to accurately predict the death time of potential organ Donation after Circulatory Death (DCD) donors to allow successful donation. Accordingly, over 30\% of DCD attempts are unsuccessful, and some centers in Canada refuse to implement DCD programs due to their low success rate \cite{iscwebsite}. Therefore, there is an urgent need for a precise clinical real-time decision support system for donor evaluation as well as organ suitability. Additionally, the major advantage of machine learning techniques is their ability to adapt to new patient data as it accumulates. The rest of this paper is organized as follows: a short discussion of related works and their limitations is provided in Section \ref{section:relatedwork}. The architecture of Survival Seq2Seq is described in Section \ref{section:seq2seq}. 
Section \ref{section:cohort} presents the datasets used for training the model, while the experimental results are provided in Section \ref{section:experiments}. Finally, Section \ref{section:conclusion} concludes the paper. \section{Related Work} \label{section:relatedwork} Classical parametric models rely on strong assumptions about the time-to-event distribution. Such strong assumptions allow these models to estimate the underlying stochastic process based on the observed relationship between covariates and time-to-event. However, the predicted probability distribution of these models is over-simplified and often unrealistic. CPH is an example of a statistical model that simplifies the underlying distribution by assuming that the hazard ratio between any two individuals remains constant over time. The model estimates the hazard function $\lambda(t|\textbf{x})$, the instantaneous rate at which an individual experiences the event at time $t$, given all $\textbf{x}$ features (covariates). Although several works such as \cite{vinzamuri2014active,vinzamuri2013cox,li2016multi} have tried to address the shortcomings of the CPH model to some degree, the over-simplification of the underlying stochastic process limits the flexibility, generalizability, and prediction power of the CPH model. Besides, the CPH model cannot process longitudinal measurements. Deep Survival Machines (DSM) \cite{dsm2020} is an example of a machine learning-based parametric model that assumes a combination of multiple Weibull and Log-Normal primitive distributions for the survival function. A deep MLP model is trained to estimate the parameters of those distributions and a scaling factor that determines the weight of each distribution in the overall estimated hazard PDF. Training DSM is rather difficult, as the optimizer easily diverges when minimizing its loss function and the model tends to overfit. 
Despite delivering a high Concordance Index (CI) score, the model performs poorly when estimating the hitting time of events. The CI score is a metric used for evaluating the ranking performance of survival models. The model is also unable to process longitudinal measurements. The random survival forests (RSFs) method \cite{ishwaran2008random} is an extension of random forests that supports the analysis of right-censored data. The training procedure of RSFs is similar to that of other random forests. However, the branching rule is modified to account for right-censored data by measuring the difference in survival times between the samples on either side of the split. RSFs have become popular as a non-parametric alternative to CPH due to their less restrictive model assumptions. However, similar to CPH, RSFs cannot be trained on longitudinal datasets. In \cite{katzman2018deepsurv}, a non-parametric variation of CPH called DeepSurv is proposed as a treatment recommendation system. DeepSurv uses an MLP network to characterize the effects of a patient's covariates on their hazard rate. This removes the parametric assumptions on the hazard function made in the original CPH model and gives DeepSurv more flexibility than CPH. As a result, DeepSurv outperforms other CPH-based models and can learn complex relationships between an individual's covariates and the effect of a treatment. DeepCox \cite{nagpal2021deep} proposes Deep Cox Mixtures (DCMs) for survival analysis, which generalizes the proportional hazards assumption via a mixture model, by assuming that there are latent groups within each of which the proportional hazards assumption holds. DCM allows the hazard ratio in each latent group, as well as the latent group membership, to be flexibly modeled by a deep neural network. However, both DeepSurv and DeepCox still suffer from the same strong proportional hazards assumption as the original CPH formulation. 
Also, neither of those two models supports longitudinal measurements. As an alternative to assuming a specific form for the underlying stochastic process, \cite{lee2018deephit} proposes a non-parametric deep model called DeepHit to model the survival functions of competing risk events. Since no assumptions are made on the survival distribution, the relationship between covariates and the event(s) can change over time. This is considered an advantage of DeepHit over CPH-based models. The first part of the DeepHit model, i.e., the encoder, is made of a joint MLP block. The decoder of the model is made of MLP blocks, each specific to one event. The output of each cause-specific block is a discrete hazard PDF for that event. The last layer of each cause-specific block contains $N$ units, where each unit generates the likelihood for one timestep (e.g., one hour or one month) of the hazard PDF over the prediction horizon. A major drawback of this method for generating probability distributions is that the predictions of the output layer can vary arbitrarily from one unit to the next. This causes the generated PDFs to be extremely noisy. Moreover, depending on the number of timesteps in the prediction horizon, the number of trainable parameters in the output layer can become very high and may cause overfitting when training the model. Dynamic DeepHit (DDH) \cite{lee2019dynamic} is an extension of DeepHit, capable of processing longitudinal measurements. In this model, the encoder is replaced with RNN layers followed by an attention mechanism. The RNN block can learn the underlying relationship between longitudinal measurements and provide finer predictions compared to the MLP block in DeepHit. The cause-specific blocks of DeepHit and DDH are the same, which means that DDH also suffers from noisy PDFs and overfitting. 
Recently, the application of the transformer architecture to survival analysis has been studied in works such as \cite{wang2021survtrace} and \cite{pmlr-v146-hu21a}. However, none of the models developed in the mentioned works can process longitudinal measurements. \section{Methods: Survival Seq2Seq} \label{section:seq2seq} A successful non-parametric survival model must accept longitudinal EHRs as input and must output hazard PDFs for competing risks, while using survival times as the ground truth for training. We realized that the sequence-to-sequence (Seq2Seq) architecture has a strong potential for performing such a survival analysis. This architecture is commonly used for natural language processing (NLP) tasks such as translating one language into another. A typical Seq2Seq model is made up of an RNN-based encoder network and another RNN-based network as the decoder. In the case of language translation, the encoder encodes a sentence of the source language and sends the encoded sequence to the decoder. The decoder then generates the sentence in the destination language word by word, based on the previously generated words in the destination sentence as well as the encoded sequence. Adding an attention mechanism to a Seq2Seq model improves its performance when translating longer sentences. The Seq2Seq architecture can be adapted to perform survival analysis. In a nutshell, the encoder of a Seq2Seq model can process longitudinal EHRs and send the encoded sequence to the decoder, while the decoder generates a discrete hazard PDF of an event based on the encoded sequence. Despite showing a strong potential, the original Seq2Seq architecture needs several modifications to be ready for survival analysis. The model needs a missing value handling mechanism to effectively impute the sheer number of missing values in longitudinal medical datasets. The decoder must be able to generate a PDF for each competing risk. 
In addition, a proper loss function is required to train such a non-parametric model. While accounting for censored data, such a loss function must be able to shape the generated PDFs based on the observed relationship between measurements and event times. Figure \ref{fig:seq2seq1} shows the overall structure of Survival Seq2Seq, a survival model based on the Seq2Seq architecture. The model follows the basic Seq2Seq architecture with an encoder for processing longitudinal measurements and a decoder for generating hazard PDFs for multiple events. The first RNN layer of the encoder is made up of GRU-D cells, which are highly effective in imputing missing values in medical datasets. The decoder network is composed of several RNN-based decoder blocks, where each block is responsible for generating the PDF of one event in the dataset. We have added an attention mechanism to Survival Seq2Seq to improve its overall performance. To train Survival Seq2Seq, we use a multi-term loss function composed of a log-likelihood loss \cite{MLT2006} plus an improved ranking loss term that enhances the ranking performance of the model. The design of Survival Seq2Seq is discussed in greater detail in the rest of this section. \begin{figure} \centering \includegraphics[scale=0.20]{seq2seq.png} \caption{The overall architecture of Survival Seq2Seq.} \label{fig:seq2seq1} \end{figure} \subsection{Encoder} The encoder network is responsible for processing the longitudinal measurements of patients and passing the encoded sequence to the decoder network. As mentioned earlier, longitudinal EHRs are very sparse. To handle the missing values of longitudinal measurements, we use GRU-D cells in the first layer of the encoder network. There is often a strong correlation between the missing pattern of covariates and the labels in medical datasets. 
A GRU-D cell learns the relationship between the labels and the missing pattern of covariates during a supervised learning process and utilizes the observed relationship to impute missing longitudinal measurements for continuous covariates. GRU-D imputes missing values of a covariate by applying a decay rate to the last measured value of that covariate. The influence of the last measurement is reduced proportionally over time if the covariate has not been measured again. If a covariate is not measured for a long period of time, GRU-D imputes the missing value by relying more on the mean value of that covariate than on its last measurement. The imputation process in GRU-D needs three input vectors: a vector containing the longitudinal measurements; a mask vector indicating whether a measurement is available or missing (one if the covariate is measured, zero if it is missing); and a delta vector representing the time difference between the current timestamp and the last timestamp at which the covariate was measured for that patient. Readers can refer to \cite{grud} for the implementation details of GRU-D cells. The encoder network of Survival Seq2Seq can be stacked with multiple RNN layers if desired. In that case, the stacked layers can be made up of other recurrent cells such as Gated Recurrent Unit (GRU) or Long Short-Term Memory (LSTM) cells. A stacked encoder allows the model to learn more complex temporal relationships in the data, provided that the training set is large enough to support a deeper network. If not, the encoder can also work with only one GRU-D layer. \subsection{Decoder} As depicted in Figure \ref{fig:seq2seq1}, the decoder network is made of $K$ decoder blocks, each specific to one event in the dataset. Censoring is not treated as an event of interest, so no decoder block is assigned to censored records. Decoder blocks can be made of stacked layers of vanilla RNN, LSTM or GRU cells. 
Similar to the encoder network, a stacked decoder block can be used to learn more complex relationships in the data. The generated hazard PDFs are discrete, meaning that the prediction horizon is divided into several timesteps (bins) and a decoder block predicts the value (likelihood) of each bin sequentially. To generate the likelihood for a given timestep, the decoder relies on the likelihood of the previous timestep as well as the encoded sequence. Therefore, the value of the PDF at any timestep depends on the previous timesteps and cannot change arbitrarily. As a result, the generated PDFs are smooth and ripple-free. The output of each decoder block is concatenated with the attention vector and then passed through a time-distributed dense layer with a \textit{relu} activation function. The attention mechanism improves the performance of the model in dealing with long encoded sequences, i.e., data samples with many measurements. The output tensor of all decoder blocks is reshaped to a one-dimensional tensor, to which a \textit{softmax} activation is applied. The \textit{softmax} activation guarantees that the total probability mass over all events and timesteps, i.e., the joint cumulative distribution function (CDF) of all events at the end of the prediction horizon, is always equal to one. The output of the \textit{softmax} layer is then reshaped back to its original dimensions. This provides the estimated hazard PDFs for all events as follows: \begin{equation} \label{eq:seq2seq_output} PDF = \begin{pmatrix} p_{1,1} & p_{1,2} & \cdots & p_{1,T_h} \\ p_{2,1} & p_{2,2} & \cdots & p_{2,T_h} \\ \vdots & \vdots & \ddots & \vdots \\ p_{K,1} & p_{K,2} & \cdots & p_{K,T_h} \end{pmatrix}, \end{equation} where each row represents the predicted hazard PDF for one event, and $T_h$ is the number of timesteps (the length of the decoder) for the estimated PDFs. 
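As an illustration, the reshape/softmax/reshape output stage described above can be sketched in NumPy as follows. This is a minimal sketch, not the actual implementation: the array shapes, the function name and the example values are illustrative.

```python
import numpy as np

def normalize_decoder_outputs(scores):
    """Flatten the per-event decoder outputs of shape (K, T_h), apply a
    joint softmax over all events and timesteps, and reshape back, so
    that the total probability mass sums to one."""
    k, t_h = scores.shape
    flat = scores.reshape(-1)
    flat = flat - flat.max()                  # for numerical stability
    probs = np.exp(flat) / np.exp(flat).sum()
    return probs.reshape(k, t_h)

# two competing events, five timesteps of raw (post-relu) decoder scores
raw = np.array([[0.5, 1.0, 2.0, 1.0, 0.2],
                [0.1, 0.4, 0.9, 1.5, 0.8]])
pdf = normalize_decoder_outputs(raw)
# pdf.sum() is (numerically) one: the joint CDF of all events equals one
```

Because the softmax normalizes over the flattened tensor rather than per event, each row of `pdf` is a sub-distribution whose masses jointly sum to one, which matches the constraint stated above.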
Using Equation \ref{eq:seq2seq_output}, the estimated hazard CDF for event $k^*$ at timestep $\tau$ for a set of covariates $x$ is given by \begin{equation} \label{eq:seq2seq_cdf} CDF_{k^*}(\tau|x) = \sum_{i=1}^{\tau}p_{k^*, i}. \end{equation} Depending on the use case, either the PDF formulated in Equation \ref{eq:seq2seq_output} or the CDF in Equation \ref{eq:seq2seq_cdf} can be considered as the output of the model. Also, if the hitting time of an event is desired, the expected value of the PDF can be taken as the predicted time of the event. \subsection{Loss function} The loss function used for training Survival Seq2Seq is $L=L_l+L_r$, in which $L_l$ and $L_r$ are the log-likelihood \cite{MLT2006} and ranking terms, respectively. The log-likelihood term is defined as follows: \begin{equation} \label{eq:log_likelihood} L_l = -\sum_{j \in U_{uc}}\log(p_{k_{t},\tau_t}^j) - \sum_{j \in U_c}\log\Big(1-\sum_{k=1}^{K}CDF_{k}(\tau_t|x_j)\Big), \end{equation} in which $U_{uc}$ and $U_c$ are the sets of uncensored and censored patients, respectively. The index $k_t$ is the ground truth for the first hitting event, while $\tau_t$ is the time of the event or censoring. The log-likelihood loss is the main loss term for training Survival Seq2Seq. The ground truth for the probability distribution of events is unknown to us; the log-likelihood loss allows a non-parametric model such as Survival Seq2Seq to be trained to predict probability distributions while using only the first hitting event time as the ground truth. The first term of this loss trains the model on the first hitting event and its corresponding time for uncensored patients, while the second term trains the model on censored data. The first term of $L_l$ is designed to maximize the estimated hazard PDF for event $k_t$ at $\tau_t$, while minimizing it at other timesteps. $L_r$ is a ranking loss that improves the ranking capability of the model. 
The idea behind the ranking loss is that if an event happens to a given patient at a given time, the estimated CDF for that patient at the time of the event must be higher than that of patients who have not experienced the event yet. Therefore, this loss term increases the overall CI score of the model. According to \cite{Jing2019}, the ranking loss can be defined as: \begin{equation} \label{eq:ranking1} L_r = -\frac{1}{|U_a|}\sum_{(i,j)\in U_a}\Phi(CDF_{k}(\tau_{t_i}|x_{i})-CDF_{k}(\tau_{t_i}|x_{j})), \end{equation} where $\tau_{t_i}$ is the time of the event for patient $i$ and $\Phi(\cdot)$ is a differentiable convex function, for which we use the exponential function $\Phi(z)=\exp(z)$. The set $U_a$ contains the acceptable patient pairs. In an acceptable pair, the first element must be uncensored, and the event for the first element must occur before the event or censoring time of the second element. We modify the ranking loss in Equation \ref{eq:ranking1} and propose the following: \begin{equation} \label{eq:ranking2} L_r=-\frac{1}{|U_a|}\sum_{t}\sum_{(i,j)\in U_a}\Phi(CDF_{k}(\tau_t|x_{i})-CDF_{k}(\tau_t|x_{j})). \end{equation} The difference between Equation \ref{eq:ranking2} and Equation \ref{eq:ranking1} is that the updated ranking loss compares the two elements of a given pair at every timestep over the prediction horizon, while the loss in Equation \ref{eq:ranking1} only compares the pair at the time of the event. Our experimental results, provided in the upcoming sections, suggest that the updated ranking loss improves the ranking capability of the model, especially for events with longer hitting times. A longer hitting time means more timesteps at which the modified ranking loss evaluates an acceptable pair, which translates to a better ranking performance on events with longer hitting times. 
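Under the definitions above, the combined loss $L = L_l + L_r$ can be sketched on a batch as follows. This is a simplified NumPy sketch under the assumption that the batch of discrete PDFs is already normalized; the function name, the pair-selection loop and the numerical-stability constant are illustrative, not the training implementation.

```python
import numpy as np

def survival_losses(pdf, event, time, censored):
    """Sketch of L_l (log-likelihood) and the modified ranking term L_r.
    pdf: array of shape (N, K, T_h), one discrete PDF per sample/event;
    event: ground-truth event index per sample; time: event/censoring
    timestep per sample; censored: boolean mask."""
    n, k, t_h = pdf.shape
    cdf = np.cumsum(pdf, axis=2)                  # CDF_k(tau | x)

    # log-likelihood term: first hitting event for uncensored samples,
    # remaining probability mass for censored samples
    ll = 0.0
    for j in range(n):
        if not censored[j]:
            ll -= np.log(pdf[j, event[j], time[j]] + 1e-12)
        else:
            ll -= np.log(1.0 - cdf[j, :, time[j]].sum() + 1e-12)

    # modified ranking term: compare acceptable pairs at *every* timestep
    lr, pairs = 0.0, 0
    for i in range(n):
        if censored[i]:                            # first element uncensored
            continue
        for j in range(n):
            if i != j and time[i] < time[j]:       # acceptable pair (i, j)
                diff = cdf[i, event[i], :] - cdf[j, event[i], :]
                lr -= np.exp(diff).sum()           # Phi(z) = exp(z), all t
                pairs += 1
    if pairs:
        lr /= pairs
    return ll, lr
```

Minimizing `lr` pushes the CDF of the patient who experiences the event first above that of the other pair member at every timestep, which is exactly the change from Equation \ref{eq:ranking1} to Equation \ref{eq:ranking2}.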
\section{Cohort}\label{section:cohort} The datasets used for training the model are described in the following. \textbf{SYNTHETIC:} To evaluate the performance of Survival Seq2Seq, we created a synthetic dataset based on a statistical process. Here, we consider $\boldsymbol{x}=(x^{1},...,x^{K})$ as a tuple of $K$ independent random variables, each following a Weibull distribution $$ f(x;\gamma,\mu, \alpha) = \frac{\gamma} {\alpha} \Big(\frac{x-\mu} {\alpha}\Big)^{\gamma - 1}\exp{\big(-((x-\mu)/\alpha)^{\gamma}\big)}, \hspace{.3in} x \ge \mu; \ \gamma, \alpha > 0, $$ where $\gamma$ is the shape parameter, $\mu$ is the location parameter and $\alpha$ is the scale parameter. We model the distribution of two event times, $T^{(1)}_i$ and $T^{(2)}_i$, for each data sample $i$ as a nonlinear combination of these $K$ random variables defined by \begin{equation} T^{(1)}_i = f(\boldsymbol{\alpha}^T{(\boldsymbol{x}_{i}^{k_1})}^2+\boldsymbol{\beta}^T(\boldsymbol{x}_{i}^{k_2})), \end{equation} \begin{equation} T^{(2)}_i = f(\boldsymbol{\alpha}^T{(\boldsymbol{x}_{i}^{k_2})}^2+\boldsymbol{\beta}^T(\boldsymbol{x}_{i}^{k_1})), \ \ \ \ \ k_1,k_2 \subset \{1,..., K\}, \end{equation} where $k_1$ and $k_2$ are two randomly-selected subsets of the $K$ covariates that satisfy $k_1 \cap k_2 = \varnothing$ and $k_1 \cup k_2 = \{1,...,K\}$. By selecting the Weibull distribution in our synthetic data generator, the event times will be exponentially distributed with an average that depends on a linear (with parameter set $\boldsymbol{\beta}$) and a quadratic (with parameter set $\boldsymbol{\alpha}$) combination of the random variables. This results in long-tailed distributions for the event times, similar to medical datasets, as shown in Figure \ref{fig:histogram_events}. The figure shows that the number of samples in the histogram is not monotonically decreasing over time. 
Instead, the number of samples for each event peaks at some time and then decreases with a long tail. We chose this deliberately to evaluate the performance of the model on a dataset with a complex time-to-event distribution. For each data sample, we have $(\boldsymbol{x}_i, \delta_i, T_i)$, where $T_i=\min\{T^{(1)}_i,T^{(2)}_i\}$ is the event time and $\delta_i$ identifies the type of the event happening at $T_i$. In summary, we considered $K=20$ and generated 20000 data samples from the defined stochastic process with a 20\% censoring rate, a not-missing-at-random rate of 77\% and a maximum event time of 200. To simulate longitudinal measurements, we assume that the covariates of each sample are measured a random number of times, while the measured values at each timestamp increase or decrease nonlinearly with a random rate specific to each sample-covariate pair. \begin{figure}[!tb] \centering \includegraphics[scale=0.50]{histogram_events.png} \caption{The histogram of survival times in the synthetic dataset. The histogram shows that the generated dataset is long-tailed, similar to medical datasets.} \label{fig:histogram_events} \end{figure} \textbf{MIMIC-IV:} The MIMIC-IV dataset \cite{johnson2020mimic} contains hospital clinical records for patients admitted to a tertiary academic medical center in Boston, MA, USA, between 2008 and 2019. This database contains demographics data, laboratory measurements, administered medications, vital signs and diagnoses of more than 71000 patients with an in-hospital mortality rate of about $12\%$. We considered a 33-day prediction horizon with a 4-hour time resolution for the decoder of Survival Seq2Seq. Therefore, patients with an event time longer than 33 days were considered censored. We also selected 108 covariates from this dataset based on the feedback of our medical team as well as by applying feature selection methods to the dataset. 
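For concreteness, the core of the SYNTHETIC generation process described above (Weibull covariates, two disjoint covariate subsets, a quadratic and a linear weight set, and the first hitting event) can be sketched as follows. This is a minimal sketch: the Weibull parameters, the weight ranges and the random seed are illustrative choices, not the exact values used to build the dataset, and the censoring and longitudinal-measurement steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def weibull_pdf(x, gamma=1.5, mu=0.0, alpha=1.0):
    """The Weibull density f(x; gamma, mu, alpha) from the text."""
    z = (x - mu) / alpha
    return (gamma / alpha) * z ** (gamma - 1) * np.exp(-z ** gamma)

def generate_sample(K=20):
    """Draw one (covariates, event type, event time) triple."""
    x = rng.weibull(1.5, size=K)              # K Weibull covariates
    idx = rng.permutation(K)
    k1, k2 = idx[:K // 2], idx[K // 2:]       # disjoint covariate subsets
    a = rng.uniform(0.1, 1.0, size=K // 2)    # quadratic weights (alpha)
    b = rng.uniform(0.1, 1.0, size=K // 2)    # linear weights (beta)
    t1 = weibull_pdf(a @ x[k1] ** 2 + b @ x[k2])
    t2 = weibull_pdf(a @ x[k2] ** 2 + b @ x[k1])
    delta = int(t2 < t1)                      # 0: event 1 first, 1: event 2
    return x, delta, min(t1, t2)

x, delta, t = generate_sample()
```

Repeating `generate_sample` and then applying censoring and per-covariate measurement schedules yields a dataset with the long-tailed event-time histogram discussed above.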
\section{Results}\label{section:experiments} The performance of Survival Seq2Seq was evaluated by training the model on the SYNTHETIC and MIMIC-IV datasets. The results were compared to DDH for benchmarking, as DDH was the only survival model known to us that is capable of processing longitudinal measurements for survival analysis. \subsection{Evaluation Approach/Study Design} \raggedright Mean absolute error (MAE) is the main metric we use for evaluating the performance of the models. MAE is defined as the mean of the absolute difference between the predicted time of an event and the observed time of that event for uncensored data samples. We consider the expected value of a predicted PDF as the predicted event time. The other metric used in this paper is the time-dependent CI score \cite{antolini2005time}, defined as follows: $$ \mathbb{CI}(t)=P(\hat{F}(t|x_i) > \hat{F}(t|x_j)\,|\, \delta_i =1, T_i<T_j, T_i\leq t), $$ where $\hat{F}(t|x_i)$ is the estimated CDF of the event, truncated at time $t$, given a set of covariates $x_i$. This metric evaluates the performance of the models when predicting the order of events for data samples. The CI measure can evaluate the performance on both censored and uncensored samples in the test dataset. The time dependency of this metric allows us to evaluate the performance of the models in capturing possible changes in risk over time. The results are reported at the 25\%, 50\%, 75\% and 100\% quantiles of event time with 5-fold cross-validation. We set the length of the decoder of Survival Seq2Seq to be 25\% longer than the maximum event time of each dataset. This is necessary for dealing with samples censored at the maximum event time (200 for SYNTHETIC and 33 days for MIMIC-IV) or at a time close to it. This extra decoder length allows the model to shift a considerable portion of the PDF for those samples to later timesteps, so that the predicted time of the event for censored data falls after the censoring time. 
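The time-dependent CI defined above can be estimated empirically by counting concordant comparable pairs. The following is a minimal sketch under the assumption of a single event and integer timesteps; the function name and array layout are illustrative, not the evaluation code used for the experiments.

```python
import numpy as np

def time_dependent_ci(cdf, time, uncensored, t):
    """Empirical time-dependent concordance index truncated at timestep t.
    Among comparable pairs (i uncensored, T_i < T_j, T_i <= t), count the
    fraction where the model ranks i riskier, i.e. F(t|x_i) > F(t|x_j).
    cdf: array of shape (N, T_h), one estimated CDF per sample."""
    n = len(time)
    concordant, comparable = 0, 0
    for i in range(n):
        if not uncensored[i] or time[i] > t:   # delta_i = 1 and T_i <= t
            continue
        for j in range(n):
            if time[i] < time[j]:              # T_i < T_j
                comparable += 1
                if cdf[i, t] > cdf[j, t]:
                    concordant += 1
    return concordant / comparable if comparable else float("nan")
```

Note that sample $j$ may be censored: the pair is still comparable as long as $T_i$ precedes $j$'s event or censoring time, which is what lets the CI use both censored and uncensored test samples.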
\subsection{Evaluation Using the SYNTHETIC Dataset} Table \ref{tab:synthetic_mae} compares the performance of Survival Seq2Seq and DDH on the SYNTHETIC dataset in terms of MAE. The mean and variance of the results of the five folds are used for calculating the confidence interval of the predictions. It can be seen that Survival Seq2Seq significantly outperforms DDH for both events in all quantiles except the first. Figure \ref{fig:distribution} shows the difference between the ground truth and the predictions of Survival Seq2Seq and DDH for a few uncensored samples of the SYNTHETIC dataset. It can be observed that Survival Seq2Seq follows the distribution of the dataset by predicting values close to the ground truth, whether the survival time is close to zero or much longer. On the other hand, DDH performs very poorly in predicting longer event times. The prediction results provided in Table \ref{tab:synthetic_mae} as well as Figure \ref{fig:distribution} show that Survival Seq2Seq can effectively predict the event time on datasets with long-tailed distributions. As Figure \ref{fig:distribution} reveals, DDH has a tendency to predict shorter event times for all events, whether the ground truth is long or short. This translates to a lower MAE for shorter event times for DDH compared to Survival Seq2Seq. This situation is somewhat similar to an imbalanced binary classification in which a naively-trained classification model leans toward predicting the class with the higher share of training samples. The overall accuracy of such a classification model could be very high, although the model would perform very poorly on the minority class. By the same analogy, a better way to evaluate a model like DDH with skewed predictions is to look at its performance in the last quantile, where events with longer hitting times can be found. This is where Survival Seq2Seq outperforms DDH. 
The slightly higher MAE of Survival Seq2Seq compared to DDH in the first quantile can also be explained by the imputation mechanism of GRU-D cells. The number of measurements for data samples with very short event times is small. If the value of a covariate for a given data sample is missing at the first measurement timestamp, GRU-D has to rely solely on the mean of that covariate to impute the missing value. This is obviously not ideal and causes higher prediction errors for very early predictions. However, the imputation performance of GRU-D improves as more measurements are accumulated. \begin{table} \centering \begin{tabular}{cc|c|c|c|c} \cline{3-6} & & \multicolumn{4}{ c| }{Quantiles} \\ \cline{3-6} & & 25\% & 50\% & 75\% & 100\% \\ \cline{1-6} \multicolumn{1}{ |c }{\multirow{2}{*}{Survival Seq2Seq} } & \multicolumn{1}{ |c| }{event 1} & 11.85 $\pm$ 0.6 & 12.47 $\pm$ 1.2 & 14.01 $\pm$ 1.4 & 15.54 $\pm$ 1.8 \\ \cline{2-6} \multicolumn{1}{ |c }{} & \multicolumn{1}{ |c| }{event 2} & 12.26 $\pm$ 0.5 & 13.20 $\pm$ 0.9 & 15.55 $\pm$ 2.2 & 20.83 $\pm$ 3.8 \\ \cline{1-6} \multicolumn{1}{ |c }{\multirow{2}{*}{DDH} } & \multicolumn{1}{ |c| }{event 1} & 8.79 $\pm$ 0.7 & 20.36 $\pm$ 1.8 & 29.93 $\pm$ 1.8 & 35.03 $\pm$ 1.8 \\ \cline{2-6} \multicolumn{1}{ |c }{} & \multicolumn{1}{ |c| }{event 2} & 10.32 $\pm$ 0.3 & 18.45 $\pm$ 1.4 & 33.96 $\pm$ 1.5 & 52.22 $\pm$ 1.2 \\ \cline{1-6} \end{tabular} \caption{Comparison between the MAE of Survival Seq2Seq and DDH on the SYNTHETIC dataset. Results are reported with 95\% confidence intervals.} \label{tab:synthetic_mae} \end{table} \begin{figure} \includegraphics[width=\linewidth]{step_plot_uncensored_comp_syntetic_v2.png} \caption{The difference between the ground truth and the predictions of Survival Seq2Seq (top) and DDH (bottom) on a few uncensored samples from the SYNTHETIC dataset. The comparison shows that Survival Seq2Seq follows the distribution of the event times more accurately than DDH. 
The difference between the performance of the two models is more apparent for longer event times.} \label{fig:distribution} \end{figure} \raggedright Table \ref{tab:synthetic_ci} reports the time-dependent CI score of the two models on the SYNTHETIC dataset. Despite providing marginally lower CI scores for the two events in the first quantile, our model outperforms DDH in all other quantiles, with a significant 0.08 higher average CI score on the last quantile. The higher CI score of Survival Seq2Seq with respect to DDH can be attributed to the new ranking loss in Equation \ref{eq:ranking2}, which improves the ranking performance of the model. \begin{table} \centering \begin{tabular}{cc|c|c|c|c} \cline{3-6} & & \multicolumn{4}{c|}{Quantiles} \\ \cline{3-6} & & 25\% & 50\% & 75\% & 100\% \\ \cline{1-6} \multicolumn{1}{|c}{\multirow{2}{*}{Survival Seq2Seq}} & \multicolumn{1}{|c|}{event 1} & 0.835$\pm$0.04 & 0.774$\pm$0.06 & 0.827$\pm$0.02 & 0.844$\pm$0.02 \\ \cline{2-6} \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{event 2} & 0.850$\pm$0.03 & 0.804$\pm$0.04 & 0.808$\pm$0.03 & 0.835$\pm$0.03 \\ \cline{1-6} \multicolumn{1}{|c}{\multirow{2}{*}{DDH}} & \multicolumn{1}{|c|}{event 1} & 0.842$\pm$0.01 & 0.766$\pm$0.01 & 0.760$\pm$0.01 & 0.776$\pm$0.01 \\ \cline{2-6} \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{event 2} & 0.853$\pm$0.01 & 0.787$\pm$0.01 & 0.744$\pm$0.01 & 0.743$\pm$0.01 \\ \cline{1-6} \end{tabular} \caption{Comparison between the time-dependent CI scores of Survival Seq2Seq and DDH on the SYNTHETIC dataset. Results are reported with 95\% confidence intervals.} \label{tab:synthetic_ci} \end{table} \subsection{Evaluating Using the MIMIC-IV Dataset} \raggedright The performance of Survival Seq2Seq on MIMIC-IV is reported in Table \ref{tab:mimic_scores}, which shows the MAE and CI scores of our model and compares them to those of DDH.
One can observe a pattern similar to Table \ref{tab:synthetic_mae} when comparing the MAE of Survival Seq2Seq and its counterpart. The MAE of our model is marginally higher than that of DDH in the first quantile. However, Survival Seq2Seq beats DDH in all other quantiles, to the extent that the mean absolute error of our model is less than half the MAE of DDH on 100\% of the test data. Moreover, the performance of Survival Seq2Seq in terms of the CI score exceeds that of DDH in all quantiles. Again, the updated ranking loss in Equation \ref{eq:ranking2} is a factor contributing to these high CI scores. The quality of the generated PDFs is another important factor, besides prediction accuracy, when comparing Survival Seq2Seq to other non-parametric models. The superiority of our model is apparent from Figure \ref{fig:PDFs}, where the predicted PDFs of Survival Seq2Seq and DDH are compared for a random uncensored patient in MIMIC-IV. As described in earlier sections, the RNN-based decoder of our model generates smooth and ripple-free probability distributions. This is in contrast to DDH, whose MLP-based decoder produces PDFs with high fluctuations, as shown in Figure \ref{fig:PDFs}.
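For reference, the point prediction compared above is the expected value of the discrete PDF emitted by the decoder, with one probability mass per decoder time step. A minimal sketch, where the 4-hour step size is a hypothetical placeholder (the actual decoder resolution is a configuration choice):

```python
def predicted_event_time(pdf, step_hours=4.0):
    """Point prediction = expected value of the decoder's discrete PDF.

    pdf        -- probability mass per decoder time step
    step_hours -- duration of one decoder step (placeholder value)
    """
    total = sum(pdf)  # renormalize in case of numeric drift
    return sum(i * step_hours * p for i, p in enumerate(pdf)) / total
```

The same expectation is used for both models when computing MAE, so the comparison isolates the quality of the predicted distributions.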
\begin{table} \centering \begin{tabular}{cc|c|c|c|c} \cline{3-6} & & \multicolumn{4}{c|}{Quantiles} \\ \cline{3-6} & & 25\% & 50\% & 75\% & 100\% \\ \cline{1-6} \multicolumn{1}{|c}{\multirow{2}{*}{Survival Seq2Seq}} & \multicolumn{1}{|c|}{MAE} & 34.83$\pm$4.1 & 37.06$\pm$4.6 & 39.53$\pm$4.0 & 62.74$\pm$3.2 \\ \cline{2-6} \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{CI} & 0.876$\pm$0.02 & 0.882$\pm$0.02 & 0.885$\pm$0.02 & 0.906$\pm$0.02 \\ \cline{1-6} \multicolumn{1}{|c}{\multirow{2}{*}{DDH}} & \multicolumn{1}{|c|}{MAE} & 32.88$\pm$8.4 & 40.23$\pm$2.4 & 61.17$\pm$2.8 & 125.95$\pm$4.7 \\ \cline{2-6} \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{CI} & 0.863$\pm$0.03 & 0.849$\pm$0.02 & 0.845$\pm$0.03 & 0.528$\pm$0.04 \\ \cline{1-6} \end{tabular} \caption{Comparison between the results of Survival Seq2Seq and DDH on MIMIC-IV. Results are reported with 95\% confidence intervals.} \label{tab:mimic_scores} \end{table} \begin{figure} \includegraphics[scale=0.50]{surivalPDFs.png} \caption{(a) The PDF generated by Survival Seq2Seq for a random uncensored patient of MIMIC-IV and (b) the PDF generated by DDH for the same patient. The expected value of each PDF is taken as the predicted time of death. The comparison shows that the quality of the predicted probability distribution of Survival Seq2Seq is superior to that of DDH.} \label{fig:PDFs} \end{figure} \section{Discussion} \raggedright \label{section:conclusion} Each of the MAE and CI metrics used in this paper evaluates a different use case of a survival model. MAE is useful when predicting the time of events (calibration) is desired, while CI is a better choice when the model is applied to ranking problems. However, we cannot train a model that provides optimal MAE and CI at the same time. The log-likelihood loss has a higher influence on the calibration capability of the model than the ranking loss.
Therefore, to minimize MAE, one must assign a higher weight to the log-likelihood loss than to the ranking loss. Conversely, the ranking loss must be assigned a higher weight if the ranking capability of the model and a higher CI score are desired. Since we believe that calibration is a more important task than ranking, we assigned a higher weight to the log-likelihood loss to minimize MAE. We did the same for DDH to make the comparison between the two models fair. Assigning a higher weight to the ranking loss instead maximizes the CI score. For example, we achieved a CI score of 0.932$\pm$0.01 on the last quantile of MIMIC-IV with Survival Seq2Seq when a higher weight was assigned to the ranking loss. This is almost 3\% higher than the CI score reported in Table \ref{tab:mimic_scores}, where calibration was the objective. Survival Seq2Seq can accurately follow the distribution of the event time on long-tailed datasets, while DDH and many other survival or regression models cannot. This is an interesting feature of our model, which we mainly consider a side contribution of the RNN-based decoder network. We were initially not sure whether this feature was a result of using GRU-D cells in the encoder, the modified ranking loss, or the use of an RNN-based decoder. We therefore created a second version of Survival Seq2Seq with the same RNN-based decoder as the original model, but without the GRU-D layer in the encoder and without the modified ranking loss, using a simple masking technique to deal with missing measurements. This version of Survival Seq2Seq was still able to follow the distribution of the time-to-event on long-tailed datasets, although its accuracy dropped. We therefore concluded that the RNN-based decoder was the main contributor to the model's good performance on long-tailed datasets.
However, the exact underlying mechanism of this phenomenon is unknown to us and requires further investigation. Based on the reported MAE and CI scores, we showed that Survival Seq2Seq has a high prediction accuracy. However, accuracy metrics are not the only factor to take into account when evaluating a survival model. We believe that the quality of the predicted PDFs is as important as the accuracy metrics. As the PDF sample provided in this paper shows, the PDFs generated by Survival Seq2Seq are exceptionally smooth and ripple-free, an outstanding feature for a non-parametric survival model. \subsection*{Limitations} The GRU-D layer that forms the first layer of the encoder relies on previously measured values and the means of covariates to impute missing data. If a covariate of a data sample (a patient) is missing and has never been measured before, GRU-D has no choice but to impute the missing value with the mean of that covariate. This is not ideal, and we believe it contributes to the lower performance of Survival Seq2Seq in the first quantile of the data, where there are not enough measurements for GRU-D to properly impute missing variables. We acknowledge this limitation of our model on early predictions and encourage readers to think of ways to mitigate this phenomenon. There is also a limit on the length of the encoder and decoder of Survival Seq2Seq. Both networks are RNN-based and consequently cannot pass information through time once their length exceeds a certain limit, although using the attention mechanism alleviates this problem to some extent. We capped the length of the encoder at 60 time steps for MIMIC-IV. During data pre-processing, if two covariates of a data sample are measured at relatively close timestamps, we use the average of those timestamps as the unique timestamp for both measurements.
This reduces the number of longitudinal measurements for a given row of data (a patient) in the dataset. If the number of measurements still exceeds the maximum number of time steps, we keep 60 randomly selected measurements from that row of data and discard the rest. Although we were able to use a higher maximum length, we did not notice a meaningful accuracy improvement with an encoder longer than 60 steps on MIMIC-IV. The length of the decoder is similarly limited. However, we do not believe that this limit causes an issue in practical use cases. For example, in our experiments we successfully trained Survival Seq2Seq on MIMIC-IV with a 2-hour time resolution, for which the decoder was twice as long as the decoder used for the results reported in this paper. This experiment shows that the length of the RNN-based networks does not limit the applicability of Survival Seq2Seq in practice. However, one must be careful not to cause vanishing gradients by using very long encoder and decoder networks. \newpage \bibliographystyle{plainnat}
\section{Experimental Details of VT-Work} \input{tables/counters} In this section we present further statistics on the number of operations that tree clocks and vector clocks perform on our dataset. For each benchmark we have computed \begin{compactenum} \item The vt-work needed for that benchmark, i.e., the total number of vector-time entries that were updated during the execution of the benchmark under the $\operatorname{\acr{HB}}$ and $\operatorname{\acr{SHB}}$ partial orders. This corresponds to the dark-gray areas of our figures (e.g., \cref{fig:ctjoin}). \item The tree-clock work done for that benchmark, i.e., the total number of tree-clock nodes accessed by the tree-clock join and copy operations. This corresponds to the light-gray areas of our figures (e.g., \cref{fig:ctjoin}). \item The vector-clock work done for that benchmark, i.e., the total number of vector-clock entries accessed by the vector-clock join and copy operations. \end{compactenum} \cref{tab:counters} shows the obtained results. \section{Tree Clocks in More Race Prediction}\label{sec:app_race_predictors} In this section we show how tree clocks can replace vector clocks for computing the $\operatorname{\acr{DC}}$~\cite{Roemer18} and $\operatorname{\acr{SDP}}$~\cite{Genc19} partial orders. \input{dc} \input{sdp} \section{Artifact Appendix} \subsection{Abstract} This artifact contains all the source code and experimental data for replicating our evaluation in Section~$6$. We implemented the analysis programs as part of the tool \textsc{Rapid}~\cite{rapid}. The provided experimental data contains all the $154$ trace logs used in our evaluation. Our artifact also provides Python scripts that fully automate the process of replicating our evaluation.
\subsection{Artifact check-list (meta-information)} \label{sec:check-list} {\small \begin{itemize} \item {\bf Algorithm: } Tree Clock \item {\bf Data set: } Trace logs obtained from the benchmarks described in Section~$6$. \item {\bf Metrics: } Execution time. \item {\bf Output: } CSV files and graphs (optional). \item {\bf How much disk space required (approximately)?: } 150 GB. \item {\bf How much time is needed to prepare workflow (approximately)?:} We provide all the scripts that automate our workflow. \item {\bf How much time is needed to complete experiments (approximately)?: } Replicating all the results: 15 days (without parallelization). Replicating a small set of results: 1 day (without parallelization). We also provide instructions for parallelizing the computation (see Section~\ref{sec:replicating-evaluation}). \item {\bf Publicly available?: } Yes~\cite{zenodo}. \item {\bf Code licenses (if publicly available)?: } MIT License. \item {\bf Data licenses (if publicly available)?: } None. \item {\bf Archived (provide DOI)?: } \href{https://doi.org/10.5281/zenodo.5749092}{doi.org/10.5281/zenodo.5749092} \end{itemize} } \subsection{Description} \subsubsection{How to access}\label{sec:how-to-access} Obtain the artifact from~\cite{zenodo}. The total size is expected to be approximately 50 MB. \subsubsection{Hardware dependencies} Replicating the results of large benchmarks requires up to 30 GB RAM. Otherwise, there are no special hardware requirements for using our artifact. \subsubsection{Software dependencies}\label{sec::software-dep} Java 11, Ant 1.10 or higher, Python 3.7 or higher, including the packages pandas and matplotlib. \subsubsection{Data sets}\label{sec:data-set} The trace logs are available for download at~\cite{tracelogs}.
\subsection{Installation} Obtain the artifact (see Section~\ref{sec:how-to-access}), extract the archive files and set the \texttt{\$AE\_HOME} environment variable: \texttt{> export AE\_HOME=/path/to/AE} Next, install \textsc{Rapid}: \texttt{> cd \$AE\_HOME/rapid/}\\ \texttt{> ant jar} Then, download the benchmark traces (see Section~\ref{sec:data-set}) into the folder \texttt{\$AE\_HOME/benchmarks/}. \subsection{Experiment workflow} In Figure~\ref{fig:dir-structure} we display the directory structure of our artifact. The directory \texttt{rapid} contains the \textsc{Rapid} tool which includes our implementation of the tree clock and vector clock data structures and the analyses programs based on HB, SHB and MAZ partial orders. The directory \texttt{benchmarks} is designated for the trace logs. The directory \texttt{scripts} contains a collection of helper scripts that automate our workflow. In particular, the script \texttt{\$AE\_HOME/scripts/run.py} can be utilized to automate the process of replicating the results of Section~\ref{sec:experiments}. In Section~\ref{sec:replicating-evaluation} we describe how the script can be used to replicate all or part of our experimental evaluation. In addition, Section~\ref{sec:experiment-custom} contains instructions on how the script can be used to evaluate a new trace log that is not part of the original benchmark set. The \texttt{README.md} file provides more comprehensive information on certain aspects of our artifact. 
\begin{figure} \centering \begin{verbatim}
AE_HOME/
|--- rapid/
|--- benchmarks/
|--- scripts/
|--- results/
|--- LICENSE.txt
|--- README.md
\end{verbatim} \caption{Directory structure of the artifact} \label{fig:dir-structure} \end{figure} \subsection{Evaluation and expected results}\label{sec:replicating-evaluation} Executing the following command will run all the analyses on all the trace logs: \texttt{> python \$AE\_HOME/scripts/run.py -b all} The outputs of the executions will be extracted as CSV files under the folder \texttt{\$AE\_HOME/results/}. Note that this command expects to locate all the benchmarks used in our evaluation (see Section~\ref{sec:data-set}) under the folder \texttt{\$AE\_HOME/benchmarks/}. The main goal of this evaluation is to measure the performance benefits of tree clocks over vector clocks for keeping track of logical times in concurrent programs. We expect the overall speedup to remain similar to the speedups reported in Table~\ref{tab:speedups} for each category. After the CSV output files have been generated, the script \texttt{\$AE\_HOME/scripts/compute\_averages.py} may be utilized to compute the average speedup for each category and replicate Table~\ref{tab:speedups}: \texttt{> python \$AE\_HOME/scripts/compute\_averages.py\\ \$AE\_HOME/results/} This script expects the path to the results folder as an argument and outputs a file named \texttt{table2.csv} under the same folder that replicates Table~\ref{tab:speedups}. Similarly, the script \texttt{\$AE\_HOME/scripts/plot.py} can be utilized to visualize the obtained outputs and replicate Figure~\ref{fig:time_comparison}: \texttt{> python \$AE\_HOME/scripts/plot.py \$AE\_HOME/results/} This script also expects the path to the results folder as an argument and outputs the plot files under the folder \texttt{\$AE\_HOME/results/\allowbreak plots}, replicating Figure~\ref{fig:time_comparison}.
We remark that, as also indicated in Section~\ref{sec:check-list}, replicating the evaluation can take a very long time if executed serially. We refer interested readers to the file \texttt{\$AE\_HOME/README.md}, where we describe a procedure that may be utilized to parallelize the evaluation. Furthermore, the script \texttt{\$AE\_HOME/scripts/run.py} also provides an option to replicate only parts of our experimental evaluation. The following command runs the analyses on a small set of benchmarks which require moderate system resources and reduced computation time (see Section~\ref{sec:check-list}): \texttt{> python \$AE\_HOME/scripts/run.py -b small} We refer readers to the \texttt{\$AE\_HOME/README.md} file for more detailed information on customizing the experiments. \subsection{Experiment customization}\label{sec:experiment-custom} Users may utilize the script \texttt{\$AE\_HOME/scripts/run.py} to evaluate a new trace log that is not part of our original benchmark set. This can be achieved with the following command: {\texttt{> python \$AE\_HOME/scripts/run.py -p path/to/trace -n output-folder-name} } The above command will run all the analyses on the input trace located at \texttt{path/to/trace} and extract the output CSV files into \texttt{\$AE\_HOME/results/output-folder-name}. Note that the given input trace must be in one of the formats admitted by the \textsc{Rapid} tool. Readers may refer to the \texttt{\$AE\_HOME/rapid/README.md} file for information regarding the formats admitted by \textsc{Rapid}. \subsection{Notes} We note that the reported execution times correspond to the time taken to perform the respective analyses and do not include the time taken to process the input files. Hence, the actual execution times are expected to be longer than the reported times.
\iffalse \subsection{Methodology} Submission, reviewing and badging methodology: \begin{itemize} \item \url{https://www.acm.org/publications/policies/artifact-review-badging} \item \url{http://cTuning.org/ae/submission-20201122.html} \item \url{http://cTuning.org/ae/reviewing-20201122.html} \end{itemize} \fi \section{Benchmarks}\label{appsec:benchmarks} \renewcommand{\captionsize}{\small} \bottomcaption{ Information on Benchmark Traces. We denote by $\mathcal{N}$, $\mathcal{T}$, $\mathcal{M}$ and $\mathcal{L}$ the total number of events, number of threads, number of memory locations and number of locks, respectively. } \label{tab:trace-info} \tablefirsthead{\toprule \textbf{Benchmark} & $\mathcal{N}$ & $\mathcal{T}$ & $\mathcal{M}$ & $\mathcal{L}$ \\ \midrule} \tablehead{ \toprule \textbf{Benchmark} & $\mathcal{N}$ & $\mathcal{T}$ & $\mathcal{M}$ & $\mathcal{L}$ \\ \midrule} \tabletail{\hline} \tablelasttail{ \hline} \scriptsize \renewcommand{\arraystretch}{0.85} \begin{xtabular}[h*]{rcccc} CoMD-omp-task-1 & 175.1M & 56 & 66.1K & 114\\ CoMD-omp-task-2 & 175.1M & 56 & 66.1K & 114\\ CoMD-omp-task-deps-1 & 174.1M & 16 & 63.0K & 34\\ CoMD-omp-task-deps-2 & 175.1M & 56 & 66.1K & 114\\ CoMD-omp-taskloop-1 & 251.5M & 16 & 4.0M & 35\\ CoMD-omp-taskloop-2 & 251.5M & 56 & 4.0M & 115\\ CoMD-openmp-1 & 174.1M & 16 & 63.0K & 34\\ CoMD-openmp-2 & 175.1M & 56 & 66.1K & 114\\ DRACC-OMP-009-Counter-wrong-critical-yes & 135.0M & 16 & 971 & 36\\ DRACC-OMP-010-Counter-wrong-critical-Intra-yes & 135.0M & 16 & 971 & 36\\ DRACC-OMP-011-Counter-wrong-critical-Inter-yes & 135.0M & 16 & 853 & 21\\ DRACC-OMP-012-Counter-wrong-critical-simd-yes & 104.9M & 16 & 1.5K & 36\\ DRACC-OMP-013-Counter-wrong-critical-simd-Intra-yes & 104.9M & 16 & 1.5K & 36\\ DRACC-OMP-014-Counter-wrong-critical-simd-Inter-yes & 104.9M & 16 & 1.4K & 21\\ DRACC-OMP-015-Counter-wrong-lock-yes & 135.0M & 16 & 971 & 36\\ DRACC-OMP-016-Counter-wrong-lock-Intra-yes & 135.0M & 16 & 971 & 36\\ 
DRACC-OMP-017-Counter-wrong-lock-Inter-yes-1 & 135.0M & 16 & 853 & 21\\ DRACC-OMP-017-Counter-wrong-lock-Inter-yes-2 & 27.0M & 16 & 853 & 21\\ DRACC-OMP-018-Counter-wrong-lock-simd-yes & 104.9M & 16 & 1.5K & 36\\ DRACC-OMP-019-Counter-wrong-lock-simd-Intra-yes & 104.9M & 16 & 1.5K & 36\\ DRACC-OMP-020-Counter-wrong-lock-simd-Inter-yes & 104.9M & 16 & 1.4K & 21\\ DataRaceBench-DRB062-matrixvector2-orig-no-1 & 183.9M & 16 & 7.0K & 33\\ DataRaceBench-DRB062-matrixvector2-orig-no-2 & 193.2M & 56 & 9.0K & 113\\ DataRaceBench-DRB105-taskwait-orig-no-1 & 134.0M & 16 & 3.2K & 33\\ DataRaceBench-DRB105-taskwait-orig-no-2 & 134.0M & 56 & 9.7K & 113\\ DataRaceBench-DRB106-taskwaitmissing-orig-yes-1 & 134.0M & 16 & 3.6K & 33\\ DataRaceBench-DRB106-taskwaitmissing-orig-yes-2 & 134.0M & 56 & 11.0K & 113\\ DataRaceBench-DRB110-ordered-orig-no-1 & 120.0M & 16 & 775 & 36\\ DataRaceBench-DRB110-ordered-orig-no-2 & 120.0M & 56 & 2.3K & 116\\ DataRaceBench-DRB122-taskundeferred-orig-no-1 & 112.0M & 16 & 956 & 33\\ DataRaceBench-DRB122-taskundeferred-orig-no-2 & 112.0M & 56 & 3.0K & 113\\ DataRaceBench-DRB123-taskundeferred-orig-yes-1 & 112.0M & 16 & 1.2K & 33\\ DataRaceBench-DRB123-taskundeferred-orig-yes-2 & 112.0M & 56 & 3.7K & 113\\ DataRaceBench-DRB144-critical-missingreduction-orig-gpu-yes & 140.0M & 16 & 966 & 35\\ DataRaceBench-DRB148-critical1-orig-gpu-yes & 135.0M & 16 & 971 & 36\\ DataRaceBench-DRB150-missinglock1-orig-gpu-yes & 112.0M & 16 & 968 & 35\\ DataRaceBench-DRB152-missinglock2-orig-gpu-no & 112.0M & 16 & 968 & 35\\ DataRaceBench-DRB154-missinglock3-orig-gpu-no.c & 112.0M & 16 & 851 & 20\\ DataRaceBench-DRB155-missingordered-orig-gpu-no-1 & 50.0M & 16 & 2.0M & 36\\ DataRaceBench-DRB155-missingordered-orig-gpu-no-2 & 50.0M & 56 & 2.0M & 116\\ DataRaceBench-DRB176-fib-taskdep-no-1 & 1.6B & 17 & 12.4K & 33\\ DataRaceBench-DRB176-fib-taskdep-no-2 & 1.6B & 57 & 41.3K & 113\\ DataRaceBench-DRB176-fib-taskdep-no-3 & 618.3M & 17 & 11.7K & 33\\ 
DataRaceBench-DRB176-fib-taskdep-no-4 & 618.3M & 57 & 38.3K & 113\\ DataRaceBench-DRB176-fib-taskdep-no-5 & 90.2M & 17 & 9.8K & 33\\ DataRaceBench-DRB176-fib-taskdep-no-6 & 90.3M & 57 & 30.9K & 113\\ DataRaceBench-DRB177-fib-taskdep-yes-1 & 1.6B & 17 & 10.5K & 33\\ DataRaceBench-DRB177-fib-taskdep-yes-2 & 382.1M & 17 & 9.3K & 33\\ DataRaceBench-DRB177-fib-taskdep-yes-3 & 236.2M & 17 & 9.0K & 33\\ DataRaceBench-DRB177-fib-taskdep-yes-4 & 1.0B & 17 & 10.0K & 33\\ DataRaceBench-DRB177-fib-taskdep-yes-5 & 618.3M & 17 & 9.8K & 33\\ DataRaceBench-DRB177-fib-taskdep-yes-6 & 618.3M & 57 & 30.6K & 113\\ DataRaceBench-DRB177-fib-taskdep-yes-7 & 90.2M & 17 & 8.7K & 33\\ DataRaceBench-DRB177-fib-taskdep-yes-8 & 90.3M & 57 & 25.5K & 113\\ OmpSCR-v2.0-c-LUreduction-1 & 136.4M & 16 & 181.6K & 34\\ OmpSCR-v2.0-c-LUreduction-2 & 136.9M & 56 & 183.6K & 114\\ OmpSCR-v2.0-c-Mandelbrot-1 & 115.7M & 16 & 3.0K & 34\\ OmpSCR-v2.0-c-Mandelbrot-2 & 115.7M & 56 & 5.1K & 114\\ OmpSCR-v2.0-c-MolecularDynamic-1 & 204.3M & 16 & 7.2K & 34\\ OmpSCR-v2.0-c-MolecularDynamic-2 & 204.4M & 56 & 9.4K & 114\\ OmpSCR-v2.0-c-Pi-1 & 150.0M & 16 & 976 & 34\\ OmpSCR-v2.0-c-Pi-2 & 150.0M & 56 & 3.0K & 114\\ OmpSCR-v2.0-c-QuickSort-1 & 134.3M & 16 & 101.6K & 35\\ OmpSCR-v2.0-c-QuickSort-2 & 134.3M & 56 & 103.6K & 115\\ OmpSCR-v2.0-c-fft-2 & 2.1B & 57 & 29.4M & 115\\ OmpSCR-v2.0-c-fft-3 & 496.0M & 17 & 7.3M & 35\\ OmpSCR-v2.0-c-fft-4 & 496.0M & 57 & 7.3M & 115\\ OmpSCR-v2.0-c-fft6-1 & 146.0M & 16 & 4.3M & 49\\ OmpSCR-v2.0-c-fft6-2 & 146.0M & 56 & 5.3M & 132\\ OmpSCR-v2.0-c-testPath-1 & 30.2M & 16 & 2.8M & 35\\ OmpSCR-v2.0-c-testPath-2 & 37.5M & 56 & 2.8M & 115\\ OmpSCR-v2.0-c-LoopsWithDependencies-c-loopA.badSolution-1 & 112.6M & 16 & 161.0K & 34\\ OmpSCR-v2.0-c-LoopsWithDependencies-c-loopA.badSolution-2 & 394.0M & 56 & 563.0K & 114\\ OmpSCR-v2.0-c-LoopsWithDependencies-c-loopA.solution1-1 & 192.6M & 16 & 321.0K & 34\\ OmpSCR-v2.0-c-LoopsWithDependencies-c-loopA.solution1-2 & 674.2M & 56 & 1.1M & 114\\ 
OmpSCR-v2.0-c-LoopsWithDependencies-c-loopA.solution2-1 & 337.2M & 56 & 562.7K & 114\\ OmpSCR-v2.0-c-LoopsWithDependencies-c-loopA.solution2-2 & 96.4M & 16 & 160.9K & 34\\ OmpSCR-v2.0-c-LoopsWithDependencies-c-loopA.solution3-1 & 337.3M & 56 & 562.7K & 114\\ OmpSCR-v2.0-c-LoopsWithDependencies-c-loopA.solution3-2 & 96.4M & 16 & 160.9K & 34\\ OmpSCR-v2.0-c-LoopsWithDependencies-c-loopB.badSolution1-1 & 112.6M & 16 & 161.0K & 34\\ OmpSCR-v2.0-c-LoopsWithDependencies-c-loopB.badSolution1-2 & 394.0M & 56 & 563.0K & 114\\ OmpSCR-v2.0-cpp-sortOpenMP-cpp-qsomp1-1 & 107.0M & 16 & 101.2K & 35\\ OmpSCR-v2.0-cpp-sortOpenMP-cpp-qsomp1-2 & 106.8M & 56 & 103.5K & 115\\ OmpSCR-v2.0-cpp-sortOpenMP-cpp-qsomp2-1 & 107.1M & 56 & 104.7K & 115\\ OmpSCR-v2.0-cpp-sortOpenMP-cpp-qsomp2-2 & 107.5M & 16 & 101.5K & 35\\ OmpSCR-v2.0-cpp-sortOpenMP-cpp-qsomp3-1 & 115.4M & 56 & 10.0M & 115\\ OmpSCR-v2.0-cpp-sortOpenMP-cpp-qsomp3-2 & 141.7M & 16 & 10.0M & 35\\ OmpSCR-v2.0-cpp-sortOpenMP-cpp-qsomp4-1 & 114.2M & 56 & 104.7K & 115\\ OmpSCR-v2.0-cpp-sortOpenMP-cpp-qsomp4-2 & 164.0M & 16 & 101.5K & 35\\ OmpSCR-v2.0-cpp-sortOpenMP-cpp-qsomp6 & 106.7M & 56 & 104.8K & 115\\ OmpSCR-v2.0-cpp-sortOpenMP-cpp-qsomp7-1 & 295.5M & 56 & 82.9K & 114\\ OmpSCR-v2.0-cpp-sortOpenMP-cpp-qsomp7-2 & 88.9M & 16 & 31.6K & 34\\ HPCCG-1 & 228.1M & 16 & 935.9K & 34\\ HPCCG-2 & 229.5M & 56 & 938.4K & 114\\ graph500-1 & 171.3M & 16 & 228.8K & 34\\ graph500-2 & 172.5M & 56 & 233.4K & 114\\ SimpleMOC & 170.2M & 16 & 4.3M & 5.0K\\ NPBS-DC.S-1 & 11.7M & 16 & 3.6M & 36\\ NPBS-DC.S-2 & 11.7M & 56 & 3.7M & 115\\ NPBS-IS.W-1 & 152.9M & 16 & 2.2M & 34\\ NPBS-IS.W-2 & 300.1M & 56 & 2.3M & 114\\ Kripke-1 & 117.5M & 16 & 203.1K & 35\\ Kripke-2 & 119.2M & 56 & 205.8K & 116\\ Lulesh-1 & 35.3M & 17 & 159.5K & 33\\ Lulesh-2 & 52.1M & 56 & 164.0K & 114\\ Lulesh-3 & 543.4M & 17 & 1.3M & 34\\ Lulesh-4 & 569.5M & 57 & 1.2M & 114\\ QuickSilver & 132.6M & 56 & 914.1K & 116\\ RSBench-1 & 1.3B & 16 & 2.0M & 34\\ RSBench-2 & 1.3B & 56 & 2.0M & 114\\ 
XSBench-1 & 96.6M & 16 & 25.7M & 33\\ XSBench-2 & 96.6M & 56 & 25.7M & 114\\ amg2013-1 & 169.9M & 18 & 2.8M & 74\\ amg2013-2 & 189.6M & 58 & 4.5M & 154\\ miniFE-1 & 206.7M & 58 & 1.2M & 154\\ miniFE-2 & 207.7M & 18 & 1.2M & 74\\ account & 134 & 5 & 41 & 3\\ airlinetickets & 140 & 5 & 44 & 0\\ array & 51 & 4 & 30 & 2\\ batik & 157.9M & 7 & 4.9M & 1.9K\\ boundedbuffer & 332 & 3 & 63 & 2\\ bubblesort & 4.6K & 13 & 167 & 2\\ bufwriter & 22.2K & 7 & 471 & 1\\ clean & 1.3K & 10 & 26 & 2\\ critical & 59 & 5 & 30 & 0\\ cryptorsa & 58.5M & 9 & 1.7M & 8.0K\\ derby & 1.4M & 5 & 185.6K & 1.1K\\ ftpserver & 49.6K & 12 & 5.5K & 301\\ jigsaw & 3.1M & 12 & 103.5K & 275\\ lang & 6.3K & 8 & 1.5K & 0\\ linkedlist & 1.0M & 13 & 3.1K & 1.0K\\ lufact & 134.1M & 5 & 252.1K & 1\\ luindex & 397.8M & 3 & 2.5M & 65\\ lusearch & 217.5M & 8 & 5.2M & 118\\ mergesort & 3.0K & 6 & 621 & 3\\ moldyn & 200.3K & 4 & 1.2K & 2\\ pingpong & 151 & 7 & 51 & 0\\ producerconsumer & 658 & 9 & 67 & 3\\ raytracer & 15.8K & 4 & 3.9K & 8\\ readerswriters & 11.3K & 6 & 18 & 1\\ sor & 606.9M & 5 & 1.0M & 2\\ sunflow & 11.7M & 17 & 1.3M & 9\\ tsp & 307.3M & 10 & 181.1K & 2\\ twostage & 193 & 13 & 21 & 2\\ wronglock & 246 & 23 & 26 & 2\\ xalan & 122.5M & 7 & 4.4M & 2.5K\\ biojava & 221.0M & 4 & 121.0K & 78\\ cassandra & 259.1M & 173 & 9.9M & 60.5K\\ graphchi & 215.8M & 20 & 24.9M & 60\\ hsqldb & 18.8M & 44 & 945.0K & 401\\ tradebeans & 39.1M & 222 & 2.8M & 6.1K\\ tradesoap & 39.1M & 221 & 2.8M & 6.1K\\ zxing & 546.4M & 15 & 37.8M & 1.5K\\ \end{xtabular} \subsection{Tree Clocks in Atomicity Checking}\label{subsec:atomicity} \section{Race Detection with Clock Trees}\label{sec:clock_trees} In this section we introduce clock trees, a new data structure for representing logical times in concurrent and distributed programs. We first illustrate the intuition behind clock trees, and then present the formal details. 
\input{intuition} \input{clocktrees_details} \input{optimality} \subsection{Doesn't Commute}\label{subsec:dc} \Paragraph{The $\operatorname{\acr{DC}}$ partial order.} Doesn't-commute is a partial order introduced in~\cite{Roemer18}. It is an unsound weakening of $\operatorname{\acr{WCP}}$, which means that it can report more races but may also make unsound reports. Nevertheless, $\operatorname{\acr{DC}}$ races are candidates for actual races and are processed by a second, more heavyweight but sound phase. $\operatorname{\acr{DC}}$ races that do not pass this phase are vindicated, whereas the ones that pass are sound races. \Paragraph{$\operatorname{\acr{DC}}$ with tree clocks.} \cref{algo:dc} shows $\operatorname{\acr{DC}}$ using tree clocks. It is a direct adaptation of the $\operatorname{\acr{DC}}$ algorithm based on vector clocks~\cite{Roemer18}. The only observation required is that the copies in \cref{line:dc_copyread,line:dc_copywrite}, which occur when the algorithm processes a lock-release event $\rel$, are indeed monotone. This monotonicity holds due to the corresponding joins in \cref{line:dc_readjoin} and \cref{line:dc_writejoin} that occur while the algorithm processes the critical section of $\rel$. \input{algorithms/algo_dc} \section{Tree Clocks in Distributed Systems}\label{sec:tc_distributed} \section{Experiments}\label{sec:experiments} In this section we report on an implementation and experimental evaluation of the tree clock data structure. The primary goal of these experiments is to evaluate the practical advantage of tree clocks over vector clocks for keeping track of logical times in concurrent program executions. \Paragraph{Implementation.} Our implementation is in Java and closely follows \cref{algo:clock_tree}. \camera{ The tree clock data structure is represented as two arrays of length $k$ (the number of threads), the first one encoding the shape of the tree and the second one encoding the integer timestamps as in a standard vector clock.
} For efficiency reasons, recursive routines have been made iterative. \Paragraph{Benchmarks.} Our benchmark set consists of standard benchmarks found in benchmark \camera{suites} and recent literature. In particular, we used the Java benchmarks from the IBM Contest suite~\cite{Farchi03}, the Java Grande suite~\cite{Smith01}, DaCapo~\cite{Blackburn06}, and SIR~\cite{doESE05}. In addition, we used OpenMP benchmark programs, whose execution lengths and numbers of threads can be tuned, from DataRaceOnAccelerator~\cite{schmitz2019dataraceonaccelerator}, DataRaceBench~\cite{liao2017dataracebench}, OmpSCR~\cite{dorta2005openmp} and the NAS parallel benchmarks~\cite{nasbenchmark}, as well as large OpenMP applications contained in the following benchmark suites: CORAL~\cite{coral2, coral}, the ECP proxy applications~\cite{ecp}, and the Mantevo project~\cite{mantevo}. Each benchmark was instrumented and executed in order to log a single concurrent trace, using the tools RV-Predict~\cite{rvpredict} (for Java programs) and ThreadSanitizer~\cite{threadsanitizer} (for OpenMP programs). Overall, this process yielded a large set of \camera{$153$} benchmark traces that were used in our evaluation. \cref{tab:trace_stats} presents aggregate information about the generated benchmark traces. \begin{arxiv} \camera{Information on the individual traces is provided in \cref{tab:trace-info} in \cref{appsec:benchmarks}.} \end{arxiv} \begin{asplos} \camera{Information on the individual traces is provided in our technical report~\cite{arxiv}.} \end{asplos} \input{tables/trace_stats} \input{figures/fig_time_comparison} \Paragraph{Setup.} Each trace was processed to compute each of the $\operatorname{\acr{MAZ}}$, $\operatorname{\acr{SHB}}$ and $\operatorname{\acr{HB}}$ partial orders using both tree clocks and standard vector clocks. This allows us to directly measure the speedup conferred by tree clocks in computing the respective partial order, which is the goal of this paper.
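The flat two-array representation described above can be sketched as follows. This is an illustration only (the paper's implementation is in Java): the field names are hypothetical, and the sub-linear join and monotone-copy routines of \cref{algo:clock_tree} are omitted, so the naive join shown below does the same $O(k)$ work a vector clock would.

```python
class TreeClock:
    """Flat encoding of a tree clock for k threads: clk[t] holds thread
    t's integer timestamp (as in a vector clock) and parent[t] encodes
    the tree shape, with parent[t] == -1 marking the root (the owning
    thread)."""

    def __init__(self, k, root):
        self.k, self.root = k, root
        self.clk = [0] * k
        self.parent = [-1] * k   # tree shape; no edges attached yet

    def increment(self):
        # local step of the owning thread (the root of the tree)
        self.clk[self.root] += 1

    def join_naive(self, other):
        # Pointwise max, as a vector clock would compute it. The actual
        # tree-clock join instead traverses other's tree and skips whole
        # subtrees whose roots are already known, achieving sub-linear
        # updates; that machinery is omitted in this sketch.
        for t in range(self.k):
            if other.clk[t] > self.clk[t]:
                self.clk[t] = other.clk[t]
```

The point of the tree shape is precisely to avoid the full loop in \texttt{join\_naive}: if a node's timestamp is already up to date, none of its descendants need to be visited.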
As the computation of these partial orders is usually only the first component of a larger analysis, we also evaluated the impact of the conferred speedup on an overall analysis as follows. For each pair of conflicting events $e_1, e_2$, we computed whether these events are concurrent wrt the corresponding partial order (e.g., whether $e_1 \unordhb{\Trace} e_2$). This test is performed in dynamic race detection (in the cases of $\operatorname{\acr{HB}}$ and $\operatorname{\acr{SHB}}$) where such pairs constitute data races, as well as in stateless model checking (in the case of $\operatorname{\acr{MAZ}}$) where the model checker identifies such event pairs and attempts to reverse their order in order to exhaustively enumerate all Mazurkiewicz traces of the concurrent program. For a fair comparison, in the case of $\operatorname{\acr{HB}}$ we used common epoch optimizations~\cite{Flanagan09} to speed up the analysis for both tree clocks and vector clocks (recall \cref{rem:epochs}). For consistency, every measurement was repeated 3 times and the average time was reported. \Paragraph{Running times.} For each partial order, \cref{tab:speedups} shows the average speedup over all benchmarks, both with and without the analysis component. We see that tree clocks are very effective in reducing the running time of the computation of all 3 partial orders, with the most significant impact being on $\operatorname{\acr{SHB}}$ where the average speedup is 2.53 times. For the cases of $\operatorname{\acr{MAZ}}$ and $\operatorname{\acr{SHB}}$, this speedup also led to a significant speedup in the overall analysis time. On the other hand, although $\operatorname{\acr{HB}}$ with tree clocks is about 2 times faster than with vector clocks, this speedup has a smaller effect on the overall analysis time. 
The reason behind this observation is straightforward: $\operatorname{\acr{SHB}}$ and $\operatorname{\acr{MAZ}}$ are much more computationally heavy, as they are defined using all types of events; on the other hand, $\operatorname{\acr{HB}}$ is defined only on synchronization events ($\acq$ and $\rel$) and, on average, only $\simeq9.5\%$ of the events in our benchmark traces are synchronization events. Since our analysis considers all events, the $\operatorname{\acr{HB}}$-computation component occupies a smaller fraction of the overall analysis time. We remark, however, that for programs that are more synchronization-heavy, or for analyses that are more lightweight (e.g., when checking for data races on a specific variable as opposed to all variables), the speedup of tree clocks on the whole analysis will be larger. Indeed, \cref{fig:ratio_sync} shows the obtained speedup on the total analysis time for $\operatorname{\acr{HB}}$ as a function of the percentage of synchronization events. We observe a trend for the speedup to increase as the percentage of synchronization events increases in the trace. A further observation is that the speedup is more prominent when the number of threads is large. \input{tables/tab_speedups} \input{figures/fig_ratio_sync} \cref{fig:time_comparison} gives a more detailed view of the tree clocks vs vector clocks times across all benchmarks. We see that tree clocks almost always outperform vector clocks on all partial orders, and in some cases by large margins. Interestingly, the speedup tends to be larger on more demanding benchmarks (i.e., on those that take more time). In the very few cases where tree clocks are slower, this is only by a small factor. These are traces where the sub-linear updates of tree clocks only yield a small potential for improvement, which does not justify the overhead of maintaining the more complex tree data structure (as opposed to a vector). 
Nevertheless, overall tree clocks consistently deliver a generous speedup to each of $\operatorname{\acr{MAZ}}$, $\operatorname{\acr{HB}}$ and $\operatorname{\acr{SHB}}$. Finally, we remark that all these speedups come directly from just replacing the underlying data structure, without any attempt to optimize the algorithm that computes the respective partial order, or its interaction with the data structure. \input{figures/fig_vtwork} \input{figures/fig_vtwork_histogram} \Paragraph{Comparison with vt-work.} We also investigate the total number of entries updated using each of the data structures. Recall that the metric $\operatorname{VTWork}(\Trace)$ (\cref{sec:race_detection}) measures the minimum number of updates that any implementation of the vector time must perform when computing the $\operatorname{\acr{HB}}$ partial order. We can likewise define the metrics $\operatorname{TCWork}(\Trace)$ and $\operatorname{VCWork}(\Trace)$ corresponding to the number of entries updated when processing each event using, respectively, tree clocks and vector clocks. These metrics are visualized in \cref{fig:vtwork} for \camera{computing the $\operatorname{\acr{HB}}$ partial order in} our benchmark suite. The figure shows that the $\operatorname{VCWork}(\Trace)/ \operatorname{VTWork}(\Trace)$ ratio is often considerably large. In contrast, the ratio $\operatorname{TCWork}(\Trace)/ \operatorname{VTWork}(\Trace)$ is typically significantly smaller. The differences in running times between vector and tree clocks reflect the discrepancies between $\operatorname{TCWork}(\cdot)$ and $\operatorname{VCWork}(\cdot)$. Next, recall the intuition behind the optimality of tree clocks (\cref{thm:vtoptimality}), namely that $\operatorname{TCWork}(\Trace)\leq 3\cdot \operatorname{VTWork}(\Trace)$. 
\cref{fig:vtwork} confirms this theoretical bound, as the $\operatorname{TCWork}(\Trace)/ \operatorname{VTWork}\allowbreak(\Trace)$ ratio stays nicely upper-bounded by $3$ while the $\operatorname{VCWork}(\Trace)/\allowbreak \operatorname{VTWork}(\Trace)$ ratio grows to \camera{nearly} $100$. Interestingly, for some benchmarks we have $\operatorname{TCWork}(\Trace)\simeq 2.99\cdot \operatorname{VTWork}(\Trace)$, i.e., these benchmarks push tree clocks to their vt-work upper-bound. \camera{ Going one step further, \cref{fig:vtwork-histogram} shows the ratio $\operatorname{VCWork}(\Trace)/ \operatorname{TCWork}(\Trace)$ for each partial order in our dataset. The results indicate that vector clocks perform a lot of unnecessary work compared to tree clocks, and experimentally demonstrate the source of speedup on tree clocks. Although \cref{fig:vtwork-histogram} indicates that the potential for speedup can be large (reaching \hcomment{$55 \times$}), the actual speedup in our experiments (\cref{fig:time_comparison}) is smaller, as a single tree clock operation is more computationally heavy than a single vector clock operation. } \Paragraph{\camera{Scalability.}} \camera{ To get better insight into the scalability of tree clocks, we performed a set of controlled experiments on custom benchmarks, varying the number of threads and the number of locks while keeping the communication patterns constant. Each trace consists of $10$M events, while the number of threads varies between $10$ and $360$. The traces are generated in a way such that a randomly chosen thread performs two consecutive operations, $\acq(l)$ followed by $\rel(l)$, on a randomly (when applicable) chosen lock $l$. 
We have considered four cases: (a)~all threads communicate over a single common lock (single lock); (b)~similar to (a), but there are 50 locks, and $20\%$ of the threads are $5$ times more likely to perform an operation compared to the rest of the threads (50 locks, skewed); (c)~$k-1$ client threads communicate with a single server thread via a dedicated lock per thread (star topology); (d)~similar to (a), but every pair of threads communicates over a dedicated lock (pairwise communication). } \input{figures/fig_scalability} \camera{ \cref{fig:scalability} shows the obtained results. Scenario (a) shows how tree clocks have a constant-factor speedup over vector clocks in this setting. As we move to more locks in scenario (b), thread communication becomes more independent and the benefit of tree clocks may slightly diminish. As a subset of the threads is more active than the rest, timestamps are frequently exchanged through them, making tree clocks faster in this setting as well. Scenario (c) represents a case in which tree clocks thrive: while the time taken by vector clocks increases with the number of threads, it stays constant for tree clocks. This is because the star topology implies that, on average, every tree clock join and copy operation only affects a constant number of tree clock entries, despite the fact that every thread is aware of the state of every other thread. Intuitively, the star communication topology \emph{naturally} affects the shape of the tree to (almost) a star, which leads to this effect. Finally, scenario (d) represents the worst case for tree clocks as all pairs of threads can communicate with each other and the communication is conducted via a unique lock per thread pair. This pattern nullifies the benefit of tree clocks, while their increased complexity results in a general slowdown. However, even in this worst-case scenario, the difference between tree clocks and vector clocks remains relatively small. 
} \section{Experiments}\label{sec:experiments} In this section we report on an implementation and experimental evaluation of the tree clock data structure. The primary goal of these experiments is to evaluate the practical advantage of tree clocks over vector clocks for keeping track of logical times in a concurrent program. \Paragraph{Implementation.} Our implementation is in Java and closely follows \cref{algo:clock_tree}. For efficiency reasons, recursive routines have been made iterative. The thread map $\operatorname{ThrMap}$ is implemented as a list with random access, while the $\operatorname{Chld}(u)$ data structure for storing the children of node $u$ is implemented as a doubly linked list. We also implemented \textsc{Djit}$^+$\xspace style optimizations~\cite{Pozniansky03} for both tree and vector clocks. \Paragraph{Benchmarks.} Our benchmark set consists of standard benchmarks found in the recent literature, mostly in the context of race detection~\cite{Huang14,Kini17,Mathur18,Roemer18,Pavlogiannis2020,Mathur21}. It contains several concurrent programs taken from standard benchmark suites --- IBM Contest benchmark suite~\cite{Farchi03}, Java Grande suite~\cite{Smith01}, DaCapo~\cite{Blackburn06}, and SIR~\cite{doESE05}. In order to fairly compare the performance of vector clocks and tree clocks, we logged the execution traces of each benchmark program using RV-Predict~\cite{rvpredict} and analyzed both VC-based and TC-based algorithms on the same trace. \Paragraph{Setup.} Our experimental setup consists of two implementations for each of the $\operatorname{\acr{HB}}$ and $\operatorname{\acr{SHB}}$ algorithms, one using vector clocks and the other using tree clocks to represent vector times (\cref{algo:hb} and \cref{algo:shb}). Each algorithm was given as input a trace produced from the above benchmarks, and we measured the running time to construct the respective partial order. 
We do not include small benchmarks where this time is below 10ms, as these cases are very simple to handle using vector clocks and the tree clock data structure is not likely to offer any advantage. As our focus is on the impact of the vector-time data structure on timestamping, we did not perform any further analysis after constructing the partial order (e.g., detecting races). Although these partial orders are useful for various analyses, we remark that any analysis component will be identical under vector clocks and tree clocks, and thus does not contribute to the comparison of the two data structures. \input{tables/tab_times} \begin{figure*}[t!] \includegraphics[scale=0.7]{figures/vt_workHB} \caption{Comparing number of operations (vt-work) performed using vector clocks and tree-clocks in $\operatorname{\acr{HB}}$ (y-axis is log-scale).} \label{fig:vtwork} \end{figure*} \Paragraph{Running times.} The running times for vector clocks and tree clocks are shown in \cref{tab:times}. We see that $\operatorname{\acr{SHB}}$ requires significantly more time to construct than $\operatorname{\acr{HB}}$, for both vector clocks and tree clocks. This is expected, as $\operatorname{\acr{HB}}$ only orders synchronization events (i.e., release/acquire events) across threads, while $\operatorname{\acr{SHB}}$ also orders memory access events. We observe that tree clocks incur a significant speedup over vector clocks for both partial orders in most benchmarks. In the case of $\operatorname{\acr{HB}}$, the whole benchmark set is processed approximately $5.5$ times faster using tree clocks, while for $\operatorname{\acr{SHB}}$ the speedup is $2.24$ times. The speedup for $\operatorname{\acr{SHB}}$ is smaller than that for $\operatorname{\acr{HB}}$. This is expected because $\operatorname{\acr{SHB}}$ uses as many vector/tree clocks as there are variables. However, for some of these variables, the corresponding vector/tree clock is only used in a few join and copy operations. 
As the tree clock is a heavier data structure, the number of these operations is not large enough to offset the initialization overhead of tree clocks. Nevertheless, overall tree clocks deliver a generous speedup to both $\operatorname{\acr{HB}}$ and $\operatorname{\acr{SHB}}$. To get a better aggregate representation of the advantage of tree clocks, we have also computed the geometric mean of the ratios of vector-clock times over tree-clock times, averaged over all benchmarks. For $\operatorname{\acr{HB}}$, the speedup is $\operatorname{GMean}_{\operatorname{\acr{HB}}}=2$, while for $\operatorname{\acr{SHB}}$ the speedup is $\operatorname{GMean}_{\operatorname{\acr{SHB}}}=1.75$. These numbers show that just by replacing vector clocks with tree clocks, the running time reduces on average to $50\%$ for $\operatorname{\acr{HB}}$ and to $57\%$ for $\operatorname{\acr{SHB}}$, regardless of the total running time of the benchmark. These are significant reductions, especially coming from using a new data structure without any attempt to optimize other aspects such as the algorithm that constructs the respective partial order. \Paragraph{Deep and monotone copies in $\operatorname{\acr{SHB}}$.} Recall that in $\operatorname{\acr{SHB}}$ with tree clocks, the processing of a write event leads to a $\FunctionCTCopyCheckMonotone$ operation, which might resolve to a deep copy between the tree clocks instead of a monotone one (\cref{algo:shb}). If the number of deep copies is large, the advantage of tree clocks is lost as the operation touches the whole data structure. Here we have evaluated our expectation that the frequency of deep copies is negligible compared to monotone copies. Indeed, we have seen that in all benchmarks, the number of deep copies is many orders of magnitude smaller than the number of monotone copies. 
For a few representative numbers, in the case of zxing, $\FunctionCTCopyCheckMonotone$ resolved 137M times to a monotone copy and only 43 times to a deep copy. In the case of hsqldb, the corresponding numbers are 2.3M and 25. Analogous observations hold for all other benchmarks. \Paragraph{Comparison with vt-work.} We also investigate the total number of entries updated using each of the data structures. Recall that the metric $\operatorname{VTWork}(\Trace)$ (\cref{sec:race_detection}) measures the minimum number of updates that any implementation of the vector time must perform when computing a partial order. We can likewise define the metrics $\operatorname{TCWork}(\Trace)$ and $\operatorname{VCWork}(\Trace)$ corresponding to the number of entries updated when processing each event using, respectively, tree clocks and vector clocks. These metrics are visualized in \cref{fig:vtwork} for our benchmark suite and accurately explain the performance improvement of tree clocks over vector clocks (\cref{tab:times}). The figure shows that the $\operatorname{VCWork}(\Trace)/ \operatorname{VTWork}(\Trace)$ ratio is often considerably large. In contrast, the ratio $\operatorname{TCWork}(\Trace)/ \operatorname{VTWork}(\Trace)$ is typically significantly smaller. The differences in running times between vector and tree clocks are a direct reflection of the discrepancies between $\operatorname{TCWork}(\cdot)$ and $\operatorname{VCWork}(\cdot)$. In fact, the benchmarks which have the highest ratios $\operatorname{VCWork}(\Trace)/\operatorname{TCWork}(\Trace)$ also show a corresponding high speed-up (cassandra, tradebeans, hsqldb and raxextended). Next, we remark that $\operatorname{TCWork}(\Trace)$ is always no more than a constant factor larger than $\operatorname{VTWork}(\Trace)$; our proof of \cref{thm:vtoptimality} implies that $\operatorname{TCWork}(\Trace)\leq 3\cdot \operatorname{VTWork}(\Trace)$. 
The ratios in \cref{fig:vtwork} confirm this theoretical bound. Interestingly, for the benchmark bufwriter, we have $\operatorname{TCWork}(\Trace)\simeq 2.99\cdot \operatorname{VTWork}(\Trace)$, i.e., this benchmark pushes tree clocks to their worst performance relative to vt-work. \section{Key Insights} \label{sec:key-insights} The key insight that this work is based on is that vector clock join and copy operations most of the time perform unnecessary updates. For example, when thread $t_1$ receives a message from $t_2$, the whole vector clock of $t_2$ is processed in order to update the view of $t_1$ about all other threads in the system that $t_2$ knows about. However, if $t_1$ knows of a later time for some thread $t_3$ than $t_2$ does, we ideally would like to \emph{a-priori} avoid checking $t_3$ in the vector clock of $t_2$. Unfortunately, the flat structure of vector clocks does not allow such a-priori conclusions, and hence every join and copy operation must traverse all entries. Armed with this insight, in this work we develop \emph{tree clocks}, a new data structure that replaces vector clocks in storing vector times. In contrast to the flat structure of the traditional vector clocks, the dynamic hierarchical structure of tree clocks naturally captures ad-hoc communication patterns between processes. In turn, this allows for join and copy operations that run in time that is roughly proportional to the number of entries that are being updated, thus often running in \emph{sublinear time}. This leads to significant speedups in computing causal orderings in concurrent executions, as well as in performing further dynamic analyses once these causal orderings have been established. As a data structure, tree clocks offer high versatility as they can be used in computing many different ordering relations. 
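The insight above can be made concrete with a small sketch (Python here, purely illustrative; the paper's implementation is in Java, and the node fields and function names below are ours). A flat vector-clock join must scan every entry, whereas a hierarchical view lets a join skip a whole subtree as soon as its root offers nothing new. Real tree clocks attach additional ``attachment'' times to children to make this pruning sound in general; the toy version below prunes on a node's own clock only to illustrate the potential savings.

```python
# Toy illustration (NOT the paper's tree clock algorithm): each node of
# t2's knowledge tree carries the latest time t2 learned for that thread.

class Node:
    def __init__(self, tid, clk, children=()):
        self.tid, self.clk, self.children = tid, clk, list(children)

def pruned_join(view, node, touched):
    """Update `view` (tid -> time) from the tree rooted at `node`,
    skipping whole subtrees whose root brings no new information."""
    touched.append(node.tid)
    if node.clk <= view.get(node.tid, 0):
        return                        # nothing newer here: prune the subtree
    view[node.tid] = node.clk
    for ch in node.children:
        pruned_join(view, ch, touched)

# t2's knowledge as a tree: t2 -> t3 -> t4
tree = Node(2, 8, [Node(3, 5, [Node(4, 6)])])
view = {1: 9, 2: 7, 3: 7, 4: 7}       # t1 is already ahead of t2 about t3, t4
touched = []
pruned_join(view, tree, touched)
assert view == {1: 9, 2: 8, 3: 7, 4: 7}
assert touched == [2, 3]              # the subtree below t3 was never visited
```

A flat vector-clock join on the same data would scan all four entries; the hierarchical sketch stops after two, which is the effect tree clocks exploit systematically.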
\section{Key Results and Contributions} \label{sec:key-contributions} Our contributions are as follows. \begin{compactenum} \item We introduce \emph{tree clock}, a new data structure for maintaining logical times in concurrent executions. In contrast to the flat structure of the traditional vector clocks, the dynamic hierarchical structure of tree clocks naturally captures ad-hoc communication patterns between processes. In turn, this allows for join and copy operations that run in \emph{sublinear time}. As a data structure, tree clocks offer high versatility as they can be used in computing many different ordering relations. \item We prove that tree clocks are an \emph{optimal data structure} for computing $\operatorname{\acr{HB}}$, in the sense that, \emph{for every input trace}, the total computation time cannot be improved (asymptotically) by replacing tree clocks with any other data structure. On the other hand, vector clocks do not enjoy this property. \item We illustrate the versatility of tree clocks by presenting tree clock-based algorithms for the $\operatorname{\acr{MAZ}}$ and $\operatorname{\acr{SHB}}$ partial orders. \item We perform a large-scale experimental evaluation of the tree clock data structure for computing the $\operatorname{\acr{MAZ}}$, $\operatorname{\acr{SHB}}$ and $\operatorname{\acr{HB}}$ partial orders, and compare its performance against the standard vector clock data structure. 
Our results show that just by replacing vector clocks with tree clocks, the computation becomes between 1.73X and 2.53X faster on average. Given our experimental results, we believe that replacing vector clocks by tree clocks in partial order-based algorithms can lead to significant improvements in many applications. \end{compactenum} \section{Limitations of the State of the Art} \label{sec:limitations} State-of-the-art techniques for analyzing concurrent programs by building partial orders almost always rely on the use of \emph{vector clocks}. Vector clocks are data structures that encode \emph{timestamps} of events, and thus succinctly characterize the causal relationships between events in an execution of a concurrent program. The timestamp of an event is \ucomment{here}. \section{Motivation} \label{sec:motivation} The analysis of multi-threaded programs is one of the major challenges in building reliable and scalable software systems. The large space of communication interleavings in such programs poses a significant challenge to the programmer, as intended invariants can be broken by unexpected communication patterns. Consequently, significant efforts are made towards understanding and detecting concurrency bugs efficiently \cite{lpsz08,Shi10,Tu2019,SoftwareErrors2009,boehmbenign2011,Farchi03}. \pagebreak \bibliographystyle{plain} \section{Why ASPLOS} \label{sec:why-asplos} Concurrency is one of the most studied topics today among different communities (systems, architecture, PL), be it multithreading or distributed systems. The community of ASPLOS, in particular, has a long tradition in analyzing and optimizing parallel programs, and offering support for programming-language tasks. Our work lies in the intersection of parallelism, programming languages and algorithms, and delivers improvements in both theory and practice of the analysis of concurrent programs, and hence is a very tight fit for ASPLOS research. Moreover, as our key contribution is a new versatile data structure (as opposed to a dedicated framework), we anticipate that tree clocks will be quickly picked up by the ASPLOS community and find applications beyond the ones presented in this work. Our ``Related Work'' section of the paper suggests some immediate next steps in this direction. \section{$\operatorname{FastHB}$}\label{sec:fasthb} \Paragraph{Last reads.} Given a trace $\Trace$ and a write event $\wt$ on a variable $x$, we define the \emph{last reads} of $\wt$ as the read events on $x$ that appear between $\wt$ and the last appearance of a write event $\wt'$ on $x$ that precedes $\wt$ in $\Trace$. Formally, \begin{align*} \operatorname{LRDs}^{\Trace}(\wt)=\{&\rd\in \operatorname{Rds}{\Trace}\colon \operatorname{Variable}(\rd)=x \text{ and }\\ & \not\exists \wt' \text{ s.t. } \operatorname{Variable}(\wt')=x \text{ and } \rd<^{\Trace}\wt'<^{\Trace}\wt \}\ . \end{align*} Our motivation behind this definition comes from the following lemma. \smallskip \begin{restatable}{lemma}{lemforgetreads}\label{lem:forget_reads} Consider any trace $\Trace$, variable $x$ and write event $\wt$ on $x$. If $\wt$ is the first racy write event of $\Trace$ on $x$, then one of the following holds. \begin{compactenum} \item $\Trace$ has a data race $(\wt', \wt)$, where $\wt'$ is the last write event on $x$ that precedes $\wt$ in $\Trace$. \item There exists a read event $\rd\in \operatorname{LRDs}^{\Trace}(\wt)$ such that $(\rd, \wt)$ is a data race of $\Trace$. 
\end{compactenum} \end{restatable} \Andreas{With epochs on both writes and reads, every read and write event is tested at most once.} \input{algorithms/algo_fasthb} \smallskip \begin{restatable}{theorem}{thmfasthb}\label{thm:fasthb} $\operatorname{FastHB}$ is a vector-time optimal race detector. Moreover, on input trace $\Trace$, for every variable $x\in \Variables{\Trace}$, $\operatorname{FastHB}$ reports at least the first $\operatorname{\acr{HB}}$ warning on events $\wt(x)$ and $\rd(x)$. \end{restatable} \subsection{The Happens-Before Partial Order}\label{subsec:happens_before_races} Lamport's Happens-Before ($\operatorname{\acr{HB}}$)~\cite{Lamport78} is one of the most frequently used partial orders for the analysis of concurrent executions, with wide applications in domains such as dynamic race detection. Here we use $\operatorname{\acr{HB}}$ to illustrate the disadvantages of vector clocks and form the basis for the tree clock data structure. In later sections we show how tree clocks also apply to other partial orders, such as Schedulably-Happens-Before and the Mazurkiewicz partial order. \Paragraph{Happens-before.} Given a trace $\Trace$, the \emph{happens-before} ($\operatorname{\acr{HB}}$) partial order $\hb{\Trace}$ of $\Trace$ is the smallest partial order over the events of $\Trace$ that satisfies the following conditions. \begin{compactenum} \item $\tho{\Trace} \subseteq \hb{\Trace}$. \item For every release event $\rel(\ell)$ and acquire event $\acq(\ell)$ on the same lock $\ell$ with $\rel(\ell) \stricttrord{\Trace} \acq(\ell)$, we have $\rel(\ell)\hb{\Trace} \acq(\ell)$. \end{compactenum} For two events $e_1, e_2$ in trace $\Trace$, we use $e_1 \unordhb{\Trace} e_2$ to denote that neither $e_1 \hb{\Trace} e_2$, nor $e_2 \hb{\Trace} e_1$. We say $e_1 \stricthb{\Trace} e_2$ when $e_1 \neq e_2$ and $e_1 \hb{\Trace} e_2$. 
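To make the definition concrete, the following is a minimal sketch (in Python, with our own trace encoding; \cref{algo:hb_basic} in the paper is the authoritative algorithm) of how $\operatorname{\acr{HB}}$ timestamps can be computed with one clock per thread and one per lock: join on acquire, copy on release, and an implicit per-event increment.

```python
# Events are (thread, op, lock) triples; op is "acq" or "rel", and k is
# the number of threads. Clocks are plain lists of length k.
def hb_timestamps(trace, k):
    C_t = {t: [0] * k for t in range(k)}   # one clock per thread
    C_l = {}                               # one clock per lock
    stamps = []
    for (t, op, l) in trace:
        C_t[t][t] += 1                     # implicit local increment
        if op == "acq":                    # join the lock's clock
            src = C_l.get(l, [0] * k)
            for i in range(k):
                if src[i] > C_t[t][i]:
                    C_t[t][i] = src[i]
        elif op == "rel":                  # copy into the lock's clock
            C_l[l] = list(C_t[t])
        stamps.append(list(C_t[t]))
    return stamps

# Thread 0 releases lock l, thread 1 later acquires it:
trace = [(0, "acq", "l"), (0, "rel", "l"), (1, "acq", "l"), (1, "rel", "l")]
V = hb_timestamps(trace, 2)
assert V == [[1, 0], [2, 0], [2, 1], [2, 2]]
# The release (event 1) happens-before the acquire (event 2): thread 0's
# local time at the release is reflected in the acquire's timestamp.
assert V[2][0] >= V[1][0]
```

With such timestamps, two events are unordered (and hence candidates for a race when they conflict) exactly when neither timestamp dominates the other at the issuing thread's entry.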
Given a trace $\Trace$, two events $e_1$, $e_2$ of $\Trace$ are said to be in a \emph{happens-before (data) race} if (i)~$\Confl{e_1}{e_2}$ and (ii)~$e_1\unordhb{\Trace} e_2$. \input{algorithms/algo_hb_basic} \Paragraph{The happens-before algorithm.} In light of \cref{lem:PO-timestamps}, race detection based on $\operatorname{\acr{HB}}$ constructs the $\hb{\Trace}$ partial order in terms of vector timestamps and detects races using these. The core algorithm for constructing $\hb{}$ is shown in \cref{algo:hb_basic}. The algorithm maintains a vector clock $\mathbb{C}_{t}$ for every thread $t\in \Threads{}$, and a similar one $\mathbb{C}_{\ell}$ for every lock $\ell$. When processing an event $e=\ev{t, \operatorname{op}}$, it performs an update $\mathbb{C}_{t}.\FunctionVCIncrement(t, 1)$, which is implicit and not shown in \cref{algo:hb_basic}. Moreover, if $\operatorname{op}=\acq(\ell)$ or $\operatorname{op}=\rel(\ell)$, the algorithm executes the corresponding procedure. The $\operatorname{\acr{HB}}$-timestamp of $e$ is then simply the value stored in $\mathbb{C}_{\ThreadOf{e}}$ right after $e$ has been processed. \Paragraph{Running time using vector clocks.} If a trace $\Trace$ has $n$ events and $k$ threads, computing the $\operatorname{\acr{HB}}$ partial order with \cref{algo:hb_basic} and using vector clocks takes $O(n\cdot k)$ time. The quadratic bound occurs because every vector clock join and copy operation iterates over all $k$ threads. \section{Tree Clocks for Happens-Before}\label{sec:race_detection} Let us see how tree clocks are employed for computing the $\operatorname{\acr{HB}}$ partial order. We start with the following observation. \smallskip \begin{restatable}[Monotonicity of copies]{lemma}{lemcopymonotonicity}\label{lem:copy_monotonicity} Right before \cref{algo:hb_basic} processes a lock-release event $\langle t, \rel(\ell)\rangle$, we have $\mathbb{C}_{\ell} \sqsubseteq \mathbb{C}_{t}$. 
\end{restatable} \Paragraph{Tree clocks for $\operatorname{\acr{HB}}$.} \cref{algo:hb} shows the algorithm for computing $\operatorname{\acr{HB}}$ using the tree clock data structure for implementing vector times. When processing a lock-acquire event, the vector-clock join operation has been replaced by a tree-clock join. Moreover, in light of \cref{lem:copy_monotonicity}, when processing a lock-release event, the vector-clock copy operation has been replaced by a tree-clock monotone copy. \begin{arxiv} We refer to \cref{sec:app_hb_example} for an example run of \cref{algo:hb} on a trace $\Trace$, showing how tree clocks grow during the execution. \end{arxiv} \input{algorithms/algo_hb} \Paragraph{Correctness.} We now state the correctness of \cref{algo:hb}, i.e., we show that the algorithm indeed computes the $\operatorname{\acr{HB}}$ partial order. We start with two monotonicity invariants of tree clocks. \smallskip \begin{restatable}{lemma}{lemtcmonotonicity}\label{lem:tc_monotonicity} Consider any tree clock $\mathbb{C}$ and node $u$ of $\mathbb{C}.\operatorname{T}$. For any tree clock $\mathbb{C}'$, the following assertions hold. \begin{compactenum} \item\label{item:monotonicity1} \emph{Direct monotonicity:} If $u.\operatorname{clk} \leq \mathbb{C}'.\FunctionCTGet(u.\operatorname{\mathsf{tid}})$ then for every descendant $w$ of $u$ we have that $w.\operatorname{clk} \leq \mathbb{C}'.\FunctionCTGet(w.\operatorname{\mathsf{tid}})$. \item\label{item:monotonicity2} \emph{Indirect monotonicity:} If $v.\operatorname{aclk} \leq \mathbb{C}'.\FunctionCTGet(u.\operatorname{\mathsf{tid}})$ where $v$ is a child of $u$ then for every descendant $w$ of $v$ we have that $w.\operatorname{clk}\leq \mathbb{C}'.\FunctionCTGet(w.\operatorname{\mathsf{tid}})$. 
\end{compactenum} \end{restatable} The following lemma follows from the above invariants and establishes that \cref{algo:hb} with tree clocks computes the correct timestamps on all events, i.e., the correctness of tree clocks for $\operatorname{\acr{HB}}$. \smallskip \begin{restatable}{lemma}{lemhbcor}\label{lem:hb_cor} When \cref{algo:hb} processes an event $e$, the vector time stored in the tree clock $\mathbb{C}_{\operatorname{\mathsf{tid}}(e)}$ is $\POTime{\ord{\Trace}{\operatorname{\acr{HB}}}}{e}$. \end{restatable} \Paragraph{Data structure optimality.} Just like vector clocks, computing $\operatorname{\acr{HB}}$ with tree clocks takes $\Theta(n\cdot k)$ time in the worst case, and it is known that this quadratic bound is likely to be tight for common applications such as dynamic race prediction~\cite{Kulkarni2021}. However, we have seen that tree clocks can take sublinear time on join and copy operations, whereas vector clocks always require time linear in the size of the vector (i.e., $\Theta(k)$). A natural question arises: is there a more efficient data structure than tree clocks? More generally, what is the most efficient data structure for the $\operatorname{\acr{HB}}$ algorithm to represent vector times? To answer this question, we define \emph{vector-time work}, which gives a lower bound on the number of data structure operations that $\operatorname{\acr{HB}}$ has to perform regardless of the actual data structure used to store vector times. Then, we show that tree clocks match this lower bound, hence achieving optimality for $\operatorname{\acr{HB}}$. \Paragraph{Vector-time work.} Consider the general $\operatorname{\acr{HB}}$ algorithm (\cref{algo:hb_basic}) and let $\mathfrak{D}=\{ \mathbb{C}_{1}, \mathbb{C}_{2},\dots, \mathbb{C}_{m} \}$ be the set of the vector-time data structures used. Consider the execution of the algorithm on a trace $\Trace$. 
Given some $1\leq i \leq |\Trace|$, we let $C_{j}^i$ denote the vector time represented by $\mathbb{C}_{j}$ after the algorithm has processed the $i$-th event of $\Trace$. We define the \emph{vector-time work} (or \emph{vt-work}, for short) on $\Trace$ as \[ \operatorname{VTWork}(\Trace) = \sum_{1\leq i\leq |\Trace|} \sum_{j} |\{ t\in \Threads{} \colon C_{j}^{i-1}(t)\neq C_{j}^{i}(t)\}|. \] In words, for every processed event, we add the number of vector-time entries that change as a result of processing the event, and $\operatorname{VTWork}(\Trace)$ counts the total number of entry updates in the overall course of the algorithm. Note that vt-work is independent of the data structure used to represent each $\mathbb{C}_{j}$, and satisfies the inequality \[ n \leq \operatorname{VTWork}(\Trace) \leq n\cdot k, \] since with every event of $\Trace$ the algorithm updates at least one entry of some $\mathbb{C}_{j}$. \Paragraph{Vector-time optimality.} Given an input trace $\Trace$, we denote by $\mathcal{T}_{\operatorname{DS}}(\Trace)$ the time taken by the $\operatorname{\acr{HB}}$ algorithm (\cref{algo:hb_basic}) to process $\Trace$ using the data structure $\operatorname{DS}$ to store vector times. Intuitively, $\operatorname{VTWork}(\Trace)$ captures the number of times that instances of $\operatorname{DS}$ change state. For data structures that represent vector times explicitly, $\operatorname{VTWork}(\Trace)$ presents a natural lower bound for $\mathcal{T}_{\operatorname{DS}}(\Trace)$. Hence, we say that the data structure $\operatorname{DS}$ is \emph{vt-optimal} if $\mathcal{T}_{\operatorname{DS}}(\Trace)=O(\operatorname{VTWork}(\Trace))$.
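To make the definition concrete, the following sketch measures $\operatorname{VTWork}$ for the vector-clock variant of the algorithm by diffing all maintained vector times around each event. The event encoding `(tid, op, obj)` is an assumption made for this illustration; events other than lock operations only advance the local entry.

```python
# Illustrative sketch: measuring VTWork(trace) by counting, per event,
# the vector-time entries that change in any maintained clock.

def vt_work(trace, k):
    C = {t: [0] * k for t in range(k)}   # per-thread vector clocks
    L = {}                               # per-lock vector clocks
    work = 0
    for tid, op, obj in trace:
        before = {key: list(v) for key, v in
                  list(C.items()) + list(L.items())}
        C[tid][tid] += 1                 # implicit local increment
        if op == 'acq' and obj in L:
            C[tid] = [max(a, b) for a, b in zip(C[tid], L[obj])]
        elif op == 'rel':
            L[obj] = list(C[tid])
        for key, v in list(C.items()) + list(L.items()):
            old = before.get(key, [0] * k)
            work += sum(1 for a, b in zip(old, v) if a != b)
    return work
```

On any trace, the result lands between $n$ (every event changes at least the local entry) and $n\cdot k$ (every event changes every entry of some clock).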
It is not hard to see that vector clocks are not vt-optimal, i.e., taking $\operatorname{DS}=\operatorname{VC}$ to be the vector clock data structure, one can construct simple traces $\Trace$ where $\operatorname{VTWork}(\Trace)=O(n)$ but $\mathcal{T}_{\operatorname{DS}}(\Trace)=\Omega(n\cdot k)$, and thus the running time is $k$ times the vt-work that must be performed on $\Trace$. In contrast, the following theorem states that tree clocks are vt-optimal. \smallskip \begin{restatable}[Tree-clock Optimality]{theorem}{thmvtoptimality}\label{thm:vtoptimality} For any input trace $\Trace$, we have $\mathcal{T}_{\operatorname{TC}}(\Trace)=O(\operatorname{VTWork}(\Trace))$. \end{restatable} The key observation behind \cref{thm:vtoptimality} is that, when $\operatorname{\acr{HB}}$ uses tree clocks, the total number of tree-clock entries that are accessed over all join and monotone copy operations (i.e., the sum of the sizes of the light-gray areas in \cref{fig:ctjoin} and \cref{fig:ctmonotonecopy}) is at most $3\cdot \operatorname{VTWork}(\Trace)$. \smallskip \begin{remark} \cref{thm:vtoptimality} establishes \emph{strong optimality} for tree clocks, in the sense that they are vt-optimal \emph{on every input}. This is in contrast to usual notions of optimality, which are guaranteed on \emph{only some} inputs. \end{remark} \section{Example of Tree Clocks in $\operatorname{\acr{HB}}$}\label{sec:app_hb_example} \input{figures/fig_hb_example} \cref{fig:hb_example} shows an example run of \cref{algo:hb} on a trace $\Trace$, showing how tree clocks grow during the execution. The figure shows the tree clocks $\mathbb{C}_{t}$ of the threads; the tree clocks of locks $\mathbb{C}_{\ell}$ are just copies of the former after processing a lock-release event (shown in parentheses in the figure). \cref{fig:hb_example2} presents a closer look at the $\FunctionCTJoin$ and $\FunctionCTMonotoneCopy$ operations for the last events of $\Trace$.
\input{figures/fig_hb_example2} \section{Introduction}\label{sec:intro} The analysis of concurrent programs is one of the major challenges in formal methods, due to the non-determinism of inter-thread communication. The large space of communication interleavings poses a significant challenge to the programmer, as intended invariants can be broken by unexpected communication patterns. The subtlety of these patterns also makes verification a demanding task, as exposing a bug requires searching an exponentially large space~\cite{Musuvathi08}. Consequently, significant efforts are made towards understanding and detecting concurrency bugs efficiently \cite{lpsz08,Shi10,Tu2019,SoftwareErrors2009,boehmbenign2011,Farchi03}. \SubParagraph{Dynamic analyses and partial orders.} One popular approach to the scalability problem of concurrent program verification is dynamic analysis~\cite{Mattern89,Pozniansky03,Flanagan09,Mathur2020b}. Such techniques have the more modest goal of discovering faults by analyzing program executions instead of whole programs. Although this approach cannot prove the absence of bugs, it is far more scalable than static analysis and typically makes sound reports of errors. These advantages have rendered dynamic analyses a very effective and widely used approach to error detection in concurrent programs. The first step in virtually all techniques that analyze concurrent executions is to establish a causal ordering between the events of the execution. Although the notion of causality varies with the application, its transitive nature makes it naturally expressible as a \emph{partial order} between these events. One prominent example is the Mazurkiewicz partial order ($\operatorname{\acr{MAZ}}$), which often serves as the canonical way to represent concurrent traces~\cite{Mazurkiewicz87,Bertoni1989} (aka Shasha-Snir traces~\cite{Shasha1988}). 
Another vastly common partial order is Lamport's \emph{happens-before} ($\operatorname{\acr{HB}}$)~\cite{Lamport78}, initially proposed in the context of distributed systems~\cite{schwarz1994detecting}. In the context of testing multi-threaded programs, partial orders play a crucial role in dynamic race detection techniques, and have been thoroughly exploited to explore trade-offs between soundness, completeness, and running time of the underlying analysis. Prominent examples include the widespread use of $\operatorname{\acr{HB}}$~\cite{Itzkovitz1999,Flanagan09,Pozniansky03,threadsanitizer,Elmas07}, schedulably-happens-before ($\operatorname{\acr{SHB}}$)~\cite{Mathur18}, causally-precedes ($\operatorname{\acr{CP}}$)~\cite{Smaragdakis12}, weak-causally-precedes ($\operatorname{\acr{WCP}}$)~\cite{Kini17}, doesn't-commute ($\operatorname{\acr{DC}}$)~\cite{Roemer18}, and strong/weak-dependently-pre\-cedes ($\operatorname{\acr{SDP}}$/$\operatorname{\acr{WDP}}$)~\cite{Genc19}, $\textsf{M2}$~\cite{Pavlogiannis2020} and SyncP~\cite{Mathur21}. Beyond race detection, partial orders are often employed to detect and reproduce other concurrency bugs such as atomicity violations~\cite{Flanagan2008,Biswas14,Mathur2020}, deadlocks~\cite{Samak2014,Sulzmann2018}, and other concurrency vulnerabilities~\cite{Yu2021}. \input{figures/motivating} \SubParagraph{Vector clocks in dynamic analyses.} Often, the computational task of determining the partial ordering between events of an execution is achieved using a simple data structure called \emph{vector clock}. Informally, a vector clock $\mathbb{C}$ is an integer array indexed by the processes/threads in the execution, and succinctly encodes the knowledge of a process about the whole system. For vector clock $\mathbb{C}_{t_1}$ associated with thread $t_1$, if $\mathbb{C}_{t_1}(t_2) = i$ then it means that the latest event of $t_1$ is ordered after the first $i$ events of thread $t_2$ in the partial order. 
Vector clocks thus seamlessly capture a partial order, with the point-wise ordering of the vector timestamps of two events capturing the ordering between the events with respect to the partial order of interest. For this reason, vector clocks are instrumental in computing the $\operatorname{\acr{HB}}$ partial order efficiently~\cite{Mattern89,fidge1988timestamps,Fidge91}, and are ubiquitous in the efficient implementation of analyses based on partial orders even beyond $\operatorname{\acr{HB}}$~\cite{Flanagan09,Mathur18,Kini17,Roemer18,Mathur2020,Samak2014,Sulzmann2018,Kulkarni2021}. The fundamental operation on vector clocks is the pointwise \emph{join} $\mathbb{C}_{t_1}\gets \mathbb{C}_{t_1} \sqcup \mathbb{C}_{t_2}$. This occurs whenever there is a causal ordering from thread $t_2$ to $t_1$. Operationally, a join is performed by updating $\mathbb{C}_{t_1}(t)\gets \max( \mathbb{C}_{t_1}(t), \mathbb{C}_{t_2}(t))$ for every thread $t$, and captures the transitivity of causal orderings:~as $t_1$ learns about $t_2$, it also learns about other threads $t$ that $t_2$ knows about. Note that if $t_1$ is already aware of a later event of $t$, this operation is vacuous. With $k$ threads, a vector clock join takes $\Theta(k)$ time, and can quickly become a bottleneck even in systems with moderate $k$. This motivates the following question:~is it possible to speed up join operations by proactively avoiding vacuous updates? The challenge in such a task comes from the efficiency of the join operation itself---since it only requires linear time in the size of the vector, any improvement must operate in sub-linear time, i.e., not even touch certain entries of the vector clock. We illustrate this idea on a concrete example, and present the key insight in this work. \Paragraph{Motivating example.} Consider the example in \cref{fig:motivating}. It shows a partial trace from a concurrent system with 6 threads, along with the vector timestamps at each event.
When event $e_2$ is ordered before event $e_3$ due to synchronization, the vector clock $\mathbb{C}_{t_2}$ of $t_2$ is joined into the vector clock $\mathbb{C}_{t_1}$ of $t_1$, i.e., the $t_j$-th entry of $\mathbb{C}_{t_1}$ is updated to the maximum of $\mathbb{C}_{t_1}(t_j)$ and $\mathbb{C}_{t_2}(t_j)$\footnote{\camera{As with many presentations of dynamic analyses using vector clocks~\cite{Itzkovitz1999}, we assume that the \emph{local} entry of a thread's clock increments by $1$ after each event it performs. Hence, in \cref{fig:motivating}, the $t_1$-th entry of $\mathbb{C}_{t_1}$ increases from $27$ to $28$ after $e_1$ is performed. }}. Now assume that thread $t_2$ has learned of the current times of threads $t_3$, $t_4$, $t_5$ and $t_6$ via thread $t_3$. Since the $t_3$-th component of the vector timestamp of event $e_1$ is larger than the corresponding component of event $e_2$, $t_1$ cannot possibly learn any \emph{new} information about threads $t_4$, $t_5$, and $t_6$ through the join performed at event $e_3$. Hence the naive pointwise updates will be redundant for the indices $j\in\{3,4,5,6\}$. Unfortunately, the flat structure of vector clocks is not amenable to such reasoning and cannot avoid these redundant operations. To alleviate this problem, we introduce a new hierarchical tree-like data structure for maintaining vector times called a \emph{tree clock}. The nodes of the tree encode local clocks, just like entries in a vector clock. In addition, the structure of the tree naturally captures which clocks have been learned transitively via intermediate threads. \cref{fig:motivating} (right) depicts a (simplified) tree clock encoding the vector times of $\mathbb{C}_{t_2}$. The subtree rooted at thread $t_3$ encodes the fact that $t_2$ has learned about the current times of $t_4$, $t_5$ and $t_6$ \emph{transitively}, via $t_3$.
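The subtree-skipping idea can be sketched with a bare-bones tree of `(tid, clk)` nodes. The numbers below are hypothetical, the flat dictionary standing in for $t_1$'s clock is a simplification, and this sketch omits the restructuring of the target's tree that the actual tree clock operations perform.

```python
# Illustrative sketch of pruning a join by direct monotonicity: join a
# tree of (tid, clk) nodes into a flat vector time `vt`, skipping any
# subtree whose root time is already known to `vt`.

class Node:
    def __init__(self, tid, clk, children=()):
        self.tid, self.clk, self.children = tid, clk, list(children)

def join_into(vt, root):
    visited = 0
    stack = [root]
    while stack:
        u = stack.pop()
        visited += 1
        if vt.get(u.tid, 0) >= u.clk:
            # vt already holds a time for u.tid at least as recent, so
            # everything learned transitively through u is known: skip.
            continue
        vt[u.tid] = u.clk
        stack.extend(u.children)
    return visited

# t2's tree clock: the times of t4, t5, t6 were learned via t3.
tree = Node(2, 21, [Node(3, 12, [Node(4, 7), Node(5, 9), Node(6, 2)])])
vt_t1 = {1: 28, 2: 20, 3: 15}        # t1 already has a newer t3 time
visited = join_into(vt_t1, tree)     # visits only the t2 and t3 nodes
```

Since $t_1$'s time for $t_3$ dominates the root of the $t_3$-subtree, the entries for $t_4$, $t_5$, $t_6$ are never touched, which is exactly the sublinear behavior a flat vector clock cannot achieve.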
To perform the join operation $\mathbb{C}_{t_1}\gets \mathbb{C}_{t_1} \sqcup \mathbb{C}_{t_2}$, we start from the root of $\mathbb{C}_{{t}_2}$, and traverse the tree as follows. Given a current node $u$, we proceed to the children of $u$ \emph{if and only if} $u$ represents the time of a thread that is not known to $t_1$. Hence, in the example, the join operation will now access only the light-gray area of the tree, and thus compute the join without accessing the whole tree, resulting in a \emph{sublinear running time} of the join operation. The above principle, which we call \emph{direct monotonicity}, is one of two key ideas exploited by tree clocks; the other being \emph{indirect monotonicity}. The key technical challenge in developing the tree clock data structure lies in (i)~using direct and indirect monotonicity to perform efficient updates, and (ii)~performing these updates such that direct and indirect monotonicity are preserved for future operations. \cref{subsec:intuition} illustrates the intuition behind these two principles in depth. \Paragraph{Contributions.} Our contributions are as follows. \begin{compactenum} \item We introduce \emph{tree clock}, a new data structure for maintaining logical times in concurrent executions. In contrast to the flat structure of the traditional vector clocks, the dynamic hierarchical structure of tree clocks naturally captures ad-hoc communication patterns between processes. In turn, this allows for join and copy operations that run in \emph{sublinear time}. As a data structure, tree clocks offer high versatility as they can be used in computing many different ordering relations. \item We prove that tree clocks are an \emph{optimal data structure} for computing $\operatorname{\acr{HB}}$, in the sense that, \emph{for every input trace}, the total computation time cannot be improved (asymptotically) by replacing tree clocks with any other data structure. On the other hand, vector clocks do not enjoy this property.
\item We illustrate the versatility of tree clocks by presenting tree clock-based algorithms for the $\operatorname{\acr{MAZ}}$ and $\operatorname{\acr{SHB}}$ partial orders. \item We perform a large-scale experimental evaluation of the tree clock data structure for computing the $\operatorname{\acr{MAZ}}$, $\operatorname{\acr{SHB}}$ and $\operatorname{\acr{HB}}$ partial orders, and compare its performance against the standard vector clock data structure. Our results show that just by replacing vector clocks with tree clocks, the computation becomes up to $2.97 \times$ faster on average. Given our experimental results, we believe that replacing vector clocks by tree clocks in partial order-based algorithms can lead to significant improvements on many applications. \end{compactenum} \subsection{Intuition}\label{subsec:intuition} Similar to vector clocks, a tree clock is a data structure for storing vector times. In contrast to the flat structure of a vector clock, a tree clock stores time information hierarchically in a tree structure. Each node of the tree represents the clock of a thread, while the tree structure represents clock information that has been obtained transitively via intermediate threads. Below we illustrate the two principles behind tree clocks in full generality. For conciseness, we use the operation $\sync(\ell)$ to denote the effect of two events by the same thread in succession, performing the operation sequence $\acq(\ell) \rel(\ell)$. \Paragraph{1.~Direct monotonicity.} Our first insight is a form of direct monotonicity. Let us say we are computing the $\operatorname{\acr{HB}}$ order for the trace $\Trace$ of \cref{subfig:intuition1}, and assume that we are currently processing the acquire event $e_{7}$.
At this point, an $\operatorname{\acr{HB}}$ ordering $e_{6} \stricthb{\Trace} e_{7}$ is discovered (dashed), and thread $t_4$ (performing event $e_7$) transitively \emph{learns} about all events known to thread $t_3$ by performing a join operation (`$\mathbb{C}_{t_4}.\FunctionVCJoin(\mathbb{C}_{\ell})$'). A careful look at what happens in the algorithm prior to this reveals the following. \begin{compactenum} \item Thread $t_3$ has learned of event $e_1$ of $t_1$ transitively, by having learned of event $e_2$ of thread $t_2$. \item Thread $t_4$ has learned of event $e_4$ of thread $t_2$. \end{compactenum} Thus we have a form of \emph{direct monotonicity}:~since $t_4$ already knows of a later event of $t_2$ than $t_3$ does, any events known to $t_3$ via $t_2$ are also known to $t_4$. Hence, we do not need to iterate over the corresponding threads (thread $t_1$ in this case), as the $\max$ operation of the join will be vacuous. \Paragraph{2.~Indirect monotonicity.} Our second insight is a form of indirect monotonicity. Consider the trace $\Trace$ of \cref{subfig:intuition2}, and assume that the algorithm is currently processing the lock-acquire event $e_{7}$ (dashed edge). At this point, an $\operatorname{\acr{HB}}$ ordering $e_{6} \stricthb{\Trace} e_{7}$ is discovered (dashed), and thus thread $t_4$ learns transitively of all events known to thread $t_3$ by performing a join operation. Since thread $t_4$ does not yet know of $e_{6}$, there is no direct monotonicity exploited here, and we will proceed to examine all other threads known to thread $t_3$. Note, however, the following. \begin{compactenum} \item Thread $t_3$ has learned of event $e_3$ of thread $t_2$ through its own event $e_4$. Similarly, it has learned of event $e_1$ of thread $t_1$ through its own event $e_2$. \item Thread $t_4$ has learned of event $e_4$ of thread $t_3$.
\end{compactenum} Thus we have a form of \emph{indirect monotonicity}:~since $t_4$ knows of $e_4$ of $t_3$, any events known to $t_3$ \emph{via earlier events} are also known to $t_4$. Thus, we do not need to iterate over the corresponding threads (thread $t_1$ in this case), as the $\max$ operation of the join will be vacuous. Direct and indirect monotonicity reason about transitive orderings. The flat structure of vector clocks misses the transitivity of information sharing, and thus arguments based on monotonicity are lost, resulting in vacuous operations during vector clock joins. On the other hand, tree clocks maintain transitivity in their hierarchical structure. In turn, this allows us to reason about direct and indirect monotonicity, and thus often avoid redundant operations. \input{figures/fig_intuition1} \subsection{The Mazurkiewicz Partial Order}\label{subsec:maz} The Mazurkiewicz partial order ($\operatorname{\acr{MAZ}}$)~\cite{Mazurkiewicz87} has been the canonical way to represent concurrent executions algebraically using an independence relation that defines which events can be reordered. This algebraic treatment allows one to naturally lift language-inclusion problems from the verification of sequential programs to concurrent programs~\cite{Bertoni1989}. As such, it has been the most studied partial order in the context of concurrency, with deep applications in dynamic analyses~\cite{Netzer1990,Flanagan2008,Mathur2020}, ensuring consistency~\cite{Shasha1988} and stateless model checking~\cite{Flanagan2005}. In shared memory concurrency, the standard independence relation deems two events as dependent if they conflict, and independent otherwise~\cite{Godefroid1996}. In particular, $\operatorname{\acr{MAZ}}$ is the smallest partial order that satisfies the following conditions. \begin{compactenum} \item\label{item:maz1} $ \hb{\Trace} \subseteq \maz{\Trace}$. \item\label{item:maz2} For every two events $e_1,e_2$ such that $e_1\trord{\Trace}e_2$ and $\Confl{e_1}{e_2}$, we have $e_1\maz{\Trace} e_2$.
\end{compactenum} \Paragraph{$\operatorname{\acr{MAZ}}$ with tree clocks.} The algorithm for computing $\operatorname{\acr{MAZ}}$ is similar to that for $\operatorname{\acr{SHB}}$. The main difference is that $\operatorname{\acr{MAZ}}$ includes read-to-write orderings, and thus we need to store additional vector times $\mathbb{R}_{t,x}$ of the last event $\rd(x)$ of thread $t$. In addition, we use the set $\operatorname{LRDs}_x$ to store the threads that have executed a $\rd(x)$ event after the latest $\wt(x)$ event so far. This allows us to spend computation time only on the first read-to-write ordering, as orderings between the read event and later write events follow transitively via intermediate write-to-write orderings. Overall, this approach yields the efficient time complexity $O(n\cdot k)$ for $\operatorname{\acr{MAZ}}$, similarly to $\operatorname{\acr{HB}}$ and $\operatorname{\acr{SHB}}$. We refer to \cref{algo:maz} for the pseudocode. \input{algorithms/algo_maz} \subsection{Concurrent Model and Traces}\label{subsec:model} We start with our main notation on traces. The exposition is standard and follows related work (e.g.,~\cite{Flanagan09,Smaragdakis12,Kini17}). \Paragraph{Events and traces.} We consider execution traces of concurrent programs represented as a sequence of events performed by different threads. Each event is a tuple $e=\ev{i, t, \operatorname{op}}$, where $i$ is the unique event identifier of $e$, $t$ is the identifier of the thread that performs $e$, and $\operatorname{op}$ is the operation performed by $e$, which can be one of the following types\footnote{Fork and join events are ignored for ease of presentation. Handling such events is straightforward.}. \begin{compactenum} \item $\operatorname{op}=\rd(x)$, denoting that $e$ reads global variable $x$. \item $\operatorname{op}=\wt(x)$, denoting that $e$ writes to global variable $x$. \item $\operatorname{op}=\acq(\ell)$, denoting that $e$ acquires the lock $\ell$.
\item $\operatorname{op}=\rel(\ell)$, denoting that $e$ releases the lock $\ell$. \end{compactenum} We write $\operatorname{\mathsf{tid}}(e)$ and $\operatorname{op}(e)$ to denote the thread identifier and the operation of $e$, respectively. For a read/write event $e$, we denote by $\operatorname{Variable}(e)$ the (unique) variable that $e$ accesses. We often ignore the identifier $i$ and represent $e$ as $\ev{t, \operatorname{op}}$. In addition, we are often not interested in the thread of $e$, in which case we simply denote $e$ by its operation, e.g., we refer to event $\rd(x)$. When the variable of $e$ is not relevant, it is also omitted (e.g., we may refer to a read event $\rd$). A (concrete) \emph{trace} is a sequence of events $\Trace=e_1, \dots, e_n$. The trace $\Trace$ naturally defines a total order $\trord{\Trace}$ (pronounced \emph{trace order}) over the set of events appearing in $\Trace$, i.e., we have $e \trord{\Trace} e'$ iff either $e = e'$ or $e$ appears before $e'$ in $\Trace$; when $e \neq e'$, then we say $e \stricttrord{\Trace} e'$. We require that $\Trace$ respects the semantics of locks. That is, for every lock $\ell$ and every two acquire events $\acq_1(\ell)$, $\acq_2(\ell)$ on the lock $\ell$ such that $\acq_1(\ell) \stricttrord{\Trace} \acq_2(\ell)$, there exists a lock release event $\rel_1(\ell)$ in $\Trace$ with $\operatorname{\mathsf{tid}}(\acq_1(\ell))=\operatorname{\mathsf{tid}}(\rel_1(\ell))$ and $\acq_1(\ell) \stricttrord{\Trace} \rel_1(\ell) \stricttrord{\Trace}\acq_2(\ell)$. Finally, we denote by $\Threads{\Trace}$ the set of thread identifiers appearing in $\Trace$. \Paragraph{Thread order.} Given a trace $\Trace$, the \emph{thread order} $\tho{\Trace}$ is the smallest partial order such that $e_1 \tho{\Trace} e_2$ iff $\operatorname{\mathsf{tid}}(e_1)=\operatorname{\mathsf{tid}}(e_2)$ and $e_1 \trord{\Trace}e_2$. 
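One simple way to check the lock-semantics requirement above is to track the holder of each lock. This sketch uses an assumed `(tid, op, lock)` event encoding and enforces the requirement in the usual operational form: a lock is held by at most one thread at a time, and only the holder may release it.

```python
# Illustrative checker for the lock-semantics requirement on traces.

def respects_lock_semantics(trace):
    holder = {}                      # lock -> tid currently holding it
    for tid, op, obj in trace:
        if op == 'acq':
            if obj in holder:        # second acquire without a release
                return False
            holder[obj] = tid
        elif op == 'rel':
            if holder.get(obj) != tid:
                return False         # release by a non-holder
            del holder[obj]
    return True
```

A well-nested trace such as $\acq_1(\ell), \rel_1(\ell), \acq_2(\ell), \rel_2(\ell)$ passes, while two acquires of $\ell$ with no intervening release fail.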
For an event $e$ in a trace $\Trace$, the local time $\LocalTime{\Trace}{e}$ of $e$ is the number of events that appear before $e$ in the trace $\Trace$ that are also performed by $\ThreadOf{e}$, i.e., $ \LocalTime{\Trace}{e} = |\setpred{e'}{e' \tho{\Trace} e}| $. We remark that the pair $(\ThreadOf{e}, \LocalTime{\Trace}{e})$ uniquely identifies the event $e$ in the trace $\Trace$. \Paragraph{Conflicting events.} Two events $e_1$, $e_2$ of $\Trace$ are called \emph{conflicting}, denoted by $\Confl{e_1}{e_2}$, if (i)~$\operatorname{Variable}(e_1)=\operatorname{Variable}(e_2)$, (ii)~$\operatorname{\mathsf{tid}}(e_1)\neq \operatorname{\mathsf{tid}}(e_2)$, and (iii)~at least one of $e_1$, $e_2$ is a write event. The standard approach in concurrent analyses is to detect conflicting events that are causally independent, according to some pre-defined notion of causality, and can thus be executed concurrently. \subsection{Work Optimality of Tree Clocks}\label{subsec:vtoptimality} Finally, in this section we prove the optimality of tree clocks for race detection, in the sense that replacing tree clocks with any other data structure cannot reduce the asymptotic running time of the algorithm. \section{Tree Clocks in Other Partial Orders}\label{subsec:other} \input{shb} \input{maz} \section{Preliminaries}\label{sec:prelims} In this section we develop relevant notation and present standard concepts regarding concurrent executions, partial orders and vector clocks. \input{model} \input{vectorclocks_prelim} \input{happens_before_races} \section{Proofs}\label{appsec:proofs} \smallskip \lemcopymonotonicity* \begin{proof} Consider a trace $\Trace$, a release event $\rel(\ell)$ and let $\acq(\ell)$ be the matching acquire event. When $\acq(\ell)$ is processed the algorithm performs $\mathbb{C}_{t}\gets \mathbb{C}_{t}\sqcup\mathbb{C}_{\ell}$, and thus $\mathbb{C}_{\ell}\sqsubseteq \mathbb{C}_{t}$ after this operation.
By lock semantics, there exists no release event $\rel'(\ell)$ such that $\acq(\ell)<^{\Trace}\rel'(\ell)<^{\Trace}\rel(\ell)$, and hence $\mathbb{C}_{\ell}$ is not modified until $\rel(\ell)$ is processed. Since vector clock entries are never decremented, when $\rel(\ell)$ is processed we have $\mathbb{C}_{\ell} \sqsubseteq \mathbb{C}_{t}$, as desired. \end{proof} \lemtcmonotonicity* \begin{proof} First, note that after initialization $u$ has no children, hence each statement is trivially true. Now assume that both statements hold when the algorithm processes an event $e$, and we show that they both hold after the algorithm has processed $e$. We distinguish cases based on the type of $e$. \noindent $e=\langle t, \acq(\ell) \rangle$. The algorithm performs the operation $\mathbb{C}_{t}.\FunctionCTJoin(\mathbb{L}_{\ell})$, hence the only tree clock modified is $\mathbb{C}_{t}$, and thus it suffices to examine the cases that $\mathbb{C}_{t}$ is $\mathbb{C}$ and $\mathbb{C}_{t}$ is $\mathbb{C}'$. \begin{compactenum} \item $\mathbb{C}_{t}$ is $\mathbb{C}$. First consider the case that $u=\mathbb{C}_{t}.\operatorname{T}.\operatorname{root}$. Observe that $u.\operatorname{clk} > \mathbb{C}'.\FunctionCTGet(u.\operatorname{\mathsf{tid}})$, and thus \cref{item:monotonicity1} holds trivially. For \cref{item:monotonicity2}, we distinguish cases based on whether $v.\operatorname{clk}$ has progressed by the $\FunctionCTJoin$ operation. If yes, then we have $v.\operatorname{aclk} = u.\operatorname{clk}$, and the statement holds trivially for the same reason as in \cref{item:monotonicity1}. Otherwise, we have that for every descendant $w$ of $v$, the clock $w.\operatorname{clk}$ has not progressed by the $\FunctionCTJoin$ operation, hence the statement holds by the induction hypothesis on $\mathbb{C}_{t}$. Now consider the case that $u\neq \mathbb{C}_{t}.\operatorname{T}.\operatorname{root}$. 
If $u.\operatorname{clk}$ has not progressed by the $\FunctionCTJoin$ operation, then each statement holds by the induction hypothesis on $\mathbb{C}_{t}$. Otherwise, using the induction hypothesis one can show that for every descendant $w$ of $u$, there exists a node $w_{\ell}$ of $\mathbb{L}_{\ell}$ that is a descendant of a node $u_{\ell}$ such that $w_{\ell}.\operatorname{\mathsf{tid}} = w.\operatorname{\mathsf{tid}}$ and $u_{\ell}.\operatorname{\mathsf{tid}}=u.\operatorname{\mathsf{tid}}$. Then, each statement holds by the induction hypothesis on $\mathbb{L}_{\ell}$. \item $\mathbb{C}_{t}$ is $\mathbb{C}'$. For \cref{item:monotonicity1}, if $u.\operatorname{clk} \leq \mathbb{C}'.\FunctionCTGet(u.\operatorname{\mathsf{tid}})$ holds before the $\FunctionCTJoin$ operation, then the statement holds by the induction hypothesis, since $\FunctionCTJoin$ does not decrease the clocks of $\mathbb{C}_{t}$. Otherwise, the statement follows by the induction hypothesis on $\mathbb{L}_{\ell}$. The analysis for \cref{item:monotonicity2} is similar. The desired result follows. \end{compactenum} \noindent $e=\langle t, \rel(\ell) \rangle$. The algorithm performs the operation $\mathbb{L}_{\ell}.\linebreak\FunctionCTMonotoneCopy(\mathbb{C}_{t})$. The analysis is similar to the previous case, this time also using \cref{lem:copy_monotonicity} to argue that no time stored in $\mathbb{L}_{\ell}$ decreases. \end{proof} \smallskip \lemhbcor* \begin{proof} The lemma follows directly from \cref{lem:tc_monotonicity}. In each case, consider the corresponding operation (i.e., $\FunctionCTJoin$ for an event $\langle t, \acq(\ell) \rangle$ and $\FunctionCTMonotoneCopy$ for an event $\langle t, \rel(\ell) \rangle$): if the clock of a node $w$ of the tree clock that performs the operation does not progress, then we are guaranteed that $w.\operatorname{clk}$ is not smaller than the time of the thread $w.\operatorname{\mathsf{tid}}$ in the tree clock that is passed as an argument to the operation.
\end{proof} \Paragraph{First remote acquires.} Consider a trace $\Trace$ and a lock-release event $e = \ev{t, \rel(\ell)}$ of $\Trace$, such that there exists a later acquire event $e' = \ev{t', \acq(\ell)}$ ($e \stricttrord{\Trace} e'$). The \emph{first remote acquire} of $e$ is the first event $e'$ with the above property. For example, in \cref{subfig:hb_example_trace}, $e_7$ is the first remote acquire of $e_2$. While constructing the $\operatorname{\acr{HB}}$ partial order, the algorithm makes $\operatorname{\acr{HB}}$ orderings from lock-release events to their first remote acquires $\rel(\ell) \stricthb{\Trace} \acq(\ell)$. The following lemma captures the property that the edges of tree clocks are essentially the inverses of such orderings. \smallskip \begin{restatable}{lemma}{lemuniqueness}\label{lem:uniqueness} Consider the execution of \cref{algo:hb} on a trace $\Trace$. For every tree clock $\mathbb{C}_i$ and node $u$ of $\mathbb{C}_i.\operatorname{T}$ other than the root, the following assertions hold. \vspace{-0.1cm} \begin{compactenum} \item $u$ points to a lock-release event $\rel(\ell)$. \item $\rel(\ell)$ has a first remote acquire $\acq(\ell)$ and $(v.\operatorname{\mathsf{tid}}, u.\operatorname{aclk})$ points to $\acq(\ell)$, where $v$ is the parent of $u$ in $\mathbb{C}_i.\operatorname{T}$. \end{compactenum} \end{restatable} \begin{proof} The lemma follows by a straightforward induction on $\Trace$. \end{proof} \cref{lem:uniqueness} allows us to prove the vt-optimality of tree clocks. \smallskip \thmvtoptimality* \begin{proof} Consider a critical section of a thread $t$ on lock $\ell$, marked by two events $\acq(\ell)$, $\rel(\ell)$. We define the following vector times. \begin{compactenum} \item $\operatorname{V}_{t}^{1}$ and $\operatorname{V}_{t}^{2}$ are the vector times of $\mathbb{C}_{t}$ right before and right after $\acq(\ell)$ is processed, respectively. 
\item $\operatorname{V}_{\ell}^{1}$ is the vector time of $\mathbb{C}_{\ell}$ right before $\acq(\ell)$ is processed. \item $\operatorname{V}_{t}^{3}$ is the vector time of $\mathbb{C}_{t}$ right before $\rel(\ell)$ is processed. \item $\operatorname{V}_{\ell}^{3}$ and $\operatorname{V}_{\ell}^{4}$ are the vector times of $\mathbb{C}_{\ell}$ right before and right after $\rel(\ell)$ is processed, respectively. \end{compactenum} First, note that (i)~$\operatorname{V}_{t}^{1}\sqsubseteq\operatorname{V}_{t}^{3}$, and (ii)~due to lock semantics, we have $\operatorname{V}_{\ell}^3=\operatorname{V}_{\ell}^{1}$. Let $W=W_{J}+W_{C}$, where \begin{align*} W_J=&|\{t'\colon \operatorname{V}_{t}^{2}(t')\neq \operatorname{V}_{t}^{1}(t')\}| \quad \text{ and}\\ W_C=&|\{t'\colon \operatorname{V}_{\ell}^{4}(t')\neq \operatorname{V}_{\ell}^{3}(t')\}| \end{align*} i.e., $W_J$ and $W_C$ are the vt-work for handling $\acq(\ell)$ and $\rel(\ell)$, respectively. Let $\mathcal{T}_{J}$ be the time spent in $\operatorname{TC}_{t}.\FunctionCTJoin$ due to $\acq(\ell)$. Similarly, let $\mathcal{T}_{C}$ be the time spent in $\operatorname{TC}_{\ell}.\FunctionCTMonotoneCopy$ due to $\rel(\ell)$. We will argue that $\mathcal{T}_{J}=O(W)$ and $\mathcal{T}_{C}=O(W_C)$, and thus $\mathcal{T}_{J}+\mathcal{T}_{C}=O(W)$. Note that this proves the theorem, simply by summing over all critical sections of $\Trace$. We start with $\mathcal{T}_J$. Observe that the time spent in this operation is proportional to the number of times the loop in \cref{line:routineupdatednodesforjoin_loop} is executed, i.e., the number of nodes $v'$ that the loop iterates over. Consider the if statement in \cref{line:routineupdatednodesforjoin_if}. If $\FunctionCTGet(v'.\operatorname{\mathsf{tid}})<v'.\operatorname{clk}$, then we have $\operatorname{V}_{t}^{2}(v'.\operatorname{\mathsf{tid}})>\operatorname{V}_{t}^{1}(v'.\operatorname{\mathsf{tid}})$, and thus this iteration is accounted for in $W_J$.
On the other hand, if $\FunctionCTGet(v'.\operatorname{\mathsf{tid}})>v'.\operatorname{clk}$, then we have $\operatorname{V}_{t}^{1}(v'.\operatorname{\mathsf{tid}})>\operatorname{V}_{\ell}^{1}(v'.\operatorname{\mathsf{tid}})$. Due to (i) and (ii) above, we have $\operatorname{V}_{\ell}^{4}(v'.\operatorname{\mathsf{tid}})>\operatorname{V}_{\ell}^{3}(v'.\operatorname{\mathsf{tid}})$, and thus this iteration is accounted for in $W_C$. Finally, consider the case that $\FunctionCTGet(v'.\operatorname{\mathsf{tid}})=v'.\operatorname{clk}$, and let $v$ be the node of $\mathbb{C}_{t}$ such that $v.\operatorname{\mathsf{tid}}=v'.\operatorname{\mathsf{tid}}$. There can be at most one such $v$ that is the root of $\mathbb{C}_{t}$. For every other such $v$, let $u=\operatorname{Prnt}(v)$. Note that $v'$ is not the root of $\mathbb{C}_{\ell}$, and let $u'=\operatorname{Prnt}(v')$. Let $\rel(\ell)$ be the lock-release event that $v$ and $v'$ point to. By \cref{lem:uniqueness}, $\rel(\ell)$ has a first remote acquire $\acq(\ell)$ such that (i)~$u.\operatorname{\mathsf{tid}}=u'.\operatorname{\mathsf{tid}}=t'$, where $t'$ is the thread of $\acq(\ell)$, and (ii)~$v.\operatorname{aclk}$ is the local clock of $\acq(\ell)$. Since $\RoutineUpdatedNodesForJoin$ examines $v'$, we must have $u'.\operatorname{clk} > u.\operatorname{clk}$. In turn, we have $u.\operatorname{clk} \geq v.\operatorname{aclk}$, and thus $u'.\operatorname{clk}>v.\operatorname{aclk}$. Hence, due to \cref{line:routineupdatednodesforjoinifbreak}, $u'$ can have at most one child $v'$ with $v'.\operatorname{clk}=\FunctionCTGet(v'.\operatorname{\mathsf{tid}})$. Thus, we can account for the time of this case in $W_J$. Hence, $\mathcal{T}_J=O(W)$, as desired. We now turn our attention to $\mathcal{T}_C$. Similarly to the previous case, the time spent in this operation is proportional to the number of times the loop in \cref{line:routineupdatednodesforcopy_loop} is executed.
Consider the if statement in \cref{line:routineupdatednodesforcopy_if}. If $\FunctionCTGet(v'.\operatorname{\mathsf{tid}})<v'.\operatorname{clk}$, then we have $\operatorname{V}_{\ell}^{4}(v'.\operatorname{\mathsf{tid}})>\operatorname{V}_{\ell}^{3}(v'.\operatorname{\mathsf{tid}})$, and thus this iteration is accounted for in $W_C$. Note that, since the copy is monotone (\cref{lem:copy_monotonicity}), we cannot have $\FunctionCTGet(v'.\operatorname{\mathsf{tid}})>v'.\operatorname{clk}$. Finally, the reasoning for the case where $\FunctionCTGet(v'.\operatorname{\mathsf{tid}})=v'.\operatorname{clk}$ is similar to the analysis of $\mathcal{T}_J$, using \cref{line:routineupdatednodesforcopyifbreak} instead of \cref{line:routineupdatednodesforjoinifbreak}. Hence, $\mathcal{T}_C=O(W_C)$, and the desired result follows. \end{proof} \section{Tree Clocks in Race Prediction}\label{sec:race_prediction} \section{Related Work}\label{sec:related_work} \Paragraph{Other partial orders and tree clocks.} As we have mentioned in the introduction, besides $\operatorname{\acr{HB}}$ and $\operatorname{\acr{SHB}}$, many other partial orders are computed in dynamic analyses using vector clocks. In such cases, tree clocks can replace vector clocks either partially or completely, sometimes requiring small extensions to the data structure as presented here. In particular, we foresee interesting applications of tree clocks for the $\operatorname{\acr{WCP}}$~\cite{Kini17}, $\operatorname{\acr{DC}}$~\cite{Roemer18} and $\operatorname{\acr{SDP}}$~\cite{Genc19} partial orders. \Paragraph{Speeding up dynamic analyses.} Vector-clock based dynamic race detection is known to be slow~\cite{sadowski-tools-2014}, which many prior works have aimed to mitigate. One of the most prominent performance bottlenecks is the linear dependence of the size of vector timestamps on the number of threads.
Despite theoretical limits~\cite{CharronBost1991}, prior research exploits special structures in traces~\cite{cheng1998detecting,Feng1997,surendran2016dynamic,Dimitrov2015,Agrawal2018} that enable succinct vector time representations. The Goldilocks~\cite{Elmas07} algorithm infers $\operatorname{\acr{HB}}$-orderings using locksets instead of vector timestamps but incurs severe slowdown~\cite{Flanagan09}. The \textsc{FastTrack}\xspace~\cite{Flanagan09} optimization uses epochs for maintaining succinct access histories and our work complements this optimization --- tree clocks offer optimizations for other clocks (thread and lock clocks). Other optimizations in clock representations are catered towards dynamic thread creation~\cite{Raychev2013,Wang2006,Raman2012}. Another major source of slowdown is program instrumentation and expensive metadata synchronization. Several approaches have attempted to minimize this slowdown, including hardware assistance~\cite{RADISH2012,HARD2007}, hybrid race detection~\cite{OCallahan03,Yu05}, static analysis~\cite{bigfoot2017,redcard2013}, and sophisticated ownership protocols~\cite{Bond2013,Wood2017,Roemer20}. \section{Conclusion} We have introduced tree clocks, a new data structure for maintaining logical times in concurrent executions. In contrast to standard vector clocks, tree clocks can dynamically capture communication patterns in their structure and perform join and copy operations in sublinear time, thereby avoiding the traditional overhead of these operations when possible. Moreover, we have shown that tree clocks are vector-time optimal for computing the $\operatorname{\acr{HB}}$ partial order, performing at most a constant factor work compared to what is absolutely necessary, in contrast to vector clocks. 
Finally, our experiments show that tree clocks significantly reduce the running time for computing the $\operatorname{\acr{MAZ}}$, $\operatorname{\acr{SHB}}$ and $\operatorname{\acr{HB}}$ partial orders, and thus offer a promising alternative to vector clocks. \camera{ Interesting future work includes incorporating tree clocks in an online analysis such as ThreadSanitizer~\cite{threadsanitizer}. Any use of additional synchronization to maintain analysis atomicity in this online setting is identical, and of the same granularity, for both vector clocks and tree clocks. However, the faster joins performed by tree clocks may lead to less congestion compared to vector clocks, especially for partial orders such as $\operatorname{\acr{SHB}}$ and $\operatorname{\acr{MAZ}}$ where synchronization occurs on all events (i.e., synchronization, as well as access events). We leave this evaluation for future work. Finally, since tree clocks are a drop-in replacement for vector clocks, most of the existing techniques that minimize the slowdown due to metadata synchronization (\cref{sec:related_work}) are directly applicable to tree clocks. } \begin{acks} We thank anonymous reviewers for their constructive feedback on an earlier draft of this manuscript. Umang Mathur was partially supported by the Simons Institute for the Theory of Computing. Mahesh Viswanathan is partially supported by grants NSF SHF 1901069 and NSF CCF 2007428. \end{acks} \subsection{Strong-Dependently-Precedes}\label{subsec:sdp} \Paragraph{The $\operatorname{\acr{SDP}}$ partial order.} Strong-dependently-precedes is a partial order introduced in~\cite{Genc19}. It is an unsound (see~\cite{Mathur21Arxiv}) weakening of $\operatorname{\acr{WCP}}$ and thus can report more races. It has a similar flavor to $\operatorname{\acr{WCP}}$; at a high level, the difference is that orderings to a write event $\wt$ are instead applied to the read event $\rd$ that observes $\wt$ in $\Trace$.
\Paragraph{$\operatorname{\acr{SDP}}$ with tree clocks.} \cref{algo:sdp} shows $\operatorname{\acr{SDP}}$ using tree clocks. The algorithm is similar to \cref{algo:wcp}, with $\FunctionCTJoin$ operations replaced by $\FunctionCTSubRootJoin$ operations in \cref{line:sdp_readsubjoin1}, \cref{line:sdp_readsubjoin2} and \cref{line:sdp_writesubjoin}. \input{algorithms/algo_sdp} \subsection{Schedulable-Happens-Before}\label{subsec:shb} $\operatorname{\acr{SHB}}$ is a strengthening of $\operatorname{\acr{HB}}$, introduced recently~\cite{Mathur18} in the context of race detection. Given a trace $\Trace$ and a read event $\rd$, let $\operatorname{lw}_{\Trace}(\rd)$ be the last write event $\wt$ of $\Trace$ before $\rd$ with $\operatorname{Variable}(\wt)=\operatorname{Variable}(\rd)$. $\operatorname{\acr{SHB}}$ is the smallest partial order that satisfies the following. \begin{compactenum} \item\label{item:shb1} $ \hb{\Trace} \subseteq \shb{\Trace}$. \item\label{item:shb2} for every read event $\rd$, we have $\operatorname{lw}_{\Trace}(\rd)\shb{\Trace} \rd$. \end{compactenum} \Paragraph{Algorithm for $\operatorname{\acr{SHB}}$.} Similarly to $\operatorname{\acr{HB}}$, the $\operatorname{\acr{SHB}}$ partial order is computed by a single pass of the input trace $\Trace$ using vector-times~\cite{Mathur18}. The $\operatorname{\acr{SHB}}$ algorithm processes synchronization events (i.e., $\acq(\ell)$ and $\rel(\ell)$) similarly to $\operatorname{\acr{HB}}$. In addition, for each variable $x$, the algorithm maintains a data structure $\operatorname{LW}_x$ that stores the vector time of the latest write event on $x$. When a write event $\wt(x)$ is encountered, the vector time $\mathbb{C}_{\operatorname{\mathsf{tid}}(\wt)}$ is copied to $\operatorname{LW}_{x}$. In turn, when a read event $\rd(x)$ is encountered, the algorithm joins $\operatorname{LW}_{x}$ to $\mathbb{C}_{\operatorname{\mathsf{tid}}(\rd)}$.
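The single-pass $\operatorname{\acr{SHB}}$ computation just described can be sketched with plain dictionary-based vector times. This is a simplified illustration, not the paper's pseudocode: the event encoding and helper names are ours, and we adopt the convention that every event advances its thread's local clock:

```python
from collections import defaultdict

def shb_timestamps(trace, threads):
    """One streaming pass computing SHB timestamps with dict-based
    vector times. Events are (tid, op, obj) with op in
    {'acq', 'rel', 'r', 'w'}; returns the timestamp of each event."""
    C = {t: {u: 0 for u in threads} for t in threads}  # per-thread clocks
    L = defaultdict(lambda: {u: 0 for u in threads})   # per-lock clocks
    LW = {}                                            # last-write time per var
    join = lambda a, b: {t: max(a[t], b[t]) for t in a}
    stamps = []
    for tid, op, obj in trace:
        C[tid][tid] += 1                    # local increment
        if op == "acq":
            C[tid] = join(C[tid], L[obj])   # learn what the lock knows
        elif op == "rel":
            L[obj] = dict(C[tid])           # copy thread time to the lock
        elif op == "w":
            LW[obj] = dict(C[tid])          # remember the last write's time
        elif op == "r" and obj in LW:
            C[tid] = join(C[tid], LW[obj])  # SHB: lw(r) ordered before r
        stamps.append(dict(C[tid]))
    return stamps
```

For a write by thread 1 followed by a read of the same variable by thread 2, the read's timestamp dominates the write's, reflecting $\operatorname{lw}_{\Trace}(\rd)\shb{\Trace}\rd$.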
\Paragraph{$\operatorname{\acr{SHB}}$ with tree clocks.} Tree clocks can directly be used as the data structure to store vector times in the $\operatorname{\acr{SHB}}$ algorithm. We refer to \cref{algo:shb} for the pseudocode. The important new component is the function $\FunctionCTCopyCheckMonotone$ in \cref{line:copycheckmonotone} that copies the vector time of $\mathbb{C}_{t}$ to $\operatorname{LW}_x$. In contrast to $\FunctionCTMonotoneCopy$, this copy is not guaranteed to be monotone, i.e., we might have $\operatorname{LW}_x\not \sqsubseteq\mathbb{C}_{t}$. Note, however, that using tree clocks, this test requires only constant time. Internally, $\FunctionCTCopyCheckMonotone$ performs $\FunctionCTMonotoneCopy$ if $\operatorname{LW}_x \sqsubseteq\mathbb{C}_{t}$ (running in sublinear time), otherwise it performs a deep copy of the whole tree clock (running in linear time). In practice, we expect that most of the time $\FunctionCTCopyCheckMonotone$ results in a $\FunctionCTMonotoneCopy$ and is thus very efficient. The key insight is that if $\FunctionCTMonotoneCopy$ is not used, then $\operatorname{LW}_x\not \sqsubseteq\mathbb{C}_{t}$ and thus we have a race $(\operatorname{lw}_{\Trace}(\rd),\rd)$. Hence, the number of times a deep copy is performed is bounded by the number of write-read races in $\Trace$ between a read and its last write. \input{algorithms/algo_shb} \section{The Tree Clock Data Structure}\label{sec:tree_clocks} In this section we introduce tree clocks, a new data structure for representing logical times in concurrent and distributed systems. We first illustrate the intuition behind tree clocks, and then develop the data structure in detail. \input{intuition} \input{treeclocks_details} \subsection{Tree Clocks}\label{subsec:clocktrees_details} We now present the tree clock data structure in detail. \input{figures/fig_treeclocks} \Paragraph{Tree clocks.} A tree clock $\operatorname{TC}$ consists of the following.
\begin{compactenum} \item $\operatorname{T}=(\mathcal{V}, \mathcal{E})$ is a \emph{rooted tree} of nodes of the form $(\operatorname{\mathsf{tid}},\operatorname{clk}, \operatorname{aclk})\in \Threads{} \times \mathbb{N}^2$. Every node $u$ stores its children in a list $\operatorname{Chld}(u)$, ordered in descending $\operatorname{aclk}$ order. We also store a pointer $\operatorname{Prnt}(u)$ from $u$ to its parent in $\operatorname{T}$. \item $\operatorname{ThrMap}\colon \Threads{} \to \mathcal{V}$ is a \emph{thread map}, with the property that if $\operatorname{ThrMap}(t)=(\operatorname{\mathsf{tid}}, \operatorname{clk}, \operatorname{aclk})$, then $t=\operatorname{\mathsf{tid}}$. \end{compactenum} We denote by $\operatorname{T}.\operatorname{root}$ the root of $\operatorname{T}$, and for a tree clock $\operatorname{TC}$ we refer by $\operatorname{TC}.\operatorname{T}$ and $\operatorname{TC}.\operatorname{ThrMap}$ to the rooted tree and thread map of $\operatorname{TC}$, respectively. For a node $u=(\operatorname{\mathsf{tid}},\operatorname{clk},\operatorname{aclk})$ of $\operatorname{T}$, we let $u.\operatorname{\mathsf{tid}}=\operatorname{\mathsf{tid}}$, $u.\operatorname{clk}=\operatorname{clk}$ and $u.\operatorname{aclk}=\operatorname{aclk}$, and say that $u$ \emph{points to} the unique event $e$ \camera{with $\ThreadOf{e} = \operatorname{\mathsf{tid}}$ and $\LocalTime{}{e} = \operatorname{clk}$.} Intuitively, if $v=\operatorname{Prnt}(u)$, then $u$ represents the following information. \begin{compactenum} \item $\operatorname{TC}$ has the \emph{local time} $u.\operatorname{clk}$ for thread $u.\operatorname{\mathsf{tid}}$. \item $u.\operatorname{aclk}$ is the \emph{attachment time} of $u$ on $v$, which is the local time of $v.\operatorname{\mathsf{tid}}$ when $v$ learned about $u.\operatorname{clk}$ of $u.\operatorname{\mathsf{tid}}$ (this will be the time that $v$ had when $u$ was attached to $v$).
\end{compactenum} Naturally, if $u=\operatorname{T}.\operatorname{root}$ then $u.\operatorname{aclk}=\bot$. See \cref{fig:treeclocks}. \input{algorithms/algo_clock_tree} \Paragraph{Tree clock operations.} Just like vector clocks, tree clocks provide functions for initialization, update and comparison. There are two main operations worth noting. The first is $\FunctionCTJoin$ --- $\operatorname{TC}_1.\FunctionCTJoin(\operatorname{TC}_2)$ joins the tree clock $\operatorname{TC}_2$ to $\operatorname{TC}_1$. In contrast to vector clocks, this operation takes advantage of the direct and indirect monotonicity outlined in \cref{subsec:intuition} to perform the join in sublinear time in the size of $\operatorname{TC}_1$ and $\operatorname{TC}_2$ (when possible). The second is $\FunctionCTMonotoneCopy$. We use $\operatorname{TC}_1.\FunctionCTMonotoneCopy(\operatorname{TC}_2)$ to copy $\operatorname{TC}_2$ to $\operatorname{TC}_1$ when we know that $\operatorname{TC}_1\sqsubseteq \operatorname{TC}_2$. The idea is that when this holds, the copy operation has the same semantics as the join, and hence the principles that make $\FunctionCTJoin$ run in sublinear time also apply to $\FunctionCTMonotoneCopy$. \input{figures/fig_ctjoin} \input{figures/fig_ctmonotonecopy} \cref{algo:clock_tree} gives a pseudocode description of this functionality. The functions on the left column present operations that can be performed on tree clocks, while the right column lists helper routines for the more involved functions $\FunctionCTJoin$ and $\FunctionCTMonotoneCopy$. In the following we give an intuitive description of each function. \SubParagraph{1. $\FunctionCTInitialize(t)$.} This function initializes a tree clock $\operatorname{TC}_{t}$ that belongs to thread $t$, by creating a node $u=(t, 0, \bot)$. Node $u$ will always be the root of $\operatorname{TC}_{t}$. This initialization function is only used for tree clocks that represent the clocks of threads. 
Auxiliary tree clocks for storing vector times of release events do not execute this initialization. \SubParagraph{2. $\FunctionCTGet(t)$.} This function simply returns the time of thread $t$ stored in $\operatorname{TC}$, and returns $0$ if $t$ is not present in $\operatorname{TC}$. \SubParagraph{3. $\FunctionCTIncrement(i)$.} This function increments the time of the root node of $\operatorname{TC}$ by $i$. It is only used on tree clocks that have been initialized using $\FunctionCTInitialize$, i.e., tree clocks that belong to a thread, whose identifier is always stored in the root of the tree. \SubParagraph{4. $\FunctionCTLessThan(\operatorname{TC}')$.} This function compares the vector time of $\operatorname{TC}$ to the vector time of $\operatorname{TC}'$, i.e., it returns $\operatorname{True}$ iff $\operatorname{TC}\sqsubseteq\operatorname{TC}'$. \SubParagraph{5. $\FunctionCTJoin(\operatorname{TC}')$.} This function implements the join operation with $\operatorname{TC}'$, i.e., updating $\operatorname{TC}\gets \operatorname{TC}\sqcup \operatorname{TC}'$. At a high level, the function performs the following steps. \begin{compactenum} \item Routine $\RoutineUpdatedNodesForJoin$ performs a pre-order traversal of $\operatorname{TC}'$, and gathers in a stack $\mathcal{S}$ the nodes of $\operatorname{TC}'$ that have progressed in $\operatorname{TC}'$ compared to $\operatorname{TC}$. \camera{The traversal may stop early due to direct or indirect monotonicity, hence, this routine generally takes sub-linear time.} \item Routine $\RoutineDetachNodes$ detaches from $\operatorname{TC}$ the nodes whose $\operatorname{\mathsf{tid}}$ appears in $\mathcal{S}$, as these will be repositioned in the tree. \item Routine $\RoutineAttachNodes$ updates the nodes of $\operatorname{TC}$ that were detached in the previous step, and repositions them in the tree.
This step effectively creates a subtree of nodes of $\operatorname{TC}$ that is identical to the subtree of $\operatorname{TC}'$ that contains the progressed nodes computed by $\RoutineUpdatedNodesForJoin$. \item Finally, the last 4 lines of $\FunctionCTJoin$ attach the subtree constructed in the previous step under the root $z$ of $\operatorname{TC}$, at the front of the $\operatorname{Chld}(z)$ list. \end{compactenum} \cref{fig:ctjoin} provides an illustration. \SubParagraph{6. $\FunctionCTMonotoneCopy(\operatorname{TC}')$.} This function implements the copy operation $\operatorname{TC}\gets \operatorname{TC}'$ assuming that $\operatorname{TC} \sqsubseteq\operatorname{TC}'$. The function is very similar to $\FunctionCTJoin$. The key difference is that this time, the root of $\operatorname{TC}$ is always considered to have progressed in $\operatorname{TC}'$, even if the respective times are equal. This is required for changing the root of $\operatorname{TC}$ from the current node to one with $\operatorname{\mathsf{tid}}$ equal to the root of $\operatorname{TC}'$. \cref{fig:ctmonotonecopy} provides an illustration. The crucial parts of $\FunctionCTJoin$ and $\FunctionCTMonotoneCopy$ that exploit the hierarchical structure of tree clocks are in $\RoutineUpdatedNodesForJoin$ and $\RoutineUpdatedNodesForCopy$. In each case, we proceed from a parent $u'$ to its children $v'$ only if $u'$ has progressed wrt its time in $\operatorname{TC}$ (recall \cref{subfig:intuition1}), capturing \emph{direct monotonicity}. Moreover, we proceed from a child $v'$ of $u'$ to the next child $v''$ (in order of appearance in $\operatorname{Chld}(u')$) only if $\operatorname{TC}$ is not yet aware of the attachment time of $v'$ on $u'$ (recall \cref{subfig:intuition2}), capturing \emph{indirect monotonicity}. 
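To make the two pruning rules concrete, here is a simplified sketch (in Python) of the progressed-node collection performed by $\RoutineUpdatedNodesForJoin$. The class and function names are ours, and the subsequent detach/attach steps are omitted; the other tree clock's entries are queried through a plain `get(tid)` callback:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    tid: str
    clk: int            # local time of thread `tid` known to this tree clock
    aclk: int = 0       # attachment time on the parent (unused at the root)
    children: list = field(default_factory=list)  # descending-aclk order

def progressed_nodes(other_root, get):
    """Collect the nodes of the other tree clock whose clock has progressed
    relative to ours (queried via get(tid)), pruning subtrees by direct
    monotonicity and sibling scans by indirect monotonicity."""
    out = []
    def visit(u):
        out.append(u)
        for v in u.children:           # children in descending aclk order
            if v.clk > get(v.tid):     # direct monotonicity: recurse only
                visit(v)               # if v itself has progressed
            if v.aclk <= get(u.tid):   # indirect monotonicity: we already
                break                  # knew of this (and every older)
                                       # attachment to u
    if other_root.clk > get(other_root.tid):
        visit(other_root)
    return out

# Other tree clock: t1@5 with children t2@4 (attached at 5), t3@2 (attached at 2).
other = Node("t1", 5, children=[Node("t2", 4, 5), Node("t3", 2, 2)])
known = {"t1": 3, "t2": 1, "t3": 2}   # what our tree clock already knows
tids = [u.tid for u in progressed_nodes(other, lambda t: known.get(t, 0))]
```

In this example only `t1` and `t2` are collected: `t3`'s subtree is skipped by direct monotonicity (its clock has not progressed), and the sibling scan stops there by indirect monotonicity (`t3` was attached at time 2, which we already know about through `t1`).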
\begin{remark}[Constant time epoch accesses]\label{rem:epochs} The function $\operatorname{TC}.\FunctionCTGet(t)$ returns the time of thread $t$ stored in $\operatorname{TC}$ in $O(1)$ time, just like vector clocks. This allows all epoch-related optimizations~\cite{Flanagan09,Roemer20} from vector clocks to apply to tree clocks. \end{remark} \subsection{The Vector Time Interface}\label{subsec:vector_clocks} \Paragraph{The vector-clock interface.} We say that a data structure $\operatorname{DS}$ \emph{implements the vector-clock-interface} (or \emph{vc-interface}) if it represents a vector-time, denoted by $\VTime{\operatorname{DS}}$, and supports updates of its state with vector-time operations. \Paragraph{Vector-time work.} Consider a streaming algorithm $A$ and let $\mathcal{C}=\{ C_1, \dots, C_m \}$ be the set of data structures maintained by $A$ that implement the vc-interface. Consider that $A$ is executed on some input trace $\Trace$. Given some $i\in[m]$ and some integer $j\in [|\Trace|]$, we denote by $C_i^j$ the state of the data structure $C_i$ after $A$ has processed the $j$-th event of $\Trace$. The \emph{vector-time-work} (or \emph{vt-work}) of $A$ on $\Trace$ is defined as \begin{align*} \operatorname{VTWork}_{A}(\Trace) = \sum_{i\leq m} \sum_{j\leq |\Trace|} \left|\left\{t\colon \VTime{C_i^{j}}(t) \neq \VTime{C_i^{j-1}}(t) \right\}\right|\ . \end{align*} In words, each term in the sum counts the number of entries of $C_i$ that are updated by $A$ as the algorithm processes the next event of the input. Overall, $\operatorname{VTWork}_{A}(\Trace)$ counts the total number of vector-time updates made by $A$ as it processes the whole of $\Trace$. \Paragraph{Vector-time optimality.} The goal of a vc data structure $\operatorname{DS}$ is to provide the vc-interface to $A$ as efficiently as possible. Let $\mathcal{T}_{A}(\Trace)$ be the total time spent by $A$ for vector-time operations.
We say that $\operatorname{DS}$ is \emph{vector-time-work optimal} (or \emph{vt-work optimal}) for $A$ if \begin{align*} \mathcal{T}_{A}(\Trace) = O(\operatorname{VTWork}_{A}(\Trace))\ . \end{align*} In words, the amount of time that $\operatorname{DS}$ instances spend in vector-time operations is proportional to the total number of updates to their state by $A$ (i.e., the total amount of vt-work by $A$). \subsection{Partial Orders, Vector Times and Vector Clocks} \label{subsec:vc_prelim} A partial order on a set $S$ is a reflexive, transitive and anti-symmetric binary relation on the elements of $S$. Partial orders are the standard mathematical object for analyzing concurrent executions. The main idea behind such techniques is to define a partial order $\ord{\Trace}{P}$ on the set of events of the trace $\Trace$ being analyzed. The intuition is that $\ord{\Trace}{P}$ captures \emph{causality} --- the relative order of two events of $\Trace$ must be maintained if they are ordered by $\ord{\Trace}{P}$. More importantly, when two events $e_1$ and $e_2$ are unordered by $\ord{\Trace}{P}$ (denoted $e_1 \unord{\Trace}{P} e_2$), then they can be deemed \emph{concurrent}. This principle forms the backbone of all partial-order based concurrent analyses. A na\"{i}ve approach for constructing such a partial order is to explicitly represent it as an acyclic directed graph over the events of $\Trace$, and then perform a graph search whenever needed to determine whether two events are ordered. Vector clocks, on the other hand, provide a more efficient method to represent partial orders and therefore are the key data structure in most partial order-based algorithms. The use of vector clocks enables designing streaming algorithms, which are also suitable for monitoring the system. These algorithms associate \emph{vector timestamps}~\cite{Mattern89,Fidge91,fidge1988timestamps} with events so that the point-wise ordering between timestamps reflects the underlying partial order.
Let us formalize these notions now. \Paragraph{Vector Timestamps.} Let us fix the set of threads $\Threads{}$ in the trace. A \emph{vector timestamp} (or simply vector time) is a mapping $\operatorname{V}\colon \Threads{} \to \mathbb{N}$. It supports the following operations. { \setlength\tabcolsep{3pt} \begin{tabular}{lclr} $\operatorname{V}_1 \sqsubseteq\operatorname{V}_2$ & iff & $\forall t\colon \operatorname{V}_1(t)\leq \operatorname{V}_2(t)$ & (Comparison)\\ $\operatorname{V}_1\sqcup \operatorname{V}_2$ & $=$ & $\lambda t\colon \max(\operatorname{V}_1(t), \operatorname{V}_2(t))$ & (Join)\\ $\operatorname{V}\CAssign{t'}{i}$ & $=$ & $ \lambda t \colon \begin{cases} \operatorname{V}(t) + i, & \text{if } t=t'\\ \operatorname{V}(t), & \text{otherwise } \end{cases} $ & (Increment) \end{tabular} } We write $\operatorname{V}_1 =\operatorname{V}_2$ to denote that $\operatorname{V}_1 \sqsubseteq\operatorname{V}_2$ and $\operatorname{V}_2 \sqsubseteq\operatorname{V}_1$. Let us see how vector timestamps provide an efficient implicit representation of partial orders. \Paragraph{Timestamping for a partial order.} Consider a partial order $\ord{\Trace}{P}$ defined on the set of events of $\Trace$ such that $\tho{\Trace} \subseteq \ord{\Trace}{P}$. In this case, we can define the $\mathsf{P}$-timestamp of an event $e$ as the following vector timestamp: \begin{equation*} \POTime{\ord{\Trace}{P}}{e} = \lambda u: \max \setpred{\,\LocalTime{\Trace}{f}}{f \ord{\Trace}{P} e,\ \ThreadOf{f} = u} \end{equation*} \camera{In words, $\POTime{\ord{\Trace}{P}}{e}$ contains the timestamps of the events that appear the latest in their respective threads such that they are ordered before $e$ in the partial order $\ord{\Trace}{P}$}. We remark that $\POTime{\ord{\Trace}{P}}{e}(\ThreadOf{e}) = \LocalTime{\Trace}{e}$. The following observation then shows that the timestamps defined above precisely capture the order $\ord{\Trace}{P}$. 
\begin{lemma} \label{lem:PO-timestamps} Let $\ord{\Trace}{P}$ be a partial order defined on the set of events of trace $\Trace$ such that $\tho{\Trace} \subseteq \ord{\Trace}{P}$. Then for any two events $e_1, e_2$ of $\Trace$, we have $\POTime{\ord{\Trace}{P}}{e_1} \sqsubseteq \POTime{\ord{\Trace}{P}}{e_2} \iff e_1 \ord{\Trace}{P} e_2$. \end{lemma} \camera{ In words, \cref{lem:PO-timestamps} implies that, in order to check whether two events are ordered according to $\ord{\Trace}{P}$, it suffices to compare their vector timestamps. } \Paragraph{The vector clock data structure.} When establishing a causal order over the events of a trace, the timestamp of an event is computed using the timestamps of other events in the trace. Instead of explicitly storing the timestamp of each event, it is often sufficient to store only the timestamps of a few events as the algorithm runs. Typically, a data structure called a \emph{vector clock} is used to store vector times. Vector clocks are implemented as a simple integer array indexed by thread identifiers, and they support all the operations on vector timestamps. A useful feature of this data structure is the ability to perform in-place operations. In particular, there are methods such as $\FunctionVCJoin(\cdot)$, $\FunctionVCCopy(\cdot)$ or $\FunctionVCIncrement(\cdot,\cdot)$ that store the result of the corresponding vector time operation in the original instance of the data structure. For example, for a vector clock $\mathbb{C}$ and a vector time $V$, a function call $\mathbb{C}.\FunctionVCJoin(V)$ stores the value $\mathbb{C} \sqcup V$ back in $\mathbb{C}$. Each of these operations iterates over all the thread identifiers (indices of the array representation) and compares the corresponding components in $\mathbb{C}$ and $V$. The running time of the join operation for the vector clock data structure is thus $\Theta(k)$, where $k$ is the number of threads. Similarly, copy and comparison operations take $\Theta(k)$ time.
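As an illustration, a minimal vector clock data structure with in-place operations can be sketched as follows (a sketch only: the class and method names are ours, chosen to mirror the $\FunctionVCJoin$, $\FunctionVCCopy$ and $\FunctionVCIncrement$ routines discussed above):

```python
class VectorClock:
    """Integer array indexed by thread identifier; every op is Theta(k)."""

    def __init__(self, k):
        self.clk = [0] * k          # one component per thread

    def leq(self, other):           # comparison: self <= other, pointwise
        return all(a <= b for a, b in zip(self.clk, other.clk))

    def vc_join(self, other):       # in-place join: self := max(self, other)
        self.clk = [max(a, b) for a, b in zip(self.clk, other.clk)]

    def vc_copy(self, other):       # in-place copy: self := other
        self.clk = list(other.clk)

    def vc_increment(self, t, i=1): # in-place increment of thread t's entry
        self.clk[t] += i

# example with k = 3 threads
C, V = VectorClock(3), VectorClock(3)
C.clk, V.clk = [2, 0, 1], [1, 3, 1]
C.vc_join(V)
print(C.clk)        # [2, 3, 1]
C.vc_increment(0)
print(V.leq(C))     # True
```

Every method walks the full array of $k$ components, which is exactly the $\Theta(k)$ cost noted above.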
\subsection{Weak-Causally-Precedes}\label{subsec:wcp} \Paragraph{The $\operatorname{\acr{WCP}}$ partial order.} Weak-causally-precedes is a partial order introduced in~\cite{Kini17}. It is a sound weakening of $\operatorname{\acr{HB}}$ and hence can report more races. Intuitively, this is achieved by not always ordering critical sections on the same lock. To maintain soundness, $\operatorname{\acr{WCP}}$ is closed under left and right composition with $\operatorname{\acr{HB}}$. \Paragraph{$\operatorname{\acr{WCP}}$ with tree clocks.} \cref{algo:wcp} shows $\operatorname{\acr{WCP}}$ using tree clocks. There are two important observations for replacing vector clocks in $\operatorname{\acr{WCP}}$ with tree clocks. First, the copies in \cref{line:wcp_copyread,line:wcp_copywrite}, occurring when the algorithm processes a lock-release event $\rel$, are indeed monotone. This monotonicity holds due to the corresponding joins in \cref{line:wcp_readjoin} and \cref{line:wcp_writejoin} that occur while the algorithm processes the critical section of $\rel$. Second, $\operatorname{\acr{WCP}}$ does not contain the thread order $\operatorname{\acr{TO}}$, i.e., there might exist events $e_1, e_2$ such that $e_1 \stricttho{\Trace} e_2$ but $e_1 \unordwcp{\Trace} e_2$. Due to this, when we perform a join operation from a tree clock $\mathbb{X}$ to a tree clock $\mathbb{C}_{t}$ (where $\mathbb{C}_{t}$ holds the $\operatorname{\acr{WCP}}$ time of the current event of thread $t$), monotonicity does not generally hold: it could be that the clock of some thread $t'$ has progressed in $\mathbb{C}_{t}$ compared to $\mathbb{X}$, but $\mathbb{C}_{t}$ is not aware of the events that hang in the subtree of $\mathbb{X}$ rooted at a node $u$ with $u.\operatorname{\mathsf{tid}}=t'$.
However, a closer observation reveals that, because $\operatorname{\acr{WCP}}$ is closed under composition with $\operatorname{\acr{HB}}$, this can only happen when $t'=t$, and thus $u$ is the root of $\mathbb{C}_{t}$. Hence, monotonicity only fails for nodes in the first level of $\mathbb{C}_{t}$, while it holds for all other nodes. We accommodate this corner case by modifying the general $\FunctionCTJoin$ function to $\FunctionCTSubRootJoin$ --- the latter is identical to the former with the following small modifications. \begin{compactenum} \item The if block in \cref{line:functionctjoinifroot} is not executed. \item The if block in \cref{line:routineupdatednodesforjoinifbreak} is not executed if $u'.\operatorname{\mathsf{tid}}=t$. \item The body of the while loop in \cref{line:routineattachnodesmainloop} is skipped if $u'.\operatorname{\mathsf{tid}}=t$. \end{compactenum} $\FunctionCTSubRootJoin$ is called in \cref{line:wcp_readsubjoin} and \cref{line:wcp_writesubjoin1} of \cref{algo:wcp}. \input{algorithms/algo_wcp}
\section{Conformal Mapping of a Deformed Cylinder} By constructing a conformal mapping from a deformed surface to one of constant curvature, analysis of the Laplacian eigenfunctions on the former surface can be made tractable. Consider a cylinder with radius $R_0$. Using cylindrical polar coordinates, radially-symmetric deformations from this geometry can be described by a position-dependent radius, $R(z) = R_{0} \left( 1 + h(z) \right)$. The line element on this surface is given by \begin{equation} \dd{s}^2 = \left( 1 + R_0^2 h_z^2 \right) \dd{z}^2 + R_0^2 \left( 1+h \right)^2 \dd{\varphi}^2\,, \label{eq:CylinderLineElement} \end{equation} where $h_z$ denotes the derivative of $h$ with respect to $z$. We map this to a conformally flat cylinder, whose line element is given by \begin{equation} \dd{s}^2 = \Omega^2(v) \left( \dd{v}^2 + R_0^2 \dd{\varphi}^2 \right)\,. \label{eq:CylinderLineConformal} \end{equation} The equality between the two line elements implies the relationship \begin{equation} v = \int_0^z \left( 1 + R_0^2 h_z^2 \right)^{\frac{1}{2}}\frac{\dd{z}}{1+h(z)} + C\,. \label{eq:CylinderCoordinateRelation} \end{equation} The condition $v \sim z$ at $z=0$ implies that $C=0$. To first order in $h$ and its gradients, \begin{equation} v \approx z - \int_0^z h(z) \dd{z}\,. \label{eq:CylinderCoordinateRelationApprox} \end{equation} The conformal factor is given by Eqs.~\ref{eq:CylinderLineElement} and \ref{eq:CylinderLineConformal} as \begin{equation} \Omega^2 = \left( 1+h(v) \right)^2\,. \label{eq:CylinderConformalFactor} \end{equation} \section{Conformal Mapping of a Deformed Sphere} We consider conformal mappings of a sphere with an azimuthally-symmetric deformation. Working in spherical coordinates, these surfaces can be described by an angular-dependent radius, $R(\theta) = R_{0} \left( 1 + h(\theta) \right)$.
In these coordinates, the line element is given by \begin{equation} \dd{s}^2 = \left( \left( 1+h(\theta) \right)^2 + h_\theta^2 \right) R_0^2 \dd{\theta}^2 + R_0^2\left( 1+h(\theta) \right)^2 \sin^2\theta \dd{\varphi}^2\,. \label{eq:SphereLineElement} \end{equation} We map this to a conformally-flat sphere, \begin{equation} \dd{s}^2 = \Omega^2\left( \dd{\Theta}^2 + \sin^2 \Theta \dd{\varphi}^2 \right)\,. \label{eq:SphereConformalLine} \end{equation} With this, we obtain the constraints \begin{equation} \begin{aligned} \sin^2\theta \Theta'^2 &= \sin^2 \Theta \left( 1 + \frac{R'^2}{R^2} \right)\,, \\ \sin^2 \Theta \Omega^2 &= \sin^2 \theta R^2\,. \label{eq:SphereCoordinateODE} \end{aligned} \end{equation} These equations can be analyzed perturbatively, and to first order in $h$, $\Theta=\theta$. Using Eq.~\ref{eq:SphereCoordinateODE} to solve for $\Omega$, we find that eigenvalues and eigenfunctions satisfy \begin{equation} \left[-\Delta^0 - 2k^2 R_0^2 h \right] \phi_k = k^2 R_0^2 \phi_k\,. \label{eq:SphereEigenfunction} \end{equation} The additional term on the LHS of Eq.~\ref{eq:SphereEigenfunction} breaks the polar symmetry in the eigenfunctions, effectively ``fixing'' the resulting polar orientation of the eigenfunctions (although azimuthal symmetry is preserved). \section{Conformal Mapping of Deformed Planar Drum} The technique of using conformal mapping to understand how deformations modify the Laplacian can be applied to a variety of surfaces. To elaborate on the procedure that we used for deformed spheres and cylinders, we show the steps of the calculation for a circularly symmetric deformation on a planar drum. Consider a height function, $h(r)$.
The line element on this surface is given by \begin{equation} \dd{s}^2 = \left(1 + h_r^2\right) \dd{r}^2 + r^2 \dd{\varphi}^2 \text{.} \end{equation} We assume a reparameterization of the surface geometry $(r, \varphi) \rightarrow (u, \varphi)$ so that the line element assumes the manifestly conformally flat form \begin{equation} \dd{s}^2 = \Omega^2(u) \left(\dd{u}^2 + u^2 \dd{\varphi}^2 \right)\,. \end{equation} Setting these two line elements equal to each other, we obtain the two relations \begin{equation} \begin{aligned} \left(1+h_r^2\right) \dd{r}^2 &= \Omega^2(u) \dd{u}^2\,, \\ r^2 &= u^2 \Omega^2\,. \label{planardrumrelations} \end{aligned} \end{equation} Rearranging, one obtains a relationship between $u$ and $r$, \begin{equation} u = Cr \exp \int_0^r \left[\left(1+h_r^2\right)^\frac{1}{2} -1 \right] \frac{\dd{r}}{r} \text{.} \end{equation} Provided our bump possesses a tangent plane at $r=0$, the relationship $r \sim u$ at $r=0$ sets the constant of integration to $C=1$, which gives an exact relationship that can be expanded in powers of $h_r$. To $\order{h_r^2}$, \begin{equation} r = u \left(1 - \frac{1}{2} \int_0^u \frac{\dd{u}}{u} h_u^2 \right) \label{quadraticapproximation} \text{,} \end{equation} where we have inverted the equation to obtain $r(u)$. As $u \rightarrow \infty$, this relationship becomes linear, \begin{equation*} r = u\left(1 - \frac{\Gamma}{2} \right) \text{,} \end{equation*} where $\Gamma \equiv \int_0^\infty \dd{u}h_u^2/u$. From Eq.~\ref{planardrumrelations}, we also obtain $\Omega$, and hence the relationship between the Laplacian defined on the curved coordinates, $\Delta^G$, and conformally flat coordinates, $\Delta^0$, \begin{equation} -\Delta^G \phi(r, \varphi) = k^2 \phi(r, \varphi) \rightarrow -\Delta^0 \phi(u, \varphi) = \frac{r^2(u)\,k^2}{u^2} \phi(u, \varphi)\,.
\end{equation} Using Eq.~\ref{quadraticapproximation} and the definition of $\Gamma$, we can express this in a more telling form, \begin{equation} \left[-\Delta^0 + V(u) \right] \phi = k^2(1-\Gamma) \phi \text{,} \end{equation} where \begin{equation} V(u) = -k^2 \int_u^\infty \frac{\dd{u}}{u} h_u^2\,. \label{planarpotential} \end{equation} Similar to the cases of the deformed sphere and cylinder, we can express the effect of a deformation as the addition of an eigenvalue-dependent potential. As an example, if one considers a planar drum with a circularly symmetric Gaussian bump, Eq.~\ref{planarpotential} shows that this leads to a Gaussian potential in conformal coordinates. \section{Using the Mathematica file} The RD spectrum for the rippled cylinder can be explored analytically in Mathematica. The Mathematica notebook also generates the enclosed animation of the cosine or sine mode being selected as the wavelength is tuned through the edge of the BZ. To regenerate the animation, simply execute each block of the notebook from the beginning. \section{Using the COMSOL files} To help others repeat our results, we have included several .mph files that you can run in COMSOL Multiphysics. Tuning the meshing and other simulation parameters can affect convergence and interpretation; in particular, mesh irregularities at the ends of the cylinder models can act like defects and pin patterns. We cope with this by using smooth periodic boundary conditions and a rectangular mesh. To reproduce the rippled cylinder results, you can: \begin{enumerate} \item Open \texttt{periodic-cylinder-length-sweep.mph} \item Click the green equals sign to launch the solver, see Fig.~\ref{fig:click-solve}. \item The solver should finish after ten to twenty minutes of processing. You can watch progress in the log window or convergence plots. If it fails, try simply relaunching it, because some versions of COMSOL do not successfully initialize all of the needed components on the first run. 
\item Click \texttt{3D Plot Group 5} to see solutions. It will default to showing the longest cylinder at the end of the sweep, see Fig.~\ref{fig:click-solution}. \item Select the length value, scroll up to a shorter cylinder, like 25, and click \texttt{Plot}, see Fig.~\ref{fig:click-shorter}. \item The full sequence of length sweep solutions has been exported as an animation, see \texttt{rippled-cylinder-ridge-trough-switch.mov}. \end{enumerate} \begin{widetext} \begin{figure}[h] \includegraphics[width=1\textwidth]{periodic-cylinder-length-sweep--COMSOL-screenshot-1.png} \caption{Run the solver with a dynamic sweep of cylinder lengths \label{fig:click-solve}} \end{figure} \begin{figure}[h] \includegraphics[width=1\textwidth]{periodic-cylinder-length-sweep--COMSOL-screenshot-2.png} \caption{View a longer cylinder's solution with higher concentration on saddles, i.e. negative Gaussian curvature \label{fig:click-solution}} \end{figure} \begin{figure}[h] \includegraphics[width=1\textwidth]{periodic-cylinder-length-sweep--COMSOL-screenshot-3.png} \caption{View a shorter cylinder's solution with higher concentration on ridges, i.e. positive Gaussian curvature \label{fig:click-shorter}} \end{figure} \end{widetext} \end{document}
\section{Introduction} It is believed that the evolution of the large-scale magnetic field of the Sun is governed by the interplay between large-scale motions, like differential rotation and meridional circulation, turbulent convection flows and magnetic fields. One of the most important issues in solar dynamo theory is related to the origin of the equatorward drift of sunspot activity at low latitudes and, simultaneously, the poleward drift at high latitudes of large-scale unipolar regions and quiet prominences. \cite{par55} and \cite{yosh1975} suggested that the evolution of large-scale magnetic activity of the Sun can be interpreted as dynamo waves propagating perpendicular to the direction of shear from the differential rotation. They found that the propagation can be considered as a diffusion process, which follows the iso-rotation surfaces of angular velocity in the solar convection zone. The direction of propagation can be modified by meridional circulation, anisotropic diffusion and the effects of turbulent pumping (see, e.g., \citealp{choud95,k02,2008A&A...485..267G}). The latter induces an effective drift of the large-scale magnetic field even though the mean flow of the turbulent medium may be zero. The turbulent pumping effects can be equally important both for dynamos without meridional circulation and for the meridional circulation-dominated dynamo regimes. For the latter case, the velocity of turbulent pumping has to be comparable to the meridional circulation speed. It is known that an effect of this magnitude can be produced by diamagnetic pumping and perhaps by so-called topological pumping. Both effects produce pumping in the radial direction and do not have a direct impact on the latitudinal drift of the large-scale magnetic field.
Recently \citep{pi08Gafd,mitr2009AA,2010GApFD.104..167L}, it has been found that the helical convective motions and the helical turbulent magnetic fields interacting with large-scale magnetic fields and differential rotation can produce effective pumping in the direction of the large-scale vorticity vector. Thus, the effect produces a latitudinal transport of the large-scale magnetic field in the convective zone wherever the angular velocity has a strong radial gradient. It is believed that these regions, namely the tachocline beneath the solar convection zone and the subsurface shear layer, are important for the solar dynamo. Figure \ref{fig:The-field-line} illustrates the principal processes that induce the helicity--vorticity pumping effect. It is suggested that this effect produces an anisotropic drift of the large-scale magnetic field, which means that the different components of the large-scale magnetic field drift in different directions. Earlier work, e.g.\ by \citet{kit:1991} and \citet{2003PhRvE..67b6321K}, suggests that the effect of anisotropy in the transport of the mean field is related to nonlinear effects of the global Coriolis force on the convection. Also, nonlinear effects of the large-scale magnetic field result in an anisotropy of turbulent pumping \citep{1996A&A...307..293K}. It is noteworthy that the helicity--vorticity effect produces an anisotropy of the large-scale magnetic field drift already in the case of slow rotation and a weak magnetic field. A comprehensive study of the linear helicity--vorticity pumping effect for the case of weak shear and slow rotation was given by \citet{garr2011}, and their results were extended by \citet{bran2012AA} using DNS with a more general test-field method.
\begin{figure} \begin{centering} \includegraphics[width=0.6\textwidth]{W_hel} \par\end{centering} \caption{\label{fig:The-field-line}The field lines of the large-scale magnetic field, $\bm{B}^{(T)}$, are transformed by the helical motions to a twisted $\varOmega$-like shape. This loop is folded by the large-scale shear, $\bm{V}^{(T)}$, into the direction of the background large-scale magnetic field, $\bm{B}^{(T)}$. The induced electromotive force has a component, $\mbox{\boldmath ${\cal E}$} {}^{(P)}$, which is perpendicular to the field $\bm{B}^{(T)}$. The resulting effect is identical to the effective drift of the large-scale magnetic field along the $x$-axis, in the direction opposite to the large-scale vorticity vector $\bm{W}=\nabla\times\bm{V}^{(T)}$, i.e., $\mbox{\boldmath ${\cal E}$} {}^{(P)}\sim-\bm{W}\times\bm{B}^{(T)}$.} \end{figure} In this paper we analytically estimate the helicity--vorticity pumping effect, taking into account the Coriolis force due to global rotation. The calculations were done within the framework of mean-field magnetohydrodynamics using the minimal $\tau$-approximation. The results are applied to mean field dynamo models, which are used to examine the influence of this effect on the dynamo. The paper is structured as follows. In the next section we briefly outline the basic equations and assumptions, and consider the results of calculations. Next, we apply the results to the solar dynamo. In Section 3 we summarize the main results of the paper. The details of analytical calculations are given in Appendices A and B. \section{Basic equations} In the spirit of mean-field magnetohydrodynamics, we split the physical quantities of the turbulent conducting fluid into mean and fluctuating parts where the mean part is defined as an ensemble average. One assumes the validity of the Reynolds rules. The magnetic field $\bm{B}$ and the velocity $\bm{V}$ are decomposed as $\bm{B}=\overline{\bm{B}}+\bm{b}$ and $\bm{V}=\overline{\bm{V}}+\bm{u}$, respectively.
Hereafter, we use small letters for the fluctuating parts and capital letters with an overbar for mean fields. Angle brackets are used for ensemble averages of products. We use the two-scale approximation \citep{rob-saw,krarad80} and assume that mean fields vary over much larger scales (both in time and in space) than fluctuating fields. The average effect of MHD-turbulence on the large-scale magnetic field (LSMF) evolution is described by the mean-electromotive force (MEMF), $\mbox{\boldmath ${\cal E}$} {}=\left\langle \bm{u\times b}\right\rangle$. The governing equations for the fluctuating magnetic field and velocity are written in a rotating coordinate system as follows: \begin{eqnarray} \frac{\partial\bm{b}}{\partial t} & = & \nabla\times\left(\bm{u}\times\bm{\bar{B}}+\bm{\bar{V}}\times\bm{b}\right)+\eta\nabla^{2}\bm{b}+\mathfrak{G},\label{induc1}\\ \frac{\partial u_{i}}{\partial t}+2\left(\bm{\varOmega}\times\bm{u}\right)_{i} & = & -\nabla_{i}\left(p+\frac{\left(\bm{b{\bm\cdot}\bar{B}}\right)}{\mu}\right)+\nu\Delta u_{i}\label{navie1}\\ & + & \frac{1}{\mu}\nabla_{j}\left(\bar{B}_{j}b_{i}+\bar{B}_{i}b_{j}\right)-\nabla_{j}\left(\bar{V}_{j}u_{i}+\bar{V}_{i}u_{j}\right)+f_{i}+\mathfrak{F}_{i},\nonumber \end{eqnarray} where $\mathfrak{G},\mathfrak{F}$ stand for nonlinear contributions to the fluctuating fields, $p$ is the fluctuating pressure, $\bm{\varOmega}$ is the angular velocity responsible for the Coriolis force, $\bm{\bar{V}}$ is the mean flow, which is weakly variable in space, and $\bm{f}$ is the random force driving the turbulence. Equations~(\ref{induc1}) and (\ref{navie1}) are used to compute the mean-electromotive force $\mbox{\boldmath ${\cal E}$} {}=\left\langle \bm{u\times b}\right\rangle $. It was computed with the help of the equations for the second moments of fluctuating velocity and magnetic fields using the double-scale Fourier transformation and the minimal $\tau$-approximation, for a given model of background turbulence.
To simplify the estimation of nonlinear effects due to global rotation, we use scale-independent background turbulence spectra and correlation time. Details of the calculations are given in Appendix A. In what follows we discuss only those parts of the mean-electromotive force which are related to shear and the pumping effect. \subsection{Results} The large-scale shear flow is described by the tensor $\overline{V}_{i,j}=\nabla_{j}\overline{V}_{i}$. It can be decomposed into a sum of strain and vorticity tensors, $\nabla_{j}\overline{V}_{i}={\displaystyle \frac{1}{2}\left(\overline{V}_{i,j} +\overline{V}_{j,i}\right)-\frac{1}{2}\varepsilon_{ijp}\overline{W}_{p}}$, where $\overline{\bm{W}}=\boldsymbol{\nabla}\times\overline{\bm{V}}$ is the large-scale vorticity vector. The joint effect of large-scale shear, helical turbulent flows and magnetic fields can be expressed by the following contributions to the mean-electromotive force (omitting the $\alpha$-effect): \begin{eqnarray} \mbox{\boldmath ${\cal E}$} {}^{(H)} & = & \left(\bm{\overline{W}}\times\bm{\overline{B}}\right)\left(f_{2}^{(\gamma)}h_{\mathcal{C}}+f_{1}^{(\gamma)}h_{\mathcal{K}}\right)\tau_{c}^{2}+\bm{\tilde{V}}\left(\bm{B}\right)\left(f_{4}^{(\gamma)}h_{\mathcal{C}}+f_{3}^{(\gamma)}h_{\mathcal{K}}\right)\tau_{c}^{2}\label{eq:EMFP}\\ & + & \bm{e}\left[\left(\bm{e}\times\bm{\overline{W}}\right){\bm\cdot}\bm{\overline{B}}\right]\left(f_{6}^{(\gamma)}h_{\mathcal{C}}+f_{5}^{(\gamma)}h_{\mathcal{K}}\right)\tau_{c}^{2}+\left(\bm{e}\times\bm{\overline{W}}\right)\left(\bm{e}{\bm\cdot}\bm{\overline{B}}\right)\left(f_{8}^{(\gamma)}h_{\mathcal{C}}+f_{7}^{(\gamma)}h_{\mathcal{K}}\right)\tau_{c}^{2},\nonumber \end{eqnarray} where $\tilde{V}_{i}\left(\bm{B}\right)={\displaystyle \frac{\overline{B}_{j}}{2}\left(\overline{V}_{i,j}+\overline{V}_{j,i}\right)}$, ${\displaystyle \bm{e}=\frac{\boldsymbol{\varOmega}}{\left|\boldsymbol{\varOmega}\right|}}$ is the unit vector along the rotation axis, $\tau_{c}$ is the typical
relaxation time of turbulent flows and magnetic fields, $h_{\mathcal{K}}^{\left(0\right)}=\left\langle \bm{u^{\left(0\right)}{\bm\cdot}\nabla\times u^{\left(0\right)}}\right\rangle $ and $h_{\mathcal{C}}^{\left(0\right)}={\displaystyle \frac{\left\langle \bm{b^{\left(0\right)}{\bm\cdot}\nabla\times b^{\left(0\right)}}\right\rangle }{\mu\rho}}$ are kinetic and current helicity of the background turbulence. These parameters are assumed to be known in advance. Functions $f_{n}^{(\gamma)}(\varOmega^{*})$ are given in Appendix B; they depend on the Coriolis number $\varOmega^{*}=2\varOmega_{0}\tau_{c}$ and describe the nonlinear effect due to the Coriolis force, and $\varOmega_{0}$ is the global rotation rate. For slow rotation, $\varOmega^{*}\ll1$, we perform a Taylor expansion of $f_{n}^{(\gamma)}(\varOmega^{*})$ and obtain \begin{eqnarray} \mbox{\boldmath ${\cal E}$} {}^{(H)} & = & \frac{\tau_{c}^{2}}{2}\left(\bm{\overline{W}}\times\bm{\overline{B}}\right)\left(h_{\mathcal{C}}-h_{\mathcal{K}}\right)+\frac{\tau_{c}^{2}}{5}\bm{\tilde{V}}\left(\bm{B}\right)\left(3h_{\mathcal{K}}-\frac{13}{3}h_{\mathcal{C}}\right).\label{eq:EMFT} \end{eqnarray} The coefficients in the kinetic part of Eq.~(\ref{eq:EMFT}) are twice as large as those found by \citet{garr2011}. This difference results from our assumption that the background turbulence spectra and the correlation time are scale-independent. The results for the magnetic part are in agreement with our earlier findings (see \citealp{pi08Gafd}). The first term in Eq.~(\ref{eq:EMFT}) describes turbulent pumping with an effective velocity ${\displaystyle \frac{\tau_{c}^{2}\bm{\overline{W}}}{2}\left(h_{\mathcal{C}}-h_{\mathcal{K}}\right)}$ and the second term describes anisotropic turbulent pumping. Its structure depends on the geometry of the shear flow.
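As a consistency check, the decomposition of the velocity gradient into strain and vorticity parts used above can be verified symbolically. A minimal sympy sketch (the generic field components $V_i$ are placeholders):

```python
import sympy as sp

x = sp.symbols('x0:3', real=True)
V = [sp.Function(f'V{i}')(*x) for i in range(3)]   # generic mean flow
# large-scale vorticity W = curl V
W = [sum(sp.LeviCivita(i, j, k) * sp.diff(V[k], x[j])
         for j in range(3) for k in range(3)) for i in range(3)]
# verify grad_j V_i = (V_{i,j} + V_{j,i})/2 - eps_{ijp} W_p / 2
for i in range(3):
    for j in range(3):
        strain = sp.Rational(1, 2) * (sp.diff(V[i], x[j]) + sp.diff(V[j], x[i]))
        rot = sp.Rational(1, 2) * sum(sp.LeviCivita(i, j, p) * W[p]
                                      for p in range(3))
        assert sp.simplify(sp.diff(V[i], x[j]) - (strain - rot)) == 0
print("strain-vorticity decomposition verified")
```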
For large Coriolis numbers, $\varOmega^{*}\gg1$, only the kinetic helicity contributions survive: \begin{equation} \mbox{\boldmath ${\cal E}$} {}^{(H)}=-\frac{\tau_{c}^{2}}{6}\left(\bm{\overline{W}}\times\bm{\overline{B}}\right)h_{\mathcal{K}}+\frac{\tau_{c}^{2}}{5}\bm{\tilde{V}}\left(\bm{B}\right)h_{\mathcal{K}}.\label{eq:EMFTI} \end{equation} Figure \ref{fig:Fun-dep} shows the dependence of the pumping effects on the Coriolis number. We observe that for the terms $\left(\bm{\overline{W}}\times\bm{\overline{B}}\right)$ and $\bm{\tilde{V}}\left(\bm{B}\right)$ the effects of kinetic helicity are non-monotonic and have a maximum at $\varOmega^{*}\approx1$. The effects of current helicity for these terms are monotonically quenched with increasing values of $\varOmega^{*}$. The additional contributions in Eq.~(\ref{eq:EMFP}) are rather small in comparison with the main terms. Thus, we can conclude that the first line in Eq.~(\ref{eq:EMFP}) describes the leading effect of pumping due to the helicity of turbulent flows and magnetic field. Below, we drop the contributions from the second line of Eq.~(\ref{eq:EMFP}) from our analysis. \begin{figure} \begin{centering} \includegraphics[width=0.45\textwidth]{fig1}\includegraphics[width=0.45\textwidth]{fig2} \par\end{centering} \caption{\label{fig:Fun-dep}The dependence of the pumping effects on the Coriolis number. Solid lines show contributions from kinetic helicity and dashed lines the same for current helicity.} \end{figure} \subsection{Helicity--vorticity pumping in the solar convection zone} \subsubsection{The dynamo model} To estimate the impact of this pumping effect on the dynamo, we consider the example of a dynamo model which takes into account contributions of the mean electromotive force given by Eq.~(\ref{eq:EMFP}). The dynamo model employed in this paper has been described in detail by \cite{pk11,pk11mf}. This type of dynamo was proposed originally by \citet{2005ApJ...625..539B}.
The reader may find a discussion of different types of mean-field dynamos in \cite{2005PhR...417....1B} and \cite{2007sota.conf..319T}. We study the standard mean-field induction equation in a perfectly conducting medium: \begin{equation} \frac{\partial\overline{\bm{B}}}{\partial t}=\boldsymbol{\nabla}\times\left(\mbox{\boldmath ${\cal E}$} {}+\overline{\bm{U}}\times\overline{\bm{B}}\right),\label{eq:dyn} \end{equation} where $\mbox{\boldmath ${\cal E}$} {}=\overline{\bm{u\times b}}$ is the mean electromotive force, with $\bm{u,\, b}$ being the fluctuating velocity and magnetic field, respectively, $\overline{\bm{U}}$ is the mean velocity (differential rotation and meridional circulation), and the axisymmetric magnetic field is: \[ \overline{\bm{B}}=\bm{e}_{\phi}B+\nabla\times\frac{A\bm{e}_{\phi}}{r\sin\theta}, \] where $\theta$ is the polar angle. The expression for the mean electromotive force $\mbox{\boldmath ${\cal E}$} {}$ is given by \citet{pi08Gafd}. It is expressed as follows: \begin{equation} \mathcal{E}_{i}=\left(\alpha_{ij}+\gamma_{ij}^{(\varLambda)}\right)\overline{B}-\eta_{ijk}\nabla_{j}\overline{B}_{k}+\mathcal{E}_{i}^{(H)}.\label{eq:EMF-1} \end{equation} The new addition due to helicity and mean vorticity effects is marked by $\mbox{\boldmath ${\cal E}$} {}^{(H)}$. The tensor $\alpha_{ij}$ represents the $\alpha$-effect.
It includes hydrodynamic and magnetic helicity contributions, \begin{align} \alpha_{ij} & =C_{\alpha}\sin^{2}\theta\alpha_{ij}^{(H)}+\alpha_{ij}^{(M)},\label{alp2d}\\ \alpha_{ij}^{(H)} & =\delta_{ij}\left\{ 3\eta_{T}\left(f_{10}^{(a)}\left(\bm{e}{\bm\cdot}\boldsymbol{\varLambda}^{(\rho)}\right)+f_{11}^{(a)}\left(\bm{e}{\bm\cdot}\boldsymbol{\varLambda}^{(u)}\right)\right)\right\} +\\ & e_{i}e_{j}\left\{ 3\eta_{T}\left(f_{5}^{(a)}\left(\bm{e}{\bm\cdot}\boldsymbol{\varLambda}^{(\rho)}\right)+f_{4}^{(a)}\left(\bm{e}{\bm\cdot}\boldsymbol{\varLambda}^{(u)}\right)\right)\right\} +\\ & 3\eta_{T}\left\{ \left(e_{i}\varLambda_{j}^{(\rho)}+e_{j}\varLambda_{i}^{(\rho)}\right)f_{6}^{(a)}+\left(e_{i}\varLambda_{j}^{(u)}+e_{j}\varLambda_{i}^{(u)}\right)f_{8}^{(a)}\right\} , \end{align} where the hydrodynamic part of the $\alpha$-effect is defined by $\alpha_{ij}^{(H)}$, $\bm{\boldsymbol{\varLambda}}^{(\rho)}=\boldsymbol{\nabla}\log\overline{\rho}$ quantifies the density stratification, $\bm{\boldsymbol{\varLambda}}^{(u)}=\boldsymbol{\nabla}\log\left(\eta_{T}^{(0)}\right)$ quantifies the turbulent diffusivity variation, and $\bm{e}=\boldsymbol{\varOmega}/\left|\varOmega\right|$ is a unit vector along the axis of rotation. The turbulent pumping, $\gamma_{ij}^{(\varLambda)}$, depends on mean density and turbulent diffusivity stratification, and on the Coriolis number $\varOmega^{*}=2\tau_{c}\varOmega_{0}$ where $\tau_{c}$ is the typical convective turnover time and $\varOmega_{0}$ is the global angular velocity.
Following the results of \cite{pi08Gafd}, $\gamma_{ij}^{(\varLambda)}$ is expressed as follows: \begin{align} \gamma_{ij}^{(\varLambda)} & =3\eta_{T}\left\{ f_{3}^{(a)}\varLambda_{n}^{(\rho)}+f_{1}^{(a)}\left(\bm{e}{\bm\cdot}\boldsymbol{\varLambda}^{(\rho)}\right)e_{n}\right\} \varepsilon_{inj}-3\eta_{T}f_{1}^{(a)}e_{j}\varepsilon_{inm}e_{n}\varLambda_{m}^{(\rho)}\label{eq:pump}\\ & -3\eta_{T}\left(\varepsilon-1\right)\left\{ f_{2}^{(a)}\varLambda_{n}^{(u)}+f_{1}^{(a)}\left(\bm{e}{\bm\cdot}\boldsymbol{\varLambda}^{(u)}\right)e_{n}\right\} \varepsilon_{inj}. \end{align} The effect of turbulent diffusivity, which is anisotropic due to the Coriolis force, is given by: \begin{equation} \eta_{ijk}=3\eta_{T}\left\{ \left(2f_{1}^{(a)}-f_{2}^{(d)}\right)\varepsilon_{ijk}-2f_{1}^{(a)}e_{i}e_{n}\varepsilon_{njk}+\varepsilon C_{\omega}f_{4}^{(d)}e_{j}\delta_{ik}\right\}.\label{eq:diff} \end{equation} The last term in Eq.~(\ref{eq:diff}) describes R\"adler's $\boldsymbol{\varOmega}\times\bm{J}$ effect. The functions $f_{\{1-11\}}^{(a,d)}$ depend on the Coriolis number. They can be found in \cite{pi08Gafd} (see also \cite{pk11} or \cite{ps11}). In the model, the parameter $\varepsilon={\displaystyle \frac{\overline{\bm{b}^{2}}}{\mu_{0}\overline{\rho}\overline{\bm{u}^{2}}}}$, which measures the ratio between magnetic and kinetic energies of the fluctuations in the background turbulence, is assumed to be equal to 1. This corresponds to perfect energy equipartition. The $\varepsilon$ contribution in the second line of Eq.~(\ref{eq:pump}) describes the paramagnetic effect \citep{2003PhRvE..67b6321K}. In the state of perfect energy equipartition the effect of diamagnetic pumping is compensated by the paramagnetic effect. We can therefore formally drop the second line of Eq.~(\ref{eq:pump}) from our consideration if $\varepsilon=1$.
To compare the magnitude of the helicity--vorticity pumping effect with the diamagnetic effect, we will show results for the pumping velocity distribution with $\varepsilon=0$. The contribution of small-scale magnetic helicity $\overline{\chi}=\overline{\bm{a{\bm\cdot}}\bm{b}}$ ($\bm{a}$ is the fluctuating vector potential of the magnetic field) to the $\alpha$-effect is defined as \begin{equation} \alpha_{ij}^{(M)}=2f_{2}^{(a)}\delta_{ij}\frac{\overline{\chi}\tau_{c}}{\mu_{0}\overline{\rho}\ell^{2}}-2f_{1}^{(a)}e_{i}e_{j}\frac{\overline{\chi}\tau_{c}}{\mu_{0}\overline{\rho}\ell^{2}}.\label{alpM} \end{equation} The nonlinear feedback of the large-scale magnetic field to the $\alpha$-effect is described by a dynamical quenching due to the constraint of magnetic helicity conservation. The magnetic helicity $\overline{\chi}$, which is subject to a conservation law, is described by the following equation \citep{kle-rog99,sub-bra:04}: \begin{eqnarray} \frac{\partial\overline{\chi}}{\partial t} & = & -2\left(\mbox{\boldmath ${\cal E}$} {}{\bm\cdot}\overline{\bm{B}}\right)-\frac{\overline{\chi}}{R_{\chi}\tau_{c}}+\boldsymbol{\nabla}{\bm\cdot}\left(\eta_{\chi}\boldsymbol{\nabla}\bar{\chi}\right),\label{eq:hel} \end{eqnarray} where $\tau_{c}$ is a typical convective turnover time. The parameter $R_{\chi}$ controls the helicity dissipation rate without specifying the nature of the loss. The turnover time $\tau_{c}$ decreases from about 2 months at the bottom of the integration domain, which is located at $0.71R_{\odot}$, to several hours at the top boundary located at $0.99R_{\odot}$. It seems reasonable that the helicity dissipation is most efficient near the surface. The last term in Eq.~(\ref{eq:hel}) describes a turbulent diffusive flux of magnetic helicity \citep{mitra10}.
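As a toy illustration of the balance expressed by Eq.~(\ref{eq:hel}), consider a zero-dimensional caricature in which the production term $-2(\mathcal{E}\cdot\overline{B})$ is frozen to a constant $S$ and the diffusive flux is dropped; all numerical values below are hypothetical:

```python
# 0-D caricature of the helicity balance: dchi/dt = S - chi/(R_chi * tau_c),
# with the production S = -2(E.B) frozen and diffusion dropped.
# All parameter values are hypothetical, for illustration only.
S, R_chi, tau_c = 1.0e-3, 100.0, 1.0
dt, chi = 0.01, 0.0
for _ in range(200_000):                 # integrate far past t ~ R_chi*tau_c
    chi += dt * (S - chi / (R_chi * tau_c))
print(chi)   # relaxes to the steady state S*R_chi*tau_c = 0.1
```

The steady state illustrates how the dissipation parameter $R_{\chi}$ sets the level at which the small-scale helicity saturates for a given production rate.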
We use the solar convection zone model computed by \citet{stix:02}, in which the mixing length is defined as $\ell=\alpha_{\rm MLT}\left|\varLambda^{(p)}\right|^{-1}$, where $\boldsymbol{\varLambda}^{(p)}=\boldsymbol{\nabla}\log\overline{p}\,$ quantifies the pressure variation, and $\alpha_{\rm MLT}=2$. The turbulent diffusivity is parameterized in the form $\eta_{T}=C_{\eta}\eta_{T}^{(0)}$, where $\eta_{T}^{(0)}={\displaystyle \frac{u'\ell}{3}}$ is the characteristic mixing-length turbulent diffusivity, $\ell$ is the typical correlation length of the turbulence, and $C_{\eta}$ is a constant that controls the efficiency of large-scale magnetic field dragging by the turbulent flow. Currently, this parameter cannot be introduced in the mean-field theory in a consistent way. In this paper we use $C_{\eta}=0.05$. The differential rotation profile, $\varOmega=\varOmega_{0}f_{\varOmega}\left(x,\mu\right)$ (shown in Fig.~\ref{fig:CZ}a), is a slightly modified version of the analytic approximation proposed by \citet{antia98}: \begin{eqnarray} f_{\varOmega}\left(x,\mu\right) & = & \frac{1}{\varOmega_{0}}\Bigl[\varOmega_{0}+55\left(x-0.7\right)\phi\left(x,x_{0}\right)\phi\left(-x,-0.96\right)\label{eq:rotBA}\\ & - & 200\left(x-0.95\right)\phi\left(x,0.96\right)\nonumber \\ & + & \left(21P_{3}\left(\mu\right)+3P_{5}\left(\mu\right)\right)\left(\frac{\mu^{2}}{j_{p}\left(x\right)}+\frac{1-\mu^{2}}{j_{e}\left(x\right)}\right)\Bigr],\nonumber \\ j_{p} & = & \frac{1}{1+\exp\left(\frac{0.709-x}{0.02}\right)},\,\, j_{e}=\frac{1}{1+\exp\left(\frac{0.692-x}{0.01}\right)},\nonumber \end{eqnarray} where $\varOmega_{0}=2.87\cdot10^{-6}\,\mathrm{s}^{-1}$ is the equatorial angular velocity of the Sun at the surface, $x=r/R_{\odot}$, $\phi\left(x,x_{0}\right)=0.5\left[1+\tanh\left[100(x-x_{0})\right]\right]$, and $x_{0}=0.71$.
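For reference, the rotation profile of Eq.~(\ref{eq:rotBA}) can be evaluated numerically. In the sketch below, the grouping of the brackets and the interpretation of the coefficients 55, 200, 21, 3 as nHz values (with $\varOmega_{0}$ converted accordingly) are our reading of the printed equation, not something stated explicitly in the text:

```python
import numpy as np

OMEGA0_NHZ = 2.87e-6 / (2.0 * np.pi) * 1e9   # equatorial rate, ~457 nHz

def phi(x, x0):
    return 0.5 * (1.0 + np.tanh(100.0 * (x - x0)))

def f_Omega(x, mu, x0=0.71):
    """Normalised rotation profile of Eq. (rotBA); bracket grouping and
    nHz units of the numerical coefficients are assumptions."""
    P3 = 0.5 * (5.0 * mu**3 - 3.0 * mu)
    P5 = 0.125 * (63.0 * mu**5 - 70.0 * mu**3 + 15.0 * mu)
    jp = 1.0 / (1.0 + np.exp((0.709 - x) / 0.02))
    je = 1.0 / (1.0 + np.exp((0.692 - x) / 0.01))
    shape = mu**2 / jp + (1.0 - mu**2) / je
    return (OMEGA0_NHZ
            + 55.0 * (x - 0.7) * phi(x, x0) * phi(-x, -0.96)
            - 200.0 * (x - 0.95) * phi(x, 0.96)
            + (21.0 * P3 + 3.0 * P5) * shape) / OMEGA0_NHZ
```

At the equator ($\mu=0$) the Legendre term vanishes and the profile is controlled by the two radial transition functions $\phi$.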
\begin{figure} \begin{centering} \includegraphics[width=0.3\textwidth]{om_rt}\includegraphics[width=0.35\textwidth]{CZ}\includegraphics[width=0.35\textwidth]{hk} \par\end{centering} \begin{centering} \includegraphics[width=0.45\textwidth]{HW}\includegraphics[width=0.45\textwidth]{HW_P} \par\end{centering} \caption{\label{fig:CZ}Distributions of the angular velocity, the turbulent parameters, and the kinetic helicity inside the solar convection zone. The bottom panel shows the patterns of the pumping velocity fields for the toroidal magnetic field (left) and for the poloidal field (right). They were computed on the basis of Eqs.~(\ref{eq:erH},\ref{eq:etH},\ref{eq:ephH}).} \end{figure} \subsubsection{Pumping effects in the solar convection zone} The components of the strain tensor $\bm{\tilde{V}}$ in a spherical coordinate system are given by the matrix \[ \bm{\tilde{V}}=\left(\begin{array}{ccc} 0 & 0 & \tilde{V}_{\left(r,\varphi\right)}\\ 0 & 0 & \tilde{V}_{\left(\theta,\varphi\right)}\\ \tilde{V}_{\left(r,\varphi\right)} & \tilde{V}_{\left(\theta,\varphi\right)} & 0 \end{array}\right), \] where we take into account only the azimuthal component of the large-scale flow, $\tilde{V}_{\left(r,\varphi\right)}=r\sin\theta\partial_{r}\varOmega\left(r,\theta\right)$, \ $\tilde{V}_{\left(\theta,\varphi\right)}=\sin\theta\partial_{\theta}\varOmega\left(r,\theta\right)$, so $\bm{\hat{V}}\left(\bm{B}\right)=\left(B\tilde{V}_{\left(r,\varphi\right)},B\tilde{V}_{\left(\theta,\varphi\right)},\bm{B}_{i}^{p}\tilde{V}_{\left(i,\varphi\right)}\right)$. Substituting this into Eq.~(\ref{eq:EMFP}) we find the components of the mean electromotive force for the helicity--vorticity pumping effect, \begin{eqnarray} \mathcal{E}_{r}^{(H)} & \!\!=\!\! & \frac{\varOmega^{\ast}\tau_{c}}{2}\sin\theta\left\{ \!\left[\!
h_{\mathcal{K}}\left(f_{3}^{(\gamma)}-f_{1}^{(\gamma)}\right)+h_{\mathcal{C}}\left(f_{4}^{(\gamma)}-f_{2}^{(\gamma)}\right)\right]x\frac{\partial\tilde{\varOmega}}{\partial x}\!-\!2\left(\tilde{\varOmega}-1\right)\!\!\left[h_{\mathcal{K}}f_{1}^{(\gamma)}+h_{\mathcal{C}}f_{2}^{(\gamma)}\right]\!\!\right\} \!\! B,\label{eq:erH}\\ \mathcal{E}_{\theta}^{(H)} & \!\!=\!\! & \frac{\varOmega^{\ast}\tau_{c}}{2}\left\{ \!\sin^{2}\theta\!\left[\! h_{\mathcal{K}}\left(f_{3}^{(\gamma)}-f_{1}^{(\gamma)}\right)\!+\! h_{\mathcal{C}}\left(f_{4}^{(\gamma)}-f_{2}^{(\gamma)}\right)\!\right]\!\!\frac{\partial\tilde{\varOmega}}{\partial\mu}-2\mu\left(\tilde{\varOmega}-1\right)\!\!\left[h_{\mathcal{K}}f_{1}^{(\gamma)}+h_{\mathcal{C}}f_{2}^{(\gamma)}\right]\right\} \!\! B,\label{eq:etH}\\ \mathcal{E}_{\phi}^{(H)} & = & -\frac{\varOmega^{\ast}\tau_{c}}{2}\frac{\sin\theta}{x}\left[h_{\mathcal{K}}\left(f_{3}^{(\gamma)}+f_{1}^{(\gamma)}\right)+h_{\mathcal{C}}\left(f_{4}^{(\gamma)}+f_{2}^{(\gamma)}\right)\right]\frac{\partial\left(\tilde{\varOmega},A\right)}{\partial\left(x,\mu\right)}\label{eq:ephH}\\ & & -\frac{\left(\tilde{\varOmega}-1\right)\varOmega^{\ast}\tau_{c}}{x\sin\theta}\left[h_{\mathcal{K}}f_{1}^{(\gamma)}+h_{\mathcal{C}}f_{2}^{(\gamma)}\right]\left(\mu\frac{\partial A}{\partial x}+\frac{\sin^{2}\theta}{x}\frac{\partial A}{\partial\mu}\right),\nonumber \end{eqnarray} where $h_{\mathcal{C}}=C_{\mathcal{C}}{\displaystyle \frac{\overline{\chi}}{\mu_{0}\overline{\rho}\ell^{2}}}$. It remains to define the kinetic helicity distribution. We use a formula proposed in our earlier study (see \citealt{kps:06}), \begin{align*} h_{\mathcal{K}} & ={\displaystyle C_{\eta}C_{\mathcal{K}}}\frac{\overline{u^{(0)2}}}{2}\frac{\partial}{\partial r}\log\left(\overline{\rho}\sqrt{\overline{u^{(0)2}}}\right)F_{1}\cos\theta, \end{align*} where $F_{1}\left(\varOmega^{*}\right)$ was defined in the above cited paper. The radial profile of ${\displaystyle \frac{h_{\mathcal{K}}}{\cos\theta}}$ is shown in Figure \ref{fig:CZ}.
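On a numerical grid, the strain entries $\tilde{V}_{(r,\varphi)}=r\sin\theta\,\partial_{r}\varOmega$ and $\tilde{V}_{(\theta,\varphi)}=\sin\theta\,\partial_{\theta}\varOmega$ entering the electromotive force can be obtained by finite differences. The rotation law below is an arbitrary toy profile used only to exercise the routine, not the solar profile of Eq.~(\ref{eq:rotBA}):

```python
import numpy as np

def strain_components(Omega, r, theta):
    """V_(r,phi) = r sin(theta) dOmega/dr and
    V_(theta,phi) = sin(theta) dOmega/dtheta on an (r, theta) grid,
    using second-order centred finite differences."""
    dOm_dr = np.gradient(Omega, r, axis=0)
    dOm_dth = np.gradient(Omega, theta, axis=1)
    sin_th = np.sin(theta)[None, :]
    return r[:, None] * sin_th * dOm_dr, sin_th * dOm_dth

r = np.linspace(0.71, 0.99, 50)
theta = np.linspace(0.05, np.pi - 0.05, 60)
# toy rotation law Omega = 1 + 0.1 (r sin theta)^2
Omega = 1.0 + 0.1 * (r[:, None] * np.sin(theta)[None, :])**2
Vrp, Vtp = strain_components(Omega, r, theta)
```

For this toy law the derivatives are known in closed form ($\partial_{r}\varOmega=0.2\,r\sin^{2}\theta$, $\partial_{\theta}\varOmega=0.1\,r^{2}\sin 2\theta$), which makes the routine easy to verify.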
The radial profile of the kinetic helicity is shown in Figure 3a of the above cited paper. The parameters $C_{\mathcal{K},\mathcal{C}}$ are introduced to switch the pumping effects on and off in the model. \begin{figure} \begin{centering} \includegraphics[width=0.45\textwidth]{HS}\includegraphics[width=0.45\textwidth]{HS_P} \par\end{centering} \caption{\label{fig:sum-pump}The patterns of the total (including the diamagnetic and the density gradient effects) pumping velocity fields for the toroidal magnetic field (left) and for the poloidal field (right).} \end{figure} The expressions given by Eq.~(\ref{eq:EMFP}) are valid in the case of weak shear, when $\tau_{c}\max\left(\left|\nabla_{i}\overline{V}_{j}\right|\right)\ll1$. In terms of the strain tensor $\bm{\tilde{V}}$ this condition implies ${\displaystyle \varOmega^{\star}}\max\left(\left|r\partial_{r}\tilde{\varOmega}\right|,\left|\partial_{\theta}\tilde{\varOmega}\right|\right)\ll1$. It is not satisfied at the bottom of the solar convection zone, where the radial gradient of the angular velocity is strong, $\varOmega^{\star}\gg1$, and $\tau_{c}\max\left(\left|\nabla_{i}\overline{V}_{j}\right|\right)\approx2$. \citet{2010GApFD.104..167L} suggested that this pumping effect is quenched with increasing shear, inversely proportional to $\left(\tau_{c}\max\left(\left|\nabla_{i}\overline{V}_{j}\right|\right)\right)^{1\dots2}$. Therefore, we introduce an ad hoc quenching function for the pumping effect: \begin{equation} f^{(S)}=\frac{1}{1+C_{S}{\displaystyle \varOmega^{*s}\left(\left|r\frac{\partial\tilde{\varOmega}}{\partial r}\right|+\left|\frac{\partial\tilde{\varOmega}}{\partial\theta}\right|\right)^{s}}}, \end{equation} where $C_{S}$ is a constant controlling the magnitude of the quenching, and $s=1$. The results of \citet{2010GApFD.104..167L} suggest $1<s<2$, depending on the geometry of the large-scale shear.
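The quenching function $f^{(S)}$ is straightforward to implement; the sketch below reproduces the formula with $C_{S}$ and $s$ as free parameters:

```python
def f_S(x_dOm_dx, dOm_dth, Omega_star, C_S=1.0, s=1.0):
    """Ad hoc shear quenching of the pumping effect:
    f_S = 1 / (1 + C_S * Omega*^s * (|x dOm/dx| + |dOm/dth|)^s)."""
    shear = abs(x_dOm_dx) + abs(dOm_dth)
    return 1.0 / (1.0 + C_S * Omega_star**s * shear**s)
```

In the weak-shear limit $f^{(S)}\to1$ and the pumping is unquenched; for $\varOmega^{\star}\gg1$ and strong gradients the effect is suppressed, more sharply for larger $s$.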
We find that for the solar convection zone the amplitude of the pumping effect does not change very much ($\sim1\,\mathrm{m/s}$) as $s$ varies in the range $1\dots2$. From the given relations, using $\mbox{\boldmath ${\cal E}$} {}^{(H)}=\bm{U}^{\rm(eff)}\times\bm{\overline{B}}$, we find the effective drift velocity, $\bm{U}^{\rm(eff)}$, due to the helicity--vorticity pumping effect, and compute it taking into account the variation of the turbulence parameters in the solar convection zone. The bottom panel of Figure \ref{fig:CZ} shows the distribution of the velocity field $\bm{U}^{\rm(eff)}$ for the helicity--vorticity pumping effect for the toroidal and poloidal components of the large-scale magnetic field. The maximum velocity drift occurs in the middle and at the bottom of the convection zone. The direction of drift has equatorial and polar cells corresponding to two regions of the solar convection zone with different signs of the radial gradient of the angular velocity. The anisotropy in the transport of the toroidal and poloidal components of the large-scale magnetic field is clearly seen. The other important pumping effects are due to mean density and turbulence intensity gradients \citep{zeld57,kit:1991,kit-rud:1993a,2001ApJ...549.1183T}. These effects were estimated using Eq.~(\ref{eq:pump}). For these calculations we put $C_{\eta}=1$, $\varepsilon=0$, $C_{\mathcal{K}}=1$ and $\overline{\chi}=0$. Figure \ref{fig:sum-pump} shows the sum of the pumping effects for the toroidal and poloidal components of the mean magnetic field, including the helicity--vorticity pumping effect. In agreement with previous studies, it is found that the radial direction is the principal direction of mean-field transport in the solar convection zone. In its upper part the transport is downward because of pumping due to the density gradient \citep{kit:1991}.
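Note that the relation $\mathcal{E}^{(H)}=\bm{U}^{\rm(eff)}\times\overline{\bm{B}}$ determines only the part of $\bm{U}^{\rm(eff)}$ perpendicular to the mean field, via $\bm{U}_{\perp}=(\overline{\bm{B}}\times\mathcal{E}^{(H)})/|\overline{\bm{B}}|^{2}$. A minimal numerical sketch of this inversion, with arbitrary toy vectors:

```python
import numpy as np

def u_eff_perp(emf, B):
    """Perpendicular part of the effective drift velocity, recovered
    from E = U x B via U_perp = (B x E) / |B|^2; the component of U
    parallel to B drops out of the cross product."""
    return np.cross(B, emf) / np.dot(B, B)

B = np.array([0.0, 0.0, 2.0])      # toy (purely toroidal) mean field
U = np.array([0.5, -0.3, 0.8])     # toy drift velocity
emf = np.cross(U, B)               # E^(H) = U x B
U_rec = u_eff_perp(emf, B)         # recovers (0.5, -0.3, 0.0)
```

The identity $\bm{B}\times(\bm{U}\times\bm{B})=\bm{U}|\bm{B}|^{2}-\bm{B}(\bm{U}{\bm\cdot}\bm{B})$ makes the lost parallel component explicit.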
At the bottom of the convection zone the diamagnetic pumping effect produces downward transport as well \citep{kit:1991,1995A&A...296..557R}. The diamagnetic pumping is quenched inversely proportional to the Coriolis number (e.g., \citealp{kit:1991,pi08Gafd}) and it has the same order of magnitude as the helicity--vorticity pumping effect. The latter effect modifies the direction of the effective drift of the toroidal magnetic field near the bottom of the convection zone. There is also an upward drift of the toroidal field at low latitudes in the middle of the convection zone. It results from the combined effects of density gradient and global rotation \citep{kit:1991,2004IAUS..223..277K}. For the poloidal magnetic field the transport is downward everywhere in the convection zone. At the bottom of the convection zone the action of the diamagnetic pumping on the meridional component of the large-scale magnetic field is amplified by the helicity--vorticity pumping effect. The obtained pattern of large-scale magnetic field drift in the solar convection zone does not take into account nonlinear effects, e.g.\ magnetic buoyancy. The effect of mean-field buoyancy is rather small compared with flux-tube buoyancy (\citealp{1993A&A...274..647K}, cf.\ \citealp{2011A&A...533A..40G}). To find out the current helicity counterpart of the pumping effect we analyze dynamo models by solving Eqs.~(\ref{eq:dyn}, \ref{eq:hel}). The governing parameters of the model are $C_{\eta}=0.05$, $C_{\omega}={\displaystyle \frac{1}{3}C_{\alpha}}$. We discuss the choice of the governing parameters later. The other parameters of the model are given in Table~\ref{tab:The-parameters-of}. Because of the weakening factor $C_{\eta}$ the magnitude of the pumping velocity is about one order of magnitude smaller than what is shown in Figure~\ref{fig:sum-pump}.
Following \citet{pk11apjl}, we use a combination of ``open'' and ``closed'' boundary conditions at the top, controlled by a parameter $\delta=0.95$, with \begin{equation} \delta\frac{\eta_{T}}{r_{e}}B+\left(1-\delta\right)\mathcal{E}_{\theta}=0.\label{eq:tor-vac} \end{equation} This is similar to the boundary condition discussed by \citet{kit:00}. For the poloidal field we apply a combination of the local condition $A=0$ and the requirement of a smooth transition from the internal poloidal field to the external potential (vacuum) field: \begin{equation} \delta\left(\left.\frac{\partial A}{\partial r}\right|_{r=r_{e}}-\left.\frac{\partial A^{(vac)}}{\partial r}\right|_{r=r_{e}}\right)+\left(1-\delta\right)A=0.\label{eq:pol-vac} \end{equation} We assume perfect conductivity at the bottom boundary with standard boundary conditions. For the magnetic helicity, similar to \citet{guero10}, we put $\boldsymbol{\nabla}_{r}\bar{\chi}=0$ at the bottom of the domain and $\bar{\chi}=0$ at the top of the convection zone. \begin{table} \caption{\label{tab:The-parameters-of}The parameters of the models. Here, $B_{\max}$ is the maximum of the toroidal magnetic field strength inside the convection zone, $P$ is the dynamo period of the model.} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Model & $C_{\alpha}$ & $R_{\chi}$ & ${\displaystyle C_{\mathcal{K}}}$ & ${\displaystyle C_{\mathcal{C}}}$ & $B_{\max}$ [G] & $P$ [yr] \tabularnewline \hline \hline D1 & 0.025 & $10^{3}$ & 0 & 0 & 500 & 16\tabularnewline \hline D2 & 0.025 & $10^{3}$ & 1 & 0 & 250 & 13\tabularnewline \hline D3 & 0.03 & $10^{3}$ & 1 & 10 & 300 & 13\tabularnewline \hline D4 & 0.035 & $5\cdot10^{2}$ & 1 & 1 & 500 & 11\tabularnewline \hline \end{tabular} \end{table} In this paper we study dynamo models which include R\"adler's $\boldsymbol{\varOmega}\times\bm{J}$ dynamo effect due to a large-scale current and global rotation \citep{rad69}. There is also a dynamo effect due to large-scale shear and current \citep{kle-rog:04a}. 
The motivation to consider these additional turbulent sources in the mean-field dynamo comes from DNS dynamo experiments \citep{2007NJPh....9..305B,2008A&A...491..353K,2009PhRvL.102d4501H,2009A&A...500..633K} and from our earlier studies \citep{2009A&A...493..819P,2009A&A...508....9S}. The dynamo effect due to a large-scale current gives an additional source of large-scale poloidal magnetic field. This can help to solve the issue of the dynamo period being otherwise too short. Also, in the models the large-scale current dynamo effect produces cycles that overlap less than in dynamo models with the $\alpha$-effect alone. The choice of parameters in the dynamo is justified by our previous studies \citep{2009A&A...493..819P,pk11mf}, where we showed that solar-type dynamos can be obtained for $C_{\alpha}/C_{\omega}>2$. In those papers we find the approximate threshold to be $C_{\alpha}\approx0.02$ for a given diffusivity dilution factor of $C_{\eta}=0.05$. As follows from the results given in Fig.~\ref{fig:sum-pump}, the kinetic helicity--vorticity pumping effect has a negligible contribution in the near-surface layers, where downward pumping due to density stratification dominates. Therefore, it is expected that the surface dynamo waves are not affected if we discard magnetic helicity from the dynamo equations. Figure \ref{evo1} shows time-latitude diagrams for the toroidal and radial magnetic fields at the surface and for the toroidal magnetic field at the bottom of the convection zone for the two dynamo models without (D1) and with (D2) the helicity--vorticity pumping effect; in both models magnetic helicity is taken into account as the main dynamo quenching effect. To compare with observational data from a time-latitude diagram of sunspot area (e.g., \citealp{2011SoPh..273..221H}), we multiply the toroidal field component $B$ by a factor $\sin\theta$. This gives a quantity which is proportional to the flux of the large-scale toroidal field at colatitude $\theta$.
We further assume that the sunspot area is related to this flux. Near the surface, models D1 and D2 give similar patterns of magnetic field evolution. At the bottom of the convection zone model D1 shows both poleward and equatorward branches of dynamo wave propagation, which is in agreement with the Parker-Yoshimura rule. Both branches have nearly the same time scale of $\simeq$16 years. The results from model D2 show that at the bottom of the convection zone the poleward branch of the dynamo wave dominates. Thus we conclude that the helicity--vorticity pumping effect alters the propagation of the dynamo wave near the bottom of the solar convection zone. We find that models with magnetic helicity contributions to the pumping effect do not change this conclusion. \begin{figure} a)\includegraphics[width=0.85\textwidth,height=3.5cm]{2d0_t} b)\includegraphics[width=0.8\textwidth,height=3.5cm]{2d0_b} c)\includegraphics[width=0.85\textwidth,height=3.5cm]{2d1_t} d)\includegraphics[width=0.8\textwidth,height=3.5cm]{2d1_b} \caption{\label{evo1}Time-latitude diagrams of the toroidal and radial magnetic fields for the models D1 and D2: a) model D1, the toroidal field (iso-contours, $\pm0.25$~kG) near the surface and the radial field (gray-scale density plot); b) model D1, the toroidal field at the bottom of the solar convection zone, with contours drawn in the range $\pm0.5$~kG; c) the same as in panel a) for model D2; d) the same as in panel b) for model D2.} \end{figure} Figure \ref{fig:Snapshot-for-the} shows a typical snapshot of the magnetic helicity distribution in the northern hemisphere for all our models. The helicity has a negative sign in the bulk of the solar convection zone. Regions of positive small-scale current helicity roughly correspond to the domains where negative large-scale current helicity is concentrated. They are located in the middle of the solar convection zone and at the high and low latitudes near the top of the solar convection zone.
As follows from Fig.~\ref{fig:Snapshot-for-the}, the pumping effect due to current helicity may be efficient in the upper part of the solar convection zone, where it might intensify the equatorial drift of the dynamo wave along iso-surfaces of the angular velocity. \begin{figure} \includegraphics[width=0.5\textwidth]{2d0wh_b} \caption{\label{fig:Snapshot-for-the}Snapshots of the mean magnetic field and the current helicity distributions in the northern hemisphere in model D4. The left panel shows the field lines of the poloidal component of the mean magnetic field. The right panel shows the toroidal magnetic field (iso-contours $\pm500$G) and the current helicity (gray-scale density plot).} \end{figure} We find that the pumping effect that results from magnetic helicity is rather small in our models. This may be due to the weakness of the magnetic field. Observations \citep{zetal10} give a current helicity about one order of magnitude larger than what is shown in Fig.~\ref{fig:Snapshot-for-the}. In the model we estimate the current helicity as $H_{\mathcal{C}}={\displaystyle \frac{\overline{\chi}}{\mu_{0}\ell^{2}}}$. This result depends essentially on the mixing-length parameter $\ell$: the more strongly the helicity is concentrated toward the surface, the larger $H_{\mathcal{C}}$ becomes. From observations we do not know where the helical magnetic structures come from. In view of the given uncertainties we estimate the probable effect of a larger magnitude of magnetic helicity in the model by increasing the parameter $C_{\mathcal{C}}$ to 10 (model D3). In addition, we consider the results for the nonlinear model D4. It has a higher $C_{\alpha}$ and a lower $R_{\chi}$ to increase the nonlinear impact of the magnetic helicity on the large-scale magnetic field evolution. The top panel of Figure~\ref{evo2} shows a time-latitude diagram of the toroidal magnetic field and current helicity evolution near the surface for model D4.
We find a positive sign of the current helicity at the decay edges of the toroidal magnetic field butterfly diagram. There are also areas with positive magnetic helicity at high latitudes at the growing edges of the toroidal magnetic field butterfly diagram. The induced pumping velocity is about 1 $\mathrm{cm\, s^{-1}}$. The increase of the magnetic helicity pumping effect by a factor of 10 (model D3) shifts the latitude of the maximum of the toroidal magnetic field by about $5^{\circ}$ toward the equator. The induced pumping velocity is then about 5 $\mathrm{cm\, s^{-1}}$. Stronger nonlinearity (model D4) and a stronger magnetic helicity pumping effect (model D3) modify the butterfly diagram in different ways. Model D3 shows a simple shift of the maximum of the toroidal magnetic field toward the equator. Model D4 shows a fast drift of the large-scale toroidal field at the beginning of a cycle and a slow-down of the drift velocity as the cycle progresses. \begin{figure} \includegraphics[width=0.85\textwidth,height=3.5cm]{2d3wh_t} \includegraphics[width=0.85\textwidth,height=3.5cm]{2d2wh_tp} \caption{\label{evo2} Top, the near-surface time-latitude diagrams of the toroidal magnetic field and the current helicity for model D4.
Bottom, the near-surface time-latitude diagrams of the toroidal magnetic field and the latitudinal component of the drift velocity induced by the magnetic helicity for model D3.} \end{figure} Figure \ref{fig:The-latitude-of} shows in more detail the latitudinal drift of the maximum of the toroidal magnetic field during the cycle (left panel of Figure \ref{fig:The-latitude-of}), \begin{equation} \lambda_{\max}(t)=90^{\circ}-\underset{\theta>45^{\circ}}{\mathrm{arg\,max}}\left(\left|B_{S}(\theta)\right|\sin\theta\right),\label{eq:lmax} \end{equation} and the latitudinal drift of the centroid position of the toroidal magnetic flux (cf.\ \citealp{2011SoPh..273..221H}) \begin{equation} \lambda_{C}(t)=90^{\circ}-\frac{\int_{0}^{\pi/2}\theta B_{S}(\theta)\sin\theta d\theta}{\int_{0}^{\pi/2}B_{S}(\theta)\sin\theta d\theta},\label{eq:lcen} \end{equation} where $B_{S}(\theta)={\displaystyle \left\langle B\left(r,\theta\right)\right\rangle _{(0.9,0.99)R}}$ is the toroidal magnetic field averaged over the surface layers. Note that the overlap between subsequent cycles influences the value of $\lambda_{C}$ more than the value of $\lambda_{\max}$. The behaviour of $\lambda_{\max}$ in models D1, D2 and D3 reproduces qualitatively the exponential drift of the maximum latitude suggested by \citet{2011SoPh..273..221H}: \[ \lambda_{C}(t)=28^{\circ}\exp\left(-\frac{12t}{90}\right), \] where $t$ is time measured in years. Model D4 shows a change from fast (nearly steady dynamo wave) drift at the beginning of the cycle to slow drift in the decaying phase of the cycle. The overlap between subsequent cycles grows from model D1 to model D4. \begin{figure} \includegraphics[width=0.5\textwidth]{max_lat}\includegraphics[width=0.5\textwidth]{max_latcent} \caption{\label{fig:The-latitude-of}The drift of the latitude of maximum (left) and of the centroid position of the magnetic flux at the near-surface layer in the models D(1-4).
The dash-dotted line shows results for model D1, the red dashed line for model D2, the solid black line for model D3, the black dashed line for model D4, and the solid green line shows the exponential law of the sunspot-area centroid drift suggested by \citet{2011SoPh..273..221H}. } \end{figure} In all the models the highest latitude of the centroid position of the toroidal magnetic flux is below 30$^{\circ}$. Models D3 and D4 have nearly equal starting latitudes of the centroid position, about 24$^{\circ}$. This means that a model with increased magnetic helicity pumping produces nearly the same shift of the centroid position as a model with a strong nonlinear effect of magnetic helicity. \section{Discussion and conclusions} We have shown that the interaction of helical convective motions and differential rotation in the solar convection zone produces a turbulent drift of the large-scale magnetic field. The principal direction of the drift corresponds to the direction of the large-scale vorticity vector. The large-scale vorticity vector roughly follows the iso-surfaces of the angular velocity. Since the direction of the drift depends on the sign of helicity, the pumping effect is governed by the Parker-Yoshimura rule \citep{par55,yosh1975}. The effect is computed within the framework of mean-field magnetohydrodynamics using the minimal $\tau$-approximation. In the calculations, we have assumed that the turbulent kinetic and current helicities are given. The calculations were done for arbitrary Coriolis number. In agreement with \citet{mitr2009AA} and \citet{garr2011}, the analytical calculations show that the leading effect of pumping is described by a large-scale magnetic drift in the direction of the large-scale vorticity vector and by anisotropic pumping which produces a drift of the toroidal and poloidal components of the field in opposite directions.
The component of the drift that is induced by global rotation and helicity (second line in Eq.~(\ref{eq:EMFP})) is rather small compared to the main effect. The latter conclusion should be checked separately for a different model of background turbulence, taking into account the generation of kinetic helicity due to global rotation and stratification in a turbulent medium. We have estimated the pumping effect for the solar convection zone and compared it with other turbulent pumping effects including diamagnetic pumping and turbulent pumping that results from magnetic fluctuations in stratified turbulence \citep{kit:1991,pi08Gafd}. The latter is sometimes referred to as ``density-gradient pumping effect'' \citep{2004IAUS..223..277K}. The diamagnetic pumping is upward in the upper part of the convection zone and downward near the bottom. The velocity field of density-gradient pumping is more complicated (see Figure 4). However, its major effect is concentrated near the surface. Both diamagnetic pumping and density-gradient pumping effects are quenched inversely proportional to the Coriolis number \citep{kit:1991,pi08Gafd}. The helicity--vorticity pumping effect modifies the direction of large-scale magnetic drift at the bottom of the convection zone. This effect was illustrated by a dynamo model that shows a dominant poleward branch of the dynamo wave at the bottom of the convection zone. It is found that the magnetic helicity contribution of the pumping effect can be important for explaining the fine structure of the sunspot butterfly diagram. In particular, the magnetic helicity contribution results in a slow-down of equatorial propagation of the dynamo wave. The slow-down starts just before the maximum of the cycle. Observations indicate a similar behavior in sunspot activity \citep{ternul2007AN,2011SoPh..273..221H}. A behavior like this can be seen in flux-transport models as well (\citealp{2006ApJ...647..662R}). 
For the time being it is unclear what the differences between the various dynamo models are and how well they reproduce the observations. A more detailed analysis is needed. \bibliographystyle{plainnat}
\section{Decompactification limit and quantisation}\label{sec:decomp-limit} After fixing the light-cone gauge for bosons and fermions as in the previous sections, we obtain a Hamiltonian on a cylinder, defining the time evolution of a closed string. In general this model is complicated because of the non-linear nature of the interactions. The first step that we take is the so-called \emph{decompactification limit}. It essentially consists of taking the length of the string to be very large, $L\gg 1$. The model originally defined on the cylinder then becomes a problem on the two-dimensional plane. When this limit is taken, one should replace the periodic boundary conditions for the fields with boundary conditions decaying at infinity. The strategy is then to solve the model in the $L\to \infty$ limit, and to take into account the finite-length corrections in a later step. \begin{figure}[t] \centering \includegraphics[width=0.3\textwidth]{images/cylinder-worldsheet.pdf} \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] at ( 0 , -3cm) {\raisebox{2cm}{$\xrightarrow{P_-\to \infty}$}}; \end{tikzpicture} \includegraphics[width=0.4\textwidth]{images/plane-worldsheet.pdf} \caption{In the decompactification limit we take the total light-cone momentum $P_-$ to be very large. This is equivalent to taking the limit of infinite length of the string. The original model defined on a cylinder lives now on a plane.} \label{fig:dec-limit} \end{figure} The technical reason why the decompactification limit can be taken is that in uniform light-cone gauge the length of the string $L=P_-$ is equal to the total momentum conjugate to $x^-$, see~\eqref{eq:Ham-En-L-P-}. The momentum $P_-$ enters the light-cone Hamiltonian only through the integration limits for the worldsheet coordinate $\sigma$, therefore sending $P_-\to \infty$ has just the effect of decompactifying the cylinder.
The light-cone Hamiltonian is expressed in terms of the target-space charges as $E-J$. This means that in order to get configurations with finite worldsheet energy, we should take both $E$ and $J$ to be large, in such a way that their difference is finite. \subsection{Large tension expansion}\label{sec:large-tens-exp} In~\cite{Berenstein:2002jq} Berenstein, Maldacena and Nastase (BMN) showed that there exists a limit of AdS$_n\times$S$^n$ spaces that reproduces plane-wave geometries. For the $\sigma$-model on {AdS$_5\times$S$^5$} this matches the plane-wave background of~\cite{Blau:2001ne,Blau:2002dy}. In the light-cone gauge-fixed theory, this limit is equivalent to the usual expansion in powers of fields truncated at leading order. In this section we want to look at the ``near-BMN limit'', where we take this expansion beyond the leading order. In fact, in our case we can look at it as a large-tension expansion. To implement it one has to rescale the worldsheet coordinate $\sigma\to g \sigma$ and then the bosonic and fermionic fields as \begin{equation}\label{eq:field-expansion} X^\mu\to \frac{1}{\sqrt{g}} X^\mu\,, \qquad p_\mu\to \frac{1}{\sqrt{g}} p_\mu\,, \qquad \Theta_I\to \frac{1}{\sqrt{g}} \Theta_I\,. \end{equation} In the action, inverse powers of the string tension $g$ organise the contributions at different powers in the fields \begin{equation} \begin{aligned} S_{\text{g.f.}}= \int_{-\infty}^\infty {\rm d}\tau {\rm d} \sigma \, &\left( L_2+\frac{1}{g}L_4+\frac{1}{g^2}L_6+ \cdots \right). \end{aligned} \end{equation} Here $L_n$ is the contribution to the Lagrangian containing $n$ powers of the physical fields. Using~\eqref{eq:gauge-fix-lagr-I-order-form-fer}, at lowest order we find simply \begin{equation}\label{eq:quad-lagr-str} \begin{aligned} L_2= p_\mu\dot{X}^\mu +i\, \bar{\Theta}_I\Gamma_0\dot{\Theta}_I-\mathcal{H}_2. 
\end{aligned} \end{equation} The first two terms define a canonical Poisson structure for bosons and fermions\footnote{The kinetic term for fermions comes from the term $\frac{i}{2}p_M\bar{\Theta}_I\Gamma^M\dot{\Theta}_I=\frac{i}{2}p_+\bar{\Theta}_I[(1-a)\Gamma^t+a\Gamma^\phi]\dot{\Theta}+\frac{i}{2}p_-\bar{\Theta}_I[-\Gamma^t+\Gamma^\phi]\dot{\Theta}$ in~\eqref{eq:lagr-I-order-form-fer}, where Gamma matrices along transverse directions do not contribute thanks to the gauge-fixing for fermions~\eqref{eq:kappa-lcg}. At leading order we have to consider just the contribution of $p_-=1$, and we assume that $e_t^0\sim1, e_\phi^5\sim1$ expanding in transverse bosons.}. The form of the quadratic Hamiltonian depends on the specific theory considered. For the case of {AdS$_5\times$S$^5$} (this will be true also for its $\eta$-deformations), $\mathcal{H}_2$ is the Hamiltonian for eight free massive bosons and eight free massive fermions, see Section~\ref{sec:quartic-action-lcg-etaads}. For the case of {AdS$_3\times$S$^3\times$T$^4$} we find instead a collection of four bosons and four fermions that are massive, plus four bosons and four fermions that are massless, see Section~\ref{sec:quadr-charges-T4}. The massless fields are a consequence of the presence of the four-dimensional torus in the background. The higher order contributions to the Lagrangian define the interactions of the fields, which are organised in inverse powers of the string tension $g$. We notice that under this rescaling of the physical fields, the quantity $x'^-$ that solves the constraint $C_1$ in~\eqref{eq:Vira-bos-and-fer} has the form \begin{equation}\label{eq:xminus-rescaled-g} x'^-=-\frac{1}{g} \left( p_\mu X'^{\mu}+i\, \bar{\Theta}_I\Gamma_0\Theta_I' \right)+\mathcal{O}(1/g^2)\,, \end{equation} and the leading contribution is at order $1/g$. Let us now discuss quantisation of the model. 
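The bookkeeping of the rescaling~\eqref{eq:field-expansion} can be checked symbolically on a toy Lagrangian density: a term with $2n$ powers of the fields acquires a factor $g^{-n}$ from the field rescaling, and the overall factor of $g$ coming from the worldsheet rescaling $\sigma\to g\sigma$ shifts this to $g^{1-n}$, so the quadratic term is $\mathcal{O}(1)$, the quartic $\mathcal{O}(1/g)$, and so on. The density below is a stand-in, not the actual gauge-fixed superstring Lagrangian:

```python
import sympy as sp

g, X, p = sp.symbols('g X p', positive=True)

# toy Lagrangian density with quadratic, quartic and sextic field
# content (stand-ins for L2, L4, L6 of the text)
L = X * p + X**2 * p**2 + X**4 * p**2

# field rescaling X -> X/sqrt(g), p -> p/sqrt(g); the worldsheet
# rescaling sigma -> g*sigma contributes one overall factor of g
# through the integration measure
L_rescaled = sp.expand(g * L.subs({X: X / sp.sqrt(g), p: p / sp.sqrt(g)}))

# the 2n-field term now comes with g**(1-n): L2 + L4/g + L6/g**2
```

The same counting explains why $x'^-$ in~\eqref{eq:xminus-rescaled-g}, being quadratic in the fields, starts at order $1/g$.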
\medskip \subsection{Perturbative quantisation} Here we address the quantisation of the two-dimensional quantum field theory that we find on the worldsheet after the gauge-fixing and the decompactification limit. Assuming that a canonical Poisson structure for both bosons and fermions of the classical theory was achieved, in the quantised theory we can write equal-time commutation and anti-commutation relations \begin{equation} [X^\mu(\sigma,\tau),p_\nu(\sigma',\tau)]=i\, \delta^\mu_\nu \, \delta(\sigma-\sigma')\,, \qquad \{ \Theta^{\ul{a}}(\sigma,\tau),\Theta^\dagger_{\ul{b}}(\sigma',\tau) \} = \delta^{\ul{a}}_{\ul{b}}\, \delta(\sigma-\sigma')\,. \end{equation} Here $\ul{a},\ul{b}$ are indices that span all the eight complex fermionic degrees of freedom, remaining after gauge fixing. One may introduce oscillators for the bosonic fields \begin{equation} \begin{aligned} X^\mu(\sigma,\tau) &= \frac{1}{\sqrt{2\pi}} \int {\rm d} p\, \frac{1}{\sqrt{2\, \omega(p)}} \left( e^{ip\sigma} a^\mu(p,\tau) + e^{-ip\sigma} a^{\mu\dagger}(p,\tau) \right)\,, \\ p_\mu(\sigma,\tau) &= \frac{1}{\sqrt{2\pi}} \int {\rm d} p \ \frac{i}{2}\, \sqrt{2\, \omega(p)} \left( e^{-ip\sigma} a^\dagger_\mu(p,\tau) - e^{ip\sigma} a_{\mu}(p,\tau) \right)\,, \end{aligned} \end{equation} in such a way that the creation and annihilation operators satisfy canonical commutation relations \begin{equation} [a^\mu(p,\tau),a^\dagger_\nu(p',\tau)]=\delta^\mu_\nu \, \delta(p-p')\,. \end{equation} The explicit form of the frequency $\omega(p)$ is dictated by the quadratic Hamiltonian $\mathcal{H}_2$. 
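As a consistency check of the normalisations, substituting the mode expansions into the equal-time commutator and using $[a^\mu(p,\tau),a^\dagger_\nu(p',\tau)]=\delta^\mu_\nu\,\delta(p-p')$ gives
\begin{equation}
[X^\mu(\sigma,\tau),p_\nu(\sigma',\tau)]
=\frac{i\,\delta^\mu_\nu}{2\pi}\int {\rm d} p\ \frac{e^{ip(\sigma-\sigma')}+e^{-ip(\sigma-\sigma')}}{2}
=i\,\delta^\mu_\nu\,\delta(\sigma-\sigma')\,,
\end{equation}
so the factors $1/\sqrt{2\,\omega(p)}$ and $\sqrt{2\,\omega(p)}$ in the two expansions compensate each other, and the canonical structure is preserved for any choice of $\omega(p)$.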
Similarly, for fermions we may write \begin{equation} \Theta^{\ul{a}}(\sigma,\tau)= \frac{e^{i\, \phi_{\ul{a}}}}{\sqrt{2\pi}} \int \frac{{\rm d} p}{\sqrt{\omega(p)}}\, \left( e^{ip\sigma} f(p)\, a^{\ul{a}}(p,\tau) + e^{-ip\sigma} g(p)\, a^{{\ul{a}}\dagger}(p,\tau) \right)\,, \end{equation} where we have the freedom of choosing a phase $\phi_{\ul{a}}$, and we have introduced the wave-function parameters $f(p),\ g(p)$. The creation and annihilation operators satisfy canonical anti-commutation relations \begin{equation} \{a^{\ul{a}}(p,\tau),a^\dagger_{\ul{b}}(p',\tau)\}=\delta^{\ul{a}}_{\ul{b}} \, \delta(p-p')\,, \end{equation} if these functions satisfy\footnote{Typically one also sets $f^2(p)-g^2(p)=m$, so that $\Theta^\dagger_I \Theta_I$ in $\mathcal{H}_2$ generates a mass term written for the oscillators $a,a^\dagger$ multiplied by the mass $m$.} \begin{equation} f^2(p)+g^2(p)=\omega(p)\,, \qquad \frac{f(-p)g(-p)}{\omega(-p)}=-\frac{f(p)g(p)}{\omega(p)}\,. \end{equation} For simplicity, let us collect all bosonic and fermionic oscillators together and label them by $k=(\mu,\ul{a})$. The time evolution for these operators is dictated by \begin{equation}\label{eq:inter-op-time-ev} \dot{a}^{k}(p,\tau)=i\, [\gen{H}(a^\dagger,a),a^k(p,\tau)]\,, \end{equation} and similarly for creation operators. Here $\gen{H}(a^\dagger,a)$ is the full Hamiltonian written in terms of the oscillators. Because of the complicated nature of the interactions, one prefers to formulate the problem in terms of \emph{scattering}. We do not try to describe interactions at any time $\tau$, but rather we focus on the in- and out-operators that evolve freely and coincide with the ones of the interacting theory at $\tau=-\infty$ and $\tau=+\infty$ \begin{equation}\label{eq:bound-cond-in-out} a|_{\tau=-\infty} = a_{\text{in}}|_{\tau=-\infty}\,, \qquad a|_{\tau=+\infty} = a_{\text{out}}|_{\tau=+\infty}\,. 
\end{equation} They create in- and out-states \begin{equation} \begin{aligned} \ket{p_1,p_2,\ldots,p_n}^{\text{in}}_{k_1,k_2,\ldots,k_n}&=a^\dagger_{\text{in},k_1}(p_1)\cdots a^\dagger_{\text{in},k_n}(p_n)\ket{\mathbf{0}}\,, \\ \ket{p_1,p_2,\ldots,p_n}^{\text{out}}_{k_1,k_2,\ldots,k_n}&=a^\dagger_{\text{out},k_1}(p_1)\cdots a^\dagger_{\text{out},k_n}(p_n)\ket{\mathbf{0}}\,, \end{aligned} \end{equation} from the vacuum $\ket{\mathbf{0}}$, which is killed by annihilation operators. These operators are particularly simple because by definition interactions are switched off \begin{equation}\label{eq:free-op-time-ev} \begin{aligned} \dot{a}^{k}_{\text{in}}(p,\tau)&=i\, [\gen{H}_{2}(a^\dagger_{\text{in}},a_{\text{in}}),a^k_{\text{in}}(p,\tau)]\,, \\ \dot{a}^{k}_{\text{out}}(p,\tau)&=i\, [\gen{H}_{2}(a^\dagger_{\text{out}},a_{\text{out}}),a^k_{\text{out}}(p,\tau)]\,, \end{aligned} \end{equation} meaning that their time evolution is dictated just by the quadratic Hamiltonian $\gen{H}_2$. Since we want all pairs of creation and annihilation operators $(a^\dagger_{\text{in}},a_{\text{in}})$, $(a^\dagger_{\text{out}},a_{\text{out}})$ and $(a^\dagger,a)$ to satisfy canonical commutation relations, they all must be related by unitary operators. In particular, in- and out-operators are related to the interacting operators by \begin{equation} \begin{aligned} a(p,\tau)&= \mathbb{U}_{\text{in}}^\dagger(\tau) \cdot a_{\text{in}}(p,\tau) \cdot \mathbb{U}_{\text{in}}(\tau)\,, \\ a(p,\tau)&= \mathbb{U}_{\text{out}}(\tau) \cdot a_{\text{out}}(p,\tau) \cdot \mathbb{U}_{\text{out}}^\dagger(\tau)\,, \end{aligned} \end{equation} where we require $\mathbb{U}_{\text{in}}(\tau=-\infty)={1},\ \mathbb{U}_{\text{out}}(\tau=+\infty)={1}$ to respect the boundary conditions~\eqref{eq:bound-cond-in-out}.
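Before moving on, let us come back for a moment to the fermionic wave-function parameters $f(p),\,g(p)$ introduced above: the conditions they must satisfy are easy to check explicitly. The following minimal numerical sketch (not part of the original derivation) assumes the relativistic dispersion $\omega(p)=\sqrt{m^2+p^2}$ and the standard choice of the footnote, $f^2-g^2=m$, which is solved by $f(p)=\sqrt{(\omega+m)/2}$ and $g(p)=\mathrm{sign}(p)\sqrt{(\omega-m)/2}$; the sign of $g$ is what makes $fg/\omega$ odd in $p$.

```python
import numpy as np

m = 1.0  # mass of the excitation (assumed value, for illustration)

def omega(p):
    # relativistic dispersion relation (assumed form)
    return np.sqrt(m**2 + p**2)

def f(p):
    return np.sqrt((omega(p) + m) / 2.0)

def g(p):
    # the sign makes f(p) g(p) / omega(p) an odd function of p
    return np.sign(p) * np.sqrt((omega(p) - m) / 2.0)

ps = np.linspace(-5.0, 5.0, 101)
# f^2 + g^2 = omega : canonical anti-commutators for the oscillators
assert np.allclose(f(ps)**2 + g(ps)**2, omega(ps))
# f^2 - g^2 = m : the mass term in H_2 comes out multiplied by m
assert np.allclose(f(ps)**2 - g(ps)**2, m)
# parity condition on f g / omega
assert np.allclose(f(-ps)*g(-ps)/omega(-ps), -f(ps)*g(ps)/omega(ps))
```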
The unitary operator that we call $\mathbb{S}$ is actually the most interesting of them, as it relates in- and out-operators \begin{equation} a_{\text{in}}(p,\tau)=\mathbb{S}\cdot a_{\text{out}}(p,\tau) \cdot \mathbb{S}^\dagger\,, \qquad \mathbb{S}\ket{\mathbf{0}}=\ket{\mathbf{0}}\,. \end{equation} From this definition we have that the map between in- and out-states is given by the S-matrix \begin{equation} \ket{p_1,p_2,\ldots,p_n}^{\text{in}}_{k_1,k_2,\ldots,k_n}=\mathbb{S}\ket{p_1,p_2,\ldots,p_n}^{\text{out}}_{k_1,k_2,\ldots,k_n}, \end{equation} and consistency of the above relations implies \begin{equation} \mathbb{S}=\mathbb{U}_{\text{in}}(\tau)\cdot \mathbb{U}_{\text{out}}(\tau). \end{equation} Let us mention that the time dependence on the right hand side is only apparent; in fact, the in- and out-operators are free and evolve with the same time dependence, that cancels. This means that we may evaluate the expression at any preferred value of $\tau$. The three unitary operators are determined by imposing that the time evolutions~\eqref{eq:inter-op-time-ev} and~\eqref{eq:free-op-time-ev} are respected. For $\mathbb{U}_{\text{in}},\mathbb{U}_{\text{out}}$ one can check that \begin{equation} \begin{aligned} \mathbb{U}_{\text{in}}(\tau) &= \mathcal{T} \text{exp} \left( -i\int_{-\infty}^\tau {\rm d}\tau'\ \gen{V}\left(a_{\text{in}}^\dagger(\tau'),a_{\text{in}}(\tau')\right) \right)\,, \\ \mathbb{U}_{\text{out}}(\tau) &= \mathcal{T} \text{exp} \left( -i\int_{\tau}^{+\infty} {\rm d}\tau'\ \gen{V}\left(a_{\text{out}}^\dagger(\tau'),a_{\text{out}}(\tau')\right) \right)\,, \end{aligned} \end{equation} solves the desired equations, where we have introduced the potential $\gen{V}=\gen{H}-\gen{H}_2$, and $\mathcal{T}\text{exp}$ is the time-ordered exponential. 
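To illustrate how these formulae work in practice, the toy numerical sketch below (a generic $2\times2$ quantum-mechanical model; the potential $V(\tau)$ is invented for illustration and has nothing to do with the string-theory Hamiltonian) builds $\mathcal{T}\exp\left(-i\int V\right)$ as an ordered product of short-time propagators and compares it with the Dyson series truncated at second order; at weak coupling the two agree up to the neglected third-order terms.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

g = 0.01  # weak coupling, so the Dyson series converges fast

def V(t):
    # toy potential with [V(t), V(t')] != 0, so the time ordering matters
    return g * np.exp(-t**2) * (np.cos(t) * sx + np.sin(t) * sz)

a, b, N = -6.0, 6.0, 2400
dt = (b - a) / N
ts = a + (np.arange(N) + 0.5) * dt  # midpoint sampling

# time-ordered exponential as an ordered product; exp(-i V dt) is computed
# exactly for 2x2 matrices using (n.sigma)^2 = 1
U = I2.copy()
for t in ts:
    h = g * np.exp(-t**2)
    n = np.cos(t) * sx + np.sin(t) * sz
    U = (np.cos(h*dt) * I2 - 1j * np.sin(h*dt) * n) @ U  # later times act from the left

# Dyson series: S = 1 - i \int V - \int\int_{t2<t1} V(t1) V(t2) + O(g^3)
Vs = np.array([V(t) for t in ts])
term1 = -1j * Vs.sum(axis=0) * dt
below = np.cumsum(Vs, axis=0) * dt - Vs * dt      # \int^{t_k} V, with t_k excluded
term2 = -np.einsum('kab,kbc->ac', Vs, below) * dt \
        - 0.5 * np.einsum('kab,kbc->ac', Vs, Vs) * dt**2  # half-diagonal correction
S_dyson = I2 + term1 + term2

assert np.linalg.norm(U.conj().T @ U - I2) < 1e-10  # T-exp of a Hermitian V is unitary
assert np.linalg.norm(U - S_dyson) < 1e-4           # agreement up to O(g^3)
```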
For evaluating the S-matrix we can use the fact that boundary conditions simplify the formulae and write two equivalent results \begin{equation}\label{eq:S-mat-Texp} \begin{aligned} \mathbb{S}&=\mathbb{U}_{\text{in}}(+\infty)=\mathcal{T} \text{exp} \left( -i\int_{-\infty}^{\infty} {\rm d}\tau'\ \gen{V}\left(a_{\text{in}}^\dagger(\tau'),a_{\text{in}}(\tau')\right) \right)\,, \\ &=\mathbb{U}_{\text{out}}(-\infty)=\mathcal{T} \text{exp} \left( -i\int_{-\infty}^{\infty} {\rm d}\tau'\ \gen{V}\left(a_{\text{out}}^\dagger(\tau'),a_{\text{out}}(\tau')\right) \right)\,. \end{aligned} \end{equation} We conclude by pointing out that perturbation theory is a useful tool to compute the scattering processes. We get approximate and simpler results if we define the T-matrix as \begin{equation}\label{eq:def-Tmat} \mathbb{S}=1+\frac{i}{g} \mathbb{T}\,, \end{equation} and we take the large-tension expansion of~\eqref{eq:S-mat-Texp}. At leading order we obtain \begin{equation}\label{eq:pert-Tmat} \mathbb{T} = -g\int_{-\infty}^\infty{\rm d} \tau'\ \gen{V}(\tau')+\ldots \,, \end{equation} where $\gen{V}=1/g\, \gen{H}_4+\mathcal{O}(1/g^{2})$. We then recover the known fact that the quartic Hamiltonian provides the $2\to 2$ tree-level scattering elements. Subleading contributions in inverse powers of $g$ for the matrix $\mathbb{T}$ will give the quantum corrections for the $2\to 2$ scattering processes. In the next chapter we show how in some cases non-perturbative methods may be used to account for quantum corrections to all orders. \section{Fermions and type IIB}\label{sec:fermions-type-IIB} When symmetries allow for a supercoset description, the action of the string may be computed perturbatively in powers of fermions as it was done for the case of {AdS$_5\times$S$^5$} in~\cite{Metsaev:1998it} following the ideas of~\cite{Henneaux:1984mh}. For our discussions we will need only the contribution to the action at \emph{quadratic} order in fermions.
In order to be more general and account also for cases in which a coset description is not valid, we review the Green-Schwarz action for the superstring~\cite{Green:1983wt}. We work in type IIB, where we have two sets of 32-component Majorana-Weyl fermions $\Theta_I$ labelled by $I=1,2$. In most expressions we write only these labels and we omit the spinor indices, on which the ten-dimensional Gamma-matrices are acting. We get a total of 32 real degrees of freedom after imposing the chirality and the Majorana conditions \begin{equation} \Gamma_{11}\Theta_I=\Theta_I\,, \qquad\qquad \bar{\Theta}_I=\Theta^t_I \mathcal{C}\,. \end{equation} In the above equations, $\Gamma_{11}$ is constructed by multiplying all ten of the $32\times32$ Gamma-matrices $\Gamma_m$, and $\mathcal{C}$ is the charge conjugation matrix. The barred version of the fermions is defined in the standard way $\bar{\Theta}_I\equiv\Theta^\dagger_I \Gamma^0$. The Green-Schwarz action of type II superstring~\cite{Grisaru1985116} may be found order by order in fermions, and its explicit form is known to fourth order in $\Theta$~\cite{Wulff:2013kga}. For us it will be enough to stop at second order~\cite{Grisaru1985116,Cvetic:1999zs} \begin{equation}\label{eq:IIB-action-theta2} \begin{aligned} S^{\alg{f}^2}&= \int_{-{\frac{L}{2}}}^{\frac{L}{2}} \, {\rm d}\sigma {\rm d} \tau\ L^{\alg{f}^2}\,, \\ L^{\alg{f}^2}&=-\frac{g}{2} \ i\, \bar{\Theta}_I \left( \gamma^{\alpha\beta} \delta^{IJ} +\epsilon^{\alpha\beta} \sigma_3^{IJ} \right) {e}^m_\alpha \Gamma_m \, {D}^{JK}_\beta \Theta_K\,.
\end{aligned} \end{equation} In type IIB the operator ${D}^{IJ}_\alpha$ acting on $\Theta$ has the following expression \begin{equation} \begin{aligned} {D}^{IJ}_\alpha = & \delta^{IJ} \left( \partial_\alpha -\frac{1}{4} {\omega}^{mn}_\alpha \Gamma_{mn} \right) +\frac{1}{8} \sigma_3^{IJ} {e}^m_\alpha {H}_{mnp} \Gamma^{np} \\ &-\frac{1}{8} e^{\varphi} \left( \epsilon^{IJ} \Gamma^p {F}^{(1)}_p + \frac{1}{3!}\sigma_1^{IJ} \Gamma^{pqr} {F}^{(3)}_{pqr} + \frac{1}{2\cdot5!}\epsilon^{IJ} \Gamma^{pqrst} {F}^{(5)}_{pqrst} \right) {e}^m_\alpha \Gamma_m. \end{aligned} \end{equation} In the equations above, $e^m_\alpha=\partial_{\alpha}X^Me^m_M$ is the pullback of the vielbein on the worldsheet, and it is related to the spacetime metric as \begin{equation} G_{MN}=e^m_Me^n_N\eta_{mn}\,. \end{equation} The spin connection $\omega^{mn}_{\alpha}=\partial_{\alpha}X^M\omega_{M}^{mn}$ satisfies the equation \begin{equation}\label{eq:spi-conn-vielb} {\omega}_M^{mn}= - {e}^{N \, [m} \left( \partial_M {e}^{n]}_N - \partial_N {e}^{n]}_M + {e}^{n] \, P} {e}_M^p \partial_P {e}_{Np} \right), \end{equation} where the factor $1/2$ is included in the anti-symmetrisation of the indices $m,n$. Also the field-strength of the $B$-field appears in the fermionic action \begin{equation} H_{MNP}=3\partial_{[M}B_{NP]}=\partial_M B_{NP}+\partial_N B_{PM}+\partial_P B_{MN}\,. \end{equation} The quantities denoted by $F^{(n)}$ are the Ramond-Ramond field-strengths and $\varphi$ is the dilaton. The whole set of fields satisfies the supergravity equations of motion~\cite{Bergshoeff:1985su}. We refer to Appendix~\ref{app:IIBsugra} where we collect these equations. \medskip The Green-Schwarz action presented above enjoys a local fermionic symmetry called ``kappa-symmetry''~\cite{Green:1983wt,Grisaru1985116}. 
This is a generalisation of the symmetry found for superparticles~\cite{Siegel1983397} and it allows one to gauge away half of the fermions, thus recovering the correct number of physical degrees of freedom. At lowest order, the kappa-variation is implemented on the bosonic and fermionic coordinates as \begin{equation}\label{eq:kappa-var-32} \begin{aligned} \delta_{\kappa}X^M &= - \frac{i}{2} \ \bar{\Theta}_I \Gamma^M \delta_{\kappa} \Theta_I + \mathcal{O}(\Theta^3)\,, \qquad &&\Gamma^M={e}^{Mm} \Gamma_m\,, \\ \delta_{\kappa} \Theta_I &= -\frac{1}{4} (\delta^{IJ} \gamma^{\alpha\beta} - \sigma_3^{IJ} \epsilon^{\alpha\beta}) \Gamma_\beta {K}_{\alpha J}+ \mathcal{O}(\Theta^2)\,, \qquad &&\Gamma_\beta={e}_{\beta}^m \Gamma_m\,, \end{aligned} \end{equation} where we have introduced local fermionic parameters ${K}_{\alpha I}$ with chirality opposite to the one of the fermions $\Gamma_{11}{K}_{\alpha I}=-{K}_{\alpha I}$. Together with the kappa-variation of the worldsheet metric \begin{equation} \begin{aligned} \delta_\kappa \gamma^{\alpha\beta}&= 2i\ \Pi^{IJ\, \alpha\a'}\Pi^{JK\, \beta\b'} \ \bar{{K}}_{I\alpha'}{D}^{KL}_{\beta'}\Theta_{L}+ \mathcal{O}(\Theta^3), \\ \Pi^{IJ\, \alpha\a'}&\equiv\frac{\delta^{IJ}\gamma^{\alpha\a'}+\sigma_3^{IJ}\epsilon^{\alpha\a'}}{2}\,, \end{aligned} \end{equation} one finds invariance of the action under kappa-symmetry $\delta_{\kappa}(S^\alg{b}+S^{\alg{f}^2})=0$ at first order in $\Theta$. Let us use this freedom to gauge away half of the fermions. We consider the Gamma-matrices $\Gamma_0$ and $\Gamma_5$---corresponding to the coordinates $t$ and $\phi$ used in Section~\ref{sec:Bos-string-lcg} to fix the gauge for bosonic strings---and we define the combinations\footnote{Another definition that seems natural from the point of view of a generic $a$-gauge is $\Gamma^+=(1-a)\Gamma^0+a\Gamma^5, \ \Gamma^-=-\Gamma^0+\Gamma^5$.} \begin{equation}\label{eq:defin-Gamma-pm} \Gamma^{\pm}=\frac{1}{2} (\Gamma^5\pm\Gamma^0)\,. 
\end{equation} Kappa-symmetry is fixed by imposing~\cite{Metsaev:2000yu} \begin{equation}\label{eq:kappa-lcg} \Gamma^+\Theta_I=0\quad \implies\quad \bar{\Theta}_I\Gamma^+=0\,. \end{equation} This gauge simplifies considerably the form of the Lagrangian. To start, in this gauge all terms containing an even number of Gamma-matrices in the light-cone directions vanish, as it is seen by using the identity \begin{equation} \Gamma^+\Gamma^-+\Gamma^-\Gamma^+=\mathbf{1}_{32}\,. \end{equation} Moreover, the motivation for choosing this gauge is that at leading order in the usual perturbative expansion in powers of fields it gives a non-vanishing and standard kinetic term for fermions, see Section~\ref{sec:decomp-limit}. \medskip Let us first show how to generalise the procedure of Section~\ref{sec:Bos-string-lcg} by including the fermionic contributions. We first define an effective metric $\hat{G}_{MN}$ and an effective $B$-field $\hat{B}_{MN}$ containing all the couplings to the fermions that do not involve derivatives on them \begin{equation} \begin{aligned} \hat{G}_{MN}=&{G}_{MN} + i \,\bar{\Theta}_I \, \, {e}^m_{(M} \Gamma_m \Bigg[ -\frac{1}{4}\delta^{IJ} {\omega}^{pq}_{N)} \Gamma_{pq} + \frac{1}{8} \sigma_3^{IJ}{e}^n_{N)} H_{npq} \Gamma^{pq} \\ & -\frac{1}{8} e^\varphi \left(\epsilon^{IJ} \Gamma^p F^{(1)}_p +\frac{1}{3!}\sigma_1^{IJ} \Gamma^{pqr} F^{(3)}_{pqr} +\frac{1}{2\cdot5!} \epsilon^{IJ}\Gamma^{pqrst} F^{(5)}_{pqrst} \right) {e}^n_{N)} \Gamma_n \Bigg]\Theta_J\, , \\ \hat{B}_{MN}=&{B}_{MN} - i \,\sigma_3^{IK}\bar{\Theta}_I \, \, {e}^m_{[M} \Gamma_m \Bigg[ -\frac{1}{4}\delta^{KJ} {\omega}^{pq}_{N]} \Gamma_{pq} + \frac{1}{8} \sigma_3^{KJ}{e}^n_{N]} H_{npq} \Gamma^{pq} \\ & -\frac{1}{8} e^\varphi \left(\epsilon^{KJ} \Gamma^p F^{(1)}_p +\frac{1}{3!}\sigma_1^{KJ} \Gamma^{pqr} F^{(3)}_{pqr} +\frac{1}{2\cdot5!} \epsilon^{KJ}\Gamma^{pqrst} F^{(5)}_{pqrst} \right) {e}^n_{N]} \Gamma_n \Bigg]\Theta_J\, . 
\end{aligned} \end{equation} This allows us to rewrite the sum of the bosonic and fermionic Lagrangians as \begin{equation} \begin{aligned} L^{\alg{b}}+L^{\alg{f}^2}=-\frac{g}{2} \Bigg( &\, \gamma^{\alpha\beta} \partial_\alpha X^M \partial_\beta X^N \hat{G}_{MN} -\epsilon^{\alpha\beta} \partial_\alpha X^M \partial_\beta X^N \hat{B}_{MN} \\ &+i\, \bar{\Theta}_I \left( \gamma^{\alpha\beta} \delta^{IJ} +\epsilon^{\alpha\beta} \sigma_3^{IJ} \right) {e}^m_\alpha \Gamma_m \, {\partial}_\beta \Theta_J \Bigg)\,. \end{aligned} \end{equation} The momenta $p_M$ conjugate to the bosonic coordinates $X^M$ receive fermionic corrections, that using the above rewriting are \begin{equation} \begin{aligned} p_M =&- g \gamma^{\tau\beta} \partial_\beta X^N \hat{G}_{MN} + g X^{'N} \hat{B}_{MN} \\ &-g\frac{i}{2}\, \bar{\Theta}_I \left( \gamma^{\tau\beta} \delta^{IJ} \Gamma_M \, {\partial}_\beta \Theta_J +\sigma_3^{IJ} \Gamma_M \, \Theta_J'\right)\,. \end{aligned} \end{equation} After inverting the above relation for $\dot{X}^M$ we find that the Lagrangian is \begin{equation}\label{eq:lagr-I-order-form-fer} \begin{aligned} L^{\alg{b}}+L^{\alg{f}^2}=&p_M\dot{X}^M +\frac{i}{2}p_M\bar{\Theta}_I\Gamma^M\dot{\Theta}_I \\ &+\frac{i}{2}g\, \sigma_3^{IJ} \, X'^M\bar{\Theta}_I\Gamma_M\dot{\Theta}_J +\frac{i}{2}g\, B_{MN}X'^M\bar{\Theta}_I \Gamma^N\dot{\Theta}_I \\& + \frac{\gamma^{\tau\sigma}}{\gamma^{\tau\tau}} C_1 + \frac{1}{2g \gamma^{\tau\tau}} C_2\,. 
\end{aligned} \end{equation} At second order in fermions, the two Virasoro constraints read as \begin{equation}\label{eq:Vira-bos-and-fer} \begin{aligned} C_1 =& p_M X'^{M}+\frac{i}{2}p_M\bar{\Theta}_I\Gamma^M\Theta_I'+\frac{i}{2}g\, \sigma_3^{IJ}\, X'^M \bar{\Theta}_I\Gamma_M\Theta_J'+\frac{i}{2}g\, B_{MN}X'^M\bar{\Theta}_I \Gamma^N\Theta_I', \\ C_2 =& \hat{G}^{MN} p_M p_N+ g^2 X'^{M} X'^{N} \hat{G}_{MN} - 2 g\, p_M X'^{Q} \hat{G}^{MN} \hat{B}_{NQ} + g^2 X'^{P} X'^{Q} \hat{B}_{MP} \hat{B}_{NQ} \hat{G}^{MN}\\ &+ig^2\, X'^M \bar{\Theta}_I\Gamma_M\Theta_I'+ig\, \sigma_3^{IJ} p_M\bar{\Theta}_I\Gamma^M \Theta_J'-ig^2 \, {B}_{MP}X'^P \sigma_3^{IJ} \bar{\Theta}_I \Gamma^M \Theta_J'. \end{aligned} \end{equation} At this point we introduce bosonic light-cone coordinates as in~\eqref{eq:lc-coord} and fix the gauge as in~\eqref{eq:unif-lcg}. Together with the gauge fixing for the fermions~\eqref{eq:kappa-lcg}, we then find the gauge-fixed Lagrangian at order $\Theta^2$. Now $x'^-$ and $p_+$ must be determined by solving the Virasoro constraints $C_1=0,\ C_2=0$ that include the fermionic contributions as in~\eqref{eq:Vira-bos-and-fer}. The gauge-fixed Lagrangian\footnote{We have assumed as in the previous section that the $B$-field vanishes along light-cone coordinates.} \begin{equation}\label{eq:gauge-fix-lagr-I-order-form-fer} \begin{aligned} \left(L^{\alg{b}}+L^{\alg{f}^2}\right)_{\text{g.f.}}=&p_\mu\dot{X}^\mu +\frac{i}{2}\bar{\Theta}_I\left[\delta^{IJ}\left(p_+\Gamma^{\check{+}}+p_-\Gamma^{\check{-}}\right)+g\, \sigma_3^{IJ} \, x'^-\Gamma_{\check{-}}\right]\dot{\Theta}_J \\ &+p_+ \,, \end{aligned} \end{equation} shows that the Hamiltonian for the gauge-fixed model is still given by minus the momentum conjugate to $x^+$, namely $\mathcal{H}=-p_+(X^\mu,p_\mu,\Theta_I)$. In the kinetic term for fermions of the gauge-fixed Lagrangian, Gamma-matrices with transverse indices disappear thanks to the kappa-gauge~\eqref{eq:kappa-lcg}.
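The Gamma-matrix identities used in this section are easy to verify in an explicit representation. The sketch below builds the ten $32\times32$ Gamma-matrices from tensor products of Pauli matrices (one standard construction, chosen here for illustration; the text does not commit to a specific representation) and checks the Clifford algebra, $\Gamma_{11}^2=\mathbf{1}$, the identity $\Gamma^+\Gamma^-+\Gamma^-\Gamma^+=\mathbf{1}_{32}$, and that $\Gamma^+\Gamma^-$ is a projector of rank $16$, consistently with the kappa-gauge $\Gamma^+\Theta_I=0$ removing half of the $32$ components.

```python
import numpy as np
from functools import reduce

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

# ten anticommuting 32x32 matrices from 5-fold tensor products of Pauli matrices
gammas_E = [reduce(np.kron, [s3]*k + [s] + [id2]*(4 - k))
            for k in range(5) for s in (s1, s2)]

# Lorentzian signature eta = diag(-1, +1, ..., +1): take Gamma^0 = i * gammas_E[9]
Gamma = [1j * gammas_E[9]] + gammas_E[:9]
eta = np.diag([-1.0] + [1.0] * 9)
I32 = np.eye(32, dtype=complex)

# Clifford algebra {Gamma^m, Gamma^n} = 2 eta^{mn}
for m in range(10):
    for n in range(10):
        assert np.allclose(Gamma[m] @ Gamma[n] + Gamma[n] @ Gamma[m],
                           2 * eta[m, n] * I32)

Gamma11 = reduce(np.matmul, Gamma)       # product of all ten Gamma-matrices
assert np.allclose(Gamma11 @ Gamma11, I32)

Gp = (Gamma[5] + Gamma[0]) / 2           # Gamma^+ and Gamma^- as in the text
Gm = (Gamma[5] - Gamma[0]) / 2
assert np.allclose(Gp @ Gm + Gm @ Gp, I32)       # the identity quoted in the text
assert np.allclose(Gp @ Gp, 0 * I32)             # Gamma^+ is nilpotent
assert np.isclose(np.trace(Gp @ Gm).real, 16.0)  # projector onto half the components
```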
We defined Gamma-matrices with checks on the indices to distinguish them from the ones introduced in~\eqref{eq:defin-Gamma-pm}, as now we consider linear combinations of Gamma-matrices with \emph{curved} indices $\Gamma_M=e_M^m\Gamma_m$ \begin{equation} \begin{aligned} &\Gamma^{\check{+}}=\phantom{-}a\Gamma^\phi+(1-a)\Gamma^t\,, \qquad &&\Gamma^{\check{-}}=\Gamma^\phi-\Gamma^t\,, \\ &\Gamma_{\check{-}}=-a\Gamma_t+(1-a)\Gamma_\phi\,, \qquad &&\Gamma_{\check{+}}=\Gamma_t+\Gamma_\phi\,. \end{aligned} \end{equation} The kinetic term for the fermions defines a Poisson structure that in general is not canonical. One may choose to keep this or rather implement field redefinitions to recast the kinetic term in the standard form. The description simplifies when we use the usual perturbative expansion in powers of fields. We explain how to implement it in the next section, after presenting the decompactification limit. \section{The Lagrangian}\label{sec:-eta-def-lagr-quad-theta} We first repeat the exercise of computing the Lagrangian in the undeformed case, as done in~\cite{Metsaev:1998it}, and then we derive the results for the $\eta$-deformed model. \subsection{Undeformed case} When we send $\eta\to 0$ we recover the Lagrangian for the superstring on {AdS$_5\times$S$^5$} \begin{equation} L = - \frac{g}{2} \left( \gamma^{\alpha\beta}\Str[A^{(2)}_\alpha A^{(2)}_\beta]+\epsilon^{\alpha\beta}\Str[A^{(1)}_\alpha A^{(3)}_\beta] \right)\,, \end{equation} where $A^{(k)}=P^{(k)}A$. The purely bosonic Lagrangian is easily found by setting the fermions to zero and one obtains \begin{equation} L_{\{00\}} = - \frac{g}{2} \gamma^{\alpha\beta} e^m_\alpha e^n_\beta \, \eta_{mn}. \end{equation} We are using the notation $\{00\}$ to indicate that we are considering both currents $A_\alpha$ and $A_\beta$ entering the definition of the Lagrangian at order 0 in the fermions. This Lagrangian matches the one presented in~\eqref{eq:bos-act-undef-adsfive}.
If we want to look at the Lagrangian that is quadratic in fermions, we have to compute three terms, which in our notation we call $\{02\},\{20\},\{11\}$. It is convenient to consider the contributions $\{02\},\{20\}$ together. In fact---using the properties of the supertrace---it is easy to show that their sum is symmetric in $\alpha,\beta$, meaning that what we get is multiplied just by $\gamma^{\alpha\beta}$ \begin{equation} L_{\{02\}} + L_{\{20\}} = - \frac{g}{2} \gamma^{\alpha\beta} \, i \bar{\theta}_I e^m_\alpha \boldsymbol{\gamma}_m D^{IJ}_\beta \theta_J . \end{equation} By similar means one also shows that the contribution $\{11\}$ is antisymmetric in $\alpha,\beta$ and thus yields the quadratic order of the Wess-Zumino term \begin{equation} \begin{aligned} L_{\{11\}} &=- \frac{{g}}{2} \epsilon^{\alpha\beta} \bar{\theta}_I \sigma_3^{IJ} \, i \, e^m_\alpha \boldsymbol{\gamma}_m D^{JK}_\beta \theta_K +\text{tot. der.} \end{aligned} \end{equation} For the details of the computation we refer to the discussion for the deformed case after Eq.~\eqref{eq:orig-lagr-101}. The sum of the contributions at quadratic order in $\theta$ gives \begin{equation} L^{\alg{f}^2}=- \frac{{g}}{2} \, i \, \bar{\theta}_I \left(\gamma^{\alpha\beta} \delta^{IJ}+ \epsilon^{\alpha\beta} \sigma_3^{IJ} \right) e^m_\alpha \boldsymbol{\gamma}_m D^{JK}_\beta \theta_K , \end{equation} which matches the Lagrangian expected for type IIB~\eqref{eq:IIB-action-theta2}. In particular one finds a five-form~\cite{Metsaev:1998it} \begin{equation} \slashed{F}^{(5)}=F^{(5)}_{m_1m_2m_3m_4m_5}\Gamma^{m_1m_2m_3m_4m_5}=4e^{-\varphi_0}(\Gamma^{01234}-\Gamma^{56789})\,, \end{equation} originating from the term multiplied by $\epsilon^{IJ}$ in the definition~\eqref{eq:op-DIJ-psu224-curr} of $D^{IJ}$, and a constant dilaton $\varphi=\varphi_0$.
\subsection{Deformed case}\label{sec:def-lagr-supercos} In the deformed case the Lagrangian is defined as~\cite{Delduc:2013qra} \begin{equation}\label{eq:def-deformed-lagr-full} \begin{aligned} L &= - \frac{g}{4} (1+\eta^2) (\gamma^{\alpha\beta} - \epsilon^{\alpha\beta}) \Str[\tilde{d}(A_\alpha) \, \mathcal{O}^{-1}(A_\beta)] \\ & = - \frac{g}{2} \frac{\sqrt{1+\varkappa^2}}{1+\sqrt{1+\varkappa^2}} (\gamma^{\alpha\beta} - \epsilon^{\alpha\beta}) \Str[\tilde{d}(A_\alpha) \, \mathcal{O}^{-1}(A_\beta)]. \end{aligned} \end{equation} In the notation introduced in the previous section, the bosonic Lagrangian already obtained in Section~\ref{sec:def-bos-model} is \begin{equation} L_{\{000\}} = - \frac{\tilde{g}}{2} (\gamma^{\alpha\beta} - \epsilon^{\alpha\beta}) \ e^m_\alpha e^n_\beta {k_n}^p \eta_{mp}, \qquad \tilde{g} \equiv g\sqrt{1+\varkappa^2}. \end{equation} Here we need three numbers to label the contribution to the Lagrangian: we indicate the order in powers of fermions for the current $A_\alpha$, for the inverse operator $\mathcal{O}^{-1}$ and for the current $A_\beta$ respectively. When we rewrite this result in the usual form~\eqref{eq:bos-str-action} of the Polyakov action we recover the deformed metric and the $B$-field of Section~\ref{sec:def-bos-model}, see Eq.~\eqref{eq:metrc-etaAdS5S5-sph-coord} and~\eqref{eq:B-field-etaAdS5S5-sph-coord}. 
We may rewrite the deformed metric in terms of a vielbein $\widetilde{G}_{MN}=\widetilde{e}^m_M\widetilde{e}^n_N \eta_{mn}$, that we choose to be diagonal \begin{equation}\label{eq:def-vielb-comp} \begin{aligned} \widetilde{e}^0_t=\frac{\sqrt{1+\rho ^2}}{\sqrt{1-\varkappa ^2 \rho ^2}}, \quad \widetilde{e}^1_{\psi_2}=-\rho \sin \zeta, \quad \widetilde{e}^2_{\psi_1}=-\frac{\rho \cos \zeta}{\sqrt{1+\varkappa ^2 \rho ^4 \sin ^2\zeta}}, \\ \widetilde{e}^3_\zeta=-\frac{\rho }{\sqrt{1+\varkappa ^2 \rho ^4 \sin ^2\zeta}}, \quad \widetilde{e}^4_\rho=-\frac{1}{\sqrt{1+\rho ^2} \sqrt{1-\varkappa ^2 \rho ^2}}, \\ \widetilde{e}^5_\phi=\frac{\sqrt{1-r^2}}{\sqrt{1+\varkappa ^2 r^2}}, \quad \widetilde{e}^6_{\phi_2}=-r \sin \xi , \quad \widetilde{e}^7_{\phi_1}=-\frac{r \cos \xi }{\sqrt{1+\varkappa ^2 r^4 \sin ^2\xi}}, \\ \widetilde{e}^8_\xi=-\frac{r}{\sqrt{1+\varkappa ^2 r^4 \sin ^2\xi}}, \quad \widetilde{e}^9_r=-\frac{1}{\sqrt{1-r^2} \sqrt{1+\varkappa ^2 r^2}}. \end{aligned} \end{equation} The Lagrangian quadratic in fermions is now divided into six terms: three of them when we choose $\mathcal{O}^{-1}$ at order 0 in fermions $(\{002\},\{200\},\{101\})$, two when it is at order 1 $(\{011\},\{110\})$ and one when it is at order 2 $(\{020\})$. We start by considering the following two contributions \begin{equation} \begin{aligned} L_{\{002\}} & = - \frac{\tilde{g}}{2} (\gamma^{\alpha\beta} - \epsilon^{\alpha\beta}) \, \frac{i}{2}\bar{\theta}_I (e^m_\alpha {k^n}_{m}\boldsymbol{\gamma}_n ) D^{IJ}_\beta \theta_J , \\ L_{\{200\}} & = - \frac{\tilde{g}}{2} (\gamma^{\alpha\beta} - \epsilon^{\alpha\beta}) \, \frac{i}{2}\bar{\theta}_I (e^m_\beta {k_{m}}^{n}\boldsymbol{\gamma}_n ) D^{IJ}_\alpha \theta_J . \end{aligned} \end{equation} where ${k^n}_{m} = {k_q}^p \eta^{nq}\eta_{mp}$. Now the sum of $L_{\{002\}}+L_{\{200\}}$ gives a non-trivial contribution also to the Wess-Zumino term, since the matrix $k_{mn}$ has a non-vanishing anti-symmetric part. 
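As a sanity check of the components~\eqref{eq:def-vielb-comp}, the following symbolic sketch verifies that at $\varkappa=0$ the vielbein reduces to the undeformed {AdS$_5\times$S$^5$} expressions in these coordinates, and that squaring a diagonal entry reproduces the corresponding deformed metric component (here only $\widetilde{G}_{tt}$, as an example).

```python
import sympy as sp

rho, r, zeta, xi = sp.symbols('rho r zeta xi', positive=True)
kappa = sp.symbols('varkappa', positive=True)

# diagonal deformed vielbein of eq. (def-vielb-comp): AdS part, then sphere part
e = [
    sp.sqrt(1 + rho**2) / sp.sqrt(1 - kappa**2 * rho**2),                    # e^0_t
    -rho * sp.sin(zeta),                                                     # e^1_psi2
    -rho * sp.cos(zeta) / sp.sqrt(1 + kappa**2 * rho**4 * sp.sin(zeta)**2),  # e^2_psi1
    -rho / sp.sqrt(1 + kappa**2 * rho**4 * sp.sin(zeta)**2),                 # e^3_zeta
    -1 / (sp.sqrt(1 + rho**2) * sp.sqrt(1 - kappa**2 * rho**2)),             # e^4_rho
    sp.sqrt(1 - r**2) / sp.sqrt(1 + kappa**2 * r**2),                        # e^5_phi
    -r * sp.sin(xi),                                                         # e^6_phi2
    -r * sp.cos(xi) / sp.sqrt(1 + kappa**2 * r**4 * sp.sin(xi)**2),          # e^7_phi1
    -r / sp.sqrt(1 + kappa**2 * r**4 * sp.sin(xi)**2),                       # e^8_xi
    -1 / (sp.sqrt(1 - r**2) * sp.sqrt(1 + kappa**2 * r**2)),                 # e^9_r
]

# kappa -> 0 limit: the undeformed vielbein in the same coordinates
undeformed = [
    sp.sqrt(1 + rho**2), -rho * sp.sin(zeta), -rho * sp.cos(zeta), -rho,
    -1 / sp.sqrt(1 + rho**2), sp.sqrt(1 - r**2), -r * sp.sin(xi),
    -r * sp.cos(xi), -r, -1 / sp.sqrt(1 - r**2),
]
for em, em0 in zip(e, undeformed):
    assert sp.simplify(em.subs(kappa, 0) - em0) == 0

# sample metric component: G_tt = eta_00 (e^0_t)^2
Gtt = -e[0]**2
assert sp.simplify(Gtt + (1 + rho**2) / (1 - kappa**2 * rho**2)) == 0
```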
Considering the case $\{101\}$, it is easy to see that the insertion of $\op^{\text{inv}}_{(0)}$ between two odd currents does not change the fact that the expression is anti-symmetric in $\alpha,\beta$. In Appendix~\ref{app:der-Lagr-101} we show the steps needed to rewrite the original result~\eqref{eq:orig-lagr-101} in the standard form \begin{equation}\label{eq:Lagr-101} \begin{aligned} L_{\{101\}} &= - \frac{\tilde{g}}{2} \epsilon^{\alpha\beta} \bar{\theta}_L \, i \, e^m_\alpha \boldsymbol{\gamma}_m \left( \sigma_3^{LK} D^{KJ}_\beta \theta_J -\frac{\varkappa}{1+\sqrt{1+\varkappa^2}} \ \epsilon^{LK} \mathcal{D}^{KJ}_\beta \theta_J \right) \\ &= - \frac{\tilde{g}}{2} \epsilon^{\alpha\beta} \bar{\theta}_I \left( \sigma_3^{IJ} -\frac{\varkappa}{1+\sqrt{1+\varkappa^2}} \ \epsilon^{IJ} \right) \, i \, e^m_\alpha \boldsymbol{\gamma}_m \mathcal{D}_\beta \theta_J + \frac{\tilde{g}}{4} \epsilon^{\alpha\beta} \bar{\theta}_I \sigma_1^{IJ} e^m_\alpha \boldsymbol{\gamma}_m e^n_\beta \boldsymbol{\gamma}_n \theta_J , \end{aligned} \end{equation} up to a total derivative. Let us now consider the inverse operator at first order in the $\theta$ expansion. The two contributions $\{011\},\{110\}$ can be naturally considered together\footnote{The result can be put in this form thanks to the properties~\eqref{eq:swap-lambda}.} \begin{equation} \begin{aligned} L_{\{011\}+\{110\}} = & - \frac{\tilde{g}}{4} (\gamma^{\alpha\beta} - \epsilon^{\alpha\beta}) \bar{\theta}_K \Bigg[ - (\varkappa \sigma_1^{KI}-(-1+\sqrt{1+\varkappa^2})\delta^{KI}) \left( i\boldsymbol{\gamma}_p +\frac{1}{2}\boldsymbol{\gamma}_{mn} \lambda_{p}^{mn} \right) \\ & + (\varkappa \sigma_3^{KI} - (-1+\sqrt{1+\varkappa^2})\epsilon^{KI}) \ i\boldsymbol{\gamma}_n {\lambda_p}^n \Bigg] (k^p_{\ q}e^q_{\alpha} D^{IJ}_\beta +{k_{q}}^{p}e^q_{\beta} D^{IJ}_\alpha )\theta_J. 
\end{aligned} \end{equation} To conclude, the last contribution to the Lagrangian that we should consider is the one in which the inverse operator is at order $\theta^2$. We find \begin{equation} \begin{aligned} L_{\{020\}} = & - \frac{\tilde{g}}{2} (\gamma^{\alpha\beta} - \epsilon^{\alpha\beta}) \ \frac{\varkappa}{4} e^v_\alpha e^m_\beta \, {k^{u}}_v {k_m}^n \, \bar{\theta}_K \\ \Bigg[ &- 2 \delta^{KI} \left( \boldsymbol{\gamma}_u \left(\boldsymbol{\gamma}_n +\frac{i}{4} \lambda_n^{pq} \boldsymbol{\gamma}_{pq} \right) - \frac{i}{4} \boldsymbol{\gamma}_{pq} \boldsymbol{\gamma}_n \lambda^{pq}_{\ u}\right) - \epsilon^{KI} \left( \boldsymbol{\gamma}_u {\lambda_n}^p \boldsymbol{\gamma}_p - \boldsymbol{\gamma}_p \boldsymbol{\gamma}_n {\lambda_u}^p \right) \\ & -(-1+\sqrt{1+\varkappa^2}) \delta^{KI} \bigg( \left( \boldsymbol{\gamma}_u -\frac{i}{2} \boldsymbol{\gamma}_{pq} \lambda_u^{pq} \right) \left(\boldsymbol{\gamma}_n +\frac{i}{2} \lambda_n^{rs} \boldsymbol{\gamma}_{rs} \right) + \boldsymbol{\gamma}_p {\lambda_u}^p {\lambda_n}^r \boldsymbol{\gamma}_r \bigg) \\ & -(-1+\sqrt{1+\varkappa^2}) \epsilon^{KI} \bigg(- \boldsymbol{\gamma}_p {\lambda_u}^p \left(\boldsymbol{\gamma}_n +\frac{i}{2} \lambda_n^{rs} \boldsymbol{\gamma}_{rs}\right) + \left( \boldsymbol{\gamma}_u -\frac{i}{2} \boldsymbol{\gamma}_{pq} \lambda_u^{pq} \right) {\lambda_n}^r \boldsymbol{\gamma}_r \bigg) \\ & + \varkappa \sigma_1^{KI} \bigg( \left( \boldsymbol{\gamma}_u -\frac{i}{2} \boldsymbol{\gamma}_{pq} \lambda_u^{pq} \right) \left(\boldsymbol{\gamma}_n +\frac{i}{2} \lambda_n^{rs} \boldsymbol{\gamma}_{rs}\right) - \boldsymbol{\gamma}_p {\lambda_u}^p {\lambda_n}^r \boldsymbol{\gamma}_r \bigg) \\ & -\varkappa \sigma_3^{KI} \bigg( \boldsymbol{\gamma}_p{\lambda_u}^p \left(\boldsymbol{\gamma}_n +\frac{i}{2} \lambda_n^{rs} \boldsymbol{\gamma}_{rs}\right) + \left( \boldsymbol{\gamma}_u -\frac{i}{2} \boldsymbol{\gamma}_{pq} \lambda_u^{pq} \right) {\lambda_n}^r \boldsymbol{\gamma}_r \bigg) \Bigg] \theta_I . 
\end{aligned} \end{equation} Summing up all the above contributions we discover that the result is \emph{not} written in the standard form of the Green-Schwarz action for type IIB superstring~\eqref{eq:IIB-action-theta2}. This issue is addressed in the next section. \subsection{Canonical form}\label{sec:canonical-form} The Lagrangian for the deformed model that we obtain from the definition~\eqref{eq:def-deformed-lagr-full} is not in the standard form of the Green-Schwarz action for type IIB superstring. It is clear that a field redefinition of the bosonic and fermionic coordinates will in general modify the form of the action. The strategy of this section is to find a field redefinition that recasts the result that we have obtained in the desired form~\eqref{eq:IIB-action-theta2}. Let us focus for the moment just on the contributions involving derivatives on fermions, whose expression is not canonical. For convenience we collect these terms here. We write separately the contributions contracted with $\gamma^{\alpha\beta}$ and $\epsilon^{\alpha\beta}$ \begin{equation}\label{eq:non-can-lagr-kin} \begin{aligned} L^{\gamma,\partial} = -\frac{\tilde{g}}{2} \ \gamma^{\alpha\beta} \bar{\theta}_I \Bigg[ & \frac{i}{2} (\sqrt{1+\varkappa^2} \delta^{IJ} - \varkappa \sigma_1^{IJ} ) \boldsymbol{\gamma}_n -\frac{1}{4} (\varkappa \sigma_1^{IJ} -(-1+ \sqrt{1+ \varkappa^2}) \delta^{IJ} ) \lambda^{pq}_{n} \boldsymbol{\gamma}_{pq} \\ & +\frac{i}{2} (\varkappa \sigma_3^{IJ} -(-1+ \sqrt{1+ \varkappa^2}) \epsilon^{IJ} ) {\lambda_{n}}^{p} \boldsymbol{\gamma}_{p} \Bigg] ({k^n}_{m}+{k_{m}}^n) e^m_\alpha \partial_\beta \theta_J, \end{aligned} \end{equation} \begin{equation}\label{eq:non-can-lagr-WZ} \begin{aligned} L^{\epsilon,\partial} = -\frac{\tilde{g}}{2} \ \epsilon^{\alpha\beta} \bar{\theta}_I \Bigg[ & \Bigg( -\frac{i}{2} (\sqrt{1+\varkappa^2} \delta^{IJ} - \varkappa \sigma_1^{IJ} ) \boldsymbol{\gamma}_n +\frac{1}{4} (\varkappa \sigma_1^{IJ} -(-1+ \sqrt{1+ \varkappa^2}) \delta^{IJ} ) 
\lambda^{pq}_{n} \boldsymbol{\gamma}_{pq} \\ & -\frac{i}{2} (\varkappa \sigma_3^{IJ} -(-1+ \sqrt{1+ \varkappa^2}) \epsilon^{IJ} ) {\lambda_{n}}^{p} \boldsymbol{\gamma}_{p} \Bigg) ({k^n}_{m}-{k_{m}}^n) \\ & +i \left( \sigma_3^{IJ} - \frac{-1+ \sqrt{1+ \varkappa^2}}{\varkappa} \epsilon^{IJ} \right) \boldsymbol{\gamma}_m \Bigg] e^m_\alpha \partial_\beta \theta_J. \end{aligned} \end{equation} To simplify the result we first redefine our fermions as \begin{equation}\label{eq:red-fer-2x2-sp} \theta_I \to \frac{\sqrt{1+\sqrt{1+\varkappa^2}}}{\sqrt{2}} \left(\delta^{IJ} + \frac{\varkappa}{1+\sqrt{1+\varkappa^2}} \sigma_1^{IJ} \right) \theta_J. \end{equation} The contributions to the Lagrangian that we are considering are then transformed as \begin{equation} \begin{aligned} L^{\gamma,\partial} \to L^{\gamma,\partial} = L^{\gamma,\partial}_1 + L^{\gamma,\partial}_2, \\ L^{\gamma,\partial}_1 = -\frac{\tilde{g}}{2} \ \gamma^{\alpha\beta} \bar{\theta}_I \Bigg[ & \frac{i}{2} \delta^{IJ} \boldsymbol{\gamma}_n +\frac{i}{2} \varkappa \sigma_3^{IJ} {\lambda_{n}}^{p} \boldsymbol{\gamma}_{p} \Bigg] ({k^n}_{m}+{k_{m}}^n) e^m_\alpha \partial_\beta \theta_J, \\ L^{\gamma,\partial}_2 = -\frac{\tilde{g}}{2} \ \gamma^{\alpha\beta} \bar{\theta}_I \Bigg[ & -\frac{1}{4} (\varkappa \sigma_1^{IJ} +(-1+ \sqrt{1+ \varkappa^2}) \delta^{IJ} ) \lambda^{pq}_{n} \boldsymbol{\gamma}_{pq} \\ & -\frac{i}{2} (-1+ \sqrt{1+ \varkappa^2}) \epsilon^{IJ} {\lambda_{n}}^{p} \boldsymbol{\gamma}_{p} \Bigg] ({k^n}_{m}+{k_{m}}^n) e^m_\alpha \partial_\beta \theta_J. 
\end{aligned} \end{equation} \begin{equation} \begin{aligned} L^{\epsilon,\partial} \to L^{\epsilon,\partial} &= L^{\epsilon,\partial}_1 + L^{\epsilon,\partial}_2, \\ L^{\epsilon,\partial}_1 = -\frac{\tilde{g}}{2} \ \epsilon^{\alpha\beta} \bar{\theta}_I \Bigg[ & -\Bigg( \frac{i}{2} \delta^{IJ} \boldsymbol{\gamma}_n +\frac{i}{2} \varkappa \sigma_3^{IJ} {\lambda_{n}}^{p} \boldsymbol{\gamma}_{p} \Bigg) ({k^n}_{m}-{k_{m}}^n) +i \sigma_3^{IJ} \boldsymbol{\gamma}_m \Bigg] e^m_\alpha \partial_\beta \theta_J , \\ L^{\epsilon,\partial}_2 = -\frac{\tilde{g}}{2} \ \epsilon^{\alpha\beta} \bar{\theta}_I \Bigg[ & \Bigg( \frac{1}{4} (\varkappa \sigma_1^{IJ} +(-1+ \sqrt{1+ \varkappa^2}) \delta^{IJ} ) \lambda^{pq}_{n} \boldsymbol{\gamma}_{pq} \\ & +\frac{i}{2} (-1+ \sqrt{1+ \varkappa^2}) \epsilon^{IJ} {\lambda_{n}}^{p} \boldsymbol{\gamma}_{p} \Bigg) ({k^n}_{m}-{k_{m}}^n) \\ & -i \frac{-1+ \sqrt{1+ \varkappa^2}}{\varkappa} \epsilon^{IJ} \boldsymbol{\gamma}_m \Bigg] e^m_\alpha \partial_\beta \theta_J. \end{aligned} \end{equation} The various terms have been divided into $L_1^\partial$ and $L_2^\partial$ according to the symmetry properties of the objects involved. In particular, given an expression of the form $\theta_I M^{IJ} \partial\theta_J$, we decide to organise the terms according to \begin{equation} \begin{aligned} \theta_I M^{IJ} \partial\theta_J = -\partial\theta_I M^{IJ} \theta_J &\quad\implies\quad L_1^\partial\,, \\ \theta_I M^{IJ} \partial\theta_J = +\partial\theta_I M^{IJ} \theta_J &\quad\implies\quad L_2^\partial\,. \end{aligned} \end{equation} The symmetry properties are dictated by purely algebraic manipulations---we are not integrating by parts---based on the symmetry properties of the gamma matrices contained in $M^{IJ}$, and on the symmetry properties of $M^{IJ}$ under the exchange of $I,J$. We also use the ``Majorana-flip'' relations of Eq.~\eqref{eq:symm-gamma-otimes-gamma}.
We make this distinction because, as we now show, we can remove $L^{\gamma,\partial}_2$ by shifting the bosonic coordinates with $\varkappa$-dependent corrections that are quadratic in fermions. Let us consider the redefinition \begin{equation}\label{eq:red-bos} X^M \longrightarrow X^M + \bar{\theta}_I \ f^M_{IJ} (X) \ \theta_J, \end{equation} where $f^M_{IJ} (X)$ is a function of the bosonic coordinates that for the moment is not fixed. Requiring that the shift is non-vanishing---we use~\eqref{eq:Maj-flip}---shows that the quantity $f^M_{IJ} (X)$ has the same symmetry properties as the terms that we collected in $L_2^\partial$. This shift produces contributions to the \emph{fermionic} Lagrangian originating from the \emph{bosonic} one~\eqref{eq:bos-lagr-eta-def-Pol}. We find that it is modified as $L^{\alg{b}} \to L^{\alg{b}} +\delta L^{\alg{b},\gamma}_m +\delta L^{\alg{b},\gamma}_2 +\delta L^{\alg{b},\epsilon}_m +\delta L^{\alg{b},\epsilon}_2+\mathcal{O}(\theta^4)$ where \begin{equation}\label{eq:red-bos-lagr} \begin{aligned} \delta L^{\alg{b},\gamma}_m &= + \tilde{g} \gamma^{\alpha\beta} \left( - \partial_\alpha X^M \ \bar{\theta}_I \ \widetilde{G}_{MN} \left( \partial_\beta f^N_{IJ} \right) \ \theta_J - \frac{1}{2} \partial_\alpha X^M \partial_\beta X^N \partial_P \widetilde{G}_{MN} \ \bar{\theta}_I \, f^P_{IJ} \theta_J \right),\\ \delta L^{\alg{b},\gamma}_2 &= + \tilde{g} \gamma^{\alpha\beta} \left( - 2 \partial_\alpha X^M \ \bar{\theta}_I \ \widetilde{G}_{MN} f^N_{IJ} \ \partial_\beta \theta_J \right),\\ \delta L^{\alg{b},\epsilon}_m &=+ \tilde{g} \epsilon^{\alpha\beta} \left( + \partial_\alpha X^M \ \bar{\theta}_I \ \widetilde{B}_{MN} \left( \partial_\beta f^N_{IJ} \right) \ \theta_J + \frac{1}{2} \partial_\alpha X^M \partial_\beta X^N \partial_P \widetilde{B}_{MN} \ \bar{\theta}_I \, f^P_{IJ} \theta_J \right), \\ \delta L^{\alg{b},\epsilon}_2 &=+ \tilde{g} \epsilon^{\alpha\beta} \left( 2 \partial_\alpha X^M \ \bar{\theta}_I \ \widetilde{B}_{MN}
f^N_{IJ} \ \partial_\beta \theta_J \right). \end{aligned} \end{equation} Here we have used $\partial \bar{\theta}_I \ f^M_{IJ} (X) \ \theta_J = + \bar{\theta}_I \ f^M_{IJ} (X) \ \partial \theta_J$, a consequence of the symmetry properties of $f^M_{IJ} (X)$, and we have stopped at quadratic order in fermions. It is now easy to see that if we define the function \begin{equation}\label{eq:def-shift-bos-f} \begin{aligned} f^M_{IJ}(X) &= e^{Mp} \Bigg[ \frac{1}{8} \left( \varkappa \sigma_1^{IJ} - (1-\sqrt{1+\varkappa^2}) \delta^{IJ} \right) \lambda_{p}^{mn} \boldsymbol{\gamma}_{mn} -\frac{i}{4} (1-\sqrt{1+\varkappa^2}) \epsilon^{IJ} {\lambda_p}^n \boldsymbol{\gamma}_n \Bigg], \end{aligned} \end{equation} then we are able to remove the contribution $L^{\gamma,\partial}_2$ completely from the Lagrangian \begin{equation} L^{\gamma,\partial}_2 + \delta L^{\alg{b},\gamma}_2 =0. \end{equation} On the other hand, this shift of the bosonic coordinates is not able to remove $L^{\epsilon,\partial}_2$ completely: there is actually cancellation of the terms with\footnote{This statement is true if one also includes the components $B_{t\rho},B_{\phi r}$ of the $B$-field in the bosonic Lagrangian. Clearly these will also contribute new terms with no derivatives on fermions contained in $\delta L_m^{\alg{b},\epsilon}$ of~\eqref{eq:red-bos-lagr}. If these components are not included, cancellation of terms with $\delta^{IJ}, \sigma_1^{IJ}$ is not complete, but what is left may be rewritten as a term with no derivatives on fermions, up to a total derivative. The two ways of proceeding are equivalent.} $\delta^{IJ}, \sigma_1^{IJ}$, but the ones with $\epsilon^{IJ}$ are not removed.
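The structure of the variation~\eqref{eq:red-bos-lagr} is nothing but the first-order Taylor expansion of the bosonic Lagrangian under the shift of coordinates: one term where the derivative hits the metric ($\delta L_m$) and one where it hits the shift itself ($\delta L_2$). As a sanity check of this structure only (not of the ten-dimensional computation), here is a toy one-dimensional sketch in Python; all function choices below are arbitrary and hypothetical.

```python
# Toy check of the structure of delta L^b: shifting X -> X + eps*h(X) in
# L = G(X) * (dX/dt)^2 produces, at first order in eps,
#   delta L = G'(X) * h(X) * (dX/dt)^2 + 2 * G(X) * (dX/dt) * d/dt[h(X)],
# the one-dimensional analogues of delta L^{b,gamma}_m and delta L^{b,gamma}_2.
# G and h are arbitrary toy choices, not the actual background data.
import math

def G(x):  return 2.0 + math.sin(x)      # toy "metric"
def dG(x): return math.cos(x)
def h(x):  return math.cos(3.0 * x)      # toy shift profile (analogue of theta f theta)
def dh(x): return -3.0 * math.sin(3.0 * x)

def lagr(x, xdot):
    return G(x) * xdot**2

t = 0.7
x, xdot = t**2, 2.0 * t                  # trajectory x(t) = t^2 and its velocity
eps = 1e-6

# shifted trajectory: X = x + eps*h(x), with dX/dt = xdot * (1 + eps*h'(x))
num = (lagr(x + eps * h(x), xdot * (1.0 + eps * dh(x))) - lagr(x, xdot)) / eps

# first-order prediction: analogue of delta L^{b,gamma}_m + delta L^{b,gamma}_2
first = dG(x) * h(x) * xdot**2 + 2.0 * G(x) * dh(x) * xdot**2
```

The two terms in `first` mirror the two kinds of contributions: the derivative acting on the metric gives the $\delta L_m$-type piece, while the derivative acting on the shift gives the $\delta L_2$-type piece that is used above to cancel $L^{\gamma,\partial}_2$.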
However, in the Wess-Zumino term we are allowed to perform partial integration\footnote{Performing partial integration in the Lagrangian with $\gamma^{\alpha\beta}$ would generate derivatives of the worldsheet metric and also of $\partial_\alpha X^M$.} to rewrite the result such that---up to a total derivative---the partial derivative acts on the bosons and not on the fermions \begin{equation}\label{eq:WZ-shift-bos-eps} \begin{aligned} L^{\epsilon,\partial}_2 + \delta L^{\alg{b},\epsilon}_2 = & \frac{\tilde{g}}{2} \epsilon^{\alpha\beta} \bar{\theta}_I \frac{-1+\sqrt{1+\varkappa^2}}{\varkappa} \epsilon^{IJ} \\ & e^m_\alpha \left( i \delta^q_m - \frac{i}{2} \varkappa (k^n_{\ m} - {k_{m}}^{n} ) \lambda_n^{\ q} + \frac{i}{2} \varkappa \tilde{B}_{mn} (k^{pn} + k^{np} ) \lambda_p^{\ q} \right) \boldsymbol{\gamma}_q \partial_\beta \theta_J \\ = & \frac{\tilde{g}}{2} \epsilon^{\alpha\beta} \bar{\theta}_I \frac{-1+\sqrt{1+\varkappa^2}}{\varkappa} \epsilon^{IJ} e^m_\alpha i \boldsymbol{\gamma}_m \partial_\beta \theta_J \\ = & -\frac{\tilde{g}}{4} \epsilon^{\alpha\beta} \bar{\theta}_I \frac{-1+\sqrt{1+\varkappa^2}}{\varkappa} \epsilon^{IJ} \partial_\alpha X^M \left(\partial_\beta e^m_M \right) i\boldsymbol{\gamma}_m \theta_J + \text{tot. der.} \end{aligned} \end{equation} This method works thanks to the symmetry properties of $f^M_{IJ}(X)$. We have also used the identity \begin{equation} k^p_{\ m} - {k_{m}}^{p} - \tilde{B}_{mn} (k^{pn} + k^{np} ) =0. \end{equation} After the shift of the bosonic coordinates, the only terms containing derivatives on fermions are $L^{\gamma,\partial}_1$ and $L^{\epsilon,\partial}_1$. Let us stress again that the shift will also introduce new couplings without derivatives on fermions, as shown in~\eqref{eq:red-bos-lagr}. We collect in Eqs.~\eqref{eq:lagr-gamma-no-F-red} and~\eqref{eq:lagr-epsilon-no-F-red} the expression for the total Lagrangian at this point.
\medskip In order to put the remaining terms in canonical form we redefine the fermions as $\theta_I \to U_{IJ}\theta_J$, where the matrix $U_{IJ}$ acts both on the $2\times 2$ space spanned by the labels $I,J$ and on the space of spinor indices---which we omit here. We actually write the matrix $U_{IJ}$ as factorised in the AdS and sphere spinor indices parts \begin{equation}\label{eq:red-ferm-Lor-as} \begin{aligned} \theta_I &\to( U_{IJ}^{\alg{a}}\otimes U_{IJ}^{\alg{s}})\theta_J\,, \\ \theta_{I\ul{\alpha}\ul{a}} &\to( U_{IJ}^{\alg{a}})_{\ul{\alpha}}^{\ \ul{\nu}} (U_{IJ}^{\alg{s}})_{\ul{a}}^{\ \ul{b}}\theta_{J\ul{\nu}\ul{b}}\,. \end{aligned} \end{equation} This is not the most generic redefinition, but it will turn out to be enough. Each of the matrices $U_{IJ}^{\alg{a}}$ and $U_{IJ}^{\alg{s}}$ may be expanded in terms of the tensors spanning the $2\times 2$ space \begin{equation} U_{IJ}^{\alg{a},\alg{s}}=\delta_{IJ}\, U_{\delta}^{\alg{a},\alg{s}}+\sigma_{1\, IJ}\, U_{\sigma_1}^{\alg{a},\alg{s}}+\epsilon_{IJ}\, U_{\epsilon}^{\alg{a},\alg{s}}+\sigma_{3\, IJ}\, U_{\sigma_3}^{\alg{a},\alg{s}}\,. \end{equation} The objects $U_{\mu}^{\alg{a},\alg{s}}$ with $\mu=\delta,\sigma_1,\epsilon,\sigma_3$ are $4\times 4$ matrices that may be written in the convenient basis of $4\times 4$ gamma matrices. From the Majorana condition~\eqref{eq:Majorana-cond-compact-not} we find that in order to preserve $\theta_I^\dagger \boldsymbol{\gamma}^0=+ \theta_I^t \, (K\otimes K)$ under the field redefinition, we have to impose \begin{equation} \boldsymbol{\gamma}^0\, ((U_{\mu}^{\alg{a}})^\dagger \otimes (U_{\mu}^{\alg{s}})^\dagger) \boldsymbol{\gamma}^0= -(K\otimes K) ((U_{\mu}^{\alg{a}})^t \otimes (U_{\mu}^{\alg{s}})^t) (K\otimes K)\,.
\end{equation} We impose $\check{\gamma}^0\, (U_{\mu}^{\alg{a}})^\dagger \check{\gamma}^0= K (U_{\mu}^{\alg{a}})^t K$ and $(U_{\mu}^{\alg{s}})^\dagger=- K (U_{\mu}^{\alg{s}})^t K$ and we find that they are solved by \begin{equation} \begin{aligned} & U^{\alg{a}}_\mu \equiv f^{\alg{a}}_{\mu} \mathbf{1} + i f^p_\mu \check{\gamma}_p + \frac{1}{2} f^{pq}_\mu \check{\gamma}_{pq} , \qquad U^{\alg{s}}_\mu \equiv f^{\alg{s}}_{\mu} \mathbf{1} - f^p_\mu \hat{\gamma}_p - \frac{1}{2} f^{pq}_\mu \hat{\gamma}_{pq} , \end{aligned} \end{equation} where the coefficients $f$ are \emph{real} functions of the bosonic coordinates. In other words, what we have fixed in the above equation are the factors of $i$ in front of these coefficients, using~\eqref{eq:symm-prop-5dim-gamma} and~\eqref{eq:herm-conj-prop-5dim-gamma}. On the other hand the barred version of the fermions will be redefined as $\bar{\theta}_I\to \bar{\theta}_J \bar{U}_{IJ}$ with a matrix $\bar{U}_{IJ}=( \bar{U}_{IJ}^{\alg{a}}\otimes \bar{U}_{IJ}^{\alg{s}})$, that we expand again in the tensors of the $2\times 2$ space. To preserve $\bar{\theta}_I=\theta_I^t (K\otimes K)$ we have to impose \begin{equation} ( \bar{U}_{\mu}^{\alg{a}}\otimes \bar{U}_{\mu}^{\alg{s}})=(K\otimes K)((U_{\mu}^{\alg{a}})^t \otimes (U_{\mu}^{\alg{s}})^t) (K\otimes K), \end{equation} that allows us to define \begin{equation} \begin{aligned} & \bar{U}^{\alg{a}}_\mu\equiv f^{\alg{a}}_{\mu} \mathbf{1} + i f^p_\mu \check{\gamma}_p - \frac{1}{2} f^{pq}_\mu \check{\gamma}_{pq} , \qquad \bar{U}^{\alg{s}}_\mu\equiv f^{\alg{s}}_{\mu} \mathbf{1} - f^p_\mu \hat{\gamma}_p + \frac{1}{2} f^{pq}_\mu \hat{\gamma}_{pq} . \end{aligned} \end{equation} Here the coefficients $f$ are the same entering the definition of $U^{\alg{a},\alg{s}}_\mu $. 
In order to get a canonical expression for the terms containing derivatives on fermions \begin{equation} \begin{aligned} L^{\gamma,\partial}_1 \to & -\frac{\tilde{g}}{2} \gamma^{\alpha\beta} \, i \, \bar{\theta}_I \, \delta^{IJ} \, \widetilde{e}^m_\alpha \boldsymbol{\gamma}_m \partial_\beta \theta_J, \\ L^{\epsilon,\partial}_1 \to & -\frac{\tilde{g}}{2} \epsilon^{\alpha\beta} \, i \, \bar{\theta}_I \, \sigma^{IJ}_3 \, \widetilde{e}^m_\alpha \boldsymbol{\gamma}_m \partial_\beta \theta_J , \end{aligned} \end{equation} where $\widetilde{e}^m_\alpha$ is the deformed vielbein given in~\eqref{eq:def-vielb-comp}, we set all coefficients $f$ for the field redefinition to $0$, except for the redefinition ${U}_{\mu}^{\alg{a}}$ of the AdS factor \begin{equation} \begin{aligned} f^{\alg{a}}_{\delta} &= \frac{1}{2} \sqrt{\frac{\left(1+\sqrt{1-\varkappa ^2 \rho ^2}\right) \left(1+\sqrt{1+\varkappa ^2 \rho ^4 \sin ^2\zeta }\right)}{\sqrt{1-\varkappa ^2 \rho ^2} \sqrt{1+\varkappa ^2 \rho ^4 \sin ^2\zeta }}}, \\ f^1_\delta &= -\frac{\varkappa ^2 \rho ^3 \sin \zeta }{f^{\alg{a}}_{\text{den}}}, \\ f^{04}_{\sigma_3} &= \frac{\varkappa \rho \left(1+\sqrt{1+\varkappa ^2 \rho ^4 \sin ^2\zeta }\right)}{f^{\alg{a}}_{\text{den}}}, \\ f^{23}_{\sigma_3} &= \frac{\varkappa \rho ^2 \sin \zeta \left(1+\sqrt{1-\varkappa ^2 \rho ^2}\right)}{f^{\alg{a}}_{\text{den}}}, \\ f^{\alg{a}}_{\text{den}} &\equiv 2 (1-\varkappa ^2 \rho ^2)^{\frac{1}{4}} (1+\varkappa ^2 \rho ^4 \sin ^2\zeta )^{\frac{1}{4}} \sqrt{1+\sqrt{1-\varkappa ^2 \rho ^2}} \sqrt{1+\sqrt{1+\varkappa ^2 \rho ^4 \sin ^2\zeta }}, \end{aligned} \end{equation} and for the redefinition ${U}_{\mu}^{\alg{s}}$ of the sphere factor \begin{equation} \begin{aligned} f^{\alg{s}}_{\delta} &= \frac{1}{2} \sqrt{\frac{\left(1+\sqrt{1+\varkappa ^2 r ^2}\right) \left(1+\sqrt{1+\varkappa ^2 r ^4 \sin ^2\xi }\right)}{\sqrt{1+\varkappa ^2 r ^2} \sqrt{1+\varkappa ^2 r ^4 \sin ^2\xi }}}, \\ f^6_{\delta} &= \frac{\varkappa ^2 r ^3 \sin \xi 
}{f^{\alg{s}}_{\text{den}}}, \\ f^{59}_{\sigma_3} &= \frac{\varkappa r \left(1+\sqrt{1+\varkappa ^2 r ^4 \sin ^2\xi }\right)}{f^{\alg{s}}_{\text{den}}}, \\ f^{78}_{\sigma_3} &= \frac{\varkappa r ^2 \sin \xi \left(1+\sqrt{1+\varkappa ^2 r ^2}\right)}{f^{\alg{s}}_{\text{den}}}, \\ f^{\alg{s}}_{\text{den}} &\equiv 2 (1+\varkappa ^2 r ^2)^{\frac{1}{4}} (1+\varkappa ^2 r ^4 \sin ^2\xi )^{\frac{1}{4}} \sqrt{1+\sqrt{1+\varkappa ^2 r ^2}} \sqrt{1+\sqrt{1+\varkappa ^2 r ^4 \sin ^2\xi }}. \end{aligned} \end{equation} Since the particular redefinition that we have chosen is diagonal in the labels $I,J$---it involves just the tensors $\delta$ and $\sigma_3$---it is interesting to look at the transformation rules for the two sets of Majorana-Weyl fermions separately. We define \begin{equation} U_{(1)} \equiv U_\delta+U_{\sigma_3}\,, \qquad U_{(2)} \equiv U_\delta-U_{\sigma_3}\,, \implies \theta_I\to U_{(I)} \theta_I\quad I=1,2. \end{equation} These matrices satisfy \begin{equation}\label{eq:transf-rule-gamma-ferm-rot} \begin{aligned} & \bar{U}_{(I)} U_{(I)} = \gen{1}_4, \qquad && \bar{U}_{(I)} \boldsymbol{\gamma}_m U_{(I)} = (\Lambda_{(I)})_m^{\ n} \boldsymbol{\gamma}_n , \\ &U_{(I)} \bar{U}_{(I)} = \gen{1}_4, \qquad &&\bar{U}_{(I)} \boldsymbol{\gamma}_{mn} U_{(I)} = (\Lambda_{(I)})_m^{\ p} (\Lambda_{(I)})_n^{\ q} \boldsymbol{\gamma}_{pq} , \end{aligned} \end{equation} where we do not sum over $I$. 
The matrices $\Lambda_{(I)}$ look very simple \begin{equation}\label{eq:Lambda-res1} \begin{aligned} & (\Lambda_{(I)})_0^{\ 0} = (\Lambda_{(I)})_4^{\ 4} = \frac{1}{\sqrt{1-\varkappa^2 \rho^2}} , \quad && (\Lambda_{(I)})_5^{\ 5} =(\Lambda_{(I)})_9^{\ 9}= \frac{1}{\sqrt{1+\varkappa^2 r^2}}, \\ & (\Lambda_{(I)})_1^{\ 1} = 1, \quad && (\Lambda_{(I)})_6^{\ 6} = 1, \\ & (\Lambda_{(I)})_2^{\ 2} = (\Lambda_{(I)})_3^{\ 3} = \frac{1}{\sqrt{1+\varkappa^2 \rho^4 \sin^2 \zeta}}, \quad && (\Lambda_{(I)})_7^{\ 7} =(\Lambda_{(I)})_8^{\ 8}= \frac{1}{\sqrt{1+\varkappa^2 r^4 \sin^2 \xi}}, \end{aligned} \end{equation} \begin{equation}\label{eq:Lambda-res2} \begin{aligned} & (\Lambda_{(I)})_0^{\ 4} = +(\Lambda_{(I)})_4^{\ 0}= -\frac{\sigma_{3II}\, \varkappa \rho}{\sqrt{1-\varkappa^2 \rho^2}}, \quad &&(\Lambda_{(I)})_5^{\ 9} = - (\Lambda_{(I)})_9^{\ 5}=-\frac{\sigma_{3II}\, \varkappa r}{\sqrt{1+\varkappa^2 r^2}},\\ & (\Lambda_{(I)})_2^{\ 3}=-(\Lambda_{(I)})_3^{\ 2}= \frac{\sigma_{3II}\, \varkappa \rho^2 \sin \zeta}{\sqrt{1+\varkappa^2 \rho^4 \sin^2 \zeta}}, \quad && (\Lambda_{(I)})_7^{\ 8}= -(\Lambda_{(I)})_8^{\ 7}=-\frac{\sigma_{3II}\, \varkappa r^2 \sin \xi}{\sqrt{1+\varkappa^2 r^4 \sin^2 \xi}}, \end{aligned} \end{equation} and they satisfy the remarkable property of being ten-dimensional Lorentz transformations \begin{equation} (\Lambda_{(I)})_m^{\ p}\ (\Lambda_{(I)})_n^{\ q}\ \eta_{pq}=\eta_{mn}\,, \qquad I=1,2\,. \end{equation} We refer to Appendix~\ref{app:total-lagr-field-red} for some comments on how to efficiently implement this field redefinition of the fermions in the Lagrangian. \subsection{The quadratic Lagrangian} In this section we show that the field redefinition that was found to put the terms with derivatives acting on fermions into canonical form is actually able to put the whole action in the standard form of Green-Schwarz for type IIB superstring~\eqref{eq:IIB-action-theta2}. 
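Before doing so, the Lorentz property of the matrices $\Lambda_{(I)}$ stated at the end of the previous subsection is easy to verify numerically: $\Lambda_{(I)}$ is block diagonal, with a boost in the $(0,4)$ plane and rotations in the $(2,3)$, $(5,9)$ and $(7,8)$ planes. A minimal Python sketch, with arbitrary sample values for $\varkappa,\rho,r,\zeta,\xi$ (subject to $\varkappa\rho<1$):

```python
import math

# Each nontrivial 2x2 block of Lambda_(I) must satisfy Lambda eta Lambda^T = eta,
# with eta = diag(-1, 1) for the (0,4) block (a boost) and eta = diag(1, 1)
# for the (2,3), (5,9), (7,8) blocks (rotations).
kappa, rho, r, zeta, xi = 0.7, 0.8, 0.6, 0.9, 1.1   # arbitrary sample values

def preserves(L, eta, tol=1e-12):
    # eta is diagonal, given as a list of its two diagonal entries
    return all(abs(sum(L[m][p] * eta[p] * L[n][p] for p in range(2))
                   - (eta[m] if m == n else 0.0)) < tol
               for m in range(2) for n in range(2))

blocks = []
for s3 in (+1.0, -1.0):                              # sigma_{3II} for I = 1, 2
    a = 1.0 / math.sqrt(1.0 - (kappa * rho)**2)
    blocks.append(([[a, -s3 * kappa * rho * a],
                    [-s3 * kappa * rho * a, a]], [-1.0, 1.0]))   # (0,4) boost
    c = 1.0 / math.sqrt(1.0 + kappa**2 * rho**4 * math.sin(zeta)**2)
    d = s3 * kappa * rho**2 * math.sin(zeta) * c
    blocks.append(([[c, d], [-d, c]], [1.0, 1.0]))               # (2,3) rotation
    e = 1.0 / math.sqrt(1.0 + (kappa * r)**2)
    blocks.append(([[e, -s3 * kappa * r * e],
                    [s3 * kappa * r * e, e]], [1.0, 1.0]))       # (5,9) rotation
    g = 1.0 / math.sqrt(1.0 + kappa**2 * r**4 * math.sin(xi)**2)
    f = -s3 * kappa * r**2 * math.sin(xi) * g
    blocks.append(([[g, f], [-f, g]], [1.0, 1.0]))               # (7,8) rotation

ok = all(preserves(L, eta) for L, eta in blocks)
```

The check passes for both signs $\sigma_{3II}=\pm 1$, i.e.\ for both $\Lambda_{(1)}$ and $\Lambda_{(2)}$.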
In order to identify the background fields that are coupled to the fermions, we can do the computation separately for the part contracted with $\gamma^{\alpha\beta}$ and the one with $\epsilon^{\alpha\beta}$, and then check that they yield the same results. It is convenient to consider separately the terms that are diagonal and the ones that are off-diagonal in the labels $I,J$. The correct identification of the fields is achieved by looking at the tensor structure \emph{after} the rotation of the fermions~\eqref{eq:red-ferm-Lor-as} is implemented. In the contribution contracted with $\gamma^{\alpha\beta}$, the terms without derivatives on fermions that are multiplied by $\delta^{IJ}$ will then correspond to the coupling to the spin connection. The terms multiplied by $\sigma_3^{IJ}$ contain the coupling to the field strength of the $B$-field. The RR-fields are identified by looking at the contributions to the Lagrangian off-diagonal in $IJ$, and by selecting the appropriate Gamma-matrix structure. Taking into account just the anti-symmetry in the indices, the number of different components for the form $F^{(r)}$ is given by $\sum_{n_1 = 0}^9 \sum_{n_2 =n_1 +1}^9 \cdots \sum_{n_r =n_{r-1} +1}^9$, meaning \begin{equation} F^{(1)}: \quad 10, \qquad F^{(3)}: \quad 120, \qquad F^{(5)}: \quad 252. \end{equation} If we also consider self-duality for $F^{(5)}$, this gives a total number of $10+120+252/2=256$ different components. It is then possible to identify uniquely the RR fields, since the matrices $\boldsymbol{\gamma}$ of rank $1,3,5$ are all linearly independent and a $16\times 16$-matrix has indeed $256$ entries. To impose automatically the self-duality condition for the 5-form, we will rewrite---when necessary---the components in terms of the components $F^{(5)}_{0qrst}$ (there are 126 of them), using \begin{comment} \footnote{In the equation we specify flat indices.
The corresponding equation valid for curved indices is \begin{equation} F_{M_1M_2M_3M_4M_5}=-\frac{1}{5!}\sqrt{-g}\epsilon_{M_1\ldots M_{10}}F^{M_6M_7M_8M_9M_{10}}. \end{equation}} \end{comment} \begin{equation} F_{m_1m_2m_3m_4m_5}=+\frac{1}{5!}\epsilon_{m_1\ldots m_{10}}F^{m_6m_7m_8m_9m_{10}}\,, \end{equation} where $\epsilon^{0\ldots 9}=1$ and $\epsilon_{0\ldots 9}=-1$. One should remember that for the Wess-Zumino contribution with $\epsilon^{\alpha\beta}$ there is an additional $\sigma_3^{IJ}$ as in~\eqref{eq:IIB-action-theta2}. We find that the Lagrangian quadratic in fermions is written in the standard form \begin{equation}\label{eq:lagr-quad-ferm} \begin{aligned} L^{\alg{f}^2} &= - \frac{\tilde{g}}{2} \, i \, \bar{\Theta}_I \,(\gamma^{\alpha\beta} \delta^{IJ}+\epsilon^{\alpha\beta} \sigma_3^{IJ}) \widetilde{e}^m_\alpha \Gamma_m \, \widetilde{D}^{JK}_\beta \Theta_K , \end{aligned} \end{equation} where the $32\times 32$ ten-dimensional $\Gamma$-matrices are constructed in~\eqref{eq:def-10-dim-Gamma} and the $32$-dimensional fermions $\Theta$ in~\eqref{eq:def-32-dim-Theta}. The operator $\widetilde{D}^{IJ}_\alpha$ acting on the fermions has the desired form \begin{equation}\label{eq:deform-D-op} \begin{aligned} \widetilde{D}^{IJ}_\alpha = & \delta^{IJ} \left( \partial_\alpha -\frac{1}{4} \widetilde{\omega}^{mn}_\alpha \Gamma_{mn} \right) +\frac{1}{8} \sigma_3^{IJ} \widetilde{e}^m_\alpha \widetilde{H}_{mnp} \Gamma^{np} \\ &-\frac{1}{8} e^{\varphi} \left( \epsilon^{IJ} \Gamma^p \widetilde{F}^{(1)}_p + \frac{1}{3!}\sigma_1^{IJ} \Gamma^{pqr} \widetilde{F}^{(3)}_{pqr} + \frac{1}{2\cdot5!}\epsilon^{IJ} \Gamma^{pqrst} \widetilde{F}^{(5)}_{pqrst} \right) \widetilde{e}^m_\alpha \Gamma_m. \end{aligned} \end{equation} We use the tilde on all quantities to remind the reader that we are discussing the deformed model.
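The component counting used above to identify the RR fields is simple binomial arithmetic, which can be spelled out explicitly:

```python
from math import comb

# Independent components of an antisymmetric rank-r form in 10 dimensions: C(10, r).
n1 = comb(10, 1)        # F^(1): 10
n3 = comb(10, 3)        # F^(3): 120
n5 = comb(10, 5)        # F^(5): 252, halved to 126 by self-duality
n5_0qrst = comb(9, 4)   # components of the type F^(5)_{0qrst}: 126

total = n1 + n3 + n5 // 2   # 10 + 120 + 126 = 256
```

The total of $256$ matches the number of entries of a $16\times 16$ matrix, and the $126$ independent self-dual components are exactly the $F^{(5)}_{0qrst}$ with $1\le q<r<s<t\le 9$.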
The deformed spin connection satisfies the expected equation \begin{equation} \widetilde{\omega}_M^{mn}= - \widetilde{e}^{N \, [m} \left( \partial_M \widetilde{e}^{n]}_N - \partial_N \widetilde{e}^{n]}_M + \widetilde{e}^{n] \, P} \widetilde{e}_M^p \partial_P \widetilde{e}_{Np} \right), \end{equation} where tangent indices $m,n$ are raised and lowered with $\eta_{mn}$, while curved indices $M,N$ with the deformed metric $\widetilde{G}_{MN}$. From the computation of the deformed Lagrangian we find a field $\widetilde{H}^{(3)}$ with the following non-vanishing components \begin{equation} \widetilde{H}_{234} = - 4 \varkappa \rho \frac{\sqrt{1+\rho^2}\sqrt{1-\varkappa^2\rho^2}\sin \zeta}{1+\varkappa^2 \rho^4 \sin^2 \zeta}, \qquad \widetilde{H}_{789} = + 4 \varkappa r \frac{\sqrt{1-r^2}\sqrt{1+\varkappa^2r^2}\sin \xi}{1+\varkappa^2 r^4 \sin^2 \xi}, \end{equation} where we have specified \emph{tangent} indices. Translating this into \emph{curved} indices we find agreement with the expected result \begin{equation} \begin{aligned} \widetilde{H}_{\psi_1\zeta\rho} &= \frac{2 \varkappa \rho ^3 \sin (2 \zeta )}{\left(1+\varkappa ^2 \rho ^4 \sin ^2 \zeta\right)^2} & = \partial_\rho B_{\psi_1\zeta}, \qquad \widetilde{H}_{\phi_1\xi r} &= -\frac{2 \varkappa r^3 \sin (2 \xi )}{\left(1+\varkappa ^2 r^4 \sin ^2\xi\right)^2} & = \partial_r B_{\phi_1\xi}. \end{aligned} \end{equation} The new results that can be obtained from the Lagrangian quadratic in fermions are the components of the RR-fields. 
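Before turning to the RR components, the relation $\widetilde{H}_{\psi_1\zeta\rho}=\partial_\rho B_{\psi_1\zeta}$ quoted above can be cross-checked numerically. The expression $B_{\psi_1\zeta}= \varkappa\rho^4\sin\zeta\cos\zeta/(1+\varkappa^2\rho^4\sin^2\zeta)$ used below is taken to be the $B$-field component of the bosonic background; it is quoted elsewhere in the text, so here it is an assumption of the snippet.

```python
import math

# Finite-difference check that H_{psi1 zeta rho} = d/drho B_{psi1 zeta},
# with the curved-index H-component quoted in the text and an *assumed*
# expression for the B-field component of the eta-deformed background.
kappa, zeta = 0.5, 0.6     # arbitrary sample values

def B_psi1_zeta(rho):
    # assumed B-field component (AdS part of the deformed background)
    s = math.sin(zeta)
    return kappa * rho**4 * s * math.cos(zeta) / (1.0 + kappa**2 * rho**4 * s**2)

def H_psi1_zeta_rho(rho):
    # curved-index component quoted in the text
    s = math.sin(zeta)
    return 2.0 * kappa * rho**3 * math.sin(2.0 * zeta) / (1.0 + kappa**2 * rho**4 * s**2)**2

rho, step = 0.7, 1e-6
dB = (B_psi1_zeta(rho + step) - B_psi1_zeta(rho - step)) / (2.0 * step)
```

The central difference `dB` agrees with `H_psi1_zeta_rho(rho)` to the accuracy of the finite-difference step; the sphere component $\widetilde{H}_{\phi_1\xi r}$ can be checked in the same way.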
When we specify \emph{tangent} indices we get \begin{equation}\label{eq:flat-comp-F1} \begin{aligned} &e^{\varphi} \widetilde{F}_1 =-4 \varkappa ^2 \ c_{F}^{-1} \ \rho ^3 \sin \zeta , \qquad &&e^{\varphi} \widetilde{F}_6&= +4 \varkappa ^2 \ c_{F}^{-1} \ r^3 \sin\xi , \end{aligned} \end{equation} \begin{equation}\label{eq:flat-comp-F3} \begin{aligned} &e^{\varphi} \widetilde{F}_{014} = + 4 \varkappa \ c_{F}^{-1} \ \rho ^2 \sin\zeta, \qquad &&e^{\varphi} \widetilde{F}_{123} = -4 \varkappa \ c_{F}^{-1} \ \rho , \\ &e^{\varphi} \widetilde{F}_{569}= + 4 \varkappa \ c_{F}^{-1} \ r^2 \sin\xi, \qquad &&e^{\varphi} \widetilde{F}_{678} = -4 \varkappa \ c_{F}^{-1} \ r, \\ &e^{\varphi} \widetilde{F}_{046} = +4 \varkappa^3 \ c_{F}^{-1} \ \rho r^3 \sin \xi, \qquad &&e^{\varphi} \widetilde{F}_{236} = -4 \varkappa^3 \ c_{F}^{-1} \ \rho ^2 r^3 \sin \zeta \sin\xi, \\ &e^{\varphi} \widetilde{F}_{159} = - 4 \varkappa^3 \ c_{F}^{-1} \ \rho ^3 r \sin\zeta, \qquad &&e^{\varphi} \widetilde{F}_{178} = -4 \varkappa^3 \ c_{F}^{-1} \ \rho ^3 r^2 \sin\zeta \sin \xi, \\ \end{aligned} \end{equation} \begin{equation}\label{eq:flat-comp-F5} \begin{aligned} &e^{\varphi} \widetilde{F}_{01234} = + 4 \ c_{F}^{-1} , \qquad &&e^{\varphi} \widetilde{F}_{02346} = -4 \varkappa ^4 \ c_{F}^{-1}\rho ^3 r^3 \sin \zeta \sin\xi , \\ &e^{\varphi} \widetilde{F}_{01459} = +4 \varkappa ^2 \ c_{F}^{-1} \rho ^2 r \sin\zeta, \qquad &&e^{\varphi} \widetilde{F}_{01478} = +4 \varkappa ^2 \ c_{F}^{-1} \rho ^2 r^2 \sin\zeta \sin \xi , \\ &e^{\varphi} \widetilde{F}_{04569}= +4 \varkappa ^2 \ c_{F}^{-1} \rho r^2 \sin \xi, \qquad &&e^{\varphi} \widetilde{F}_{04678} = -4 \varkappa ^2 \ c_{F}^{-1} \rho r. \end{aligned} \end{equation} For simplicity we have defined the common coefficient \begin{equation} c_{F} = \frac{1}{\sqrt{1+\varkappa ^2}}\sqrt{1-\varkappa ^2 \rho^2} \sqrt{1+\varkappa ^2 \rho ^4 \sin ^2\zeta} \sqrt{1+\varkappa ^2 r^2} \sqrt{1+\varkappa ^2 r^4 \sin ^2\xi}. 
\end{equation} The same results written in \emph{curved} indices are \begin{equation}\label{eq:curved-comp-F1} \begin{aligned} e^{\varphi} \widetilde{F}_{\psi_2} &=4 \varkappa ^2 \ c_{F}^{-1} \ \rho ^4 \sin ^2 \zeta , \qquad e^{\varphi} \widetilde{F}_{\phi_2} &= -4 \varkappa ^2 \ c_{F}^{-1} \ r^4 \sin ^2\xi , \end{aligned} \end{equation} \begin{equation}\label{eq:curved-comp-F3} \begin{aligned} e^{\varphi} \widetilde{F}_{t\psi_2 \rho } &= + 4 \varkappa \ c_{F}^{-1} \ \frac{ \rho ^3 \sin ^2\zeta}{1-\varkappa ^2 \rho ^2}, \qquad &e^{\varphi} \widetilde{F}_{\psi_2 \psi_1 \zeta } &= +4 \varkappa \ c_{F}^{-1} \ \frac{ \rho ^4 \sin \zeta \cos \zeta }{1+\varkappa ^2 \rho ^4 \sin^2\zeta }, \\ e^{\varphi} \widetilde{F}_{\phi \phi_2 r } &= + 4 \varkappa \ c_{F}^{-1} \ \frac{ r^3 \sin ^2\xi}{1+\varkappa ^2 r^2}, \qquad &e^{\varphi} \widetilde{F}_{\phi_2 \phi_1 \xi } &= +4 \varkappa \ c_{F}^{-1} \ \frac{ r^4 \sin \xi \cos\xi}{1+\varkappa ^2 r^4 \sin ^2\xi}, \\ e^{\varphi} \widetilde{F}_{t \rho \phi_2 } &= + 4 \varkappa^3 \ c_{F}^{-1} \ \frac{\rho r^4 \sin ^2\xi }{1-\varkappa ^2 \rho ^2}, \qquad &e^{\varphi} \widetilde{F}_{\psi_1 \zeta \phi_2 } &= +4 \varkappa^3 \ c_{F}^{-1} \ \frac{ \rho ^4 r^4 \sin \zeta \cos \zeta \sin ^2\xi}{1+\varkappa ^2 \rho ^4 \sin ^2\zeta}, \\ e^{\varphi} \widetilde{F}_{\psi_2 \phi r } &= - 4 \varkappa^3 \ c_{F}^{-1} \ \frac{ \rho ^4 r \sin ^2\zeta}{1+\varkappa ^2 r^2}, \qquad &e^{\varphi} \widetilde{F}_{\psi_2 \phi_1 \xi } &= +4 \varkappa^3 \ c_{F}^{-1} \ \frac{ \rho ^4 r^4 \sin^2\zeta \sin \xi \cos \xi}{1+\varkappa ^2 r^4 \sin ^2\xi}, \\ \end{aligned} \end{equation} \begin{equation}\label{eq:curved-comp-F5} \begin{aligned} e^{\varphi} \widetilde{F}_{t\psi_2\psi_1\zeta\rho } &= \frac{ 4 \ c_{F}^{-1} \rho ^3 \sin\zeta \cos\zeta}{\left(1-\varkappa ^2 \rho ^2\right) \left(1+\varkappa ^2 \rho ^4 \sin ^2\zeta\right)}, \ &e^{\varphi} \widetilde{F}_{t\psi_1\zeta\rho\phi_2 } &= -\frac{4 \varkappa ^4 \ c_{F}^{-1}\rho ^5 r^4 \sin \zeta \cos \zeta \sin ^2\xi 
}{\left(1-\varkappa ^2 \rho ^2\right) \left(1+\varkappa ^2 \rho ^4 \sin ^2\zeta\right)}, \\ e^{\varphi} \widetilde{F}_{t\psi_2\rho\phi r } &= -\frac{4 \varkappa ^2 \ c_{F}^{-1} \rho ^3 r \sin ^2\zeta}{\left(1-\varkappa ^2 \rho ^2\right) \left(1+\varkappa ^2 r^2\right)}, &e^{\varphi} \widetilde{F}_{t\psi_2\rho\phi_1\xi } &= +\frac{4 \varkappa ^2 \ c_{F}^{-1} \rho ^3 r^4 \sin ^2\zeta \sin \xi \cos\xi}{\left(1-\varkappa ^2 \rho ^2\right) \left(1+\varkappa ^2 r^4 \sin ^2\xi\right)}, \\ e^{\varphi} \widetilde{F}_{t\rho\phi\phi_2 r } &= -\frac{4 \varkappa ^2 \ c_{F}^{-1} \rho r^3 \sin ^2\xi}{\left(1-\varkappa ^2 \rho ^2\right) \left(1+\varkappa ^2 r^2\right)}, &e^{\varphi} \widetilde{F}_{t\rho \phi_2 \phi_1\xi } &= -\frac{4 \varkappa ^2 \ c_{F}^{-1} \rho r^4 \sin \xi \cos\xi}{\left(1-\varkappa ^2 \rho ^2\right) \left(1+\varkappa ^2 r^4 \sin ^2\xi\right)}. \end{aligned} \end{equation} Let us present another method that can be used to derive the same results for the background RR fields, without the need of computing the Lagrangian at quadratic order in fermions. \section{Gauge-fixed action for {AdS$_3\times$S$^3\times$T$^4$} at order $\theta^2$}\label{app:gauge-fixed-action-T4} In this section we explain how to obtain the action at quadratic order in fermions for the superstring on the pure RR {AdS$_3\times$S$^3\times$T$^4$} background, following~\cite{Borsato:2014hja}. The bosonic action is given by Eq.~\eqref{eq:bos-str-action}, where in our coordinates the spacetime metric for {AdS$_3\times$S$^3\times$T$^4$} reads as \begin{equation} \begin{aligned} {\rm d}s^2 &= -\left(\frac{1 + \frac{z_1^2 + z_2^2}{4}}{1 - \frac{z_1^2 + z_2^2}{4}}\right)^2 {\rm d}t^2 + \frac{1}{\left(1 - \frac{z_1^2 + z_2^2}{4}\right)^2} ( {\rm d}z_1^2 + {\rm d}z_2^2 ) \\ &+\left(\frac{1 - \frac{y_3^2 + y_4^2}{4}}{1 + \frac{y_3^2 + y_4^2}{4}}\right)^2 {\rm d}\phi^2 + \frac{1}{\left(1 + \frac{y_3^2 + y_4^2}{4}\right)^2} ( {\rm d}y_3^2 + {\rm d}y_4^2 )\\ & + {\rm d}x_i {\rm d}x_i \,. 
\end{aligned} \end{equation} We consider the case of vanishing $B$-field. Coordinates $t,z_1,z_2$ parameterise AdS$_3$, and $t$ is the time coordinate. Coordinates $\phi,y_3,y_4$ parameterise S$^3$, and $\phi$ is an angle that we will use, together with $t$, to construct light-cone coordinates. Coordinates $x^6,x^7,x^8,x^9$ parameterise the torus. We prefer to enumerate the coordinates as \begin{equation} X^0=t,\ X^1=z_1,\ X^2=z_2 ,\ X^3=y_3,\ X^4=y_4,\ X^5=\phi,\ X^i=x_i \text{ for } i=6,\ldots,9\,, \end{equation} and to use a diagonal vielbein \begin{equation} \begin{aligned} e^0_t = \frac{1 + \frac{z_1^2 + z_2^2}{4}}{1 - \frac{z_1^2 + z_2^2}{4}}\,,\quad e^1_{z_1}=e^2_{z_2}=\frac{1}{1 - \frac{z_1^2 + z_2^2}{4}}\,,\\ e^5_\phi = \frac{1 - \frac{y_3^2 + y_4^2}{4}}{1 + \frac{y_3^2 + y_4^2}{4}}\,,\quad e^3_{y_3}=e^4_{y_4}=\frac{1}{1 + \frac{y_3^2 + y_4^2}{4}}\,,\\ e^i_{x_i}=1\quad i=6,\ldots,9. \end{aligned} \end{equation} In order to avoid confusion, we use letters to denote explicitly curved indices on vielbein components, etc. We will never distinguish between upper or lower indices for the coordinates $z_i\equiv z^i,y_i\equiv y^i,x_i\equiv x^i$.
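The diagonal vielbein reproduces the metric through $G_{MN}=e^m_M e^n_N\,\eta_{mn}$, which for diagonal data reduces to $G_{MM}=\eta_{mm}\,(e^m_M)^2$. The check is algebraically immediate, but it fixes the placement of the timelike sign; a short numerical sketch with arbitrary sample coordinate values:

```python
# Check G_MM = eta_mm * (e^m_M)^2 for the diagonal AdS3 x S3 x T4 vielbein,
# in the coordinate order X^0 .. X^9 = (t, z1, z2, y3, y4, phi, x6..x9).
z1, z2, y3, y4 = 0.3, -0.5, 0.2, 0.7           # arbitrary sample point
qz = (z1 * z1 + z2 * z2) / 4.0
qy = (y3 * y3 + y4 * y4) / 4.0

e = [(1 + qz) / (1 - qz),                      # e^0_t
     1.0 / (1 - qz), 1.0 / (1 - qz),           # e^1_{z1}, e^2_{z2}
     1.0 / (1 + qy), 1.0 / (1 + qy),           # e^3_{y3}, e^4_{y4}
     (1 - qy) / (1 + qy),                      # e^5_phi
     1.0, 1.0, 1.0, 1.0]                       # torus directions

eta = [-1.0] + [1.0] * 9

# diagonal metric components read off from the line element
g = [-((1 + qz) / (1 - qz))**2,
     1.0 / (1 - qz)**2, 1.0 / (1 - qz)**2,
     1.0 / (1 + qy)**2, 1.0 / (1 + qy)**2,
     ((1 - qy) / (1 + qy))**2,
     1.0, 1.0, 1.0, 1.0]

ok = all(abs(eta[m] * e[m]**2 - g[m]) < 1e-12 for m in range(10))
```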
To write down the fermionic action we first define the ten-dimensional Gamma-matrices\footnote{This basis is obtained by permuting the third and fourth spaces in the tensor products defining the Gamma-matrices that we find after implementing the change of basis explained in (2.55) of~\cite{Borsato:2014hja}.} \begin{equation} \newcommand{\mathrlap{\,\gen{1}}\hphantom{\sigma_a}}{\mathrlap{\,\gen{1}}\hphantom{\sigma_a}} \newcommand{\mathrlap{\,\gen{1}}\hphantom{\gamma^A}}{\mathrlap{\,\gen{1}}\hphantom{\gamma^A}} \begin{aligned} & \Gamma^0 = -i\sigma_1 \otimes \sigma_3 \otimes \sigma_2 \otimes \sigma_3 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} , \qquad && \Gamma^1 = +\sigma_1 \otimes \sigma_1 \otimes \sigma_2 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a}\otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} ,\\ & \Gamma^2 = +\sigma_1 \otimes \sigma_2 \otimes \sigma_2 \otimes \sigma_3 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} , \qquad && \Gamma^3 = +\sigma_1 \otimes \sigma_1 \otimes \sigma_1 \otimes \sigma_1 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} ,\\ & \Gamma^4 = -\sigma_1 \otimes \sigma_1 \otimes \sigma_1 \otimes\sigma_2 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} , \qquad && \Gamma^5 = -\sigma_1 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \sigma_1 \otimes \sigma_3 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} ,\\ & \Gamma^6 = +\sigma_1 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \sigma_3 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \sigma_1 , \qquad && \Gamma^7 = +\sigma_1 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \sigma_3 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \sigma_2 ,\\ & \Gamma^8 = +\sigma_1 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \sigma_3 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \sigma_3 , \qquad && \Gamma^9 = -\sigma_2 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes 
\mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} . \end{aligned} \end{equation} For all Gamma-matrices we define the antisymmetric product by \begin{equation} \Gamma^{m_1 m_2 \dotsm m_n} = \frac{1}{n!} \sum_{\pi \in S_n} (-1)^{\pi} \Gamma^{m_{\pi(1)}} \Gamma^{m_{\pi(2)}} \dotsm \Gamma^{m_{\pi(n)}} , \end{equation} where the sum runs over all permutations of the indices and $(-1)^{\pi}$ denotes the signature of the permutation. For convenience, let us write down explicitly some higher-rank Gamma-matrices that may be obtained by the above definitions \begin{equation} \newcommand{\mathrlap{\,\gen{1}}\hphantom{\sigma_a}}{\mathrlap{\,\gen{1}}\hphantom{\sigma_a}} \begin{aligned} \Gamma^{1234} &= +\mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \sigma_3 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a}\otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} , \\ \Gamma^{6789} &= + \sigma_3 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \sigma_3\otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} , \\ \Gamma = \Gamma^{0123456789} &= {+} \sigma_3 \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a}\otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} \otimes \mathrlap{\,\gen{1}}\hphantom{\sigma_a} . \end{aligned} \end{equation} The Gamma-matrices satisfy \begin{equation} (\Gamma^m)^t = - \mathcal{C} \Gamma^m \mathcal{C}^{-1} , \qquad (\Gamma^m)^\dag = - \Gamma^0 \Gamma^m (\Gamma^0)^{-1} , \qquad (\Gamma^m)^* = + \mathcal{B} \Gamma^m \mathcal{B}^{-1} , \qquad \end{equation} where \begin{equation} \mathcal{C} = -i\sigma_2 \otimes \sigma_3 \otimes \sigma_2 \otimes \sigma_1 \otimes \sigma_2 , \qquad \mathcal{B} = -\Gamma^0 \, \mathcal{C} . 
\end{equation} It is useful to note the relations \begin{equation} \begin{gathered} (\Gamma^0)^\dag \Gamma^0 = \mathcal{C}^\dag \mathcal{C} = \mathcal{B}^\dag \mathcal{B} = \gen{1} , \qquad \mathcal{B}^t = \mathcal{C} (\Gamma^0)^\dag , \\ \mathcal{C}^\dag = - \mathcal{C} = + \mathcal{C}^t , \qquad (\Gamma^0)^\dag = - \Gamma^0 = + (\Gamma^0)^t , \qquad \mathcal{B}^\dag = + \mathcal{B} = + \mathcal{B}^t , \\ \mathcal{C} = - \Gamma^{01479} , \qquad \mathcal{B} = + \sigma_3 \otimes \gen{1} \otimes \sigma_1 \otimes \sigma_2 \otimes \sigma_2 = - \Gamma^{1479} , \\ \mathcal{B} \Gamma \mathcal{B}^\dag = \Gamma^* . \end{gathered} \end{equation} The two sets of 32-component Majorana-Weyl spinors labelled by $I=1,2$ satisfy the conditions \begin{equation} \Gamma \Theta_I = +\Theta_I,\qquad \Theta^*_I = \mathcal{B} \Theta_I , \qquad \bar{\Theta}_I = \Theta^t_I \mathcal{C}, \end{equation} to give a total of 32 real fermions. The action at quadratic order in fermions is given by~\eqref{eq:IIB-action-theta2}, where the operator ${D}^{IJ}_\alpha$ in this case is \begin{equation} \begin{aligned} {D}^{IJ}_\alpha = \delta^{IJ} \left( \partial_\alpha -\frac{1}{4} {\omega}^{mn}_\alpha \Gamma_{mn} \right) +\frac{1}{4} \sigma_1^{IJ} (\Gamma^{012}+\Gamma^{345}) \ {e}^m_\alpha \Gamma_m. \end{aligned} \end{equation} \subsection{Linearly realised supersymmetries} The background possesses a total of 16 real supersymmetries. It is more convenient to redefine the fermions introduced above, such that these supersymmetries are realised as linear shifts of fermionic components. For a background realised by a supercoset, the original form of the action would correspond to the choice $\alg{g}=\alg{g}_{\text{bos}}\cdot\alg{g}_{\text{fer}}$ for the coset element. The redefinition we perform here allows us to obtain the choice $\alg{g}=\alg{g}_{\text{fer}}\cdot\alg{g}_{\text{bos}}$ for the coset element. 
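As a cross-check (an addition of ours, not part of the original derivation), the Clifford algebra, the chirality matrix and the charge-conjugation property of the Gamma-matrix basis listed above can be verified numerically; the helper names \texttt{s}, \texttt{kron} and \texttt{data} below are our own.

```python
import numpy as np

# Pauli matrices; s[0] is the 2x2 identity
s = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
     np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2
     np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_3

def kron(*factors):
    """Iterated Kronecker product of 2x2 blocks."""
    out = np.eye(1, dtype=complex)
    for f in factors:
        out = np.kron(out, f)
    return out

# (prefactor, Pauli indices in the five tensor slots) for Gamma^0 ... Gamma^9
data = [(-1j, (1, 3, 2, 3, 0)), (+1, (1, 1, 2, 0, 0)),
        (+1,  (1, 2, 2, 3, 0)), (+1, (1, 1, 1, 1, 0)),
        (-1,  (1, 1, 1, 2, 0)), (-1, (1, 0, 1, 3, 0)),
        (+1,  (1, 0, 3, 0, 1)), (+1, (1, 0, 3, 0, 2)),
        (+1,  (1, 0, 3, 0, 3)), (-1, (2, 0, 0, 0, 0))]
Gamma = [c * kron(*(s[i] for i in idx)) for c, idx in data]

eta = np.diag([-1.0] + [1.0] * 9)  # mostly-plus Minkowski metric

# Clifford algebra: {Gamma^m, Gamma^n} = 2 eta^{mn}
for m in range(10):
    for n in range(10):
        anti = Gamma[m] @ Gamma[n] + Gamma[n] @ Gamma[m]
        assert np.allclose(anti, 2 * eta[m, n] * np.eye(32))

# Chirality: Gamma^{0...9} = sigma_3 x 1 x 1 x 1 x 1
assert np.allclose(np.linalg.multi_dot(Gamma),
                   kron(s[3], s[0], s[0], s[0], s[0]))

# Charge conjugation: C Gamma^m C^{-1} = -(Gamma^m)^t
C = -1j * kron(s[2], s[3], s[2], s[1], s[2])
for G in Gamma:
    assert np.allclose(C @ G @ np.linalg.inv(C), -G.T)
print("Gamma-matrix identities verified")
```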
For convenience we first redefine the fermions as \begin{equation}\label{eq:introd-vartheta} \Theta_{1} =\vartheta_1 + \vartheta_2, \qquad \Theta_{2} = \vartheta_1 - \vartheta_2 , \end{equation} and then introduce fermions $\vartheta^\pm_I$ as \begin{equation}\label{eq:theta-chi-eta-def} \begin{aligned} \vartheta_1 &= \frac{1}{2} ( 1 + \Gamma^{012345} ) \hat{M} \vartheta^+_1 + \frac{1}{2} ( 1 - \Gamma^{012345} ) \hat{M} \vartheta^-_1 , \\ \vartheta_2 &= \frac{1}{2} ( 1 + \Gamma^{012345} ) \check{M} \vartheta^+_2 + \frac{1}{2} ( 1 - \Gamma^{012345} ) \check{M} \vartheta^-_2 . \end{aligned} \end{equation} The projectors $\frac{1}{2} ( 1 \pm \Gamma^{012345} )$ make sure that we are again using a total of 32 real fermions. Here $\hat{M}$ and $\check{M}$ are $32\times 32$ matrices \begin{equation}\label{eq:MN-10d-expressions} \hat{M} = M_0 M_t , \qquad \check{M} = M_0^{-1} M_t^{-1} , \end{equation} where \begin{equation}\label{eq:def-matr-M0} \begin{aligned} M_0 &= \frac{1}{\sqrt{\bigl( 1 - \frac{z_1^2 + z_2^2}{4} \bigr) \bigl( 1 + \frac{y_3^2 + y_4^2}{4} \bigr)}} \Bigl( {1} - \frac{1}{2} (z_{1} \Gamma^{1}+z_{2} \Gamma^{2}) \Gamma^{012} \Bigr) \Bigl( {1} - \frac{1}{2} (y_{3} \Gamma^{3}+y_{4} \Gamma^{4}) \Gamma^{345} \Bigr) , \\ M_0^{-1} &= \frac{1}{\sqrt{\bigl( 1 - \frac{z_1^2 + z_2^2}{4} \bigr) \bigl( 1 + \frac{y_3^2 + y_4^2}{4} \bigr)}} \Bigl( {1} + \frac{1}{2} (z_{1} \Gamma^{1}+z_{2} \Gamma^{2}) \Gamma^{012} \Bigr) \Bigl( {1} + \frac{1}{2} (y_{3} \Gamma^{3}+y_{4} \Gamma^{4}) \Gamma^{345} \Bigr) , \end{aligned} \end{equation} and \begin{equation} M_t = e^{-\frac{1}{2} ( t \, \Gamma^{12} + \phi \, \Gamma^{34} )} , \qquad M_t^{-1} = e^{+\frac{1}{2} ( t \, \Gamma^{12} + \phi \, \Gamma^{34} )} . 
\end{equation} It is useful to see how these fermionic redefinitions are implemented on the Gamma-matrices \begin{equation} \hat{M}^{-1} \, \Gamma_m \, \hat{M} \, e_M^m = \Gamma_m \, \hat{\mathcal{M}}^m{}_n e_M^n ,\qquad \check{M}^{-1} \, \Gamma_m \, \check{M} \, e_M^m = \Gamma_m \, \check{\mathcal{M}}^m{}_n e_M^n , \end{equation} where $\hat{\mathcal{M}}^m{}_n$ and $\check{\mathcal{M}}^m{}_n$ are components of orthogonal matrices. They rotate non-trivially only the indices $m=0,\ldots, 5$ of AdS$_3\times$S$^3$, and they act as the identity on directions tangent to the torus. In particular, they can be reabsorbed into the definition of the vielbein, to produce\footnote{The conventions for the symbols ``check'' or ``hat'' should not be confused with the ones used in the chapters discussing the $\eta$-deformation of {AdS$_5\times$S$^5$}, where they refer to AdS$_5$ and S$^5$. Here they refer to the matrices $\check{M},\hat{M}$ defined before.} \begin{equation} \hat{e}_M^m = \hat{\mathcal{M}}^m{}_n \, e_M^n \,, \qquad \check{e}_M^m = \check{\mathcal{M}}^m{}_n e_M^n \,. 
\end{equation} Explicitly, the $10\times 10$ matrices $\hat{e},\check{e}$ whose components are $ \hat{e}_M^m, \check{e}_M^m$ are \begin{equation} \hat{e}= \hat{e}_{\text{AdS}_3} \oplus \hat{e}_{\text{S}^3} \oplus \gen{1}_4\,, \qquad \check{e}= \check{e}_{\text{AdS}_3} \oplus \check{e}_{\text{S}^3} \oplus \gen{1}_4\,, \end{equation} where we have defined \begin{equation}\label{eq:E-components-AdS3} \begin{aligned} \hat{e}_{\text{AdS}_3} &= \begin{pmatrix} 1 & 0 & 0 \\ 0 & + \cos t & + \sin t \\ 0 & - \sin t & + \cos t \end{pmatrix} \cdot \begin{pmatrix} + 1 & + z_2 & - z_1 \\ + \frac{z_2}{1 + \frac{z_1^2 + z_2^2}{4}} & 1 - \frac{z_1^2 - z_2^2}{4} & - \frac{z_1 z_2}{2} \\ - \frac{z_1}{1 + \frac{z_1^2 + z_2^2}{4}} & - \frac{z_1 z_2}{2} & 1 + \frac{z_1^2 - z_2^2}{4} \end{pmatrix} , \\ \check{e}_{\text{AdS}_3} &= \begin{pmatrix} 1 & 0 & 0 \\ 0 & + \cos t & - \sin t \\ 0 & + \sin t & + \cos t \end{pmatrix} \cdot \begin{pmatrix} + 1 & - z_2 & + z_1 \\ - \frac{z_2}{1 + \frac{z_1^2 + z_2^2}{4}} & 1 - \frac{z_1^2 - z_2^2}{4} & - \frac{z_1 z_2}{2} \\ + \frac{z_1}{1 + \frac{z_1^2 + z_2^2}{4}} & - \frac{z_1 z_2}{2} & 1 + \frac{z_1^2 - z_2^2}{4} \end{pmatrix} , \end{aligned} \end{equation} and \begin{equation} \begin{aligned} \label{eq:E-components-S3} \hat{e}_{\text{S}^3} &= \begin{pmatrix} + \cos\phi & + \sin\phi & 0 \\ - \sin\phi & + \cos\phi & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 + \frac{y_3^2 - y_4^2}{4} & + \frac{y_3 y_4}{2} & -\frac{y_4}{1 - \frac{y_3^2 + y_4^2}{4}} \\ + \frac{y_3 y_4}{2} & 1 - \frac{y_3^2 - y_4^2}{4} & +\frac{y_3}{1 - \frac{y_3^2 + y_4^2}{4}} \\ + y_4 & - y_3 & 1 \end{pmatrix} \,, \\ \check{e}_{\text{S}^3} &= \begin{pmatrix} + \cos\phi & - \sin\phi & 0 \\ + \sin\phi & + \cos\phi & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 + \frac{y_3^2 - y_4^2}{4} & + \frac{y_3 y_4}{2} & +\frac{y_4}{1 - \frac{y_3^2 + y_4^2}{4}} \\ + \frac{y_3 y_4}{2} & 1 - \frac{y_3^2 - y_4^2}{4} & -\frac{y_3}{1 - \frac{y_3^2 + y_4^2}{4}} \\ - y_4 & + y_3 & 1 
\end{pmatrix} . \end{aligned} \end{equation} It is then possible to check that the sum of the bosonic and fermionic Lagrangians is invariant under the following supersymmetry transformations \begin{equation}\label{eq:susy-transformations} \begin{aligned} & \delta\vartheta^-_I = \epsilon_I \,, \qquad && \delta\vartheta^+_I = 0 \,, \\ & \delta\hat{e}_\alpha^m =\delta\check{e}_\alpha^m = -i \bar{\epsilon}_I \Gamma^m \partial_\alpha \vartheta^-_I \quad m=0,\ldots,5\,, \qquad && \delta\hat{e}_\alpha^m =\delta\check{e}_\alpha^m = 0 \quad m=6,\ldots,9\,, \end{aligned} \end{equation} at first order in $\vartheta^\pm_I$. \subsection{Gauge-fixed action} In the previous subsection we showed which choice of fermions is most convenient for achieving a simple form of the supersymmetry variations. In this subsection we perform a different fermionic field redefinition. This is necessary to get fermions that are not charged under the two isometries corresponding to shifts of the coordinates $t,\phi$, which in turn is needed to fix the light-cone kappa-gauge later. For a background realised as a supercoset, this would correspond to the choice $\alg{g}=\Lambda(t,\phi)\cdot\alg{g}_{\text{fer}}\cdot\alg{g}'_{\text{bos}}$ for the coset representative, where $\Lambda(t,\phi)$ is a group element parameterised by $t,\phi$ only, while $\alg{g}'_{\text{bos}}$ is parameterised by the transverse bosonic coordinates. Starting from the fermions appearing in~\eqref{eq:introd-vartheta}, we define fermions $\eta_I,\chi_I$ through \begin{equation}\label{eq:theta-chi-eta-def-t} \begin{aligned} \vartheta_1 &= \frac{1}{2} ( 1 + \Gamma^{012345} ) \mathrlap{M_0 \chi_1}\hphantom{M_0^{-1} \chi_2} + \frac{1}{2} ( 1 - \Gamma^{012345} ) M_0 \eta_1 \,, \\ \vartheta_2 &= \frac{1}{2} ( 1 + \Gamma^{012345} ) M_0^{-1} \chi_2 + \frac{1}{2} ( 1 - \Gamma^{012345} ) M_0^{-1} \eta_2 \,, \end{aligned} \end{equation} where the matrices $M_0,M_0^{-1}$ may be read in~\eqref{eq:def-matr-M0}. 
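As a small consistency check (added here, not part of the original derivation), the operators $\frac{1}{2} ( 1 \pm \Gamma^{012345} )$ appearing in the fermionic redefinitions are genuinely complementary projectors, since $\Gamma^{012345}$ squares to the identity:
\begin{equation}
\bigl(\Gamma^{012345}\bigr)^2 = (-1)^{\frac{6\cdot 5}{2}}\, (\Gamma^0)^2 (\Gamma^1)^2 \cdots (\Gamma^5)^2 = (-1)\cdot(-1) = +\gen{1}\,,
\end{equation}
where the sign $(-1)^{15}$ comes from reordering the six mutually anticommuting factors, and only $(\Gamma^0)^2=-\gen{1}$. Hence $P_\pm=\frac{1}{2}(1\pm\Gamma^{012345})$ satisfy $P_\pm^2=P_\pm$, $P_+P_-=0$ and $P_++P_-=\gen{1}$.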
As before, the correct number of fermions is ensured by the presence of the projectors. After introducing bosonic light-cone coordinates as in Section~\ref{sec:Bos-string-lcg}, we impose the kappa-gauge as in Section~\ref{sec:fermions-type-IIB} \begin{equation}\label{eq:bmn-lc-kappa-gauge} \Gamma^+ \eta_I = 0 , \qquad \Gamma^+ \chi_I = 0 , \qquad \Gamma^{\pm} = \frac{1}{2} \bigl( \Gamma^5 \pm \Gamma^0 \bigr) , \end{equation} to keep only a total of 16 real fermions. The redefinition~\eqref{eq:theta-chi-eta-def-t} and the condition imposed by the kappa-gauge allow us to require that the fermions satisfy \begin{equation} \Gamma^{1234} \chi_I = +\chi_I \,,\qquad \Gamma^{6789} \chi_I = +\chi_I \,,\qquad \Gamma^{1234} \eta_I = -\eta_I \,, \qquad \Gamma^{6789} \eta_I = -\eta_I \,. \end{equation} It is then natural to write them as \begin{equation} \begin{aligned} \chi_I &= \left(\begin{array}{c}1\\ \color{black!40}0\end{array}\right) \otimes \left(\begin{array}{c}1\\ \color{black!40}0\end{array}\right) \otimes \left(\begin{array}{c}1\\ \color{black!40}0\end{array}\right) \otimes ( \chi_I )^{\underline{a} b} \,, \\ \eta_I &= \left(\begin{array}{c}1\\ \color{black!40}0\end{array}\right) \otimes \left(\begin{array}{c}0\\ \mathbf{1}\end{array}\right) \otimes \left(\begin{array}{c}0\\ \mathbf{1}\end{array}\right) \otimes ( \eta_I )^{\underline{\dot{a}} \dot{b}} \,. \end{aligned} \end{equation} As explained in Section~\ref{sec:fermions-type-IIB}, from the light-cone gauge-fixed action we can read the Hamiltonian of the gauge-fixed model. The Hamiltonian at quadratic order in the fields is written in~\eqref{eq:quadr-Hamilt-fields-T4}. To obtain that expression we have actually rewritten our bosonic and fermionic coordinates as follows. 
We first introduce complex coordinates to parameterise the transverse directions of AdS$_3$ and $S^3$ \begin{equation} Z=-z_2+i\,z_1\,,\qquad \bar{Z}=-z_2-i\,z_1\,,\qquad\qquad Y=-y_3-i\,y_4\,,\qquad \bar{Y}=-y_3+i\,y_4\,, \end{equation} together with the corresponding conjugate momenta \begin{equation} P_Z=\frac{1}{2}\dot{Z},\qquad P_{\bar{Z}}=\frac{1}{2}\dot{\bar{Z}}\,,\qquad\qquad P_Y=\frac{1}{2}\dot{Y},\qquad P_{\bar{Y}}=\frac{1}{2}\dot{\bar{Y}}\,. \end{equation} Similarly, for the four directions in the torus we define the complex combinations \begin{equation} X^{12}= x_8 - i \,x_9 \,, \quad X^{21}= -x_8 -i \,x_9\,, \qquad X^{11}= -x_6 + i\, x_7\,, \quad X^{22}= -x_6 - i\, x_7\,, \end{equation} and the conjugate momenta \begin{equation} P_{\dot{a}a}= \frac{1}{2}\epsilon_{\dot{a}\dot{b}}\epsilon_{ab}\dot{X}^{\dot{b}b}. \end{equation} Upon quantisation, these fields satisfy the canonical commutation relations \begin{equation}\label{eq:comm-rel-bos} \begin{aligned} &[Z(\tau,\sigma_1),P_{\bar{Z}}(\tau,\sigma_2)]=[\bar{Z}(\tau,\sigma_1),P_{Z}(\tau,\sigma_2)]=i\,\delta(\sigma_1-\sigma_2)\,,\\ &[Y(\tau,\sigma_1),P_{\bar{Y}}(\tau,\sigma_2)]=[\bar{Y}(\tau,\sigma_1),P_{Y}(\tau,\sigma_2)]=i\,\delta(\sigma_1-\sigma_2)\,,\\ &[X^{{\dot{a}a}}(\tau,\sigma_1), P_{{\dot{b}b}}(\tau,\sigma_2)]=i\,\delta^{{\dot{a}}}_{\ {\dot{b}}}\,\delta^{{a}}_{\ {b}}\,\delta(\sigma_1-\sigma_2)\,. 
\end{aligned} \end{equation} For the massive fermions we define the various components as \begin{equation} \big(\eta_1\big)^{\underline{\dot{a}}\dot{{a}}}=\left( \begin{array}{cc} -e^{+i\pi/4}\,\bar{\eta}_{\sL 2} & -e^{+i\pi/4}\, \bar{\eta}_{\sL 1} \\ \phantom{-}e^{-i\pi/4}\,\eta_{\sL}^{\ 1} & -e^{-i\pi/4}\,\eta_{\sL}^{\ 2} \end{array} \right), \qquad \big(\eta_2\big)^{\underline{\dot{a}}\dot{{a}}}=\left( \begin{array}{cc} \phantom{-}e^{-i\pi/4}\,\eta_{\sR 2} & \phantom{-}e^{-i\pi/4}\,\eta_{\sR 1}\\ -e^{+i\pi/4}\,\bar{\eta}_{\sR}^{\ 1} & \phantom{-}e^{+i\pi/4}\, \bar{\eta}_{\sR}{}^{ 2} \end{array} \right), \end{equation} where the signs and the factors of~$e^{\pm i\pi/4}$ are introduced for later convenience. Similarly, for the massless fermions we write \begin{equation} \big(\chi_1\big)^{\underline{a}a}=\left( \begin{array}{cc} -e^{+i\pi/4}\bar{\chi}_{+2} & \phantom{-}e^{+i\pi/4} \bar{\chi}_{+1} \\ -e^{-i\pi/4}\chi_{+}^{\ 1} & -e^{-i\pi/4}\chi_{+}^{\ 2} \end{array} \right), \qquad \big(\chi_2\big)^{\underline{a}a}=\left( \begin{array}{cc} \phantom{-}e^{-i\pi/4}\chi_{-}^{\ 1} & \phantom{-}e^{-i\pi/4} \chi_{-}^{\ 2} \\ -e^{+i\pi/4}\bar{\chi}_{-2} & \phantom{-}e^{+i\pi/4}\bar{\chi}_{-1} \end{array} \right). \end{equation} The canonical anti-commutation relations are \begin{equation}\label{eq:comm-rel-ferm} \begin{aligned} &\ \{\bar{\eta}_{\sL{\dot{a}}}(\sigma_1),{\eta}_{\mbox{\tiny L}}^{\ {\dot{b}}}(\sigma_2)\}= \{\bar{\eta}_{\sR}^{\ {\dot{b}}}(\sigma_1),{\eta}_{\sR{\dot{a}}}(\sigma_2)\}=\delta_{{\dot{a}}}^{\ {\dot{b}}}\,\delta(\sigma_1-\sigma_2),\\ &\{\bar{\chi}_{+{a}}(\sigma_1),{\chi}_{+}^{{b}}(\sigma_2)\}= \{\bar{\chi}_{-{a}}(\sigma_1),\chi_{-}^{{b}}(\sigma_2)\}=\delta_{{a}}^{\ {b}}\,\delta(\sigma_1-\sigma_2). \end{aligned} \end{equation} It is possible to derive the (super)currents associated to the isometries of the model. In general the conserved charges are divided into \emph{kinematical} and \emph{dynamical}. 
The former do not depend on the light-cone coordinate $x^-$, while the latter do. Given their definition, kinematical charges commute with the total light-cone momentum $P_-$ of~\eqref{eq:total-light-cone-mom-P+P-}. Another way to look at the conserved charges is to ask whether or not they depend explicitly on $x^+=\tau$. Because ${\rm d}\gen{Q}/{\rm d}\tau = \partial \gen{Q}/\partial\tau + \{\gen{H},\gen{Q}\}=0$, it is clear that only the charges without an explicit time-dependence commute with the Hamiltonian. These are actually the charges we are interested in. In particular, of the sixteen real conserved supercharges, only eight of them commute with the Hamiltonian. They turn out to be dynamical. For {AdS$_3\times$S$^3\times$T$^4$} they have been presented at first order in fermions and third order in bosons in~\cite{Borsato:2014hja}. In~\eqref{eq:supercharges-quadratic-T4} we write them at first order in fermions and first order in bosons. 
\chapter{AdS$_3\times$S$^3\times$T$^4$}\label{app:AdS3} \input{Chapters/appGSactionAdS3.tex} \section{Oscillator representation}\label{app:oscillators} Here we introduce creation and annihilation operators. We will use them to rewrite the conserved charges of Chapter~\ref{sec:SymmetryAlgebraT4} that form the algebra $\mathcal{A}$. We first define the following wave-function parameters \begin{equation} \omega(p,m)=\sqrt{m^2+p^2},\quad f(p,m)=\sqrt{\frac{\omega(p,m)+|m|}{2}},\quad g(p,m)=-\frac{p}{2f(p,m)}, \end{equation} that satisfy \begin{equation} \omega(p,m)=f(p,m)^2+g(p,m)^2,\qquad |m|=f(p,m)^2-g(p,m)^2, \end{equation} and the short-hand notation \begin{equation}\begin{aligned} \omega_p=\omega(p,\pm1),\qquad f_p=f(p,\pm1),\qquad g_p=g(p,\pm1),\\ \tilde{\omega}_p=\omega(p,0),\qquad\ \ \tilde{f}_p=f(p,0),\qquad\ \ \tilde{g}_p=g(p,0).\ \end{aligned} \end{equation} For the massive bosons we take \begin{equation} \begin{aligned} a_{\mbox{\tiny L} z} (p) &= \frac{1}{\sqrt{2\pi}} \int \frac{{\rm d}\sigma}{2\sqrt{\omega_p}} \left(\omega_p \bar{Z} +2i P_{\bar{Z}} \right) e^{-i p \sigma},\\ a_{\mbox{\tiny R} z} (p)&= \frac{1}{\sqrt{2\pi}} \int \frac{{\rm d}\sigma}{2\sqrt{\omega_p}} \left(\omega_p Z + 2i P_{Z} \right) e^{-i p \sigma}, \\ \\ a_{\mbox{\tiny L} y}(p) &= \frac{1}{\sqrt{2\pi}} \int \frac{{\rm d}\sigma}{2\sqrt{\omega_p}} \left(\omega_p \bar{Y} + 2i P_{\bar{Y}} \right) e^{-i p \sigma},\\ a_{\mbox{\tiny R} y}(p) &= \frac{1}{\sqrt{2\pi}} \int \frac{{\rm d}\sigma}{2\sqrt{\omega_p}} \left(\omega_p Y + 2i P_{Y} \right) e^{-i p \sigma}, \\ \end{aligned} \end{equation} and for the massless bosons \begin{equation} \begin{aligned} a_{{\dot{a}a}} (p) &= \frac{1}{\sqrt{2\pi}} \int \frac{{\rm d}\sigma}{2\sqrt{\tilde{\omega}_p}} \left(\tilde{\omega}_p X_{{\dot{a}a}} + 2i P_{{\dot{a}a}} \right) e^{-i p \sigma}. 
\end{aligned} \end{equation} The corresponding creation operators are found by taking the complex conjugate of the above expressions and are indicated by a dagger. For massless bosons we have in particular $(a_{{\dot{a}a}})^*=a^{{\dot{a}a}\dagger}$. They satisfy the commutation relations \begin{equation} \begin{gathered} [a_{\mbox{\tiny L}\, z}(p_1),a^{\dagger}_{\mbox{\tiny L}\, z}(p_2)]= [a_{\mbox{\tiny R}\, z}(p_1),a^{\dagger}_{\mbox{\tiny R}\, z}(p_2)]= \delta(p_1-p_2)\,,\\ [a_{\mbox{\tiny L}\, y}(p_1),a^{\dagger}_{\mbox{\tiny L}\, y}(p_2)]= [a_{\mbox{\tiny R}\, y}(p_1),a^{\dagger}_{\mbox{\tiny R}\, y}(p_2)]= \delta(p_1-p_2)\,,\\ [a^{{\dot{a}a}}(p_1),a^{\dagger}_{{\dot{b}b}}(p_2)]=\delta^{{\dot{a}}}_{\ {\dot{b}}}\,\delta^{{a}}_{\ {b}}\, \delta(p_1-p_2)\,. \end{gathered} \end{equation} The ladder operators for massive fermions are defined as \begin{equation} \label{eq:fermion-par-massive} \begin{aligned} d_{\mbox{\tiny L} {\dot{a}}}(p) &=+ \frac{e^{+i \pi / 4}}{\sqrt{2\pi}} \int \frac{{\rm d}\sigma}{\sqrt{\omega_p}} \ \epsilon_{{\dot{a}\dot{b}}}\left( f_p \, {\eta}^{\ {\dot{b}}}_{\sL} +i g_p \, \bar{\eta}^{\ {\dot{b}}}_{\sR} \right) e^{-i p \sigma}, \\ d_{\mbox{\tiny R}}^{\ {\dot{a}}}(p) &=- \frac{e^{+i \pi / 4}}{\sqrt{2\pi}} \int \frac{{\rm d}\sigma}{\sqrt{\omega_p}} \ \epsilon^{{\dot{a}\dot{b}}} \left( f_p \, \eta_{\sR{\dot{b}}} +i g_p \, \bar{\eta}_{\sL{\dot{b}}} \right) e^{-i p \sigma}, \\ \end{aligned} \end{equation} while for massless fermions we take \begin{equation} \label{eq:fermion-par-massless} \begin{aligned} \tilde{d}_{{a}}(p)&= \frac{e^{-i \pi / 4}}{\sqrt{2\pi}} \int \frac{{\rm d}\sigma}{\sqrt{\tilde{\omega}_p}} \left( \tilde{f}_p \bar{\chi}_{+{a}} -i \tilde{g}_p \, \epsilon_{{ab}} {\chi}_{-}^{\ {b}} \right) e^{-i p \sigma}, \\ {d}_{{a}}(p) &= \frac{e^{+i \pi / 4}}{\sqrt{2\pi}} \int \frac{{\rm d}\sigma}{\sqrt{\tilde{\omega}_p}} \left( \tilde{f}_p\, \epsilon_{{ab}} {\chi}_{+}^{\ {b}} -i \tilde{g}_p\, \bar{\chi}_{-{a}}\right) e^{-i p \sigma}. 
\end{aligned} \end{equation} Also in this case the creation operators are found by taking $({d}_{{a}})^*={d}^{{a}\dagger}$. The anti-commutation relations are \begin{equation} \begin{gathered} \{d_{\mbox{\tiny L}}^{\ {\dot{a}}\,\dagger}(p_1),d_{\mbox{\tiny L} {\dot{b}}}(p_2)\}= \{d_{\mbox{\tiny R} {\dot{b}}}^{\,\dagger}(p_1),d_{\mbox{\tiny R}}^{\ {\dot{a}}}(p_2)\}=\delta_{{\dot{b}}}^{\ {\dot{a}}}\,\delta(p_1-p_2)\,, \\ \{\tilde{d}^{{a}\,\dagger}(p_1),\tilde{d}_{{b}}(p_2)\}= \{d^{\ {a}\,\dagger}(p_1),d_{{b}}(p_2)\}=\delta_{{b}}^{\ {a}}\,\delta(p_1-p_2)\,. \end{gathered} \end{equation} Using these definitions we can rewrite the conserved charges in terms of creation and annihilation operators, to obtain~\eqref{eq-central-charges-osc-T4} and~\eqref{eq-supercharges-osc-T4}. \section{Explicit S-matrix elements}\label{app:S-mat-explicit} Here we write the action of the $\alg{psu}(1|1)^4_{\text{c.e.}}$-invariant S-matrix on two-particle states. \subsection{The massive sector} In the massive sector, when we scatter two Left excitations we get \begingroup \addtolength{\jot}{1ex} \begin{equation}\label{eq:massive-S-matrix-LL} \begin{aligned} \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\sL} \ket{Y^{\mbox{\tiny L}}_p Y^{\mbox{\tiny L}}_q} =& A^{\mbox{\tiny L}\sL}_{pq} A^{\mbox{\tiny L}\sL}_{pq} \ket{Y^{\mbox{\tiny L}}_q Y^{\mbox{\tiny L}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\sL} \ket{Y^{\mbox{\tiny L}}_p \eta^{\mbox{\tiny L}\, \dot{a}}_q} =& A^{\mbox{\tiny L}\sL}_{pq} B^{\mbox{\tiny L}\sL}_{pq} \ket{\eta^{\mbox{\tiny L}\, \dot{a}}_q Y^{\mbox{\tiny L}}_p } +A^{\mbox{\tiny L}\sL}_{pq} C^{\mbox{\tiny L}\sL}_{pq} \ket{Y^{\mbox{\tiny L}}_q \eta^{\mbox{\tiny L}\, \dot{a}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\sL} \ket{Y^{\mbox{\tiny L}}_p Z^{\mbox{\tiny L}}_q} =& B^{\mbox{\tiny L}\sL}_{pq} B^{\mbox{\tiny L}\sL}_{pq} \ket{Z^{\mbox{\tiny L}}_q Y^{\mbox{\tiny L}}_p } 
+C^{\mbox{\tiny L}\sL}_{pq} C^{\mbox{\tiny L}\sL}_{pq} \ket{Y^{\mbox{\tiny L}}_q Z^{\mbox{\tiny L}}_p } +\epsilon^{\dot{a}\dot{b}} B^{\mbox{\tiny L}\sL}_{pq} C^{\mbox{\tiny L}\sL}_{pq} \ket{\eta^{\mbox{\tiny L}\, \dot{a}}_q \eta^{\mbox{\tiny L}\, \dot{b}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\sL} \ket{\eta^{\mbox{\tiny L}\, \dot{a}}_p Y^{\mbox{\tiny L}}_q} =& A^{\mbox{\tiny L}\sL}_{pq} D^{\mbox{\tiny L}\sL}_{pq} \ket{Y^{\mbox{\tiny L}}_q \eta^{\mbox{\tiny L}\, \dot{a}}_p } +A^{\mbox{\tiny L}\sL}_{pq} E^{\mbox{\tiny L}\sL}_{pq} \ket{\eta^{\mbox{\tiny L}\, \dot{a}}_q Y^{\mbox{\tiny L}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\sL} \ket{\eta^{\mbox{\tiny L}\, \dot{a}}_p \eta^{\mbox{\tiny L}\, \dot{b}}_q} =& -B^{\mbox{\tiny L}\sL}_{pq} D^{\mbox{\tiny L}\sL}_{pq} \ket{\eta^{\mbox{\tiny L}\, \dot{b}}_q \eta^{\mbox{\tiny L}\, \dot{a}}_p } +C^{\mbox{\tiny L}\sL}_{pq} E^{\mbox{\tiny L}\sL}_{pq} \ket{\eta^{\mbox{\tiny L}\, \dot{a}}_q \eta^{\mbox{\tiny L}\, \dot{b}}_p } \\ &+\epsilon^{\dot{a}\dot{b}}C^{\mbox{\tiny L}\sL}_{pq} D^{\mbox{\tiny L}\sL}_{pq} \ket{Y^{\mbox{\tiny L}}_q Z^{\mbox{\tiny L}}_p } +\epsilon^{\dot{a}\dot{b}}B^{\mbox{\tiny L}\sL}_{pq} E^{\mbox{\tiny L}\sL}_{pq} \ket{Z^{\mbox{\tiny L}}_q Y^{\mbox{\tiny L}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\sL} \ket{\eta^{\mbox{\tiny L}\, \dot{a}}_p Z^{\mbox{\tiny L}}_q} =& -B^{\mbox{\tiny L}\sL}_{pq} F^{\mbox{\tiny L}\sL}_{pq} \ket{Z^{\mbox{\tiny L}}_q \eta^{\mbox{\tiny L}\, \dot{a}}_p } +C^{\mbox{\tiny L}\sL}_{pq} F^{\mbox{\tiny L}\sL}_{pq} \ket{\eta^{\mbox{\tiny L}\, \dot{a}}_q Z^{\mbox{\tiny L}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\sL} \ket{Z^{\mbox{\tiny L}}_p Y^{\mbox{\tiny L}}_q} =& D^{\mbox{\tiny L}\sL}_{pq} D^{\mbox{\tiny L}\sL}_{pq} \ket{Y^{\mbox{\tiny L}}_q Z^{\mbox{\tiny L}}_p } +E^{\mbox{\tiny L}\sL}_{pq} E^{\mbox{\tiny L}\sL}_{pq} 
\ket{Z^{\mbox{\tiny L}}_q Y^{\mbox{\tiny L}}_p } +\epsilon^{\dot{a}\dot{b}}D^{\mbox{\tiny L}\sL}_{pq} E^{\mbox{\tiny L}\sL}_{pq} \ket{\eta^{\mbox{\tiny L}\, \dot{a}}_q \eta^{\mbox{\tiny L}\, \dot{b}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\sL} \ket{Z^{\mbox{\tiny L}}_p \eta^{\mbox{\tiny L}\, \dot{a}}_q} =& -D^{\mbox{\tiny L}\sL}_{pq} F^{\mbox{\tiny L}\sL}_{pq} \ket{\eta^{\mbox{\tiny L}\, \dot{a}}_q Z^{\mbox{\tiny L}}_p } +E^{\mbox{\tiny L}\sL}_{pq} F^{\mbox{\tiny L}\sL}_{pq} \ket{Z^{\mbox{\tiny L}}_q \eta^{\mbox{\tiny L}\, \dot{a}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\sL} \ket{Z^{\mbox{\tiny L}}_p Z^{\mbox{\tiny L}}_q} =& F^{\mbox{\tiny L}\sL}_{pq} F^{\mbox{\tiny L}\sL}_{pq} \ket{Z^{\mbox{\tiny L}}_q Z^{\mbox{\tiny L}}_p } , \\ \end{aligned} \end{equation} \endgroup Scattering a Left and a Right excitation yields\footnote{To have a better notation, we prefer to raise the $\alg{su}(2)$ index of Right fermions with $\epsilon^{\dot{a}\dot{b}}$.} \begingroup \addtolength{\jot}{1ex} \begin{equation}\label{eq:massive-S-matrix-LR} \begin{aligned} \mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}} \ket{Y^{\mbox{\tiny L}}_p Y^{\mbox{\tiny R}}_q} =& A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{Y^{\mbox{\tiny R}}_q Y^{\mbox{\tiny L}}_p } -B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{Z^{\mbox{\tiny R}}_q Z^{\mbox{\tiny L}}_p } +\epsilon^{\dot{a}\dot{b}}A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\eta^{\mbox{\tiny R}\, \dot{a}}_q \eta^{\mbox{\tiny L}\, \dot{b}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}} \ket{Y^{\mbox{\tiny L}}_p \eta^{\mbox{\tiny R}\, \dot{a}}_q} =& A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\eta^{\mbox{\tiny R}\, 
\dot{a}}_q Y^{\mbox{\tiny L}}_p } -B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{Z^{\mbox{\tiny R}}_q \eta^{\mbox{\tiny L}\, \dot{a}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}} \ket{Y^{\mbox{\tiny L}}_p Z^{\mbox{\tiny R}}_q} =& C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{Z^{\mbox{\tiny R}}_q Y^{\mbox{\tiny L}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}} \ket{\eta^{\mbox{\tiny L}\, \dot{a}}_p Y^{\mbox{\tiny R}}_q} =& A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{Y^{\mbox{\tiny R}}_q \eta^{\mbox{\tiny L}\, \dot{a}}_p } -B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\eta^{\mbox{\tiny R}\, \dot{a}}_q Z^{\mbox{\tiny L}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}} \ket{\eta^{\mbox{\tiny L}\, \dot{a}}_p \eta^{\mbox{\tiny R}\, \dot{b}}_q} =& +A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\eta^{\mbox{\tiny R}\, \dot{b}}_q \eta^{\mbox{\tiny L}\, \dot{a}}_p } -B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\eta^{\mbox{\tiny R}\, \dot{a}}_q \eta^{\mbox{\tiny L}\, \dot{b}}_p } \\ &-\epsilon^{\dot{a}\dot{b}}A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{Y^{\mbox{\tiny R}}_q Y^{\mbox{\tiny L}}_p } +\epsilon^{\dot{a}\dot{b}}B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{Z^{\mbox{\tiny R}}_q Z^{\mbox{\tiny L}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}} \ket{\eta^{\mbox{\tiny L}\, \dot{a}}_p Z^{\mbox{\tiny R}}_q} =& -E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{Z^{\mbox{\tiny R}}_q \eta^{\mbox{\tiny L}\, \dot{a}}_p } 
+C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\eta^{\mbox{\tiny R}\, \dot{a}}_q Y^{\mbox{\tiny L}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}} \ket{Z^{\mbox{\tiny L}}_p Y^{\mbox{\tiny R}}_q} =& D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{Y^{\mbox{\tiny R}}_q Z^{\mbox{\tiny L}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}} \ket{Z^{\mbox{\tiny L}}_p \eta^{\mbox{\tiny R}\, \dot{a}}_q} =& -D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\eta^{\mbox{\tiny R}\, \dot{a}}_q Z^{\mbox{\tiny L}}_p } +D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{Y^{\mbox{\tiny R}}_q \eta^{\mbox{\tiny L}\, \dot{a}}_p } , \\ \mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}} \ket{Z^{\mbox{\tiny L}}_p Z^{\mbox{\tiny R}}_q} =& E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{Z^{\mbox{\tiny R}}_q Z^{\mbox{\tiny L}}_p } -F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{Y^{\mbox{\tiny R}}_q Y^{\mbox{\tiny L}}_p } -\epsilon^{\dot{a}\dot{b}}E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\eta^{\mbox{\tiny R}\, \dot{a}}_q \eta^{\mbox{\tiny L}\, \dot{b}}_p } , \\ \end{aligned} \end{equation} \endgroup \subsection{The mixed-mass sector} In the case of left massive excitations that scatter with massless excitations transforming in the $\varrho_{\mbox{\tiny L}} \otimes \widetilde{\varrho}_{\mbox{\tiny L}}$ representation of $\alg{psu}(1|1)^4_{\text{c.e.}}$ we find \begingroup \addtolength{\jot}{1ex} \begin{equation}\label{eq:mixed-S-matrix-L} \begin{aligned} \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\tilde{\mbox{\tiny L}}} \ket{Z^{\mbox{\tiny L}}_p T^{\dot{a}a}_q} =& - 
F^{\mbox{\tiny L}\sL}_{pq}D^{\mbox{\tiny L}\sL}_{pq} \ket{T^{\dot{a}a}_q Z^{\mbox{\tiny L}}_p } - F^{\mbox{\tiny L}\sL}_{pq}E^{\mbox{\tiny L}\sL}_{pq} \ket{\widetilde{\chi}^{a}_q \eta^{\mbox{\tiny L} \dot{a}}_p}, \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\tilde{\mbox{\tiny L}}} \ket{Y^{\mbox{\tiny L}}_p T^{\dot{a}a}_q} =& + A^{\mbox{\tiny L}\sL}_{pq}B^{\mbox{\tiny L}\sL}_{pq} \ket{T^{\dot{a}a}_q Y^{\mbox{\tiny L}}_p } - A^{\mbox{\tiny L}\sL}_{pq}C^{\mbox{\tiny L}\sL}_{pq} \ket{ \chi^{a}_q \eta^{\mbox{\tiny L} \dot{a}}_p }, \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\tilde{\mbox{\tiny L}}} \ket{\eta^{\mbox{\tiny L} \dot{a}}_p \widetilde{\chi}^{a}_q} =& + F^{\mbox{\tiny L}\sL}_{pq}B^{\mbox{\tiny L}\sL}_{pq} \ket{\widetilde{\chi}^a_q \eta^{\mbox{\tiny L} \dot{a}}_p } + F^{\mbox{\tiny L}\sL}_{pq}C^{\mbox{\tiny L}\sL}_{pq} \ket{ T^{\dot{a}a}_q Z^{\mbox{\tiny L}}_p }, \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\tilde{\mbox{\tiny L}}} \ket{\eta^{\mbox{\tiny L} \dot{a}}_p \chi^a_q} =& -A^{\mbox{\tiny L}\sL}_{pq}D^{\mbox{\tiny L}\sL}_{pq} \ket{\chi^a_q \eta^{\mbox{\tiny L} \dot{a}}_p } + A^{\mbox{\tiny L}\sL}_{pq}E^{\mbox{\tiny L}\sL}_{pq} \ket{ T^{\dot{a}a}_q Y^{\mbox{\tiny L}}_p }, \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\tilde{\mbox{\tiny L}}} \ket{Z^{\mbox{\tiny L}}_p \widetilde{\chi}^a_q} =& + F^{\mbox{\tiny L}\sL}_{pq}F^{\mbox{\tiny L}\sL}_{pq} \ket{\widetilde{\chi}^a_q Z^{\mbox{\tiny L}}_p }, \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\tilde{\mbox{\tiny L}}} \ket{Y^{\mbox{\tiny L}}_p \chi^a_q} =& + A^{\mbox{\tiny L}\sL}_{pq}A^{\mbox{\tiny L}\sL}_{pq} \ket{\chi^a_q Y^{\mbox{\tiny L}}_p }, \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\tilde{\mbox{\tiny L}}} \ket{Z^{\mbox{\tiny L}}_p \chi^a_q} =& + D^{\mbox{\tiny L}\sL}_{pq}D^{\mbox{\tiny L}\sL}_{pq} \ket{\chi^a_q 
Z^{\mbox{\tiny L}}_p } + E^{\mbox{\tiny L}\sL}_{pq}E^{\mbox{\tiny L}\sL}_{pq} \ket{\widetilde{\chi}^a_q Y^{\mbox{\tiny L}}_p } + D^{\mbox{\tiny L}\sL}_{pq}E^{\mbox{\tiny L}\sL}_{pq} \, \epsilon_{\dot{a}\dot{b}} \ket{ T^{\dot{a}a}_q \eta^{\mbox{\tiny L} \dot{b}}_p}, \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\tilde{\mbox{\tiny L}}} \ket{Y^{\mbox{\tiny L}}_p \widetilde{\chi}^a_q} =& + B^{\mbox{\tiny L}\sL}_{pq}B^{\mbox{\tiny L}\sL}_{pq} \ket{ \widetilde{\chi}^a_q Y^{\mbox{\tiny L}}_p } + C^{\mbox{\tiny L}\sL}_{pq}C^{\mbox{\tiny L}\sL}_{pq} \ket{\chi^a_q Z^{\mbox{\tiny L}}_p } + B^{\mbox{\tiny L}\sL}_{pq}C^{\mbox{\tiny L}\sL}_{pq} \, \epsilon_{\dot{a}\dot{b}} \ket{ T^{\dot{a}a}_q \eta^{\mbox{\tiny L} \dot{b}}_p}, \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\mbox{\tiny L}\tilde{\mbox{\tiny L}}} \ket{ \eta^{\mbox{\tiny L} \dot{a}}_p T^{\dot{b}a}_q} =& + D^{\mbox{\tiny L}\sL}_{pq}B^{\mbox{\tiny L}\sL}_{pq} \ket{ T^{\dot{a}a}_q \eta^{\mbox{\tiny L} \dot{b}}_p } - E^{\mbox{\tiny L}\sL}_{pq}C^{\mbox{\tiny L}\sL}_{pq} \ket{ T^{\dot{b}a}_q \eta^{\mbox{\tiny L} \dot{a}}_p } \\[-1ex] & + D^{\mbox{\tiny L}\sL}_{pq}C^{\mbox{\tiny L}\sL}_{pq} \, \epsilon^{\dot{a}\dot{b}} \ket{ \chi_q^{a} Z^{\mbox{\tiny L}}_p} + E^{\mbox{\tiny L}\sL}_{pq}B^{\mbox{\tiny L}\sL}_{pq} \, \epsilon^{\dot{a}\dot{b}} \ket{ \widetilde{\chi}_q^{a} Y^{\mbox{\tiny L}}_p}. 
\end{aligned} \end{equation} \endgroup When we scatter a right excitation with a massless one we can write the S-matrix elements as \begingroup \addtolength{\jot}{1ex} \begin{equation}\label{eq:mixed-S-matrix-R} \begin{aligned} \mathcal{S}^{\mbox{\tiny R}\mbox{\tiny L}}\check{\otimes}\mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}} \ket{Z^{\mbox{\tiny R}}_p T^{\dot{a}a}_q} =& -D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{T^{\dot{a}a}_q Z^{\mbox{\tiny R}}_p } + D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\chi^a_q \eta^{\mbox{\tiny R} \dot{a}}_p}, \\ \mathcal{S}^{\mbox{\tiny R}\mbox{\tiny L}}\check{\otimes}\mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}} \ket{Y^{\mbox{\tiny R}}_p T^{\dot{a}a}_q} =& + A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{T^{\dot{a}a}_q Y^{\mbox{\tiny R}}_p } -B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{ \widetilde{\chi}^a_q \eta^{\mbox{\tiny R} \dot{a}}_p }, \\ \mathcal{S}^{\mbox{\tiny R}\mbox{\tiny L}}\check{\otimes}\mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}} \ket{\eta^{\mbox{\tiny R} \dot{a}}_p \chi^a_q} =& -D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\chi^a_q \eta^{\mbox{\tiny R} \dot{a}}_p } + D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{ T^{\dot{a}a}_q Z^{\mbox{\tiny R}}_p }, \\ \mathcal{S}^{\mbox{\tiny R}\mbox{\tiny L}}\check{\otimes}\mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}} \ket{\eta^{\mbox{\tiny R} \dot{a}}_p \widetilde{\chi}^a_q} =& + E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\widetilde{\chi}^a_q \eta^{\mbox{\tiny R} \dot{a}}_p } - F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{ T^{\dot{a}a}_q Y^{\mbox{\tiny R}}_p }, \\ \mathcal{S}^{\mbox{\tiny R}\mbox{\tiny L}}\check{\otimes}\mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}} 
\ket{Z^{\mbox{\tiny R}}_p \chi^a_q} =& + D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\chi^a_q Z^{\mbox{\tiny R}}_p }, \\ \mathcal{S}^{\mbox{\tiny R}\mbox{\tiny L}}\check{\otimes}\mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}} \ket{Y^{\mbox{\tiny R}}_p \widetilde{\chi}^a_q} =& + C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\widetilde{\chi}^a_q Y^{\mbox{\tiny R}}_p }, \\ \mathcal{S}^{\mbox{\tiny R}\mbox{\tiny L}}\check{\otimes}\mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}} \ket{Z^{\mbox{\tiny R}}_p \widetilde{\chi}^a_q} =& + E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\widetilde{\chi}^a_q Z^{\mbox{\tiny R}}_p } -F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\chi^a_q Y^{\mbox{\tiny R}}_p } +F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \, \epsilon_{\dot{a}\dot{b}} \ket{ T^{\dot{a}a}_q \eta^{\mbox{\tiny R} \dot{b}}_p}, \\ \mathcal{S}^{\mbox{\tiny R}\mbox{\tiny L}}\check{\otimes}\mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}} \ket{Y^{\mbox{\tiny R}}_p \chi^a_q} =& + A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{ \chi^a_q Y^{\mbox{\tiny R}}_p } - B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}\ket{\widetilde{\chi}^a_q Z^{\mbox{\tiny R}}_p } - B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \, \epsilon_{\dot{a}\dot{b}} \ket{ T^{\dot{a}a}_q \eta^{\mbox{\tiny R} \dot{b}}_p}, \\ \mathcal{S}^{\mbox{\tiny R}\mbox{\tiny L}}\check{\otimes}\mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}} \ket{ \eta^{\mbox{\tiny R} \dot{a}}_p T^{\dot{b}a}_q} =& + B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{ T^{\dot{a}a}_q \eta^{\mbox{\tiny R} \dot{b}}_p } - A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{ T^{\dot{b}a}_q \eta^{\mbox{\tiny R} \dot{a}}_p } \\[-1ex] & - 
B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \, \epsilon^{\dot{a}\dot{b}} \ket{ \widetilde{\chi}^{a}_q Z^{\mbox{\tiny R}}_p} + A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \, \epsilon^{\dot{a}\dot{b}} \ket{ \chi^{a}_q Y^{\mbox{\tiny R}}_p}. \end{aligned} \end{equation} \endgroup After taking into account a proper normalisation like in Section~\ref{sec:smat-tensor-prod}, the S-matrix elements for left-massless and right-massless scattering can be related by LR symmetry. In order to do so, one needs to implement it on massive and massless excitations as in equations~\eqref{eq:LR-massive} and~\eqref{eq:LR-massless}. \subsection{The massless sector} We write the non-vanishing entries of the two-particle S~matrix in the massless sector. First we focus on the structure fixed by the $\alg{psu}(1|1)^4$ invariance. For this reason we omit the indices corresponding to $\alg{su}(2)_{\circ}$. \begingroup \addtolength{\jot}{1ex} \begin{equation}\label{eq:massless-S-matrix} \begin{aligned} \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\tilde{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{T^{\dot{a}\ }_p T^{\dot{b}\ }_q} =& -C^{\mbox{\tiny L}\sL}_{pq} E^{\mbox{\tiny L}\sL}_{pq} \ket{T^{\dot{a}\ }_q T^{\dot{b}\ }_p} +B^{\mbox{\tiny L}\sL}_{pq}D^{\mbox{\tiny L}\sL}_{pq} \ket{T^{\dot{b}\ }_q T^{\dot{a}\ }_p} \\[-1ex] & +\epsilon^{\dot{a}\dot{b}} \left(C^{\mbox{\tiny L}\sL}_{pq} D^{\mbox{\tiny L}\sL}_{pq} \ket{\chi^{\ }_q \widetilde{\chi}^{\ }_p} + B^{\mbox{\tiny L}\sL}_{pq}E^{\mbox{\tiny L}\sL}_{pq} \ket{\widetilde{\chi}^{\ }_q \chi^{\ }_p}\right), \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\tilde{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{T^{\dot{a}\ }_p \widetilde{\chi}^{\ }_q} =&-B^{\mbox{\tiny L}\sL}_{pq}F^{\mbox{\tiny L}\sL}_{pq} \ket{\widetilde{\chi}^{\ }_q T^{\dot{a}\ }_p} - C^{\mbox{\tiny L}\sL}_{pq}F^{\mbox{\tiny L}\sL}_{pq} \ket{T^{\dot{a}\ }_q \widetilde{\chi}^{\ }_p}, \\ 
\mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\tilde{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{\widetilde{\chi}^{\ }_p T^{\dot{a}\ }_q} =&-F^{\mbox{\tiny L}\sL}_{pq}D^{\mbox{\tiny L}\sL}_{pq}\ket{T^{\dot{a}\ }_q \widetilde{\chi}^{\ }_p} - F^{\mbox{\tiny L}\sL}_{pq}E^{\mbox{\tiny L}\sL}_{pq}\ket{\widetilde{\chi}^{\ }_q T^{\dot{a}\ }_p}, \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\tilde{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{T^{\dot{a}\ }_p \chi^{\ }_q} =&-B^{\mbox{\tiny L}\sL}_{pq}F^{\mbox{\tiny L}\sL}_{pq} \ket{\chi^{\ }_q T^{\dot{a}\ }_p} - C^{\mbox{\tiny L}\sL}_{pq}F^{\mbox{\tiny L}\sL}_{pq} \ket{T^{\dot{a}\ }_q \chi^{\ }_p}, \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\tilde{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{\chi^{\ }_p T^{\dot{a}\ }_q} =&-F^{\mbox{\tiny L}\sL}_{pq}D^{\mbox{\tiny L}\sL}_{pq}\ket{T^{\dot{a}\ }_q \chi^{\ }_p} - F^{\mbox{\tiny L}\sL}_{pq}E^{\mbox{\tiny L}\sL}_{pq}\ket{\chi^{\ }_q T^{\dot{a}\ }_p}, \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\tilde{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{\widetilde{\chi}^{\ }_p \widetilde{\chi}^{\ }_q} =& -A^{\mbox{\tiny L}\sL}_{pq}A^{\mbox{\tiny L}\sL}_{pq} \ket{\widetilde{\chi}^{\ }_q \widetilde{\chi}^{\ }_p}, \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\tilde{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{\chi^{\ }_p \chi^{\ }_q} =& -A^{\mbox{\tiny L}\sL}_{pq}A^{\mbox{\tiny L}\sL}_{pq} \ket{\chi^{\ }_q \chi^{\ }_p}, \\ \mathcal{S}^{\mbox{\tiny L}\sL}\check{\otimes}\mathcal{S}^{\tilde{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{\widetilde{\chi}^{\ }_p \chi^{\ }_q} =& -D^{\mbox{\tiny L}\sL}_{pq}D^{\mbox{\tiny L}\sL}_{pq}\ket{\chi^{\ }_q \widetilde{\chi}^{\ }_p} - E^{\mbox{\tiny L}\sL}_{pq}E^{\mbox{\tiny L}\sL}_{pq}\ket{\widetilde{\chi}^{\ }_q \chi^{\ }_p} - E^{\mbox{\tiny L}\sL}_{pq} D^{\mbox{\tiny L}\sL}_{pq} \epsilon_{\dot{a}\dot{b}} \ket{T^{\dot{a}\ }_q T^{\dot{b}\ }_p}, \\ \mathcal{S}^{\mbox{\tiny 
L}\sL}\check{\otimes}\mathcal{S}^{\tilde{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{\chi^{\ }_p \widetilde{\chi}^{\ }_q} =& -D^{\mbox{\tiny L}\sL}_{pq}D^{\mbox{\tiny L}\sL}_{pq}\ket{\widetilde{\chi}^{\ }_q \chi^{\ }_p} - E^{\mbox{\tiny L}\sL}_{pq}E^{\mbox{\tiny L}\sL}_{pq}\ket{\chi^{\ }_q \widetilde{\chi}^{\ }_p} + E^{\mbox{\tiny L}\sL}_{pq} D^{\mbox{\tiny L}\sL}_{pq} \epsilon_{\dot{a}\dot{b}} \ket{T^{\dot{a}\ }_q T^{\dot{b}\ }_p}. \end{aligned} \end{equation} \endgroup The structure fixed by the $\alg{su}(2)_{\circ}$ symmetry is as follows \begin{equation} \mathcal{S}_{\alg{su}(2)} \ket{\mathcal{X}^a_p \mathcal{Y}^b_q} = \frac{1}{1+\varsigma_{pq}} \left( \varsigma_{pq} \ket{\mathcal{Y'}^b_q \mathcal{X'}^a_p} + \ket{\mathcal{Y'}^a_q \mathcal{X'}^b_p}\right), \end{equation} where we use $\mathcal{X},\mathcal{Y},\mathcal{X'},\mathcal{Y'}$ to denote any of the excitations that appear above. The antisymmetric function $\varsigma_{pq}$ is further constrained in section~\ref{sec:unitarity-YBe}. The full S-matrix in the massless sector is then found by combining the structures fixed by $\alg{psu}(1|1)^4_{\text{c.e.}}$ and $\alg{su}(2)_{\circ}$. The preferred normalisation is found by multiplying each element by the scalar factor as in Section~\ref{sec:smat-tensor-prod}. This S-matrix automatically satisfies the LR-symmetry, where this is implemented on massless excitations as in (\ref{eq:LR-massless}). \section{Appendix for Bosonic (AdS$_5\times$S$^5)_\eta$}\label{app:bos-eta-def} In this appendix we collect some useful results needed in Chapter~\ref{ch:qAdS5Bos} and Chapter~\ref{ch:qAdS5Fer}. 
\subsection{Coset elements for the bosonic model} A very convenient parametrisation for a bosonic coset element is given by \begin{equation} \label{basiccoset} \alg{g}_{\alg{b}}=\small{\left( \begin{array}{cc} \alg{g}_{\alg{a}} & 0 \\ 0 & \alg{g}_{\alg{s}} \end{array} \right)}\, ,\quad \alg{g}_{\alg{a}}=\Lambda(\psi_k)\, \Xi(\zeta)\check\alg{g}_{\rho}(\rho)\, ,\quad \alg{g}_{\alg{s}}=\Lambda(\phi_k)\, \Xi(\xi)\check\alg{g}_{\rm r}(r)\, . \end{equation} Here the matrix functions $\Lambda$, $\Xi$ and $\check\alg{g}$ are defined as \begin{equation} \label{eq:Lambda} \Lambda(\varphi_k)=\exp(\sum_{k=1}^3 \frac{i}{2}\varphi_k h_k )\,,\quad \Xi(\varphi)=\left( \begin{array}{cccc} \cos\frac{\varphi}{2} & \sin\frac{\varphi}{2} & 0 & 0 \\ -\sin\frac{\varphi}{2} & \cos\frac{\varphi}{2} & 0 & 0 \\ 0 & 0 & \cos\frac{\varphi}{2} & -\sin\frac{\varphi}{2} \\ 0 & 0 & \sin\frac{\varphi}{2} & \cos\frac{\varphi}{2} \\ \end{array} \right)\,, \end{equation} \begin{equation} \label{checkgrho} \check\alg{g}_{\rho}(\rho) = \left( \begin{array}{cccc} \rho_+& 0 & 0 &\rho_- \\ 0 & \rho_+& -\rho_-& 0 \\ 0 & -\rho_- & \rho_+& 0 \\ \rho_-& 0 & 0 & \rho_+\\ \end{array} \right) \,,\quad \rho_\pm= {\sqrt{\sqrt{\rho ^2+1}\pm1} \over\sqrt 2}\,,\end{equation} \begin{equation} \label{checkgr} \check\alg{g}_{r}(r) = \left( \begin{array}{cccc} r_+& 0 & 0 &i\, r_- \\ 0 & r_+& -i \,r_-& 0 \\ 0 & -i\,r_- & r_+& 0 \\ i\,r_-& 0 & 0 & r_+\\ \end{array} \right) \,,\quad r_\pm= {\sqrt{1\pm\sqrt{1-r ^2}} \over\sqrt 2}\,,\end{equation} where the diagonal matrices $h_i$ are given by \begin{equation} h_1={\rm diag}(-1,1,-1,1) \,,\quad h_2={\rm diag}(-1,1,1,-1) \,,\quad h_3={\rm diag}(1,1,-1,-1) \,. \end{equation} The coordinates $t\equiv\psi_3\,,\,\psi_1\,,\,\psi_2\,,\, \zeta\,,\, \rho$ and $\phi\equiv\phi_3\,,\,\phi_1\,,\,\phi_2\,,\, \xi\,,\, r$ are the ones introduced in~\eqref{eq:sph-coord-AdS5} and~\eqref{eq:sph-coord-S5} to parameterise AdS$_5$ and S$^5$. 
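The building blocks above satisfy some simple algebraic identities that are easy to check numerically: $\Xi(\varphi)$ is orthogonal and composes additively in $\varphi$, while $\check\alg{g}_{\rho}$ and $\check\alg{g}_{r}$ have unit determinant, since $\rho_+^2-\rho_-^2=1$ and $r_+^2+r_-^2=1$. A minimal numerical sketch of \eqref{eq:Lambda}, \eqref{checkgrho} and \eqref{checkgr} (the Python function names are ours):

```python
import numpy as np

def Xi(phi):
    # the 4x4 rotation block Xi(phi) of eq. (eq:Lambda)
    c, s = np.cos(phi / 2), np.sin(phi / 2)
    return np.array([[ c,  s, 0,  0],
                     [-s,  c, 0,  0],
                     [ 0,  0, c, -s],
                     [ 0,  0, s,  c]])

def g_rho(rho):
    # the AdS factor of eq. (checkgrho), with rho_pm as defined there
    rp = np.sqrt((np.sqrt(rho**2 + 1) + 1) / 2)
    rm = np.sqrt((np.sqrt(rho**2 + 1) - 1) / 2)
    return np.array([[rp,   0,   0, rm],
                     [ 0,  rp, -rm,  0],
                     [ 0, -rm,  rp,  0],
                     [rm,   0,   0, rp]])

def g_r(r):
    # the sphere factor of eq. (checkgr), with r_pm as defined there (0 <= r <= 1)
    rp = np.sqrt((1 + np.sqrt(1 - r**2)) / 2)
    rm = np.sqrt((1 - np.sqrt(1 - r**2)) / 2)
    return np.array([[rp,      0,      0, 1j*rm],
                     [ 0,     rp, -1j*rm,     0],
                     [ 0, -1j*rm,     rp,     0],
                     [1j*rm,   0,      0,    rp]], dtype=complex)

# Xi is orthogonal and one-parameter: Xi(a) Xi(b) = Xi(a+b)
assert np.allclose(Xi(0.7) @ Xi(0.7).T, np.eye(4))
assert np.allclose(Xi(0.3) @ Xi(0.5), Xi(0.8))
# unit determinant: rho_+^2 - rho_-^2 = 1 and r_+^2 + r_-^2 = 1
assert abs(np.linalg.det(g_rho(1.7)) - 1) < 1e-10
assert abs(np.linalg.det(g_r(0.6)) - 1) < 1e-10
```

The unit determinants are consistent with $\check\alg{g}_{\rho}$ and $\check\alg{g}_{r}$ being genuine group elements, as required of a coset representative.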
An alternative choice for the bosonic coset element is the one used in~\cite{Arutyunov:2009ga}. In this case the bosonic coset element is defined as \begin{equation}\label{eq:eucl-bos-coset-el} \alg{g_b}'=\Lambda(t,\phi)\cdot \alg{g}(\gen{X})\,, \end{equation} where \begin{equation} \Lambda(t,\phi)=\small{\left( \begin{array}{cc} \Lambda(t) & 0 \\ 0 & \Lambda(\phi) \end{array} \right)}\,, \end{equation} with the blocks $\Lambda(t)$ and $\Lambda(\phi)$ as defined in~\eqref{eq:Lambda}, and \begin{equation} \alg{g}(\gen{X})=\small{\left( \begin{array}{cc} \frac{1}{\sqrt{1-\frac{z^2}{4}}}\left( \gen{1}_4 -\frac{1}{2} z_i \gamma_i \right) & 0 \\ 0 & \frac{1}{\sqrt{1+\frac{y^2}{4}}}\left( \gen{1}_4 -\frac{i}{2} y_i \gamma_i \right) \end{array} \right)}\,. \end{equation} The gamma matrices $\gamma_i$ are given in~\eqref{eq:choice-5d-gamma}. The coordinates $t,z_i$ and $\phi,y_i$ are the ones introduced in~\eqref{eq:embed-eucl-coord-AdS5} and~\eqref{eq:embed-eucl-coord-S5} to parameterise AdS$_5$ and S$^5$. The difference from~\cite{Arutyunov:2009ga} is that we have changed the sign in front of $z_i,y_i$. In this way the two coset elements are related by a local transformation \begin{equation} \alg{g_b}'=\alg{g_b}\cdot \alg{h}\,,\qquad \alg{h}\in \alg{so}(4,1)\oplus\alg{so}(5)\,, \end{equation} showing that the Lagrangian is the same and that the two descriptions are equivalent. Alternatively, one could shift the angles $\psi_i\to\psi_i+\pi,\ \phi_i\to\phi_i+\pi$ when relating the two sets of coordinates. We recall that with respect to~\cite{Arutyunov:2009ga} we have also exchanged what we call $\gamma_1$ and $\gamma_4$.
\subsection{The operator $\mathcal{O}$ at bosonic order and its inverse}\label{app:bosonic-op-and-inverse} An important property of the coset representative \eqref{basiccoset} is that the $R_\alg{g_b}$ operator defined in~\eqref{eq:defin-Rg-bos} is independent of the angles $\psi_k$ and $\phi_k$: \begin{equation} R_{\alg{g}_{\alg{b}}}(M) = R_{\check\alg{g}}(M) \,,\quad \check\alg{g}=\small{\left( \begin{array}{cc} \check\alg{g}_{\alg{a}} & 0 \\ 0 & \check\alg{g}_{\alg{s}} \end{array} \right)}\, ,\quad \check\alg{g}_{\alg{a}}=\Xi(\zeta)\check\alg{g}_{\rho}(\rho)\, ,\quad \check\alg{g}_{\alg{s}}=\Xi(\xi)\check\alg{g}_{\rm r}(r)\, . \end{equation} We collect the formulas for the action of $1/(1-\eta R_\alg{g_b} \circ d)$---where $d$ is given in~\eqref{eq:defin-op-d-dtilde}---on the projections $M^{(2)}$ and $M_{\rm odd}=M^{(1)}+M^{(3)}$ of an element $M$ of $\alg{su}(2,2|4)$. The projections induced by the $\mathbb{Z}_4$ grading are defined in~\eqref{eq:def-proj-Z4-grad}. The action on odd elements turns out to be $\check\alg{g}$-independent: \begin{equation} {1\over 1-\eta R_{\check\alg{g}}\circ d}(M_{\rm odd}) ={\mathbbm{1} + \eta R\circ d\over 1 - \eta^2}(M_{\rm odd}) \,. \end{equation} The action on $M^{(2)}$ factorises into a sum of actions on $M_{\alg{a}}$ and $M_{\alg{s}}$, where $M_{\alg{a}}$ is the upper left $4\times4$ block of $M^{(2)}$, and $M_{\alg{s}}$ is the lower right $4\times4$ block of $M^{(2)}$.
One can check that the inverse operator is given by \begin{equation} {1\over 1-\eta R_{\check\alg{g}}\circ d}(M_{\alg{a}}) =\Big(\mathbbm{1}+ {\eta^3f_{31}^{\alg{a}}+\eta^4f_{42}^{\alg{a}}+\eta^5h_{53}^{\alg{a}}\over (1-c_{\alg{a}}\eta^2)(1-d_{\alg{a}}\eta^2)} + {\eta R_{\check\alg{g}}\circ d + \eta^2 R_{\check\alg{g}}\circ d\circ R_{\check\alg{g}}\circ d\over 1-c_{\alg{a}}\eta^2}\Big)\big(M_{\alg{a}}\big)\,, \end{equation} \begin{equation} {1\over 1-\eta R_{\check\alg{g}}\circ d}(M_{\alg{s}}) =\Big(\mathbbm{1}+ {\eta^3f_{31}^{\alg{s}}+\eta^4f_{42}^{\alg{s}}+\eta^5h_{53}^{\alg{s}}\over (1-c_{\alg{s}}\eta^2)(1-d_{\alg{s}}\eta^2)} + {\eta R_{\check\alg{g}}\circ d + \eta^2 R_{\check\alg{g}}\circ d\circ R_{\check\alg{g}}\circ d\over 1-c_{\alg{s}}\eta^2}\Big)\big(M_{\alg{s}}\big)\,. \end{equation} Here \begin{equation} c_{\alg{a}}= \frac{4\rho^2}{\left(1-\eta ^2\right)^2}\,,\quad d_{\alg{a}}=-\frac{4\rho^4 \sin^2\zeta }{\left(1-\eta^2\right)^2} \,,\quad c_{\alg{s}}= -\frac{4r^2}{\left(1-\eta ^2\right)^2}\,,\quad d_{\alg{s}}=-\frac{4r^4 \sin^2\xi }{\left(1-\eta^2\right)^2} \,, \end{equation} \begin{equation} f_{k,k-2}^{\alg{a}}(M_{\alg{a}}) =\Big(\big(R_{\check\alg{g}}\circ d\big)^k - c_{\alg{a}}\big(R_{\check\alg{g}}\circ d\big)^{k-2}\Big)(M_{\alg{a}}) \,, \end{equation} \begin{equation} f_{k,k-2}^{\alg{s}}(M_{\alg{s}}) =\Big(\big(R_{\check\alg{g}}\circ d\big)^k - c_{\alg{s}}\big(R_{\check\alg{g}}\circ d\big)^{k-2}\Big)(M_{\alg{s}}) \,, \end{equation} $d_{\alg{a}}$ and $d_{\alg{s}}$ appear in the identities \begin{equation} f_{k+2,k}^{\alg{a}} = d_{\alg{a}} f_{k,k-2}^{\alg{a}} \,,\quad f_{k+2,k}^{\alg{s}} = d_{\alg{s}} f_{k,k-2}^{\alg{s}} \,,\quad k = 4,5,\ldots\,, \end{equation} and $h_{53}^{\alg{a}}$ and $h_{53}^{\alg{s}}$ appear in \begin{equation} h_{53}^{\alg{a}}=f_{53}^{\alg{a}} - d_{\alg{a}} f_{31}^{\alg{a}} \,,\quad h_{53}^{\alg{s}}=f_{53}^{\alg{s}} - d_{\alg{s}} f_{31}^{\alg{s}} \,. 
\end{equation} \subsection{On the bosonic Lagrangian}\label{app:bos-lagr-eta-def} In Section~\ref{sec:def-bos-model} we have computed the bosonic Lagrangian using the bosonic coset element~\eqref{basiccoset}. It is also possible to compute the deformed Lagrangian by choosing the coset representative~\eqref{eq:eucl-bos-coset-el}. Accordingly, for the metric pieces we obtain \begin{eqnarray} \label{adsL} \mathscr L_{\alg{a}}^{G}&=&-\frac{g}{2}(1+\varkappa^2)^{1\ov2}\gamma^{\alpha\beta}\Big[-G_{tt}\partial_{\alpha}t\partial_{\beta}t+G_{zz}\partial_{\alpha}z_i\partial_{\beta}z_i+G_{\alg{a}}^{(1)}z_i\partial_{\alpha}z_iz_j\partial_{\beta}z_j+\nonumber \\ &&~~~~~~~~~~~~~~~~~~~~~~~~~~ +G_{\alg{a}}^{(2)}(z_3\partial_{\alpha}z_4-z_4\partial_{\alpha}z_3)(z_3\partial_{\beta}z_4-z_4\partial_{\beta}z_3) \Big]\, , \\ \label{spL} \mathscr L_{\alg{s}}^{G}&=&-\frac{g}{2}(1+\varkappa^2)^{1\ov2}\gamma^{\alpha\beta}\Big[G_{\phi\phi}\partial_{\alpha}\phi\partial_{\beta}\phi+G_{yy}\partial_{\alpha}y_i\partial_{\beta}y_i+G_{\alg{s}}^{(1)}y_i\partial_{\alpha}y_iy_j\partial_{\beta}y_j+\nonumber \\ &&~~~~~~~~~~~~~~~~~~~~~~~~~~ +G_{\alg{s}}^{(2)}(y_3\partial_{\alpha}y_4-y_4\partial_{\alpha}y_3)(y_3\partial_{\beta}y_4-y_4\partial_{\beta}y_3) \Big]\, . \end{eqnarray} Here the coordinates $z_i$, $i=1,\ldots,4$, and $t$ parametrize the deformed AdS space, while the coordinates $y_i$, $i=1,\ldots,4$, and the angle $\phi$ parametrize the deformed five-sphere. The components of the deformed AdS metric in (\ref{adsL}) are\footnote{Note that the coordinates $y_i$ and $z_i$ are different from the ones appearing in the quartic Lagrangian \eqref{Lquart} because the nondiagonal components of the deformed metric do not vanish. 
} \begin{eqnarray} \begin{aligned} \label{Gads} \hspace{-0.5cm} G_{tt}&=\frac{(1+z^2/4)^2}{(1-z^2/4)^2-\varkappa^2 z^2}\, , ~~~~~~~~~~~~~~~~~~~ G_{zz}=\frac{(1-z^2/4)^2}{(1-z^2/4)^4+\varkappa^2 z^2(z_3^2+z_4^2)} \, , \\ G_{\alg{a}}^{(1)}&=\varkappa^2G_{tt}G_{zz}\frac{z_3^2+z_4^2+(1-z^2/4)^2}{(1-z^2/4)^2(1+z^2/4)^2}\, , ~~~~~~ G_{\alg{a}}^{(2)}=\varkappa^2 G_{zz}\frac{z^2}{(1-z^2/4)^4} \, . \end{aligned} \end{eqnarray} For the sphere part the corresponding expressions read \begin{eqnarray} \begin{aligned} \label{Gsphere} \hspace{-0.5cm} G_{\phi\phi}&=\frac{(1-y^2/4)^2}{(1+y^2/4)^2+\varkappa^2 y^2}\, , ~~~~~~~~~~~~~~~~~~~ G_{yy}=\frac{(1+y^2/4)^2}{(1+y^2/4)^4+\varkappa^2 y^2(y_3^2+y_4^2)} \, , \\ G_{\alg{s}}^{(1)}&=\varkappa^2G_{\phi\phi}G_{yy}\frac{y_3^2+y_4^2-(1+y^2/4)^2}{(1-y^2/4)^2(1+y^2/4)^2}\, , ~~~~~~ G_{\alg{s}}^{(2)}=\varkappa^2 G_{yy}\frac{y^2}{(1+y^2/4)^4} \, . \end{aligned} \end{eqnarray} Obviously, in the limit $\varkappa\to 0$ the components $G_{\alg{a}}^{(i)}$ and $G_{\alg{s}}^{(i)}$ vanish, and one obtains the metric of {AdS$_5\times$S$^5$}, {\it cf.}\ formulae (1.145) and (1.146) in \cite{Arutyunov:2009ga}. Finally, for the Wess-Zumino terms the results (up to total derivative terms which do not contribute to the action) are \begin{eqnarray} \begin{aligned} \label{WZ} \mathscr{L}_{\alg{a}}^{B}&=2g\varkappa(1+\varkappa^2)^{1\ov2}\, \epsilon^{\alpha\beta}\frac{(z_3^2+z_4^2)\partial_{\alpha}z_1\partial_{\beta}z_2}{(1-z^2/4)^4+\varkappa^2 z^2(z_3^2+z_4^2)}\,, \\ \mathscr{L}_{\alg{s}}^{B}&=-2g\varkappa(1+\varkappa^2)^{1\ov2}\, \epsilon^{\alpha\beta}\frac{(y_3^2+y_4^2)\partial_{\alpha}y_1\partial_{\beta}y_2}{(1+y^2/4)^4+\varkappa^2 y^2(y_3^2+y_4^2)}\, .
\end{aligned} \end{eqnarray} \medskip To find the quartic Lagrangian used for computing the bosonic part of the four-particle world-sheet scattering matrix, we first expand the Lagrangian \eqref{eq:bos-lagr-eta-def-Pol} up to quartic order in $\rho$, $r$ and their derivatives \begin{equation} \begin{aligned} &\mathscr L_{\alg{a}} =-{g\ov2}(1+\varkappa^2)^{1\ov2}\Big( \gamma^{\alpha\beta}\Big[-\partial_\alpha t\partial_\beta t (1+(1+\varkappa ^2) \rho ^2 (1+\varkappa ^2 \rho ^2))+ \partial_\alpha \rho\partial_\beta \rho (1+(\varkappa ^2-1) \rho ^2) \\ &+ \partial_\alpha \psi_1\partial_\beta\psi_1\rho ^2 \cos ^2\zeta+\partial_\alpha \psi_2\partial_\beta\psi_2 \rho ^2 \sin ^2\zeta+\partial_\alpha \zeta\partial_\beta\zeta \rho ^2\Big]- \varkappa \epsilon^{\alpha\beta} \rho ^4 \sin 2 \zeta\partial_\alpha\psi_1\partial_\beta\zeta\Big)\,, \\ \\ &\mathscr L_{\alg{s}} =-{g\ov2}(1+\varkappa^2)^{1\ov2}\Big( \gamma^{\alpha\beta}\Big[\partial_\alpha \phi\partial_\beta \phi (1-(1+\varkappa ^2) r^2 (1-\varkappa ^2 r^2)) +\partial_\alpha r\partial_\beta r (1+(1-\varkappa^2)r^2) \\ &+ \partial_\alpha \phi_1\partial_\beta \phi_1 r^2 \cos ^2\xi +\partial_\alpha \phi_2\partial_\beta \phi_2 r^2 \sin^2\xi +\partial_\alpha \xi\partial_\beta \xi r^2 \Big]+ \varkappa \epsilon^{\alpha\beta} r^4 \sin 2 \xi \partial_\alpha\phi_1\partial_\beta\xi\Big)\, . 
\end{aligned} \end{equation} Further, we make a shift \begin{eqnarray} \label{shift} \rho\to \rho-\frac{\varkappa^2}{4}\rho^3\, , ~~~~~r\to r+\frac{\varkappa^2}{4}r^3\,\end{eqnarray} so that the quartic action acquires the form \begin{eqnarray} \mathscr L_{\alg{a}} &=&-{g\ov2}(1+\varkappa^2)^{1\ov2}\, \gamma^{\alpha\beta}\times\\ &&\Big[-\partial_\alpha t\partial_\beta t \Big(1+(1+\varkappa ^2)\rho^2 +\tfrac{1}{2}\varkappa^2 (1+\varkappa ^2) \rho^4\Big)+\partial_\alpha \rho\partial_\beta \rho \Big(1- \rho ^2-\tfrac{\varkappa^2}{2}\rho^4\Big)+ \nonumber\\ && +\Big(\rho ^2-\tfrac{\varkappa^2}{2}\rho^4\Big) \Big(\partial_\alpha \psi_1\partial_\beta\psi_1\cos ^2\zeta+\partial_\alpha \psi_2\partial_\beta\psi_2 \sin ^2\zeta+\partial_\alpha \zeta\partial_\beta\zeta \Big)\Big] \nonumber \\ &&+{g\ov2} \varkappa (1+\varkappa^2)^{1\ov2}\epsilon^{\alpha\beta} \rho ^4 \sin 2 \zeta\partial_\alpha\psi_1\partial_\beta\zeta\,, \nonumber \end{eqnarray} \begin{eqnarray} \mathscr L_{\alg{s}} &=&-{g\ov2}(1+\varkappa^2)^{1\ov2}\, \gamma^{\alpha\beta}\times\\ &&\Big[\partial_\alpha \phi\partial_\beta \phi \Big(1-(1+\varkappa ^2) r^2+\tfrac{1}{2}\varkappa^2(1+\varkappa^2)r^4\Big) +\partial_\alpha r\partial_\beta r \Big(1+r^2+\tfrac{\varkappa^2}{2}r^4\Big) + \nonumber \\ &&+ \Big(r^2+ \tfrac{\varkappa^2}{2}r^4\Big)\Big(\partial_\alpha \phi_1\partial_\beta \phi_1 \cos ^2\xi +\partial_\alpha \phi_2\partial_\beta \phi_2 \sin^2\xi +\partial_\alpha \xi\partial_\beta \xi \Big)\Big] \nonumber \\ &&-{g\ov2} \varkappa(1+\varkappa^2)^{1\ov2} \epsilon^{\alpha\beta} r^4 \sin 2 \xi \partial_\alpha\phi_1\partial_\beta\xi\, .\nonumber\end{eqnarray} Changing the spherical coordinates to $(z_i,y_i)$ and expanding the resulting action up to the quartic order in $z$ and $y$ fields we get the quartic Lagrangian \eqref{Lquart}. Notice that the shifts of $\rho$ and $r$ in \eqref{shift} were chosen so that the deformed metric expanded up to quadratic order in the fields would be diagonal. 
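As a quick symbolic cross-check of the metric components \eqref{Gads} quoted earlier in this appendix, one can verify that at $\varkappa=0$ the components $G^{(i)}_{\alg{a}}$ vanish and the undeformed AdS$_5$ metric is recovered; a sketch with sympy (the variable names are ours, and \eqref{Gsphere} can be treated in exactly the same way):

```python
import sympy as sp

# deformed AdS metric components of eq. (Gads); zsq stands for z^2 = z_i z_i
z1, z2, z3, z4, kap = sp.symbols('z1 z2 z3 z4 varkappa', real=True)
zsq = z1**2 + z2**2 + z3**2 + z4**2

G_tt = (1 + zsq/4)**2 / ((1 - zsq/4)**2 - kap**2*zsq)
G_zz = (1 - zsq/4)**2 / ((1 - zsq/4)**4 + kap**2*zsq*(z3**2 + z4**2))
G_a1 = kap**2 * G_tt * G_zz * (z3**2 + z4**2 + (1 - zsq/4)**2) \
       / ((1 - zsq/4)**2 * (1 + zsq/4)**2)
G_a2 = kap**2 * G_zz * zsq / (1 - zsq/4)**4

# at varkappa = 0 the deformation terms drop out and the standard
# AdS5 metric in these coordinates is recovered
assert sp.simplify(G_tt.subs(kap, 0) - ((1 + zsq/4)/(1 - zsq/4))**2) == 0
assert sp.simplify(G_zz.subs(kap, 0) - 1/(1 - zsq/4)**2) == 0
assert G_a1.subs(kap, 0) == 0 and G_a2.subs(kap, 0) == 0
```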
\newpage \section{The $\alg{psu}(2|2)_q$-invariant S-matrix} \label{app:matrixSmatrix} The S-matrix compatible with $\alg{psu}(2|2)_q$ symmetry \cite{Beisert:2008tw} has been studied in detail in~\cite{Hoare:2011wr,Arutyunov:2012zt,Arutyunov:2012ai,Hoare:2013ysa}. In this Appendix we recall its explicit form following the same notation as in~\cite{Arutyunov:2012zt}. Let $E_{ij}\equiv E_i^j$ stand for the standard matrix unities, $i,j=1,\ldots, 4$. We introduce the following definition \begin{equation} E_{kilj}=(-1)^{\epsilon(l)\epsilon(k)}E_{ki}\otimes E_{lj}\, , \end{equation} where $\epsilon(i)$ denotes the parity of the index, equal to $0$ for $i=1,2$ (bosons) and to $1$ for $i=3,4$ (fermions). The matrices $E_{kilj}$ are convenient to write down invariants with respect to the action of copies of $\alg{su}_q(2)\subset \alg{psu}_q(2|2)$. If we introduce \begin{eqnarray} \Lambda_1&=&E_{1111}+\frac{q}{2}E_{1122}+\frac{1}{2}(2-q^2)E_{1221}+\frac{1}{2}E_{2112}+\frac{q}{2}E_{2211}+E_{2222}\, ,\nonumber\\ \Lambda_2&=&\frac{1}{2}E_{1122}-\frac{q}{2}E_{1221}-\frac{1}{2q}E_{2112}+\frac{1}{2}E_{2211}\, , \nonumber \\ \Lambda_3&=&E_{3333}+\frac{q}{2}E_{3344}+\frac{1}{2}(2-q^2)E_{3443}+\frac{1}{2}E_{4334}+\frac{q}{2}E_{4433}+E_{4444} \, , \nonumber\\ \Lambda_4&=&\frac{1}{2}E_{3344}-\frac{q}{2}E_{3443}-\frac{1}{2q}E_{4334}+\frac{1}{2}E_{4433}\, , \nonumber\\ \Lambda_5&=&E_{1133}+E_{1144}+E_{2233}+E_{2244}\, ,\\ \Lambda_6&=&E_{3311}+E_{3322}+E_{4411}+E_{4422}\, , \nonumber\\ \Lambda_7&=&E_{1324}-qE_{1423}-\frac{1}{q}E_{2314}+E_{2413}\, , \nonumber\\ \Lambda_8&=&E_{3142}-qE_{3214}-\frac{1}{q}E_{4132}+E_{4231}\, , \nonumber\\ \Lambda_9&=&E_{1331}+E_{1441}+E_{2332}+E_{2442}\, , \nonumber\\ \Lambda_{10}&=&E_{3113}+E_{3223}+E_{4114}+E_{4224}\, , \nonumber \end{eqnarray} the S-matrix of the $q$-deformed model is given by \begin{equation}\label{Sqmat} S_{12}(p_1,p_2)=\sum_{k=1}^{10}a_k(p_1,p_2)\Lambda_k\, , \end{equation} where the coefficients are \begin{eqnarray} a_1&=&1\, , 
\nonumber \\ a_2&=&-q+\frac{2}{q}\frac{x^-_1(1-x^-_2x^+_1)(x^+_1-x^+_2)}{x^+_1(1-x^-_1x^-_2)(x^-_1-x^+_2)}\nonumber \\ a_3&=&\frac{U_2V_2}{U_1V_1}\frac{x^+_1-x^-_2}{x^-_1-x^+_2}\nonumber \\ a_4&=&-q\frac{U_2V_2}{U_1V_1}\frac{x^+_1-x^-_2}{x^-_1-x^+_2}+\frac{2}{q}\frac{U_2V_2}{U_1V_1}\frac{x^-_2(x^+_1-x^+_2)(1-x^-_1x^+_2)}{x^+_2(x^-_1-x^+_2)(1-x^-_1x^-_2)}\nonumber \\ a_5&=&\frac{x^+_1-x^+_2}{\sqrt{q}\, U_1V_1(x^-_1-x^+_2)} \nonumber \\ a_6&=&\frac{\sqrt{q}\, U_2V_2(x^-_1-x^-_2)}{x^-_1-x^+_2} \\ a_7&=&\frac{ig}{2}\frac{(x^+_1-x^-_1)(x^+_1-x^+_2)(x^+_2-x^-_2)}{\sqrt{q}\, U_1V_1(x^-_1-x^+_2)(1-x_1^- x_2^-)\gamma_1\gamma_2} \nonumber \\ a_8&=&\frac{2i}{g}\frac{U_2V_2\, x^-_1x^-_2(x^+_1-x^+_2)\gamma_1\gamma_2}{q^{\frac{3}{2}} x^+_1x^+_2(x^-_1-x^+_2)(x^-_1x^-_2-1)}\nonumber \\ a_9&=&\frac{(x^-_1-x^+_1)\gamma_2}{(x^-_1-x^+_2)\gamma_1} \nonumber \\ \nonumber a_{10}&=&\frac{U_2V_2 (x^-_2-x^+_2)\gamma_1}{U_1V_1(x^-_1-x^+_2)\gamma_2}\, . \end{eqnarray} Here the basic variables $x^{\pm}$ parametrizing a fundamental representation of the centrally extended superalgebra $\alg{psu}_q(2|2)$ satisfy the following constraint \cite{Beisert:2008tw} \begin{eqnarray} \label{fc} \frac{1}{q}\left(x^++\frac{1}{x^+}\right)-q\left(x^-+\frac{1}{x^-}\right)=\left(q-\frac{1}{q}\right)\left(\xi+\frac{1}{\xi}\right)\, , \end{eqnarray} where the parameter $\xi$ is related to the coupling constant $g$ as \begin{eqnarray} \xi=-\frac{i}{2}\frac{g(q-q^{-1})}{\sqrt{1-\frac{g^2}{4}(q-q^{-1})^2}}\, . \end{eqnarray} The (squares of) central charges are given by \begin{eqnarray} U_i^2=\frac{1}{q}\frac{x^+_i+\xi}{x^-_i+\xi}=e^{ip_i}\, , ~~~~V^2_i=q\frac{x^+_i}{x^-_i}\frac{x^-_i+\xi}{x^+_i+\xi} \, , \end{eqnarray} and the parameters $\gamma_i$ are \begin{eqnarray} \gamma_i=q^{\frac{1}{4}}\sqrt{\frac{ig}{2}(x^-_i-x^+_i)U_iV_i}\, .
\end{eqnarray} The $q$-deformed dispersion relation for the energy ${\cal E}$ takes the form \begin{equation}\label{qdisp} \Bigg(1-\frac{g^2}{4}(q-q^{-1})^2 \Bigg)\Bigg(\frac{q^{{\cal E}/2}-q^{-{\cal E}/2}}{q-1/q}\Bigg)^2-g^2\sin^2\frac{p}{2}=\Bigg(\frac{q^{1/2}-q^{-1/2}}{q-1/q}\Bigg)^2\, . \end{equation} Finally, we point out that in the $q$-deformed dressing phase the variable $u$ appears, which is given by \begin{eqnarray} u(x)=\frac{1}{\upsilon}\log\Bigg[-\frac{x+\tfrac{1}{x}+\xi+\tfrac{1}{\xi}}{\xi-\tfrac{1}{\xi}}\Bigg]\, . \end{eqnarray} The logarithm of the $q$-deformed Gamma function admits an integral representation valid in the strip $-1<\mathrm{Re}(x)<k$ (with $k>1$)~\cite{Hoare:2011wr} \begin{equation}\label{lnGovG} \begin{aligned} \log\Gamma_{q^2}(1+x)&=\frac{i\pi x(x-1)}{2k}\\ &\qquad+\int_0^\infty\frac{dt}t\,\frac{e^{-tx}-e^{(x-k+1)t} -x(e^{-t}-1)(1+e^{(2-k)t})+e^{(1-k)t}-1}{(e^{t}-1)(1-e^{-kt})}\,, \end{aligned} \end{equation} where $q=e^{i\pi/k}$. Sending $k\to \infty$ one recovers the integral representation for the conventional Gamma function. Writing $k=-i\pi g/\nu$, keeping $\nu$ and $x$ fixed and sending $g\to\infty$, we find that at leading order \begin{eqnarray}\label{qGamma1} \log\frac{\Gamma_{q^2}(1+g x)}{\Gamma_{q^2}(1-gx)} &\approx& g\Big(-2 x+2 x \log (g)+x \big(\log (-x)+ \log (x)\big)\Big)\\\nonumber &+& g{2\pi\over i \nu} \big(\psi ^{(-2)}(1-{i\nu x\over\pi} )-\psi^{(-2)}(1+ {i\nu x\over\pi} )\big)\,, \end{eqnarray} where $\psi ^{(-2)}\left(z\right)$ is given by \begin{equation} \psi ^{(-2)}\left(z\right) = \int_0^z\, dt\, \log\Gamma\left(t\right)\,. \end{equation} A derivation of this formula may be found in Appendix C of~\cite{Arutyunov:2013ega}. \chapter*{Acknowledgements} \addcontentsline{toc}{chapter}{Acknowledgements} Many people deserve my gratitude for helping me to carry out my PhD in Utrecht.
Here I want to thank in particular the collaborators I have had during those years: Gleb Arutyunov, Sergey Frolov, Olof Ohlsson Sax, Alessandro Sfondrini, Bogdan Stefa\'nski and Alessandro Torrielli. Special thanks go to my supervisor Gleb Arutyunov, who gave me the chance to be one of his students and offered me his constant guidance and support. I also wish to thank Arkady Tseytlin and Kostya Zarembo for granting me the opportunity of continuing to do research. \vspace{12pt} The author acknowledges funding from the ERC Advanced grant No.~290456. This review is based on the author's PhD thesis, written at the Institute for Theoretical Physics of Utrecht University and defended on 7th September 2015. The work was supported by the Netherlands Organization for Scientific Research (NWO) under the VICI grant 680-47-602. The work was also part of the ERC Advanced grant research programme No.~246974, ``Supersymmetry: a window to non-perturbative physics'', and of the D-ITP consortium, a program of the NWO that is funded by the Dutch Ministry of Education, Culture and Science (OCW). \vspace{12pt} A version of this PhD thesis can also be found in the repository of Utrecht University at \url{http://dspace.library.uu.nl/handle/1874/323083}. \begin{flushright} $\Box$ \end{flushright} \chapter{Introduction}\label{ch:intro} Together with great achievements, general relativity and quantum field theory come with unresolved problems. Among others, these include the quantisation of gravity and a feasible description of strongly-coupled gauge theories. Recent developments have proved that string theory is a useful framework to investigate open issues in theoretical physics. One sign of this success may be seen in the discovery of dualities, through which we are able to relate seemingly different concepts. In the most celebrated ``holographic duality'', gravity and gauge theories turn out to be two sides of the same coin.
A prominent role in the study of this correspondence has been played by Integrability. The term ``Integrability'' is very broad, and actually collects many different concepts---from classical integrable models to factorisation of scattering in two-dimensional quantum field theories, et cetera. For the moment we point out that methods borrowed from Integrability allow one to obtain exact results that go beyond the usual perturbative analysis, thus giving stringent tests for holography. Interestingly, Integrability provides a new language to describe both the string and the gauge theory forming the holographic pair. In this introductory chapter we review some of these achievements and explain how this thesis fits in this context. \bigskip Maldacena's proposal~\cite{Maldacena:1997re}, now known as the AdS/CFT correspondence, is a concrete version of the holographic principle anticipated by 't~Hooft in 1974~\cite{'tHooft:1973jz}. AdS/CFT conjectures the equivalence between a gravity theory living in an (asymptotically) Anti-de Sitter (AdS) spacetime in $d+1$ dimensions, and a gauge theory---or more precisely a conformal field theory (CFT)---in flat $d$-dimensional Minkowski spacetime. Often one refers to AdS as the \emph{bulk} and interprets the gauge theory as living on the \emph{boundary} of this spacetime. The best-understood example of this conjecture is the pair AdS$_5$/CFT$_4$. On the one side we have string theory on the ten-dimensional background {AdS$_5\times$S$^5$}, the product of a five-dimensional Anti-de Sitter space and a five-dimensional sphere. On the other side we find $\mathcal{N}=4$ super Yang-Mills (SYM), the maximally supersymmetric gauge theory in four dimensions. Although the equivalence is believed to hold precisely at any point in the parameter space, it becomes more testable in the \emph{planar} or \emph{large-$N$} limit.
For the gauge theory, $N$ is the number of colors of the gauge group $SU(N)$, and as already pointed out by 't~Hooft~\cite{'tHooft:1973jz} sending $N\to \infty$ is a way to simplify the problem while keeping some of its non-trivial features. In some sense it is an approximation along a direction different from the usual perturbation theory, the latter being an expansion in the number of loops. At large $N$ an indefinite number of loops remains, but at leading order only \emph{planar} graphs survive. These are the Feynman graphs that can be drawn on genus-zero surfaces like the sphere, as opposed to the ones that can be drawn only on surfaces with handles. To be more precise, when $N\to \infty$ we have to send the Yang-Mills coupling $g_{\text{YM}}$ to zero in such a way that the effective coupling $\lambda=g_{\text{YM}}^2 N$ remains finite. After this limit is taken, one may still implement the usual perturbation theory by performing an expansion at small values of $\lambda$---the point $\lambda=0$ corresponding to the free theory. On the string side, the planar limit corresponds to considering free, \textit{i.e.}\xspace non-interacting, strings. More precisely, for $N$ large the string coupling constant $g_s$ is related to the 't~Hooft coupling $\lambda$ as $g_s=\lambda/(4 \pi N)$ and it tends to zero, while the tension $g=\sqrt{\lambda}/(2 \pi)$ remains finite. These relations show one of the most exciting features of the AdS/CFT correspondence, namely that it is a weak/strong duality. In fact, the regime in which the string theory is more tractable is not for small values of $\lambda$, but rather for $\lambda\gg 1$. This is when the tension is large and the string moves like a rigid object. Therefore, if we decide to make our life easier by considering the ``simple'' regime for the string, this actually corresponds to the---usually inaccessible---gauge theory at strong coupling, and vice versa!
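As a quick numerical illustration of the planar-limit relations just quoted (a minimal sketch; the function name is ours and carries no special meaning), one can check that at fixed 't~Hooft coupling the string coupling vanishes as $N\to\infty$ while the tension stays finite:

```python
import math

def planar_couplings(lam, N):
    """String coupling g_s and tension g from the 't Hooft coupling lam = g_YM^2 N."""
    g_s = lam / (4 * math.pi * N)       # string coupling: vanishes as N -> infinity
    g = math.sqrt(lam) / (2 * math.pi)  # string tension: finite in the planar limit
    return g_s, g

# At fixed lam, increasing N suppresses g_s while g is unchanged
for N in (10, 100, 1000):
    g_s, g = planar_couplings(lam=100.0, N=N)
    print(f"N={N:5d}  g_s={g_s:.5f}  g={g:.5f}")
```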
\bigskip The AdS/CFT correspondence is made quantitatively more precise by saying how to match observables on the two sides of the duality. In particular, conformal dimensions of operators correspond to the energies of the dual string configurations~\cite{Witten:1998qj,Gubser:1998bc}. Integrability for gauge and string theory has the power of computing \emph{exactly} the dependence of these observables on the effective coupling $\lambda$. We are then able to go beyond a perturbative expansion at weak or at strong coupling, and we can actually interpolate the spectrum between the two sides for any finite value of $\lambda$. The achievement is not just computational, but also conceptual. In fact, with these methods we find a unified description of both the gauge and the string theory in a single \emph{quantum integrable model} in $1+1$ dimensions. From the point of view of the gauge theory, the interpretation is that of a spin-chain with long-range interactions; the different flavors of the field content of $\mathcal{N}=4$ SYM correspond in fact to the directions of the spins. For the string, the quantum integrable model is the one arising on the worldsheet after gauge-fixing, where the excitations now correspond to the bosonic and fermionic coordinates that parameterise the spacetime in which the superstring is living. Integrability allows one to compute the all-loop S-matrix governing the scattering of the excitations, on the spin-chain or on the worldsheet. The remarkable fact is that both sides of the AdS/CFT correspondence lead to the same result. \paragraph{Gauge theory and Integrability} The first hint about the presence of Integrability in the large-$N$ limit appeared on the gauge theory side\footnote{Integrability in the context of four-dimensional gauge theories appeared already in~\cite{Lipatov:1993yb,Faddeev:1994zg}, where it was shown that it manifests itself in some specific regimes of Quantum Chromodynamics (QCD).}. 
In their seminal paper~\cite{Minahan:2002ve}, Minahan and Zarembo showed that the problem of finding the spectrum of the gauge theory can be rephrased in terms of an \emph{integrable spin-chain}. Let us say a few words about this. Because of conformal symmetry, interesting observables to consider are the conformal dimensions of gauge-invariant operators. These are formed by taking traces of products of fields, where the trace is needed to ensure gauge invariance\footnote{In the large-$N$ limit it is enough to consider single-trace operators, as at leading order the conformal dimension of multi-trace operators is additive.}. The one-dimensional object that we get by taking this product already suggests how a spin-chain comes into the game. The various fields here play the role of the spins pointing in different directions in the space of flavors. Because of the cyclicity of the trace, we can anticipate that what we consider are \emph{periodic} spin-chains. The analogy becomes more precise when one computes loop corrections to the dilatation operator. For simplicity let us focus on an ``$\alg{su}(2)$ sector'' with just two scalar fields of $\mathcal{N}=4$ SYM, that we will interpret as ``spin up'' and ``spin down''. One finds that at one loop the operator measuring the anomalous dimension mixes operators that differ by permutations of fields sitting at neighbouring sites. Its expression actually matches that of the Hamiltonian of the Heisenberg spin-chain, solved by Bethe with a method that is commonly called the \emph{Bethe Ansatz}~\cite{Bethe:1931hc}. It is then really nice to discover that we can compute anomalous dimensions by using the same diagonalisation techniques. The story is not restricted to this $\alg{su}(2)$ sector, as the complete dilatation operator at one loop still has the form of an integrable Hamiltonian~\cite{Beisert:2003jj}. 
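The one-loop statement can be made concrete with a short numerical sketch. In the $\alg{su}(2)$ sector the one-loop dilatation operator is, in standard conventions, proportional to $\sum_l(1-P_{l,l+1})$ with $P$ the permutation of neighbouring spins; the code below (function names are ours) builds this Hamiltonian for a short periodic chain and diagonalises it by brute force:

```python
import numpy as np

def heisenberg(L):
    """Periodic Heisenberg Hamiltonian H = sum_l (1 - P_{l,l+1}) on L spin-1/2 sites."""
    dim = 2 ** L
    H = np.zeros((dim, dim))
    for s in range(dim):  # loop over computational basis states (bit strings)
        for l in range(L):
            m = (l + 1) % L  # periodic chain: site L is identified with site 1
            if ((s >> l) & 1) == ((s >> m) & 1):
                continue  # aligned spins: (1 - P) annihilates this bond
            H[s, s] += 1.0               # the "1" term
            t = s ^ (1 << l) ^ (1 << m)  # "P" swaps the two anti-aligned spins
            H[t, s] -= 1.0               # the "-P" term
    return H

evals = np.linalg.eigvalsh(heisenberg(4))
# the ferromagnetic vacuum (all spins aligned) has eigenvalue exactly 0
print(f"lowest eigenvalue: {min(evals):.2e}")
```

For $L=4$ the spectrum also contains the eigenvalue $6$, which upon multiplication by the one-loop normalisation $\lambda/8\pi^2$ reproduces the famous anomalous dimension $3\lambda/4\pi^2$ of the Konishi operator.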
Also higher loops can be accounted for~\cite{Beisert:2003tq,Beisert:2003jj}, and one finds that at higher orders interactions become more and more non-local, meaning that not only nearest-neighbour sites are coupled, but also next-to-nearest neighbour, et cetera. We refer to~\cite{Rej:2010ju} for a review. \bigskip In the Heisenberg spin-chain that arises at one-loop, an S-matrix can be defined to describe the scattering of the magnons. It turns out that the key object on which one should focus to obtain all-loop results is not the Hamiltonian---which becomes more and more complicated at higher orders---but rather the S-matrix. In fact, this can be fixed even at finite values of the effective coupling by imposing compatibility with the symmetries~\cite{Beisert:2005tm}, which for $\mathcal{N}=4$ SYM are given by two copies of a central extension of the Lie superalgebra $\alg{su}(2|2)$. The possibility of bootstrapping the S-matrix is a consequence, on the one hand, of the presence of these powerful symmetries, and on the other hand of the knowledge of the exact dependence of the central charges on the momenta of the excitations and on the effective coupling~\cite{Beisert:2005tm}. It is clear that this bootstrapping method relies on the assumption that Integrability extends to all loop orders; let us review here some important points and refer to Chapter~\ref{ch:S-matrix-T4} for a more detailed discussion. The S-matrix that is considered dictates just $2\to 2$ scattering. This is enough for integrable models, since the number of particles is conserved and generic $N\to N$ scatterings can be derived just from the knowledge of the two-body S-matrix. This is a crucial property of integrable theories that goes under the name of \emph{factorisation} of scattering, and we refer to~\cite{Dorey:1996gd} for a nice review. 
The idea is that thanks to the large amount of symmetry generators, one is allowed to move the wave packets of the excitations independently, to disentangle interactions in such a way that only two particles are involved every time an interaction takes place. If this is possible, then any generic process can be reinterpreted as a sequence of two-body interactions. In Figure~\ref{fig:Yang-Baxter-Intro} we show how this works in the example of the scattering of three particles. Notice that in this case we have two different possibilities to achieve factorisation. \begin{figure}[t] \centering \hspace{-0.75cm} \subfloat[\label{fig:YB-left-Intro}]{ \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] (p1in) at ($(-1.5cm,-2cm)+(0.5cm,0cm)$) {}; \node [box] (p2in) at (-0.5cm,-2cm) {}; \node [box] (p3in) at ($(+1.5cm,-2cm)+(1cm,0cm)$) {}; \node [box] (p1out) at ($(+1.5cm,2cm)+(0.5cm,0cm)$) {}; \node [box] (p2out) at (+0.5cm,2cm) {}; \node [box] (p3out) at ($(-1.5cm,2cm)+(1cm,0cm)$) {}; \draw (p1in) -- (p1out); \draw [dashed] (p2in) -- (p2out); \draw [dotted] (p3in) -- (p3out); \end{tikzpicture} } \raisebox{2cm}{$=$} \hspace{0cm} \subfloat[\label{fig:YB-central-Intro}]{ \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] (p1in) at (-1.5cm,-2cm) {$p_1$}; \node [box] (p2in) at (-0.5cm,-2cm) {$p_2$}; \node [box] (p3in) at (+1.5cm,-2cm) {$p_3$}; \node [box] (p1out) at (+1.5cm,2cm) {$p_1$}; \node [box] (p2out) at (+0.5cm,2cm) {$p_2$}; \node [box] (p3out) at (-1.5cm,2cm) {$p_3$}; \draw (p1in) -- (p1out); \draw [dashed] (p2in) -- (p2out); \draw [dotted] (p3in) -- (p3out); \end{tikzpicture} } \hspace{0.5cm} \raisebox{2cm}{$=$} \subfloat[\label{fig:YB-right-Intro}]{ \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] (p1in) at 
($(-1.5cm,-2cm)+(0cm,0cm)$) {}; \node [box] (p2in) at ($(-0.5cm,-2cm)+(0.5cm,0cm)$) {}; \node [box] (p3in) at ($(+1.5cm,-2cm)+(-0.5cm,0cm)$) {}; \node [box] (p1out) at ($(+1.5cm,2cm)+(0cm,0cm)$) {}; \node [box] (p2out) at ($(+0.5cm,2cm)+(0.5cm,0cm)$) {}; \node [box] (p3out) at ($(-1.5cm,2cm)+(-0.5cm,0cm)$) {}; \draw (p1in) -- (p1out); \draw [dashed] (p2in) -- (p2out); \draw [dotted] (p3in) -- (p3out); \end{tikzpicture} } \caption{The vertical axis corresponds to propagation in time, while the horizontal axis parameterises space. A process like the one in the center involving three particles may be factorised as a sequence of two-body scatterings in two possible ways, as in the left or right figure. Consistency between the two factorisations yields the Yang-Baxter equation.} \label{fig:Yang-Baxter-Intro} \end{figure} It is clear that for consistency it should not matter which choice of factorisation we pick. This imposes a constraint on the two-body S-matrix $\mathcal{S}$ that goes under the name of \emph{Yang-Baxter equation}. It is found by equating the left and right processes in Figure~\ref{fig:Yang-Baxter-Intro} $$ \mathcal{S}_{23}\cdot\mathcal{S}_{13}\cdot\mathcal{S}_{12} = \mathcal{S}_{12}\cdot\mathcal{S}_{13}\cdot\mathcal{S}_{23}\,, $$ and one can check that this is enough to ensure consistency also for factorisation of scattering with more than three particles. Symmetries actually allow one to fix the S-matrix only up to an overall scalar function, which in this context is called the ``dressing factor''~\cite{Arutyunov:2004vx}. Without this factor ``just'' ratios of scattering elements would be known exactly in the effective coupling $\lambda$. 
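The Yang-Baxter equation can be checked explicitly in simple cases. As an illustrative sketch, the following verifies it numerically on $\mathbb{C}^2\otimes\mathbb{C}^2\otimes\mathbb{C}^2$ for Yang's rational R-matrix $R(u)=u\,\mathbb{1}+P$---the textbook solution, not the AdS/CFT S-matrix itself:

```python
import numpy as np

# Permutation operator on C^2 (x) C^2
P = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        P[2 * i + j, 2 * j + i] = 1.0

def R(u):
    """Yang's rational R-matrix, the simplest solution of the Yang-Baxter equation."""
    return u * np.eye(4) + P

I2 = np.eye(2)
P23 = np.kron(I2, P)  # swaps the second and third tensor factors

def R12(u): return np.kron(R(u), I2)
def R23(u): return np.kron(I2, R(u))
def R13(u): return P23 @ R12(u) @ P23  # conjugation by P23 moves leg 2 to leg 3

u, v = 0.7, -1.3
lhs = R12(u - v) @ R13(u) @ R23(v)
rhs = R23(v) @ R13(u) @ R12(u - v)
print(np.allclose(lhs, rhs))  # True
```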
However, the dressing factor for $\mathcal{N}=4$ SYM is actually known~\cite{Beisert:2006ez}, and it can be found by solving the equation obtained by imposing \emph{crossing invariance}~\cite{Janik:2006dc,Arutyunov:2006iu}, which relates physical processes to the ones in which one particle is analytically continued to an unphysical channel. As we will explain in more detail later, in this thesis we will show how it is possible to apply similar methods to a specific instance of the AdS$_3$/CFT$_2$ correspondence, allowing us to find an all-loop S-matrix for that case. This opens the possibility of implementing the same program that proved to be successful for AdS$_5$/CFT$_4$, and of solving another dual pair exactly in the planar limit, suggesting that the presence of Integrability might be more general than expected. The methods we exploit are actually borrowed not from the gauge-theory analysis of $\mathcal{N}=4$ SYM, but from the description of strings on {AdS$_5\times$S$^5$}. Let us briefly review the situation there. \paragraph{String theory and Integrability} In parallel to the findings for the gauge theory, developments were achieved also on the string theory side of the AdS/CFT correspondence. The string is described as a non-linear $\sigma$-model on the background {AdS$_5\times$S$^5$}, and thanks to the realisation in terms of the supercoset $\text{PSU}(2,2|4)/(\text{SO}(4,1)\times \text{SO}(5))$ it is possible to write down its action to all orders in the fields~\cite{Metsaev:1998it}. Integrability starts appearing at the classical level. In fact, the equations of motion for the superstring on the background {AdS$_5\times$S$^5$} admit a formulation in terms of a Lax connection $L_\alpha(z,\tau,\sigma)$~\cite{Bena:2003wd}, which depends on the worldsheet coordinates $(\tau,\sigma)$ and a \emph{spectral parameter} that we denote by $z$.
It is a way to encode the dynamics of the model, as the flatness condition $$ \partial_\tau L_\sigma -\partial_\sigma L_\tau - [L_\tau,L_\sigma]=0\,, $$ provides the equations of motion for the string. The Lax connection is of primary importance, since expanding the trace of its path-ordered exponential---that goes under the name of transfer matrix---around any value of the spectral parameter generates the complete tower of conserved charges of the system. These charges can be used to construct solutions of the equations of motion. Classical integrability is inherited also by reduced models, obtained by confining the motion of the string to specific dynamics, and one of the finite-dimensional integrable models that may be recovered is \textit{e.g.}\xspace the Neumann model~\cite{Arutyunov:2003uj,Arutyunov:2003za}. \bigskip The action for the string has a local invariance, which generates unphysical modes that should be removed by fixing a gauge. It turns out that, to make contact with the description of the gauge theory, the proper gauge choice is a combination of the light-cone gauge for the bosonic coordinates and a specific ``kappa-gauge'' for the fermions. We will review this procedure in Chapter~\ref{ch:strings-light-cone-gauge}. The Hamiltonian of the light-cone gauge-fixed two-dimensional model is highly non-linear, and a standard way to study it is by implementing the usual expansion in powers of fields. The quadratic Hamiltonian turns out to be the free Hamiltonian for eight massive bosons and eight massive fermions of unit mass.
From the quartic contribution one can extract the tree-level two-body scattering processes~\cite{Klose:2006zd} that satisfy the classical Yang-Baxter equation---the semiclassical limit of the equation that we encountered before. Loop contributions may be taken into account to construct the S-matrix perturbatively~\cite{Klose:2007rz}, confirming again consistency with factorised scattering. Let us stress again that now the perturbative expansion is performed for large values of $\lambda$, as opposed to the small-$\lambda$ expansion used for the perturbation theory in $\mathcal{N}=4$ SYM. The perturbative results suggest that the approach used on the gauge-theory side to construct an all-loop S-matrix may be considered also for the string. In this context the scattering will involve excitations on the worldsheet, and rather than a spin-chain we now encounter a field theory in $1+1$ dimensions, to be quantised to all orders. For the string there actually exists a derivation of the exact eigenvalues of the central charges~\cite{Arutyunov:2006ak} that are crucial in the all-loop construction. Exploiting the symmetries, one finds an S-matrix that is essentially the same as the one derived from the point of view of the gauge theory~\cite{Arutyunov:2006yd}---the two objects being related by a change of the two-particle basis. This S-matrix is supposed to describe to all loops the scattering of the excitations on the worldsheet. The results rely on the assumption that Integrability extends from the classical to the quantum level; however, the strongest indication of its validity is that the all-loop S-matrix matches with the perturbative results of both the string and the gauge theory. \bigskip The program of finding the S-matrix is so important because its knowledge allows one to construct the \emph{Bethe-Yang equations} by imposing periodicity of the wave-function. These are the equations that one should solve to compute the spectrum of the theory~\cite{Beisert:2005fw}.
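Schematically (a sketch in simplified notation, suppressing the matrix structure and internal indices), imposing periodicity of the wave-function for $M$ excitations with momenta $p_k$ on a chain or worldsheet of length $L$ produces equations of the form
$$
e^{i p_k L}\prod_{\substack{j=1\\ j\neq k}}^{M} S(p_j,p_k)=1\,,\qquad k=1,\dots,M\,,
$$
expressing the fact that carrying an excitation once around the circle amounts to a free propagation phase together with a scattering phase from each of the other excitations.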
Let us mention that the Bethe-Yang equations derived from the all-loop S-matrix actually describe the spectrum only in the so-called \emph{asymptotic limit}. In fact, as anticipated, from the point of view of the gauge theory higher-loop corrections introduce long-range interactions, and these eventually lead to virtual particles travelling all around the spin-chain. These wrapping interactions give contributions that are exponentially suppressed in the length of the spin-chain, and become important for precision computations when this is finite. The same issue has a counterpart on the string side. In fact, in order to define asymptotic states on the worldsheet and an S-matrix, one has to consider the limit of large length of the light-cone gauge-fixed string. In both cases, therefore, to compute the exact spectrum one has to account for finite-size corrections~\cite{Ambjorn:2005wa}. A way to incorporate the wrapping corrections is to use the trick of the mirror model, first introduced by Zamolodchikov in the context of relativistic integrable systems~\cite{Zamolodchikov:1989cf}. Rather than considering a model with finite length, one chooses to reinterpret the problem as the one of finding the spectrum for another model, with infinite size but at finite temperature. The treatment can be done with the method of the Thermodynamic Bethe Ansatz (TBA)~\cite{Yang:1968rm}, which can be applied to the case of the ground state of {AdS$_5\times$S$^5$}~\cite{Arutyunov:2007tc,Arutyunov:2009ur,Bombardelli:2009ns} as well as the excited states~\cite{Gromov:2009tv,Gromov:2009zb,Arutyunov:2009ax}, allowing one to obtain numerical results for the spectrum with arbitrary precision. Thanks to the inclusion of the finite-size effects~\cite{Arutyunov:2010gb,Balog:2010xa}, it was possible to match with perturbative results at \emph{five} loops in the gauge theory~\cite{Eden:2012fe}. 
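The mirror trick can be summarised as follows (a sketch; conventions in the literature may differ by signs). The double Wick rotation $\tau\to i\tilde{\sigma}$, $\sigma\to i\tilde{\tau}$ exchanges the roles of space and time on the worldsheet, trading the energy and momentum of an excitation for their mirror counterparts,
$$
\tilde{E}=ip\,,\qquad \tilde{p}=iE\,,
$$
so that the finite-length partition function of the original model is reinterpreted as a finite-temperature partition function of the mirror model, whose thermodynamics is then accessible via the TBA.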
A more recent and refined version of the TBA is the Quantum Spectral Curve~\cite{Gromov:2013pga}, through which it is possible to efficiently obtain \emph{analytic} results for anomalous dimensions of operators up to \emph{ten} loops~\cite{Marboe:2014gma}. \paragraph{AdS$_3$/CFT$_2$} The striking presence of Integrability for AdS$_5/$CFT$_4$ at large $N$ and the great success achieved raise the natural question of whether it is possible to apply the same methods also to other instances of the AdS/CFT correspondence. We may wonder whether lower-dimensional and less supersymmetric models also have a chance of being solvable. The answer was shown to be positive for the ABJM theory~\cite{Aharony:2008ug}, see~\cite{Klose:2010ki} for a review. The case that is of interest for this thesis is rather AdS$_3$/CFT$_2$. AdS$_3$ gravity was actually the first, \emph{ante litteram}, example of the holographic duality. In 1986 Brown and Henneaux showed that its asymptotic symmetry algebra---the gauge transformations that preserve the boundary conditions on the field configurations---coincides with the Virasoro algebra, that is the symmetry of two-dimensional CFTs~\cite{Brown:1986nw}. On the one hand, gravity in three dimensions is remarkably simpler than the one we experience in our world, and it can be seen as an easier set-up to investigate some questions. An example of this is that it does not contain a propagating graviton. On the other hand, this does not make it a trivial theory at all. As shown by Ba\~{n}ados, Teitelboim and Zanelli, gravity in three dimensions with a negative cosmological constant admits black hole solutions~\cite{Banados:1992wn}. These are locally isometric to empty AdS$_3$, differing from it because of global identifications~\cite{Banados:1992gq}.
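The Brown-Henneaux result can be stated quantitatively: the asymptotic Virasoro algebra comes with a definite central charge, fixed by the AdS$_3$ radius $\ell$ and the three-dimensional Newton constant $G_N$,
$$
c=\frac{3\ell}{2G_N}\,,
$$
so that a semiclassical bulk (large $\ell/G_N$) corresponds to a dual CFT$_2$ with large central charge.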
These black holes also obey the famous Bekenstein-Hawking area-law for the entropy~\cite{Bekenstein:1972tm,Bekenstein:1973ur}, making AdS$_3$ a nice playground to further understand the nature of these objects. For black holes whose near-horizon geometry is (locally) AdS$_3$, it was actually possible to derive the area-law by performing a micro-state counting in the dual CFT$_2$~\cite{Strominger:1997eq}. This computation generalises the one for black holes arising in string theory, as considered in~\cite{Strominger:1996sh,Callan:1996dv,Maldacena:1996ky}. Let us mention that AdS$_3$/CFT$_2$ also appears in this context because of a particular D-brane construction, the D1-D5 system. In Chapters~\ref{ch:symm-repr-T4},~\ref{ch:S-matrix-T4} and~\ref{ch:massive-sector-T4} we will actually study strings propagating on the background that arises as the near-horizon limit of D1-D5. Let us briefly review some facts about backgrounds that are relevant for AdS$_3$/CFT$_2$. The backgrounds that we want to consider here preserve a total of 16 supercharges, and are AdS$_3\times$S$^3\times$S$^3\times$S$^1$ and AdS$_3\times$S$^3\times$T$^4$. The former is actually a family of backgrounds, as the amount of supersymmetry is preserved if the radii of AdS and the two three-spheres S$^3_{(1)}$ and S$^3_{(2)}$ satisfy the constraint $$ {R^{-2}_{\text{AdS}}}={R^{-2}_{{(1)}}}+{R^{-2}_{{(2)}}}\,. $$ We then find a family parameterised by a continuous parameter $\alpha=R^{2}_{\text{AdS}}/R^{2}_{{(1)}}$, where $0<\alpha<1$. The algebra of isometries is given by $\alg{d}(2,1;\alpha)_{\mbox{\tiny L}}\oplus \alg{d}(2,1;\alpha)_{\mbox{\tiny R}}$, where the labels for the two copies of the exceptional Lie superalgebra~\cite{Frappat:1996pb} refer to the Left and Right movers of the dual CFT$_2$.
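Dividing the constraint on the radii by $R^{-2}_{\text{AdS}}$ makes the role of $\alpha$ transparent: a one-line check gives
$$
1=\frac{R^{2}_{\text{AdS}}}{R^{2}_{(1)}}+\frac{R^{2}_{\text{AdS}}}{R^{2}_{(2)}}=\alpha+\frac{R^{2}_{\text{AdS}}}{R^{2}_{(2)}}\qquad\Longrightarrow\qquad \frac{R^{2}_{\text{AdS}}}{R^{2}_{(2)}}=1-\alpha\,,
$$
so the two spheres enter symmetrically under $\alpha\leftrightarrow1-\alpha$, and the limits $\alpha\to0,1$ send one of the two sphere radii to infinity.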
The background {AdS$_3\times$S$^3\times$T$^4$} may be understood as a contraction of the previous case, when we blow up the radius of one of the S$^3$'s and then compactify these directions together with the remaining S$^1$ to get a four-dimensional torus. At the level of the algebra this is achieved by a proper $\alpha\to0$ limit, or alternatively $\alpha\to1$. In this case the algebra of isometries is $\alg{psu}(1,1|2)_{\mbox{\tiny L}}\oplus \alg{psu}(1,1|2)_{\mbox{\tiny R}}$. The above backgrounds provide a rich structure since they can be supported by a mixture of Ramond-Ramond (RR) and Neveu-Schwarz--Neveu-Schwarz (NSNS) fluxes, where a parameter permits one to interpolate between the pure RR and the pure NSNS backgrounds. The latter case was solved by using methods of representations of chiral algebras~\cite{Giveon:1998ns,Elitzur:1998mm,Maldacena:2000hw,Maldacena:2000kv}. By contrast, the case of pure RR fluxes cannot be addressed with these techniques~\cite{Berkovits:1999im}, and we will argue that the right language to study it in the planar limit is indeed Integrability. One of the main challenges of AdS$_3/$CFT$_2$ is that the gauge theories dual to the above backgrounds are not as well understood as the example of $\mathcal{N}=4$ SYM in four dimensions. Maldacena argued that the dual CFT$_2$ should be found at the infra-red fixed point of the Higgs branch of the dual gauge theory~\cite{Maldacena:1997re}. In the case of {AdS$_3\times$S$^3\times$T$^4$} the finite-dimensional algebra mentioned before is completed to \emph{small} $\mathcal{N}=(4,4)$ superconformal symmetry~\cite{Seiberg:1999xz}, while for AdS$_3\times$S$^3\times$S$^3\times$S$^1$ one finds \emph{large} $\mathcal{N}=(4,4)$~\cite{Boonstra:1998yu}. Constructions of long-range spin-chains for the dual CFT$_2$'s are unfortunately lacking.
A first proposal of a weakly coupled spin-chain description of the CFT$_2$ dual to {AdS$_3\times$S$^3\times$T$^4$} appeared in~\cite{Pakman:2009mi}, for a different and more recent description see~\cite{Sax:2014mea}. In this thesis we show that addressing the problem on the string theory side of the correspondence allows us to derive the desired all-loop S-matrix. One of the new features common to backgrounds relevant for the AdS$_3$/CFT$_2$ correspondence is the presence of \emph{massless} worldsheet excitations, corresponding to flat directions\footnote{For AdS$_3\times$S$^3\times$S$^3\times$S$^1$ directions corresponding to massless modes are the circle $S^1$ and a linear combination of the two equators of the S$^3$'s. For {AdS$_3\times$S$^3\times$T$^4$} the flat directions correspond to the torus.}. For some time they have been elusive in the Integrability description, but we will show that they can be naturally included in it. The massive sectors of AdS$_3\times$S$^3\times$S$^3\times$S$^1$ and {AdS$_3\times$S$^3\times$T$^4$} may be described respectively by the cosets $$ \frac{\text{D}(2,1;\alpha)_{\mbox{\tiny L}}\times \text{D}(2,1;\alpha)_{\mbox{\tiny R}}}{\text{SO}(1,2)\times \text{SO}(3) \times \text{SO}(3)}\,, \qquad\qquad \frac{\text{PSU}(1,1|2)_{\mbox{\tiny L}}\times \text{PSU}(1,1|2)_{\mbox{\tiny R}}}{\text{SO}(1,2)\times \text{SO}(3) }\,, $$ and following the method of Ref.~\cite{Metsaev:1998it} one can construct the action for the non-linear $\sigma$-models on AdS$_3\times$S$^3\times$S$^3$ and AdS$_3\times$S$^3$~\cite{Park:1998un,Pesando:1998wm,Rahmfeld:1998zn}. The missing flat directions can then be re-inserted by hand, and agreement with the Green-Schwarz action can be shown in a certain kappa-gauge for fermions~\cite{Babichenko:2009dk}. 
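As a quick consistency check on the second of the cosets above, one can count bosonic dimensions: the bosonic subalgebra of $\alg{psu}(1,1|2)$ is $\alg{su}(1,1)\oplus\alg{su}(2)$, of dimension $3+3=6$, so the two copies contribute $12$ bosonic generators, and quotienting by $\text{SO}(1,2)\times\text{SO}(3)$ removes $3+3=6$ of them, leaving
$$
12-6=6=\dim\big(\text{AdS}_3\times \text{S}^3\big)\,,
$$
as required for the massive sector. An analogous count for the first coset, with bosonic subalgebra $\alg{sl}(2)\oplus\alg{su}(2)\oplus\alg{su}(2)$ in each copy of $\alg{d}(2,1;\alpha)$, reproduces $\dim(\text{AdS}_3\times\text{S}^3\times\text{S}^3)=18-9=9$.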
Classical Integrability for the pure RR backgrounds\footnote{It is interesting that classical Integrability was extended also to the case in which a $B$-field is present in the background~\cite{Cagnazzo:2012se}.} was demonstrated in~\cite{Babichenko:2009dk}. In fact, these cosets enjoy a $\mathbb{Z}_4$ symmetry that allows one to borrow the construction for the Lax representation of the {AdS$_5\times$S$^5$} background~\cite{Bena:2003wd}, see also~\cite{Adam:2007ws,Zarembo:2010sg,Zarembo:2010yz}. In this thesis we will consider the case of strings on the pure RR {AdS$_3\times$S$^3\times$T$^4$} background, and by assuming that Integrability extends from the classical to the quantum level we will derive an all-loop S-matrix, in the spirit of what was done for strings on {AdS$_5\times$S$^5$}. For the case of mixed flux see~\cite{Lloyd:2014bsa}, and for results on AdS$_3\times$S$^3\times$S$^3\times$S$^1$ see~\cite{Borsato:2012ud,Borsato:2012ss,Borsato:2015mma}. \paragraph{Deforming {AdS$_5\times$S$^5$}} Together with the study of lower-dimensional AdS models, one may wonder whether it is possible to deform the superstring on {AdS$_5\times$S$^5$} and its dual $\mathcal{N}=4$ SYM, to relax some of the symmetries while preserving the integrable structure. This would teach us what the conditions are under which Integrability is still present, and it would allow us to study cases that are less special than the maximally supersymmetric theory in four dimensions. Examples of deformations of the $\sigma$-model that preserve its classical Integrability are orbifolds of either AdS$_5$ or S$^5$~\cite{Beisert:2005he,Solovyov:2007pw}, where the fields living on the worldsheet are identified through the action of a discrete subgroup of the bosonic isometries. Another class of deformations is generated by the so-called ``TsT-transformations'', that can be implemented any time the background possesses at least two commuting isometries. 
Let us call $\phi_1$ and $\phi_2$ the two directions on which these isometries act as shifts. The TsT-transformation is a sequence of a T-duality, a shift, and a T-duality on $\phi_1$ and $\phi_2$. The first T-duality transformation acts on $\phi_1$, producing the dual coordinate $\tilde{\phi}_1$; the shift is implemented on $\phi_2$ as $\phi_2\to \phi_2 + \gamma \tilde{\phi}_1$; to conclude, one performs another T-duality along $\tilde{\phi}_1$~\cite{Alday:2005ww}. Multi-parameter deformations are made possible by the various choices of pairs of U$(1)$-isometries used to implement the TsT-transformation, and in general they can break all supersymmetries~\cite{Frolov:2005dj}. A restriction to a one-parameter real deformation of the sphere reproduces the Lunin-Maldacena background~\cite{Lunin:2005jy}, which preserves $\mathcal{N}=1$ supersymmetry. The effects of these classes of deformations on the gauge theory and on the quantum integrable model have also been studied, and we refer to~\cite{vanTongeren:2013gva} for a review on this. A different approach consists of deforming the symmetry algebra by a continuous parameter. The case we want to discuss is generally referred to as $q$-deformation, where $q$ is indeed the deformation parameter. This deformation replaces a Lie algebra $\alg{f}$ by its quantum group version $U_q(\alg{f})$, which we will just denote by $\alg{f}_q$. To show how this works in a simple example\footnote{For higher-rank algebras, the deformed commutation relations in the Serre-Chevalley basis must be supplemented by the $q$-deformed Serre relations.}, let us consider the case of the $\alg{sl}(2)$ algebra where we denote by $\gen{S}_3$ the Cartan element and by $\gen{S}_\pm$ the positive and negative roots, \textit{i.e.}\xspace the ladder operators. 
The $\alg{sl}_q(2)$ relations are given by $$ [\gen{S}_3,\gen{S}_\pm] = \pm 2 \gen{S}_\pm\,, \qquad\quad [\gen{S}_+,\gen{S}_-]=\frac{q^{\gen{S}_3}-q^{-\gen{S}_3}}{q-q^{-1}}\,, $$ meaning that the deformation modifies the right-hand side of the commutation relation of the two ladder operators. Sending the deformation parameter $q \to 1$ we recover the undeformed algebra. The $q$-deformation is not just a beautiful mathematical construction; it is also physically motivated. The most famous realisation of it is the XXZ spin-chain~\cite{Faddeev:1996iy}. In fact, allowing for anisotropy (\textit{i.e.}\xspace a different coupling related to $q$ for the spins in the $z$-direction) one obtains a $q$-deformation, in the sense presented above, of the XXX spin-chain. Interest in this type of deformation in the context of AdS/CFT was first sparked when Beisert and Koroteev studied the $q$-deformation of the R-matrix of the Hubbard model~\cite{Beisert:2008tw}, see also~\cite{Beisert:2010kk,Beisert:2011wq}. After solving the crossing equation for the dressing factor, it was possible to define an all-loop S-matrix for the $q$-deformation of the integrable model describing the dual pair of AdS$_5$/CFT$_4$~\cite{Hoare:2011wr}. The case considered was that of $q$ being a root of unity, and it was shown that the ``vertex to IRF'' transformation can be used to restore unitarity of the corresponding S-matrix~\cite{Hoare:2013ysa}. Much progress has been made, and thanks to the TBA construction of~\cite{Arutyunov:2012zt,Arutyunov:2012ai} it is even possible to compute the spectrum at finite size. We want to stress that all this work was pursued just by using the description of the deformed quantum integrable model, bypassing the meaning of this deformation for both the gauge and the string theory. The gap was filled on the string side by Delduc, Magro and Vicedo, who proposed a method to deform the action for the superstring on {AdS$_5\times$S$^5$}~\cite{Delduc:2013qra}.
This construction realises a $q$-deformation of the symmetry algebra of the classical charges~\cite{Delduc:2014kha}, where now the deformation parameter is \emph{real}. It is a generalisation of deformations valid for bosonic cosets~\cite{Delduc:2013fga} and it is of the type of the Yang-Baxter $\sigma$-model of Klim\v{c}\'ik~\cite{Klimcik:2002zj,Klimcik:2008eq}. It is sometimes referred to as ``$\eta$-deformation'', where $\eta$ is a deformation parameter that is related to $q$. The limit $\eta\to0$ gives back the undeformed model. The remarkable fact is that by construction the deformation procedure preserves the classical Integrability of the original model. In this thesis we will study this deformation when it is applied to strings on {AdS$_5\times$S$^5$}, and we will compare it to the S-matrix of Beisert and Koroteev. Let us mention that recently a new method was studied, going under the name of ``$\lambda$-deformation''. It was first introduced by gauging a combination of a principal chiral model and a Wess-Zumino-Witten model~\cite{Sfetsos:2013wia}, and it was then extended to strings on symmetric spaces~\cite{Hollowood:2014rla} and on {AdS$_5\times$S$^5$}~\cite{Hollowood:2014qma}. There is evidence that it realises the $q$-deformation in the case of $q$ being a root of unity~\cite{Hollowood:2015dpa}, and it was shown to be related to the $\eta$-deformation~\cite{Vicedo:2015pna,Hoare:2015gda} by the Poisson-Lie T-duality of~\cite{Klimcik:1995ux,Klimcik:1995dy}. To conclude this paragraph let us point out that it is still unclear how to construct the duals of these $\sigma$-models, in other words how to $q$-deform $\mathcal{N}=4$ SYM. The result is expected to be a non-commutative gauge theory, and it would be extremely interesting to build it explicitly. \paragraph{About this thesis} This thesis contains some of the author's contributions to the research on Integrability applied to AdS/CFT.
Part of this work has been devoted to the understanding of lower dimensional examples, and we will present in particular the derivation of an all-loop S-matrix for the case of {AdS$_3\times$S$^3\times$T$^4$}. A different direction was motivated by questions on the $\eta$-deformation of strings on {AdS$_5\times$S$^5$}. We start in \textbf{Chapter~\ref{ch:strings-light-cone-gauge}} with a review of basic notions that will be useful for the remaining chapters of the thesis. We begin with a discussion of bosonic strings and of how to fix light-cone gauge. We follow~\cite{Arutyunov:2009ga}, but we also include the possibility of a background $B$-field. The main consequences of the light-cone gauge-fixing are explained. We then extend the discussion to include fermions. We present the generic action for the type IIB superstring at quadratic order in fermions, and explain how to fix a proper kappa-gauge. After presenting the \emph{decompactification limit} of the worldsheet---necessary to define asymptotic states---we discuss the large-tension expansion, equivalent to the usual expansion in powers of fields on the worldsheet. We also review perturbative quantisation and the corresponding scattering theory, whose all-loop generalisation will be a major topic in the following. \textbf{Chapter~\ref{ch:symm-repr-T4}} is the first one specifically devoted to AdS$_3$/CFT$_2$. We consider the background {AdS$_3\times$S$^3\times$T$^4$} and we study the centrally-extended symmetry algebra $\mathcal{A}$ of the charges commuting with the light-cone Hamiltonian. We derive the exact momentum-dependence of the central charges, and then we study the representation of $\mathcal{A}$ under which the excitations are organised, in a limit in which the dispersion relation is relativistic. The analysis shows that this representation is \emph{reducible}, a feature of {AdS$_3\times$S$^3\times$T$^4$} that was absent in the well-known case of {AdS$_5\times$S$^5$}.
We find a total of three irreducible representations, labelled by the eigenvalue of an angular momentum in AdS$_3\times$S$^3$. Figure~\ref{fig:massive-Intro} shows the two \emph{massive} representations, where this eigenvalue takes value $+1$ and $-1$ on Left and Right excitations respectively. Here Left and Right refer to the two copies of $\alg{psu}(1,1|2)$, that are isometries of the background. The algebra $\mathcal{A}$ was first identified in~\cite{Borsato:2013qpa} from the point of view of the spin-chain with symmetry $\alg{psu}(1,1|2)_{\mbox{\tiny L}}\oplus \alg{psu}(1,1|2)_{\mbox{\tiny R}}$. The excitations on this spin-chain correspond to the massive worldsheet excitations of {AdS$_3\times$S$^3\times$T$^4$}. In this chapter we take instead the point of view of the string theory description and we follow~\cite{Borsato:2014exa,Borsato:2014hja}, where \emph{massless} excitations---see Figure~\ref{fig:massless-Intro}---were finally included. \begin{figure}[t] \centering \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] (PhiM) at ( 0 , 2cm) {\small $\ket{Y^{\mbox{\tiny L}}}$}; \node [box] (PsiP) at (-2cm, 0cm) {\small $\ket{\eta^{\mbox{\tiny L} 1}}$}; \node [box] (PsiM) at (+2cm, 0cm) {\small $\ket{\eta^{\mbox{\tiny L} 2}}$}; \node [box] (PhiP) at ( 0 ,-2cm) {\small $\ket{Z^{\mbox{\tiny L}}}$}; \newcommand{0.09cm,0cm}{0.09cm,0cm} \newcommand{0cm,0.10cm}{0cm,0.10cm} \draw [arrow] ($(PhiM.west) +(0cm,0.10cm)$) -- ($(PsiP.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=south east,Q node] {}; \draw [arrow] ($(PsiP.north)+(0.09cm,0cm)$) -- ($(PhiM.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north west,Q node] {}; \draw [arrow] ($(PsiM.south)-(0.09cm,0cm)$) -- ($(PhiP.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south east,Q node] {}; \draw [arrow] ($(PhiP.east) -(0cm,0.10cm)$) -- ($(PsiM.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=north west,Q node] {}; \draw [arrow] ($(PhiM.east) 
-(0cm,0.10cm)$) -- ($(PsiM.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=north east,Q node] {}; \draw [arrow] ($(PsiM.north)+(0.09cm,0cm)$) -- ($(PhiM.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south west,Q node] {}; \draw [arrow] ($(PsiP.south)-(0.09cm,0cm)$) -- ($(PhiP.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north east,Q node] {}; \draw [arrow] ($(PhiP.west) +(0cm,0.10cm)$) -- ($(PsiP.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=south west,Q node] {}; \draw [dotted] (PsiM) -- (PsiP) node [pos=0.65,anchor=north west,Q node] {}; \end{tikzpicture} \hspace{2cm} \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] (PhiM) at ( 0 , 2cm) {\small $\ket{Z^{\mbox{\tiny R}}}$}; \node [box] (PsiP) at (-2cm, 0cm) {\small $\ket{\eta^{\mbox{\tiny R}}_{\ 2}}$}; \node [box] (PsiM) at (+2cm, 0cm) {\small $\ket{\eta^{\mbox{\tiny R}}_{\ 1}}$}; \node [box] (PhiP) at ( 0 ,-2cm) {\small $\ket{Y^{\mbox{\tiny R}}}$}; \newcommand{0.09cm,0cm}{0.09cm,0cm} \newcommand{0cm,0.10cm}{0cm,0.10cm} \draw [arrow] ($(PhiM.west) +(0cm,0.10cm)$) -- ($(PsiP.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=south east,Q node] {}; \draw [arrow] ($(PsiP.north)+(0.09cm,0cm)$) -- ($(PhiM.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north west,Q node] {}; \draw [arrow] ($(PsiM.south)-(0.09cm,0cm)$) -- ($(PhiP.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south east,Q node] {}; \draw [arrow] ($(PhiP.east) -(0cm,0.10cm)$) -- ($(PsiM.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=north west,Q node] {}; \draw [arrow] ($(PhiM.east) -(0cm,0.10cm)$) -- ($(PsiM.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=north east,Q node] {}; \draw [arrow] ($(PsiM.north)+(0.09cm,0cm)$) -- ($(PhiM.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south west,Q node] {}; \draw [arrow] ($(PsiP.south)-(0.09cm,0cm)$) -- ($(PhiP.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north east,Q node] {}; \draw [arrow] ($(PhiP.west) +(0cm,0.10cm)$) -- ($(PsiP.south)+(0.09cm,0cm)$) node 
[pos=0.5,anchor=south west,Q node] {}; \draw [dotted] (PsiM) -- (PsiP) node [pos=0.65,anchor=north west,Q node] {}; \end{tikzpicture} \caption{The Left and Right massive modules. Excitations $Z^{\mbox{\tiny L},\mbox{\tiny R}}$ correspond to transverse directions in AdS$_3$, while $Y^{\mbox{\tiny L},\mbox{\tiny R}}$ in S$^3$. Fermions are denoted by $\eta$. The arrows correspond to supercharges, while the dotted lines correspond to the action of an $\alg{su}(2)$.} \label{fig:massive-Intro} \end{figure} \begin{figure}[t] \centering \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \newcommand{-4cm}{-4cm} \begin{scope}[xshift=-4cm] \node [box] (PhiM) at ( 0 , 2cm) {\small $\ket{\chi^{1}}$}; \node [box] (PsiP) at (-2cm, 0cm) {\small $\ket{T^{11}}$}; \node [box] (PsiM) at (+2cm, 0cm) {\small $\ket{T^{21}}$}; \node [box] (PhiP) at ( 0 ,-2cm) {\small $\ket{\widetilde{\chi}^{1}}$}; \newcommand{0.09cm,0cm}{0.09cm,0cm} \newcommand{0cm,0.10cm}{0cm,0.10cm} \draw [arrow] ($(PhiM.west) +(0cm,0.10cm)$) -- ($(PsiP.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=south east,Q node] {}; \draw [arrow] ($(PsiP.north)+(0.09cm,0cm)$) -- ($(PhiM.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north west,Q node] {}; \draw [arrow] ($(PsiM.south)-(0.09cm,0cm)$) -- ($(PhiP.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south east,Q node] {}; \draw [arrow] ($(PhiP.east) -(0cm,0.10cm)$) -- ($(PsiM.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=north west,Q node] {}; \draw [arrow] ($(PhiM.east) -(0cm,0.10cm)$) -- ($(PsiM.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=north east,Q node] {}; \draw [arrow] ($(PsiM.north)+(0.09cm,0cm)$) -- ($(PhiM.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south west,Q node] {}; \draw [arrow] ($(PsiP.south)-(0.09cm,0cm)$) -- ($(PhiP.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north east,Q node] {}; \draw [arrow] ($(PhiP.west) +(0cm,0.10cm)$) -- ($(PsiP.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=south west,Q node] {}; 
\draw [dotted] (PsiM) -- (PsiP) node [pos=0.65,anchor=north west,Q node] {}; \end{scope} \newcommand{1cm}{1cm} \newcommand{1cm}{1cm} \begin{scope}[xshift=1cm,yshift=1cm] \node [box] (PhiM) at ( 0 , 2cm) {\small $\ket{\chi^{2}}$}; \node [box] (PsiP) at (-2cm, 0cm) {\small $\ket{T^{12}}$}; \node [box] (PsiM) at (+2cm, 0cm) {\small $\ket{T^{22}}$}; \node [box] (PhiP) at ( 0 ,-2cm) {\small $\ket{\widetilde{\chi}^{2}}$}; \newcommand{0.09cm,0cm}{0.09cm,0cm} \newcommand{0cm,0.10cm}{0cm,0.10cm} \draw [arrow] ($(PhiM.west) +(0cm,0.10cm)$) -- ($(PsiP.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=south east,Q node] {}; \draw [arrow] ($(PsiP.north)+(0.09cm,0cm)$) -- ($(PhiM.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north west,Q node] {}; \draw [arrow] ($(PsiM.south)-(0.09cm,0cm)$) -- ($(PhiP.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south east,Q node] {}; \draw [arrow] ($(PhiP.east) -(0cm,0.10cm)$) -- ($(PsiM.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=north west,Q node] {}; \draw [arrow] ($(PhiM.east) -(0cm,0.10cm)$) -- ($(PsiM.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=north east,Q node] {}; \draw [arrow] ($(PsiM.north)+(0.09cm,0cm)$) -- ($(PhiM.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south west,Q node] {}; \draw [arrow] ($(PsiP.south)-(0.09cm,0cm)$) -- ($(PhiP.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north east,Q node] {}; \draw [arrow] ($(PhiP.west) +(0cm,0.10cm)$) -- ($(PsiP.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=south west,Q node] {}; \draw [dotted] (PsiM) -- (PsiP) node [pos=0.65,anchor=south west,Q node] {}; \draw [dashed] ($(PhiM.west)+({-0.1cm,0.1cm})$) -- ($(PhiM.east)-({1cm,1cm})+({-4cm,0cm})+({-0.1cm,0.2cm})$); \draw [dashed] (PsiM) -- ($(PsiM.east)-({1cm,1cm})+({-4cm,0cm})$); \draw [dashed] (PsiP) -- ($(PsiP.east)-({1cm,1cm})+({-4cm,0cm})+({-0.1cm,0.1cm})$); \draw [dashed] ($(PhiP.south west)+({0cm,0.2cm})$) -- ($(PhiP.east)-({1cm,1cm})+({-4cm,0cm})+({0cm,-0.1cm})$) node [pos=0.45,anchor=north west,Q node] {}; \end{scope} \end{tikzpicture} \caption{The 
massless module. $T^{\dot{a}a}$ are excitations on T$^4$, and the fermions are denoted by $\chi^a$ and $\tilde{\chi}^a$. Dotted and dashed lines correspond to the actions of two $\alg{su}(2)$ algebras.} \label{fig:massless-Intro} \end{figure} Using the knowledge of the central charges and arguments of representation theory, we generalise the representations to all loops in the large-$N$ limit. We further study these representations and introduce the notion of Left-Right symmetry. In \textbf{Chapter~\ref{ch:S-matrix-T4}} we impose compatibility with symmetries and bootstrap the all-loop S-matrix for the worldsheet excitations, as done in~\cite{Borsato:2014exa,Borsato:2014hja}. Remarkably, the S-matrix satisfies the Yang-Baxter equation, confirming compatibility with the assumption of factorisation of scattering. The S-matrix is actually fixed completely up to some dressing factors that cannot be found from symmetries. Taking into account the constraints of unitarity and of Left-Right symmetry, we find a total of four unspecified functions. Further constraints are imposed on them by the crossing equations, that we derive. We then explain how to impose the periodicity condition on the wave-function to derive the Bethe-Yang equations. We guide the reader through the diagonalisation procedure, introducing the various complications in different steps, until the nesting procedure is used. We conclude by presenting the complete\footnote{This result has appeared in~\cite{Borsato:2016kbm}.} set of Bethe-Yang equations for {AdS$_3\times$S$^3\times$T$^4$}. We restrict our attention to the massive sector\footnote{The massive sector of {AdS$_3\times$S$^3\times$T$^4$} has been discussed in detail also in the thesis of A. Sfondrini~\cite{Sfondrini:2014via}, to which we refer for an alternative presentation.} in \textbf{Chapter~\ref{ch:massive-sector-T4}}. First we show that the previous results are closely related to a spin-chain description, following~\cite{Borsato:2013qpa}. 
This spin-chain needs to be dynamical---the interactions change its length---in order to correctly account for the central extension of the algebra. An all-loop S-matrix can be determined, which is related to the worldsheet S-matrix by a similarity transformation. We also present solutions to the crossing equations for the dressing factors of the massive sector, and we provide some checks for their validity. These solutions and the corresponding discussion were first presented in~\cite{Borsato:2013hoa}. By taking a proper thermodynamic limit in the regime of large string tension, we also recover the so-called ``finite-gap equations'' from the Bethe-Yang equations, repeating the calculation in~\cite{Borsato:2013qpa}. We conclude by referring to the independent perturbative calculations that confirm our all-loop results. In \textbf{Chapter~\ref{ch:qAdS5Bos}} we begin the investigation of the $\eta$-deformation of the string on {AdS$_5\times$S$^5$}. Here we restrict to the bosonic model. After a brief introduction to the undeformed model and to the deformation procedure, we derive the results first obtained in~\cite{Arutyunov:2013ega}. We find that the background metric is deformed and a $B$-field is generated. A representation of the squashing effect of the deformation in the case of a two-dimensional sphere may be seen in Figure~\ref{fig:eta-def-sphere-Intro}. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{images/undef-sphere.pdf} \includegraphics[width=0.4\textwidth]{images/def-sphere.pdf} \caption{When we apply the $\eta$-deformation to a two-dimensional sphere---left figure---we find that its effect is a squashing---right figure.} \label{fig:eta-def-sphere-Intro} \end{figure} The bosonic action is studied perturbatively by computing the tree-level S-matrix for the scattering of bosonic worldsheet excitations.
The result allows us to successfully match with the large-tension limit of the all-loop S-matrix found by fixing the $\alg{psu}_q(2|2)_{\text{c.e.}}$ symmetry. In particular, we can relate the two deformation parameters $\eta$ and $q$ on the two sides. We conclude the chapter with some final remarks. In \textbf{Chapter~\ref{ch:qAdS5Fer}} we want to address the question of whether the deformed metric and $B$-field can be completed to a full type IIB supergravity background. With this motivation we compute the action of the deformed coset at quadratic order in fermions, as done in~\cite{Arutyunov:2015qva}. We cure the---only apparent---mismatch with the standard form of the Green-Schwarz action by implementing proper field redefinitions on the bosonic and fermionic coordinates. From the action we extract the couplings to the odd-rank tensors that should correspond to the Ramond-Ramond fields multiplied by the exponential of the dilaton. We also compute the kappa-symmetry variations of the bosonic and fermionic coordinates, and of the worldsheet metric at leading order. From this computation we confirm the results obtained from the Lagrangian, and we show that they are \emph{not} compatible with the equations of motion of type IIB supergravity. We conclude the chapter with a discussion of these findings. \begin{comment} \paragraph{Summary of new results} This thesis is based on research papers of the author. Let us review the new contributions that this work has provided. The spin-chain---believed to give an equivalent description for the massive sector of strings on {AdS$_3\times$S$^3\times$T$^4$}---with $\alg{psu}(1,1|2)_{\mbox{\tiny L}}\oplus \alg{psu}(1,1|2)_{\mbox{\tiny R}}$ symmetry was studied in~\cite{Borsato:2013qpa}. In particular the symmetry algebra that preserves the choice of the vacuum was identified. All-loop representations of this symmetry algebra were constructed.
By imposing compatibility with symmetries, it was possible to fix the S-matrix for scattering of spin-chain excitations up to two scalar factors. These are constrained by the crossing equations, which were derived. The S-matrix satisfies the Yang-Baxter equation, thus confirming compatibility with the underlying assumption of factorised scattering. The S-matrix was used to derive the Bethe-Yang equations, by imposing periodicity of the wave-function. Particular limits---corresponding to weak and to strong coupling---were studied, and agreement with perturbative results was shown. \end{comment} \section{The current}\label{sec:psu224-current} In this section we compute the current that enters the definition of the Lagrangian up to quadratic order in fermions. We start by defining a coset element of $\text{PSU}(2,2|4)/\text{SO}(4,1)\times \text{SO}(5)$, which we choose to write as \begin{equation}\label{eq:choice-full-coset-el} \alg{g}=\alg{g_b} \cdot \alg{g_f}. \end{equation} Here $\alg{g_b}$ is a bosonic group element. We choose the same representative used in Chapter~\ref{ch:qAdS5Bos}, see Equation~\eqref{basiccoset}. The fermionic group element is denoted by $\alg{g_f}$ and we define it simply through the exponential map \begin{equation} \alg{g_f}=\exp \chi\,,\qquad\qquad \chi \equiv \genQind{I}{}{} \ferm{\theta}{I}{}{}\,. \end{equation} One may prefer different choices, \textit{e.g.}\xspace $\alg{g_f}= \chi + \sqrt{1+\chi^2}$ turns out to be more convenient when we want to expand up to fourth order~\cite{Arutyunov:2009ga}, since it generates no cubic term in the expansion. At quadratic order the two parameterisations are equivalent. Note that other choices for $\alg{g}$ are also possible, \textit{e.g.}\xspace we could put the fermions to the left and use an element of the form $\alg{g_f} \cdot \alg{g_b}$.
In~\cite{Frolov:2006cc,Arutyunov:2009ga} yet another choice was made, namely $\Lambda(t,\phi)\cdot \alg{g_f}\cdot{\alg{g}}_{\text{X}}$, where $\Lambda(t,\phi)$ is the group element for shifts of $t$ and $\phi$, while ${\alg{g}}_{\text{X}}$ contains the remaining bosonic isometries. We prefer to use~\eqref{eq:choice-full-coset-el} as in~\cite{Metsaev:1998it} because its expansion in powers of fermions is simpler. This choice corresponds to fermions that are not charged under global bosonic isometries. The current is defined as $A=-\alg{g}^{-1}{\rm d}\alg{g}$, and, since it is an element of the algebra, we decompose it as a linear combination of the generators \begin{equation} A= L^m \gen{P}_m + \frac{1}{2} L^{mn} \gen{J}_{mn} + \genQind{I}{\alpha a}{}\ferm{L}{I}{\alpha a}{}. \end{equation} It is useful to look at the purely bosonic and purely fermionic currents separately; they are found by switching off the fermions and the bosons, respectively. The purely bosonic current is a combination of even generators only \begin{equation} A^\alg{b}= -\alg{g_b}^{-1} {\rm d} \alg{g_b} = e^m \gen{P}_m + \frac{1}{2} \omega^{mn} \gen{J}_{mn} . \end{equation} The coefficients in front of the generators $\gen{P}_m$ are the components of the vielbein, while the ones in front of the generators $\gen{J}_{mn}$ are the components of the spin-connection for the ten-dimensional metric. To write them explicitly, let us choose to enumerate the ten spacetime coordinates as \begin{equation} \begin{aligned} & X^0 = t, & \quad X^1= \psi_2,& \quad X^2= \psi_1, &\quad X^3=\zeta, &\quad X^4= \rho,\\ & X^5 = \phi, & \quad X^6= \phi_2,& \quad X^7= \phi_1, &\quad X^8=\xi, &\quad X^9= r.
\end{aligned} \end{equation} We find that in our parameterisation the vielbein $e^m = e^m_M {\rm d}X^M$ is diagonal and given by\footnote{To avoid confusion with tangent indices, we write curved indices with the explicit names of the spacetime coordinates.} \begin{equation} \begin{aligned} e^0_t = \sqrt{1+\rho^2}, \quad & e^1_{\psi_2} = -\rho \sin \zeta , \quad &e^2_{\psi_1} =-\rho \cos \zeta, \quad &e^3_{\zeta} = -\rho, \quad& e^4_{\rho} = -\frac{1}{\sqrt{1+\rho^2}}, \\ e^5_{\phi} = \sqrt{1-r^2}, \quad & e^6_{\phi_2} = -r \sin \xi , \quad &e^7_{\phi_1} =-r \cos \xi, \quad &e^8_\xi = -r, \quad& e^9_r = -\frac{1}{\sqrt{1-r^2}}. \end{aligned} \end{equation} The non-vanishing components of the spin connection $\omega^{mn} = \omega^{mn}_M {\rm d}X^M$ are \begin{equation} \begin{aligned} & \omega^{04}_t = \rho, \quad & \omega^{34}_\zeta = -\sqrt{1+\rho^2}, \quad & \quad & \\ & \omega^{13}_{\psi_2} = -\cos \zeta, \quad & \omega^{14}_{\psi_2} = -\sqrt{1+\rho^2} \sin \zeta , \quad & \omega^{23}_{\psi_1} = \sin \zeta, \quad & \omega^{24}_{\psi_1} = -\sqrt{1+\rho^2} \cos \zeta, \\ \\ & \omega^{59}_\phi = -r, \quad & \omega^{89}_\xi = -\sqrt{1-r^2}, \quad & \quad & \\ & \omega^{68}_{\phi_2} = -\cos \xi, \quad & \omega^{69}_{\phi_2} = -\sqrt{1-r^2} \sin \xi , \quad & \omega^{78}_{\phi_1} = \sin \xi, \quad & \omega^{79}_{\phi_1} = -\sqrt{1-r^2} \cos \xi, \end{aligned} \end{equation} and it can be checked that $\omega_M^{mn}$ satisfies the correct equation for the spin-connection~\eqref{eq:spi-conn-vielb}. The purely fermionic current is decomposed in terms of even and odd generators \begin{equation} A^\alg{f}= -\alg{g_f}^{-1} {\rm d} \alg{g_f} = \Omega^m \gen{P}_m + \frac{1}{2} \Omega^{mn} \gen{J}_{mn} + \ferm{\Omega}{I}{\alpha a}{} \genQind{I}{\alpha a}{} \end{equation} where we have defined the to-be-determined quantities $\Omega^m, \Omega^{mn}, \ferm{\Omega}{I}{\alpha a}{} $. 
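Before turning to the fermionic current, the bosonic data above admit a quick numerical sanity check: contracting the diagonal vielbein with $\eta_{mn}=\text{diag}(-1,1,\ldots,1)$ must reproduce the standard AdS$_5\times$S$^5$ metric in these global coordinates. A sketch (the sample point values are arbitrary):

```python
import numpy as np

# Sample point (arbitrary values of the non-trivial coordinates)
rho, zeta, r, xi = 0.7, 0.3, 0.5, 1.1

# e[m, M]: tangent index m, curved index M, in the coordinate order
# (t, psi2, psi1, zeta, rho, phi, phi2, phi1, xi, r) used in the text.
e = np.zeros((10, 10))
e[0, 0] = np.sqrt(1 + rho**2)
e[1, 1] = -rho * np.sin(zeta)
e[2, 2] = -rho * np.cos(zeta)
e[3, 3] = -rho
e[4, 4] = -1 / np.sqrt(1 + rho**2)
e[5, 5] = np.sqrt(1 - r**2)
e[6, 6] = -r * np.sin(xi)
e[7, 7] = -r * np.cos(xi)
e[8, 8] = -r
e[9, 9] = -1 / np.sqrt(1 - r**2)

eta = np.diag([-1.0] + [1.0] * 9)
g = e.T @ eta @ e                    # g_MN = e^m_M eta_mn e^n_N

# Expected AdS5 x S5 metric components in global coordinates
g_expected = np.diag([-(1 + rho**2),
                      rho**2 * np.sin(zeta)**2, rho**2 * np.cos(zeta)**2,
                      rho**2, 1 / (1 + rho**2),
                      1 - r**2,
                      r**2 * np.sin(xi)**2, r**2 * np.cos(xi)**2,
                      r**2, 1 / (1 - r**2)])
assert np.allclose(g, g_expected)
```

The overall signs of the vielbein components drop out of $g_{MN}$, which is why several entries can consistently carry a minus sign.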
After expanding $\alg{g_f}$ in powers of $\theta$, at quadratic order in the fermions we find \begin{equation} \begin{aligned} A^\alg{f}=& -\alg{g_f}^{-1} {\rm d} \alg{g_f} \\ =& - \gen{Q}^{I} \, {\rm d} \theta_I + \frac{1}{2}[ \gen{Q}^{I} \theta_I , \gen{Q}^{J} \, {\rm d}\theta_J ]+\mathcal{O}(\theta^3)\\ =& - \gen{Q}^{I} \, {\rm d} \theta_I + \frac{i}{2} \delta^{IJ} \bar{\theta}_I \boldsymbol{\gamma}^m {\rm d} \theta_J \ \gen{P}_m - \frac{1}{4} \epsilon^{IJ} \bar{\theta}_I \boldsymbol{\gamma}^{mn} {\rm d} \theta_J \ \check{\gen{J}}_{mn} + \frac{1}{4} \epsilon^{IJ} \bar{\theta}_I \boldsymbol{\gamma}^{mn} {\rm d} \theta_J \ \hat{\gen{J}}_{mn} +\mathcal{O}(\theta^3), \end{aligned} \end{equation} where we make use of the commutation relations~\eqref{eq:comm-rel-QQ-su224} of $\alg{psu}(2,2|4)$---meaning that we also project out the generator proportional to the identity operator. When we repeat the computation for the full current we see that the computation is similar to the one of the fermionic current, upon replacing ${\rm d} \to ({\rm d}-A^\alg{b})$~\cite{Metsaev:1998it} \begin{equation} \begin{aligned} A=& -\alg{g}^{-1}{\rm d}\alg{g}= -\alg{g_f}^{-1} ({\rm d}-A^\alg{b}) \alg{g_f} \\ =& A^\alg{b}- \gen{Q}^{I} {\rm d} \theta_I - [ \gen{Q}^{I} \theta_I , A^\alg{b}]\\ & +\frac{1}{2} \left[\gen{Q}^{I} \theta_I , \left(\gen{Q}^{J} {\rm d} \theta_J - [ \gen{Q}^{J} \theta_J , A^\alg{b}] \right)\right] +\mathcal{O}(\theta^3) \\ =& \left( e^m +\frac{i}{2} \bar{\theta}_I \boldsymbol{\gamma}^m D^{IJ} \theta_J \right) \gen{P}_m - \gen{Q}^{I} \, D^{IJ} \theta_J \\ & +\frac{1}{2} \omega^{mn} {\gen{J}}_{mn} - \frac{1}{4} \epsilon^{IJ} \bar{\theta}_I \left( \boldsymbol{\gamma}^{mn} \check{\gen{J}}_{mn} - \boldsymbol{\gamma}^{mn} \hat{\gen{J}}_{mn} \right) D^{JK} \theta_K +\mathcal{O}(\theta^3) \end{aligned} \end{equation} where the operator $D^{IJ}$ on fermions $\theta$ is \begin{equation}\label{eq:op-DIJ-psu224-curr} D^{IJ} = \delta^{IJ} \left( {\rm d} - \frac{1}{4} \omega^{mn} 
\boldsymbol{\gamma}_{mn} \right) + \frac{i}{2} \epsilon^{IJ} e^m \boldsymbol{\gamma}_m . \end{equation} Sometimes it is useful to write it as \begin{equation} D^{IJ} = \mathcal{D}^{IJ} + \frac{i}{2} \epsilon^{IJ} e^m \boldsymbol{\gamma}_m , \qquad \mathcal{D}^{IJ} \equiv\delta^{IJ} \left( {\rm d} - \frac{1}{4} \omega^{mn} \boldsymbol{\gamma}_{mn} \right), \end{equation} where $\mathcal{D}^{IJ}$ is the covariant derivative on the fermions. The contribution of the generators $\gen{J}$ to the current will be irrelevant for the computation of the Lagrangian, since they are projected out when defining the coset. Imposing the flatness condition on the current \begin{equation} \epsilon^{\alpha\beta} (\partial_\alpha A_\beta -\frac{1}{2} [A_\alpha,A_\beta])=0 \end{equation} and projecting on the bosonic generators we find the following equations for the vielbein and the spin connection \begin{equation}\label{eq:d-veilbein} \begin{aligned} \epsilon^{\alpha\beta} (\partial_\alpha e^m_\beta - \omega^{mq}_\alpha e_{q\beta}) &= 0, \end{aligned} \end{equation} \begin{equation}\label{eq:d-spin-conn} \begin{aligned} \epsilon^{\alpha\beta} (\partial_\alpha \check{\omega}^{mn}_\beta - \check{\omega}^{m}_{\ p\alpha} \check{\omega}^{pn}_\beta - \check{e}^m_\alpha \check{e}^n_\beta) &= 0, \qquad \epsilon^{\alpha\beta} (\partial_\alpha \hat{\omega}^{mn}_\beta - \hat{\omega}^{m}_{\ p\alpha} \hat{\omega}^{pn}_\beta + \hat{e}^m_\alpha \hat{e}^n_\beta) &= 0. \end{aligned} \end{equation} \section{The $\alg{psu}(2,2|4)$ algebra}\label{sec:algebra-basis} The subject of this section is the $\alg{psu}(2,2|4)$ algebra, which plays a central role in the construction of the action for the superstring on {AdS$_5\times$S$^5$} and its deformation.
We start from the algebra $\alg{sl}(4|4)$, one particular element of which may be written as an $8\times 8$ matrix \begin{equation} M= \left( \begin{array}{c|c} m_{11} & m_{12} \\ \hline m_{21} & m_{22} \end{array} \right)\,, \end{equation} where each $m_{ij}$ above is a $4\times 4$ block. The matrix $M$ is required to have vanishing supertrace, defined as $\Str M=\tr m_{11}-\tr m_{22}$. The $\mathbb{Z}_2$ structure identifies the diagonal blocks $m_{11}, m_{22}$ as even and the off-diagonal blocks $m_{12},m_{21}$ as odd. Later we will multiply the former by Grassmann-even (bosonic) variables and the latter by Grassmann-odd (fermionic) ones. We find the algebra $\alg{su}(2,2|4)$ by imposing a proper reality condition \begin{equation}\label{eq:real-cond-su224} M^\dagger H+HM=0\,, \end{equation} where we have defined the matrix $H$ as \begin{equation} H=\left( \begin{array}{cc} \Sigma & 0 \\ 0 & \mathbf{1}_4 \\ \end{array} \right)\,, \end{equation} and the diagonal matrix $\Sigma=\text{diag}(1,1,-1,-1)$. We will present an explicit realisation of this algebra in terms of $8\times 8$ matrices. Since $\alg{su}(2,2|4)$ is non-compact, the above representation is non-unitary. The algebra $\alg{psu}(2,2|4)$ is then found by projecting away the generator proportional to the identity operator. To construct our $8\times 8$ matrices we will use $4\times 4$ gamma-matrices~\cite{Arutyunov:2009ga}. In Appendix~\ref{app:su224-algebra} we write our preferred choice for the $4\times 4$ gamma-matrices. They are all Hermitian and satisfy the $\text{SO}(5)$ Clifford algebra \begin{equation} \{ \gamma_m, \gamma_n\} = 2 \delta_{mn}\,, \qquad m=0,\ldots,4\,. \end{equation} We need two copies of these matrices, one for Anti-de Sitter and one for the sphere.
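Such a set of five Hermitian $4\times4$ matrices is easy to realise explicitly. The following sketch verifies the Clifford relations and Hermiticity for one generic choice built from Pauli matrices (an illustrative representative of our own; the basis actually used in the thesis is the one fixed in Appendix~\ref{app:su224-algebra}):

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

# Five mutually anticommuting Hermitian 4x4 matrices (one possible choice)
gamma = [np.kron(s1, s3), np.kron(s2, s3),
         np.kron(id2, s1), np.kron(id2, s2), np.kron(s3, s3)]

for m in range(5):
    # Hermiticity
    assert np.allclose(gamma[m], gamma[m].conj().T)
    for n in range(5):
        # {gamma_m, gamma_n} = 2 delta_mn
        anti = gamma[m] @ gamma[n] + gamma[n] @ gamma[m]
        assert np.allclose(anti, 2 * (m == n) * np.eye(4))
```

Any two such representatives are related by a unitary change of basis, so nothing in what follows depends on this particular choice.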
In the first case we will denote them with a check $\check{\gamma}_m$, in the second with a hat $\hat{\gamma}_m$ \begin{equation}\label{eq:gamma-AdS5-S5} \begin{aligned} \text{AdS}_5:\quad&\check{\gamma}_0 = i \gamma_0, \qquad \check{\gamma}_m = \gamma_m, \quad m=1,\cdots,4, \\ \text{S}^5:\quad&\hat{\gamma}_{m+5} = -\gamma_m, \quad m=0,\cdots,4. \end{aligned} \end{equation} We have chosen to enumerate the gamma-matrices for AdS$_5$ from $0$ to $4$ and the ones for S$^5$ from $5$ to $9$ to have a better notation when we want to write ten-dimensional expressions. The $i$ in the definition of $\check{\gamma}_0$ is needed to reproduce the signature of the metric. We will not use the notation of~\cite{Arutyunov:2009ga} for the generators. In the following we provide explicitly our preferred basis. \paragraph{Even generators} We denote 10 (5 for AdS$_5$ + 5 for S$^5$) of the even generators by $\gen{P}$ and the remaining 20 (10 for AdS$_5$ + 10 for S$^5$) by $\gen{J}$. The generators $\check{\gen{P}}_m$ for AdS$_5$ and the generators $\hat{\gen{P}}_{m}$ for S$^5$ are defined as \begin{equation} \check{\gen{P}}_m = \left( \begin{array}{cc} -\frac{1}{2} \check{\gamma}_m & \mathbf{0}_4 \\ \mathbf{0}_4 & \mathbf{0}_4 \\ \end{array} \right), \quad m=0,\ldots,4, \qquad \hat{\gen{P}}_{m} = \left( \begin{array}{cc} \mathbf{0}_4 & \mathbf{0}_4 \\ \mathbf{0}_4 & \frac{i}{2} \hat{\gamma}_m \\ \end{array} \right), \quad m=5,\ldots,9.
\end{equation} After defining $\check{\gamma}_{mn} \equiv \frac{1}{2} [\check{\gamma}_m,\check{\gamma}_n]$ and $\hat{\gamma}_{mn} \equiv \frac{1}{2} [\hat{\gamma}_m,\hat{\gamma}_n]$ we also write the generators $\check{\gen{J}}_{mn}$ and $\hat{\gen{J}}_{mn}$ for AdS$_5$ and for S$^5$ \begin{equation} \check{\gen{J}}_{mn} = \left( \begin{array}{cc} \frac{1}{2} \check{\gamma}_{mn} & \mathbf{0}_4 \\ \mathbf{0}_4 & \mathbf{0}_4 \\ \end{array} \right), \quad m,n=0,\ldots,4, \qquad \hat{\gen{J}}_{mn} = \left( \begin{array}{cc} \mathbf{0}_4 & \mathbf{0}_4 \\ \mathbf{0}_4 & \frac{1}{2} \hat{\gamma}_{mn} \\ \end{array} \right), \quad m,n=5,\ldots,9. \end{equation} All the generators satisfy Equation~\eqref{eq:real-cond-su224} and hence belong to $\alg{su}(2,2|4)$. \paragraph{Odd generators} To span all the 32 odd generators of $\alg{su}(2,2|4)$ we use a label $I=1,2$ and two spinor indices $\ul{\alpha},\ul{a}=1,2,3,4$. Greek spinor indices are used for AdS$_5$, Latin ones for S$^5$. Our preferred basis for the odd generators is \begin{equation}\label{eq:def-odd-el-psu224} \begin{aligned} \genQind{I}{\alpha}{a} &=e^{+i\pi/4} \left( \begin{array}{cc} \mathbf{0}_4 & m^{\ \, \ul{\alpha}}_{I \, \ul{a}} \\ K \left(m^{\ \, \ul{\alpha}}_{I \, \ul{a}}\right)^\dagger K & \mathbf{0}_4 \\ \end{array} \right), \\ {\left(m^{\ \, \ul{\alpha}}_{1 \, \ul{a}}\right)_j}^k &= e^{+i\pi/4+i\phi_{\gen{Q}}} \, \delta^{\ul{\alpha}}_j \delta_{\ul{a}}^k, \qquad\qquad {\left(m^{\ \, \ul{\alpha}}_{2 \, \ul{a}}\right)_j}^k = -e^{-i\pi/4+i\phi_{\gen{Q}}} \, \delta^{\ul{\alpha}}_j \delta_{\ul{a}}^k. \end{aligned} \end{equation} Here $m^{\ \, \ul{\alpha}}_{I \, \ul{a}}$ are $4\times 4$ matrices, and $K$ is defined in~\eqref{eq:SKC-gm}. The phase $\phi_{\gen{Q}}$ corresponds to the $U(1)$ automorphism of $\alg{su}(2,2|4)$, and we set $\phi_{\gen{Q}}=0$.
These supermatrices are constructed in such a way that they do \emph{not} satisfy Eq.~\eqref{eq:real-cond-su224} but rather $\gen{Q}^\dagger i\, \widetilde{H}+\widetilde{H}\gen{Q}=0$ where we have defined \begin{equation} \widetilde{H}\equiv \left( \begin{array}{cc} K & 0 \\ 0 & K \\ \end{array} \right). \end{equation} The supermatrices $\gen{Q}$ can be seen as complex combinations of supermatrices $\mathcal{Q}$ satisfying~\eqref{eq:real-cond-su224} \begin{equation} \mathcal{Q}= e^{+i\pi/4} \, \left( \begin{array}{cc} C & 0 \\ 0 & K \\ \end{array} \right) \gen{Q}, \qquad \gen{Q}= -e^{-i\pi/4} \, \left( \begin{array}{cc} C & 0 \\ 0 & K \\ \end{array} \right) \mathcal{Q}. \end{equation} The matrix $C$ is defined in Eq.~\eqref{eq:SKC-gm}. On the one hand, taking linear combinations of $\mathcal{Q}$'s with Grassmann variables and imposing that $\ferm{\vartheta}{I}{\alpha}{a} \mathcal{Q}^{I \, \ul{\alpha}}_{\ \, \ul{a}}$ belongs to $\alg{su}(2,2|4)$ would translate into the fact that the fermions $\ferm{\vartheta}{I}{\alpha}{a}$ are real. On the other hand, imposing that $\ferm{\theta}{I}{\alpha}{a} \genQind{I}{\alpha}{a}$ belongs to $\alg{su}(2,2|4)$, \textit{i.e.}\xspace $(\ferm{\theta}{I}{\alpha}{a} \genQind{I}{\alpha}{a})^\dagger= -H(\ferm{\theta}{I}{\alpha}{a} \genQind{I}{\alpha}{a})H^{-1}$, gives \begin{equation} \ferm{{\theta^\dagger}}{I}{a}{\alpha} =-i\, \ferm{\theta}{I}{\nu}{b} \ C^{\ul{\nu}\ul{\alpha}} K_{\ul{b}\ul{a}} . \end{equation} Defining the barred version of a fermion we find the Majorana condition in the form \begin{equation} \ferm{\bar{\theta}}{I}{a}{\alpha} \equiv \ferm{{\theta^\dagger}}{I}{a}{\nu} {(\check{\gamma}^0)_{\ul{\nu}}}^{\ul{\alpha}} = - \ferm{\theta}{I}{\nu}{b} \ K^{\ul{\nu}\ul{\alpha}} K_{\ul{b}\ul{a}} .
\end{equation} Later on we will write the fermions $\ferm{\theta}{I}{\alpha a}{}$ with both spinor indices lowered and $\ferm{\bar{\theta}}{I}{}{\alpha a}$ with both spinor indices raised, so that the above equation reads \begin{equation}\label{eq:Majorana-cond-ferm-sp-ind} \ferm{\bar{\theta}}{I}{}{\alpha a} = + \ferm{\theta}{I}{\nu b}{} \ K^{\ul{\nu}\ul{\alpha}} K^{\ul{b}\ul{a}} , \end{equation} in agreement with~\cite{Metsaev:1998it}. Let us point out that the matrix $K$ is the charge conjugation matrix for the $\gamma$-matrices; we call it $K$ to keep the same notation as~\cite{Arutyunov:2009ga}. We refer to Appendix~\ref{app:su224-algebra} for our conventions with spinors. A more compact notation is achieved by omitting the spinor indices. The above equation then reads \begin{equation}\label{eqMajorana-cond-compact-not} \bar{\theta}_I = \theta_I^\dagger \boldsymbol{\gamma}^0=+ \theta_I^t \, (K\otimes K)\,, \end{equation} where $\boldsymbol{\gamma}^0\equiv \check{\gamma}^0\otimes \gen{1}_4$, and Hermitian conjugation and transposition are implemented only in the space spanned by the spinor indices, on which the matrices $\boldsymbol{\gamma}^0$ and $K\otimes K$ act. \paragraph{Commutation relations} It is convenient to rewrite the commutation relations when considering the Grassmann enveloping algebra. In this way we may suppress the spinor indices to obtain more compact expressions.
We define $\gen{Q}^{I} \theta_I\equiv \genQind{I}{\alpha a}{} \ferm{\theta}{I}{\alpha a}{}$ and we introduce the $16\times 16$ matrices \begin{equation}\label{eq:def16x16-gamma} \begin{aligned} & \boldsymbol{\gamma}_m \equiv \check{\gamma}_m \otimes \mathbf{1}_4, \quad m=0, \cdots, 4, \qquad & \boldsymbol{\gamma}_m \equiv \mathbf{1}_4 \otimes i\hat{\gamma}_m, \quad m=5, \cdots, 9, \\ & \boldsymbol{\gamma}_{mn} \equiv \check{\gamma}_{mn} \otimes \mathbf{1}_4, \quad m,n=0, \cdots, 4, \qquad & \boldsymbol{\gamma}_{mn} \equiv \mathbf{1}_4 \otimes \hat{\gamma}_{mn}, \quad m,n=5, \cdots, 9. \\ \end{aligned} \end{equation} The first space in the tensor product is spanned by the AdS spinor indices, the second by the sphere spinor indices. To understand the 10-dimensional origin of these objects see appendix~\ref{sec:10-dim-gamma}. In the context of type IIB, one usually continues to refer to them as gamma-matrices even though they do not satisfy Clifford algebra relations. In our basis the commutation relations involving only bosonic elements read as \begin{equation} \begin{aligned} \text{AdS}_5 : \quad &[ \check{\gen{P}}_m, \check{\gen{P}}_n ] = \check{\gen{J}}_{mn}, \ &\ \ \text{S}^5 : \quad &[ \hat{\gen{P}}_m, \hat{\gen{P}}_n ] = -\hat{\gen{J}}_{mn}, \\ &[ \check{\gen{P}}_{m}, \check{\gen{J}}_{np} ] = \eta_{mn} \check{\gen{P}}_p - \ _{n \leftrightarrow p}, \ & &[ \hat{\gen{P}}_{m}, \hat{\gen{J}}_{np} ] = \eta_{mn} \hat{\gen{P}}_p - \ _{n \leftrightarrow p},\\ &[ \check{\gen{J}}_{mn}, \check{\gen{J}}_{pq} ] = (\eta_{np} \check{\gen{J}}_{mq} - _{m \leftrightarrow n} ) - _{p \leftrightarrow q} \ & &[ \hat{\gen{J}}_{mn}, \hat{\gen{J}}_{pq} ] = (\eta_{np} \hat{\gen{J}}_{mq} - _{m \leftrightarrow n} ) - _{p \leftrightarrow q}, \end{aligned} \end{equation} where $\eta_{mn}= \text{diag}(-1,1,1,1,1,1,1,1,1,1)$. Generators of the two different spaces commute with each other. The generators $\gen{J}$ identify the bosonic subalgebra $\alg{so}(4,1)\oplus\alg{so}(5)$. 
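Since the bosonic brackets above follow from the Clifford algebra alone, they can be verified numerically for any explicit choice of $\gamma$-matrices. The sketch below is a minimal check in Python: the $4\times4$ Clifford basis for $\alg{so}(4,1)$ is an assumed choice (our actual $\check{\gamma}$ conventions are fixed in Appendix~\ref{app:su224-algebra}), and we use the identification $\check{\gen{P}}_m=\tfrac{1}{2}\check{\gamma}_m$, $\check{\gen{J}}_{mn}=\tfrac{1}{2}\check{\gamma}_{mn}$, consistent with the supermatrix realisation above. Only the AdS$_5$ column is checked; the S$^5$ column works in the same way, with the opposite sign in $[\hat{\gen{P}}_m,\hat{\gen{P}}_n]$ traced to the factor of $i$ in $\boldsymbol{\gamma}_m$ for $m=5,\ldots,9$.

```python
import itertools
import numpy as np

# Pauli matrices and an assumed Clifford basis for so(4,1):
# {gamma_m, gamma_n} = 2 eta_mn 1_4 with eta = diag(-1,+1,+1,+1,+1).
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

gamma = [1j * np.kron(s3, s0)] + [np.kron(s1, si) for si in (s1, s2, s3)] + [np.kron(s2, s0)]
eta = np.diag([-1.0, 1.0, 1.0, 1.0, 1.0])

# Clifford algebra relations
for m, n in itertools.product(range(5), repeat=2):
    assert np.allclose(gamma[m] @ gamma[n] + gamma[n] @ gamma[m], 2 * eta[m, n] * np.eye(4))

# P_m = gamma_m / 2 and J_mn = gamma_mn / 2 with gamma_mn = [gamma_m, gamma_n] / 2
P = [g / 2 for g in gamma]
J = [[(gamma[m] @ gamma[n] - gamma[n] @ gamma[m]) / 4 for n in range(5)] for m in range(5)]
com = lambda a, b: a @ b - b @ a

for m, n in itertools.product(range(5), repeat=2):
    assert np.allclose(com(P[m], P[n]), J[m][n])            # [P_m, P_n] = J_mn
for m, n, p in itertools.product(range(5), repeat=3):
    # [P_m, J_np] = eta_mn P_p - eta_mp P_n
    assert np.allclose(com(P[m], J[n][p]), eta[m, n] * P[p] - eta[m, p] * P[n])
for m, n, p, q in itertools.product(range(5), repeat=4):
    # [J_mn, J_pq] = eta_np J_mq - eta_mp J_nq - eta_nq J_mp + eta_mq J_np
    rhs = (eta[n, p] * J[m][q] - eta[m, p] * J[n][q]
           - eta[n, q] * J[m][p] + eta[m, q] * J[n][p])
    assert np.allclose(com(J[m][n], J[p][q]), rhs)
```

All assertions pass for any representation of the Clifford algebra; the specific matrices above are only one convenient choice.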
With the above definitions, the commutation relations of $\alg{su}(2,2|4)$ involving odd generators are\footnote{For commutators of two odd elements we need to multiply by two \emph{different} fermions $\lambda_I,\psi_I$, otherwise the right hand side vanishes.} \begin{equation} \begin{aligned} & [\gen{Q}^{I} \theta_I, \gen{P}_m] = - \frac{i}{2} \epsilon^{IJ} \gen{Q}^{J} \boldsymbol{\gamma}_m \theta_I, & \qquad & [\gen{Q}^{I} \theta_I, \gen{J}_{mn}] = -\frac{1}{2} \delta^{IJ} \gen{Q}^{J} \boldsymbol{\gamma}_{mn} \theta_I, & \end{aligned} \end{equation} \begin{equation}\label{eq:comm-rel-QQ-su224} \begin{aligned} [ \gen{Q}^{I} \lambda_I, \gen{Q}^{J} \psi_J ] =& \, i \, \delta^{IJ} \bar{\lambda}_I \boldsymbol{\gamma}^m \psi_J \ \gen{P}_m - \frac{1}{2} \epsilon^{IJ} \bar{\lambda}_I (\boldsymbol{\gamma}^{mn} \check{\gen{J}}_{mn} -\boldsymbol{\gamma}^{mn} \hat{\gen{J}}_{mn}) \psi_J \ - \frac{i}{2} \delta^{IJ} \bar{\lambda}_I \psi_J \mathbf{1}_8. \end{aligned} \end{equation} Here we have also used the Majorana condition to rewrite the result in terms of the fermions $\bar{\lambda}_I$, and for completeness we also indicate the generator proportional to the identity operator. We refer to Appendix~\ref{app:su224-algebra} for the commutation relations with explicit spinor indices. \paragraph{Supertraces} In the computation of the Lagrangian we will need to take the supertrace of products of two generators of the algebra. For the non-vanishing ones we find \begin{equation} \begin{aligned} \Str[\gen{P}_m\gen{P}_n]&=\eta_{mn}, \\ \text{AdS}_5 : \quad \Str[\check{\gen{J}}_{mn}\check{\gen{J}}_{pq}]&= - (\eta_{mp}\eta_{nq}-\eta_{mq}\eta_{np}), \\ \text{S}^5 : \quad \Str[\hat{\gen{J}}_{mn}\hat{\gen{J}}_{pq}]&= + (\eta_{mp}\eta_{nq}-\eta_{mq}\eta_{np}), \\ \Str[\gen{Q}^I \lambda_I \, \gen{Q}^J \psi_J ]&= -2 \epsilon^{IJ} \bar{\lambda}_I \psi_J = -2 \epsilon^{JI} \bar{\psi}_J \lambda_I \,.
\end{aligned} \end{equation} \paragraph{$\mathbb{Z}_4$ decomposition} The $\alg{su}(2,2|4)$ algebra admits a $\mathbb{Z}_4$ decomposition, compatible with the commutation relations. We call $\Omega$ the outer automorphism, which acts on elements of the algebra as \begin{equation} \Omega(M)=i^k\, M\,, \qquad k=0,\ldots,3\,, \end{equation} identifying four different subspaces of $\alg{su}(2,2|4)$ labelled by $k$. We define it as~\cite{Arutyunov:2009ga} \begin{equation} \Omega(M) = - \mathcal{K} M^{st} \mathcal{K}^{-1}, \end{equation} with $\mathcal{K}=\text{diag}(K,K)$ and $^{st}$ denoting the supertranspose \begin{equation} M^{st}\equiv \left( \begin{array}{c|c} m_{11}^t & -m_{21}^t \\ \hline m_{12}^t & m_{22}^t \end{array} \right)\,. \end{equation} If we consider bosonic generators, it is easy to see that $\gen{J}$ and $\gen{P}$ belong to the subspaces of grading 0 and 2 respectively\footnote{The subspaces of grading $0$ and $2$ of this chapter correspond to the subspaces with label $+$ and $-$ respectively of Chapter~\ref{ch:qAdS5Bos}.} \begin{equation} \Omega(\gen{J})=+\gen{J}\,, \qquad \Omega(\gen{P})=-\gen{P}\,. \end{equation} In our basis, the action on odd generators is also very simple \begin{equation} \Omega(\genQind{I}{\alpha a}{})=\sigma_3^{II}\, i\, \genQind{I}{\alpha a}{}\,, \end{equation} meaning that odd elements with $I=1$ have grading 1, and those with $I=2$ have grading 3. It is natural to introduce projectors $P^{(k)}$ on each subspace, whose action is given by \begin{equation}\label{eq:def-proj-Z4-grad} P^{(k)}(M)=\frac{1}{4}\left( M+i^{3k}\Omega(M)+i^{2k}\Omega^2(M) +i^{k}\Omega^{3}(M)\right)\,.
\end{equation} Then $P^{(0)}$ will project on generators $\gen{J}$, $P^{(2)}$ on generators $\gen{P}$, and $P^{(1)},P^{(3)}$ on odd elements with labels $I=1,2$ \begin{equation} P^{(1)}(\genQind{I}{\alpha a}{}) = \frac{1}{2} (\delta^{IJ}+\sigma_3^{IJ}) \genQind{J}{\alpha a}{}, \qquad P^{(3)}(\genQind{I}{\alpha a}{}) = \frac{1}{2} (\delta^{IJ}-\sigma_3^{IJ}) \genQind{J}{\alpha a}{}. \end{equation} The definition of the coset uses the $\mathbb{Z}_4$ grading, as the generators that are removed are the $\gen{J}$'s spanning the $\alg{so}(4,1) \oplus \alg{so}(5)$ subalgebra, which coincides with the subspace of grading $0$. \chapter{(AdS$_5\times$S$^5)_\eta$}\label{app:etaAdS5} \input{Chapters/appBosetaAdS5.tex} \input{Chapters/appFermetaAdS5.tex} \section{Bethe-Yang equations}\label{sec:Bethe-Yang} The Bethe-Yang equations are quantisation conditions that allow one to solve for the momenta of the excitations of a multi-particle state. With this data one can find the spectrum of the theory in the decompactification limit. In integrable models the Bethe-Yang equations arise when imposing periodicity of the wave-function of an eigenstate of the Hamiltonian. In our case periodicity is motivated by the fact that we are studying closed strings, and we depict this in Figure~\ref{fig:period-cond}. Instead of looking for eigenstates of the exact quantum Hamiltonian---which is not known---we will construct eigenstates of the exact S-matrix derived in Section~\ref{sec:S-mat-T4}. \subsection{Deriving the Bethe-Yang equations} Rather than introducing a toy model to explain how to derive the Bethe-Yang equations, we prefer to guide the reader through the main steps in the case of AdS$_3\times$S$^3\times$T$^4$.
Indeed, deciding to turn on only certain flavors of excitations simplifies the problem remarkably, and from the operational point of view it makes the derivation conceptually equivalent to that in simpler integrable models, such as the Heisenberg spin-chain---see~\cite{1998cond.mat..9162K,Faddeev:1994nk,Faddeev:1996iy} for nice reviews. The various complications are added gradually, until all the material to construct the full set of Bethe-Yang equations is presented. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{images/CBA1.pdf} \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] at ( 0 , -3cm) {\raisebox{2.5cm}{$=$}}; \end{tikzpicture} \includegraphics[width=0.4\textwidth]{images/CBA2.pdf} \caption{The periodicity condition of the wave-function yields the Bethe-Yang equations. A configuration with excitations localised at given points of the string is equivalent to another one where the excitations are cyclically permuted.} \label{fig:period-cond} \end{figure} \paragraph{Bethe Ansatz with one flavor.} Let us start for simplicity with the case in which only excitations of the type $Y^{\mbox{\tiny L}}$ are present. We make this choice because the scattering of two $Y^{\mbox{\tiny L}}$ excitations is very simple \begin{equation} \mathcal{S}\ket{Y^{\mbox{\tiny L}}_pY^{\mbox{\tiny L}}_q}= \mathcal{A}_{pq}\ket{Y^{\mbox{\tiny L}}_qY^{\mbox{\tiny L}}_p}\,. \end{equation} Here $\mathcal{A}_{pq}$ denotes the corresponding scattering element. What is important is that this two-particle state does not mix with others under scattering.
The simplest multiparticle state that we might want to consider is a collection of plane-waves \begin{equation} \ket{Y^{\mbox{\tiny L}}_{p_1}Y^{\mbox{\tiny L}}_{p_2}\ldots Y^{\mbox{\tiny L}}_{p_n}}=\sum_{\sigma_1\ll \sigma_2 \ll \ldots \ll \sigma_{n}} \ e^{i \sum_{j=1}^n p_j\sigma_j} \ \ket{Y^{\mbox{\tiny L}}_{\sigma_1}Y^{\mbox{\tiny L}}_{\sigma_2}\ldots Y^{\mbox{\tiny L}}_{\sigma_n}}. \end{equation} Here and in the following we always assume that we deal with asymptotic states, meaning that excitations with different momenta are ordered $p_1>\ldots>p_n$ and well separated. As is known from the simplest integrable models (\emph{e.g.} the Heisenberg spin-chain), the eigenstates of the Hamiltonian are specific superpositions of plane waves. Let us focus on the case of just two excitations. We consider a generic state \begin{equation} \begin{aligned} \ket{\Psi}&=\sum_{\sigma_1\ll \sigma_2} \psi(\sigma_1,\sigma_2) \ket{Y^{\mbox{\tiny L}}_{\sigma_1}Y^{\mbox{\tiny L}}_{\sigma_2}} \\ &=\ket{Y^{\mbox{\tiny L}}_{p_1}Y^{\mbox{\tiny L}}_{p_2}}+S(p_1,p_2)\ket{Y^{\mbox{\tiny L}}_{p_2}Y^{\mbox{\tiny L}}_{p_1}}, \end{aligned} \end{equation} where by definition we restrict ourselves to the region $\sigma_1\ll \sigma_2$ and we have defined the generic wave-function \begin{equation} \psi(\sigma_1,\sigma_2) =e^{i(p_1\sigma_1+p_2\sigma_2)}+S(p_1,p_2)e^{i(p_2\sigma_1+p_1\sigma_2)}. \end{equation} The choice $S(p_1,p_2)=1$ would correspond to just the sum of the original plane-waves with the reflected ones. We choose instead to identify $S(p_1,p_2)=\mathcal{A}_{p_1p_2}$, namely the scattering element of the two excitations.
Thanks to this choice $\ket{\Psi}$ becomes an eigenstate of the S-matrix \begin{equation} \begin{aligned} \mathcal{S}\ket{\Psi} &=\mathcal{A}_{p_1p_2}\ket{Y^{\mbox{\tiny L}}_{p_2}Y^{\mbox{\tiny L}}_{p_1}}+\mathcal{A}_{p_1p_2}\mathcal{A}_{p_2p_1}\ket{Y^{\mbox{\tiny L}}_{p_1}Y^{\mbox{\tiny L}}_{p_2}} =\ket{\Psi}\,, \end{aligned} \end{equation} which is proved using braiding unitarity, \textit{i.e.}\xspace $\mathcal{A}_{p_1p_2}\mathcal{A}_{p_2p_1}=1$. This justifies the choice for the function $S(p_1,p_2)$. \medskip The important requirement that we want to impose now is the periodicity of the wave-function, as depicted in Figure~\ref{fig:period-cond}. An explicit computation gives \begin{equation} \begin{aligned} \psi(\sigma_2,\sigma_1+L)&=e^{i(p_1\sigma_2+p_2\sigma_1+p_2L)}+\mathcal{A}_{p_1p_2}e^{i(p_2\sigma_2+p_1\sigma_1+p_1L)} \\ &= e^{ip_1L}\mathcal{A}_{p_1p_2} \left(\left(\mathcal{A}_{p_1p_2}\right)^{-1}e^{i(p_1\sigma_2+p_2\sigma_1+(p_2-p_1)L)}+e^{i(p_2\sigma_2+p_1\sigma_1)}\right)\,. \end{aligned} \end{equation} If we require $\psi(\sigma_2,\sigma_1+L)=\psi(\sigma_1,\sigma_2)$ we find the two equations \begin{equation} e^{i p_1L}= \left(\mathcal{A}_{p_1p_2}\right)^{-1}, \qquad e^{i p_2L}= \left(\mathcal{A}_{p_2p_1}\right)^{-1}. \end{equation} These are the Bethe-Yang equations for the particular case at hand. The generalisation to $N$-particle states is straightforward. We define the wave-function \begin{equation} \psi(\sigma_1,\sigma_2,\ldots,\sigma_N) =e^{i\sum_{j=1}^N p_j\sigma_j}+\sum_{\mathbf{\pi}} S_{\mathbf{\pi}}(p_1,\ldots, p_N)e^{i\sum_{j=1}^N p_{\mathbf{\pi}(j)}\sigma_j}, \end{equation} where we sum over all non-trivial permutations $\mathbf{\pi}$.
Once we rewrite a given permutation $\mathbf{\pi}$ as a sequence of two-body permutations, we define the function $S_{\mathbf{\pi}}(p_1,\ldots, p_N)$ as the product of the two-body scattering elements produced by the chosen factorisation.\footnote{For example, given the permutation $1234|3214$ we define \begin{equation} S_{1234|3214} =\mathcal{A}_{p_1p_2} \mathcal{A}_{p_1p_3} \mathcal{A}_{p_2p_3}. \end{equation} } Integrability ensures that different factorisations are equivalent. As before, it is possible to check that the state \begin{equation} \ket{\Psi}=\sum_{\sigma_1\ll \sigma_2\ll \ldots \ll \sigma_N} \psi(\sigma_1,\sigma_2,\ldots ,\sigma_N) \ket{Y^{\mbox{\tiny L}}_{\sigma_1}Y^{\mbox{\tiny L}}_{\sigma_2}\ldots Y^{\mbox{\tiny L}}_{\sigma_N}} \end{equation} is an eigenstate of the S-matrix $\mathcal{S}\ket{\Psi} =\ket{\Psi} $. Periodicity of the wave-function written as $\psi(\sigma_2,\ldots,\sigma_N,\sigma_1+L)=\psi(\sigma_1,\sigma_2,\ldots,\sigma_N)$ now imposes \begin{equation} e^{i p_kL}= \prod_{\substack{j = 1\\j \neq k}}^{N} \left(\mathcal{A}_{p_kp_j}\right)^{-1}\,, \qquad k=1,\ldots,N\,, \end{equation} for each of the momenta $p_k$ of the excitations on the worldsheet. The above result is compatible with the level matching condition. Indeed, multiplying all the Bethe equations together we get \begin{equation} e^{i \sum_{k=1}^N p_kL}= \prod_{k=1}^N\prod_{\substack{j = 1\\j \neq k}}^{N} \left(\mathcal{A}_{p_kp_j}\right)^{-1}=1\,, \end{equation} where we have used that $\mathcal{A}_{p_kp_j}\mathcal{A}_{p_jp_k}=1$, as a consequence of braiding unitarity. We then recover the quantisation condition on the sum of momenta \begin{equation} \sum_{k=1}^N p_k = 2 \pi n\,, \qquad n \in \mathbb{Z}, \end{equation} which characterises on-shell multi-particle states.
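The mechanics above can be illustrated with a small numerical experiment. The sketch below uses a \emph{toy} unimodular scattering element satisfying braiding unitarity (an XXX-spin-chain-like phase, not the actual $\mathcal{A}_{pq}$ of {AdS$_3\times$S$^3\times$T$^4$}), together with an assumed length $L=16$ and mode numbers $n_1=1$, $n_2=-1$: it solves the two-particle Bethe-Yang equations by fixed-point iteration and then verifies level matching and the periodicity of the wave-function.

```python
import cmath
from math import pi, tan

# Toy unimodular scattering element with braiding unitarity A(p,q) A(q,p) = 1
# (an XXX-like phase, used only to illustrate the mechanics).
def A(p, q):
    d = 0.5 / tan(p / 2) - 0.5 / tan(q / 2)
    return (d + 1j) / (d - 1j)

L, n1, n2 = 16, 1, -1

# Solve e^{i p1 L} = A(p1,p2)^(-1), e^{i p2 L} = A(p2,p1)^(-1)
# by fixed-point iteration: p_k L + theta_k = 2 pi n_k.
p1, p2 = 0.5, -0.5
for _ in range(200):
    t1, t2 = cmath.phase(A(p1, p2)), cmath.phase(A(p2, p1))
    p1, p2 = (2 * pi * n1 - t1) / L, (2 * pi * n2 - t2) / L

assert abs(A(p1, p2) * A(p2, p1) - 1) < 1e-12               # braiding unitarity
assert abs(cmath.exp(1j * p1 * L) * A(p1, p2) - 1) < 1e-10  # Bethe-Yang eq. for p1
assert abs(cmath.exp(1j * p2 * L) * A(p2, p1) - 1) < 1e-10  # Bethe-Yang eq. for p2
assert abs(p1 + p2 - 2 * pi * (n1 + n2)) < 1e-10            # level matching

def psi(s1, s2):  # two-particle Bethe wave-function
    return (cmath.exp(1j * (p1 * s1 + p2 * s2))
            + A(p1, p2) * cmath.exp(1j * (p2 * s1 + p1 * s2)))

# Periodicity of the wave-function, psi(s2, s1 + L) = psi(s1, s2)
assert abs(psi(7.3, 2.1 + L) - psi(2.1, 7.3)) < 1e-9
```

The same iteration generalises to $N$ particles; only the toy element $A(p,q)$ would need to be replaced by the actual scattering elements.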
\paragraph{Bethe Ansatz with more flavors.} It is clear that for the previous construction it was not essential to have only excitations of the same flavor, and we can extend it also to the case in which other types of excitations are present. The only characterising requirement is that the scattering of any of the excitations involved is \emph{diagonal}\footnote{When we write diagonal scattering we mean that other different flavors are not created after the scattering, and that the flavors of the two in-states are \emph{transmitted} to the out-states.}. In {AdS$_3\times$S$^3\times$T$^4$} this situation is realised if for example we allow also for the presence of excitations $Z^{\mbox{\tiny R}}$. We denote the relevant scattering elements by \begin{equation} \mathcal{S} \ket{Z^{\mbox{\tiny R}}_p Z^{\mbox{\tiny R}}_q} =\mathcal{C}_{pq} \ket{Z^{\mbox{\tiny R}}_q Z^{\mbox{\tiny R}}_p }\,, \qquad \mathcal{S} \ket{Z^{\mbox{\tiny R}}_p Y^{\mbox{\tiny L}}_q}=\widetilde{\mathcal{B}}_{pq}\ket{Y^{\mbox{\tiny L}}_q Z^{\mbox{\tiny R}}_p } \,. \end{equation} It is clear that unitarity implies that scattering $Y^{\mbox{\tiny L}}$ and $Z^{\mbox{\tiny R}}$ in the opposite order yields $\mathcal{S} \ket{Y^{\mbox{\tiny L}}_p Z^{\mbox{\tiny R}}_q}=(\widetilde{\mathcal{B}}_{qp})^{-1}\ket{Z^{\mbox{\tiny R}}_q Y^{\mbox{\tiny L}}_p}$. The situation to consider---that is new with respect to the case of just one flavor---is when both $\ket{Z^{\mbox{\tiny R}}}$ and $\ket{Y^{\mbox{\tiny L}}}$ are present. In the example of a two-particle state we would define \begin{equation} \ket{\Psi}=\ket{Y^{\mbox{\tiny L}}_{p_1}Z^{\mbox{\tiny R}}_{p_2}}+(\widetilde{\mathcal{B}}_{p_2p_1})^{-1}\ket{Z^{\mbox{\tiny R}}_{p_2}Y^{\mbox{\tiny L}}_{p_1}}, \end{equation} to get an eigenstate of the S-matrix. 
The Bethe-Yang equations that we get now after imposing periodicity of the wave-function are \begin{equation} e^{i p_1L}= \widetilde{\mathcal{B}}_{p_2p_1}, \qquad e^{i p_2L}= \left(\widetilde{\mathcal{B}}_{p_2p_1}\right)^{-1}. \end{equation} Multiplying these equations we recover again the level-matching condition. If we had a total number of $N_{\mbox{\tiny L}}$ excitations of type $Y^{\mbox{\tiny L}}$ and $N_{\mbox{\tiny R}}$ excitations of type $Z^{\mbox{\tiny R}}$ we would generalise the previous construction and find the Bethe-Yang equations \begin{equation} \begin{aligned} e^{i p_kL}&= \prod_{\substack{j = 1\\j \neq k}}^{N_{\mbox{\tiny L}}} \left(\mathcal{A}_{p_kp_j}\right)^{-1} \ \prod_{j=1}^{N_{\mbox{\tiny R}}} \widetilde{\mathcal{B}}_{p_jp_k}\,, \qquad &&k=1,\ldots,N_{\mbox{\tiny L}}\,, \\ e^{i p_kL}&= \prod_{\substack{j = 1\\j \neq k}}^{N_{\mbox{\tiny R}}} \left(\mathcal{C}_{p_kp_j}\right)^{-1} \ \prod_{j=1}^{N_{\mbox{\tiny L}}} \left(\widetilde{\mathcal{B}}_{p_kp_j}\right)^{-1}\,, \qquad &&k=1,\ldots,N_{\mbox{\tiny R}}\,. \end{aligned} \end{equation} The first set of equations determines the momenta $p_k$ of the excitations $Y^{\mbox{\tiny L}}$, the second those of the excitations $Z^{\mbox{\tiny R}}$. In {AdS$_3\times$S$^3\times$T$^4$} we may add another type of excitations that scatter diagonally with both $Y^{\mbox{\tiny L}}$ and $Z^{\mbox{\tiny R}}$. They belong to the massless module, and they can be chosen to be of type $\chi^1$. The excitations $Y^{\mbox{\tiny L}}$, $Z^{\mbox{\tiny R}}$ and $\chi^1$ that we have chosen here are the highest-weight states in each of the irreducible one-particle representations at our disposal, as can be checked in Section~\ref{sec:exact-repr-T4}, see also Figures~\ref{fig:massive} and~\ref{fig:massless}. \paragraph{Non-diagonal scattering: nesting procedure.} To describe the most generic state we have to allow also for excitations that do not scatter diagonally.
We then have to appeal to the nesting procedure to write the corresponding Bethe-Yang equations. The idea is to organise the excitations at our disposal into different levels. Level-I corresponds to the set of excitations that scatter diagonally among each other, as considered in the previous paragraphs. Level-II contains all the excitations that can be created from level-I by the action of lowering operators. Depending on the algebra and representations considered, one might need to go further than level-II, but this will not be our case. In the following we explain how the nesting procedure works when we choose the lowering operator $\gen{Q}^{\mbox{\tiny L} 1}$ acting on level-I excitations $Y^{\mbox{\tiny L}}$. According to the exact representation in Eq.~\eqref{eq:exact-repr-left-massive} this creates a fermionic excitation $\eta^{\mbox{\tiny L} 1}$. The S-matrix acts on a two-particle state containing both of them as \begin{equation} \mathcal{S}\ket{Y^{\mbox{\tiny L}}_p\eta^{\sL1}_q}=S_0^{\mbox{\tiny L}\sL}(p,q)\left(A^{\mbox{\tiny L}\sL}_{pq}B^{\mbox{\tiny L}\sL}_{pq}\ket{\eta^{\sL1}_qY^{\mbox{\tiny L}}_p}+A^{\mbox{\tiny L}\sL}_{pq}C^{\mbox{\tiny L}\sL}_{pq}\ket{Y^{\mbox{\tiny L}}_q\eta^{\sL1}_p}\right)\,, \end{equation} where the $\alg{su}(1|1)^2_{\text{c.e.}}$ scattering elements may be found in~\eqref{eq:expl-su112-smat-el}, and the normalisation $S_0^{\mbox{\tiny L}\sL}(p,q)$ was fixed in~\eqref{eq:norm-LL-massive-sector}. The S-matrix does not act as a simple permutation on such a two-particle state. The idea is to find a state containing both $Y^{\mbox{\tiny L}}$ and $\eta^{\mbox{\tiny L} 1}$ where this is the case. A way to do it is to consider the linear combination defined by \begin{equation} \ket{\mathcal{Y}_y} = f(y,p_1) \ket{\eta^{\mbox{\tiny L} 1}_{p_1} Y^{\mbox{\tiny L}}_{p_2}} + f(y,{p_2}) S^{\text{II,I}}(y,p_1) \ket{Y^{\mbox{\tiny L}}_{p_1} \eta^{\mbox{\tiny L} 1}_{p_2}}.
\end{equation} In order to parameterise the state we have introduced a variable $y$. It goes under the name of \emph{auxiliary root} and it is associated to the level-II excitation. Here $f(y,p_j)$ is interpreted as a normalisation parameter and $S^{\text{II,I}}(y,p_1)$ as the scattering element between the level-II and the level-I excitation. The diagonal scattering is achieved by imposing the equation \begin{equation}\label{eq:compat-cond-level-II} \mathcal{S} \ket{\mathcal{Y}_y} = \mathcal{A}_{p_1p_2} \ket{\mathcal{Y}_y}_\mathbf{\pi}\,, \end{equation} where $\ket{\mathcal{Y}_y}_\mathbf{\pi}$ is the permuted state that is found from $\ket{\mathcal{Y}_y}$ after exchanging the momenta $p_1$ and $p_2$. This equation is motivated by the fact that we want to interpret level-II excitations as created on top of the ones of level-I. Thanks to the compatibility condition~\eqref{eq:compat-cond-level-II}, it is enough to define \begin{equation} \ket{\Psi}=\ket{\mathcal{Y}_y}+\mathcal{A}_{p_1p_2}\ket{\mathcal{Y}_y}_\mathbf{\pi}, \end{equation} to get an eigenstate of the S-matrix. Because of the above definitions the wave-function looks more involved \begin{equation} \begin{aligned} \psi(\sigma_1,\sigma_2)&=\left[ f(y,p_1)+f(y,p_2) S^{\text{II,I}}(y,p_1)\right]e^{i(p_1\sigma_1+p_2\sigma_2)}\\ &+\mathcal{A}_{p_1p_2} \left[ f(y,p_2)+f(y,p_1) S^{\text{II,I}}(y,p_2)\right]e^{i(p_2\sigma_1+p_1\sigma_2)}\,. \end{aligned} \end{equation} Imposing the periodicity condition $\psi(\sigma_2,\sigma_1+L)=\psi(\sigma_1,\sigma_2)$ and matching the coefficients for $f(y,p_1)$ and $f(y,p_2)$ we find the Bethe equations \begin{equation} \begin{aligned} e^{i p_1L}&= \left(\mathcal{A}_{p_1p_2}\right)^{-1} \ S^{\text{II,I}}(y,p_1), \qquad 1=S^{\text{II,I}}(y,p_1)\ S^{\text{II,I}}(y,p_2)\,, \\ e^{i p_2L}&= \left(\mathcal{A}_{p_2p_1}\right)^{-1} \ S^{\text{II,I}}(y,p_2)\,. 
\end{aligned} \end{equation} The introduction of level-II excitations then has the consequence of producing factors of $S^{\text{II,I}}(y,p_j)$ in the Bethe-Yang equations, confirming the interpretation of these terms as scattering elements between excitations of different levels. We also find a new equation for the auxiliary root $y$, conceptually similar to the equations for $p_1$ and $p_2$. The variable $y$ carries no momentum, and the left-hand side of its equation is just $1$. In the particular case we considered, when we impose~\eqref{eq:compat-cond-level-II} we find \begin{equation} f(y,p_j)= \frac{g(y) \eta_{p_j}}{y - x^+_{p_j}}\,, \qquad S^{\text{II,I}}(y,p_j)= \left(\frac{x^+_{p_j}}{x^-_{p_j}}\right)^{1/2} \frac{y - x^-_{p_j}}{y - x^+_{p_j}} \,. \end{equation} To derive the Bethe-Yang equations for a state with a number of $N^{\text{I}}_{\mbox{\tiny L}}$ excitations $Y^{\mbox{\tiny L}}$ and $N^{\text{II}}_{\mbox{\tiny L}}$ excitations $\eta^{\mbox{\tiny L} 1}$ we repeat the above procedure, generalising it to multiparticle states. One should also take into account the possibility of a non-trivial scattering of level-II excitations among each other. To check whether such a scattering element $S^{\text{II,II}}(y_1,y_2)$ exists, for two such excitations parameterised by the auxiliary roots $y_1$ and $y_2$, we consider the state \begin{equation} \begin{aligned} \ket{\mathcal{Y}_{y_1} \mathcal{Y}_{y_2}} &= f(y_1,p_1) f(y_2,p_2) S^{\text{II},\text{I}}(y_2,p_1) \ket{\eta^{\mbox{\tiny L} 1}_{p_1} \eta^{\mbox{\tiny L} 1}_{p_2}} \\ &\qquad + f(y_2,p_1) f(y_1,p_2) S^{\text{II},\text{I}}(y_1,p_1) S^{\text{II},\text{II}}(y_1,y_2) \ket{\eta^{\mbox{\tiny L} 1}_{p_1} \eta^{\mbox{\tiny L} 1}_{p_2}} \,, \end{aligned} \end{equation} where the functions $f(y_j,p_k)$ and $S^{\text{II},\text{I}}(y_j,p_k)$ are the ones calculated previously.
Imposing the compatibility condition $\mathcal{S} \ket{\mathcal{Y}_{y_1} \mathcal{Y}_{y_2}} = \mathcal{A}_{p_1p_2} \ket{\mathcal{Y}_{y_1} \mathcal{Y}_{y_2}}_\mathbf{\pi}$---where the permuted state is found by exchanging the momenta $p_1$ and $p_2$---we find \begin{equation} S^{\text{II},\text{II}}(y_1,y_2) = -1. \end{equation} The scattering element is \emph{trivial} and there is no contribution to the Bethe-Yang equations. The minus sign is present here because we are permuting two fermionic states. The periodicity condition is then written as \begin{equation} \begin{aligned} e^{i p_kL}&= \prod_{\substack{j = 1\\j \neq k}}^{N_{\mbox{\tiny L}}} \left(\mathcal{A}_{p_kp_j}\right)^{-1} \ \prod_{j=1}^{N_{\mbox{\tiny L}}^{\text{II}}} S^{\text{II},\text{I}}(y_j,p_k)\,, \qquad &&k=1,\ldots,N_{\mbox{\tiny L}}\,, \\ 1&= \prod_{j=1}^{N_{\mbox{\tiny L}}} S^{\text{II},\text{I}}(y_k,p_j)\,, \qquad &&k=1,\ldots,N_{\mbox{\tiny L}}^{\text{II}}\,, \end{aligned} \end{equation} where we have defined the total number of excitations $N_{\mbox{\tiny L}}=N_{\mbox{\tiny L}}^{\text{I}}+N_{\mbox{\tiny L}}^{\text{II}}$. Similar computations may be done to consider scattering of different level-II excitations with other level-I states. It is clear that we need to associate one auxiliary root to each lowering operator of the algebra $\mathcal{A}$, creating the corresponding level-II excitation. \subsection{Bethe-Yang equations for AdS$_3\times$S$^3\times$T$^4$} \label{sec:BAE} Using the procedure explained in the previous section, we can derive the whole set of \mbox{Bethe-Yang} equations for AdS$_3\times$S$^3\times$T$^4$, when we allow for a generic number of excitations of each type. We write them explicitly\footnote{Here we write the Bethe-Yang equations following the convention of~\cite{Borsato:2014hja} for the normalisation of the mixed-mass sector. This differs from the normalisation of~\cite{Borsato:2016kbm}.} in Equations~\eqref{eq:BA-1}-\eqref{eq:BA-su(2)}. 
In the following we use the shorthand notation $\nu_k \equiv e^{i p_k}.$ When restricting to the massive sector, the factors of $\nu$ have also the meaning of \emph{frame-factors}, and they allow us to relate the string frame to the spin-chain frame, see Section~\ref{sec:S-matr-spin-chain-T4}. \begin{align} \label{eq:BA-1} 1 &= \prod_{j=1}^{K_2} \frac{y_{1,k} - x_j^+}{y_{1,k} - x_j^-} \nu_j^{-\frac{1}{2}} \prod_{j=1}^{K_{\bar{2}}} \frac{1 - \frac{1}{y_{1,k} \bar{x}_j^-}}{1- \frac{1}{y_{1,k} \bar{x}_j^+}} \nu_j^{-\frac{1}{2}} \prod_{j=1}^{K_0} \frac{y_{1,k} - z_j^+ }{y_{1,k} - z_j^- } \nu_j^{-\frac{1}{2}} , \\ \nonumber \\ \begin{split} \label{eq:BA-2} \left(\frac{x_k^+}{x_k^-}\right)^L &= \prod_{\substack{j = 1\\j \neq k}}^{K_2} \nu_k^{-1}\nu_j \frac{x_k^+ - x_j^-}{x_k^- - x_j^+} \frac{1- \frac{1}{x_k^+ x_j^-}}{1- \frac{1}{x_k^- x_j^+}} (\sigma^{\bullet\bullet}_{kj})^2 \prod_{j=1}^{K_1} \nu_k^{\frac{1}{2}} \, \frac{x_k^- - y_{1,j}}{x_k^+ - y_{1,j}} \prod_{j=1}^{K_3} \nu_k^{\frac{1}{2}} \, \frac{x_k^- - y_{3,j}}{x_k^+ - y_{3,j}} \\ &\times \prod_{j=1}^{K_{\bar{2}}} \nu_j \frac{1- \frac{1}{x_k^+ \bar{x}_j^+}}{1- \frac{1}{x_k^- \bar{x}_j^-}} \frac{1- \frac{1}{x_k^+ \bar{x}_j^-}}{1- \frac{1}{x_k^- \bar{x}_j^+}} (\widetilde{\sigma}^{\bullet\bullet}_{kj})^2 \prod_{j=1}^{K_{\bar{1}}} \nu_k^{-\frac{1}{2}} \, \frac{1 - \frac{1}{x_k^- y_{\bar{1},j}}}{1- \frac{1}{x_k^+ y_{\bar{1},j}}} \prod_{j=1}^{K_{\bar{3}}} \nu_k^{-\frac{1}{2}} \, \frac{1 - \frac{1}{x_k^- y_{\bar{3},j}}}{1- \frac{1}{x_k^+ y_{\bar{3},j}}} \\ &\times \prod_{j=1}^{K_{0}} \nu_k^{\frac{1}{2}} \left(\frac{1- \frac{1}{x_k^+ z_j^+}}{1- \frac{1}{x_k^- z_j^-}}\right)^{\frac{1}{2}} \left(\frac{1- \frac{1}{x_k^+ z_j^-}}{1- \frac{1}{x_k^- z_j^+}}\right)^{\frac{1}{2}} (\sigma^{\bullet\circ}_{kj})^2, \end{split} \\ \nonumber \\ \label{eq:BA-3} 1 &= \prod_{j=1}^{K_2} \frac{y_{3,k} - x_j^+}{y_{3,k} - x_j^-} \nu_j^{-\frac{1}{2}} \prod_{j=1}^{K_{\bar{2}}} \frac{1 - \frac{1}{y_{3,k} \bar{x}_j^-}}{1- \frac{1}{y_{3,k} 
\bar{x}_j^+}} \nu_j^{-\frac{1}{2}} \prod_{j=1}^{K_0} \frac{y_{3,k} - z_j^+ }{y_{3,k} - z_j^- } \nu_j^{-\frac{1}{2}} , \\ \nonumber \\ \nonumber \\ \nonumber \\ \label{eq:BA-1b} 1 &= \prod_{j=1}^{K_{\bar{2}}} \frac{y_{\bar{1},k} - \bar{x}_j^-}{y_{\bar{1},k} - \bar{x}_j^+} \nu_j^{\frac{1}{2}} \prod_{j=1}^{K_2} \frac{1 - \frac{1}{y_{\bar{1},k} x_j^+}}{1- \frac{1}{y_{\bar{1},k} x_j^-}} \nu_j^{\frac{1}{2}} \prod_{j=1}^{K_{0}} \frac{y_{\bar{1},k} - z_j^-}{y_{\bar{1},k} - z_j^+} \nu_j^{\frac{1}{2}}, \\ \nonumber \\ \begin{split} \label{eq:BA-2b} \left(\frac{\bar{x}_k^+}{\bar{x}_k^-}\right)^L &= \prod_{\substack{j = 1\\j \neq k}}^{K_{\bar{2}}} \frac{\bar{x}_k^- - \bar{x}_j^+}{\bar{x}_k^+ - \bar{x}_j^-} \frac{1- \frac{1}{\bar{x}_k^+ \bar{x}_j^-}}{1- \frac{1}{\bar{x}_k^- \bar{x}_j^+}} (\sigma^{\bullet\bullet}_{kj})^2 \prod_{j=1}^{K_{\bar{1}}} \nu_k^{-\frac{1}{2}} \frac{\bar{x}_k^+ - y_{\bar{1},j}}{\bar{x}_k^- - y_{\bar{1},j}} \prod_{j=1}^{K_{\bar{3}}} \nu_k^{-\frac{1}{2}} \frac{\bar{x}_k^+ - y_{\bar{3},j}}{\bar{x}_k^- - y_{\bar{3},j}} \\ &\times \prod_{j=1}^{K_2} \nu_k^{-1} \frac{1- \frac{1}{\bar{x}_k^- x_j^-}}{1- \frac{1}{\bar{x}_k^+ x_j^+}} \frac{1- \frac{1}{\bar{x}_k^+ x_j^-}}{1- \frac{1}{\bar{x}_k^- x_j^+}}(\widetilde{\sigma}^{\bullet\bullet}_{kj})^2 \prod_{j=1}^{K_{1}} \nu_k^{\frac{1}{2}} \frac{1 - \frac{1}{\bar{x}_k^+ y_{1,j}}}{1- \frac{1}{\bar{x}_k^- y_{1,j}}} \prod_{j=1}^{K_{3}} \nu_k^{\frac{1}{2}}\frac{1 - \frac{1}{\bar{x}_k^+ y_{3,j}}}{1- \frac{1}{\bar{x}_k^- y_{3,j}}} \\ &\times \prod_{j=1}^{K_{0}} \nu_k^{-\frac{1}{2}}\nu_j^{-1} \left(\frac{1- \frac{1}{\bar{x}_k^- z_j^-}}{1- \frac{1}{\bar{x}_k^+ z_j^+}}\right)^{\frac{3}{2}} \left(\frac{1- \frac{1}{\bar{x}_k^+ z_j^-}}{1- \frac{1}{\bar{x}_k^- z_j^+}}\right)^{\frac{1}{2}} (\sigma^{\bullet\circ}_{kj})^2, \end{split} \\ \nonumber \\ \label{eq:BA-3b} 1 &= \prod_{j=1}^{K_{\bar{2}}} \frac{y_{\bar{3},k} - \bar{x}_j^-}{y_{\bar{3},k} - \bar{x}_j^+} \nu_j^{\frac{1}{2}} \prod_{j=1}^{K_2} \frac{1 - \frac{1}{y_{\bar{3},k} 
x_j^+}}{1- \frac{1}{y_{\bar{3},k} x_j^-}}\nu_j^{\frac{1}{2}} \prod_{j=1}^{K_{0}} \frac{y_{\bar{3},k} - z_j^-}{y_{\bar{3},k} - z_j^+} \nu_j^{\frac{1}{2}}, \end{align} \begin{align} \begin{split} \label{eq:BA-0} \left(\frac{z_k^+}{z_k^-}\right)^L &= \prod_{\substack{j = 1\\j \neq k}}^{K_0}\nu_k^{-\frac{1}{2}}\nu_j^{\frac{1}{2}} \frac{z_k^+ - z_j^-}{z_k^- - z_j^+}(\sigma^{\circ\circ}_{kj})^2 \prod_{j=1}^{K_{4}} \frac{w_k - y_{4,j} - i/2}{w_k -y_{4,j} + i/2} \\ & \hspace{-8pt} \prod_{j=1}^{K_{2}} \nu_j^{-\frac{1}{2}} \left( \frac{1-\frac{1}{z_k^- x_j^-}}{1-\frac{1}{z_k^+ x_j^+}} \right)^{\frac{1}{2}} \left(\frac{1-\frac{1}{z_k^+ x_j^-}}{1-\frac{1}{z_k^- x_j^+}} \right)^{\frac{1}{2}} (\sigma^{\circ\bullet}_{kj})^2 \ \prod_{j=1}^{K_1}\nu_k^{\frac{1}{2}} \frac{z_k^- - y_{1,j}}{z_k^+ - y_{1,j}} \prod_{j=1}^{K_3} \nu_k^{\frac{1}{2}} \frac{z_k^- - y_{3,j}}{z_k^+ - y_{3,j}} \\ & \hspace{-8pt} \prod_{j=1}^{K_{\bar{2}}} \nu_k\nu_j^{\frac{1}{2}} \left(\frac{1-\frac{1}{z_k^+ \bar{x}_j^+}}{1-\frac{1}{z_k^- \bar{x}_j^-}}\right)^{\frac{3}{2}} \left( \frac{1-\frac{1}{z_k^+ \bar{x}_j^-}}{1-\frac{1}{z_k^- \bar{x}_j^+}} \right)^{\frac{1}{2}} (\sigma^{\circ\bullet}_{kj})^2 \prod_{j=1}^{K_{\bar{1}}} \nu_k^{-\frac{1}{2}} \frac{z_k^+ - y_{\bar{1},j}}{z_k^- - y_{\bar{1},j}} \prod_{j=1}^{K_{\bar{3}}} \nu_k^{-\frac{1}{2}} \frac{z_k^+ - y_{\bar{3},j}}{z_k^- - y_{\bar{3},j}}, \end{split} \\ \nonumber \\ \begin{split} \label{eq:BA-su(2)} 1 & = \prod_{\substack{j = 1\\j \neq k}}^{K_{4}} \frac{y_{4,k} - y_{4,j} + i}{y_{4,k} - y_{4,j} - i} \prod_{j=1}^{K_{0}} \frac{y_{4,k} - w_j - i/2}{y_{4,k} - w_j + i/2} \end{split} \end{align} \begin{figure} \centering \begin{tikzpicture} \begin{scope} \coordinate (m) at (0cm,0cm); \node (v1L) at (-1.75cm, 1.5cm) [dynkin node] {}; \node (v2L) at (-1.75cm, 0cm) [dynkin node] {$\scriptscriptstyle +1$}; \node (v3L) at (-1.75cm,-1.5cm) [dynkin node] {}; \draw [dynkin line] (v1L) -- (v2L); \draw [dynkin line] (v2L) -- (v3L); \node (v1R) at (+1.75cm, 1.5cm) [dynkin 
node] {}; \node (v2R) at (+1.75cm, 0cm) [dynkin node] {$\scriptscriptstyle -1$}; \node (v3R) at (+1.75cm,-1.5cm) [dynkin node] {}; \draw [dynkin line] (v1L.south west) -- (v1L.north east); \draw [dynkin line] (v1L.north west) -- (v1L.south east); \draw [dynkin line] (v3L.south west) -- (v3L.north east); \draw [dynkin line] (v3L.north west) -- (v3L.south east); \draw [dynkin line] (v1R.south west) -- (v1R.north east); \draw [dynkin line] (v1R.north west) -- (v1R.south east); \draw [dynkin line] (v3R.south west) -- (v3R.north east); \draw [dynkin line] (v3R.north west) -- (v3R.south east); \draw [dynkin line] (v1R) -- (v2R); \draw [dynkin line] (v2R) -- (v3R); \node (v0) at (0cm, 0cm) [dynkin node] {}; \draw [dynkin line] (v0.south west) -- (v0.north east); \draw [dynkin line] (v0.north west) -- (v0.south east); \node (vsu2) at (0cm, +1cm) [dynkin node] {}; \draw [dynkin line] (vsu2.south) -- (v0.north); \begin{pgfonlayer}{background} \draw [inverse line] [out= 0+30,in= 120] (v1L) to (v2R); \draw [inverse line] [out= 0-30,in=240] (v3L) to (v2R); \draw [inverse line] [out=180-30,in= 60] (v1R) to (v2L); \draw [inverse line] [out=180+30,in=300] (v3R) to (v2L); \draw [inverse line] [out= 0+30,in= 120] (v1L) to (v0); \draw [inverse line] [out= 0-30,in=240] (v3L) to (v0); \draw [inverse line] [out=180-30,in= 60] (v1R) to (v0); \draw [inverse line] [out=180+30,in=300] (v3R) to (v0); \end{pgfonlayer} \draw [red phase] [out=-40,in=180+40] (v2L) to (v2R); \draw [blue phase] [out=180-40,in= 180+40,loop] (v2L) to (v2L); \draw [blue phase] [out=-40,in=40,loop] (v2R) to (v2R); \draw [green phase] [out=180+30,in= -30,loop] (v0) to (v0); \draw [brown phase] [out=180-40,in= +40] (v0) to (v2L); \draw [brown phase] [out=+40,in=180-40] (v0) to (v2R); \end{scope} \begin{scope}[xshift=+4cm,yshift=-0.75cm] \draw [dynkin line] (0cm,1.5cm) -- (1cm,1.5cm) node [anchor=west,black] {\small Dynkin links}; \draw [inverse line] (0cm+0.75pt,1.0cm) -- (1cm,1.0cm) node [anchor=west,black] {\small 
Fermionic inversion symmetry links}; \draw [blue phase] (0cm,0.5cm) -- (1cm,0.5cm) node [anchor=west,black] {\small Dressing phase $\sigma^{\bullet\bullet}_{pq}$}; \draw [red phase] (0cm,0.0cm) -- (1cm,0.0cm) node [anchor=west,black] {\small Dressing phase $\widetilde{\sigma}^{\bullet\bullet}_{pq}$}; \draw [brown phase] (0cm,-0.5cm) -- (1cm,-0.5cm) node [anchor=west,black] {\small Dressing phases $\sigma^{\bullet\circ}_{pq},\sigma^{\circ\bullet}_{pq}$}; \draw [green phase] (0cm,-1cm) -- (1cm,-1cm) node [anchor=west,black] {\small Dressing phase $\sigma^{\circ\circ}_{pq}$}; \end{scope} \end{tikzpicture} \caption{The Dynkin diagram for $\alg{psu}(1,1|2)^2$ plus the root for massless modes, with the various interaction terms appearing in the Bethe ansatz indicated.} \label{fig:bethe-equations} \end{figure} A nice way to visualise the equations is to consider Figure~\ref{fig:bethe-equations}. We recognise two copies of the Dynkin diagram of $\alg{psu}(1,1|2)$, corresponding to L and R. Crossed nodes denote fermionic roots, the remaining nodes bosonic ones. The diagrams are in the $\alg{su}(2)$ grading for the Left copy and in the $\alg{sl}(2)$ grading for the Right copy. We refer to Section~\ref{sec:psu112-algebra} for a discussion of the possible gradings of $\alg{psu}(1,1|2)$. Between these two Dynkin diagrams we find two additional nodes, one fermionic and one bosonic. The latter corresponds to the $\alg{su}(2)_{\circ}$ lowering operator\footnote{In~\cite{Borsato:2016kbm} the S-matrix scattering the $\alg{su}(2)_\circ$ flavors of massless excitations was conjectured to be trivial at all orders in $h$, to match the perturbative results of~\cite{Sundin:2016gqe}. This was obtained by sending the rapidity $w_p \to \infty$. As a result, this node and the auxiliary root $y_4$ are missing in the Bethe-Yang equations of~\cite{Borsato:2016kbm}.}, and we associate to it the auxiliary root $y_4$, whose Bethe-Yang equation is~\eqref{eq:BA-su(2)}.
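As a minimal illustration of the auxiliary equation~\eqref{eq:BA-su(2)} (a numerical sketch with made-up rapidities, not taken from the text): for a single root $y_4$ the self-interaction product is empty, and with two massless roots $w_1,w_2$ the equation is solved by the symmetric point $y_4=(w_1+w_2)/2$.

```python
# Sketch with hypothetical rapidity values: for K_4 = 1 and K_0 = 2,
# eq. (BA-su(2)) reduces to  1 = prod_j (y - w_j - i/2)/(y - w_j + i/2).
w1, w2 = 0.3, 1.7            # made-up massless rapidities w_1, w_2
y = (w1 + w2) / 2            # candidate auxiliary root y_4

rhs = ((y - w1 - 0.5j) / (y - w1 + 0.5j)) * ((y - w2 - 0.5j) / (y - w2 + 0.5j))
assert abs(rhs - 1.0) < 1e-12
```

For a single massless root the right-hand side can only approach $1$ as $y_4\to\infty$, in line with the decoupling of this node recalled in the footnote above.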
The external fermionic nodes in the diagram correspond to the four lowering fermionic operators of $\mathcal{A}$. We associate to them the auxiliary roots $y_1,y_3$ for the Left part of the diagram, and $y_{\bar{1}},y_{\bar{3}}$ for the Right part of the diagram. The corresponding Bethe-Yang equations may be found in~\eqref{eq:BA-1},\eqref{eq:BA-3} and~\eqref{eq:BA-1b},\eqref{eq:BA-3b}. The three nodes aligned horizontally in Figure~\ref{fig:bethe-equations} are the momentum-carrying nodes. The one on the left corresponds to Left massive excitations, for which we use parameters $x^\pm$. To distinguish the Right massive excitations that correspond to the node on the right, we use the notation $\bar{x}^\pm$ for the spectral parameters. The two equations for L and R are found in~\eqref{eq:BA-2} and~\eqref{eq:BA-2b}. The node in the middle of the diagram is associated with massless excitations. For them we use the notation $z^\pm$, and their equation is~\eqref{eq:BA-0}. The table below recaps our notation for the spectral parameters and the number of excitations in each case. We also write our choice of level I excitations. $$ \begin{array}{l|ccc} \hline & \text{Left massive} & \text{Right massive} & \text{Massless} \\ \hline \text{spectral parameters} & x^\pm & \bar{x}^\pm & z^\pm \\ \text{number of excitations} & K_2 & K_{\bar{2}} & K_0 \\ \hline \text{level I excitations} & Y^{\sL} & Z^{\sR} & \chi^1 \\ \hline \end{array} $$ These level I excitations have been chosen because they scatter diagonally among each other. They are highest weight states for the raising operators $\bar{\gen{Q}}_{\smallL1},\bar{\gen{Q}}_{\smallL2},\gen{Q}_{\smallR1},\gen{Q}_{\smallR2}$ and $\gen{J}_{\circ 2}^{\ \ 1}$.
The table below shows the notation for the auxiliary root and the number of excitations associated with each lowering operator. $$ \begin{array}{l|ccccc} & \gen{Q}^{\smallL1} & \gen{Q}^{\smallL2} & \bar{\gen{Q}}^{\smallR1} & \bar{\gen{Q}}^{\smallR2} & \gen{J}_{\circ 1}^{\ \ 2} \\ \hline \text{auxiliary root} & y_1 & y_3 & y_{\bar{1}} & y_{\bar{3}} & y_4 \\ \text{number of excitations} & K_1 & K_3 & K_{\bar{1}} & K_{\bar{3}} & K_4 \\ \hline \end{array} $$ Solutions of the Bethe-Yang equations allow one to compute the spectrum only up to wrapping corrections. In the model we are studying, virtual particles wrapping the cylinder can be either massive or massless, and in~\cite{Abbott:2015pps} it was shown that the latter contributions cannot be discarded if one wants to reproduce the energy of semiclassical spinning strings as computed in~\cite{Beccaria:2012kb}. \vspace{12pt} \smallskip \begin{comment} \alert{Comment on other choices of level-I excitations and fermionic duality.} We call \emph{level-I vacuum} $\ket{\mathbf{0}}^{\text{I}}=\ket{\mathbf{0}^L}$ the state that contains no excitation. We imagine creating excitations on this vacuum that may scatter among each other. We first consider the maximal set of excitations that scatter diagonally among each other. In order to have a complete description we should find at least one excitation per irreducible module. We indeed find four possible choices, each of which contains a left and a right massive excitation \begin{equation}\label{eq:level-II-vacuum} \begin{aligned} V^{\text{II}}_A &= \{ Y^{\mbox{\tiny L}},Z^{\mbox{\tiny R}}\}, \qquad V^{\text{II}}_B &= \{ Z^{\mbox{\tiny L}},Y^{\mbox{\tiny R}}\}, \\ V^{\text{II}}_C &= \{ \eta^{\mbox{\tiny L} 2},\eta^{\mbox{\tiny R}}_{\ 1}\}, \qquad V^{\text{II}}_D &= \{ \eta^{\mbox{\tiny L} 1},\eta^{\mbox{\tiny R}}_{\ 2}\}.
\end{aligned} \end{equation} The diagonalisation procedure of the S-matrix can be carried out with any of the above choices. For definiteness we choose the set $V^{\text{II}}_A$. This identifies for us different possible \emph{level-II vacua} that we denote as \begin{equation} \begin{aligned} \ket{\mathbf{0}}_{\mbox{\tiny L}\sL}^{\text{II}} \equiv \ket{Y^{\mbox{\tiny L}}Y^{\mbox{\tiny L}}}\,, \qquad \ket{\mathbf{0}}_{\mbox{\tiny R}\sR}^{\text{II}} \equiv \ket{Z^{\mbox{\tiny R}}Z^{\mbox{\tiny R}}}\,, \\ \ket{\mathbf{0}}_{\mbox{\tiny L}\mbox{\tiny R}}^{\text{II}} \equiv \ket{Y^{\mbox{\tiny L}}Z^{\mbox{\tiny R}}}\,, \qquad \ket{\mathbf{0}}_{\mbox{\tiny R}\mbox{\tiny L}}^{\text{II}} \equiv \ket{Z^{\mbox{\tiny R}}Y^{\mbox{\tiny L}}}\,. \end{aligned} \end{equation} \subsection{Comments on the weak-coupling limit} To discuss the weak coupling regime we expand our results at small values of the coupling constant $h\ll 1$. In particular we take the spectral parameters and the auxiliary roots to expand as \begin{equation} x^\pm\approx \frac{u_x\pm i/2}{h},\qquad y\approx\frac{u_y}{h}, \end{equation} where the rapidities $u_x,u_y$ are finite. Together with the expansion of the rational expressions appearing in the BY equations, we should also expand the dressing factors $\sigma^{\bullet\bullet},\tilde{\sigma}^{\bullet\bullet}$. 
\alert{Do it} We then find that the BY equation for the node $n$ can be rewritten as \begin{equation} \label{eq:BE-one-loop} \left( \frac{u_{n,k} + \frac{i}{2} \varpi_n}{u_{n,k} - \frac{i}{2} \varpi_n} \right)^L = \prod_{\substack{j = 1\\j \neq k}}^{K_n} \frac{u_{n,k} - u_{n,j} + \frac{i}{2} A_{nn}}{u_{n,k} - u_{n,j} - \frac{i}{2} A_{nn}} \prod_{n' \neq n} \prod_{j=1}^{K_{n'}} \frac{u_{n,k} - u_{n',j} + \frac{i}{2} A_{nn'}}{u_{n,k} - u_{n,j} - \frac{i}{2} A_{nn'}}, \end{equation} where $\varpi_n$ are weights that are $\varpi_2=\varpi_{\bar{2}}=1$ for momentum-carrying nodes, and $\varpi_1=\varpi_3=\varpi_{\bar{1}}=\varpi_{\bar{3}}=0$ for nodes of auxiliary roots. The numbers $A_{nn'}$ concide with the entries of the Cartan matrix of $\alg{psu}(1,1|2)^2$ \begin{equation}\label{eq:cartan} A = \begin{pmatrix} 0 & -1 &0 &0 & 0 &0 \\ -1 & +2 & -1 & 0 & 0 & 0 \\ 0 & -1 & 0 &0 & 0 &0 \\ 0 & 0 &0 & 0 & +1 & 0 \\ 0 & 0 & 0 & +1 & -2 & +1 \\ 0 & 0 &0 & 0 & +1 & 0 \end{pmatrix}, \end{equation} where the first copy appears to be in the $\alg{su}(2)$ grading, while the second in the $\alg{sl}(2)$, see Section~\ref{sec:psu112-algebra}. \end{comment} \section{Representations of the off-shell symmetry algebra}\label{sec:RepresentationsT4} In this section we want to study the representations of the off-shell symmetry algebra $\mathcal{A}$ that are relevant for {AdS$_3\times$S$^3\times$T$^4$}. For simplicity we start by considering the near-BMN limit introduced in Section~\ref{sec:large-tens-exp}, and then we explain how we can extend the results to all-loops. We further study these representations and we provide a parameterisation in terms of the momentum of the excitation. \subsection{Near-BMN limit} We start by considering the near-BMN limit, where we truncate the charges at the quadratic order in the fields. For the supercharges this means also that we will ignore the factor $e^{i\, x_-}$ in~\eqref{eq:supercharges-quadratic-T4}. 
Introducing creation and annihilation operators, we rewrite the charges in momentum space and discuss the representations under which the excitations transform. For the explicit map between fields and oscillators we refer to Appendix~\ref{app:oscillators}. The tables below summarise the notation for the annihilation operators that we use. Creation operators are denoted with a dagger. We have bosonic ladder operators $a$ carrying a label $z$ or $y$, to distinguish excitations on AdS or the sphere respectively, and a label L or R. As anticipated in the previous section, they create massive excitations on the worldsheet. The labels L, R also appear on the ladder operators $d$ of massive fermions, which in addition carry an $\alg{su}(2)_{\bullet}$ index. For bosons of T$^4$, creation and annihilation operators will carry two indices, as they are charged under both $\alg{su}(2)_{\bullet}$ and $\alg{su}(2)_{\circ}$. They are massless excitations, and alongside them we also find massless fermions, whose ladder operators $d,\tilde{d}$ carry just an $\alg{su}(2)_{\circ}$ index.
\vspace{12pt} \noindent Bosons: \quad \begin{tabular}{c|c|c} AdS$_3$ & S$^3$ & T$^4$ \\ \hline $a_{\mbox{\tiny L} z}$, $a_{\mbox{\tiny R} z}$ & $a_{\mbox{\tiny L} y}$, $a_{\mbox{\tiny R} y}$ & $a_{\dot{a}a}$ \end{tabular} \qquad\quad Fermions: \quad \begin{tabular}{c|c} massive & massless \\ \hline \raisebox{-3pt}{$d_{\mbox{\tiny L},\dot{a}}$, $d_{\mbox{\tiny R}}{}^{\dot{a}}$} & \raisebox{-3pt}{$d_a$, $\tilde{d}_a$} \end{tabular} \vspace{12pt} When acting on the vacuum, we create the eight massive states that we denote as \begin{equation}\label{eq:massive-states-BMN} \ket{Z^{\mbox{\tiny L},\mbox{\tiny R}}}=a^\dagger_{\mbox{\tiny L},\mbox{\tiny R}\, z}\ket{\mathbf{0}},\quad \ket{Y^{\mbox{\tiny L},\mbox{\tiny R}}}=a^\dagger_{\mbox{\tiny L},\mbox{\tiny R}\, y}\ket{\mathbf{0}},\quad \ket{\eta^{\mbox{\tiny L} \dot{a}}}=d^{\ \dot{a} \dagger}_{\mbox{\tiny L}}\ket{\mathbf{0}},\quad \ket{\eta^{\mbox{\tiny R}}_{\ \dot{a}}}=d^{\dagger}_{\mbox{\tiny R} \dot{a}}\ket{\mathbf{0}}, \end{equation} and the eight massless ones \begin{equation}\label{eq:massless-states-BMN} \ket{T^{\dot{a}a}}=a^{\dot{a} a\dagger}\ket{\mathbf{0}},\qquad \ket{\chi^{a}}=d^{a\,\dagger}\ket{\mathbf{0}},\qquad \ket{\widetilde{\chi}^{a}}=\tilde{d}^{a\,\dagger}\ket{\mathbf{0}}. \end{equation} The notation that we introduce here for states in the near-BMN limit is the same one that we will use for the exact representations, starting in Section~\ref{sec:exact-repr-T4}. 
In terms of ladder operators, the central charges are written as \begin{equation}\label{eq-central-charges-osc-T4} \begin{aligned} &\gen{H}= \int {\rm d} p \ \Bigg[ \omega_p \left( a_{\mbox{\tiny L} z}^\dagger a_{\mbox{\tiny L} z} +a_{\mbox{\tiny L} y}^\dagger a_{\mbox{\tiny L} y} + a_{\mbox{\tiny R} z}^\dagger a_{\mbox{\tiny R} z} +a_{\mbox{\tiny R} y}^\dagger a_{\mbox{\tiny R} y} + d_{\mbox{\tiny L}}^{\ {\dot{a}}\,\dagger}d_{\mbox{\tiny L} {\dot{a}}} +d_{\mbox{\tiny R}{\dot{a}}}^\dagger d_{\mbox{\tiny R}}^{\ {\dot{a}}} \right) \\ &\qquad\qquad\qquad\qquad\qquad +\tilde{\omega}_p \left( a^{\dot{a}a\dagger} a_{\dot{a}a} + d^{a\,\dagger}d_{a} + \tilde{d}^{a\,\dagger}\tilde{d}_{a} \right) \Bigg],\\ &\gen{M}= \int {\rm d} p \ \Bigg[ a_{\mbox{\tiny L} z}^\dagger a_{\mbox{\tiny L} z} +a_{\mbox{\tiny L} y}^\dagger a_{\mbox{\tiny L} y} + d_{\mbox{\tiny L}}^{\ {\dot{a}}\,\dagger}d_{\mbox{\tiny L} {\dot{a}}} -\left( a_{\mbox{\tiny R} z}^\dagger a_{\mbox{\tiny R} z} +a_{\mbox{\tiny R} y}^\dagger a_{\mbox{\tiny R} y} +d_{\mbox{\tiny R}{\dot{a}}}^\dagger d_{\mbox{\tiny R}}^{\ {\dot{a}}} \right) \Bigg],\\ &\gen{C}= -\frac{1}{2}\int {\rm d} p \ p \ \Bigg[ a_{\mbox{\tiny L} z}^\dagger a_{\mbox{\tiny L} z} +a_{\mbox{\tiny L} y}^\dagger a_{\mbox{\tiny L} y} + a_{\mbox{\tiny R} z}^\dagger a_{\mbox{\tiny R} z} +a_{\mbox{\tiny R} y}^\dagger a_{\mbox{\tiny R} y} + d_{\mbox{\tiny L}}^{\ {\dot{a}}\,\dagger}d_{\mbox{\tiny L} {\dot{a}}} +d_{\mbox{\tiny R}{\dot{a}}}^\dagger d_{\mbox{\tiny R}}^{\ {\dot{a}}} \\ &\qquad\qquad\qquad\qquad\qquad + a^{\dot{a}a\dagger} a_{\dot{a}a} + d^{a\,\dagger}d_{a} + \tilde{d}^{a\,\dagger}\tilde{d}_{a} \Bigg]. \end{aligned} \end{equation} As expected, in the near-BMN limit the frequencies for massive $(\omega_p=\sqrt{1+p^2})$ and massless excitations $(\tilde{\omega}_p=|p|)$ show a relativistic dispersion relation. We will see later how this is deformed by the $h$-dependence, see~\eqref{eq:all-loop-disp-rel-T4}. 
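The relation between the two regimes can be checked numerically. The following sketch (not part of the original derivation) takes the all-loop relation $E_p=\sqrt{m^2+4h^2\sin^2\frac{p}{2}}$ quoted in~\eqref{eq:all-loop-disp-rel-T4} and verifies that, after the near-BMN rescaling $p\to p/h$, sending $h\to\infty$ recovers $\omega_p=\sqrt{1+p^2}$ for $m=\pm1$ and $\tilde{\omega}_p=|p|$ for $m=0$.

```python
import math

def E(m, p, h):
    """All-loop dispersion relation E_p = sqrt(m^2 + 4 h^2 sin^2(p/2))."""
    return math.sqrt(m**2 + 4 * h**2 * math.sin(p / 2) ** 2)

p, h = 0.8, 1e6               # near-BMN: rescale p -> p/h and take h large
omega = math.sqrt(1 + p**2)   # massive relativistic frequency
omega_t = abs(p)              # massless relativistic frequency

assert abs(E(+1, p / h, h) - omega) < 1e-6
assert abs(E(-1, p / h, h) - omega) < 1e-6
assert abs(E(0, p / h, h) - omega_t) < 1e-6
```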
In the near-BMN limit, the element introduced by the central extension is essentially just the charge measuring the worldsheet momentum $\gen{C} \sim - \frac{1}{2} \gen{P}$. The quadratic supercharges of Eq.~\eqref{eq:supercharges-quadratic-T4} take the form \begin{equation}\label{eq-supercharges-osc-T4} \begin{aligned} &\gen{Q}_{\sL}^{\ {\dot{a}}}= \int {\rm d} p \ \Bigg[ f_p (d_{\mbox{\tiny L}}^{\ {\dot{a}}\,\dagger} a_{\mbox{\tiny L} y} + \epsilon^{{\dot{a}\dot{b}}}\, a_{\mbox{\tiny L} z}^\dagger d_{\mbox{\tiny L} {\dot{b}}}) +g_p (a_{\mbox{\tiny R} y}^\dagger d_{\mbox{\tiny R}}^{\ {\dot{a}}} +\epsilon^{{\dot{a}\dot{b}}}\, d_{\mbox{\tiny R} {\dot{b}}}^\dagger a_{\mbox{\tiny R} z}) \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad +\tilde{f}_p \left( \epsilon^{{\dot{a}\dot{b}}}\, \tilde{d}^{{a}\,\dagger}a_{{\dot{b}a}}+ a^{{\dot{a}a}\,\dagger}d_{{a}}\right)\Bigg],\\ &\gen{Q}_{\sR {\dot{a}}}=\int {\rm d} p \ \Bigg[ f_p (d_{\mbox{\tiny R} {\dot{a}}}^\dagger a_{\mbox{\tiny R} y} -\epsilon_{{\dot{a}\dot{b}}}\, a_{\mbox{\tiny R} z}^\dagger d_{\mbox{\tiny R}}^{\ {\dot{b}}}) +g_p (a_{\mbox{\tiny L} y}^\dagger d_{\mbox{\tiny L} {\dot{a}}} -\epsilon_{{\dot{a}\dot{b}}}\, d_{\mbox{\tiny L}}^{\ {\dot{b}}\,\dagger} a_{\mbox{\tiny L} z})\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad +\tilde{g}_p \left( d^{{a}\,\dagger}a_{{\dot{a}a}}- \epsilon_{{\dot{a}\dot{b}}}\, a^{{\dot{b}a}\,\dagger}\tilde{d}_{{a}}\right)\Bigg]. \end{aligned} \end{equation} Here we have introduced the functions $f_p,g_p$ and $\tilde{f}_p,\tilde{g}_p$ for massive and massless excitations respectively, defined by \begin{equation} \begin{aligned} f_p&=\sqrt{\frac{1+\omega_p}{2}},\qquad\qquad &&g_p=-\frac{p}{2f_p}, \\ \tilde{f}_p&=\sqrt{\frac{\tilde{\omega}_p}{2}},\qquad\qquad &&\tilde{g}_p=-\frac{p}{2\tilde{f}_p}. 
\end{aligned} \end{equation} The action of the bosonic and the fermionic charges on the states of Equations~\eqref{eq:massive-states-BMN} and~\eqref{eq:massless-states-BMN} defines the representation under which the excitations of {AdS$_3\times$S$^3\times$T$^4$} are organised. It is clear that this representation is \emph{reducible}. We find three irreducible representations, which may be labelled by the eigenvalue $m$ of the central charge $\gen{M}$: $m=+1$ for Left massive, $m=-1$ for Right massive, and $m=0$ for massless excitations. \subsection{All-loop representations}\label{sec:exact-repr-T4} The study of the charges at quadratic level allowed us to understand the representations at near-BMN order. We now want to go beyond this limit and write down representations that are expected to be valid at all loops. In particular we want to reproduce the full non-linear momentum dependence of the charge $\gen{C}$, as in Eq.~\eqref{eq:allloop-centralcharges}. Doing so we discover that the dispersion relation is modified, and for generic $h$ it is not relativistic. The results rely on the assumption that the eigenvalues of the central charges $\gen{C},\overline{\gen{C}}$ derived from the classical computation remain unmodified at the quantum level. As already pointed out, the main motivation for believing this is the fact that the same central charges were found on both sides of the AdS$_5$/CFT$_4$ dual pair~\cite{Beisert:2005tm,Arutyunov:2006ak}. The key point of the construction is that each of the irreducible representations found in the near-BMN limit---Left-massive, Right-massive and massless---is a \emph{short} representation of $\alg{psu}(1|1)^4_{\text{c.e.}}$. Even beyond the near-BMN limit the dimensionality remains the same, and the representations remain short.
A generic short representation satisfies an important constraint relating the central charges\footnote{This equation is a consequence of the fact that in a short representation a highest weight state---defined as being annihilated by the raising operators $\overline{\gen{Q}}_{\mbox{\tiny L}\dot{a}},\gen{Q}_{\mbox{\tiny R}}^{\ \dot{a}}$---is also annihilated by a particular combination of lowering operators, $\frac{1}{2}(\gen{H}-\gen{M})\gen{Q}_{\mbox{\tiny L}}^{\ \dot{a}}-\gen{C}\overline{\gen{Q}}_{\mbox{\tiny R}\dot{a}}$. Then the vanishing of the anti-commutator $\{\overline{\gen{Q}}_{\mbox{\tiny L}\dot{a}},\frac{1}{2}(\gen{H}-\gen{M})\gen{Q}_{\mbox{\tiny L}}^{\ \dot{a}}-\gen{C}\overline{\gen{Q}}_{\mbox{\tiny R}\dot{a}}\}$ yields the desired constraint on the central elements.} \begin{equation}\label{eq:short-cond} \gen{H}^2 = \gen{M}^2 + 4 \gen{C}\overline{\gen{C}}\,, \end{equation} called the \emph{shortening condition}. It allows us to solve immediately for the eigenvalue $E_p$ of the Hamiltonian $\gen{H}$, in terms of the eigenvalues $m$ and $\frac{ih}{2}(e^{i\, p}-1)$ of $\gen{M}$ and $\gen{C}$, yielding \begin{equation}\label{eq:all-loop-disp-rel-T4} E_p = \sqrt{ m^2 + 4 h^2 \sin^2 \frac{p}{2}}\,. \end{equation} This result is particularly important because it gives the energy of a fundamental worldsheet excitation at a generic value of $h$. For this reason it is often referred to as the \emph{all-loop} dispersion relation. $\gen{M}$ measures an angular momentum, and its eigenvalue continues to take the integer values $m=+1,-1,0$; in other words, it does not depend on $h$ and $p$ even beyond the near-BMN limit. \medskip We now present the \emph{exact} representations by giving the action of the supercharges on the states.
The result is written in terms of the coefficients $a_p,\bar{a}_p,b_p,\bar{b}_p$, that will be fixed in Section~\ref{sec:RepresentationCoefficientsT4} by requiring that we reproduce the eigenvalues~\eqref{eq:allloop-centralcharges} and~\eqref{eq:all-loop-disp-rel-T4} of the central charges, and that we match with the results in the near-BMN limit once we rescale the momentum $p\to p/h$ and send $h\to \infty$. We show the action of the supercharges separately for each of the irreducible modules. \paragraph{Massive representations} The Left and Right modules are depicted in Figure~\ref{fig:massive}. Each of them has the shape of a square, where supercharges connect adjacent corners. The two corners hosting the fermions are related by $\alg{su}(2)_\bullet$ generators. \begin{figure}[t] \centering \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] (PhiM) at ( 0 , 2cm) {\small $\ket{Y^{\mbox{\tiny L}}}$}; \node [box] (PsiP) at (-2cm, 0cm) {\small $\ket{\eta^{\mbox{\tiny L} 1}}$}; \node [box] (PsiM) at (+2cm, 0cm) {\small $\ket{\eta^{\mbox{\tiny L} 2}}$}; \node [box] (PhiP) at ( 0 ,-2cm) {\small $\ket{Z^{\mbox{\tiny L}}}$}; \newcommand{0.09cm,0cm}{0.09cm,0cm} \newcommand{0cm,0.10cm}{0cm,0.10cm} \draw [arrow] ($(PhiM.west) +(0cm,0.10cm)$) -- ($(PsiP.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=south east,Q node] {\scriptsize $\gen{Q}^{\ 1}_{\mbox{\tiny L}},\overline{\gen{Q}}{}^{\ 1}_{\mbox{\tiny R}}$}; \draw [arrow] ($(PsiP.north)+(0.09cm,0cm)$) -- ($(PhiM.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north west,Q node] {}; \draw [arrow] ($(PsiM.south)-(0.09cm,0cm)$) -- ($(PhiP.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south east,Q node] {}; \draw [arrow] ($(PhiP.east) -(0cm,0.10cm)$) -- ($(PsiM.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=north west,Q node] {\scriptsize $\overline{\gen{Q}}{}_{\mbox{\tiny L} 1},\gen{Q}_{\mbox{\tiny R} 1}$}; \draw [arrow] ($(PhiM.east) -(0cm,0.10cm)$) -- 
($(PsiM.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=north east,Q node] {}; \draw [arrow] ($(PsiM.north)+(0.09cm,0cm)$) -- ($(PhiM.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south west,Q node] {\scriptsize $\overline{\gen{Q}}{}_{\mbox{\tiny L} 2},{\gen{Q}}{}_{\mbox{\tiny R} 2}$}; \draw [arrow] ($(PsiP.south)-(0.09cm,0cm)$) -- ($(PhiP.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north east,Q node] {\scriptsize $ \gen{Q}^{\ 2}_{\mbox{\tiny L}},\overline{\gen{Q}}{}^{\ 2}_{\mbox{\tiny R}}$}; \draw [arrow] ($(PhiP.west) +(0cm,0.10cm)$) -- ($(PsiP.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=south west,Q node] {}; \draw [dotted] (PsiM) -- (PsiP) node [pos=0.65,anchor=north west,Q node] {\scriptsize $\gen{J}^{\ \ \dot{b}}_{\bullet\dot{a}}$}; \end{tikzpicture} \hspace{2cm} \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] (PhiM) at ( 0 , 2cm) {\small $\ket{Z^{\mbox{\tiny R}}}$}; \node [box] (PsiP) at (-2cm, 0cm) {\small $\ket{\eta^{\mbox{\tiny R}}_{\ 2}}$}; \node [box] (PsiM) at (+2cm, 0cm) {\small $\ket{\eta^{\mbox{\tiny R}}_{\ 1}}$}; \node [box] (PhiP) at ( 0 ,-2cm) {\small $\ket{Y^{\mbox{\tiny R}}}$}; \newcommand{0.09cm,0cm}{0.09cm,0cm} \newcommand{0cm,0.10cm}{0cm,0.10cm} \draw [arrow] ($(PhiM.west) +(0cm,0.10cm)$) -- ($(PsiP.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=south east,Q node] {\scriptsize $\gen{Q}^{\ 1}_{\mbox{\tiny L}},\overline{\gen{Q}}{}^{\ 1}_{\mbox{\tiny R}}$}; \draw [arrow] ($(PsiP.north)+(0.09cm,0cm)$) -- ($(PhiM.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north west,Q node] {}; \draw [arrow] ($(PsiM.south)-(0.09cm,0cm)$) -- ($(PhiP.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south east,Q node] {}; \draw [arrow] ($(PhiP.east) -(0cm,0.10cm)$) -- ($(PsiM.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=north west,Q node] {\scriptsize $\overline{\gen{Q}}{}_{\mbox{\tiny L} 1},\gen{Q}_{\mbox{\tiny R} 1}$}; \draw [arrow] ($(PhiM.east) -(0cm,0.10cm)$) -- ($(PsiM.north)-(0.09cm,0cm)$) node 
[pos=0.5,anchor=north east,Q node] {}; \draw [arrow] ($(PsiM.north)+(0.09cm,0cm)$) -- ($(PhiM.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south west,Q node] {\scriptsize $\overline{\gen{Q}}{}_{\mbox{\tiny L} 2},{\gen{Q}}{}_{\mbox{\tiny R} 2}$}; \draw [arrow] ($(PsiP.south)-(0.09cm,0cm)$) -- ($(PhiP.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north east,Q node] {\scriptsize $ \gen{Q}^{\ 2}_{\mbox{\tiny L}},\overline{\gen{Q}}{}^{\ 2}_{\mbox{\tiny R}}$}; \draw [arrow] ($(PhiP.west) +(0cm,0.10cm)$) -- ($(PsiP.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=south west,Q node] {}; \draw [dotted] (PsiM) -- (PsiP) node [pos=0.65,anchor=north west,Q node] {\scriptsize $\gen{J}^{\ \ \dot{b}}_{\bullet\dot{a}}$}; \end{tikzpicture} \caption{The Left and Right massive modules. The supercharges indicated explicitly correspond to the outer arrows only. The two massive fermions within each module are related by $\alg{su}(2)_\bullet$ ladder operators.} \label{fig:massive} \end{figure} More explicitly, the action of the supercharges on the Left module is \begin{equation}\label{eq:exact-repr-left-massive} \begin{aligned} \gen{Q}_{\mbox{\tiny L}}^{\ \dot{a}} \ket{Y_p^{\mbox{\tiny L}}} &= a_p \ket{\eta^{\mbox{\tiny L} \dot{a}}_p}, \qquad &\gen{Q}_{\mbox{\tiny L}}^{\ \dot{a}} \ket{\eta^{\mbox{\tiny L} \dot{b}}_p} &= \epsilon^{\dot{a}\dot{b}} \, a_p \ket{Z_p^{\mbox{\tiny L}}}, \\ \overline{\gen{Q}}{}_{\mbox{\tiny L} \dot{a}} \ket{Z_p^{\mbox{\tiny L}}} &= - \epsilon_{\dot{a}\dot{b}} \, \bar{a}_p \ket{\eta^{\mbox{\tiny L} \dot{b}}_p}, \qquad &\overline{\gen{Q}}{}_{\mbox{\tiny L} \dot{a}} \ket{\eta^{\mbox{\tiny L} \dot{b}}_p}& = \delta_{\dot{a}}^{\ \dot{b}} \, \bar{a}_p \ket{Y_p^{\mbox{\tiny L}}}, \\[8pt] \gen{Q}_{\mbox{\tiny R} \dot{a}} \ket{Z^{\mbox{\tiny L}}_p} &= - \epsilon_{\dot{a}\dot{b}} \, b_p \ket{\eta^{\mbox{\tiny L} \dot{b}}_p}, \qquad &\gen{Q}_{\mbox{\tiny R} \dot{a}} \ket{\eta^{\mbox{\tiny L} \dot{b}}_p} &= \delta_{\dot{a}}^{\ \dot{b}} \, b_p \ket{Y^{\mbox{\tiny L}}_p},\\ 
\overline{\gen{Q}}{}_{\mbox{\tiny R}}^{\ \dot{a}} \ket{Y^{\mbox{\tiny L}}_p} &= \bar{b}_p \ket{\eta^{\mbox{\tiny L} \dot{a}}_p}, \qquad &\overline{\gen{Q}}{}_{\mbox{\tiny R}}^{\ \dot{a}} \ket{\eta^{\mbox{\tiny L} \dot{b}}_p} &= \epsilon^{\dot{a}\dot{b}} \, \bar{b}_p \ket{Z^{\mbox{\tiny L}}_p}. \end{aligned} \end{equation} As it is clear also from the picture, if we define $\overline{\gen{Q}}{}_{\mbox{\tiny L} 1},\gen{Q}_{\mbox{\tiny R} 1}$ to be our raising operators, then the bosonic excitation $\ket{Y^{\mbox{\tiny L}}}$ of the sphere is the highest weight state of this module. For the Right module the situation is different, as the highest weight state is the bosonic excitation $\ket{Z^{\mbox{\tiny R}}}$ of AdS. The action of the supercharges in this case is\footnote{Although we have defined Right fermions with a lower $\alg{su}(2)_\bullet$ index in Eq.~\eqref{eq:massive-states-BMN}, here we prefer to raise it, to avoid collision with the label for the momentum of the excitation. We raise $\alg{su}(2)$ indices with the help of $\epsilon^{\dot{a}\dot{b}}$.} \begin{equation}\label{eq:exact-repr-right-massive} \begin{aligned} &\gen{Q}_{\mbox{\tiny L}}^{\ \dot{a}} \ket{Z_p^{\mbox{\tiny R}}} = b_p \ket{\eta^{\mbox{\tiny R} \dot{a}}_p}, \qquad &\gen{Q}_{\mbox{\tiny L}}^{\ \dot{a}} \ket{\eta^{\mbox{\tiny R} \dot{b}}_p} &=- \epsilon^{\dot{a}\dot{b}} \, b_p \ket{Y_p^{\mbox{\tiny R}}},\\ &\overline{\gen{Q}}{}_{\mbox{\tiny L} \dot{a}} \ket{Y_p^{\mbox{\tiny R}}} = \epsilon_{\dot{a}\dot{b}} \, \bar{b}_p \ket{\eta^{\mbox{\tiny R} \dot{b}}_p}, \qquad &\overline{\gen{Q}}{}_{\mbox{\tiny L} \dot{a}} \ket{\eta^{\mbox{\tiny R} \dot{b}}_p} &= \delta_{\dot{a}}^{\ \dot{b}} \, \bar{b}_p \ket{Z_p^{\mbox{\tiny R}}}, \\[8pt] &\gen{Q}_{\mbox{\tiny R} \dot{a}} \ket{Y_p^{\mbox{\tiny R}}} = \epsilon_{\dot{a}\dot{b}} \, a_p \ket{\eta^{\mbox{\tiny R} \dot{b}}_p}, \qquad &\gen{Q}_{\mbox{\tiny R} \dot{a}} \ket{\eta^{\mbox{\tiny R} \dot{b}}_p} &= \delta_{\dot{a}}^{\ \dot{b}} \, a_p 
\ket{Z_p^{\mbox{\tiny R}}}, \\ &\overline{\gen{Q}}{}_{\mbox{\tiny R}}^{\ \dot{a}} \ket{Z_p^{\mbox{\tiny R}}} = \bar{a}_p \ket{\eta^{\mbox{\tiny R} \dot{a}}_p}, \qquad &\overline{\gen{Q}}{}_{\mbox{\tiny R}}^{\ \dot{a}} \ket{\eta^{\mbox{\tiny R} \dot{b}}_p} &= - \epsilon^{\dot{a}\dot{b}} \, \bar{a}_p \ket{Y_p^{\mbox{\tiny R}}}. \end{aligned} \end{equation} The above exact representations reproduce the ones found in the near-BMN limit after identifying $a_p\sim\bar{a}_p\sim f_p$ and $b_p\sim\bar{b}_p\sim g_p$. When going on-shell one has to set also $b_p=\bar{b}_p=0$, with the result that only Left (Right) supercharges act non-trivially on Left (Right) states. \paragraph{Massless representations} Figure~\ref{fig:massless} shows the massless module, with the shape of a parallelepiped. It is obtained by gluing together two short $\alg{psu}(1|1)^4_{\text{c.e.}}$ representations---with the shape of a square, like in the case of massive excitations---related by the action of $\alg{su}(2)_\circ$ generators. 
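Returning briefly to the massive modules: the Left-module action~\eqref{eq:exact-repr-left-massive} can be checked numerically. The sketch below (not from the text; it assumes the convention $\epsilon^{12}=-\epsilon_{12}=1$, takes unlisted actions to vanish, and uses made-up values for $a_p,b_p$) realises $\gen{Q}_{\mbox{\tiny L}}^{\ \dot{a}}$ and $\gen{Q}_{\mbox{\tiny R}\dot{a}}$ as $4\times4$ matrices and verifies that they are nilpotent and that $\{\gen{Q}_{\mbox{\tiny L}}^{\ \dot{a}},\gen{Q}_{\mbox{\tiny R}\dot{b}}\}=\delta^{\dot{a}}_{\ \dot{b}}\,a_p b_p\,\mathbf{1}$, i.e.\ the anticommutator is proportional to the identity on the module, as befits a central element.

```python
def zeros():
    return [[0.0] * 4 for _ in range(4)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(4)] for i in range(4)]

a, b = 0.7, -1.3   # generic (hypothetical) values for the coefficients a_p, b_p

# Basis order: |Y^L>, |eta^L1>, |eta^L2>, |Z^L>.
# Convention assumed here: eps^{12} = -eps_{12} = +1; unlisted actions vanish.
QL1 = zeros(); QL1[1][0] = a;  QL1[3][2] = a     # Q_L^1
QL2 = zeros(); QL2[2][0] = a;  QL2[3][1] = -a    # Q_L^2
QR1 = zeros(); QR1[0][1] = b;  QR1[2][3] = b     # Q_R1
QR2 = zeros(); QR2[0][2] = b;  QR2[1][3] = -b    # Q_R2

QL, QR = {1: QL1, 2: QL2}, {1: QR1, 2: QR2}
for i in (1, 2):
    # supercharges are nilpotent on the module
    assert all(abs(x) < 1e-12 for row in matmul(QL[i], QL[i]) for x in row)
    for j in (1, 2):
        anti = add(matmul(QL[i], QR[j]), matmul(QR[j], QL[i]))
        expect = a * b if i == j else 0.0
        # central anticommutator: delta^a_b * a_p b_p * identity
        assert all(abs(anti[r][c] - (expect if r == c else 0.0)) < 1e-12
                   for r in range(4) for c in range(4))
```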
\begin{figure}[t] \centering \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \newcommand{-4cm}{-4cm} \begin{scope}[xshift=-4cm] \node [box] (PhiM) at ( 0 , 2cm) {\small $\ket{\chi^{1}}$}; \node [box] (PsiP) at (-2cm, 0cm) {\small $\ket{T^{11}}$}; \node [box] (PsiM) at (+2cm, 0cm) {\small $\ket{T^{21}}$}; \node [box] (PhiP) at ( 0 ,-2cm) {\small $\ket{\widetilde{\chi}^{1}}$}; \newcommand{0.09cm,0cm}{0.09cm,0cm} \newcommand{0cm,0.10cm}{0cm,0.10cm} \draw [arrow] ($(PhiM.west) +(0cm,0.10cm)$) -- ($(PsiP.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=south east,Q node] {\scriptsize $\gen{Q}^{\ 1}_{\mbox{\tiny L}},\overline{\gen{Q}}{}_{\mbox{\tiny R}}^{\ 1}$}; \draw [arrow] ($(PsiP.north)+(0.09cm,0cm)$) -- ($(PhiM.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north west,Q node] {}; \draw [arrow] ($(PsiM.south)-(0.09cm,0cm)$) -- ($(PhiP.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south east,Q node] {}; \draw [arrow] ($(PhiP.east) -(0cm,0.10cm)$) -- ($(PsiM.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=north west,Q node] {}; \draw [arrow] ($(PhiM.east) -(0cm,0.10cm)$) -- ($(PsiM.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=north east,Q node] {}; \draw [arrow] ($(PsiM.north)+(0.09cm,0cm)$) -- ($(PhiM.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south west,Q node] {}; \draw [arrow] ($(PsiP.south)-(0.09cm,0cm)$) -- ($(PhiP.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north east,Q node] {\scriptsize $\gen{Q}^{\ 2}_{\mbox{\tiny L}},\overline{\gen{Q}}{}_{\mbox{\tiny R}}^{\ 2}$}; \draw [arrow] ($(PhiP.west) +(0cm,0.10cm)$) -- ($(PsiP.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=south west,Q node] {}; \draw [dotted] (PsiM) -- (PsiP) node [pos=0.65,anchor=north west,Q node] {\scriptsize $\gen{J}^{\ \ \dot{b}}_{\bullet\dot{a}}$}; \end{scope} \newcommand{1cm}{1cm} \newcommand{1cm}{1cm} \begin{scope}[xshift=1cm,yshift=1cm] \node [box] (PhiM) at ( 0 , 2cm) {\small $\ket{\chi^{2}}$}; \node [box] (PsiP) at (-2cm, 0cm) {\small
$\ket{T^{12}}$}; \node [box] (PsiM) at (+2cm, 0cm) {\small $\ket{T^{22}}$}; \node [box] (PhiP) at ( 0 ,-2cm) {\small $\ket{\widetilde{\chi}^{2}}$}; \newcommand{0.09cm,0cm}{0.09cm,0cm} \newcommand{0cm,0.10cm}{0cm,0.10cm} \draw [arrow] ($(PhiM.west) +(0cm,0.10cm)$) -- ($(PsiP.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=south east,Q node] { }; \draw [arrow] ($(PsiP.north)+(0.09cm,0cm)$) -- ($(PhiM.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north west,Q node] {}; \draw [arrow] ($(PsiM.south)-(0.09cm,0cm)$) -- ($(PhiP.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south east,Q node] {}; \draw [arrow] ($(PhiP.east) -(0cm,0.10cm)$) -- ($(PsiM.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=north west,Q node] {\scriptsize $\overline{\gen{Q}}{}_{\mbox{\tiny L} 1}, \gen{Q}_{\mbox{\tiny R} 1}$}; \draw [arrow] ($(PhiM.east) -(0cm,0.10cm)$) -- ($(PsiM.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=north east,Q node] {}; \draw [arrow] ($(PsiM.north)+(0.09cm,0cm)$) -- ($(PhiM.east) +(0cm,0.10cm)$) node [pos=0.5,anchor=south west,Q node] {\scriptsize $\overline{\gen{Q}}{}_{\mbox{\tiny L} 2}, \gen{Q}_{\mbox{\tiny R} 2}$}; \draw [arrow] ($(PsiP.south)-(0.09cm,0cm)$) -- ($(PhiP.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north east,Q node] { }; \draw [arrow] ($(PhiP.west) +(0cm,0.10cm)$) -- ($(PsiP.south)+(0.09cm,0cm)$) node [pos=0.5,anchor=south west,Q node] {}; \draw [dotted] (PsiM) -- (PsiP) node [pos=0.65,anchor=south west,Q node] {}; \draw [dashed] ($(PhiM.west)+({-0.1cm,0.1cm})$) -- ($(PhiM.east)-({1cm,1cm})+({-4cm,0cm})+({-0.1cm,0.2cm})$); \draw [dashed] (PsiM) -- ($(PsiM.east)-({1cm,1cm})+({-4cm,0cm})$); \draw [dashed] (PsiP) -- ($(PsiP.east)-({1cm,1cm})+({-4cm,0cm})+({-0.1cm,0.1cm})$); \draw [dashed] ($(PhiP.south west)+({0cm,0.2cm})$) -- ($(PhiP.east)-({1cm,1cm})+({-4cm,0cm})+({0cm,-0.1cm})$) node [pos=0.45,anchor=north west,Q node] {\scriptsize $\gen{J}^{\ \ b}_{\circ a}$}; \end{scope} \end{tikzpicture} \caption{The massless module.
It is composed of two short representations of $\alg{psu}(1|1)^4_{\text{c.e.}}$ that are connected by the action of the $\alg{su}(2)_{\circ}$ generators.} \label{fig:massless} \end{figure} The explicit action of the supercharges on the massless module is \begin{equation}\label{eq:exact-repr-massless} \begin{aligned} \gen{Q}_{\mbox{\tiny L}}^{\ \dot{a}} \ket{T^{\dot{b}a}_p}& = \epsilon^{\dot{a}\dot{b}} a_p \ket{\widetilde{\chi}^a_p}, \qquad &\gen{Q}_{\mbox{\tiny L}}^{\ \dot{a}} \ket{\chi^{a}_p} \;&= a_p \ket{T^{\dot{a}a}_p}, \\ \overline{\gen{Q}}{}_{\mbox{\tiny L} \dot{a}} \ket{\widetilde{\chi}^{a}_p}\;& = -\epsilon_{\dot{a}\dot{b}} \bar{a}_p \ket{T^{\dot{b}a}_p}, \qquad &\overline{\gen{Q}}{}_{\mbox{\tiny L} \dot{a}} \ket{T^{\dot{b}a}_p} &= \delta_{\dot{a}}^{\ \dot{b}} \bar{a}_p \ket{\chi^a_p}, \\[8pt] \gen{Q}_{\mbox{\tiny R} \dot{a}} \ket{T^{\dot{b}a}_p} &= \delta_{\dot{a}}^{\ \dot{b}} b_p \ket{\chi^a_p}, \qquad &\gen{Q}_{\mbox{\tiny R} \dot{a}} \ket{\widetilde{\chi}^a_p} \;&= -\epsilon_{\dot{a}\dot{b}} b_p \ket{T^{\dot{b}a}_p}, \\ \overline{\gen{Q}}{}_{\mbox{\tiny R}}^{\ \dot{a}} \ket{\chi^a_p}\;& = \bar{b}_p \ket{T^{\dot{a}a}_p}, \qquad &\overline{\gen{Q}}{}_{\mbox{\tiny R}}^{\ \dot{a}} \ket{T^{\dot{b}a}_p} &= \epsilon^{\dot{a}\dot{b}} \bar{b}_p \ket{\widetilde{\chi}^a_p}. \end{aligned} \end{equation} Masslessness of the excitations is encoded in the fact that they are annihilated by~$\gen{M}$, which results in a constraint on the representation coefficients\footnote{We stress that the coefficients $a_p,\bar{a}_p,b_p,\bar{b}_p$ appearing for the massless module are different from the ones for the massive modules. The dependence on the eigenvalue $m$ is not written explicitly not to burden the notation.} \begin{equation} \label{eq:masslessness} |a_p|^2 = |b_p|^2. \end{equation} To match with the near-BMN limit one has to take $a_p\sim\bar{a}_p\sim \tilde{f}_p$ and $b_p\sim\bar{b}_p\sim \tilde{g}_p$. 
Unlike in the massive case, on shell all supercharges annihilate the massless excitations. \subsection{Bi-fundamental representations}\label{sec:BiFundamentalRepresentationsT4} In Section~\ref{sec:AlgebraTensorProductT4} we showed how it is possible to write the $\alg{psu}(1|1)^4_{\text{c.e.}}$ algebra in terms of $\alg{su}(1|1)^2_{\text{c.e.}}$ generators. In this section we explain how the representations of $\alg{psu}(1|1)^4_{\text{c.e.}}$ that are relevant for {AdS$_3\times$S$^3\times$T$^4$} can be understood as proper tensor products of representations of $\alg{su}(1|1)^2_{\text{c.e.}}$. The representations that we consider in this section are depicted in Figure~\ref{fig:su112-repr}. \begin{figure}[t] \centering \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] (PhiM) at ( 0 , 2cm) {\small $\ket{\phi^{\mbox{\tiny L}}}$}; \node [box] (PsiP) at (-2cm, 0cm) {\small $\ket{\psi^{\mbox{\tiny L}}}$}; \draw [arrow] ($(PhiM.west) +(0cm,0.10cm)$) -- ($(PsiP.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=south east,Q node] {\scriptsize $\mathbb{Q}_{\mbox{\tiny L}},\overline{\mathbb{Q}}_{\mbox{\tiny R}}$}; \draw [arrow] ($(PsiP.north)+(0.09cm,0cm)$) -- ($(PhiM.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north west,Q node] {}; \node [box] at ( -1 , -1cm) {\boxed{\varrho_{\mbox{\tiny L}}}}; \end{tikzpicture} \hspace{0.5cm} % \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] (PhiM) at ( 0 , 2cm) {\small $\ket{\psi^{\mbox{\tiny R}}}$}; \node [box] (PsiP) at (-2cm, 0cm) {\small $\ket{\phi^{\mbox{\tiny R}}}$}; \draw [arrow] ($(PhiM.west) +(0cm,0.10cm)$) -- ($(PsiP.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=south east,Q node] {\scriptsize $\mathbb{Q}_{\mbox{\tiny
L}},\overline{\mathbb{Q}}_{\mbox{\tiny R}}$}; \draw [arrow] ($(PsiP.north)+(0.09cm,0cm)$) -- ($(PhiM.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north west,Q node] {}; \node [box] at ( -1 , -1cm) {\boxed{\varrho_{\mbox{\tiny R}}}}; \end{tikzpicture} \hspace{0.5cm} % \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] (PhiM) at ( 0 , 2cm) {\small $\ket{\widetilde{\psi}^{\mbox{\tiny L}}}$}; \node [box] (PsiP) at (-2cm, 0cm) {\small $\ket{\widetilde{\phi}^{\mbox{\tiny L}}}$}; \draw [arrow] ($(PhiM.west) +(0cm,0.10cm)$) -- ($(PsiP.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=south east,Q node] {\scriptsize $\mathbb{Q}_{\mbox{\tiny L}},\overline{\mathbb{Q}}_{\mbox{\tiny R}}$}; \draw [arrow] ($(PsiP.north)+(0.09cm,0cm)$) -- ($(PhiM.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north west,Q node] {}; \node [box] at ( -1 , -1cm) {\boxed{\widetilde{\varrho}_{\mbox{\tiny L}}}}; \end{tikzpicture} \hspace{0.5cm} % \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] (PhiM) at ( 0 , 2cm) {\small $\ket{\widetilde{\phi}^{\mbox{\tiny R}}}$}; \node [box] (PsiP) at (-2cm, 0cm) {\small $\ket{\widetilde{\psi}^{\mbox{\tiny R}}}$}; \draw [arrow] ($(PhiM.west) +(0cm,0.10cm)$) -- ($(PsiP.north)-(0.09cm,0cm)$) node [pos=0.5,anchor=south east,Q node] {\scriptsize $\mathbb{Q}_{\mbox{\tiny L}},\overline{\mathbb{Q}}_{\mbox{\tiny R}}$}; \draw [arrow] ($(PsiP.north)+(0.09cm,0cm)$) -- ($(PhiM.west) -(0cm,0.10cm)$) node [pos=0.5,anchor=north west,Q node] {}; \node [box] at ( -1 , -1cm) {\boxed{\widetilde{\varrho}_{\mbox{\tiny R}}}}; \end{tikzpicture} \caption{Short representations of $\alg{su}(1|1)^2_{\text{c.e.}}$.
They differ by the label L or R, and by the grading.} \label{fig:su112-repr} \end{figure} We start by considering a possible short representation of $\alg{su}(1|1)^2_{\text{c.e.}}$ that we call $\varrho_{\mbox{\tiny L}}$. It has dimension two, with one boson denoted by $\phi^{\mbox{\tiny L}}$ and one fermion denoted by $\psi^{\mbox{\tiny L}}$. It is defined by the following action of the supercharges, which satisfy the commutation relations~\eqref{eq:comm-rel-su112}-\eqref{eq:comm-rel-su112-ce} \begin{equation}\label{eq:su(1|1)2-repr1} \boxed{\varrho_{\mbox{\tiny L}}:} \qquad\qquad \begin{aligned} \mathbb{Q}_{\sL} \ket{\phi^{\mbox{\tiny L}}_p} &= a_p \ket{\psi^{\mbox{\tiny L}}_p} , \qquad & \mathbb{Q}_{\sL} \ket{\psi^{\mbox{\tiny L}}_p} &= 0 , \\ \overline{\mathbb{Q}}{}_{\sL} \ket{\phi^{\mbox{\tiny L}}_p} &= 0 , \qquad & \overline{\mathbb{Q}}{}_{\sL} \ket{\psi^{\mbox{\tiny L}}_p} &= \bar{a}_p \ket{\phi^{\mbox{\tiny L}}_p} , \\ \mathbb{Q}_{\sR} \ket{\phi^{\mbox{\tiny L}}_p} &= 0 , \qquad & \mathbb{Q}_{\sR} \ket{\psi^{\mbox{\tiny L}}_p} &= b_p \ket{\phi^{\mbox{\tiny L}}_p} , \\ \overline{\mathbb{Q}}{}_{\sR} \ket{\phi^{\mbox{\tiny L}}_p} &= \bar{b}_p \ket{\psi^{\mbox{\tiny L}}_p} , \qquad & \overline{\mathbb{Q}}{}_{\sR} \ket{\psi^{\mbox{\tiny L}}_p} &= 0 . \end{aligned} \end{equation} The choice of the coefficients ensures that the Left and the Right Hamiltonians are positive definite. The above equations identify a \emph{Left} representation, in the sense that on shell, where $b_p=\bar{b}_p=0$, the Right charges annihilate the module.
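As a cross-check not in the original text, the module $\varrho_{\mbox{\tiny L}}$ can be realised by explicit $2\times2$ matrices and the (anti)commutation relations verified numerically; the following numpy sketch uses arbitrary sample values for the coefficients:

```python
import numpy as np

# Basis: e0 = |phi_L>, e1 = |psi_L>; a, b are arbitrary sample coefficients.
a, b = 0.8 + 0.3j, 0.2 - 0.5j

QL  = np.array([[0, 0], [a, 0]])           # Q_L  |phi_L> = a |psi_L>
QLb = np.array([[0, np.conj(a)], [0, 0]])  # Qb_L |psi_L> = abar |phi_L>
QR  = np.array([[0, b], [0, 0]])           # Q_R  |psi_L> = b |phi_L>
QRb = np.array([[0, 0], [np.conj(b), 0]])  # Qb_R |phi_L> = bbar |psi_L>

acomm = lambda X, Y: X @ Y + Y @ X
I2 = np.eye(2)

# {Q_L, Qb_L} = H_L and {Q_R, Qb_R} = H_R: positive multiples of the identity
assert np.allclose(acomm(QL, QLb), abs(a)**2 * I2)
assert np.allclose(acomm(QR, QRb), abs(b)**2 * I2)
# the central extension {Q_L, Q_R} = C is proportional to the identity as well
assert np.allclose(acomm(QL, QR), a * b * I2)
```

In particular $\mathbb{H}_{\sL}=|a_p|^2$ and $\mathbb{H}_{\sR}=|b_p|^2$ on this module, which is why the choice of coefficients makes both Hamiltonians positive.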
Similarly, one could consider a \emph{Right} representation $\varrho_{\mbox{\tiny R}}$ where the role of Left and Right charges is exchanged \begin{equation}\label{eq:su(1|1)2-reprR} \boxed{\varrho_{\mbox{\tiny R}}:} \qquad\qquad \begin{aligned} \mathbb{Q}_{\sR} \ket{\phi_p^{\mbox{\tiny R}}} &= a_p \ket{\psi_p^{\mbox{\tiny R}}} , \qquad & \mathbb{Q}_{\sR} \ket{\psi_p^{\mbox{\tiny R}}} &= 0 , \\ \overline{\mathbb{Q}}{}_{\sR} \ket{\phi_p^{\mbox{\tiny R}}} &= 0 , \qquad & \overline{\mathbb{Q}}{}_{\sR} \ket{\psi_p^{\mbox{\tiny R}}} &= \bar{a}_p \ket{\phi_p^{\mbox{\tiny R}}} , \\ \mathbb{Q}_{\sL} \ket{\phi_p^{\mbox{\tiny R}}} &= 0 , \qquad & \mathbb{Q}_{\sL} \ket{\psi_p^{\mbox{\tiny R}}} &= b_p \ket{\phi_p^{\mbox{\tiny R}}} , \\ \overline{\mathbb{Q}}{}_{\sL} \ket{\phi_p^{\mbox{\tiny R}}} &= \bar{b}_p \ket{\psi_p^{\mbox{\tiny R}}} , \qquad & \overline{\mathbb{Q}}{}_{\sL} \ket{\psi_p^{\mbox{\tiny R}}} &= 0 . \end{aligned} \end{equation} If for lowering operators we conventionally choose the supercharges $\mathbb{Q}_{\sL},\overline{\mathbb{Q}}_{\sR}$---and for raising operators $\mathbb{Q}_{\sR},\overline{\mathbb{Q}}_{\sL}$---then the representations $\varrho_{\mbox{\tiny L}}$ and $\varrho_{\mbox{\tiny R}}$ are distinguished by the fact that the highest weight states are respectively $\phi^{\mbox{\tiny L}}$ and $\psi^{\mbox{\tiny R}}$. Two other representations are found by taking the opposite grading of the representations above, namely by exchanging the role of the boson and the fermion.
We call $\widetilde{\varrho}_{\mbox{\tiny L}}$ the Left representation in which $\psi^{\mbox{\tiny L}}$ is the highest weight state \begin{equation}\label{eq:su(1|1)2-repr2} \boxed{\widetilde{\varrho}_{\mbox{\tiny L}}:} \qquad\qquad \begin{aligned} \mathbb{Q}_{\sL} \ket{\widetilde{\psi}_p^{\mbox{\tiny L}}} &= a_p \ket{\widetilde{\phi}_p^{\mbox{\tiny L}}} , \qquad & \mathbb{Q}_{\sL} \ket{\widetilde{\phi}_p^{\mbox{\tiny L}}} &= 0 , \\ \overline{\mathbb{Q}}{}_{\sL} \ket{\widetilde{\psi}_p^{\mbox{\tiny L}}} &= 0 , \qquad & \overline{\mathbb{Q}}{}_{\sL} \ket{\widetilde{\phi}_p^{\mbox{\tiny L}}} &= \bar{a}_p \ket{\widetilde{\psi}_p^{\mbox{\tiny L}}} , \\ \mathbb{Q}_{\sR} \ket{\widetilde{\psi}_p^{\mbox{\tiny L}}} &= 0 , \qquad & \mathbb{Q}_{\sR} \ket{\widetilde{\phi}_p^{\mbox{\tiny L}}} &= b_p \ket{\widetilde{\psi}_p^{\mbox{\tiny L}}} , \\ \overline{\mathbb{Q}}{}_{\sR} \ket{\widetilde{\psi}_p^{\mbox{\tiny L}}} &= \bar{b}_p \ket{\widetilde{\phi}_p^{\mbox{\tiny L}}} , \qquad & \overline{\mathbb{Q}}{}_{\sR} \ket{\widetilde{\phi}_p^{\mbox{\tiny L}}} &= 0 , \end{aligned} \end{equation} and similarly $\widetilde{\varrho}_{\mbox{\tiny R}}$ the one in which $\phi^{\mbox{\tiny R}}$ is the highest weight state. \smallskip The study of short representations of $\alg{su}(1|1)^2_{\text{c.e.}}$ is useful because the exact representations relevant for {AdS$_3\times$S$^3\times$T$^4$} are \emph{bi-fundamental representations} of $\alg{su}(1|1)^2_{\text{c.e.}}$. It is easy to check that the Left-massive, the Right-massive and the massless modules correspond to the following tensor products of representations \begin{equation} \text{Left}:\ \varrho_{\mbox{\tiny L}} \otimes \varrho_{\mbox{\tiny L}}, \qquad\quad \text{Right}:\ \varrho_{\mbox{\tiny R}} \otimes \varrho_{\mbox{\tiny R}}, \qquad\quad \text{massless}:\ (\varrho_{\mbox{\tiny L}} \otimes \widetilde{\varrho}_{\mbox{\tiny L}})^{\oplus 2}. 
\end{equation} For the massless module one has to consider two copies of $\varrho_{\mbox{\tiny L}} \otimes \widetilde{\varrho}_{\mbox{\tiny L}}$, hence the symbol $\oplus 2$. These two modules transform into each other under the fundamental representation of $\alg{su}(2)_{\circ}$. More precisely, one can identify the massive states as \begin{equation} \label{eq:mv-tensor} \begin{aligned} Y^{\mbox{\tiny L}} = \phi^{\mbox{\tiny L}} \otimes \phi^{\mbox{\tiny L}} , \qquad \eta^{\mbox{\tiny L} 1} = \psi^{\mbox{\tiny L}} \otimes \phi^{\mbox{\tiny L}} ,\; \qquad \eta^{\mbox{\tiny L} 2} = \phi^{\mbox{\tiny L}} \otimes \psi^{\mbox{\tiny L}} ,\; \qquad Z^{\mbox{\tiny L}} = \psi^{\mbox{\tiny L}} \otimes \psi^{\mbox{\tiny L}} , \\ Y^{\mbox{\tiny R}} = {\phi}^{\mbox{\tiny R}} \otimes {\phi}^{\mbox{\tiny R}} , \qquad \eta^{\mbox{\tiny R}}_{\ 1} = {\psi}^{\mbox{\tiny R}} \otimes {\phi}^{\mbox{\tiny R}} , \qquad \eta^{\mbox{\tiny R}}_{\ 2} = {\phi}^{\mbox{\tiny R}} \otimes {\psi}^{\mbox{\tiny R}} , \qquad Z^{\mbox{\tiny R}} = {\psi}^{\mbox{\tiny R}} \otimes {\psi}^{\mbox{\tiny R}} , \end{aligned} \end{equation} and the massless ones as \begin{equation}\label{eq:ml-tensor} \begin{aligned} T^{1a} = \big(\psi^{\mbox{\tiny L}} \otimes \widetilde{\psi}^{\mbox{\tiny L}}\big)^{a} ,\quad \widetilde{\chi}^{a} = \big(\psi^{\mbox{\tiny L}} \otimes \widetilde{\phi}^{\mbox{\tiny L}}\big)^{a} , \quad \chi^{a} = \big(\phi^{\mbox{\tiny L}} \otimes \widetilde{\psi}^{\mbox{\tiny L}}\big)^{a} , \quad T^{2a} = \big(\phi^{\mbox{\tiny L}} \otimes \widetilde{\phi}^{\mbox{\tiny L}}\big)^{a} . \end{aligned} \end{equation} Using the identification~\eqref{eq:supercharges-tensor-product} for the $\alg{psu}(1|1)^4_{\text{c.e.}}$ charges in terms of the ones of $\alg{su}(1|1)^2_{\text{c.e.}}$, it is easy to check that we reproduce the action presented in~\eqref{eq:exact-repr-left-massive}-\eqref{eq:exact-repr-right-massive} and~\eqref{eq:exact-repr-massless}.
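This check can also be carried out numerically. The sketch below (our own illustration; the Kronecker basis ordering and the convention $\epsilon^{12}=+1$ are assumptions of the sketch) verifies that the graded tensor product with the matrix $\Sigma$ of~\eqref{eq:supercharges-tensor-product} reproduces the action of $\gen{Q}_{\mbox{\tiny L}}^{\ \dot{a}}$ on the massless states, including the $\epsilon^{\dot{a}\dot{b}}$ signs:

```python
import numpy as np

a = 0.8 + 0.3j                       # sample value of the coefficient a_p
QL  = np.array([[0, 0], [a, 0]])     # rho_L,  basis (phi_L, psi_L):   Q_L phi_L  = a psi_L
QLt = np.array([[0, a], [0, 0]])     # rho~_L, basis (phi~_L, psi~_L): Q_L psi~_L = a phi~_L
Sig = np.diag([1, -1])               # graded sign: +1 on the boson, -1 on the fermion

QL1 = np.kron(QL, np.eye(2))         # Q_L^1 = Q_L (x) 1
QL2 = np.kron(Sig, QLt)              # Q_L^2 = Sigma (x) Q_L

# basis of the product: e0 = phi(x)phi~, e1 = phi(x)psi~, e2 = psi(x)phi~, e3 = psi(x)psi~
e = np.eye(4)
T2, chi, chit, T1 = e[0], e[1], e[2], e[3]

assert np.allclose(QL1 @ chi, a * T1)    # Q_L^1 |chi>    =  a |T^{1a}>
assert np.allclose(QL2 @ chi, a * T2)    # Q_L^2 |chi>    =  a |T^{2a}>
assert np.allclose(QL1 @ T2,  a * chit)  # Q_L^1 |T^{2a}> = +a |chi~>  (eps^{12} = +1)
assert np.allclose(QL2 @ T1, -a * chit)  # Q_L^2 |T^{1a}> = -a |chi~>  (eps^{21} = -1), sign from Sigma
```

The relative sign between the last two lines, i.e. the $\epsilon^{\dot{a}\dot{b}}$ in~\eqref{eq:exact-repr-massless}, is produced entirely by the graded matrix $\Sigma$.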
\subsection{Left-Right symmetry}\label{sec:LR-symmetry} The labels L and R appearing in the representations for the massive excitations are inherited from the two copies $\alg{psu}(1,1|2)_{\mbox{\tiny L}} \oplus \alg{psu}(1,1|2)_{\mbox{\tiny R}}$ of the symmetry of the string~\cite{Babichenko:2009dk}. It is clear that exchanging the two labels in this case does not produce any difference. Considering the commutation relations of $\mathcal{A}$ in~\eqref{eq:cealgebra}, we see that they remain invariant under the map \begin{equation} \gen{Q}_{\mbox{\tiny L}}{}^{\dot{a}} \longleftrightarrow \gen{Q}_{\mbox{\tiny R} \dot{a}}, \qquad \gen{M} \longrightarrow -\gen{M}. \end{equation} A Left supercharge with upper $\alg{su}(2)_{\bullet}$ index is mapped to a Right supercharge with lower $\alg{su}(2)_{\bullet}$ index because they transform under the fundamental and anti-fundamental representations of $\alg{su}(2)_{\bullet}$, respectively. The Left and Right massive modules inherit a similar $\mathbb{Z}_2$ symmetry that we call Left-Right (LR) symmetry. The map here is given by \begin{equation}\label{eq:LR-massive} Y^{\mbox{\tiny L}} \longleftrightarrow {Y}^{\mbox{\tiny R}}, \qquad Z^{\mbox{\tiny L}} \longleftrightarrow {Z}^{\mbox{\tiny R}}, \qquad \eta^{\mbox{\tiny L} \dot{a}} \longleftrightarrow \eta^{\mbox{\tiny R}}_{\ \dot{a}}. \end{equation} Combining the map for the charges and the one for the states, we find compatibility for the representations~\eqref{eq:exact-repr-left-massive} and~\eqref{eq:exact-repr-right-massive}. This will prove to be extremely useful when constructing the S-matrix. At the level of the bi-fundamental representations, it is clear that the map above is equivalent to exchanging the labels L and R in~\eqref{eq:mv-tensor}. Let us consider the massless representation in~\eqref{eq:exact-repr-massless}, or its bi-fundamental structure~\eqref{eq:ml-tensor}.
Na\"ively, it seems that the notion of LR symmetry cannot be extended to this module, as only Left representations are used for the construction. It turns out that LR symmetry is naturally implemented, and the resolution is in the masslessness of these excitations: there exists a momentum-dependent change of basis for the massless states\footnote{Using the parameterisation of Section~\ref{sec:RepresentationCoefficientsT4} one can check that the rescalings are in fact just a sign, $\frac{a_p}{b_p}=-\text{sign} \big(\sin \frac{p}{2}\big)$.} \begin{equation} \label{eq:mless-rescaling} \ket{ \underline{\widetilde{\chi}}{}_p^a } = -\frac{a_p}{b_p} \ket{ \widetilde{\chi}_p^a }, \qquad \ket{ \underline{\chi}{}_p^a } = \frac{b_p}{a_p} \ket{ \chi_p^a }, \end{equation} under which the action of the supercharges becomes \begin{equation}\label{eq:repr-massless2} \boxed{(\varrho_{\mbox{\tiny R}}\otimes\widetilde{\varrho}_{\mbox{\tiny R}})^{\oplus 2}}: \qquad \begin{aligned} \gen{Q}_{\mbox{\tiny L}}^{\ \dot{a}} \ket{T^{\dot{b}a}_p} &= -\epsilon^{\dot{a}\dot{b}} b_p \ket{\underline{\widetilde{\chi}}{}^{a}_p}, \qquad & \gen{Q}_{\mbox{\tiny L}}^{\ \dot{a}} \ket{\underline{\chi}{}^{a}_p}\ &= b_p \ket{T^{\dot{a}a}_p}, \\ \overline{\gen{Q}}{}_{\mbox{\tiny L} \dot{a}} \ket{\underline{\widetilde{\chi}}{}^{a}_p}\ &= \epsilon_{\dot{a}\dot{b}} \bar{b}_p \ket{T^{\dot{b}a}_p}, \qquad & \overline{\gen{Q}}{}_{\mbox{\tiny L} \dot{a}} \ket{T^{\dot{b}a}_p} &= \delta_{\dot{a}}^{\ \dot{b}} \bar{b}_p \ket{\underline{\chi}{}^a_p}, \\[8pt] \gen{Q}_{\mbox{\tiny R} \dot{a}} \ket{T^{\dot{b}a}_p} &= \delta_{\dot{a}}^{\ \dot{b}} a_p \ket{\underline{\chi}{}^a_p}, \qquad & \gen{Q}_{\mbox{\tiny R} \dot{a}} \ket{\underline{\widetilde{\chi}}{}^a_p}\ &= \epsilon_{\dot{a}\dot{b}} a_p \ket{T^{\dot{b}a}_p}, \\ \overline{\gen{Q}}{}_{\mbox{\tiny R}}^{\ \dot{a}} \ket{\underline{\chi}{}^a_p}\ &= \bar{a}_p \ket{T^{\dot{a}a}_p}, \qquad & \overline{\gen{Q}}{}_{\mbox{\tiny R}}^{\ \dot{a}} \ket{T^{\dot{b}a}_p} &=
-\epsilon^{\dot{a}\dot{b}} \bar{a}_p \ket{\underline{\widetilde{\chi}}{}^a_p}. \end{aligned} \end{equation} The bi-fundamental structure in this case corresponds to the identifications \begin{equation}\label{eq:ml-tensor-R} \begin{aligned} T_{1a} = \big({\psi}^{\mbox{\tiny R}} \otimes \widetilde{\psi}^{\mbox{\tiny R}}\big)_{a} , \quad \underline{\widetilde{\chi}}{}_{a} = \big({\phi}^{\mbox{\tiny R}} \otimes \widetilde{\psi}^{\mbox{\tiny R}}\big)_{a} , \quad \underline{\chi}{}_{a} = \big({\psi}^{\mbox{\tiny R}} \otimes \widetilde{\phi}^{\mbox{\tiny R}}\big)_{a} , \quad T_{2a} = \big({\phi}^{\mbox{\tiny R}} \otimes \widetilde{\phi}^{\mbox{\tiny R}}\big)_{a} . \end{aligned} \end{equation} This change of basis yields the above representation only when the states are massless, as we need to use explicitly $|a_p|^2=|b_p|^2$. It is then clear that a notion of LR symmetry is present also for the massless module, where we have the rules \begin{equation} \label{eq:LR-massless} \ket{T^{\dot{a}a}} \longleftrightarrow \ket{T_{\dot{a} a}}, \qquad \ket{\widetilde{\chi}^a} \longleftrightarrow +\frac{b_p}{a_p} \ket{\chi_a}, \qquad \ket{\chi^a} \longleftrightarrow -\frac{a_p}{b_p} \ket{\widetilde{\chi}_a}. \end{equation} It is interesting to note that one might perform also a different rescaling \begin{equation}\label{eq:ml-tensor-LR} \ket{ \underline{\widetilde{\chi}}{}_p^1 } = \ket{ \widetilde{\chi}_p^1 }, \quad \ket{ \underline{\chi}{}_p^1 } = \ket{ \chi_p^1 }, \qquad \ket{ \underline{\widetilde{\chi}}{}_p^2 } = -\frac{a_p}{b_p} \ket{ \widetilde{\chi}_p^2 }, \quad \ket{ \underline{\chi}{}_p^2 } = \frac{b_p}{a_p} \ket{ \chi_p^2 }. \end{equation} Doing this, one would get a bi-fundamental structure of the form $({\varrho}_{\mbox{\tiny L}}\otimes\widetilde{\varrho}_{\mbox{\tiny L}}) \oplus({\varrho}_{\mbox{\tiny R}}\otimes\widetilde{\varrho}_{\mbox{\tiny R}})$. 
Now both Left and Right representations would be used to construct the massless module, where Left corresponds to the $\alg{su}(2)_{\bullet}$ index $\dot{a}=1$ and Right to $\dot{a}=2$. LR symmetry would be implemented just by swapping the two $\alg{su}(2)_{\bullet}$ flavors. \subsection{Representation coefficients}\label{sec:RepresentationCoefficientsT4} In the previous section we presented the action of the odd generators of $\mathcal{A}$ on the massive and massless states. It is written in terms of two complex coefficients $a_p,b_p$---depending on the momentum $p$ of the excitation and the eigenvalue $m$ of the central charge $\gen{M}$ on the specific module---and their complex conjugates $\bar{a}_p,\bar{b}_p$. Computing anti-commutators of supercharges we are able to write the relation between these coefficients and the eigenvalues of the central charges. These are known at any value of the coupling constant, thanks to the results coming from the explicit worldsheet computation~\eqref{eq:allloop-centralcharges} and the shortening condition~\eqref{eq:short-cond}\footnote{The eigenvalue of the charge $\gen{M}$ is denoted by $m$. In these equations the absolute value of $m$ appears because we get $a_p \bar{a}_p - b_p \bar{b}_p=m=+1$ for Left states and $a_p \bar{a}_p - b_p \bar{b}_p=-m=+1$ for Right states. On massless states we have $a_p \bar{a}_p - b_p \bar{b}_p=m=0$.} \begin{equation} \begin{aligned} \gen{M}: & \qquad a_p \bar{a}_p - b_p \bar{b}_p = |m|\,, \\ \gen{H}: & \qquad a_p \bar{a}_p + b_p \bar{b}_p = \sqrt{ m^2 + 4 h^2 \sin^2 \frac{p}{2}}\,, \\ \gen{C}: & \qquad \phantom{+ b_p \bar{b}_p\ } a_p b_p = h\, \frac{i}{2}(e^{ip}-1)\,\zeta\,. \end{aligned} \end{equation} Here $\zeta=e^{2i\, \xi}$ is a function that characterises the representation. On one-particle states it can be taken to be $1$, but in Section~\ref{sec:two-part-repr-T4} we will show that this is not the case when constructing two-particle states.
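Note that the three relations are mutually consistent: the first two fix $|a_p|^2$ and $|b_p|^2$, and the squared modulus of the third then reproduces the shortening condition. A quick numeric check (our illustration, with sample values and $\zeta=1$):

```python
import numpy as np

m, h, p = 1, 1.3, 0.7                       # sample values
E = np.sqrt(m**2 + 4*h**2*np.sin(p/2)**2)   # eigenvalue of H

aa = (E + abs(m))/2                         # |a_p|^2, from the M and H relations
bb = (E - abs(m))/2                         # |b_p|^2
C  = 0.5j*h*(np.exp(1j*p) - 1)              # a_p b_p, from the C relation (zeta = 1)

# shortening condition: |a_p|^2 |b_p|^2 = |a_p b_p|^2 = h^2 sin^2(p/2)
assert np.isclose(aa*bb, abs(C)**2)
assert np.isclose(abs(C)**2, h**2*np.sin(p/2)**2)
```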
A way to solve the above equations is to introduce the Zhukovski parameters $x^\pm$, which satisfy \begin{equation} \label{eq:zhukovski} x^+_p +\frac{1}{x^+_p} -x^-_p -\frac{1}{x^-_p} = \frac{2i \, |m|}{h}, \qquad \frac{x^+_p}{x^-_p}=e^{ip}. \end{equation} Then we can take the representation coefficients to be \begin{equation}\label{eq:expl-repr-coeff} a_p = \eta_p e^{i\xi}, \quad \bar{a}_p = \eta_p \left( \frac{x^+_p}{x^-_p}\right)^{-1/2} e^{-i\xi}, \quad b_p = -\frac{\eta_p}{x^-_p} \left( \frac{x^+_p}{x^-_p}\right)^{-1/2} e^{i\xi}, \quad \bar{b}_p = -\frac{\eta_p}{x^+_p} e^{-i\xi}, \end{equation} where we have introduced the function \begin{equation}\label{eq:def-eta} \eta_p = \left( \frac{x^+_p}{x^-_p}\right)^{1/4} \sqrt{\frac{ih}{2}(x^-_p - x^+_p)}\,. \end{equation} This parameterisation coincides with the one of~\cite{Arutyunov:2009ga}. The constraints on the spectral parameters $x^\pm$ can be solved by taking \begin{equation}\label{eq:xpm-funct-p} x^\pm_p = \frac{e^{\pm\frac{i p}{2}} \csc \left(\frac{p}{2}\right) \left(|m|+\sqrt{m^2+4 h^2 \sin ^2\left(\frac{p}{2}\right)}\right)}{2 h}\, , \end{equation} where the branch of the square root has been chosen such that $|x^\pm_p|>1$ for real values of the momentum $p$, when we consider massive states $|m|>0$. For massless states, we have simply \begin{equation} x^\pm_p = \text{sgn}(\sin \tfrac{p}{2})\, e^{\pm\frac{i p}{2}}, \qquad E_p = 2h \left|\sin \frac{p}{2}\right|. \end{equation} In the massless case the spectral parameters lie on the unit circle. As in the massive case, the dispersion relation is not relativistic; at strong coupling it takes the form of a giant-magnon dispersion relation~\cite{Hofman:2006xt}. \section{Finite-gap equations}\label{sec:strong-limit-T4} Taking the limit of large string tension, one can make contact with rigid string solutions that are constructed explicitly by solving the classical equations of motion.
On the other hand, the formulation in terms of a Lax connection allows one to write down the so-called \emph{finite-gap equations}, from which one can find the spectrum of the classical integrable model. We refer to~\cite{SchaferNameki:2010jy} for a review on this in the context of AdS/CFT. Here we take a thermodynamic limit of the Bethe-Yang equations of Section~\ref{sec:BAE} at large tension, to recover the finite-gap equations for the massive sector of {AdS$_3\times$S$^3\times$T$^4$}. To start, we expand $x^\pm$ for large values of the string tension. We consider the Zhukovski parameters $x^\pm_p$ expressed in terms of the momentum $p$ as in Eq.~\eqref{eq:xpm-funct-p}, which solve the constraints~\eqref{eq:zhukovski}. The large-tension limit of these parameters is obtained by first rescaling the momentum $p= {\rm p}/h$ and then expanding the expressions at large $h$, obtaining\footnote{Since we are considering massive excitations, we obtain the same result for left or right movers on the worldsheet. One should instead distinguish between these two cases when considering massless excitations.} \begin{equation}\label{eq:resc-p-xpm} x^\pm_p =\frac{\sqrt{m^2+{\rm p}^2}+|m|}{{\rm p}} \pm \frac{i \left(\sqrt{m^2+{\rm p}^2}+|m|\right)}{2 h} +\mathcal{O}(1/h^2). \end{equation} We parameterise the leading contribution in the expansion with a spectral parameter $x$, obtaining \begin{equation} x=\frac{\sqrt{m^2+{\rm p}^2}+|m|}{{\rm p}} \implies {\rm p}=\frac{2 |m| x}{x^2-1}, \end{equation} \begin{equation} x^\pm_p =x \pm \frac{i \, |m| \, x^2}{h(x^2-1)} +\mathcal{O}(1/h^2). \end{equation} Notice that it was important to assume that $m\neq 0$ when solving for ${\rm p}$ in terms of $x$.
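As a numeric sanity check of this expansion (not part of the original text; the helper \texttt{xpm} is our own name), one can compare the exact parameters of Eq.~\eqref{eq:xpm-funct-p} with the expression above at large $h$:

```python
import numpy as np

def xpm(p, m, h):
    """Exact x^+-_p of the massive parameterisation, solving the constraints."""
    s = np.sqrt(m**2 + 4*h**2*np.sin(p/2)**2)
    base = (abs(m) + s)/(2*h*np.sin(p/2))
    return base*np.exp(1j*p/2), base*np.exp(-1j*p/2)

m, h, pbar = 1, 1.0e4, 0.6
xp, xm = xpm(pbar/h, m, h)                       # rescaled momentum p = pbar/h

# the exact parameters solve the Zhukovski constraints
assert np.isclose(xp/xm, np.exp(1j*pbar/h))
assert np.isclose(xp + 1/xp - xm - 1/xm, 2j*abs(m)/h)

# leading spectral parameter x and the claimed expansion to O(1/h)
x = (np.sqrt(m**2 + pbar**2) + abs(m))/pbar
corr = 1j*abs(m)*x**2/(h*(x**2 - 1))
assert abs(xp - (x + corr)) < 10/h**2            # remainder is O(1/h^2)
assert abs(xm - (x - corr)) < 10/h**2
assert np.isclose(pbar, 2*abs(m)*x/(x**2 - 1))   # inversion of x(p)
```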
For a single excitation, momentum and energy (difference) are given by \begin{equation} \begin{aligned} {\rm p} &= h\, p = -i \, h \log \frac{x^+_p}{x^-_p} = \frac{2 |m|\, x}{x^2 - 1}+\mathcal{O}(1/h), \\ \Delta E_p &= -|m| + \sqrt{m^2 +4 h^2 \sin^2 \frac{p}{2}} = -i \, h \left( \frac{1}{x^-_p} - \frac{1}{x^+_p} \right) = \frac{2 |m|}{x^2 - 1} +\mathcal{O}(1/h). \end{aligned} \end{equation} To take the finite-gap limit we consider a large number $K_i$ of excitations, where the index $i$ denotes the possible types of massive excitations, taking the values $i=1,2,3,\bar{1},\bar{2},\bar{3}$. More precisely, we take the number of excitations to scale like the string tension $K_i \sim h$, and we define densities as \begin{equation} \rho_i(x) \equiv \frac{2}{h} \sum_{k=1}^{K_i} \frac{x^2}{x^2-1} \delta(x-x_{i,k})\,. \end{equation} Momentum and energy (difference) for the collection of the excitations are then expressed as the integrals \begin{equation} \mathcal{P}_i \equiv \int {\rm d}x \ \frac{\rho_i(x)}{x} , \qquad \epsilon_i \equiv \int {\rm d}x \ \frac{\rho_i(x)}{x^2} . \end{equation} The finite-gap limit of the Bethe-Yang equations is taken by considering each factor\footnote{Here $S_{pq}$ stands for any product of rational expressions of $x^\pm$ and dressing phases that appear in our Bethe-Yang equations.} $S_{pq}$ and by computing $-i\, \log S_{pq}$ in the large $h$ limit, using the formulas above for the expansion of $x^\pm$ and keeping the auxiliary roots finite. For the massive sector of {AdS$_3\times$S$^3\times$T$^4$} this yields the following equations\footnote{To avoid confusion, we recall that the finite-gap equations use slightly different notational conventions from those of the Bethe-Yang equations. For finite-gap we use the letter $x$ for the variable that solves the given equation, while $y$ is used for any variable on which we integrate.
There is no distinction anymore in the notation for momentum carrying nodes and auxiliary roots.} \begin{equation} \begin{aligned} 2\pi n_1 &= - \int\frac{\rho_2(y)}{x-y}dy- \int\frac{\rho_{\bar{2}}(y)}{x-1/y}\frac{dy}{y^2} { -\frac{1}{2} \left( \mathcal{P}_{2} + \mathcal{P}_{\bar{2}} \right)}\,,\\ 2\pi n_2&=-\frac{x}{x^2-1}2\pi\mathcal{E}-\int\frac{\rho_1(y)}{x-y}dy + 2 \;\Xint-\frac{\rho_2(y)}{x-y}dy -\int\frac{\rho_3(y)}{x-y}dy\\ &\phantom{={}} +\int\frac{\rho_{\bar{1}}(y)}{x-1/y}\frac{dy}{y^2} + \int\frac{\rho_{\bar{3}}(y)}{x-1/y}\frac{dy}{y^2}+\frac{1}{x^2-1}\mathcal{M} { + \left( \mathcal{P}_{2} + \mathcal{P}_{\bar{2}} \right)}\,, \\ 2\pi n_3&= - \int\frac{\rho_2(y)}{x-y}dy- \int\frac{\rho_{\bar{2}}(y)}{x-1/y}\frac{dy}{y^2} { -\frac{1}{2} \left( \mathcal{P}_{2} + \mathcal{P}_{\bar{2}} \right)}\,,\\ \label{eq:FGlimit} 2\pi n_{\bar{1}}&= \int\frac{\rho_{2}(y)}{x-1/y}\frac{dy}{y^2}+\int\frac{\rho_{\bar{2}}(y)}{x-y}dy { +\frac{1}{2} \left( \mathcal{P}_{2} + \mathcal{P}_{\bar{2}} \right)} \,,\\ 2\pi n_{\bar{2}}&=-\frac{x}{x^2-1}2\pi\mathcal{E}-\int\frac{\rho_1(y)}{x-1/y}\frac{dy}{y^2} -\int\frac{\rho_3(y)}{x-1/y}\frac{dy}{y^2}\\ &\phantom{={}} +\int\frac{\rho_{\bar{1}}(y)}{x-y}dy -2 \;\Xint-\frac{\rho_{\bar{2}}(y)}{x-y}dy + \int\frac{\rho_{\bar{3}}(y)}{x-y}dy+\frac{1}{x^2-1}\mathcal{M}\,, \\ 2\pi n_{\bar{3}}&= \int\frac{\rho_{2}(y)}{x-1/y}\frac{dy}{y^2}+\int\frac{\rho_{\bar{2}}(y)}{x-y}dy { +\frac{1}{2} \left( \mathcal{P}_{2} + \mathcal{P}_{\bar{2}} \right)}\,. \\ \end{aligned} \end{equation} The explicit factors containing $\mathcal{P}_{2} + \mathcal{P}_{\bar{2}}$ are frame-dependent, and would not be present if we took the finite-gap limit of the Bethe-Yang equations written in the spin-chain frame. The limit allows us also to read off the residue of the quasi-momentum $\mathcal{E}$, that is the same for the node $2$ and $\bar{2}$. 
\begin{equation} \mathcal{E} = \frac{1}{2 \pi} \left(\frac{2}{h}L -\epsilon_1 +2 \epsilon_2 -\epsilon_3 +\epsilon_{\bar{1}} +\epsilon_{\bar{3}} { -\frac{2}{h} \left(\frac{1}{2}K_{1} -K_2 +\frac{1}{2}K_{3} -\frac{1}{2}K_{\bar{1}}-\frac{1}{2}K_{\bar{3}} \right) } \right). \end{equation} The factor $1/h$ is consistent with the fact that we have taken the length $L$ and the excitation numbers to be large, and only the ratios $L/h, K_i/h$ remain finite. The quantity $\mathcal{M}$ reads \begin{equation}\label{eq:winding-finite-gap} \mathcal{M}=\mathcal{P}_1+\mathcal{P}_3-\mathcal{P}_{\bar{1}}+2 \mathcal{P}_{\bar{2}}- \mathcal{P}_{\bar{3}} . \end{equation} The finite-gap equations that we have obtained here are equivalent to the ones constructed in~\cite{Babichenko:2009dk} with the help of the Lax connection. \begin{comment} \subsection{Near-BMN and near-FS limits} The near-BMN limit is defined by first rescaling the momentum $p\to p/h$ and then expanding at large values of the parameter $h$. This limit may be performed directly at the level of the S-matrix derived in Section~\ref{}. Doing so, one finds that this is expanded as \begin{equation} \mathbf{S}_{pq}=\mathbf{1} + \frac{i}{h} \mathbf{T}_{pq} + \mathcal{O}(1/h^2)\,. \end{equation} The object $\mathbf{T}_{pq}$ is called the tree-level S-matrix. Physical and braiding unitarity of $\mathbf{S}_{pq}$ imply \begin{equation} \mathbf{T}_{qp}=\mathbf{T}_{pq}^\dagger=-\mathbf{T}_{pq}\,, \end{equation} while the fact that $\mathbf{S}_{pq}$ satisfies the quantum Yang-Baxter equation implies that $\mathbf{T}_{pq}$ satisfies the \emph{classical} Yang-Baxter equation. The tree-level S-matrix may be computed with standard methods by first fixing light-cone gauge for the $\sigma$-model action, and then expanding the result in powers of the transverse fields. The quartic Hamiltonian provides the two-body scattering elements that define $\mathbf{T}_{pq}$.
Such a computation yields a result that is compatible with the strong coupling limit of our S-matrix. Computing loop corrections one can consider the corrections to the tree-level results, which are organised in higher powers of $1/h$. \end{comment} \section{Off-shell symmetry algebra of AdS$_3\times$S$^3\times$T$^4$}\label{sec:SymmetryAlgebraT4} To familiarise the reader with the symmetry algebra $\mathcal{A}$ derived in~\cite{Borsato:2014exa}, we start by introducing the notation for the bosonic and fermionic charges, and we present the (anti)commutation relations that they satisfy. To begin we have the anti-commutators \begin{equation} \label{eq:cealgebra} \begin{aligned} &\{\gen{Q}_{\sL}^{\ \dot{a}},\overline{\gen{Q}}{}_{\sL \dot{b}}\} =\frac{1}{2}\delta^{\dot{a}}_{\ \dot{b}}\,(\gen{H}+\gen{M}), &\qquad &\{\gen{Q}_{\sL}^{\ \dot{a}},{\gen{Q}}{}_{\sR \dot{b}}\} =\delta^{\dot{a}}_{\ \dot{b}}\,\gen{C}, \\ &\{\gen{Q}_{\sR \dot{a}},\overline{\gen{Q}}{}_{\sR}^{\ \dot{b}}\} =\frac{1}{2}\delta^{\ \dot{b}}_{\dot{a}}\,(\gen{H}-\gen{M}), &\qquad &\{\overline{\gen{Q}}{}_{\sL \dot{a}},\overline{\gen{Q}}{}_{\sR}^{\ \dot{b}}\} =\delta^{\ \dot{b}}_{\dot{a}}\,\overline{\gen{C}}. \end{aligned} \end{equation} Here $\gen{H},\gen{M},\gen{C},\overline{\gen{C}}$ are central elements of the algebra. The charge $\gen{H}$ corresponds to the Hamiltonian, and $\gen{M}$ to a combination of angular momenta in AdS$_3\times$S$^3$. The charges $\gen{C},\overline{\gen{C}}$ are related by complex conjugation and they appear only after relaxing the level matching condition, see Chapter~\ref{ch:strings-light-cone-gauge}. If we set $\gen{C}=\overline{\gen{C}}=0$ we remove the central extension, and the two copies Left (L) and Right (R) of the algebra decouple. The supercharges are denoted by $\gen{Q}$ and the bar means complex conjugation.
The labels L or R are inherited from the superisometry algebra $\alg{psu}(1,1|2)_{\mbox{\tiny L}}\oplus\alg{psu}(1,1|2)_{\mbox{\tiny R}} $, where they refer to the chirality in the dual~$\textup{CFT}_2 $. The supercharges transform under the fundamental and anti-fundamental representations of $\alg{su}(2)_{\bullet}$, whose indices are denoted by $\dot{a}=1,2$ \begin{equation} \comm{{\gen{J}_{\bullet {\dot{a}}}}^{\dot{b}}}{\gen{Q}_{\dot{c}}} = \delta^{\dot{b}}_{\ {\dot{c}}} \gen{Q}_{\dot{a}} - \frac{1}{2} \delta^{\ {\dot{b}}}_{\dot{a}} \gen{Q}_{\dot{c}}, \qquad \comm{{\gen{J}_{\bullet {\dot{a}}}}^{\dot{b}}}{\gen{Q}^{\dot{c}}} = -\delta^{\ {\dot{c}}}_{\dot{a}} \gen{Q}^{\dot{b}} + \frac{1}{2} \delta^{\ {\dot{b}}}_{\dot{a}} \gen{Q}^{\dot{c}}. \end{equation} Here ${\gen{J}_{\bullet {\dot{a}}}}^{\dot{b}}$ denotes the generators of $\alg{su}(2)_{\bullet}$. Together with the generators ${\gen{J}_{\circ a}}^b$ of $\alg{su}(2)_{\circ}$---under which the supercharges are not charged---they span the algebra $\alg{so}(4) = \alg{su}(2)_{\bullet} \oplus \alg{su}(2)_{\circ}$ \begin{equation} \comm{{\gen{J}_{\bullet \dot{a}}}^{\dot{b}} }{ {\gen{J}_{\bullet \dot{c}}}^{\dot{d}}} = \delta^{\dot{b}}_{\ \dot{c}}\, {\gen{J}_{\bullet \dot{a}}}^{\dot{d}} - \delta^{\dot{d}}_{\ \dot{a}}\, {\gen{J}_{\bullet \dot{c}}}^{\dot{b}}, \qquad \comm{{\gen{J}_{\circ a}}^b}{{\gen{J}_{\circ c}}^d} = \delta^b_{\ c}\, {\gen{J}_{\circ a}}^d - \delta^d_{\ a}\, {\gen{J}_{\circ c}}^b. \end{equation} The whole set of (anti-)commutation relations defines the algebra $\mathcal{A}$, that we continue to study in more detail in the rest of the chapter. 
\subsection{The symmetry algebra as a tensor product}\label{sec:AlgebraTensorProductT4} Focusing on the subalgebra $\alg{psu}(1|1)^4_\text{c.e.}\subset \mathcal{A}$, it is convenient to rewrite its generators---namely the supercharges and the central charges---in terms of generators of a smaller algebra,\footnote{This possibility has a counterpart in the case of AdS$_5\times$S$^5$, where the generators that commute with the light-cone Hamiltonian close into two copies of $\alg{su}(2|2)_{\text{c.e.}}$~\cite{Arutyunov:2006ak}. The S-matrix may be then written as a tensor product of two $\alg{su}(2|2)_{\text{c.e.}}$-invariant S-matrices. In Section~\ref{sec:smat-tensor-prod} we will show how in the case of {AdS$_3\times$S$^3\times$T$^4$} we may rewrite an S-matrix compatible with $\alg{psu}(1|1)^4_\text{c.e.}\subset \mathcal{A}$ as a tensor product of two $\alg{su}(1|1)^2_\text{c.e.}$-invariant S-matrices.} that we call $\alg{su}(1|1)^2_\text{c.e.}$. Let us start from $\alg{su}(1|1)^2=\alg{su}(1|1)_{\mbox{\tiny L}}\oplus\alg{su}(1|1)_{\mbox{\tiny R}}$, defined as the sum of two copies of $\alg{su}(1|1)$ labelled by L and R \begin{equation}\label{eq:comm-rel-su112} \acomm{\mathbb{Q}_{\sL}}{\overline{\mathbb{Q}}{}_{\sL}} = \mathbb{H}_{\sL}, \qquad \acomm{\mathbb{Q}_{\sR}}{\overline{\mathbb{Q}}{}_{\sR}} = \mathbb{H}_{\sR}. \end{equation} A central extension of this is the algebra $\alg{su}(1|1)^2_\text{c.e.}$ that we want to consider. The two new central elements $\mathbb{C},\overline{\mathbb{C}}$ appear on the right hand side of the following anti-commutators mixing L and R~\cite{Borsato:2012ud} \begin{equation}\label{eq:comm-rel-su112-ce} \acomm{\mathbb{Q}_{\sL}}{\mathbb{Q}_{\sR}} = \mathbb{C}\,, \qquad \acomm{\overline{\mathbb{Q}}{}_{\sL}}{\overline{\mathbb{Q}}{}_{\sR}} = \overline{\mathbb{C}}\,. 
\end{equation} It is now easy to see that the supercharges of $\alg{psu}(1|1)^4$ appearing in the previous subsection may be constructed via the elements of $\alg{su}(1|1)^2_\text{c.e.}$. Intuitively, we identify the $\alg{su}(2)_\bullet$ index ``$1$'' with the first space in a tensor product, and the index ``$2$'' with the second space\footnote{It is important to make this identification when Left supercharges have an upper $\alg{su}(2)_\bullet$ index, while for Right supercharges the index is lower. In fact, for this rewriting to work, if Left supercharges transform in the anti-fundamental representation of $\alg{su}(2)_\bullet$, then Right supercharges have to transform in the fundamental---or vice versa. Hermitian conjugation swaps fundamental and anti-fundamental representations.} and we write \begin{equation}\label{eq:supercharges-tensor-product} \begin{aligned} \gen{Q}_{\sL}^{\ 1} = \mathbb{Q}_{\sL} \otimes \mathbf{1} ,\, \qquad \overline{\gen{Q}}{}_{\sL 1} = \overline{\mathbb{Q}}{}_{\sL} \otimes \mathbf{1} ,\, \qquad \gen{Q}_{\sL}^{\ 2} = \Sigma \otimes \mathbb{Q}_{\sL} ,\, \qquad \overline{\gen{Q}}{}_{\sL 2} = \Sigma \otimes \overline{\mathbb{Q}}{}_{\sL} , \\ \gen{Q}_{\sR 1} = \mathbb{Q}_{\sR} \otimes \mathbf{1} , \qquad \overline{\gen{Q}}{}_{\sR}^{\ 1} = \overline{\mathbb{Q}}{}_{\sR} \otimes \mathbf{1} , \qquad \gen{Q}_{\sR 2} = \Sigma \otimes \mathbb{Q}_{\sR} , \qquad \overline{\gen{Q}}{}_{\sR}^{\ 2} = \Sigma \otimes \overline{\mathbb{Q}}{}_{\sR} . \end{aligned} \end{equation} The matrix $\Sigma$ is defined as the diagonal matrix taking value $+1$ on bosons and $-1$ on fermions. In this way we can take into account the odd nature of the supercharges while using the ordinary tensor product $\otimes$.
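As a quick sanity check of this construction, one can realise $\alg{su}(1|1)$ on a two-dimensional boson-fermion module and verify that the insertion of $\Sigma$ is precisely what makes supercharges acting on different tensor factors anticommute. The following is a minimal numerical sketch; the explicit $2\times2$ matrices are an illustrative choice, not the representation used later in the chapter.

```python
import numpy as np

# su(1|1) on a (boson, fermion) doublet: Q maps boson -> fermion
Q = np.array([[0.0, 0.0], [1.0, 0.0]])
Qbar = Q.T                       # conjugate supercharge
Sigma = np.diag([1.0, -1.0])     # +1 on bosons, -1 on fermions
I2 = np.eye(2)

def acomm(A, B):
    return A @ B + B @ A

# {Q, Qbar} = H: here the identity, i.e. central charge eigenvalue 1
assert np.allclose(acomm(Q, Qbar), I2)

# Supercharges on the two tensor factors, dressed with Sigma as in the text
Q1 = np.kron(Q, I2)      # acts on the first factor
Q2 = np.kron(Sigma, Q)   # acts on the second factor

# The Sigma insertion makes the two fermionic charges anticommute...
assert np.allclose(acomm(Q1, Q2), np.zeros((4, 4)))
# ...while a naive tensor product Q (x) 1 and 1 (x) Q would not
assert not np.allclose(acomm(np.kron(Q, I2), np.kron(I2, Q)), np.zeros((4, 4)))
```

The last assertion illustrates why the graded dressing is needed: without $\Sigma$, charges on different factors would commute rather than anticommute.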
Following the same rule, for the central elements we first define \begin{equation}\label{eq:supercharges-tensor-product-C} \begin{aligned} &\gen{H}_{\sL}^{\ 1} = \mathbb{H}_{\sL} \otimes \mathbf{1} , \qquad && \gen{H}_{\sL}^{\ 2} = \mathbf{1} \otimes \mathbb{H}_{\sL} , \qquad && \gen{C}^{1} = \mathbb{C} \otimes \mathbf{1} , \qquad && \gen{C}^{2} = \mathbf{1} \otimes \mathbb{C} , \\ &\gen{H}_{\sR}^{\ 1} = \mathbb{H}_{\sR} \otimes \mathbf{1} , \qquad && \gen{H}_{\sR}^{\ 2} = \mathbf{1} \otimes \mathbb{H}_{\sR} , \qquad && \overline{\gen{C}}{}^{1} = \overline{\mathbb{C}} \otimes \mathbf{1} , \qquad && \overline{\gen{C}}{}^{2} = \mathbf{1} \otimes \overline{\mathbb{C}} . \end{aligned} \end{equation} To reproduce the property that these generators are not charged under the $\alg{su}(2)_\bullet$ algebra, we identify the charges in the two spaces as \begin{equation} \gen{H}_{\mbox{\tiny L}}\equiv\gen{H}_{\sL}^{\ 1}=\gen{H}_{\sL}^{\ 2}, \quad \gen{H}_{\mbox{\tiny R}}\equiv\gen{H}_{\sR}^{\ 1}=\gen{H}_{\sR}^{\ 2}, \qquad \gen{C}\equiv\gen{C}^1=\gen{C}^2, \quad \overline{\gen{C}}\equiv\overline{\gen{C}}{}^1=\overline{\gen{C}}{}^2. \end{equation} Another consequence of this requirement is that the above generators become proportional to the identity operator on irreducible representations. Using these identifications and the anti-commutation relations~\eqref{eq:comm-rel-su112}-\eqref{eq:comm-rel-su112-ce}, one can check that the anti-commutation relations~\eqref{eq:cealgebra} of $\alg{psu}(1|1)^4_\text{c.e.}$ are satisfied, where we have \begin{equation} \gen{H}=\gen{H}_{\sL}+\gen{H}_{\sR},\qquad \gen{M}=\gen{H}_{\sL}-\gen{H}_{\sR}\,. \end{equation} The tensor product construction presented here will be particularly useful when studying the representations of the algebra $\mathcal{A}$, and we refer to Section~\ref{sec:BiFundamentalRepresentationsT4} for further details.
\subsection{Charges quadratic in the fields}\label{sec:quadr-charges-T4} We present the expressions for the bosonic and fermionic conserved charges that enter the superalgebra $\mathcal{A}$, as derived from the worldsheet Lagrangian. We refer to Appendix~\ref{app:gauge-fixed-action-T4} for notation, and for the calculations of the gauge-fixed action following the general explanation of Chapter~\ref{ch:strings-light-cone-gauge}. We parameterise the transverse directions of AdS$_3$ with complex coordinates $Z,\bar{Z}$ and the transverse directions of S$^3$ with $Y,\bar{Y}$, such that $Z^\dagger=\bar{Z},Y^\dagger=\bar{Y}$. The directions on T$^4$ are denoted by $X^{\dot{a}a}$. The index $\dot{a}=1,2$ corresponds to $\alg{su}(2)_{\bullet}$, while $a=1,2$ to $\alg{su}(2)_{\circ}$. The reality condition on these bosons is $(X^{11})^\dagger=X^{22}, (X^{12})^\dagger=-X^{21}$. Half of the fermions are denoted with the letter $\eta$ and carry a label L or R. Being charged under $\alg{su}(2)_{\bullet}$ they also carry an index $\dot{a}=1,2$. The other half of the fermions are denoted with the letter $\chi$ and are equipped with a label $+$ or $-$ and the $\alg{su}(2)_{\circ}$ index $a=1,2$. In both cases a bar means charge conjugation. Later we show that the former are massive, while the latter are massless.
At quadratic order in the transverse fields\footnote{In this section we use bold face notation also for the charges written in terms of the fields.}, the light-cone Hamiltonian $\gen{H}$ and the angular momentum $\gen{M}$ are \begin{equation}\label{eq:quadr-Hamilt-fields-T4} \begin{aligned} &\gen{H}=\int{\rm d}\sigma\Bigg( 2 P_{\bar{Z}}P_Z +\frac{1}{2}\bar{Z}Z + \frac{1}{2}\bar{Z}'Z'+2 P_{\bar{Y}}P_Y + \frac{1}{2}\bar{Y}Y +\frac{1}{2}\bar{Y}'Y' \\ &\qquad\qquad\qquad\qquad+\bar{\eta}_{\mbox{\tiny L}\dot{a}}\eta_{\mbox{\tiny L}}^{\ \dot{a}}+\bar{\eta}_{\mbox{\tiny R}}^{\ \dot{a}}\eta_{\mbox{\tiny R}\dot{a}} +{\eta}_{\mbox{\tiny L}}^{\ \dot{a}} \eta_{\mbox{\tiny R} \dot{a}}'-{\bar{\eta}}_{\mbox{\tiny R}}^{\ \dot{a}} \bar{\eta}_{\mbox{\tiny L} \dot{a}}' \\ &\qquad\qquad\qquad\qquad+ P_{\dot{a}a}P^{\dot{a}a} +\frac{1}{4} X_{\dot{a}a}'X'^{\dot{a}a} +{\chi}_{+}^{\ a} \chi_{- a}'-{\bar{\chi}}_{-}^{\ a} \bar{\chi}_{+ a}' \Bigg)\,,\\ \end{aligned} \end{equation} \begin{equation} \begin{aligned} &\gen{M}= \int{\rm d}\sigma\Bigg(iP_{\bar{Z}}Z-iP_Z\bar{Z} +iP_{\bar{Y}}Y-iP_Y\bar{Y} +\bar{\eta}_{\mbox{\tiny L}\dot{a}}\eta_{\mbox{\tiny L}}^{\ \dot{a}}-\bar{\eta}_{\mbox{\tiny R}}^{\ \dot{a}}\eta_{\mbox{\tiny R}\dot{a}} \Bigg)\,. \end{aligned} \end{equation} The Hamiltonian shows that the fields $Z,\bar{Z},Y,\bar{Y}$ parameterising AdS and the sphere are massive, with mass equal to $1$ in our units. They are accompanied by fermions $\eta$ with the same value of the mass. The fields $X^{\dot{a}a}$ that parameterise the torus are massless, as well as the fermions denoted by $\chi$. Taking the Poisson bracket of a given charge with the various fields we may discover its action on them. In particular, when we do it for the angular momentum $\gen{M}$---a central element of the algebra---we discover that it takes eigenvalues $\pm 1$ for massive fields, and $0$ for massless fields. We learn that the representation of $\mathcal{A}$ is \emph{reducible}.
The knowledge of the supercharges allows us to compute the central charges $\gen{C},\overline{\gen{C}}$ exactly in the string tension $g$. In order to do that, one has to keep exact expressions involving the light-cone coordinate $x^-$, which carries the information about the worldsheet momentum as shown in Chapter~\ref{ch:strings-light-cone-gauge}. On the other hand, we still perform an expansion in the transverse fields, and we decide to stop the expansion at quadratic order. This is preferable from the point of view of the presentation, and it is enough for our purposes\footnote{For expressions at quartic order---in particular at first order in fermions and third order in bosons---we refer to~\cite{Borsato:2014hja}.}. In this hybrid expansion, the supercharges read \begin{equation}\label{eq:supercharges-quadratic-T4} \begin{aligned} &\gen{Q}_{\sL}^{\ {\dot{a}}}=\frac{e^{-\frac{\pi}{4}i}}{2}\int{\rm d}\sigma \ e^{\frac{i}{2}\, x^-}\Bigg( 2P_{Z}\eta^{\ {\dot{a}}}_{\sL}-i Z'\bar{\eta}^{\ {\dot{a}}}_{\sR}+ iZ\eta^{\ {\dot{a}}}_{\sL} -\epsilon^{{\dot{a}\dot{b}}}\, \big(2i P_{\bar{Y}} \bar{\eta}_{\sL{\dot{b}}}- {\bar{Y}'}\eta_{\sR {\dot{b}}}+ \bar{Y}\bar{\eta}_{\sL {\dot{b}}}\big)\\ & \qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad -2\epsilon^{{\dot{a}\dot{b}}} P_{{\dot{b}a}}\chi_{+}^{\ {a}} -i(X^{{\dot{a}a}})'\, \bar{\chi}_{-{a}} \Bigg),\\ &\gen{Q}_{\sR {\dot{a}}}=\frac{e^{-\frac{\pi}{4}i}}{2}\int{\rm d}\sigma \ e^{\frac{i}{2}\, x^-}\Bigg( 2P_{\bar{Z}}\eta_{\sR {\dot{a}}}-i\bar{Z}'\bar{\eta}_{\sL {\dot{a}}}+i\bar{Z}\eta_{\sR {\dot{a}}} +\epsilon_{{\dot{a}\dot{b}}}\,\big(2i {P}_Y\bar{\eta}^{\ {\dot{b}}}_{\sR}-{Y'}\eta^{\ {\dot{b}}}_{\sL}+ Y\bar{\eta}^{\ {\dot{b}}}_{\sR}\big)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad +2P_{{\dot{a}a}}\chi_{-}^{\ {a}} -i \epsilon_{{\dot{a}\dot{b}}}(X^{{\dot{b}a}})'\, \bar{\chi}_{+{a}}\Bigg), \end{aligned} \end{equation} while their Hermitian conjugates are found directly by
\begin{equation} \overline{\gen{Q}}{}_{\sL {\dot{a}}}= (\gen{Q}_{\sL}^{\ {\dot{a}}})^\dagger, \qquad\qquad \overline{\gen{Q}}{}_{\sR}^{\ {\dot{a}}}= (\gen{Q}_{\sR {\dot{a}}})^\dagger. \end{equation} Using the canonical (anti-)commutation relations for the fields as in Appendix~\ref{app:gauge-fixed-action-T4}, one finds that the above supercharges indeed close into the algebra $\mathcal{A}$ defined by~\eqref{eq:cealgebra}. We now want to derive the form of the generators $\gen{C},\overline{\gen{C}}$ introduced by the central extension. Their exact eigenvalues are found thanks to the hybrid expansion of the supercharges, where expressions in $x^-$ have been kept exactly. In fact it is this light-cone coordinate that carries information on the worldsheet momentum, as can be seen from the Virasoro constraints in Chapter~\ref{ch:strings-light-cone-gauge}. Computing, for example, $\{\gen{Q}_{\sL}^{\ 1},{\gen{Q}}{}_{\sR 1}\}$ one finds\footnote{When we compute the anti-commutator of a Left and a Right supercharge, we should keep only terms at order zero in the transverse fermions, as higher order terms mix with fermionic corrections to the supercharges that we have dropped. This approximation does not prevent us from finding the result, since $\gen{C}$ is a central element and the knowledge of its eigenvalue on bosonic fields is enough.} \begin{equation} \begin{aligned} &\gen{C}=-\frac{i}{4}\int{\rm d}\sigma \ e^{i\, x^-}\Bigg[ -2i\left( P_Z \bar{Z}' + P_{\bar{Z}} Z'+ P_Y \bar{Y}' + P_{\bar{Y}} Y'+ P_{\dot{a}a} X'^{\dot{a}a}\right) \\ & \qquad\qquad\qquad\qquad\qquad +\partial_\sigma(\bar{Z}Z+\bar{Y}Y+X_{\dot{a}a} X^{\dot{a}a}) +\ldots \Bigg]\\ &\phantom{\gen{C}={}}=\frac{g}{2}\ \int{\rm d}\sigma \ e^{i\, x^-} (x'^- +\text{total derivative})\,, \end{aligned} \end{equation} where we have used the relation~\eqref{eq:xminus-rescaled-g} that solves one of the Virasoro conditions, and we have dropped a total derivative term.
The combination that appears is particularly nice and can be integrated as \begin{equation} -\frac{ig}{2}\, \int_{-\infty}^{+\infty}{\rm d}\sigma \ \frac{{\rm d}}{{\rm d} \sigma}e^{i\, x^-} =-\frac{ig}{2}\, \left(e^{i\, x^-(+\infty)} - e^{i\, x^-(-\infty)}\right) = -\frac{ig}{2}\, e^{i\, x^-(-\infty)} (e^{i p_{\text{ws}}}-1). \end{equation} Here $g$ is the string tension. To be more general, from now on we write this result in terms of a new effective coupling $h(g)$, which may be identified with $g$ in the semiclassical regime $h\sim g$. These central charges may then be written in terms of the charge $\gen{P}$ measuring the worldsheet momentum as \begin{equation} \label{eq:allloop-centralcharges} \gen{C}=+\frac{ih}{2}(e^{+i\gen{P}}-1), \qquad\qquad \overline{\gen{C}}=-\frac{ih}{2}(e^{-i\gen{P}}-1)\,, \end{equation} where we have fixed a normalisation for $e^{i\, x^-(-\infty)}$. This is the key result that will allow us to find the exact S-matrix in Chapter~\ref{ch:S-matrix-T4}. It is worth stressing that with this computation we were able to fix the exact momentum dependence of these central charges, and one may take into account higher order corrections in powers of the fields to check that the dependence is not modified. This derivation is classical, and it would be interesting to explicitly show that the result is robust under quantum corrections, at least at the leading orders in the near-BMN limit. The eigenvalues of the central charges $\gen{C},\overline{\gen{C}}$ that we have found match those computed in the case of {AdS$_5\times$S$^5$}~\cite{Arutyunov:2006ak}. In the context of AdS$_5$/CFT$_4$, these central charges also appear in the construction of the gauge theory side, with exactly the same eigenvalues~\cite{Beisert:2005tm}. This fact strongly suggested that they are not modified by quantum corrections.
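As a quick numerical consistency check of~\eqref{eq:allloop-centralcharges}, one can verify that $4\gen{C}\overline{\gen{C}}=4h^2\sin^2\frac{p}{2}$, so that the shortening condition $\gen{H}^2=\gen{M}^2+4\gen{C}\overline{\gen{C}}$ (quoted here without derivation) produces the familiar all-loop dispersion relation $E(p)=\sqrt{m^2+4h^2\sin^2\frac{p}{2}}$. The sample values of $h$, $p$ and $m$ below are arbitrary.

```python
import cmath
import math

def C(h, p):
    # C = +(i h / 2)(e^{+ip} - 1)
    return 0.5j * h * (cmath.exp(1j * p) - 1)

def Cbar(h, p):
    # Cbar = -(i h / 2)(e^{-ip} - 1)
    return -0.5j * h * (cmath.exp(-1j * p) - 1)

h, p, m = 1.3, 0.7, 1.0

# C and Cbar are related by complex conjugation
assert abs(C(h, p) - Cbar(h, p).conjugate()) < 1e-12

# 4 C Cbar = 4 h^2 sin^2(p/2)
four_CCbar = 4 * (C(h, p) * Cbar(h, p)).real
assert abs(four_CCbar - 4 * h**2 * math.sin(p / 2)**2) < 1e-12

# Shortening condition H^2 = M^2 + 4 C Cbar -> dispersion relation
E = math.sqrt(m**2 + four_CCbar)
assert abs(E - math.sqrt(m**2 + 4 * h**2 * math.sin(p / 2)**2)) < 1e-12
```

Note that $\gen{C}\overline{\gen{C}}$ is manifestly real and non-negative, as required for a Hermitian pair of central charges.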
We will assume that also in the case of {AdS$_3\times$S$^3\times$T$^4$} quantum corrections do not spoil the result found with the classical computation presented above. \section{Dressing factors}\label{sec:dressing-factors} In Section~\ref{sec:smat-tensor-prod} we determined the S-matrix up to a total of four unconstrained dressing factors. In this section we present a solution to the crossing equations~\eqref{eq:cr-massive} for the dressing factors in the massive sector~\cite{Borsato:2013hoa}. \subsection{Solution of the crossing equations} As explained in Section~\ref{sec:unitarity-YBe}, thanks to the unitarity conditions the dressing factors are written as \begin{equation} \sigma^{\bullet\bullet}_{pq} = \text{exp}(i \ \theta^{\bullet\bullet}_{pq}), \qquad \tilde{\sigma}^{\bullet\bullet}_{pq} = \text{exp}(i \ \tilde{\theta}^{\bullet\bullet}_{pq}), \end{equation} where $\theta^{\bullet\bullet}_{pq},\tilde{\theta}^{\bullet\bullet}_{pq}$ are real anti-symmetric functions of the physical momenta. In both cases we will assume that it is possible to rewrite them as~\cite{Arutyunov:2006iu} \begin{equation} \label{eq:thchi} \theta(p,q) = \chi(x_p^+,x_q^+) +\chi(x_p^-,x_q^-) -\chi(x_p^+,x_q^-) -\chi(x_p^-,x_q^+)\,, \end{equation} with $\chi$ anti-symmetric, to respect braiding unitarity. 
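One can quickly verify that the decomposition~\eqref{eq:thchi} produces a phase antisymmetric under exchange of the two particles whenever $\chi$ itself is antisymmetric, as braiding unitarity requires. A minimal numerical sketch follows; the choice of test function $\chi$ and of the sample Zhukovski variables is purely illustrative.

```python
# Illustrative antisymmetric building block: chi(x, y) = -chi(y, x)
def chi(x, y):
    return x * y**2 - x**2 * y

def theta(xp_p, xm_p, xp_q, xm_q):
    # theta(p, q) built from chi as in the decomposition in the text
    return (chi(xp_p, xp_q) + chi(xm_p, xm_q)
            - chi(xp_p, xm_q) - chi(xm_p, xp_q))

# Arbitrary sample values for x^+/- of the two particles
xp_p, xm_p, xp_q, xm_q = 2.1, 1.7, 3.3, 2.9

# Antisymmetry of the full phase: theta(p, q) = -theta(q, p)
assert abs(theta(xp_p, xm_p, xp_q, xm_q)
           + theta(xp_q, xm_q, xp_p, xm_p)) < 1e-12
```

The cancellation works term by term: under $p\leftrightarrow q$ the two diagonal $\chi$'s flip sign, while the two off-diagonal ones exchange and flip sign.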
Instead of solving the crossing equations \begin{equation} \begin{aligned} \left(\sigma^{\bullet\bullet}_{pq}\right)^2 \ \left(\tilde{\sigma}^{\bullet\bullet}_{\bar{p}q}\right)^2 &= \tilde{c}_{pq}=\left( \frac{x^-_q}{x^+_q} \right)^2 \frac{(x^-_p-x^+_q)^2}{(x^-_p-x^-_q)(x^+_p-x^+_q)} \frac{1-\frac{1}{x^-_px^+_q}}{1-\frac{1}{x^+_px^-_q}}, \\ \left(\sigma^{\bullet\bullet}_{\bar{p}q}\right)^2 \ \left(\tilde{\sigma}^{\bullet\bullet}_{pq}\right)^2 &= c_{pq}=\left( \frac{x^-_q}{x^+_q} \right)^2 \frac{\left(1-\frac{1}{x^+_px^+_q}\right)\left(1-\frac{1}{x^-_px^-_q}\right)}{\left(1-\frac{1}{x^+_px^-_q}\right)^2} \frac{x^-_p-x^+_q}{x^+_p-x^-_q}, \end{aligned} \end{equation} we prefer to study the ones obtained by taking the product and the ratio of these two \begin{equation} \begin{aligned} \left(\sigma^{+}_{pq}\sigma^{+}_{\bar{p}q}\right)^2&=\left(\sigma^{\bullet\bullet}_{pq}\tilde{\sigma}^{\bullet\bullet}_{pq}\right)^2 \ \left(\sigma^{\bullet\bullet}_{\bar{p}q}\tilde{\sigma}^{\bullet\bullet}_{\bar{p}q}\right)^2 =c_{pq}\tilde{c}_{pq},\\ \frac{\left(\sigma^{-}_{pq}\right)^2}{\left(\sigma^{-}_{\bar{p}q}\right)^2}&=\left(\frac{\sigma^{\bullet\bullet}_{pq}}{\tilde{\sigma}^{\bullet\bullet}_{pq}}\right)^2 \ \left(\frac{\sigma^{\bullet\bullet}_{\bar{p}q}}{\tilde{\sigma}^{\bullet\bullet}_{\bar{p}q}}\right)^{-2} =\frac{\tilde{c}_{pq}}{c_{pq}},\\ \end{aligned} \end{equation} where the symbols $+$ and $-$ are introduced to indicate that the corresponding phases are the sum and the difference of the original ones \begin{equation} \theta^{+}_{pq}=\theta^{\bullet\bullet}_{pq}+\tilde{\theta}^{\bullet\bullet}_{pq}, \qquad \theta^{-}_{pq}=\theta^{\bullet\bullet}_{pq}-\tilde{\theta}^{\bullet\bullet}_{pq}. \end{equation} This rewriting turns out to be very convenient, one reason being that the solution for $\theta^{+}_{pq}$ can be found by using results from the integrable model of AdS$_5\times$S$^5$.
\paragraph{Solution for the sum of the phases} The right-hand side of the crossing equation for $\sigma^{+}_{pq}$ can be rewritten as \begin{equation}\label{eq:rhs-sum-phases} c_{pq}\tilde{c}_{pq} = \frac{(c^{\text{BES}}_{pq})^3}{(c^{\text{BES}}_{pq})^*}, \end{equation} where $*$ denotes complex conjugation and $c^{\text{BES}}_{pq}$ is the right-hand side of the crossing equation of {AdS$_5\times$S$^5$} satisfied by the Beisert-Eden-Staudacher (BES) dressing factor~\cite{Beisert:2006ez} \begin{equation}\label{eq:cr-BES} \sigma^\text{BES}_{pq} \sigma^\text{BES}_{\bar{p}q} = c^{\text{BES}}_{pq}= \frac{x_q^-}{x_q^+}\frac{x_p^- - x_q^+}{x_p^- - x_q^-} \frac{1-\frac{1}{x_p^+x_q^+}}{1-\frac{1}{x_p^+x_q^-}}. \end{equation} A useful representation of this solution in the physical region was given by Dorey, Hofman and Maldacena (DHM)~\cite{Dorey:2007xn} in terms of a double integral on unit circles \begin{equation} \chi^\text{BES}(x,y)= i \ointc \frac{dw}{2 \pi i} \ointc \frac{dw'}{2 \pi i} \, \frac{1}{x-w}\frac{1}{y-w'} \log{\frac{\Gamma[1+i \frac{h}{2}(w+1/w-w'-1/w')]}{\Gamma[1-i \frac{h}{2}(w+1/w-w'-1/w')]}}\,. \label{eq:besdhmrep} \end{equation} For later convenience, we note that by taking the strong coupling limit $h\to \infty$ of this solution one recovers the Arutyunov-Frolov-Staudacher (AFS) phase~\cite{Arutyunov:2004vx}, which may be written in terms of the spectral parameters as \begin{equation} \label{eq:AFS-xpxm} \sigma^\text{AFS}_{pq} = \left( \frac{1-\frac{1}{x_p^-x_q^+}}{1-\frac{1}{x_p^+x_q^-}} \right) \left( \frac{1-\frac{1}{x_p^+x_q^-}}{1-\frac{1}{x_p^+x_q^+}} \frac{1-\frac{1}{x_p^-x_q^+}}{1-\frac{1}{x_p^-x_q^-}} \right)^{i \frac{h}{2} (x_p+1/x_p-x_q-1/x_q)}\,.
\end{equation} Pushing the expansion at strong coupling to the next-to-leading order one finds the Hern\'andez-L\'opez (HL) factor~\cite{Hernandez:2006tk}, that solves the crossing equation \begin{equation}\label{eq:cr-HL} \sigma^\text{HL}_{pq} \sigma^\text{HL}_{\bar{p}q} = \sqrt{ \frac{c^{\text{BES}}_{pq}}{c^{\text{BES}}_{\bar{p}q}} }=\sqrt{c^{\text{BES}}_{pq}\,(c^{\text{BES}}_{pq})^*}\,. \end{equation} A possible representation of this phase may be obtained by expanding the one for BES, giving \begin{equation}\label{eq:DHM-HL} \chi^\text{HL}(x,y)= \frac{\pi}{2} \ointc \frac{dw}{2 \pi i} \ointc \frac{dw'}{2 \pi i} \, \frac{1}{x-w}\frac{1}{y-w'} \, \text{sign}(w'+1/w'-w-1/w)\,. \end{equation} The BES and HL phases can then be used as building blocks to construct the solution for the sum of our phases. Using the identity~\eqref{eq:rhs-sum-phases} we see that we can solve the crossing equation for $\sigma^{+}_{pq}$ if we define it as \begin{equation} \label{eq:sumsolution} \sigma^{+}_{pq}=\frac{(\sigma^{\text{BES}}_{pq})^2}{\sigma^{\text{HL}}_{pq}}\,, \qquad\qquad \theta^{+}_{pq}=2\theta^{\text{BES}}_{pq}-\theta^{\text{HL}}_{pq}\,. \end{equation} We now present the solution for the factor $\sigma^{-}_{pq}$. \paragraph{Solution for the difference of the phases} The crossing equation for $\sigma^{-}_{pq}$ is \begin{equation}\label{eq:cr-ratio} \frac{(\sigma^{-}_{pq})^2}{(\sigma^{-}_{\bar{p}q})^2}=\frac{\ell^-(x_p^+,x_q^-)\ \ell^-(x_p^-,x_q^+)}{\ell^-(x_p^+,x_q^+)\ \ell^-(x_p^-,x_q^-)}, \qquad \ell^-(x,y)\equiv(x-y)\left(1-\frac{1}{xy}\right). 
\end{equation} A solution of this equation is given by \begin{equation}\label{eq:chi-} \begin{aligned} \chi^-(x,y) &=\ointc \, \frac{dw}{8\pi} \frac{\text{sign}((w-1/w)/i)}{x-w} \log{\ell^-(y,w)} \ - x \leftrightarrow y \\ &=\left( \inturl - \intdlr \right)\frac{dw}{8\pi} \frac{1}{x-w} \log{\ell^-(y,w)} \ - x \leftrightarrow y\,, \end{aligned} \end{equation} where in the second line we have split the integration along the upper and the lower semicircles. The proof that $\chi^-$ satisfies the crossing equation may be found in Appendix~\ref{app:crossing-AdS3}. \paragraph{Recap of the solutions} The above results allow us to write the following solution for the dressing phases of the massive sector \begin{equation} \label{eq:solution} \begin{aligned} \chi^{\bullet\bullet}(x,y) &= \chi^{\text{BES}}(x,y)+\frac{1}{2}\left(-\chi^{\text{HL}}(x,y)+\chi^{-}(x,y)\right) \,, \\ \tilde{\chi}^{\bullet\bullet}(x,y) &= \chi^{\text{BES}}(x,y)+\frac{1}{2}\left(-\chi^{\text{HL}}(x,y)-\chi^{-}(x,y)\right) \,. \end{aligned} \end{equation} A consequence of this result is that both factors $\sigma^{\bullet\bullet}$ and $\tilde{\sigma}^{\bullet\bullet}$ reduce to the AFS dressing factor at strong coupling. At the next order they are not just the HL dressing factor: its contribution to the phases is just half\footnote{Remember that BES contains one power of HL in the expansion.} of what one has in the case of AdS$_5\times$S$^5$, and we discover a novel piece produced by $\chi^-$. \subsection{Bound states} In this section we discuss the possibility of bound states arising in the scattering processes. This proves to be a good way to validate the proposed solutions of the crossing equations. Let us consider a two-particle state with excitations of momenta $p,q$ described by the wave-function\footnote{Here $\sigma_1$ and $\sigma_2$ denote the worldsheet spatial coordinate. 
We trust it does not create confusion with the notation for the dressing factors.} \begin{equation} \psi(\sigma_1,\sigma_2) = e^{i(p \sigma_1 + q \sigma_2)} + S(p,q) e^{i(p \sigma_2 + q \sigma_1)} . \end{equation} Here we are considering just the region $\sigma_1 \ll \sigma_2$. The first and second terms correspond to the in-coming and out-going waves, respectively. After the scattering one picks up a phase-shift $S(p,q)$. A bound state may arise when the S-matrix exhibits a pole, and this can happen for complex values of the two momenta \begin{equation}\label{eq:mom-imag-bound-state} p = \frac{p'}{2} + iv , \qquad q = \frac{p'}{2} - iv. \end{equation} The relevant behaviour of the wave-function is then $\psi(\sigma_1,\sigma_2) \sim e^{-v(\sigma_2 - \sigma_1)}$, and the normalisability condition implies that we should impose $v>0$. This condition must be checked when studying the possible bound states that are allowed by representation theory. These are found by studying when a generic multi-particle representation becomes short. A feature of $\alg{psu}(1|1)^4_\text{c.e.}$ is that all \emph{short} representations\footnote{Similarly, for $\alg{su}(1|1)^2_\text{c.e.}$, short representations have dimension 2, and long ones dimension 4.} have dimension 4, while all \emph{long} representations have dimension 16. When considering a two-particle representation obtained as the tensor product of two Left massive modules, we find that in general it is a long representation. However, there exist particular values of the momenta $p,q$ of the excitations such that the representation becomes short. This happens for $x^+_p=x^-_q$ or for $x^-_p=x^+_q$. In the first case we find that the bosonic state $\ket{Y^{\mbox{\tiny L}}_pY^{\mbox{\tiny L}}_q}$ survives in the module, while in the second case it is $\ket{Z^{\mbox{\tiny L}}_p Z^{\mbox{\tiny L}}_q}$ that survives. We will refer to the two cases as $\alg{su}(2)$ and $\alg{sl}(2)$ bound states respectively.
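The normalisability condition quoted above can be made concrete in a couple of lines: for complex momenta as in~\eqref{eq:mom-imag-bound-state}, the out-going term of the wave-function has modulus $e^{-v(\sigma_2-\sigma_1)}$, which decays in the region $\sigma_1\ll\sigma_2$ only for $v>0$. A minimal numerical sketch, with arbitrary sample values:

```python
import cmath
import math

pprime, v = 1.2, 0.3
p = pprime / 2 + 1j * v
q = pprime / 2 - 1j * v

s1, s2 = 0.0, 10.0   # region sigma_1 << sigma_2

# Out-going wave e^{i(p sigma_2 + q sigma_1)}: modulus e^{-v(sigma_2 - sigma_1)}
out_mod = abs(cmath.exp(1j * (p * s2 + q * s1)))
assert abs(out_mod - math.exp(-v * (s2 - s1))) < 1e-12

# In-coming wave e^{i(p sigma_1 + q sigma_2)} instead grows in this region,
# so normalisability of the bound-state wave-function requires v > 0
in_mod = abs(cmath.exp(1j * (p * s1 + q * s2)))
assert abs(in_mod - math.exp(v * (s2 - s1))) < 1e-12
```

With $v<0$ the roles of the two terms are exchanged, which is why the $\alg{sl}(2)$ configuration does not correspond to a physical bound state.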
We see that in these situations the momenta of the excitations develop a non-zero imaginary part, as in~\eqref{eq:mom-imag-bound-state}. Nevertheless, only the $\alg{su}(2)$ bound state satisfies the condition $v>0$, while for the $\alg{sl}(2)$ case the imaginary part of $p$ is negative. The former is considered to be a bound state in the spectrum, while the latter should not appear. Later we will check that indeed the S-matrix exhibits a pole when scattering two $Y^{\mbox{\tiny L}}$ excitations, while it is regular when scattering two $Z^{\mbox{\tiny L}}$ excitations. The case of two Right massive excitations is equivalent, thanks to LR symmetry. The situation is different when we consider two massive excitations with different LR flavor. In that case the representation becomes short for $x^+_p=1/x^+_q$ or $x^-_p=1/x^-_q$. Neither of these cases satisfies $|x^\pm_p|>1$ and $|x^\pm_q|>1$, which is necessary to remain in the physical region. For this reason there are no supersymmetric bound states in the LR-sector. \medskip As anticipated, the above results must be checked at the level of the S-matrix of Section~\ref{sec:smat-tensor-prod} derived from symmetries, including the solutions~\eqref{eq:solution} for the dressing factors that satisfy the crossing equations. This provides a non-trivial check of the validity of the solutions. The first process to consider is \begin{equation} \mathcal{A}_{pq}=\bra{Y^{\mbox{\tiny L}}_q \, Y^{\mbox{\tiny L}}_p} \mathcal{S} \ket{Y^{\mbox{\tiny L}}_p \, Y^{\mbox{\tiny L}}_q} = \frac{x^+_p}{x^-_p} \, \frac{x^-_q}{x^+_q} \, \frac{x^-_p - x^+_q}{x^+_p - x^-_q} \, \frac{1-\frac{1}{x^-_p x^+_q}}{1-\frac{1}{x^+_p x^-_q}} \, \frac{1}{\left(\sigma^{\bullet\bullet}_{pq} \right)^2 }. \end{equation} The dressing factor $1/\left(\sigma^{\bullet\bullet}_{pq} \right)^2$ is regular at $x^+_p=x^-_q$, as shown in Section~\ref{sec:sing-dressing}.
The element $\mathcal{A}_{pq}$ has then a single pole at this point, confirming the presence of the expected $\alg{su}(2)$ bound state. Similarly we can check that in the LR sector there is no pole in the physical region. We consider the scattering element \begin{equation} \widetilde{\mathcal{B}}_{pq}=\bra{Y^{\mbox{\tiny L}}_q Z^{\mbox{\tiny R}}_p }\mathcal{S} \ket{Z^{\mbox{\tiny R}}_p Y^{\mbox{\tiny L}}_q} = \frac{x^+_p}{x^-_p}\, \frac{1-\frac{1}{x_p^- x_q^+}}{1-\frac{1}{x_p^+ x_q^-}} \, \frac{1-\frac{1}{x_p^+ x_q^+}}{1-\frac{1}{x_p^- x_q^-}} \frac{1}{\left(\tilde{\sigma}^{\bullet\bullet}_{pq} \right)^2 }, \end{equation} and we see that both the rational factors and $\tilde{\sigma}^{\bullet\bullet}_{pq}$ are regular in the physical region, in particular at the point $x^+_p=x^-_q$. It is interesting to see how these processes in the $s$-channel automatically give constraints for processes in the $t$-channel, as a consequence of crossing symmetry. The crossing equations~\eqref{eq:cr-massive} can be written in the simple form \begin{equation} \begin{aligned} \mathcal{A}_{pq} \widetilde{\mathcal{A}}_{\bar{p}q}=1,& \qquad \text{where}\quad \widetilde{\mathcal{A}}_{pq}=\bra{Y^{\mbox{\tiny L}}_q Y^{\mbox{\tiny R}}_p }\mathcal{S} \ket{Y^{\mbox{\tiny R}}_p Y^{\mbox{\tiny L}}_q}\,, \\ \widetilde{\mathcal{B}}_{pq} \mathcal{B}_{\bar{p}q}=1,& \qquad \text{where}\quad \mathcal{B}_{pq}=\bra{Y^{\mbox{\tiny L}}_q Z^{\mbox{\tiny L}}_p }\mathcal{S} \ket{Z^{\mbox{\tiny L}}_p Y^{\mbox{\tiny L}}_q}\,, \end{aligned} \end{equation} involving explicit scattering elements. It is clear that the presence of a single pole for $\mathcal{A}_{pq}$ at $x^+_p=x^-_q$ implies a zero for $\widetilde{\mathcal{A}}_{\bar{p}q}$ at the point $1/x^+_{\bar{p}}=x^-_q$. This is then responsible for a process in the $t$-channel. 
Similarly, regularity of $\widetilde{\mathcal{B}}_{pq}$ implies regularity of $\mathcal{B}_{\bar{p}q}$, and consequently no corresponding process in the $t$-channel\footnote{When checking regularity of $\mathcal{B}_{\bar{p}q}$ one has to carefully analytically continue the dressing factor for crossed values of the momentum $p$. Doing that one discovers that $\left(\sigma^{\bullet\bullet}_{\bar{p}q} \right)^{-2}$ has a zero at $1/x^+_{\bar{p}}=x^-_q$ that cancels the apparent pole coming from the rational terms.}. \medskip The discussion on the pole structure of the S-matrix is important to justify the validity of the solution to the crossing equations. It is indeed always possible to multiply the solutions that we proposed by the so-called CDD factors, which solve the homogeneous crossing equations \begin{equation} \sigma^{\scriptscriptstyle\text{CDD}}_{pq}\,\widetilde{\sigma}^{\scriptscriptstyle\text{CDD}}_{\bar{p}q}=1\,,\qquad \sigma^{\scriptscriptstyle\text{CDD}}_{\bar{p}q}\,\widetilde{\sigma}^{\scriptscriptstyle\text{CDD}}_{pq}=1\,. \end{equation} Usually these are meromorphic functions of the spectral parameters, obtained by taking \begin{equation} \chi^{\scriptscriptstyle\text{CDD}}(x,y)=\frac{i}{2}\log\frac{(x-y)^{c_1}}{(1-xy)^{c_2}}\,,\qquad \widetilde{\chi}^{\scriptscriptstyle\text{CDD}}(x,y)=\frac{i}{2}\log\frac{(x-y)^{c_2}}{(1-xy)^{c_1}}\,. \end{equation} It is clear that such solutions introduce new zeros and poles that modify the analytical structure of the S-matrix elements, spoiling the bound state interpretation. These considerations allow us to rule out CDD factors of this form, leaving only the trivial solution \begin{equation} \sigma^{\scriptscriptstyle\text{CDD}}_{pq}=1\,,\qquad\widetilde{\sigma}^{\scriptscriptstyle\text{CDD}}_{pq}=1\,. \end{equation} A possibility that might still be valid is to introduce factors that satisfy the homogeneous crossing equation and that have no poles or zeros in the physical region.
Nevertheless, further independent validations of the phases proposed here have appeared in the literature, and we refer to Section~\ref{sec:perturbative-results} for a collection of them. \chapter{Equations of motion of type IIB supergravity}\label{app:IIBsugra} In this appendix we collect the action and the equations of motion of type IIB supergravity, as taken from~\cite{IIBbackgroundnotes}. We use these conventions in particular for Sections~\ref{sec:fermions-type-IIB} and~\ref{sec:discuss-eta-def-background}. In type IIB supergravity we have Neveu-Schwarz--Neveu-Schwarz (NSNS) and Ramond-Ramond (RR) fields. In particular \begin{itemize} \item[] \textbf{NSNS}: these are the metric $G_{MN}$, the dilaton $\varphi$, and the anti-symmetric two-form $B_{MN}$ with field strength $H_{MNP}$; \item[] \textbf{RR}: these are the axion $\chi$, the anti-symmetric two-form $C_{MN}$, and the anti-symmetric four-form $C_{MNPQ}$. \end{itemize} The RR field strengths are defined as \begin{eqnarray} &&F_{M}=\partial_{M}\chi\, , \\ &&F_{MNP}=3\partial_{[M}C_{NP]} +\chi H_{MNP}\, , \\ &&F_{MNPQR}=5\partial_{[M}C_{NPQR]}-15(B_{[MN}\partial_{P}C_{QR]}-C_{[MN}\partial_{P}B_{QR]})\, .\end{eqnarray} Square brackets $[,]$ are used to denote the anti-symmetrizer, \textit{e.g.}\xspace \begin{equation} H_{MNP}=3\partial_{[M}B_{NP]}=\frac{3}{3!}\sum_{\pi}(-1)^{\pi}\partial_{\pi(M)}B_{\pi(N) \pi(P)}=\partial_{M}B_{NP}+\partial_{N}B_{PM}+\partial_{P}B_{MN}\, , \end{equation} where we have to sum over all permutations $\pi$ of indices $M$, $N$ and $P$, and the sign $(-1)^{\pi}$ is $+1$ for even and $-1$ for odd permutations.
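The antisymmetrizer convention above can be checked combinatorially: summing over all signed permutations and dividing by $3!$, an antisymmetric $B_{MN}$ produces exactly the three cyclic terms. The following sketch represents each term $\partial_i B_{jk}$ symbolically as a keyed coefficient, using only the antisymmetry $B_{jk}=-B_{kj}$; it is an illustration of the convention, not code used anywhere in the thesis.

```python
import itertools
from collections import defaultdict

def add_term(acc, coeff, i, j, k):
    """Accumulate coeff * d_i B_jk, canonicalised with B_jk = -B_kj, B_jj = 0."""
    if j == k:
        return
    if j > k:
        j, k, coeff = k, j, -coeff
    acc[(i, j, k)] += coeff

def sign(perm):
    # parity of a permutation
    s = 1
    for a in range(len(perm)):
        for b in range(a + 1, len(perm)):
            if perm[a] > perm[b]:
                s = -s
    return s

M, N, P = 0, 1, 2

# H_MNP = (3/3!) sum_pi sign(pi) d_pi(M) B_pi(N)pi(P)
lhs = defaultdict(int)
for pi in itertools.permutations((M, N, P)):
    add_term(lhs, sign(pi), pi[0], pi[1], pi[2])
lhs = {key: c / 2 for key, c in lhs.items() if c}   # overall factor 3/3! = 1/2

# d_M B_NP + d_N B_PM + d_P B_MN
rhs = defaultdict(int)
for (i, j, k) in [(M, N, P), (N, P, M), (P, M, N)]:
    add_term(rhs, 1, i, j, k)
rhs = {key: c for key, c in rhs.items() if c}

assert lhs == rhs
```

The six signed terms collapse pairwise onto the three cyclic ones, each with unit coefficient, reproducing the last equality in the displayed equation.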
The equations of motion of type IIB supergravity in the {\it string frame} may be found by first varying the action~\cite{Dall'Agata:1997ju,Dall'Agata:1998va} \begin{equation} \begin{aligned} S=\frac{1}{2\kappa^2}\int {\rm d}^{10}X \Bigg[\sqrt{-G}\Bigg(& e^{-2\varphi}\Big(R+4\partial_{M}\varphi\partial^{M}\varphi-\frac{1}{12}H_{MNP}H^{MNP}\Big) - \\ &-\frac{1}{2}\partial_{M}\chi \partial^{M}\chi -\frac{1}{12}F_{MNP}F^{MNP}-\frac{1}{4\cdot 5!}F_{MNPQR}F^{MNPQR} \Bigg)+ \, \\ &+\frac{1}{8\cdot 4!}\epsilon^{M_1\ldots M_{10}}C_{M_1M_2M_3M_4}\partial_{M_5}B_{M_6M_7}\partial_{M_8}C_{M_9M_{10}}\Bigg]\, , \end{aligned} \end{equation} and after that by imposing the self-duality condition for the five-form \begin{equation}\label{eq:sel-duality-F5-curved} F_{M_1M_2M_3M_4M_5}=+\frac{1}{5!}\sqrt{-G}\epsilon_{M_1\ldots M_{10}}F^{M_6M_7M_8M_9M_{10}}\,. \end{equation} Here $G$ is the determinant of the metric, $R$ the Ricci scalar, and for the anti-symmetric tensor $\epsilon$ we choose the convention $\epsilon^{0\ldots 9}=1$ and $\epsilon_{0\ldots 9}=-1$. Let us write the equations of motion for all the fields. \medskip \noindent {\bf Equation for the dilaton $\varphi$} \begin{equation}\label{eq:eom-dilaton} 4\partial^{M}\varphi\partial_{M}\varphi-4\partial^{M}\partial_{M}\varphi -4\partial_{M}G^{MN}\partial_{N}\varphi-2\partial_{M}G_{PQ}G^{PQ}\partial^{M}\varphi=R-\frac{1}{12}H_{MNP}H^{MNP}\, . \end{equation} Note that $\partial_{M}G_{PQ}G^{PQ} = 2\partial_{M}\log\sqrt{-G}$. \noindent {\bf Equation for the two-form $B_{MN}$} \begin{equation}\label{eq:eom-B} 2\partial_{M}\Big(\sqrt{-G}(e^{-2\varphi}H^{MNP}+\chi F^{MNP})\Big)+\sqrt{-G}F^{NPQ RS} \partial_{Q}C_{RS}=0 \end{equation} This equation has been rewritten using~\eqref{eq:eom-C4}. \noindent {\bf Equation for the axion $\chi$} \begin{equation}\label{eq:eom-axion} \partial_{M}\Big(\sqrt{-G}\partial^{M}\chi\Big)=+\frac{1}{6}\sqrt{-G}F_{MNP}H^{MNP}\, . 
\end{equation} \noindent {\bf Equation for the two-form $C_{MN}$} \begin{equation}\label{eq:eom-C2} \partial_{M}(\sqrt{-G}F^{MNP})-\frac{1}{6}\sqrt{-G}F^{NPQ RS}H_{QRS}=0 \end{equation} \noindent {\bf Equation for the four-form $C_{MNPQ}$} \begin{equation}\label{eq:eom-C4} \partial_{N}\left( \sqrt{-G} F^{NM_1M_2M_3M_4}\right)=-\frac{1}{36}\epsilon^{M_1\ldots M_4 M_5\ldots M_{10}}H_{M_5M_6M_7}F_{M_8M_9M_{10}}\, . \end{equation} \noindent {\bf Einstein equations} \begin{equation} R_{MN}-\frac{1}{2}G_{MN}R=T_{MN}\, , \end{equation} where the stress tensor is \begin{equation} \begin{aligned} T_{MN}=&G_{MN}\Bigg[2\partial^{ P}(\partial_{ P}\varphi)-2G^{PQ}\Gamma^{R}_{PQ}\partial_{R}\varphi-2\partial_{P}\varphi\partial^{P}\varphi\\ &-\frac{1}{24}H_{PQR}H^{PQR}-\frac{1}{4}e^{2\varphi}F_{P}F^{P}-\frac{1}{24}e^{2\varphi}F_{PQR}F^{PQR}\Bigg] \\ &-2\partial_{M}\partial_{N}\varphi+2\Gamma^{P}_{MN}\partial_{P}\varphi\\ &+\frac{1}{4}H_{MPQ}H_{N}^{\ PQ}+\frac{1}{2}e^{2\varphi}F_{M}F_{N} +\frac{1}{4}e^{2\varphi}F_{MPQ}F_{N}^{\ PQ}+\frac{1}{4\cdot 4!}e^{2\varphi}F_{MPQRS}F_{N}^{\ PQRS}\, , \\ \end{aligned} \end{equation} and the Christoffel symbol is \begin{equation} \Gamma^{P}_{MN}=\frac{1}{2}G^{PQ}(\partial_{M}G_{NQ}+\partial_{N}G_{MQ}-\partial_{Q}G_{MN})\, . \end{equation} \chapter{Strings in light-cone gauge}\label{ch:strings-light-cone-gauge} \medskip This chapter serves as an introduction and a review of notions that are needed to derive the results in the rest of the thesis.
In fact, we will use the same methods for strings on both the {AdS$_3\times$S$^3\times$T$^4$} background and on the $\eta$-deformed {AdS$_5\times$S$^5$}. We therefore find it useful to present here a slightly more general discussion valid for both cases. We explain how to fix \emph{uniform light-cone gauge} for bosonic and fermionic degrees of freedom in the action of a freely propagating string. The need to fix a gauge that removes some unphysical bosonic degrees of freedom comes from reparameterisation invariance on the worldsheet. At the same time, another local symmetry called ``kappa-symmetry''---now parameterised by Grassmann quantities---suggests that half of the fermions should be gauged away. Clearly, different gauge fixings are possible, all being equivalent in the sense that the physical observables that we compute will not depend on any particular choice. However, some choices may be more convenient than others. The type of gauge fixing used for backgrounds relevant for the AdS/CFT correspondence appears to provide models that are solvable by non-perturbative methods. This gauge is a generalisation of the one first introduced in flat space in~\cite{goddard1973quantum}. In fact the procedure is quite general, and the only requirement needed to impose it is the presence of two commuting isometries---in our case these are shifts of time and of an angle. Although other choices are possible---one might choose an angle in Anti-de Sitter~\cite{Metsaev:2000yu}---the most convenient one for AdS/CFT is to combine into the light-cone coordinates the time of AdS and an angle of the compact space. The procedure we present here was used to gauge-fix the $\sigma$-model describing the string on {AdS$_5\times$S$^5$}~\cite{Arutyunov:2004yx,Arutyunov:2005hd,Arutyunov:2006gs} and corresponds to the one used to study spinning strings~\cite{Kruczenski:2004kw,Kruczenski:2004cn}.
We start with the gauge-fixing procedure for bosons in Section~\ref{sec:Bos-string-lcg} and then extend it to fermions in Section~\ref{sec:fermions-type-IIB}. In Section~\ref{sec:decomp-limit} we explain how to define an S-matrix that governs scattering of worldsheet excitations---in the limit of long strings---and we provide a discussion on perturbation theory. We refer to~\cite{Arutyunov:2009ga} for a more detailed review on these topics. \input{Chapters/BosonicStringsLCG.tex} \input{Chapters/StringsQuadFermIIBLCG.tex} \input{Chapters/DecompactLimit.tex} \section{Two-particle representations}\label{sec:two-part-repr-T4} In this section we study the action of the charges on two-particle states. We will show that not all the charges are defined via the standard co-product---for some of them this has to be non-local. Given a charge $\gen{J}$ acting on a one-particle state $\ket{\mathcal{X}}$ as $\gen{J}\ket{\mathcal{X}}=\ket{\mathcal{Y}}$, the corresponding charge on two-particle states that we get by using the \emph{standard} co-product is \begin{equation} \gen{J}_{12} \equiv \gen{J} \otimes \mathbf{1} + \mathbf{1} \otimes \gen{J}, \quad\implies\quad \gen{J}_{12} \ket{\mathcal{X}_1\mathcal{X}_2}= \ket{\mathcal{Y}_1\mathcal{X}_2}+\ket{\mathcal{X}_1\mathcal{Y}_2}. \end{equation} In case $\gen{J}$ is an odd charge one has to take care of the signs arising when commuting with a fermionic state. It is easy to check that the standard co-product cannot be used to define the action of the central charge $\gen{C}_{12}$ on two-particle states~\cite{Beisert:2006qh,Arutyunov:2006yd}. Another way to phrase this is to say that we cannot set to zero the parameters $\xi_1$ and $\xi_2$ entering the definition~\eqref{eq:expl-repr-coeff}. 
Indeed, using $\gen{C}=+\frac{ih}{2}(e^{+i\gen{P}}-1)$ we find \begin{equation} \gen{C} \ket{\mathcal{X}_1\mathcal{X}_2} = \frac{ih}{2}(e^{+i(p_1+p_2)}-1) \ket{\mathcal{X}_1\mathcal{X}_2} , \end{equation} while using the combined action on one-particle states we get \begin{equation} \gen{C} \ket{\mathcal{X}_1\mathcal{X}_2} = \frac{ih}{2}\left(e^{2i\xi_{1}}(e^{+ip_1}-1)+ e^{2i\xi_{2}}(e^{+ip_2}-1)\right)\ket{\mathcal{X}_1\mathcal{X}_2}. \end{equation} In order to have compatibility of the two results, we cannot set $e^{i\xi_{1}}=e^{i\xi_{2}}=1$. If we require that these factors lie on the unit circle, then we get two possible solutions \begin{equation} \{e^{2i\xi_{1}} = 1, e^{2i\xi_{2}} = e^{i\, p_1} \}, \qquad \qquad \{e^{2i\xi_{1}} = e^{i\, p_2},e^{2i\xi_{2}} = 1 \}. \end{equation} Both these solutions imply that $\gen{C}_{12}$ is defined by a \emph{non-local} co-product, as the action on one of the two states depends on the momentum of the other. In the rest of the chapter we will choose the first of the above solutions, namely $\xi_1=0,\ \xi_2=p_1/2$. This choice agrees with the one of~\cite{Arutyunov:2009ga}. The action of the supercharges on two-particle states is then also defined by a non-local co-product, and the exact action is found by replacing the value of $\xi_i$ in the definitions of the coefficients $a_p,\bar{a}_p,b_p,\bar{b}_p$ in~\eqref{eq:expl-repr-coeff}. As an explicit example, when we consider the action of $\gen{Q}_{\mbox{\tiny L}}^{\ 1}$ on a two-particle state, we find \begin{equation} \gen{Q}_{\mbox{\tiny L}}^{\ 1}(p_1,p_2) = \gen{Q}_{\mbox{\tiny L}}^{\ 1}(p_1)\otimes\mathbf{1}+e^{i\, p_1/2}\, \Sigma \otimes \gen{Q}_{\mbox{\tiny L}}^{\ 1}(p_2)\,. \end{equation} The matrix $\Sigma$ takes into account the even or odd grading of the states, and is $+1$ on bosons and $-1$ on fermions. For this reason we can use the ordinary tensor product $\otimes$.
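As a quick consistency check of the choice $\xi_1=0,\ \xi_2=p_1/2$, a short sympy sketch (symbol and variable names are ours) verifying that the non-local co-product reproduces the eigenvalue of $\gen{C}=\frac{ih}{2}(e^{+i\gen{P}}-1)$ on the total momentum:

```python
import sympy as sp

p1, p2, h = sp.symbols('p1 p2 h', real=True)

# first solution in the text: xi_1 = 0, xi_2 = p_1/2
xi1, xi2 = sp.Integer(0), p1 / 2

# eigenvalue of C from the non-local co-product on a two-particle state
nonlocal_eig = sp.I * h / 2 * (sp.exp(2 * sp.I * xi1) * (sp.exp(sp.I * p1) - 1)
                               + sp.exp(2 * sp.I * xi2) * (sp.exp(sp.I * p2) - 1))

# eigenvalue required by C = (ih/2)(e^{iP} - 1) with total momentum p1 + p2
required_eig = sp.I * h / 2 * (sp.exp(sp.I * (p1 + p2)) - 1)

assert sp.simplify(nonlocal_eig - required_eig) == 0
```

The same cancellation fails for $\xi_1=\xi_2=0$, which is the statement above that the standard co-product cannot be used for $\gen{C}$.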
On the other hand, computing the action of the generators corresponding to the Hamiltonian $\gen{H}$ and the angular momentum $\gen{M}$, it is clear that the dependence on the parameters $\xi_i$ cancels, and the action on two-particle states is just given by the standard co-product\footnote{Although we indicate the momentum dependence in both cases, we recall that the eigenvalue of $\gen{M}$ is momentum-independent.} \begin{equation} \begin{aligned} \gen{H}(p_1,p_2)&=\gen{H}(p_1)\otimes\mathbf{1}+\mathbf{1}\otimes\gen{H}(p_2)\,, \\ \gen{M}(p_1,p_2)&=\gen{M}(p_1)\otimes\mathbf{1}+\mathbf{1}\otimes\gen{M}(p_2)\,. \end{aligned} \end{equation} The same is true for the $\alg{su}(2)_{\bullet} \oplus \alg{su}(2)_{\circ}$ generators, whose action does not depend on the above coefficients. We note that a generalisation of this discussion to multi-particle states is possible. The requirement that $\gen{C}=+\frac{ih}{2}(e^{+i\gen{P}}-1)$ still holds is fulfilled by taking $\xi_1=0$ and $\xi_i=\sum_{j=1}^{i-1}p_j/2$, for $i>1$. \section{The S-matrix}\label{sec:S-mat-T4} In this section we present the explicit form of the exact two-body S-matrix for the worldsheet excitations of AdS$_3\times$S$^3\times$T$^4$. This is found by fixing invariance of the S-matrix under the symmetry algebra $\mathcal{A}$. Depending on convenience, we will use two objects denoted by $\mathcal{S}$ and $\mathbf{S}$. They are related to each other by a simple permutation in the two-body space\footnote{The object that here is called $\mathbf{S}$ is denoted by $S$ in~\cite{Arutyunov:2009ga}.} \begin{equation} \mathcal{S}=\Pi\, \mathbf{S}\,.
\end{equation} After constructing the generators on two-particle states as explained in the previous section, we impose compatibility as\footnote{The difference between the two is how we apply the charge after the action of the proper S-matrix.} \begin{equation} \begin{aligned} \mathcal{S}_{12}(p,q)\, \gen{J}_{12}(p,q) - \gen{J}_{12}(q,p)\, \mathcal{S}_{12}(p,q) &=0\,, \\ \mathbf{S}_{12}(p,q)\, \gen{J}_{12}(p,q) - \gen{J}_{21}(q,p)\, \mathbf{S}_{12}(p,q) &=0\,. \end{aligned} \end{equation} Invariance under the action of the generators $\gen{M}$ and $\gen{H}$ allows us to identify three possible sectors, which we use to divide the S-matrix: \begin{itemize} \item[-] the massive sector $(\bullet\bullet)$, \item[-] the massless sector $(\circ\circ)$, \item[-] the mixed-mass sector $(\bullet\circ,\circ\bullet)$. \end{itemize} In each of these sectors the set of masses is conserved under the scattering. In the mixed-mass sector we have in addition that the mass is transmitted. In other words, the mass can be thought of as a label attached to the momentum of the excitation. The next generators to consider are the ones of $\alg{su}(2)_\bullet \oplus \alg{su}(2)_\circ$. Their action is momentum independent, and compatibility of the S-matrix with them allows us to relate or set to zero different scattering elements, in such a way that the $\alg{su}(2)$ structures are respected. Another powerful way to constrain the S-matrix is to consider the $\mathbb{Z}_2$-symmetry introduced in Section~\ref{sec:LR-symmetry}, which we called LR-symmetry. We will then impose that scattering elements that are related by the rules~\eqref{eq:LR-massive} and~\eqref{eq:LR-massless} should be the same. It is by considering compatibility with the supercharges that we see the dependence of the scattering elements on the momenta of the excitations. In particular, this fixes invariance under the $\alg{psu}(1|1)^4_\text{c.e.}$ subalgebra of $\mathcal{A}$.
We will use the fact that fundamental representations of $\alg{psu}(1|1)^4_\text{c.e.}$ can be understood as bi-fundamental representations of $\alg{psu}(1|1)^2_\text{c.e.}$ to rewrite an S-matrix compatible with $\alg{psu}(1|1)^4_\text{c.e.}$-invariance as a proper tensor product of two copies of $\alg{psu}(1|1)^2_\text{c.e.}$-invariant S-matrices. Let us first construct the relevant S-matrices in this simpler case. \subsection{The $\alg{su}(1|1)^2_{\text{c.e.}}$-invariant S-matrices}\label{sec:su112-S-matrices} In Section~\ref{sec:BiFundamentalRepresentationsT4} we presented the possible fundamental representations of $\alg{su}(1|1)^2_{\text{c.e.}}$, that we called $\varrho_{\mbox{\tiny L}},\varrho_{\mbox{\tiny R}},\widetilde{\varrho}_{\mbox{\tiny L}},\widetilde{\varrho}_{\mbox{\tiny R}}$. They are related by exchanging the labels L and R on the states and on the supercharges, or by exchanging the role of the boson $\phi$ and the fermion $\psi$ composing the short representation. Using the four possible fundamental representations one can construct sixteen different two-particle representations. In this section we discuss only the ones that are relevant for the S-matrix of AdS$_3\times$S$^3\times$T$^4$, in particular we start by considering the case in which both particles that scatter belong to the representation $\varrho_{\mbox{\tiny L}}$. 
Invariance under the algebra yields an S-matrix $\mathcal{S}^{\mbox{\tiny L}\sL}$ of the form~\cite{Borsato:2012ud,Borsato:2014exa} \begin{equation}\label{eq:su(1|1)2-Smat-grad1} \begin{aligned} \mathcal{S}^{\mbox{\tiny L}\sL} \ket{\phi_p^{\mbox{\tiny L}} \phi_q^{\mbox{\tiny L}}} &= A_{pq}^{\mbox{\tiny L}\sL} \ket{\phi_q^{\mbox{\tiny L}} \phi_p^{\mbox{\tiny L}}}, \qquad &\mathcal{S}^{\mbox{\tiny L}\sL} \ket{\phi_p^{\mbox{\tiny L}} \psi_q^{\mbox{\tiny L}}} &= B_{pq}^{\mbox{\tiny L}\sL} \ket{\psi_q^{\mbox{\tiny L}} \phi_p^{\mbox{\tiny L}}} + C_{pq}^{\mbox{\tiny L}\sL} \ket{\phi_q^{\mbox{\tiny L}} \psi_p^{\mbox{\tiny L}}}, \\ \mathcal{S}^{\mbox{\tiny L}\sL} \ket{\psi_p^{\mbox{\tiny L}} \psi_q^{\mbox{\tiny L}}} &= F_{pq}^{\mbox{\tiny L}\sL} \ket{\psi_q^{\mbox{\tiny L}} \psi_p^{\mbox{\tiny L}}},\qquad &\mathcal{S}^{\mbox{\tiny L}\sL} \ket{\psi_p^{\mbox{\tiny L}} \phi_q^{\mbox{\tiny L}}} &= D_{pq}^{\mbox{\tiny L}\sL} \ket{\phi_q^{\mbox{\tiny L}} \psi_p^{\mbox{\tiny L}}} + E_{pq}^{\mbox{\tiny L}\sL} \ket{\psi_q^{\mbox{\tiny L}} \phi_p^{\mbox{\tiny L}}}, \end{aligned} \end{equation} The coefficients appearing are determined up to an overall factor. As a convention we decide to normalise $A_{pq}^{\mbox{\tiny L}\sL}=1$ and we find \begin{equation}\label{eq:expl-su112-smat-el} \begin{aligned} A^{\mbox{\tiny L}\sL}_{pq} &= 1, & \qquad B^{\mbox{\tiny L}\sL}_{pq} &= \phantom{-}\left( \frac{x^-_p}{x^+_p}\right)^{1/2} \frac{x^+_p-x^+_q}{x^-_p-x^+_q}, \\ C^{\mbox{\tiny L}\sL}_{pq} &= \left( \frac{x^-_p}{x^+_p} \frac{x^+_q}{x^-_q}\right)^{1/2} \frac{x^-_q-x^+_q}{x^-_p-x^+_q} \frac{\eta_p}{\eta_q}, \qquad & D^{\mbox{\tiny L}\sL}_{pq} &= \phantom{-}\left(\frac{x^+_q}{x^-_q}\right)^{1/2} \frac{x^-_p-x^-_q}{x^-_p-x^+_q}, \\ E^{\mbox{\tiny L}\sL}_{pq} &= \frac{x^-_p-x^+_p}{x^-_p-x^+_q} \frac{\eta_q}{\eta_p}, \qquad & F^{\mbox{\tiny L}\sL}_{pq} &= - \left(\frac{x^-_p}{x^+_p} \frac{x^+_q}{x^-_q}\right)^{1/2} \frac{x^+_p-x^-_q}{x^-_p-x^+_q}. 
\end{aligned} \end{equation} The result is written in terms of the Zhukovski variables introduced in Section~\ref{sec:RepresentationCoefficientsT4}. In particular, the result above holds for any value of the masses $|m|$---which appear in the quadratic constraint of~\eqref{eq:zhukovski}---of the two particles, and is valid also for the scattering of particles of different masses. When considering scattering of two $\widetilde{\varrho}_{\mbox{\tiny L}}$ representations, we find that the result can be rewritten using the above coefficients. Also in this case the overall normalisation is a convention and we decide to write it as \begin{equation}\label{eq:su(1|1)2-Smat-grad2} \begin{aligned} \mathcal{S}^{\tilde{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{\tilde{\phi}^{\mbox{\tiny L}}_p \tilde{\phi}^{\mbox{\tiny L}}_q} &= -F_{pq}^{\mbox{\tiny L}\sL} \ket{\tilde{\phi}^{\mbox{\tiny L}}_q \tilde{\phi}^{\mbox{\tiny L}}_p}, \qquad &\mathcal{S}^{\tilde{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{\tilde{\phi}^{\mbox{\tiny L}}_p \tilde{\psi}^{\mbox{\tiny L}}_q} &= D_{pq}^{\mbox{\tiny L}\sL} \ket{\tilde{\psi}^{\mbox{\tiny L}}_q \tilde{\phi}^{\mbox{\tiny L}}_p} -E_{pq}^{\mbox{\tiny L}\sL} \ket{\tilde{\phi}^{\mbox{\tiny L}}_q \tilde{\psi}^{\mbox{\tiny L}}_p}, \\ \mathcal{S}^{\tilde{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{\tilde{\psi}^{\mbox{\tiny L}}_p \tilde{\psi}^{\mbox{\tiny L}}_q} &= -A_{pq}^{\mbox{\tiny L}\sL} \ket{\tilde{\psi}^{\mbox{\tiny L}}_q \tilde{\psi}^{\mbox{\tiny L}}_p}, \qquad &\mathcal{S}^{\tilde{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{\tilde{\psi}^{\mbox{\tiny L}}_p \tilde{\phi}^{\mbox{\tiny L}}_q} &= B_{pq}^{\mbox{\tiny L}\sL} \ket{\tilde{\phi}^{\mbox{\tiny L}}_q \tilde{\psi}^{\mbox{\tiny L}}_p} -C_{pq}^{\mbox{\tiny L}\sL} \ket{\tilde{\psi}^{\mbox{\tiny L}}_q \tilde{\phi}^{\mbox{\tiny L}}_p}.
\end{aligned} \end{equation} The last Left-Left case that we want to consider---as it will be used to construct the S-matrix of {AdS$_3\times$S$^3\times$T$^4$}---concerns the scattering of $\varrho_{\mbox{\tiny L}}$ and $\widetilde{\varrho}_{\mbox{\tiny L}}$. We write it as \begin{equation}\label{eq:su(1|1)2-Smat-grad3} \begin{aligned} \mathcal{S}^{{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{\phi^{\mbox{\tiny L}}_p \tilde{\phi}^{\mbox{\tiny L}}_q} &= \phantom{+}B_{pq}^{\mbox{\tiny L}\sL} \ket{\tilde{\phi}^{\mbox{\tiny L}}_q \phi^{\mbox{\tiny L}}_p} -C_{pq}^{\mbox{\tiny L}\sL} \ket{\tilde{\psi}^{\mbox{\tiny L}}_q \psi^{\mbox{\tiny L}}_p}, \qquad &\mathcal{S}^{{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{\phi^{\mbox{\tiny L}}_p \tilde{\psi}^{\mbox{\tiny L}}_q} &= \phantom{+}A_{pq}^{\mbox{\tiny L}\sL} \ket{\tilde{\psi}^{\mbox{\tiny L}}_q \phi^{\mbox{\tiny L}}_p} , \\ \mathcal{S}^{{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{\psi^{\mbox{\tiny L}}_p \tilde{\psi}^{\mbox{\tiny L}}_q} &= -D_{pq}^{\mbox{\tiny L}\sL} \ket{\tilde{\psi}^{\mbox{\tiny L}}_q \psi^{\mbox{\tiny L}}_p}+E_{pq}^{\mbox{\tiny L}\sL} \ket{\tilde{\phi}^{\mbox{\tiny L}}_q \phi^{\mbox{\tiny L}}_p} , \qquad &\mathcal{S}^{{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}} \ket{\psi^{\mbox{\tiny L}}_p \tilde{\phi}^{\mbox{\tiny L}}_q} &= -F_{pq}^{\mbox{\tiny L}\sL} \ket{\tilde{\phi}^{\mbox{\tiny L}}_q \psi^{\mbox{\tiny L}}_p} . \end{aligned} \end{equation} In order to complete the discussion and present all the material that is needed to construct the full S-matrix, we now turn to Left-Right scattering. For the case of two particles with equal masses, requiring just invariance under the symmetry algebra one obtains an S-matrix that is a combination of transmission and reflection, where this terminology should be applied to the LR-flavors. Imposing LR-symmetry and unitarity one finds that only two solutions are allowed, namely \emph{pure transmission} or \emph{pure reflection}~\cite{Borsato:2012ud}. 
Compatibility with perturbative results then forces us to choose the pure-transmission S-matrix, which is the one presented here. Moreover, it is only this S-matrix that satisfies the Yang-Baxter equation. A process involving the representations $\varrho_{\mbox{\tiny L}}$ and $\varrho_{\mbox{\tiny R}}$ yields an S-matrix of the form~\cite{Borsato:2012ud,Borsato:2014exa} \begin{equation}\label{eq:su(1|1)2-Smat-LRgrad1} \begin{aligned} \mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}} \ket{\phi^{\mbox{\tiny L}}_p \phi^{\mbox{\tiny R}}_q} &= A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\phi^{\mbox{\tiny R}}_q \phi^{\mbox{\tiny L}}_p} + B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\psi^{\mbox{\tiny R}}_q \psi^{\mbox{\tiny L}}_p}, \qquad &\mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}} \ket{\phi^{\mbox{\tiny L}}_p \psi^{\mbox{\tiny R}}_q} &= C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\psi^{\mbox{\tiny R}}_q \phi^{\mbox{\tiny L}}_p} , \\ \mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}} \ket{\psi^{\mbox{\tiny L}}_p \psi^{\mbox{\tiny R}}_q} &= E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\psi^{\mbox{\tiny R}}_q \psi^{\mbox{\tiny L}}_p}+F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\phi^{\mbox{\tiny R}}_q \phi^{\mbox{\tiny L}}_p} , \qquad & \mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}} \ket{\psi^{\mbox{\tiny L}}_p \phi^{\mbox{\tiny R}}_q} &= D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\phi^{\mbox{\tiny R}}_q \psi^{\mbox{\tiny L}}_p} , \end{aligned} \end{equation} where the scattering elements can be parametrised explicitly by \begin{equation} \begin{aligned} A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} &= \zeta_{pq}\, \left(\frac{x^+_p}{x^-_p} \right)^{1/2} \frac{1-\frac{1}{x^+_p x^-_q}}{1-\frac{1}{x^-_p x^-_q}}\,, \qquad & B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} &= -\frac{2i}{h} \, \left(\frac{x^-_p}{x^+_p}\frac{x^+_q}{x^-_q} \right)^{1/2} \frac{\eta_{p}\eta_{q}}{ x^-_p x^+_q} \frac{\zeta_{pq}}{1-\frac{1}{x^-_p x^-_q}}\,, \\ C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} &= \zeta_{pq}\, , \qquad & D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} &=\zeta_{pq}\, \left(\frac{x^+_p}{x^-_p}\frac{x^+_q}{x^-_q} \right)^{1/2} \frac{1-\frac{1}{x^+_p x^+_q}}{1-\frac{1}{x^-_p x^-_q}}\,, \\ E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} &= - \zeta_{pq}\, \left(\frac{x^+_q}{x^-_q} \right)^{1/2} \frac{1-\frac{1}{x^-_p x^+_q}}{1-\frac{1}{x^-_p x^-_q}}\,, \qquad & F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} &= \frac{2i}{h} \left(\frac{x^+_p}{x^-_p}\frac{x^+_q}{x^-_q} \right)^{1/2} \frac{\eta_{p}\eta_{q}}{ x^+_p x^+_q} \frac{\zeta_{pq}}{1-\frac{1}{x^-_p x^-_q}}\,, \end{aligned} \end{equation} and we have introduced a convenient factor \begin{equation} \zeta_{pq} = \left(\frac{x^+_p}{x^-_p}\right)^{-1/4}\left(\frac{x^+_q}{x^-_q}\right)^{-1/4} \left(\frac{1-\frac{1}{x^-_p x^-_q}}{1-\frac{1}{x^+_p x^+_q}}\right)^{1/2}\,. \end{equation} Similarly, an S-matrix $\mathcal{S}^{\mbox{\tiny R}\mbox{\tiny L}}$ can be found by swapping the labels L and R in~\eqref{eq:su(1|1)2-Smat-LRgrad1}. Imposing LR-symmetry, the explicit parameterisation is the same as in the equations above, $A^{\mbox{\tiny R}\mbox{\tiny L}}_{pq}= A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq}$, et cetera.
Changing the grading of the first of the two representations, one finds for example an S-matrix \begin{equation}\label{eq:su(1|1)2-Smat-LRgrad2} \begin{aligned} \mathcal{S}^{\tilde{\mbox{\tiny L}}\mbox{\tiny R}} \ket{\tilde{\phi}^{\mbox{\tiny L}}_p \phi^{\mbox{\tiny R}}_q} &= +D^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\phi^{\mbox{\tiny R}}_q \tilde{\phi}^{\mbox{\tiny L}}_p}, \qquad & \mathcal{S}^{\tilde{\mbox{\tiny L}}\mbox{\tiny R}} \ket{\tilde{\phi}^{\mbox{\tiny L}}_p \psi^{\mbox{\tiny R}}_q} &= -E^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\psi^{\mbox{\tiny R}}_q \tilde{\phi}^{\mbox{\tiny L}}_p} -F^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\phi^{\mbox{\tiny R}}_q \tilde{\psi}^{\mbox{\tiny L}}_p}, \\ \mathcal{S}^{\tilde{\mbox{\tiny L}}\mbox{\tiny R}} \ket{\tilde{\psi}^{\mbox{\tiny L}}_p \psi^{\mbox{\tiny R}}_q} &= -C^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\psi^{\mbox{\tiny R}}_q \tilde{\psi}^{\mbox{\tiny L}}_p}, \qquad & \mathcal{S}^{\tilde{\mbox{\tiny L}}\mbox{\tiny R}} \ket{\tilde{\psi}^{\mbox{\tiny L}}_p \phi^{\mbox{\tiny R}}_q} &= +A^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\phi^{\mbox{\tiny R}}_q \tilde{\psi}^{\mbox{\tiny L}}_p} +B^{\mbox{\tiny L}\mbox{\tiny R}}_{pq} \ket{\psi^{\mbox{\tiny R}}_q \tilde{\phi}^{\mbox{\tiny L}}_p}. 
\end{aligned} \end{equation} To conclude we write down another result that we will need in the following \begin{equation}\label{eq:su(1|1)2-Smat-RLgrad2} \begin{aligned} \mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}} \ket{\phi^{\mbox{\tiny R}}_p \tilde{\phi}^{\mbox{\tiny L}}_q} &= +C^{\mbox{\tiny R}\mbox{\tiny L}}_{pq} \ket{\tilde{\phi}^{\mbox{\tiny L}}_q \phi^{\mbox{\tiny R}}_p}, \qquad & \mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}} \ket{\phi^{\mbox{\tiny R}}_p \tilde{\psi}^{\mbox{\tiny L}}_q} &= +A^{\mbox{\tiny R}\mbox{\tiny L}}_{pq} \ket{\tilde{\psi}^{\mbox{\tiny L}}_q \phi^{\mbox{\tiny R}}_p} - B^{\mbox{\tiny R}\mbox{\tiny L}}_{pq} \ket{\tilde{\phi}^{\mbox{\tiny L}}_q \psi^{\mbox{\tiny R}}_p}, \\ \mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}} \ket{\psi^{\mbox{\tiny R}}_p \tilde{\psi}^{\mbox{\tiny L}}_q} &= -D^{\mbox{\tiny R}\mbox{\tiny L}}_{pq} \ket{\tilde{\psi}^{\mbox{\tiny L}}_q \psi^{\mbox{\tiny R}}_p}, \qquad & \mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}} \ket{\psi^{\mbox{\tiny R}}_p \tilde{\phi}^{\mbox{\tiny L}}_q} &= -E^{\mbox{\tiny R}\mbox{\tiny L}}_{pq} \ket{\tilde{\phi}^{\mbox{\tiny L}}_q \psi^{\mbox{\tiny R}}_p} + F^{\mbox{\tiny R}\mbox{\tiny L}}_{pq} \ket{\tilde{\psi}^{\mbox{\tiny L}}_q \phi^{\mbox{\tiny R}}_p}. \end{aligned} \end{equation} The S-matrices presented here are also compatible with braiding and physical unitarity \begin{equation} \begin{aligned} \mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}}\, \mathcal{S}^{\mbox{\tiny R}\mbox{\tiny L}}=\mathbf{1}\,, \qquad (\mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}})^\dagger\, \mathcal{S}^{\mbox{\tiny R}\mbox{\tiny L}}=\mathbf{1}\,, \\ \mathcal{S}^{\tilde{\mbox{\tiny L}}\mbox{\tiny R}}\, \mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}}=\mathbf{1}\,, \qquad (\mathcal{S}^{\tilde{\mbox{\tiny L}}\mbox{\tiny R}})^\dagger\, \mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}}=\mathbf{1}\,. \end{aligned} \end{equation} We refer to Section~\ref{sec:unitarity-YBe} for a discussion on this. 
\subsection{The S-matrix as a tensor product}\label{sec:smat-tensor-prod} The results of the previous section allow us to rewrite a $\alg{psu}(1|1)^4_\text{c.e.}$-invariant S-matrix as a tensor product of two $\alg{su}(1|1)^2_\text{c.e.}$-invariant S-matrices. \begin{equation} \begin{aligned} \mathcal{S}_{\alg{psu}(1|1)^4}&= S_0 \cdot \mathcal{S}_{\alg{su}(1|1)^2} \;\check{\otimes}\; \mathcal{S}_{\alg{su}(1|1)^2}, \\ \mathbf{S}_{\alg{psu}(1|1)^4}&= S_0 \cdot \mathbf{S}_{\alg{su}(1|1)^2} \;\hat{\otimes}\; \mathbf{S}_{\alg{su}(1|1)^2}, \end{aligned} \end{equation} where $S_0$ is a possible prefactor that is not fixed by symmetries. We introduced the graded tensor products~$\check{\otimes}$ and $\hat{\otimes}$ \begin{equation} \label{eq:gradedtensorpr} \begin{aligned} \left( \mathcal{A}\,\check{\otimes}\,\mathcal{B} \right)_{MM',NN'}^{KK',LL'} &= (-1)^{\epsilon_{M'}\epsilon_{N}+\epsilon_{L}\epsilon_{K'}} \ \mathcal{A}_{MN}^{KL} \ \mathcal{B}_{M'N'}^{K'L'}\,, \\ \left( \mathbf{A}\,\hat{\otimes}\,\mathbf{B} \right)_{MM',NN'}^{KK',LL'} &= (-1)^{\epsilon_{M'}\epsilon_{N}+\epsilon_{L'}\epsilon_{K}} \ \mathbf{A}_{MN}^{KL} \ \mathbf{B}_{M'N'}^{K'L'}\,, \end{aligned} \end{equation} where the symbol~$\epsilon$ is $1$ for fermions and $0$ for bosons. Depending on the representations that we want to scatter we have to choose the proper $\alg{su}(1|1)^2$ S-matrices entering the tensor product~\cite{Borsato:2014exa}. This construction is explained in the rest of this section, while the explicit result for all the scattering elements may be found in Appendix~\ref{app:S-mat-explicit}. \subsubsection{The massive sector $(\bullet\bullet)$}When considering the massive sector, we can scatter two different irreducible representations $\varrho_{\mbox{\tiny L}} \otimes \varrho_{\mbox{\tiny L}}$ and $\varrho_{\mbox{\tiny R}} \otimes \varrho_{\mbox{\tiny R}}$, identified by the eigenvalue $m=\pm 1$ of the generator $\gen{M}$. 
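The graded tensor products of~\eqref{eq:gradedtensorpr} are straightforward to transcribe into code. Below is a minimal numpy sketch of the $\check{\otimes}$ product (the four-index convention \texttt{A[M,N,K,L]} for the amplitude $\mathcal{A}_{MN}^{KL}$, the function name and the sanity check are our own), verifying that for identity amplitudes the grading signs cancel pairwise:

```python
import numpy as np
from itertools import product

def graded_check_tensor(A, B, eps):
    """(A otimes-check B)_{MM',NN'}^{KK',LL'} with the grading sign
    (-1)^{eps_{M'} eps_N + eps_L eps_{K'}}; eps[i] = 0 for bosons, 1 for fermions.
    A and B are 4-index arrays with A[M, N, K, L] = A_{MN}^{KL}."""
    n = len(eps)
    out = np.zeros((n,) * 8, dtype=complex)
    for M, Mp, N, Np, K, Kp, L, Lp in product(range(n), repeat=8):
        sign = (-1) ** (eps[Mp] * eps[N] + eps[L] * eps[Kp])
        out[M, Mp, N, Np, K, Kp, L, Lp] = sign * A[M, N, K, L] * B[Mp, Np, Kp, Lp]
    return out

# sanity check: for identity amplitudes the sign is (-1)^{2 eps_{M'} eps_N} = +1,
# so the graded product of identities is again the identity
eps = [0, 1]                      # one boson, one fermion
n = len(eps)
Id = np.zeros((n,) * 4)
for M, N in product(range(n), repeat=2):
    Id[M, N, M, N] = 1.0
out = graded_check_tensor(Id, Id, eps)
for M, Mp, N, Np in product(range(n), repeat=4):
    assert out[M, Mp, N, Np, M, Mp, N, Np] == 1.0
```

The $\hat{\otimes}$ product differs only in the second exponent, $\epsilon_{L'}\epsilon_{K}$ in place of $\epsilon_{L}\epsilon_{K'}$.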
We see that this divides the massive sector into four different subsectors: Left-Left (LL), Right-Right (RR), Left-Right (LR) and Right-Left (RL). In each of these subsectors the LR-flavor is transmitted\footnote{As explained in the previous section, one needs to impose also LR-symmetry and unitarity to get pure transmission for the scattering of different flavors~\cite{Borsato:2012ud}.}. Scattering two Left excitations means that we need to consider the tensor product \begin{equation} \text{Left - Left:}\quad S_0^{\mbox{\tiny L}\sL} \cdot \mathcal{S}^{\mbox{\tiny L}\sL}\,\check{\otimes}\,\mathcal{S}^{\mbox{\tiny L}\sL}\,, \end{equation} where the explicit form of $\mathcal{S}^{\mbox{\tiny L}\sL}$ is given in~\eqref{eq:su(1|1)2-Smat-grad1}. We need to fix a proper normalisation and we find it convenient to do it as \begin{equation}\label{eq:norm-LL-massive-sector} S_0^{\mbox{\tiny L}\sL} (x^\pm_p,x^\pm_q) = \frac{x^+_p}{x^-_p} \, \frac{x^-_q}{x^+_q} \, \frac{x^-_p - x^+_q}{x^+_p - x^-_q} \, \frac{1-\frac{1}{x^-_p x^+_q}}{1-\frac{1}{x^+_p x^-_q}} \, \frac{1}{\left(\sigma^{\bullet\bullet}_{pq} \right)^2 }, \end{equation} where $\sigma^{\bullet\bullet}_{pq}$ is called the \emph{dressing factor}. Since it cannot be fixed by the symmetries, it will be constrained later by solving the crossing equations derived in Section~\ref{sec:crossing-invar-T4}. This normalisation is chosen to get, for example, the following scattering element \begin{equation} \bra{Y^{\mbox{\tiny L}}_q \, Y^{\mbox{\tiny L}}_p} \mathcal{S} \ket{Y^{\mbox{\tiny L}}_p \, Y^{\mbox{\tiny L}}_q} = \frac{x^+_p}{x^-_p} \, \frac{x^-_q}{x^+_q} \, \frac{x^-_p - x^+_q}{x^+_p - x^-_q} \, \frac{1-\frac{1}{x^-_p x^+_q}}{1-\frac{1}{x^+_p x^-_q}} \, \frac{1}{\left(\sigma^{\bullet\bullet}_{pq} \right)^2 }.
\end{equation} When we scatter two Right excitations we find an S-matrix \begin{equation} \text{Right - Right:}\quad S_0^{\mbox{\tiny R}\sR} \cdot \mathcal{S}^{\mbox{\tiny R}\sR}\,\check{\otimes}\,\mathcal{S}^{\mbox{\tiny R}\sR}\,, \end{equation} and imposing LR-symmetry allows us to relate this result to the previous one, $\mathcal{S}^{\mbox{\tiny R}\sR}=\mathcal{S}^{\mbox{\tiny L}\sL}$ and $S_0^{\mbox{\tiny R}\sR}=S_0^{\mbox{\tiny L}\sL}$. In particular one does not need to introduce a different dressing factor in this subsector. On the other hand, scattering a Left excitation with a Right one we get the S-matrix \begin{equation} \text{Left - Right:}\quad S_0^{\mbox{\tiny L}\mbox{\tiny R}} \cdot \mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}}\,\check{\otimes}\,\mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}}\,, \end{equation} where $\mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}}$ may be found in~\eqref{eq:su(1|1)2-Smat-LRgrad1}. The preferred normalisation in this case is \begin{equation} S_0^{\mbox{\tiny L}\mbox{\tiny R}}(x^\pm_p,x^\pm_q) =\left(\frac{x^+_p}{x^-_p}\right)^{1/2}\left(\frac{x^+_q}{x^-_q}\right)^{-1/2} \, \frac{1-\frac{1}{x^-_p x^+_q}}{1-\frac{1}{x^+_p x^-_q}} \, \frac{1}{\left(\tilde{\sigma}^{\bullet\bullet}_{pq}\right)^2}\,, \end{equation} where a new dressing factor $\tilde{\sigma}^{\bullet\bullet}_{pq}$ is introduced. With this normalisation we get for example the following scattering element \begin{equation} \bra{Y^{\mbox{\tiny R}}_q \, Y^{\mbox{\tiny L}}_p} \mathcal{S} \ket{Y^{\mbox{\tiny L}}_p \, Y^{\mbox{\tiny R}}_q} = \frac{x^+_p}{x^-_p} \, \frac{x^-_q}{x^+_q} \, \frac{1-\frac{1}{x^+_p x^-_q}}{1-\frac{1}{x^+_p x^+_q}} \, \frac{1-\frac{1}{x^-_p x^+_q}}{1-\frac{1}{x^-_p x^-_q}} \, \frac{1}{\left(\tilde{\sigma}^{\bullet\bullet}_{pq} \right)^2 }. \end{equation} To conclude, in the massive sector we need to introduce two unconstrained factors $\sigma^{\bullet\bullet}_{pq},\tilde{\sigma}^{\bullet\bullet}_{pq}$. 
\subsubsection{The massless sector $(\circ\circ)$} Each one-particle massless representation transforms under two copies of $\varrho_{\mbox{\tiny L}} \otimes \widetilde{\varrho}_{\mbox{\tiny L}}$, which are further organised into an $\alg{su}(2)_{\circ}$ doublet. When scattering two massless modules, we should then consider $16$ copies of $\alg{psu}(1|1)^4_\text{c.e.}$-invariant S-matrices, relating each of the $4$ possible in-states to each of the $4$ possible out-states. Using the $\alg{su}(2)_{\circ}$ symmetry one is able to relate all these S-matrices, finding an object that is $\alg{su}(2)_{\circ}$-invariant. More explicitly, the S-matrix in the massless sector can be written as the tensor product of an $\alg{su}(2)_{\circ}$-invariant S-matrix and the relevant tensor product realisation of the $\alg{psu}(1|1)^4_\text{c.e.}$-invariant S-matrix \begin{equation} (\circ\circ):\quad S_0^{\circ\circ} \cdot \mathcal{S}_{\alg{su}(2)}\;{\otimes}\;\big(\mathcal{S}^{\mbox{\tiny L}\sL}\;\check{\otimes}\;\mathcal{S}^{\tilde{\mbox{\tiny L}}\tilde{\mbox{\tiny L}}}\big)\,. \end{equation} Fixing a preferred normalisation we have \begin{equation} \label{eq:su2smat} \mathcal{S}_{\alg{su}(2)}(p,q)=\frac{1}{1+\varsigma_{pq}}\big(\mathbf{1}+\varsigma_{pq} \Pi\big) = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 &\frac{1}{1+\varsigma_{pq}} &\frac{\varsigma_{pq}}{1+\varsigma_{pq}} & 0 \\ 0 &\frac{\varsigma_{pq}}{1+\varsigma_{pq}} &\frac{1}{1+\varsigma_{pq}} & 0 \\ 0 & 0 & 0 & 1 \\ \end{array}\right)\,, \end{equation} where $\Pi$ is the permutation matrix and $\varsigma_{pq}$ is a function of the two momenta $p,q$ that is not fixed by the $\alg{su}(2)_{\circ}$ symmetry. In Sections~\ref{sec:unitarity-YBe} and~\ref{sec:crossing-invar-T4} we will see that further constraints are imposed on it by unitarity, the Yang-Baxter equation and crossing invariance.
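As a sanity check, the $4\times 4$ matrix in~\eqref{eq:su2smat} can be built numerically. Taking the difference ansatz $\varsigma_{pq}=i(w_p-w_q)$ (which anticipates the constraints of Section~\ref{sec:unitarity-YBe}), it satisfies braiding unitarity and the Yang-Baxter equation. The rapidity function $w_p$ below is an arbitrary placeholder, not the one fixed by the model.

```python
# The su(2)_o-invariant S-matrix of eq. (su2smat), checked numerically:
# with the difference form varsigma_pq = i(w_p - w_q) it satisfies
# braiding unitarity and the Yang-Baxter equation in the form used here.
import numpy as np

def S_su2(vs):
    """4x4 matrix (1 + vs*Pi)/(1 + vs), with Pi the permutation on C^2 (x) C^2."""
    Pi = np.eye(4)[[0, 2, 1, 3]]
    return (np.eye(4) + vs * Pi) / (1 + vs)

w = lambda p: 1.0 / p            # placeholder rapidity; any real function works
vs = lambda p, q: 1j * (w(p) - w(q))

p, q, r = 0.3, 0.9, 1.7
I2 = np.eye(2)

# braiding unitarity: S(q,p) S(p,q) = 1 follows from varsigma_qp = -varsigma_pq and Pi^2 = 1
assert np.allclose(S_su2(vs(q, p)) @ S_su2(vs(p, q)), np.eye(4))

# Yang-Baxter equation:
#   S(q,r)x1 . 1xS(p,r) . S(p,q)x1 = 1xS(p,q) . S(p,r)x1 . 1xS(q,r)
def L(S): return np.kron(S, I2)   # act on the first two of three su(2) spaces
def R(S): return np.kron(I2, S)   # act on the last two
lhs = L(S_su2(vs(q, r))) @ R(S_su2(vs(p, r))) @ L(S_su2(vs(p, q)))
rhs = R(S_su2(vs(p, q))) @ L(S_su2(vs(p, r))) @ R(S_su2(vs(q, r)))
assert np.allclose(lhs, rhs)
print("su(2) S-matrix: braiding unitarity and YBE hold")
```

The Yang-Baxter check works precisely because the difference form makes $\varsigma_{pr}=\varsigma_{pq}+\varsigma_{qr}$, the linear constraint derived in Section~\ref{sec:unitarity-YBe}.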
We choose to fix the overall normalisation as \begin{equation} S_0^{\circ\circ} = \left( \frac{x^+_p}{x^-_p} \, \frac{x^-_q}{x^+_q} \right)^{1/2}\, \frac{x^-_p - x^+_q}{x^+_p - x^-_q} \, \frac{1}{\left(\sigma^{\circ\circ}_{pq} \right)^2 }, \end{equation} where we introduced the dressing factor $\sigma^{\circ\circ}_{pq}$ for the massless sector. With this choice, the scattering of two identical bosons coming from the torus is just \begin{equation} \begin{aligned} \bra{T^{\dot{a}a}_q \, T^{\dot{a}a}_p} \mathcal{S} \ket{T^{\dot{a}a}_p \, T^{\dot{a}a}_q} & = \frac{1}{\left(\sigma^{\circ\circ}_{pq} \right)^2 }. \end{aligned} \end{equation} \subsubsection{The mixed-mass sector $(\bullet\circ),(\circ\bullet)$} In the mixed-mass sector we may decide to scatter a massive particle with a massless one $(\bullet\circ)$ or vice-versa $(\circ\bullet)$. Let us focus on the first case. We also have the freedom of choosing the LR-flavor of the first excitation. When this is Left we find the S-matrix \begin{equation} \label{eq:mssive-mless1} \text{Left (massive) - massless:}\quad S_0^{\mbox{\tiny L}\circ} \cdot \left(\mathcal{S}^{\mbox{\tiny L}\sL}\,\check{\otimes}\,\mathcal{S}^{\mbox{\tiny L}\tilde{\mbox{\tiny L}}} \right)^{\oplus 2}\,. \end{equation} The symbol $\oplus 2$ appears because massless excitations are organised in two $\alg{psu}(1|1)^4_\text{c.e.}$-modules, identified by the two $\alg{su}(2)_{\circ}$ flavors. Imposing $\alg{su}(2)_{\circ}$-invariance one finds that the S-matrix is just the sum of two identical copies. In other words, the $\alg{su}(2)_{\circ}$ flavor acts as a spectator and is transmitted under the scattering. Similarly, for scattering involving Right-massive excitations \begin{equation} \label{eq:mssive-mlessR} \text{Right (massive) - massless:}\quad S_0^{\mbox{\tiny R}\circ} \cdot \left(\mathcal{S}^{\mbox{\tiny R}\mbox{\tiny L}}\,\check{\otimes}\,\mathcal{S}^{\mbox{\tiny R}\tilde{\mbox{\tiny L}}} \right)^{\oplus 2}\,.
\end{equation} This S-matrix is related by LR-symmetry to the previous one. Implementing LR-symmetry as in Section~\ref{sec:LR-symmetry} and using the fact that the second excitation satisfies a massless dispersion relation, one can check that the two S-matrices are mapped one into the other, upon fixing the proper normalisations. This allows us to use just $S_0^{\mbox{\tiny L}\circ} \cdot \left(\mathcal{S}^{\mbox{\tiny L}\sL}\,\check{\otimes}\,\mathcal{S}^{\mbox{\tiny L}\tilde{\mbox{\tiny L}}} \right)^{\oplus 2}$ for both cases. We also set the overall factor \begin{equation} \begin{aligned} S_0^{\bullet\circ} &\equiv S_0^{\mbox{\tiny L}\circ} =\left( \frac{x^+_p}{x^-_p} \right)^{-1/2}\, \left( \frac{1-\frac{1}{x^-_p x^+_q}}{1-\frac{1}{x^+_p x^-_q}} \right)^{1/2} \, \left(\frac{1-\frac{1}{x^-_p x^-_q}}{1-\frac{1}{x^+_p x^+_q}} \right)^{1/2} \, \frac{1}{\left(\sigma^{\bullet\circ}_{pq} \right)^2 }, \end{aligned} \end{equation} where we have chosen a proper normalisation and introduced the dressing factor $\sigma^{\bullet\circ}_{pq}$ for massive-massless scattering. Similar considerations apply when considering massless-massive scattering. The second excitation is allowed to take the two different flavors L or R, and in the two cases we find the S-matrices \begin{equation} \label{eq:mless-mssive} \begin{aligned} \text{massless - \ \ Left\ (massive):}\quad & S_0^{\circ \mbox{\tiny L}} \cdot \left(\mathcal{S}^{\mbox{\tiny L}\sL}\,\check{\otimes}\,\mathcal{S}^{\tilde{\mbox{\tiny L}}\mbox{\tiny L}} \right)^{\oplus 2}\,, \\ \text{massless - Right (massive):}\quad & S_0^{\circ \mbox{\tiny R}} \cdot \left(\mathcal{S}^{\mbox{\tiny L}\mbox{\tiny R}}\,\check{\otimes}\,\mathcal{S}^{\tilde{\mbox{\tiny L}}\mbox{\tiny R}} \right)^{\oplus 2}\,. \end{aligned} \end{equation} LR-symmetry allows us to use just $S_0^{\circ \mbox{\tiny L}} \cdot \left(\mathcal{S}^{\mbox{\tiny L}\sL}\,\check{\otimes}\,\mathcal{S}^{\tilde{\mbox{\tiny L}}\mbox{\tiny L}} \right)^{\oplus 2}$ in both cases. 
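For the massless leg entering these mixed-mass normalisations, the variables $x^\pm_q$ lie on the unit circle. A small numerical sketch, assuming the parameterisation $x^\pm_q=e^{\pm iq/2}$ for $0<q<2\pi$ (again standard, but not spelled out here): the mass-zero shortening condition holds and the energy formula reduces to the massless dispersion relation $E_q=2h\sin\tfrac{q}{2}$.

```python
# Massless kinematics entering the mixed-mass normalisations.
# Assumption of this sketch: x^{±}_q = e^{±iq/2} (unit circle, 0 < q < 2*pi).
import cmath, math

h, q = 0.7, 1.3
xp, xm = cmath.exp(1j * q / 2), cmath.exp(-1j * q / 2)

# shortening condition with vanishing mass: x^+ + 1/x^+ - x^- - 1/x^- = 0
assert abs(xp + 1/xp - xm - 1/xm) < 1e-12

# the same energy formula as in the massive case now gives E = 2h sin(q/2)
E = -0.5j * h * (xp - 1/xp - xm + 1/xm)
assert abs(E - 2 * h * math.sin(q / 2)) < 1e-12
print("massless kinematics checks passed")
```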
We then introduce the common factor $S_0^{\circ\bullet}$, which we decide to normalise as \begin{equation} S_0^{\circ\bullet}\equiv S_0^{\circ \mbox{\tiny L}}=\left( \frac{x^+_q}{x^-_q} \right)^{1/2}\, \left( \frac{1-\frac{1}{x^-_p x^+_q}}{1-\frac{1}{x^+_p x^-_q}} \right)^{1/2} \, \left(\frac{1-\frac{1}{x^-_p x^-_q}}{1-\frac{1}{x^+_p x^+_q}} \right)^{-1/2} \, \frac{1}{\left(\sigma^{\circ\bullet}_{pq} \right)^2 }. \end{equation} The chosen normalisations allow us to write for example the following scattering elements \begin{equation} \begin{aligned} \bra{T^{\dot{a}a}_q \, Y^{\mbox{\tiny L}}_p} \mathcal{S} \ket{Y^{\mbox{\tiny L}}_p \, T^{\dot{a}a}_q} &= \left( \frac{1-\frac{1}{x^+_p x^-_q}}{1-\frac{1}{x^+_p x^+_q}} \, \frac{1-\frac{1}{x^-_p x^+_q}}{1-\frac{1}{x^-_p x^-_q}} \right)^{1/2} \, \frac{1}{\left(\sigma^{\bullet\circ}_{pq} \right)^2 }, \\ \bra{Y^{\mbox{\tiny L}}_q \, T^{\dot{a}a}_p} \mathcal{S} \ket{T^{\dot{a}a}_p \, Y^{\mbox{\tiny L}}_q} &= \left( \frac{1-\frac{1}{x^+_p x^-_q}}{1-\frac{1}{x^+_p x^+_q}} \, \frac{1-\frac{1}{x^-_p x^+_q}}{1-\frac{1}{x^-_p x^-_q}} \right)^{1/2} \, \frac{1}{\left(\sigma^{\circ\bullet}_{pq} \right)^2 }. \end{aligned} \end{equation} Later we will discuss how massive-massless and massless-massive scatterings are related in a simple way by unitarity. In particular, this will give a relation between $\sigma^{\bullet\circ}_{pq}$ and $\sigma^{\circ\bullet}_{pq}$, motivating the statement that in the mixed-mass sector we have just one dressing factor. \subsection{Unitarity and Yang-Baxter equation}\label{sec:unitarity-YBe} After fixing the S-matrix based on symmetries, one finds that it is determined up to five dressing factors. Two of them belong to the massive sector, describing scattering of excitations with the same or with opposite LR flavors. Another two are responsible for the mixed-mass sector, namely massive-massless and massless-massive scattering. The last one belongs to the massless sector.
More constraints on those scalar factors come from unitarity. One notion of this is the usual \emph{physical} unitarity, which requires the S-matrix to be unitary as a matrix \begin{equation} \mathcal{S}^\dagger_{pq}\mathcal{S}_{pq}=\mathbf{1}\,. \end{equation} Another natural constraint for scattering of particles on a line is \emph{braiding} unitarity \begin{equation} \mathcal{S}_{qp}\mathcal{S}_{pq}=\mathbf{1}\,. \end{equation} Its interpretation is that scattering two excitations twice should just bring us back to the initial situation\footnote{We define the S-matrix such that the momentum of the first particle is larger than the one of the second. If the first scattering happens for $p>q$, to evaluate the second process we have to analytically continue the S-matrix to the region where the momentum of the first particle is less than the one of the second excitation.}. We refer to~\cite{Arutyunov:2006yd} for a justification of this constraint from the point of view of the formalism of the Zamolodchikov-Faddeev algebra applied to worldsheet integrable scattering. In our case we find the following equations \begin{equation} \begin{aligned} \sigma^{\bullet\bullet}_{qp}=\big(\sigma^{\bullet\bullet}_{pq}\big)^*=\frac{1}{\sigma^{\bullet\bullet}_{pq}}\,, \qquad \tilde{\sigma}^{\bullet\bullet}_{qp}=\big(\tilde{\sigma}^{\bullet\bullet}_{pq}\big)^*=\frac{1}{\tilde{\sigma}^{\bullet\bullet}_{pq}}\,, \qquad \sigma^{\circ\circ}_{qp}=\big(\sigma^{\circ\circ}_{pq}\big)^*=\frac{1}{\sigma^{\circ\circ}_{pq}}\,,\\ \qquad \sigma^{\bullet\circ}_{qp}=\big(\sigma^{\circ\bullet}_{pq}\big)^*=\frac{1}{\sigma^{\circ\bullet}_{pq}}\,, \qquad\qquad \sigma^{\circ\bullet}_{qp}=\big(\sigma^{\bullet\circ}_{pq}\big)^*=\frac{1}{\sigma^{\bullet\circ}_{pq}}\,.
\qquad\qquad \end{aligned} \end{equation} The first line states that the dressing factors in the massive and massless sectors can be written as exponentials of anti-symmetric functions of the two momenta, and for physical momenta they take values on the unit circle. On the other hand, in the mixed-mass sector unitarity relates massive-massless and massless-massive scattering. This reduces to four the number of unconstrained dressing factors. Unitarity imposes also the following constraint on the function $\varsigma_{pq}$ appearing in the $\alg{su}(2)_\circ$-invariant S-matrix \begin{equation} \varsigma_{qp}=\big(\varsigma_{pq}\big)^*=-\varsigma_{pq}\,, \end{equation} meaning that it is a purely imaginary anti-symmetric function of $p$ and $q$. \medskip For the integrability of the model, it is necessary for the S-matrix to satisfy the Yang-Baxter equation \begin{equation}\label{eq:YBe} \mathcal{S}(q,r)\otimes\mathbf{1} \cdot\mathbf{1}\otimes\mathcal{S}(p,r) \cdot \mathcal{S}(p,q)\otimes\mathbf{1} = \mathbf{1} \otimes\mathcal{S}(p,q)\cdot\mathcal{S}(p,r)\otimes \mathbf{1} \cdot \mathbf{1} \otimes\mathcal{S}(q,r)\,. \end{equation} This is a crucial requirement to make factorisability of multi-particle scatterings possible. One may check the Yang-Baxter equation for the full S-matrix, or equivalently for each factor of the tensor product appearing in each sector. Since the $\alg{su}(1|1)^2_\text{c.e.}$ S-matrices of Section~\ref{sec:su112-S-matrices} satisfy the Yang-Baxter equation, it follows that this is true also for the $\alg{psu}(1|1)^4_\text{c.e.}$ S-matrices of the various sectors of our model. On the S-matrix for the $\alg{su}(2)_\circ$ factor, the Yang-Baxter equation yields a further constraint for $\varsigma_{pq}$ \begin{equation} \varsigma(p,q)-\varsigma(p,r)+\varsigma(q,r)=0\,. \end{equation} The above equation is linear thanks to a suitable choice of parameterisation for the $\alg{su}(2)_\circ$ S-matrix. 
The solution is a function that is a difference of two rapidities, each depending on just one momentum. Together with the constraints imposed by unitarity we can write \begin{equation} \label{eq:varsigma-difference} \varsigma(p,q)=i\big(w_p-w_q\big), \end{equation} where we have introduced a new real function $w_p$ of the momentum. \subsection{Crossing invariance}\label{sec:crossing-invar-T4} The analytic properties of the dressing factors are revealed after imposing crossing invariance of the S-matrix~\cite{Janik:2006dc}. A crossing transformation corresponds to an analytic continuation to an unphysical channel, where the energy and the momentum flip sign. We start the discussion by first considering the massive excitations. Their dispersion relation satisfies \begin{equation} E^2 = 1+4h^2\sin^2 \frac{p}{2}\,, \end{equation} and we can uniformise it in terms of a complex parameter $z$ with the parameterisation~\cite{Beisert:2006nonlin} \begin{equation} p=2 \text{am}z\,,\qquad \sin \frac{p}{2} = \text{sn}(z,k)\,, \qquad E=\text{dn}(z,k)\,, \end{equation} where the elliptic modulus is defined as $k=-4h^2<0$. The curve that we obtain is a torus, and we call $2\omega_1$ and $2\omega_2$ the periods for real and imaginary shifts, respectively. They are obtained by \begin{equation} 2\omega_1= 4\,\text{K}(k)\,, \qquad 2\omega_2 = 4i\, \text{K}(1-k)-4\, \text{K}(k)\,, \end{equation} with $\text{K}(k)$ the complete elliptic integral of the first kind. Real values of $z$ correspond to real values of the momentum $p$. If we take periodicity into account and we define the physical range of the momentum to be $-\pi\leq p<\pi$, then we may take $-\omega_1/2\leq z<\omega_1/2$. A crossing transformation corresponds to an analytic continuation to a complex value that we denote by $\bar{z}$, where we flip the signs of the momentum and the energy.
We see that this is implemented by shifting $z$ by half of the imaginary period \begin{equation} z\to\bar{z}= z+\omega_2\,, \quad\implies\quad p\to \bar{p}=-p\,,\quad E\to \bar{E}=-E\,. \end{equation} On the Zhukovski variables and the function $\eta_p$ defined in~\eqref{eq:def-eta}, the crossing transformation implies \begin{equation} x^{\pm}(z+\omega_2)= \frac{1}{x^{\pm}(z)}\,, \qquad \eta(z+\omega_2) = \frac{i}{x^+(z)}\eta(z)\,. \end{equation} We can easily check that for massless excitations the crossing transformation on the parameters $x^\pm$ is implemented in the same way. It is important to note that for the eigenvalue of the central charge $\gen{C}$, crossing does not just change its sign \begin{equation}\label{eq:cross-central-charge} \frac{i\,h}{2}\left(e^{i\,\bar{p}}-1\right)=-e^{-i\,p}\frac{i\,h}{2}\left(e^{i\,p}-1\right)\,, \implies \gen{C}(\bar{z})= -e^{i\,p} \gen{C}(z)\,. \end{equation} A crucial step is now to note that we may mimic these transformation laws on the central charges also in a different way, namely by defining a proper charge conjugation matrix $\mathscr{C}(z)$. According to the previous discussion, on the central charges $\gen{H}$ and $\gen{M}$ we should impose \begin{equation} \label{eq:crossign-u1-charges} \gen{H}(\bar{z}) = - \mathscr{C}(z)\gen{H}\,\mathscr{C}(z)^{-1}\,, \qquad \gen{M}(\bar{z}) = - \mathscr{C}(z)\gen{M}\,\mathscr{C}(z)^{-1}\,.
\end{equation} In order to reproduce also the transformation~\eqref{eq:cross-central-charge} on the charge $\gen{C}$, we impose that on supercharges we have \begin{equation} \begin{aligned} \gen{Q}^{\ \dot{a}}_{\mbox{\tiny L}}(z+\omega_2)^{\text{st}} &= - e^{-\frac{i}{2}\, p}\ \mathscr{C}(z)\, \gen{Q}^{\ \dot{a}}_{\mbox{\tiny L}}(z)\, \mathscr{C}^{-1}(z), \\ \gen{Q}_{\mbox{\tiny R} \dot{a}}(z+\omega_2)^{\text{st}} &= - e^{-\frac{i}{2}\, p}\ \mathscr{C}(z)\, \gen{Q}_{\mbox{\tiny R} \dot{a}}(z)\, \mathscr{C}^{-1}(z), \\ \overline{\gen{Q}}{}_{\mbox{\tiny L} \dot{a}}(z+\omega_2)^{\text{st}} &= -e^{+\frac{i}{2} p}\ \mathscr{C}(z)\, \overline{\gen{Q}}{}_{\mbox{\tiny L} \dot{a}}(z)\, \mathscr{C}^{-1}(z),\\ \overline{\gen{Q}}{}_{\mbox{\tiny R}}^{\ \dot{a}}(z+\omega_2)^{\text{st}} &= -e^{+\frac{i}{2} p}\ \mathscr{C}(z)\, \overline{\gen{Q}}{}_{\mbox{\tiny R}}^{\ \dot{a}}(z)\, \mathscr{C}^{-1}(z). \end{aligned} \end{equation} Here ${}^\text{st}$ denotes supertransposition, that is implemented on supercharges as $\gen{Q}^{\text{st}} = \gen{Q}^{\text{t}} \, \Sigma$. The diagonal matrix $\Sigma$ is the fermion-sign matrix, taking values $+1, -1$ on bosons and fermions respectively. Compatibility with the $\alg{su}(2)_{\bullet}$ generators requires that we exchange the highest and the lowest weight in the doublet representation. We follow the same rule also for $\alg{su}(2)_{\circ}$ \begin{equation} {\gen{J}_{\bullet \dot{b}}}^{\dot{a}}=-\mathscr{C}(z) \, {\gen{J}_{\bullet \dot{a}}}^{\dot{b}}\,\mathscr{C}(z)^{-1}\,, \qquad {\gen{J}_{\circ b}}^a=- \mathscr{C}(z) \, {\gen{J}_{\circ a}}^b\,\mathscr{C}(z)^{-1} \,. \end{equation} The form of the charge conjugation matrix is not unique, nevertheless all choices yield the same crossing equations. 
In the basis \begin{equation} \{ Y^{\mbox{\tiny L}}, \eta^{\mbox{\tiny L} 1}, \eta^{\mbox{\tiny L} 2}, Z^{\mbox{\tiny L}} \} \oplus \{ Y^{\mbox{\tiny R}}, \eta^{\mbox{\tiny R} 1}, \eta^{\mbox{\tiny R} 2}, Z^{\mbox{\tiny R}} \} \oplus \{ T^{11}, T^{21}, T^{12}, T^{22} \} \oplus \{ \widetilde{\chi}^1, \chi^1, \widetilde{\chi}^2, \chi^2 \} , \end{equation} we take it to be \begin{equation} \renewcommand{\arraystretch}{1.1} \setlength{\arraycolsep}{3pt} \mathscr{C}_p=\!\left(\! \mbox{\footnotesize$ \begin{array}{cccc|cccc} \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & 1 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & -i & \color{black!40}0 \\ \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & i & \color{black!40}0 & \color{black!40}0 \\ \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & 1 \\ \hline 1 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ \color{black!40}0 & \color{black!40}0 & i & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ \color{black!40}0 & -i & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & 1 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ \end{array}$}\! \right) \oplus \!\left(\!
\mbox{\footnotesize$ \begin{array}{cccc|cccc} \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & 1 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ \color{black!40}0 & \color{black!40}0 & -1 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ \color{black!40}0 & -1 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ 1 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ \hline \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & -i \frac{a_p}{b_p} \\ \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & i \frac{b_p}{a_p} & \color{black!40}0 \\ \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & i \frac{a_p}{b_p} & \color{black!40}0 & \color{black!40}0 \\ \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & -i \frac{b_p}{a_p} & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ \end{array}$}\! \right). \end{equation} With the explicit form of the charge conjugation matrix we are able to impose crossing invariance on the full S-matrix. The key is to consider the objects \begin{equation} \mathbf{S}(z_p,z_q)^{-1} \quad\text{ and } \quad\mathscr{C}_1(z_p) \cdot \mathbf{S}^{\text{t}_1}(z_p+\omega_2,z_q) \cdot \mathscr{C}_1^{-1}(z_p) \,, \end{equation} where we have used the notation $\mathscr{C}_1(z_p) = \mathscr{C} (z_p) \otimes \mathbf{1}$, and ${}^{\text{t}_1}$ denotes transposition on the first space.
We can check that the compatibility condition with the charges for $\mathbf{S}(z_p,z_q)^{-1}$ \begin{equation} \begin{aligned} \gen{J}_{12}(z_p,z_q) \,\mathbf{S}(z_p,z_q)^{-1}- \mathbf{S}(z_p,z_q)^{-1}\,\gen{J}_{21}(z_q,z_p) &=0\,, \end{aligned} \end{equation} is satisfied also by $\mathscr{C}_1(z_p) \cdot \mathbf{S}^{\text{t}_1}(z_p+\omega_2,z_q) \cdot \mathscr{C}_1^{-1}(z_p)$. They might then differ just by some factors, one for each of the sectors that we identified in Section~\ref{sec:smat-tensor-prod}. Crossing symmetry fixes this freedom and states that these two objects are equal\footnote{A similar statement can be made also for the object $ \mathscr{C}_2^{-1}(z_q) \cdot \mathbf{S}^{\text{t}_2}(z_p,z_q-\omega_2) \cdot \mathscr{C}_2(z_q)$, where crossing is implemented by shifting the second entry in the opposite direction. This second equation would be related by unitarity to the previous one.} \begin{equation}\label{eq:cr-matrix-form} \begin{aligned} \mathscr{C}_1(z_p) \cdot \mathbf{S}^{\text{t}_1}(z_p+\omega_2,z_q) \cdot \mathscr{C}_1^{-1}(z_p) \cdot \mathbf{S}(z_p,z_q) &= \mathbf{1} \,. 
\end{aligned} \end{equation} It is an important fact that the whole set of equations reduces to equations just for the dressing factors \begin{equation}\label{eq:cr-massive} \begin{aligned} \left(\sigma^{\bullet\bullet}_{pq}\right)^2 \ \left(\tilde{\sigma}^{\bullet\bullet}_{\bar{p}q}\right)^2 &= \left( \frac{x^-_q}{x^+_q} \right)^2 \frac{(x^-_p-x^+_q)^2}{(x^-_p-x^-_q)(x^+_p-x^+_q)} \frac{1-\frac{1}{x^-_px^+_q}}{1-\frac{1}{x^+_px^-_q}}, \\ \left(\sigma^{\bullet\bullet}_{\bar{p}q}\right)^2 \ \left(\tilde{\sigma}^{\bullet\bullet}_{pq}\right)^2 &= \left( \frac{x^-_q}{x^+_q} \right)^2 \frac{\left(1-\frac{1}{x^+_px^+_q}\right)\left(1-\frac{1}{x^-_px^-_q}\right)}{\left(1-\frac{1}{x^+_px^-_q}\right)^2} \frac{x^-_p-x^+_q}{x^+_p-x^-_q}, \end{aligned} \end{equation} \begin{equation}\label{eq:cr-mixed} \begin{aligned} \left(\sigma^{\bullet \circ}_{\bar{p}q} \right)^2 \ \left( \sigma^{\bullet \circ}_{pq} \right)^2 &= \frac{x^+_p}{x^-_p} \frac{x^-_p-x^+_q}{x^+_p-x^+_q} \frac{1-\frac{1}{x^+_px^+_q}}{1-\frac{1}{x^-_px^+_q}}, \\ \left(\sigma^{\circ \bullet}_{\bar{p}q} \right)^2 \ \left( \sigma^{\circ \bullet}_{pq} \right)^2 &= \frac{x^+_q}{x^-_q} \frac{x^+_p-x^-_q}{x^+_p-x^+_q} \frac{1-\frac{1}{x^+_px^+_q}}{1-\frac{1}{x^+_px^-_q}} \end{aligned} \end{equation} \begin{equation}\label{eq:cr-massless} \begin{aligned} \left(\sigma^{\circ\circ}_{\bar{p}q} \right)^2 \ \left(\sigma^{\circ\circ}_{pq} \right)^2 &= \frac{\varsigma_{pq}-1}{\varsigma_{pq}} \, \frac{1-\frac{1}{x^+_p x^+_q}}{1-\frac{1}{x^+_p x^-_q}} \, \frac{1-\frac{1}{x^-_p x^-_q}}{1-\frac{1}{x^-_p x^+_q}}, \end{aligned} \end{equation} and for the function $\varsigma_{pq}$ of the $\alg{su}(2)_{\circ}$ S-matrix \begin{equation}\label{eq:cr-varsigma} \begin{aligned} \varsigma_{\bar{p}q} &= \varsigma_{pq}-1\, \implies w(\bar{p})=w(p)+i\,. \end{aligned} \end{equation} In Section~\ref{sec:dressing-factors} we present solutions of the crossing equations for the dressing factors of the massive sector. 
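The kinematical ingredients of these crossing equations can be checked numerically. Assuming the standard massive parameterisation $x^\pm_p = e^{\pm ip/2}(1+E_p)/(2h\sin\tfrac{p}{2})$ (an assumption of this sketch), inversion $x^\pm\to 1/x^\pm$ flips the signs of momentum and energy, the central-charge identity~\eqref{eq:cross-central-charge} holds, and the difference form $\varsigma_{pq}=i(w_p-w_q)$ reproduces~\eqref{eq:cr-varsigma}.

```python
# Numerical check of the crossing transformation x^{±} -> 1/x^{±} on the
# assumed massive parameterisation x^{±}_p = e^{±ip/2}(1+E_p)/(2h sin(p/2)).
import cmath, math

h, p = 0.7, 1.3
E = math.sqrt(1 + 4 * h**2 * math.sin(p / 2)**2)
r = (1 + E) / (2 * h * math.sin(p / 2))
xp, xm = r * cmath.exp(1j * p / 2), r * cmath.exp(-1j * p / 2)

energy = lambda xp, xm: -0.5j * h * (xp - 1/xp - xm + 1/xm)

# crossing: x^{±}(z + omega_2) = 1/x^{±}(z)
xp_bar, xm_bar = 1 / xp, 1 / xm
assert abs(xp_bar / xm_bar - cmath.exp(-1j * p)) < 1e-12      # p -> -p
assert abs(energy(xp_bar, xm_bar) + E) < 1e-12                # E -> -E

# eigenvalue of the central charge C and eq. (cross-central-charge)
C = lambda p: 0.5j * h * (cmath.exp(1j * p) - 1)
assert abs(C(-p) - (-cmath.exp(-1j * p)) * C(p)) < 1e-12

# crossing for the su(2)_o function: w(p_bar) = w(p) + i implies varsigma -> varsigma - 1
w_p, w_q = 0.4, -1.1          # placeholder rapidity values
vs = 1j * (w_p - w_q)
vs_crossed = 1j * ((w_p + 1j) - w_q)
assert abs(vs_crossed - (vs - 1)) < 1e-12
print("crossing checks passed")
```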
\chapter{Symmetries of AdS$_3\times$S$^3\times$T$^4$}\label{ch:symm-repr-T4} In this chapter we study the 1+1-dimensional model that emerges as a description for strings on the {AdS$_3\times$S$^3\times$T$^4$} background, after fixing light-cone gauge on the worldsheet and taking the decompactification limit. The symmetry algebra of the original model---the isometries of this background are given by $\alg{psu}(1,1|2)_{\mbox{\tiny L}}\oplus\alg{psu}(1,1|2)_{\mbox{\tiny R}}$---is broken to a smaller algebra under the gauge-fixing procedure explained in Chapter~\ref{ch:strings-light-cone-gauge}. The generators that commute with the light-cone Hamiltonian close into the superalgebra that we call $\mathcal{A}$. The explicit commutation relations are presented in the next section. Here it is enough to say that the vector space underlying $\mathcal{A}$ can be decomposed as $$ \alg{psu}(1|1)^4 \oplus \alg{u}(1)^2 \oplus \alg{so}(4) \oplus \alg{u}(1)^2\,. $$ The four copies of $\alg{psu}(1|1)$ provide a total of eight real supercharges. As can be seen in~\eqref{eq:cealgebra}, their anti-commutators yield the $\alg{u}(1)$ central charges corresponding to the light-cone Hamiltonian $\gen{H}$ and an angular momentum $\gen{M}$ in AdS$_3\times$S$^3$. The $\alg{so}(4)$ symmetry is present only in the decompactification limit, where we have to consider the zero-winding sector for the torus and use vanishing boundary conditions for the worldsheet fields. This $\alg{so}(4)$ may be decomposed into $\alg{su}(2)_\bullet \oplus \alg{su}(2)_\circ$, to show more conveniently that the supercharges transform in doublets of $\alg{su}(2)_\bullet$, and are not charged under $\alg{su}(2)_\circ$. Let us introduce some terminology and say that \emph{on-shell}---when we consider states for which the total worldsheet momentum vanishes---these are the only generators appearing.
Going \emph{off-shell} we relax the condition on the momentum and we get $\mathcal{A}$, a central extension of the on-shell algebra. The two new $\alg{u}(1)$ generators $\gen{C},\overline{\gen{C}}$ measure the momentum of the state and play a major role in the whole construction. For the reader's convenience, we start by presenting the commutation relations defining the algebra $\mathcal{A}$, and we explain how to rewrite its generators in terms of the elements of a smaller superalgebra, namely $\alg{su}(1|1)^2_{\text{c.e.}}$. The explicit form of the charges at quadratic order in the fields allows us, on the one hand, to check closure under the correct commutation relations; on the other hand, it suffices to derive the exact momentum dependence of the eigenvalues of the central charges $\gen{C},\overline{\gen{C}}$. We also study the representations of $\mathcal{A}$ under which the worldsheet excitations are organised, first in the near-BMN limit and then to all-loops. We also rewrite them as bi-fundamental representations of $\alg{su}(1|1)^2_{\text{c.e.}}$, and we show that we can define a discrete ``\mbox{Left-Right} symmetry'' that will be crucial for constructing the S-matrix in the next chapter. We conclude with an explicit parameterisation of the action of the charges, as a function of the momenta of the worldsheet excitations. We collect in an appendix the calculations of the gauge-fixed action needed to obtain these results. \input{Chapters/SymmetryAlgebraT4.tex} \input{Chapters/RepresentationsT4.tex} \section{Summary} In this chapter we have studied the symmetry algebra that remains after fixing light-cone gauge for strings on {AdS$_3\times$S$^3\times$T$^4$}. We have considered the charges written in terms of worldsheet fields, and to simplify the computations we have truncated them at quadratic order in the expansion in powers of fields.
We actually used a ``hybrid expansion'', in the sense that the dependence on the light-cone coordinate $x^-$ was kept exact. The coordinate $x^-$ is related to the worldsheet momentum through the Virasoro constraint. Computing anti-commutators of supercharges we have verified the presence of a central extension when we are off-shell, \textit{i.e.}\xspace when we relax the level-matching condition and we consider states whose total worldsheet momentum is not zero. The hybrid expansion allowed us to derive the exact momentum-dependence of the central charges. The computation at the near-BMN order revealed that we have four bosonic and four fermionic massive excitations, together with four bosonic and four fermionic massless excitations. The massive excitations correspond to transverse directions in AdS$_3\times$S$^3$, and they are further divided into two irreducible representations---labelled by Left and Right---of the off-shell symmetry algebra $\mathcal{A}$. Massless modes correspond to excitations on T$^4$. We showed that it is possible to deform the near-BMN representations introducing a dependence on the parameter $h$---related to the string tension---that reproduces the exact non-linear momentum dependence of the central charges. We also obtained the ``all-loop dispersion relation'' for the worldsheet excitations. In the next chapter we will use compatibility with the charges constructed here to bootstrap an all-loop S-matrix. \chapter{Exact S-matrix for AdS$_3\times$S$^3\times$T$^4$}\label{ch:S-matrix-T4} The Hamiltonian of the gauge-fixed model living on the worldsheet can, in principle, be used to compute the S-matrix, responsible for the scattering of the excitations on the string. To start, the quartic Hamiltonian provides the $2\to 2$ scattering elements at tree-level, and higher corrections may be computed. In this chapter we want to take a different route. 
Rather than deriving the \mbox{S-matrix} in perturbation theory, we use a bootstrap procedure to find it at all values of the coupling $h$. This method relies on a crucial assumption, namely that the theory at hand is \emph{quantum integrable}. As anticipated in the introduction of Chapter~\ref{ch:intro}, one can prove \emph{classical integrability} for strings on {AdS$_3\times$S$^3\times$T$^4$}~\cite{Babichenko:2009dk}, meaning that there exists an infinite number of conserved quantities in involution with each other. The assumption is that this property survives at the quantum level, where we find an infinite set of commuting conserved charges labelled by $n_j$ $$ [\gen{J}_{n_1},\gen{J}_{n_2}]=0. $$ Two of these are the familiar charges that measure momentum and energy of the state. The others are called higher charges, and in relativistic integrable field theories their eigenvalues typically depend on higher order polynomials in the momenta. In general, given a state with momentum $p$, each charge acts simply as $\gen{J}_{n_j}\ket{p}=j_{n_j}(p)\ket{p}$. We should appreciate that the situation is very much constrained, as we have at our disposal an infinite set of independent functions $j_{n_j}(p)$. The consequences of this are important when we consider the scattering problem. We focus on the in-states prepared at $t=-\infty$ and on the out-states that remain after the collision at $t=+\infty$. We do not try to describe the details of the scattering when the particles are close to each other, as the interactions might be very complicated.
We define an object $\mathcal{S}$ that we call S-matrix and that relates the initial and final states $$ \mathcal{S}\ket{\mathcal{X}^{c_1}(p_1)\ldots\mathcal{X}^{c_{N_{\text{in}}}}(p_{N_{\text{in}}})}= \mathcal{A}_{\ c'_1\ldots c'_{N_{\text{out}}}}^{c_1\ldots c_{N_{\text{in}}}}(p_1,\ldots p_{N_{\text{in}}};p'_1,\ldots p'_{N_{\text{out}}}) \ket{\mathcal{X}^{c'_1}(p'_1)\ldots\mathcal{X}^{c'_{N_{\text{out}}}}(p'_{N_{\text{out}}})}. $$ For a generic quantum field theory, the first requirement that we might want to impose on this S-matrix is compatibility with symmetries. Additionally, we should also impose the unitarity condition, to be sure that no state is missing in the description. The generic problem is very complicated; in fact, creation and annihilation processes may take place, meaning that interactions might modify the number of particles after the scattering. Following Alexander and Alexei Zamolodchikov~\cite{Zamolodchikov:1978xm}, if in a quantum integrable model we impose conservation for each of the charges $\gen{J}_{n}$ $$ \sum_{k=1}^{N_{\text{in}}} j_n(p_k) =\sum_{k=1}^{N_{\text{out}}} j_n(p'_k), \qquad\forall n, $$ we conclude that the only way to satisfy all these constraints is to conserve under the scattering \begin{itemize} \item the number of particles $N_{\text{in}}=N_{\text{out}}$, \item the set of momenta $\{p_1,\ldots,p_{N_{\text{in}}}\}=\{p'_1,\ldots,p'_{N_{\text{out}}}\}$. \end{itemize} The momenta are allowed to be reshuffled under the scattering, but not to change their values. Already at this stage we find a problem that is much simpler than what is usually considered in a generic quantum field theory. The fact that we have higher charges gives even more powerful consequences than the ones already mentioned, as one can show that any $N$-particle scattering is \emph{factorisable} into a sequence of two-body processes.
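The argument above can be made concrete in a toy setting. Assuming charges with eigenvalues $j_n(p)=p^n$ (a hypothetical choice for illustration; in the actual model the $j_n$ are different functions), equality of the power sums pins down the multiset of momenta: via Newton's identities the power sums determine the polynomial whose roots are the momenta, so the out-momenta can only be a reshuffling of the in-momenta.

```python
# Toy illustration: conservation of the charges j_n(p) = p^n (an assumed form)
# forces the in- and out-momenta to agree as multisets.
from itertools import permutations

def power_sums(momenta, N):
    return [sum(p**n for p in momenta) for n in range(1, N + 1)]

p_in = [0.3, 1.1, 2.4]
N = len(p_in)

# any reshuffling of the same momenta conserves every charge...
for p_out in permutations(p_in):
    assert all(abs(a - b) < 1e-12
               for a, b in zip(power_sums(p_in, N), power_sums(p_out, N)))

# ...while a genuinely different set of momenta with the same total momentum
# (first charge) already fails at the second charge
p_other = [0.8, 0.6, 2.4]
assert abs(sum(p_in) - sum(p_other)) < 1e-12
assert abs(power_sums(p_in, N)[1] - power_sums(p_other, N)[1]) > 1e-6
print("momentum set conservation illustrated")
```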
The idea is that the action of the higher charges allows us to move independently the wave packets corresponding to each of the particles that scatter. Thanks to this property, a three-body process like the one in Figure~\ref{fig:YB-central} becomes equivalent to either~\ref{fig:YB-left} or~\ref{fig:YB-right}. \begin{figure}[t] \centering \hspace{-0.75cm} \subfloat[\label{fig:YB-left}]{ \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] (p1in) at ($(-1.5cm,-2cm)+(0.5cm,0cm)$) {}; \node [box] (p2in) at (-0.5cm,-2cm) {}; \node [box] (p3in) at ($(+1.5cm,-2cm)+(1cm,0cm)$) {}; \node [box] (p1out) at ($(+1.5cm,2cm)+(0.5cm,0cm)$) {}; \node [box] (p2out) at (+0.5cm,2cm) {}; \node [box] (p3out) at ($(-1.5cm,2cm)+(1cm,0cm)$) {}; \draw (p1in) -- (p1out); \draw [dashed] (p2in) -- (p2out); \draw [dotted] (p3in) -- (p3out); \end{tikzpicture} } \raisebox{2cm}{$=$} \hspace{0cm} \subfloat[\label{fig:YB-central}]{ \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] (p1in) at (-1.5cm,-2cm) {$p_1$}; \node [box] (p2in) at (-0.5cm,-2cm) {$p_2$}; \node [box] (p3in) at (+1.5cm,-2cm) {$p_3$}; \node [box] (p1out) at (+1.5cm,2cm) {$p_1$}; \node [box] (p2out) at (+0.5cm,2cm) {$p_2$}; \node [box] (p3out) at (-1.5cm,2cm) {$p_3$}; \draw (p1in) -- (p1out); \draw [dashed] (p2in) -- (p2out); \draw [dotted] (p3in) -- (p3out); \end{tikzpicture} } \hspace{0.5cm} \raisebox{2cm}{$=$} \subfloat[\label{fig:YB-right}]{ \begin{tikzpicture}[% box/.style={outer sep=1pt}, Q node/.style={inner sep=1pt,outer sep=0pt}, arrow/.style={-latex} ]% \node [box] (p1in) at ($(-1.5cm,-2cm)+(0cm,0cm)$) {}; \node [box] (p2in) at ($(-0.5cm,-2cm)+(0.5cm,0cm)$) {}; \node [box] (p3in) at ($(+1.5cm,-2cm)+(-0.5cm,0cm)$) {}; \node [box] (p1out) at ($(+1.5cm,2cm)+(0cm,0cm)$) {}; \node [box] (p2out) at ($(+0.5cm,2cm)+(0.5cm,0cm)$) {}; \node [box] (p3out) at 
($(-1.5cm,2cm)+(-0.5cm,0cm)$) {}; \draw (p1in) -- (p1out); \draw [dashed] (p2in) -- (p2out); \draw [dotted] (p3in) -- (p3out); \end{tikzpicture} } \caption{Parallel lines with the same style correspond to particles having the same momentum. The vertical axis parameterises time, while the horizontal axis parameterises space. The action of the higher conserved charges allows us to independently move the wave-packets of the excitations that are scattering. A three-body process like the one in the central figure then becomes equivalent to either the process depicted on the left or the one on the right. In both cases we get a sequence of two-body scatterings. Consistency imposes that these two factorisations should be equivalent. This requirement results in the Yang-Baxter equation, a constraint that the two-body S-matrix should satisfy.} \label{fig:Yang-Baxter} \end{figure} It is clear that factorisability is possible only if we satisfy the consistency condition stating that the order of factorisation is unimportant. We then find that the S-matrix has to satisfy the \emph{Yang-Baxter equation} $$ \mathcal{S}_{23}\ \mathcal{S}_{13}\ \mathcal{S}_{12}=\mathcal{S}_{12}\ \mathcal{S}_{13}\ \mathcal{S}_{23}\,. $$ When the above equation is satisfied, the consistency of factorisation of any $N$-body scattering is automatically ensured. To derive any scattering process it is then enough to know the two-body S-matrix, and this object is indeed the subject of this chapter. \medskip In Section~\ref{sec:two-part-repr-T4} we explain how to obtain the action of the charges on two-particle states. Demanding compatibility with these charges, in Section~\ref{sec:S-mat-T4} we bootstrap the all-loop two-body S-matrix. The S-matrix is naturally divided into blocks corresponding to the various sectors of scattering---massive, massless, mixed-mass.
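Before turning to the bootstrap, the Yang-Baxter equation above can be illustrated numerically with the simplest nontrivial solution, Yang's rational R-matrix $\mathcal{S}(u)=u\,\mathbf{1}+P$ on $\mathbb{C}^2\otimes\mathbb{C}^2$, with $P$ the permutation operator and $u$ a difference of rapidities. This is a toy example, unrelated to the specific S-matrix of this chapter:

```python
import numpy as np

I2 = np.eye(2)
# Permutation operator on C^2 (x) C^2: P|ab> = |ba>
P = np.eye(4)[[0, 2, 1, 3]]

def S(u):
    """Yang's rational R-matrix, a textbook solution of Yang-Baxter."""
    return u * np.eye(4) + P

def embed(M, sites):
    """Embed a two-site operator M into sites (i,j) of C^2 x C^2 x C^2."""
    if sites == (1, 2):
        return np.kron(M, I2)
    if sites == (2, 3):
        return np.kron(I2, M)
    if sites == (1, 3):
        P23 = np.kron(I2, P)
        return P23 @ np.kron(M, I2) @ P23

u1, u2, u3 = 0.7, -0.3, 1.1   # arbitrary rapidities
S12 = embed(S(u1 - u2), (1, 2))
S13 = embed(S(u1 - u3), (1, 3))
S23 = embed(S(u2 - u3), (2, 3))

# S23 S13 S12 = S12 S13 S23
assert np.allclose(S23 @ S13 @ S12, S12 @ S13 @ S23)
```

Any choice of the three rapidities works, reflecting the fact that consistency of factorisation holds for all kinematics.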
In each sector we write the S-matrix as a tensor product of two smaller S-matrices, compatible with the tensor product representations of the previous chapter. Taking into account the constraints coming from unitarity and LR-symmetry, we show that the S-matrix is fixed completely up to a total of four undetermined scalar functions of the momenta, that we call ``dressing factors''. Further constraints on them are imposed by the crossing equations, which we derive. Compatibility with the assumption of factorisation of scattering is confirmed by the Yang-Baxter equation, which our S-matrix satisfies. Section~\ref{sec:Bethe-Yang} is devoted to the derivation of the Bethe-Yang equations. We first present the procedure and then write explicitly the Bethe-Yang equations that we obtain for {AdS$_3\times$S$^3\times$T$^4$}. \input{Chapters/SmatrixT4.tex} \input{Chapters/BetheYangEquations.tex} \section{Summary} In this chapter we have constructed the action of the charges on multi-particle states, using the results of the previous chapter. In particular, we have used a non-local co-product to write supercharges acting on two-particle representations. This was needed in order to reproduce the exact eigenvalues of the charges appearing in the central extension. Compatibility with the bosonic and fermionic generators allowed us to fix the all-loop S-matrix almost completely. We found a total of four ``dressing factors'' that are not fixed by symmetries, and that are further constrained by unitarity, LR-symmetry and crossing invariance. We have also checked that the S-matrix that we have derived satisfies the Yang-Baxter equation, confirming compatibility with the assumption of factorisation of scattering. We have imposed periodicity of the wave-function, motivated by the fact that we are describing closed strings.
Using the ``nesting procedure'' we have derived the complete set of Bethe-Yang equations, which should encode the spectrum of strings on the background {AdS$_3\times$S$^3\times$T$^4$} up to wrapping corrections. \chapter{The massive sector of AdS$_3\times$S$^3\times$T$^4$}\label{ch:massive-sector-T4} In this chapter we concentrate on the massive sector\footnote{The massive sector of {AdS$_3\times$S$^3\times$T$^4$} has been discussed in detail also in the thesis of A. Sfondrini~\cite{Sfondrini:2014via}, to which we refer for an alternative presentation.} of {AdS$_3\times$S$^3\times$T$^4$}. The massive sector corresponds to strings moving only in the AdS$_3\times$S$^3$ subspace of the background. From the point of view of worldsheet scattering, the results of the previous chapter show that we indeed identify a sector if we consider only massive excitations for the incoming states. In fact, Integrability ensures that if we scatter two massive excitations, then massless particles never appear in the asymptotic out-states. Focusing on smaller sectors of the theory is a good method to better understand the results obtained. Moreover, the study of the massive sector allows us to compare to the case of {AdS$_5\times$S$^5$}, where only massive excitations are present. We start by explaining how we can encode the integrable model found from the point of view of the string into a spin-chain description. As reviewed in Chapter~\ref{ch:intro}, in AdS$_5$/CFT$_4$ integrable spin-chains emerge when considering the spectrum of the dilatation operator in the gauge theory~\cite{Minahan:2002ve}. The idea here is to construct a spin-chain from which we can derive essentially the same all-loop S-matrix and Bethe-Yang equations valid for the massive sector of the string. We will also consider the crossing equations of Section~\ref{sec:crossing-invar-T4} for the dressing factors governing massive scattering, and derive solutions for them. 
The solution of these equations is not unique, and we will motivate our choice by commenting on the analytical structure of these functions. We will also take a proper limit of the Bethe-Yang equations, to obtain the ``finite-gap equations''. We conclude with a discussion and with a collection of the references to the perturbative calculations that successfully tested our findings. \input{Chapters/SpinChain.tex} \input{Chapters/DressingFactors.tex} \input{Chapters/StrongLimitT4.tex} \section{Concluding remarks}\label{sec:perturbative-results} In this chapter we have focused on the massive sector of {AdS$_3\times$S$^3\times$T$^4$}. First we showed that it is possible to construct a dynamic spin-chain with $\alg{psu}(1,1|2)_{\mbox{\tiny L}}\oplus \alg{psu}(1,1|2)_{\mbox{\tiny R}}$ symmetry. We have then derived the S-matrix governing the scattering of the spin-chain excitations, showing that it is related to the worldsheet S-matrix of the previous chapter by a change of basis for the two-particle states. We have also solved the crossing equations for the two ``dressing factors'' of the massive sector\footnote{Solutions to the crossing equations in the massless and mixed-mass sectors were proposed in~\cite{Borsato:2016kbm}.} that are not fixed by compatibility with the symmetries. Commenting on the analytical properties of our factors, we have motivated the choice of the solutions of the crossing equations. We have also made contact with the ``finite-gap equations''---obtained from the Lax formulation of the classical integrable model---by taking a proper limit of the Bethe-Yang equations derived in the previous chapter. \bigskip Let us conclude the chapters devoted to AdS$_3$/CFT$_2$ by referring to the independent perturbative tests of the all-loop results presented here. Tree-level scattering elements involving excitations of the massive sector of the background {AdS$_3\times$S$^3\times$T$^4$} were computed in~\cite{Hoare:2013pma}.
There the more general case in which a $B$-field is present was actually considered. For the pure RR case, the same tree-level results had appeared in~\cite{Sundin:2013ypa}, where also certain one-loop processes in the ``near-flat-space'' limit\footnote{The near-flat-space limit is achieved by having a momentum that scales like $p\sim \lambda^{-1/4}$~\cite{Maldacena:2006rv} and at leading order it can be seen as a further expansion on top of the near-BMN limit. It was used also to eliminate apparent ultra-violet divergences arising in near-BMN worldsheet computations, which were finally resolved in~\cite{Roiban:2014cia}.} and interactions involving massless excitations were computed. Agreement with these perturbative results was shown in~\cite{Borsato:2013qpa}. The ``Hern\'andez-L\'opez order'' of the dressing phases in the massive sector was addressed with semiclassical methods in~\cite{Beccaria:2012kb} and~\cite{Abbott:2013ixa}. Of these two results, only the latter agrees with the findings presented in~\cite{Borsato:2013hoa} and reviewed here. The resolution of this mismatch was explained in~\cite{Abbott:2015pps}, where it was shown that the procedure of~\cite{Beccaria:2012kb} for deriving the phases must be modified by taking into account wrapping corrections due to massless virtual particles, as these are not suppressed. The S-matrix including the proposed all-loop dressing phases was shown to agree with two-loop worldsheet calculations obtained with unitarity techniques in~\cite{Engelund:2013fja}. These are actually able to probe just the log-dependence of the scattering processes. Different unitarity techniques, able to account also for the rational terms, were employed in~\cite{Bianchi:2014rfa}, where it was shown that the full momentum-dependence of the scattering elements in the massive sector matches at one loop.
Certain one-loop processes obtained with standard near-BMN computations again confirmed agreement with the large-tension expansion of the all-loop scattering elements~\cite{Sundin:2014sfa}. This result was later extended to the full theory---including the massless and mixed-mass sectors---in~\cite{Sundin:2016gqe}, again finding agreement at one loop with the proposed S-matrix. In~\cite{Sundin:2014ema,Sundin:2015uva} the two-loop correction to the two-point function was computed. While for massive excitations this agrees with the expansion of the exact dispersion relation, a mismatch is found for the massless ones. At present this is still an unresolved problem, which might be explained by unexpected quantum corrections to the central charges $\gen{C},\overline{\gen{C}}$ or by ambiguities in treating massless modes in perturbation theory. \chapter{Bosonic {(AdS$_5\times$S$^5)_\eta$}}\label{ch:qAdS5Bos} This is the first of two chapters devoted to the investigation of another integrable $\sigma$-model motivated by the AdS/CFT correspondence. It corresponds to a particular deformation of the $\sigma$-model for strings on {AdS$_5\times$S$^5$}. Beisert and Koroteev first constructed an R-matrix invariant under a $q$-deformation of the $\alg{su}(2|2)_{\text{c.e.}}$ superalgebra~\cite{Beisert:2008tw}. After solving the crossing equation for the factor that was not fixed by the symmetries, Hoare, Hollowood and Miramontes~\cite{Hoare:2011wr} proposed an S-matrix that was conjectured to correspond to a quantum integrable model realising the $q$-deformation of the model for the AdS$_5$/CFT$_4$ dual pair. Up to now, no explicit realisation of a $q$-deformation of $\mathcal{N}=4$ Super Yang-Mills has been constructed. A deformation of the string $\sigma$-model on {AdS$_5\times$S$^5$} was proposed by Delduc, Magro and Vicedo in~\cite{Delduc:2013qra}, building on previous results for bosonic cosets~\cite{Delduc:2013fga}.
It preserves the classical integrability of the original model, and replaces the original $\alg{psu}(2,2|4)$ symmetry with the quantum group $U_q(\alg{psu}(2,2|4))$~\cite{Delduc:2014kha}. The parameter that is used to deform the theory was called $\eta$, and the procedure is often referred to as ``$\eta$-deformation''. We adopt this terminology here. The deformation is of the type of the Yang-Baxter $\sigma$-model constructed by Klim\v{c}\'{i}k~\cite{Klimcik:2002zj,Klimcik:2008eq}, that generalises the work of Cherednik~\cite{Cherednik:1981df}. In this chapter we focus on the bosonic sector of the deformed model. For convenience, we start by reviewing the undeformed case, then we study the effects of the deformation and explain how to match with the large-tension limit of the proposed S-matrix invariant under the $q$-deformed algebra. \section{Undeformed model} {AdS$_5\times$S$^5$} is the product of the five-dimensional Anti-de Sitter and the five-dimensional sphere. Let us start with the compact space. We use six coordinates $Y_A, A=1,\ldots,6$ to parameterise the Euclidean space $\mathbb{R}^6$. The five-dimensional sphere is identified by the constraint $Y_AY_B\delta^{AB}=1$. A convenient parameterisation of these coordinates is \begin{equation}\label{eq:sph-coord-S5} \begin{aligned} Y_1+iY_2 = r\cos\xi\,e^{i\phi_1}\,,\quad Y_3+iY_4 = r\sin\xi\,e^{i\phi_2}\,,\quad Y_5+iY_6 = \sqrt{1-r^2}\,e^{i\phi_3}\,, \end{aligned} \end{equation} where $0<r<1$ is the radius of the three-sphere, and for the angles we have the ranges $0<\xi<\pi/2,\ 0<\phi_i<2\pi$. From now on we rename $\phi_3=\phi$, as this will be the angle that we will use to fix light-cone gauge, see Section~\ref{sec:Bos-string-lcg} for a generic treatment and Section~\ref{sec:quartic-action-lcg-etaads} for the case at hand. 
The metric on $\mathbb{R}^6$ ${\rm d}s^2_{\mathbb{R}^6}={\rm d}Y_A{\rm d}Y_B \delta^{AB}$ then induces the metric on the sphere \begin{equation}\label{eq:metrc-S5-sph-coord} \begin{aligned} {\rm d}s^2_{\text{S}^5}&=\left(1-r^2\right){\rm d}\phi^2 +\frac{{\rm d}r^2}{ \left(1-r^2\right)} + r^2\left( {\rm d}\xi^2+\cos ^2\xi \, {\rm d}\phi_1^2+ \sin^2\xi\, {\rm d}\phi_2^2\right) \,. \end{aligned} \end{equation} Let us comment also on another convenient parameterisation, that will be useful in Section~\ref{sec:pert-bos-S-mat-eta-ads5s5} for implementing perturbation theory on the worldsheet. The above constraint may be satisfied also by\footnote{For $y_i$ and also for the coordinates $z_i$ introduced later, we do not distinguish between upper or lower indices $y^i=y_i,\ z^i=z_i$.} \begin{equation}\label{eq:embed-eucl-coord-S5} Y_1+iY_2= \frac{y_1+iy_2}{1+\frac{|y|^2}{4}}\,, \qquad Y_3+iY_4= \frac{y_3+iy_4}{1+\frac{|y|^2}{4}}\,, \qquad Y_5+iY_6= \frac{1-\frac{|y|^2}{4}}{1+\frac{|y|^2}{4}}e^{i\phi}\,, \end{equation} where we have defined $|y|^2\equiv y_iy_i$ and we have $-2<y_i<2$. The metric of the sphere in these coordinates reads as \begin{equation}\label{eq:metrc-S5-eucl-coord} {\rm d}s^2_{\text{S}^5}=\left(\frac{1-\frac{|y|^2}{4}}{1+\frac{|y|^2}{4}}\right)^2 {\rm d}\phi^2 +\frac{{\rm d}y_i{\rm d}y_i}{\left(1+\frac{|y|^2}{4}\right)^2}\,. \end{equation} The discussion for five-dimensional Anti-de Sitter follows a similar route. We embed it into $\mathbb{R}^{2,4}$ spanned by $Z_A, A=0,\ldots,5$, and we identify it with the constraint $Z^AZ^B\eta_{AB}=-1$, where $\eta_{AB}=\text{diag}(-1,1,1,1,1,-1)$. 
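Before moving on, the induced metric~\eqref{eq:metrc-S5-sph-coord} can be recovered from the embedding~\eqref{eq:sph-coord-S5} by computing $G=J^{T}J$, with $J$ the Jacobian of the map $(\phi,r,\xi,\phi_1,\phi_2)\mapsto Y_A$. A short symbolic sketch (variable names are ours):

```python
import sympy as sp

phi, r, xi, phi1, phi2 = sp.symbols('phi r xi phi1 phi2', positive=True)
q = [phi, r, xi, phi1, phi2]

# Real components of the embedding Y_A of the five-sphere into R^6
Y = sp.Matrix([
    r * sp.cos(xi) * sp.cos(phi1), r * sp.cos(xi) * sp.sin(phi1),
    r * sp.sin(xi) * sp.cos(phi2), r * sp.sin(xi) * sp.sin(phi2),
    sp.sqrt(1 - r**2) * sp.cos(phi), sp.sqrt(1 - r**2) * sp.sin(phi),
])

# Constraint Y_A Y_A = 1 and induced metric ds^2 = dY . dY
assert sp.simplify(Y.dot(Y)) == 1
J = Y.jacobian(q)
G = sp.simplify(J.T * J)

expected = sp.diag(1 - r**2, 1/(1 - r**2), r**2,
                   r**2 * sp.cos(xi)**2, r**2 * sp.sin(xi)**2)
assert sp.simplify(G - expected) == sp.zeros(5, 5)
```

The same computation, with the obvious sign changes in the embedding constraint, reproduces the Anti-de Sitter metric below.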
A parameterisation---reminiscent of the one for the sphere---for which the AdS constraint is satisfied is \begin{equation}\label{eq:sph-coord-AdS5} Z_1+iZ_2 = \rho\cos\zeta\,e^{i\psi_1}\,,\quad Z_3+iZ_4 = \rho\sin\zeta\,e^{i\psi_2}\,,\quad Z_0+iZ_5 = \sqrt{1+\rho^2}\,e^{it}\,, \end{equation} where $0<\rho<\infty$, and for the angles we have the ranges $0<\zeta<\pi/2,\ 0<\psi_i<2\pi$. We take the universal cover of AdS$_5$, where $t$ is the non-compact time coordinate. Using these local coordinates the metric for Anti-de Sitter is \begin{equation} \begin{aligned}\label{eq:metrc-AdS5-sph-coord} {\rm d}s^2_{\text{AdS}_5}&=-\left(1+\rho^2\right){\rm d}t^2 +\frac{{\rm d}\rho^2}{ \left(1+\rho^2\right)} + \rho^2\left( {\rm d}\zeta^2+\cos ^2\zeta \, {\rm d}\psi_1^2+ \sin^2\zeta\, {\rm d}\psi_2^2\right) \,. \end{aligned} \end{equation} Also in this case we mention an alternative parameterisation that will be useful for perturbation theory \begin{equation}\label{eq:embed-eucl-coord-AdS5} Z_1+iZ_2= \frac{z_1+iz_2}{1-\frac{|z|^2}{4}}\,, \qquad Z_3+iZ_4= \frac{z_3+iz_4}{1-\frac{|z|^2}{4}}\,, \qquad Z_0+iZ_5= \frac{1+\frac{|z|^2}{4}}{1-\frac{|z|^2}{4}}e^{it}\,, \end{equation} where $|z|^2\equiv z_iz_i$ and the space is covered by $-2<z_i<2$. The metric in these coordinates is \begin{equation}\label{eq:metrc-AdS5-eucl-coord} {\rm d}s^2_{\text{AdS}_5}=-\left(\frac{1+\frac{|z|^2}{4}}{1-\frac{|z|^2}{4}}\right)^2 {\rm d}t^2 +\frac{{\rm d}z_i{\rm d}z_i}{\left(1-\frac{|z|^2}{4}\right)^2}\,. \end{equation} These two spaces are also realised as the following cosets \begin{equation} \text{AdS}_5: \quad \frac{\text{SU}(2,2)}{\text{SO}(4,1)} \,, \qquad\qquad \text{S}^5: \quad \frac{\text{SU}(4)}{\text{SO}(5)}\,. \end{equation} Then the action of the string may be written in the form of a non-linear $\sigma$-model, where the base space is the worldsheet and the target space is {AdS$_5\times$S$^5$}.
We do that by considering coset elements $\alg{g_a}$ and $\alg{g_s}$ that depend on the local coordinates parameterising Anti-de Sitter and the sphere. It is natural to represent these elements in terms of $4\times 4$ matrices that satisfy a reality condition compatible with $SU(2,2)$ and $SU(4)$. We refer to Appendix~\ref{app:bos-eta-def} for possible parameterisations. The two group elements may be considered at the same time by defining the $8\times 8$ matrix \begin{equation}\label{eq:8x8-bos-el} {\alg{g_b}}= \left( \begin{array}{cc} \alg{g_a} & 0 \\ 0 & \alg{g_s} \end{array} \right)\,. \end{equation} In Section~\ref{sec:algebra-basis} we will realise the $\alg{su}(2,2|4)\supset \alg{su}(2,2)\oplus \alg{su}(4)$ algebra in terms of $8\times 8$ matrices, making the above definition naturally motivated. After constructing the current $A\equiv -{\alg{g_b}}^{-1}{\rm d}{\alg{g_b}}$ that is an element of the algebra $\alg{su}(2,2)\oplus\alg{su}(4)$, we have to decompose it into $A=A^++A^-$, where $A^+$ belongs to the denominator of the coset, while $A^-$ to its complement\footnote{The subspaces with labels $+$ and $-$ appearing in this chapter correspond to the subspaces of grading $0$ and $2$ respectively of Section~\ref{sec:algebra-basis}}. In particular \begin{equation} \begin{aligned} &A^{\alg{a}+}\in \alg{so}(4,1), \qquad &&A^{\alg{a}-}\in \alg{su}(2,2)\setminus \alg{so}(4,1),\\ &A^{\alg{s}+}\in \alg{so}(5), \qquad &&A^{\alg{s}-}\in \alg{su}(4)\setminus \alg{so}(5)\,. \end{aligned} \end{equation} Then the action for the bosonic string may be written as \begin{equation} S^\alg{b}=-\frac{g}{2}\int {\rm d}\tau{\rm d}\sigma\, (\gamma^{\alpha\beta}-\epsilon^{\alpha\beta}) \Str\left(A^{-}_{\alpha}A^{-}_{\beta}\right)\,, \end{equation} where we need to define a graded trace that we call supertrace\footnote{The minus sign in front of the trace for the sphere contribution is motivated by the fact that we want the correct signature for this space. 
It becomes natural when we think of the full $\alg{psu}(2,2|4)$ algebra, see Section~\ref{sec:algebra-basis}.} $\Str\equiv \tr_{\alg{a}}-\tr_{\alg{s}}$. It is easy to check that the contribution with $\epsilon^{\alpha\beta}$ vanishes. Therefore, after choosing an explicit coset representative and rewriting the action in the Polyakov form~\eqref{eq:bos-str-action} of Section~\ref{sec:Bos-string-lcg}, we find that the $B$-field is zero \begin{equation}\label{eq:bos-act-undef-adsfive} \begin{aligned} S^{\alg{b}}&= -\frac{g}{2}\int \, {\rm d}\sigma {\rm d} \tau\ \gamma^{\alpha\beta} \partial_\alpha X^M \partial_\beta X^N G_{MN}\,. \end{aligned} \end{equation} If we use coordinates $X^0,\ldots,X^4$ to parameterise AdS$_5$ and $X^5,\ldots, X^9$ for S$^5$, the metric $G_{MN}$ is in block form, with the upper-left block containing the AdS$_5$ metric and the lower-right block the S$^5$ metric. If we use the coset representatives of Eq.~\eqref{basiccoset} we find the metrics in the form~\eqref{eq:metrc-S5-sph-coord} and~\eqref{eq:metrc-AdS5-sph-coord}, while~\eqref{eq:eucl-bos-coset-el} yields the metrics~\eqref{eq:metrc-S5-eucl-coord} and~\eqref{eq:metrc-AdS5-eucl-coord}. \section{Deformed model}\label{sec:def-bos-model} The deformed model is obtained by inserting a linear operator acting on one of the two currents in the action of the non-linear $\sigma$-model~\cite{Delduc:2013fga,Delduc:2013qra}. For the deformation of the full supercoset $\sigma$-model we refer to Section~\ref{sec:def-lagr-supercos}. 
Here it will be enough to notice that when restricted to the bosonic model, the deformed action may be written as \begin{equation} \tilde{S}^{\alg{b}}=-\frac{\tilde{g}}{2}\int{\rm d}\sigma {\rm d} \tau \ \left(\gamma^{\alpha\beta}-\epsilon^{\alpha\beta}\right)\, \Str\left(A^{-}_{\alpha}\cdot\mathcal{O}^{-1}_{\alg{b}}(A^{-}_{\beta})\right) \, , \end{equation} where $\mathcal{O}^{-1}_{\alg{b}}$ is the inverse of the linear operator \begin{equation}\label{eq:def-op-def-bos-mod} \mathcal{O}_{\alg{b}}=\gen{1}-\frac{2\eta}{1-\eta^2} R_{\alg{g_b}} \circ P^{(-)}\,, \end{equation} mapping the algebra $\alg{su}(2,2)\oplus\alg{su}(4)$ to itself. The deformation parameter is $\eta\in]-1,1[$, where the range is chosen to have invertibility for $\mathcal{O}_{\alg{b}}$. Setting $\eta=0$ we recover the undeformed model. The definition of $\mathcal{O}_{\alg{b}}$ depends on the composition of the operators $P^{(-)}$ and $R_{\alg{g_b}}$. The former is the projector onto the component ``$-$'' of the algebra, while the latter is defined as \begin{equation}\label{eq:defin-Rg-bos} R_{\alg{g_b}} = \text{Adj}_{{\alg{g_b}}^{-1}} \circ R \circ \text{Adj}_{{\alg{g_b}}}\,, \end{equation} meaning that its action on a matrix $M$ is $R_{\alg{g_b}}(M) = {\alg{g_b}}^{-1}R({\alg{g_b}} M{\alg{g_b}}^{-1}){\alg{g_b}}$. The linear operator $R$ satisfies the \emph{modified classical Yang-Baxter equation} \begin{equation}\label{eq:mod-cl-YBeq-R} [R(M),R(N)]-R([R(M),N]+[M,R(N)])=[M,N]\,. \end{equation} According to the definition given in~\cite{Delduc:2013fga,Delduc:2013qra}, it multiplies by $-i$ and $+i$ generators associated with positive and negative roots respectively, and by $0$ Cartan generators. Strictly speaking it is defined on the complexified algebra, and what we will use is its restriction to $\alg{su}(2,2)\oplus\alg{su}(4)$. 
On elements of the algebra written as $8\times 8$ matrices, we may write its action as \begin{equation} R(M)_{ij} = -i\, \epsilon_{ij} M_{ij}\,,\quad \epsilon_{ij} = \left\{\begin{array}{ccc} 1& \rm if & i<j \\ 0&\rm if& i=j \\ -1 &\rm if& i>j \end{array} \right.\,. \end{equation} The action for the deformed model is multiplied by $\tilde{g}$ that plays the role of the effective string tension, related to the one of the undeformed theory $g$ by\footnote{Our $\eta$-dependent prefactor differs from the one in \cite{Delduc:2013qra}. Our choice is necessary to match the perturbative worldsheet scattering matrix with the $q$-deformed one.} \begin{equation} \label{eq:def-g-tilde} \tilde{g}=\frac{1+\eta^2}{1-\eta^2}\, g\,. \end{equation} To obtain better expressions we also introduce a new deformation parameter related to $\eta$ as \begin{equation} \varkappa=\frac{2\eta}{1-\eta^2}\,, \qquad\qquad 0<\varkappa<\infty\,, \end{equation} which as we will see is a convenient choice. In order to compute the action for the deformed theory one has to first study the operator $\mathcal{O}_\alg{b}$ and invert it. From its definition it is clear that $\mathcal{O}_\alg{b}$ acts as the identity operator on the $10+10$ generators of $\alg{so}(4,1)\oplus\alg{so}(5)\subset\alg{su}(2,2)\oplus\alg{su}(4)$. When acting on the $5+5$ generators of the coset $\alg{su}(2,2)\oplus\alg{su}(4)\setminus\alg{so}(4,1)\oplus\alg{so}(5)$ we see that we never mix generators of Anti-de Sitter and of the sphere. In Appendix~\ref{app:bosonic-op-and-inverse} we provide the explicit results for the inverse operator $\mathcal{O}^{-1}_{\alg{b}}$. 
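Since both sides of the modified classical Yang-Baxter equation~\eqref{eq:mod-cl-YBeq-R} are bilinear in $M$ and $N$, the matrix realisation of $R$ given above can be tested directly on generic $8\times 8$ matrices; a quick numerical sketch:

```python
import numpy as np

def R(M):
    # -i on strictly upper-triangular entries (positive roots),
    # +i on strictly lower-triangular ones (negative roots), 0 on the diagonal
    return -1j * np.triu(M, k=1) + 1j * np.tril(M, k=-1)

def comm(A, B):
    return A @ B - B @ A

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
N = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))

# Modified classical Yang-Baxter equation:
# [R(M),R(N)] - R([R(M),N] + [M,R(N)]) = [M,N]
lhs = comm(R(M), R(N)) - R(comm(R(M), N) + comm(M, R(N)))
assert np.allclose(lhs, comm(M, N))
```

Bilinearity means that checking the identity on arbitrary matrices is equivalent to checking it on all pairs of basis elements.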
When we put the action of the deformed model in the form presented in~\eqref{eq:bos-str-action} \begin{equation}\label{eq:bos-lagr-eta-def-Pol} S^{\alg{b}}=-\frac{\tilde{g}}{2} \int \, {\rm d}\sigma {\rm d} \tau\ \left( \, \gamma^{\alpha\beta} \partial_\alpha X^M \partial_\beta X^N \widetilde{G}_{MN} -\epsilon^{\alpha\beta} \partial_\alpha X^M \partial_\beta X^N \widetilde{B}_{MN} \right)\,, \end{equation} we find that the metric is deformed and that a $B$-field is generated. The result is particularly simple when expressed in terms of the coordinates~\eqref{eq:metrc-S5-sph-coord} and~\eqref{eq:metrc-AdS5-sph-coord}, related to the coset representative~\eqref{basiccoset}. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{images/undef-hemisphere.pdf} \raisebox{-0.4cm}{ \includegraphics[width=0.4\textwidth]{images/def-hemisphere.pdf} } \caption{The left figure represents the hemisphere parameterised by the coordinates $r\in[0,1]$ and $\phi\in[0,2\pi]$. On the right we draw the squashed hemisphere that we find when we turn on the deformation. The figure was generated with deformation parameter $\varkappa=1$.} \label{fig:eta-def-sphere} \end{figure} \begin{figure}[t] \centering \raisebox{2.5cm}{ \includegraphics[width=0.2\textwidth]{images/undef-hemiAdS.pdf} } \includegraphics[width=0.4\textwidth]{images/def-hemiAdS.pdf} \caption{In the left figure we draw the space parameterised by the coordinates $\rho\in[0,\infty[$---here we stop the range of $\rho$ at the value $1$---and $t\in[0,2\pi]$. On the right we draw its deformation, generated with $\varkappa=1$. 
The right figure has actually been rescaled to fit the page: the circles at $\rho=0$ have the same radius in the two cases.} \label{fig:eta-def-AdS} \end{figure} In particular, we find that the metrics for the deformed AdS$_5$ and the deformed S$^5$ are~\cite{Arutyunov:2013ega} \begin{equation}\label{eq:metrc-etaAdS5S5-sph-coord} \begin{aligned} {\rm d}s^2_{(\text{AdS}_5)_{\eta}}=&-\frac{1+\rho^2}{1-\varkappa^2\rho^2}{\rm d}t^2 +\frac{{\rm d}\rho^2}{ \left(1+\rho^2\right)(1-\varkappa^2\rho^2)}\\ & + \frac{\rho^2}{1+\varkappa^2\rho^4\sin^2\zeta}\left( {\rm d}\zeta^2+\cos ^2\zeta \, {\rm d}\psi_1^2\right) +\rho^2 \sin^2\zeta\, {\rm d}\psi_2^2\,, \\ \\ {\rm d}s^2_{(\text{S}^5)_{\eta}}=&\frac{1-r^2}{1+\varkappa^2 r^2}{\rm d}\phi^2 +\frac{{\rm d}r^2}{ \left(1-r^2\right)(1+\varkappa^2 r^2)}\\ & + \frac{r^2}{1+\varkappa^2r^4\sin^2\xi}\left( {\rm d}\xi^2+\cos ^2\xi \, {\rm d}\phi_1^2\right) +r^2 \sin^2\xi\, {\rm d}\phi_2^2\,. \end{aligned} \end{equation} Figures~\ref{fig:eta-def-sphere} and~\ref{fig:eta-def-AdS} represent the effect of the deformation on the sphere and on AdS. We find the $B$-field $\widetilde{B}=\frac{1}{2} \widetilde{B}_{MN}\ {\rm d}X^M\wedge {\rm d}X^N$~\cite{Arutyunov:2013ega} \begin{equation}\label{eq:B-field-etaAdS5S5-sph-coord} \begin{aligned} \widetilde{B}_{(\text{AdS}_5)_{\eta}} &= +\frac{\varkappa}{2} \left( \frac{\rho^4 \sin (2\zeta)}{1+\varkappa^2 \rho^4\sin^2 \zeta} {\rm d}\psi_1\wedge{\rm d}\zeta + \frac{2 \rho}{1-\varkappa^2 \rho^2}{\rm d}t\wedge{\rm d}\rho\right), \\ \widetilde{B}_{(\text{S}^5)_{\eta}} &= -\frac{\varkappa}{2} \left( \frac{r^4 \sin (2\xi)}{1+\varkappa^2 r^4\sin^2 \xi}{\rm d}\phi_1\wedge{\rm d}\xi + \frac{2r}{1+\varkappa^2 r^2}{\rm d}\phi\wedge{\rm d}r\right). \end{aligned} \end{equation} It is easy to see that the contributions of the components $\widetilde{B}_{t\rho}$ and $\widetilde{B}_{\phi r}$ to the Lagrangian are total derivatives, meaning that they can be ignored.
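To see why these terms are total derivatives, note that any contribution of the form $\epsilon^{\alpha\beta}\partial_\alpha t\,\partial_\beta\rho\,f(\rho)$ may be rewritten using a primitive $F(\rho)$ of $f(\rho)$,
\begin{equation*}
\epsilon^{\alpha\beta}\,\partial_\alpha t\,\partial_\beta \rho\; f(\rho)
= \epsilon^{\alpha\beta}\,\partial_\alpha t\,\partial_\beta F(\rho)
= \partial_\beta\!\left(\epsilon^{\alpha\beta}\,F(\rho)\,\partial_\alpha t\right),
\end{equation*}
since the term with $\partial_\beta\partial_\alpha t$ drops out by antisymmetry of $\epsilon^{\alpha\beta}$; the same argument applies to the $\phi$--$r$ component.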
We refer to Appendix~\ref{app:bos-lagr-eta-def} for the Lagrangian written in the coordinates $(t,z_i)$ and $(\phi,y_i)$. Let us note that in the undeformed case the action is invariant with respect to two copies of ${\rm SO}(4)$, one of them corresponding to rotations of $z_i$ and the other to rotations of $y_i$. In the above action this symmetry is broken down to four copies of ${\rm SO}(2)\sim {\rm U}(1)$, corresponding to shifts of the angles $\psi_i$ and $\phi_i$. Thus, together with the two ${\rm U}(1)$ isometries acting on $t$ and $\phi$, the deformed action is invariant under ${\rm U}(1)^3\times {\rm U}(1)^{3}$. We also find that the range of $\rho$ is reduced under the deformation to $0\le \rho< 1/\varkappa$, in order to preserve the time-like nature of $t$. The (string frame) metric of the deformed AdS is singular at $\rho=1/\varkappa$. This is not just a coordinate singularity, as the Ricci scalar has a pole there. Without knowing the dilaton it is unclear whether the Einstein-frame metric exhibits the same singularity. \section{Perturbative bosonic worldsheet S-matrix}\label{sec:pert-bos-S-mat-eta-ads5s5} In this Section we want to compute the perturbative S-matrix governing worldsheet scattering between two bosonic excitations. We will then compare it with the large-tension limit of the $q$-deformed S-matrix proposed in~\cite{Hoare:2011wr} and find agreement. \subsection{Quartic action in light-cone gauge}\label{sec:quartic-action-lcg-etaads} Since we are interested in the perturbative expansion in powers of fields around $\rho=0,\ r=0$, we first expand the full bosonic Lagrangian up to quartic order in $\rho$, $r$ and their derivatives. To simplify the result we also make the shifts of $\rho$ and $r$ as described in Appendix \ref{app:bos-lagr-eta-def}, {\it c.f.}~\eqref{shift}.
We then change the spherical coordinates to the Euclidean coordinates $(z_i,y_i)_{i=1,\ldots,4}$ introduced in~\eqref{eq:embed-eucl-coord-S5} and~\eqref{eq:embed-eucl-coord-AdS5}---as they are the preferred ones for perturbation theory---and we further expand the resulting action up to the quartic order in $z$ and $y$ fields. In this way we find the Lagrangian up to quartic order $L=L^{G,\alg{a}}+L^{B,\alg{a}}+L^{G,\alg{s}}+L^{B,\alg{s}}$, where we have separated the contributions of AdS$_5$ from the ones of $S^5$, and the contributions of the metric $G_{MN}$ from the ones of the $B$-field \begin{equation}\label{Lquart} \begin{aligned} L^{G,\alg{a}} &= -\frac{\tilde{g}}{2} \, \gamma^{\alpha \beta} \Bigg[ -\left( 1 + (1+\varkappa^2) |z|^2 +\frac{1}{2}(1+\varkappa^2)^2|z|^4\right) \partial_{\alpha}t \partial_{\beta}t \\ &\qquad\qquad + \left(1+(1-\varkappa^2)\frac{|z|^2}{2}\right) \partial_{\alpha}z_i\partial_{\beta}z_i \Bigg] \, , \\ L^{B,\alg{a}} &= +2\tilde{g} \, \varkappa (z_3^2+z_4^2) \epsilon^{\alpha\beta} \partial_{\alpha}z_1 \partial_{\beta}z_2 \, , \\ L^{G,\alg{s}}&= -\frac{\tilde{g}}{2}\, \gamma^{\alpha \beta} \Bigg[ \left( 1 - (1+\varkappa^2) |y|^2 +\frac{1}{2}(1+\varkappa^2)^2|y|^4\right) \partial_{\alpha}\phi \partial_{\beta}\phi \\ &\qquad\qquad+ \left(1-\frac{1}{2}(1-\varkappa^2)|y|^2 \right) \partial_{\alpha}y_i\partial_{\beta}y_i \Bigg] \, ,\\ L^{B,\alg{s}}&= - 2\tilde{g}\, \varkappa (y_3^2+y_4^2) \epsilon^{\alpha\beta} \partial_{\alpha}y_1 \partial_{\beta}y_2\, . \end{aligned} \end{equation} Here we use the notation $|z|\equiv(z_iz_i)^{1/2},\ |y|\equiv(y_iy_i)^{1/2}$. The ``metric part'' of this Lagrangian has a manifest ${\rm SO}(4)\times {\rm SO(4)}$ symmetry at quartic order, which is however broken by the Wess-Zumino terms. \medskip We first need to impose the uniform light-cone gauge, as explained more generally in Section~\ref{sec:Bos-string-lcg}. We follow exactly the same notation and conventions. 
After that we take the decompactification limit and perform the large-tension expansion presented in Section~\ref{sec:decomp-limit}. The gauge-fixed action is organised in the form \begin{equation}\label{eq:large-tens-exp-eta-def} S= \int {\rm d}\tau {\rm d} \sigma \, \left( p_\mu \dot{x}^\mu - \mathcal{H}_2 - \frac{1}{g} \mathcal{H}_4 - \ldots \right), \end{equation} where we find the quadratic Hamiltonian \begin{equation}\label{eq:quadr-hamilt-eta-def} \mathcal{H}_2 = \frac{1}{2} p_\mu^2 + \frac{1}{2} (1+\varkappa^2) (X^\mu)^2 + \frac{1}{2} (1+\varkappa^2) (X'^\mu)^2. \end{equation} The quartic Hamiltonian in a general $a$-gauge is \begin{equation} \begin{aligned} \mathcal{H}_4 &= \frac{1}{4} \Bigg( (2 \varkappa^2 |z|^2 -(1+\varkappa^2) |y|^2 ) |p_z|^2 - (2 \varkappa^2 |y|^2 -(1+\varkappa^2) |z|^2 ) |p_y|^2 \\ &+\left(1+\varkappa ^2\right) \left(\left(2 |z|^2-\left(1+\varkappa ^2\right) |y|^2\right)|z'|^2 + \left(\left(1+\varkappa ^2\right) |z|^2-2 |y|^2\right)|y'|^2\right)\Bigg) \\ &- 2 \varkappa \left(1+\varkappa ^2\right)^{1\ov2} \left(\left(z_3^2+z_4^2\right) \left(p_{z_1} z_2'-p_{z_2} z_1'\right) - \left(y_3^2+y_4^2\right) \left(p_{y_1} y_2'-p_{y_2} y_1'\right) \right) \\ &+\frac{(2a-1)}{8} \Bigg( (|p_y|^2+|p_z|^2)^2 -(1+\varkappa^2)^2 (|y|^2+|z|^2)^2 \\ &+2 (1+\varkappa^2)(|p_y|^2+|p_z|^2)(|y'|^2+|z'|^2)+(1+\varkappa^2)^2 (|y'|^2+|z'|^2)^2 -4 (1+\varkappa^2) (x_-')^2\Bigg). \end{aligned} \end{equation} Here we use the notation $|p_z|\equiv (p_{z_i}p_{z_i})^{1/2}, \ |p_y|\equiv ( p_{y_i} p_{y_i})^{1/2}$. To simplify the quartic piece, we can remove the terms of the form $|p_z|^2|y|^2$ and $|p_y|^2|z|^2$ by performing a canonical transformation generated by \begin{equation} V= \frac{(1+\varkappa^2)}{4} \int {\rm d}\sigma \Big( p_{y_i} y_{i} |z|^2 -p_{z_i} z_{i} |y|^2 \Big) . 
\end{equation} After this is done, the quartic Hamiltonian reads as \begin{equation} \begin{aligned} \mathcal{H}_4 &= \frac{(1+\varkappa^2)}{2} ( |z|^2 |z'|^2- |y|^2 |y'|^2 ) + \frac{(1+\varkappa^2)^{2}}{2} (|z|^2 |y'|^2-|y|^2 |z'|^2) \\ &+\frac{\varkappa^2}{2} ( |z|^2 |p_z|^2 - |y|^2 |p_y|^2 ) \\ &- 2\varkappa(1+\varkappa^2)^{1\ov2} \left[\left(z_3^2+z_4^2\right) \left(p_{z_1} z_2'-p_{z_2} z_1'\right) - \left(y_3^2+y_4^2\right) \left(p_{y_1} y_2'-p_{y_2} y_1'\right) \right] \\ &+\frac{(2a-1)}{8}\Bigg( (|p_y|^2+|p_z|^2)^2 - (1+\varkappa^2)^2 (|y|^2+|z|^2)^2 \\ &+2 (1+\varkappa^2) (|p_y|^2+|p_z|^2)(|y'|^2+|z'|^2)+ (1+\varkappa^2)^2 (|y'|^2+|z'|^2)^2 -4 (1+\varkappa^2) (x_-')^2\Bigg). \end{aligned} \end{equation} We recall that in the undeformed case the full theory---including both the bosons and the fermions---is invariant with respect to the two copies of the centrally extended superalgebra $\alg{psu}(2|2)$, each containing two $\alg{su}(2)$ subalgebras. To render invariance under $\alg{su}(2)$ subalgebras manifest, one can introduce the two-index notation for the worldsheet fields. It is convenient to adopt the same notation also for the deformed case\footnote{This parameterisation is different from the one used in \cite{Arutyunov:2009ga}, as we exchange the definitions for $Y^{1\dot{1}}$ and $Y^{2\dot{2}}$ and the definitions for $Y^{1\dot{2}}$ and $Y^{2\dot{1}}$. 
This does not matter in the undeformed case but is needed here in order to correctly match the perturbative S-matrix with the $q$-deformed one computed from symmetries.} \begin{equation} \begin{aligned} &Z^{3\dot{4}} =\tfrac{1}{2} (z_3-i z_4), \qquad &Z^{3\dot{3}} =\tfrac{1}{2} (z_1-i z_2), \\ & Z^{4\dot{3}}=-\tfrac{1}{2} (z_3+i z_4), \qquad &Z^{4\dot{4}}=\tfrac{1}{2} (z_1+i z_2), \end{aligned} \end{equation} \begin{equation} \begin{aligned} &Y^{1\dot{2}}=-\tfrac{1}{2} (y_3+i y_4), \qquad &Y^{1\dot{1}}=\tfrac{1}{2} (y_1+i y_2), \\ &Y^{2\dot{1}}=\tfrac{1}{2} (y_3-i y_4), \qquad &Y^{2\dot{2}}=\tfrac{1}{2} (y_1-i y_2)\, . \end{aligned} \end{equation} In terms of two-index fields the quartic Hamiltonian becomes $\mathcal{H}_4 = \mathcal{H}^G_4 + \mathcal{H}^{B}_4$, where $\mathcal{H}^G_4$ is the contribution coming from the spacetime metric and $\mathcal{H}^{B}_4 $ from the $B$-field {\small \begin{eqnarray}\label{eq:quartic-hamilt-eta-def} \mathcal{H}^G_4 &=&2(1+\varkappa^2) \left( Z_{\alpha\dot{\alpha}} Z^{\alpha\dot{\alpha}} Z'_{\beta\dot{\beta}} Z'^{\beta\dot{\beta}} -Y_{a\dot{a}}Y^{a\dot{a}} Y'_{b\dot{b}}Y'^{b\dot{b}} \right) \nonumber \\ &+& 2(1+\varkappa^2)^{2} \left( Z_{\alpha\dot{\alpha}} Z^{\alpha\dot{\alpha}} Y'_{b\dot{b}}Y'^{b\dot{b}} - Y_{a\dot{a}}Y^{a\dot{a}} Z'_{\beta\dot{\beta}} Z'^{\beta\dot{\beta}} \right) \nonumber \\ & +&\frac{\varkappa^2}{2} \left( Z_{\alpha\dot{\alpha}} Z^{\alpha\dot{\alpha}} P_{\beta\dot{\beta}} P^{\beta\dot{\beta}} - Y_{a\dot{a}}Y^{a\dot{a}} P_{b\dot{b}}P^{b\dot{b}} \right) \\ &+&\frac{(2a-1)}{8} \Bigg( \frac{1}{4}(P_{a\dot{a}}P^{a\dot{a}}+P_{\alpha\dot{\alpha}}P^{\alpha\dot{\alpha}})^2 -4 (1+\varkappa^2)^2 (Y_{a\dot{a}}Y^{a\dot{a}}+Z_{\alpha\dot{\alpha}}Z^{\alpha\dot{\alpha}})^2 \nonumber \\ &+&2 (1+\varkappa^2) (P_{a\dot{a}}P^{a\dot{a}}+P_{\alpha\dot{\alpha}}P^{\alpha\dot{\alpha}})(Y'_{a\dot{a}}Y'^{a\dot{a}}+Z'_{\alpha\dot{\alpha}}Z'^{\alpha\dot{\alpha}})+4 (1+\varkappa^2)^2 
(Y'_{a\dot{a}}Y'^{a\dot{a}}+Z'_{\alpha\dot{\alpha}}Z'^{\alpha\dot{\alpha}})^2 \nonumber \\ &-&4 (1+\varkappa^2) (P_{a\dot{a}}Y'^{a\dot{a}} +P_{\alpha\dot{\alpha}}Z'^{\alpha\dot{\alpha}})^2\Bigg), \nonumber \\ \mathcal{H}^{B}_4 &=& 8 i\varkappa(1+\varkappa^2)^{1\ov2} \left( Z^{3\dot{4}} Z^{4\dot{3}} ( P_{3\dot{3}} Z'^{3\dot{3}} -P_{4\dot{4}} Z'^{4\dot{4}} ) + Y^{1\dot{2}} Y^{2\dot{1}} ( P_{1\dot{1}} Y'^{1\dot{1}} -P_{2\dot{2}} Y'^{2\dot{2}} ) \right)\, . \nonumber \end{eqnarray} } Here indices are raised and lowered with the $\epsilon$-tensors, where $\epsilon^{12}=-\epsilon_{12}=\epsilon^{34}=-\epsilon_{34}=+1$, and similarly for dotted indices. Note that we have used the Virasoro constraint $C_1=0$ given in~\eqref{eq:Vira-constr-bos} in order to express $x'_-$ in terms of the two-index fields. The gauge-dependent terms multiplying $(2a-1)$ are invariant under ${\rm SO}(8)$, as in the undeformed case. \subsection{Tree-level bosonic S-matrix} The computation of the tree-level bosonic S-matrix follows the route reviewed in Section~\ref{sec:large-tens-exp}, see~\cite{Arutyunov:2009ga} for more details. 
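As a consistency check of these $\epsilon$-conventions, one can verify that the two-index contractions reduce to the invariants used before, $Z_{\alpha\dot{\alpha}}Z^{\alpha\dot{\alpha}}=\tfrac{1}{2}|z|^2$ and $Y_{a\dot{a}}Y^{a\dot{a}}=\tfrac{1}{2}|y|^2$, so that e.g. the first line of $\mathcal{H}^G_4$ reproduces $\tfrac{1}{2}(1+\varkappa^2)(|z|^2|z'|^2-|y|^2|y'|^2)$ from the earlier form of the quartic Hamiltonian. A minimal numeric sketch (random sample values, our own function names):

```python
import numpy as np

rng = np.random.default_rng(1)
z, y = rng.normal(size=(2, 4))

# lower-index epsilon: eps_{12} = eps_{34} = -1, eps_{21} = eps_{43} = +1
eps = np.array([[0, -1], [1, 0]])

# two-index fields; rows carry the undotted, columns the dotted index (3,4)/(1,2)
Z = 0.5 * np.array([[z[0] - 1j*z[1],    z[2] - 1j*z[3]],
                    [-(z[2] + 1j*z[3]), z[0] + 1j*z[1]]])
Y = 0.5 * np.array([[y[0] + 1j*y[1],  -(y[2] + 1j*y[3])],
                    [y[2] - 1j*y[3],    y[0] - 1j*y[1]]])

def contract(F):
    """F_{MN} F^{MN}, both indices lowered with the eps-tensors."""
    F_low = eps @ F @ eps.T
    return np.sum(F_low * F)

assert np.isclose(contract(Z), z @ z / 2)   # Z_{aa'} Z^{aa'} = |z|^2 / 2
assert np.isclose(contract(Y), y @ y / 2)   # Y_{aa'} Y^{aa'} = |y|^2 / 2
```

The contraction equals twice the determinant of the $2\times 2$ field matrix, which is why it comes out real and positive for the parameterisation above.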
We first quantise the theory by introducing creation and annihilation operators as \begin{equation} \begin{aligned} Z^{\alpha\dot{\alpha}}(\sigma,\tau)&=\frac{1}{\sqrt{2\pi}}\int {\rm d}p\frac{1}{2\sqrt{\omega_p}}\left( e^{ip\sigma}a^{\alpha\dot{\alpha}}(p,\tau)+e^{-ip\sigma}\epsilon^{\alpha\beta}\epsilon^{\dot{\alpha}\dot{\beta}}a^\dagger_{\beta\dot{\beta}}(p,\tau)\right)\,, \\ Y^{a\dot{a}}(\sigma,\tau)&=\frac{1}{\sqrt{2\pi}}\int {\rm d}p\frac{1}{2\sqrt{\omega_p}}\left( e^{ip\sigma}a^{a\dot{a}}(p,\tau)+e^{-ip\sigma}\epsilon^{ab}\epsilon^{\dot{a}\dot{b}}a^\dagger_{b\dot{b}}(p,\tau)\right)\,, \\ P_{\alpha\dot{\alpha}}(\sigma,\tau)&=\frac{1}{\sqrt{2\pi}}\int {\rm d}p\, i\, \sqrt{\omega_p}\left( e^{-ip\sigma}a^\dagger_{\alpha\dot{\alpha}}(p,\tau)-e^{ip\sigma}\epsilon_{\alpha\beta}\epsilon_{\dot{\alpha}\dot{\beta}}a^{\beta\dot{\beta}}(p,\tau)\right)\,, \\ P_{a\dot{a}}(\sigma,\tau)&=\frac{1}{\sqrt{2\pi}}\int {\rm d}p\, i\, \sqrt{\omega_p}\left( e^{-ip\sigma}a^\dagger_{a\dot{a}}(p,\tau)-e^{ip\sigma}\epsilon_{ab}\epsilon_{\dot{a}\dot{b}}a^{b\dot{b}}(p,\tau)\right)\,, \end{aligned} \end{equation} where the frequency $\omega_p$ is related to the momentum $p$ as \begin{equation}\label{omega} \omega_p=(1+\varkappa^2)^{1\ov2} \sqrt{1+p^2} = \sqrt{1+p^2\over 1-\nu^2}\,, \end{equation} and we have introduced a new convenient parameterisation of the deformation as \begin{equation} \nu = {\varkappa\over (1+\varkappa^2)^{1\ov2}}={2\eta\over 1+\eta^2}\,. \end{equation} We compute the T-matrix defined by~\eqref{eq:def-Tmat} using Equation~\eqref{eq:pert-Tmat}. The free Hamiltonian governing the dynamics of in- and out-states is found by rewriting the quadratic Hamiltonian~\eqref{eq:quadr-hamilt-eta-def} in terms of the oscillators $a^\dagger,a$. At leading order in the large-tension expansion the potential $\gen{V}$ is essentially the quartic Hamiltonian~\eqref{eq:quartic-hamilt-eta-def} written for $a^\dagger,a$ \begin{equation} \gen{V}=\frac{1}{g}\ \gen{H}_4+\mathcal{O}(1/g^2)\,. 
\end{equation} The additional power of $1/g$ comes from the expansion in powers of fields~\eqref{eq:field-expansion}, as it is seen also in~\eqref{eq:large-tens-exp-eta-def}. It is convenient to rewrite the tree-level S-matrix as a sum of two terms $\mathbb{T}=\mathbb{T}^G + \mathbb{T}^{B}$, coming from $\mathcal{H}^G_4$ and $\mathcal{H}^{B}_4$ respectively. The reason is that $\mathbb{T}^G$ preserves the $\alg{so}(4)\oplus \alg{so}(4)$ symmetry, while $\mathbb{T}^{B}$ breaks it. To write the results we consider states with momenta $p,p'$---and corresponding frequencies $\omega,\omega'$---and we always assume that $p>p'$. To lighten the notation, we denote the states found by acting with the creation operators on the vacuum by $\ket{Z_{\alpha\dot{\alpha}}}\equiv a^\dagger_{\alpha\dot{\alpha}}\ket{\mathbf{0}}, \ \ket{Y_{a\dot{a}}}\equiv a^\dagger_{a\dot{a}}\ket{\mathbf{0}}$. The action of $\mathbb{T}^G$ on the two-particle states is given by\footnote{Here a $'$ on a state is used when the corresponding momentum is $p'$.} \begin{equation} \label{Tmatrix} \begin{aligned} \mathbb{T}^G \, \ket{Y_{a\dot{c}} Y_{b\dot{d}}'} &= \left[ \frac{1-2a}{2}(p \omega' - p' \omega) +\frac{1}{2} \frac{ (p-p')^2 +\nu^2 (\omega-\omega')^2}{p \omega' - p' \omega} \right] \, \ket{Y_{a\dot{c}} Y_{b\dot{d}}'} \\ & + \frac{p p' + \nu^2 \omega \omega' }{p \omega' - p' \omega} \left( \ket{Y_{a\dot{d}} Y_{b\dot{c}}'} + \ket{Y_{b\dot{c}} Y_{a\dot{d}}'} \right), \\ \\ \mathbb{T}^G \, \ket{Z_{\alpha\dot{\gamma}} Z_{\beta\dot{\delta}}'} &= \left[ \frac{1-2a}{2}(p \omega' - p' \omega) -\frac{1}{2} \frac{ (p-p')^2 +\nu^2 (\omega-\omega')^2 }{p \omega' - p' \omega} \right] \, \ket{Z_{\alpha\dot{\gamma}} Z_{\beta\dot{\delta}}'} \\ & - \frac{ p p' + \nu^2 \omega \omega' }{p \omega' - p' \omega} \left( \ket{Z_{\alpha\dot{\delta}} Z_{\beta\dot{\gamma}}'} + \ket{Z_{\beta\dot{\gamma}} Z_{\alpha\dot{\delta}}'} \right), \\ \\ \mathbb{T}^G \, \ket{Y_{a\dot{b}} Z_{\alpha\dot{\beta}}'} &= \left[ 
\frac{1-2a}{2}(p \omega' - p' \omega) -\frac{1}{2}\frac{\omega^2-\omega'^2}{p \omega' - p' \omega} \right] \, \ket{Y_{a\dot{b}} Z_{\alpha\dot{\beta}}'}, \\ \\ \mathbb{T}^G \, \ket{Z_{\alpha\dot{\beta}} Y_{a\dot{b}}'} &= \left[ \frac{1-2a}{2}(p \omega' - p' \omega) +\frac{1}{2}\frac{\omega^2-\omega'^2}{p \omega' - p' \omega} \right] \, \ket{Z_{\alpha\dot{\beta}} Y_{a\dot{b}}'}, \end{aligned} \end{equation} and the action of $\mathbb{T}^{B}$ on the two-particle states is \begin{equation} \begin{aligned} \mathbb{T}^{B} \, \ket{Y_{a\dot{c}} Y_{b\dot{d}}'} & = i \nu\left(\epsilon_{ab} \ket{Y_{b\dot{c}} Y_{a\dot{d}}'} +\epsilon_{\dot{c}\dot{d}} \ket{Y_{a\dot{d}} Y_{b\dot{c}}'}\right) , \\ \mathbb{T}^{B} \, \ket{Z_{\alpha\dot{\gamma}} Z_{\beta\dot{\delta}}'} & = i \nu \left( \epsilon_{\alpha\beta} \ket{Z_{\beta\dot{\gamma}} Z_{\alpha\dot{\delta}}'} + \epsilon_{\dot{\gamma}\dot{\delta}} \ket{Z_{\alpha\dot{\delta}} Z_{\beta\dot{\gamma}}'}\right)\,, \end{aligned} \end{equation} where on the r.h.s. we obviously do not sum over the repeated indices. \medskip In the undeformed case, the S-matrix ${\mathbb S}$ computed in perturbation theory is factorised into the product of two S-matrices, each of them invariant under one copy of the centrally extended superalgebra $\alg{psu}(2|2)$~\cite{Beisert:2005tm,Arutyunov:2006yd} \begin{equation} \mathbb{S}_{\alg{psu}(2|2)^2_{\text{c.e.}}} = \mathbb{S}_{\alg{psu}(2|2)_{\text{c.e.}}}\, \hat{\otimes}\, \mathbb{S}_{\alg{psu}(2|2)_{\text{c.e.}}}\,. \end{equation} Using~\eqref{eq:def-Tmat} one finds the corresponding factorisation rule for the T-matrix \begin{eqnarray}\label{eq:factoris-rule-T-mat-AdS5} {\mathbb T}^{P\dot{P},Q\dot{Q}}_{M\dot{M},N\dot{N}}=(-1)^{\epsilon_{\dot M}(\epsilon_{N}+\epsilon_{Q})}{\cal T}_{MN}^{PQ}\delta_{\dot{M}}^{\dot{P}}\delta_{\dot{N}}^{\dot{Q}} +(-1)^{\epsilon_Q(\epsilon_{\dot{M}}+\epsilon_{\dot{P}})}\delta_{M}^{P}\delta_{N}^{Q} {\cal T}_{\dot{M}\dot{N}}^{\dot{P}\dot{Q}}\, . 
\end{eqnarray} Here $M=(a, \alpha)$ and $\dot{M}=(\dot{a},\dot{\alpha})$, and undotted and dotted indices refer to the two copies of $\alg{psu}(2|2)$, respectively, while $\epsilon_{M}$ and $\epsilon_{\dot{M}}$ describe the statistics of the corresponding indices, {\it i.e.} they are zero for bosonic (Latin) indices and equal to one for fermionic (Greek) ones. For the bosonic model the factor ${\mathcal T}$ can be regarded as a $16\times 16$ matrix. \smallskip It is not difficult to see that the same type of factorisation persists in the deformed case as well. Indeed, from \eqref{Tmatrix} we extract the following elements for the ${\mathcal T}$-matrix \begin{eqnarray}\label{cTmatr} \begin{aligned} &{\mathcal T}_{ab}^{cd}= A\,\delta_a^c\delta_b^d+B\,\delta_a^d\delta_b^c+W\, \epsilon_{ab}\delta_a^d\delta_b^c\, , \\ &{\mathcal T}_{\alpha\beta}^{\gamma\delta}= D\,\delta_\alpha^\gamma\delta_\beta^\delta+E\,\delta_\alpha^\delta\delta_\beta^\gamma+W\, \epsilon_{\alpha\beta}\, \delta_{\alpha}^{\delta}\delta_{\beta}^{\gamma}\,, \\ &{\mathcal T}_{a\beta}^{c\delta}= G\,\delta_a^c\delta_\beta^\delta\,,\qquad ~{\mathcal T}_{\alpha b}^{\gamma d}= L\,\delta_\alpha^\gamma\delta_b^d\,, \end{aligned} \end{eqnarray} where the coefficients are given by \begin{eqnarray} \begin{aligned} \label{Tmatrcoef} &A(p,p')= \frac{1-2a}{4}(p \omega' - p' \omega) +\frac{1}{4} \frac{ (p-p')^2 +\nu^2 (\omega-\omega')^2}{p \omega' - p' \omega} \,,\\ &B(p,p')=-E(p,p')= \frac{p p' + \nu^2 \omega \omega' }{p \omega' - p' \omega} \,, \\ &D(p,p')=\frac{1-2a}{4} (p \omega' - p' \omega) -\frac{1}{4} \frac{ (p-p')^2 +\nu^2 (\omega-\omega')^2 }{p \omega' - p' \omega} \,,\\ &G(p,p')=-L(p',p)=\frac{1-2a}{4}(p \omega' - p' \omega) -\frac{1}{4} \frac{\omega^2-\omega'^2}{p \omega' - p' \omega} \,,\\ &W(p,p')= i\nu \, . 
\end{aligned} \end{eqnarray} Here $W$ corresponds to the contribution of the Wess-Zumino term and does not actually depend on the particle momenta. The four remaining coefficients ${\mathcal T}_{ab}^{\gamma\delta},{\mathcal T}_{\alpha\beta}^{cd},{\mathcal T}_{a\beta}^{\gamma d},{\mathcal T}_{\alpha b}^{c\delta}$ vanish in the bosonic case but will be switched on once fermions are taken into account. The matrix ${\mathcal T}$ is recovered from its matrix elements as follows \begin{eqnarray} \nonumber {\mathcal T}={\cal T}_{MN}^{PQ}\, E_P^M\otimes E_Q^N={\mathcal T}_{ab}^{cd}\, E_c^a\otimes E_d^b+{\mathcal T}_{\alpha\beta}^{\gamma\delta}\, E_\gamma^\alpha\otimes E_\delta^\beta+ {\mathcal T}_{a\beta}^{c\delta}\, E_c^a\otimes E_\delta^\beta+{\mathcal T}_{\alpha b}^{\gamma d}\, E_\gamma^\alpha\otimes E_d^b\, , \end{eqnarray} where $E_M^N$ are the standard matrix units. For the reader's convenience we present ${\mathcal T}$ as an explicit $16\times 16$ matrix\footnote{See Appendix 8.5 of \cite{Arutyunov:2006yd} for the corresponding matrix in the undeformed case.} {\scriptsize \begin{eqnarray} {\mathcal T}\equiv \left( \begin{array}{ccccccccccccccccccc} {\cal A}_1&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0\\ \color{black!40}0&{\cal A}_2&\color{black!40}0&\color{black!40}0&|&{\cal A}_4&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0 &\color{black!40}0\\ \color{black!40}0&\color{black!40}0&{\cal 
A}_3&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0\\ \color{black!40}0&\color{black!40}0&\color{black!40}0&{\cal A}_3&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0\\ -&-&-&-&-&-&-&-&-&-&-&-&-&-&-&-&-&-&-\\ \color{black!40}0&{\cal A}_5&\color{black!40}0&\color{black!40}0&|& {\cal A}_2&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0& \color{black!40}0 &\color{black!40}0\\ \color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&{\cal A}_1&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0\\ \color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&{\cal A}_3&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0\\ \color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&{\cal A}_3&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0\\ -&-&-&-&-&-&-&-&-&-&-&-&-&-&-&-&-&-&-\\ \color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&{\cal 
A}_8&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0\\ \color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&{\cal A}_8&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0\\ \color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0& {\cal A}_6 &\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0\\ \color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|& \color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&{\cal A}_7&|&\color{black!40}0&\color{black!40}0&{\cal A}_9&\color{black!40}0\\ -&-&-&-&-&-&-&-&-&-&-&-&-&-&-&-&-&-&-\\ \color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&{\cal A}_8&\color{black!40}0&\color{black!40}0&\color{black!40}0\\ \color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&{\cal A}_8&\color{black!40}0&\color{black!40}0\\ \color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&{\cal A}_{10}&|&\color{black!40}0&\color{black!40}0&{\cal A}_7&\color{black!40}0\\ 
\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&\color{black!40}0&|&\color{black!40}0&\color{black!40}0&\color{black!40}0&{\cal A}_6\\ \end{array} \right) \, .\nonumber \end{eqnarray} } Here the non-trivial matrix elements of ${\mathcal T}$ are given by \begin{eqnarray} &&{\cal A}_1=A+B\, ,\quad {\cal A}_2=A\, ,\quad {\cal A}_3=G\, ,\quad {\cal A}_4=B-W\, ,\quad {\cal A}_5=B+W\,, \\\nonumber &&{\cal A}_6=D+E\, ,\quad {\cal A}_7=D\, , \quad {\cal A}_8=L\, ,\quad {\cal A}_9=E-W=-{\cal A}_5 \, ,\quad {\cal A}_{10}=E+W=-{\cal A}_4\, . \end{eqnarray} We conclude this section by pointing out that the matrix ${\mathcal T}$ that we have found satisfies the classical Yang-Baxter equation \begin{eqnarray} [{\mathcal T}_{12}(p_1,p_2),{\mathcal T}_{13}(p_1,p_3)+{\mathcal T}_{23}(p_2,p_3)]+[{\mathcal T}_{13}(p_1,p_3),{\mathcal T}_{23}(p_2,p_3)]=0\, \end{eqnarray} for any value of the deformation parameter $\nu$. \subsection{Comparison with the $q$-deformed S-matrix} In this subsection we show that the perturbative bosonic worldsheet S-matrix coincides with the first nontrivial term in the large-$g$ expansion of the $q$-deformed ${\rm AdS}_5\times {\rm S}^5\ $ S-matrix\footnote{The difference with the expansion performed in \cite{Beisert:2010kk} is that we include the dressing factor in the definition of the S-matrix.}. Let us recall that---up to an overall factor---the $q$-deformed ${\rm AdS}_5\times {\rm S}^5\ $ S-matrix is given by the tensor product of two copies of the $\alg{psu}(2|2)_q$-invariant S-matrix~\cite{Beisert:2008tw}, which we denote simply by $\mathbf{S}$ to avoid heavy notation. The matrix may be found in~\eqref{Sqmat} of Appendix~\ref{app:matrixSmatrix}. 
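The relations among the coefficients~\eqref{Tmatrcoef} quoted above---antisymmetry of $A$ and $D$ under $p\leftrightarrow p'$, $B=-E$, $G(p,p')=-L(p',p)$, and ${\cal A}_9=-{\cal A}_5$, ${\cal A}_{10}=-{\cal A}_4$---are easy to confirm numerically. A short sketch, with momenta, deformation parameter and gauge parameter chosen arbitrarily:

```python
import numpy as np

def coeffs(p, pp, nu, a=0.5):
    """Tree-level cal-T matrix coefficients A, B, D, E, G, L, W."""
    w  = np.sqrt((1 + p**2)  / (1 - nu**2))   # dispersion relation omega_p
    wp = np.sqrt((1 + pp**2) / (1 - nu**2))
    den = p*wp - pp*w
    A = (1 - 2*a)/4 * den + ((p - pp)**2 + nu**2*(w - wp)**2) / (4*den)
    B = (p*pp + nu**2 * w * wp) / den
    D = (1 - 2*a)/4 * den - ((p - pp)**2 + nu**2*(w - wp)**2) / (4*den)
    E = -B
    G = (1 - 2*a)/4 * den - (w**2 - wp**2) / (4*den)
    L = (1 - 2*a)/4 * den + (w**2 - wp**2) / (4*den)
    W = 1j * nu
    return dict(A=A, B=B, D=D, E=E, G=G, L=L, W=W)

p, pp, nu, a = 1.3, 0.4, 0.25, 0.7
c, c_swapped = coeffs(p, pp, nu, a), coeffs(pp, p, nu, a)

assert np.isclose(c['G'], -c_swapped['L'])   # G(p,p') = -L(p',p)
assert np.isclose(c['A'], -c_swapped['A'])   # A antisymmetric in p <-> p'
assert np.isclose(c['B'], -c_swapped['B'])   # B antisymmetric as well
assert np.isclose(c['D'], -c_swapped['D'])
# A9 = E - W = -(B + W) = -A5  and  A10 = E + W = -(B - W) = -A4
assert np.isclose(c['E'] - c['W'], -(c['B'] + c['W']))
assert np.isclose(c['E'] + c['W'], -(c['B'] - c['W']))
```

Here $L$ is evaluated from its own expression, so the first assertion is a genuine cross-check of the two formulas rather than a tautology; only $W$ survives the $p\leftrightarrow p'$ exchange unchanged, consistent with its origin in the momentum-independent Wess-Zumino coupling.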
We also need to multiply it by the overall factor $S_{\alg{su}(2)}$~\cite{Hoare:2011wr} \begin{equation} \begin{aligned} &S_{\alg{su}(2)}(p_1,p_2) \ \mathbf{S}_{12}\, \hat{\otimes} \, \mathbf{S}_{12}\, , \\ &S_{\alg{su}(2)}(p_1,p_2)=\frac{e^{i a(p_2{\cal E}_1-p_1{\cal E}_2)}}{\sigma_{12}^2}\frac{x_1^++\xi}{x_1^-+\xi}\frac{x_2^-+\xi}{x_2^++\xi}\cdot \frac{x_1^--x_2^+}{x_1^+-x_2^-}\frac{1-\frac{1}{x_1^-x_2^+}}{1-\frac{1}{x_1^+x_2^-}}\, . \end{aligned} \end{equation} Here $\hat{\otimes} $ stands for the graded tensor product, $a$ is the parameter of the light-cone gauge---see Eq.~\eqref{eq:lc-coord}---$\sigma$ is the dressing factor, and ${\cal E}$ is the $q$-deformed dispersion relation \eqref{qdisp} whose large $g$ expansion starts with $\omega$. The dressing factor can be found by solving the corresponding crossing equation, and it is given by \cite{Hoare:2011wr} \begin{equation}\label{eq:def-theta} \sigma_{12}=e^{i\theta_{12}}\,,\quad \theta_{12} = \chi(x^+_1,x^+_2) + \chi(x^-_1,x^-_2)- \chi(x^+_1,x^-_2) - \chi(x^-_1,x^+_2), \end{equation} where \begin{equation}\label{chi12} \chi(x_1,x_2) = i \oint \frac{dz}{2 \pi i} \frac{1}{z-x_1} \oint \frac{dz'}{2 \pi i} \frac{1}{z'-x_2} \log\frac{\Gamma_{q^2}(1+\frac{i g}{2} (u(z)-u(z')))}{\Gamma_{q^2}(1-\frac{i g}{2} (u(z)-u(z')))}. \end{equation} Here $\Gamma_{q}(x)$ is the $q$-deformed Gamma function which for complex $q$ admits an integral representation \eqref{lnGovG} \cite{Hoare:2011wr}. To develop the large $g$ expansion of the $q$-deformed ${\rm AdS}_5\times {\rm S}^5\ $ S-matrix, one has to assume that $q=e^{-\upsilon/g}$ where $\upsilon$ is a deformation parameter which is kept fixed in the limit $g\to\infty$, and should be related to $\nu$. 
Then, due to the factorisation of the perturbative bosonic worldsheet S-matrix and of the $q$-deformed ${\rm AdS}_5\times {\rm S}^5\ $ S-matrix, it is sufficient to compare the ${\cal T}$-matrix \eqref{cTmatr} with the ${\mathbf T}$-matrix appearing in the expansion of one copy $\mathbf S$ with the proper factor \begin{equation}\label{qTpert} (S_{\alg{su}(2)})^{1/2}\, \gen{1}_g\, \mathbf{S}=\gen{1} +\frac{i}{g}{\mathbf T}\, , \end{equation} where $\gen{1}_g$ is the graded identity which is introduced so that the expansion starts with $\gen{1}$. To check whether $\mathbf{T}=\mathcal{T}$ at leading order, the only term which is not straightforward to expand is the $S_{\alg{su}(2)}$ scalar factor because it contains the dressing phase $\theta_{12}$. It is clear that it will contribute only to the part of the ${\mathbf T}$-matrix proportional to the identity matrix. If we study the expansion of just $\gen{1}_g\, \mathbf{S}$ without the $S_{\alg{su}(2)}$ factor, we find that it is indeed related to the matrix $\mathcal{T}$ computed in perturbation theory by \begin{equation} \gen{1}_g \mathbf{S}= \gen{1} + \frac{i}{g} (\mathcal{T}-{\cal A}_1 \gen{1})\,, \end{equation} provided we identify the parameters $q$ and $\nu$, or $q$ and $\varkappa$, as \begin{equation} q=e^{-\nu/g}=e^{-\varkappa/\tilde{g}}\,, \end{equation} showing that $q$ is real. What is left to check is then the overall normalisation, namely that the $1/g$ term in the expansion of $S_{\alg{su}(2)}^{1/2}$ is equal to $ {\cal A}_1$. To this end one should find the large $g$ expansion of the dressing phase $\theta_{12}$. This is done by first expanding the ratio of $\Gamma_{q^2}$-functions in \eqref{chi12} with $u(z)$ and $u(z')$ being kept fixed, using Eq.~\eqref{qGamma1}. Next, one combines it with the expansion of the $\frac{1}{z-x_1^\pm} \frac{1}{z'-x_2^\pm} $ terms which appear in the integrand of \eqref{eq:def-theta}. 
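The chain of parameter identifications---$\nu=\varkappa/(1+\varkappa^2)^{1/2}=2\eta/(1+\eta^2)$ from~\eqref{omega}, and $q=e^{-\nu/g}=e^{-\varkappa/\tilde{g}}$ above---can be checked numerically. In the sketch below the relations $\varkappa=2\eta/(1-\eta^2)$ and $\tilde{g}=g(1+\varkappa^2)^{1/2}$ are our assumptions: they are the standard ones implied by the identifications, but are not written out in this section.

```python
import numpy as np

g, eta, p = 5.0, 0.3, 1.7
kappa = 2*eta / (1 - eta**2)          # assumed kappa(eta) relation
nu = kappa / np.sqrt(1 + kappa**2)

# the two expressions for nu agree ...
assert np.isclose(nu, 2*eta / (1 + eta**2))
# ... and so do the two forms of the dispersion relation omega_p
assert np.isclose(np.sqrt(1 + kappa**2) * np.sqrt(1 + p**2),
                  np.sqrt((1 + p**2) / (1 - nu**2)))

# with gtilde = g*sqrt(1+kappa^2) the two forms of q coincide,
# and q is real with 0 < q < 1
gtilde = g * np.sqrt(1 + kappa**2)
q = np.exp(-nu / g)
assert np.isclose(q, np.exp(-kappa / gtilde))
assert 0 < q < 1
```

In particular $1-\nu^2=1/(1+\varkappa^2)$, which is why the prefactor of the dispersion relation can be traded for the $\nu$-dependent denominator.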
As a result one finds that the dressing phase is of order $1/g$, just as in the undeformed case~\cite{Arutyunov:2004vx}. One may check numerically that the element ${\cal A}_1$ is indeed equal to the $1/g$ term in the expansion of $S_{\alg{su}(2)}^{1/2}$. In fact it is not difficult to extract from ${\cal A}_1$ the leading term in the large-$g$ expansion of the dressing phase, which turns out to be very simple \begin{equation} \theta_{12}= \frac{\nu^2 \left(\omega _1-\omega _2\right)+p_2^2 \left(\omega _1-1\right)-p_1^2 \left(\omega _2-1\right)}{2g \left(p_1+p_2\right)} +\mathcal{O}(1/g^2)\,. \end{equation} It would be curious to derive this expression directly from the double integral representation. Note that, doing this double integral, one could also get the full AFS order of the phase, which would be a deformation of the one in~\eqref{eq:AFS-xpxm}. \section{Concluding remarks} In this chapter we have studied the bosonic sector of the string on $\eta$-deformed {AdS$_5\times$S$^5$}. We have derived the deformed metric and the $B$-field that is generated. After computing the tree-level scattering processes involving bosonic excitations on the worldsheet, we were able to successfully match them with the large-tension expansion of the all-loop S-matrix found by imposing the $\alg{psu}_q(2|2)_{\text{c.e.}}$ symmetry. The bosonic background that we have derived was further studied in a series of papers. Giant magnons were studied in~\cite{Arutynov:2014ota,Khouchen:2014kaa,Ahn:2014aqa} and other classical solutions in~\cite{Kameyama:2014vma,Panigrahi:2014sia,Banerjee:2015nha}. Minimal surfaces were considered in~\cite{Kameyama:2014via,Bai:2014pya} and three-point correlation functions in~\cite{Ahn:2014iia,Bozhilov:2015kya}. For deformations of classical integrable models corresponding to subsectors of the bosonic theory we refer to~\cite{Kameyama:2014bua,Arutyunov:2014cda}. 
The perturbative S-matrix that we have computed was studied at one and two loops using unitarity techniques in~\cite{Engelund:2014pla}. In~\cite{Hoare:2014pna} truncations to lower-dimensional models and special limits were considered. In particular, a method was provided to show that the limit of maximal deformation is related by double T-duality to dS$_5\times$H$^5$, namely the product of five-dimensional de Sitter space and the five-dimensional hyperboloid. A similar and more physical ($\eta\to1,\ \varkappa\to \infty$) limit was studied in~\cite{Arutyunov:2014cra}, where agreement was shown with the background of the mirror model---first introduced to develop the Thermodynamic Bethe Ansatz~\cite{Ambjorn:2005wa,Arutyunov:2007tc,Arutyunov:2009zu}---obtained by performing a double-Wick rotation on the light-cone gauge-fixed string. The exact spectrum was actually considered in~\cite{Arutynov:2014ota}, where the notion of ``mirror duality'' was introduced, after observing that the original and mirror models are related by small/large values of the deformation parameter. A proposal on how to deform the $\sigma$-model on {AdS$_5\times$S$^5$} to obtain the $q$-deformation in the case of $q$ being a root of unity is the $\lambda$-deformation of~\cite{Hollowood:2014qma}. We refer also to~\cite{Vicedo:2015pna,Hoare:2015gda} for a relation between the $\eta$-deformation and the $\lambda$-deformation. Generalisations of the deformation procedure were studied in~\cite{Kawaguchi:2014fca,Matsumoto:2014cja,vanTongeren:2015soa,vanTongeren:2015uha}, where Jordanian and other deformations based on $R$-matrices satisfying the classical Yang-Baxter equation were considered. In~\cite{Lunin:2014tsa} it was shown that in the two cases of (AdS$_2\times$S$^2)_\eta$ and (AdS$_3\times$S$^3)_\eta$ it is possible to add to the deformed metric the missing NSNS scalar and RR fields, to obtain a background satisfying the supergravity equations of motion. 
Of these two cases, only for (AdS$_2\times$S$^2)_\eta$ was it conjectured that the solution\footnote{A one-parameter family of solutions was actually found, and the conjecture proposes one specific point to correspond to the deformed model.} corresponds to the $\eta$-deformation. The explicit check of this is still missing. The case of {(AdS$_5\times$S$^5)_\eta$} is technically more complicated and a supergravity solution has not been found. One of the main motivations of the next chapter is to compute the Lagrangian of the superstring in the deformed model up to quadratic order in fermions. This will allow us to read off the couplings to the unknown RR fields. \section{Discussion}\label{sec:discuss-eta-def-background} From the Lagrangian at quadratic order in fermions and the kappa-symmetry variation of the worldsheet metric, we have read off couplings to tensors that we want to interpret as the field strengths of the RR fields. In this section we show that the results that we have obtained are \emph{not} compatible with the Bianchi identities and the equations of motion of supergravity. Let us start by looking at the Bianchi identity for $\widetilde{F}^{(1)}$ \begin{equation} \partial_M \widetilde{F}_N- \ _{(M\leftrightarrow N)}=0\,, \implies \partial_M\left(e^{\varphi}\widetilde{F}_N\right) -\partial_M\varphi\, e^{\varphi}\widetilde{F}_N-\ _{(M\leftrightarrow N)}=0. \end{equation} We prefer to rewrite it in the second form, because we only know the combination $e^{\varphi}\widetilde{F}_M$. 
In particular we obtain \begin{equation}\label{eq:Bianchi-F1-expl} \partial_M\left(e^{\varphi}\widetilde{F}_{\psi_2}\right) -\partial_M\varphi\, e^{\varphi}\widetilde{F}_{\psi_2}- \ _{(M\leftrightarrow \psi_2)}=0\,, \qquad \partial_M\left(e^{\varphi}\widetilde{F}_{\phi_2}\right) -\partial_M\varphi\, e^{\varphi}\widetilde{F}_{\phi_2}- \ _{(M\leftrightarrow \phi_2)}=0\,, \end{equation} because from~\eqref{eq:curved-comp-F1} we know that the only non-vanishing components are $\widetilde{F}_{\psi_2},\widetilde{F}_{\phi_2}$. Moreover, using the fact that the combinations $e^{\varphi}\widetilde{F}_{\psi_2},e^{\varphi}\widetilde{F}_{\phi_2}$ depend just on $\zeta,\rho,\xi,r$ we immediately find that the derivatives of the dilaton $\varphi$ should satisfy the following equations \begin{equation} \begin{aligned} \partial_{t}\varphi = 0\,, \quad \partial_{\psi_1}\varphi = 0\,, \quad \partial_{\phi}\varphi = 0\,, \quad \partial_{\phi_1}\varphi = 0\,, \\ e^{\varphi}\widetilde{F}_{\phi_2}\, \partial_{\psi_2}\varphi =e^{\varphi}\widetilde{F}_{\psi_2}\, \partial_{\phi_2}\varphi\,, \\ \partial_M\varphi=\frac{1}{e^{\varphi}\widetilde{F}_{\psi_2}}\, \partial_M\left(e^{\varphi}\widetilde{F}_{\psi_2}\right) = \frac{1}{e^{\varphi}\widetilde{F}_{\phi_2}}\, \partial_M\left(e^{\varphi}\widetilde{F}_{\phi_2}\right) \quad M=\zeta,\rho,\xi,r\,. \end{aligned} \end{equation} The last equation comes from the compatibility of the two equations that we obtain from~\eqref{eq:Bianchi-F1-expl} for $M=\zeta,\rho,\xi,r$. A consequence of this compatibility is the equation \begin{equation} \partial_M \log \left( \rho ^4 \sin ^2 \zeta \right) = - \partial_M \log \left(r^4 \sin ^2\xi\right)\,, \qquad M=\zeta,\rho,\xi,r\,, \end{equation} which is clearly not satisfied. We then conclude that the results are not compatible with the Bianchi identity for $\widetilde{F}^{(1)}$. 
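The incompatibility is immediate to verify symbolically: the $\rho$-derivative of the left-hand side is $4/\rho$, while the right-hand side does not depend on $\rho$ at all (and similarly for the other coordinates). A minimal sketch:

```python
import sympy as sp

rho, r, zeta, xi = sp.symbols('rho r zeta xi', positive=True)

lhs = sp.log(rho**4 * sp.sin(zeta)**2)
rhs = -sp.log(r**4 * sp.sin(xi)**2)

# compatibility would require d_M lhs = d_M rhs for M = zeta, rho, xi, r;
# already the rho-derivatives disagree:
assert sp.simplify(sp.diff(lhs, rho) - 4/rho) == 0   # d_rho lhs = 4/rho
assert sp.diff(rhs, rho) == 0                        # d_rho rhs = 0
```

The same mismatch occurs for $M=\zeta$, $\xi$ and $r$, since the two sides depend on disjoint sets of coordinates.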
Failure to satisfy the equations of motion of type IIB supergravity is easily seen by considering the equation \begin{equation}\label{eq-combination-sugra-not-satisf} \partial_{P}\left(\sqrt{-\widetilde{G}}\ e^{-2\varphi}\widetilde{H}^{MNP}\right)-\sqrt{-\widetilde{G}}\ \widetilde{F}^{MNP}\widetilde{F}_P-{1\over 6}\sqrt{-\widetilde{G}}\ \widetilde{F}^{MNPQR} \widetilde{F}_{PQR}=0\,, \end{equation} which is obtained by combining the equations of motion for the NSNS two-form~\eqref{eq:eom-B} and the RR two-form~\eqref{eq:eom-C2}. If we select \textit{e.g.}\xspace the indices $(M,N)=(t,\rho)$, the first term---containing $\widetilde{H}$ and the unknown factor with the dilaton---drops out. We then get a purely algebraic equation that we can evaluate using the information at our disposal. To avoid writing curved indices, we write it explicitly in terms of tangent indices\footnote{To transform the curved indices $M,N$ into tangent indices it is enough to multiply the equation by the proper vielbein components~\eqref{eq:def-vielb-comp}. Summed indices can be translated from curved to tangent without affecting the result.} \begin{equation} \begin{aligned} \widetilde{F}^{041}\widetilde{F}_{1}+ \widetilde{F}^{046}\widetilde{F}_{6} \\ + \widetilde{F}^{04123}\widetilde{F}_{123}+ \widetilde{F}^{04236}\widetilde{F}_{236}+ \widetilde{F}^{04159}\widetilde{F}_{159}+ \widetilde{F}^{04178}\widetilde{F}_{178}+ \widetilde{F}^{04569}\widetilde{F}_{569}+ \widetilde{F}^{04678}\widetilde{F}_{678}=0\,. \end{aligned} \end{equation} Multiplying by $e^{2\varphi}$ and using~\eqref{eq:flat-comp-F1},~\eqref{eq:flat-comp-F3} and~\eqref{eq:flat-comp-F5} we find that the left-hand side of the above equation equals \begin{equation} \frac{16(1+\varkappa^2) \varkappa \rho}{1-\varkappa^2 \rho}\neq 0\,. \end{equation} We conclude that the equations of motion of type IIB supergravity are not satisfied. 
It is natural to wonder whether there exist field redefinitions at the level of the $\sigma$-model action that can cure this problem. We now proceed by first discussing this possibility, and then by studying two special limits of the $\eta$-deformed model. \subsection{On field redefinitions}\label{sec:field-red-q-def} In Section~\ref{sec:canonical-form} we were able to transform the original Lagrangian into the canonical form and, as we have just observed, the RR couplings that we have derived do not satisfy the supergravity equations of motion. However, the NSNS couplings are properly reproduced in the quadratic fermionic action, as they are compatible with the results of the bosonic Lagrangian. We are motivated to ask whether further field redefinitions could be performed which exclusively change the RR content of the theory. It appears to be rather difficult to answer this question in full generality. We will argue however that no such field redefinition exists which is continuous in the deformation parameter. We work in the formulation with 32-dimensional fermions $\Theta_I$ obeying the Majorana and Weyl conditions, see appendix \ref{sec:10-dim-gamma}. We start by considering a generic rotation of fermions\footnote{One could imagine more complicated redefinitions like $\Theta_I \to U_{IJ} \Theta_J+V_{IJ}^{\alpha}\partial_{\alpha}\Theta_J$, etc. They were not needed to bring the original Lagrangian to the canonical form and we do not consider them here. These redefinitions will generate higher derivative terms in the action, whose cancellation would imply further stringent constraints on their possible form.} \begin{equation} \label{eq:FIJ} \begin{aligned} \Theta_I &\to U_{IJ} \Theta_J,\quad \bar\Theta_I \to \bar\Theta_J\bar U_{IJ}\, , \quad \bar U_{IJ} =- \Gamma_0 U_{IJ}^\dagger\Gamma_0\, , \end{aligned} \end{equation} where $U_{IJ}$ are rotation matrices which can depend on bosonic fields. 
We write $U_{IJ}$ as an expansion over a complete basis in the space of $2\times 2$-matrices \begin{equation} \label{eq:FIJ1} \begin{aligned} U_{IJ} &\equiv \delta^{IJ} U_\delta + \sigma_1^{IJ} U_{\sigma_1} + \epsilon^{IJ} U_{\epsilon} + \sigma_3^{IJ} U_{\sigma_3} = \sum_{a=0}^3\alg{s}_a^{IJ}U_a\,,\\ \bar U_{IJ} &= \delta^{IJ} \bar U_\delta + \sigma_1^{IJ} \bar U_{\sigma_1} + \epsilon^{IJ} \bar U_{\epsilon} + \sigma_3^{IJ} \bar U_{\sigma_3}= \sum_{a=0}^3\alg{s}_a^{IJ}\bar U_a\, , \end{aligned} \end{equation} where we have introduced $$ \alg{s}_0^{IJ}=\delta^{IJ}\,,\quad \alg{s}_1^{IJ}=\sigma_1^{IJ}\,,\quad \alg{s}_2^{IJ}=\epsilon^{IJ}\,,\quad \alg{s}_3^{IJ}=\sigma_3^{IJ}\, . $$ The objects $U_{a}$ and $\bar{U}_{a}$ are $32\times 32$-matrices and they can be expanded over the complete basis generated by $\Gamma^{(r)}$ and identity, see appendix \ref{sec:10-dim-gamma} for the definition and properties of $\Gamma^{(r)}$. Further, we require that the transformation $U_{IJ}$ preserves chirality and the Majorana condition. Conservation of chirality implies that the $\Gamma$-matrices appearing in the expansion of $U_{IJ}$ must commute with $\Gamma_{11}$, {\it i.e.} the expansion involves $\Gamma^{(r)}$ of even rank only \begin{equation} \begin{aligned}\label{eq:def-redefinition} U_a &= f_a\,{\mathbb I}_{32} + {1\ov2}f^{mn}_a\Gamma_{mn} + {1\ov24}f^{klmn}_a\Gamma_{klmn} \,,\\ \bar U_a &= \bar f_a\,{\mathbb I}_{32} + {1\ov2}\bar f^{mn}_a\Gamma_{mn} + {1\ov24}\bar f^{klmn}_a\Gamma_{klmn} \,. \end{aligned} \end{equation} In this expansion matrices of rank six, eight and ten are missing, because by virtue of duality relations they can be re-expressed via matrices of lower rank. The Majorana condition imposes the requirement \begin{equation}\label{eq:Maj} \Gamma_0U^\dagger_{IJ}\Gamma_0=\mathcal{C}U^t_{IJ}\mathcal{C}\, , \end{equation} which implies that the coefficients $f$ are real. 
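As a quick count, each $U_a$ in~\eqref{eq:def-redefinition} carries $\binom{10}{0}+\binom{10}{2}+\binom{10}{4}=256$ independent real coefficients, and the four matrices $\alg{s}_a$ then give $4\cdot 256=1024=32^2$, which is the dimension of ${\rm GL}(32,{\mathbb R})$ quoted below. In Python:

```python
from math import comb

# even-rank Gamma's kept in the expansion: rank 0, 2 and 4
coeffs_per_block = comb(10, 0) + comb(10, 2) + comb(10, 4)
assert coeffs_per_block == 1 + 45 + 210 == 256   # = 16**2

# four 2x2 basis matrices s_a (delta, sigma_1, epsilon, sigma_3)
total = 4 * coeffs_per_block
assert total == 1024 == 2**10 == 32**2           # dim GL(32, R)
```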
Coefficients of $\bar{U}_a$ are then given by \begin{equation} \bar f_{a} = f_a\,,\quad \bar f_{a}^{mn} =- f_a^{mn}\,,\quad \bar f_{a}^{klmn} = f_a^{klmn}\,. \end{equation} Let us note that combining equations \eqref{eq:FIJ} and \eqref{eq:Maj}, we get \begin{equation}\label{eq:symFIJ} \mathcal{C}\bar{U}^t_{IJ}\mathcal{C}= -U_{IJ}\,, \quad \text{ and } \quad \mathcal{C}U^t_{IJ}\mathcal{C}= -\bar{U}_{IJ}\, . \end{equation} The total number of degrees of freedom in the rotation matrix is $$ 4\cdot\Big(1+\frac{10\cdot 9}{2}+\frac{10\cdot 9\cdot 8\cdot 7}{4!}\Big)=2^{10}=(16+16)^2 \, , $$ which is precisely the dimension of ${\rm GL}(32,{\mathbb R})$. This correctly reflects the freedom to perform general linear transformations on the 32 real fermions of type IIB. \smallskip Under these rotations the part of the Lagrangian containing derivatives of fermions transforms as \begin{equation} \begin{aligned} &&(\gamma^{\alpha\beta}\delta^{IJ}+\epsilon^{\alpha\beta}\sigma_3^{IJ})\bar{\Theta}_I \widetilde{e}_{\alpha}^m\Gamma_m\partial_{\beta}\Theta_{J} \to \\ &&~~~~~~~~~~\to (\gamma^{\alpha\beta}\delta^{IJ}+\epsilon^{\alpha\beta}\sigma_3^{IJ})\Big( \bar{\Theta}_K \, \bar{U}_{IK} \, \widetilde{e}^m_\alpha \Gamma_m U_{JL} \, \partial_\beta \Theta_L + \bar{\Theta}_K \, \bar{U}_{IK} \, \widetilde{e}^m_\alpha \Gamma_m (\partial_\beta U_{JL})\Theta_L \Big)\, . 
\end{aligned} \end{equation} The requirement that under rotations the part with $\partial\Theta$ remains unchanged can be formulated as the following conditions on $U_{IJ}$: \begin{equation}\label{eq:redef_cond} \begin{aligned} \Theta_K\delta^{IJ} \bar{U}_{IK} \, \Gamma_m U_{JL}\partial\Theta_L &=\Theta_K\delta_{KL}\Gamma_m \partial\Theta_L+{\rm removable~terms}\, , \\ \Theta_K\sigma_3^{IJ}\bar{U}_{IK} \, \Gamma_m U_{JL}\partial\Theta_L&=\Theta_K\sigma_3^{KL}\Gamma_m\partial\Theta_L+{\rm removable~terms}\, , \end{aligned} \end{equation} where ``removable terms'' means terms which can be removed by shifting bosons in the bosonic action by fermion bilinears, similarly to what was done in~\eqref{eq:red-bos}. In the following it is enough to analyse the first equation in \eqref{eq:redef_cond}. Let us collect all terms on its right-hand side that are removable by shifting bosons into a tensor $M_{KL,m}$, where the indices $K,L$ should be multiplied by proper fermions and $m$ is an index in the tangent space. This tensor has the following symmetry property\footnote{Notice that to exhibit this symmetry property, one has to transpose also the indices $K,L$, on top of transposition in the $32\times32$ space.} \begin{equation}\label{eq:sym1} \mathcal{C}\left(M_{KL,m}\right)^t\mathcal{C}=- M_{LK,m}\,, \end{equation} which we need to impose if we want the shift of bosons to be non-vanishing. Note that the tensor in the canonical kinetic term has exactly the opposite symmetry property \begin{equation}\label{eq:sym2} \mathcal{C}(\delta_{KL} \Gamma_m^t)\mathcal{C}=\delta_{LK} \Gamma_m\, . \end{equation} Putting this information together, let us consider the first equation in~\eqref{eq:redef_cond} written as \begin{equation} \bar{U}_{IK}\Gamma_mU_{IL}= \delta_{KL}\Gamma_m+M_{KL,m}\,. 
\end{equation} We take the transposition and multiply by $\mathcal{C}$ from the left and from the right \begin{equation} \begin{aligned} \mathcal{C}\left(\bar{U}_{IK}\Gamma_mU_{IL}\right)^t \mathcal{C} &= \delta_{KL}\mathcal{C}\left(\Gamma_m\right)^t\mathcal{C}+\mathcal{C}\left(M_{KL,m}\right)^t\mathcal{C}, \end{aligned} \end{equation} and further manipulate as \begin{equation} \begin{aligned} \mathcal{C}\left(U_{IL}\right)^t \mathcal{C}\cdot \mathcal{C}\left(\Gamma_m\right)^t \mathcal{C}\cdot \mathcal{C}\left(\bar{U}_{IK}\right)^t \mathcal{C} &= \delta_{KL}\mathcal{C}\left(\Gamma_m\right)^t\mathcal{C}+\mathcal{C}\left(M_{KL,m}\right)^t\mathcal{C}\, . \end{aligned} \end{equation} With the help of eqs.~\eqref{eq:symFIJ}, \eqref{eq:sym1} and \eqref{eq:sym2}, and relabelling the indices $K$ and $L$, we get \begin{equation} \begin{aligned} \bar{U}_{IK}\Gamma_mU_{IL}= \delta_{KL}\Gamma_m-M_{KL,m}\, , \end{aligned} \end{equation} which shows that $M_{KL,m}=0$, that is, this structure cannot appear because it is incompatible with the symmetry properties of the rotated kinetic term. It is clear that the same considerations also apply to the second equation in \eqref{eq:redef_cond}, where $\sigma_3^{IJ}$ replaces $\delta^{IJ}$. Thus, the rotation matrix $U_{IJ}$ must satisfy a more stringent system of equations. To write these equations without having to deal with indices of the $2\times 2$ space, we can introduce $64\times 64$ matrices \begin{equation} U\equiv \sum_{a=0}^3 s_a \otimes U_a\,, \qquad \bar{U}\equiv \sum_{a=0}^3 s_a^t \otimes \bar{U}_a=-\sum_{a=0}^3 s_a^t \otimes \mathcal{C}U_a^t\mathcal{C}\, , \end{equation} which allow us to write the constraints as \begin{equation} \begin{aligned}\label{eq:stringent1} \Pi_-\Big(\bar{U} \,(\mathbf{1}_2 \otimes \Gamma_m)U &- \mathbf{1}_2 \otimes \Gamma_m\Big)\Pi_+=0\,, \\ \Pi_-\Big(\bar{U} \,( \sigma_3 \otimes \Gamma_m)U &- \sigma_3 \otimes \Gamma_m\Big)\Pi_+=0\, . 
\end{aligned} \end{equation} Here we are multiplying by the two projectors $\Pi_{\pm}={\mathbf 1}_2\otimes \frac{1}{2}({\mathbf 1}_{32}\pm \Gamma_{11})$ to account for the chirality of the fermions. We assume that $U$ is a smooth function of $\eta$, and that for small values of the deformation parameter it can be expanded as \begin{equation} U=\mathbf{1}_{64}+\eta^r u+o(\eta^{r})\, . \end{equation} Here $\eta^r$ is the first non-trivial order of the contribution, and $o(\eta^{r})$ denotes subleading terms. At order $\eta^r$ we get a system of linear equations for $u$ \begin{equation} \begin{aligned}\label{stringent2} \Pi_-\Big(\bar{u} \,(\mathbf{1}_2 \otimes \Gamma_m) &+(\mathbf{1}_2 \otimes \Gamma_m)u\Big)\Pi_+=0\,, \\ \Pi_-\Big(\bar{u} \,( \sigma_3 \otimes \Gamma_m) &+ ( \sigma_3 \otimes \Gamma_m)u\Big)\Pi_+=0\,. \end{aligned} \end{equation} This system appears to have no solution which acts non-trivially on chiral fermions. Thus, non-trivial field redefinitions of the type we considered here do not exist. Whether equation \eqref{eq:stringent1} has solutions which do not depend on $\eta$ is unclear to us. Finally, let us mention that similar considerations on field redefinitions can be made by considering the kappa-symmetry transformations of the bosonic and fermionic coordinates, and of the worldsheet metric. Doing so we reach the same conclusion found here. \subsection{Mirror model and Maldacena-Russo background} In this section we take special limits of our results, to make contact with other findings that have appeared in the literature. We first study a particular $\varkappa \to \infty$ limit of {(AdS$_5\times$S$^5)_\eta$}, as used in~\cite{Arutyunov:2014cra}. There it was shown that this limit, implemented on the spacetime metric of the deformed model, yields the metric for the mirror model of {AdS$_5\times$S$^5$}. 
It was then shown that it is possible to complete this metric to a type IIB supergravity background, by supplementing it with a dilaton and a five-form flux. We first have to rescale the bosonic coordinates as~\cite{Arutyunov:2014cra} \begin{equation} t \to \frac{t}{\varkappa}, \qquad \rho \to \frac{\rho}{\varkappa}, \qquad \phi \to \frac{\phi}{\varkappa}, \qquad r \to \frac{r}{\varkappa}, \end{equation} and then send $\varkappa\to \infty$. The vielbein components $e^m_\alpha$ are of order $\mathcal{O}(1/\varkappa)$ in this limit. We get the following components for $e^m_M$ \begin{equation} \begin{aligned} e^0_t &= +\frac{1}{\sqrt{1-\rho^2}}, \quad e^1_{\psi_2} &= - \rho \sin \zeta, \quad e^2_{\psi_1} &= - \rho \cos \zeta, \quad e^3_{\zeta} &= - \rho , \quad e^4_{\rho} &= - \frac{1}{\sqrt{1-\rho^2}}, \\ e^5_{\phi} &= + \frac{1}{\sqrt{1+r^2}}, \quad e^6_{\phi_2} &= - r \sin \xi, \quad e^7_{\phi_1} &= - r \cos \xi, \quad e^8_{\xi} &= -r, \quad e^9_r &= - \frac{1}{\sqrt{1+r^2}}, \end{aligned} \end{equation} where we omit powers of $\varkappa$. This is compatible with the metric of the mirror background~\cite{Arutyunov:2014cra}. The $B$-field vanishes in this limit. For the RR fields we have to keep those components---when we specify tangent indices---that are of order $\mathcal{O}(\varkappa)$ in this limit. This compensates the power of $1/\varkappa$ coming from the vielbein that multiplies them in the definition~\eqref{eq:deform-D-op} of $\widetilde{D}_\alpha^{IJ}$. The components that survive are \begin{equation} \begin{aligned} e^\varphi \, F_{123} = - \frac{4 \rho}{\sqrt{1-\rho^2}\sqrt{1+r^2}}\, , \qquad e^\varphi \, F_{678} = - \frac{4 r}{\sqrt{1-\rho^2}\sqrt{1+r^2}}\, , \\ e^\varphi \, F_{01234} = +\frac{4 }{\sqrt{1-\rho^2}\sqrt{1+r^2}}\, , \qquad e^\varphi \, F_{04678} = - \frac{4 \rho r}{\sqrt{1-\rho^2}\sqrt{1+r^2}}\, . \end{aligned} \end{equation} Here we are omitting powers of $\varkappa$. 
For $F^{(5)}$ one has to take into account also the components that are dual to the ones written above, using~\eqref{eq:sel-duality-F5-curved}. This result does not match~\cite{Arutyunov:2014cra}, where the proposed background has vanishing $F^{(3)}$ and an $F^{(5)}$ along different directions. Checking~\eqref{eq-combination-sugra-not-satisf}, we find that the RR fields obtained in this limit are again not compatible with the equations of motion of supergravity. \bigskip Studying a particular $\varkappa\to 0$ limit of the results that we have obtained, we can show that we reproduce the Maldacena-Russo (MR) background~\cite{Maldacena:1999mh}. This background was constructed with the motivation of studying the large-$N$ limit of non-commutative gauge theories. We will show agreement with our results at the level of both the NSNS and the RR sector. We first rescale the coordinates parameterising the deformed AdS space as \begin{equation} t\to \sqrt{\varkappa} \, t\,, \quad \psi_2\to \frac{\sqrt{\varkappa}}{\sin \zeta_0} \, \psi_2\,, \quad \psi_1\to \frac{\sqrt{\varkappa}}{\cos \zeta_0} \, \psi_1\,, \quad \zeta\to\zeta_0+ \sqrt{\varkappa} \, \zeta\,, \quad \rho\to \frac{\rho}{\sqrt{\varkappa}}\,, \end{equation} and then send $\varkappa\to 0$. Because we have not rescaled the coordinates on the deformed S$^5$, the corresponding part of the metric just reduces to the usual metric on S$^5$, and the components of the $B$-field in those directions vanish. On the other hand, for the part originating from the deformed AdS$_5$ we get a result different from the undeformed case. 
In this limit the complete metric and $B$-field are \begin{equation}\label{eq:Malda-Russo-sph-coord} \begin{aligned} {\rm d}s^2_{(\text{MR})}&=\rho^2\left(-{\rm d}t^2+ {\rm d}\psi_2^2\right) + \frac{\rho^2}{1+\rho^4\sin^2\zeta_0}\left( {\rm d}\psi_1^2+ {\rm d}\zeta^2\right) +\frac{{\rm d}\rho^2}{\rho^2} +{\rm d}s^2_{\text{S}^5}\,, \\ {B}_{(\text{MR})} &= + \frac{\rho^4 \sin \zeta_0}{1+ \rho^4\sin^2 \zeta_0} {\rm d}\psi_1\wedge{\rm d}\zeta . \end{aligned} \end{equation} These equations should be compared with (2.7) of~\cite{Maldacena:1999mh}, using the following identification of the coordinates and the parameters on the two sides \begin{center} \begin{tabular}{r|ccccc|c} Here & $t$ & $\psi_2$ & $\psi_1$ & $\zeta$ & $\rho$ & $\sin\zeta_0$ \\ \hline There & $\tilde{x}_0$ & $\tilde{x}_1$ & $\tilde{x}_2$ & $\tilde{x}_3$ & $u$ & $a^2$ \end{tabular} \end{center} When we repeat the same limiting procedure on the components of the RR fields found for the $\eta$-deformation of {AdS$_5\times$S$^5$}---see~\eqref{eq:flat-comp-F1},~\eqref{eq:flat-comp-F3} and~\eqref{eq:flat-comp-F5}---we find that the axion vanishes, and only one component of $F^{(3)}$ and one of $F^{(5)}$ (plus its dual) survive. These components---when we specify tangent indices---multiplied by the exponential of the dilaton are \begin{equation} e^{\varphi} F_{014} = \frac{4\rho^2\sin \zeta_0}{\sqrt{1+ \rho^4\sin^2 \zeta_0}}\,, \qquad e^{\varphi} F_{01234} = \frac{4}{\sqrt{1+ \rho^4\sin^2 \zeta_0}}\,. 
\end{equation} If we take the dilaton to be equal to \begin{equation} \varphi=\varphi_0-\frac{1}{2}\log ( 1+ \rho^4\sin^2 \zeta_0)\,, \end{equation} where $\varphi_0$ is a constant, we then find that the non-vanishing components for the RR fields---in tangent and curved indices---are \begin{equation} \begin{aligned} & F_{014} = e^{-\varphi_0}\, 4\rho^2\sin \zeta_0\,, \qquad && F_{01234} = e^{-\varphi_0}\,4\,, \\ & F_{t\psi_2\rho} = e^{-\varphi_0}\, 4\rho^3\sin \zeta_0\,, \qquad && F_{t\psi_2 \psi_1\zeta\rho} = e^{-\varphi_0}\, \frac{4\rho^3}{1+\rho^4\sin^2\zeta_0}\,. \end{aligned} \end{equation} Also the results that we obtain for the dilaton and the RR fields are in perfect agreement with (2.7) of~\cite{Maldacena:1999mh}. It is very interesting that despite the incompatibility with type IIB supergravity for generic values of the deformation parameter, there exists a certain limit---different from the undeformed {AdS$_5\times$S$^5$}---where this compatibility is restored. \subsection{Concluding remarks} In~\cite{Arutyunov:2015qva} the action that we have derived here was used to compute the tree-level scattering elements for excitations on the worldsheet. In addition to the results collected in Section~\ref{sec:pert-bos-S-mat-eta-ads5s5}---where only interactions among bosons were considered--- it was then possible to derive also the scattering elements that involve two fermions in the asymptotic states of $2\to 2$ processes\footnote{The terms Fermion+Fermion$\to$Fermion+Fermion are missing in the computation, since their derivation requires the Lagrangian quartic in fermions.}. The derivation of~\cite{Arutyunov:2015qva} shows that the T-matrix obtained using the methods reviewed in Section~\ref{sec:decomp-limit} cannot be factorised into two copies as in~\eqref{eq:factoris-rule-T-mat-AdS5}, see Section~\ref{sec:pert-bos-S-mat-eta-ads5s5} for the discussion in the bosonic sector. 
However, there exists a unitary transformation of the basis of two-particle states thanks to which the T-matrix can be factorised into two copies, as desired. This change of basis is of a particular type, as it is not one-particle factorisable. Each of the two copies that compose the T-matrix matches the large-tension expansion of the all-loop S-matrix invariant under $\alg{psu}_q(2|2)$, and it naturally satisfies the classical Yang-Baxter equation. The fact that we can prove compatibility with the $q$-deformed S-matrix is a nice further check of our results. On the other hand, it is not clear why this compatibility is not immediate, given that a change of the two-particle basis is needed. One possible explanation may lie in the choice of the R-operator that is used to define the deformation. Our current choice~\eqref{Rop} corresponds to the standard Dynkin diagram of $\alg{psu}(2,2|4)$. However, it is believed that only the ``all-loop'' Dynkin diagram can be used to write the Bethe-Yang equations for the undeformed model. It might be that defining the deformation through an R-operator which is related to the ``all-loop'' Dynkin diagram would automatically give the T-matrix in the factorised form, with no need of changing the basis of two-particle states. It is natural to wonder whether applying this redefinition---which is needed to get a factorised T-matrix, and is in fact $\eta$-independent and a symmetry of the undeformed model---could also cure the problem of compatibility with type IIB\footnote{This transformation being non-local, it would first be important to check that it does not produce non-local terms in the action.}. It is of course possible that no cure exists and that the $\eta$-model just does not correspond to a solution of supergravity, as may be suggested by the findings of~\cite{Hoare:2015wia}. 
There it was shown that the metric and the $B$-field of a T-dual version of the $\eta$-model---where abelian T-duality is implemented along all six isometric directions---could be completed to a set of background fields\footnote{The corresponding fluxes are purely imaginary, a fact which is attributed to having done a T-duality along the time direction.} satisfying the equations of motion of type IIB. The peculiarity of this solution is that the dilaton of the T-dual model has a linear dependence on the isometric directions, which forbids undoing the T-dualities to get back to a background for the $\eta$-model. However, it is interesting that this dependence is compensated by the RR fields in such a way that the combination $e^{\varphi}F$ still preserves the isometries. The classical Green-Schwarz action is then still invariant under shifts of all these coordinates, and one can study what happens under the standard T-duality transformations when ignoring the issue with the dilaton. The resulting background fields are precisely the ones of the $\eta$-model derived in~\cite{Arutyunov:2015qva} and presented here. Thanks to this formal T-duality relation to a supergravity background, it was then suggested that---while not being Weyl invariant---the $\eta$-model should be UV finite at one loop. This interpretation was further investigated in~\cite{Arutyunov:2015mqj}, where it was shown that the background fields extracted from the $\eta$-model satisfy a set of second-order equations which should follow from scale invariance of the $\sigma$-model. The $\eta$-model seems to be special since it satisfies also a set of first-order equations---a modification of the ones of type IIB supergravity---which should follow from kappa-symmetry. Let us mention that the methods applied here were also used to study the $\lambda$-deformation of~\cite{Hollowood:2014qma} and deformations based on R-matrices satisfying the classical Yang-Baxter equation~\cite{Matsumoto:2014cja}. 
The $\lambda$-deformation of AdS$_2\times$S$^2$ was considered in~\cite{Borsato:2016zcf}, where the RR fields were extracted by looking at the kappa-symmetry variations. In that case the result is found to be a solution of type IIB, suggesting that the $\lambda$-deformation does not break the Weyl invariance of the original $\sigma$-model. A case-by-case study for some Yang-Baxter deformations of AdS$_5\times$S$^5$ was carried out in~\cite{Kyono:2016jqy}, providing examples where compatibility with IIB is either realised or not depending on the choice of the R-matrix. It would be interesting to investigate these deformed models more generally with the goal of identifying the properties that ensure compatibility with supergravity. \chapter{{(AdS$_5\times$S$^5)_\eta$} at order $\theta^2$}\label{ch:qAdS5Fer} In this chapter we push the computation of the Lagrangian for the deformed model up to quadratic order in fermions. The motivation for doing this is to discover the couplings to the unknown RR fields and the dilaton, which should complete the deformed metric and $B$-field to a full type IIB background. We start in Section~\ref{sec:algebra-basis} by presenting a convenient realisation of $\alg{psu}(2,2|4)$ and in~\ref{sec:psu224-current} by computing the current for this algebra. Using the results collected in Section~\ref{sec:inverse-op} regarding the inverse operator that is used to define the deformed action, we compute the Lagrangian in Section~\ref{sec:-eta-def-lagr-quad-theta} and we show how to recast it in the standard Green-Schwarz form. In~\ref{sec:kappa-symm-eta-def-quad-theta} we compute the kappa-variations of the bosonic and fermionic fields, and of the worldsheet metric, to confirm the results for the background fields obtained in the previous section. Section~\ref{sec:discuss-eta-def-background} is devoted to a discussion of the results that we have found. 
We show that the background fields that we have derived are not compatible with the equations of motion of type IIB supergravity. We comment on this result and on particular limits of the $\sigma$-model action.
\input{Chapters/psu224algebra.tex}
\input{Chapters/psu224current.tex}
\input{Chapters/inverseOp.tex}
\input{Chapters/etadefLagrangian.tex}
\input{Chapters/etadefKappaSymm.tex}
\input{Chapters/DiscussionetaAdS5.tex}
\section{Spin-chain description} The spin-chain that we construct in this section shares the $\alg{psu}(1,1|2)^2$ symmetry of the massive sector of strings in {AdS$_3\times$S$^3\times$T$^4$}~\cite{Babichenko:2009dk}. We start by presenting this superalgebra. \subsection{$\alg{psu}(1,1|2)^2$ algebra}\label{sec:psu112-algebra} The bosonic subalgebra of $\alg{psu}(1,1|2)$ is $\alg{su}(1,1)\oplus\alg{su}(2)$. In what follows it is more convenient to change the real form of the non-compact subalgebra and consider instead $\alg{sl}(2)\oplus\alg{su}(2)$. We denote the corresponding generators by $\gen{S}_0,\gen{S}_\pm$ and $\gen{L}_5,\gen{L}_\pm$ respectively. In addition to those we also have eight supercharges, which we denote with the help of three indices $\gen{Q}_{a\alpha\dot{\alpha}}$, each of them taking values $\pm$. 
The commutation relations of $\alg{psu}(1,1|2)$ then read as \begin{equation} \begin{aligned} \comm{\gen{S}_0}{\gen{S}_\pm} &= \pm \gen{S}_\pm ,\qquad & \comm{\gen{S}_+}{\gen{S}_-} &= 2 \gen{S}_0 , \\ \comm{\gen{L}_5}{\gen{L}_\pm} &= \pm \gen{L}_\pm , \qquad & \comm{\gen{L}_+}{\gen{L}_-} &= 2 \gen{L}_5 , \end{aligned} \end{equation} \begin{equation} \begin{aligned} \comm{\gen{S}_0}{\gen{Q}_{\pm\beta\dot{\beta}}} &= \pm\frac{1}{2} \gen{Q}_{\pm\beta\dot{\beta}} , \qquad & \comm{\gen{S}_\pm}{\gen{Q}_{\mp\beta\dot{\beta}}} &= \gen{Q}_{\pm\beta\dot{\beta}} , \\ \comm{\gen{L}_5}{\gen{Q}_{b\pm\dot{\beta}}} &= \pm\frac{1}{2} \gen{Q}_{b\pm\dot{\beta}} ,\qquad & \comm{\gen{L}_\pm}{\gen{Q}_{b\mp\dot{\beta}}} &= \gen{Q}_{b\pm\dot{\beta}} , \\ \end{aligned} \end{equation} \begin{equation} \begin{aligned} \acomm{\gen{Q}_{\pm++}}{\gen{Q}_{\pm--}} &= \pm \gen{S}_{\pm} , \qquad & \acomm{\gen{Q}_{\pm+-}}{\gen{Q}_{\pm-+}} &= \mp \gen{S}_{\pm} , \\ \acomm{\gen{Q}_{+\pm+}}{\gen{Q}_{-\pm-}} &= \mp \gen{L}_{\pm} , \qquad & \acomm{\gen{Q}_{+\pm-}}{\gen{Q}_{-\pm+}} &= \pm \gen{L}_{\pm} , \\ \acomm{\gen{Q}_{+\pm\pm}}{\gen{Q}_{-\mp\mp}} &= - \gen{S}_0 \pm \gen{L}_5 , \qquad & \acomm{\gen{Q}_{+\pm\mp}}{\gen{Q}_{-\mp\pm}} &= + \gen{S}_0 \mp \gen{L}_5 . \end{aligned} \end{equation} As it is clear from the equations above, the first index of a supercharge spans an $\alg{sl}(2)$ doublet, while the second index an $\alg{su}(2)$ one. The last index is not associated to an $\alg{su}(2)$ doublet\footnote{The third index on a supercharge spans an $\alg{su}(2)$ doublet in the case of the $\alg{d}(2,1,\alpha)$ superalgebra, of which $\alg{psu}(1,1|2)$ can be seen as a particular contraction---a proper $\alpha \to 1$ or $\alpha \to 0$ limit. For generic $\alpha$ the generator $\gen{R}_8$ is the Cartan element of the additional $\alg{su}(2)$.}. 
The superalgebra admits a $\alg{u}(1)$ automorphism generated by the charge $\gen{R}_8$, that acts on the supercharges as \begin{equation} \comm{\gen{R}_8}{\gen{Q}_{b\beta\pm}} = \pm \frac{1}{2} \gen{Q}_{b\beta\pm} , \end{equation} and that commutes with all bosonic generators. Let us present the possible choices of Serre-Chevalley basis. For superalgebras the inequivalent possibilities are associated to different Dynkin diagrams. Each of them corresponds to the choice of Cartan generators $\gen{h}_i$ and the corresponding raising and lowering operators $\gen{e}_i,\gen{f}_i$. The index $i$ runs from $1$ to the rank of the superalgebra, which is $3$ in the case of $\alg{psu}(1,1|2)$. In this basis the commutation relations acquire the form\footnote{If both $\gen{e}_i$ and $\gen{f}_j$ are fermionic, then the commutator $[,]$ should be replaced by the anti-commutator $\{,\}$.} \begin{equation} \comm{\gen{h}_i}{\gen{h}_j} = 0 , \qquad \comm{\gen{e}_i}{\gen{f}_j} = \delta_{ij} \gen{h}_j , \qquad \comm{\gen{h}_i}{\gen{e}_j} = + A_{ij} \gen{e}_j , \qquad \comm{\gen{h}_i}{\gen{f}_j} = - A_{ij} \gen{f}_j , \end{equation} where $A_{ij}$ is the Cartan matrix. The superalgebra $\alg{psu}(1,1|2)$ admits three inequivalent gradings, see Figure~\ref{fig:dynkin-su22} for the corresponding Dynkin diagrams. 
\begin{figure} \centering \subfloat[\label{fig:dynkin-su22-su}]{ \begin{tikzpicture} [ thick, node/.style={shape=circle,draw,thick,inner sep=0pt,minimum size=5mm} ] \useasboundingbox (-1.5cm,-1cm) rectangle (1.5cm,1cm); \node (v1) at (-1.1cm, 0cm) [node] {}; \node (v2) at ( 0.0cm, 0cm) [node] {}; \node (v3) at ( 1.1cm, 0cm) [node] {}; \draw (v1.south west) -- (v1.north east); \draw (v1.north west) -- (v1.south east); \draw (v3.south west) -- (v3.north east); \draw (v3.north west) -- (v3.south east); \draw (v1) -- (v2); \draw (v2) -- (v3); \node at (v2.south) [anchor=north] {$+1$}; \end{tikzpicture} } \hspace{1cm} \subfloat[\label{fig:dynkin-su22-sl}]{ \begin{tikzpicture} [ thick, node/.style={shape=circle,draw,thick,inner sep=0pt,minimum size=5mm} ] \useasboundingbox (-1.5cm,-1cm) rectangle (1.5cm,1cm); \node (v1) at (-1.1cm, 0cm) [node] {}; \node (v2) at ( 0.0cm, 0cm) [node] {}; \node (v3) at ( 1.1cm, 0cm) [node] {}; \draw (v1.south west) -- (v1.north east); \draw (v1.north west) -- (v1.south east); \draw (v3.south west) -- (v3.north east); \draw (v3.north west) -- (v3.south east); \draw (v1) -- (v2); \draw (v2) -- (v3); \node at (v2.south) [anchor=north] {$-1$}; \end{tikzpicture} } \hspace{1cm} \subfloat[\label{fig:dynkin-su22-fff}]{ \begin{tikzpicture} [ thick, node/.style={shape=circle,draw,thick,inner sep=0pt,minimum size=5mm} ] \useasboundingbox (-1.5cm,-1cm) rectangle (1.5cm,1cm); \node (v1) at (-1.1cm, 0cm) [node] {}; \node (v2) at ( 0.0cm, 0cm) [node] {}; \node (v3) at ( 1.1cm, 0cm) [node] {}; \draw (v1.south west) -- (v1.north east); \draw (v1.north west) -- (v1.south east); \draw (v2.south west) -- (v2.north east); \draw (v2.north west) -- (v2.south east); \draw (v3.south west) -- (v3.north east); \draw (v3.north west) -- (v3.south east); \draw (v1) -- (v2); \draw (v2) -- (v3); \node at (v2.south) [anchor=north] {$\pm 1$}; \end{tikzpicture} } \caption{Three Dynkin diagrams for $\alg{psu}(1,1|2)$. 
A cross denotes a fermionic root.} \label{fig:dynkin-su22} \end{figure} The Dynkin diagram in Figure~\ref{fig:dynkin-su22}~\subref{fig:dynkin-su22-su} corresponds to the $\alg{su}(2)$ grading. The choice for the Cartan generators and the simple roots is \begin{equation}\label{eq:SC-basis-su2} \begin{aligned} \gen{h}_1 &= -\gen{S}_0 - \gen{L}_5 , \qquad & \gen{e}_1 &= +\gen{Q}_{+--} , \qquad & \gen{f}_1 &= +\gen{Q}_{-++} , \\ \gen{h}_2 &= +2\gen{L}_5 , \qquad & \gen{e}_2 &= +\gen{L}_+ , \qquad & \gen{f}_2 &= +\gen{L}_- , \\ \gen{h}_3 &= -\gen{S}_0 - \gen{L}_5 , \qquad & \gen{e}_3 &= +\gen{Q}_{+-+} , \qquad & \gen{f}_3 &= -\gen{Q}_{-+-} . \end{aligned} \end{equation} This leads to the Cartan matrix \begin{equation}\label{eq:Cartan-su2} \begin{pmatrix} 0 & -1 & 0 \\ -1 & +2 & -1 \\ 0 & -1 & 0 \end{pmatrix}. \end{equation} In Figure~\ref{fig:dynkin-su22}~\subref{fig:dynkin-su22-sl} we find the Dynkin diagram in the $\alg{sl}(2)$ grading. For Cartan generators and simple roots we take \begin{equation}\label{eq:SC-basis-sl2} \begin{aligned} \hat{\gen{h}}_1 &= +\gen{S}_0 + \gen{L}_5 , \qquad & \hat{\gen{e}}_1 &= -\gen{Q}_{-++} , \qquad & \hat{\gen{f}}_1 &= +\gen{Q}_{+--} , \\ \hat{\gen{h}}_2 &= -2\gen{S}_0 , \qquad & \hat{\gen{e}}_2 &= +\gen{S}_+ , \qquad & \hat{\gen{f}}_2 &= -\gen{S}_- , \\ \hat{\gen{h}}_3 &= +\gen{S}_0 + \gen{L}_5 , \qquad & \hat{\gen{e}}_3 &= -\gen{Q}_{-+-} , \qquad & \hat{\gen{f}}_3 &= -\gen{Q}_{+-+} , \end{aligned} \end{equation} with the Cartan matrix \begin{equation}\label{eq:Cartan-sl2} \begin{pmatrix} 0 & +1 & 0 \\ +1 & -2 & +1 \\ 0 & +1 & 0 \end{pmatrix}. \end{equation} These two gradings will be the most relevant ones for us. For completeness we also write down the choice corresponding to Figure~\ref{fig:dynkin-su22}~\subref{fig:dynkin-su22-fff}, where all simple roots are fermionic.
The choice for the raising operators $\gen{e}_i$ can be either \begin{equation} \gen{Q}_{+-+}, \quad \gen{Q}_{++-}, \quad \gen{Q}_{-++} \,, \qquad \text{or} \qquad \gen{Q}_{-+-}, \quad \gen{Q}_{--+}, \quad \gen{Q}_{+--} \,.\, \end{equation} This leads to the Cartan matrices \begin{equation}\label{eq:Cartan-ferm} \begin{pmatrix} 0 & +1 & 0 \\ +1 & 0 & -1 \\ 0 & -1 & 0 \end{pmatrix} , \qquad \text{and} \qquad \begin{pmatrix} 0 & -1 & 0 \\ -1 & 0 & +1 \\ 0 & +1 & 0 \end{pmatrix}. \end{equation} \subsection{Spin-chain representation} To construct a spin-chain that transforms under \emph{one} copy of $\alg{psu}(1,1|2)$, we put at each site an infinite dimensional representation denoted by $(-\tfrac{1}{2};\tfrac{1}{2})$~\cite{OhlssonSax:2011ms,Borsato:2013qpa}. This consists of the bosonic $\alg{su}(2)$ doublet $\phi^{(n)}_{\pm}$, where the indices $\pm$ label the two $\alg{su}(2)$ states, and two $\alg{su}(2)$ singlets $\psi^{(n)}_{\pm}$. The index $n$ indicates the $\alg{sl}(2)$ quantum number. Explicitly, the action of the bosonic generators is \begin{equation}\label{eq:su112-bosonicgen-representation} \begin{gathered} \gen{L}_5 \ket{\phi_{\pm}^{(n)}} = \pm \frac{1}{2} \ket{\phi_{\pm}^{(n)}} , \qquad \gen{L}_+ \ket{\phi_{-}^{(n)}} = \ket{\phi_{+}^{(n)}} , \qquad \gen{L}_- \ket{\phi_{+}^{(n)}} = \ket{\phi_-^{(n)}} , \\ \begin{aligned} \gen{S}_0 \ket{\phi_{\beta}^{(n)}} &= - \left( \tfrac{1}{2} + n \right) \ket{\phi_{\beta}^{(n)}} , & \gen{S}_0 \ket{\psi_{\dot\beta}^{(n)}} &= - \left( 1 + n \right) \ket{\psi_{\dot\beta}^{(n)}} , \\ \gen{S}_+ \ket{\phi_{\beta}^{(n)}} &= +n \ket{\phi_{\beta}^{(n-1)}} , & \gen{S}_+ \ket{\psi_{\dot\beta}^{(n)}} &= +\sqrt{(n + 1)n} \ket{\psi_{\dot\beta}^{(n-1)}} , \\ \gen{S}_- \ket{\phi_{\beta}^{(n)}} &= -(n+1)\ket{\phi_{\beta}^{(n+1)}} , & \gen{S}_- \ket{\psi_{\dot\beta}^{(n)}} &= -\sqrt{(n + 2) (n + 1)} \ket{\psi_{\dot\beta}^{(n+1)}} . 
\end{aligned} \end{gathered} \end{equation} The supercharges relate the bosons and the fermions as \begin{equation}\label{eq:su112-fermionicgen-representation} \begin{aligned} \gen{Q}_{-\pm\dot\beta} \ket{\phi_{\mp}^{(n)}} &= \pm \sqrt{n+1} \ket{\psi_{\dot\beta}^{(n)}} , & \gen{Q}_{+\pm\dot\beta} \ket{\phi_{\mp}^{(n)}} &= \pm \sqrt{n} \ket{\psi_{\dot\beta}^{(n-1)}} , \\ \gen{Q}_{-\beta\pm} \ket{\psi_{\mp}^{(n)}} &= \mp \sqrt{n+1} \ket{\phi_{\beta}^{(n+1)}} , & \gen{Q}_{+\beta\pm} \ket{\psi_{\mp}^{(n)}} &= \mp \sqrt{n+1} \ket{\phi_{\beta}^{(n)}} . \end{aligned} \end{equation} In the $\alg{su}(2)$ grading the highest weight state is $\phi_{+}^{(0)}$, since it is annihilated by the raising operators $\gen{L}_+,\gen{Q}_{+--},\gen{Q}_{+-+}$. Moreover, one can check that the supercharges $\gen{Q}_{-+\pm}$ also annihilate this state. For this reason the representation is \emph{short}, and one has the identity \begin{equation} \acomm{\gen{Q}_{+-\mp}}{\gen{Q}_{-+\pm}} \ket{\phi^{(0)}_+} = \mp (\gen{S}_0 + \gen{L}_5 ) \ket{\phi^{(0)}_+} = 0. \end{equation} When constructing a spin-chain of length $L$, we need to consider the $L$-fold tensor product of the representation presented above. Since the symmetry algebra consists of two copies of $\alg{psu}(1,1|2)$ labelled by L and R, we actually need to take the tensor product of two such spin-chains. In particular, we define the ground state of the $\alg{psu}(1,1|2)_{\mbox{\tiny L}}\oplus \alg{psu}(1,1|2)_{\mbox{\tiny R}}$ spin-chain as \begin{equation} \ket{\mathbf{0}}_L = \Ket{(\phi^{(0)}_+)^L} \otimes \Ket{(\phi^{(0)}_+)^L}. \end{equation} This is the highest weight state of the representation $(-\tfrac{L}{2};\tfrac{L}{2}) \otimes (-\tfrac{L}{2};\tfrac{L}{2})$, where by definition charges with label L act on the first $L$-fold product, while charges with label R act on the second one. The shortening condition is also inherited, giving a total of eight supercharges preserving the ground state.
We denote them as \begin{equation} \begin{aligned} \gen{Q}_{\mbox{\tiny L}}^{\ 1} = +\gen{Q}_{-++}^{\mbox{\tiny L}} , \quad \gen{Q}_{\mbox{\tiny L}}^{\ 2} = -\gen{Q}_{-+-}^{\mbox{\tiny L}} , \quad \overline{\gen{Q}}_{\mbox{\tiny L} 1} = +\gen{Q}_{+--}^{\mbox{\tiny L}} , \quad \overline{\gen{Q}}_{\mbox{\tiny L} 2} = +\gen{Q}_{+-+}^{\mbox{\tiny L}} , \\ \gen{Q}_{\mbox{\tiny R} 1} = +\gen{Q}_{-++}^{\mbox{\tiny R}} , \quad \gen{Q}_{\mbox{\tiny R} 2} = -\gen{Q}_{-+-}^{\mbox{\tiny R}} , \quad \overline{\gen{Q}}_{\mbox{\tiny R}}^{\ 1} = +\gen{Q}_{+--}^{\mbox{\tiny R}} , \quad \overline{\gen{Q}}_{\mbox{\tiny R}}^{\ 2} = +\gen{Q}_{+-+}^{\mbox{\tiny R}} , \end{aligned} \end{equation} where we use the same notation as for the charges derived from the string theory. The ground state is preserved also by the central charges $\gen{H}_{\mbox{\tiny L}},\gen{H}_{\mbox{\tiny R}}$ defined as \begin{equation} \gen{H}_{\mbox{\tiny L},\mbox{\tiny R}} = -\gen{S}_0^{\mbox{\tiny L},\mbox{\tiny R}}- \gen{L}_5^{\mbox{\tiny L},\mbox{\tiny R}}. \end{equation} Using the $\alg{psu}(1,1|2)$ commutation relations one can check that these generators close into four copies of $\alg{su}(1|1)^2$ \begin{equation}\label{eq:su11-su11-algebra} \acomm{\gen{Q}_{\mbox{\tiny I}}^{\ \dot{a}}}{\overline{\gen{Q}}_{\mbox{\tiny J} \dot{b}}} = \delta^{\dot{a}}_{\ \dot{b}} \delta_{\mbox{\tiny I}\mbox{\tiny J}} \gen{H}_{\mbox{\tiny I}}\,, \qquad\quad \mbox{\tiny I},\mbox{\tiny J}=\mbox{\tiny L},\mbox{\tiny R}. \end{equation} It is clear that this algebra coincides with the one found from the string theory~\eqref{eq:cealgebra}, once the central extension is turned off $\gen{C}=\overline{\gen{C}}=0$ and we identify \begin{equation} \gen{H} = \gen{H}_{\mbox{\tiny L}} + \gen{H}_{\mbox{\tiny R}} , \qquad \gen{M} = \gen{H}_{\mbox{\tiny L}} - \gen{H}_{\mbox{\tiny R}} . \end{equation} To introduce excited states of the spin-chain we just need to replace the highest weight $\phi_+^{(0)}$ with another state of the same module. 
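The closure into $\alg{su}(1|1)$-type subalgebras can be cross-checked directly against the representation~\eqref{eq:su112-fermionicgen-representation}. The sketch below (our own encoding, with one-site states labelled by tuples and a symbolic $\alg{sl}(2)$ quantum number $n$) verifies $\acomm{\gen{Q}_{+--}}{\gen{Q}_{-++}}=-(\gen{S}_0+\gen{L}_5)$ on every one-site state, together with the shortening condition on $\phi_+^{(0)}$:

```python
import sympy as sp

n = sp.symbols('n', positive=True)

# one-site states encoded as (kind, su(2) index, sl(2) label); the
# coefficients are copied from eq. (su112-fermionicgen-representation)
def Q_mpp(state):                      # Q_{-++}
    kind, s, m = state
    if kind == 'phi' and s == -1:
        return [(sp.sqrt(m + 1), ('psi', +1, m))]
    if kind == 'psi' and s == -1:
        return [(-sp.sqrt(m + 1), ('phi', +1, m + 1))]
    return []

def Q_pmm(state):                      # Q_{+--}
    kind, s, m = state
    if kind == 'phi' and s == +1:
        return [(-sp.sqrt(m), ('psi', -1, m - 1))]
    if kind == 'psi' and s == +1:
        return [(sp.sqrt(m + 1), ('phi', -1, m))]
    return []

def H(state):                          # eigenvalue of -(S_0 + L_5)
    kind, s, m = state
    return sp.Rational(1, 2) + m - s*sp.Rational(1, 2) if kind == 'phi' else 1 + m

def acomm_eig(state):
    # eigenvalue of the anticommutator {Q_{+--}, Q_{-++}} on a one-site state
    total = 0
    for Q1, Q2 in [(Q_mpp, Q_pmm), (Q_pmm, Q_mpp)]:
        for c1, st1 in Q1(state):
            for c2, st2 in Q2(st1):
                assert st2 == state    # the state is mapped back to itself
                total += c1*c2
    return sp.simplify(total)

for state in [('phi', +1, n), ('phi', -1, n), ('psi', +1, n), ('psi', -1, n)]:
    assert sp.simplify(acomm_eig(state) - H(state)) == 0

# shortening: the anticommutator annihilates the ground state phi_+^{(0)}
assert acomm_eig(('phi', +1, n)).subs(n, 0) == 0
```

The same bookkeeping applied to the other pairs of supercharges reproduces the remaining relations of~\eqref{eq:su11-su11-algebra} within one copy of the algebra.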
A nice way to organise excited states is to look at the eigenvalues of the charges $\gen{H}_{\mbox{\tiny L},\mbox{\tiny R}}$. Placing the state $\phi_+^{(n)}$ or $\phi_-^{(n)}$ on a site of one of the two copies of the spin-chain increases the eigenvalue of the corresponding Hamiltonian by $n$ and $n+1$, respectively. For the states $\psi_\pm^{(n)}$ it is increased by $n+1$. The lightest states---the ones increasing the Hamiltonian just by $1$---are then \begin{equation} \phi_-^{(0)} , \qquad \psi_+^{(0)} , \qquad \psi_-^{(0)} , \qquad \text{and} \qquad \phi_+^{(1)}. \end{equation} They transform in the familiar fundamental representation of $\alg{psu}(1|1)^4$---equivalently in the bi-fundamental representation of $\alg{su}(1|1)^2$, see \emph{e.g.} Figure~\ref{fig:massive}. We introduce a notation for the excited states of the spin-chain that makes the bi-fundamental nature of the representation clear. We write \begin{equation}\label{eq:bi-fund-fields} \Phi^{\mbox{\tiny I}++} = +\phi_-^{\mbox{\tiny I}(0)} , \qquad \Phi^{\mbox{\tiny I}--} = +\phi_+^{\mbox{\tiny I}(1)} , \qquad \Phi^{\mbox{\tiny I}-+} = +\psi_+^{\mbox{\tiny I}(0)} , \qquad \Phi^{\mbox{\tiny I}+-} = -\psi_-^{\mbox{\tiny I}(0)} , \end{equation} where we introduced a label I=L,R to distinguish the two types of excitations on the spin-chain. It is easy to check that these states transform under the two irreducible representations $\varrho_{\mbox{\tiny L}} \otimes \varrho_{\mbox{\tiny L}} $ and $\varrho_{\mbox{\tiny R}} \otimes \varrho_{\mbox{\tiny R}} $ of Section~\ref{sec:BiFundamentalRepresentationsT4}, once the supercharges are rewritten in terms of $\alg{su}(1|1)^2$ supercharges as in Section~\ref{sec:AlgebraTensorProductT4}. To match with Equation~\eqref{eq:su112-fermionicgen-representation} one should set $a=\bar{a}=1$ and $b=\bar{b}=0$ in the exact short representations of $\alg{psu}(1|1)^4_{\text{c.e.}}$.
We conclude that the lightest excitations in~\eqref{eq:su112-fermionicgen-representation} correspond to an on-shell representation at zero coupling. In the next section we discuss how the central extension and the coupling dependence are implemented in the spin-chain description. \subsection{Central extension} In order to make the Hamiltonian $\gen{H}$ dependent on the coupling constant and the momenta of the excitations, we need to deform the above representation. We give a momentum to a one-particle excitation by writing the plane-wave \begin{equation} \ket{\mathcal{X}_p} = \sum_{n=1}^{L} e^{ipn} \ket{ \mathbf{0}^{n-1} \mathcal{X} \mathbf{0}^{L-n} } , \end{equation} where $\mathbf{0}$ denotes a vacuum site and $L$ is the length of the spin-chain. When we consider multi-particle states we write a similar expression, where we always assume that the spin-chain length is very large $L\gg 1$ and the excitations are well separated. In other words we consider only \emph{asymptotic} states, and we make use of the S-matrix to relate in- and out-states. In order to get the central extension and find non-vanishing eigenvalues for the central charges $\gen{C},\overline{\gen{C}}$ as in Equation~\eqref{eq:cealgebra}, we have to allow for a non-trivial action of the Right supercharges on Left excitations and vice versa. We impose that this action produces \emph{length-changing} effects on the spin-chain, by removing or adding vacuum sites. The addition and the removal of vacuum sites is denoted by $\mathbf{0}^{\pm}$ and it produces new momentum-dependent phase factors once these symbols are commuted to the far left of the excitations \begin{equation} \ket{\mathcal{X}_p\, \mathbf{0}^\pm } = e^{\pm i\, p}\ket{\mathbf{0}^\pm \, \mathcal{X}_p}, \end{equation} as can be checked from the plane-wave Ansatz.
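The phase factor picked up by $\mathbf{0}^\pm$ can be made concrete with a small numerical sketch (our own bookkeeping, not part of the construction): appending a vacuum site on the right leaves the excitation at site $m$ with amplitude $e^{ipm}$, while a site added at the far left shifts the excitation to site $m+1$, so away from the chain ends the two states differ exactly by the phase $e^{+ip}$.

```python
import numpy as np

L, p = 40, 0.7                  # chain length and excitation momentum

# amplitude for "excitation at site m" of the length-(L+1) chain:
# |X_p 0^+>: the extra site sits at the right end, X stays at site m
right = {m: np.exp(1j*p*m) for m in range(1, L + 1)}
# |0^+ X_p>: the extra site sits at the far left, X is shifted to m+1
left = {m + 1: np.exp(1j*p*m) for m in range(1, L + 1)}

# away from the chain ends the two states differ by the phase e^{+ip}
bulk = [right[m]/left[m] for m in range(2, L + 1)]
assert np.allclose(bulk, np.exp(1j*p))
```

The mismatch at the two boundary sites is invisible in the asymptotic regime $L\gg 1$, consistently with considering only asymptotic states.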
Once we consider a spin-chain invariant under $\alg{su}(1|1)^2$, a way to centrally-extend it is to take~\cite{Borsato:2012ud} \begin{equation}\label{eq:chiral-rep} \begin{aligned} \gen{Q}_{\sL} \ket{\phi_p^{\mbox{\tiny L}}} &= a_p \ket{\psi_p^{\mbox{\tiny L}}} , \qquad & \gen{Q}_{\sL} \ket{\psi_p^{\mbox{\tiny L}}} &= 0 , \\ \overline{\gen{Q}}_{\sL} \ket{\phi_p^{\mbox{\tiny L}}} &= 0 , \qquad & \overline{\gen{Q}}_{\sL} \ket{\psi_p^{\mbox{\tiny L}}} &= \bar{a}_p \ket{\phi_p^{\mbox{\tiny L}}} , \\ \gen{Q}_{\sR} \ket{\phi_p^{\mbox{\tiny L}}} &= 0 , \qquad & \gen{Q}_{\sR} \ket{\psi_p^{\mbox{\tiny L}}} &= b_p \ket{\mathbf{0}^+\, \phi_p^{\mbox{\tiny L}}} , \\ \overline{\gen{Q}}_{\sR} \ket{\phi_p^{\mbox{\tiny L}}} &= \bar{b}_p \ket{\mathbf{0}^-\, \psi_p^{\mbox{\tiny L}}} , \qquad & \overline{\gen{Q}}_{\sR} \ket{\psi_p^{\mbox{\tiny L}}} &= 0 , \end{aligned} \end{equation} and similarly for the Right module, after we exchange the labels L and R. For the bi-fundamental representations of the spin-chain excitations that we want to consider we then get \begin{equation}\label{eq:representation-LL} \begin{aligned} \gen{Q}_{\sL}^{\ 1} \ket{\Phi_p^{\mbox{\tiny L}++}} &= +a_p \ket{\Phi_p^{\mbox{\tiny L}-+}} , \qquad & \gen{Q}_{\sL}^{\ 1} \ket{\Phi_p^{\mbox{\tiny L}+-}} &= +a_p \ket{\Phi_p^{\mbox{\tiny L}--}} , \\ \overline{\gen{Q}}_{\sL 1} \ket{\Phi_p^{\mbox{\tiny L}-+}} &= +b_p \ket{\Phi_p^{\mbox{\tiny L}++}} , & \overline{\gen{Q}}_{\sL 1} \ket{\Phi_p^{\mbox{\tiny L}--}} &= +b_p \ket{\Phi_p^{\mbox{\tiny L}+-}} , \\ \gen{Q}_{\sL}^{\ 2} \ket{\Phi_p^{\mbox{\tiny L}++}} &= +a_p \ket{\Phi_p^{\mbox{\tiny L}+-}} , & \gen{Q}_{\sL}^{\ 2} \ket{\Phi_p^{\mbox{\tiny L}-+}} &= -a_p \ket{\Phi_p^{\mbox{\tiny L}--}} , \\ \overline{\gen{Q}}_{\sL 2} \ket{\Phi_p^{\mbox{\tiny L}+-}} &= +b_p \ket{\Phi_p^{\mbox{\tiny L}++}} , & \overline{\gen{Q}}_{\sL 2} \ket{\Phi_p^{\mbox{\tiny L}--}} &= -b_p \ket{\Phi_p^{\mbox{\tiny L}-+}} , \end{aligned} \end{equation} \begin{equation} \begin{aligned} \gen{Q}_{\sR 1} 
\ket{\Phi_p^{\mbox{\tiny L}--}} &= +c_p \ket{\mathbf{0}^+\Phi_p^{\mbox{\tiny L}+-} } , \qquad & \gen{Q}_{\sR 1} \ket{\Phi_p^{\mbox{\tiny L}-+}} &= +c_p \ket{\mathbf{0}^+\Phi_p^{\mbox{\tiny L}++} } , \\ \overline{\gen{Q}}_{\sR}^{\ 1} \ket{\Phi_p^{\mbox{\tiny L}++}} &= +d_p \ket{\mathbf{0}^-\Phi_p^{\mbox{\tiny L}-+} } , \qquad & \overline{\gen{Q}}_{\sR}^{\ 1} \ket{\Phi_p^{\mbox{\tiny L}+-}} &= +d_p \ket{\mathbf{0}^-\Phi_p^{\mbox{\tiny L}--} } , \\ \gen{Q}_{\sR 2} \ket{\Phi_p^{\mbox{\tiny L}--}} &= -c_p \ket{\mathbf{0}^+\Phi_p^{\mbox{\tiny L}-+} } , \qquad & \gen{Q}_{\sR 2} \ket{\Phi_p^{\mbox{\tiny L}+-}} &= +c_p \ket{\mathbf{0}^+\Phi_p^{\mbox{\tiny L}++} } , \\ \overline{\gen{Q}}_{\sR}^{\ 2} \ket{\Phi_p^{\mbox{\tiny L}++}} &= +d_p \ket{\mathbf{0}^-\Phi_p^{\mbox{\tiny L}+-} } , \qquad & \overline{\gen{Q}}_{\sR}^{\ 2} \ket{\Phi_p^{\mbox{\tiny L}-+}} &= -d_p \ket{\mathbf{0}^-\Phi_p^{\mbox{\tiny L}--} } . \end{aligned} \end{equation} Using the commutation relations we find the actions of the central charges \begin{equation} \begin{aligned} \gen{H}_{\mbox{\tiny L}} \ket{\Phi_p^{\mbox{\tiny L}\pm\pm}} &= a_p \bar{a}_p \ket{\Phi_p^{\mbox{\tiny L}\pm\pm}} , \qquad & \gen{C} \ket{\Phi_p^{\mbox{\tiny L}\pm\pm}} &= a_p b_p \ket{\mathbf{0}^+\Phi_p^{\mbox{\tiny L}\pm\pm} } , \\ \gen{H}_{\mbox{\tiny R}} \ket{\Phi_p^{\mbox{\tiny L}\pm\pm}} &= b_p \bar{b}_p \ket{\Phi_p^{\mbox{\tiny L}\pm\pm}} , \qquad & \overline{\gen{C}} \ket{\Phi_p^{\mbox{\tiny L}\pm\pm}} &= \bar{a}_p \bar{b}_p \ket{\mathbf{0}^-\Phi_p^{\mbox{\tiny L}\pm\pm}} . \end{aligned} \end{equation} We stress that the length-changing effects are crucial if we want the central charges $\gen{C},\overline{\gen{C}}$ to have the correct eigenvalues also on multi-particle states. On two-particle states we find\footnote{Differently from the original papers~\cite{Borsato:2012ud,Borsato:2013qpa}, we modify the construction by moving the added or removed vacuum sites $\mathbf{0}^\pm$ to the left of the excitations.
This allows us to get the central charge $\gen{C}=\frac{ih}{2} (e^{i \gen{P}} - 1)$ that matches the one derived from the worldsheet computation, rather than $\gen{C}=\frac{ih}{2} (e^{-i \gen{P}} - 1)$. Moreover, the S-matrix for excitations on the string discussed in Section~\ref{sec:S-mat-T4} and the one for excitations on the spin-chain are then related in a simple way, see~\eqref{eq:S-mat-sp-ch-S-mat-str}.} \begin{equation} \begin{aligned} \gen{C}\ket{\Phi_p^{\mbox{\tiny L}\pm\pm}\Phi_q^{\mbox{\tiny L}\pm\pm}} &= a_p b_p \ket{\mathbf{0}^+\Phi_p^{\mbox{\tiny L}\pm\pm}\Phi_q^{\mbox{\tiny L}\pm\pm}}+a_q b_q \ket{\Phi_p^{\mbox{\tiny L}\pm\pm}\mathbf{0}^+\Phi_q^{\mbox{\tiny L}\pm\pm}} \\ &= (a_p b_p +e^{+i\, p}a_q b_q)\ket{\mathbf{0}^+\Phi_p^{\mbox{\tiny L}\pm\pm}\Phi_q^{\mbox{\tiny L}\pm\pm}}. \end{aligned} \end{equation} Setting \begin{equation} a_p b_p = \frac{ih}{2} (e^{ip} - 1) , \end{equation} we find the eigenvalue \begin{equation} a_p b_p +e^{+i\, p}a_q b_q = \frac{ih}{2} \left( e^{i(p+q)} - 1 \right) . \end{equation} One can repeat the discussion in Section~\ref{sec:RepresentationCoefficientsT4} to find the expressions of the representation coefficients that reproduce the exact eigenvalues of the central charges. Even though other choices are allowed, we prefer to keep the same parameterisation~\eqref{eq:expl-repr-coeff} used for the description of the string. In the spin-chain description one does not need to introduce the parameter $\xi$ for the coefficients in~\eqref{eq:expl-repr-coeff}. This was introduced in the string picture to get a non-local action of the supercharges and reproduce the correct eigenvalue of the central charges on multiparticle states. In the context of the dynamical spin-chain, the same role is played by the length-changing effects, allowing us to set $\xi=0$ also for multiparticle states.
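The telescoping behind this eigenvalue is immediate to verify symbolically; the sketch below also checks the analogous multi-particle statement, namely that the eigenvalue of $\gen{C}$ depends only on the total momentum:

```python
import sympy as sp

h, p, q = sp.symbols('h p q', real=True)
ab = lambda k: sp.I*h/2*(sp.exp(sp.I*k) - 1)      # a_p b_p at momentum k

# two-particle eigenvalue of C
two = ab(p) + sp.exp(sp.I*p)*ab(q)
assert sp.simplify(sp.expand(two - sp.I*h/2*(sp.exp(sp.I*(p + q)) - 1))) == 0

# the sum telescopes for any number of particles: the eigenvalue of C
# on a four-particle state depends only on the total momentum
ps = sp.symbols('p1:5', real=True)
total = sum(sp.exp(sp.I*sum(ps[:k]))*ab(ps[k]) for k in range(4))
assert sp.simplify(sp.expand(total - sp.I*h/2*(sp.exp(sp.I*sum(ps)) - 1))) == 0
```

In particular the eigenvalue vanishes on states satisfying the level-matching condition of zero total momentum, as required for physical states.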
When considering one-particle states we can identify the representation of the string with the one presented here for the spin-chain as follows \begin{equation}\label{eq:identif-string-sp-ch-states} \begin{aligned} \ket{\Phi^{\mbox{\tiny L}++}}=\ket{Y^{\mbox{\tiny L}}},\qquad\ket{\Phi^{\mbox{\tiny L}-+}}=\ket{\eta^{\mbox{\tiny L} 1}},\qquad\ket{\Phi^{\mbox{\tiny L}+-}}=\ket{\eta^{\mbox{\tiny L} 2}},\qquad\ket{\Phi^{\mbox{\tiny L}--}}=\ket{Z^{\mbox{\tiny L}}}, \\ \ket{\Phi^{\mbox{\tiny R}++}}=\ket{Y^{\mbox{\tiny R}}},\qquad\ket{\Phi^{\mbox{\tiny R}-+}}=\ket{\eta^{\mbox{\tiny R}}_{\ 1}},\qquad\ket{\Phi^{\mbox{\tiny R}+-}}=\ket{\eta^{\mbox{\tiny R}}_{\ 2}},\qquad\ket{\Phi^{\mbox{\tiny R}--}}=\ket{Z^{\mbox{\tiny R}}}. \end{aligned} \end{equation} This identification is possible by comparing the action of the supercharges on one-particle states. \subsection{The S-matrix for the spin-chain}\label{sec:S-matr-spin-chain-T4} Repeating the derivation of Section~\ref{sec:S-mat-T4} one may find the exact S-matrix governing the scattering of the spin-chain excitations. This is essentially the same object as the one found from the string description, the only difference being that it is written in a different basis. Indeed, although the action of the charges on one-particle representations agrees on the two sides---yielding the identification~\eqref{eq:identif-string-sp-ch-states}---one can check that it is different on two-particle states. This is just a consequence of the fact that the bases for the two-particle representations on the two sides are related by a matrix that acts non-locally on the states.
To be precise, the S-matrix $\mathcal{S}^{\text{sp-ch}}$ of the spin-chain picture is related to the one found from the string theory $\mathcal{S}^{\text{str}}$ as\footnote{This relation may be found by realising that we can map the supercharges on the two sides by $\gen{Q}^{\text{str}}_{pq} = (\mathbf{1} \otimes \mathbf{U}^\dagger_p) \, \cdot \, \gen{Q}^{\text{sp-ch}}_{pq} \, \cdot \, (\mathbf{1} \otimes \mathbf{U}_p)$.} \begin{equation}\label{eq:S-mat-sp-ch-S-mat-str} \mathcal{S}^{\text{sp-ch}}_{pq} = (\mathbf{1} \otimes \mathbf{U}_q) \, \cdot \, \mathcal{S}^{\text{str}}_{pq} \, \cdot \, (\mathbf{1} \otimes \mathbf{U}^\dagger_p) . \end{equation} In the basis \begin{equation}\label{eq:1partbasis} \left(\Phi^{\mbox{\tiny L}++},\,\Phi^{\mbox{\tiny L}+-},\,\Phi^{\mbox{\tiny L}-+},\,\Phi^{\mbox{\tiny L}--},\,\Phi^{\mbox{\tiny R}++},\,\Phi^{\mbox{\tiny R}+-},\,\Phi^{\mbox{\tiny R}-+},\,\Phi^{\mbox{\tiny R}--}\right) , \end{equation} the matrix that we need is \begin{equation} \mathbf{U}_p=\text{diag} \left(e^{i\, p},e^{i\, p/2},e^{i\, p/2},1,e^{i\, p},e^{i\, p/2},e^{i\, p/2},1\right) . \end{equation} While the string-frame S-matrix satisfies the standard Yang-Baxter equation~\eqref{eq:YBe}, it is easy to check that the above redefinition implies that for the S-matrix of the spin-chain we must have a \emph{twisted} Yang-Baxter equation \begin{equation}\label{eq:YB-mat-twist} \left(\mathbf{F}_p^{\phantom{1}}\mathcal{S}_{qr}\mathbf{F}_p^{{-1}}\right) \otimes \mathbf{1} \, \cdot \, \mathbf{1}\otimes\mathcal{S}_{pr} \, \cdot \, \left(\mathbf{F}_r^{\phantom{1}} \mathcal{S}_{pq}\mathbf{F}_r^{{-1}}\right) \otimes \mathbf{1} = \mathbf{1}\otimes\mathcal{S}_{pq} \, \cdot \, \left(\mathbf{F}_q^{\phantom{1}}\mathcal{S}_{pr}\mathbf{F}_q^{-1}\right) \otimes \mathbf{1} \, \cdot \, \mathbf{1}\otimes\mathcal{S}_{qr}, \end{equation} where $\mathbf{F}_p\equiv\mathbf{U}_p\otimes \mathbf{U}_p$.
The same result is actually found after carefully considering the length-changing effects on the three-particle states on which we want to check the Yang-Baxter equation. It is possible to repeat the derivation of the previous chapter and write down the Bethe-Yang equations for the spin-chain. Doing so one finds the same six equations for the massive excitations~\eqref{eq:BA-1}-\eqref{eq:BA-3b}, where the interaction terms with massless excitations are obviously missing. Moreover, because of the change of basis~\eqref{eq:S-mat-sp-ch-S-mat-str}, the factors $\nu_j$ are absent in the Bethe-Yang equations for the spin-chain. These factors can actually be reabsorbed in the definition of the length, allowing us to relate the length of the string to the length of the spin-chain. It would be very interesting to construct a long-range spin-chain that describes the CFT$_2$ dual to {AdS$_3\times$S$^3\times$T$^4$} in the spirit of the successful program carried out in AdS$_5$/CFT$_4$, and compare it to our construction. We refer to~\cite{Pakman:2009mi,Sax:2014mea} for papers taking some preliminary steps in this direction. \section{On the solutions to the crossing equations} \label{app:crossing-AdS3} In this appendix we collect some useful formulae concerning the solutions to the crossing equations of the massive sector of AdS$_3\times$S$^3\times$T$^4$. We start by proving that the expression for the difference of the phases proposed in Section~\ref{sec:dressing-factors} indeed solves the corresponding crossing equation.
\subsection{The solution for $\theta^-$} \label{app:solving} We start by defining the integral \begin{equation}\label{eq:Phi-} \begin{aligned} \Phi^-(x,y) &=\ointc \, \frac{dw}{8\pi} \frac{\text{sign}((w-1/w)/i)}{x-w} \log{\ell^-(y,w)} \ - x \leftrightarrow y \\ &=\left( \inturl - \intdlr \right)\frac{dw}{8\pi} \frac{1}{x-w} \log{ \ell^-(y,w)} \ - x \leftrightarrow y, \\ \ell^-(y,w)&\equiv(y-w)\left(1-\frac{1}{yw}\right). \end{aligned} \end{equation} The reader may check that the expressions above match the ones appearing in the solution for $\chi^-$ presented in~\eqref{eq:chi-}. The statement is that $\chi^-(x,y)$ coincides with $\Phi^-(x,y)$ in the region $|x|>1,\ |y|>1$. Outside this region we have to define $\chi^-$ through a proper analytic continuation, and the two functions cease to coincide. In particular, the first important property that distinguishes them, and that is of crucial importance for the proof, is that \begin{equation} \label{eq:phi-id} \Phi^-(x,y)-\Phi^-(1/x,y)= 0. \end{equation} To prove it we rewrite $\Phi^-(x,y)$ as \begin{equation}\label{eq:rewrite-Phi-} \begin{aligned} \Phi^-(x,y)&=F(x,y)-F(y,x)\,,\\ F(x,y)&=F^{\rotatebox[origin=c]{360}{$\scriptstyle\curvearrowleft$}}(x,y)-F^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}(x,y)=\inturl f(w,x,y)dw - \intdlr f(w,x,y)dw\,,\\ f(w,x,y)&=\frac{1}{8\pi} \frac{1}{x-w} \log\ell^-(y,w)\,. \end{aligned} \end{equation} Because of the anti-symmetrisation in $x$ and $y$, we first focus on the second entry of the function $F(y,x)$. Using $f(w,y,x)-f(w,y,1/x)=0$, we can also show \begin{equation} F^{\rotatebox[origin=c]{360}{$\scriptstyle\curvearrowleft$}}(y,x)-F^{\rotatebox[origin=c]{360}{$\scriptstyle\curvearrowleft$}}(y,1/x)=0\,,\qquad \mbox{and}\qquad F^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}(y,x)-F^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}(y,1/x)=0\,, \end{equation} which yields $F(y,x)-F(y,1/x)=0$.
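The identity $f(w,y,x)-f(w,y,1/x)=0$ used here boils down to the inversion symmetry of $\ell^-$ in either argument, which is a purely algebraic statement that can be checked in a couple of lines of computer algebra:

```python
import sympy as sp

x, y, w = sp.symbols('x y w', nonzero=True)
ell = lambda y, w: (y - w)*(1 - 1/(y*w))    # ell^-(y, w)

# ell^- is invariant under inversion of either argument
assert sp.simplify(ell(x, w) - ell(1/x, w)) == 0
assert sp.simplify(ell(y, x) - ell(y, 1/x)) == 0
```

The second symmetry is the one used later when rewriting $\frac{i}{2}\log\ell^-(y,1/x)$ as $\frac{i}{2}\log\ell^-(y,x)$ in the crossing relation for $\chi^-$.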
Looking now at the first entry of $F(x,y)$ \begin{align} F^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}(1/x,y)&=\intdlr \frac{dw}{8\pi} \frac{1}{1/x-w} \log\ell^-(y,w) \nonumber \\ &=\inturl \frac{du}{8\pi\,u^2} \frac{1}{1/x-1/u} \log\ell^-(y,u) \label{eq:proof1overx} \\ &=-\inturl \frac{du}{8\pi} \frac{1}{x-u} \log\ell^-(y,u) -\inturl \frac{du}{8\pi} \frac{1}{u} \log\ell^-(y,u) \nonumber \\ &=-F^{\rotatebox[origin=c]{360}{$\scriptstyle\curvearrowleft$}}(x,y)-\phi^-(y)\,, \nonumber \end{align} where we used the change of variable $u=1/w$ and we assumed that $|x|\neq1$. Sending $x\to1/x$ in the above equation we also get $F^{\rotatebox[origin=c]{360}{$\scriptstyle\curvearrowleft$}}(1/x,y)=-F^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}(x,y)-\phi^-(y)$. With all this information we can then show that also for the first entry $F(x,y)-F(1/x,y)=0$, and conclude that~\eqref{eq:phi-id} is proved. The function $\Phi^-(x,y)$ has another important property: it has a jump discontinuity when crossing the unit circle $|x|=1$. To prove this and compute the size of the discontinuity, we consider separately the functions $F^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}(x,y),F^{\rotatebox[origin=c]{360}{$\scriptstyle\curvearrowleft$}}(x,y)$ that were introduced in~\eqref{eq:rewrite-Phi-} as a convenient rewriting. If we start with $F^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}(x,y)$, on the one hand it is clear that no discontinuity is encountered when we cross the unit circle $|x|=1$ from above the real line $\mathbf{Im}(x)>0$. On the other hand, crossing from below the real line $\mathbf{Im}(x)<0$ we get, using the residue theorem \begin{equation} F^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}(e^{i\varphi+\epsilon},y)=F^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}(e^{i\varphi-\epsilon},y)+\frac{i}{4}\log \ell^-(y,e^{i\varphi})+O(\epsilon),\qquad \epsilon>0,\quad -\pi<\varphi<0\,.
\end{equation} Studying the discontinuity of $F^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}$ in the second entry, we find a jump when crossing either the lower or the upper half-circle\footnote{These results may be found by first studying the discontinuity of $\partial_xF^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}(y,x)$, and then finding the corresponding primitive.} \begin{equation} \begin{aligned} F^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}(y,e^{i\varphi+\epsilon})&=F^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}(y,e^{i\varphi-\epsilon})-\frac{i}{4}\,\log\left(y-e^{i\varphi}\right)+\phi_\uparrow(y)+O(\epsilon),\quad &&-\pi<\varphi<0\,, \\ F^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}(y,e^{i\varphi+\epsilon})&=F^{\rotatebox[origin=c]{180}{$\scriptstyle\curvearrowleft$}}(y,e^{i\varphi-\epsilon})+\frac{i}{4}\,\log\left(\frac{1}{ye^{i\varphi}}-1\right)+\phi_\downarrow(y)+O(\epsilon),\quad &&\phantom{-{}} 0<\varphi<\pi\,, \end{aligned} \end{equation} where $\epsilon>0$, and $\phi_\uparrow(y), \phi_\downarrow(y)$ are functions of $y$ that will not be important for our purposes. The discontinuities of $F^{\rotatebox[origin=c]{360}{$\scriptstyle\curvearrowleft$}}$ are found in the same way, and amount to exchanging the upper and the lower half-circles in the above results. Thanks to these results, we can compute the values of the discontinuities for $\Phi^-(x,y)$ when we cross the unit circle from below or above the real line\footnote{We have omitted functions that depend on $y$ only.
They are not important for us, since they do not contribute to the crossing equations.} \begin{equation}\label{eq:disc-Phi-minus} \begin{aligned} \Phi^-(e^{i\varphi+\epsilon},y)&=\Phi^-(e^{i\varphi-\epsilon},y)-\frac{i}{2}\,\log\ell^-(y,e^{i\varphi})+O(\epsilon),\qquad &&-\pi<\varphi<0\,, \\ \Phi^-(e^{i\varphi+\epsilon},y)&=\Phi^-(e^{i\varphi-\epsilon},y)+\frac{i}{2}\,\log\ell^-(y,e^{i\varphi})+O(\epsilon),\qquad &&\phantom{-{}}0<\varphi<\pi\,. \end{aligned} \end{equation} All this information is what we need to construct a solution of the crossing equation for the difference of the phases. We define crossing as an analytic continuation from the physical region $|x|>1,|y|>1$ to the crossed region $|x|<1,|y|>1$, where the path crosses the unit circle below the real line $\mathbf{Im}(x)<0$. Then we construct $\chi^-$ in such a way that it coincides with $\Phi^-$ in the physical region, but is continuous when we perform a crossing transformation and go to the crossed region \begin{equation} \begin{aligned} \chi^-(x,y)&\equiv\Phi^-(x,y)\,\qquad &&|x|>1,\quad |y|>1\,, \\ \chi^-(x,y)&\equiv\Phi^-(x,y)-\frac{i}{2}\,\log\ell^-(y,x)\,\qquad &&|x|<1,\quad |y|>1\,. \end{aligned} \end{equation} According to these definitions and using~\eqref{eq:phi-id} we have \begin{equation} \chi^-(x,y)-\chi^-(1/x,y)=\frac{i}{2}\,\log\ell^-(y,1/x)=\frac{i}{2}\,\log\ell^-(y,x)\,,\qquad |x|>1,\quad |y|>1\,. \end{equation} Remembering that $\sigma^-(x^\pm,y^\pm)=\text{exp}(i \theta^-(x^\pm,y^\pm))$ and the relation between $\theta^-$ and $\chi^-$ in~\eqref{eq:thchi} we find \begin{equation} \label{eq:crossdifffinal} \frac{{\sigma^-(x,y)}^2}{{\sigma^-(\bar{x},y)}^2}=\text{exp}\left[-\left(\log\ell^-(x^+,y^+)+\log\ell^-(x^-,y^-)-\log\ell^-(x^+,y^-)-\log\ell^-(x^-,y^+)\right)\right] \end{equation} which proves that we have constructed a solution to~\eqref{eq:cr-ratio}.
\subsection{Singularities of the dressing phases}\label{sec:sing-dressing} We discuss possible singularities of the dressing phases~$\theta^{\bullet\bullet}(x,y)$ and~$\widetilde{\theta}^{\bullet\bullet}(x,y)$, defined in terms of $\chi^{\bullet\bullet}(x,y),\,\widetilde{\chi}^{\bullet\bullet}(x,y)$ as in~\eqref{eq:solution}. We use results concerning the analytic properties of the BES phase, which is known to be regular in the physical region~\cite{Dorey:2007xn,Arutyunov:2009kf}. We then focus on the deviations from it, and we look for logarithmic singularities that might arise for special relative values of~$x$ and~$y$ in the functions \begin{equation} \label{eq:PsiPM} \Psi^\pm(x,y)=\frac{1}{2}\big(-\Phi^{\text{HL}}(x,y)\pm\Phi^-(x,y)\big)\,, \end{equation} that contribute to define the two phases as in~\eqref{eq:solution}. Here $\Phi^{\text{HL}}(x,y)$ is the integral defining the HL phase in the physical region, \begin{equation} \Phi^{\text{HL}}(x,y)=\left( \inturl - \intdlr \right)\frac{dw}{4\pi} \frac{1}{x-w} \log\left({\frac{y-w}{y-1/w}}\right), \end{equation} and $\Phi^-(x,y)$ is defined in~\eqref{eq:Phi-}. Because of the above expressions, singularities might arise at~$y=x$ or~$y=1/x$, but an explicit evaluation yields \begin{equation} \Psi^\pm(x,y)\big|_{y=x}=0\,,\qquad\Psi^\pm(x,y)\big|_{y=1/x}=\frac{1}{4\pi}\big(4\,\text{Li}_2(x)-\text{Li}_2(x^{2})\big)\,, \end{equation} with $|y|>1$. This is enough to conclude that the phases have no singularity at $x=y$, where both variables are in the physical region. When $y=1/x$ and $|y|>1$, $x$ lies inside the unit circle, and we have to perform a proper analytic continuation of the above functions to find the contribution to the phases in the crossed region. We continue the phases through the lower half-circle as in Appendix~\ref{app:solving}.
The result for $\Phi^-$ may be found in~\eqref{eq:disc-Phi-minus}, while for $\Phi^{\text{HL}}(x,y)$ we get \begin{equation} \Phi^{\text{HL}}(e^{i\varphi+\epsilon},y)=\Phi^{\text{HL}}(e^{i\varphi-\epsilon},y)-\frac{i}{2}\,\log\left[\frac{y-e^{i\varphi}}{y-e^{-i\varphi}}\right]+O(\epsilon),\qquad \epsilon>0,\quad -\pi<\varphi<0\,. \end{equation} Putting together this information, we find that there is no singularity in $\widetilde{\chi}^{\bullet\bullet}(x,y)$ for $y=1/x$ and $|y|>1$. On the other hand $\chi^{\bullet\bullet}(x,y)$ has a logarithmic singularity such that \begin{equation} \label{eq:chizero} e^{2i \chi^{\bullet\bullet}(x,y)}\sim \left(y-\frac{1}{x}\right)\,,\qquad\text{for}\quad y\sim 1/x\,. \end{equation} \section{Bosonic strings}\label{sec:Bos-string-lcg} Restricting to the bosonic model, we can already capture the essential features of the gauge-fixing procedure. The string moves on a target manifold parameterised by ten coordinates $X^M, \ M=0,\ldots,9$. Two of them---for definiteness $X^0\equiv t$, the time coordinate, and $X^5 \equiv \phi$, which for us will be an angle of a compact manifold---correspond to the abelian isometries of the full action that we will exploit to fix the light-cone gauge. Invariance under shifts of these two coordinates results in a dependence of the action on just their derivatives. A rank-$2$ symmetric tensor $G_{MN}$ defines a metric on the target space, that we assume to be written in ``block form'' \begin{equation} \begin{aligned} {\rm d}s^2&= G_{MN} {\rm d}X^M{\rm d}X^N\\ &= G_{tt} {\rm d}t^2+G_{\phi\phi} {\rm d}\phi^2+G_{\mu\nu} {\rm d}X^\mu{\rm d}X^\nu\,, \end{aligned} \end{equation} where $X^\mu$ are the eight transversal coordinates and $G_{tt}<0$. In general one might also have a rank-$2$ anti-symmetric tensor $B_{MN}$. We include this possibility, as it will be needed in Chapter~\ref{ch:qAdS5Bos} and Chapter~\ref{ch:qAdS5Fer}.
The action for the bosonic string then takes the form of the Polyakov action \begin{equation}\label{eq:bos-str-action} \begin{aligned} S^{\alg{b}}&= \int_{-{\frac{L}{2}}}^{\frac{L}{2}} \, {\rm d}\sigma {\rm d} \tau\ L^{\alg{b}}\,, \\ L^{\alg{b}}&=-\frac{g}{2} \left( \, \gamma^{\alpha\beta} \partial_\alpha X^M \partial_\beta X^N G_{MN} -\epsilon^{\alpha\beta} \partial_\alpha X^M \partial_\beta X^N B_{MN} \right)\,. \end{aligned} \end{equation} Here $\tau$ and $\sigma$ are respectively the timelike and spatial coordinates parameterising the worldsheet, for which we use Greek indices $\alpha,\beta$. For closed strings $\sigma\in[-{\frac{L}{2}},{\frac{L}{2}}]$ parameterises a circle of length $L$ and periodic boundary conditions for the fields are used. The symmetric tensor $\gamma^{\alpha\beta}=h^{\alpha\beta}\sqrt{-h}$ is the Weyl-invariant combination\footnote{With a slight abuse of language we will always refer to it as just the worldsheet metric.} of the world-sheet metric $h_{\alpha\beta}$, and for us the component $\gamma^{\tau\tau}<0$. For the anti-symmetric tensor $\epsilon^{\alpha\beta}$ we use the convention $\epsilon^{\tau\sigma}=1$. The whole action is multiplied by $g$, which plays the role of the string tension. We use the first-order formalism and introduce the conjugate momenta \begin{equation} p_M = \frac{\delta S^{\alg{b}}}{\delta \dot{X}^M} = - g \gamma^{\tau\beta} \partial_\beta X^N G_{MN} + g X^{'N} B_{MN}\,, \end{equation} where we are using the shorthand notation $\dot{X}\equiv\partial_{\tau}X(\tau,\sigma),\ X'\equiv\partial_{\sigma}X(\tau,\sigma)$. Using $\det\gamma^{\alpha\beta}=-1$ we can rewrite the action as \begin{equation}\label{eq:bos-act-I-ord} S^{\alg{b}}= \int_{-{\frac{L}{2}}}^{\frac{L}{2}} \, {\rm d}\sigma {\rm d} \tau \left( p_M \dot{X}^M + \frac{\gamma^{\tau\sigma}}{\gamma^{\tau\tau}} C_1 + \frac{1}{2g \gamma^{\tau\tau}} C_2 \right), \end{equation} where $C_1, C_2$ are the Virasoro constraints.
They explicitly read as \begin{equation} \begin{aligned} C_1 &= p_M X'^{M}, \\ C_2 &= G^{MN} p_M p_N+ g^2 X'^{M} X'^{N} G_{MN} - 2 g\, p_M X'^{Q} G^{MN} B_{NQ} + g^2 X'^{P} X'^{Q} B_{MP} B_{NQ} G^{MN} . \end{aligned} \end{equation} The components of $\gamma^{\alpha\beta}$ are Lagrange multipliers, implying that we should solve the equations $C_1=0$ and $C_2=0$ in a certain gauge. It is convenient to introduce light-cone coordinates $x^+$ and $x^-$ as linear combinations of $t,\phi$~\cite{Arutyunov:2006gs} \begin{equation}\label{eq:lc-coord} x^+= (1-a)\, t +a\, \phi, \qquad\quad x^-=\phi-t. \end{equation} To be more general, we make the combination defining $x^+$ dependent on a generic parameter $a$. The above combinations have been chosen in such a way that the conjugate momentum\footnote{We use a different convention from~\cite{Arutyunov:2009ga} for what we call $p_+$ and $p_-$.} of $x^+$ is the sum of the conjugate momenta of $t$ and $\phi$ \begin{equation} p_+=\frac{\delta S}{\delta \dot{x}^+}=p_t+p_\phi\,,\qquad p_-=\frac{\delta S}{\delta \dot{x}^-}=-a\, p_t+(1-a)\,p_\phi\,. \end{equation} In these coordinates the two Virasoro constraints are rewritten as \begin{equation}\label{eq:Vira-constr-bos} \begin{aligned} C_1 =& p_+ x'^{+}+p_- x'^{-}+p_\mu X'^{\mu}\,,\\ C_2 =& G^{++} p_+^2 +2 G^{+-} p_+ p_- + G^{--} p_-^2 \\ & + g^2 G_{--} (x'^-)^2 + 2 g^2 G_{+-} x'^+ x'^- + g^2 G_{++} (x'^+)^2 + \mathcal{H}^{\alg{b}}_x\, , \end{aligned} \end{equation} where we have assumed that the $B$-field vanishes along light-cone directions---as this is valid for the examples that we will consider---and \begin{equation} \begin{aligned} \nonumber G^{++} &= a^2 G_{\phi\phi}^{-1} + (a-1)^2 G_{tt}^{-1}, \quad &G^{+-}& = a G_{\phi\phi}^{-1} + (a-1) G_{tt}^{-1}, \quad G^{--} = G_{\phi\phi}^{-1} + G_{tt}^{-1}, \\ G_{--} &= (a-1)^2 G_{\phi\phi} + a^2 G_{tt}, \quad &G_{+-} &= -(a-1) G_{\phi\phi} - a G_{tt}, \quad G_{++} = G_{\phi\phi} + G_{tt}\,. 
\end{aligned} \end{equation} In $C_2$ we have collected all expressions that depend only on the transverse coordinates $X^\mu$ into the object \begin{equation} \mathcal{H}^{\alg{b}}_x=G^{\mu\nu} p_\mu p_\nu+ g^2 X'^{\mu} X'^{\nu} G_{\mu\nu} - 2 g p_\mu X'^{\rho} G^{\mu\nu} B_{\nu\rho} + g^2 X'^{\lambda} X'^{\rho} B_{\mu\lambda} B_{\nu\rho} G^{\mu\nu} . \end{equation} The \emph{uniform light-cone gauge} is achieved by fixing \begin{equation}\label{eq:unif-lcg} x^+= \tau+a \,m \, \sigma, \qquad\quad p_-=1, \end{equation} where we allow the coordinate $\phi$ to wind $m$ times around the circle $\phi({\frac{L}{2}})-\phi(-{\frac{L}{2}})=2\pi \, m$. The name ``uniform'' comes from the fact that we choose $p_-$ to be independent of $\sigma$, and this choice makes this light-cone momentum uniformly distributed along the string. Thanks to this gauge, the term $p_M\dot{X}^M=p_+\dot{x}^++p_-\dot{x}^-+p_\mu\dot{X}^\mu$ in the action~\eqref{eq:bos-act-I-ord} is simplified, and we are led to identify the light-cone momentum $p_+$ with the Hamiltonian (density) of the gauge-fixed model\footnote{We have dropped the total derivative term $\dot{x}^-$.} \begin{equation} S^{\alg{b}}_{\text{g.f.}}= \int_{-{\frac{L}{2}}}^{\frac{L}{2}} \, {\rm d}\sigma {\rm d} \tau \, \left( p_\mu\dot{X}^\mu -\mathcal{H}^{\alg{b}} \right)\,, \qquad \mathcal{H}^{\alg{b}}=-p_+(X^\mu,p_\mu)\,, \end{equation} once the Virasoro constraints are satisfied. In this gauge the first Virasoro constraint $C_1=0$ may be used to solve for $x'^-$ as \begin{equation} x'^-= - p_\mu X'^{\mu}-a\, m\, p_+. \end{equation} Notice that only the \emph{derivative} of this light-cone coordinate can be written as a local expression of the transverse fields. 
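The light-cone metric components listed above can be cross-checked with a short computer-algebra computation. The following sympy sketch (symbol names are ours) performs the coordinate change $t=x^+-a\,x^-$, $\phi=x^++(1-a)\,x^-$ and inverts the resulting $2\times 2$ block, reproducing both the lower-index components $G_{++},G_{+-},G_{--}$ and the upper-index ones $G^{++},G^{+-},G^{--}$:

```python
# Check of the light-cone metric components (sympy; symbol names are ours):
# start from ds^2 = G_tt dt^2 + G_pp dphi^2, change coordinates to
# x+ = (1-a) t + a phi, x- = phi - t, and compare with the components
# quoted in the text; inverting the 2x2 block gives G^{++}, G^{+-}, G^{--}.
import sympy as sp

a, Gtt, Gpp = sp.symbols('a G_tt G_phiphi')
xp, xm = sp.symbols('x_plus x_minus')

# inverse of the coordinate change: t = x+ - a x-, phi = x+ + (1-a) x-
t = xp - a * xm
phi = xp + (1 - a) * xm
J = sp.Matrix([[sp.diff(t, xp), sp.diff(t, xm)],
               [sp.diff(phi, xp), sp.diff(phi, xm)]])

G_old = sp.diag(Gtt, Gpp)            # metric in the (t, phi) block
G_lc = sp.expand(J.T * G_old * J)    # metric in (x+, x-)

assert sp.simplify(G_lc[0, 0] - (Gpp + Gtt)) == 0                      # G_{++}
assert sp.simplify(G_lc[0, 1] - (-(a - 1) * Gpp - a * Gtt)) == 0       # G_{+-}
assert sp.simplify(G_lc[1, 1] - ((a - 1)**2 * Gpp + a**2 * Gtt)) == 0  # G_{--}

G_inv = G_lc.inv()                   # co-metric acting on (p_+, p_-)
assert sp.simplify(G_inv[0, 0] - (a**2 / Gpp + (a - 1)**2 / Gtt)) == 0  # G^{++}
assert sp.simplify(G_inv[0, 1] - (a / Gpp + (a - 1) / Gtt)) == 0        # G^{+-}
assert sp.simplify(G_inv[1, 1] - (1 / Gpp + 1 / Gtt)) == 0              # G^{--}
print("light-cone metric components match the text")
```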
Since we are describing closed strings, we should actually impose the following periodicity condition \begin{equation} 2\pi \, m = x^-(L/2)-x^-(-L/2) = \int_{-{\frac{L}{2}}}^{\frac{L}{2}} \, {\rm d}\sigma\, x'^-= -\int_{-{\frac{L}{2}}}^{\frac{L}{2}} \, {\rm d}\sigma \, p_\mu X'^\mu+a\, m\, \int_{-{\frac{L}{2}}}^{\frac{L}{2}} \, {\rm d}\sigma\, \mathcal{H}^{\alg{b}}\,, \end{equation} which we call the \emph{level-matching} condition. We recognise that the above is a constraint involving the worldsheet momentum \begin{equation} p_{\text{ws}}= -\int_{-{\frac{L}{2}}}^{\frac{L}{2}} \, {\rm d}\sigma \, p_\mu X'^\mu\,, \end{equation} which is the charge associated to shifts of the worldsheet coordinate $\sigma$, under which the action is invariant. From now on we will just consider the case of zero winding $m=0$, as it yields a well-defined large-tension limit, see Section~\ref{sec:decomp-limit}. In this case the level-matching condition imposes that the worldsheet momentum must vanish for physical configurations \begin{equation} p_{\text{ws}}=0, \qquad \quad (\text{when } m=0)\,. \end{equation} In Chapter~\ref{ch:symm-repr-T4} we actually use a method where we first need to relax the level-matching condition, meaning that we allow the configurations to have non-vanishing worldsheet momentum. The above condition is then imposed only at the end, as a constraint on the states of the Hilbert space. \medskip Solving the second Virasoro constraint $C_2=0$, we find explicitly the light-cone Hamiltonian (density). The solution to this quadratic equation that yields a positive Hamiltonian is \begin{equation}\label{eq:lc-Hamilt-bos} \mathcal{H}^{\alg{b}}=-p_+=\frac{G^{+-}+\sqrt{ (G^{+-})^2- G^{++} \left(G^{--}+g^2 G_{--} (x'^-)^2 +\mathcal{H}^{\alg{b}}_x\right)} }{ G^{++}}\,.
\end{equation} To relate the Hamiltonian on the worldsheet to the \emph{spacetime} energy of the string, let us note that---because of the invariance of the action under shifts of $t$ and $\phi$---we can define two conserved quantities \begin{equation} E=-\int_{-{\frac{L}{2}}}^{\frac{L}{2}} {\rm d}\sigma \, p_t\,, \qquad\quad J=\int_{-{\frac{L}{2}}}^{\frac{L}{2}} {\rm d}\sigma \, p_\phi\,. \end{equation} The first of them is the spacetime energy, while the second measures the angular momentum in the direction of $\phi$. After going to light-cone coordinates, these are combined into \begin{equation}\label{eq:total-light-cone-mom-P+P-} P_+= \int_{-{\frac{L}{2}}}^{\frac{L}{2}} {\rm d} \sigma \, p_+ = J-E\,, \qquad P_-= \int_{-{\frac{L}{2}}}^{\frac{L}{2}} {\rm d} \sigma \, p_- = (1-a)\,J+a\,E\,. \end{equation} On the one hand, we immediately discover the relation between the light-cone Hamiltonian and the spacetime charges $E$ and $J$. On the other hand, using~\eqref{eq:unif-lcg} we find how these fix the length $L$ of the string \begin{equation}\label{eq:Ham-En-L-P-} \int_{-{\frac{L}{2}}}^{\frac{L}{2}} {\rm d} \sigma \, \mathcal{H}^{\alg{b}} = E-J\,, \qquad L = P_-=(1-a)\,J+a\,E\,. \end{equation} The first of these equations justifies the choice of the gauge. From the point of view of the AdS/CFT correspondence it is indeed desirable to compute the spacetime energy $E$, which is then related by a simple formula to the Hamiltonian on the worldsheet. The second of the above equations shows that the Hamiltonian secretly depends on $P_-$ as well, although just through the integration limits. The length of the string is a gauge-dependent quantity, as confirmed by the explicit $a$-dependence. After this discussion on the gauge-fixing procedure for the bosonic model, let us now also include the fermionic degrees of freedom.
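Before turning to the fermions, the quoted root of the quadratic constraint can be checked symbolically. In the following sympy sketch (symbol names are ours, with $p_-=1$ and zero winding so that $x'^+=0$) we verify that $\mathcal{H}^{\alg{b}}=-p_+$ as given in~\eqref{eq:lc-Hamilt-bos} indeed solves $C_2=0$:

```python
# Symbolic check (sympy; symbol names are ours) that the Hamiltonian
# density of eq. (lc-Hamilt-bos) solves the second Virasoro constraint.
# With p_- = 1 and zero winding (x'^+ = 0) the constraint reduces to
#   C2 = G^{++} p_+^2 + 2 G^{+-} p_+ + G^{--} + g^2 G_{--} (x'^-)^2 + H_x = 0.
import sympy as sp

Gupup, Gupdn, Gdndn, Gmm, g, xminus_p, Hx = sp.symbols(
    'G_upup G_updn G_dndn G_mm g xminus_p H_x', positive=True)
# Gupup = G^{++}, Gupdn = G^{+-}, Gdndn = G^{--}, Gmm = G_{--}

A = Gdndn + g**2 * Gmm * xminus_p**2 + Hx
H = (Gupdn + sp.sqrt(Gupdn**2 - Gupup * A)) / Gupup   # candidate H^b = -p_+

p_plus = -H
C2 = Gupup * p_plus**2 + 2 * Gupdn * p_plus + A
assert sp.simplify(C2) == 0
print("H^b = -p_+ solves C2 = 0")
```

Both roots of the quadratic solve the constraint; the text keeps the branch yielding a positive Hamiltonian.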
\section{Kappa-symmetry}\label{sec:kappa-symm-eta-def-quad-theta} The Lagrangian of the deformed model is invariant under kappa-symmetry, as proved in~\cite{Delduc:2013qra,Delduc:2014kha}. Let us first briefly describe what happens when we switch off the deformation. Local transformations are implemented on the coset by multiplication of the group elements from the \emph{right}. A kappa-transformation is a local fermionic transformation. We then implement it with $\text{exp}[\varepsilon(\sigma,\tau)]$, where $\varepsilon$ is a local fermionic parameter that takes values in the algebra $\alg{psu}(2,2|4)$. Multiplication of a coset representative $\alg{g}$ gives \begin{equation}\label{eq:right-ferm-action} \alg{g}\cdot \text{exp}(\varepsilon) = \alg{g}'\cdot \alg{h}\,, \end{equation} where $\alg{g}'$ is a new element of the coset, and $\alg{h}$ is a compensating element of $\text{SO}(4,1)\times \text{SO}(5)$ needed to remain in the coset. A generic $\varepsilon$ will not leave the action invariant. However, taking~\cite{Arutyunov:2009ga} \begin{equation}\label{eq:undef-eps-kappa-tr} \begin{aligned} \varepsilon &= \frac{1}{2} (\gamma^{\alpha\beta} \delta^{IJ}- \epsilon^{\alpha\beta}\sigma_3^{IJ}) \left( \gen{Q}^I\kappa_{J\alpha} A_\beta^{(2)} + A_\beta^{(2)} \gen{Q}^I \kappa_{J\alpha} \right), \end{aligned} \end{equation} supplemented by the corresponding variation of the worldsheet metric, it is possible to show that indeed the action does not change under this transformation. The parameters $\kappa_{1\alpha}$ and $\kappa_{2\alpha}$---whose spinor indices we are omitting---introduced to define $\varepsilon$ are independent local quantities, parameterising odd elements of degree $1$ and $3$ respectively. 
In the deformed case one can still prove the existence of a local fermionic symmetry of the form~\eqref{eq:right-ferm-action}, meaning that the parameter $\varepsilon$ is related to the \emph{infinitesimal} variation of the coset representative as \begin{equation} \delta_\kappa \alg{g}=\alg{g}\cdot \varepsilon\,. \end{equation} However, the definition~\eqref{eq:undef-eps-kappa-tr} of the parameter $\varepsilon$ has to be deformed in order to get invariance of the action; in particular, it will no longer lie just in the odd part of the algebra, but it will have a non-trivial projection also on bosonic generators. It is written in terms of an odd element $\varrho$ as~\cite{Delduc:2013qra} \begin{equation}\label{eq:eps-op-rho-kappa} \varepsilon = \mathcal{O} \varrho, \qquad \varrho= \varrho^{(1)} +\varrho^{(3)}\,, \end{equation} where $\mathcal{O}$ is the operator defined in~\eqref{eq:defin-op-def-supercoset} and the two projections $\varrho^{(k)}$ are\footnote{Comparing to~\cite{Delduc:2013qra} we have dropped the factor of $i$ because we use ``anti-hermitian'' generators.} \begin{equation}\label{eq:def-varrho-kappa-def} \begin{aligned} \varrho^{(1)} &= \frac{1}{2} (\gamma^{\alpha\beta} - \epsilon^{\alpha\beta}) \left( \gen{Q}^1\kappa_{1\alpha} \left(\mathcal{O}^{-1} A_\beta \right)^{(2)} + \left(\mathcal{O}^{-1} A_\beta \right)^{(2)} \gen{Q}^1\kappa_{1\alpha} \right),\\ \varrho^{(3)} &= \frac{1}{2} (\gamma^{\alpha\beta} + \epsilon^{\alpha\beta}) \left( \gen{Q}^2\kappa_{2\alpha} \left(\widetilde{\mathcal{O}}^{-1} A_\beta \right)^{(2)} + \left(\widetilde{\mathcal{O}}^{-1} A_\beta \right)^{(2)} \gen{Q}^2\kappa_{2\alpha} \right), \end{aligned} \end{equation} where we defined \begin{equation} \widetilde{\mathcal{O}}=\mathbf{1} + \eta R_{\alg{g}} \circ \widetilde{d}\,.
\end{equation} In Appendix~\ref{app:standard-kappa-sym} we compute explicitly the form of the variations on bosonic and fermionic fields given the above definitions, and we show that they do not have the standard form. However, after implementing the field redefinitions of Section~\ref{sec:canonical-form}---needed to put the Lagrangian in the standard Green-Schwarz form---we find that also the kappa-variations become indeed standard \begin{equation}\label{eq:kappa-var-32-eta-def} \begin{aligned} \delta_{\kappa}X^M &= - \frac{i}{2} \ \bar{\Theta}_I \delta^{IJ} \widetilde{e}^{Mm} \Gamma_m \delta_{\kappa} \Theta_J + \mathcal{O}(\Theta^3), \\ \delta_{\kappa} \Theta_I &= -\frac{1}{4} (\delta^{IJ} \gamma^{\alpha\beta} - \sigma_3^{IJ} \epsilon^{\alpha\beta}) \widetilde{e}_{\beta}^m \Gamma_m \widetilde{K}_{\alpha J}+ \mathcal{O}(\Theta^2), \end{aligned} \end{equation} where \begin{equation} \widetilde{K} \equiv \left( \begin{array}{c} 0 \\ 1 \end{array} \right) \otimes \widetilde{\kappa}, \end{equation} and $\tilde{\kappa}$ is related to $\kappa$ as in~\eqref{eq:def-kappa-tilde-k-symm}. It is interesting to look also at the kappa-variation for the worldsheet metric, as this provides an independent method to derive the couplings of the fermions to the background fields, already identified from the Lagrangian. The variation is given by~\cite{Delduc:2013qra} \begin{equation}\label{eq:defin-kappa-var-ws-metric} \delta_{\kappa}\gamma^{\alpha\beta}=\frac{1-\eta^2}{2} \Str\left( \Upsilon \left[\gen{Q}^1\kappa^{\alpha}_{1+},P^{(1)}\circ \widetilde{\mathcal{O}}^{-1}( A^{\beta}_+ ) \right] +\Upsilon \left[\gen{Q}^2\kappa^{\alpha}_{2-},P^{(3)}\circ {\mathcal{O}}^{-1}( A^{\beta}_- ) \right] \right)\,, \end{equation} where $\Upsilon=\text{diag}(\mathbf{1}_4,-\mathbf{1}_4)$ and the projections of a vector $V_\alpha$ are defined as \begin{equation} V^\alpha_{\pm}= \frac{\gamma^{\alpha\beta}\pm \epsilon^{\alpha\beta}}{2} V_\beta\,. 
\end{equation} As we show in Appendix~\ref{app:standard-kappa-sym}, after taking into account the field redefinitions performed to get a standard action, we find a standard kappa-variation also for the worldsheet metric \begin{equation} \begin{aligned} \delta_\kappa \gamma^{\alpha\beta}&=2i\Bigg[ \bar{\widetilde{K}}^\alpha_{1+} \widetilde{D}^{\beta 1J}_+\Theta_J+\bar{\widetilde{K}}^\alpha_{2-} \widetilde{D}^{\beta 2J}_-\Theta_J \Bigg]+ \mathcal{O}(\Theta^3) \\ &= 2i\ \Pi^{IJ\, \alpha\a'}\Pi^{JK\, \beta\b'} \ \bar{\widetilde{K}}_{I\alpha'}\widetilde{D}^{KL}_{\beta'}\Theta_{L}+ \mathcal{O}(\Theta^3), \end{aligned} \end{equation} where we have defined \begin{equation} \Pi^{IJ\, \alpha\a'}\equiv\frac{\delta^{IJ}\gamma^{\alpha\a'}+\sigma_3^{IJ}\epsilon^{\alpha\a'}}{2}\,. \end{equation} The operator $\widetilde{D}$ is the one already identified from the computation of the Lagrangian. It is given in Eq.~\eqref{eq:deform-D-op}, and in particular we find the same RR fields as in the previous section.
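We note that the projections $V^\alpha_\pm$ introduced above are genuine (orthogonal) projectors once $\det\gamma^{\alpha\beta}=-1$ is imposed. A short sympy sketch (our parameterisation of $\gamma^{\alpha\beta}$) verifies this property:

```python
# Sketch (sympy; our parameterisation): with det(gamma^{ab}) = -1 the
# worldsheet projections (P_pm)^{ab} = (gamma^{ab} pm eps^{ab})/2 satisfy
# P_pm gamma P_pm = P_pm and P_+ gamma P_- = 0, i.e. they are orthogonal
# projectors on worldsheet vectors.
import sympy as sp

gtt, gts = sp.symbols('gamma_tt gamma_ts')
gss = (gts**2 - 1) / gtt                  # enforces det(gamma^{ab}) = -1
gamma_up = sp.Matrix([[gtt, gts], [gts, gss]])
gamma_dn = gamma_up.inv()
eps = sp.Matrix([[0, 1], [-1, 0]])        # convention eps^{tau sigma} = 1

Pp = (gamma_up + eps) / 2
Pm = (gamma_up - eps) / 2
assert sp.simplify(Pp * gamma_dn * Pp - Pp) == sp.zeros(2, 2)
assert sp.simplify(Pm * gamma_dn * Pm - Pm) == sp.zeros(2, 2)
assert sp.simplify(Pp * gamma_dn * Pm) == sp.zeros(2, 2)
print("V_+ and V_- projections are idempotent and orthogonal")
```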
\section{Clifford algebra and $\alg{psu}(2,2|4)$}\label{app:su224-algebra} Our preferred basis of $4\times 4$ gamma-matrices is\footnote{Here it was useful to exchange the definition of $\gamma_1, \gamma_4$ from the one of~\cite{Arutyunov:2009ga}.} \begin{equation}\label{eq:choice-5d-gamma} \newcommand{\color{black!40}0}{\color{black!40}0} \begin{aligned} \gamma_0 &= \left( \begin{array}{cccc} \phantom{-}1\phantom{-} & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ \color{black!40}0 & \phantom{-}1\phantom{-} & \color{black!40}0 & \color{black!40}0 \\ \color{black!40}0 & \color{black!40}0 & -1\phantom{-} & \color{black!40}0 \\ \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & -1\phantom{-} \\ \end{array} \right) , \\ \gamma_1 &= \left( \begin{array}{cccc} \color{black!40}0 & \color{black!40}0 & -i\phantom{-} & \color{black!40}0 \\ \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \phantom{-}i\phantom{-} \\ \phantom{-}i\phantom{-} & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ \color{black!40}0 & -i\phantom{-} & \color{black!40}0 & \color{black!40}0 \\ \end{array} \right), \qquad &\gamma_2 = \left( \begin{array}{cccc} \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \phantom{-}i\phantom{-} \\ \color{black!40}0 & \color{black!40}0 & \phantom{-}i\phantom{-} & \color{black!40}0 \\ \color{black!40}0 & -i\phantom{-} & \color{black!40}0 & \color{black!40}0 \\ -i\phantom{-} & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ \end{array} \right), \\ \gamma_3 &= \left( \begin{array}{cccc} \color{black!40}0 & \color{black!40}0 & \phantom{-}1\phantom{-} & \color{black!40}0 \\ \color{black!40}0 & \color{black!40}0 & \color{black!40}0 & \phantom{-}1\phantom{-} \\ \phantom{-}1\phantom{-} & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ \color{black!40}0 & \phantom{-}1\phantom{-} & \color{black!40}0 & \color{black!40}0 \\ \end{array} \right), \qquad &\gamma_4 = \left( \begin{array}{cccc} 
\color{black!40}0 & \color{black!40}0 & \color{black!40}0 & -1\phantom{-} \\ \color{black!40}0 & \color{black!40}0 & \phantom{-}1\phantom{-} & \color{black!40}0 \\ \color{black!40}0 & \phantom{-}1\phantom{-} & \color{black!40}0 & \color{black!40}0 \\ -1\phantom{-} & \color{black!40}0 & \color{black!40}0 & \color{black!40}0 \\ \end{array} \right). \end{aligned} \end{equation} Matrices for AdS$_5$ and S$^5$ in terms of the above gamma-matrices have been defined in~\eqref{eq:gamma-AdS5-S5}. When we need to write explicitly the matrix indices we use underlined Greek letters for AdS$_5$ ${(\check{\gamma}_m)_{\ul{\alpha}} }^{\ul{\nu}}$, and underlined Latin letters for S$^5$ ${(\hat{\gamma}_m)_{\ul{a}}}^{\ul{b}}$. It is useful to consider the matrices \begin{equation}\label{eq:SKC-gm} \begin{aligned} \Sigma = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ \end{array} \right), \quad K = \left( \begin{array}{cccc} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ \end{array} \right), \quad C = \left( \begin{array}{cccc} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \\ \end{array} \right), \end{aligned} \end{equation} that are defined with upper indices $\Sigma^{\ul{a}\ul{b}},K^{\ul{a}\ul{b}},C^{\ul{a}\ul{b}}$. Their inverse matrices are then defined with lower indices. They transform the gamma matrices in the following way \begin{equation} \gamma_m^t = K \gamma_m K^{-1}, \end{equation} \begin{equation} \begin{aligned} \gamma_m^t & = -C \gamma_m C^{-1}, \quad m=1,...,4, \qquad \gamma_0^t & = C \gamma_0 C^{-1}, \\ \gamma_m^\dagger & = -\Sigma \gamma_m \Sigma^{-1}, \quad m=1,...,4, \qquad \gamma_0^\dagger & = \Sigma \gamma_0 \Sigma^{-1}. \end{aligned} \end{equation} The matrix $K$---and not $C$---is the charge conjugation matrix for our Clifford algebra. We choose to follow the same notation of~\cite{Arutyunov:2009ga}. 
From the last equation one then has $\check{\gamma}_m^\dagger = -\Sigma \check{\gamma}_m \Sigma^{-1}, \quad m=0,...,4$. For raising and lowering spinor indices we follow the conventions of~\cite{Freedman:2012zz} \begin{equation} \lambda^\alpha = K^{\alpha\beta} \lambda_\beta, \qquad \lambda_\alpha = \lambda^\beta K_{\beta\alpha}, \end{equation} where $K^{\alpha\beta}$ are the components of the matrix $K$, which plays the role of the charge conjugation matrix for the Clifford algebra. We also have \begin{equation} K^{\alpha\beta} K_{\gamma\beta} = \delta^\alpha_\gamma, \qquad K_{\beta\alpha} K^{\beta\gamma} = \delta_\alpha^\gamma, \qquad \chi^\alpha \lambda_\alpha = - \chi_\alpha \lambda^\alpha . \end{equation} The five-dimensional gamma matrices satisfy the symmetry properties \begin{equation}\label{eq:symm-prop-5dim-gamma} \begin{aligned} (K\gamma^{(r)})^t &= - t_r^\gamma \ K\gamma^{(r)}\,, \\ K(\gamma^{(r)})^t K&= - t_r^\gamma \ \gamma^{(r)}\,, \qquad t_0^\gamma=t_1^\gamma=+1,\quad t_2^\gamma=t_3^\gamma=-1\,. \end{aligned} \end{equation} Here $\gamma^{(r)}$ denotes the antisymmetrised product of $r$ gamma matrices and the coefficients $t_r^\gamma$ are the same for AdS and the sphere---we label them with ${}^\gamma$ to distinguish them from the coefficients of ten-dimensional Gamma matrices.
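The explicit basis above lends itself to a direct numerical verification. The following numpy sketch (a transcription of the matrices in our conventions) checks the Euclidean Clifford algebra together with the transposition and Hermitian-conjugation properties of $K$, $C$ and $\Sigma$ stated above:

```python
# Numerical verification (numpy transcription of our conventions) of the
# Clifford algebra and of the conjugation properties of K, C and Sigma
# for the explicit 4x4 gamma-matrix basis given above.
import numpy as np

g0 = np.diag([1, 1, -1, -1]).astype(complex)
g1 = np.array([[0, 0, -1j, 0], [0, 0, 0, 1j], [1j, 0, 0, 0], [0, -1j, 0, 0]])
g2 = np.array([[0, 0, 0, 1j], [0, 0, 1j, 0], [0, -1j, 0, 0], [-1j, 0, 0, 0]])
g3 = np.array([[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]], dtype=complex)
g4 = np.array([[0, 0, 0, -1], [0, 0, 1, 0], [0, 1, 0, 0], [-1, 0, 0, 0]], dtype=complex)
gammas = [g0, g1, g2, g3, g4]

Sigma = np.diag([1, 1, -1, -1]).astype(complex)
K = np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]], dtype=complex)
C = np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]], dtype=complex)

# Euclidean Clifford algebra: {gamma_m, gamma_n} = 2 delta_mn
for m, gm in enumerate(gammas):
    for n, gn in enumerate(gammas):
        assert np.allclose(gm @ gn + gn @ gm, 2 * (m == n) * np.eye(4))

# gamma_m^t = K gamma_m K^{-1} for all m = 0,...,4
for gm in gammas:
    assert np.allclose(gm.T, K @ gm @ np.linalg.inv(K))

# gamma_m^t = -C gamma_m C^{-1} (m = 1,...,4),  gamma_0^t = C gamma_0 C^{-1}
assert np.allclose(g0.T, C @ g0 @ np.linalg.inv(C))
for gm in gammas[1:]:
    assert np.allclose(gm.T, -C @ gm @ np.linalg.inv(C))

# gamma_m^dag = -Sigma gamma_m Sigma^{-1} (m = 1,...,4),
# gamma_0^dag = Sigma gamma_0 Sigma^{-1}
assert np.allclose(g0.conj().T, Sigma @ g0 @ np.linalg.inv(Sigma))
for gm in gammas[1:]:
    assert np.allclose(gm.conj().T, -Sigma @ gm @ np.linalg.inv(Sigma))
print("all gamma-matrix identities verified")
```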
For the rules concerning Hermitian conjugation we find \begin{equation}\label{eq:herm-conj-prop-5dim-gamma} \begin{aligned} &\check{\gamma}_m^\dagger \phantom{{}_n}=+\check{\gamma}^0\check{\gamma}_m\check{\gamma}^0\,, \qquad &&\hat{\gamma}_m^\dagger \phantom{{}_n}=+\hat{\gamma}_m\,, \\ &\check{\gamma}_{mn}^\dagger=+\check{\gamma}^0\check{\gamma}_{mn}\check{\gamma}^0\,, \qquad &&\hat{\gamma}_{mn}^\dagger=-\hat{\gamma}_{mn}\,. \end{aligned} \end{equation} With these rules we find useful formulas to take the bar of some expressions \begin{equation} \begin{aligned} & ((\check{\gamma}_m \otimes \mathbf{1}_4) \theta_I)^\dagger (\check{\gamma}^0 \otimes \mathbf{1}_4) = - \bar{\theta}_I (\check{\gamma}_m \otimes \mathbf{1}_4) , \\ & ((\mathbf{1}_4 \otimes \hat{\gamma}_m) \theta_I)^\dagger (\check{\gamma}^0\otimes \mathbf{1}_4) = + \bar{\theta}_I (\mathbf{1}_4 \otimes \hat{\gamma}_m ) , \end{aligned} \end{equation} \begin{equation} \begin{aligned} & ((\check{\gamma}_{mn}\otimes \mathbf{1}_4) \theta_I)^\dagger (\check{\gamma}^0\otimes \mathbf{1}_4) = - \bar{\theta}_I (\check{\gamma}_{mn} \otimes \mathbf{1}_4) , \\ & ((\mathbf{1}_4 \otimes \hat{\gamma}_{mn}) \theta_I)^\dagger (\check{\gamma}^0\otimes \mathbf{1}_4) = - \bar{\theta}_I (\mathbf{1}_4 \otimes \hat{\gamma}_{mn}) . \end{aligned} \end{equation} Thanks to~\eqref{eq:symm-prop-5dim-gamma} one can also show that given two Grassmann bi-spinors $\psi_{\ul{\alpha}\ul{a}},\chi_{\ul{\alpha}\ul{a}}$ the ``Majorana-flip'' relations are \begin{equation}\label{eq:symm-gamma-otimes-gamma} \bar{\chi} \left(\check{\gamma}^{(r)}\otimes \hat{\gamma}^{(s)} \right) \psi = - t_r^\gamma t_s^\gamma \ \bar{\psi} \left(\check{\gamma}^{(r)}\otimes \hat{\gamma}^{(s)} \right) \chi.
\end{equation} Knowing this, it is easy to prove \begin{equation}\label{eq:Maj-flip} s^{IJ}\bar{\theta}_I \left(\check{\gamma}^{(r)}\otimes \hat{\gamma}^{(s)} \right) \theta_J=0 \qquad \text{ if } \left\{\begin{array}{ccc} s^{IJ} = + s^{JI}& \text{ and } & t_r^\gamma t_s^\gamma=+1 \\ s^{IJ} = - s^{JI}& \text{ and } & t_r^\gamma t_s^\gamma=-1 \\ \end{array} \right.\,. \end{equation} To conclude we also have \begin{equation} \bar{\psi} \mathcal{D} \lambda = \bar{\lambda} \mathcal{D} \psi, \qquad \bar{\psi}_I D^{IJ} \lambda_J = \bar{\lambda}_J D^{JI} \psi_I\,, \end{equation} up to a total derivative. \medskip Before multiplying the generators by the fermions $\theta$, the commutators between odd and even elements with explicit spinor indices read as \begin{equation} \begin{aligned} & [\genQind{I}{\alpha a}{}, \check{\gen{P}}_m] = - \frac{i}{2} \epsilon^{IJ} \ \genQind{J}{\nu a}{}\ {(\check{\gamma}_m)_{\ul{\nu}}}^{\ul{\alpha}}, & \qquad & [\genQind{I}{\alpha a}{}, \hat{\gen{P}}_m] = \frac{1}{2} \epsilon^{IJ} \ \genQind{J}{\alpha b}{}\ {(\hat{\gamma}_m)_{\ul{b}}}^{\ul{a}}, & \\ & [\genQind{I}{\alpha a}{}, \check{\gen{J}}_{mn}] = - \frac{1}{2} \delta^{IJ} \ \genQind{J}{\nu a}{}\ {(\check{\gamma}_{mn})_{\ul{\nu}}}^{\ul{\alpha}}, & \qquad & [\genQind{I}{\alpha a}{}, \hat{\gen{J}}_{mn}] = -\frac{1}{2} \delta^{IJ} \ \genQind{J}{\alpha b}{}\ {(\hat{\gamma}_{mn})_{\ul{b}}}^{\ul{a}}. & \end{aligned} \end{equation} The anti-commutator of two supercharges gives \begin{equation} \begin{aligned} \{\genQind{I}{\alpha a}{}, \genQind{J}{\nu b}{}\} =& \delta^{IJ} \left( i\, K^{\ul{\alpha}\ul{\lambda}} K^{\ul{a}\ul{b}} \ {(\check{\gamma}^m)_{\ul{\lambda}}}^{\ul{\nu}} \, \check{\gen{P}}_m - \, K^{\ul{\alpha}\ul{\nu}} \, K^{\ul{a}\ul{c}} {(\hat{\gamma}^m)_{\ul{c}}}^{\ul{b}} \, \hat{\gen{P}}_m -\frac{i}{2} K^{\ul{\alpha}\ul{\nu}} K^{\ul{a}\ul{b}} \mathbf{1}_8 \right) \\ - & \frac{1}{2} \epsilon^{IJ} \left( K^{\ul{\alpha}\ul{\lambda}} K^{\ul{a}\ul{b}} \ {(\check{\gamma}^{mn})_{\ul{\lambda}}}^{\ul{\nu}} \, \check{\gen{J}}_{mn} - \, K^{\ul{\alpha}\ul{\nu}} \, K^{\ul{a}\ul{c}} {(\hat{\gamma}^{mn})_{\ul{c}}}^{\ul{b}} \, \hat{\gen{J}}_{mn} \right), \end{aligned} \end{equation} where the indices $m,n$ are raised with the metric $\eta_{mn}$. For completeness we have also written the term proportional to the identity, since the supermatrices are a realisation of $\alg{su}(2,2|4)$. To obtain $\alg{psu}(2,2|4)$ one just needs to drop the term proportional to $i\mathbf{1}_8$ in the r.h.s. of the anti-commutator. Similarly, the supertrace of the product of two odd elements reads as \begin{equation} \Str[\genQind{I}{\alpha a}{}\genQind{J}{\nu b}{}] = -2 \epsilon^{IJ} K^{\ul{\alpha}\ul{\nu}} K^{\ul{a}\ul{b}}\,. \end{equation} Remembering that the spinor indices are raised and lowered with the matrix $K$, the last equation can also be written as $\Str[\genQind{I}{\alpha a}{}\genQind{J}{}{\nu b}] = -2\epsilon^{IJ} \delta^{\ul{\alpha}}_{\ul{\nu}} \delta^{\ul{a}}_{\ul{b}}$.
\newpage \section{Action of $R_\alg{g_b}$ on bosonic elements}\label{sec:useful-results-eta-def} The coefficients $\lambda$ introduced in Eq.~\eqref{eq:Rgb-action-lambda}---corresponding to the action of the operator $R_{\alg{g_b}}$ on bosonic generators---are explicitly \begin{equation}\label{eq:lambda11} \lambda_0^{\ 4} = \lambda_4^{\ 0} = \rho, \qquad \lambda_2^{\ 3} = - \lambda_3^{\ 2} = - \rho^2 \sin \zeta, \qquad \lambda_5^{\ 9} =- \lambda_9^{\ 5} = r, \qquad \lambda_7^{\ 8} = - \lambda_8^{\ 7} = r^2 \sin \xi, \end{equation} \begin{equation}\label{eq:lambda12} \begin{aligned} & \lambda_1^{01} =\lambda_2^{02} =\lambda_3^{03} =\lambda_4^{04} = \sqrt{1+\rho^2}, \qquad && \lambda_6^{56} =\lambda_7^{57} =\lambda_8^{58} =\lambda_9^{59} = -\sqrt{1-r^2}, \\ & \lambda_1^{12} =- \lambda_3^{23}= -\rho \cos \zeta, && \lambda_6^{67} =- \lambda_8^{78}= r \cos \xi,\\ & \lambda_2^{34} = - \lambda_3^{24}= -\rho \sqrt{1+\rho^2} \sin \zeta, && \lambda_7^{89} = - \lambda_8^{79}= r \sqrt{1-r^2} \sin \xi, \end{aligned} \end{equation} \begin{equation}\label{eq:lambda21} \begin{aligned} & \lambda_{01}^1 = \lambda_{02}^2 = \lambda_{03}^3 = \lambda_{04}^4 = -\sqrt{1+\rho^2}, \qquad && \lambda_{56}^6 = \lambda_{57}^7 = \lambda_{58}^8 = \lambda_{59}^9 = \sqrt{1-r^2},\\ & \lambda_{12}^1 = - \lambda_{23}^3 = -\rho \cos \zeta, && \lambda_{67}^6 = - \lambda_{78}^8 = -r \cos \xi,\\ & \lambda_{24}^3 = - \lambda_{34}^2 = \rho \sqrt{1+\rho^2} \sin \zeta, && \lambda_{79}^8 = - \lambda_{89}^7 = r \sqrt{1-r^2} \sin \xi, \end{aligned} \end{equation} \begin{equation}\label{eq:lambda22a} \begin{aligned} & \lambda_{01}^{14} = \lambda_{02}^{24} = \lambda_{03}^{34} = \lambda^{01}_{14} = \lambda^{02}_{24} = \lambda^{03}_{34} = -\rho, \qquad & \lambda_{12}^{13} = - \lambda_{13}^{12} = \sin \zeta, \qquad \\ & \lambda_{12}^{14} = -\lambda_{14}^{12}= -\lambda_{23}^{34}= \lambda_{34}^{23}= - \sqrt{1+ \rho^2} \cos \zeta, \qquad & \lambda_{24}^{34} = - \lambda_{34}^{24} = (1+\rho^2) \sin \zeta \end{aligned} 
\end{equation} \begin{equation}\label{eq:lambda22s} \begin{aligned} & \lambda_{56}^{69} = \lambda_{57}^{79} = \lambda_{58}^{89} = -\lambda^{56}_{69} = -\lambda^{57}_{79} = -\lambda^{58}_{89} = -r, \qquad & \lambda_{67}^{68} = - \lambda_{68}^{67} = \sin \xi, \qquad \\ & \lambda_{67}^{69} = -\lambda_{69}^{67}= -\lambda_{78}^{89}= \lambda_{89}^{78}= - \sqrt{1-r^2} \cos \xi, \qquad & \lambda_{79}^{89} = - \lambda_{89}^{79} = (1-r^2) \sin \xi \end{aligned} \end{equation} They satisfy the properties \begin{equation}\label{eq:app-swap-lambda} {\lambda_m}^n = - \, \eta_{mm'} \eta^{nn'} {\lambda_{n'}}^{m'}, \qquad \check{\lambda}_m^{np} = \eta_{mm'} \eta^{nn'} \eta^{pp'} \check{\lambda}^{m'}_{n'p'}, \qquad \hat{\lambda}_m^{np} = -\, \eta_{mm'} \eta^{nn'} \eta^{pp'} \hat{\lambda}^{m'}_{n'p'}, \end{equation} that are used to simplify some terms in the Lagrangian. \begin{comment} For completeness we give also the coefficients $w_m^{np}$ defined in~\eqref{eq:action-Oinv0-P} corresponding to the action of $\op^{\text{inv}}_{(0)}$ \begin{equation}\label{eq:w12a} \begin{aligned} & w^{04}_0 = \varkappa^2 \frac{\rho \sqrt{1+\rho^2}}{1-\varkappa^2 \rho^2}, \\ & w^{12}_1 = -\varkappa \rho \cos \zeta, \ & w^{01}_1 = \varkappa \sqrt{1+\rho^2}, \\ & w^{02}_2 = \varkappa \frac{\sqrt{1+\rho^2}}{1+\varkappa^2 \rho^4 \sin^2 \zeta}, \ & w^{03}_2 = -\varkappa^2 \frac{\rho^2 \sqrt{1+\rho^2} \sin \zeta }{1+\varkappa^2 \rho^4 \sin^2 \zeta}, \\ & w^{23}_2 = -\varkappa^2 \frac{\rho^3 \sin \zeta \cos \zeta}{1+\varkappa^2 \rho^4 \sin^2 \zeta}, \ & w^{24}_2 = -\varkappa^2 \frac{\rho^3 \sqrt{1+\rho^2} \sin^2 \zeta }{1+\varkappa^2 \rho^4 \sin^2 \zeta}, \ & w^{34}_2 = -\varkappa \frac{\rho \sqrt{1+\rho^2} \sin \zeta }{1+\varkappa^2 \rho^4 \sin^2 \zeta}, \ & \\ & w^{02}_3 = \varkappa^2 \frac{\rho^2 \sqrt{1+\rho^2} \sin \zeta }{1+\varkappa^2 \rho^4 \sin^2 \zeta}, \ & w^{03}_3 = \varkappa \frac{\sqrt{1+\rho^2}}{1+\varkappa^2 \rho^4 \sin^2 \zeta}, \\ & w^{23}_3 = \varkappa \frac{\rho \cos 
\zeta}{1+\varkappa^2 \rho^4 \sin^2 \zeta}, \ & w^{24}_3 = \varkappa \frac{\rho \sqrt{1+\rho^2} \sin \zeta }{1+\varkappa^2 \rho^4 \sin^2 \zeta}, \ & w^{34}_3 = -\varkappa^2 \frac{\rho^3 \sqrt{1+\rho^2} \sin^2 \zeta }{1+\varkappa^2 \rho^4 \sin^2 \zeta}, \ & \\ & w^{04}_4 = \varkappa \frac{\sqrt{1+\rho^2}}{1-\varkappa^2 \rho^2}, \\ \end{aligned} \end{equation} \begin{equation}\label{eq:w12s} \begin{aligned} & w^{59}_5 = - \varkappa^2 \frac{r \sqrt{1-r^2}}{1+\varkappa^2 r^2}, \\ & w^{67}_6 = \varkappa r \cos \xi, \ & w^{56}_6 = -\varkappa \sqrt{1-r^2}, \\ & w^{57}_7 = -\varkappa \frac{\sqrt{1-r^2}}{1+\varkappa^2 r^4 \sin^2 \xi}, \ & w^{58}_7 = -\varkappa^2 \frac{r^2 \sqrt{1-r^2} \sin \xi }{1+\varkappa^2 r^4 \sin^2 \xi}, \\ & w^{78}_7 = -\varkappa^2 \frac{r^3 \sin \xi \cos \xi}{1+\varkappa^2 r^4 \sin^2 \xi}, \ & w^{79}_7 = -\varkappa^2 \frac{r^3 \sqrt{1-r^2} \sin^2 \xi }{1+\varkappa^2 r^4 \sin^2 \xi}, \ & w^{89}_7 = \varkappa \frac{r \sqrt{1-r^2} \sin \xi }{1+\varkappa^2 r^4 \sin^2 \xi}, \ & \\ & w^{57}_8 = \varkappa^2 \frac{r^2 \sqrt{1-r^2} \sin \xi }{1+\varkappa^2 r^4 \sin^2 \xi}, \ & w^{58}_8 = -\varkappa \frac{\sqrt{1-r^2}}{1+\varkappa^2 r^4 \sin^2 \xi}, \\ & w^{78}_8 = -\varkappa \frac{r \cos \xi}{1+\varkappa^2 r^4 \sin^2 \xi}, \ & w^{79}_8 = -\varkappa \frac{r \sqrt{1-r^2} \sin \xi }{1+\varkappa^2 r^4 \sin^2 \xi}, \ & w^{89}_8 = -\varkappa^2 \frac{r^3 \sqrt{1-r^2} \sin^2 \xi }{1+\varkappa^2 r^4 \sin^2 \xi}, \ & \\ & w^{59}_9 = -\varkappa \frac{\sqrt{1-r^2}}{1+\varkappa^2 r^2}, \\ \end{aligned} \end{equation} \end{comment} \section{The contribution $\{101\}$ to the fermionic Lagrangian}\label{app:der-Lagr-101} In this Appendix we show how to write $L_{\{101\}}$ in the form presented in~\eqref{eq:Lagr-101}. 
It is easy to see that the insertion of $\op^{\text{inv}}_{(0)}$ between two odd currents does not change the fact that the expression is anti-symmetric in $\alpha,\beta$, and we have \begin{equation}\label{eq:orig-lagr-101} L_{\{101\}} = - \frac{\tilde{g}}{2} \epsilon^{\alpha\beta} \left( -\sigma_1^{IK} + \frac{\varkappa}{1+\sqrt{1+\varkappa^2}} \delta^{IK} \right) (D^{IJ}_\alpha \theta_J)^\dagger \boldsymbol{\gamma}^0 D^{KL}_\beta \theta_L . \end{equation} The above contribution contains terms quadratic in $\partial \theta$, a feature that does not match the generic type IIB action~\eqref{eq:IIB-action-theta2}. These terms remain even when sending the deformation parameter to $0$. This is not a problem, since these terms are of the form $\epsilon^{\alpha\beta} s^{IK} \partial_\alpha \bar{\theta}_I \partial_\beta \theta_K$, where $s^{IK}$ is a generic tensor symmetric in the two indices. Although not vanishing, they can be rewritten and traded for a total derivative $\epsilon^{\alpha\beta} s^{IK} \partial_\alpha \bar{\theta}_I \partial_\beta \theta_K = \partial_\alpha(\epsilon^{\alpha\beta} s^{IK} \bar{\theta}_I \partial_\beta \theta_K)$, using $\epsilon^{\alpha\beta}\partial_\alpha\partial_\beta=0$. The unwanted terms then do not contribute to the action.
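The total-derivative manipulation above only uses the antisymmetry of $\epsilon^{\alpha\beta}$. As a sketch, the analogous identity for commuting functions of the worldsheet coordinates can be checked with sympy (the Grassmann-odd case goes through in the same way, since only $\epsilon^{\alpha\beta}\partial_\alpha\partial_\beta=0$ is needed):

```python
# Check of the total-derivative identity used above,
#   eps^{ab} d_a F d_b G = d_a (eps^{ab} F d_b G),
# which relies only on eps^{ab} d_a d_b G = 0.  Sketch with commuting
# functions F, G of (tau, sigma); the Grassmann case works identically
# since only the antisymmetry of eps enters.
import sympy as sp

tau, sigma = sp.symbols('tau sigma')
F = sp.Function('F')(tau, sigma)
G = sp.Function('G')(tau, sigma)

coords = (tau, sigma)
eps = {(0, 1): 1, (1, 0): -1, (0, 0): 0, (1, 1): 0}   # eps^{tau sigma} = 1

lhs = sum(eps[a, b] * sp.diff(F, coords[a]) * sp.diff(G, coords[b])
          for a in (0, 1) for b in (0, 1))
rhs = sum(sp.diff(eps[a, b] * F * sp.diff(G, coords[b]), coords[a])
          for a in (0, 1) for b in (0, 1))
assert sp.simplify(lhs - rhs) == 0
print("eps^{ab} dF dG is a total derivative")
```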
First we note that \begin{equation} (D^{IJ}_\alpha \theta_J)^\dagger \boldsymbol{\gamma}^0 = \delta^{IJ} \left(\partial_\alpha \bar{\theta}_J + \frac{1}{4} \bar{\theta}_J \omega^{mn}_\alpha \boldsymbol{\gamma}_{mn} \right) + \frac{i}{2} \epsilon^{IJ} \bar{\theta}_J e^m_\alpha \boldsymbol{\gamma}_m , \end{equation} and using this we show that the contribution to the Lagrangian is \begin{equation} \begin{aligned} L_{\{101\}} &= - \frac{\tilde{g}}{2} \epsilon^{\alpha\beta} \left( \sigma_1^{IK}- \frac{\varkappa}{1+\sqrt{1+\varkappa^2}} \delta^{IK} \right) \bar{\theta}_J D^{JI}_\alpha D^{KL}_\beta \theta_L \\ &+\partial_\alpha \left(\frac{\tilde{g}}{2} \epsilon^{\alpha\beta} \left( \sigma_1^{IK}- \frac{\varkappa}{1+\sqrt{1+\varkappa^2}} \delta^{IK} \right) \bar{\theta}_J D^{KL}_\beta \theta_L \right). \end{aligned} \end{equation} The last term is the total derivative that we discard. The result is only na\"ively quadratic in $D^{IJ}$. To show this, we divide the computation into three terms \begin{equation} \epsilon^{\alpha\beta} s^{IK} \bar{\theta}_L D^{LK}_\alpha D^{IJ}_\beta \theta_J= \text{WZ}_1 + \text{WZ}_2 + \text{WZ}_3 , \end{equation} where the object $s^{IK}$ is introduced to keep the computation as general as possible; we only assume that it is symmetric in the indices $I,K$.
For each of the terms we then get \begin{equation} \begin{aligned} \text{WZ}_1 & \equiv \epsilon^{\alpha\beta} s^{IK} \bar{\theta}_L \mathcal{D}^{LK}_\alpha \mathcal{D}^{IJ}_\beta \theta_J \\ & = -\frac{1}{4} \epsilon^{\alpha\beta} s^{JL} \bar{\theta}_L e^m_\alpha e^n_\beta \boldsymbol{\gamma}_{m} \boldsymbol{\gamma}_{n} \theta_J , \\ \text{WZ}_2 & \equiv \frac{i}{2} \epsilon^{\alpha\beta} s^{IK} \bar{\theta}_L \left( \epsilon^{IJ} \mathcal{D}^{LK}_\alpha (e^n_\beta \boldsymbol{\gamma}_n \theta_J) + \epsilon^{LK} e^m_\alpha \boldsymbol{\gamma}_m \mathcal{D}^{IJ}_\beta \theta_J \right) \\ & = + i \epsilon^{\alpha\beta} s^{IK} \epsilon^{JI} \bar{\theta}_J e^m_\alpha \boldsymbol{\gamma}_m \mathcal{D}^{KL}_\beta \theta_L, \\ \text{WZ}_3 & \equiv -\frac{1}{4} \epsilon^{\alpha\beta} s^{IK} \epsilon^{LK} \epsilon^{IJ} e^m_\alpha e^n_\beta \bar{\theta}_L \boldsymbol{\gamma}_m \boldsymbol{\gamma}_n \theta_J , \end{aligned} \end{equation} where we used~\eqref{eq:d-veilbein},\eqref{eq:d-spin-conn} and the fact that the covariant derivative $\mathcal{D}$ on the vielbein is zero \begin{equation} \epsilon^{\alpha\beta} \mathcal{D}^{IJ}_\alpha (e^m_\beta \boldsymbol{\gamma}_m \theta ) = \epsilon^{\alpha\beta} e^m_\beta \boldsymbol{\gamma}_m \mathcal{D}^{IJ}_\alpha \theta . \end{equation} The final result for the deformed case is \begin{equation} \begin{aligned} L_{\{101\}} &= - \frac{\tilde{g}}{2} \epsilon^{\alpha\beta} \bar{\theta}_L \, i \, e^m_\alpha \boldsymbol{\gamma}_m \left( \sigma_3^{LK} D^{KJ}_\beta \theta_J -\frac{\varkappa}{1+\sqrt{1+\varkappa^2}} \ \epsilon^{LK} \mathcal{D}^{KJ}_\beta \theta_J \right) \\ &= - \frac{\tilde{g}}{2} \epsilon^{\alpha\beta} \bar{\theta}_I \left( \sigma_3^{IJ} -\frac{\varkappa}{1+\sqrt{1+\varkappa^2}} \ \epsilon^{IJ} \right) \, i \, e^m_\alpha \boldsymbol{\gamma}_m \mathcal{D}_\beta \theta_J + \frac{\tilde{g}}{4} \epsilon^{\alpha\beta} \bar{\theta}_I \sigma_1^{IJ} e^m_\alpha \boldsymbol{\gamma}_m e^n_\beta \boldsymbol{\gamma}_n \theta_J . 
\end{aligned} \end{equation} \newpage \section{Total Lagrangian and field redefinitions}\label{app:total-lagr-field-red} For convenience, in this appendix we write down explicitly the Lagrangian that is obtained after the field redefinitions~\eqref{eq:red-fer-2x2-sp} and~\eqref{eq:red-bos} have been performed. The bosonic-dependent rotation of the fermions~\eqref{eq:red-ferm-Lor-as} has not been implemented yet. The total Lagrangian can be written as the sum of the contribution with the worldsheet metric $\gamma^{\alpha\beta}$ and the contribution with $\epsilon^{\alpha\beta}$: $\mathcal{L}^{\gamma}+\mathcal{L}^{\epsilon}$. The first of these contributions reads \begin{equation}\label{eq:lagr-gamma-no-F-red} \begin{aligned} \mathcal{L}^{\gamma} = & \frac{\tilde{g}}{2} \ \gamma^{\alpha\beta} \bar{\theta}_I \Bigg[ - \frac{i}{2} \delta^{IJ} \boldsymbol{\gamma}_n -\frac{i}{2} \varkappa \sigma_3^{IJ} {\lambda_{n}}^{p} \boldsymbol{\gamma}_{p} \Bigg] ({k^n}_{m}+{k_{m}}^n) e^m_\alpha \partial_\beta \theta_J \\ & - \tilde{g} \gamma^{\alpha\beta} \left( - \partial_\alpha X^M \ \bar{\theta}_I \ \widetilde{G}_{MN} \left( \partial_\beta f^N_{IJ} \right) \ \theta_J - \frac{1}{2} \partial_\alpha X^M \partial_\beta X^N \partial_P \widetilde{G}_{MN} \ \bar{\theta}_I \, f^P_{IJ} \theta_J \right) \\ &+ \frac{\tilde{g}}{4} \gamma^{\alpha\beta} (k^p_{\ q} +{k_{q}}^{p} )e^q_{\alpha} \ \bar{\theta}_I \Bigg[ \frac{i}{4} \delta^{IJ} \boldsymbol{\gamma}_p \omega^{rs}_\beta\boldsymbol{\gamma}_{rs} \\ & +\frac{1}{8} \left( -\varkappa \sigma_1^{IJ} -(-1+\sqrt{1+\varkappa^2} ) \delta^{IJ} \right) \lambda_{p}^{mn} \boldsymbol{\gamma}_{mn} \ \omega^{rs}_\beta \boldsymbol{\gamma}_{rs} \\ & - \frac{1}{2}\left( (-1-2\varkappa^2+\sqrt{1+\varkappa^2})\delta^{IJ}-\varkappa(-1+2\sqrt{1+\varkappa^2}) \sigma_1^{IJ} \right) \ {\lambda_p}^n\boldsymbol{\gamma}_n e^r_\beta \boldsymbol{\gamma}_r \ \\ & +\frac{i}{4} (\varkappa \sigma_3^{IJ} - (-1+\sqrt{1+\varkappa^2})\epsilon^{IJ}) \ {\lambda_p}^n \boldsymbol{\gamma}_n
\left( \omega^{rs}_\beta\boldsymbol{\gamma}_{rs}\right) \\ & +\frac{1}{2} (\varkappa \sigma_3^{IJ}+\sqrt{1+\varkappa^2}\epsilon^{IJ}) \boldsymbol{\gamma}_p e^r_\beta \boldsymbol{\gamma}_r \\ & -\frac{i}{4} \left( \varkappa \sigma_3^{IJ} + (-1+ \sqrt{1+\varkappa^2})\epsilon^{IJ} \right) \lambda_{p}^{mn}\boldsymbol{\gamma}_{mn} e^r_\beta \boldsymbol{\gamma}_r \Bigg] \theta_J \\ & + \frac{\tilde{g}}{8} \gamma^{\alpha\beta} \varkappa e^v_\alpha e^m_\beta \, {k^{u}}_v {k_m}^n \, \bar{\theta}_I \\ \Bigg[ & 2 (\sqrt{1+\varkappa^2}\delta^{IJ}+\varkappa \sigma_1^{IJ}) \left( \boldsymbol{\gamma}_u\left(\boldsymbol{\gamma}_n +\frac{i}{4} \lambda_n^{pq} \boldsymbol{\gamma}_{pq} \right) - \frac{i}{4} \lambda^{pq}_{u}\boldsymbol{\gamma}_{pq} \boldsymbol{\gamma}_n\right) \\ & + \epsilon^{IJ} \left( \boldsymbol{\gamma}_u {\lambda_n}^p \boldsymbol{\gamma}_p - {\lambda_u}^p\boldsymbol{\gamma}_p \boldsymbol{\gamma}_n \right) \\ &+\left(-(-1+\sqrt{1+\varkappa^2}) \delta^{IJ} -\varkappa \sigma_1^{KI}\right) \left( \boldsymbol{\gamma}_u -\frac{i}{2} \boldsymbol{\gamma}_{pq} \lambda_u^{pq} \right) \left(\boldsymbol{\gamma}_n +\frac{i}{2} \lambda_n^{rs} \boldsymbol{\gamma}_{rs}\right) \\ & + \left((1+2\varkappa^2-\sqrt{1+\varkappa^2}) \delta^{IJ} -\varkappa(1-2\sqrt{1+\varkappa^2}) \sigma_1^{IJ}\right) {\lambda_u}^p\boldsymbol{\gamma}_p {\lambda_n}^r \boldsymbol{\gamma}_r \bigg) \\ & +\left( \varkappa \sigma_3^{IJ}-(-1+\sqrt{1+\varkappa^2}) \epsilon^{IJ} \right) {\lambda_u}^p \boldsymbol{\gamma}_p\left(\boldsymbol{\gamma}_n +\frac{i}{2} \lambda_n^{rs} \boldsymbol{\gamma}_{rs}\right) \\ & + \left( \varkappa \sigma_3^{IJ}+ (-1+\sqrt{1+\varkappa^2}) \epsilon^{IJ} \right)\left( \boldsymbol{\gamma}_u -\frac{i}{2} \boldsymbol{\gamma}_{pq} \lambda_u^{pq} \right) {\lambda_n}^r \boldsymbol{\gamma}_r \Bigg] \theta_J. 
\end{aligned} \end{equation} The WZ contribution reads as \begin{equation}\label{eq:lagr-epsilon-no-F-red} \begin{aligned} \mathcal{L}^{\epsilon} = & - \frac{\tilde{g}}{2} \epsilon^{\alpha\beta} \bar{\theta}_I \sigma_3^{IJ} \, i \, e^m_\alpha \boldsymbol{\gamma}_m \partial_\beta \theta_J \\ & - \frac{\tilde{g}}{2} \ \epsilon^{\alpha\beta} \bar{\theta}_I \Bigg[ - \frac{i}{2} \delta^{IJ} \boldsymbol{\gamma}_n -\frac{i}{2} \varkappa \sigma_3^{IJ} {\lambda_{n}}^{p} \boldsymbol{\gamma}_{p} \Bigg] ({k^n}_{m}-{k_{m}}^n) e^m_\alpha \partial_\beta \theta_J \\ & - \tilde{g} \epsilon^{\alpha\beta} \left( + \partial_\alpha X^M \ \bar{\theta}_I \ \widetilde{B}_{MN} \left( \partial_\beta f^N_{IJ} \right) \ \theta_J + \frac{1}{2} \partial_\alpha X^M \partial_\beta X^N \partial_P \widetilde{B}_{MN} \ \bar{\theta}_I \, f^P_{IJ} \theta_J \right) \\ &-\frac{\tilde{g}}{4} \epsilon^{\alpha\beta} \bar{\theta}_I \frac{-1+\sqrt{1+\varkappa^2}}{\varkappa} \epsilon^{IJ} \partial_\alpha X^M \left(\partial_\beta e^m_M \right) i\boldsymbol{\gamma}_m \theta_J \\ & - \frac{\tilde{g}}{8} \epsilon^{\alpha\beta} \bar{\theta}_I \left(- \sigma_3^{IJ} +\frac{\varkappa}{1+\sqrt{1+\varkappa^2}} \ \epsilon^{IJ} \right) \, i \, e^m_\alpha \boldsymbol{\gamma}_m \omega^{np}_\beta \boldsymbol{\gamma}_{np} \theta_J \\ &+ \frac{\tilde{g}}{4} \epsilon^{\alpha\beta} \bar{\theta}_I \left( \varkappa \delta^{IJ} +\sqrt{1+\varkappa^2} \sigma_1^{IJ} \right) e^m_\alpha \boldsymbol{\gamma}_m e^n_\beta \boldsymbol{\gamma}_n \theta_J \\ &- \frac{\tilde{g}}{4} \epsilon^{\alpha\beta} (k^p_{\ q} - {k_{q}}^{p} )e^q_{\alpha} \ \bar{\theta}_I \Bigg[ \frac{i}{4} \delta^{IJ} \boldsymbol{\gamma}_p \omega^{rs}_\beta\boldsymbol{\gamma}_{rs} \\ & +\frac{1}{8} \left( -\varkappa \sigma_1^{IJ} -(-1+\sqrt{1+\varkappa^2} ) \delta^{IJ} \right) \lambda_{p}^{mn} \boldsymbol{\gamma}_{mn} \ \omega^{rs}_\beta \boldsymbol{\gamma}_{rs} \\ & - \frac{1}{2}\left( (-1-2\varkappa^2+\sqrt{1+\varkappa^2})\delta^{IJ}-\varkappa(-1+2\sqrt{1+\varkappa^2}) 
\sigma_1^{IJ} \right) \ {\lambda_p}^n\boldsymbol{\gamma}_n e^r_\beta \boldsymbol{\gamma}_r \ \\ & +\frac{i}{4} (\varkappa \sigma_3^{IJ} - (-1+\sqrt{1+\varkappa^2})\epsilon^{IJ}) \ {\lambda_p}^n \boldsymbol{\gamma}_n \left( \omega^{rs}_\beta\boldsymbol{\gamma}_{rs}\right) \\ & +\frac{1}{2} (\varkappa \sigma_3^{IJ}+\sqrt{1+\varkappa^2}\epsilon^{IJ}) \boldsymbol{\gamma}_p e^r_\beta \boldsymbol{\gamma}_r \\ & -\frac{i}{4} \left( \varkappa \sigma_3^{IJ} + (-1+ \sqrt{1+\varkappa^2})\epsilon^{IJ} \right) \lambda_{p}^{mn}\boldsymbol{\gamma}_{mn} e^r_\beta \boldsymbol{\gamma}_r \Bigg] \theta_J \\ & - \frac{\tilde{g}}{8} \epsilon^{\alpha\beta} \varkappa e^v_\alpha e^m_\beta \, {k^{u}}_v {k_m}^n \, \bar{\theta}_I \\ \Bigg[ & 2 (\sqrt{1+\varkappa^2}\delta^{IJ}+\varkappa \sigma_1^{IJ}) \left( \boldsymbol{\gamma}_u\left(\boldsymbol{\gamma}_n +\frac{i}{4} \lambda_n^{pq} \boldsymbol{\gamma}_{pq} \right) - \frac{i}{4} \lambda^{pq}_{u}\boldsymbol{\gamma}_{pq} \boldsymbol{\gamma}_n\right) \\ & + \epsilon^{IJ} \left( \boldsymbol{\gamma}_u {\lambda_n}^p \boldsymbol{\gamma}_p - {\lambda_u}^p\boldsymbol{\gamma}_p \boldsymbol{\gamma}_n \right) \\ &+\left(-(-1+\sqrt{1+\varkappa^2}) \delta^{IJ} -\varkappa \sigma_1^{KI}\right) \left( \boldsymbol{\gamma}_u -\frac{i}{2} \boldsymbol{\gamma}_{pq} \lambda_u^{pq} \right) \left(\boldsymbol{\gamma}_n +\frac{i}{2} \lambda_n^{rs} \boldsymbol{\gamma}_{rs}\right) \\ & + \left((1+2\varkappa^2-\sqrt{1+\varkappa^2}) \delta^{IJ} -\varkappa(1-2\sqrt{1+\varkappa^2}) \sigma_1^{IJ}\right) {\lambda_u}^p\boldsymbol{\gamma}_p {\lambda_n}^r \boldsymbol{\gamma}_r \bigg) \\ & +\left( \varkappa \sigma_3^{IJ}-(-1+\sqrt{1+\varkappa^2}) \epsilon^{IJ} \right) {\lambda_u}^p \boldsymbol{\gamma}_p\left(\boldsymbol{\gamma}_n +\frac{i}{2} \lambda_n^{rs} \boldsymbol{\gamma}_{rs}\right) \\ & + \left( \varkappa \sigma_3^{IJ}+ (-1+\sqrt{1+\varkappa^2}) \epsilon^{IJ} \right)\left( \boldsymbol{\gamma}_u -\frac{i}{2} \boldsymbol{\gamma}_{pq} \lambda_u^{pq} \right) {\lambda_n}^r 
\boldsymbol{\gamma}_r \Bigg] \theta_J . \end{aligned} \end{equation} The function $f^M_{IJ}(X)$ is defined in~\eqref{eq:def-shift-bos-f}. To implement the bosonic-dependent redefinition on the fermions~\eqref{eq:red-ferm-Lor-as}, we find it more efficient to use~\eqref{eq:transf-rule-gamma-ferm-rot} and write its action on the gamma matrices $\boldsymbol{\gamma}$. We have for example the rule \begin{equation} \bar{\theta}_K b^m \gamma_m \theta_I \to \bar{\theta}_K \bar{U}_{(K)} b^m \gamma_m U_{(I)} \theta_I = \bar{\theta}_K b^m (\Lambda_{(K)})_m^{\ n} \gamma_n \bar{U}_{(K)} U_{(I)} \theta_I \,, \end{equation} where we have inserted for convenience the identity $U_{(K)} \bar{U}_{(K)} = \gen{1}$. To give a couple of explicit examples, this means \begin{equation} \begin{aligned} \bar{\theta}_1 b^m \gamma_m \theta_1 &\to \bar{\theta}_1 b^m {(\Lambda_1)_m}^n \gamma_n \theta_1 , \\ \bar{\theta}_2 b^m \gamma_m \theta_1 &\to \bar{\theta}_2 b^m {(\Lambda_2)_m}^n \gamma_n \bar{U}_{(2)} U_{(1)} \theta_1 . \end{aligned} \end{equation} The terms with derivatives on fermions become (here $I$ is kept fixed) \begin{equation} \bar{\theta}_I b^m \gamma_m \partial_\beta \theta_I \to \bar{\theta}_I b^m {(\Lambda_{(I)})_m}^n \gamma_n \partial_\beta \theta_I +\bar{\theta}_I b^m {(\Lambda_{(I)})_m}^n \gamma_n (\bar{U}_{(I)} \partial_\beta U_{(I)}) \theta_I . \end{equation} The second of these terms will contribute to the coupling to the spin connection and the B-field.
To compute these quantities it is useful to know the action of the derivative on the matrix $U_{(I)}$ \begin{equation} \begin{aligned} \bar{U}_{(I)}^{\alg{a}} {\rm d} U_{(I)}^{\alg{a}} &=\sigma_{3II}\, \frac{\varkappa}{2} \left(\frac{ \rho (2 \sin \zeta {\rm d}\rho +\rho {\rm d}\zeta \cos \zeta)}{1+\varkappa ^2 \rho ^4 \sin ^2\zeta} \check{\gamma}_{23}+ \frac{ {\rm d}\rho }{1- \varkappa ^2 \rho ^2} \check{\gamma}_{04} \right), \\ \bar{U}_{(I)}^{\alg{s}} {\rm d} U_{(I)}^{\alg{s}} &= \sigma_{3II}\, \frac{\varkappa}{2}\left( -\frac{r (2 \sin \xi {\rm d} r+r {\rm d} \xi \cos \xi )}{1+\varkappa ^2 r^4 \sin ^2\xi } \hat{\gamma}_{78} -\frac{{\rm d} r}{1+\varkappa ^2 r^2} \hat{\gamma}_{59} \right), \end{aligned} \end{equation} and also the results for the multiplication of matrices $U_{(I)}$ \begin{equation} \begin{aligned} \bar{U}^{\alg{a}}_{(I)} U^{\alg{a}}_{(J)}&= \delta_{IJ}\mathbf{1}_4+ \frac{\sigma_{1IJ}(\mathbf{1}_4 -i \varkappa ^2 \rho ^3 \sin \zeta \, \check{\gamma}_1) -\epsilon_{IJ} \varkappa (\rho^2 \sin \zeta \, \check{\gamma}_{23}+ \rho \, \check{\gamma}_{04}) }{\sqrt{1-\varkappa ^2 \rho ^2} \sqrt{1+\varkappa ^2 \rho ^4\sin ^2\zeta }} , \\ \bar{U}^{\alg{s}}_{(I)} U^{\alg{s}}_{(J)}&= \delta_{IJ}\mathbf{1}_4+ \frac{\sigma_{1IJ}(\mathbf{1}_4 - \varkappa ^2 r ^3 \sin \xi \, \hat{\gamma}_6) +\epsilon_{IJ}\varkappa ( r^2 \sin \xi \, \hat{\gamma}_{78}+ r \, \hat{\gamma}_{59}) }{\sqrt{1+\varkappa ^2 r ^2} \sqrt{1+\varkappa ^2 r ^4\sin ^2\xi }} . \end{aligned} \end{equation} Let us also mention that it is sometimes useful to use the redefined coordinates $\rho',\zeta',r',\xi'$ given by \begin{equation} \rho = \varkappa^{-1} \sin \rho', \qquad \sin \zeta = \varkappa \frac{\sinh \zeta'}{\sin^2 \rho'}, \qquad r = \varkappa^{-1} \sinh r', \qquad \sin \xi = \varkappa \frac{\sinh \xi'}{\sinh^2 r'}, \end{equation} which help to simplify some expressions.
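In the primed coordinates the recurring deformation factors collapse to perfect squares, namely $1-\varkappa^2\rho^2=\cos^2\rho'$, $1+\varkappa^2\rho^4\sin^2\zeta=\cosh^2\zeta'$, $1+\varkappa^2 r^2=\cosh^2 r'$ and $1+\varkappa^2 r^4\sin^2\xi=\cosh^2\xi'$. These identities are elementary, and can be verified symbolically with a short sympy check (added here as a convenience, not part of the original computation):

```python
import sympy as sp

# Deformation parameter and primed coordinates (taken positive for simplicity)
k, rho_p, zeta_p, r_p, xi_p = sp.symbols("k rho_p zeta_p r_p xi_p", positive=True)

# Coordinate redefinitions from the text
rho = sp.sin(rho_p) / k                            # rho = varkappa^{-1} sin(rho')
sin_zeta = k * sp.sinh(zeta_p) / sp.sin(rho_p)**2  # sin(zeta) = varkappa sinh(zeta')/sin^2(rho')
r = sp.sinh(r_p) / k                               # r = varkappa^{-1} sinh(r')
sin_xi = k * sp.sinh(xi_p) / sp.sinh(r_p)**2       # sin(xi) = varkappa sinh(xi')/sinh^2(r')

# Each recurring deformation factor minus its claimed perfect-square form
checks = [
    sp.simplify(1 - k**2 * rho**2 - sp.cos(rho_p)**2),
    sp.simplify(1 + k**2 * rho**4 * sin_zeta**2 - sp.cosh(zeta_p)**2),
    sp.simplify(1 + k**2 * r**2 - sp.cosh(r_p)**2),
    sp.simplify(1 + k**2 * r**4 * sin_xi**2 - sp.cosh(xi_p)**2),
]
print(checks)  # [0, 0, 0, 0]
```

All four differences simplify to zero, confirming the perfect-square structure of the denominators appearing in $\bar{U}_{(I)}{\rm d}U_{(I)}$ and $\bar{U}_{(I)}U_{(J)}$ above.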
\section{Standard kappa-symmetry}\label{app:standard-kappa-sym} In this Appendix we compute explicitly the variation of the bosonic and fermionic fields in the deformed model. We show that using~\eqref{eq:eps-op-rho-kappa} their variation is not the standard one~\eqref{eq:kappa-var-32}. However, after implementing the field redefinitions of Section~\ref{sec:canonical-form}---needed to set the terms with derivatives on fermions in the canonical form---they do become standard. We actually prefer to impose the equation \begin{equation}\label{eq:kappa-var} \mathcal{O}^{-1}(\alg{g}^{-1} \delta_\kappa \alg{g}) = \varrho \,, \end{equation} coming from~\eqref{eq:eps-op-rho-kappa}, where we also used $\varepsilon\equiv \alg{g}^{-1} \delta_\kappa \alg{g}$. The reason is that the computation is then formally the same as the one done in Section~\ref{sec:inverse-op} to derive the results needed to compute the deformed Lagrangian. We just need to make the substitution $\partial_\alpha \to - \delta_{\kappa}$. Let us express the result as a linear combination of generators $\gen{P}_m$ and $\gen{Q}^I$ \begin{equation} \mathcal{O}^{-1}(\alg{g}^{-1} \delta_\kappa \alg{g}) = j^m_{\delta_{\kappa}} \gen{P}_m + \gen{Q}^I j_{\delta_{\kappa},I}+j^{mn}_{\delta_{\kappa}} \gen{J}_{mn}\,. \end{equation} The contributions of the generators $\gen{J}_{mn}$ will not be important for the discussion. The coefficients $j^m_{\delta_{\kappa}} , j_{\delta_{\kappa},I}$ are the quantities that we need to compute explicitly to discover the form of the action of the kappa-symmetry variation on the fields. Because $\varrho$---appearing on the right-hand side of~\eqref{eq:kappa-var}---belongs to the odd part of the algebra, $\varrho= \gen{Q}^I \psi_I$, we get the equations \begin{equation} j^m_{\delta_{\kappa}}=0, \qquad j_{\delta_{\kappa},I}=\psi_I. \end{equation} We may expand the above equations in powers of $\theta$.
We actually stop at leading order in the expansion, meaning that we will compute \begin{equation}\label{eq:order-kappa-var} \begin{aligned} & j^m_{\delta_{\kappa}}\sim \left[\# +\mathcal{O}(\theta^2)\right]\delta_{\kappa}X+ \left[\# \theta+\mathcal{O}(\theta^3)\right]\delta_{\kappa}\theta, \\ & j_{\delta_{\kappa},I}\sim \left[\# +\mathcal{O}(\theta^2)\right]\delta_{\kappa}\theta, \qquad \psi \sim \left[\# +\mathcal{O}(\theta^2)\right] \kappa, \end{aligned} \end{equation} where $\#$ stands for functions of the bosons, in such a way that upon solving the equations we get $\delta_{\kappa}X \sim \# \theta \kappa$ and $\delta_{\kappa}\theta \sim \# \kappa$. Let us start by computing $j^m_{\delta_{\kappa}}$. Because of the deformation, the term inside the parentheses proportional to $\gen{Q}^I$ contributes \begin{equation} \begin{aligned} j^m_{\delta_{\kappa}} \gen{P}_m &= -P^{(2)}\circ \frac{1}{\mathbf{1} - \eta R_{\alg{g}} \circ d} \left[ \left( \delta_{\kappa}X^M e^m_M + \frac{i}{2} \bar{\theta}_I \boldsymbol{\gamma}^m \delta_{\kappa} \theta_I + \cdots \right) \gen{P}_m -\gen{Q}^I \delta_{\kappa} \theta_I + \cdots\right] \\ &= -\delta_{\kappa}X^M e^m_M {k_m}^q \ \gen{P}_q \\ &-\frac{1}{2} \ \bar{\theta}_I \Bigg[ \delta^{IJ} i \boldsymbol{\gamma}_p + (-\varkappa \sigma_1^{IJ} +(-1+\sqrt{1+\varkappa^2})\delta^{IJ}) \left( i\boldsymbol{\gamma}_p +\frac{1}{2} \lambda^{mn}_{ p} \boldsymbol{\gamma}_{mn} \right) \\ & + i \, (\varkappa \sigma_3^{IJ} -(-1+\sqrt{1+\varkappa^2})\epsilon^{IJ}) {\lambda_p}^n \boldsymbol{\gamma}_n \Bigg] \delta_{\kappa} \theta_J \ k^{pq} \ \gen{P}_q + \cdots \end{aligned} \end{equation} Imposing the equation $j^m_{\delta_{\kappa}}=0$ and solving for $ \delta_{\kappa}X^M$ at leading order we get \begin{equation} \begin{aligned} \delta_{\kappa}X^M = - \frac{1}{2} \ \bar{\theta}_I e^{Mp} \Bigg[ &\delta^{IJ} i \boldsymbol{\gamma}_p + (-\varkappa \sigma_1^{IJ} +(-1+\sqrt{1+\varkappa^2})\delta^{IJ}) \left( i\boldsymbol{\gamma}_p +\frac{1}{2} \lambda^{mn}_{ p}
\boldsymbol{\gamma}_{mn} \right) \\ & + i \, (\varkappa \sigma_3^{IJ} -(-1+\sqrt{1+\varkappa^2})\epsilon^{IJ}) {\lambda_p}^n \boldsymbol{\gamma}_n \Bigg] \delta_{\kappa} \theta_J + \cdots. \end{aligned} \end{equation} The computation for $j_{\delta_{\kappa},I} $ gives simply \begin{equation} \begin{aligned} \gen{Q}^I j_{\delta_{\kappa},I} &= (P^{(1)}+P^{(3)}) \circ \frac{1}{\mathbf{1} - \eta R_{\alg{g}} \circ d} \left[ \gen{Q}^I \delta_{\kappa} \theta_I + \cdots\right] \\ &= \frac{1}{2}\left( (1+\sqrt{1+\varkappa^2})\ \delta^{IJ} -\varkappa \sigma_1^{IJ} \right) \gen{Q}^J \delta_{\kappa} \theta_I + \cdots. \end{aligned} \end{equation} When we compute the two projections of $\varrho$ as defined in~\eqref{eq:def-varrho-kappa-def} at leading order we can set $\theta=0$. Then we just have \begin{equation} \begin{aligned} P^{(2)} \circ \mathcal{O}^{-1} A_\beta & = P^{(2)} \circ\mathcal{O}^{-1} \left( e^m_\beta \gen{P}_m +\cdots \right) = e_{\beta m} k^{mn} \gen{P}_n, \\ P^{(2)} \circ \widetilde{\mathcal{O}}^{-1} A_\beta & = P^{(2)} \circ \widetilde{\mathcal{O}}^{-1} \left( e^m_\beta \gen{P}_m +\cdots \right) = e_{\beta m} k^{nm} \gen{P}_n, \end{aligned} \end{equation} where the second result can be obtained from the first one sending $\varkappa \to - \varkappa$. Explicitly \begin{equation} \begin{aligned} \varrho^{(1)} &=\frac{1}{2} (\gamma^{\alpha\beta} - \epsilon^{\alpha\beta}) e_{\beta m} k^{mn} \left( \gen{Q}^1 \gen{P}_n + \gen{P}_n \gen{Q}^1\right)\kappa_{\alpha 1},\\ \varrho^{(3)} &= \frac{1}{2} (\gamma^{\alpha\beta} + \epsilon^{\alpha\beta}) e_{\beta m} k^{nm} \left( \gen{Q}^2 \gen{P}_n + \gen{P}_n \gen{Q}^2\right)\kappa_{\alpha 2} , \end{aligned} \end{equation} A direct computation shows that \begin{equation} \gen{Q}^I \check{\gen{P}}_m + \check{\gen{P}}_m \gen{Q}^I = -\frac{1}{2} \gen{Q}^I \check{\boldsymbol{\gamma}}_m, \qquad \gen{Q}^I \hat{\gen{P}}_m + \hat{\gen{P}}_m \gen{Q}^I = +\frac{1}{2} \gen{Q}^I \hat{\boldsymbol{\gamma}}_m. 
\end{equation} We get \begin{equation} \begin{aligned} \varrho^{(1)} &=\gen{Q}^1 \psi_1, \qquad \psi_1=\frac{1}{4} (\gamma^{\alpha\beta} - \epsilon^{\alpha\beta}) \left( -e_{\beta m} k^{mn} \check{\boldsymbol{\gamma}}_n + e_{\beta m} k^{mn} \hat{\boldsymbol{\gamma}}_n \right)\kappa_{\alpha 1},\\ \varrho^{(3)} &=\gen{Q}^2 \psi_2, \qquad \psi_2= \frac{1}{4} (\gamma^{\alpha\beta} + \epsilon^{\alpha\beta}) \left( -e_{\beta m} k^{nm} \check{\boldsymbol{\gamma}}_n + e_{\beta m} k^{nm} \hat{\boldsymbol{\gamma}}_n \right)\kappa_{\alpha 2}, \end{aligned} \end{equation} and to conclude we can solve the equation $j_{\delta_{\kappa},I}=\psi_I$ by setting \begin{equation} \delta_{\kappa} \theta_I = \frac{1}{1+\sqrt{1+\varkappa^2}}\left( (1+\sqrt{1+\varkappa^2})\delta^{IJ} + \varkappa \sigma_1^{IJ} \right) \psi_J. \end{equation} Setting $\varkappa=0$, the formulas simplify to \begin{equation} \begin{aligned} \delta_{\kappa}X^M &= -\frac{i}{2} \ \bar{\theta}_I \delta^{IJ} e^{Mp} \boldsymbol{\gamma}_p \delta_{\kappa} \theta_J + \cdots, \\ \delta_{\kappa} \theta_I &= \psi_I, \\ \psi_1&=\frac{1}{4} (\gamma^{\alpha\beta} - \epsilon^{\alpha\beta}) \left( -e_{\beta}^m \check{\boldsymbol{\gamma}}_m + e_{\beta}^m \hat{\boldsymbol{\gamma}}_m \right)\kappa_{\alpha 1}, \\ \psi_2&= \frac{1}{4} (\gamma^{\alpha\beta} + \epsilon^{\alpha\beta}) \left( -e_{\beta}^m \check{\boldsymbol{\gamma}}_m + e_{\beta}^m \hat{\boldsymbol{\gamma}}_m \right)\kappa_{\alpha 2}, \end{aligned} \end{equation} showing that the kappa-symmetry variation is then the standard one that we expect. The results for the kappa-variations have to be modified according to the field redefinitions needed to put the Lagrangian in canonical form.
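As a quick consistency check, the matrix used above to solve for $\delta_{\kappa}\theta_I$ is indeed the inverse of the coefficient $\frac{1}{2}\left( (1+\sqrt{1+\varkappa^2})\, \delta^{IJ} -\varkappa \sigma_1^{IJ} \right)$ appearing in $j_{\delta_{\kappa},I}$. Abbreviating $s=\sqrt{1+\varkappa^2}$ and using $\sigma_1^2=\mathbf{1}$, one finds
\begin{equation}
\frac{1}{2}\Big( (1+s)\,\mathbf{1} - \varkappa\,\sigma_1 \Big)\,\frac{1}{1+s}\Big( (1+s)\,\mathbf{1} + \varkappa\,\sigma_1 \Big) = \frac{(1+s)^2 - \varkappa^2}{2(1+s)}\,\mathbf{1} = \mathbf{1}\,,
\end{equation}
since $(1+s)^2-\varkappa^2 = 1+2s+s^2-(s^2-1) = 2(1+s)$.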
When we rotate the fermions, their variation is modified as \begin{equation} \theta_I \to U_{IJ} \theta_J \implies \delta_{\kappa} \theta_I \to U_{IJ} \delta_{\kappa} \theta_J + \delta_{\kappa} U_{IJ} \theta_J, \end{equation} and since we are considering $\delta_{\kappa} \theta$ at leading order, in the following we will drop the term containing $\delta_{\kappa} U_{IJ}$. We first redefine our fermions as \begin{equation} \theta_I \to \frac{\sqrt{1+\sqrt{1+\varkappa^2}}}{\sqrt{2}} \left(\delta^{IJ} + \frac{\varkappa}{1+\sqrt{1+\varkappa^2}} \sigma_1^{IJ} \right) \theta_J , \end{equation} and we get \begin{equation} \begin{aligned} \delta_{\kappa}X^M &= - \frac{1}{2} \ \bar{\theta}_I e^{Mp} \Bigg[ \delta^{IJ} i \boldsymbol{\gamma}_p - (\varkappa \sigma_1^{IJ} +(-1+\sqrt{1+\varkappa^2})\delta^{IJ}) \frac{1}{2} \lambda^{mn}_{ p} \boldsymbol{\gamma}_{mn} \\ & + i \, (\varkappa \sigma_3^{IJ} -(-1+\sqrt{1+\varkappa^2})\epsilon^{IJ}) {\lambda_p}^n \boldsymbol{\gamma}_n \Bigg] \delta_{\kappa} \theta_J + \cdots, \\ \delta_{\kappa} \theta_I &= \sqrt{\frac{2}{1+\sqrt{1+\varkappa^2}}} \ \psi_I \, . \end{aligned} \end{equation} When we shift the bosons as $X^M \to X^M +\bar{\theta}_I f^M_{IJ} \theta_J$ their variation is modified to $\delta_{\kappa}X^M \to \delta_{\kappa}X^M +2\bar{\theta}_I f^M_{IJ} \delta_{\kappa}\theta_J +\bar{\theta}_I \delta_{\kappa}f^M_{IJ} \theta_J$. Once again, since we are considering the variation at leading order we drop the term with $\delta_{\kappa}f^M_{IJ}$.
We use the definition of the function $f^M_{IJ}$ given in~\eqref{eq:def-shift-bos-f} and we conclude that, after the shift of the bosons, their variation is \begin{equation} \begin{aligned} \delta_{\kappa}X^M &= -2\bar{\theta}_I f^M_{IJ} \delta_{\kappa}\theta_J - \frac{1}{2} \ \bar{\theta}_I e^{Mp} \Bigg[\delta^{IJ} i \boldsymbol{\gamma}_p - (\varkappa \sigma_1^{IJ} +(-1+\sqrt{1+\varkappa^2})\delta^{IJ}) \frac{1}{2} \lambda^{mn}_{ p} \boldsymbol{\gamma}_{mn} \\ & + i \, (\varkappa \sigma_3^{IJ} -(-1+\sqrt{1+\varkappa^2})\epsilon^{IJ}) {\lambda_p}^n \boldsymbol{\gamma}_n \Bigg] \delta_{\kappa} \theta_J + \cdots \\ &= - \frac{i}{2} \ \bar{\theta}_I e^{Mm} \left( \delta^{IJ} \boldsymbol{\gamma}_m + \varkappa \sigma_3^{IJ} {\lambda_m}^n \boldsymbol{\gamma}_n \right) \delta_{\kappa} \theta_J + \cdots. \end{aligned} \end{equation} The shift does not affect $\delta_{\kappa} \theta_I$ at leading order. The final result is obtained by implementing the bosonic-dependent rotation of the fermions~\eqref{eq:red-ferm-Lor-as} \begin{equation} \begin{aligned} \delta_{\kappa}X^M &= - \frac{i}{2} \ \bar{\theta}_I \bar{U}_{(I)} \ e^{Mm} \left( \delta^{IJ} \boldsymbol{\gamma}_m + \varkappa \sigma_3^{IJ} {\lambda_m}^n \boldsymbol{\gamma}_n \right) \ U_{(I)} \delta_{\kappa} \theta_J + \cdots \\ &= - \frac{i}{2} \ \bar{\theta}_I \delta^{IJ} \widetilde{e}^{Mm} \boldsymbol{\gamma}_m \delta_{\kappa} \theta_J + \cdots, \\ \delta_{\kappa} \theta_1 &= \sqrt{\frac{2}{1+\sqrt{1+\varkappa^2}}} \left( \frac{1}{4} (\gamma^{\alpha\beta} - \epsilon^{\alpha\beta}) \bar{U}_{(1)} \left( -\check{e}_{\beta m} k^{mn} \check{\boldsymbol{\gamma}}_n + \hat{e}_{\beta m} k^{mn} \hat{\boldsymbol{\gamma}}_n \right)\kappa_{\alpha 1} \right) \\ \delta_{\kappa} \theta_2 &= \sqrt{\frac{2}{1+\sqrt{1+\varkappa^2}}} \left(\frac{1}{4} (\gamma^{\alpha\beta} + \epsilon^{\alpha\beta}) \bar{U}_{(2)} \left( -\check{e}_{\beta m} k^{nm} \check{\boldsymbol{\gamma}}_n + \hat{e}_{\beta m} k^{nm} \hat{\boldsymbol{\gamma}}_n 
\right)\kappa_{\alpha 2} \right) \end{aligned} \end{equation} The variation of the bosons already appears to be related to the one of the fermions in the standard way. It actually has the same form as in the undeformed case, with a tilde added on the deformed quantities. We can achieve the same for the variation of the fermions using the fact that, for both expressions, \begin{equation} \begin{aligned} \sqrt{\frac{2}{1+\sqrt{1+\varkappa^2}}} \ \bar{U}_{(1)} \left( -\check{e}_{\beta m} k^{mn} \check{\boldsymbol{\gamma}}_n + \hat{e}_{\beta m} k^{mn} \hat{\boldsymbol{\gamma}}_n \right)\kappa_{\alpha 1} = \left( -\widetilde{e}_{\beta}^m \check{\boldsymbol{\gamma}}_m + \widetilde{e}_{\beta}^m \hat{\boldsymbol{\gamma}}_m \right) \widetilde{\kappa}_{\alpha 1} \, , \\ \sqrt{\frac{2}{1+\sqrt{1+\varkappa^2}}} \ \bar{U}_{(2)} \left( -\check{e}_{\beta m} k^{nm} \check{\boldsymbol{\gamma}}_n + \hat{e}_{\beta m} k^{nm} \hat{\boldsymbol{\gamma}}_n \right)\kappa_{\alpha 2} = \left( -\widetilde{e}_{\beta}^m \check{\boldsymbol{\gamma}}_m + \widetilde{e}_{\beta}^m \hat{\boldsymbol{\gamma}}_m \right) \widetilde{\kappa}_{\alpha 2} \, , \end{aligned} \end{equation} where we have inserted the identity $\mathbf{1}=U_{(I)}\bar{U}_{(I)}$ and defined \begin{equation}\label{eq:def-kappa-tilde-k-symm} \widetilde{\kappa}_{\alpha I} \equiv \sqrt{\frac{2}{1+\sqrt{1+\varkappa^2}}} \ \bar{U}_{(I)} \kappa_{\alpha I}.
\end{equation} To summarise we have \begin{equation}\label{eq:kappa-var-16} \begin{aligned} \delta_{\kappa}X^M &= - \frac{i}{2} \ \bar{\theta}_I \delta^{IJ} \widetilde{e}^{Mm} \boldsymbol{\gamma}_m \delta_{\kappa} \theta_J + \cdots, \\ \delta_{\kappa} \theta_I &= \widetilde{\psi}_I, \\ \widetilde{\psi}_1&=\frac{1}{4} (\gamma^{\alpha\beta} - \epsilon^{\alpha\beta}) \left( -\widetilde{e}_{\beta}^m \check{\boldsymbol{\gamma}}_m + \widetilde{e}_{\beta}^m \hat{\boldsymbol{\gamma}}_m \right) \widetilde{\kappa}_{\alpha 1}, \\ \widetilde{\psi}_2&= \frac{1}{4} (\gamma^{\alpha\beta} + \epsilon^{\alpha\beta}) \left( -\widetilde{e}_{\beta}^m \check{\boldsymbol{\gamma}}_m + \widetilde{e}_{\beta}^m \hat{\boldsymbol{\gamma}}_m \right)\widetilde{\kappa}_{\alpha 2}. \end{aligned} \end{equation} Hence, also in the deformed case the kappa-symmetry variations can be written in the standard way. We can rewrite the kappa-symmetry variations in terms of $32$-dimensional fermions $\Theta$. To do so, we need to introduce $32$-dimensional spinors $\widetilde{K}$ with the opposite chirality to $\Theta$ \begin{equation} \widetilde{K} \equiv \left( \begin{array}{c} 0 \\ 1 \end{array} \right) \otimes \widetilde{\kappa}. \end{equation} The variations above then read \begin{equation} \begin{aligned} \delta_{\kappa}X^M &= - \frac{i}{2} \ \bar{\Theta}_I \delta^{IJ} \widetilde{e}^{Mm} \Gamma_m \delta_{\kappa} \Theta_J + \cdots, \\ \delta_{\kappa} \Theta_I &= -\frac{1}{4} (\delta^{IJ} \gamma^{\alpha\beta} - \sigma_3^{IJ} \epsilon^{\alpha\beta}) \widetilde{e}_{\beta}^m \Gamma_m \widetilde{K}_{\alpha J}. \end{aligned} \end{equation} Ten-dimensional Gamma-matrices are defined in Appendix~\ref{sec:10-dim-gamma}. Let us now look at the kappa-variation of the worldsheet metric, whose expression is given in~\eqref{eq:defin-kappa-var-ws-metric}. The kappa-variation starts at first order in powers of the fermions.
Then we have to compute \begin{equation} \begin{aligned} P^{(1)}\circ \widetilde{\mathcal{O}}^{-1}( A^{\beta}_+ ) &= P^{(1)}\circ \widetilde{\mathcal{O}}^{\text{inv}}_{(0)} (- \gen{Q}^{I} \, D^{\beta IJ}_+ \theta_J) + P^{(1)}\circ \widetilde{\mathcal{O}}^{\text{inv}}_{(1)} ( e^{m\beta}_+\gen{P}_{m} )+\mathcal{O}(\theta^3)\,, \\ P^{(3)}\circ {\mathcal{O}}^{-1}( A^{\beta}_- ) &= P^{(3)}\circ \op^{\text{inv}}_{(0)} (- \gen{Q}^{I} \, D^{\beta IJ}_- \theta_J) + P^{(3)}\circ \op^{\text{inv}}_{(1)} ( e^{m\beta}_-\gen{P}_{m} )+\mathcal{O}(\theta^3)\,. \end{aligned} \end{equation} Let us start from the last line. We have \begin{equation} \begin{aligned} & P^{(3)}\circ \op^{\text{inv}}_{(0)} (- \gen{Q}^{I} \, D^{\beta IJ}_- \theta_J) = -\left(\frac{1}{2} (1+\sqrt{1+\varkappa^2}) \, \delta^{I2}- \frac{\varkappa}{2} {\sigma_1}^{I2} \, \right) \gen{Q}^{2} D^{\beta IJ}_- \theta_J \\ & P^{(3)}\circ \op^{\text{inv}}_{(1)} ( e^{m\beta}_-\gen{P}_{m} ) = -\frac{\varkappa}{4} \gen{Q}^2 \ e^{m\beta}_- {k_m}^n \ \Bigg[ \left((1+\sqrt{1+\varkappa^2})\delta^{2J} -\varkappa \sigma_1^{2J}\right) \left(i \boldsymbol{\gamma}_n - \frac{1}{2} \lambda_n^{pq} \boldsymbol{\gamma}_{pq} \right) \\ &\qquad\qquad + i \left((1+\sqrt{1+\varkappa^2}) \epsilon^{2J} + \varkappa \sigma_3^{2J}\right) {\lambda_n}^p \boldsymbol{\gamma}_p \Bigg] \theta_J\,. \end{aligned} \end{equation} For the first line we can use that $\widetilde{\mathcal{O}}^{\text{inv}}_{(0)}$ and $\op^{\text{inv}}_{(0)}$ coincide on odd elements, while on even elements their action is equivalent to sending $\varkappa\to-\varkappa$, and we can write \begin{equation} \widetilde{\mathcal{O}}^{\text{inv}}_{(0)}(\gen{Q}^I)=\op^{\text{inv}}_{(0)}(\gen{Q}^I)\,, \qquad \widetilde{\mathcal{O}}^{\text{inv}}_{(0)}(\gen{P}_m)=k^n_{\ m} \gen{P}_n +\# \gen{J}\,, \end{equation} where $k^n_{\ m}=\eta^{nn'}\eta_{mm'} k_{n'}^{\ m'}$. 
On the other hand, the action of $\widetilde{\mathcal{O}}_{(1)}$ on even elements is minus the one of $\mathcal{O}_{(1)}$ \begin{equation} \widetilde{\mathcal{O}}_{(1)}(\gen{P}_m)=-\mathcal{O}_{(1)}(\gen{P}_m)\,. \end{equation} These considerations need to be taken into account when computing the action of $\widetilde{\mathcal{O}}^{\text{inv}}_{(1)}$ on $\gen{P}_{m}$. Then we find \begin{equation} \begin{aligned} & P^{(1)}\circ \widetilde{\mathcal{O}}^{\text{inv}}_{(0)} (- \gen{Q}^{I} \, D^{\beta IJ}_+ \theta_J) = -\left(\frac{1}{2} (1+\sqrt{1+\varkappa^2}) \, \delta^{I1}- \frac{\varkappa}{2} {\sigma_1}^{I1} \, \right) \gen{Q}^{1} D^{\beta IJ}_+ \theta_J \\ & P^{(1)}\circ \widetilde{\mathcal{O}}^{\text{inv}}_{(1)} ( e^{m\beta}_+\gen{P}_{m} ) = +\frac{\varkappa}{4} \gen{Q}^1 \ e^{m\beta}_+ {k^n}_m \ \Bigg[ \left((1+\sqrt{1+\varkappa^2})\delta^{1J} -\varkappa \sigma_1^{1J}\right) \left(i \boldsymbol{\gamma}_n - \frac{1}{2} \lambda_n^{pq} \boldsymbol{\gamma}_{pq} \right) \\ &\qquad\qquad + i \left((1+\sqrt{1+\varkappa^2}) \epsilon^{1J} + \varkappa \sigma_3^{1J}\right) {\lambda_n}^p \boldsymbol{\gamma}_p \Bigg] \theta_J\,. \end{aligned} \end{equation} When computing the commutators in~\eqref{eq:defin-kappa-var-ws-metric}, we should care only about the contribution proportional to the identity operator, as the others yield a vanishing contribution after we multiply by $\Upsilon$ and take the supertrace. 
We write the result for the variation of the worldsheet metric, after the redefinition~\eqref{eq:red-fer-2x2-sp} has been done \begin{equation} \begin{aligned} \delta_\kappa \gamma^{\alpha\beta} &= \frac{2i\, \sqrt{2}}{\sqrt{1+\sqrt{1+\varkappa^2}}} \Bigg[ \bar{\kappa}^\alpha_{1+} \Bigg( \delta^{1J}\partial^{\beta}_+ - \frac{1}{4} \delta^{1J} \omega^{\beta mn}_+ \boldsymbol{\gamma}_{mn} +\frac{i}{2} (\sqrt{1+\varkappa^2}\epsilon^{1J}+\varkappa \sigma_3^{1J} ) e^{m\beta}_+ \boldsymbol{\gamma}_m \\ &-\frac{\varkappa}{2} e^{m\beta}_+ {k^n}_m \ \Bigg( \delta^{1J} \left(i \boldsymbol{\gamma}_n - \frac{1}{2} \lambda_n^{pq} \boldsymbol{\gamma}_{pq} \right) + i \left(\sqrt{1+\varkappa^2} \epsilon^{1J} + \varkappa \sigma_3^{1J}\right) {\lambda_n}^p \boldsymbol{\gamma}_p \Bigg)\Bigg) \\ & \qquad \qquad+\bar{\kappa}^\alpha_{2-} \Bigg( \delta^{2J}\partial^{\beta}_- - \frac{1}{4} \delta^{2J} \omega^{\beta mn}_- \boldsymbol{\gamma}_{mn} +\frac{i}{2} (\sqrt{1+\varkappa^2}\epsilon^{2J}+\varkappa \sigma_3^{2J} ) e^{m\beta}_- \boldsymbol{\gamma}_m \\ &+\frac{\varkappa}{2} e^{m\beta}_- {k_m}^n \ \Bigg( \delta^{2J} \left(i \boldsymbol{\gamma}_n - \frac{1}{2} \lambda_n^{pq} \boldsymbol{\gamma}_{pq} \right) + i \left(\sqrt{1+\varkappa^2} \epsilon^{2J} + \varkappa \sigma_3^{2J}\right) {\lambda_n}^p \boldsymbol{\gamma}_p \Bigg)\Bigg)\Bigg] \theta_J. \end{aligned} \end{equation} Here we have written the result in terms of $\bar{\kappa}=\kappa^\dagger\boldsymbol{\gamma}^0$. We do not need to take into account the shift of the bosonic fields~\eqref{eq:red-bos}, since it matters at higher orders in fermions. 
To take into account the last fermionic field redefinition and write the final form of the variation of the worldsheet metric, we divide the result into ``diagonal'' and ``off-diagonal'' contributions, where this refers to the labels $I,J$ of the fermions \begin{equation} \begin{aligned} \delta_\kappa \gamma^{\alpha\beta}|_{\text{diag}} &= 2i \Bigg[ \bar{\tilde{\kappa}}^\alpha_{1+} \Bigg(\partial^{\beta}_+ +\bar{U}_{(1)}\partial^{\beta}_+U_{(1)}\\ &\qquad - \frac{1}{4} \left(\omega^{\beta mn}_+ (\Lambda_{(1)})_{m}^{\ m'}(\Lambda_{(1)})_{n}^{\ n'}\boldsymbol{\gamma}_{m'n'} -\varkappa e^{m\beta}_+ {k^n}_m \lambda_n^{pq} (\Lambda_{(1)})_{p}^{\ p'}(\Lambda_{(1)})_{q}^{\ q'}\boldsymbol{\gamma}_{p'q'}\right) \\ &\qquad +\frac{i \varkappa }{2} e^{m\beta}_+ \left((\Lambda_{(1)})_{m}^{\ m'}\boldsymbol{\gamma}_{m'} -{k^n}_m \ \left( (\Lambda_{(1)})_{n}^{\ n'}\boldsymbol{\gamma}_{n'} + \varkappa {\lambda_n}^p (\Lambda_{(1)})_{p}^{\ p'}\boldsymbol{\gamma}_{p'} \right)\right) \Bigg)\theta_1 \\ & \qquad +\bar{\tilde{\kappa}}^\alpha_{2-} \Bigg(\partial^{\beta}_- +\bar{U}_{(2)}\partial^{\beta}_-U_{(2)}\\ &\qquad - \frac{1}{4} \left(\omega^{\beta mn}_- (\Lambda_{(2)})_{m}^{\ m'}(\Lambda_{(2)})_{n}^{\ n'}\boldsymbol{\gamma}_{m'n'} +\varkappa e^{m\beta}_- {k_m}^n \lambda_n^{pq} (\Lambda_{(2)})_{p}^{\ p'}(\Lambda_{(2)})_{q}^{\ q'}\boldsymbol{\gamma}_{p'q'}\right) \\ &\qquad -\frac{i \varkappa }{2} e^{m\beta}_- \left((\Lambda_{(2)})_{m}^{\ m'}\boldsymbol{\gamma}_{m'} -{k_m}^n \ \left( (\Lambda_{(2)})_{n}^{\ n'}\boldsymbol{\gamma}_{n'} - \varkappa {\lambda_n}^p (\Lambda_{(2)})_{p}^{\ p'}\boldsymbol{\gamma}_{p'} \right)\right) \Bigg)\theta_2 \Bigg] , \end{aligned} \end{equation} \begin{equation} \begin{aligned} \delta_\kappa \gamma^{\alpha\beta}|_{\text{off-diag}} &= - \sqrt{1+\varkappa^2}\Bigg[ \bar{\tilde{\kappa}}^\alpha_{1+} \bar{U}_{(1)}U_{(2)} e^{m\beta}_+ \left( (\Lambda_{(2)})_{m}^{\ m'} \boldsymbol{\gamma}_{m'} -\varkappa {k^n}_m \ {\lambda_n}^p(\Lambda_{(2)})_{p}^{\ p'}
\boldsymbol{\gamma}_{p'} \right)\theta_2 \\ & \qquad \qquad-\bar{\tilde{\kappa}}^\alpha_{2-} \bar{U}_{(2)}U_{(1)} e^{m\beta}_-\left( (\Lambda_{(1)})_{m}^{\ m'} \boldsymbol{\gamma}_{m'} +\varkappa {k_m}^n \ {\lambda_n}^p(\Lambda_{(1)})_{p}^{\ p'} \boldsymbol{\gamma}_{p'} \right) \theta_1 \Bigg]. \end{aligned} \end{equation} Looking at the diagonal contribution, we find that the expressions containing rank-1 gamma matrices actually vanish, as they should. The rest yields exactly the couplings that we expect to spin-connection and $H^{(3)}$ \begin{equation} \begin{aligned} \delta_\kappa \gamma^{\alpha\beta}|_{\text{diag}}&=2i\Bigg[ \bar{\tilde{\kappa}}^\alpha_{1+} \left( \partial^{\beta}_+ -\frac{1}{4} \widetilde{\omega}^{\beta mn}_+ \boldsymbol{\gamma}_{mn} +\frac{1}{8} e^{m\beta}_+ \widetilde{H}_{mnp} \boldsymbol{\gamma}^{np}\right)\theta_1\\ &\qquad\qquad +\bar{\tilde{\kappa}}^\alpha_{2-} \left( \partial^{\beta}_- -\frac{1}{4} \widetilde{\omega}^{\beta mn}_- \boldsymbol{\gamma}_{mn} -\frac{1}{8} e^{m\beta}_- \widetilde{H}_{mnp} \boldsymbol{\gamma}^{np}\right)\theta_2 \Bigg]\,. 
\end{aligned} \end{equation} When we consider the off-diagonal contribution we find that it gives the correct couplings to the RR fields \begin{equation} \begin{aligned} \delta_\kappa \gamma^{\alpha\beta}|_{\text{off-diag}}&=2i\left( -\frac{1}{8} e^{\varphi} \right)\Bigg[ \bar{\tilde{\kappa}}^\alpha_{1+} \left( \boldsymbol{\gamma}^n \widetilde{F}^{(1)}_n + \frac{1}{3!}\boldsymbol{\gamma}^{npq} \widetilde{F}^{(3)}_{npq}+\frac{1}{2\cdot 5!} \boldsymbol{\gamma}^{npqrs} \widetilde{F}^{(5)}_{npqrs}\right) e^{m\beta}_+ \boldsymbol{\gamma}_m\, \theta_2\\ &\qquad\qquad +\bar{\tilde{\kappa}}^\alpha_{2-} \left( -\boldsymbol{\gamma}^n \widetilde{F}^{(1)}_n + \frac{1}{3!}\boldsymbol{\gamma}^{npq} \widetilde{F}^{(3)}_{npq}-\frac{1}{2\cdot 5!} \boldsymbol{\gamma}^{npqrs} \widetilde{F}^{(5)}_{npqrs}\right) e^{m\beta}_- \boldsymbol{\gamma}_m\, \theta_1 \Bigg]\,, \end{aligned} \end{equation} where the components of the RR fields are given in~\eqref{eq:flat-comp-F1}-\eqref{eq:flat-comp-F3}-\eqref{eq:flat-comp-F5}. Putting together these results we find a standard kappa-variation also for the worldsheet metric \begin{equation} \begin{aligned} \delta_\kappa \gamma^{\alpha\beta}&=2i\Bigg[ \bar{\tilde{\kappa}}^\alpha_{1+} \widetilde{D}^{\beta 1J}_+\theta_J+\bar{\tilde{\kappa}}^\alpha_{2-} \widetilde{D}^{\beta 2J}_-\theta_J \Bigg] \\ &= 2i\ \Pi^{IJ\, \alpha\a'}\Pi^{JK\, \beta\b'} \ \bar{\tilde{\kappa}}_{I\alpha'}\widetilde{D}^{KL}_{\beta'}\theta_{L}\,, \end{aligned} \end{equation} where we defined \begin{equation} \Pi^{IJ\, \alpha\a'}\equiv\frac{\delta^{IJ}\gamma^{\alpha\a'}+\sigma_3^{IJ}\epsilon^{\alpha\a'}}{2}\,. \end{equation} The rewriting in terms of 32-dimensional spinors is straightforward. 
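The projector structure of $\Pi^{IJ\,\alpha\alpha'}$ rests on a standard two-dimensional identity: when $\gamma^{\alpha\beta}$ is the Weyl-invariant combination with $\det \gamma^{\alpha\beta}=-1$, the matrices $\tfrac{1}{2}(\gamma^{\alpha\beta}\pm\epsilon^{\alpha\beta})$ are orthogonal projectors under contraction with $\gamma_{\alpha\beta}$, and the $\delta^{IJ}$, $\sigma_3^{IJ}$ terms simply pair the $+$ projector with $I=1$ and the $-$ projector with $I=2$. A minimal numerical sketch of this identity (the entries are arbitrary, and the convention $\epsilon^{01}=+1$ is our assumption, not taken from the text):

```python
import numpy as np

# symmetric gamma^{alpha beta} normalised to det = -1 (Weyl-invariant combination)
a, b = 1.5, 0.3
gamma = np.array([[a, b], [b, (b**2 - 1.0) / a]])   # enforces det(gamma) = -1
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])           # assumed convention eps^{01} = +1

gamma_inv = np.linalg.inv(gamma)                    # gamma_{alpha beta}
P_plus = 0.5 * (gamma + eps)
P_minus = 0.5 * (gamma - eps)

# orthogonal projectors with respect to contraction with gamma_{alpha beta}
assert np.allclose(P_plus @ gamma_inv @ P_plus, P_plus)
assert np.allclose(P_minus @ gamma_inv @ P_minus, P_minus)
assert np.allclose(P_plus @ gamma_inv @ P_minus, np.zeros((2, 2)))
```

The same check fails if the normalisation $\det\gamma^{\alpha\beta}=-1$ is dropped, which is why the Weyl-invariant combination appears here.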
\section{Ten-dimensional $\Gamma$-matrices}\label{sec:10-dim-gamma} We use the $4\times 4$ gamma matrices $\check{\gamma}, \hat{\gamma}$ to define the $32 \times 32$ gamma matrices \begin{equation}\label{eq:def-10-dim-Gamma} \Gamma_m = \sigma_1 \otimes \check{\gamma}_m \otimes {\bf 1}_4 , \ \ m=0, \cdots, 4, \qquad \Gamma_m = \sigma_2 \otimes {\bf 1}_4 \otimes \hat{\gamma}_m , \ \ m=5, \cdots, 9, \end{equation} that satisfy $\{\Gamma_m,\Gamma_n\}= 2\eta_{mn}$ and also give $\Gamma_{11} \equiv \Gamma_0 \cdots \Gamma_9 = \sigma_3 \otimes {\bf 1}_4 \otimes {\bf 1}_4 $. Anti-symmetrised products of gamma-matrices are defined as $\Gamma_{m_1\cdots m_r} = \frac{1}{r!} \Gamma_{[m_1} \cdots \Gamma_{m_r]}$. The charge conjugation matrix is defined as $\mathcal{C} \equiv i\, \sigma_2 \otimes K \otimes K$, and $\mathcal{C}^2=-\mathbf{1}_{32}$. In the chosen representation, the Gamma matrices satisfy the symmetry properties \begin{equation} \begin{aligned} (\mathcal{C} \Gamma^{(r)})^t &= - t_r^\Gamma \ \mathcal{C} \Gamma^{(r)}, \\ \mathcal{C}( \Gamma^{(r)})^t\mathcal{C} &= - t_r^\Gamma \ \Gamma^{(r)}, \qquad t_0^\Gamma =t_3^\Gamma =+1, \qquad t_1^\Gamma =t_2^\Gamma=-1. \end{aligned} \end{equation} Under Hermitian conjugation we find \begin{equation} \Gamma^0 (\Gamma^{(r)})^\dagger \Gamma^0 = \left\{\begin{array}{c} + \Gamma^{(r)}, \quad r=1,2 \text{ mod } 4, \\ - \Gamma^{(r)}, \quad r=0,3 \text{ mod } 4. \end{array} \right. \end{equation} The $2 \times 2$ space that sits at the beginning is the space of positive/negative chirality. Given two 4-component spinors $\check{\psi}, \hat{\psi}$ with AdS and sphere spinor indices respectively, a 32-component spinor is constructed as \begin{equation} \Psi_+ = \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \otimes \check{\psi} \otimes \hat{\psi} , \qquad \Psi_- = \left( \begin{array}{c} 0 \\ 1 \end{array} \right) \otimes \check{\psi} \otimes \hat{\psi} , \end{equation} for the case of positive and negative chirality respectively. 
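The tensor-product construction~\eqref{eq:def-10-dim-Gamma} can be checked directly. The sketch below builds one possible choice of the $4\times 4$ blocks $\check{\gamma},\hat{\gamma}$ out of Pauli matrices (a hypothetical representation, not necessarily the one used in the text) and verifies the Clifford algebra $\{\Gamma_m,\Gamma_n\}=2\eta_{mn}$ with $\eta=\mathrm{diag}(-1,+1,\dots,+1)$, together with the fact that $\Gamma_{11}$ is diagonal, squares to one and anticommutes with all $\Gamma_m$ (its overall sign relative to $\sigma_3\otimes{\bf 1}_4\otimes{\bf 1}_4$ depends on the chosen representation):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ms):
    out = np.array([[1.0 + 0j]])
    for m in ms:
        out = np.kron(out, m)
    return out

# hypothetical 4x4 blocks: a Euclidean 5d Clifford representation, Wick-rotated for AdS
g_euc = [kron(s1, s1), kron(s2, s1), kron(s3, s1), kron(s0, s2), kron(s0, s3)]
g_ads = [1j * g_euc[4]] + g_euc[:4]    # check-gamma_0..4: gamma_0 squares to -1
g_sph = g_euc                           # hat-gamma_5..9: all square to +1

I4 = np.eye(4, dtype=complex)
Gamma = [kron(s1, g, I4) for g in g_ads] + [kron(s2, I4, g) for g in g_sph]
eta = np.diag([-1.0] + [1.0] * 9)

# Clifford algebra {Gamma_m, Gamma_n} = 2 eta_{mn} 1_32
for m in range(10):
    for n in range(10):
        anti = Gamma[m] @ Gamma[n] + Gamma[n] @ Gamma[m]
        assert np.allclose(anti, 2 * eta[m, n] * np.eye(32))

# Gamma_11 = Gamma_0...Gamma_9: squares to one, anticommutes with every Gamma_m,
# and equals sigma_3 x 1 x 1 up to a representation-dependent sign
Gamma11 = np.linalg.multi_dot(Gamma)
assert np.allclose(Gamma11 @ Gamma11, np.eye(32))
assert np.allclose(np.abs(Gamma11), np.eye(32))
for m in range(10):
    assert np.allclose(Gamma11 @ Gamma[m], -Gamma[m] @ Gamma11)
```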
In the main text we use 16-component fermions with two spinor indices $\theta_{\ul{\alpha}\ul{a}}$, and we construct a 32-component Majorana fermion with positive chirality as \begin{equation}\label{eq:def-32-dim-Theta} \Theta = \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \otimes \theta , \qquad \bar{\Theta} = \Theta^t \mathcal{C} = \left( \ 0 \ , \ 1 \ \right) \otimes\bar{\theta} . \end{equation} It is also useful to define $16\times 16$-matrices $\boldsymbol{\gamma}_m$ (that we continue to call gamma-matrices, even if they do not satisfy a Clifford algebra) as in~\eqref{eq:def16x16-gamma} that satisfy \begin{equation} \bar{\Theta}_1 \Gamma_m \Theta_2 \equiv \bar{\theta}_1 \boldsymbol{\gamma}_m \theta_2 \implies \left\{ \begin{array}{rll} \boldsymbol{\gamma}_m &= \check{\gamma}_m \otimes {\bf 1}_4, & \quad m=0,\cdots 4, \\ \boldsymbol{\gamma}_m &= {\bf 1}_4 \otimes i\hat{\gamma}_m, & \quad m=5,\cdots 9, \end{array} \right. \end{equation} The above formulae explain the reason for the factor of $i$ in the definition of $\boldsymbol{\gamma}_m$ for the sphere. In the same way we can explain why there is a $+$ sign and not $-$ in the definition of $\boldsymbol{\gamma}_{mn}$ for the sphere, computing\footnote{When we consider even rank $\Gamma$-matrices, we need to insert also an odd rank $\Gamma$-matrix in order not to get 0 when $\Theta_{1,2}$ have the same chirality.} \begin{equation} \bar{\Theta}_1 \Gamma_{p} \Gamma_{mn} \Theta_2 \equiv \bar{\theta}_1 \boldsymbol{\gamma}_{p} \boldsymbol{\gamma}_{mn} \theta_2 \implies \left\{ \begin{array}{rll} \boldsymbol{\gamma}_{mn} &= \check{\gamma}_{mn} \otimes {\bf 1}_4, & \quad m,n=0,\cdots 4, \\ \boldsymbol{\gamma}_{mn} &= {\bf 1}_4 \otimes \hat{\gamma}_{mn}, & \quad m,n=5,\cdots 9, \\ \boldsymbol{\gamma}_{mn} &= -\check{\gamma}_{m} \otimes i\hat{\gamma}_{n}, & \quad m=0,\cdots 4, \ n=5,\cdots 9. \end{array} \right. 
\end{equation} Similarly, for rank-3 Gamma matrices we would obtain \begin{equation} \bar{\Theta}_1 \Gamma_{mnp} \Theta_2 \equiv \bar{\theta}_1 \boldsymbol{\gamma}_{mnp} \theta_2 \implies \left\{ \begin{array}{rll} \boldsymbol{\gamma}_{mnp} &= \check{\gamma}_{mnp} \otimes {\bf 1}_4, & \quad m,n,p=0,\cdots 4, \\ \boldsymbol{\gamma}_{mnp} &= {\bf 1}_4 \otimes i\hat{\gamma}_{mnp}, & \quad m,n,p=5,\cdots 9, \\ \boldsymbol{\gamma}_{mnp} &= \frac{1}{3} \check{\gamma}_{mn} \otimes i\hat{\gamma}_{p}, & \quad m,n=0,\cdots 4, \ p=5,\cdots 9, \\ \boldsymbol{\gamma}_{mnp} &= \frac{1}{3} \check{\gamma}_{p} \otimes \hat{\gamma}_{mn}, & \quad p=0,\cdots 4, \ m,n=5,\cdots 9. \end{array} \right. \end{equation} \section{Expansion in fermions of the inverse operator $\mathcal{O}^{-1}$}\label{sec:inverse-op} In this section we collect the relevant ingredients to construct the Lagrangian of the deformed model, once we include also the fermionic degrees of freedom. Following~\cite{Delduc:2013qra} we define linear combinations of the projectors introduced in Section~\ref{sec:algebra-basis} \begin{equation}\label{eq:defin-op-d-dtilde} d=P^{(1)}+\frac{2}{1-\eta^2}P^{(2)}-P^{(3)}, \qquad \tilde{d}=-P^{(1)}+\frac{2}{1-\eta^2}P^{(2)}+P^{(3)}, \end{equation} that are understood as being one the transpose of the other $\Str[Md(N)]=\Str[\tilde{d}(M)N]$. Here $\eta$ is the deformation parameter already introduced in Chapter~\ref{ch:qAdS5Bos}. The above definitions imply \begin{equation} \begin{aligned} d(\gen{J}_{mn}) &=\tilde{d}(\gen{J}_{mn})= \gen{0}, \\ d(\gen{P}_m) &=\tilde{d}(\gen{P}_m)= \frac{2}{1-\eta^2} \gen{P}_m, \\ d(\gen{Q}^{I}) &=-\tilde{d}(\gen{Q}^{I}) = (\sigma_3)^{IJ} \gen{Q}^{J}\,. 
\end{aligned} \end{equation} We define the operator $R_\alg{g}$ \begin{equation}\label{Rgop} R_\alg{g} = \text{Adj}_{\alg{g}^{-1}} \circ R \circ \text{Adj}_{\alg{g}}\,, \end{equation} that differs from~\eqref{eq:defin-Rg-bos} because now the group element $\alg{g}$ given in~\eqref{eq:choice-full-coset-el} contains also the fermions. For the operator $R$ we use again the definition \begin{equation}\label{Rop} R(M)_{ij} = -i\, \epsilon_{ij} M_{ij}\,,\quad \epsilon_{ij} = \left\{\begin{array}{ccc} 1& \rm if & i<j \\ 0&\rm if& i=j \\ -1 &\rm if& i>j \end{array} \right.\,, \end{equation} that now becomes relevant also on odd roots. Even when we consider the full $\alg{psu}(2,2|4)$, the operator $R$ multiplies by $-i$ and $+i$ the generators associated with positive and negative roots respectively, and by $0$ the Cartan generators. The operator $R$ still satisfies the modified classical Yang-Baxter equation~\eqref{eq:mod-cl-YBeq-R}. The action of $R_{\alg{g_b}}$ defined through the \emph{bosonic} coset element was studied already in Chapter~\ref{ch:qAdS5Bos}. In the basis of generators used in this chapter we write its action as \begin{equation}\label{eq:Rgb-action-lambda} \begin{aligned} R_{\alg{g_b}}(\gen{P}_m) &= {\lambda_m}^n \gen{P}_n + \frac{1}{2} \lambda_m^{np} \gen{J}_{np} , \\ R_{\alg{g_b}}(\gen{J}_{mn}) &= \lambda_{mn}^{p} \gen{P}_p +\frac{1}{2} \lambda_{mn}^{pq} \gen{J}_{pq}, \\ R_{\alg{g_b}}(\gen{Q}^{I}) &= R (\gen{Q}^{I}) = -\epsilon^{IJ} \gen{Q}^{J} ,\\ \end{aligned} \end{equation} where the coefficients ${\lambda_m}^n, \lambda_m^{np}, \lambda_{mn}^{p}, \lambda_{mn}^{pq}$ for our particular parameterisation are collected in Appendix~\ref{sec:useful-results-eta-def}, see~\eqref{eq:lambda11}-\eqref{eq:lambda22a}. 
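The statement that the matrix $R$ of~\eqref{Rop} solves the modified classical Yang-Baxter equation can be checked numerically. In one common convention for the non-split case (the text's precise signs are fixed in~\eqref{eq:mod-cl-YBeq-R} and may differ) the equation reads $[R(M),R(N)]-R\big([R(M),N]+[M,R(N)]\big)=[M,N]$; a sketch on random matrices:

```python
import numpy as np

def R(M):
    # R(M)_{ij} = -i eps_{ij} M_{ij}: -i on the strictly upper triangle,
    # +i on the strictly lower triangle, 0 on the diagonal (Cartan part)
    return -1j * np.triu(M, k=1) + 1j * np.tril(M, k=-1)

def comm(A, B):
    return A @ B - B @ A

rng = np.random.default_rng(1)
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
N = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# modified classical Yang-Baxter equation, non-split case (one sign convention)
lhs = comm(R(M), R(N)) - R(comm(R(M), N) + comm(M, R(N)))
assert np.allclose(lhs, comm(M, N))
```

Since both sides are bilinear in $M,N$, checking the identity on random matrices is equivalent to checking it on a basis of matrix units.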
These coefficients satisfy the properties \begin{equation}\label{eq:swap-lambda} {\lambda_m}^n = - \, \eta_{mm'} \eta^{nn'} {\lambda_{n'}}^{m'}, \qquad \check{\lambda}_m^{np} = \eta_{mm'} \eta^{nn'} \eta^{pp'} \check{\lambda}^{m'}_{n'p'}, \qquad \hat{\lambda}_m^{np} = -\, \eta_{mm'} \eta^{nn'} \eta^{pp'} \hat{\lambda}^{m'}_{n'p'}, \end{equation} that are used to simplify some terms in the Lagrangian. The operator used to deform the model is defined as \begin{equation}\label{eq:defin-op-def-supercoset} \mathcal{O}=1-\eta R_\alg{g} \circ d\,, \end{equation} and we find it convenient to expand it in powers of the fermions $\theta$ as \begin{equation} \mathcal{O}=\mathcal{O}_{(0)}+\mathcal{O}_{(1)}+\mathcal{O}_{(2)}+\cdots\,, \end{equation} where $\mathcal{O}_{(k)}$ is the contribution at order $\theta^k$. When restricting the action of $\mathcal{O}_{(0)}$ to bosonic generators only, we recover the operator $\mathcal{O}_{\alg{b}}$ defined in~\eqref{eq:def-op-def-bos-mod} and used to deform the purely bosonic model in Chapter~\ref{ch:qAdS5Bos}. The action of $\mathcal{O}_{(0)}$ is defined also on odd elements. The fermionic corrections that we will need read explicitly as \begin{equation}\label{eq:expans-ferm-op} \begin{aligned} \mathcal{O}_{(1)} (M) &= \eta [ \chi,R_{\alg{g_b}} \circ d (M)] -\eta R_{\alg{g_b}} ([\chi , d (M)] ), \\ \mathcal{O}_{(2)} (M) &= \eta [\chi , R_{\alg{g_b}} ([\chi,d(M)])] - \frac{1}{2} \eta R_{\alg{g_b}} ( [\chi,[\chi,d(M)]])- \frac{1}{2} \eta ( [\chi,[\chi,R_{\alg{g_b}} \circ d(M)]]) \\ & = \frac{1}{2} \eta \left( [\chi , [\chi , R_{\alg{g_b}} \circ d(M)]] -R_{\alg{g_b}} [\chi, [\chi,d(M)]] \right) - [\chi, \mathcal{O}_{(1)}(M)], \end{aligned} \end{equation} where we use again the notation $\chi \equiv \genQind{I}{}{}\ferm{\theta}{I}{}{} $. It is actually the inverse operator $\mathcal{O}^{-1}$ that enters the definition of the deformed Lagrangian. 
Its action is trivial only on generators $\gen{J}$ of grading 0, on which it acts as the identity, at any order in fermions. To find its action also on the other generators, we invert it perturbatively in powers of fermions. We write it as \begin{equation} \mathcal{O}^{-1}=\op^{\text{inv}}_{(0)}+\op^{\text{inv}}_{(1)}+\op^{\text{inv}}_{(2)}+\cdots\,, \end{equation} where $\op^{\text{inv}}_{(k)}$ is the contribution at order $\theta^k$. Demanding that $\mathcal{O}\cdot\mathcal{O}^{-1}=\mathcal{O}^{-1}\cdot\mathcal{O}=1$ we find \begin{equation}\label{eq:expans-ferm-inv-op} \begin{aligned} \op^{\text{inv}}_{(1)} & = - \op^{\text{inv}}_{(0)} \circ \mathcal{O}_{(1)} \circ \op^{\text{inv}}_{(0)} , \\ \op^{\text{inv}}_{(2)} & = - \op^{\text{inv}}_{(0)} \circ \mathcal{O}_{(2)} \circ \op^{\text{inv}}_{(0)} - \op^{\text{inv}}_{(1)} \circ \mathcal{O}_{(1)} \circ \op^{\text{inv}}_{(0)}. \end{aligned} \end{equation} We will not need higher order contributions. \paragraph{Order $\theta^0$} When we switch off the fermions in $\mathcal{O}^{-1}$ we recover the results of Chapter~\ref{ch:qAdS5Bos}. 
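The recursion~\eqref{eq:expans-ferm-inv-op} is a purely algebraic consequence of inverting order by order, and can be checked with generic matrices standing in for the operators $\mathcal{O}_{(k)}$ (a sketch, not tied to the model):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
# random matrices standing in for O_(0), O_(1), O_(2); O_(0) shifted to be invertible
O0 = rng.normal(size=(n, n)) + n * np.eye(n)
O1 = rng.normal(size=(n, n))
O2 = rng.normal(size=(n, n))

Iinv0 = np.linalg.inv(O0)
Iinv1 = -Iinv0 @ O1 @ Iinv0                          # Oinv_(1)
Iinv2 = -Iinv0 @ O2 @ Iinv0 - Iinv1 @ O1 @ Iinv0     # Oinv_(2)

# O O^{-1} = O^{-1} O = 1, order by order in theta
assert np.allclose(O0 @ Iinv0, np.eye(n))
assert np.allclose(O0 @ Iinv1 + O1 @ Iinv0, np.zeros((n, n)))               # theta^1
assert np.allclose(O0 @ Iinv2 + O1 @ Iinv1 + O2 @ Iinv0, np.zeros((n, n)))  # theta^2
assert np.allclose(Iinv1 @ O0 + Iinv0 @ O1, np.zeros((n, n)))
assert np.allclose(Iinv2 @ O0 + Iinv1 @ O1 + Iinv0 @ O2, np.zeros((n, n)))
```

Note that the same coefficients make the expansion a two-sided inverse, which is why a single recursion suffices.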
In particular, using the results of Appendix~\ref{app:bosonic-op-and-inverse} rewritten for our basis of the generators we find that on $\gen{P}_m$ it gives \begin{equation}\label{eq:action-Oinv0-P} \op^{\text{inv}}_{(0)} (\gen{P}_{m})= {k_m}^n \gen{P}_n + \frac{1}{2} w_m^{np} \gen{J}_{np}, \end{equation} where we have \begin{equation}\label{eq:k-res1} \begin{aligned} & k_0^{\ 0} = k_4^{\ 4} = \frac{1}{1-\varkappa^2 \rho^2} , \qquad & k_1^{\ 1} = 1, \qquad & k_2^{\ 2} = k_3^{\ 3} = \frac{1}{1+\varkappa^2 \rho^4 \sin^2 \zeta}, \\ & k_5^{\ 5} =k_9^{\ 9}= \frac{1}{1+\varkappa^2 r^2}, \qquad & k_6^{\ 6} = 1, \qquad & k_7^{\ 7} = k_8^{\ 8}= \frac{1}{1+\varkappa^2 r^4 \sin^2 \xi}, \end{aligned} \end{equation} \begin{equation}\label{eq:k-res2} \begin{aligned} & k_0^{\ 4} = +k_4^{\ 0}= \frac{\varkappa \rho}{1-\varkappa^2 \rho^2}, \qquad & k_2^{\ 3}=-k_3^{\ 2}=- \frac{\varkappa \rho^2 \sin \zeta}{1+\varkappa^2 \rho^4 \sin^2 \zeta}, \\ & k_5^{\ 9} = - k_9^{\ 5}=\frac{\varkappa r}{1+\varkappa^2 r^2}, \qquad & k_7^{\ 8}= -k_8^{\ 7}=\frac{\varkappa r^2 \sin \xi}{1+\varkappa^2 r^4 \sin^2 \xi}. \end{aligned} \end{equation} The coefficients $w_m^{np}$ do not contribute to the Lagrangian, because the generators $\gen{J}$ are projected out by the operators $d,\tilde{d}$. When acting on odd elements, the inverse operator rotates only the labels $I,J$ without modifying the spinor indices \begin{equation} \op^{\text{inv}}_{(0)} (\gen{Q}^{I})= \frac{1}{2} (1+\sqrt{1+\varkappa^2}) \, \gen{Q}^{I}- \frac{\varkappa}{2} {\sigma_1}^{IJ} \, \gen{Q}^{J}. \end{equation} \paragraph{Order $\theta^1$} We use~\eqref{eq:expans-ferm-op} and~\eqref{eq:expans-ferm-inv-op} to compute the action of $\mathcal{O}_{(1)}$ and $\op^{\text{inv}}_{(1)}$ on $\gen{P}_m$ and $\gen{Q}^I$. 
First we find \begin{equation} \mathcal{O}_{(1)}(\gen{P}_m) = \frac{\varkappa}{2} \gen{Q}^I \left[ \delta^{IJ} \left(i \boldsymbol{\gamma}_m - \frac{1}{2} \lambda_m^{np} \boldsymbol{\gamma}_{np} \right) + i \epsilon^{IJ} {\lambda_m}^n \boldsymbol{\gamma}_n \right] \theta_J\,, \end{equation} and we use this result to get \begin{equation} \begin{aligned} \op^{\text{inv}}_{(1)}(e^m\gen{P}_m) = -\frac{\varkappa}{4} \gen{Q}^I \ e^m {k_m}^n \ \Bigg[ & \left((1+\sqrt{1+\varkappa^2})\delta^{IJ} -\varkappa \sigma_1^{IJ}\right) \left(i \boldsymbol{\gamma}_n - \frac{1}{2} \lambda_n^{pq} \boldsymbol{\gamma}_{pq} \right) \\ & + i \left((1+\sqrt{1+\varkappa^2}) \epsilon^{IJ} + \varkappa \sigma_3^{IJ}\right) {\lambda_n}^p \boldsymbol{\gamma}_p \Bigg] \theta_J\,. \end{aligned} \end{equation} For later convenience we rewrite this as \begin{equation} \begin{aligned} \op^{\text{inv}}_{(1)}(e^m\gen{P}_m) = -\frac{\varkappa}{4} \gen{Q}^I \ e^m {k_m}^n \ \Bigg[ & \left((1+\sqrt{1+\varkappa^2})\delta^{IJ} -\varkappa \sigma_1^{IJ}\right) \Delta^1_n \\ & + \left((1+\sqrt{1+\varkappa^2}) \epsilon^{IJ} + \varkappa \sigma_3^{IJ}\right) \Delta^3_n \Bigg] \theta_J\,, \end{aligned} \end{equation} where $\Delta^1_n\equiv\left(i \boldsymbol{\gamma}_n- \frac{1}{2} \lambda_n^{pq} \boldsymbol{\gamma}_{pq} \right)$, $\Delta^3_n\equiv i{\lambda_n}^p \boldsymbol{\gamma}_p $. 
On odd generators we find \begin{equation} \begin{aligned} \mathcal{O}_{(1)}(\gen{Q}^I\psi_I) = \frac{1-\sqrt{1+\varkappa^2}}{\varkappa} \ \bar{\theta}_J \Bigg[ \sigma_1^{JI} \left( i\boldsymbol{\gamma}_p +\frac{1}{2} \lambda^{mn}_{ p} \boldsymbol{\gamma}_{mn} \right) - i \, \sigma_3^{JI} {\lambda_p}^n \boldsymbol{\gamma}_n \Bigg] \psi_I \ \eta^{pq} \gen{P}_q + \cdots \,, \end{aligned} \end{equation} that helps to calculate \begin{equation} \begin{aligned} \op^{\text{inv}}_{(1)}(\gen{Q}^I\psi_I) = - \frac{1}{2} \ \bar{\theta}_K \Bigg[ & (-\varkappa \sigma_1^{KI} +(-1+\sqrt{1+\varkappa^2})\delta^{KI}) \left( i\boldsymbol{\gamma}_p +\frac{1}{2} \lambda^{mn}_{ p} \boldsymbol{\gamma}_{mn} \right) \\ & + i \, (\varkappa \sigma_3^{KI} -(-1+\sqrt{1+\varkappa^2})\epsilon^{KI}) {\lambda_p}^n \boldsymbol{\gamma}_n \Bigg] \psi_I \ k^{pq} \ \gen{P}_q + \cdots \,. \end{aligned} \end{equation} In these formulae we have omitted the terms proportional to $\gen{J}_{mn}$ and replaced them by dots, since they do not contribute to the computation of the Lagrangian. It is interesting to note that the last result can be rewritten as \begin{equation} \begin{aligned} \op^{\text{inv}}_{(1)}(\gen{Q}^I\psi_I) = - \frac{1}{2} \ \bar{\theta}_K \Bigg[ & (-\varkappa \sigma_1^{KI} +(-1+\sqrt{1+\varkappa^2})\delta^{KI}) \bar{\Delta}^{1}_{p} \\ & + (\varkappa \sigma_3^{KI} -(-1+\sqrt{1+\varkappa^2})\epsilon^{KI}) \bar{\Delta}^{3}_{p} \Bigg] \psi_I \ k^{pq} \ \gen{P}_q + \cdots \end{aligned} \end{equation} where one needs to use \eqref{eq:swap-lambda}. The quantities $\bar{\Delta}^{3}_{p'},\bar{\Delta}^{1}_{p'}$ are defined by $(\Delta^{3}_{p'} \theta_K)^\dagger \check{\gamma}^0 = \bar{\theta}_K \bar{\Delta}^{3}_{p'}$ and $(\Delta^{1}_{p'} \theta_K)^\dagger \check{\gamma}^0 = \bar{\theta}_K \bar{\Delta}^{1}_{p'}$. \paragraph{Order $\theta^2$} We need to compute the action of $\mathcal{O}$ and $\mathcal{O}^{-1}$ at order $\theta^2$ just on generators $\gen{P}_m$. 
Indeed the operators $\mathcal{O}_{(2)}$ and $\op^{\text{inv}}_{(2)}$ acting on generators $\gen{Q}^I$ contribute only at quartic order in the Lagrangian. First we find \begin{equation} \begin{aligned} \mathcal{O}_{(2)}({\gen{P}}_m) = - \frac{\varkappa}{2} \bar{\theta}_K \Bigg[ & \delta^{KI} \left( -\boldsymbol{\gamma}_q \left(\boldsymbol{\gamma}_m +\frac{i}{4} \lambda_m^{np} \boldsymbol{\gamma}_{np} \right) + \frac{i}{4} \lambda^{np}_{q} \boldsymbol{\gamma}_{np} \boldsymbol{\gamma}_m \right) \\ & -\frac{1}{2} \epsilon^{KI} \left( \boldsymbol{\gamma}_q \, {\lambda_m}^n \boldsymbol{\gamma}_n - {\lambda_q}^p \boldsymbol{\gamma}_p \boldsymbol{\gamma}_m \right) \Bigg] \theta_I \ \eta^{qr} \gen{P}_r +\cdots\,, \end{aligned} \end{equation} that gives \begin{equation} \begin{aligned} -\op^{\text{inv}}_{(0)} \circ \mathcal{O}_{(2)} \circ \op^{\text{inv}}_{(0)}(e^m\gen{P}_m) = &- \frac{\varkappa}{2} \bar{\theta}_K \ e^m {k_m}^n \ \Bigg[ \delta^{KI} \left( \boldsymbol{\gamma}_u \left(\boldsymbol{\gamma}_n +\frac{i}{4} \lambda_n^{pq} \boldsymbol{\gamma}_{pq} \right) - \frac{i}{4} \lambda^{pq}_{\ u} \boldsymbol{\gamma}_{pq} \boldsymbol{\gamma}_n \right) \\ & +\frac{1}{2} \epsilon^{KI} \left( \boldsymbol{\gamma}_u {\lambda_n}^p \boldsymbol{\gamma}_p - {\lambda_u}^p \boldsymbol{\gamma}_p \boldsymbol{\gamma}_n \right) \Bigg] \theta_I {k}^{uv} \ \gen{P}_v +\cdots\,. \end{aligned} \end{equation} Also here the dots stand for contributions proportional to $\gen{J}_{mn}$ that we are omitting. 
The last formula that we will need is \begin{equation} \begin{aligned} & -\op^{\text{inv}}_{(1)} \circ \mathcal{O}_{(1)} \circ \op^{\text{inv}}_{(0)}(e^m\gen{P}_m) = - \frac{\varkappa}{4} \bar{\theta}_K \ e^m {k_m}^n \times\\ & \times \Bigg[ (-1+\sqrt{1+\varkappa^2}) \delta^{KJ} \bigg( \left( \boldsymbol{\gamma}_u -\frac{i}{2} \lambda_u^{pq}\boldsymbol{\gamma}_{pq} \right) \left(\boldsymbol{\gamma}_n +\frac{i}{2} \lambda_n^{rs} \boldsymbol{\gamma}_{rs}\right) + {\lambda_u}^p\boldsymbol{\gamma}_p {\lambda_n}^r \boldsymbol{\gamma}_r \bigg) \\ &+(-1+\sqrt{1+\varkappa^2}) \epsilon^{KJ} \bigg(-{\lambda_u}^p \boldsymbol{\gamma}_p \left(\boldsymbol{\gamma}_n +\frac{i}{2} \lambda_n^{rs} \boldsymbol{\gamma}_{rs}\right) + \left( \boldsymbol{\gamma}_u -\frac{i}{2} \lambda_u^{pq}\boldsymbol{\gamma}_{pq} \right) {\lambda_n}^r \boldsymbol{\gamma}_r \bigg) \\ & -\varkappa \sigma_1^{KJ} \bigg( \left( \boldsymbol{\gamma}_u -\frac{i}{2}\lambda_u^{pq} \boldsymbol{\gamma}_{pq} \right) \left(\boldsymbol{\gamma}_n +\frac{i}{2} \lambda_n^{rs} \boldsymbol{\gamma}_{rs}\right) - {\lambda_u}^p\boldsymbol{\gamma}_p {\lambda_n}^r \boldsymbol{\gamma}_r \bigg) \\ &+\varkappa \sigma_3^{KJ} \bigg( {\lambda_u}^p\boldsymbol{\gamma}_p \left(\boldsymbol{\gamma}_n +\frac{i}{2} \lambda_n^{rs} \boldsymbol{\gamma}_{rs}\right) + \left( \boldsymbol{\gamma}_u -\frac{i}{2} \lambda_u^{pq} \boldsymbol{\gamma}_{pq} \right) {\lambda_n}^r \boldsymbol{\gamma}_r \bigg) \Bigg] \theta_J {k}^{uv} \ \gen{P}_v +\cdots\,, \end{aligned} \end{equation} where we have rewritten the result using~\eqref{eq:swap-lambda}.
\section{Introduction\label{intro}} Supersymmetry provides effective and elegant tools to solve quantum mechanical problems described by integrable Schr\"odinger equations. Unfortunately, the class of known problems which can be solved using supersymmetry is rather restricted, since they should possess the additional property called shape invariance \cite{Gen}, and this feature appears to be rather rare. The classification of shape invariant (scalar) potentials is believed to be complete, at least in the case when they include an additive variable parameter \cite{Khare}. However, there exists an important class of shape invariant potentials which has not been classified yet, namely matrix-valued potentials. Such potentials appear naturally in models using systems of Schr\"odinger-Pauli equations. A famous example of such a model was proposed by Pron'ko and Stroganov (PS) \cite{Pron}; its supersymmetric aspects were discovered in papers \cite{Vor} and \cite{Gol}. We note that there exists a relativistic version of the PS problem which is shape invariant too \cite{ninni}. Examples of matrix superpotentials, including shape invariant ones, were discussed in \cite{Andr}, \cite{Andri}, \cite{Ioffe}, \cite{Rodr}, \cite{tkach}. A rather general approach to matrix superpotentials was proposed in paper \cite{Fu}, which, however, was restricted to linear dependence on the variable parameter. A systematic study of matrix superpotentials was started in the recent paper \cite{yur1}, where we presented the complete description of a special class of irreducible matrix superpotentials. These superpotentials include terms linear and inverse in the variable parameter; moreover, the linear terms were supposed to be proportional to the unit matrix. In this way we formulated five problems for systems of Schr\"odinger equations which are exactly solvable thanks to their shape invariance. 
Three of these problems are shape invariant with respect to shifts of two parameters, i.e., possess the dual shape invariance \cite{yur1}. The present paper is a continuation, and in some sense the completion, of the previous one. We classify all irreducible matrix superpotentials realized by matrices of dimension $2\times2$ with linear and inverse dependence on the variable parameter. As a result we find 17 matrix potentials which are shape invariant and give rise to exactly solvable problems described by the Schr\"odinger-Pauli equation. These potentials are defined up to sets of arbitrary parameters, thus the number of non-equivalent integrable models presented here is rather large. They include as particular cases all superpotentials discussed in \cite{Gol}, \cite{Ioffe}, \cite{Fu} and \cite{yur1}, but also a number of new ones. Moreover, the list of found shape invariant potentials is complete, i.e., it includes all such potentials realized by $2\times2$ matrices. In addition, we present a constructive description of superpotentials realized by matrices of arbitrary dimension. The case of matrix superpotentials of dimension $3\times3$ is considered in more detail. A simple algorithm for the construction of all non-equivalent $3\times3$ matrix superpotentials is presented. A certain subclass of such superpotentials is given explicitly. The found superpotentials give rise to one-dimensional integrable models described by systems of coupled Schr\"odinger equations. However, some of these systems are nothing but reduced versions of multidimensional models, which appear as a result of the separation of variables. Examples of such multidimensional systems are presented in section 8. These systems are integrable and most of them are new. In particular, we show that the superintegrable model for vector particles proposed in \cite{Pron2} possesses supersymmetry with shape invariance, and so its solutions can be easily found using tools of SUSY quantum mechanics. 
The same is true for the arbitrary spin models considered in paper \cite{Pron2}, but we do not discuss them here. We also analyze five of the found integrable systems in detail and calculate their spectra and the related eigenfunctions. In particular, we give new examples of matrix oscillator models. The paper is organized as follows. In section \ref{matrixproblem} we discuss restrictions imposed on superpotentials by the shape invariance condition and present the determining equations which should be solved to classify these superpotentials. In sections \ref{2x2} and \ref{CompList} the $2\times 2$ matrix superpotentials are described and the complete list of them is presented. The case of arbitrary dimensional matrices is considered in section \ref{ArbDim}, and the list of $3\times 3$ matrix superpotentials can be found in section \ref{3x3}. Sections \ref{models} and \ref{3d} are devoted to the discussion of new integrable systems of Schr\"odinger equations which have been effectively classified in the previous sections. In addition, in section \ref{3d} we discuss SUSY aspects of superintegrable models for arbitrary spin $s$ proposed in \cite{Pron2}. \section{Shape invariance condition\label{matrixproblem}} Let us consider a Schr\"odinger-Pauli type equation \begin{gather}\label{eq}H_k\psi=E_k\psi\end{gather} where \begin{equation} \label{hamiltonian} H_k=-\frac{\partial^2}{\partial x^2}+V_k(x), \end{equation} and $V_k(x)$ is a matrix-valued potential depending on the variable $x$ and a parameter $k$. We suppose that $V_k(x)$ is an $n\times n$ dimensional hermitian matrix, and that the Hamiltonian $H_k$ admits the factorization \beq\label{s3}H_k=a_k^+a_k^-+c_k\eeq where \[a_k^-=\frac{\partial}{\partial x}+W_k,\ \ a_k^+=- \frac{\partial}{\partial x}+W_k, \] $c_k$ is a constant and $W_k(x)$ is a superpotential. Let us search for superpotentials which generate shape invariant potentials $V_k(x)$. 
This means that $W_k$ should satisfy the following condition \begin{gather}\label{SI}W_k^2+W'_k=W_{k+\alpha}^2- W'_{k+\alpha}+C_k\end{gather} where $C_k$ and $\alpha$ are constants. In the following sections we classify shape invariant superpotentials, i.e., find matrices $W_k$ depending on $x$ and $k$ and satisfying condition (\ref{SI}). More exactly, we find indecomposable hermitian matrices whose dependence on $k$ is specified by terms proportional to $k$ and $\frac1k$. Let us consider superpotentials of the following generic form \begin{equation} \label{SP} W_k=k Q +\frac1k R+P \end{equation} where $P$, $R$ and $ Q$ are $n\times n$ Hermitian matrices depending on $x$. Superpotentials of generic form (\ref{SP}) were discussed in paper \cite{yur1}, where we considered the case of matrices of arbitrary dimension but restricted ourselves to the case when $Q=Q(x)$ is proportional to the unit matrix. Rather surprisingly, this supposition made it possible to complete the classification of superpotentials (\ref{SP}). All such (irreducible) superpotentials include the known scalar potentials listed in \cite{Khare} and five $2\times2$ matrix superpotentials found in \cite{yur1}. To complete the classification presented in \cite{yur1} let us consider generic superpotentials (\ref{SP}) with arbitrary hermitian matrices $Q,\ P$ and $R$. We suppose that $W_k$ is irreducible, i.e., that the matrices $R, P$ and $Q$ cannot be simultaneously transformed to a block diagonal form. 
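As a concrete scalar illustration of condition (\ref{SI}), the P\"oschl-Teller superpotential $W_k=k\tanh x$ (our example, not one of the matrix superpotentials classified below) satisfies it with $\alpha=-1$ and $C_k=2k-1$; a short symbolic check:

```python
import sympy as sp

x, k = sp.symbols('x k', real=True)
alpha = -1                      # shift of the variable parameter
W = lambda kk: kk * sp.tanh(x)  # scalar Poschl-Teller superpotential (our example)
C_k = 2 * k - 1                 # constant in the shape invariance condition

lhs = W(k)**2 + sp.diff(W(k), x)
rhs = W(k + alpha)**2 - sp.diff(W(k + alpha), x) + C_k
assert sp.simplify(lhs - rhs) == 0
```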
Substituting (\ref{SP}) into (\ref{SI}), multiplying the obtained expression by $k^2(k+\alpha)^2$ and equating the coefficients of the same powers of $k$ we obtain the following determining equations: \begin{gather} Q'=\alpha( Q^2+\nu I), \label{a0}\\\label{a00} P'-\frac\alpha2 \{ Q,P\}+\varkappa I=0,\\\label{a01} \{R,P\}+\lambda I=0,\\ R^2=\omega^2I\label{a8} \end{gather} where $Q=\frac1\alpha \tilde Q,\ \ Q'=\frac{\p Q}{\p x},\quad \{ Q,P\}= QP+P Q$ is the anticommutator of the matrices $ Q$ and $P$, $I$ is the unit matrix and $\varkappa, \ \lambda,\ \ \omega $ are constants. Equations (\ref{a0})--(\ref{a8}) have been deduced in \cite{yur1}, where the anticommutator $\{Q,P\}$ was reduced to the doubled product of $Q$ and $P$, since $Q$ was assumed to be proportional to the unit matrix and so commuted with $P$. The system (\ref{a0})--(\ref{a8}) for generic matrices $Q$, $P$ and $R$ is much more complicated than in the case of diagonal $Q$. However, it is possible to find its exact solutions for matrices of arbitrary dimension. \section{Determining equations for $2\times2$ matrix superpotentials\label{2x2}} At the first step we restrict ourselves to the complete description of superpotentials (\ref{SP}) which are matrices of dimension $2\times2$. In this case it is convenient to represent $Q$ as a linear combination of Pauli matrices \begin{gather}\label{Qsig}Q=q_0\sigma_0+q_1\sigma_1+q_2\sigma_2+q_3\sigma_3\end{gather} where $\sigma_0=I$ is the unit matrix, \begin{gather}\label{pm} \sigma_1=\begin{pmatrix}0&1\\1&0\end{pmatrix},\quad \sigma_2=\begin{pmatrix}0&-\ri\\\ri&0\end{pmatrix},\quad \sigma_3=\begin{pmatrix}1&0\\0&-1\end{pmatrix}.\end{gather} Let us show that, up to a unitary transformation realized by a constant matrix, the matrix $Q$ can be reduced to diagonal form. 
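Anticipating the computation below (all $q_a$ turn out to be proportional to one common function $F(x)$), this claim can be spot-checked numerically: the eigenvectors of $Q$ are then $x$-independent, so a single constant unitary matrix diagonalizes $Q$ for all $x$. A numpy sketch with arbitrary sample constants (an editorial illustration, not part of the proof):

```python
# Numeric illustration: for Q = q0*I + F*(c1*s1 + c2*s2 + c3*s3) the
# diagonalizing unitary matrix is constant and the eigenvalues are
# q0 +- c*F with c = sqrt(c1^2 + c2^2 + c3^2).
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

c1, c2, c3 = 0.3, -0.7, 0.5            # arbitrary real constants
c = np.sqrt(c1**2 + c2**2 + c3**2)
M = c1*s1 + c2*s2 + c3*s3              # x-independent direction matrix

vals, U = np.linalg.eigh(M)            # constant eigenvectors (columns of U)
assert np.allclose(vals, [-c, c])

for q0, F in [(1.1, 2.0), (-0.4, 0.25)]:   # (q0(x), F(x)) at two sample points
    Q = q0*np.eye(2) + F*M
    D = U.conj().T @ Q @ U
    assert np.allclose(D, np.diag([q0 - c*F, q0 + c*F]))
```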
Substituting (\ref{Qsig}) into (\ref{a0}) and equating the coefficients of the linearly independent Pauli matrices $\sigma_a, a=1,2,3,$ we obtain the following system: \begin{gather}q_a'=2\alpha q_0q_a,\quad a=1,2,3.\label{q_a} \end{gather} It follows from (\ref{q_a}) that \begin{gather}\label{q_a1} q_a= c_aF(x), \quad F(x)= \exp\left(2\alpha\int\!\!q_0 dx\right)\end{gather} where $c_a$ are integration constants. Since all $q_a$ are expressed via the same function of $x$ multiplied by constants, we can transform $Q$ to diagonal form: \begin{gather}\label{diag}Q\to UQU^\dag=\left(\begin{array}{cc}q_+&0\\0&q_-\end{array}\right)\end{gather} where $q_\pm=q_0\pm cF(x)$, $c=\sqrt{c_1^2+c_2^2+c_3^2}$, and $U$ is the constant matrix \begin{gather*}U=\frac{c+c_3-\ri c_2\sigma_1+\ri c_1\sigma_2}{\sqrt{2c(c+c_3)}}. \end{gather*} In accordance with (\ref{diag}) equation (\ref{a0}) is reduced to the decoupled system of Riccati equations for $q_\pm$: \begin{gather} \label{qq}q_\pm'=\alpha(q_\pm^2+\nu)\end{gather} which is easily integrable. The corresponding matrices $P$ can be found from equation (\ref{a00}): \begin{gather}\label{PP}P=\left(\begin{array}{cc}p_+&p\\p^*&p_-\end{array}\right)\end{gather} with $p_\pm$ being solutions of the following equation \begin{gather}\label{PPd}p_\pm'=\alpha p_\pm q_\pm+\varkappa,\end{gather} and \begin{gather}\label{Pa} p=\mu\exp\left(\frac12\alpha\int\!\!(q_++q_-) dx\right)\end{gather} where $\mu$ is an integration constant and the asterisk denotes complex conjugation. Moreover, up to unitary transformations realized by matrices commuting with $Q$ (\ref{diag}), the constant $\mu$ can be chosen real, and so we can restrict ourselves to $p$ satisfying $p^*=p$. Consider the remaining equations (\ref{a01}) and (\ref{a8}). In accordance with (\ref{a8}), $R$ should be a constant matrix whose eigenvalues are $\pm \omega$. 
Thus it can be represented as $R=r_1\sigma_1+r_2\sigma_2+r_3\sigma_3$ where $r_1,\ r_2$ and $r_3$ are constants satisfying $r_1^2+r_2^2+r_3^2=\omega^2$, or alternatively, $R=\pm\omega I$. Let $\omega\neq0$. Then, in order for equation (\ref{a01}) to be satisfied, we have to exclude the second possibility and to set $r_1= \varkappa=p_\pm=0$. As a result we obtain the general solution of the determining equations (\ref{a0})--(\ref{a8}) with $\omega\neq0$ in the following form: \begin{gather}\label{res}P=\sigma_1p,\quad R=r_3\sigma_3+r_2\sigma_2,\quad Q=q_+\sigma_++q_-\sigma_-\end{gather} where $\sigma_\pm=(1\pm\sigma_3)/2,$ $q_\pm$ are solutions of the Riccati equation (\ref{qq}), $p$ is the function defined by (\ref{Pa}) and $r_a$ are constants satisfying $r_2^2+r_3^2=\omega^2$. If $\omega=0$ then conditions (\ref{a01}) and (\ref{a8}) become trivial. The corresponding matrices $Q$ and $P$ are given by equations (\ref{diag}) and (\ref{PP}). \section{Complete list of $2\times2$ matrix superpotentials\label{CompList}} Let us write the found matrix superpotentials explicitly and find the corresponding shape invariant potentials. All nonequivalent solutions of equation (\ref{qq}) are enumerated in the following formulae: \begin{gather}\label{lin1}\begin{split}&q_\sigma=0,\ \nu=0,\end{split}\\ \label{lin2} \begin{split}& q_\sigma=-\frac{1}{\alpha x+ c_\sigma}, \ \nu=0,\\& q_\sigma=\frac\lambda\alpha\tan(\lambda x+ c_\sigma),\quad \nu=\frac{\lambda^2}{\alpha^2}>0,\\& q_\sigma=-\frac\lambda\alpha\tanh(\lambda x+ c_\sigma),\quad \nu=-\frac{\lambda^2}{\alpha^2}<0,\\\end{split}\\\begin{split}&q_\sigma=- \frac\lambda\alpha\coth(\lambda x+ c_\sigma),\quad \nu=-\frac{\lambda^2}{\alpha^2}<0,\\& q_\sigma=-\frac\lambda\alpha,\quad \nu=-\frac{\lambda^2}{\alpha^2}<0 \end{split}\label{lin3} \end{gather} where $\sigma=\pm$, $c_\sigma=\pm c$ and $c$ is an integration constant. 
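Each listed function can be checked against (\ref{qq}) directly; e.g. for $q_\sigma=\frac\lambda\alpha\tan(\lambda x+c_\sigma)$ one has $q_\sigma'=\frac{\lambda^2}{\alpha}\sec^2(\lambda x+c_\sigma)=\alpha\big(q_\sigma^2+\frac{\lambda^2}{\alpha^2}\big)$. A sympy verification of the nontrivial cases:

```python
# sympy check that the listed functions solve the Riccati equation
# q' = alpha*(q**2 + nu) with the indicated values of nu.
import sympy as sp

x, alpha, lam, c = sp.symbols('x alpha lambda c', positive=True)

cases = [
    (-1/(alpha*x + c), 0),
    ((lam/alpha)*sp.tan(lam*x + c), lam**2/alpha**2),
    (-(lam/alpha)*sp.tanh(lam*x + c), -lam**2/alpha**2),
    (-(lam/alpha)*sp.coth(lam*x + c), -lam**2/alpha**2),
    (-lam/alpha, -lam**2/alpha**2),
]
for q, nu in cases:
    residual = sp.diff(q, x) - alpha*(q**2 + nu)
    assert sp.simplify(residual.rewrite(sp.exp)) == 0
```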
Going over the solutions (\ref{lin1})--(\ref{lin3}) corresponding to the same value of the parameter $\nu$, it is not difficult to find the related entries of the matrix $P$ (\ref{PP}) defined by equations (\ref{PPd}) and (\ref{Pa}). As a result we obtain the following list of superpotentials: \begin{gather} \begin{split}& W^{(1)}_\kappa=\lambda\left(\kappa\left(\sigma_+\tan(\lambda x+c)+\sigma_-\tan(\lambda x-c)\right)\right.\\ &\left.+\mu\sigma_1\sqrt{\sec(\lambda x-c)\sec(\lambda x+c)}+\frac{1}{\kappa}R\right),\end{split}\label{tan} \\ \begin{split}& W^{(2)}_\kappa=\lambda\left(-\kappa(\sigma_+\coth(\lambda x+c)+\sigma_-\coth(\lambda x-c))\right.\\ &\left.+\mu\sigma_1\sqrt{\csch(\lambda x-c)\csch(\lambda x+c)}+\frac{1}{\kappa}R\right),\end{split}\label{cotanh1} \\\begin{split}& W^{(3)}_\kappa=\lambda\left(-\kappa(\sigma_+\tanh(\lambda x+c)+\sigma_-\tanh(\lambda x-c))\right.\\ &\left.+\mu\sigma_1\sqrt{\sech(\lambda x-c)\sech(\lambda x+c)}+\frac{1}{\kappa}R\right),\end{split}\label{tanh1} \\\begin{split}& W^{(4)}_\kappa=\lambda\left(-\kappa(\sigma_+\tanh(\lambda x+c)+\sigma_-\coth(\lambda x-c))\right.\\ &\left.+\mu\sigma_1\sqrt{\sech(\lambda x+c)\csch(\lambda x-c)}+\frac{1}{\kappa}R\right),\end{split}\label{tanhcotanh} \\ \begin{split}& W^{(5)}_\kappa=\lambda\left(-\kappa(\sigma_+\tanh(\lambda x)+\sigma_-)+\mu\sigma_1\sqrt{\sech(\lambda x)\exp(-\lambda x)}+\frac{1}{\kappa}R\right),\end{split}\label{tanh_exp} \\\begin{split}& W^{(6)}_\kappa=\lambda\left(-\kappa(\sigma_+\coth(\lambda x)+\sigma_-)+\mu\sigma_1\sqrt{\csch(\lambda x)\exp(-\lambda x)}+\frac{1}{\kappa}R\right),\end{split}\label{cotanh_exp} \\ \begin{split}& W^{(7)}_\kappa=-\kappa \left(\frac{\sigma_+}{x+c}+\frac{\sigma_-}{x-c} \right)+\frac{\mu\sigma_1}{\sqrt{x^2-c^2}}+ \frac{1}{\kappa}R, \end{split}\label{inin} \\ \begin{split}& W^{(8)}_\kappa=-\kappa \frac{\sigma_+}{x} +\mu\sigma_1\frac{1}{\sqrt{x}}+ \frac{1}{\kappa}R \end{split}\label{in0}\\\label{expp} W^{(9)}_\kappa= \lambda\left(-\kappa I+ \mu\exp(-\lambda 
x)\so-\frac{\omega}{\kappa}\st\right). \end{gather} Here $\sigma_\pm=\frac12(\sigma_0\pm\sigma_3)$, $R$ is the numerical matrix given by equation (\ref{res}), and $\kappa,\ \mu$ and $\lambda$ are arbitrary parameters. Formulae (\ref{tan})--(\ref{in0}) give the complete list of superpotentials corresponding to nontrivial matrices $R$. In particular this list includes the superpotentials with $Q$ proportional to the unit matrix which have been discussed in paper \cite{yur1}. These cases are specified by equation (\ref{expp}) and equations (\ref{tan}), (\ref{cotanh1}), (\ref{tanh1}), (\ref{inin}) with $c=0$ and $R=\omega\sigma_1$. Finally, let us complete the list (\ref{tan})--(\ref{expp}) with superpotentials corresponding to $R\equiv0$. Using equations (\ref{diag}), (\ref{lin1})--(\ref{lin3}) and (\ref{PP}), (\ref{Pa}) we obtain the following expressions for operators (\ref{SP}): \begin{gather} \begin{split}& W^{(10)}_\kappa=\lambda\left(\sigma_+(\kappa\tan(\lambda x+c)+\nu \sec(\lambda x+c))\right.\\ &\left.+ \sigma_-(\kappa\tan(\lambda x-c)+\tau \sec(\lambda x-c))+ \mu\sigma_1\sqrt{\sec(\lambda x-c)\sec(\lambda x+c)}\right),\end{split}\label{tan0} \\ \begin{split}& W^{(11)}_\kappa=-\lambda\left(\sigma_+(\kappa\coth(\lambda x+c)+\nu \csch(\lambda x+c))\right.\\ &\left.+ \sigma_-(\kappa\coth(\lambda x-c)+\tau \csch(\lambda x-c))+ \mu\sigma_1\sqrt{\csch(\lambda x-c)\csch(\lambda x+c)}\right),\end{split}\label{cotanh10} \\\begin{split}& W^{(12)}_\kappa=-\lambda\left(\sigma_+(\kappa\tanh(\lambda x+c)+\nu \sech(\lambda x+c))\right.\\ &\left.+ \sigma_-(\kappa\coth(\lambda x-c)+\tau \csch(\lambda x-c))+ \mu\sigma_1\sqrt{\sech(\lambda x+c)\csch(\lambda x-c)}\right),\end{split}\label{tanh10} \\\begin{split}& W^{(13)}_\kappa=-\lambda\left(\sigma_+(\kappa\tanh(\lambda x+c)+\nu \sech(\lambda x+c))\right.\\ &\left.+ \sigma_-(\kappa\tanh(\lambda x-c)+\tau \sech(\lambda x-c))+ \mu\sigma_1\sqrt{\sech(\lambda x-c)\sech(\lambda x+c)}\right),\end{split}\label{tanhcotanh0} \\ \begin{split}& 
W^{(14)}_\kappa=-\lambda\left(\sigma_+(\kappa\tanh\lambda x+\nu\sech \lambda x) +\sigma_-\kappa+\mu\sigma_1\sqrt{\sech\lambda x\exp(-\lambda x)}\right),\end{split}\label{tanh_exp0} \\\begin{split}& W^{(15)}_\kappa=-\lambda\left(\sigma_+(\kappa\coth\lambda x+\nu\csch \lambda x) +\sigma_-\kappa+\mu\sigma_1\sqrt{\csch\lambda x\exp(-\lambda x)}\right),\end{split}\label{cotanh_exp0} \\ \begin{split}& W^{(16)}_\kappa=-{\sigma_+}\left( \frac{\kappa+\delta}{x+c}+\frac\omega2 (x+c)\right)-{\sigma_-}\left( \frac{\kappa-\delta}{x-c}+\frac\omega2 (x-c)\right)+\frac{\mu\sigma_1}{\sqrt{x^2-c^2}}, \end{split}\label{inin0} \\ \begin{split}& W^{(17)}_\kappa=-{\sigma_+}\left(\frac{2\kappa+1}{2x}-\frac{\omega x}{4} \right)+{\sigma_-}\left(\frac{\omega x}{2} +c\right) -\mu\sigma_1\frac{1}{\sqrt{x}}. \end{split}\label{in00} \end{gather} Formulae (\ref{tan})--(\ref{in00}) give the complete description of matrix superpotentials realized by matrices of dimension $2\times2$. These superpotentials are defined up to translations $x\rightarrow x+c$, $\kappa\rightarrow \kappa+\gamma$, and up to equivalence transformations realized by unitary matrices. In (\ref{tan})--(\ref{in00}) we introduced the rescaled parameter $\kappa=\frac{k}{\alpha}$, such that the transformation $k\to k'=k+\alpha$ reduces to: \begin{gather}\kappa\to\kappa'=\kappa+1\label{kappa}.\end{gather} The list (\ref{tan})--(\ref{in00}) includes all superpotentials obtained earlier in \cite{Fu} and \cite{yur1}, as well as a number of new matrix superpotentials. The corresponding shape invariant potentials are easily calculated starting with superpotentials (\ref{tan})--(\ref{in00}) and using the following definition: \begin{gather}V_\kappa^{(i)}=W_\kappa^{(i)2}-W_\kappa^{(i)'}, \quad i=1,2,...,17.\label{ham}\end{gather} To save space we will not present all potentials (\ref{ham}) explicitly but restrict ourselves to a discussion of particular examples; see section \ref{models}. 
\section{Matrix superpotentials of arbitrary dimension\label{ArbDim}} Let us consider generic superpotentials (\ref{SP}) with hermitian matrices $Q,\ P$ and $R$ of arbitrary dimension. In this case we again come to the determining equations (\ref{a0})--(\ref{a8}), where $Q, P$ and $R$ are now hermitian matrices of dimension $K\times K$ with arbitrary integer $K$. In accordance with (\ref{a8}) $R$ is a constant matrix whose eigenvalues are $\pm \omega$. Thus up to unitary equivalence it can be chosen in the form \begin{gather}\label{R}R=\omega\left(\begin{array}{lc} I_n&0\\0&-I_m\end{array} \right),\quad n+m=K\end{gather} where $I_n$ and $I_m$ are the unit matrices of dimension $n\times n$ and $m\times m$ respectively, and $n$ and $m$ are the numbers of positive and negative eigenvalues of $R$. It is convenient to represent the matrix $Q$ in a block form: \begin{gather}\label{Q}Q=\left(\begin{array}{cc} A&B\\B^\dag&C\end{array}\right)\end{gather} where $A, B$ and $C$ are matrices of dimension $n\times n$, $n\times m$ and $m\times m$ respectively. Using the analogous representation for $P$ and taking into account relations (\ref{a01}) we write it as \begin{gather}\label{P}P=\left(\begin{array}{cc} 0&\hat P\\\hat P^\dag&0\end{array}\right)+\tau R\end{gather} where $\tau=-\frac{\lambda}{2\omega}$. Substituting (\ref{R})--(\ref{P}) into (\ref{a0}) and (\ref{a00}) we obtain the following equations for the block matrices: \begin{gather}\label{A} A'=\alpha(A^2+BB^\dag+\nu I_n),\\ C'=\alpha(C^2+B^\dag B+\nu I_m),\label{C}\\ \label{B} B'=\alpha(AB+BC),\\\hat P'=\frac\alpha2(A\hat P+\hat PC),\label{Pe}\\ 2\tau A+B\hat P^\dag+\hat PB^\dag=2\bar\mu I_n,\label{AB}\\-2\tau C+B^\dag \hat P+\hat P^\dag B=2\bar\mu I_m\label{CB}\end{gather} where $\bar\mu=\frac{\mu}\alpha$. 
Thus the problem of describing matrix-valued superpotentials which generate shape invariant potentials is reduced to finding the general solution of equations (\ref{A})--(\ref{CB}) for irreducible sets of square matrices $A, C$ and rectangular matrices $B$ and $\hat P$. Moreover, $A$ and $C$ are hermitian matrices whose dimensions are $n\times n$ and $m\times m$ respectively, while $B$ and $\hat P$ have dimension $n\times m$. Without loss of generality we suppose that $n\leq m$. The system (\ref{A})--(\ref{CB}) is rather complicated; nevertheless, it can be solved explicitly. To save space we shall not present its cumbersome general solution but restrict ourselves to the special subclass of solutions with trivial matrices $B$. In this case $\bar\mu=\tau=0$ (otherwise the corresponding superpotentials are reduced to a direct sum of the $2\times2$ matrices considered above), and the system is reduced to the following equations: \begin{gather}\label{Aa} A'=\alpha(A^2+\nu I_n),\quad C'=\alpha(C^2+\nu I_m),\\\hat P'=\frac\alpha2(A\hat P+\hat PC).\label{Pea} \end{gather} Without loss of generality the hermitian matrices $A$ and $C$ which solve equations (\ref{Aa}) can be chosen diagonal, see Appendix. In other words, their entries $A_{ab}$ and $C_{ab}$ can be represented as: \begin{gather}\label{A_C}A_{ab}=\delta_{ab}q_{b}, \quad C_{ab}=\delta_{ab} q_{n+b}\end{gather}where $\delta_{ab}$ is the Kronecker symbol and $q_\sigma$ ($\sigma=b $ or $\sigma=n+b$) are solutions of the scalar Riccati equation \begin{gather}q_\sigma'=\alpha(q_\sigma^2+\nu)\label{riki}\end{gather} which is a direct consequence of (\ref{Aa}) and (\ref{A_C}). Solutions of equation (\ref{riki}) are given by equations (\ref{lin1})--(\ref{lin3}). Thus the matrices $A$, $C$ and $R$ are defined explicitly by relations (\ref{Aa}), (\ref{lin1})--(\ref{lin3}) and (\ref{R}), while the matrices $B$ are trivial. 
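As a consistency check (an editorial illustration, not part of the classification itself), here is a minimal sympy instance of the reduced system (\ref{Aa}), (\ref{Pea}) with $K=3$, $n=1$, $m=2$, $\alpha=1$ and the $\nu=0$ Riccati solutions $q_i=-1/(x+c_i)$; the quadrature $\hat P_{ab}\propto\exp\big(\frac\alpha2\int(q_a+q_{n+b})dx\big)$ indeed solves (\ref{Pea}):

```python
# sympy check of the reduced (B = 0) system: diagonal A, C built from
# nu = 0 Riccati solutions, and hat-P obtained by quadrature solves
# P' = (alpha/2)*(A*P + P*C).
import sympy as sp

x = sp.symbols('x', positive=True)
c1, c2, c3 = sp.symbols('c1 c2 c3', positive=True)
mu12, mu13 = sp.symbols('mu12 mu13')
alpha = 1

q = [-1/(x + c1), -1/(x + c2), -1/(x + c3)]
A = sp.Matrix([[q[0]]])            # n = 1
C = sp.diag(q[1], q[2])            # m = 2

# hat-P_{ab} = mu_{ab} * exp((alpha/2) * Int (q_a + q_{n+b}) dx)
P = sp.Matrix([[mu12/sp.sqrt((x + c1)*(x + c2)),
                mu13/sp.sqrt((x + c1)*(x + c3))]])

residual = sp.simplify(sp.diff(P, x) - sp.Rational(1, 2)*alpha*(A*P + P*C))
assert residual == sp.zeros(1, 2)
```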
The remaining components of superpotentials (\ref{SP}) are the matrices $P$ whose entries $\hat P_{ab}$ are easily calculated by integrating equations (\ref{Pe}): \begin{gather}\label{PaB}\hat P_{ab}=\mu_{ab}\exp \left(\frac12\alpha\int\!\!(q_a+q_{n+b}) dx\right)\end{gather} where $\mu_{ab}$ are integration constants satisfying $\mu_{ab}=(\mu_{ba})^*$, and $q_\sigma$ with $\sigma=a, n+b$ are functions (\ref{lin1})--(\ref{lin3}) corresponding to the same value of the parameter $\nu$. In an analogous way we can describe a special subclass of matrix superpotentials (\ref{SP}) with trivial matrices $R$ \cite{Yura}. In this case it is convenient to start with the diagonalization of the matrix $Q$ and write its entries as \begin{gather}Q_{\alpha\sigma}=\delta_{\alpha\sigma}q_\sigma,\quad \alpha,\sigma=1,2,\dots,K\label{Q!}\end{gather} where $q_\sigma$ are functions satisfying equation (\ref{riki}). Then the corresponding entries of the matrix $P$ satisfying (\ref{Pe}) are defined as: \begin{gather}\label{PaBB} P_{\alpha\sigma}=\mu_{\alpha \sigma}\exp \left(\frac12\alpha\int\!\!(q_\alpha+q_{\sigma}) dx\right).\end{gather} Functions $q_\alpha$ and $q_\sigma$ included into (\ref{PaBB}) have to satisfy equation (\ref{riki}) with the same value of the parameter $\nu$. In addition, the matrix whose entries are the integration constants $\mu_{\alpha \sigma}$ should be hermitian. \section{Superpotentials realized by $3\times3$ matrices\label{3x3}} Let us search for superpotentials (\ref{SP}) realized by matrices of dimension $3\times3$. We will restrict ourselves to the case when the parameter $\nu$ in the determining equation (\ref{a0}) is equal to zero and find the complete list of the related superpotentials. Like (\ref{inin}), (\ref{in0}), (\ref{inin0}) and (\ref{in00}), they are linear combinations of power functions of $x+c_i$ with some constants $c_i$. 
There are three versions of the related matrices $R$ whose general form is given in (\ref{R}): \begin{gather}R=\omega\left(\begin{array}{cc}I_2&0\\0&-1\end{array}\right), \label{R1}\\R=\omega I_3,\label{R2}\\ R=0_3.\label{R3}\end{gather} Let us start with the case presented in (\ref{R1}). The corresponding matrices $Q$ and $P$ are given by formulae (\ref{Q}) and (\ref{P}) with \begin{gather}A=\left(\begin{array}{cc}a_1&a_2\\a^*_2&a_3\end{array}\right), \quad B=\begin{pmatrix}b_1\\ b_2\end{pmatrix}, \quad \hat P=\begin{pmatrix}p_1\\ p_2\end{pmatrix}, \label{PU}\end{gather} where $a_1, a_2, a_3, c, b_1, b_2, p_1$ and $p_2$ are unknown scalar functions ($c$ denotes the single entry of the $1\times1$ block $C$). Moreover, $a_1, a_3$ and $c$ should be real, since otherwise $Q$ is not hermitian. Without loss of generality we suppose that $p_1$ and $p_2$ are imaginary, since by applying a unitary transformation to $Q$, $P$ and $R$ these functions can always be reduced to a purely imaginary form; moreover, the corresponding transformation matrix is diagonal. In the previous section we \emph{a priori} restricted ourselves to trivial matrices $B$. Let us show that this restriction is not necessary, at least for the considered case $\nu=0$. First let us prove that the system (\ref{A})--(\ref{CB}) is compatible iff $\bar\mu=\tau=0$, and that this is true for all versions of the matrix $R$ enumerated in (\ref{R1})--(\ref{R3}). Calculating the traces of the matrices present in (\ref{AB}) and (\ref{CB}) we obtain the following relation: \begin{gather}\label{Sp1}\tau (\texttt{Tr}A+\texttt{Tr}C)+\bar\mu(m-n)=0\end{gather} where $n=2, 3, 0$ for versions (\ref{R1}), (\ref{R2}), (\ref{R3}) respectively, and $m=3-n$. Differentiating all terms in (\ref{Sp1}) w.r.t. $x$ and using equations (\ref{A}), (\ref{C}) we obtain: \begin{gather}\label{Sp2}\tau(\texttt{Tr}A^2+\texttt{Tr}C^2+2\texttt{Tr}B^\dag B+\nu(m+n))=0.\end{gather} The first three terms in brackets are nonnegative, and we have set $\nu=0$. Hence if $\tau\neq0$ all terms in brackets should be zero. 
If the trace of the square of a hermitian matrix is zero then this matrix is zero too; the same is true for the matrix $B^\dag B$. Thus for $\tau\neq0$ the matrix $Q$ (\ref{Q}) is trivial. To obtain non-trivial solutions we have to set $\tau=0$, and then from (\ref{Sp1}) we obtain that $\bar\mu=0$ also. Substituting (\ref{PU}) into (\ref{AB}) and (\ref{CB}) (remember that $\bar\mu=\tau=0$) we obtain the following relations: \begin{gather}\label{pb0}\begin{split}&p_1b^*_2-b_1p_2=0, \quad p_2b^*_1-b_2p_1=0,\quad p_1(b_1-b^*_1)=0,\quad p_2(b_2-b^*_2)=0.\end{split}\end{gather} In accordance with (\ref{pb0}) there are three qualitatively different possibilities: \begin{gather}(a):\ p_1=p_2=0,\quad (b):\ p_1b_2=p_2b_1,\ b^*_1=b_1,\ b_2^*=b_2\label{a)}\end{gather} and \begin{gather}(c):\ b_1=b_2=0.\label{c)}\end{gather} In cases (\ref{a)}) the corresponding superpotentials are reducible. Indeed, in case (a) the only condition we need to satisfy is equation (\ref{a0}). But the matrix $Q$ can be diagonalized (see Appendix), and so the related superpotential can be reduced to a direct sum of three scalar potentials. The only possibility to realize case (b), as distinct from cases (a) and (c), is to suppose that \begin{gather}p_1=\beta p_2\quad \text{and}\quad b_1=\beta b_2\label{prop}\end{gather} or $p_1=\beta b_1$ and $p_2=\beta b_2$, where $\beta$ is a constant parameter. The second possibility is excluded since $p_a$ can be proportional to $b_a$ only in the case when these functions reduce to constants (otherwise $p_a$ and $b_a$ are linearly independent, compare equations (\ref{B}) and (\ref{Pe})), and so this possibility is reduced to (\ref{prop}) also. But if conditions (\ref{prop}) are realized the corresponding superpotential is reducible too, since the transformation $W\to UWU^\dag$ with \[U=\frac1{\sqrt{1+\beta^2}}\begin{pmatrix}\beta&1&0\\-1&\beta&0\\ 0&0&\sqrt{1+\beta^2}\end{pmatrix}\] makes it block diagonal. 
Thus to obtain an irreducible superpotential (\ref{SP}) we should impose condition (\ref{c)}), and our problem is reduced to solving the system (\ref{Aa}), (\ref{Pea}) with $\nu=0$. As in section \ref{2x2}, the $2\times2$ matrix $A$ can be chosen diagonal, i.e., we can set $a_2=0$ in (\ref{PU}), while the remaining (diagonal) entries of the matrix $Q$ can be denoted as $a_1=q_1,\ a_3=q_2,\ C=q_3$, compare with (\ref{A_C}). In accordance with (\ref{riki}) with $\nu=0$, the functions $q_i$ can independently take the following values: \begin{gather}\begin{split}&q_1=-\frac1{x+c_1}\quad \text{or}\quad q_1=0,\\& q_2=-\frac1{x+c_2}\quad \text{or}\quad q_2=0,\\& q_3=-\frac1{x+c_3}\quad \text{or}\quad q_3=0\end{split}\label{AC}\end{gather} where $c_1, c_2$ and $c_3$ are integration constants. The corresponding values of $p_1$ and $p_2$ are easily calculated using equation (\ref{PaB}). As a result we obtain the following irreducible superpotentials: \begin{gather}\begin{split}&W=(S_1^2-1)\frac\kappa{x+c_1}+(S_2^2-1)\frac\kappa{x+c_2}+ (S_3^2-1)\frac\kappa{x}\\&+ S_1\frac{\mu_1}{\sqrt{x(x+c_1)}}+ S_2\frac{\mu_2}{\sqrt{x(x+c_2)}}+\frac\omega\kappa (2S_3^2-1),\end{split} \label{w1}\\ \begin{split}&W=(S_1^2-1)\frac\kappa{x}+(S_2^2-1)\frac\kappa{x+c}+ S_1\frac{\mu_1}{\sqrt{x}}+ S_2\frac{\mu_2}{\sqrt{x+c}}+\frac\omega\kappa (2S_3^2-1),\end{split}\label{w2} \\\begin{split}&W=(S_1^2-1)\frac\kappa{x+c}+(S_3^2-1)\frac\kappa{x}+ S_1\frac{\mu_1}{\sqrt{x}}+ S_3\frac{\mu_2}{\sqrt{x(x+c)}}+\frac\omega\kappa (2S_3^2-1),\end{split}\label{w3}\\\begin{split}&W=(S_1^2-1)\frac\kappa{x} +S_1c+ S_2\frac{\mu}{\sqrt{x}}+\frac\omega\kappa (2S_3^2-1)\end{split}\label{w4}\end{gather} where $c, c_1, c_2, \mu, \mu_1$ and $\mu_2$ are integration constants, and \begin{gather}\label{s}S_1=\begin{pmatrix}0&0&0\\0&0&-\ri\\0&\ri&0\end{pmatrix} ,\quad S_2=\begin{pmatrix}0&0&\ri\\0&0&0\\-\ri&0&0\end{pmatrix},\quad S_3=\begin{pmatrix}0&-\ri&0\\\ri&0&0\\0&0&0\end{pmatrix} \end{gather} are matrices of spin $s=1$. 
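These matrices satisfy the spin-1 algebra, in particular $[S_1,S_2]=\ri S_3$ (and cyclic permutations) and $S_1^2+S_2^2+S_3^2=s(s+1)I=2I$; moreover each $S_a^2-1$ is minus a rank-one projector, which is what makes the combinations $(S_a^2-1)$ in (\ref{w1})--(\ref{w4}) natural. A quick numpy check (editorial, not from the paper):

```python
# numpy check of the spin-1 algebra for the matrices S1, S2, S3 in (s).
import numpy as np

S1 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
S2 = np.array([[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]])
S3 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])

comm = lambda a, b: a @ b - b @ a
assert np.allclose(comm(S1, S2), 1j*S3)     # [S1,S2] = i S3
assert np.allclose(comm(S2, S3), 1j*S1)
assert np.allclose(comm(S3, S1), 1j*S2)
assert np.allclose(S1 @ S1 + S2 @ S2 + S3 @ S3, 2*np.eye(3))  # s(s+1) = 2

P3 = S3 @ S3 - np.eye(3)        # S_a^2 - 1 is minus a projector
assert np.allclose(P3 @ P3, -P3)
```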
Formulae (\ref{w1})--(\ref{w4}) give the complete list of $3\times3$ matrix superpotentials including the matrix $R$ in the form (\ref{R1}). If this matrix is proportional to the unit one (i.e., if the version (\ref{R2}) is realized), the related matrix $P$ should be trivial, see equation (\ref{a01}) with $\lambda=0$. Diagonalizing the corresponding matrix $Q$ we obtain a direct sum of three scalar potentials, i.e., the related superpotentials are reducible. In the case (\ref{R3}) we can again restrict ourselves to diagonal matrices $Q$ whose entries are enumerated in (\ref{AC}). The corresponding matrices $P$ can be calculated using equation (\ref{PaBB}). As a result we obtain the following superpotentials: \begin{gather}\begin{split}&W=(S_1^2-1)\frac\kappa{x+c_1}+(S_2^2-1)\frac\kappa{x+c_2}+ (S_3^2-1)\frac\kappa{x}\\&+ S_1\frac{\mu_1}{\sqrt{x(x+c_1)}}+ S_2\frac{\mu_2}{\sqrt{x(x+c_2)}}+ S_3\frac{\mu_3}{\sqrt{(x+c_1)(x+c_2)}},\end{split}\label{w5}\\ \begin{split}&W=(S_1^2-1)\frac\kappa{x}+(S_2^2-1)\frac\kappa{x+c}+ S_1\frac{\mu_1}{\sqrt{x}}+ S_2\frac{\mu_2}{\sqrt{x+c}}+S_3\frac{\mu_3}{\sqrt{x(x+c)}},\end{split} \label{w6}\\\begin{split}&W=(S_1^2-1)\frac\kappa{x} +S_1c+ S_3\frac{\mu_1}{\sqrt{x}}+S_2\frac{\mu_2}{\sqrt{x}}.\end{split}\label{w7}\end{gather} Formulae (\ref{w1})--(\ref{w7}) present the complete list of irreducible $3\times3$ matrix superpotentials corresponding to the zero value of the parameter $\nu$ in (\ref{a0}). In full analogy with the above we can find superpotentials with nonzero $\nu$. In this case the list of solutions (\ref{AC}) is replaced by the solutions (\ref{lin1})--(\ref{lin3}) with $\sigma=1,2,3$ and the same value of $\nu$ for all values of $\sigma$. The corresponding matrices $P$ are again calculated using equation (\ref{PaB}) for non-trivial $R$ and equation (\ref{PaBB}) for trivial $R$. \section{Examples of integrable matrix models\label{models}} Thus we have obtained a collection of integrable models with matrix potentials. 
The related superpotentials of dimension $2\times2$ are given by equations (\ref{tan})--(\ref{in00}). They are defined up to arbitrary constants $c, \lambda, \mu, \nu, \dots$. In addition, in section \ref{ArbDim} we present an infinite number of superpotentials realized by matrices of arbitrary dimension. So we have a rather large database of shape invariant models, whose potentials have the form indicated in (\ref{ham}). Of course it is impossible to present a consistent analysis of all the found models in one paper, but we can discuss at least some of them. In this and the following sections we consider particular examples of the found models. \subsection{Matrix Hamiltonians with Hydrogen atom spectra} Let us start with the superpotential given by equation (\ref{in0}). In addition to the variable parameter $\kappa$ it includes an arbitrary parameter $\mu$ and two additional parameters, $r_2$ and $r_3$, which define the matrix $R$ (\ref{res}). Moreover, $\mu^2+r_2^2\neq0$, since otherwise operator (\ref{in0}) reduces to a direct sum of two scalar superpotentials. The simplest version of the considered superpotential corresponds to the case $\mu=0$ and $r_3=0, r_2=\omega$. Then with the unitary transformation $W^{(8)}_\kappa\to W_\kappa= UW^{(8)}_\kappa U^\dag$, $U=(1+\ri\sigma_3)/\sqrt{2},$ we transform $W^{(8)}_\kappa$ to the following (real) form: \begin{gather}W_\kappa=-\frac{\sigma_+\kappa}{x}- \frac{\sigma_1\omega}{\kappa}. \label{simpl1}\end{gather} The corresponding potential (\ref{ham}) reads: \begin{gather}V_\kappa=\frac{\kappa(\kappa-1)\sigma_+}{x^2}+ \frac{\omega\sigma_1}{x}+\frac{\omega^2}{\kappa^2}.\label{simpl2}\end{gather} Shape invariance of potential (\ref{simpl2}) is almost evident. 
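Indeed, a direct sympy computation (an editorial check) with $V_\kappa=W_\kappa^2-W_\kappa'$ confirms that the superpartner $V_\kappa^+=W_\kappa^2+W_\kappa'$ differs from $V_{\kappa+1}$ only by a constant multiple of the unit matrix, the constant being $\frac{\omega^2}{\kappa^2}-\frac{\omega^2}{(\kappa+1)^2}$:

```python
# sympy verification of shape invariance for the superpotential (simpl1):
# V_k^+ - V_{k+1} is a constant multiple of the unit matrix.
import sympy as sp

x, om, k = sp.symbols('x omega kappa', positive=True)

W = lambda kk: sp.Matrix([[-kk/x, -om/kk], [-om/kk, 0]])
V = lambda kk: W(kk)**2 - sp.diff(W(kk), x)
Vplus = W(k)**2 + sp.diff(W(k), x)

D = sp.simplify(Vplus - V(k + 1))
assert D[0, 1] == 0 and D[1, 0] == 0          # off-diagonal terms cancel
assert sp.simplify(D[0, 0] - D[1, 1]) == 0    # diagonal entries coincide
C_k = D[0, 0]
assert sp.diff(C_k, x) == 0                   # the difference is a constant
assert sp.simplify(C_k - (om**2/k**2 - om**2/(k + 1)**2)) == 0
```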
Calculating its superpartner $V_\kappa^+=W_\kappa^2+W_\kappa'$ we easily find that \begin{gather}\label{shi} V_\kappa^+=V_{\kappa+1}+C_\kappa\end{gather} where \[C_\kappa=\frac{\omega^2}{\kappa^2}-\frac{\omega^2}{(\kappa+1)^2}.\] Using this property (which makes our model in some aspects similar to the non-relativistic Hydrogen atom) we immediately find the spectrum of Hamiltonian (\ref{hamiltonian}) with potential (\ref{simpl2}), counted from the constant term $\frac{\omega^2}{\kappa^2}$ in (\ref{simpl2}): \begin{gather}E_N=-\frac{\omega^2}{N^2}\label{simpl3}\end{gather} where $N=\kappa+n$, $n=0,1,2,\dots$ The ground state vector $\psi_{0}(\kappa,x)$ should solve the equation \begin{gather}a^-_\kappa\psi_{0}(\kappa,x)\equiv\left(\frac{\p}{\p x}+W_\kappa\right)\psi_{0}(\kappa,x)=0\label{simpl4}\end{gather} where $W_\kappa$ is the $2\times2$ matrix superpotential (\ref{simpl1}) and $\psi_0(\kappa,x)$ is a two component function: \begin{gather}\psi_0(\kappa,x)=\begin{pmatrix}\varphi\\ \xi\end{pmatrix}\label{psi00}.\end{gather} Substituting (\ref{simpl1}) and (\ref{psi00}) into (\ref{simpl4}) we obtain the following system: \begin{gather}\varphi'-\frac{\kappa}{x}\varphi-\frac{\omega}{\kappa}\xi=0, \label{simpl6} \\\kappa\xi'-\omega \varphi=0.\label{simpl7} \end{gather} Substituting $\varphi=\frac\kappa\omega\xi'$ obtained from (\ref{simpl7}) into (\ref{simpl6}) we come to the second-order equation for $\xi$: \begin{gather*}\xi''-\frac\kappa{x}\xi'- \frac{\omega^2}{\kappa^2}\xi=0,\end{gather*} whose normalizable solutions are: \begin{gather}\xi=\omega x^\nu K_\nu\left(\frac{\omega x}\kappa\right),\quad \nu=\frac{\kappa+1}2\label{simpl8}\end{gather} where $K_\nu(\cdot)$ are modified Bessel functions. 
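A numeric spot check (mpmath, with sample values $\kappa=2$, $\omega=3/2$; editorial, not from the paper): taking $\xi$ from (\ref{simpl8}) and reconstructing the first component from (\ref{simpl7}) via the Bessel recurrence $K_{\nu-1}(z)=K_{\nu+1}(z)-\frac{2\nu}{z}K_\nu(z)$, i.e. $\varphi=\kappa(\kappa+1)x^{\nu-1}K_\nu-\omega x^\nu K_{\nu+1}$, both equations of the system are satisfied to working precision:

```python
# mpmath spot check that xi = omega*x^nu*K_nu(omega*x/kappa), nu=(kappa+1)/2,
# and phi = kappa*(kappa+1)*x^(nu-1)*K_nu - omega*x^nu*K_{nu+1}
# solve the ground state system (simpl6), (simpl7).
from mpmath import mp, mpf, besselk, diff

mp.dps = 30
kappa, omega = mpf(2), mpf('1.5')    # sample parameter values
nu = (kappa + 1)/2
c = omega/kappa

xi = lambda x: omega * x**nu * besselk(nu, c*x)
phi = lambda x: (kappa*(kappa + 1)*x**(nu - 1)*besselk(nu, c*x)
                 - omega * x**nu * besselk(nu + 1, c*x))

for x0 in [mpf('0.5'), mpf(1), mpf(3)]:
    r7 = kappa*diff(xi, x0) - omega*phi(x0)                         # (simpl7)
    r6 = diff(phi, x0) - (kappa/x0)*phi(x0) - (omega/kappa)*xi(x0)  # (simpl6)
    assert abs(r6) < mpf('1e-15') and abs(r7) < mpf('1e-15')
```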
The first component of function (\ref{psi00}) is easily calculated using (\ref{simpl7}): \begin{gather}\label{simpl9}\ \varphi=\kappa(\kappa+1)x^{\nu-1}K_\nu\left(\frac{\omega x}\kappa\right)-\omega x^\nu K_{\nu+1}\left(\frac{\omega x}\kappa\right).\end{gather} Solutions (\ref{psi00}), (\ref{simpl8}), (\ref{simpl9}) are square integrable on the half axis $0\leq x<\infty$ for any positive $\kappa$. The solution $\psi_n(\kappa,x)$ corresponding to the $n^\text{th}$ excited state can be obtained from the ground state vector using the following standard relation of SUSY quantum mechanics (see, e.g., \cite{Khare}): \begin{gather}\label{psin}\psi_n(\kappa,x)= a_{\kappa}^+a_{\kappa+1}^+ \cdots a_{\kappa+n-1}^+\psi_0(\kappa+n,x). \end{gather} It is not difficult to show that the vectors (\ref{psin}) are square integrable too, provided $\kappa$ is positive. In an analogous manner it is possible to handle the ground state wave functions corresponding to superpotential (\ref{in0}) with other values of the parameters $r_2, r_3$ and $\mu$. In the general case (but for $r_2\mu\neq0$) these wave functions are expressed via products of exponentials $\exp\left(-\frac{\omega{x}}{\kappa}\right)$, powers of $x$ and Kummer functions $U_{\alpha\nu} \left(\frac{2\omega x}\kappa\right)$. We will not present the corresponding cumbersome formulae here. \subsection{Matrix Hamiltonians with oscillator spectra\label{oscil}} Consider the next relatively simple model, which corresponds to superpotential (\ref{in00}) with $c=0$. 
Denoting $W^{(17)}_\kappa=W_\kappa$ we obtain the following potential: \begin{gather}\label{pot1}\begin{split}&V_\kappa=W_\kappa^2-W_\kappa'= \sigma_+\left(\frac{4\kappa^2-1}{4x^2}+\frac{\omega^2x^2}{16}+ \frac{\mu^2}{x}-(\kappa+1)\frac{\omega}{2}\right)\\&+\sigma_- \left(\frac{\omega^2x^2}{4}+\frac{\mu^2}{x}-\frac{\omega}{2}\right)-\sigma_1 \left(\frac{3\omega x}{4} -\frac{\kappa}{x}\right)\frac{\mu}{\sqrt{x}}.\end{split}\end{gather} It is easy to verify that the superpartner of $V_\kappa$, i.e., $V_\kappa^+=W_\kappa^2+W_\kappa'$, is equal to $V_{\kappa+1}$ up to a constant term: \begin{gather}\label{shape}V_\kappa^+=V_{\kappa+1}+\omega.\end{gather} In other words, potential (\ref{pot1}) is shape invariant in accordance with the definition given in \cite{Gen}. Moreover, as in the case of the supersymmetric oscillator, the superpartner potential $V_\kappa^+$ differs from $V_{\kappa+1}$ by the constant term $\omega$ which does not depend on the variable parameter $\kappa$. Using standard tools of SUSY quantum mechanics it is possible to find the spectrum of system (\ref{eq}), (\ref{hamiltonian}) with potential (\ref{pot1}): \begin{gather}E_n=n\omega,\quad n=0,1,2,...\label{osc}\end{gather} which coincides with the spectrum of the supersymmetric oscillator. The ground state vector is defined as a square integrable solution of equation (\ref{simpl4}). 
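The relation (\ref{shape}) can also be confirmed directly with sympy (an editorial check computed from $W^{(17)}_\kappa$ with $c=0$; the off-diagonal contributions cancel and the diagonal difference is exactly $\omega$):

```python
# sympy check of V_k^+ = V_{k+1} + omega for the superpotential (in00)
# with c = 0.
import sympy as sp

x, om, mu, k = sp.symbols('x omega mu kappa', positive=True)

def W(kk):
    wp = -((2*kk + 1)/(2*x) - om*x/4)   # sigma_+ entry
    wm = om*x/2                          # sigma_- entry
    p = -mu/sp.sqrt(x)                   # sigma_1 entry
    return sp.Matrix([[wp, p], [p, wm]])

V = lambda kk: W(kk)**2 - sp.diff(W(kk), x)
Vplus = W(k)**2 + sp.diff(W(k), x)

D = sp.simplify(Vplus - V(k + 1))
assert sp.simplify(D - om*sp.eye(2)) == sp.zeros(2, 2)
```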
Substituting (\ref{in00}) and (\ref{psi00}) into (\ref{simpl4}) we obtain the following system: \begin{gather}\varphi'-\left(\frac{2\kappa+1}{2x}-\frac{\omega x}{4}\right)\varphi-\frac{\mu}{\sqrt{x}}\xi=0,\label{1}\\\xi'+\frac{\omega x}{2}\xi-\frac{\mu}{\sqrt{x}}\varphi=0.\label{2}\end{gather} Making in (\ref{1}), (\ref{2}) the change of variables \begin{gather}\label{vp}\varphi=\exp\left(-\frac{\omega x^2}{4}\right)\tilde\varphi,\quad \xi=\exp\left(-\frac{\omega x^2}{4}\right)\tilde\xi,\end{gather} solving equation (\ref{2}) for $\tilde\xi$ and substituting the found expression into (\ref{1}), we obtain a second-order equation for $\tilde\varphi$: \begin{gather*}\tilde\varphi''-\left(\frac{\omega x}{4}+\frac{\kappa}{x}\right)\tilde\varphi'-\frac{\mu^2}{x}\tilde\varphi=0. \end{gather*}Its solutions are linear combinations of biconfluent Heun functions: $\tilde\varphi=c_1\tilde\varphi_1+c_2\tilde\varphi_2$ where \begin{gather}\tilde\varphi_1=H_B(-a_+,0,a_-,b,cx),\quad \tilde\varphi_2=x^{\kappa+1}H_B(a_+,0,a_-,b,cx)\label{3}\end{gather}where \begin{gather}a_\pm=1\pm\kappa,\quad b=\frac{4\sqrt{2}\mu^2}{\sqrt{\omega}},\quad c=\frac{\sqrt{2\omega}}{4}. 
\label{4}\end{gather} Thus, in accordance with (\ref{vp}), (\ref{3}) and (\ref{1}) we have two ground state solutions (\ref{psi00}) with \begin{gather}\begin{split}&\label{5}\varphi=\varphi_1=\exp\left(-\frac{\omega x^2}{4}\right) H_B(-a_+,0,a_-,b,cx),\\& \xi=\xi_1=\exp\left(-\frac{\omega x^2}{4}\right)\left(\frac{\sqrt{2\omega x}}{4\mu}H'_B(-a_+,0,a_-,b,cx)\right.\\&\left.-\frac1{2\mu}\left(\frac{2\kappa+1}{\sqrt{x}}+ \frac{\omega x^\frac32}{2}\right)H_B(-a_+,0,a_-,b,cx)\right)\end{split} \end{gather} and \begin{gather}\begin{split}&\label{6}\varphi=\varphi_2=\exp\left(-\frac{\omega x^2}{4}\right)x^{\kappa+1}H_B(a_+,0,a_-,b,cx),\\& \xi=\xi_2=\exp\left(-\frac{\omega x^2}{4}\right)\left(\frac{\sqrt{2\omega x}}{4\mu}H'_B(a_+,0,a_-,b,cx)\right.\\&\left.-\frac1{2\mu}\left(\frac{2\kappa+1}{\sqrt{x}}+ \frac{\omega x^\frac32}{2}\right)H_B(a_+,0,a_-,b,cx)\right).\end{split} \end{gather} Functions (\ref{psi00}) whose components are defined in (\ref{5}) and (\ref{6}) are square integrable for any real values of the parameters $\kappa, \mu$ and positive $\omega$. Notice that for integer $\kappa$ solutions (\ref{5}) and (\ref{6}) are linearly dependent. We will not present here the cumbersome expression of the second solution linearly independent of (\ref{5}), which can be easily found by solving system (\ref{1}), (\ref{2}) for integer $\kappa$. One more matrix superpotential generating the spectrum of the supersymmetric oscillator is given by equation (\ref{inin0}). Setting for simplicity $\delta=0$ we obtain the corresponding Hamiltonian (\ref{ham}) in the following form: \begin{gather}\begin{split}&V_\kappa=\frac{\omega^2}{4}(x^2+c^2)+ \frac{\mu^2}{x^2-c^2}+\left(\kappa+\frac12\right)\omega- \sigma_1\mu x\left(\frac{2\kappa-1}{(x^2-c^2)^\frac32}+ \frac{\omega}{(x^2-c^2)^\frac12}\right)\\&+\kappa(\kappa-1) \left(\sigma_+\frac1{(x+c)^2}+\sigma_-\frac1{(x-c)^2}\right) .\end{split}\label{pot2}\end{gather} Like (\ref{pot1}), potential (\ref{pot2}) satisfies the form-invariance condition written in the form (\ref{shape}).
The spectrum of the corresponding Hamiltonian (\ref{hamiltonian}) is given by equation (\ref{osc}). The ground state vectors, i.e., solutions of equation (\ref{simpl4}) where $W_\kappa=W^{(16)}_\kappa$ is superpotential (\ref{inin0}) with $\delta=0$, are given by equation (\ref{psi00}) with components $\varphi$ and $\xi$ given below: \begin{gather}\begin{split}&\varphi=\varphi_1=\exp\left(-4\omega (x+c)^2\right)(c^2-x^2)^\kappa(c-x)^\frac12 H_C\left(a,b_-,b_+,d,r;\frac{x+c}{2c}\right),\\&\xi=\xi_1=\frac{\ri}{cx} \exp\left(-4\omega (x+c)^2\right)(c^2-x^2)^\kappa\left(\frac{x-c}{2} H'_C\left(a,b_-,b_+,d,r;\frac{x+c}{2c}\right)\right.\\&\left.+\left(\kappa+\frac12\right) H_C\left(a,b_-,b_+,d,r;\frac{x+c}{2c}\right)\right)\end{split}\label{Sol2}\end{gather} where $H_C(\dots)$ is the confluent Heun function, \begin{gather}\label{Sol3}a=-4\omega c^2,\quad b_\pm=\kappa\pm\frac12,\quad d=2\omega c^2,\quad r=2b_+c^2\omega+\frac12\kappa^2+\frac38-\mu^2.\end{gather} There exists one more ground state vector for Hamiltonian (\ref{hamiltonian}) with potential (\ref{pot2}) whose components are \begin{gather}\begin{split}&\varphi_2=\exp\left(-4\omega (x+c)^2\right)(c^2- x^2)^\frac12(c-x)^\kappa H_C\left(a,-b_-,b_+,d,r;\frac{x+c}{2c}\right),\\&\xi_2=\frac{\ri}{cx} \exp\left(-4\omega (x+c)^2\right)(c-x)^\kappa\left(c\left(2c\kappa-x\right) H_C\left(a,-b_-,b_+,d,r;\frac{x+c}{2c}\right)\right.\\&\left.+\frac{x^2-c^2}{2} H'_C\left(a,-b_-,b_+,d,r;\frac{x+c}{2c}\right)\right)\end{split}\label{Sol4}\end{gather} where $a, b_\pm, d$ and $r$ are parameters defined by equation (\ref{Sol3}). For $\omega>0$ and $\kappa>0$ functions (\ref{Sol2}) and (\ref{Sol4}) are square integrable on the whole real axis.
\subsection{Potentials including hyperbolic functions} An important model of ordinary (scalar) SUSY quantum mechanics is described by the Schr\"o\-din\-ger equation with the hyperbolic Scarf potential \begin{gather}\label{Scarf}V_\kappa=-\kappa(\kappa-1)\sech^2(x).\end{gather} This model possesses a peculiar nature at integer values of the parameter $\kappa$, namely, it is a reflectionless (non-periodic finite-gap) system which is isospectral with the free quantum mechanical particle. In addition, this model possesses a hidden (bosonized) nonlinear supersymmetry \cite{plush1}. Let us consider shape invariant matrix potentials including (\ref{Scarf}) as an entry. The corresponding superpotential can be chosen in the form (\ref{tanh_exp}) where $\mu=0$ and $c_3=0, c_2=\omega$. Then with the unitary transformation $W^{(7)}_\kappa\to W_\kappa= UW^{(7)}_\kappa U^\dag, U=(1+\ri\sigma_3)/\sqrt{2}$ we transform it to the real form: \begin{gather}W_\kappa=\lambda\left(-\kappa(\sigma_+\tanh(\lambda x)+\sigma_-)+\sigma_1\frac{\omega}{\kappa}\right).\label{tanh_exp1}\end{gather} The corresponding potential reads: \begin{gather}\label{pott1}V_\kappa=\lambda^2\left(-\sigma_+ \kappa(\kappa-1)\sech^2(\lambda x)-\sigma_1\omega(\tanh(\lambda x)+1)+\frac{\omega^2}{\kappa^2}+\kappa^2\right). \end{gather} Potential (\ref{pott1}) satisfies the shape invariance condition (\ref{shi}) with \begin{gather}\label{pott2}C_\kappa=\lambda^2\left(\frac{\omega^2}{(\kappa+1)^2}+(\kappa+1)^2- \frac{\omega^2}{\kappa^2}-\kappa^2\right)\end{gather} thus the discrete spectrum of the corresponding Hamiltonian (\ref{hamiltonian}) is given by the following formula: \[E=-\lambda^2\left(\frac{\omega^2}{(\kappa+n)^2}+(\kappa+n)^2\right)\] where \begin{gather}\label{pott3} n=0,1, \dots ,\quad \kappa+n<0,\quad (\kappa+n)^2>\omega.\end{gather} Conditions (\ref{pott3}) will be justified in what follows.
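The matrix algebra behind (\ref{pott1}) is easy to machine-check. The sympy sketch below (ours, not part of the original derivation) uses the standard representations $\sigma_\pm=(1\pm\sigma_3)/2$ and verifies both $V_\kappa=W_\kappa^2-W_\kappa'$ and the fact that the superpartner $W_\kappa^2+W_\kappa'$ differs from $V_{\kappa+1}$ only by an $x$-independent matrix:

```python
import sympy as sp

x, kappa, omega, lam = sp.symbols('x kappa omega lambda', positive=True)
I2 = sp.eye(2)
s1 = sp.Matrix([[0, 1], [1, 0]])
s3 = sp.diag(1, -1)
sig_p = (I2 + s3)/2  # sigma_+
sig_m = (I2 - s3)/2  # sigma_-
T = sp.tanh(lam*x)

def W(k):
    # superpotential (tanh_exp1)
    return lam*(-k*(sig_p*T + sig_m) + s1*omega/k)

def V(k):
    # V_k = W_k^2 - W_k'
    Wk = W(k)
    return Wk*Wk - Wk.applyfunc(lambda e: sp.diff(e, x))

# potential (pott1)
V_claim = lam**2*(-sig_p*kappa*(kappa - 1)/sp.cosh(lam*x)**2
                  - s1*omega*(T + 1) + (omega**2/kappa**2 + kappa**2)*I2)
assert sp.simplify(V(kappa) - V_claim) == sp.zeros(2, 2)

# the superpartner differs from V_{k+1} by an x-independent matrix
Wk = W(kappa)
Vplus = Wk*Wk + Wk.applyfunc(lambda e: sp.diff(e, x))
delta = sp.simplify(Vplus - V(kappa + 1))
assert delta.applyfunc(lambda e: sp.diff(e, x)) == sp.zeros(2, 2)
```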
To find the ground state vector we should solve equation (\ref{simpl4}) with $W_\kappa$ and $\psi_0(\kappa,x)$ given by formulae (\ref{tanh_exp1}) and (\ref{psi00}), respectively. This equation is easily integrable and has the following normalizable solutions: \begin{gather}\begin{split}&\varphi= y^{-\frac{\sqrt{\kappa^4+\omega^2}} {\kappa}}(1-y)^{\frac\omega{2\kappa}-\frac\kappa2}{_2F_1}(a,b,c; y),\\&\xi=\frac\kappa\omega\left(\kappa(2y-1)\varphi+2y(y-1) \frac{\partial\varphi}{\p y}\right).\end{split}\label{lab}\end{gather} Here $_2F_1(a,b,c;y)$ is the hypergeometric function, \begin{gather}\begin{split}&\label{tan11}c=1-\frac1\kappa\sqrt{\kappa^4+\omega^2},\quad b=c+\frac\kappa2+\frac\omega{2\kappa}, \quad a=b-\kappa-1,\\& y=\frac12(\tanh\lambda x+1).\end{split}\end{gather} Wave functions for excited states can be found starting with (\ref{lab}) and using equation (\ref{psin}). In order for functions (\ref{lab}) and the corresponding excited-state functions to be square integrable, the parameters $\kappa, \omega$ and $n$ should satisfy conditions (\ref{pott3}); see the discussion of normalizability of state vectors including the hypergeometric function in section 10 of paper \cite{yur1}.
Consider also superpotential (\ref{tanh_exp}) with $\omega=0$ and $\mu\neq0$: \begin{gather}\label{last0}W_\kappa=-\lambda\left(\kappa(\sigma_+\tanh\lambda x+\sigma_-)+\sigma_1\mu\sqrt{\sech\lambda x\exp(-\lambda x)}\right).\end{gather} The corresponding potential (\ref{ham}) has the following form: \begin{gather}\begin{split}\label{last1}&V_\kappa=\lambda^2\left(-\sigma_+ \kappa(\kappa-1)\sech^2\lambda x +\kappa^2\right.\\&\left.+\mu^2\sech\lambda x\exp(-\lambda x) +\sigma_1\mu(2\kappa-1)\exp\frac{\lambda x}{2}\sech^{\frac32}\lambda x\right).\end{split}\end{gather} Solving equation (\ref{simpl4}) with $W_\kappa$ and $\psi_0(\kappa,x)$ given by formulae (\ref{last0}) and (\ref{psi00}) we find components of the ground state vector: \begin{gather*}\varphi=\frac{1}{\mu}\sqrt{\frac{1+\exp(2\lambda x)}{2}} (\kappa\xi-\xi'),\quad\xi=y^\nu(1-y)^{-\frac\kappa2}\;{_2F_1}(a,b,c; y),\\a=\nu-\frac\kappa2,\quad b=a+\kappa+\frac12,\quad c=1+2\nu, \quad \nu=\frac12\sqrt{\kappa^2+2\mu^2},\quad y=\frac12(\tanh\lambda x+1)\end{gather*} which are square integrable for $\kappa<0$. The discrete spectrum of Hamiltonian (\ref{hamiltonian}) with potential (\ref{last1}) is given by the following formula: $E=-\lambda^2(\kappa+n)^2$ where $n$ is a natural number satisfying the condition $\kappa+n<0$. If this condition is violated, the related eigenvectors (\ref{psin}) are not normalizable. \section{Multidimensional integrable models\label{3d}} The models considered in the previous subsections are one dimensional in the spatial variable. Of course, it is more interesting to search for multidimensional (especially, three dimensional) models which can be reduced to integrable models by separation of variables. Famous examples of such models are the (non-relativistic) Hydrogen atom and the Pron'ko-Stroganov system \cite{Pron}, which can be reduced to scalar and matrix shape invariant systems, respectively.
A more "fresh" example is the reduction of the AdS/CFT holographythe model to the Poschl-Teller system proposed in \cite{plush2}. In this section we consider new examples of the three-dimensional Schr\"odinger-Pauli equations which can be reduced to a shape invariant form by separation of variables. Moreover, the related effective potentials in radial variable belong to the shape invariant potentials deduced above. \subsection{Spinor models \label{spinor}} Consider shape invariant potential generated by the following superpotential: \begin{gather}W=(\mu\sigma_3-j-1)\frac1x+\frac\omega{2(j+1)}\sigma_1. \label{3d1}\end{gather} This operator belongs to the list of matrix superpotentials presented in section \ref{CompList}, see equation (\ref{inin}). More exactly, to obtain (\ref{3d1}) it is necessary to set $c=0, \kappa=j+1, r_2=0$ and $r_1=-\frac\omega2$ in (\ref{inin}) and (\ref{res}). Then such specified superpotential $W_\kappa^{(7)}$ appears to be unitary equivalent to $W$ (\ref{3d1}), namely, \begin{gather}\label{trans}W=UW_\kappa^{(7)}U^{\dag}\quad \text{with}\quad U=\frac1{\sqrt{2}}(1+\ri\sigma_2).\end{gather} Calculating potential (\ref{ham}) corresponding to superpotential (\ref{3d1}) with $\mu=\frac12$ we obtain: \begin{gather}V=W^{2}-W'=\left(j(j+1)+\frac14- \left(j+\frac12\right)\sigma_3\right)\frac1{x^2} -\frac\omega{x}\sigma_1.\label{3d2}\end{gather} By construction, potential (\ref{3d2}) is shape invariant, thus the related eigenvalue problem \begin{gather}\left(-\frac{\p^2}{\p x^2}+V\right)\psi=E\psi\label{3d3}\end{gather} can be solved exactly using standard tools of SUSY quantum mechanics. The corresponding ground state vector is a two-component function (\ref{psi00}) with \begin{gather}\label{3d4} \varphi=y^{j+\frac32} K_{1}\left(y\right), \quad\tilde\xi= y^{j+\frac32}K_{0}\left(y\right),\end{gather} where $y=\frac{\omega x}{2(j+1)}$. 
The energy spectrum is given by equation (\ref{simpl3}) with $N=2j+n+1,\quad n=0,1,\dots$ The eigenvalue problem (\ref{3d3}) with potential (\ref{3d2}) involves the single independent variable $x$. However, it can be treated as a radial equation corresponding to the following three dimensional Hamiltonian with the Pauli type potential: \begin{gather}H=-\Delta+\omega{\mbox{{\boldmath $\sigma$}}}\cdot {\bf B},\quad {\bf B}=\frac{{ \bf x}}{x^2}.\label{3d5}\end{gather} Here $\Delta$ is the Laplace operator, $\frac12\mbox{\boldmath{ $\sigma$}}$ is a spin vector whose components are Pauli matrices (\ref{pm}), and ${\bf B}$ is the coordinate three vector divided by $x^2=x_1^2+x_2^2+x_3^2$. Of course, $\bf B$ has nothing to do with the magnetic field since $\nabla\cdot{\bf B}\neq0$. However, it can represent another field, e.g., the axion one \cite{wilczek}. Existence of such solutions for equations of axion electrodynamics was indicated recently \cite{Oksana}. Expanding solutions of the eigenvalue problem for Hamiltonian (\ref{3d5}) via spherical spinors we obtain exactly equation (\ref{3d3}) for the radial functions. We do not present the corresponding calculation here; it can be done using the standard representations of the Laplace operator and the matrix $\mbox{\boldmath $\sigma$}\cdot \bf{x}$ in the spherical spinor basis, which can be found, e.g., in \cite{FN}. Let us recall that the Pron'ko-Stroganov model is based on the following (rescaled) Hamiltonian \begin{gather}\label{ps}H=-\Delta+\frac{\sigma_1x_2-\sigma_2x_1}{r^2},\quad r^2=x_1^2+x_2^2 \end{gather} which is reduced to the following form in cylindrical variables: \begin{gather} \label{HamP} \hat{H}_m=-\frac{\partial^2}{\partial r^2}+m(m-\sigma_3) \frac{1}{r^2}+\sigma_1\frac{1} {{r}} \end{gather} (we ignore derivatives w.r.t. $x_3$). Hamiltonian (\ref{HamP}) can be expressed in the form (\ref{s3}).
Moreover, the corresponding superpotential again can be obtained starting with superpotential (\ref{inin}) by setting $\kappa=m+\frac12, \mu=\frac12, c=r_2=0, r_1=\frac12$ and making transformation (\ref{trans}). \subsection{Vector models\label{vector}} Hamiltonian (\ref{ps}) corresponds to a physically realizable system, i.e., a neutral fermion moving in the field of a straight constant current. A natural desire to generalize this model to particles with spin higher than $\frac12$ appears to be hard to satisfy: if we simply replace the Pauli matrices in (\ref{ps}) by matrices, say, of spin one, the resulting model will not be integrable \cite{Gol2}. In paper \cite{Pron2} integrable generalizations of model Hamiltonian (\ref{ps}) to the case of arbitrary spin have been formulated. The price paid for this progress was the essential complication of the Pauli interaction term present in (\ref{ps}). However, there are rather strong physical arguments for such a complication \cite{Pron2}, see also \cite{Beckers} for arguments obtained in the framework of the relativistic approach. In this section we present a new formulation of the spin-one Pron'ko model \cite{Pron2}. In doing so we pursue the following goals: to apply our abstract analysis of shape invariant matrix potentials to a physically relevant system and to show that this model is shape invariant and so can be easily solved using the SUSY technique. Let us start with superpotential (\ref{w1}) with $c_1=c_2=\mu_2=0,\ \mu_1=1$. Making the unitary (rotation) transformation $W\to UWU^\dag$ with $U=\exp\left(\ri S_2\pi/4\right)$ and changing the notation $x\to r, \kappa\to m+\frac12$ we reduce it to the following form: \begin{gather} \label{s4} W=\frac{1}{r}S_3-\frac{\omega}{2m+1}\left(2S_1^2-1\right)-\frac{2m+1}{2r}.
\end{gather} In addition, we transform the spin matrices to the Gelfand-Tsetlin form: \begin{gather} S_1=\frac1{\sqrt2}\begin{pmatrix}0&1&0\\1&0&1\\0&1&0\end{pmatrix},\quad S_2= \frac\ri{\sqrt2}\begin{pmatrix}0&-1&0\\1&0&-1\\0&1&0\end{pmatrix},\quad S_3=\begin{pmatrix}1&0&0\\0&0&0\\0&0&-1\end{pmatrix}.\label{SPIN2}\end{gather} The corresponding shape-invariant potential reads \begin{gather} \label{HamPP} V_m=W^2-W'=\left((m-S_3)^2-\frac14\right) \frac{1}{r^2}+\omega\left(2S_1^2-1\right)\frac{1} {{r}}+\frac{\omega^2}{(2m+1)^2}. \end{gather} So far we have simply presented one of the numerous supersymmetric models classified above. Now we are ready to formulate a two dimensional model which generates the effective potential (\ref{HamPP}). The corresponding Hamiltonian can be written as: \begin{gather}H=-\frac{\p^2}{\p x_1^2}-\frac{\p^2}{\p x_2^2}+\omega\frac{2({\bf S}\cdot{\bf H})^2-{\bf H}^2}{|\bf H|}\label{IH}.\end{gather} Here $\mathbf H$ is the two-dimensional vector of magnetic field generated by an infinite straight current; its components are $H_1=q\frac{x_2}{r^2}$ and $H_2=-q\frac{x_1}{r^2}$. First we note that the last term in (\ref{IH}) is a particular case of the interaction term found in \cite{Pron2}, see equations (15), (21), (29) therein for $s=1$ and $\beta_1=\beta_0$. However, we believe that our formulation of this term is more transparent.
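A short sympy sketch (ours, not from the source) confirms the matrix algebra behind (\ref{HamPP}), with the convention $V_\kappa=W_\kappa^2-W_\kappa'$ used for the potentials throughout the paper and the Gelfand-Tsetlin matrices (\ref{SPIN2}):

```python
import sympy as sp

r, m, omega = sp.symbols('r m omega', positive=True)
I3 = sp.eye(3)
S1 = sp.Matrix([[0, 1, 0], [1, 0, 1], [0, 1, 0]])/sp.sqrt(2)
S3 = sp.diag(1, 0, -1)

# superpotential (s4)
W = S3/r - omega/(2*m + 1)*(2*S1*S1 - I3) - (2*m + 1)/(2*r)*I3
V = W*W - W.applyfunc(lambda e: sp.diff(e, r))

# effective potential (HamPP)
V_claim = ((m*I3 - S3)**2 - sp.Rational(1, 4)*I3)/r**2 \
          + omega*(2*S1*S1 - I3)/r + omega**2/(2*m + 1)**2*I3
assert sp.simplify(V - V_claim) == sp.zeros(3, 3)
```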
Secondly, introducing radial and angular variables such that $x_1=r\cos\theta,\ x_2=r\sin\theta$, and expanding eigenfunctions of Hamiltonian (\ref{IH}) via eigenfunctions $\psi_m$ of the symmetry operator $J_3=\ri\left(x_2\frac{\p}{\p x_1}-x_1\frac{\p}{\p x_2}\right)+S_3$ which can be written as: \beq\label{s1} \psi_m=\frac{1}{\sqrt{r}} \begin{pmatrix}\exp(\texttt{i}(m-1)\theta)\phi_1(r)\\ \exp(\texttt{i}m\theta)\phi_2(r)\\\exp(\texttt{i}(m+1)\theta)\phi_3(r)\end{pmatrix}\eeq we come to the following Hamiltonian in radial variables: $H=-\frac{\p^2}{\p r^2}+V_m\label{IHH}$ where $V_m$ is the effective potential which coincides with (\ref{HamPP}). Thus Hamiltonian (\ref{IH}) is shape invariant and its discrete spectrum and the corresponding eigenvectors are easily calculated using the standard tools of SUSY quantum mechanics. To end this section we present one more integrable model for a vector boson. This model is three dimensional in spatial variables and is characterized by the following Hamiltonian \begin{gather}H=-\Delta +\omega\frac{2({\bf S}\cdot{\bf B})^2-{\bf B}^2}{|\bf B|}\label{IHAH}.\end{gather} Here ${\bf B}$ is the three dimensional vector defined in (\ref{3d5}) and ${\bf S}$ is the matrix vector whose components are given in equation (\ref{SPIN2}). Like (\ref{3d5}), Hamiltonian (\ref{IHAH}) corresponds to the shape invariant potential in radial variables, which reads \begin{gather*}V=\left(j(j+1)+S_3^2-(2j+1)S_3\right)\frac1{x^2}+(2S_1^2-1)\frac{\omega}{x}.\end{gather*} The corresponding superpotential can be obtained from (\ref{s4}) by changing $m\to j+\frac12$. \section{Discussion} Although the (scalar) shape invariant potentials were classified a long time ago, a great number of other such potentials were not known till now; they belong to the class of matrix potentials.
The first attempt to classify these potentials, which we made in the recent paper \cite{yur1}, enabled us to find five types of them, defined up to arbitrary parameters. They give rise to new integrable systems of Schr\"odinger equations which can be easily solved within the standard technique of SUSY quantum mechanics. In the present paper we describe an infinite number of such integrable systems. In particular, we present the list of superpotentials realized by matrices of dimension $2\times2$, see equations (\ref{tan})--(\ref{in00}). The main value of the list is its completeness, i.e., it includes all superpotentials realized by $2\times2$ matrices which correspond to Schr\"odinger-Pauli systems (\ref{eq}) that are shape invariant w.r.t. shifts of the variable parameters. In section \ref{ArbDim} an extended class of matrix superpotentials of arbitrary dimension is described. We do not present the proof that this class includes all irreducible matrix superpotentials. However, this assumption looks rather plausible: by consecutive differentiation of equations (\ref{AB}) and (\ref{CB}), using conditions (\ref{A})--(\ref{Pe}), we can obtain an infinite number of algebraic compatibility conditions for system (\ref{A})--(\ref{CB}) which are nontrivial but can be satisfied by requiring $B$ to be equal to zero. An alternative solution of these compatibility conditions is $P=0$, but it leads to reducible superpotentials. Computing experiments with system (\ref{A})--(\ref{CB}) for the cases of $n\times n$ matrix superpotentials with $n\leq5$ also support the assumption $B=0$; see section \ref{3x3}, where we did not make this assumption {\it a priori} but proved it. Notice that for $n=2$ this condition is not necessarily satisfied, but this seems to be the only exceptional case. Nevertheless, in section \ref{ArbDim} we consider the condition $B=0$ as an additional requirement, which enables us to find superpotentials of arbitrary dimension in a straightforward way.
Thus we obtain an entire collection of integrable systems of Schr\"odinger equations. Some examples of these models are considered in section \ref{models}, where we find their energy spectra and ground state solutions. Among them there are two oscillator-like matrix models whose spectra are linear in the main quantum number, see section \ref{oscil}. The one dimensional integrable models classified in the present paper are especially interesting in the cases when they can be used to construct solutions of two- and three-dimensional systems. A perfect example of such a shape invariant system is the radial equation for the Hydrogen atom. Thus an important task is to search for multidimensional (in particular, two- and three-dimensional) models which can be reduced to the shape invariant systems found here after separation of variables. Some results of our search can be found in section \ref{3d}, where we present new integrable problems for two- and three-dimensional equations of Schr\"odinger-Pauli type. In particular, we discuss SUSY aspects of the Pron'ko-Stroganov model generalized to the case of vector particles (such a generalization was proposed in paper \cite{Pron2}). It happens that the spin-one model is shape invariant and so it can be easily integrated using tools of SUSY quantum mechanics. Except for the case $s=1$ we did not discuss the superintegrable models proposed in \cite{Pron2} for arbitrary spin $s$. However, it is possible to show that they are supersymmetric too. Let us note that the results presented in section \ref{3d} can be considered only as an announcement, and we plan to present a detailed discussion of integrable multidimensional models in subsequent publications. \renewcommand{\theequation}{A\arabic{equation}} \setcounter{equation}{0}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Wormholes are associated with a remarkable spacetime topology connecting two spacetime geometries located in different regions of the universe or in different universes. They are solutions of the Einstein field equations. Historically, the first step toward the concept of wormholes was made by Flamm \cite{Flamm}; later on, a new spin was put forward by Einstein and Rosen \cite{Einstein}. It is interesting to note that Einstein and Rosen proposed a geometric model for elementary particles, such as the electron, in terms of the Einstein-Rosen bridge (ERB). However, this model turned out to be unsuccessful; moreover, the ERB was shown to be unstable \cite{FullerWheeler,whel,Wheeler,Wheeler1,Ellis,Ellis1}. Traversable wormholes were studied extensively in the past by several authors: notably, Ellis \cite{Ellis,Ellis1} and Bronnikov \cite{br1} studied exact traversable wormhole solutions with a phantom scalar, while a few years later different wormhole models were discussed by Clement \cite{clm}, followed by the seminal paper by Morris and Thorne \cite{Morris}. Afterwards, Visser developed the concept of thin-shell wormholes \cite{Visser1}. On physical grounds, it is well known that all the matter in our universe obeys certain energy conditions; in this context, as we shall see, the existence of wormholes is problematic. In particular, the geometry of a traversable wormhole requires a special kind of exotic matter concentrated at the wormhole throat (to keep the spacetime region at the throat open). In other words, this kind of matter violates the energy conditions, such as the null energy condition (NEC) \cite{Visser1}. It is speculated that such matter can exist in the context of quantum field theory. The second problem is related to the stability of wormholes. Given the wormhole spacetime geometry, one way to carry out the stability analysis is the linear perturbation method around the wormhole throat proposed by Visser and Poisson \cite{visser2}.
Wormholes have been studied in the framework of different gravity theories, for example the rotating traversable wormhole solution found by Teo \cite{Teo}, spinning wormholes in scalar-tensor theory \cite{kunz1}, wormholes with phantom energy \cite{lobo0}, wormholes in Gravity's Rainbow \cite{lobo1}, traversable Lorentzian wormholes with a cosmological constant \cite{lobo2}, wormholes in Einstein-Cartan theory \cite{branikov1}, wormholes in Eddington-inspired Born-Infeld gravity \cite{r1,r2,r3}, wormholes with different scalar fields and charged wormholes \cite{kunz2,barcelo,kim,branikov0,habib,branikov,jamil}, wormholes from cosmic strings \cite{clement1}, wormholes by GUTs in the early universe \cite{nojiri}, wormholes in $f(R,T)$ gravity \cite{moraes} and recently \cite{hardi,myrzakulov,sar}. Recently, extensive studies have been conducted by different authors related to the thin-shell wormhole approach \cite{rahaman,lobo3,lobo4,farook1,eiroa,ali,kimet,Jusufi:2016eav,Ovgun:2016ujt,Halilsoy:2013iza}. Topological defects are interesting objects predicted to exist by particle physics due to the phase transition mechanism in the early universe \cite{Kibble}. One particular example of topological defects is the global monopole, a spherically symmetric object resulting from a self-coupled triplet of scalar fields $\phi^a$ which undergoes a spontaneous breaking of the global $O(3)$ symmetry down to $U(1)$. The spacetime metric describing the global monopole has been studied in many papers including \cite{vilenkin,vilenkin1,narash,Bertrand}. In the present paper we provide a new Morris-Thorne wormhole solution with an anisotropic fluid and a global monopole charge in $3+1$-dimensional gravity minimally coupled to a triplet of scalar fields.
The deflection of light by black holes and wormholes has attracted great interest; in this context the necessary methodology can be found in the papers by Bozza \cite{bozza1,bozza2,bozza3,bozza4}, Perlick \textit{et al.} \cite{perlick1,perlick2,perlick3,perlick4}, and Tsukamoto \textit{et al.} \cite{t1,t2,t3,t4,t5,t6}. For some recent works concerning the strong/weak lensing see also \cite{wh0,asada,potopov,abe,strong1,nandi,ab,mishra,f2,kuh,Sharif:2015qfa,Hashemi:2015axa,Sajadi:2016hko,Pradhan:2016qxa,Lukmanova:2016czn,Nandi:2016uzg}, while for an alternative method to study gravitational lensing via the GBT see Refs. \cite{GibbonsWerner1,K1,K2,K3,K4,K5}. This paper is organized as follows. In Sec. 2, we deduce the metric for a static and spherically symmetric Morris-Thorne wormhole with a global monopole charge. In Sec. 3, we study the weak gravitational lensing applying the Gauss-Bonnet theorem. In Sec. 4, we draw our conclusions. \section{Morris-Thorne Wormhole with a Global Monopole charge} We start by writing the $(3+1)$-dimensional action without a cosmological constant, minimally coupled to a scalar field, with matter fields, in units $c=G=1$, given by \begin{equation} S=\int \left(\frac{\mathfrak{R}}{2 \kappa}+\mathcal{L}\right) \sqrt{-g}\,\mathrm{d}^4x+S_m\label{1} \end{equation} in which $\kappa= 8\pi$. The Lagrangian density describing a self-coupling scalar triplet $\phi^{a}$ is given by \cite{vilenkin} \begin{equation} \mathcal{L}=-\frac{1}{2}\sum_a g^{\mu\nu}\partial_{\mu}\phi^{a} \partial_{\nu}\phi^{a}-\frac{\lambda}{4}\left(\phi^{2}-\eta^{2}\right)^{2},\label{2} \end{equation} with $a=1, 2, 3$, where $\lambda$ is the self-interaction coupling and $\eta$ is the scale of the symmetry breaking.
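Anticipating the hedgehog ansatz $\phi^{a}=\eta f(r) x^{a}/r$ and the static metric introduced below, the reduced Lagrangian density quoted later in the text can be verified with a small sympy sketch (ours; static fields are assumed, so only spatial gradients contribute):

```python
import sympy as sp

r, th, ph, eta, lam = sp.symbols('r theta varphi eta lambda', positive=True)
b = sp.Function('b')(r)
f = sp.Function('f')(r)

# hedgehog unit vector and field triplet phi^a = eta f(r) x^a / r
nhat = [sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)]
phi = [eta*f*n for n in nhat]

# inverse spatial metric of the wormhole line element:
# diag(1 - b/r, 1/r^2, 1/(r^2 sin^2 theta))
ginv = [1 - b/r, 1/r**2, 1/(r**2*sp.sin(th)**2)]
coords = [r, th, ph]

kin = sum(ginv[m]*sp.diff(phi[a], coords[m])**2
          for a in range(3) for m in range(3))
L = -kin/2 - lam/4*(sum(p**2 for p in phi) - eta**2)**2

# the reduced Lagrangian density in terms of f(r)
L_claim = -(1 - b/r)*eta**2*sp.diff(f, r)**2/2 - eta**2*f**2/r**2 \
          - lam*eta**4/4*(f**2 - 1)**2
assert sp.simplify(L - L_claim) == 0
```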
The field configuration describing a monopole is \begin{equation} \phi^{a}=\frac{\eta f(r) x^{a}}{r}, \end{equation} in which \begin{equation} x^{a}=\left\lbrace r \sin\theta \, \cos\varphi, r \sin\theta \,\sin\varphi,r \cos\theta \,\right\rbrace, \end{equation} such that $\sum_a x^{a}x^{a}=r^{2}$. Next, we consider a static and spherically symmetric Morris-Thorne traversable wormhole in the Schwarzschild coordinates given by \cite{Morris} \begin{equation} \mathrm{d}s^{2}=-e^{2\Phi (r)}\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{1-\frac{b(r)}{r}}+r^{2}\left( \mathrm{d}\theta ^{2}+\sin ^{2}\theta \mathrm{d}\varphi ^{2}\right), \label{5} \end{equation} in which $\Phi (r)$ and $b(r)$ are the redshift and shape functions, respectively. In the wormhole geometry, the redshift function $\Phi (r)$ should be finite in order to avoid the formation of an event horizon. Moreover, the shape function $b(r)$ determines the wormhole geometry, subject to the condition $b(r_{0})=r_{0}$, where $r_{0}$ is the radius of the wormhole throat. Consequently, the shape function must satisfy the flaring-out condition \cite{lobo0}: \begin{equation} \frac{b(r)-rb^{\prime }(r)}{b^{2}(r)}>0, \end{equation} in which $b^{\prime }(r)=\frac{db}{dr}<1$ must hold at the throat of the wormhole. The Lagrangian density in terms of $f$ reads \begin{equation} \mathcal{L}=-\left(1-\frac{b(r)}{r}\right)\frac{\eta^{2}(f^{\prime})^{2}}{2}-\frac{\eta^{2}f^{2}}{r^{2}}-\frac{\lambda \eta^{4}}{4}\left(f^{2}-1\right)^{2}.
\end{equation} On the other hand the Euler-Lagrange equation for the field $f$ gives \begin{eqnarray}\notag \left(1-\frac{b}{r}\right)f^{\prime\prime}+&f^{\prime}&\left[\left(1-\frac{b(r)}{r}\right)\frac{2}{r}+\frac{1}{2}\left(\frac{b-b'r}{r^2}\right)\right]\\ &-& f\left[\frac{2}{r^{2}}+\lambda \eta^{2} \left(f^{2}-1\right)\right]=0.\label{8} \end{eqnarray} The energy momentum tensor from the Lagrangian density \eqref{2} is found to be \begin{equation} \bar{T}_{\mu\nu}=\partial_{\mu}\phi^{a}\partial_{\nu}\phi^{a}-\frac{1}{2}g_{\mu\nu}g^{\rho \sigma}\partial_{\rho}\phi^{a}\partial_{\sigma}\phi^{a}-\frac{g_{\mu\nu} \lambda}{4}\left(\phi^{a}\phi^{a}-\eta^{2}\right)^{2}. \end{equation} Using the last equation, the energy-momentum components are given as follows \begin{equation} \bar{T}_{t}^{t}=-\eta^{2}\left[\frac{f^{2}}{r^{2}}+\left(1-\frac{b}{r}\right)\frac{(f^{\prime})^{2}}{2}+\frac{\lambda \eta^{2}}{4}(f^{2}-1)^{2}\right], \end{equation} \begin{equation} \bar{T}_{r}^{r}=-\eta^{2}\left[\frac{f^{2}}{r^{2}}-\left(1-\frac{b}{r}\right)\frac{(f^{\prime})^{2}}{2}+\frac{\lambda \eta^{2}}{4}(f^{2}-1)^{2}\right], \end{equation} \begin{equation} \bar{T}_{\theta}^{\theta}=\bar{T}^{\varphi}_{\varphi}=-\eta^{2}\left[\left(1-\frac{b}{r}\right)\frac{(f^{\prime})^{2}}{2 }+\frac{\lambda \eta^{2}}{4}(f^{2}-1)^{2}\right]. \end{equation} It turns out that Eq. \eqref{8} cannot be solved exactly, however it suffices to set $f(r)\to 1$ outside the wormhole. Consequently, the energy-momentum components reduces to \begin{equation} \bar{T}_{t}^{t} = \bar{T}_{r}^{r} \simeq - \frac{\eta^{2}}{r^{2}},\,\,\,\,\bar{T}_{\theta}^{\theta} = \bar{T}_{\varphi}^{\varphi} \simeq 0 . 
\end{equation} On the other hand Einstein's field equations (EFE) reads \begin{equation} G_{\mu \nu }=R_{\mu \nu }-\frac{1}{2}g_{\mu \nu }R=8\pi \mathcal{T}_{\mu \nu }, \label{68} \end{equation}% where $\mathcal{T}_{\mu \nu }$ is the total energy-momentum tensor which can be written as a sum of the matter fluid part and the matter fields \begin{equation} \mathcal{T}_{\mu \nu }=T_{\mu \nu }^{(0)}+\bar{T}_{\mu \nu }. \label{15} \end{equation}% For the matter fluid we shall consider an anisotropic fluid with the following energy-momentum tensor components \begin{equation} {{T^{\mu }}_{\nu }}^{(0)}=\left( -\rho ,\mathcal{P}_{r},\mathcal{P}_{\theta },% \mathcal{P}_{\varphi }\right). \label{16} \end{equation}% Einstein tensor components for the generic wormhole metric \eqref{5} gives \begin{eqnarray} G_{t}^{t} &=&-\frac{b^{\prime }(r)}{r^{2}}, \notag \\ G_{r}^{r} &=&-\frac{b(r)}{r^{3}}+2\left( 1-\frac{b(r)}{r}\right) \frac{\Phi ^{\prime }}{r}, \notag \\ G_{\theta }^{{\theta }} &=&\left( 1-\frac{b(r)}{r}\right) \Big[\Phi ^{\prime \prime }+(\Phi ^{\prime })^{2}-\frac{b^{\prime }r-b}{2r(r-b)}\Phi ^{\prime } \notag \\ &-&\frac{b^{\prime }r-b}{2r^{2}(r-b)}+\frac{\Phi ^{\prime }}{r% }\Big], \notag \\ G_{\varphi }^{{\varphi }} &=&G_{\theta }^{{\theta }}. \label{73n} \end{eqnarray} The energy-momentum components yields \begin{eqnarray} \rho (r) &=&\frac{1}{8\pi r^{2}}\left[ b^{\prime }(r)-8 \pi \eta^2 \right] , \notag \\ \mathcal{P}_{r}(r) &=&\frac{1}{8\pi }\left[ 2\left( 1-\frac{b(r)}{r}\right) \frac{\Phi ^{\prime }}{r}-\frac{b(r)}{r^{3}}+\frac{8 \pi \eta^2}{r^2}\right] , \notag \\ \mathcal{P}(r) &=&\frac{1}{8\pi }\left( 1-\frac{b(r)}{r}\right) \Big[\Phi ^{\prime \prime }+(\Phi ^{\prime })^{2}-\frac{b^{\prime }r-b}{2r(r-b)}\Phi ^{\prime } \notag \\ &-&\frac{b^{\prime }r-b}{2r^{2}(r-b)}+\frac{\Phi ^{\prime }}{r% }\Big]. \label{18} \end{eqnarray}% where $\mathcal{P}=\mathcal{P}_{\theta }=\mathcal{P}_{\varphi }$. 
To simplify the problem, we use the EoS of the form \cite{lobo0,lobo1,lobo2} \begin{equation} \mathcal{P}_{r}=\omega \rho . \end{equation} In terms of the equation of state, from Eq. \eqref{18} it is possible to find the following result \begin{equation} \frac{b(r)-8\pi \eta^2 r+8 \pi \omega \rho r^3-2r(r-b(r))\Phi'(r)}{r^3}=0.\label{k20} \end{equation} Substituting the energy density relation \begin{equation} \rho (r) =\frac{1}{8\pi r^{2}}\left[ b^{\prime }(r)-8 \pi \eta^2 \right],\label{kim21} \end{equation} into Eq. \eqref{k20} we find \begin{equation} \frac{b'(r)\omega r+b(r)-8\pi \eta^2(\omega+1) r-2r(r-b(r))\Phi'(r)}{r^3}=0.\label{kim22} \end{equation} In our setup we shall consider a constant redshift function, namely a wormhole solution with zero tidal force, i.e., $\Phi'=0$, therefore the last equation simplifies to \begin{equation} b'(r)\omega r+b(r)-8\pi \eta^2(\omega+1) r=0. \end{equation} Finally, we use the condition $b(r_0)=b_0=r_0$; thus, by solving the last differential equation we find the shape function to be \begin{equation}\label{24} b(r)=\left(\frac{r_0}{r}\right)^{1/\omega}r_0(1-8\pi \eta^2) +8 \pi \eta^2r. \end{equation} One can observe that the wormhole solution is not asymptotically flat by checking the following equation \begin{equation} \lim_{r\to \infty} \frac{b(r)}{r}\to \lim_{r\to \infty}\left[\left(\frac{r_0}{r}\right)^{1+\frac{1}{\omega}}(1-8\pi \eta^2)\right] + 8 \pi \eta^2.\label{kim25} \end{equation} Since $\omega<-1$ implies $1+1/\omega>0$, the first term vanishes as $r\to \infty$, so that $b(r)/r\to 8\pi \eta^2\neq 0$: the spacetime exhibits a solid angle deficit, analogous to that of the global monopole. With the help of the shape function the wormhole metric reduces to \begin{equation}\label{26} \mathrm{d}s^{2}=-\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{\left(1-8 \pi \eta^2\right)\left[1- \left(\frac{r_0}{r}\right)^{1+\frac{1}{\omega}}\right]}+r^{2}d\Omega^2. \end{equation} \begin{figure}[h!]
\center \includegraphics[width=0.45\textwidth]{shape.png} \caption{{\protect\small \textit{ The figure shows the behavior of the shape function $b(r)/r$ as a function of $r$ and $\omega$, for chosen $b_0=1$ and $\eta=10^{-5}$. }}} \label{f2} \end{figure} Note that the constant factor $\exp(2 \Phi)=const$ is absorbed into the re-scaled time coordinate $t$. To the best of our knowledge, this metric is reported here for the first time. On the other hand, the metric coefficient $g_{rr}$ diverges at the throat $b(r_0)=r_0$; however, this just signals a coordinate singularity. To see this, one can calculate the Ricci scalar, which is found to be \begin{equation} \mathfrak{R}=\frac{16 \pi \eta^2}{r^2}+(1-8 \pi \eta^2) \frac{2 r_0}{\omega r^2}\left(\frac{r_0}{r}\right)^{1+\frac{1}{\omega}}. \end{equation} From the last equation we see that the metric is regular at $r=r_0$. Due to the above coordinate singularity it is convenient to compute the proper radial distance, which should be a finite quantity \begin{equation} l=\pm \int_{r_0}^{r} \frac{\mathrm{d}r'}{\sqrt{1-\frac{b_{\pm}(r')}{r'}}}. \end{equation} Using Eq. \eqref{24} we find \begin{equation} l(r)=\pm \frac{\Big[r \,{}_2F_1\left(\frac{1}{2},-\frac{\omega+1}{\omega},\frac{1}{\omega +1}, (\frac{r_0}{r})^{\frac{\omega +1}{\omega}}\right)-\frac{r_0 \sqrt{\pi}\Gamma(\frac{1}{\omega +1})}{\Gamma(\frac{1}{2}-\frac{\omega}{\omega +1})} \Big]}{\sqrt{1-8\pi \eta^2}} \end{equation} in which $\pm$ stands for the upper and lower part, respectively. Next, we verify whether the null energy condition (NEC) and weak energy condition (WEC) are satisfied at the throat of the wormhole. As we know, the WEC is defined by $T_{\mu \nu }U^{\mu }U^{\nu }\geq 0$, i.e., $\rho \geq 0$ and $% \rho (r)+\mathcal{P}_{r}(r)\geq 0$, where $T_{\mu \nu }$ is the energy momentum tensor with $U^{\mu }$ being a timelike vector.
On the other hand, the NEC can be defined by $T_{\mu \nu }k^{\mu }k^{\nu }\geq 0$, i.e., $\rho (r)+% \mathcal{P}_{r}(r)\geq 0$, with $k^{\mu }$ being a null vector. In this regard, we have the following energy density at the throat: \begin{equation} \rho (r_{0})=\frac{b^{\prime }(r_{0})-8 \pi \eta^2}{8\pi r_{0}^{2}}. \label{83} \end{equation}% Now, using the field equations, one finds the following relation \begin{equation} \rho (r)+\mathcal{P}_{r}(r)=\frac{1}{8\pi }\Big[\frac{b'(r)r-b(r)}{r^3}\Big]; \end{equation}% evaluating now the shape function at the throat we find \begin{eqnarray} \left(\rho+\mathcal{P}_r\right)|_{r=r_0} =- \frac{(1-8\pi \eta^2)(\omega+1)}{ 8\omega \pi r_0^2}. \end{eqnarray} This result verifies that the matter configuration violates the energy conditions at the throat, $ \left(\rho+\mathcal{P}_r\right)|_{r=r_0}< 0. $ Another way to see this is simply by using the flaring-out condition \begin{equation} b'(r_0) = 8 \pi \eta^2-\frac{(1-8 \pi \eta^2)}{\omega }< 1 \end{equation} which implies $\omega<-1$. This form of exotic matter with $\omega <-1$ is usually known as phantom energy. Another important quantity is the ``volume integral quantifier,'' which basically measures the amount of exotic matter needed for the wormhole, defined as follows \begin{equation} \mathcal{I}_V =\int\left(\rho(r)+\mathcal{P}_r(r)\right)\mathrm{d}V, \end{equation} with the volume element given by $\mathrm{d}V=r^2\sin\theta \mathrm{d}r \mathrm{d}\theta \mathrm{d}\phi$. For simplicity, we shall evaluate the volume integral associated with the phantom energy of our wormhole spacetime \eqref{26} by assuming an arbitrarily small region, say from $r_0$ to a radius $a$, in which the exotic matter is confined. More specifically, by considering our shape function $b(r)$ given by Eq. \eqref{24}, for the amount of exotic matter we find \begin{eqnarray} \mathcal{I}_V=\frac{(\omega+1)(1-8 \pi \eta^2)}{2 \omega}(a-r_0).
\end{eqnarray} Interestingly, when $a \rightarrow r_0$ it follows that \begin{equation} \int{(\rho+\mathcal{P}_r)} \rightarrow 0, \end{equation} and thus one may interpret that the wormhole can be constructed with arbitrarily small quantities of ANEC-violating matter. \begin{figure}[h!] \center \includegraphics[width=0.45\textwidth]{mat3.png} \caption{{\protect\small \textit{ In this figure we depict the behavior of $\rho+\mathcal{P}_r$ as a function of $r$ and $\omega$. We have chosen $b_0=1$ and $\eta=10^{-5}$. The energy conditions are violated. }}} \end{figure} As we already saw from \eqref{kim25}, the first term blows up when $r\to \infty$, since $\omega<-1$. In order to overcome this problem, it is convenient to rewrite the shape function in terms of new dimensionless constants. In particular, following Lobo \textit{et al.} \cite{loboasym}, we can consider the following shape function given by \begin{equation} \frac{b(r)}{r_0}=a \left[ \left(\frac{r}{r_0}\right)^{\zeta}(1-8 \pi \eta^2)+8 \pi \eta^2 \left(\frac{r}{r_0}\right)\right] +C \end{equation} where $a$, $\zeta$, and $C$ are dimensionless constants. Without loss of generality we choose $a=1$; then using $b(r_0)/r_0=1$, we find $C=0$. Furthermore, considering a positive energy density implies $\zeta>0$, while the flaring-out condition imposes an additional constraint at the throat, namely $\zeta <1$. Moreover, using the equation of state at the throat, $\mathcal{P}_r(r_0)=\omega \, \rho(r_0)$, we find $\zeta \omega=-1$. On the other hand, from Eqs. \eqref{kim21} and \eqref{kim22} we can deduce the following equation \begin{equation} \Phi'=\frac{b(r)-8 \pi \eta^2 r+\omega r (b'(r)-8 \pi \eta^2)}{2 r^2 (1-b(r)/r)}. \end{equation} To this end, using the condition $\zeta \omega=-1$ we find that $\Phi'$ vanishes, i.e., $\Phi=const$.
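Three algebraic facts used above — that Eq. \eqref{24} solves the master equation with $b(r_0)=r_0$, the throat value of $\rho+\mathcal{P}_r$, and the vanishing of $\Phi'$ once $\zeta\omega=-1$ — can be confirmed symbolically. A minimal sketch, assuming Python with \texttt{sympy}:

```python
# Symbolic checks (assumption: sympy available) of three facts used above:
#  (i)  Eq. (24) solves  omega r b' + b - 8 pi eta^2 (omega+1) r = 0
#       with the throat condition b(r_0) = r_0;
#  (ii) at the throat, rho + P_r = -(1-8 pi eta^2)(omega+1)/(8 pi omega r_0^2);
#  (iii) the numerator of Phi' vanishes identically once zeta*omega = -1.
import sympy as sp

r, r0, eta = sp.symbols('r r_0 eta', positive=True)
omega = sp.symbols('omega', nonzero=True)
K = 8*sp.pi*eta**2

# (i) shape function of Eq. (24)
b = (r0/r)**(1/omega)*r0*(1 - K) + K*r
print(sp.simplify(omega*r*sp.diff(b, r) + b - K*(omega + 1)*r))  # 0
print(sp.simplify(b.subs(r, r0) - r0))                           # 0

# (ii) with Phi' = 0, rho + P_r = (b' r - b)/(8 pi r^3)
rho_plus_pr = (sp.diff(b, r)*r - b)/(8*sp.pi*r**3)
expected = -(1 - K)*(omega + 1)/(8*sp.pi*omega*r0**2)
print(sp.simplify(rho_plus_pr.subs(r, r0) - expected))           # 0

# (iii) Phi' numerator for b(r)/r_0 = (r/r_0)^zeta (1-K) + K r/r_0,
#       with zeta = -1/omega
zeta = -1/omega
b2 = r0*((r/r0)**zeta*(1 - K) + K*(r/r0))
print(sp.simplify(b2 - K*r + omega*r*(sp.diff(b2, r) - K)))      # 0
```

All four residuals simplify to zero, so $\Phi=const$ holds for all $r$, not only at the throat.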
With this information in hand we can write our wormhole metric as follows \begin{equation} \mathrm{d}s^{2}=-\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{\left(1-8 \pi \eta^2\right)\left[1- \left(\frac{b_0}{r}\right)^{1-\zeta}\right]}+r^{2}\mathrm{d} \Omega^2,\label{kim42} \end{equation} provided that $\zeta $ is in the range $0<\zeta <1$. Now one can check that \begin{equation} \lim_{r\to \infty} \frac{b(r)}{r}=\lim_{r\to \infty} \left(\frac{r_0}{r}\right)^{1-\zeta}(1-8 \pi \eta^2)+8 \pi \eta^2=8 \pi \eta^2. \end{equation} This equation shows that our wormhole metric \eqref{kim42} is asymptotically conical, with a conical deficit angle that is independent of the radial coordinate $r$. Furthermore, we can construct the embedding diagram to visualize the conical wormhole by considering an equatorial slice, $\theta = \pi/2$, at a fixed moment of time, $t = const$; it follows that \begin{equation} \mathrm{d}s^2=\frac{\mathrm{d}r^2}{1-\frac{b(r)}{r}}+r^2 \mathrm{d} \varphi^2. \end{equation} On the other hand, we can embed the metric into three-dimensional Euclidean space written in terms of cylindrical coordinates as follows \begin{equation} \mathrm{d}s^2=\mathrm{d}z^2+\mathrm{d}r^2 + r^2 \mathrm{d}\varphi^2=\left[1+\left(\frac{\mathrm{d}z}{\mathrm{d}r}\right)^2\right]\mathrm{d}r^2+r^2 \mathrm{d} \varphi^2. \end{equation} From these equations we can deduce the equation for the embedding surface as follows \begin{equation} \frac{\mathrm{d}z}{\mathrm{d}r}=\pm \frac{1}{\sqrt{\left(1-8 \pi \eta^2 \right)\left[ 1-\left(\frac{b_0}{r}\right)^{1-\zeta} \right]}}. \end{equation} Finally, we evaluate this integral numerically for specific parameter values in order to illustrate the conical wormhole shape shown in Fig. \ref{conical}. \begin{figure} \center \includegraphics[width=0.3\textwidth]{conical.pdf} \caption{{\protect\small \textit{ The embedding diagram of a two-dimensional section along the equatorial plane with $t = const,$ and $\theta =\pi/2$.
To visualize this we plot $z$ vs. $r$, swept through a $2 \pi $ rotation around the $z$-axis. We chose $b_0=1$, $\eta=0.01$ and $\zeta=0.5$.}}}\label{conical} \end{figure} \section{Gravitational lensing} We can now proceed to study the gravitational lensing effect in the spacetime of the wormhole metric \eqref{kim42}. The wormhole optical metric can simply be found by setting $\mathrm{d}s^2=0$, resulting in \begin{equation} \mathrm{d}t^{2}=\frac{\mathrm{d}r^{2}}{\left(1-8 \pi \eta^2\right)\left[1- \left(\frac{b_0}{r}\right)^{1-\zeta}\right]}+r^{2}\mathrm{d} \varphi^2. \end{equation} Consequently, the optical metric can be written in terms of new coordinates \begin{equation} \mathrm{d}t^{2}=h_{ab}\,\mathrm{d}y^{a}\mathrm{d}y^{b}=\mathrm{d}u^2+\mathcal{H}^2(u)\mathrm{d}\varphi ^{2}, \end{equation} in which we have introduced $\mathcal{H}=r$ and \begin{equation} \mathrm{d}u=\frac{\mathrm{d}r}{{\sqrt{\left(1-8 \pi \eta^2 \right)\left(1- \left(\frac{b_0}{r}\right)^{1-\zeta}\right)}}}. \end{equation} We first compute the Gaussian optical curvature (GOC) $\mathcal{K}$, which is defined by the following equation \cite{GibbonsWerner1} \begin{equation} \mathcal{K}=-\frac{1}{\mathcal{H}(u)}\left[ \frac{\mathrm{d}r}{\mathrm{d}u}% \frac{\mathrm{d}}{\mathrm{d}r}\left( \frac{\mathrm{d}r}{\mathrm{d}u}% \right) \frac{\mathrm{d}\mathcal{H}}{\mathrm{d}r}+\left( \frac{\mathrm{d}r}{\mathrm{d}% u}\right) ^{2}\frac{\mathrm{d}^{2}\mathcal{H}}{\mathrm{d}r^{2}}\right] . \end{equation} Applying this to our optical metric we find \begin{equation} \mathcal{K}=-\frac{(1-8 \pi \eta^2)}{2 r^2}\left(\frac{b_0}{r} \right)^{1-\zeta}\left(1-\zeta \right). \end{equation} Obviously the GOC is affected by the global monopole charge and the state parameter. Note the negative sign, which implies the divergence of light rays in the wormhole geometry.
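The GOC expression just quoted can be re-derived mechanically from the definition above. A short symbolic sketch (assuming Python with \texttt{sympy}), with $\mathcal{H}=r$ and $\mathrm{d}r/\mathrm{d}u$ read off from the optical metric:

```python
# Cross-check (assumption: sympy available) of the Gaussian optical
# curvature K = -(1/H)[ (dr/du) d/dr(dr/du) dH/dr + (dr/du)^2 d^2H/dr^2 ]
# for the optical metric with H = r.
import sympy as sp

r, b0, eta = sp.symbols('r b_0 eta', positive=True)
zeta = sp.symbols('zeta', positive=True)
K8 = 8*sp.pi*eta**2

H = r
drdu = sp.sqrt((1 - K8)*(1 - (b0/r)**(1 - zeta)))  # dr/du from the metric

Kgoc = -(1/H)*(drdu*sp.diff(drdu, r)*sp.diff(H, r)
               + drdu**2*sp.diff(H, r, 2))
expected = -(1 - K8)*(1 - zeta)/(2*r**2)*(b0/r)**(1 - zeta)
print(sp.simplify(Kgoc - expected))  # 0
```

The residual simplifies to zero, confirming the quoted $\mathcal{K}$.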
But, as we are going to see, this is crucial in evaluating the deflection angle, which is really a result of the global spacetime topology in terms of the Gauss-Bonnet theorem (GBT). Thus, in our setup we first choose a non-singular domain, i.e., a region outside the light ray, denoted $\mathcal{A}_{R}$, with boundary $\partial \mathcal{A}_{R}=\gamma _{h}\cup C_{R}$. Then, the global GBT in terms of the above construction is formulated as follows \begin{equation} \iint\limits_{\mathcal{A}_{R}}\mathcal{K}\,\mathrm{d}\sigma+\oint\limits_{\partial \mathcal{% A}_{R}}\kappa \,\mathrm{d}t+\sum_{k}\psi _{k}=2\pi \chi (\mathcal{A}_{R}). \label{10} \end{equation} In this equation $\kappa $ is usually known as the geodesic curvature (GC) and basically measures the deviation from the geodesics; $\mathcal{K}$ is the GOC; $\mathrm{d}\sigma$ is the optical surface element; finally, $\psi _{k}$ denotes the exterior angle at the $k^{th}$ vertex. The domain is chosen to be outside of the light ray, implying that the Euler characteristic number is $\chi (\mathcal{A}_{R})=1$. The GC is defined via \begin{equation} \kappa =h\,\left( \nabla _{\dot{\gamma}}\dot{\gamma},\ddot{\gamma}% \right), \end{equation}% where we impose the unit speed condition $h(\dot{\gamma},\dot{\gamma})=1$. For a very large radial coordinate $R\rightarrow \infty $, our two jump angles (at the source $\mathcal{S}$ and observer $\mathcal{O}$) yield $\psi _{\mathit{O}}+\psi _{\mathit{S}}\rightarrow \pi $ \cite{GibbonsWerner1}. Then the GBT simplifies to \begin{equation} \iint\limits_{\mathcal{A}_{R}}\mathcal{K}\,\mathrm{d}\sigma+\oint\limits_{C_{R}}\kappa \,% \mathrm{d}t\overset{{R\rightarrow \infty }}{=}\iint\limits_{\mathcal{A}% _{\infty }}\mathcal{K}\,\mathrm{d}\sigma+\int\limits_{0}^{\pi +\hat{\alpha}}\mathrm{d}% \varphi =\pi.
\label{12} \end{equation} By definition the GC for $\gamma_{h}$ is zero; hence we are left with a contribution from the curve $C_{R}$, located at a coordinate distance $R$ from the wormhole center in the equatorial plane. Hence we need to compute \begin{equation} \kappa (C_{R})=|\nabla _{\dot{C}_{R}}\dot{C}_{R}|. \end{equation} In component notation the radial part can be written as \begin{equation} \left( \nabla _{\dot{C}_{R}}\dot{C}_{R}\right) ^{r}=\dot{C}_{R}^{\varphi }\,\left( \partial _{\varphi }\dot{C}_{R}^{r}\right) +\Gamma% _{\varphi \varphi }^{r(op)}\left( \dot{C}_{R}^{\varphi }\right) ^{2}. \end{equation} With the help of the unit speed condition, and after computing the Christoffel symbol of our optical metric at large coordinate radius $R$, we are left with \begin{eqnarray} \lim_{R\rightarrow \infty }\kappa (C_{R}) &=&\lim_{R\rightarrow \infty }\left\vert \nabla _{\dot{C}_{R}}\dot{C}_{R}\right\vert , \notag \\ &\rightarrow &\frac{1}{R} \sqrt{1-8 \pi \eta^2}. \end{eqnarray} Hence, the GC is in fact affected by the monopole charge. To see what this means we write the optical metric in this limit for a constant $R$. We find \begin{eqnarray} \lim_{R\rightarrow \infty }\mathrm{d}t \rightarrow R\,\mathrm{d}\varphi. \end{eqnarray} Putting the last two equations together we see that $\kappa (C_{R})% \mathrm{d}t=\sqrt{1-8\pi \eta^2}\mathrm{d}\varphi $. This reflects the conical nature of our wormhole geometry; put more simply, our optical metric is not asymptotically Euclidean. Using this result, from the GBT we can express the deflection angle as follows \begin{equation} \hat{\alpha}=\pi \left[\frac{1}{\sqrt{1-8 \pi \eta^2}}-1 \right]-\frac{1}{\sqrt{1-8 \pi \eta^2}}\int\limits_{0}^{\pi }\int\limits_{\frac{b}{\sin \varphi }% }^{\infty }\mathcal{K}\mathrm{d}\sigma.
\end{equation} Here we used the equation for the light ray, $r(\varphi )=b/\sin \varphi $, in which $b$ is the impact parameter, which to first order can be approximated by the distance of closest approach to the wormhole. The surface element is also approximated as \begin{equation} \mathrm{d}\sigma= \sqrt{h}\,\mathrm{d}u\,\mathrm{d}\varphi \simeq \frac{r}{\sqrt{1-8\pi \eta^2}}\,\mathrm{d}r\,\mathrm{d}\varphi. \end{equation} Finally, the total deflection angle is found to be \begin{equation} \hat{\alpha}=4\pi^2 \eta^2+\left(\frac{b_0}{b}\right)^{1-\zeta}\frac{\sqrt{\pi}\,\Gamma\left(1-\frac{\zeta}{2}\right)}{2\, \Gamma\left(\frac{3-\zeta}{2}\right) }. \end{equation} We can recast our wormhole metric \eqref{kim42} in a different form. In particular, if we introduce the coordinate transformations \begin{equation}\label{53} \mathcal{R}\to \frac{r}{\sqrt{1-8\pi \eta^2}}, \end{equation} and \begin{equation}\label{54} \mathcal{B}_0\to \frac{b_0}{\sqrt{1-8\pi \eta^2}}, \end{equation} then, taking into consideration the above transformations, the wormhole metric reduces to \begin{equation}\label{55} \mathrm{d}s^{2}=-\mathrm{d}t^{2}+\frac{\mathrm{d}\mathcal{R}^{2}}{1- \left(\frac{\mathcal{B}_0}{\mathcal{R}}\right)^{1-\zeta}}+\left(1-8\pi \eta^2 \right) \mathcal{R}^{2}\mathrm{d}\Omega^2. \end{equation} One can show that the deflection angle remains invariant under the coordinate transformations \eqref{53}-\eqref{54}. In a similar fashion, we can apply the following substitutions $\mathcal{H}=\mathcal{R} \sqrt{1-8 \pi \eta^2}$, and \begin{equation} \mathrm{d}u=\frac{\mathrm{d}\mathcal{R}}{{\sqrt{1- \left(\frac{\mathcal{B}_0}{\mathcal{R}}\right)^{1-\zeta}}}}. \end{equation} Then, for the GOC in this case it is not difficult to find that \begin{equation} \mathcal{K}=-\frac{\left(1-\zeta \right)}{2\mathcal{R}^2}\left(\frac{\mathcal{B}_0}{\mathcal{R}} \right)^{1-\zeta}.
\end{equation} In the limit $R\rightarrow \infty$, the GC yields \begin{eqnarray} \lim_{R\rightarrow \infty }\kappa (C_{R}) &=&\lim_{R\rightarrow \infty }\left\vert \nabla _{\dot{C}_{R}}\dot{C}_{R}\right\vert , \notag \\ &\rightarrow &\frac{1}{R}, \end{eqnarray} but \begin{eqnarray} \lim_{R\rightarrow \infty }\mathrm{d}t \rightarrow R\, \sqrt{1-8 \pi \eta^2}\,\mathrm{d}\varphi. \end{eqnarray} Although the GC is independent of $\eta$, we see that $\mathrm{d}t$ is affected by $\eta$. However, we end up with the same result $\kappa (C_{R}) \mathrm{d}t=\sqrt{1-8\pi \eta^2}\mathrm{d}\varphi $. The equation for the light ray this time can be chosen as $\mathcal{R}=\mathcal{B}/\sin \varphi$, resulting in a similar expression \begin{equation} \hat{\alpha}=\pi \left[\frac{1}{\sqrt{1-8 \pi \eta^2}}-1 \right]-\frac{1}{\sqrt{1-8 \pi \eta^2}}\int\limits_{0}^{\pi }\int\limits_{\frac{\mathcal{B}}{\sin \varphi }% }^{\infty }\mathcal{K}\mathrm{d}\sigma. \label{17n} \end{equation} Solving this integral we can approximate the solution to be \begin{equation} \hat{\alpha}=4\pi^2 \eta^2+\left(\frac{\mathcal{B}_0}{\mathcal{B}}\right)^{1-\zeta}\frac{\sqrt{\pi}\,\Gamma\left(1-\frac{\zeta}{2}\right)}{2\, \Gamma\left(\frac{3-\zeta}{2}\right) }. \end{equation} From the equations of the light rays we deduce that the impact parameters should be related by \begin{equation} \mathcal{B}\to \frac{b}{\sqrt{1-8\pi \eta^2}}, \end{equation} yielding the ratio \begin{equation} \frac{\mathcal{B}_0}{\mathcal{B}}\to \frac{b_0}{b}. \end{equation} Thus, we showed that the final expression for the deflection angle remains invariant under the coordinate transformations \eqref{53}-\eqref{54}. It is instructive to compare our result with special cases.
Firstly, we note that the metric \eqref{55} reduces to the point-like global monopole metric by letting $\mathcal{B}_0=0$, thus \begin{equation} \mathrm{d}s^{2}=-\mathrm{d}t^{2}+\mathrm{d}\mathcal{R}^{2}+\left(1-8\pi \eta^2 \right)\mathcal{R}^{2}\mathrm{d}\Omega^2. \end{equation} The deflection angle due to the point-like global monopole is given by $4\pi^2 \eta^2$ (see, for example, \cite{K5}). It is clear that, due to the geometric contribution related to the wormhole throat, the light bending is stronger in the wormhole case compared to the point-like global monopole case.\\ \begin{figure}[h!] \center \includegraphics[width=0.45\textwidth]{angle.png} \caption{{\protect\small \textit{ The figure shows the deflection angle as a function of the impact parameter $b$ and $\zeta$, for chosen $b_0=1$ and $\eta=10^{-5}$. }}} \label{f3} \end{figure} \section{Conclusion} In this paper, we have found an asymptotically conical Morris-Thorne wormhole supported by an anisotropic matter fluid and a triplet of scalar fields $\phi^a$ minimally coupled to gravity in $1+3$ dimensions. For the anisotropic fluid we have used an EoS of the form $\mathcal{P}_r=\omega \rho$, resulting in phantom energy described by the relation $\omega<-1$. Our phantom wormhole solution is characterized by a solid angle deficit due to the global conical geometry, revealing interesting observational effects such as gravitational lensing. Introducing a new dimensionless constant $\zeta$, we have shown that our wormhole metric is not asymptotically flat, namely $b(r)/r \to 8 \pi \eta^2 $ when $r \to \infty$. We have also studied the deflection of light; more specifically, a detailed analysis using the GBT revealed the following result for the deflection angle \begin{equation}\nonumber \hat{\alpha}=4\pi^2 \eta^2+\left(\frac{b_0}{b}\right)^{1-\zeta}\frac{\sqrt{\pi}\,\Gamma\left(1-\frac{\zeta}{2}\right)}{2\, \Gamma\left(\frac{3-\zeta}{2}\right) }.
\end{equation} Clearly, the first term $4\pi^2 \eta^2 $ is independent of the impact parameter $b$, while the second term is a product of a factor involving the ratio $b_0/b$ of the wormhole throat to the impact parameter and Gamma functions depending on the dimensionless constant $\zeta$. It is worth noting that we have performed our analysis in two different forms of the spacetime metric. In both cases we found the same result; hence the deflection angle is form-invariant under the coordinate transformations. Finally, we pointed out that the gravitational lensing effect is stronger in the wormhole geometry than in the point-like global monopole geometry.
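The closed-form deflection angle quoted above can be cross-checked by evaluating the double integral directly with the leading-order GOC. A numerical sketch (assuming Python with \texttt{scipy}; the parameter values are illustrative, not taken from the text):

```python
# Numerical cross-check of
#   alpha = 4 pi^2 eta^2
#         + (b0/b)^(1-zeta) sqrt(pi) Gamma(1-zeta/2) / (2 Gamma((3-zeta)/2))
# against direct evaluation of the GBT surface integral with the
# leading-order Gaussian optical curvature.  Illustrative parameters.
import math
from scipy.integrate import quad

b0, b, eta, zeta = 1.0, 10.0, 1e-3, 0.5
K8 = 8*math.pi*eta**2

closed = (4*math.pi**2*eta**2
          + (b0/b)**(1 - zeta)*math.sqrt(math.pi)*math.gamma(1 - zeta/2)
          / (2*math.gamma((3 - zeta)/2)))

# -(1/sqrt(1-K8)) K dsigma ~ (1-zeta) b0^(1-zeta) / (2 r^(2-zeta)) dr dphi
def inner(phi):
    val, _ = quad(lambda rr: (1 - zeta)*b0**(1 - zeta)/(2*rr**(2 - zeta)),
                  b/math.sin(phi), math.inf)
    return val

surface, _ = quad(inner, 1e-8, math.pi - 1e-8, limit=200)
direct = math.pi*(1/math.sqrt(1 - K8) - 1) + surface
print(closed, direct)   # the two values agree up to O(eta^4) terms
```

The small mismatch comes from expanding $\pi[(1-8\pi\eta^2)^{-1/2}-1]\simeq 4\pi^2\eta^2$ to leading order in $\eta^2$.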
\section{Introduction} \subsection{Proper Motions in the Local Group} Proper motion measurements of Local Group galaxies are important for our understanding of the dynamics and evolution of the Local Group. Presently, measurements of extragalactic proper motions by optical telescopes are limited to the most nearby companions of the Milky Way, i.e. the LMC \cite{JonesKlemoaLin1994,KallivayalilvanderMarelAlcock2006a,PedrerosCostaMendez2006}, the SMC \cite{KallivayalilvanderMarelAlcock2006b}, the Sculptor dwarf spheroidal galaxy (dSph) \cite{SchweitzerCurworthMajewski1995,PiatekPryorBristow2006}, the Canis Major dwarf galaxy \cite{DinescuMartinez-DelgadoGirard2005}, the Ursa Minor~dSph \cite{PiatekPryorBristow2005}, the Sagittarius~dSph \cite{DinescuGirardvanAltena2005}, the Fornax~dSph \cite{PiatekPryorOlszewski2002,DinescuKeeneyMajewski2004}, and the Carina~dSph \cite{PiatekPryorOlszewski2003}. These galaxies are all closer than 150 kpc and show motions between 0.2 and a few milliarcseconds (mas) per year. More distant galaxies, such as galaxies in the Andromeda subgroup at distances of $\sim$ 800 kpc, have smaller angular motions, which are currently not measurable with optical telescopes. On the other hand, \citeN{BrunthalerReidFalcke2005} measured the proper motions of two groups of water masers on opposite sides of M33 at radio frequencies with the NRAO\footnote{The National Radio Astronomy Observatory is operated by Associated Universities, Inc., under a cooperative agreement with the National Science Foundation.} Very Long Baseline Array (VLBA). A comparison of the relative proper motion between the two groups of masers and their expected motions from the known rotation curve and inclination of M33 led to a determination of a ``rotational parallax'' (730 $\pm$ 168 kiloparsec) for this galaxy.
This distance is consistent with recent Cepheid and tip of the red giant branch estimates (\citeNP{LeeKimSarajedini2002}; \citeNP{McConnachieIrwinFerguson2005}) and earlier distance estimates using the internal motions of water masers in the IC\,133 region \cite{GreenhillMoranReid1993,ArgonGreenhillMoran2004}. Since the proper motion measurements were made relative to a distant extragalactic background source, the proper motion of M33 itself could also be determined. This measured proper motion of M33 is a first important step toward a kinematical model of the Local Group and was used to constrain the proper motion and dark matter content of the Andromeda Galaxy M31 \cite{LoebReidBrunthaler2005}. Water masers in Local Group galaxies have also been found toward the Magellanic Clouds (e.g.~\citeNP{ScaliseBraz1981}) and IC\,10 (e.g. \citeNP{HenkelWouterlootBally1986}). Other Local Group galaxies were searched, but no additional water masers have been detected (see \citeNP{BrunthalerHenkeldeBlok2006} and references therein). In this paper we report on VLBA observations of the maser in IC\,10 to measure its motion. \subsection{IC\,10} The extragalactic nature of IC\,10 was first recognized by \citeN{Mayall1935}. ~\citeN{Hubble1936} proposed that it was likely a member of the Local Group and described it as ``one of the most curious objects in the sky''. However, observations of IC\,10 have always been difficult because of its low Galactic latitude of 3$^\circ$. IC\,10 has been classified as an Ir\,IV galaxy (e.g. \citeNP{vandenBergh1999}), but \citeN{RicherBullejosBorissova2001} argue that it has more properties of a blue compact dwarf galaxy. It is also the nearest galaxy hosting a small starburst, evidenced by its large number of Wolf-Rayet stars (\citeNP{MasseyArmandroffConti1992}) and the discovery of 144 H\,II regions (\citeNP{HodgeLee1990}).
Observations of H\,I with the Westerbork Synthesis Radio Telescope by~\citeN{ShostakSkillman1989} revealed that IC\,10 has a regularly rotating disk surrounded by a counter-rotating outer distribution of gas. The distance to IC\,10 is subject to controversy because of difficulties caused by its low Galactic latitude. Early estimates claimed distances of 1--1.5 Mpc \cite{Roberts1962} and 3 Mpc (\citeNP{BottinelliGouguenheimHeidmann1972}, \citeNP{SandageTammann1974}). \citeN{Huchtmeier1979} argued for a closer distance of 1 Mpc. The most recent determination from multi-wavelength observations of Cepheid variables obtained a distance of 660$\pm$66 kpc to IC\,10 \cite{SakaiMadoreFreedman1999}, which we adopt throughout this paper. IC\,10 hosts two known H$_2$O masers, IC\,10-SE and IC\,10-NW \cite{BeckerHenkelWilson1993}. The strong SE-component was first detected by ~\citeN{HenkelWouterlootBally1986}, and the whole spectrum of IC\,10 has shown strong variability since its discovery, with flux densities between less than 1 Jy \cite{BeckerHenkelWilson1993} and a flare with a (single dish) flux density of 125 Jy \cite{BaanHaschick1994}. Even intraday variability has been reported by~\citeN{ArgonGreenhillMoran1994}, but the strong component at $v_\mathrm{LSR}\approx -324$~km~s$^{-1}$ has been persistent until now. \section{Observations and Data Reduction} We observed the usually brightest maser in IC\,10-SE with the VLBA thirteen times between February 2001 and June 2005. The observations are grouped into six epochs, each comprising two closely spaced observations, except the first epoch with three observations, to enable assessment of overall accuracy and systematic errors (Table~\ref{obsinfo}).
\begin{table} \caption[]{Details of the observations: Observing date, observation length $t_{obs}$, beam size $\theta$ and position angle $PA$.} \label{obsinfo} \[ \begin{tabular}{p{0.08\linewidth}cp{0.10\linewidth}cp{0.10\linewidth}cp{0.10\linewidth}cp{0.10\linewidth}cp{0.08\linewidth}} \hline Epoch& Date & $t_{obs} [h]$ & $\theta$ [mas] &$PA [^\circ]$\\ \hline I& 2001/02/09 & 10 & 0.53$\times$0.33& -15 \\ I& 2001/03/28 & 10 & 0.55$\times$0.36& -18 \\ I& 2001/04/12 & 10 & 0.63$\times$0.37& -5 \\ \hline II& 2002/01/12 & 10 & 0.59$\times$0.35& -19 \\ II& 2002/01/17 & 10 & 0.64$\times$0.32& -22 \\ \hline III& 2002/10/01 & 10 & 0.68$\times$0.38& -12 \\ III& 2002/10/11 & 10 & 0.61$\times$0.34& -5\\ \hline IV& 2003/12/12 & 12 & 0.52$\times$0.33& -15\\ IV& 2004/01/10 & 12 & 0.50$\times$0.33& -23\\ \hline V& 2004/08/23 & 12 & 0.60$\times$0.51& -2\\ V& 2004/09/18 & 12 & 0.54$\times$0.35& -17\\ \hline VI& 2005/06/01 & 12 & 0.60$\times$0.50& -11\\ VI& 2005/06/07 & 12 & 0.56$\times$0.39& -6\\  \end{tabular} \] \end{table} We observed in four 8 MHz bands in dual circular polarization. The 128 spectral channels in each band yielded a channel spacing of 62.5 kHz, equivalent to 0.84 km s$^{-1}$, and covered a velocity range of 107 km s$^{-1}$. The observations involved rapid switching between the phase-calibrator VCS1~J0027+5958 from the VLBA Calibrator Survey \cite{BeasleyGordonPeck2002}, which is a compact background source with continuum emission, and the target sources IC\,10 and NVSS~J002108+591132. NVSS~J002108+591132 is a radio continuum source from the NRAO VLA Sky Survey (NVSS) \cite{CondonCottonGreisen1998} and is located only 8 arcminutes from the maser in IC\,10. It was also detected in X-rays \cite{WangWhitakerWilliams2005} and is most likely also a background quasar. The redshifts of VCS1~J0027+5958 and NVSS~J002108+591132 are not known.
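The quoted velocity resolution follows from the channel spacing and the 22 GHz rest frequency of the H$_2$O maser line (a standard value, assumed here since it is not stated explicitly in the text); a quick check:

```python
# Reproduce the quoted velocity resolution from the channel spacing.
# Assumption: the 22.23508 GHz H2O maser rest frequency (standard value,
# not stated explicitly in the text).
c = 299792.458            # speed of light, km/s
nu0 = 22.23508e9          # H2O maser rest frequency, Hz
dnu = 62.5e3              # channel spacing, Hz

dv = c*dnu/nu0
print(f"channel width: {dv:.2f} km/s")      # ~0.84 km/s, as quoted
print(f"band coverage: {128*dv:.0f} km/s")  # ~108 km/s (quoted: 107)
```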
We switched sources every 30 seconds in the sequence VCS1~J0027+5958 -- IC\,10 -- VCS1~J0027+5958 -- NVSS~J002108+591132 -- VCS1~J0027+5958 and achieved on-source integration times of $\sim$ 22 seconds. The background sources were assumed to be stationary on the sky. Since the phase-calibrator is separated by only 1$^\circ$ on the sky from the target sources, one can obtain precise angular separation measurements. From the second epoch on, we included {\it geodetic-like} observations in which we observed 10--15 strong radio sources ($>$ 200 mJy) with accurate positions ($<$ 1 mas) at different elevations for 45 minutes, to estimate the atmospheric zenith delay error in the VLBA calibrator model (see \citeNP{ReidBrunthaler2004} and \citeNP{BrunthalerReidFalcke2005b} for a discussion). In the second and third epoch we used two blocks of these geodetic observations, before and after the phase-referencing observations. From the fourth epoch on, we included a third geodetic block in the middle of the observation. The data were edited and calibrated using standard techniques in the Astronomical Image Processing System (AIPS). A priori amplitude calibration was applied using system temperature measurements and standard gain curves. Zenith delay corrections were performed based on the results of the geodetic-like observations. Data from the St. Croix station were flagged due to high phase noise in all observations. The maser in IC\,10 and NVSS~J002108+591132 were imaged in AIPS. All detected maser features and NVSS~J002108+591132 were unresolved and fit by single elliptical Gaussian components. \section{Results} \subsection{Spatial Structure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=0]{ic10_d_color.eps}} \caption{Composite map of the H$_2$O masers in IC\,10 from 2001 March 28. The area of each circle is proportional to the flux density of the respective component.
The colors denote different LSR radial velocities with $>-327$ km~s$^{-1}$ (red), $-327$ to $-330$ km~s$^{-1}$ (magenta), $-330$ to $-338$ km~s$^{-1}$ (green), and $<-338$ km~s$^{-1}$ (blue). The crosses mark the positions of the maser emission detected by \protect\citeN{ArgonGreenhillMoran1994}. The positions were aligned on the strongest maser component in the south-east. } \label{ic10_d} \end{figure} In the first epoch, maser emission could be detected in 21 channels spread over $\approx 23$~km~s$^{-1}$. The spatial distribution of the masers on 2001 March 28 can be seen in Fig.~\ref{ic10_d}. It is similar to the distribution in earlier VLBI observations of IC\,10 by \citeN{ArgonGreenhillMoran1994}. The strongest component at a LSR velocity of $\approx -324$~km~s$^{-1}$ is separated by $\approx10$~mas or (projected) 6600 AU from the weaker components. This suggests that the emission is associated with a single object if the maser emission is similar to H$_2$O maser emission in Galactic star forming regions like W3(OH), W49 or Sgr\,B2 (e.g.~\citeNP{ReidArgonMasson1995}, ~\citeNP{WalkerBurkeJohnston1977} and \citeNP{KobayashiIshiguroChikada1989} respectively). The weaker maser components form an apparent ring-like structure with a projected size of $\approx~1.6$~mas or 1060 AU. \subsection{Variability} The correlated flux density of the strong feature at --324~km~s$^{-1}$ LSR velocity increased from 1.0 to 1.5 Jy between the first two VLBA observations of the first epoch. The weaker components are also very variable and they can appear or disappear between observations. In the observations of the second epoch, the flux density of the strongest component was $\sim$ 1.1 Jy and some of the weaker components disappeared. In the third epoch, we detected only the strong component with a flux density of $\sim$0.7 Jy while the weak ring-like structure was not detected anymore. 
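The projected linear sizes quoted in the spatial-structure discussion follow from the small-angle relation (1 arcsec at 1 pc subtends 1 AU) at the adopted 660 kpc distance; a quick check:

```python
# Projected sizes from angular separations at the adopted 660 kpc distance,
# via the small-angle relation: size[AU] = distance[pc] * angle[arcsec].
d_pc = 660e3   # adopted distance to IC 10, pc

for theta_mas, label in [(10.0, "offset of the strong component"),
                         (1.6, "ring-like structure")]:
    size_au = d_pc*theta_mas*1e-3        # mas -> arcsec, then pc*arcsec = AU
    print(f"{label}: {theta_mas} mas -> {size_au:.0f} AU")
# -> 6600 AU and 1056 AU (quoted as 6600 AU and ~1060 AU)
```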
In the later epochs, the flux density of the remaining component dropped to $\sim$ 0.2 Jy, 0.12 Jy, and 0.07 Jy in the fourth, fifth, and sixth epoch, respectively. \subsection{Observed Motions} The position offsets of the strongest maser feature in IC\,10 are shown in Fig.~\ref{pos_ic10}. The uncertainties in the observations of the first epoch are larger than in the others, because no geodetic-like observations were performed to compensate for the zenith delay errors. A rectilinear motion was fit to the data and yielded a value of --2$\pm$6 $\mu$as~yr$^{-1}$ toward the East and 20$\pm$6 $\mu$as~yr$^{-1}$ toward the North. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=-90]{ic10pm.eps}} \caption{The position of the maser in IC\,10 relative to the phase-reference source VCS1 J0027+5958 in East-West (red triangles) and North-South (blue circles). The lines show a variance weighted linear fit to the data points.} \label{pos_ic10} \end{figure} The position offsets of NVSS J002108+591132 are shown in Fig.~\ref{pos_a3}. A rectilinear motion was fit to the data and yielded a motion of --10$\pm$3 $\mu$as~yr$^{-1}$ toward the East and --5$\pm$5 $\mu$as~yr$^{-1}$ toward the North. Hence, NVSS J002108+591132 shows a small but potentially significant motion in right ascension. The apparent motion of NVSS J002108+591132 could be caused by unknown systematic errors. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=-90]{quasar.eps}} \caption{The position of NVSS J002108+591132 relative to the phase-reference source VCS1 J0027+5958 in East-West (red triangles) and North-South (blue circles). The lines show a variance weighted linear fit to the data points.} \label{pos_a3} \end{figure} The phase calibrator VCS1 J0027+5958 may have unresolved structure, e.g. a core-jet structure. The observed image of the source is the convolution of the source structure and the synthesized beam of the VLBA.
Flux density variations of the individual components could move the position of the observed image by a fraction of the beam size. Since the phase calibrator is assumed to be stationary, this would shift the positions of all target sources by the same amount. The observed motion of NVSS J002108+591132 could also be caused by some errors in the geometry of the correlator model (i.e. antenna positions, earth orientation parameters). These errors would be similar for closely spaced observations, but different for observations separated by several months. The angular separation between IC\,10 and NVSS J002108+591132 (8') is much smaller than the separation between IC\,10 and VCS1 J0027+5958 (1$^\circ$), and the position shift induced by geometric errors would be similar for IC\,10 and NVSS J002108+591132. In both cases, the motion of the maser in IC\,10 relative to NVSS~J002108+591132 would be a better estimate of the proper motion of IC\,10. However, it cannot be ruled out that the apparent motion of NVSS~J002108+591132 is caused by an unresolved core-jet structure in NVSS~J002108+591132 itself. Since strong sources are expected to show more jet-structure than weak sources, amplitude variations of VCS1 J0027+5958 (70--290 mJy) are larger than those in NVSS~J002108+591132 (6--11 mJy), and the angular separation between IC\,10 and NVSS~J002108+591132 is much smaller than the separation between IC\,10 and VCS1 J0027+5958, we consider NVSS~J002108+591132 as the better astrometric reference source. Fig.~\ref{pos_ic10-a3} shows the position of the strongest maser component in IC\,10 relative to NVSS~J002108+591132. A rectilinear motion was fit to the data and yielded a motion of 6$\pm$5 $\mu$as~yr$^{-1}$ toward the East and 23$\pm$5 $\mu$as~yr$^{-1}$ toward the North and we will adopt these values for the proper motion of the maser. 
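The variance-weighted linear fits quoted above admit a compact closed form. The sketch below shows the slope estimator and its formal error; the epochs, offsets, and uncertainties are illustrative placeholders, not the measured data.

```python
import math

def weighted_linear_fit(t, y, sigma):
    # Variance-weighted least-squares fit of y = y0 + mu*t.
    # Returns the slope mu (the proper motion) and its 1-sigma uncertainty.
    w = [1.0 / s ** 2 for s in sigma]
    S = sum(w)
    Sx = sum(wi * ti for wi, ti in zip(w, t))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * ti * ti for wi, ti in zip(w, t))
    Sxy = sum(wi * ti * yi for wi, ti, yi in zip(w, t, y))
    delta = S * Sxx - Sx ** 2
    mu = (S * Sxy - Sx * Sy) / delta
    return mu, math.sqrt(S / delta)

# Illustrative epochs (yr) and offsets (uas) with 5 uas errors -- not the
# measured positions: a source drifting at 23 uas/yr.
t = [0.0, 0.8, 1.6, 2.4, 3.2]
y = [23.0 * ti for ti in t]
mu, err = weighted_linear_fit(t, y, [5.0] * len(t))
print(mu)  # -> 23.0 (recovers the input slope)
```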
\begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=-90]{ic10-quasar.eps}} \caption{The position of the strongest maser feature in IC\,10 relative to NVSS J002108+591132 in East-West (red triangles) and North-South (blue circles). The lines show a variance weighted linear fit to the data points.} \label{pos_ic10-a3} \end{figure} \section{Discussion} \subsection{Space Motion of IC\,10} The measured proper motion $\tilde{\vec v}_{prop}$ of the maser in IC\,10 can be decomposed into a sum of several components, relative to a frame at rest at the center of the Milky Way: \begin{eqnarray} \tilde{\vec v}_{prop}= \vec v_{rot} + \vec v_{pec} + \vec v_\odot + \vec v_{IC\,10} \end{eqnarray} Here $\vec v_{rot}$ is the motion of the masers due to the internal galactic rotation in IC\,10, $\vec v_{pec}$ is the peculiar motion of the masers relative to circular galactic rotation and $\vec v_\odot$ is the apparent motion of IC\,10 caused by the rotation of the Sun about the Galactic Center. The last contribution $\vec v_{IC\,10}$ is the true proper motion of the galaxy IC\,10. The H$_2$O masers in IC\,10 are located within a massive H\,I cloud in the central disk. If one assumes that the masers are rotating with the disk, one can calculate its expected proper motion. \citeN{ShostakSkillman1989} measure an inclination of 45$^\circ$ from the ellipticity of its H\,I distribution. The masers are 33 arcseconds (106 pc) east and 99 arcseconds (317 pc) south of the kinematic center. Unfortunately no position angle of the major axis was given. \citeN{WilcotsMiller1998} used higher resolution VLA observations of the H\,I content of IC\,10 to fit a tilted ring model to the velocity field of the disk of IC\,10. This model has a separate rotation speed, inclination and position angle for each ring. They find a highly inclined disk in the inner 110 arcseconds with a position angle of $\approx 75 ^\circ$ and a rotational velocity of $\approx 30 $ km~s$^{-1}$. 
The position of the kinematic center of their tilted ring model was not given. If one combines the kinematic center of ~\citeN{ShostakSkillman1989} with the inclination and position angle of ~\citeN{WilcotsMiller1998}, one gets an expected transverse motion for the maser ($\vec v_{rot}$) of 26 and 11 km~s$^{-1}$ toward the East and North, respectively. If one calculates the expected motion for different realistic scenarios (i.e. changing the kinematic center by $\pm$20 arcseconds, and the inclination and the position angle of the major axis by $\pm$20$^\circ$), one always gets transverse motions between 20 and 30 km~s$^{-1}$ toward the East and between 5 and 15 km~s$^{-1}$ toward the North. The deviation of the motion of the masers from the galactic rotation is unknown. The radial velocity of the CO gas at the position of the maser in IC\,10 is about $-330$ km~s$^{-1}$ \cite{Becker1990}, which is close to the radial velocity of the maser. In our Galaxy, peculiar motions of star forming regions can be 20 km~s$^{-1}$ as seen in W3(OH) (\citeNP{XuReidZheng2006,HachisukaBrunthalerMenten2006}). Hence, to be conservative, we adopt values of $25\pm20$ and $10\pm20$ km~s$^{-1}$ toward the East and North, respectively. This translates to $\dot\alpha_{rot}=8\pm6$ and $\dot\delta_{rot}=3\pm6$ $\mu$as~yr$^{-1}$ at a distance of 660$\pm$60~kpc. The rotation of the Sun about the Galactic Center causes an apparent motion of IC\,10. The motion of the Sun can be decomposed into a circular motion of the local standard of rest (LSR) and the peculiar motion of the Sun. The peculiar motion of the Sun has been determined from Hipparcos data by ~\citeN{DehnenBinney1998} to be, in km~s$^{-1}$, U$_0$=10.00$\pm$0.36 (radially inwards), V$_0$=5.25$\pm$0.62 (in the direction of Galactic rotation) and W$_0$=7.17$\pm$0.38 (vertically upwards). 
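The conversion from the adopted transverse velocities to proper motions uses the standard relation $v\,[\mathrm{km\,s^{-1}}]=4.74\,\mu\,[\mathrm{arcsec\,yr^{-1}}]\,d\,[\mathrm{pc}]$; a minimal sketch reproducing the quoted numbers:

```python
def kms_to_muas_per_yr(v_kms, d_kpc):
    # v [km/s] = 4.74 * mu [arcsec/yr] * d [pc], solved for mu and
    # expressed in microarcseconds per year.
    mu_arcsec_per_yr = v_kms / (4.74 * d_kpc * 1e3)
    return mu_arcsec_per_yr * 1e6

# The adopted rotation correction: 25 and 10 km/s at 660 kpc.
print(round(kms_to_muas_per_yr(25.0, 660.0)))  # -> 8 (uas/yr)
print(round(kms_to_muas_per_yr(10.0, 660.0)))  # -> 3 (uas/yr)
```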
VLBI measurements of the proper motion of SgrA*, the compact radio source at the Galactic Center, yield a motion of 6.379 $\pm$ 0.026 mas~yr$^{-1}$ along the Galactic plane \cite{ReidReadheadVermeulen1999,ReidBrunthaler2004}. Combined with a recent geometric distance estimate of the Galactic Center of 7.62 $\pm$ 0.32 kpc \cite{EisenhauerGenzelAlexander2005}, one gets a circular velocity of 225$\pm$10 km~s$^{-1}$ for the LSR. This motion of the Sun causes an apparent proper motion of $38\pm4~\mu$as~yr$^{-1}$ in Galactic longitude and -6$\pm$1 $\mu$as~yr$^{-1}$ in Galactic latitude (for a distance of 660 kpc and Galactic coordinates of IC\,10 of $l=118.96^\circ$, $b=-3.32^\circ$). Converted to equatorial coordinates, one gets $\dot\alpha_{\odot}=37\pm4~\mu$as~yr$^{-1}$ and $\dot\delta_{\odot}=-11\pm1~\mu$as~yr$^{-1}$. The true proper motion of IC\,10 is then given by \begin{eqnarray} \nonumber\dot\alpha_{IC\,10}&=&\dot{\tilde\alpha}_{prop} - \dot\alpha_{rot} -\dot\alpha_\odot\\\nonumber &=&(6~(\pm5)-8~(\pm6)-37~(\pm4))~\mu\mathrm{as}~\mathrm{yr}^{-1}\\\nonumber &=&-39\pm9~\mu\mathrm{as}~\mathrm{yr}^{-1} =-122\pm31~\mathrm{km}~\mathrm{s}^{-1}\\ \mathrm{and}\\\nonumber \dot\delta_{IC\,10}&=&\dot{\tilde\delta}_{prop} - \dot\delta_{rot} -\dot\delta_\odot\\\nonumber &=& (23~(\pm5)-3~(\pm6)+11~(\pm1))~\mu\mathrm{as}~\mathrm{yr}^{-1}\\\nonumber &=&31\pm8~\mu\mathrm{as}~\mathrm{yr}^{-1}=97\pm27~{\mathrm{km}}~\mathrm{s}^{-1} \end{eqnarray} The measured systematic heliocentric velocity of IC\,10 ($-344\pm3$ km~s$^{-1}$,~\citeNP{deVaucouleursdeVaucouleursCorwins1991}) is the sum of the radial motion of IC\,10 toward the Sun and the component of the solar motion about the Galactic Center toward IC\,10 which is -196$\pm$10 km~s$^{-1}$. Hence IC\,10 is moving with 148$\pm$10 km~s$^{-1}$ toward the Sun. The proper motion and the radial velocity combined give the three-dimensional space velocity of IC\,10. The total velocity is 215$\pm$42~km~s$^{-1}$ relative to the Milky Way. 
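The arithmetic of the decomposition above can be retraced as follows (error propagation omitted); the factor 4.74 is the standard conversion between km~s$^{-1}$, arcsec~yr$^{-1}$, and pc.

```python
import math

# Re-derivation of the space velocity of IC 10, using the values quoted
# in the text; at d = 660 kpc, 1 uas/yr corresponds to 4.74e-6 * 660e3 km/s.
MUAS_PER_YR_TO_KMS = 4.74e-6 * 660e3

mu_alpha = 6 - 8 - 37    # uas/yr: measured - internal rotation - solar reflex
mu_delta = 23 - 3 + 11   # uas/yr (the solar-reflex term is -11)
v_alpha = mu_alpha * MUAS_PER_YR_TO_KMS
v_delta = mu_delta * MUAS_PER_YR_TO_KMS
v_radial = -148.0        # km/s, IC 10 moving toward the Sun

v_total = math.sqrt(v_alpha ** 2 + v_delta ** 2 + v_radial ** 2)
print(round(v_alpha), round(v_delta), round(v_total))  # -> -122 97 215
```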
This velocity vector is shown in the schematic view of the Local Group in Fig.~\ref{LG}. Here, we used Cartesian coordinates, where the Sun is located at the origin and the Galactic Center is located at (x,y,z)=(7.62,0,0) (see Appendix~\ref{trans} for details). \begin{figure} \resizebox{0.7\hsize}{!}{\includegraphics[bbllx=2.5cm,bburx=17.5cm,bblly=10.5cm,bbury=27cm,clip=,angle=0]{LG.ps}} \resizebox{0.7\hsize}{!}{\includegraphics[bbllx=2.5cm,bburx=17.5cm,bblly=10.5cm,bbury=27cm,clip=,angle=0]{LG_2.ps}} \caption{Schematic view of the Local Group from two viewing angles with the space velocity of IC\,10 and M33 (the latter taken from \protect\citeNP{BrunthalerReidFalcke2005}) and the radial velocity of Andromeda relative to the Milky Way. The blue cross marks the position of the Local Group Barycenter (LG BC) according to \protect\citeN{vandenBergh1999}.} \label{LG} \end{figure} \subsection{Local Group Dynamics and Mass of M31} If IC\,10 or M33 are bound to M31, then the velocity of the two galaxies relative to M31 must be smaller than the escape velocity and one can deduce a lower limit on the mass of M31: \begin{eqnarray} M_{M31}>\frac{v_{rel}^2R}{2G}. \end{eqnarray} A relative velocity of 147 km~s$^{-1}$ -- for a zero tangential motion of M31 -- and a distance of 262 kpc between IC\,10 and M31, gives a lower limit of 6.6 $\times 10^{11}$M$_\odot$. One can repeat this calculation for any tangential motion of M31. The results are shown in Fig.~\ref{mass-m31} (top). The lowest value of 0.7 $\times 10^{11}$M$_\odot$ is found for a tangential motion of M31 of --130 km~s$^{-1}$ toward the East and 35 km~s$^{-1}$ toward the North. For a relative motion of 230 km~s$^{-1}$ between M33 and M31 -- again for a zero tangential motion of M31 -- and a distance of 202 kpc, one gets a lower limit of 1.2 $\times 10^{12}$M$_\odot$ \cite{BrunthalerReidFalcke2005}. 
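The bound $M_{M31}>v_{rel}^2R/(2G)$ can be evaluated with standard constants; the sketch below reproduces the quoted lower limits for IC\,10 and M33.

```python
G = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30         # solar mass [kg]
KPC = 3.086e19           # kiloparsec [m]

def min_bound_mass_msun(v_rel_kms, d_kpc):
    # Lower limit on the host mass, M > v^2 R / (2 G), required for the
    # satellite's speed to stay below the escape velocity.
    v = v_rel_kms * 1e3
    r = d_kpc * KPC
    return v * v * r / (2.0 * G * M_SUN)

# IC 10: 147 km/s relative to M31 (zero M31 tangential motion) at 262 kpc.
print(min_bound_mass_msun(147.0, 262.0))   # -> approx 6.6e11
# M33: 230 km/s at 202 kpc (Brunthaler et al. 2005).
print(min_bound_mass_msun(230.0, 202.0))   # -> approx 1.2e12
```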
Fig.~\ref{mass-m31} (top) also shows the lower limit of the mass of M31 for different tangential motions of M31 if M33 is bound to M31. The lowest value is 4 $\times 10^{11}$M$_\odot$ for a tangential motion of M31 of --115 km~s$^{-1}$ toward the East and 160 km~s$^{-1}$ toward the North. \citeN{LoebReidBrunthaler2005} find that proper motions of M31 in negative right ascension and positive declination would have led to close interactions between M31 and M33 in the past. These proper motions of M31 can be ruled out, since the stellar disk of M33 does not show any signs of strong interactions. \citeN{LoebReidBrunthaler2005} used a total mass of M31 of 3.4$\times10^{12}$M$_\odot$ in their simulations. Although simulations with lower masses of M31 yield weaker interactions, motions in negative right ascension and positive declination are still ruled out. Thus, we can rule out these regions in Fig.~\ref{mass-m31}. This results in a lower limit of 7.5$\times 10^{11}$M$_\odot$ for M31 and agrees with a recent estimate of $12.3^{+18}_{-6}\times10^{11}$~M$_\odot$ derived from the three-dimensional positions and radial velocities of its satellite galaxies \cite{EvansWilkinson2000}. \begin{figure} \center{M$_\mathrm{M31}$ [M$_\odot$]} \resizebox{0.9\hsize}{!}{\includegraphics[bbllx=0cm,bburx=35.5cm,bblly=1.0cm,bbury=5.5cm,clip=,angle=0]{scale2.ps}} \resizebox{0.8\hsize}{!}{\includegraphics[bbllx=2.5cm,bburx=13cm,bblly=16.9cm,bbury=25.5cm,clip=,angle=0]{combined.eps}} \resizebox{0.8\hsize}{!}{\includegraphics[bbllx=2.5cm,bburx=13cm,bblly=16.9cm,bbury=25.5cm,clip=,angle=0]{combined+loeb.eps}} \caption{{\bf Top:} Lower limit on the mass of M31 for different tangential motions of M31 assuming that M33 (dashed) or IC\,10 (solid) are bound to M31. The lower limits are (4, 5, 7.5, 10, 15, 25)$\times10^{11}$M$_\odot$ for M33, and (0.7, 1, 2.5, 5, 7.5, 10, 15, 25)$\times10^{11}$M$_\odot$ for IC\,10, rising from inside. The colour scale indicates the maximum of both values. 
{\bf Bottom:} The colour scale is the same as above and gives the lower limit on the mass of M31. The contours show ranges of proper motions that would have led to a large number of stars stripped from the disk of M33 through interactions with M31 or the Milky Way in the past. The contours delineate 20\% and 50\% of the total number of stars stripped \protect\cite{LoebReidBrunthaler2005}. These regions can be excluded, since the stellar disk of M33 shows no signs of such interactions.} \label{mass-m31} \end{figure} \section{Summary} We have presented astrometric VLBA observations of the H$_2$O maser in the Local Group galaxy IC\,10. We detected a ring-like structure in one epoch with a projected diameter of $\sim$1060 AU. We measured the proper motion of the maser relative to two background quasars. Correcting for the internal rotation of IC\,10 and the rotation of the Milky Way, this measurement yields a proper motion of --39$\pm$9 $\mu$as~yr$^{-1}$ toward the East and 31$\pm$8 $\mu$as~yr$^{-1}$ toward the North, which corresponds to a total space velocity of 215$\pm$42 km~s$^{-1}$ for IC\,10 relative to the Milky Way. If IC\,10 and M33 are bound to M31, one can calculate a lower limit of the mass of M31 of 7.5~$\times 10^{11}$M$_\odot$. \begin{acknowledgements} This research was supported by the DFG Priority Programme 1177. \end{acknowledgements}
\section{Introduction} The classical summation formula of Poisson states that, for a well-behaved function $f:\mathbf{R}\rightarrow\mathbf{C}$ and its (suitably scaled) Fourier Transform $\hat{f}$, we have the relation \[\sum_{n=-\infty}^{\infty}f(n)=\sum_{n=-\infty}^{\infty}\hat{f}(n)\] Fix $x>0$, and replace $f(t)$ with $\frac{1}{x}f(t/x)$. The Poisson formula is linear and trivial for odd $f$, so we assume $f$ is even. Also, assume $f(0)=0$. Then \begin{equation}\sum_{n=1}^{\infty}\hat f(nx)=\frac{1}{x}\sum_{n=1}^{\infty}f(n/x)\end{equation} We discussed in \cite{Faifman} the extent to which this summation formula, which involves sums over lattices in $\mathbf R$, determines the Fourier transform of a function. Taking a weighted form of the Poisson summation formula as our starting point, we define a generalized Fourier-Poisson transform, and show that under certain conditions it is a unitary operator on $L^2[0,\infty)$. As a side note, we exhibit a peculiar family of unitary operators on $L^2[0, \infty)$ defined by series of the type $f(x)\mapsto \sum a_n f(nx)$. \section{Some Notation, and a summary of results} The Fourier transform maps odd functions to odd functions, rendering the Poisson summation formula trivial. Thus we only consider square-integrable even functions, or equivalently, all functions belong to $L^2[0,\infty)$. \\Denote by $\delta_n$, $n\geq1$ the sequence given by $\delta_1=1$ and $\delta_n=0$ for $n>1$, and the convolution of sequences as $(a\ast b)_k=\sum_{mn=k}a_mb_n$. \\Define the (possibly unbounded) operator \begin{equation}\label{def1}T(a_n)f(x)=\sum_{n=1}^\infty a_nf(nx)\end{equation} It holds that $T(b_n)T(a_n)f=T(a_n\ast b_n)f$ whenever the series on both sides are well defined and absolutely convergent. \\Let $a_n$, $b_n$, $n\geq1$ be two sequences which satisfy $a\ast b=\delta$. \\This is equivalent to saying that $L(s;a_n)L(s;b_n)=1$ where $L(s;c_n)=\sum_{n=1}^\infty\frac{c_n}{n^s}$. 
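The two-sided formula from which (1) descends can be checked numerically with the self-dual Gaussian $f(t)=e^{-\pi t^2}$ (its own Fourier transform under the $2\pi$-in-the-exponent convention). Note this is only a sanity check of the classical identity: the half-line version (1) additionally assumes $f$ even with $f(0)=0$, while the Gaussian check below uses the full two-sided sums.

```python
import math

def poisson_sides(x, N=50):
    # Both sides of the two-sided Poisson summation formula,
    #   sum_n f(n x) = (1/x) sum_n f(n / x),
    # for the self-dual Gaussian f(t) = exp(-pi t^2), truncated at |n| <= N.
    lhs = sum(math.exp(-math.pi * (n * x) ** 2) for n in range(-N, N + 1))
    rhs = sum(math.exp(-math.pi * (n / x) ** 2) for n in range(-N, N + 1)) / x
    return lhs, rhs

lhs, rhs = poisson_sides(0.7)
print(abs(lhs - rhs))  # -> 0 up to rounding
```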
For a given $a_n$ with $a_1\neq0$, its convolutional inverse is uniquely defined via these formulas. \\Then, the formal inverse transform to $T(a_n)$ is given simply by $T(b_n)$. \\Note that the convolutional inverse of the sequence $a_n=1$ is the M\"{o}bius function $\mu(n)$, defined as \[\mu(n)=\left\{\begin{array}{ll}(-1)^{\sharp\{p|n \mbox{ prime}\}},& n\mbox{ square-free}\\ 0,& d^2|n\mbox{ for some }d>1\end{array}\right.\] Also, define the operator \[Sf(x)=\frac{1}{x}f\left(\frac{1}{x}\right)\] - a unitary involution on $L^2[0,\infty)$, which is straightforward to check. \\\\In terms of $S$ and $T$, the Poisson summation formula for the Fourier Transform can be written as follows: \[T(e_n)\hat{f}(x)=ST(e_n)f\] where $e_n=1$ for all $n$. This suggests a formula for the Fourier transform: \begin{equation}\label{Fourier_definition}\hat{f}(x)=T(\mu_n)ST(e_n)f(x)\end{equation} with $\mu_n=\mu(n)$ the M\"{o}bius function. We would like to mention that Davenport in \cite{Davenport} established certain identities, such as \[\sum_{n=1}^{\infty}\frac{\mu(n)}{n}\{nx\}=-\frac{1}{\pi}\sin(2\pi x)\] which could be used to show that formula (\ref{Fourier_definition}) actually produces the Fourier transform of a (zero-integral) step function. \\\\We define the (possibly unbounded) Fourier-Poisson Transform associated with $(a_n)$ as \begin{equation}\mathcal{F}(a_n)f(x)=T(\overline{a_n})^{-1}ST(a_n)f(x)\end{equation} This is clearly an involution, and produces the operator $Sf(x)=\frac{1}{x}f\left(\frac{1}{x}\right)$ for $a_n=\delta_n$, and (non-formally) the Fourier Transform for $a_n=1$. Note that both are unitary operators on $L^2[0,\infty)$. In the following, we will see how this definition can be carried out rigorously. First we give conditions on $(a_n)$ that produce a unitary operator satisfying a pointwise Poisson summation formula, as was the case with the Fourier transform (Theorem \ref{fourier1}). 
Then we relax the conditions, which produces a unitary operator satisfying a weaker operator-form Poisson summation formula (Theorem \ref{fourier2}). \\\\\textit{Remark.} A similar approach appears in \cite{Baez}, \cite{Burnol} and \cite{Duffin}, where it is used to study the Fourier Transform and certain variants of it. \section{The Fourier-Poisson operator is unitary} We prove that under certain rate-of-growth assumptions on the coefficients $a_n$ and their convolution-inverse $b_n$, it holds that $\mathcal F(a_n)=T(\overline{a_n})^{-1}ST(a_n)$ is unitary.\\In the following, $f(x)=O(g(x))$ will be understood to mean as $x\rightarrow\infty$ unless otherwise indicated. \begin{lem}\label{bounded_lemma} Assume \begin{equation}\label{init_condition}\sum\frac{|a_n|}{\sqrt n}<\infty\end{equation} holds, and let $f\in C(0,\infty)$ satisfy $f=O(x^{-1-\epsilon})$ for some $\epsilon>0$. Then $T(a_n)f$ as defined in (\ref{def1}) is a continuous function satisfying $T(a_n)f(x)=O(x^{-1-\epsilon})$. Moreover, $T(a_n)$ extends to a bounded operator on $L^2[0, \infty)$, and $\|T\|\leq\sum\frac{|a_n|}{\sqrt n}$. \end{lem} \begin{proof} Consider a continuous function $f=O(x^{-1-\epsilon})$. It is straightforward to verify that $T(a_n)f$ is well-defined, continuous and $T(a_n)f(x)=O(x^{-1-\epsilon})$. Now apply Cauchy-Schwarz: \[|\langle f(mx), f(nx)\rangle|\leq \frac{1}{\sqrt {mn}}\|f\|^2\] implying \[\|T(a_n)f\|^2\leq\sum_{m,n}|a_m||a_n||\langle f(mx), f(nx)\rangle|\leq\left(\sum_n\frac{|a_n|}{\sqrt n}\right)^2\|f\|^2\] So $T(a_n)$ can be extended as a bounded operator to all of $L^2$, and $\|T\|\leq\sum\frac{|a_n|}{\sqrt n}$. \end{proof} Now, consider a sequence $a_n$ together with its convolution-inverse $b_n$. In all the following, we assume that $a_n$, $b_n$ both satisfy (\ref{init_condition}) (as an example, consider $a_n=n^{-\lambda}$ and $b_n=\mu(n)n^{-\lambda}$ with $\lambda>0.5$). 
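Convolution-inverse pairs are cheap to check numerically. The sketch below verifies that the sequence $e_n=1$ inverts to $\mu(n)$, and that the pair $a_n=n^{-\lambda}$, $b_n=\mu(n)n^{-\lambda}$ mentioned above convolves to $\delta$.

```python
def mobius(n):
    # Mobius function mu(n) via trial division; mu(n) = 0 if a square divides n.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def dirichlet_convolve(a, b, N):
    # (a*b)_k = sum_{mn=k} a(m) b(n), returned for k = 1..N.
    c = [0.0] * (N + 1)
    for m in range(1, N + 1):
        for n in range(1, N // m + 1):
            c[m * n] += a(m) * b(n)
    return c[1:]

# e_n = 1 has convolution inverse mu(n): (e * mu) = delta.
print(dirichlet_convolve(lambda n: 1.0, mobius, 10))
# The pair a_n = n^(-lam), b_n = mu(n) n^(-lam) is likewise inverse:
lam = 0.6
c = dirichlet_convolve(lambda n: n ** -lam, lambda n: mobius(n) * n ** -lam, 10)
print(max(abs(v) for v in c[1:]))  # -> ~0 (c_1 = 1, the rest vanish)
```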
\\Then $T(a_n)$, $T(\overline{b_n})$ are both bounded linear operators, and we define the Fourier-Poisson operator \[\mathcal F(a_n)=T(\overline{b_n})ST(a_n)\] Note that $T(a_n)^{-1}=T(b_n)$ (and likewise $T(\overline{a_n})^{-1}=T(\overline{b_n})$), which is easy to verify on the dense subset of continuous functions with compact support. \begin{cor}\label{pointwise} Assume that $\sum|a_n|n^\epsilon<\infty$ for some $\epsilon>0$ and $(b_n)$ satisfies (\ref{init_condition}). Take a continuous $f$ satisfying $f(x)=O(x^{-1-\epsilon})$ as $x\rightarrow\infty$ and $f(x)=O(x^\epsilon)$ as $x\rightarrow0$ for some $\epsilon>0$. Then\\ (a) $\mathcal F(a_n)f$ is continuous and $\mathcal F(a_n)f(x)=O(x^{-1-\epsilon})$.\\ (b) The formula $\sum \overline{a_n}\mathcal F(a_n)f(nx)=(1/x)\sum a_n f(n/x)$ holds pointwise.\end{cor} \begin{proof} (a) It is easy to see that all the properties of the function are preserved by applying $T(a_n)$ (using $\sum|a_n|n^\epsilon<\infty$) and then by $S$. Then by Lemma \ref{bounded_lemma} application of $T(\overline{b_n})$ to $ST(a_n)$ completes the proof. \\(b) By Lemma \ref{bounded_lemma}, we get an equality a.e. of two continuous functions: \[T(\overline{a_n})\mathcal F(a_n)f=ST(a_n)f\qedhere\] \end{proof} \begin{thm}\label{fourier1} Assume that $\sum|a_n|n^\epsilon<\infty$ for some $\epsilon>0$ and $(b_n)$ satisfies (\ref{init_condition}). Then $\mathcal F(a_n)$ is a unitary operator.\end{thm} \begin{proof} Consider $G=ST(\overline{b_n})ST(a_n)$. Take a continuous function $f$ which is compactly supported. Define $g(x)=ST(a_n)f(x)=\frac{1}{x}\sum a_n f(\frac{n}{x})$, and note that $g$ vanishes for small values of $x$, and $|g(x)|=O\left(\sum|a_n|n^\epsilon x^{-1-\epsilon}\right)$. 
Then $T(\overline{b_n})g$ is given by the series (\ref{def1}), and we obtain the absolutely convergent formula \[Gf(x)=\sum_{m,n}\frac{a_n\overline{b_m}}{m}f(\frac{n}{m}x)\] Take two such $f_1, f_2$ and compute \[\langle Gf_1, Gf_2\rangle=\sum_{k,l,m,n}\frac{a_n\overline{b_m}\overline{a_k}b_l}{ml}\langle f_1(\frac{n}{m}x), f_2(\frac{k}{l}x)\rangle\] the series are absolutely convergent when both $a_n$ and $b_n$ satisfy (\ref{init_condition}). Now we sum over all co-prime $(p,q)$, such that $\frac{n}{m}=\frac{p}{q}\frac{k}{l}$. So, $\frac{nl}{mk}=\frac{p}{q}$, i.e. $nl=up$ and $mk=uq$ for some integer $u$. Then \[\langle f_1(\frac{n}{m}x), f_2(\frac{k}{l}x)\rangle=\frac{l}{k}\langle f_1(\frac{p}{q}x), f_2(x)\rangle\] \[\langle Gf_1, Gf_2\rangle=\sum_{(p,q)=1}\langle f_1(\frac{p}{q}x), f_2(x)\rangle\sum_{u}\sum_{mk=uq}\sum_{nl=up}\frac{a_n\overline{b_m}\overline{a_k}b_l}{mk}=\] \[=\sum_{(p,q)=1}\frac{1}{q}\langle f_1(\frac{p}{q}x), f_2(x)\rangle\sum_{u}\frac{1}{u}\sum_{mk=uq} \overline{a_kb_m}\sum_{nl=up} a_nb_l\] and so the only non zero term corresponds to $p=q=1$, $u=1$, $m=n=k=l=1$, i.e. \[\langle Gf_1, Gf_2\rangle=\langle f_1, f_2\rangle\] Since $\mathcal F(a_n)$ is invertible, we conclude that $\mathcal F(a_n)=SG$ is unitary. \end{proof} \section{An example of a unitary operator defined by series} Let $a_n\in\mathbb C$ be a sequence satisfying (\ref{init_condition}). We denote by $C_0(0, \infty)$ the space of compactly supported continuous functions. \\Let $T(a_n):C_0(0, \infty)\rightarrow C_0(0, \infty)$ be given by $(\ref{def1})$. We will describe conditions on $a_n$ that would imply $\langle Tf,Tg\rangle_{L^2}=\langle f,g\rangle_{L^2}$ for all $f,g\in C_0(0, \infty)$. Then we can conclude that $T$ is an isometric operator on a dense subspace of $L^2[0,\infty)$, and thus can be extended as an isometry of all $L^2[0,\infty)$.\\A $C$-isometric (correspondingly, unitary) operator will mean an isometric (unitary) operator, scaled by a constant factor $C$. 
\begin{prop}\label{unitary_sum} $\langle Tf,Tg\rangle_{L^2}=C^2\langle f,g\rangle_{L^2}$ for all $f,g\in C_0(0, \infty)$ if and only if for all co-prime pairs $(m_0, n_0)$ \begin{equation}\label{sequence_condition}\sum_{k=1}^{\infty}\frac{a_{m_0 k}\overline{a_{n_0k}}}{k}=\left\{\begin{array}{ll}C^2,& m_0=n_0=1\\ 0,& m_0\neq n_0\end{array}\right.\end{equation}\end{prop} \begin{proof} Take $\epsilon>0$. Denote $M=\sup\{x|f(x)\neq 0 \vee g(x)\neq0\}$. Write \[\int_\epsilon^\infty Tf(x)\overline{Tg(x)}dx=\int_\epsilon^\infty\sum_{m,n=1}^\infty a_m\overline{a_n}f(mx)\overline{g(nx)}dx\] It is only necessary to consider $m, n<M/\epsilon$. Thus the sum is finite, and we may write \[\int_\epsilon^\infty Tf(x)\overline{Tg(x)}dx=\sum_{m,n=1}^\infty a_m\overline{a_n}\int_\epsilon^\infty f(mx)\overline{g(nx)}dx\] Note that \[\left|\int_0^\infty f(mx)\overline{g(nx)}dx\right|\leq\|f(mx)\|\|g(nx)\|=\frac{1}{\sqrt{mn}}\|f\|\|g\|\] and therefore \[\sum_{m,n=1}^\infty a_m\overline{a_n}\int_0^\infty f(mx)\overline{g(nx)}dx\] is absolutely convergent: \[\sum_{m,n=1}^\infty\left|a_m\overline{a_n}\int_{0}^\infty f(mx)\overline{g(nx)}dx\right|\leq\sum_{m,n=1}^\infty\frac{|a_m||a_n|}{\sqrt{mn}}\|f\|\|g\|\] Therefore, the sum \[S(\epsilon)=\sum_{m,n=1}^\infty a_m\overline{a_n}\int_{0}^{\epsilon} f(mx)\overline{g(nx)}dx\] is absolutely convergent. We will show that $S(\epsilon)\rightarrow0$ as $\epsilon\rightarrow0$. Assume $|f|\leq A_f$, $|g|\leq A_g$. 
Then \[|S(\epsilon)|\leq \sum_{m,n}|a_m||a_n|\left|\int_0^\epsilon f(mx)\overline{g(nx)}dx\right|\leq\sum_{m,n}\frac{|a_ma_n|}{\sqrt {mn}}\sqrt{\int_0^{m\epsilon} |f|^2}\sqrt{\int_0^{n\epsilon} |g|^2}\] And \[\sum_{m=1}^\infty\frac{|a_m|}{\sqrt m}\sqrt{\int_0^{m\epsilon} |f|^2}=\sum_{m=1}^{\sqrt{1/\epsilon}}+\sum_{m=\sqrt{1/\epsilon}}^\infty\leq \sum_{m=1}^\infty \frac{|a_m|}{\sqrt m}\sqrt{\int_0^{\sqrt\epsilon} |f|^2} + \|f\|\sum_{m=\sqrt{1/\epsilon}}^\infty \frac{|a_m|}{\sqrt m}\longrightarrow 0\] We conclude that \[\langle Tf, Tg\rangle=\sum_{m,n=1}^\infty \frac{a_m\overline{a_n}}{m}\int_{0}^\infty f(x)\overline{g(\frac{n}{m}x)}dx=\left\langle f(x), \sum_{(m_0,n_0)=1}\frac{1}{m_0}\sum_{k=1}^{\infty}\frac{a_{m_0 k}\overline{a_{n_0k}}}{k} g(\frac{n_0}{m_0}x)\right\rangle\] Therefore, $T(a_n)$ is a $C$-isometry on $C_0(0,\infty)$ if and only if $(a_n)$ satisfies (\ref{sequence_condition}). \end{proof} \noindent\textbf{Example 1.} Take $a_n^{(2)}=0$ for $n\neq 2^k$ and \[a_{2^k}^{(2)}=\left\{\begin{array}{ll}1,& k=0\\ (-1)^{k+1},& k\geq1\end{array}\right.\] Then \[T_2f(x)=\sum_n a_n^{(2)}f(nx)=f(x)+f(2x)-f(4x)+f(8x)-f(16x)+...\] is a $\sqrt 2$-isometry. \\\\\textbf{Example 2.} Generalizing Example 1 (and using the already defined $a_n^{(2)}$), we fix a natural number $m$, and take $a_{m^k}^{(m)}=\left(\frac{m}{2}\right)^{k/2}a_{2^k}^{(2)}$ and $a_n^{(m)}=0$ for $n\neq m^k$. Then \[T_m f(x)=\sum a_n^{(m)}f(nx)\] is again a $\sqrt 2$-isometry. \\\\\textbf{Example 3.} Similarly, we could take $a_n=0$ for $n\neq 2^k$ and \[a_{2^k}=\left\{\begin{array}{ll}1,& k=0\\ -1,& k\geq1\end{array}\right.\] Then \[Tf(x)=f(x)-f(2x)-f(4x)-f(8x)-f(16x)-...\] is a $\sqrt 2$-isometry. 
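Condition (\ref{sequence_condition}) can be checked numerically for Example 1; the truncation level below is an arbitrary choice, and the closed form of the associated Dirichlet series follows by summing a geometric series.

```python
def a(n):
    # Example 1 coefficients: supported on powers of two, with a_1 = 1
    # and a_{2^k} = (-1)^(k+1) for k >= 1.
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    if n != 1:
        return 0.0
    return 1.0 if k == 0 else (-1.0) ** (k + 1)

def condition_sum(m0, n0, K=1 << 16):
    # Truncated sum_{k>=1} a_{m0 k} a_{n0 k} / k (the coefficients are
    # real, so conjugation is omitted).
    return sum(a(m0 * k) * a(n0 * k) / k for k in range(1, K))

print(condition_sum(1, 1))  # -> approx 2, i.e. C = sqrt(2)
print(condition_sum(1, 2))  # -> approx 0
print(condition_sum(2, 3))  # -> exactly 0: 3k is never a power of two

# Cross-check via the Dirichlet series: the geometric series sums to
# L(s) = (2 + 2^s)/(1 + 2^s), with |L(1/2 + ix)|^2 = 2 for real x.
w = 2 ** complex(0.5, 1.7)
print(abs((2 + w) / (1 + w)) ** 2)  # -> 2.0 up to rounding
```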
\\\\\textit{Remarks} \\$\bullet$ If $a_n$ and $b_n$ satisfy (\ref{init_condition}), then so does their convolution $c_n=(a\ast b)_n=\sum_{kl=n}a_k b_l$: \[\sum_n\frac{|c_n|}{\sqrt n}\leq \sum_n\sum_{kl=n}\frac{|a_k||b_l|}{\sqrt{kl}} =\sum_k\frac{|a_k|}{\sqrt k}\sum_l\frac{|b_l|}{\sqrt l}<\infty\] \\$\bullet$ Also, any two scaled isometries of the form $T(a_n)$ commute: if $a_n$ and $b_n$ satisfy (\ref{sequence_condition}), $T(a_n)$ and $T(b_n)$ are isometries from $C_0(0,\infty)$ to itself, and thus so is their composition, which is easily computed to be $T(a_n\ast b_n)$. \begin{prop}When $(a_n)$ satisfies (\ref{sequence_condition}), $T(a_n)$ is $C$-unitary.\end{prop} \begin{proof} It is easy to verify that for any $g\in C[0,\infty)$ and $a_n$ satisfying (\ref{sequence_condition}), \[T(a_n)^*g=\sum\frac{\overline{a_n}}{n}g(\frac{x}{n})\] Moreover, $T(a_n)^*$ is a scaled isometry on $\{f\in C_0[0,\infty): supp(f)\subset[a,b], a>0\}$ (the proof is identical to that for $T(a_n)$), and so a scaled isometry on $L^2$. Thus $T^*T=TT^*=\|T\|^2I$, and so $T(a_n)$ is $C$-unitary.\end{proof} \noindent\textit{Remark.} We recall the operator $Sf(x)=\frac{1}{x}f\left(\frac{1}{x}\right)$ - a unitary operator on $L^2(0,\infty)$. Then for a continuous function $f$ with compact support bounded away from $0$, we have $Sf\in C_0(0,\infty)$ and so we can use (\ref{def1}) to obtain $ST(\overline{a_n})Sf=T(a_n)^*f$ and therefore $ST(\overline{a_n})S=T(a_n)^*$ on all of $L^2$. In particular, for real sequences $(a_n)$, $ST^m$ and $T^mS$ are unitary involutions (up to scaling) for any integer $m$. 
This is done through a natural isometry between $L^2[0,\infty)$ and $L^2(-\infty, \infty)$ which was suggested to us by Bo'az Klartag (see also \cite{Korani}). \\\\We will denote by $dm$ the Lebesgue measure on $\mathbb R$, and $\hat g$ will stand for the Fourier transform defined as $\hat{g}(\omega)=\int_{-\infty}^\infty g(y)e^{-iy\omega}dy$. \\First, define two isometries of spaces: \\(1) $u:L^2\left([0,\infty), dm(x)\right)\rightarrow L^2\left(\mathbb R, e^ydm(y)\right)$ given by $f(x)\mapsto g(y)=f(e^y)$. \\(2) $v:L^2\left(\mathbb R, e^ydm(y)\right)\rightarrow L^2\left(\mathbb R, dm\right)$ given by $g(y)\mapsto h(x)=(2\pi)^{-\frac{1}{2}}\hat{g}(x+i/2)$. \\$u$ is isometric by a simple change of variables. \\To see that $v$ is isometric, note that $\widehat{f}(x+i/2)=\widehat{e^{t/2}f(t)}(x)$, and so by Plancherel's formula \[\int|\hat{f}(x+i/2)|^2dx=2\pi\int|f(t)|^2e^t dt\] (alternatively, one could decompose $v$ into the composition of two isometries: $f(y)\mapsto e^{y/2}f(y)$, identifying $L^2\left(\mathbb R, e^ydm(y)\right)$ with $L^2(\mathbb R, dm)$, and then Fourier transform). \\ We will denote the composition $v\circ u=w$. \\For $A:L^2[0,\infty)\rightarrow L^2[0,\infty)$, we write $\widetilde{A}=wAw^{-1}:L^2(\mathbb R)\rightarrow L^2(\mathbb R)$ - the conjugate operator to $A$. The conjugate to $S$ is $\tilde S(h)(x)=h(-x)$. \\\\Let $a_n$ satisfy (\ref{init_condition}), implying $|L(1/2+ix; a_n)|$ is bounded and continuous. Then for $g=u(f)$, \[(uT(a_n)u^{-1}g)(y)=\sum a_n g(y+\log n)=g\ast\nu(y)\] where $\nu(y)=\sum a_n\delta_{-\log n}(y)$ and $\hat{\nu}(z)=\sum a_n e^{i z\log n}=\sum a_n n^{iz}=L(-iz; a_n)$, which converges for $Im z\geq1/2$ by (\ref{init_condition}). And so letting $h=vg$, \[\widetilde{T(a_n)}h(x)=(2\pi)^{-\frac{1}{2}}\widehat{g\ast \nu}(x+i/2)= L(1/2-ix; a_n)h(x)\] thus we proved \begin{cor}\label{unitary_l} Assume $a_n$ satisfies (\ref{init_condition}). 
Then the following are equivalent: \\(a) $|L(1/2+ix; a_n)|=C$ \\(b) $T(a_n)$ is $C$-unitary on $L^2[0,\infty)$\\(c) $(a_n)$ satisfies (\ref{sequence_condition}).\end{cor} \noindent The equivalence of (a) and (c) can easily be established directly. \\\\For example, the $\sqrt 2$-unitary $Tf(x)=f(x)+f(2x)-f(4x)+f(8x)-...$ discussed previously is associated with $L(s; a_n)=\frac{2+2^s}{1+2^s}$, which has absolute value $\sqrt 2$ on $Re (s)=1/2$. \\\\This suggests that the Fourier-Poisson transform associated with $a_n$, which was defined in section 3 for some special sequences $(a_n)$, could be generalized as follows: $\mathcal F(a_n)f=T(\overline{a_n})^{-1}ST(a_n)f$ should be defined through \begin{equation}\label{Fourier_Poisson}\widetilde{\mathcal F(a_n)}h(x)=h(-x)\frac{L(1/2+ix; a_n)}{L(1/2-ix; \overline{a_n})}=h(-x)\frac{L(1/2+ix; a_n)}{\overline{L(1/2+ix; a_n)}}\end{equation} We arrive at the following \begin{thm}\label{fourier2} Assume $\sum |a_n| n^{-1/2}<\infty$. Then \\(a) There exists a bounded operator $\mathcal F(a_n):L^2[0,\infty)\rightarrow L^2[0,\infty)$ satisfying the Poisson summation formula (in its operator form) $T(\overline{a_n})\mathcal F(a_n)=ST(a_n)$. Moreover, $\mathcal F(a_n)$ is unitary. \\(b) If for some $\epsilon>0$, $\sum |a_n|n^{-1/2+\epsilon}<\infty$, then a bounded $\mathcal F(a_n)$ satisfying $T(\overline{a_n})\mathcal F(a_n)=ST(a_n)$ is unique.\end{thm} \begin{proof} (a) We have $L(1/2+ix; a_n)/\overline{L(1/2+ix; a_n)}=e^{2i(\arg L(1/2+ix; a_n))}$ whenever $L(1/2+ix;a_n)\neq 0$. In accordance with (\ref{Fourier_Poisson}), define \[\widetilde{\mathcal F(a_n)}h(x)=e^{2i(\arg L(1/2+ix; a_n))}h(-x)\] taking $\arg L(1/2+ix; a_n)=0$ whenever $L(1/2+ix; a_n)=0$. We then have \[L(1/2-ix; \overline{a_n})\widetilde{\mathcal F(a_n)}h(x)=L(1/2+ix; a_n)h(-x)\] for all $h\in L^2(\mathbb R)$, implying $T(\overline{a_n})\mathcal F(a_n)=ST(a_n)$ in $L^2[0, \infty)$. Also, $\mathcal F(a_n)$ is isometric and invertible, thus unitary. 
\\(b) For uniqueness, observe that $L(s;a_n)$ is analytic in a neighborhood of $Re(s)=1/2$, and so its set of zeros $Z$ is discrete, and the ratio $L(1/2+ix; a_n)/\overline{L(1/2+ix; a_n)}$ is continuous and of absolute value 1 outside of $Z$. Thus for continuous $h$ with $supp(h)\cap Z=\emptyset$, the equation \[L(1/2-ix; \overline{a_n})\widetilde{\mathcal F(a_n)}h(x)=L(1/2+ix; a_n)h(-x)\] determines $\widetilde{\mathcal F(a_n)}h$ uniquely, and all such $h$ are dense in $L^2(\mathbb R)$. \end{proof} \noindent By part (b) we conclude that under the conditions of Theorem \ref{fourier1}, the operator $\mathcal F(a_n)$ defined in section 3 coincides with the operator defined here. \noindent \\\\\textit{Remark.} It was pointed out to us by Fedor Nazarov that under the conditions of Theorem \ref{fourier2} the Poisson summation formula cannot hold pointwise for all sequences $(a_n)$. \section{A formula involving differentiation} \noindent Denote by $B:L^2[0,\infty)\rightarrow L^2[0,\infty)$ the unbounded operator \[Bf(x)=i(xf'+f/2)\] with $Dom(B)=\{f\in C^\infty: xf'+f/2\in L^2\}$. It is straightforward to check that $B$ is a symmetric operator. \\\\ It is easy to verify that the ordinary Fourier transform $\mathcal F$ satisfies, for a well-behaved (i.e. Schwartz) function $f$, the identity $B\mathcal F f+\mathcal F B f=0$. This identity turns out to also be a consequence of Poisson's formula, and so it holds for a large family of operators. We will need the following standard lemma (see \cite{Simon}) 
Thus the existence of analytic extension is clear, and we can write \[ |x|^k |h(x+iy)|=|x|^k|\widehat{e^{yt}g(t)}(x)|=|\widehat{\left(e^{yt}g(t)\right)^{(k)}}(x)|\] Note that \[\left(e^{yt}g(t)\right)^{(k)}=e^{yt}\sum_{j=0}^k P_{j,k}(y) g^{(j)}(t)\] where $P_{j,k}$ denotes some universal polynomial of degree $\leq k$. Therefore \[ \sup_x |x|^k |h(x+iy)|\leq \int_{-\infty}^\infty\left|e^{yt}\sum_{j=0}^k P_{j,k}(y) g^{(j)}(t)\right|dt\] The sum is finite, so we can bound every term separately. Choose $b<Y<B$, $\epsilon=Y-b$. Then \[\sup_{|y|<b}\int_{-\infty}^\infty|e^{yt}g^{(j)}(t)|dt\leq C(j, Y)\int_{-\infty}^\infty e^{-\epsilon |t|}dt<\infty\] \textit{(b)$\Rightarrow$(a)}. Note that $g$ is a Schwartz function since $h$ is. It suffices to show (by induction) that the suprema of $|(e^{yt}g)^{(k)}|$ are finite for every $k$ and $b<B$. Notice that $\widehat{g^{(k)}}(x)=(ix)^k\hat g(x)$ has an analytic extension to the strip $|y|<B$ (namely: $(iz)^kh(z)$), satisfying the same conditions as $h$ itself. Now take a $C^\infty$ compactly supported function $\phi$ on $\mathbb R$. We will show that \begin{equation}\label{paley}\int_{-\infty}^\infty e^{yt}g(t)\overline{\phi(t)}dt=\int_{-\infty}^\infty h(x+iy)\overline{\hat \phi(x)}dx\end{equation} implying \[\widehat{e^{yt}g(t)}(x)=h(x+iy)\] and therefore for any $k$ \[\widehat{e^{yt}g^{(k)}(t)}(x)=i^k(x+iy)^kh(x+iy)\] which is equivalent to having \[\widehat{\left(e^{yt}g(t)\right)^{(k)}}(x)=i^kx^kh(x+iy)\] Indeed, $\psi=\hat\phi$ is an analytic function satisfying the supremum condition by the ``(a)$\Rightarrow$(b)'' implication. 
Then \[\int_{-\infty}^\infty e^{yt}g(t)\overline{\phi(t)}dt=\int_{-\infty}^\infty \hat g(x)\overline{\widehat{e^{yt}\phi(t)}(x)}dx= \int_{-\infty}^\infty h(x)\overline{\psi(x+iy)}dx\] Observe that $\lambda(z)=h(z)\overline{\psi(iy+\overline z)}$ is an analytic function, and the integrals of $\lambda(z)$ over the intervals $Re (z)=\pm R$, $-b<Im(z)<b$ converge to 0 as $R\rightarrow\infty$ by the uniform bounds on $h$ and $\psi$. Considering the line integral of $\lambda$ over a rectangle with these vertical sides and horizontal lines at $Im(z)=0$ and $Im(z)=y$, we get \[\int_{-\infty}^\infty h(x)\overline{\psi(iy+x)}dx=\int_{-\infty}^\infty h(x+iy)\overline{\psi(x)}dx\] which proves (\ref{paley}). Finally, \[\sup_{|y|<b}\sup_t |(e^{yt}g(t))^{(k)}|\leq\sup_{|y|<b}\int_{-\infty}^\infty |x|^k|h(x+iy)|dx\] which is finite by the assumptions.\end{proof} \noindent Let $\mathcal S_0$ be the following class of ``Schwartz'' functions in $L^2[0,\infty)$ \[\mathcal S_0=\{f\in C^\infty: \sup_x |x|^n|f^{(k)}(x)|<\infty \ \forall k\geq0,\ \forall n\in\mathbb Z\}\] Note that $n\in\mathbb Z$ can be negative. Observe that $\mathcal S_0\subset Dom(B)$. \begin{prop} Assume $(a_n)$ satisfies $\sum |a_n|n^\epsilon<\infty$ for some $\epsilon>0$, and the convolution inverse $(b_n)$ satisfies $\sum |b_n|/\sqrt n<\infty$. Next, assume that \[L(1/2+iz; a_n)/\overline{L(1/2+i\overline {z};a_n)}\] (which is meromorphic by assumption in the strip $|y|<1/2+\epsilon$) satisfies the following polynomial growth condition: there exist constants $N$, $C_0$, and $C_1$ such that \[\left|\frac{L(1/2-y+ix; a_n)}{L(1/2+y+ix;a_n)}\right|\leq C_0+C_1 |x|^{N}\] for all $x,y\in\mathbb R$, $|y|\leq 1/2+\epsilon/2$. Let $f\in \mathcal S_0$. Then $\mathcal F(a_n) B f + B\mathcal F(a_n) f=0$.\end{prop} \begin{proof} Denote $g=\mathcal F(a_n)f$, and let $F(t)=e^{t/2}f(e^t)$, $G(t)=e^{t/2}g(e^t)$, $h_f=\hat F$, and $h_g=\hat G$. 
The condition $f\in \mathcal S_0$ implies immediately that $F\in C^\infty$ and $\sup_{t\in\mathbb R} e^{yt}|F^{(k)}(t)|<\infty$ for all $y\in\mathbb R$, since $F^{(k)}(t)=P_k(e^{t/2}, f(e^t),...,f^{(k)}(e^t))$ for some fixed polynomial $P_k$. By Lemma \ref{schwartz}, $h_f$ is a Schwartz function (on the real line), with an analytic extension to the strip $|y|<1$ such that \[\sup_{|y|\leq1}\sup_x |x|^k |h_f(x+iy)|<\infty\] for all $k\geq 0$. Next, \[h_g(x+iy)=\frac{L(1/2-y+ix; a_n)}{\overline{L(1/2+y+ix;a_n)}}h_f(x+iy)\] is an analytic function in the strip $|y|<1/2+\epsilon$. By the assumed bound on the L-function ratio, it is again a Schwartz function when restricted to the real line; and \[\sup_{|y|<b}\sup_x |x|^k |h_g(x+iy)|<\infty\] for all $b<1/2+\epsilon/2$. Denote $\delta=\epsilon/4$. Again by Lemma \ref{schwartz}, $G\in C^\infty$ and satisfies $|G(t)|\leq C e^{-(1/2+\delta)|t|}$ and likewise $|G'(t)|\leq C e^{-(1/2+\delta)|t|}$ for some constant $C$. Then, as $t\to-\infty$, \[|g(e^t)|=e^{-t/2}|G(t)|\leq C e^{-t/2} e^{(1/2+\delta)t}=O(e^{\delta t})\] and as $t\to+\infty$, \[|g(e^t)|=e^{-t/2}|G(t)|\leq Ce^{-t/2}e^{-(1/2+\delta)t}=O(e^{-(1+\delta)t})\] Also, as $t\to+\infty$, \[|g'(e^t)|=\left|G'(t)-\frac{1}{2}G(t)\right|e^{-3t/2}= O(e^{-(2+\delta)t})\] Thus $g\in C^\infty(0,\infty)$ and $g=O(x^\delta)$ as $x\rightarrow 0$, $g=O(x^{-1-\delta})$ as $x\rightarrow \infty$, while $g'=O(x^{-2-\delta})$ as $x\rightarrow\infty$. By Corollary \ref{pointwise} (b) we can write $\sum \overline{a_n}g(nx)=(1/x) \sum a_n f(n/x)$; the functions on both sides are then $C^1$, and can be differentiated term-by-term. 
Carrying the differentiation out, we get \[\sum \overline{a_n} (nx)g'(nx)=-(1/x)\sum a_n f(n/x)-(1/x^2)\sum a_n(n/x)f'(n/x)\] Invoke Lemma \ref{bounded_lemma} to write \[T(\overline{a_n})(xg')= -T(\overline{a_n})g-ST(a_n) (xf')\] and then use Corollary \ref{pointwise} applied to $xf'$ to conclude \[T(\overline{a_n})(xg')= -T(\overline{a_n})g-T(\overline{a_n})\mathcal F(a_n) (xf')\] Finally, apply $T(\overline{b_n})$ to obtain $xg'=-g-\mathcal F(a_n)(xf')$, which is equivalent to the announced identity. \end{proof} \noindent\textit{Remark.} As an example of such a sequence, take $a_n=n^\lambda$, $\lambda<-1$. \section{Acknowledgements} I am indebted to Bo'az Klartag for the idea behind section 5, and also for the motivating conversations and reading the drafts. I am grateful to Nir Lev, Fedor Nazarov, Mikhail Sodin and Sasha Sodin for the illuminating conversations and numerous suggestions. Also, I'd like to thank my advisor, Vitali Milman, for the constant encouragement and stimulating talks. Finally, I would like to thank the Fields Institute for the hospitality during the final stages of this work.
\section{Introduction} \IEEEPARstart{S}{ystems} of Systems (SoSs), comprised of heterogeneous components capable of localized, autonomous decision making, are becoming increasingly ubiquitous in a wide range of socio-technical systems \citep{sauser2010systomics,maier1996architecting,jamshidi2011system,mina2006complex}. SoSs often rely on multiple types of localized resources, whose management is a crucial challenge for the optimal performance of the system. SoSs often operate in highly uncertain environments; because of this, it is difficult to anticipate the demand for resources in various parts of the system at every moment of time. This means that even if the total demand for a resource can be met, achieving an efficient distribution of the resource is not a trivial challenge. The efficient distribution of resources is, among other factors, a strong function of the system architecture; thus modeling this interdependency---between the architecture and the resource allocation mechanisms---becomes an important area of research in SoSs engineering. Using a centralized scheme for resource management can be extremely difficult or impossible, because of the large scale, high complexity, and environmental uncertainty of SoSs. Attempts to manage all decision making centrally, by gathering information from widely-dispersed system components and then broadcasting decisions back to those components, can lead to a system that is slow to respond to changes in the environment and therefore inefficient (see \cite{koutsopoulos2010auction} for a case in radio systems). Allocation by a central planner is made more complex in situations with heterogeneous system components, as is often the case in SoSs. The tendency of SoSs to have heterogeneous components arises from the fact that these components often operate in different environments, which lead to differing operational constraints and resource requirements. 
One way to overcome the challenge posed by uncertain, variable demand for resources is to ensure that all components of the system are supplied with resources equal to the maximum possible demand for any one component; while this will eliminate the risk of under-supplying any part of the system, it is very inefficient and likely to be prohibitively expensive in most systems. Alternatively, a centralized decision making process can allocate resources as and when they are needed throughout the system; however, as was explained before, this can lead to an impractically slow and unresponsive system. There has been a shift in the system design paradigm to take advantage of the capabilities that distributed, autonomous or semi-autonomous decision making provides; examples can be seen in many SoSs: fractionated satellite systems\footnote{A fractionated satellite system is a systems architecture concept with the idea being to replace large-scale, expensive, and rigid monolithic satellite systems with a \textit{network} of small-scale, agile, inexpensive, and less complex free-flying satellites that communicate wirelessly and accomplish the same goal as the single monolithic satellite. This new distributed architecture for space systems is argued to be more flexible when responding to uncertainties, such as technology evolution, technical failures, funding availability, and market fluctuations \citep{brown2006value}.} in which detection, processing, and communication tasks are dynamically assigned to members of the satellite cluster \citep{brown2009value, mosleh2014optimal}; communication networks in which frequency spectrum is dynamically allocated for efficient use \citep{mitola1999cognitive, ji2007cognitive}; and groups of unmanned, autonomous vehicles (such as aerial drones) that make dynamic assignment of tasks between them and can each make use of information gathered by other members of the group \citep{alighanbari2005decentralized}. 
Computational power, bandwidth, and information are examples of scarce resources that the satellites, communication systems, and unmanned vehicles respectively must make efficient use of in their operations. The distributed, autonomous scheme can also help with optimal resource management of systems of systems: rather than attempting to address the challenge of resource allocation centrally, one can accept that at any given time some parts of the system will have more resources than needed and other parts fewer; this is not necessarily a problem if the system components are capable of sharing resources between themselves locally. If one part of the system is connected to another part of the system, then those parts are able to exchange resources. These connections could be direct or indirect; for example, one part of a system could receive resources from another part via any number of intermediary components. Connections between system components typically come at a price, however: there is most likely some immediate cost associated with creating and maintaining a direct connection between two system components, and while it may be possible for a resource to be shared indirectly between parts of a system, the quantity or quality of that resource will likely be decreased during the multi-step transmission, e.g., through attenuation, delay, or the cost of involving a third party. An architecture perspective, represented by the connectivity structure, can be taken in distributed resource management of a variety of technical and socio-technical SoSs, in which availability of resources is subject to uncertainty. 
For example, an interconnected network of electrical microgrids can enhance resource access between the units, in which availability of energy resources is affected by the inherent uncertainty of renewable energy resources and fluctuations in electricity demand \citep{katiraei2005micro,saad2011coalitional}; i.e., the connectivity structure of the system will play an important role in how unmet demand of one microgrid is supplied by the excess generation of another in an interconnected network of microgrids. Connectivity structure is also a key contributor in distributed resource management of organizations and enterprise systems. For example, in R\&D collaboration networks, firms can either directly combine knowledge, skills, and physical assets to innovate or access innovations of other firms through intermediary firms that serve as conduits through which knowledge and information can spread \citep{konig2012efficiency}. Direct collaboration between two firms has higher benefits, but involves communication and coordination costs, while indirect access to resources often discounts benefits due to involving third parties. Given that it is probably inefficient and not practical for every part of a system to be directly connected to every other part, the question becomes that of deciding \textit{what is the best way to connect the system components in order to enhance resource access in uncertain environments}. Traditional systems engineering methods and theories are not sufficient for analyzing and explaining the dynamics of resource allocation for SoSs with autonomous parts. Any framework that is used to address this challenge has to be able to take into account the local interactions between components of the system while also ensuring that the structure of the connections between components is optimal for the system as a whole. 
The optimality of the connectivity structure should be evaluated both in the case that it is designed by a central planner and in the case that it can change at the discretion of autonomous components. A viable approach to find the connectivity structures that enhance access to resources within SoSs is to use Network Theory. Network Theory provides methods that go beyond the traditional systems engineering approach as it combines graph theory, game theory, and uncertainty analysis. The system can be modeled as a graph, with the various components of the system being nodes in the graph; the resource-sharing interactions between the autonomous components can be represented using game theory and uncertainty analysis, in the form of games on networks. In this paper, we will study the system connectivity structures that enhance access to resources in heterogeneous SoSs. We employ Strategic Network Formation from the economics literature as the underlying framework for finding the optimal connectivity structure when the system is centrally designed, as well as when the connectivity structure is determined dynamically by distributed autonomous components. We discuss the characteristics of those connectivity structures for different heterogeneity conditions. The organization of the rest of the paper is as follows. In Section~\ref{RS_and_Connectivity}, we discuss a spectrum of systems architectures and explain the role of system connectivity structure and dynamic resource sharing in response to changes in the environment. In Section~\ref{Modelling_access}, we discuss why Network Theory provides a promising theoretical foundation for studying the architecture of SoSs. In Section~\ref{framework}, we introduce a framework based on Economic Networks to model resource access in SoSs with heterogeneous components. 
In Section~\ref{OptimalConnectivity}, we introduce models that are used to identify optimal connectivity structures for resource access in SoSs with different heterogeneity conditions, and central- and distributed-design schemes. In Sections~\ref{application},~\ref{discussion}, and~\ref{conclusion}, we discuss applications of the suggested framework, conclude, and provide opportunities for future studies. \section{Resource sharing and system connectivity structure} \label{RS_and_Connectivity} Several frameworks have been developed for the architecture of SoSs \citep{maier2009art,rhodes2009architecting,morganwalp2002system}. In this paper, we will focus on using the system's connectivity structure to represent its architecture and will use the framework developed in our previous work \citep{heydari2014,mosleh2015monolithic}. This framework is capable of describing many levels of system connectedness, from fully integral monolithic systems to distributed, adaptive, and dynamic systems. The systems architecture framework is inspired by a general concept of modularity that combines systems modularity \citep{baldwin2000design} and network modularity \citep{newman2006modularity}: that of breaking the larger system into smaller, discrete pieces that are able to interact (communicate) with one another via standardized interfaces \citep{langlois2002modularity}. Given this broad definition of modularity, the systems architecture framework defines five levels of modularity: $M_0-$ Integral (e.g., multi-function valve), $M_1-$ Decomposable (e.g., Smartphone's mainboard), $M_2-$ Modular yet monolithic (e.g., PC's mainboard), $M_3-$ Static-Distributed (e.g., Client-server), and $M_4-$ Dynamic-Distributed (e.g., Internet of Things). 
\subsection{Systems architecture spectrum} The five levels of modularity in the systems architecture framework, developed in our previous work \citep{heydari2014,mosleh2015monolithic}, form a spectrum in which increased modularity improves system responsiveness to the operating environment. The level of modularity, together with systems flexibility, increases from $M_0$ to $M_4$. However, increased modularity comes with increased interfacing costs, increased system complexity, and increased potential for system instability. The operating environment encompasses the physical surroundings of the system and the effects of stakeholder requirements, consumer demand, market forces, policy and regulation, and budgetary constraints. The ability to respond in a flexible manner to all of these environmental factors comes at a cost: if there is little uncertainty in the environment then the flexibility of high modularity will be costly and could lead to instability because of unintended emergent behavior. The three lowest modularity levels of the framework (i.e., $M_0$, $M_1$, and $M_2$) are related to monolithic systems: systems comprised of a single unit and the interfaces within the monolithic system. The two higher modularity levels of the framework ($M_3$ and $M_4$) correspond to distributed systems that have multiple units capable of inter-unit communications. The interconnected components of the $M_3$ system, which can be clients or servers, communicate and share resources with tasks being assigned to the component with the most appropriate capabilities according to a centralized process. At the $M_3$ level (``static-distributed") decision-making is centralized and the structure of interactions between components is static; while components in an $M_3$-level system may have different roles, processing capacities, available resources, etc., the assignments do not change over time and the structure of the interactions is fixed. 
While the assignment of tasks to system components is centrally controlled in the static-distributed ($M_3$) architecture, in the dynamic-distributed ($M_4$) architecture tasks are assigned locally to those components that are currently idle or have spare processing capacity for the required task. The assignment decisions are made by the components themselves, i.e., they communicate with each other. The dynamic resource sharing property of an $M_4$ system significantly increases the flexibility and scalability of the system, allowing it to adapt effectively to uncertainties in the environment. While the connectivity structure of a static-distributed ($M_3$) system is typically a tree or two-mode (bipartite) network, the system connectivity structure of an $M_4$-level dynamic-distributed system will be more complex, having multiple paths and loops. The level of responsiveness to environmental uncertainty of an $M_4$ system can be increased further if its connectivity structure is dynamic, changing in response to environmental factors or additional resource availability. These architectures are illustrated in Figure~\ref{dist_arch}. \begin{figure*}[!t] \centering \includegraphics[width=5in]{dist_arch.eps} \caption{Connectivity structure of distributed systems with different levels of flexibility (solid lines represent static connections and dotted lines represent dynamic connections; nodes with solid colors denote fixed roles (client/server) and nodes with gradient color denote components with dynamic roles). (a) Resource allocation and the design of the connectivity structure are centralized ($M_3$). (b) Resource sharing is decentralized but the connectivity structure is static and designed centrally ($M_4$). (c) Resource sharing is decentralized and connectivity structure is dynamic and formed by distributed components ($M_4$). 
} \label{dist_arch} \end{figure*} \subsection{Multi-layered resource sharing} The sharing of resources between components of a system that has a dynamic-distributed architecture can be considered to be a multi-layer phenomenon. A multi-layered resource sharing effect occurs when the sharing of a resource by a component affects its consumption of other resources; a component may be able to indirectly access another component's resources through a different resource channel. For example, if one component has excess power supply, it may not be able to directly share power with another component but it could accept a power-consuming task from another component that lacks the power to perform the task. The fractionated satellite system is an example of this scenario as the components of the system have limited local power and processing capacity but the ability to transfer tasks between components via communication channels \citep{brown2009value, mosleh2014optimal}; further examples can be found in cases of distributed computing with heterogeneous hardware and software, and distributed robotic systems \citep{roberts1970computer, wang1994resource}. There are three levels of resource sharing in this case because even though only data is shared directly, power and processing capacity can also be indirectly shared, as illustrated in Figure~\ref{MultiLayeredRS}. The relationship between the layers has this structure because the demand for processing capacity affects the data communications between components, which could have a negative effect on pre-existing tasks requiring communication bandwidth. In addition, a component could delegate a task that has a high associated power drain if its own power supply is at or near capacity. In a dynamic-distributed system the number of possible configurations for sharing multiple resource types can grow very quickly. 
The difficulty in optimizing the configuration centrally is one of the primary reasons why in many such dynamic systems the components have some level of autonomy with regard to resource sharing decisions and the connectivity structure itself. Due to the interconnected, dynamic, and autonomous nature of these systems, the framework required for their analysis has to capture both the component-level autonomous decisions and the effects of the connectivity structure on overall system efficiency. \section{Modeling resource access in networks} \label{Modelling_access} Network theory, an interdisciplinary field at the intersection of computer science, physics, and economics \citep{jackson2008social,easley2010networks,newman2010networks}, provides a promising approach for studying the architecture of SoSs. Network representations make it possible to create a rigorous and domain-independent model of distributed systems. The methods and tools of network theory can be used to study both individual system components' interactions and aggregate system-level behaviors. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{MultiLayeredRS.eps} \caption{An example of multi-layer resource sharing: a hierarchical multi-layer resource sharing scheme across two satellite systems. While bandwidth sharing is directly possible, sharing of data processing is indirect and is restricted by the limits of bandwidth sharing. Energy sharing is one stage lower and is achieved by moving data processing load to other fractions to save energy. } \label{MultiLayeredRS} \end{figure} A network, by its very nature, is distributed and can be used to represent system-heterogeneity in the following ways: \begin{itemize} \item Degree, centrality, clustering coefficient, and other properties of each node represent a system's structural heterogeneity. \item Edge weights in the network represent the heterogeneity in the connections between components in the system. 
\item The type, state, and any associated goals or objective functions of nodes represent heterogeneity of the system components. \item Multilayer networks \citep{kivela2014multilayer, de2013centrality} represent the resource heterogeneity (such as energy, information, and risk). \item The autonomy of decision-making components in the system can be modeled by considering the network's nodes as agents in a game and using game theory to analyze the autonomous components' behavior. \end{itemize} Although network-based analysis has been used in some systems engineering research, such as when studying product architecture \citep{bartolomei2012engineering, braha2006structure, batallas2006information} and supply chain systems \citep{bellamy2013network}, it has not been used to study resource sharing in systems with a distributed architecture. Different theoretical frameworks, based in network theory, can be used to describe the interactions between autonomous system components depending on the protocol used for making the resource sharing decisions. For example, the interactions can be modeled through exchange networks \citep{bayati2015bargaining, kleinberg2008balanced} if a bargaining process is used to decide on resource sharing actions; in exchange networks the connectivity structure of the network determines each node's \textit{bargaining power} and the way any surplus resources are divided between the nodes. In this paper we will focus on finding the connectivity structure that leads to enhanced resource access by considering two scenarios for the formation of the network connectivity structure: (1) Connectivity structure is static and is determined by a central planner; (2) connectivity structure is dynamic and determined by the autonomous decisions of the distributed system components. 
\section{Framework} \label{framework} In a system in which components can obtain their required resources both directly and indirectly, deciding which connectivity structure enhances access to resources leads to a dilemma. On one hand, direct connection between two components is costly (e.g., cost of interface); on the other hand, indirect connection may depreciate the benefits of acquiring the resource. Hence, to find the optimal connectivity, we need a framework that explicitly models the heterogeneous costs and benefits of individual components as a function of the network structure (based on the paths of access to the resources). The framework also needs to enable the study of the optimal connectivity structure, and to be able to model and quantify the subsequent trade-offs. \subsection{Strategic Networks} A rigorous framework for studying the optimal connectivity structure is Strategic Network Formation, as it explicitly incorporates the costs and benefits of creating and removing each connection into the model. This framework enables us to study how networks evolve as a result of individual incentives to form links or sever links, and to measure the collective utility of the whole network \citep{jackson2008social}. Hence, this approach is capable of modeling both centralized and autonomous schemes for the formation of the system's connectivity structure. This model was originally introduced in the economics literature and has been widely used to study the economic reasons behind the formation of many real-world networks \citep{jackson2005economics, fricke2012core}. Most of the theoretical and analytical literature on strategic network formation is built on the work by \cite{jackson1996strategic}. They introduced an economic network model called the Connection Model in which an agent (node in the network) can benefit from both direct and indirect connections with others, but will only pay a cost for its direct connections. 
The benefits of indirect connections decrease as the network distance (shortest path) between the nodes increases. This results in a recurring dilemma when creating the optimal connectivity structure (whether static with a central planner or dynamically created by distributed individual agents): (1) should a given agent be connected directly to another agent, in which case they both receive higher benefits, but each also pays a direct connection cost, or (2) should the two nodes be connected through other nodes, in which case, they save the connection cost, but gain only an indirect benefit, which is smaller due to the longer distance between the two nodes. While this dilemma exists for both centrally-designed, static systems and for dynamic systems with autonomous link formation, the resulting structure, in general, can be quite different. In the Connection Model each agent is assumed to have a utility function, which can represent the costs and benefits of accessing a resource from another part of the system. The notions of \textit{strong efficiency} and \textit{pairwise stability} can represent optimality of the connectivity structures for networks that are built by a central planner and by autonomous components respectively. Strong efficiency means maximizing the total utility of all agents in the network. In other words, for a given set of nodes and utility functions, we say a network is strongly efficient if there is no other network that has higher total utility. 
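For small systems, strong efficiency can be checked by exhaustive enumeration of edge sets. The sketch below assumes, purely for illustration, the common symmetric specialization of the connection model with distance-decay benefit $b(d)=\delta^d$ ($0<\delta<1$) and a uniform link cost $c$; these parameter choices are ours, not prescribed by the text:

```python
from itertools import combinations

def total_utility(nodes, edges, delta, c):
    """U(g): for each node, sum delta**d_ij over reachable j, minus c per direct link."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    U = 0.0
    for s in nodes:
        # BFS shortest-path distances from s
        seen, frontier, d = {s: 0}, {s}, 0
        while frontier:
            d += 1
            frontier = {w for v in frontier for w in adj[v] if w not in seen}
            for w in frontier:
                seen[w] = d
        U += sum(delta ** dd for j, dd in seen.items() if j != s)
        U -= c * len(adj[s])
    return U

def efficient_networks(nodes, delta, c):
    """Brute-force argmax of U(g) over all graphs on `nodes` (ties kept)."""
    pairs = list(combinations(nodes, 2))
    best, best_U = [], float("-inf")
    for r in range(len(pairs) + 1):
        for edges in combinations(pairs, r):
            Ug = total_utility(nodes, list(edges), delta, c)
            if Ug > best_U + 1e-12:
                best, best_U = [edges], Ug
            elif abs(Ug - best_U) <= 1e-12:
                best.append(edges)
    return best, best_U
```

On three nodes with $\delta=0.5$, low cost ($c=0.1<\delta-\delta^2$) makes the complete graph uniquely efficient, while an intermediate cost ($c=0.3$) makes the three star networks efficient, in line with the analysis of \cite{jackson1996strategic}.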
Pairwise stability is a generalized form of Nash Equilibrium\footnote{A Nash Equilibrium is a solution concept in game theory for non-cooperative games in which each player is assumed to know the equilibrium strategy of other players and no player can benefit from a unilateral change of strategy if the strategies of others remain unchanged \citep{osborne1994course}.}, which depends on the intention of self-interested individuals to form new links or sever existing ones; a network is said to be pairwise stable if for every pair of nodes: (1) neither has an incentive to sever the link between them if it \textit{does} exist, and (2) at most one of them has an incentive to form a link if one \textit{does not} exist \citep{jackson1996strategic}. \subsection{Connection model} In this section, we describe the Connection Model as the underlying framework for studying the connectivity structure in order to enhance access to resources in SoSs. For a finite set of agents $N=\{1, \dots, n\}$, let $b:\{1, \dots ,n-1\} \rightarrow \mathbb{R}$ represent the benefit that an agent receives from (direct or indirect) connections to other agents as a function of the distance (shortest path) between them in a graph. Following \cite{jackson1996strategic}, the (distance-based) utility function of each node, $u_i(g)$, in a graph $g$ and the total utility of the graph, $U(g)$, are as follows: \begin{equation} \label{utility_connection_model} \begin{split} u_i(g)&=\sum_{j\neq i: j\in N_i^{n-1}(g)} b(d_{ij}(g))-\sum_{j\neq i: j\in N^1_i(g) } c_{ij}\\ U(g)&=\sum_{i=1}^{n}u_i(g) \end{split} \end{equation} where $N^1_i(g)$ is the set of nodes to which $i$ is linked directly, and $N^k_i(g)$ is the set of nodes that are path-connected to $i$ by a distance no larger than $k$. 
$d_{ij}(g)$ is the distance (shortest path) between $i$ and $j$, $c_{ij}$ is the cost that node $i$ pays for connecting to $j$, and $b$ is the benefit that node $i$ receives from a connection with another node in the network. We assume that $b(k)>b(k+1)>0$ for any integer $k \geq 1$. The $c_{ij}$ values in Equation~\ref{utility_connection_model} are elements of the matrix of potential costs, and only those elements corresponding to direct links will eventually be realized. The connection model has been extended to also account for asymmetry and heterogeneity of benefits (e.g., \cite{persitz2010core}). Note that in the original model introduced by \cite{jackson1996strategic}, it is assumed that the benefits are homogeneous and are a function of the shortest path between two nodes, while direct connection costs can be heterogeneous in general. We will revisit this later in the paper in Section~\ref{Dynamic_heterogeneous_connectivity}. However, even only assuming cost heterogeneity can capture many real forms of complexities that arise, from having agents with different bandwidths and information processing capacities, to distance-based cost variations. Moreover, heterogeneous cost models automatically capture heterogeneity in direct benefits, since the difference in direct benefits can be absorbed into the cost. Potential costs and benefits are identified based on components' characteristics, such as location \citep{johnson2003spatial}, available energy or processing power, and interface standards. The assumption is that the states and attributes of the nodes are known and are inputs to the model. Hence, this model does not optimize the location of nodes, or other attributes that are related to individual nodes. Instead, it is used to study which components should be connected to each other in order to fulfill a system-level criterion. 
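As a concrete illustration, the distance-based utility in Equation~\ref{utility_connection_model} can be computed directly from a graph. The sketch below assumes the common symmetric specialization $b(d)=\delta^d$ with $0<\delta<1$ and a homogeneous cost $c_{ij}=c$; both choices, and the parameter values, are illustrative assumptions rather than part of the model above:

```python
def shortest_paths(nodes, edges):
    """All-pairs shortest-path lengths d_ij via breadth-first search."""
    adj = {v: set() for v in nodes}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    dist = {}
    for s in nodes:
        seen, frontier, d = {s: 0}, {s}, 0
        while frontier:
            d += 1
            frontier = {w for v in frontier for w in adj[v] if w not in seen}
            for w in frontier:
                seen[w] = d
        dist[s] = seen  # only path-connected nodes appear, as in N_i^{n-1}(g)
    return dist

def utilities(nodes, edges, b, c):
    """u_i(g) = sum over reachable j of b(d_ij), minus deg(i)*c (homogeneous c_ij = c)."""
    dist = shortest_paths(nodes, edges)
    deg = {v: sum(1 for e in edges if v in e) for v in nodes}
    return {i: sum(b(d) for j, d in dist[i].items() if j != i) - deg[i] * c
            for i in nodes}

# Example: star on 4 nodes, benefit b(d) = delta**d, uniform cost c
delta, c = 0.5, 0.2
nodes = [0, 1, 2, 3]
star = [(0, 1), (0, 2), (0, 3)]
u = utilities(nodes, star, lambda d: delta ** d, c)
U = sum(u.values())  # total utility U(g)
```

For this star, the center obtains $3\delta-3c=0.9$ and each leaf $\delta+2\delta^2-c=0.8$, so $U(g)=3.3$.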
For systems with a centrally-determined connectivity structure, we use the notion of an efficient network, that is, the network structure that maximizes the total utility of all nodes. Let the complete graph $g^N$ denote the set of all subsets of $N$ of size 2. The network $\tilde{g}$ is efficient if $U(\tilde{g}) \geq U(g^\prime)$ for all $g^\prime \subset g^N$, which indicates that: \begin{equation} \tilde{g}=\argmax_g\sum_{i=1}^{n}u_i(g) \end{equation} For systems where autonomous components are allowed to change the structure, using strong efficiency as the sole notion of optimality is not sufficient. In such systems, different components can change the structure based on local incentives, which might or might not be aligned with the global optimal efficiency. The concept of optimality for the connectivity structure of systems with autonomous components can be defined based on a game-theoretic equilibrium that captures individual and mutual incentives for the formation of connections. Hence, we will use the notion of pairwise stability as defined by \cite{jackson1996strategic} and used in many subsequent works. This definition describes the intuitive scenario in which adding a link between two agents requires a mutual decision, while decisions to remove links can be unilateral. The network $g$ is \textit{pairwise stable} if: \begin{enumerate}[(i)] \item for all $ij \in g$, $u_i(g) \geq u_i(g - ij)$ and $u_j(g) \geq u_j(g - ij)$, and \item for all $ij \notin g$, if $u_i(g + ij) \geq u_i(g)$ then $u_j(g + ij) < u_j(g)$, \end{enumerate} where $g+ij$ denotes the network obtained by adding link $ij$ to the existing network $g$, and $g-ij$ represents the network obtained by removing link $ij$ from the existing network $g$. The connection model captures dependencies of components and synergies at the micro level. The utility function of each component represents its goals, which can be aligned or not aligned with those of the whole system.
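The two pairwise-stability conditions can be checked mechanically over all node pairs. Below is a minimal Python sketch, assuming a utility oracle `u(edges, i)` that returns $u_i(g)$ for a link set $g$; all names and the link representation are illustrative.

```python
from itertools import combinations

def is_pairwise_stable(nodes, edges, u):
    """Check pairwise stability of the network `edges` (a set of
    frozenset({i, j}) links) given a utility oracle u(edges, i) = u_i(g)."""
    edges = set(edges)
    for i, j in combinations(nodes, 2):
        e = frozenset((i, j))
        if e in edges:
            # (i) no endpoint may gain by severing an existing link
            g_minus = edges - {e}
            if u(g_minus, i) > u(edges, i) or u(g_minus, j) > u(edges, j):
                return False
        else:
            # (ii) if i weakly gains from adding ij, j must strictly lose
            g_plus = edges | {e}
            if u(g_plus, i) >= u(edges, i) and u(g_plus, j) >= u(edges, j):
                return False
    return True
```

With two nodes, a single benefit $b(1)$ exceeding the link cost, the empty network fails condition (ii) while the single-link network is stable.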
The utility function depends on the connections of one component to the others and can account for the heterogeneous states of components. The utility function has a general form and can capture non-linearity in the preference functions of autonomous components. Thus far, we have mainly discussed (strong) efficiency and pairwise stability as two system-level criteria. However, depending on the context, a variety of criteria can be defined to measure the performance of the system based on individual components' utility functions. The notion of (strong) efficiency is defined based on the assumption that a central authority would design a system to maximize the sum of individual utilities. One can also consider Pareto efficiency as a criterion for a centrally-designed system. However, the pairwise stability metric can represent ``overall satisfaction'' in the sense that no autonomous component in the system would be willing to change its connections, as this would not improve its utility. The assumption behind pairwise stability, or two-sided link formation, is that a link is formed upon the ``mutual consent'' of two agents. However, one can study the connectivity structure that results from one-sided and non-cooperative link formation, where agents unilaterally decide to form links with other agents \citep{bala2000noncooperative}.\footnote{For a thorough comparison between strong efficiency, Pareto efficiency, and pairwise stability, please refer to \cite{jackson2008social}, Chapter 6, Section 2.} Using the connection model framework, we can find the optimal connectivity structure under various conditions on the costs and benefits associated with access to a resource in the system. \begin{figure}[!t] \centering \includegraphics[width=3.3in]{HomoNetworks.eps} \caption{Optimal connectivity structure for optimized resource access when the cost of connection between components is homogeneous and the connectivity structure is designed centrally.
(a) Low cost of connection, i.e., $c < b(1) -b(2)$. (b) Moderate cost of connection, i.e., $b(1) - b(2) < c < b(1) + 0.5(n - 2)b(2)$. (c) High cost of connection, i.e., $c > b(1) + 0.5(n - 2)b(2)$.} \label{HomoNetwork} \end{figure} \section{Optimal connectivity structure for resource access} \label{OptimalConnectivity} \subsection{Homogeneous connection cost} A system in which connecting every two components has equal cost can be presented by the simple homogeneous form of Equation~\ref{utility_connection_model}, where $c_{ij}=c$. Following~\cite{bloch2007formation}, when the connectivity structure is decided by a central planner, the optimal network does not have a diameter greater than two and will have the following structures depending on the cost and the benefit function\footnote{For details of mathematical proofs, please refer to \cite{jackson2008social} Chapter 6, Section 3.}: \begin{enumerate}[(i)] \item a complete graph if $b(1)-b(2)>c$, \item a star structure if $b(1)-b(2)<c<b(1)+0.5(n-2)b(2)$, \item an empty graph if $c>b(1)+0.5(n-2)b(2)$. \end{enumerate} The structures of efficient networks imply that when the cost of connecting two components in the system is below a certain limit, it is worthwhile to connect all components so that they benefit from direct access to each other's resources. However, for a moderate cost of connection, a star structure optimizes access; in this structure a component acts as a hub through which other components can access resources from throughout the system via at most one intermediary. For this cost range, the star is the unique efficient structure in that it has the minimum number of links connecting all nodes and minimizes the average path length given that minimal number of links. When the connection cost is beyond a certain limit, sharing resources is not beneficial in the system. These structures are depicted in Figure~\ref{HomoNetwork}.
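The three cost regimes translate directly into a classifier. A minimal Python sketch, with thresholds exactly as in items (i)--(iii) above (`b` is the decreasing benefit function; the function name is illustrative):

```python
def efficient_structure(n, b, c):
    """Shape of the efficient network in the homogeneous connection
    model: complete, star, or empty, depending on the cost regime."""
    if c < b(1) - b(2):
        return "complete"          # cheap links: connect everyone directly
    elif c < b(1) + 0.5 * (n - 2) * b(2):
        return "star"              # moderate cost: one hub, n-1 spokes
    else:
        return "empty"             # links too expensive to be worthwhile
```

For instance, with $b(k)=0.5^k$ and $n=4$, the regime boundaries are $b(1)-b(2)=0.25$ and $b(1)+0.5(n-2)b(2)=0.75$.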
In a system in which components can autonomously establish and sever links to maximize their own access to resources, the optimal network is not necessarily unique. Following \cite{bloch2007formation}, the description of the pairwise stable networks with homogeneous costs is as follows: \begin{enumerate}[(i)] \item for $c<b(1)-b(2)$, the unique pairwise stable network is the complete graph, \item for $b(1)-b(2)<c<b(1)$, a star structure is pairwise stable, but not necessarily the unique pairwise stable graph, \item for $b(1)<c$, any pairwise stable network which is non-empty is such that each player has at least two links and thus is inefficient. \end{enumerate} Although for low connection costs the efficient and pairwise stable networks coincide, for higher costs the stable network is not unique and may not be the same as the efficient network. It is desirable to know how much total inefficiency will result from allowing networks to form at the discretion of autonomous components as opposed to being designed by a central planner. This magnitude of inefficiency is commonly known as the \textit{price of anarchy}, a term introduced by \cite{papadimitriou2001algorithms}. \subsection{Heterogeneous connection cost} The homogeneity assumption does not hold in many real-world systems, where the cost of connection is different from one link to another. A number of models have been proposed in the literature to introduce heterogeneity into the connection model \citep{galeotti2006network, jackson2005economics, vandenbossche2013network}.
As an example of these heterogeneous models, we focus on the Separable Connection Cost model \citep{heydari2015efficient}, which is motivated by distributed systems in which heterogeneous components are each endowed with some \textit{budget} and the total budget needed to establish and maintain connections for each component can be approximated to be proportional to the number of components to which it is connected. In this model, each node pays a fixed cost for each connection independent of to whom it connects (i.e., $c_{ij}=c_i$ in Equation~\ref{utility_connection_model}), but this cost varies from node to node. When centrally designed, the connectivity structure that optimizes access to resources with separable and heterogeneous connection costs is as follows (mathematical proofs are provided in \cite{heydari2015efficient}): \begin{itemize} \item Assuming that $c_1<c_2<\dots<c_n$, let $m$ be the largest integer between 1 and $n$ such that $2b(1) + 2(m-2)b(2) > (c_m + c_1)$. \item If $i > m$, then $i$ is isolated. If $i \leq m$, then there is exactly one link between $i$ and $1$; \item also there is one link between $i$ and $j$ $(1 < i,j \leq m)$ if and only if $b(1)-b(2) > 0.5(c_i + c_j)$. \end{itemize} \begin{figure}[!t] \centering \includegraphics[width=1.7in]{GeneralizedStar.eps} \caption{Optimal connectivity structure for optimized resource access when the cost of connection between components is heterogeneous and separable, and the connectivity structure is designed centrally.} \label{GeneralizedStar} \end{figure} In the efficient connectivity structure, components with high connection cost are isolated and the rest of the components are connected in a \textit{generalized star} structure. In this structure, the component with the minimum connection cost plays the role of the hub, through which other components can access each other's resources.
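The construction above can be sketched in a few lines of Python. Assumptions: nodes are re-indexed $0,\dots,n-1$ in ascending order of connection cost (so index 0 is the hub candidate), `b` is the benefit function, and links are returned as frozensets; the helper name is illustrative.

```python
def efficient_separable(costs, b):
    """Efficient structure under separable connection costs: a generalized
    star over the m cheapest nodes plus extra core links, per the three
    conditions of the Separable Connection Cost model. `costs` must be
    sorted ascending; nodes with index >= m remain isolated."""
    n = len(costs)
    # m = largest k in 1..n with 2 b(1) + 2 (k-2) b(2) > c_k + c_1
    m = 0
    for k in range(1, n + 1):
        if 2 * b(1) + 2 * (k - 2) * b(2) > costs[k - 1] + costs[0]:
            m = k
    links = set()
    for i in range(1, m):
        links.add(frozenset((0, i)))       # spoke: link every core node to the hub
    for i in range(1, m):
        for j in range(i + 1, m):
            # extra core link when a direct link beats the two-step path
            if b(1) - b(2) > 0.5 * (costs[i] + costs[j]):
                links.add(frozenset((i, j)))
    return links
```

With $b(k)=0.5^k$ and costs $(0.1, 0.1, 0.3, 2.0)$, the most expensive node is isolated and the remaining three form a triangle (hub spokes plus one core link).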
Moreover, if the cost of connection between two components is less than the gain in benefit of a direct connection compared to an indirect one, they are also connected directly. This forms a \textit{Core-Periphery} structure where components in the Core are fully interconnected and components in the Periphery are only connected to those in the Core (Figure~\ref{GeneralizedStar}). Although benefits are still assumed to be homogeneous, heterogeneity of direct benefits can easily be taken into account through the cost terms, as long as the separability assumption is maintained: cost and direct benefit terms appear together in all analyses, so differences in direct benefits can be embedded as offsets to the fixed costs of nodes. \subsection{Dynamic heterogeneous connectivity} \label{Dynamic_heterogeneous_connectivity} By integrating heterogeneity of the environment and components' characteristics (e.g., processing capacity, state) into the model, we can capture their effects on the dynamic interactions of the autonomous system components that evolve the connectivity structure. Based on the connection model and agent-based simulation, \cite{heydari2015emergence} suggest a computational framework for studying the connectivity structure that emerges from component-level decisions for creating and severing links. This model extends the original model of \cite{jackson1996strategic} to capture the effect of both heterogeneous benefits and heterogeneous connection costs on the pairwise stable network. Note that in this model, due to the heterogeneity in both benefits and costs of connections, finding the efficient network is intractable in general.
Using this model, self-optimizing components can play a network formation game in a heterogeneous environment and organize themselves in a manner that balances the benefits of access to resources against the associated costs, while also taking into account the limited processing capacity of the components. Based on the cost and benefits of access to a resource defined in this model, each component maximizes its own utility by establishing new links, with the mutual consent of the components at the other end of those links, or by unilaterally removing existing ones. In a heterogeneous environment, an autonomous component faces a fundamental dilemma regarding the aggregate heterogeneity of its connections. On one hand, maximizing the diversity of its connections, both direct and indirect, is desirable because it ensures access to a larger pool of resources to respond to changes in the environment. On the other hand, each component, having limited processing capacity, can only handle a certain level of heterogeneity in its direct connections. The reason is that each link imposes a transaction cost on the connected nodes that is a function of the expected heterogeneity of the link's endpoints. The effect of the environment further amplifies this dilemma, because more heterogeneous environments give rise to a higher expected benefit to nodes from a given diversity in their connections. In this model, heterogeneity of the system environment is captured by the nodes' states. That is, each node in the network exchanges resources with a different environment, which influences its state. Another aspect of this model is the link formation capacity, which is a characteristic of each node. Each connection imposes a cost on a node, and the node cannot maintain connections whose total cost exceeds its capacity. The cost of link formation depends on the internal states of the two connected nodes.
This implies that it is more expensive for an autonomous component to connect to another component that is very different, compared with connecting to a component with similar characteristics. For instance, in communication networks, direct connection to a distant node is more expensive than connecting to a node in a close neighborhood. A node's state also affects the benefits another node receives from connecting to it. Having a path to a node with different characteristics provides greater opportunities for resource exchange. For instance, in a communication network, a connection to a distant node provides access to a new geographic location. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{TEM} \caption{Optimal connectivity structure that has emerged from formation and removal of links by autonomous components, which seek to improve their access to resources in the system \citep{heydari2015emergence}. The thickness of the links denotes the connection cost, which is a function of the difference between nodes' states. (a) Homogeneous set of components in a homogeneous environment. (b) In a heterogeneous environment, nodes have different rates of resource exchange with the environment and will become heterogeneous over time. The transaction cost of having many links will increase as a result. (c) Due to the limited processing capacities, components cannot afford all of their links, and sever a large percentage of them to keep their total transaction costs below their capacity, while still having access to a diverse set of nodes. This creates modularity in the connectivity structure.} \label{tem} \end{figure} The pairwise stable network that is formed based on decisions of individual heterogeneous components is not unique. This makes the analysis of the exact connectivity structure challenging, particularly when the network is large.
However, the study of structural features reveals that the pairwise stable connectivity structures exhibit distinctive characteristics for systems containing self-optimizing heterogeneous components. Intuitively, \textit{modular communities} \citep{newman2006modularity} emerge when autonomous components maximize the diversity of their indirect connections while keeping their link costs within their processing capacities. This is achieved by obtaining indirect benefits through direct connections to components with higher processing capacities that have the ability to manage a larger number of direct connections to heterogeneous resources. Figure~\ref{tem} illustrates how connectivity structures evolve as the result of self-optimizing decisions in creating and severing connections. To measure the strength of the community structure, \cite{heydari2015emergence} used the modularity index $Q$ developed by \cite{newman2004finding}, where $Q=1$ is the maximum and indicates the strongest community structure. The results in \cite{heydari2015emergence} show that when heterogeneity of the environment (measured by diversity of nodes' states) is low and components have high processing capacities, the connectivity structure has a lower modularity index. However, high environmental heterogeneity together with limited processing capacities results in a higher modularity index (Figure~\ref{TEM_het_mod}). Note that although the changes of connectivity structure by autonomous agents in real-time might be partially attributed to the operation of the system, the proposed dynamic network formation model can be used as the basis of several architectural decisions. The model can be used to determine the initial topology of an autonomous system based on a given environment profile. The proposed framework can also be employed to decide on the level of autonomy of distributed agents, i.e., which agents are allowed to dynamically form or sever links (and with whom).
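For reference, Newman's modularity index $Q$ for a given community assignment can be computed as $Q = \frac{1}{2m}\sum_{ij}\left[A_{ij} - \frac{k_i k_j}{2m}\right]\delta(c_i, c_j)$. A minimal Python sketch (the adjacency representation and names are illustrative):

```python
def modularity(adj, community):
    """Newman's modularity Q for a node -> community assignment.
    `adj` maps each node to its set of neighbours (undirected, no
    self-loops); degrees k_i are neighbour counts."""
    two_m = sum(len(nbrs) for nbrs in adj.values())  # 2 * number of edges
    q = 0.0
    for i in adj:
        for j in adj:
            if community[i] == community[j]:
                a_ij = 1.0 if j in adj[i] else 0.0
                q += a_ij - len(adj[i]) * len(adj[j]) / two_m
    return q / two_m
```

As a sanity check, placing the whole graph in one community always yields $Q=0$, while two disjoint edges split into their natural communities yield $Q=0.5$.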
Moreover, using the framework, one can decide the initial distribution of resources and the allocation of heterogeneous agents in the network to influence the agents' decisions on link formation. \section{A note on potential applications} \label{application} The proposed framework is applicable in determining the connectivity structure of SoSs when components can autonomously share resources in order to manage uncertainty in the availability of distributed resources. This includes technical and socio-technical systems such as the Internet of Things (IoT), Connected Autonomous Vehicles, fractionated satellite systems, R\&D collaboration networks, or hybrid teams of human and autonomous agents for disaster response. The main focus of this paper is on introducing a framework to enhance resource access in SoS and expanding on the theoretical foundations. In this section, we discuss two potential application areas for the framework. The finer details of these implementations are beyond the scope of the present paper and require that one quantifies the connection costs and benefit functions in the context of the problem, captures components' heterogeneous characteristics in the individual agents' utility functions, and uses appropriate system-level criteria to determine the connectivity structure. In fractionated satellite systems, multi-layer resource sharing enables the exchange of resources, such as computational capacity, energy, and communication bandwidth, across fractions in the face of uncertainty in the availability of resources. The sources of uncertainty include variations in demand (e.g., market fluctuations and changes of stakeholders' requirements) and supply (e.g., change of mission and technical failure).
It is neither practical nor efficient for all fractions to communicate directly with each other; thus it becomes important to find the communications connectivity structure between fractions that optimally enhances resource access throughout the system. This can be modeled through the proposed framework, where nodes represent satellite fractions and nodes' states capture each fraction's heterogeneous characteristics (e.g., processing capacity limits and locations). Connection costs and the benefits of direct/indirect resource access can be defined as a function of nodes' states. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{MivHi.eps} \caption{Effect of environmental heterogeneity on the modularity index of the optimal connectivity structure when autonomous components create and sever links to improve their access to heterogeneous resources within the system \citep{heydari2015emergence}.} \label{TEM_het_mod} \end{figure} The proposed model can also be employed to study the effect of connectivity structure on the performance of socio-technical systems such as hybrid teams of human and autonomous agents. Many critical systems of the future will rely on hybrid teams, in which human and autonomous technology agents (such as autonomous robots, self-driving cars, or autonomous micro-grids) coordinate their actions, cooperate, share information, and dynamically divide sensing, information processing, and decision-making tasks. For example, in a disaster response scenario, a group of geographically distributed heterogeneous agents need to cooperate and share information in a rapidly changing and uncertain environment. On the one hand, agents seek to improve their access to information while their processing capacity for handling connections is limited. On the other hand, receiving information through intermediaries is subject to delay and noise.
An extended model based on the dynamic network formation model (Section~\ref{Dynamic_heterogeneous_connectivity}) can be used to study connectivity structures that result in a stable network in which agents---while having autonomy over connection formation or severance---do not see it beneficial to deviate from the designated structure. \section{Discussion} \label{discussion} The framework proposed in this paper is domain independent and can be applied in a variety of contexts to study the connectivity structure of systems of systems composed of heterogeneous and autonomous components. The framework offers a new perspective on distributed resource management in SoS under uncertainty that has been missing in the existing literature. However, the proposed framework is not intended to replace existing approaches that focus on reliability, or context-dependent operational or functional models. Instead, the proposed model can complement existing approaches to resource management in SoS. Integrating an architectural perspective into existing frameworks is a topic of future research. The key difference between the proposed economic network model and classical operations research network approaches, such as minimum spanning trees \citep{kruskal1956shortest}, is the ability to capture the autonomous behavior of heterogeneous components. In the proposed framework, the utility function of individual components has a general form and, together with the concepts of efficiency and pairwise stability, can be used to study both central and decentralized schemes for forming a connectivity structure. Moreover, the framework explicitly incorporates the benefits of connections as a function of the distance between components and accounts for heterogeneous connection costs.
The suggested framework can be used to study how a connectivity structure emerges within a group of agents that are improving their own utilities by severing and creating links (e.g., a communication network of autonomous agents for disaster response). The model enables us to study the economic reasons behind the emergence of network structures as a result of individual components' decisions, and also provides us with insights to steer the evolution of those structures by influencing individuals' incentives. In contrast, a minimum spanning tree approach might be used to centrally design a cost-effective network encompassing all components in a system (e.g., laying out cables for a telecommunication network in a new area \citep{graham1985history}). The proposed framework can be used to find the optimal network topology for a given set of parameters at a moment in time. Once a new component is added to or removed from the system, the same framework can be used to find the new optimal topology. However, the proposed framework does not capture the optimal transition strategy and the required changes in the overall architecture to obtain a globally optimal network. This depends on a set of parameters, such as the expected frequency of addition/removal, and the location and interdependency of added/removed nodes, that are not considered in this paper. Integrating optimal strategies for transitions in systems with a dynamic set of components and finding a globally optimal topology are important directions that can complement this work. We used deterministic cost and benefit functions in the optimal connectivity structure models in this paper. When using stochastic functions, with expected values of costs and benefits, similar results will still be valid.
Using stochastic functions for costs and benefits enables the integration of other component characteristics such as reliability into the model, i.e., the probability of failure of each component will negatively affect the expected benefits that are received from connections to that node. However, for more complex analysis, one needs to modify the framework to accommodate probability distributions of cost and benefit functions. This paper focused mainly on enhancing individual components' access to resources within the system by finding an optimal connectivity structure. However, the study of mechanisms for sharing resources between autonomous components (a.k.a. Multi-Agent Resource Allocation) is another topic, which is widely studied jointly by computer scientists and economists. These mechanisms are intended to align individual components' utilities, obtained from sharing a resource, with system-wide goals. The resource sharing mechanisms between system components can be defined according to a variety of protocols depending on factors such as the type of the resource (e.g., single vs. multi-unit, continuous vs. discrete), and complexity of the resource allocation algorithm. Many of these protocols are inspired by market mechanisms such as auctions and negotiation \citep{chevaleyre2006issues}. \section{Conclusion} \label{conclusion} Dynamic resource sharing, as a systems mechanism, can add a level of flexibility to SoSs and improve their responsiveness to uncertainty in the environment. In this paper, we took a systems architecture approach to distributed resource management in SoSs. We introduced a framework based on Economic Networks for the connectivity structure of SoSs in which components can share resources through direct and indirect connections. This framework enables us to study the effect of the connectivity structure on individual components' utility that is obtained from access to diverse resources available to other components. 
The optimal connectivity structure depends on the heterogeneity parameters of the system, the environment, and the way in which the connectivity structure is formed (i.e., by a central planner or by distributed components). The proposed model explicitly incorporates the cost of creating and maintaining a connection between two components as well as the benefits that are received through direct and indirect access to a resource. It can also capture a wide range of heterogeneity in system parameters and the environment. Moreover, the notion of strong efficiency is used to represent the optimality of a connectivity structure created by a central planner; similarly, the notion of pairwise stability is used to study the structures emerging from self-optimizing components' incentives to create and sever links. In this paper, we mainly focused on the optimal connectivity structure under a few particular heterogeneity conditions. However, the cost and benefit functions in the proposed framework can be extended to capture various levels of heterogeneity in distributed systems while finding the optimal network remains fairly tractable. For example, systems whose constituents can be divided into a number of groups (islands), in which connections between islands are generally more costly than connections within islands, can be studied based on the Island-connection model \citep{jackson2005economics}. Moreover, in the original model the benefit received from connection to another component is a function of the distance between the two components. However, the benefits of resource access might be negatively affected by the number of connections to the component providing the resource. Extended models such as the degree-distance-based connections model \citep{mohlmeier2013degree} can be used to model this effect. \section*{Acknowledgment} This work was supported in part by DARPA/NASA Ames Contract Number: NNA11AB35C and INCOSE/SERC developing Theory of Systems Engineering.
\ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtranN}
\section{Introduction} Many online platforms, such as search engines or recommender systems, display results based on observed properties of the user and their query. However, a user's behavior is often influenced by \emph{latent state} not explicitly revealed to the system. This might be \emph{user intent} (e.g., reflecting a long-term task) in search, or \emph{user (short- and long-term) preferences} (e.g., reflecting topic interests) in a recommender. The unobserved latent state in each case influences the user response (hence, the associated reward) of the displayed results. A machine learning (ML) system, thus, should take steps to infer the latent state and tailor its results accordingly. While many ML models use either heuristic features \cite{linucb,lints} or recurrent models \cite{rnn_recommender} to capture user history, explicit exploration for \emph{(latent) state identification} (i.e., reducing uncertainty regarding the true state) is less common in practice. In this paper, we study \emph{latent bandits}, which model online interactions of the type above. At each round, the learning agent is given an observed context (e.g., query, user demographics), selects an action (e.g., recommendation), and observes its reward (e.g., user engagement with the recommendation). The action reward depends stochastically on both the context and the user latent state. Hence the observed reward provides information about the unobserved latent state, which can be used to improve predictions at future rounds. We are interested in designing exploration policies that allow the agent to quickly maximize its per-round reward by resolving \emph{relevant} latent state uncertainty. Specifically, we want policies that have low \emph{$n$-round regret}. Latent class structure of this form can allow an agent to quickly adapt its results to new users (e.g., cold start in recommenders) or adapt to new user tasks or intents on a per-session basis.
For instance, clusters of users with similar item preferences can be used as the latent state of a new user. The estimated latent state can be used to quickly reach good cold-start recommendations if the number of clusters is much less than the number of items \citep{latent_contextual_bandits}. \emph{Fully} online exploration (e.g., for personalization) also involves learning a reward model---conditional on context and latent state---and generally requires massive amounts of interaction data. Fortunately, many platforms have just such \emph{offline} data (e.g., past user interactions) with which to construct both a latent state space and reasonably accurate conditional reward models \citep{yahoo_contextual,msft_paper}. We assume such a model is available and focus on the simpler online problem of state identification. While previously studied, prior work on this problem assumes the \emph{true} conditional reward model is given \citep{latent_bandits, latent_contextual_bandits}. Moreover, these algorithms are UCB-style, with optimal theoretical guarantees but sub-par empirical performance. We provide a unified framework that combines offline-learned models with online exploration for both UCB and Thompson sampling algorithms, and propose practical, analyzable algorithms that are contextual and robust to natural forms of model imprecision. Our main contributions are as follows. Our work is the first to propose algorithms that are aware of model uncertainty in the latent bandits setting. In Sec.~\ref{sec:algorithms}, we propose novel, practical algorithms based on UCB and Thompson sampling. Using a tight connection between UCB and posterior sampling \cite{russo_posterior_sampling}, we derive optimal theoretical bounds on the Bayes regret of our approaches in Sec.~\ref{sec:analysis}.
Finally, in Sec.~\ref{sec:experiments}, we demonstrate their effectiveness vis-\`{a}-vis state-of-the-art benchmarks using both synthetic simulations and a large-scale real-world recommendation dataset. \section{Problem Formulation} We adopt the following notation. Random variables are capitalized. The set of arms is $\mathcal{A} = [K]$, the set of contexts is $\mathcal{X}$, and the set of latent states is $\mathcal{S}$, with $|\mathcal{S}| \ll K$. We study a {\em latent bandit} problem, where the learning agent interacts with an environment over $n$ rounds. In round $t \in [n]$, the agent observes context $X_t \in \mathcal{X}$, chooses action $A_t \in \mathcal{A}$, then observes reward $R_t \in \mathbb{R}$. The random variable $R_t$ depends on context $X_t$, action $A_t$, and latent state $s \in \mathcal{S}$, where $s$ is fixed but unknown.\footnote{The latent state $s$ can be viewed, say, as a user's current task or preferences, which is fixed over the course of a session or episode. The state is resampled (see below) for each user (or the same user at a future episode).} The \emph{observation history} up to round $t$ is $H_t = (X_1, A_1, R_1, \hdots, X_{t-1}, A_{t-1}, R_{t-1})$. An agent's \emph{policy} maps $H_t$ and $X_t$ to the choice of action $A_t$. The reward is sampled from a \emph{conditional reward distribution}, $P(\cdot \mid A, X, s, \theta)$, which is parameterized by vector $\theta \in \Theta$, where $\Theta$ reflects the space of feasible reward models. Let $\mu(a, x, s, \theta) = \E{R \sim P(\cdot \mid a, x, s, \theta)}{R}$ be the \emph{mean reward} of action $a$ in context $x$ and latent state $s$ under $\theta$. We denote the true (unknown) latent state by $s_*$ and true model parameters by $\theta_*$. These are generally \emph{estimated offline}.
We assume that rewards are sub-Gaussian with variance proxy $\sigma^2$: $\E{R \sim P(\cdot \mid a, x, s_*, \theta_*)}{\exp(\lambda (R - \mu(a, x, s_*, \theta_*)))} \leq \exp(\sigma^2 \lambda^2 / 2)$ for all $a$, $x$, and $\lambda > 0$. Note that we do not make strong assumptions about the form of the reward: $\mu(a, x, s, \theta)$ can be any complex function of $\theta$, and contexts may be generated by an arbitrary process. We measure performance with regret. For a fixed latent state $s_* \in \mathcal{S}$ and model $\theta_* \in \Theta$, let $A_{t, *} = \arg\max_{a \in \mathcal{A}} \mu(a, X_t, s_*, \theta_*)$ be the optimal arm. The \emph{expected $n$-round regret} is: \begin{align} \mathcal{R}(n; s_*, \theta_*) = \E{}{\sum_{t=1}^n \mu(A_{t, *}, X_t, s_*, \theta_*) - \mu(A_t, X_t, s_*, \theta_*)}. \label{eqn:regret} \end{align} While fixed-state regret is useful, we are often more concerned with average performance over a range of states (e.g., multiple users, multiple sessions with the same user). Thus, we also consider Bayes regret, where we take the expectation over latent-state randomness. Assuming $S_*$ and $\theta_*$ are drawn from some prior, the \emph{$n$-round Bayes regret} is: \begin{align} \mathcal{BR}(n) = \E{}{\mathcal{R}(n; S_*, \theta_*)} = \E{}{\sum_{t=1}^n \mu(A_{t, *}, X_t, S_*, \theta_*) - \mu(A_t, X_t, S_*, \theta_*)}, \label{eqn:bayes_regret} \end{align} where $A_{t, *} = \arg\max_{a \in \mathcal{A}} \mu(a, X_t, S_*, \theta_*)$ additionally depends on the random latent state and model. \section{Algorithms} \label{sec:algorithms} In this section, we develop both UCB and Thompson sampling (TS) algorithms that leverage an environment model, generally learned offline, to expedite online exploration. As discussed above, such offline models can be readily learned given the large amounts of offline interaction data available to many interactive systems.
In each subsection below, we specify a particular form of the offline-learned model, and develop a corresponding online algorithm. \vspace{-0.05in} \subsection{UCB with Perfect Model (\ensuremath{\tt mUCB}\xspace)} \vspace{-0.05in} We first design a UCB-style algorithm that uses the learned model parameters $\hat{\theta} \in \Theta$. Let $\hat{\mu}(a, x, s) = \mu(a, x, s, \hat{\theta})$ denote the estimated mean reward, and $\mu(a, x, s) = \mu(a, x, s, \theta_*)$ denote the true mean reward. We initially assume accurate knowledge of the true model, that is, we are given $\hat{\theta} = \theta_*$ as input. The key idea in UCB algorithms is to compute high-probability upper confidence bounds $U_t(a)$ on the mean reward of each action $a$ in round $t$, where $U_t$ is a function of the history \citep{ucb}. UCB algorithms take action $A_t \!=\! \arg\max_{a \in \mathcal{A}} \! U_t(a)$. Our model-based algorithm \ensuremath{\tt mUCB}\xspace (see Alg.~\ref{alg:ucb}) works in this fashion. It is similar to the method of \citet{latent_bandits}, but also handles context. In round $t$, \ensuremath{\tt mUCB}\xspace maintains a set of latent states $C_t$ that are \emph{consistent} with the rewards observed thus far. It chooses a specific (``believed'') latent state $B_t$ from the consistent set $C_t$ and the arm $A_t$ with the maximum estimated reward at that state: $(B_t, A_t) = \arg\max_{s \in C_t, a \in \mathcal{A}} \hat{\mu}(a, X_t, s)$. Thus our UCB for $a$ is $U_t(a) = \max_{s \in C_t} \hat{\mu}(a, X_t, s)$. \ensuremath{\tt mUCB}\xspace tracks two key quantities: the number of times $N_t(s)$ that state $s$ has been selected up to round $t$; and the \say{gap} $G_t(s)$ between the expected and realized rewards under $s$ up to round $t$ (see Eq.~\eqref{eqn:ucb_gap} in Alg.~\ref{alg:ucb}). If $G_t(s)$ is high, the algorithm marks $s$ as \emph{inconsistent} and does not consider it in round $t$.
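As a concrete illustration, one round of the \ensuremath{\tt mUCB}\xspace loop can be sketched in a few lines of Python. This is a minimal sketch, not the paper's implementation: the function names and the dictionary-based bookkeeping are ours, and the estimated mean reward $\hat{\mu}$ is assumed to be given as a callable.

```python
import math

def mucb_round(mu_hat, states, actions, x_t, N, G, sigma, n):
    """One round of a mUCB-style selection (illustrative sketch).

    mu_hat(a, x, s): offline-estimated mean reward (a callable);
    N[s], G[s]: per-state selection counts and accumulated gaps."""
    # Keep only latent states consistent with the rewards observed so far.
    consistent = [s for s in states
                  if G[s] <= sigma * math.sqrt(6 * N[s] * math.log(n))]
    # Jointly pick the believed state and the arm with maximal estimated reward.
    return max(((s, a) for s in consistent for a in actions),
               key=lambda sa: mu_hat(sa[1], x_t, sa[0]))

def mucb_update(b_t, a_t, x_t, r_t, mu_hat, N, G):
    # After observing reward r_t, update the statistics of the believed state.
    N[b_t] += 1
    G[b_t] += mu_hat(a_t, x_t, b_t) - r_t
```

The consistent set is recomputed from the running statistics each round, exactly as in Alg.~\ref{alg:ucb}; with high probability it never becomes empty, since $s_*$ is unlikely to be eliminated.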
Notice that the gap is defined over latent states rather than over actions, and with respect to realized rewards rather than expected rewards. \begin{algorithm}[tb] \caption{\ensuremath{\tt mUCB}\xspace}\label{alg:ucb} \begin{algorithmic}[1] \State \textbf{Input:} Model parameters $\hat{\theta}$ \Statex \For{$t \gets 1, 2, \hdots$} \State Define $N_t(s) \leftarrow \sum_{\ell = 1}^{t-1}\indicator{B_\ell = s}$ and \begin{align} \label{eqn:ucb_gap} G_{t}(s) \leftarrow \sum_{\ell = 1}^{t-1} \indicator{B_\ell = s}\left(\hat{\mu}(A_\ell, X_\ell, s) - R_\ell\right) \end{align} \State Set of consistent latent states $C_t \leftarrow \left\{s \in \mathcal{S}: G_t(s) \leq \sigma \sqrt{6N_t(s)\log n} \right\}$ \State Select $B_t, A_t \leftarrow \arg\max_{s \in C_t, a \in \mathcal{A}} \hat{\mu}(a, X_t, s)$ \EndFor \end{algorithmic} \end{algorithm} \vspace{-0.05in} \subsection{UCB with Misspecified Model (\ensuremath{\tt mmUCB}\xspace)} \vspace{-0.05in} We now generalize \ensuremath{\tt mUCB}\xspace to handle a misspecified model, i.e., when we are given $\hat{\theta} \neq \theta_*$ as input. We formulate model misspecification assuming the following high-probability worst-case guarantee: there are $\varepsilon, \delta > 0$ such that $\abs{\hat{\mu}(a, x, s) - \mu(a, x, s)} \leq \varepsilon$ holds w.p.\ at least $1 - \delta$ jointly over all $a \in \mathcal{A}, x \in \mathcal{X}, s \in \mathcal{S}$. Guarantees of this form are, for example, offered by spectral learning methods for latent variable models, where $\varepsilon$ and $\delta$ are functions of the size of the offline dataset \citep{tensor_decomposition}. We modify \ensuremath{\tt mUCB}\xspace to be sensitive to this type of model error, deriving a new method \ensuremath{\tt mmUCB}\xspace for misspecified models. We use the high-probability lower bound to rewrite the gap in Eq.
\eqref{eqn:ucb_gap} as \begin{align} \label{eqn:ucb_gap_uncertain} G_t(s) = \sum_{\ell = 1}^{t-1} \indicator{B_\ell = s} \left(\hat{\mu}(A_\ell, X_\ell, s) - \varepsilon - R_\ell\right). \end{align} This allows \ensuremath{\tt mmUCB}\xspace to act conservatively when determining inconsistent latent states, so that $s_* \in C_t$ occurs with high probability. Just as importantly, it is also useful for deriving worst-case regret bounds---we use it below to analyze TS algorithms with misspecified models. \vspace{-0.05in} \subsection{Thompson Sampling with Perfect Model (\ensuremath{\tt mTS}\xspace)} \vspace{-0.05in} Our UCB-based algorithms \ensuremath{\tt mUCB}\xspace and \ensuremath{\tt mmUCB}\xspace are designed for worst-case performance. We now adopt an alternative perspective where, apart from the learned model parameters $\hat{\theta}$, we are given the conditional reward distribution $P(\cdot \mid a, x, s, \theta)$ for all $a$, $x$, $s$ and $\theta$, as well as a prior distribution over latent states $P_1$ as input. As above, we first assume $\hat{\theta} = \theta_*$. TS samples actions according to their posterior probability (given history so far) of being optimal. Let the optimal action (w.r.t.\ the posterior) in round $t$ be $A_{t, *} = \arg\max_{a \in A} \mu(a, X_t, S_*, \theta_*)$, which is random due to the observed context and unknown latent state. TS selects $A_t$ stochastically s.t.\ $\prob{A_t = a \mid H_t} = \prob{A_{t, *} = a \mid H_t}$ for all $a$. An advantage of TS over UCB is that it obviates the need to design UCBs, which are often loose. Consequently, UCB algorithms are often conservative in practice and TS typically offers better empirical performance \citep{ts_empirical}. Our latent-state TS method \ensuremath{\tt mTS}\xspace, detailed in \cref{alg:thompson_1}, assumes an accurate model. For all $s \in \mathcal{S}$, let $P_t(s) = \prob{S_* = s \mid H_t}$ be the posterior probability that $s$ is the latent state in round $t$. 
In each round, \ensuremath{\tt mTS}\xspace samples the latent state from the posterior, $B_t \sim P_t$, and plays action $A_t = \arg\max_{a \in \mathcal{A}} \hat{\mu}(a, X_t, B_t)$. Because $s$ is fixed, the posterior is $P_t(s) \propto P_1(s) \prod_{\ell=1}^{t-1} P(R_\ell \mid A_\ell, X_\ell, s, \hat{\theta})$, and $P_t$ can be updated incrementally in the standard Bayesian filtering fashion \cite{sarkka2013bayesian}. \begin{figure}[tb] \begin{minipage}[tb]{0.48\textwidth} \begin{algorithm}[H] \caption{\ensuremath{\tt mTS}\xspace}\label{alg:thompson_1} \begin{algorithmic}[1] \State \textbf{Input:} \State \quad Model parameters $\hat{\theta}$ \State \quad Prior over latent states $P_1(s)$ \Statex \For {$t \gets 1, 2, \hdots$} \State Define \begin{align*}\textstyle P_t(s) \propto P_1(s) \prod_{\ell=1}^{t-1} P(R_\ell \mid A_\ell, X_\ell, s, \hat{\theta}) \end{align*} \State Sample $B_t \sim P_t$ \State Select $A_t \leftarrow \arg\max_{a \in \mathcal{A}} \hat{\mu}(a, X_t, B_t)$ \EndFor \end{algorithmic} \end{algorithm} \end{minipage} \hfill \begin{minipage}[tb]{0.48\textwidth} \begin{algorithm}[H] \caption{\ensuremath{\tt mmTS}\xspace}\label{alg:thompson_2} \begin{algorithmic}[1] \State \textbf{Input:} \State \quad Prior over model parameters $P_1(\theta)$ \State \quad Prior over latent states $P_1(s)$ \Statex \For {$t \gets 1, 2, \hdots$} \State Define \begin{align*}\textstyle P_{t}(s, \theta) \propto P_1(s) P_1(\theta) \prod_{\ell = 1}^{t-1} P(R_\ell \mid A_\ell, X_\ell, s, \theta) \end{align*} \State Sample $B_t, \hat{\theta} \sim P_t$ \State Select $A_t \leftarrow \arg\max_{a \in \mathcal{A}} \hat{\mu}(a, X_t, B_t)$ \EndFor \end{algorithmic} \end{algorithm} \end{minipage} \vspace{-0.1in} \end{figure} \vspace{-0.05in} \subsection{Thompson Sampling with Misspecified Model (\ensuremath{\tt mmTS}\xspace)} \vspace{-0.05in} As in the UCB case, we also generalize our TS method \ensuremath{\tt mTS}\xspace to handle a misspecified model.
Instead of an estimated $\hat{\theta}$ with worst-case error as in \ensuremath{\tt mmUCB}\xspace, we use a prior distribution $P_1(\theta)$ over possible models, and assume that $\theta_* \sim P_1$. This is well-motivated by prior literature on modeling epistemic uncertainty \cite{model_uncertainty}. In practice, learning a distribution over parameters is intractable for complex models, but approximate inference can be performed using, say, ensembles of bootstrapped models \cite{model_uncertainty}. Our TS method \ensuremath{\tt mmTS}\xspace (see Alg.~\ref{alg:thompson_2}) seamlessly integrates model uncertainty into \ensuremath{\tt mTS}\xspace. At each round $t$, the latent state $B_t$ and estimated model parameters $\hat{\theta}$ are sampled from their joint posterior. As in \ensuremath{\tt mTS}\xspace, the action $A_t = \arg\max_{a \in \mathcal{A}} \hat{\mu}(a, X_t, B_t)$ is then chosen using the sampled state and parameters. Approximate sampling from the posterior can be realized with sequential Monte Carlo methods \cite{smc}. When the model prior is conjugate to the likelihood, the posterior has a closed-form solution. Because $\mathcal{S}$ is finite, we can tractably sample from the joint posterior by first sampling latent state $B_t$ from its marginal posterior, then $\hat{\theta}$ conditioned on latent state $B_t$. For exponential family distributions, the posterior parameters can also be updated online and efficiently (see Appendix \ref{sec:mmts_specific} for details, and Appendix \ref{sec:mmts_pseudocode} for pseudocode for Gaussian prior and likelihood). \section{Regret Analysis} \label{sec:analysis} \citet{latent_bandits} derive gap-dependent regret bounds for a UCB algorithm when the true model is known and arms are independent. We provide a unified analysis of our methods that extends their results to include context and model misspecification, and covers TS as well.
\vspace{-0.05in} \subsection{Regret Decomposition} \vspace{-0.05in} UCB algorithms explore using upper confidence bounds, while TS samples from the posterior. \citet{russo_posterior_sampling} relate these two classes of algorithms with a unified regret decomposition, showing how to analyze TS using UCB analysis. We adopt this approach. Let $s_*$ be the true latent state. The regret of our UCB algorithms in round $t$ decomposes as \begin{align*} \mu(A_{t, *}, X_t, s_*) - \mu(A_t, X_t, s_*) & = \mu(A_{t, *}, X_t, s_*) - U_t(A_t) + U_t(A_t) - \mu(A_t, X_t, s_*) \\ & \leq \left[\mu(A_{t, *}, X_t, s_*) - U_t(A_{t, *})\right] + \left[U_t(A_t) - \mu(A_t, X_t, s_*)\right]\,, \end{align*} where the inequality holds by the definition of $A_t$. A similar inequality without latent states appears in prior work \citep{russo_posterior_sampling}. This yields the following regret decomposition: \begin{align} \begin{split} \mathcal{R}(n; s_*, \theta_*) &\leq \E{}{\sum_{t=1}^n \mu(A_{t, *}, X_t, s_*) - U_t(A_{t, *})} + \E{}{\sum_{t=1}^n U_t(A_t) - \mu(A_t, X_t, s_*)}. \label{eqn:ucb_regret_decomposition} \end{split} \end{align} An analogous decomposition exists for the Bayes regret of our TS algorithms. Specifically, for any TS algorithm and function $U_t$ of history, we have \begin{align} \begin{split} \mathcal{BR}(n) &= \E{}{\sum_{t=1}^n \mu(A_{t, *}, X_t, S_*, \theta_*) - U_t(A_{t, *})} + \E{}{ \sum_{t=1}^n U_t(A_t) - \mu(A_t, X_t, S_*, \theta_*)}. \label{eqn:posterior_regret_decomposition} \end{split} \end{align} The proof uses the fact that $\E{}{U_t(A_{t, *}) \mid X_t, H_t} = \E{}{U_t(A_{t}) \mid X_t, H_t}$ holds for any $H_t$ and $X_t$ by definition of TS. Hence, $U_t$ can be the upper confidence bound of UCB algorithms. Though the UCBs $U_t$ are not used by TS algorithms, they can be used to \emph{analyze} TS due to Eq.~\eqref{eqn:posterior_regret_decomposition}. Thus regret bounds for UCB algorithms can be translated to Bayes regret bounds for TS. 
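For completeness, the key step behind Eq.~\eqref{eqn:posterior_regret_decomposition} can be spelled out (our rephrasing of the argument above): since $U_t$ is a deterministic function of $H_t$ and $X_t$, and since under TS the conditional distributions of $A_t$ and $A_{t, *}$ given $(H_t, X_t)$ coincide,

```latex
\begin{align*}
\E{}{U_t(A_{t, *}) - U_t(A_t)}
= \E{}{\E{}{U_t(A_{t, *}) \mid H_t, X_t} - \E{}{U_t(A_t) \mid H_t, X_t}} = 0\,,
\end{align*}
```

so adding and subtracting $U_t(A_{t, *})$ and $U_t(A_t)$ inside the definition of $\mathcal{BR}(n)$ in Eq.~\eqref{eqn:bayes_regret} leaves exactly the two sums in Eq.~\eqref{eqn:posterior_regret_decomposition}.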
We make two important points. First, we must use a worst-case argument over suboptimal actions when bounding the regret, since actions in TS do not maximize $U_t$. Second, because the Bayes regret is an expectation over states, the resulting regret bounds are problem-independent, i.e., gap-free. \vspace{-0.05in} \subsection{Key Steps in Our Proofs} \vspace{-0.05in} Full proofs of our unified regret analyses can be found in the appendix. All proofs follow the same outline, the key steps of which are sketched below. To ease the exposition, we assume the suboptimality of any action is bounded by $1$. \textbf{Step 1: Concentration of realized rewards at their means.} We first show that the total observed reward does not deviate too much from its expectation, under any latent state $s$. Formally, we show $ \prob{ \abs{\sum_{\ell = 1}^{t-1} \indicator{B_\ell = s} \left(\mu(A_\ell, X_\ell, s_*) - R_\ell\right)} \geq \sigma \sqrt{6N_t(s) \log n}} = O(n^{-2}) $ for any round $t$ and latent state $s \in \mathcal{S}$. When the arms are independent, as in prior work, this follows from Hoeffding's inequality. However, we also consider the case of contextual arms, which requires joint estimators over dependent arms. To address this, we resort to martingales and Azuma's inequality. \textbf{Step 2: $s_* \in C_t$ in each round $t$ with high probability.} We show that our consistent sets are unlikely to rule out the true latent state. This follows from the concentration argument in Step 1, for $s = s_\ast$. Then, in any round $t$ where $s_* \in C_t$, we use the fact that $U_t(a) \geq \mu(a, X_t, s_*)$ for any $a$ for \ensuremath{\tt mUCB}\xspace, or $U_t(a) \geq \mu(a, X_t, s_*) - \varepsilon$ for \ensuremath{\tt mmUCB}\xspace. \textbf{Step 3: Upper bound on the UCB regret.} This bound is proved by bounding each term in the regret decomposition in Eq.~\eqref{eqn:ucb_regret_decomposition}. By Steps 1--2, the first term is at most $0$ with high probability.
The second term is the sum over rounds of confidence widths, or the difference between $U_t$ and the true expected mean reward at $t$. We partition this sum by the latent state selected at each round. For each $s$, we can almost upper bound its sum via $G_n(s)$, excluding the last round in which $s$ is played, \begin{align*} \sum_{t = 1}^{n} \indicator{B_t = s} \left(U_t(A_t) - \mu(A_t, X_t, s)\right) &\leq (G_n(s) + 1) + \sum_{t = 1}^{n} \indicator{B_t = s} \left(R_t - \mu(A_t, X_t, s)\right). \end{align*} If $s$ is chosen in round $t$, we know $G_t(s) \leq \sigma\sqrt{6N_t(s) \log n}$. The other term is bounded by Step 1, which gives a total upper bound of $2\sigma\sqrt{6N_n(s) \log n}$. We then combine the per-state bounds using the Cauchy-Schwarz inequality. \textbf{Step 4: Upper bound on the TS regret.} We exploit the fact that the regret decomposition for Bayes regret in Eq.~\eqref{eqn:posterior_regret_decomposition} is the same as that for the UCB regret in Eq.~\eqref{eqn:ucb_regret_decomposition}. Because our UCB analysis is worst-case over suboptimal latent states and actions, and gap-free, any regret bound transfers immediately to the Bayes regret bound for TS. \vspace{-0.05in} \subsection{Regret Bounds} \vspace{-0.05in} Our first result is an upper bound on the $n$-round regret of \ensuremath{\tt mUCB}\xspace when the true model is known. This result differs from that of \citet{latent_bandits} in two respects: our bound is gap-free and accounts for context. \begin{theorem} \label{thm:ucb_regret} Assume that $\hat{\theta} = \theta_*$. Then, for any $s_* \in \mathcal{S}$ and $\theta_\ast \in \Theta$, the $n$-round regret of \ensuremath{\tt mUCB}\xspace is bounded as $\mathcal{R}(n; s_*, \theta_*) \leq 3|\mathcal{S}| + 2\sigma \sqrt{6|\mathcal{S}| n \log n}$. \end{theorem} A gap-free lower bound on regret in multi-armed bandits with independent arms is $\Omega(\sqrt{K n})$ \cite{exp4}.
Our upper bound is optimal up to log factors, but substitutes actions $\mathcal{A}$ with latent states $\mathcal{S}$ and includes context. Our bound can be much lower when $|\mathcal{S}| \ll K$, and holds for arbitrarily complex reward models. Using Step 4 of the proof outline, we also have that the Bayes regret of \ensuremath{\tt mTS}\xspace is bounded: \begin{corollary} \label{cor:posterior_regret} Assume that $\hat{\theta} = \theta_*$. Then, for $S_* \sim P_1$ and any $\theta_* \in \Theta$, the $n$-round Bayes regret of \ensuremath{\tt mTS}\xspace is bounded as $\mathcal{BR}(n) \leq 3|\mathcal{S}| + 2\sigma \sqrt{6|\mathcal{S}| n \log n}$. \end{corollary} Our next results apply to the case of misspecified models. We assume $\hat{\theta}$ was estimated by some black-box method. For \ensuremath{\tt mmUCB}\xspace, our regret bound depends on the high-probability maximum error $\varepsilon$. \begin{theorem} \label{thm:ucb_regret_uncertain} Let $\prob{\forall a \in \mathcal{A}, x \in \mathcal{X}, s \in \mathcal{S}: |\mu(a, x, s, \hat{\theta}) - \mu(a, x, s, \theta_*)| \leq \varepsilon} \geq 1 - \delta$ for some $\varepsilon, \delta > 0$. Then, for any $s_* \in \mathcal{S}$ and $\theta_* \in \Theta$, the $n$-round regret of \ensuremath{\tt mmUCB}\xspace is bounded as \begin{align*} \mathcal{R}(n; s_*, \theta_*) \leq n\delta + 3|\mathcal{S}| + 2 n \varepsilon + 2\sigma \sqrt{6|\mathcal{S}| n \log n}\,. \end{align*} \end{theorem} The proof of \cref{thm:ucb_regret_uncertain} follows the same proof outline. Steps 1--2 are unchanged, but bounding the regret decomposition in Step 3 requires accounting for the error due to model misspecification. The linear dependence on $\varepsilon$ and probability $\delta$ is unavoidable in the worst case, specifically when $\varepsilon$ is larger than the suboptimality gap. However, some offline model-learning methods, e.g.,
tensor decomposition \citep{tensor_decomposition}, allow $\varepsilon$ and $\delta$ to be made arbitrarily small as the size of the offline dataset increases. For \ensuremath{\tt mmTS}\xspace, we assume that a prior distribution over model parameters is known. Instead of $\hat{\mu}(a, x, s)$ due to a single $\hat{\theta}$, we define $\bar{\mu}(a, x, s) = \int_\theta \mu(a, x, s, \theta) P_1(\theta) d \theta$ as the mean conditional reward, marginalized with respect to the prior. We obtain the following Bayes regret bound: \begin{corollary} \label{cor:posterior_regret_uncertain} For $\theta_* \sim P_1$, let $\prob{\forall a \in \mathcal{A}, x \in \mathcal{X}, s \in \mathcal{S}: |\bar{\mu}(a, x, s) - \mu(a, x, s, \theta_*)| \leq \varepsilon} \geq 1 - \delta$ for some $\varepsilon, \delta > 0$. Then, for $S_*, \theta_* \sim P_1$, the $n$-round Bayes regret of \ensuremath{\tt mmTS}\xspace is bounded as \begin{align*} \mathcal{BR}(n) \leq n\delta + 3|\mathcal{S}| + 2 n \varepsilon + 2\sigma \sqrt{6|\mathcal{S}| n \log n}\,. \end{align*} \end{corollary} We can formally define $\varepsilon$ and $\delta$ in terms of the tails of the conditional reward distributions. Let $\mu(a, x, s, \theta) - \bar{\mu}(a, x, s)$ be $v^2$-sub-Gaussian in $\theta \sim P_1$ for all $a$, $x$, and $s$. For $\delta > 0$, choosing $\varepsilon = O(\sqrt{2v^2 \log(K|\mathcal{X}||\mathcal{S}|/\delta)})$ satisfies the conditions on $\varepsilon$ and $\delta$ needed for \cref{cor:posterior_regret_uncertain}. The proof uses $U_t$ in Eq.~\eqref{eqn:posterior_regret_decomposition} as $U_t(a) = \max_{s \in C_t}\bar{\mu}(a, X_t, s)$, i.e., quantities in \ensuremath{\tt mmUCB}\xspace are defined using the marginalized conditional means instead of means using a point estimate $\hat{\theta}$. \section{Experiments} \label{sec:experiments} In this section, we evaluate our algorithms on both synthetic and real-world datasets.
We compare the following methods: (i) \textbf{UCB}: \ensuremath{\tt UCB1}\xspace/\ensuremath{\tt LinUCB}\xspace with no offline model \citep{ucb,linucb}; (ii) \textbf{TS}: \ensuremath{\tt TS}\xspace/\ensuremath{\tt LinTS}\xspace with no offline model \citep{ts,lints}; (iii) \textbf{EXP4}: \ensuremath{\tt EXP4}\xspace using the offline reward model as experts \citep{exp4}; (iv) \textbf{mUCB, mmUCB}: our proposed UCB algorithms \ensuremath{\tt mUCB}\xspace and \ensuremath{\tt mmUCB}\xspace; (v) \textbf{mTS, mmTS}: our proposed TS algorithms \ensuremath{\tt mTS}\xspace and \ensuremath{\tt mmTS}\xspace. In contrast to our methods, the UCB and TS baselines do not use an offline-learned model. \ensuremath{\tt UCB1}\xspace and \ensuremath{\tt TS}\xspace are used for non-contextual problems, while \ensuremath{\tt LinUCB}\xspace and \ensuremath{\tt LinTS}\xspace are used for contextual bandit experiments. \ensuremath{\tt EXP4}\xspace uses the offline-learned model as a mixture-of-experts, where each expert plays the best arm given context under its corresponding latent state. Because we measure ``fast personalization,'' we use short horizons of at most $500$ rounds. \begin{figure} \begin{minipage}{0.33\textwidth} \centering \includegraphics[width=\linewidth]{figures/sim_small.png} \end{minipage} \begin{minipage}{0.33\linewidth} \centering \includegraphics[width=\linewidth]{figures/sim_large.png} \end{minipage} \begin{minipage}{0.33\linewidth} \centering \includegraphics[width=\linewidth]{figures/sim_large_worst_case.png} \end{minipage} \caption{Left: Mean reward and standard error in simulation for small model noise ($\sigma_0 = 0.05$). Middle/Right: Mean/worst-case reward and standard error for large model noise ($\sigma_0 = 0.2$).} \label{fig:sim_results} \vspace{-0.1in} \end{figure} \vspace{-0.05in} \subsection{Synthetic Experiments} \vspace{-0.05in} We first experiment with synthetic (non-contextual) multi-armed bandits with $\mathcal{A} = [10]$ and $\mathcal{S} = [5]$.
Mean rewards are sampled uniformly at random, $\mu(a, s) \sim \mathsf{Uniform}(0, 1)$, for each $a \in \mathcal{A}, s \in \mathcal{S}$. Using rejection sampling, we constrain the suboptimality gap of all actions to be at least $0.1$ at each $s$ to ensure meaningful comparisons between methods on short timescales. Observed rewards are drawn i.i.d.\ from $P(\cdot \mid a, s) = \mathcal{N}(\mu(a, s), \sigma^2)$ with $\sigma = 0.5$. We evaluate each algorithm using $100$ independent runs with a uniformly sampled latent state, and report average reward over time. We analyze the effect of model misspecification by perturbing the reward means with various degrees of noise: given noise $\sigma_0$, estimated means are sampled from $\hat{\mu}(a, s) \sim \mathcal{N}(\mu(a, s), \sigma_0^2)$ for each arm and latent state. The estimated reward model $\hat{\theta}$ is the concatenation of all estimated means. The leftmost plot in Fig.~\ref{fig:sim_results} shows average reward obtained over time when model noise $\sigma_0 = 0.05$ is small. The middle plot increases noise to $\sigma_0 = 0.2$. Our algorithms \ensuremath{\tt mUCB}\xspace and \ensuremath{\tt mTS}\xspace perform much better than baselines \ensuremath{\tt UCB1}\xspace and \ensuremath{\tt TS}\xspace when model noise is low, but degrade with higher noise, since neither accounts for model error. By contrast, \ensuremath{\tt mmTS}\xspace outperforms \ensuremath{\tt mTS}\xspace in the high-noise setting. However, \ensuremath{\tt mmUCB}\xspace (not reported in the plot to reduce clutter) performs the same as \ensuremath{\tt mUCB}\xspace; this is likely due to the conservative nature of UCB. Despite having similar worst-case guarantees, \ensuremath{\tt EXP4}\xspace performs poorly, suggesting that our algorithms generally use the offline model more intelligently.
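The synthetic instances described above can be generated with a short script. This is a sketch under the stated parameters; the function name `make_instance` and its defaults are ours, not the paper's.

```python
import numpy as np

def make_instance(K=10, S=5, min_gap=0.1, sigma0=0.05, seed=0):
    """Sample a synthetic latent-bandit instance as in the experiments:
    uniform mean rewards with a suboptimality gap of at least min_gap in
    every latent state (enforced by rejection sampling), plus a noisy
    offline estimate of the means."""
    rng = np.random.default_rng(seed)
    while True:
        mu = rng.uniform(0.0, 1.0, size=(S, K))  # true means, one row per state
        top2 = np.sort(mu, axis=1)[:, -2:]
        # Accept only if the best arm beats the runner-up by min_gap everywhere.
        if np.all(top2[:, 1] - top2[:, 0] >= min_gap):
            break
    mu_hat = mu + rng.normal(0.0, sigma0, size=mu.shape)  # misspecified model
    return mu, mu_hat
```

Observed rewards are then drawn as $\mathcal{N}(\mu(a, s), 0.5^2)$ at each interaction, with `mu_hat` playing the role of the offline-learned model $\hat{\theta}$.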
The rightmost plot in Fig.~\ref{fig:sim_results} is the same as the middle one, but shows the ``worst-case'' performance by averaging over the $10\%$ of runs in which each method's final reward is lowest. Baselines \ensuremath{\tt UCB1}\xspace and \ensuremath{\tt TS}\xspace are unaffected by model misspecification, and have better worst-case performance than \ensuremath{\tt mUCB}\xspace and \ensuremath{\tt mTS}\xspace. However, \ensuremath{\tt mmTS}\xspace beats both online baselines; this demonstrates that uncertainty-awareness makes our algorithms more robust to model misspecification or learning error. \vspace{-0.05in} \subsection{MovieLens Results} \vspace{-0.05in} We also assess the empirical performance of our algorithms on MovieLens 1M \citep{movielens}, a large-scale collaborative filtering dataset comprising 6040 users rating 3883 movies. Each movie has a set of genres. We filter the data to include only users who rated at least 200 movies, and movies rated by at least 200 users, resulting in 1353 users and 1124 movies. We randomly select $50\%$ of all ratings as our ``offline'' training set, and use the remaining $50\%$ as a test set, giving sparse ratings matrices $M_{\text{train}}$ and $M_{\text{test}}$. We complete each matrix using least-squares matrix completion \citep{pmf} with rank $20$. We chose the rank to be expressive enough to yield low prediction error, but small enough to not overfit. The learned factors are $M_{\text{train}} = \hat{U} \hat{V}^T$ and $M_{\text{test}} = U V^T$. User $i$ and movie $j$ correspond to row $U_i$ and $V_j$, respectively, in the matrix factors. We define a latent contextual bandit instance with $\mathcal{A} = [20]$ and $\mathcal{S} = [5]$ as follows. Using $k$-means on rows of $\hat{U}$, we cluster users into $5$ clusters, where $5$ is the largest value that does not yield empty clusters. First, a user $i$ is sampled uniformly at random.
At each round, $20$ genres, then a movie for each genre, are uniformly sampled, creating a set of diverse movies. Context $x_t \in \mathbb{R}^{20 \times 20}$ is the matrix with training movie vectors for the $20$ sampled movies as rows, i.e., movie $j$ has vector $\hat{V}_j$. The agent chooses among movies in $x_t$. The reward distribution $\mathcal{N}(U_i^T V_j, 0.5)$ for movie $j$ under user $i$ has the product of the test user and movie vectors as its mean. We evaluate on $100$ users. Let $\hat{\theta}$ be the mean of the user's cluster. We assume a Gaussian prior over parameters with mean $\hat{\theta}$ and use the empirical covariance of user factors within each cluster as its covariance. Notice that baselines \ensuremath{\tt LinUCB}\xspace and \ensuremath{\tt LinTS}\xspace are also given movie vectors from the training set via context, and need only learn the user vector. This is more information than low-rank bandit algorithms \cite{clustering_bandits_1}, which jointly learn user and movie representations, and are unlikely to converge on the short timescales we consider. The left plot in Fig.~\ref{fig:movielens} shows the mean rating and standard error of the six algorithms (as above, \ensuremath{\tt mmUCB}\xspace is similar to \ensuremath{\tt mUCB}\xspace and is not shown). \ensuremath{\tt mUCB}\xspace and \ensuremath{\tt mTS}\xspace adapt or ``personalize'' to users more quickly than \ensuremath{\tt LinUCB}\xspace and \ensuremath{\tt LinTS}\xspace, even though the latter have access to movie vectors, and converge to better policies than \ensuremath{\tt EXP4}\xspace. Despite this, both \ensuremath{\tt mUCB}\xspace and \ensuremath{\tt mTS}\xspace are affected by model misspecification. By contrast, \ensuremath{\tt mmTS}\xspace handles model uncertainty and converges to the best reward. The right plot in Fig.~\ref{fig:movielens} shows average results for the bottom $10\%$ of users.
Again, \ensuremath{\tt mmTS}\xspace dramatically outperforms \ensuremath{\tt mTS}\xspace in the worst-case. \begin{figure} \centering \begin{minipage}{0.33\textwidth} \centering \includegraphics[width=\linewidth]{figures/movielens.png} \end{minipage} \begin{minipage}{0.33\linewidth} \centering \includegraphics[width=\linewidth]{figures/movielens_worst_case.png} \end{minipage} \caption{Mean/worst-case rating and standard error on MovieLens 1M.} \label{fig:movielens} \vspace{-0.1in} \end{figure} \section{Related Work} \textbf{Latent bandits.} Latent contextual bandits admit faster personalization than standard contextual bandit strategies, such as LinUCB ~\citep{linucb} or linear TS \citep{lints,lints_2}. The closest work to ours is that of \citet{latent_bandits}, which proposes and analyzes non-contextual UCB algorithms under the assumption that the mean rewards for each latent state are known. \citet{latent_contextual_bandits} extend this formulation to the contextual bandits case, but consider offline-learned policies deployed as a mixture via EXP4. Bayesian policy reuse (BPR) \citep{bpr} selects offline-learned policies by maintaining a belief over the optimality of each policy, but no analysis exists. Our work subsumes prior work by providing contextual, uncertainty-aware UCB and TS algorithms and a unified analysis of the two. \textbf{Low-rank bandits.} Low-rank bandits can be viewed as a generalization of latent bandits, where low-rank matrices that parameterize the reward are learned jointly with bandit strategies. \citet{ts_online_mf} propose a TS algorithm for low-rank matrix factorization; however, their algorithm is inefficient and analysis is provided only for the rank-$1$ case. \citet{latent_confounders} analyze an $\varepsilon$-greedy algorithm, but rely on properties that rarely hold in practice. 
Another body of work studies online clustering of bandit instances, which is based on a more specific low-rank structure \citep{clustering_bandits_1,clustering_bandits_2, clustering_bandits_3, clustering_bandits_4}. Yet another deals with low-rank matrices where both rows and columns are arms \citep{bandits_rank_1,bernoulli_bandits_rank_1}. None of this existing work leverages models that are learned offline---an important practical consideration given the general availability of offline data---and only linear reward models are learned. In \cref{sec:experiments}, we compare against idealized versions of these methods where low-rank features are provided. \textbf{Structured bandits.} In structured bandits, arms are related by a common latent parameter. \citet{structured_bandits} propose a UCB algorithm for the multi-arm setting. Recently, \citet{unified_structured_bandits} propose a unified framework that adapts classic bandit algorithms, such as UCB and TS, to the multi-arm structured bandit setting. Though similar to our work, the algorithms proposed differ in key aspects: we track confidence intervals around latent states instead of arms, and develop contextual algorithms that are robust to model (parameter) misspecification. \section{Conclusions} In this work, we studied the latent bandits problem, where the rewards are parameterized by a discrete, latent state. We adopted a framework in which an offline-learned model is combined with UCB and Thompson sampling exploration to quickly identify the latent state. Our approach handles both context and misspecified models. We analyzed our proposed algorithms using a unified framework, and validated them using both synthetic data and the MovieLens 1M dataset. A natural extension of our work is to use temporal models to handle latent state dynamics. This is useful for applications where user preferences, tasks or intents change fairly quickly. 
For UCB, we can leverage existing adaptations of UCB algorithms (e.g., discounting, sliding windows)~\citep{nonstationary_ucb}. For TS, we can take the dynamics into account when computing the posterior. \section{Details of \ensuremath{\tt mmTS}\xspace for Exponential Families} \label{sec:mmts_specific} For a matrix (vector) $M$, we let $M_i$ denote its $i$-th row (element). Using this notation, we can write $\theta = (\theta_s)_{s \in \mathcal{S}}$ as a vector of parameters, one for each latent state; each $\theta_s$ parameterizes the reward under latent state $s$. We want to show that the sampling step in \ensuremath{\tt mmTS}\xspace can be done tractably when the conditional reward distribution and model prior are in the exponential family. We can write the conditional reward likelihood as, \begin{align*} P(r \mid a, x, \theta, s) = \exp\left[\phi(r, a, x)^\top \kappa(\theta_s) - g(\theta_s) \right], \end{align*} where $\phi(r, a, x)$ are sufficient statistics for the observed data, $\kappa(\theta_s)$ are the natural parameters, and $g(\theta_s) = \log \sum_{r, a, x} \exp\left[\phi(r, a, x)^\top \kappa(\theta_s)\right]$ is the log-partition function. Then, we assume the prior over $\theta_s$ to be the conjugate prior of the likelihood, which will have the general form, \begin{align*} P_1(\theta_s) = H(\phi_0, m_0)\exp\left[\phi_0^\top \kappa(\theta_s) - m_0 g(\theta_s) \right], \end{align*} where $\phi_0, m_0$ are parameters controlling the prior, and $H(\phi_0, m_0)$ is the normalizing factor. For round $t$, recall that $N_t(s) = \sum_{\ell = 1}^{t - 1} \indicator{B_\ell = s}$ is the number of times $s$ is selected.
We can write the joint posterior as, \begin{align} P_t(s, \theta) &\propto P_1(s) P_1(\theta_s) \prod_{\ell = 1}^{t-1} \exp\left[\phi(R_\ell, A_\ell, X_\ell)^\top \kappa(\theta_s) - g(\theta_s)\right]^{\indicator{B_\ell = s}} \label{eqn:exponential_joint_posterior} \\ &\propto P_1(s) \exp\left[\left(\phi_0 + \sum_{\ell = 1}^{t-1}\indicator{B_\ell = s}\phi(R_\ell, A_\ell, X_\ell) \right)^\top \kappa(\theta_s) - (m_0 + N_t(s))g(\theta_s) \right]. \nonumber \end{align} The general form of an exponential-family likelihood is retained. The prior-to-posterior conversion simply involves updating the prior parameters with sufficient statistics from the data. Specifically, the updated parameters $\phi_t \leftarrow \phi_0 + \sum_{\ell = 1}^{t-1} \indicator{B_\ell = s}\phi(R_\ell, A_\ell, X_\ell)$ and $m_t \leftarrow m_0 + N_t(s)$ form the conditional posterior $ P_t(\theta_s) = H(\phi_t, m_t)\exp\left[\phi_t^\top \kappa(\theta_s) - m_t g(\theta_s) \right] $. For round $t$, the marginal posterior of $s$ is given by, \begin{align*} P_t(s) &\propto P_1(s) \int_{\theta_s} H(\phi_0, m_0) \exp\left[\phi_t^\top \kappa(\theta_s) - m_t g(\theta_s) \right] d \theta_s \\ &\propto P_1(s) \, \frac{H(\phi_0, m_0)}{H(\phi_t, m_t)}. \end{align*} So, for all states $s$ and parameters $\theta$, the posterior probabilities $P_t(s)$ and densities $P_t(\theta_s)$ have analytic, closed-form expressions. Thus, sampling from the joint posterior can be done tractably by first sampling the state $s$ from its marginal posterior, then the parameters $\theta_s$ from its conditional posterior. \section{Pseudocode of \ensuremath{\tt mmTS}\xspace for Gaussians} \label{sec:mmts_pseudocode} Next, we provide specific variants of \ensuremath{\tt mmTS}\xspace when both the model prior and conditional reward likelihood are Gaussian. This is a common assumption for Thompson sampling algorithms \citep{ts,lints,lints_2}. In this case, the joint posterior in Eq. \eqref{eqn:exponential_joint_posterior} consists of Gaussians.
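Before specializing to Gaussians, it may help to see the exponential-family recipe above in code for one simple conjugate pair. The following sketch, which is our illustration and not part of the original algorithms, assumes Bernoulli rewards with Beta priors: the marginal weight of each state is its Beta marginal likelihood, and the conditional posterior of $\theta_s$ is again Beta. All function names are ours.

```python
import math
import random

def log_beta_fn(a, b):
    """log B(a, b); 1 / B(a, b) plays the role of the normalizer H in the text."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def sample_joint_posterior(prior_s, data, a0=1.0, b0=1.0, rng=random):
    """Two-step exact sampling: a latent state from its marginal posterior,
    then theta_s from its conditional Beta posterior.

    data[s] = (successes, failures) observed in rounds where state s was
    selected; each theta_s has a Beta(a0, b0) prior, Bernoulli likelihood."""
    states = list(prior_s)
    # Step 1: marginal log-weights, log P_1(s) + log-marginal-likelihood of s's data.
    logw = [math.log(prior_s[s])
            + log_beta_fn(a0 + data[s][0], b0 + data[s][1])
            - log_beta_fn(a0, b0)
            for s in states]
    mx = max(logw)
    b_t = rng.choices(states, weights=[math.exp(v - mx) for v in logw], k=1)[0]
    # Step 2: conditional posterior of theta for the sampled state.
    theta = rng.betavariate(a0 + data[b_t][0], b0 + data[b_t][1])
    return b_t, theta
```

The same two-step structure carries over to any conjugate pair; only the marginal-likelihood weight and the conditional sampler change.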
We adopt the notation that $\mathcal{N}(r \mid \mu, \sigma^2) \propto \exp[- (r - \mu)^2 / 2 \sigma^2]$ is the Gaussian likelihood of $r$ given mean $\mu$ and variance $\sigma^2$. We detail algorithms for two cases: \cref{alg:thompson_gauss} is for a multi-armed bandit with independent arms (no context), and \cref{alg:thompson_lingauss} is for a linear bandit problem. In the first case, we have that $\theta_s \in \mathbb{R}^K$ are the mean reward vectors where $\theta_{s,a} = \mu(a, s, \theta)$. In the second case, we assume that context is given by $x \in \mathbb{R}^{K \times d}$ where $x_a \in \mathbb{R}^d$ is the feature vector for arm $a$. Then, we have that $\theta_s \in \mathbb{R}^d$ are $d$-dimensional parameter vectors such that $x_a^\top\theta_s = \mu(a, x, s, \theta)$. Both algorithms are efficient to implement, and perform exact sampling from the joint posterior. \begin{algorithm}[H] \caption{Independent Gaussian \ensuremath{\tt mmTS}\xspace (Non-contextual)}\label{alg:thompson_gauss} \begin{algorithmic}[1] \State \textbf{Input:} \State \quad Prior over model parameters $P_1(\theta_s) = \mathcal{N}(\bar{\theta}_s, \sigma_0^2I), \forall s \in \mathcal{S}$ \State \quad Prior over latent states $P_1(s)$ \Statex \For {$t \gets 1, 2, \hdots$} \LineComment{Step 1: sample latent state from marginal posterior.} \State Define \begin{align*} P_t(s) \propto P_1(s) \prod_{\ell = 1}^{t-1} \mathcal{N}(R_\ell \mid \bar{\theta}_{s, A_\ell}, \sigma_0^2 + \sigma^2)^{\indicator{B_\ell = s}} \end{align*} \State Sample $B_t \sim P_t$ \Statex\LineComment{Step 2: sample model parameters from conditional posteriors.} \State Define \begin{align*} N_t(a, s) \leftarrow \sum_{\ell = 1}^{t-1} \indicator{A_\ell = a, B_\ell = s}, \text{ and } \quad S_t(a, s) \leftarrow \sum_{\ell = 1}^{t-1} \indicator{A_\ell = a, B_\ell = s} R_\ell \end{align*} \State For each $s \in \mathcal{S}$, sample $\hat{\theta}_s \sim \mathcal{N}(M_s, \mathsf{diag}(K_s))$, where \begin{align*} K_{s,a} \leftarrow
\left(\sigma_0^{-2} + N_t(a, s) \sigma^{-2}\right)^{-1}, \text{ and } \quad M_{s,a} \leftarrow K_{s,a} \left(\sigma_0^{-2}\bar{\theta}_{s, a} + \sigma^{-2}S_t(a, s) \right) \end{align*} \State Select $A_t \leftarrow \arg\max_{a \in A} \hat{\theta}_{B_t, a}$ \EndFor \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{Linear Gaussian \ensuremath{\tt mmTS}\xspace}\label{alg:thompson_lingauss} \begin{algorithmic}[1] \State \textbf{Input:} \State \quad Prior over model parameters $P_1(\theta_s) = \mathcal{N}(\bar{\theta}_s, \Sigma_0), \forall s \in \mathcal{S}$ \State \quad Prior over latent states $P_1(s)$ \Statex \For {$t \gets 1, 2, \hdots$} \LineComment{Step 1: sample latent state from marginal posterior.} \State Define \begin{align*} P_t(s) \propto P_1(s) \prod_{\ell = 1}^{t-1} \mathcal{N}(R_\ell \mid X_{\ell, A_\ell}^\top \bar{\theta}_{s}, \, X_{\ell, A_\ell}^\top \Sigma_0 X_{\ell, A_\ell} + \sigma^2)^{\indicator{B_\ell = s}} \end{align*} \State Sample $B_t \sim P_t$ \Statex\LineComment{Step 2: sample model parameters from conditional posteriors.} \State Define $N_t(s) \leftarrow \sum_{\ell = 1}^{t-1} \indicator{B_\ell = s}$, \begin{align*} S_t(s) \leftarrow I + \sum_{\ell = 1}^{t-1} \indicator{B_\ell = s} X_{\ell, A_\ell} X_{\ell, A_\ell}^\top, \text{ and } \quad F_t(s) \leftarrow \sum_{\ell = 1}^{t-1} \indicator{B_\ell = s} X_{\ell, A_\ell}R_\ell \end{align*} \State For each $s \in \mathcal{S}$, compute $\hat{\beta}_s \leftarrow S_t(s)^{-1}F_t(s)$, and $\hat{\Sigma}_s \leftarrow \sigma^2 S_t(s)^{-1}$ \State For each $s \in \mathcal{S}$, sample $\hat{\theta}_s \sim \mathcal{N}(M_s, K_s)$, where \begin{align*} K_s \leftarrow \left(\Sigma_0^{-1} + N_t(s) \hat{\Sigma}_s^{-1} \right)^{-1}, \text{ and } \quad M_s \leftarrow K_s \left(\Sigma_0^{-1}\bar{\theta}_s + N_t(s) \hat{\Sigma}_s^{-1} \hat{\beta}_s \right) \end{align*} \State Select $A_t \leftarrow \arg\max_{a \in A} X_{t, a}^\top \hat{\theta}_{B_t}$ \EndFor \end{algorithmic} \end{algorithm}
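The sampling step of the non-contextual variant can be sketched in a few lines of NumPy. This is an illustrative re-implementation under the stated independent-Gaussian assumptions; all function and variable names are ours, and the shared normalizing constant of the Gaussian likelihood is dropped because it cancels across states.

```python
import numpy as np

def gaussian_mmts_step(prior_s, theta_bar, sigma0, sigma, history, rng):
    """One round of independent Gaussian mmTS (sketch).

    prior_s: (S,) prior over latent states; theta_bar: (S, K) prior mean rewards;
    history: list of (latent state, arm, reward) triples from past rounds."""
    S, K = theta_bar.shape
    # Step 1: log of the marginal posterior over latent states; the constant
    # of N(. | ., sigma0^2 + sigma^2) is identical for all states and cancels.
    logp = np.log(prior_s).astype(float)
    var = sigma0 ** 2 + sigma ** 2
    for s, a, r in history:
        logp[s] -= 0.5 * (r - theta_bar[s, a]) ** 2 / var
    p = np.exp(logp - logp.max())
    b = rng.choice(S, p=p / p.sum())
    # Step 2: per-(s, a) conditional Gaussian posteriors K_{s,a}, M_{s,a}.
    n = np.zeros((S, K)); tot = np.zeros((S, K))
    for s, a, r in history:
        n[s, a] += 1.0; tot[s, a] += r
    k = 1.0 / (sigma0 ** -2 + n * sigma ** -2)
    m = k * (theta_bar * sigma0 ** -2 + tot * sigma ** -2)
    theta_hat = rng.normal(m[b], np.sqrt(k[b]))  # sample theta for sampled state
    return b, int(np.argmax(theta_hat))
```

The linear-Gaussian variant has the same structure, with per-state regression statistics replacing the per-arm counts and sums.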
\newpage \section{Proofs} Our proofs rely on the following concentration inequality, which is a straightforward extension of the Azuma-Hoeffding inequality to sub-Gaussian random variables. \begin{lemma} \label{thm:azuma_general} Let $(Y_t)_{t \in [n]}$ be a martingale difference sequence with respect to a filtration $(\mathcal{F}_t)_{t \in [n]}$, that is, $\E{}{Y_t \mid \mathcal{F}_{t - 1}} = 0$ for any $t \in [n]$. Let $Y_t \mid \mathcal{F}_{t - 1}$ be $\sigma^2$-sub-Gaussian for any $t \in [n]$. Then for any $\varepsilon > 0$, \begin{align*} \prob{\Big|\sum_{t = 1}^n Y_t\Big| \geq \varepsilon} \leq 2 \exp\left[- \frac{\varepsilon^2}{2 n \sigma^2}\right]\,. \end{align*} \end{lemma} \begin{proof} For any $\lambda > 0$, which we tune later, we have \begin{align*} \prob{\sum_{t = 1}^n Y_t \geq \varepsilon} = \prob{\prod_{t = 1}^n e^{\lambda Y_t} \geq e^{\lambda \varepsilon}} \leq e^{- \lambda \varepsilon} \E{}{\prod_{t = 1}^n e^{\lambda Y_t}}\,. \end{align*} The inequality is by Markov's inequality. By the tower rule of expectation, conditioning on $\mathcal{F}_{n - 1}$, the term on the right becomes \begin{align*} \E{}{\prod_{t = 1}^n e^{\lambda Y_t}} = \E{}{\E{}{e^{\lambda Y_n} \mid \mathcal{F}_{n-1}} \prod_{t = 1}^{n - 1} e^{\lambda Y_t}} \leq e^{\frac{\lambda^2 \sigma^2}{2}} \E{}{\prod_{t = 1}^{n - 1} e^{\lambda Y_t}} \leq e^{\frac{n \lambda^2 \sigma^2}{2}}\,. \end{align*} We use that $Y_n \mid \mathcal{F}_{n - 1}$ is $\sigma^2$-sub-Gaussian in the first inequality, and recursively repeat the argument for all rounds in the second. So we have \begin{align*} \prob{\sum_{t = 1}^n Y_t \geq \varepsilon} \leq \min_{\lambda > 0} e^{-\lambda \varepsilon + \frac{n\lambda^2 \sigma^2}{2}} \,. \end{align*} The minimum is achieved at $\lambda = \varepsilon / (n \sigma^2)$. Therefore \begin{align*} \prob{\sum_{t = 1}^n Y_t \geq \varepsilon} \leq \exp\left[- \frac{\varepsilon^2}{2 n \sigma^2}\right]\,.
\end{align*} Now we apply the same proof to $\prob{- \sum_{t = 1}^n Y_t \geq \varepsilon}$, which yields a multiplicative factor of $2$ in the upper bound. This concludes the proof. \end{proof} \subsection{Proof of \cref{thm:ucb_regret}} \label{sec:thm1_proof} Recall that $s_* \in \mathcal{S}, \theta_* \in \Theta$ are the true latent state and model. Let $\mu(a, x) = \mu(a, x, s_*, \theta_*)$ be the true mean reward given observed context and action. Let \begin{align} E_t = \left\{ \forall s \in \mathcal{S}: \, \abs{\sum_{\ell = 1}^{t-1} \indicator{B_\ell = s} \left(\mu(A_\ell, X_\ell) - R_\ell\right)} \leq \sigma \sqrt{6N_t(s) \log n} \right\} \label{eqn:ucb_event} \end{align} be the event that the total realized reward under each played latent state is close to its expectation. Let $E = \cap_{t=1}^n E_t$ be the event that this holds for all rounds, and $\bar{E}$ be its complement. We can rewrite the expected $n$-round regret as \begin{align} \begin{split} \label{eqn:regret_event_decomposition} \mathcal{R}(n) &= \E{}{\indicator{\bar{E}} \mathcal{R}(n)} + \E{}{\indicator{E} \mathcal{R}(n)} \\ &\leq \E{}{\indicator{\bar{E}} \sum_{t = 1}^n \mu(A_{t, *}, X_t) - \mu(A_t, X_t)} \\ &\quad + \E{}{\indicator{E}\sum_{t = 1}^n \left(\mu(A_{t, *}, X_t) - U_t(A_{t, *})\right)} + \E{}{\indicator{E} \sum_{t = 1}^n \left(U_t(A_t) - \mu(A_t, X_t)\right)}\,, \hspace{-0.1in} \end{split} \end{align} where we use the regret decomposition in Eq. \eqref{eqn:ucb_regret_decomposition} in the inequality. Our first lemma is that the probability of $\bar{E}$ occurring is low. Without context, the lemma would follow immediately from Hoeffding's inequality. Since the context is generated by some random process, we instead turn to martingales. \begin{lemma} \label{thm:ucb_concentration} Let $E_t$ be defined as in Eq. \eqref{eqn:ucb_event} for all rounds $t$, $E = \cap_{t = 1}^n E_t$, and $\bar{E}$ be its complement. Then $\prob{\bar{E}} \leq 2|\mathcal{S}|n^{-1}$.
\vspace{-0.05in} \end{lemma} \begin{proof} We see that the choice of action given observed context depends on past rounds. This is because the upper confidence bounds depend on which latent states are eliminated, which in turn depends on the history of observed contexts. For each latent state $s$ and round $t$, let $Y_t(s) = \indicator{B_t = s} (\mu(A_t, X_t) - R_t)$. Observe that $Y_t(s) \mid X_t, H_t$ is $\sigma^2$-sub-Gaussian for any state $s$ and round $t$. Moreover, $(Y_t(s))_{t \in [n]}$ is a martingale difference sequence with respect to context and history $(X_t, H_t)_{t \in [n]}$, since $\E{}{Y_t(s) \mid X_t, H_t} = 0$ for all rounds $t \in [n]$. For any round $t$, any state $s \in \mathcal{S}$, and any $N_t(s) = u$ for $u < t$, we have the following due to \cref{thm:azuma_general}, \begin{align*} \prob{\abs{\sum_{\ell=1}^{t-1} Y_\ell(s)} \geq \sigma \sqrt{6 u \log n}} \leq 2\exp\left[-3\log n\right] = 2 n^{-3}\,. \end{align*} So, by the union bound, we have \begin{align*} \prob{\bar{E}} \leq \sum_{t = 1}^n \sum_{s \in \mathcal{S}} \sum_{u = 1}^{t - 1} \prob{\abs{\sum_{\ell=1}^{t - 1} Y_\ell(s)} \geq \sigma \sqrt{6 u \log n}} \leq 2 |\mathcal{S}| n^{-1}\,. \end{align*} \vspace{-0.05in} \end{proof} The first term in Eq. \eqref{eqn:regret_event_decomposition} is small because the probability of $\bar{E}$ is small. Using \cref{thm:ucb_concentration}, and that the total regret is bounded by $n$, we have, $ \E{}{\indicator{\bar{E}} \mathcal{R}(n)} \leq n\prob{\bar{E}} \leq 2|\mathcal{S}|. $ For round $t$, the event $\mu(A_{t, *}, X_t) \geq U_t(A_{t, *})$ occurs only if $s_* \notin C_t$ also occurs. By the design of $C_t$ in \ensuremath{\tt mUCB}\xspace, this happens only if $G_t(s_*) > \sigma\sqrt{6 N_t(s_*)\log n}$. Event $E_t$ says that the opposite is true for all states, including the true state $s_*$. So, the second term in Eq. \eqref{eqn:regret_event_decomposition} is at most $0$. Now, consider the last term in Eq. \eqref{eqn:regret_event_decomposition}.
Let $T_s = \{t \leq n: B_t = s\}$ denote the set of rounds where latent state $s$ is selected. We have, \begin{align*} \indicator{E} \sum_{t = 1}^n \left(U_t(A_t) - \mu(A_t, X_t)\right) &= \indicator{E} \sum_{s \in S} \sum_{t \in T_s} \left(\mu(A_t, X_t, s) - \mu(A_t, X_t) \right) \\ &= \indicator{E} \sum_{s \in S} \sum_{t \in T_s} \left(\mu(A_t, X_t, s) - R_t + R_t - \mu(A_t, X_t) \right) \\ &\leq \indicator{E} \sum_{s \in S} \left(G_n(s) + \sum_{t \in T_s} \left(R_t - \mu(A_t, X_t) \right)\right) \\ &\leq \sum_{s \in S} \left(1 + 2\sigma \sqrt{6N_n(s) \log n}\right). \end{align*} For the first inequality, we use that at the last round $t'$ in which state $s$ is selected, the gap satisfies $G_{t'}(s) \leq \sigma\sqrt{6N_{t'}(s)\log n}$. Accounting for the last round yields $G_{n}(s) \leq \sigma \sqrt{6N_n(s)\log n} + 1$. For the last inequality, we use that $E$ occurs to bound $\sum_{t \in T_s} \left(R_t - \mu(A_t, X_t) \right) \leq \sigma \sqrt{6N_n(s)\log n}$. This yields the desired bound on the total regret, \begin{align*} \mathcal{R}(n) &\leq 3|\mathcal{S}| + 2\sigma\sqrt{6\log n}\left(\sum_{s \in S} \sqrt{N_n(s)}\right) \\ &\leq 3|\mathcal{S}| + 2\sigma\sqrt{6|\mathcal{S}| \log n \sum_{s \in S}N_n(s)} \\ &= 3|\mathcal{S}| + 2\sigma\sqrt{6|\mathcal{S}|n \log n}, \end{align*} where the second inequality follows from the Cauchy–Schwarz inequality. \subsection{Proof of \cref{cor:posterior_regret}} The true latent state $S_* \in \mathcal{S}$ is random under the Bayes regret. In this case, we still assume that we are given the true model $\theta_*$, so only $S_* \sim P_1$ for a known $P_1$. We also have that the optimal action $A_{t, *} = \arg\max_{a \in \mathcal{A}} \mu(a, X_t, S_*, \theta_*)$ is random not only due to context, but also due to $S_*$. We define $U_t(a) = \max_{s \in C_t}\mu(a, X_t, s, \theta_*)$ as in \ensuremath{\tt mUCB}\xspace. Note that $C_t$ is random through the history, which depends on $S_*$.
We can rewrite the Bayes regret as $ \mathcal{BR}(n) = \E{}{\mathcal{R}(n; S_*, \theta_*)}, $ where the outer expectation is over $S_* \sim P_1$. The expression inside the expectation can be decomposed as \begin{align*} \mathcal{R}(n; S_*, \theta_*) &= \E{}{\indicator{\bar{E}} \sum_{t = 1}^n \mu(A_{t, *}, X_t, S_*) - \mu(A_t, X_t, S_*)} \\ &\hspace{-0.5in}+ \E{}{\indicator{E}\sum_{t = 1}^n \left(\mu(A_{t, *}, X_t, S_*) - U_t(A_{t, *})\right)} + \E{}{\indicator{E} \sum_{t = 1}^n \left(U_t(A_t) - \mu(A_t, X_t, S_*)\right)}\,, \end{align*} where $E, \bar{E}$ are defined as in \cref{sec:thm1_proof}, and we use the decomposition in Eq. \eqref{eqn:posterior_regret_decomposition}. Each expression above can be bounded exactly as in the proof of \cref{thm:ucb_regret}. The reason is that the original upper bounds hold for any $S_*$, and therefore also in expectation over $S_* \sim P_1$. This yields the desired Bayes regret bound. \subsection{Proof of \cref{thm:ucb_regret_uncertain}} \label{sec:thm2_proof} The only difference in the analysis is that we need to incorporate the additional error due to model misspecification. Let $\mathcal{E} = \{\forall a \in \mathcal{A}, x \in \mathcal{X}, s \in \mathcal{S}: \abs{\hat{\mu}(a, x, s) - \mu(a, x, s)} \leq \varepsilon\}$ be the event that model $\hat{\theta}$ has bounded misspecification and $\bar{\mathcal{E}}$ be its complement. Also let $E$, $\bar{E}$ be defined as in \cref{sec:thm1_proof}. If $\mathcal{E}$ does not hold, then the best possible upper bound on the regret is $n$; fortunately, we assume in the theorem that the probability of that occurring is bounded by $\delta$.
So we can bound the expected $n$-round regret as \begin{align} \begin{split} \label{eqn:regret_event_decomposition_uncertain} \mathcal{R}(n) &= \E{}{\indicator{\bar{\mathcal{E}}} \mathcal{R}(n)} + \E{}{\indicator{\bar{E}, \mathcal{E}} \mathcal{R}(n)} + \E{}{\indicator{E, \mathcal{E}} \mathcal{R}(n)} \\ &\leq n\delta + \E{}{\indicator{\bar{E}, \mathcal{E}} \sum_{t = 1}^n \mu(A_{t, *}, X_t) - \mu(A_t, X_t)} \\ &\quad + \E{}{\indicator{E, \mathcal{E}}\sum_{t = 1}^n \left(\mu(A_{t, *}, X_t) - U_t(A_{t, *})\right)} + \E{}{\indicator{E, \mathcal{E}} \sum_{t = 1}^n \left(U_t(A_t) - \mu(A_t, X_t)\right)}\,, \hspace{-0.5in} \end{split} \end{align} where we use the regret decomposition in Eq. \eqref{eqn:ucb_regret_decomposition}. The second term in Eq. \eqref{eqn:regret_event_decomposition_uncertain} is small because the probability of $\bar{E}$ is small. Using \cref{thm:ucb_concentration}, and that the total regret is bounded by $n$, we have, $ \E{}{\indicator{\bar{E}, \mathcal{E}}\mathcal{R}(n)} \leq n\prob{\bar{E}} \leq 2|\mathcal{S}|. $ If $\mathcal{E}$ occurs, the event $\mu(A_{t, *}, X_t) - U_t(A_{t, *}) \geq \varepsilon$ for any round $t$ occurs only if $s_* \notin C_t$ also occurs. By the design of $C_t$ in \ensuremath{\tt mmUCB}\xspace, this happens only if $G_t(s_*) \geq \sigma\sqrt{6 N_t(s_*)\log n}$. Since \begin{align*} G_t(s_*) = \sum_{\ell = 1}^{t-1} \indicator{B_\ell = s_*} \left(\hat{\mu}(A_\ell, X_\ell, s_*) - \varepsilon - R_\ell\right) \leq \sum_{\ell = 1}^{t-1} \indicator{B_\ell = s_*} \left(\mu(A_\ell, X_\ell) - R_\ell\right), \end{align*} we see that event $E_t$ says that the opposite is true for all states, including the true state $s_*$. Hence, the third term in Eq. \eqref{eqn:regret_event_decomposition_uncertain} is bounded by $n\varepsilon$. Now, consider the last term in Eq. \eqref{eqn:regret_event_decomposition_uncertain}. Let $T_s = \{t \leq n: B_t = s\}$ denote the set of rounds where latent state $s$ is selected.
We have, \begin{align*} \indicator{E, \mathcal{E}}\sum_{t = 1}^n \left(U_t(A_t) - \mu(A_t, X_t)\right) &= \indicator{E, \mathcal{E}}\sum_{s \in S} \sum_{t \in T_s} \left(\hat{\mu}(A_t, X_t, s) - \mu(A_t, X_t) \right) \\ &= n\varepsilon + \indicator{E, \mathcal{E}} \sum_{s \in S} \sum_{t \in T_s} \left(\hat{\mu}(A_t, X_t, s) - \varepsilon - R_t + R_t - \mu(A_t, X_t) \right) \\ &\leq n\varepsilon + \indicator{E, \mathcal{E}} \sum_{s \in S} \left(G_n(s) + \sum_{t \in T_s} \left(R_t - \mu(A_t, X_t) \right)\right) \\ &\leq n\varepsilon + \sum_{s \in S} \left(1 + 2\sigma \sqrt{6N_n(s) \log n}\right)\,. \end{align*} For the first inequality, we use that at the last round $t'$ in which state $s$ is selected, the gap satisfies $G_{t'}(s) \leq \sigma\sqrt{6N_{t'}(s)\log n}$. Accounting for the last round yields $G_{n}(s) \leq \sigma \sqrt{6N_n(s)\log n} + 1$. For the last inequality, we use that $E$ occurs to bound $\sum_{t \in T_s} \left(R_t - \mu(A_t, X_t) \right) \leq \sigma \sqrt{6N_n(s)\log n}$. This yields the desired bound on the total regret, \begin{align*} \mathcal{R}(n) &\leq n\delta + 3|\mathcal{S}| + 2n\varepsilon + 2\sigma\sqrt{6\log n}\left(\sum_{s \in S} \sqrt{N_n(s)}\right) \\ &\leq n\delta + 3|\mathcal{S}| + 2n\varepsilon + 2\sigma\sqrt{6|\mathcal{S}| \log n \sum_{s \in S}N_n(s)} \\ &= n\delta + 3|\mathcal{S}| + 2n\varepsilon + 2\sigma\sqrt{6|\mathcal{S}|n \log n}, \end{align*} where the second inequality follows from the Cauchy–Schwarz inequality. \subsection{Proof of \cref{cor:posterior_regret_uncertain}} Both the latent state $S_* \in \mathcal{S}$ and the model $\theta_* \in \Theta$ are random, and drawn as $S_*, \theta_* \sim P_1$, where the prior $P_1$ is known. In this case, the true model $\theta_*$ is not known to us.
Using marginalized means $\bar{\mu}(a, x, s)$, and $\varepsilon, \delta > 0$ as defined in the statement of the corollary, we write, \begin{align*} G_t(s) = \sum_{\ell = 1}^{t-1} \indicator{B_\ell = s} \left(\bar{\mu}(A_\ell, X_\ell, s) - \varepsilon - R_\ell\right), \end{align*} and $U_t(a) = \max_{s \in C_t}\bar{\mu}(a, X_t, s)$. This is in contrast to $G_t(s)$ and $U_t(a)$ in \ensuremath{\tt mmUCB}\xspace, which use $\hat{\mu}(a, x, s)$ from a single model. Conceptually though, both $\hat{\mu}(a, x, s)$ and $\bar{\mu}(a, x, s)$ are just $\varepsilon$-close point estimates of $\mu(a, x, s)$, due to the assumptions made about the true model $\theta_*$ in the theorem and corollary, respectively. We can rewrite the Bayes regret as $ \mathcal{BR}(n) = \E{}{\mathcal{R}(n; S_*, \theta_*)}, $ where the outer expectation is over $S_*, \theta_* \sim P_1$. The expression inside the expectation can be written as, \begin{align*} \mathcal{R}(n; S_*, \theta_*) &\leq n\delta + \E{}{\indicator{\bar{E}, \mathcal{E}} \sum_{t = 1}^n \mu(A_{t, *}, X_t, S_*, \theta_*) - \mu(A_t, X_t, S_*, \theta_*)} \\ &\hspace{-0.8in}+ \E{}{\indicator{E, \mathcal{E}}\sum_{t = 1}^n \left(\mu(A_{t, *}, X_t, S_*, \theta_*) - U_t(A_{t, *})\right)} + \E{}{\indicator{E, \mathcal{E}} \sum_{t = 1}^n \left(U_t(A_t) - \mu(A_t, X_t, S_*, \theta_*)\right)}\,, \end{align*} where $\mathcal{E}, E, \bar{E}$ are defined as in \cref{sec:thm2_proof}, and we use the decomposition in Eq. \eqref{eqn:posterior_regret_decomposition}. The expressions can be bounded exactly as in \cref{thm:ucb_regret_uncertain}. The upper bound is worst-case and holds for any $S_*, \theta_*$, and thus also holds after taking an expectation over the prior $S_*, \theta_* \sim P_1$. This bounds the Bayes regret, as desired. \section{Extensions} \subsection{Changing Latent State} In non-stationary environments, Garivier and
Moulines proposed two passive adaptations of the UCB algorithm: discounting past observations, or ignoring them completely via a sliding window. We consider the latter, but the former can be analyzed similarly. Let us define $N_{t+1}(\tau, s) = \sum_{\ell = t - \tau + 1}^t \mathbbm{1}[S_\ell = s]$ as the number of times $s$ was chosen within the $\tau$ most recent rounds, and $$\hat{F}_{t+1}(\tau, s) \leftarrow N_{t+1}(\tau, s)^{-1} \sum_{\ell = t - \tau + 1}^t \mathbbm{1}[S_\ell = s] (\mu_s(A_\ell, X_\ell) - R_\ell),$$ as the empirical average difference between the expected and realized reward for choosing latent state $s$ among the most recent rounds. We propose the following UCB algorithm for non-stationary environments. Initialize $\hat{F}_1(\tau, s) = 0$ for each $s \in S$, and window length $\tau > 0$. For each round $t = 1, 2, \hdots,$ \begin{enumerate} \item $C_t \leftarrow \left\{s \in S: \hat{F}_t(\tau, s) < C\sqrt{\frac{\log \min\{\tau, t\}}{N_t(\tau, s)}} \right\}$. \item Observe context $X_t$. For each action $a \in A$, compute $$U_t(a) \leftarrow \max_{s \in C_t} \, \mu_s(X_t, a).$$ \item Select action $A_t \leftarrow \arg\max_{a \in A} U_t(a)$. Let $S_t$ be the corresponding latent state, and $R_t$ the observed reward. \end{enumerate} To simplify the analysis, we first assume a perfect model with independent arms; extensions to context and to an imperfect model are straightforward using the same techniques as for a fixed latent state. Let $s_{t, *}$ and $a_{t, *}$ denote the true latent state and optimal arm for round $t$, and let $B$ be the number of breakpoints, or rounds where the latent state switches. Finally, let $\mathcal{T}$ denote all rounds $t$ such that all rounds $\ell \in [t - \tau + 1, t]$ satisfy $s_{\ell, *} = s_{t, *}$. This includes all rounds that are not too close to a breakpoint.
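The elimination step of this sliding-window variant can be sketched as follows. This is our illustration, with function and variable names chosen here; never-played states are kept in the confidence set by convention, since $\hat{F}_t(\tau, s)$ is undefined for them.

```python
import math
from collections import deque

def confidence_set(window, states, C, tau, t):
    """Compute C_t from the tau most recent rounds (sketch).

    window: deque of (selected state, mu_s(A, X), realized reward) triples,
    truncated elsewhere to the tau most recent rounds."""
    n = {s: 0 for s in states}
    diff = {s: 0.0 for s in states}
    for s, mu, r in window:
        n[s] += 1
        diff[s] += mu - r
    c_t = []
    for s in states:
        if n[s] == 0:
            c_t.append(s)  # states without recent plays cannot be eliminated
            continue
        f_hat = diff[s] / n[s]                               # \hat F_t(tau, s)
        radius = C * math.sqrt(math.log(min(tau, t)) / n[s])
        if f_hat < radius:
            c_t.append(s)
    return c_t
```

A state whose recent realized rewards fall well below its model's predictions accumulates a large $\hat{F}_t(\tau, s)$ and is eliminated once enough evidence is in the window.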
We have that the first term in the regret is bounded by, \begin{align*} \sum_{t=1}^T \E{}{\mu_t(a_{t, *}) - U_t(a_{t, *})} &\leq C B \tau + \sum_{t \in \mathcal{T}} P(\mu_t(a_{t, *}) \geq U_t(a_{t, *})) \\ &\leq C B \tau + C \sum_{t \in \mathcal{T}} P(s_{t, *} \not\in C_t), \end{align*} where we ignore the rounds too close to breakpoints because the upper-confidence estimates are biased there. For $t \in \mathcal{T}$, and $S_t = s_{t, *}$, we know that $\E{}{\mu_{S_t}(A_t, X_t) - R_t} = \E{}{\mu_t(A_t) - R_t} = 0$. We have, \begin{align*} P\left(\hat{F}_t(\tau, s_{t, *}) \geq C\sqrt{\frac{\log \min\{t, \tau\}}{N_t(\tau, s_{t, *})}} \right) &\leq \exp\left(-\frac{2N_t(\tau, s_{t, *})\left(C\sqrt{\frac{\log \min\{t, \tau\}}{N_t(\tau, s_{t, *})}}\right)^2}{C^2}\right). \end{align*} Evaluating the exponent yields $P(s_{t, *} \not\in C_t) \leq \frac{1}{\min\{t, \tau\}^2}$. We have that, \begin{align*} \sum_{t \in \mathcal{T}}P(s_{t, *} \not\in C_t) \leq \sum_{t = 1}^{\tau} \frac{1}{t^2} + \sum_{t = \tau + 1}^T \frac{1}{\tau^2} \leq \frac{\pi^2}{6} + \left\lceil\frac{T}{\tau^2}\right\rceil \end{align*} Now, we have for the second term, \begin{align*} &\sum_{t =1}^T \E{}{U_t(A_t) - \mu_t(A_t)} \\ &\qquad\leq C \sum_{t =1}^T \sum_{s \neq s_{t, *}} P(S_t = s) \\ &\qquad\leq C B \tau + \sum_{t \in \mathcal{T}} \sum_{s \neq s_{t, *}} P(s_{t, *} \not\in C_t) + C \sum_{t \in \mathcal{T}} \sum_{s \neq s_{t, *}} P(s_{t, *} \in C_t) P(S_t = s \mid s_{t, *} \in C_t) \end{align*} The term involving $P(s_{t, *} \not\in C_t)$ can be bounded using the same analysis as above. Let $1 = t_0 < t_1 < \hdots < t_{B} < t_{B + 1} = T$ be the rounds for the $B$ breakpoints.
We can write, \begin{align*} &\sum_{t \in \mathcal{T}} \sum_{s \neq s_{t, *}} P(s_{t, *} \in C_t) P(S_t = s \mid s_{t, *} \in C_t) \\ &\qquad = \sum_{b = 0}^B \sum_{t = t_b}^{t_{b+1} - 1} \sum_{s \neq s_{t, *}} P(s_{t, *} \in C_t) P(S_t = s \mid s_{t, *} \in C_t) \end{align*} For any stationary segment $b$, let $s_{b, *}$ be the true latent state, and $a_{b, *}$ be the optimal arm. Also, define $\Delta_{b, a}$ as the suboptimality gap of arm $a$ in the stationary segment. For round $t \in [t_b, t_{b+1} - 1]$, if $s_{b, *} \in C_t$ and $S_t \neq s_{b, *}$, we notice that, \begin{align*} \E{}{U_t(A_t) - R_t} \geq \mu_t(a_{t, *}) - \mu_t(A_t) = \Delta_{b, A_t}. \end{align*} This is because, when arms are independent, only the best arm under a latent state will be pulled. Using this, we see that for $s \neq s_{t, *}$, $\hat{F}_t(\tau, s)$ is the average of random variables with mean at least $\Delta_{b, A_t}$. From this, we can conclude that, \begin{align*} P\left(\hat{F}_t(\tau, s) < C\sqrt{\frac{\log \min\{t, \tau\}}{N_t(\tau, s)}} \right) &\leq \exp\left(-\frac{2 N_t(\tau, s) \left(\Delta_{b, A_t} - C\sqrt{\frac{\log \min\{t, \tau\}}{N_t(\tau, s)}} \right)^2}{C^2}\right). \end{align*} We see that once $N_t(\tau, s) \geq \frac{2C^2 \log \tau}{\Delta_{b, A_t}^2}$, then we have $C\sqrt{\frac{\log \min \{t, \tau\}}{N_t(\tau, s)}} \leq \frac{\Delta_{b, A_t}}{2}$, and \begin{align*} P\left(\hat{F}_t(\tau, s) < C\sqrt{\frac{\log \min\{t, \tau\}}{N_t(\tau, s)}} \right) &\leq \frac{1}{\tau} \end{align*} So once a suboptimal latent state $S_t \neq s_*$ is played $O\left(\frac{\log \tau}{\Delta_{b, A_t}^2}\right)$ times, it gets ruled out with high probability. Let $a_{s, *} = \arg\max_{a \in A} \mu_s(a)$; then whenever $S_t = s$, we know that $A_t = a_{s, *}$.
Using the above, we can bound, for $s \neq s_*$, \begin{align*} \sum_{t = t_b}^{t_{b+1} - 1} P(S_t = s \mid s_* \in C_t) &\leq \sum_{t = t_b}^{t_{b+1} - 1} P\left(s \in C_t \mid \mu_s(a_{s, *}) \geq \mu(a_*) \right) \\ &\leq \left\lceil\frac{t_{b+1} - t_b}{\tau}\right\rceil + \frac{2C^2 \log \tau}{\Delta_{b, a_{b, *}}^2} \end{align*} Combining yields the total regret, \begin{align*} R(T; \pi^U, \theta) &\leq \sum_{t=1}^T \E{}{\mu(a_*) - U_t(a_*)} + \sum_{t=1}^T \E{}{U_t(A_t) - \mu(A_t)} \\ &\leq 2CB\tau + C \left[\frac{\pi^2}{6} + \sum_{s \in S} \left(\frac{\pi^2}{6} + \left\lceil\frac{T}{\tau^2}\right\rceil + \left\lceil\frac{T}{\tau}\right\rceil + \frac{2C^2 \log \tau}{\Delta_{a_{s, *}}^2} \right) \right]. \end{align*}
\section{Introduction} \label{sec:intro} With societies aging globally, hearing loss is becoming a very common problem worldwide. The development of a hearing loss is typically accompanied by an increasing difficulty in discriminating speech from noise in challenging situations. Consequently, besides amplification, modern hearing instruments offer a broad spectrum of algorithmic possibilities to enhance hearing, especially hearing of speech signals in noisy environments. The signal-to-noise ratio is typically improved in hearing instruments by making use of directional microphones \cite{Ricketts:1999wh,KamkarParsi:tp}. Directional processing, however, has the inherent side effect that the target speaker needs to be in a defined direction, often in the frontal hemisphere, and that sounds emitted from other angles will likely be attenuated. While this assumption can be safely made in many situations, it can be disturbing in others, where the target source cannot be assumed to be in front of the listener. Furthermore, for some instruments directional processing is not an option due to limitations in size and power consumption, e.g., devices to be inserted deeply into the ear canal. \section{State of the Art} Single-channel noise reduction aims to solve this problem, making use of a single microphone signal only. Most noise reduction schemes, however, use limited signal properties of noisy environments, effectively exploiting only first- and second-order statistics and targeting steady-state noises, which results in difficulties when dealing with non-stationary background signals. To overcome this, codebook-based noise reduction was proposed by Kuropatwinski \cite{Kuropatwinski:jz}. This approach, however, lacked robustness -- a property very important for industry-grade medical products -- and was therefore combined with a recursive noise tracker by Rosenkranz and Puder to overcome this limitation~\cite{Rosenkranz:2012jz}.
The use of longer-context audio segments with neural networks was proposed by Hermansky and Sharma \cite{hermansky1999temporal} and has found applications in automatic speech recognition \cite{univis90500975}. In recent years, deep learning methods and deep neural networks (DNNs) have become increasingly popular in speech recognition \cite{Dahl:dx} and speech enhancement. Since deep networks are able to learn a complex, nonlinear mapping function, they are ideal candidates for noise reduction tasks, where complex priors (speech and distortion signal properties) must be modeled. Auto-encoder networks have been used by multiple authors in the field of audio denoising. Lu \textit{et al.} were amongst the first to report successful use of denoising auto-encoders for speech recognition systems \cite{Lu:2013vr}. Xia \textit{et al.} used a denoising auto-encoder to calculate an estimate of clean speech, which is then used in a traditional Wiener filtering approach \cite{bib:xia:WDA}. To cope with missing loss sensitivity at high frequencies, they introduce a frequency-dependent weight, which effectively adjusts the learning rate at the last layer of the auto-encoder. However, as Kumar \textit{et al.} have highlighted \cite{bib:kumar:SpeechEnhancementMultiNoiseDNN}, these studies lack realistic noise scenarios, as the same kind of noise was used for training as for testing. When the network is used for direct speech signal estimation, the loss function should correlate strongly with human perception. This common problem was addressed by Pascual \textit{et al.}, who proposed to use generative adversarial networks, in effect using a discrimination loss instead of the minimum squared error that is typically used in other works \cite{Pascual:2017ug}. Besides auto-encoders, traditional fully-connected network topologies have been used for audio denoising.
Xu \textit{et al.} have proposed a DNN-based regression approach for frequency bin-wise prediction of the enhanced signal amplitude in a filter bank setting, using the noisy phase for signal reconstruction \cite{Xu:2014kl}. In their work, they have also shown that the DNN can profit from the inclusion of a longer acoustic context. A very important property of a hearing assistance device is its overall latency, especially for mild to moderate hearing losses, where the acoustic coupling is typically open and thus a strong component of the direct sound reaches the ear drum. Here, comb filter effects are introduced by the superposition of the processed signal and the direct signal. Latencies below 10\,ms are typically tolerated in this regard, with lower latencies being tolerated better by subjects. This places stringent latency demands on every part of the signal processing chain, especially the filter banks and noise reduction schemes. None of the approaches discussed above meets this constraint together with the requirement for online processing, which motivates this work. \section{Material and Methods} We used 49 real-world noise signals, including non-stationary signals, recorded at various places in Europe using hearing aid microphones in a receiver-in-the-canal-type hearing instrument shell (Signia Pure 312, Sivantos GmbH, Erlangen, Germany) with calibrated recording equipment at a sampling rate of 24\,kHz. The signals were mixed with German sentences (N=260) from the EUROM database \cite{bib:eurom:database}, which were upsampled to 24\,kHz for this purpose. \subsection{Dataset Generation} Since the noise conditions in our dataset were recorded in real situations, levels should not be modified significantly.
The signal mixing process can be expressed as: \begin{equation} x = g_\textrm{L} \left( n_0 + g_\textrm{S} s_0 \right) = g_\textrm{L} n_0 + g_\textrm{L} g_\textrm{S} s_0 = s + n \end{equation} where the original speech signal $s_0$ is adjusted in level to reach a defined SNR using $g_\textrm{S}$. For data augmentation, we adapt the idea of Kumar \textit{et al.}\cite{bib:kumar:SpeechEnhancementMultiNoiseDNN} and combine up to four noises with different offsets within the original files into the noise mixture $n_0$. The noise mixture is adjusted in level ($g_\textrm{L} \in \{-6, 0, 6\}\,\mathrm{dB}$) to increase dataset variance, yielding a signal mixture with realistic level information. For training, signals with SNRs of $\{-100,-5,0,5,10,20\}\,\mathrm{dB}$ were generated with an equal distribution. The train-validation-test split for our machine learning approach was performed at the level of the original signals, i.e. no speech or noise signal contained in the training set is part of the validation or test set. \subsection{Signal Processing Toolchain} Our toolchain, as depicted in Fig. \ref{fig:processingChain}, starts with analysis filter banks (AFB) that are used to process the clean speech signal $s$, the noise signal $n$ and the noisy mixture $x$. A standard uniform polyphase filter bank with 48 frequency bins designed for hearing aid applications is utilized for this \cite{bauml2008uniform}, yielding signals $X(k,f)$, $S(k,f)$ and $N(k,f)$ with time index $k$ and frequency index $f$. As a first step after the filter bank, the log power spectrum is calculated, followed by a normalization step. Then, the signal including the temporal context is fed into a fully connected network topology with 3 hidden layers and 2048 nodes per layer. Finally, a 48-channel gain vector $G_w$ is predicted and applied to the noisy signal $X(k,f)$, which is in turn synthesized again (SFB).
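The level adjustment in the mixing equation can be sketched as follows, assuming RMS-based level computation (the exact calibration is not specified here; function and variable names are illustrative):

```python
import numpy as np

def mix_at_snr(s0, n0, snr_db, g_l_db=0.0):
    """Scale speech s0 against noise n0 to reach snr_db via g_S, then
    apply the mixture level gain g_L (cf. x = g_L * (n0 + g_S * s0))."""
    rms = lambda v: np.sqrt(np.mean(v ** 2))
    g_s = rms(n0) / rms(s0) * 10.0 ** (snr_db / 20.0)   # speech gain for target SNR
    g_l = 10.0 ** (g_l_db / 20.0)                       # mixture level gain
    s, n = g_l * g_s * s0, g_l * n0                     # scaled components
    return s + n, s, n

rng = np.random.default_rng(0)
x, s, n = mix_at_snr(rng.standard_normal(24_000), rng.standard_normal(24_000), snr_db=5.0)
snr = 20.0 * np.log10(np.sqrt(np.mean(s ** 2)) / np.sqrt(np.mean(n ** 2)))
# the realized SNR equals the 5 dB target by construction
```

Note that $g_\textrm{L}$ scales speech and noise alike, so it changes the absolute level of the mixture without affecting the SNR.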
This structure is especially suited for hearing instrument applications, as gain application can well be combined with other algorithms working on the signal chain, such as automatic gain control for hearing loss compensation, and could be interpreted as known operator learning \cite{maier2017precision}. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{framework} \caption{Processing chain during DNN training. Signals are fed into an analysis filter bank, the mixture is normalized and statistical values are provided to the network.} \label{fig:processingChain} \end{figure} \subsection{Asymmetric Temporal Context} As previously stated, one constraint in hearing instrument signal processing is the limited latency a device is allowed to produce \cite{bauml2008uniform}. Besides the filter bank design, this constraint also strongly restricts the algorithmic lookahead for noise reduction schemes. We assumed an overall latency of $8\,$ms to be tolerable, where approximately $6\,$ms are consumed by the analysis and synthesis filter bank. Context information from past samples is in turn only limited by the memory and processing constraints of the instrument, which typically scale with available chip technology. The time context of this work can thus be divided into two components: \begin{equation} \tau = \tau_1 + 1 + \tau_2 \end{equation} where $\tau_1$ is the lookback time constant and $\tau_2$ is the lookahead time constant. Together with the current frame they define the input matrix to the network. For our setup, $\tau_1 \gg \tau_2$. \subsection{Normalization} Normalization plays a key role in deep learning, since it positively influences convergence behavior and aids generalization.
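The asymmetric context buffer and the buffer statistics used for normalization can be sketched as follows (a minimal illustration; frame counts and names are illustrative, and edge handling at the signal boundaries is omitted):

```python
import numpy as np

def context_block(X, k, tau1, tau2):
    """Rows k-tau1 .. k+tau2 of the filter-bank feature matrix X
    (shape: frames x bins): tau1 lookback frames, the current frame
    and tau2 lookahead frames, i.e. tau = tau1 + 1 + tau2 rows."""
    return X[k - tau1 : k + tau2 + 1]

def normalize(buf):
    """Frequency bin-wise mean normalization over the available buffer;
    mu and sigma are returned as additional input features."""
    mu, sigma = buf.mean(axis=0), buf.std(axis=0)
    return buf - mu, mu, sigma

X = np.random.default_rng(0).standard_normal((100, 48))  # toy log-power frames
buf = context_block(X, k=50, tau1=10, tau2=2)            # tau1 >> tau2
buf_norm, mu, sigma = normalize(buf)
# buf.shape == (13, 48); buf_norm has zero mean in every frequency bin
```

Since the statistics are computed only on the buffer already available at time $k$ (plus the small lookahead), no global signal knowledge is required.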
Given the filter bank representation $X(k,f)$ of the input signal $x(n)$, the signal is normalized as: \begin{equation} \begin{split} X_\mathrm{norm}(k,f) &= X(k,f) - \frac{\sum_{i=-\tau_1}^{\tau_{2}} X(k+i,f)}{\tau}\\ &= X(k,f) - \mu(k,f) \end{split} \end{equation} This frequency bin-wise normalization is thus calculated on the available buffer, given the temporal context. This normalization scheme requires no global information and thus enables on-line processing. The calculated mean value vector $\boldsymbol{\mu}$ is provided to the model as further contextual information, alongside the frequency bin-wise standard deviation $\boldsymbol{\sigma}$. \subsection{Network Details and Training} We observed that network convergence as well as the RMSE loss were positively influenced by using rectified linear unit (ReLU) activation functions within the hidden layers. The system was trained on an Nvidia Titan Xp GPU using TensorFlow and the Adam optimizer with an initial learning rate of $10^{-5}$ for $10$ epochs. \begin{figure} \includegraphics[width=\linewidth]{context_loss} \caption{Validation loss as a function of lookback time $\tau_1$ and lookahead time $\tau_2$ after 10 epochs of training. Additionally, an exponential curve fit is shown.} \label{context_loss} \end{figure} \section{Results and Discussion} When comparing the loss for different time constants of the asymmetric context (Fig. \ref{context_loss}), we find that the network profits from increased information from both past and future. The benefit w.r.t. past context saturates at around 200--300\,ms. This is in line with the findings of Xu \textit{et al.}, and could hint at the rate of speech syllables ($\approx 4\,$Hz) \cite{houtgast1985review}. Looking at the example of a fricative (Fig.
\ref{compare_tau}) further hints towards this interpretation: With almost no energy at the low frequencies, the network with greater context is still able to differentiate between the noisy fricative and a real noise signal. We compared the results of our method with a state-of-the-art noise reduction scheme based on recursive minimum tracking \cite{bib:hansler:fancytrack} that is applied in commercially available hearing instruments. Another comparison was made against an idealized scheme, where the optimal Wiener gain is applied, and, as an anchor, a badly tuned minimum statistics estimator. For all settings, the maximum attenuation of the algorithms was limited to $14\,\mathrm{dB}$. \begin{figure} \includegraphics[width=\linewidth]{contexts} \caption{Exemplary comparison of a fricative with the ground truth comparing different temporal contexts. Top: ground truth, middle: $\tau_1=30\,\mathrm{ms}$, bottom: $\tau_1=200\,\mathrm{ms}$.} \label{compare_tau} \end{figure} \subsection{Objective Metrics} For numerical assessment of our complete test data set, we use the short-term objective intelligibility metric from Taal \textit{et al.} \cite{bib:taal:STOI}. To improve the visibility of the results, we present the difference between the original noisy signal and the enhanced signal, denoted as $\Delta$STOI (see Fig. \ref{ObjectiveEvaluation}). For the DNN approach in low SNR conditions, we see that $\Delta$STOI is on average close to zero, indicating no gains in intelligibility. For SNRs closer to real-world conditions, the value improves, with saturating effects for higher SNRs. For the baseline approach, the average value is consistently negative. Following the formulation of Chen \textit{et al.}, we further include the noise reduction (NR) and speech distortion (SD) metrics in our evaluation \cite{bib:chen:sdnr}.
Comparing the baseline to the gain predicted by the DNN, we find that a higher noise reduction can be achieved while at the same time producing lower speech distortion. \begin{figure} \includegraphics[width=\linewidth]{objective_200.eps} \caption{Objective measures of ideal Wiener gains, DNN prediction ($\tau_1=200\,\mathrm{ms},\tau_2=2\,\mathrm{ms}$) and recursive minimum tracking baseline. Top: differences in STOI index, middle: noise reduction (NR), bottom: speech distortion (SD).} \label{ObjectiveEvaluation} \end{figure} \subsection{Subjective Evaluation} Even though objective measures like STOI are designed to correlate well with perception, their validity can always be questioned for the particular noise reduction scheme being applied. Further, quality ratings of a noise reduction system should always be made relative to an upper bound, limited by the SNR and the achievable optimum performance given the general approach. In audio coding, the evaluation is subject to a similar problem: the original audio signal might be of mediocre quality, or, given a non-optimal encoding, some signal parts might even be perceived as better in the processed signal than in the original. For this domain, the MUlti Stimulus test with Hidden Reference and Anchor (MUSHRA) \cite{liebetrau2014revision} was developed and is widely used. We apply this test to the domain of noise reduction as well, because we find a similar setup: We also have a reference signal that may be perceived as non-optimal, we can derive an anchor signal that should be perceived as being of worse quality, and we want to compare multiple signals relative to each other. For our case, especially the comparison with the reference is of great importance, since a Wiener filter-based scheme will always have the noisy phase in the output signal and is thus restricted in quality for poor SNR conditions.
The test was carried out at the Sivantos R\&D site in Erlangen, Germany. The vast majority of the (N=20) subjects participating in the test were audio engineering professionals and thus highly qualified for signal quality assessment. Signals were presented over a web interface (webMUSHRA implementation of Schoeffler \textit{et al.} \cite{schoeffler2015towards}) using headphones in a calm office environment. For each SNR condition, 4 input signals of 12\,s length were randomly picked from the test dataset and processed for all test conditions. To exclude the initialization phase of the recursive minimum tracking baseline, all signals were cut after initialization. Looking at the results (Fig.~\ref{subjective}), we find that the median subjective quality rating improves over the baseline in all conditions. Further, we find ceiling effects for the DNN-generated signal, indicating a quality equal to the reference for some conditions (cf. 5\,dB and 10\,dB). \begin{figure} \includegraphics[width=\linewidth]{subjectiveResults} \caption{Subjective results (MUSHRA test, N=20) of the proposed algorithm ($\tau_1=200\,\mathrm{ms},\tau_2=2\,\mathrm{ms}$) against the minimum recursive tracking baseline and an ideal Wiener filter gain (ref). } \label{subjective} \end{figure} \section{Summary} In this work we presented a deep learning-based approach for noise reduction that is able to work within the restrictive conditions of hearing instrument signal processing. The subjective and objective evaluation showed improvements over a recursive minimum tracking approach in a wide range of SNR conditions using realistic noise scenarios.
\section*{Introduction} Let $(M,\nu,T)$ be a dynamical system, that is a measure space $(M,\nu)$ endowed with a measurable transformation $T:M\rightarrow M$ which preserves the measure $\nu$. The mixing properties deal with the asymptotic behaviour, as $n$ goes to infinity, of integrals of the following form $$C_n(f,g):=\int_Mf.g\circ T^n\, d\nu,$$ for suitable observables $f,g:M\rightarrow \mathbb C$. Mixing properties of probability preserving dynamical systems have been studied by many authors. It is a way to measure how chaotic the dynamical system is. A probability preserving dynamical system is said to be mixing if $C_n(f,g)$ converges to $\int_Mf\, d\nu\, \int_Mg\, d\nu$ for all square integrable observables $f,g$. When a probability preserving system is mixing, a natural question is to study the decorrelation rate, i.e. the rate at which $C_n(f,g)$ converges to zero when $f$ or $g$ has null expectation. This crucial question is often a first step before proving probabilistic limit theorems (such as the central limit theorem and its variants). The study of this question has a long history. Such decays of covariance have been studied for wide classes of smooth observables $f,g$ and for many probability preserving dynamical systems. In the case of the Sinai billiard, such results and further properties have been established in \cite{Sinai70,BS80,BS81,BCS90,BCS91,young,Chernov,SV1,SV2}. We are interested here in the study of mixing properties when the invariant measure $\nu$ is $\sigma$-finite. In this context, as noticed in \cite{KS}, there is no satisfactory notion of mixing. Nevertheless the question of the rate of mixing for smooth observables is natural. A first step in this direction is to establish results of the following form: \begin{equation}\label{MIXING} \lim_{n\rightarrow +\infty}{\alpha_n}C_n(f,g)=\int_Mf\, d\nu \, \int_Mg\, d\nu\, .
\end{equation} Such results have been proved in \cite{Thaler,MT,Gouezel,BT,LT} for a wide class of models and for smooth functions $f,g$, using induction on a finite measure subset of $M$. An alternative approach, specific to the case of $\mathbb Z^d$-extensions of probability preserving dynamical systems, has been pointed out in \cite{FP17}. The idea therein is that, in this particular context, \eqref{MIXING} is related to a precise version of the local limit theorem. In the particular case of the $\mathbb Z^2$-periodic Sinai billiard with finite horizon, it has been proved in \cite{FP17} that $$C_n(f,g)=\frac {c_0} n\int_M f\, d\nu\, \int_M g\, d\nu+o(n^{-1})\, , $$ for some explicit constant $c_0$, for some dynamically Lipschitz functions, including functions with full support in $M$. This paper is motivated by the question of high order expansion of mixing and by the study of the mixing rate for observables with null integrals. This last question can be seen as a decorrelation rate in the infinite measure. Let us mention the fact that it has been proved in \cite{damiensoaz}, for the billiard with finite horizon, that sums $\sum_{k\in\mathbb Z}\int_Mf.f\circ T^k\, d\nu$ are well defined for some observables $f$ with null expectation. In the present paper, we use the approach of \cite{FP17} to establish, in the context of the $\mathbb Z^2$-periodic Sinai billiard with finite horizon, a high order mixing result of the following form: \begin{equation}\label{DEVASYMP} C_n(f,g)=\sum_{m=0}^{K-1}\frac{c_m(f,g)}{n^{1+m}}+o(n^{-K}) \, . \end{equation} This estimate enables the study of the rate of convergence of $nC_n(f,g)$ to $\int_Mf\, d\nu\, \int_Mg\, d\nu$ and, most importantly, it enables the study of the rate of decay of $C_n(f,g)$ for functions $f$ or $g$ with integral 0. In general, if $f$ or $g$ has zero integral we have $$C_n(f,g)\sim\frac{c_1(f,g)}{n^{2}}\, ,$$ but it may happen that $$C_n(f,g)\sim\frac{c_2(f,g)}{n^{3}} \, ,$$ and even that $C_n(f,g)=o(n^{-3})$.
For example, \eqref{DEVASYMP} gives immediately that, if $\int_Mf\, d\nu\int_Mg\, d\nu\ne 0$, then \begin{eqnarray} C_n(f-f\circ T,g)&=&C_n(f,g)-C_{n-1}(f,g)\nonumber\\ &\sim&-c_0\frac{\int_Mf\, d\nu.\int_Mg\, d\nu}{n^2}=\frac{c_1(f-f\circ T,g)}{n^2}\label{Cncobg} \end{eqnarray} and \begin{eqnarray*} C_n(2f-f\circ T-f\circ T^{-1},g)&=&C_n(f-f\circ T,g-g\circ T)\\ &=&2C_n(f,g)-C_{n-1}(f,g)-C_{n+1}(f,g)\\ &\sim& -\frac {2c_0}{n^3}\int_Mf\, d\nu \int_Mg\, d\nu=\frac{c_2(f-f\circ T,g-g\circ T)}{n^3}\, . \end{eqnarray*} General formulas for the dominating term will be given in Theorem \ref{MAIN}, Remark \ref{RQE} and Corollary \ref{coroMAIN}. In particular, $c_1(f,g)$ and $c_2(f,g)$ will be given explicitly. We point out the fact that the method we use is rather general in the context of $\mathbb Z^d$-extensions over dynamical systems with good spectral properties, and that, to our knowledge, these are the first results of this kind for dynamical systems preserving an infinite measure.\medskip We establish moreover an estimate of the following form for smooth observables of the $\mathbb Z^2$-periodic Sinai billiard with infinite horizon: $$C_n(f,g)=\frac {c_0} {n\log n}\int_M f\, d\nu\, \int_M g\, d\nu+o((n\log n)^{-1})\, . $$ The paper is organized as follows. In Section \ref{sec:model}, we present the model of the $\mathbb Z^2$-periodic Sinai billiard and we state our main results for this model (finite/infinite horizon). In Section \ref{sec:GENE}, we state general mixing results for $\mathbb Z^d$-extensions of probability preserving dynamical systems for which the Nagaev-Guivarc'h perturbation method can be implemented. In Section \ref{sec:young}, we recall some facts on the towers constructed by Young for the Sinai billiards. In Section \ref{sec:MAIN}, we prove our main results for the billiard with finite horizon (see also Appendix \ref{sec:coeff} for the computation of the first coefficients).
In Section \ref{sec:infinite}, we prove our result for the billiard with infinite horizon. \section{Main results for $\mathbb Z^2$-periodic Sinai billiards}\label{sec:model} Let us introduce the $\mathbb Z^2$-periodic Sinai billiard $(M,\nu,T)$. Billiard systems model the behaviour of a point particle moving at unit speed in a domain $Q$ and bouncing off $\partial Q$ according to the Descartes reflection law (the incident angle equals the reflected angle). We assume here that $Q:=\mathbb R^2\setminus\bigcup_{\ell\in\mathbb Z^2}\bigcup_{i=1}^I (O_i+\ell)$, with $I\ge 2$ and where $O_1,...,O_I$ are convex bounded open sets (the boundaries of which are $C^3$-smooth and have non-null curvature). We assume that the closures of the obstacles $O_i+\ell$ are pairwise disjoint. The billiard is said to have {\bf finite horizon} if every line in $\mathbb R^2$ meets $\partial Q$. Otherwise it is said to have {\bf infinite horizon}. We consider the dynamical system $(M,\nu,T)$ corresponding to the dynamics at reflection times, which is defined as follows. Let $M$ be the set of reflected vectors off $\partial Q$, i.e. $$M:=\{(q,\vec v)\in\partial Q\times S^1\ :\ \langle \vec n(q),\vec v\rangle\ge 0\},$$ where $\vec n(q)$ stands for the unit normal vector to $\partial Q$ at $q$ pointing into $Q$. We decompose this set into $M:=\bigcup_{\ell\in\mathbb Z^2}\mathcal C_\ell$, with $$\mathcal C_\ell:=\left\{(q,\vec v)\in M\ :\ q\in\bigcup_{i=1}^I (\partial O_i+\ell)\right\}.$$ The set $\mathcal C_\ell$ is called the $\ell$-cell. We define $T:M\rightarrow M$ as the transformation mapping a reflected vector at a reflection time to the reflected vector at the next reflection time. We consider the measure $\nu$ absolutely continuous with respect to the Lebesgue measure on $M$, with density proportional to $(q,\vec v)\mapsto \langle \vec n(q),\vec v\rangle$ and such that $\nu(\mathcal C_{0})=1$.
Because of the $\mathbb Z^2$-periodicity of the model, there exists a transformation $\bar T:\mathcal C_{ 0}\rightarrow\mathcal C_{ 0}$ and a function $\kappa:\mathcal C_{ 0}\rightarrow\mathbb Z^2$ such that \begin{equation}\label{skewproduct} \forall ((q,\vec v),\ell)\in\mathcal C_{ 0}\times\mathbb Z^2,\ T(q+\ell, \vec v)=\left(q'+\ell+\kappa(q,\vec v),\vec v'\right),\ \mbox{if}\ \bar T (q,\vec v)=(q',\vec v'). \end{equation} This allows us to define a probability preserving dynamical system $(\bar M,\bar\mu,\bar T)$ (the Sinai billiard) by setting $\bar M:=\mathcal C_{ 0}$ and $\bar\mu=\nu_{|\mathcal C_{ 0}}$. Note that \eqref{skewproduct} means that $(M,\nu,T)$ can be represented by the $\mathbb Z^2$-extension of $(\bar M,\bar\mu,\bar T)$ by $\kappa$. In particular, iterating \eqref{skewproduct} leads to \begin{equation}\label{skewproductn} \forall ((q,\vec v),\ell)\in\mathcal C_{0}\times\mathbb Z^2,\ T^n(q+\ell, \vec v)=\left(q'_n+\ell+S_n(q,\vec v),\vec v'_n\right), \end{equation} if $\bar T^n (q,\vec v)=(q'_n,\vec v'_n)$ and with the notation $$ S_n:=\sum_{k=0}^{n-1}\kappa\circ \bar T^k.$$ The set of tangent reflected vectors $\mathcal S_0$ given by $$\mathcal S_0:=\{(q,\vec v)\in M\ :\ \langle \vec v,\vec n(q)\rangle=0\} $$ plays a special role in the study of $T$. Note that $T$ defines a $C^1$-diffeomorphism from $M\setminus (\mathcal S_0\cup T^{-1}(\mathcal S_0))$ to $M\setminus (\mathcal S_0\cup T(\mathcal S_0))$. Statistical properties of $(\bar M,\bar\mu,\bar T)$ have been studied by many authors since the seminal article \cite{Sinai70} by Sinai.
In the finite horizon case, limit theorems have been established in \cite{BS81,BCS91,young,Chernov}, including the convergence in distribution of $(S_n/\sqrt{n})_n$ to a centered gaussian random variable $B$ with nondegenerate variance matrix $\Sigma^2$ given by: $$\Sigma^2:=\sum_{k\in\mathbb Z}\mathbb E_{\bar\mu}[\kappa\otimes\kappa\circ \bar T^k]\, ,$$ where we used the notation $X\otimes Y$ for the matrix $(x_iy_j)_{i,j}$, for $X=(x_i)_i,Y=(y_j)_j\in\mathbb C^2$. Moreover a local limit theorem for $S_n$ has been established in \cite{SV1} and some of its refinements have been stated and used in \cite{DSV,FP09a,FP09b,ps10} with various applications. Recurrence and ergodicity of this model follow from \cite{JPC,Schmidt,SV1,Simanyi,FP00}. In the infinite horizon case, a result of exponential decay of correlations has been proved in \cite{Chernov}. A nonstandard central limit theorem (with normalization in $\sqrt{n\log n}$) and a local limit theorem have been established in \cite{SV2}, ensuring recurrence and ergodicity of the infinite measure system $(M,\nu,T)$. This result states in particular that $(S_n/\sqrt{n\log n})_n$ converges in distribution to a centered gaussian distribution with variance $\Sigma_\infty^2$ given by $$\Sigma_\infty^2:=\sum_{x\in \mathcal S_0\,:\,\bar T x=x}\frac{d_x^2}{2|\kappa(x)|\, \sum_{i=1}^I|\partial O_i|}(\kappa(x))^{\otimes 2}\, ,$$ where $d_x$ is the width of the corridor corresponding to $x$. Our main results provide mixing estimates for dynamically Lipschitz functions. Let us introduce this class of observables. Let $\xi\in(0,1)$. We consider the metric $d_\xi$ on $M$ given by $$\forall x,y\in M,\quad d_\xi(x,y):=\xi^{s(x,y)},$$ where $s$ is a separation time defined as follows: $s(x,y)$ is the maximum of the integers $k\ge 0$ such that $x$ and $y$ lie in the same connected component of $M\setminus \bigcup_{j=-k}^kT^{-j}\mathcal S_0$.
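The series defining $\Sigma^2$ (a Green--Kubo formula) can be illustrated numerically by truncating the sum of empirical autocovariances along a stationary sample path. A toy sketch, with i.i.d. $\mathbb Z^2$-valued steps standing in for $\kappa\circ\bar T^k$ (in which case the series reduces to $\mathbb E_{\bar\mu}[\kappa\otimes\kappa]$); all names are illustrative:

```python
import numpy as np

def green_kubo(kappa, K):
    """Truncated Green-Kubo estimator: Sigma^2 is approximated by the sum
    over |k| <= K of the empirical autocovariances E[kappa (x) kappa o T^k]."""
    kappa = kappa - kappa.mean(axis=0)
    N = len(kappa)
    cov = lambda k: kappa[:N - k].T @ kappa[k:] / (N - k)
    sigma2 = cov(0)
    for k in range(1, K + 1):
        c = cov(k)
        sigma2 += c + c.T          # lags k and -k (transposes of each other)
    return sigma2

rng = np.random.default_rng(1)
steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)], dtype=float)
kappa = steps[rng.integers(0, 4, size=200_000)]   # i.i.d. toy "free flights"
S2 = green_kubo(kappa, K=5)
# S2 is close to 0.5 * I, the exact Sigma^2 for these i.i.d. steps
```

For a genuinely correlated sequence, the lag terms $k\ne 0$ contribute, and the truncation level $K$ has to grow with the correlation length.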
For every $f:M\rightarrow \mathbb C$, we write $L_\xi(f)$ for the Lipschitz constant with respect to $d_\xi$: $$L_\xi(f):=\sup_{x\ne y}\frac{|f(x)-f(y)|}{d_\xi(x,y)}\, . $$ We then set $$\Vert f\Vert_{(\xi)}:=\Vert f\Vert_\infty+L_\xi(f)\, . $$ Before stating our main result, let us introduce some additional notations. We will work with symmetric multilinear forms. For any $A=(A_{i_1,...,i_m})_{(i_1,...,i_m)\in\{1,2\}^m}$ and $B=(B_{i_1,...,i_k})_{(i_1,...,i_k)\in\{1,2\}^k}$ with complex entries ($A$ and $B$ are identified respectively with an $m$-multilinear form and a $k$-multilinear form on $\mathbb C^2$), we define $A\otimes B$ as the element $C$ of $\mathbb C^{\{1,2\}^{m+k}}$ (identified with an $(m+k)$-multilinear form on $\mathbb C^2$) such that $$\forall i_1,...,i_{m+k}\in\{1,2\},\quad C_{(i_1,...,i_{m+k})}=A_{(i_1,...,i_{m})}\, B_{(i_{m+1},...,i_{m+k})}\, .$$ For any $A=(A_{i_1,...,i_m})_{(i_1,...,i_m)\in\{1,2\}^m}$ and $B=(B_{i_1,...,i_k})_{(i_1,...,i_k)\in\{1,2\}^k}$ symmetric with complex entries with $k\le m$, we define $A* B$ as the element $C$ of $\mathbb C^{\{1,2\}^{m-k}}$ (identified with an $(m-k)$-multilinear form on $\mathbb C^2$) such that $$\forall i_1,...,i_{m-k}\in\{1,2\},\quad C_{(i_1,...,i_{m-k})}=\sum_{i_{m-k+1},...,i_m\in\{1,2\}}A_{(i_1,...,i_{m})}\, B_{(i_{m-k+1},...,i_{m})}\, .$$ We identify naturally vectors in $\mathbb C^2$ with $1$-linear forms and symmetric matrices with symmetric bilinear forms. For any $C^m$-smooth function $F:\mathbb C^2\rightarrow\mathbb C$, we write $F^{(m)}$ for its $m$-th differential, which is identified with an $m$-linear form on $\mathbb C^2$. We write $A^{\otimes k}$ for the product $A\otimes...\otimes A$.
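In coordinates, $\otimes$ and $*$ are simply the tensor product and the contraction of arrays indexed by $\{1,2\}^m$. A NumPy sketch (illustrative only), which also checks numerically the identity $A*(B\otimes C)=(A*B)*C$ for symmetric forms:

```python
import numpy as np
from itertools import permutations

def otimes(A, B):
    """(A otimes B)_{i_1..i_{m+k}} = A_{i_1..i_m} * B_{i_{m+1}..i_{m+k}}."""
    return np.tensordot(A, B, axes=0)

def star(A, B):
    """A * B: contract the last k indices of the rank-m array A against
    the k indices of B (k <= m), leaving a rank-(m-k) array."""
    return np.tensordot(A, B, axes=B.ndim)

def symmetrize(A):
    """Average over all index permutations (symmetric multilinear form)."""
    perms = list(permutations(range(A.ndim)))
    return sum(np.transpose(A, p) for p in perms) / len(perms)

rng = np.random.default_rng(2)
A = symmetrize(rng.standard_normal((2,) * 4))   # symmetric 4-linear form
B = symmetrize(rng.standard_normal((2, 2)))     # symmetric bilinear form
C = symmetrize(rng.standard_normal((2, 2)))
lhs = star(A, otimes(B, C))   # A * (B otimes C)
rhs = star(star(A, B), C)     # (A * B) * C
# lhs == rhs for symmetric forms with m >= k + l (here m = 4, k = l = 2)
```

The full symmetry of $A$ is what allows the contraction to be split into successive contractions in either order.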
Observe that, with these notations, Taylor expansions of $F$ at $0$ are simply written $$\sum_{k=0}^m F^{(k)}(0)*x^{\otimes k}\, .$$ It is also worth noting that $A* (B\otimes C)=(A*B)*C$, for every $A,B,C$ corresponding to symmetric multilinear forms with respective ranks $m,k,\ell$ with $m\ge k+\ell$. We extend the definition of $\kappa$ to $M$ by setting $\kappa((q+\ell,\vec v))=\kappa(q,\vec v)$ for every $(q,\vec v)\in\bar M$ and every $\ell\in\mathbb Z^2$. For every $k\in\mathbb Z$ and every $x\in M$, we write $\mathcal I_k(x)$ for the label in $\mathbb Z^2$ of the cell containing $T^kx$, i.e. $\mathcal I_k$ is the label of the cell in which the particle is at the $k$-th reflection time. It is worth noting that, for $n\ge 0$, we have $\mathcal I_n-\mathcal I_0=\sum_{k=0}^{n-1}\kappa\circ T^k$ and $\mathcal I_{-n}-\mathcal I_0=-\sum_{k=-n}^{-1}\kappa\circ T^k$. \medskip Now let us state our main results, the proofs of which are postponed to Section \ref{sec:MAIN}. We start by stating our result in the infinite horizon case, and then we will present sharper results in the finite horizon case. \subsection{$\mathbb Z^2$-periodic Sinai billiard with infinite horizon} \begin{theorem} \label{horizoninfini} Let $(M,\nu, T)$ be the $\mathbb Z^2$-periodic Sinai billiard with infinite horizon. Suppose that the set of corridor free flights $\{\kappa(x),\ x\in \mathcal S_0,\ \bar T x=x\}$ spans $\mathbb R^2$. Let $f,g:M\rightarrow\mathbb C$ be two dynamically Lipschitz continuous functions (with respect to $d_\xi$) such that \begin{equation}\label{toto} \sum_{\ell\in\mathbb Z^2}\left(\Vert f \mathbf 1_{\mathcal C_\ell}\Vert_{\infty}+\Vert g \mathbf 1_{\mathcal C_\ell}\Vert_{\infty}\right)<\infty\, .
\end{equation} Then $$\int_Mf.g\circ T^n\, d\nu=\frac{1}{2\pi\sqrt{\det\Sigma_\infty^2}\, n\log n}\left(\int_Mf\, d\nu\, \int_Mg\, d\nu+o(1)\right)\, .$$ \end{theorem} \subsection{$\mathbb Z^2$-periodic Sinai billiard with finite horizon} We first state our result providing an expansion of every order for the mixing (see Theorem \ref{MAIN} and Corollary \ref{coroMAIN} for more details). \begin{theorem}\label{PRINCIPAL} Let $K$ be a positive integer. Let $f,g:M\rightarrow\mathbb C$ be two dynamically Lipschitz continuous observables such that \begin{equation*} \sum_{\ell\in\mathbb Z^2}|\ell|^{2K-2}(\Vert f\mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)}+\Vert g\mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)})<\infty\, . \end{equation*} Then there exist $c_0(f,g),...,c_{K-1}(f,g)$ such that \[ \int_Mf.g\circ T^n\, d\nu =\sum_{m=0}^{K-1}\frac{c_m(f,g)}{n^{1+m}}+o(n^{-K})\, . \] \end{theorem} In the following theorem, we make the expansion of order 2 explicit. \begin{theorem}\label{MAINbis} Let $f,g:M\rightarrow\mathbb R$ be two bounded observables such that $$\sum_{\ell\in\mathbb Z^2}|\ell|^{2}(\Vert f\mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)}+\Vert g\mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)})<\infty\, .$$ Then \begin{eqnarray} \int_M f.g\circ T^n\, d\nu &=&\frac 1{2\pi\sqrt{\det\Sigma^2}}\left\{\frac{1}{n}\int_M f\, d\nu \, \int_M g\, d\nu + \frac {1}{2\, n^2}\, \Sigma^{-2}*\tilde {\mathfrak A}_2(f,g)\nonumber\right.\\ &\ & \left.+\frac 1{4!\, n^2}\int_Mf\, d\nu \, \int_Mg\, d\nu \, (\Sigma^{-2})^{\otimes 2}* \Lambda_4\right\} +o(n^{-2})\, , \label{MAIN2} \end{eqnarray} with $\Sigma^{-2}=(\Sigma^2)^{-1}$ and \[ \tilde {\mathfrak A}_2(f,g):= -\int_Mf\, d\nu \, \mathfrak B_2^-(g)-\int_Mg\, d\nu\, \mathfrak B_2^+(f)-\int_Mf\, d\nu\int_Mg\, d\nu\, \mathfrak B_0+2\, \mathfrak B_1^+(f)\otimes\mathfrak B_1^-(g)\, , \] \[ \mathfrak B_2^+(f):=\lim_{m\rightarrow +\infty} \int_Mf.\left(\mathcal I_m^{\otimes 2}-m\Sigma^2\right)\, d\nu \, , \] \[ \mathfrak B_2^-(g):=\lim_{m\rightarrow -\infty}\int_M
g.\left(\mathcal I_m^{\otimes 2}-|m|\Sigma^2\right)\, d\nu\, , \] \[ \mathfrak B_1^+(f):=\lim_{m\rightarrow +\infty}\int_M f.\mathcal I_m\, d\nu \, , \quad \mathfrak B_1^-(g):=\lim_{m\rightarrow -\infty}\int_M g.\mathcal I_m\, d\nu\, , \] \[ \mathfrak B_0=\lim_{m\rightarrow +\infty}(m\Sigma^2-\mathbb E_{\bar\mu}[S_m^{\otimes 2}]) \] and \[ \Lambda_4:=\lim_{n\rightarrow +\infty}\frac{\mathbb E_{\bar\mu}[S_n^{\otimes 4}]-3n^2(\Sigma^2)^{\otimes 2}}{n}+6\Sigma^2\otimes \mathfrak B_0\, . \] \end{theorem} Observe that we recover \eqref{Cncobg} since $\Sigma^2*\Sigma^{-2}=\operatorname{tr}(\Sigma^2\Sigma^{-2})=\operatorname{tr}(I_2)=2$ (both matrices being symmetric), $$\mathfrak B_1^+(f-f\circ T)=\lim_{m\rightarrow +\infty}\int_Mf.\kappa\circ T^m\, d\nu=0 $$ and \begin{eqnarray*} \mathfrak B_2^+(f-f\circ T)&=&\lim_{m\rightarrow +\infty}\int_Mf.(\mathcal I_m^{\otimes 2}-\mathcal I_{m-1}^{\otimes 2})\, d\nu\\ &=&\lim_{m\rightarrow +\infty}\int_Mf.\left(\kappa^{\otimes 2}\circ T^{m-1}+2\sum_{k=0}^{m-2}(\kappa\circ T^k)\otimes\kappa\circ T^{m-1}\right)\, d\nu \\ &=&\lim_{m\rightarrow +\infty}\int_Mf\, d\nu \, \mathbb E_{\bar\mu}\left[\kappa^{\otimes 2}+2\sum_{k=1}^{m-1}\kappa\otimes\kappa\circ \bar T^{k}\right]\\ &=&\Sigma^2\int_Mf\, d\nu \, , \end{eqnarray*} where we used Proposition \ref{decoChernov}.
\begin{remark} Note that \begin{eqnarray*} \mathfrak B_2^+(f) &=& \sum_{j,m\ge 0}\int_M f .\left(\kappa\circ T^j\otimes\kappa\circ T^m -\mathbb E_{\bar\mu}[\kappa\circ \bar T^j\otimes\kappa\circ\bar T^m]\right)\, d\nu\nonumber\\ &\ &+\int_M f\mathcal I_0^{\otimes 2}\, d\nu+2\sum_{m\ge 0}\int_M f.\mathcal I_0\otimes \kappa\circ \bar T^m\, d\nu-\mathfrak B_0\int_Mf\, d\nu\, , \end{eqnarray*} \begin{eqnarray*} \mathfrak B_2^-(g)&=& \sum_{j,m\le -1} \int_M g.(\kappa\circ T^j\otimes \kappa\circ T^m-\mathbb E_{\bar\mu}[\kappa\circ \bar T^j\otimes \kappa\circ\bar T^m])\, d\nu\\ &\ & +\int_M g.\mathcal I_0^{\otimes 2}\, d\nu -2 \sum_{m\le -1}\int_M g.\mathcal I_0\otimes\kappa \circ T^m\, d\nu-\mathfrak B_0\int_Mg\, d\nu\, , \end{eqnarray*} \[ \mathfrak B_1^+(f)=\sum_{m\ge 0}\int_M f.\kappa\circ T^m\, d\nu+ \int_M f.\mathcal I_0\, d\nu \, , \] \[ \mathfrak B_1^-(g)=-\sum_{m\le -1}\int_M g.\kappa\circ T^m\, d\nu+ \int_M g.\mathcal I_0\, d\nu \, , \] and \[ \mathfrak B_0=\sum_{m\in\mathbb Z}|m|\mathbb E_{\bar\mu}[\kappa\otimes\kappa\circ\bar T^m]\, . \] \end{remark} \begin{corollary} Under the assumptions of Theorem \ref{MAINbis}, if $\int_Mf\, d\nu=0$ and $\int_Mg\, d\nu=0$, then $$\int_Mf.g\circ T^n\, d\nu =\frac{\Sigma^{-2}*(\mathfrak B_1^+(f)\otimes \mathfrak B_1^-(g))} {n^2\,2\pi\sqrt{\det\Sigma^2}} +o(n^{-2})\, .$$ \end{corollary} Two natural examples of zero integral functions are $\mathbf 1_{\mathcal C_0}-\mathbf 1_{\mathcal C_{e_1}}$ with $e_1=(1,0)$ and $f\mathbf 1_{\mathcal C_0}$ with $\int_{\mathcal C_0}f\, d\nu=0$.
Note that $$\int_M\left((\mathbf 1_{\mathcal C_0}-\mathbf 1_{\mathcal C_{e_1}}).(\mathbf 1_{\mathcal C_0}-\mathbf 1_{\mathcal C_{e_1}})\circ T^n\right)\, d\nu \sim \frac{\sigma^2_{2,2}}{n^2\,2\pi({\det\Sigma^2})^{3/2}},$$ with $\Sigma^2=(\sigma^2_{i,j})_{i,j=1,2}$ and that $$\int_M\left(f\mathbf 1_{\mathcal C_0}.\mathbf 1_{\mathcal C_0}\circ T^n\right)\, d\nu \sim -\frac{1}{n^2\,2\pi({\det\Sigma^2})^{3/2}}\sum_{m\ge 0} \mathbb E_{\bar\mu}[f.(\sigma^2_{2,2}\kappa_1+\sigma^2_{1,1}\kappa_2)\circ T^m]\, ,$$ with $\kappa=(\kappa_1,\kappa_2)$, provided the sum appearing in the last formula is non-null. As noticed in the introduction, it may happen that \eqref{MAIN2} provides only $\int_M f.g\circ T^n=o(n^{-2})$. This is the case for example if $\int_Mg\, d\nu=0$ and if $f$ has the form $f(q+\ell,\vec v)=f_0(q,\vec v).h_\ell$ with $\mathbb E_{\bar\mu}[f_0]=0$ and $\sum_\ell h_\ell=0$. Hence it can be useful to go further in the asymptotic expansion, which is possible thanks to Theorem \ref{MAIN}. A formula for the term of order $n^{-3}$ when $\int_M f\, d\nu=\int_Mg\, d\nu=\tilde{\mathfrak A}_2(f,g)=0$ is stated in Theorem \ref{MAINter} and gives the following estimate, showing that, for some observables, $C_n(f,g)$ has order $n^{-3}$. \begin{proposition}\label{casparticulier} If $f$ and $g$ can be decomposed as $f(q+\ell,\vec v)=f_0(q,\vec v).h_\ell$ and $g(q+\ell,\vec v)=g_0(q,\vec v).q_\ell$ with $\mathbb E_{\bar\mu}[f_0]=\mathbb E_{\bar\mu}[g_0]=0$ and $\sum_\ell q_\ell=\sum_\ell h_\ell=0$, and such that $\sum_{\ell\in\mathbb Z^2}|\ell|^4(\Vert f\mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)}+\Vert g\mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)})<\infty$.
Then $$\int_Mf.g\circ T^n\, d\nu=\frac {(\Sigma^{-2})^{\otimes 2}} {2\pi\sqrt{\det \Sigma^2}n^3}*\frac{\mathfrak B_2^+(f)\otimes \mathfrak B_2^-(g)}4+o(n^{-3})\, ,$$ where $$ \frac{\mathfrak B_2^+(f)\otimes \mathfrak B_2^-(g)}4= -\left(\sum_{\ell\in\mathbb Z^2} h_\ell.\ell\right)\otimes \left(\sum_{j\ge 0}\mathbb E_{\bar\mu}[f_0.\kappa\circ T^j]\right)\otimes\left(\sum_{\ell\in\mathbb Z^2} q_\ell.\ell\right) \otimes \left(\sum_{m\le -1}\mathbb E_{\bar\mu}[g_0.\kappa\circ T^m]\right)\, .$$ \end{proposition} \section{General results for $\mathbb Z^d$-extensions and key ideas}\label{sec:GENE} In this section we state results in the general context of $\mathbb Z^d$-extensions over dynamical systems satisfying good spectral properties. This section contains the rough ideas of the proofs for the billiard, without the complications due to the quotient tower. Moreover, the generality of our assumptions makes our results applicable to a wide class of models arising from present and future developments of the Nagaev-Guivarc'h method of perturbation of transfer operators.\medskip We consider a dynamical system $(M,\nu,T)$ given by the $\mathbb Z^d$-extension of a probability preserving dynamical system $(\bar M,\bar\mu,\bar T)$ by $\kappa:\bar M\rightarrow\mathbb Z^d$. This means that $M=\bar M\times\mathbb Z^d$, $\nu=\bar\mu\otimes\mathfrak m_d$ where $\mathfrak m_d$ is the counting measure on $\mathbb Z^d$ and with $$\forall (x,\ell)\in\bar M\times\mathbb Z^d,\quad T(x,\ell)=(\bar T(x),\ell+\kappa(x))\, , $$ so that $$\forall (x,\ell)\in\bar M\times\mathbb Z^d,\ \forall n\ge 1,\quad T^n(x,\ell)=(\bar T^n(x),\ell+S_n(x))\, , $$ with $S_n:=\sum_{k=0}^{n-1}\kappa\circ\bar T^k$. Let $P$ be the transfer operator of $\bar T$, i.e. the dual operator of $f\mapsto f\circ\bar T$.
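The skew-product mechanism above can be checked on a toy example. The following sketch is our own illustration, not the billiard map: $\bar T$ is the doubling map on $[0,1)$ and $\kappa=\pm 1$ according to the first binary digit; it verifies the cocycle identity $T^n(x,\ell)=(\bar T^n(x),\ell+S_n(x))$ numerically.

```python
# Toy Z-extension of the doubling map: bar_T(x) = 2x mod 1, kappa(x) = +1 or -1.
# Illustrates T^n(x, l) = (bar_T^n(x), l + S_n(x)); not the billiard map itself.

def bar_T(x):
    return (2.0 * x) % 1.0

def kappa(x):
    return 1 if x < 0.5 else -1

def T(x, l):
    # One step of the Z-extension: T(x, l) = (bar_T(x), l + kappa(x)).
    return bar_T(x), l + kappa(x)

def S_n(x, n):
    # Birkhoff sum S_n(x) = sum_{k=0}^{n-1} kappa(bar_T^k(x)).
    s = 0
    for _ in range(n):
        s += kappa(x)
        x = bar_T(x)
    return s

x0, l0, n = 0.3141592653589793, 5, 20
x, l = x0, l0
for _ in range(n):
    x, l = T(x, l)

# The fibre coordinate after n steps is the initial one shifted by S_n(x0).
assert l == l0 + S_n(x0, n)
```

The displacement in the $\mathbb Z^d$ fibre is thus entirely driven by the Birkhoff sums of $\kappa$ over the base, which is the point of the spectral analysis that follows.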
Our method is based on the following key formulas: \begin{eqnarray} \int_M f.g\circ T^n\, d\nu&=&\sum_{\ell,\ell'\in\mathbb Z^d}\mathbb E_{\bar\mu}[f(\cdot,\ell).\mathbf 1_{S_n=\ell'-\ell}.g(\bar T^n(\cdot),\ell')]\label{FORMULECLEF0}\\ &=&\sum_{\ell,\ell'\in\mathbb Z^d} \mathbb E_{\bar\mu}[P^n(\mathbf 1_{S_n=\ell'-\ell}\, f(\cdot,\ell))g(\cdot,{\ell'})]\, \label{FORMULECLEF} \end{eqnarray} and \begin{eqnarray} P^n(\mathbf 1_{S_n=\ell}\, u)&=&\frac 1{(2\pi)^d}\int_{[-\pi,\pi]^d}e^{-it*\ell} P^n(e^{it*S_n}u)\, dt\nonumber\\ &=&\frac 1{(2\pi)^d}\int_{[-\pi,\pi]^d}e^{-it*\ell} P_t^n(u)\, dt\, , \label{formulespectrale} \end{eqnarray} with $P_t:=P(e^{it*\kappa}\cdot)$. Note that \eqref{FORMULECLEF} makes a link between mixing properties and the local limit theorem and that \eqref{formulespectrale} shows the key role played by the family of perturbed operators $(P_t)_t$. We will make the following general assumptions about $(P_t)_t$. \begin{hypothesis}[Spectral hypotheses]\label{HHH} There exist two complex Banach spaces $(\mathcal B,\Vert{\cdot}\Vert)$ and $(\mathcal B_0,\Vert\cdot\Vert_0)$ such that: \begin{itemize} \item $\mathcal B \hookrightarrow \mathcal B_0\hookrightarrow L^1 (\bar M, \bar \mu)$ and $\mathbf{1}_{\bar M} \in \mathcal B$ , \item there exist constants $b\in(0,\pi]$, $C>0$ and $\vartheta \in(0,1)$ and three functions $\lambda_\cdot:[-b,b]^d \to \mathbb C$ and $\Pi_\cdot,R_\cdot:[-b,b]^d \to \mathcal L(\mathcal B,\mathcal B)$ such that $\lim_{t\rightarrow 0}\lambda_t=1$ and $\lim_{t\rightarrow 0}\Vert \Pi_t-\mathbb E_{\bar\mu} [\cdot]\mathbf 1_{\bar M}\Vert_{\mathcal L(\mathcal B,\mathcal B_0)}=0$ and such that, in $\mathcal L(\mathcal B,\mathcal B)$, \begin{equation}\label{decomp} \forall u\in[-b,b]^d,\quad P_u =\lambda_u\Pi_u+R_u,\quad \Pi_u R_u = R_u \Pi_u = 0,\quad \Pi_u^{2} = \Pi_{u}\, , \end{equation} \begin{equation} \sup_{u\in [-b,b]^d} \Vert{R_u^k}\Vert_{\mathcal L(\mathcal B,\mathcal B_0)} \leq C
\vartheta^k,\quad\sup_{u\in[-\pi,\pi]^d \setminus [-b,b]^d} \Vert{P_u^k}\Vert_{\mathcal L(\mathcal B,\mathcal B_0)} \leq C \vartheta ^k. \end{equation} \end{itemize} \end{hypothesis} Note that \eqref{decomp} ensures that \begin{equation}\label{decomp2} \forall u\in[-b,b]^d,\quad P_u^n =\lambda_u^n\Pi_u+R_u^n\, . \end{equation} We will make the following assumption on the expansion of $\lambda$ at $0$. \begin{hypothesis}\label{HHH1} Let $Y$ be a random variable with integrable characteristic function $a_\cdot:=e^{-\psi(\cdot)}$ and with density function $\Phi$. Assume that there exists a sequence of invertible matrices $(\Theta_n)_n$ such that $\lim_{n\rightarrow +\infty}\Theta_n^{-1}=0$ and \begin{equation}\label{asymptlambda} \forall u,\quad \lambda_{^{t}\Theta_n^{-1}\cdot u}^n\sim e^{-\psi(u)}=a_u\, ,\quad\mbox{as}\ n\rightarrow +\infty \end{equation} (where $^{t}\Theta_n^{-1}$ stands for the transpose matrix of $\Theta_n^{-1}$) and $$ \forall u\in[-b,b]^d,\quad |\lambda_{u}^n|\le 2\left| e^{-\psi({}^t\Theta_n\cdot u)}\right| \, .$$ \end{hypothesis} Note that, under Hypothesis \ref{HHH} and if \eqref{asymptlambda} holds true, then $$\forall u\in\mathbb R^d,\quad e^{-\psi(u)}=\lim_{n\rightarrow +\infty}\lambda_{^{t}\Theta_n^{-1}\cdot u}^n=\lim_{n\rightarrow +\infty}\mathbb E_{\bar\mu}[P_{^{t}\Theta_n^{-1}\cdot u}^n\mathbf 1]=\lim_{n\rightarrow +\infty}\mathbb E_{\bar\mu}[e^{iu*(\Theta_n^{-1} S_n)}],$$ and so $(\Theta_n^{-1} S_n)_n$ converges in distribution to $Y$. If $Y$ has a stable distribution of index $\alpha\in(0,2]\setminus\{1\}$, i.e.
$$ \psi(u)=\int_{\mathbb S^{d-1}}|u*s|^{\alpha}\left(1-i\tan\frac {\pi\alpha}2\, \mbox{sign}(u*s)\right)\, d\Gamma(s),$$ where $\Gamma$ is a Borel measure on the unit sphere $\mathbb S^{d-1}=\{x\in\mathbb R^d\ :\ x*x=1\}$ and if \begin{equation*} \lambda_u = e^{-\psi(u)L(|u|^{-1})} + o\left(|u|^\alpha L(|u|^{-1})\right)\, ,\quad \mbox{as } u\rightarrow 0\, , \end{equation*} with $L$ slowly varying at infinity, then Hypothesis \ref{HHH1} holds true with $\Theta_n:= \mathfrak a_n\, Id$ with $\mathfrak{a}_n:= \inf\{x>0\, :\, n |x|^{-\alpha} L(x) \geq 1\}\, . $ But Hypothesis \ref{HHH1} also allows the study of situations with anisotropic scaling. Before stating our first general result, let us introduce an additional notation. Under Hypothesis \ref{HHH}, for any function $u:\bar M\rightarrow\mathbb C$, we write $\Vert u\Vert_{\mathcal B'_0}:=\sup_{h\in\mathcal B_0\, :\, \Vert h\Vert_0\le 1}|\mathbb E_{\bar\mu}[u.h]|$. \begin{theorem}\label{MAINGENE0} Assume Hypotheses \ref{HHH} and \ref{HHH1}. Let $f,g:M\rightarrow \mathbb C$ be such that $$\Vert f\Vert_{+}:=\sum_{\ell\in\mathbb Z^d} \Vert f(\cdot,\ell)\Vert <\infty\quad\mbox{and}\quad \Vert g\Vert_{+,\mathcal B_0'}:=\sum_{\ell\in\mathbb Z^d}\Vert g(\cdot,\ell)\Vert_{\mathcal B_0 '}<\infty.$$ Then $$\int_Mf.g\circ T^n\, d\nu=\frac{\Phi(0)}{\det \Theta_n}\left(\int_Mf\, d\nu\, \int_Mg\, d\nu +o(1)\right),\ \mbox{as}\ n\rightarrow +\infty\, .$$ \end{theorem} \begin{proof} For every positive integer $n$ and every $\ell\in\mathbb Z^d$, combining \eqref{formulespectrale} with Hypothesis \ref{HHH}, the following equalities hold in $\mathcal L(\mathcal B,\mathcal B_0)$: \begin{eqnarray} P^n(\mathbf 1_{S_n=\ell}\cdot) &=&\frac 1{(2\pi)^d}\int_{[-b,b]^d} e^{-it*\ell}\lambda_t^n\Pi_t(\cdot)\, dt+O(\vartheta^n)\nonumber\\ &=&\frac 1{(2\pi)^d\det{\Theta_n}}\int_{{}^t\Theta_n[-b,b]^d} e^{-iu*(\Theta_ n^{-1}\ell)}\lambda_{{}^t\Theta_n^{-1} u} ^n\Pi_{{}^t\Theta_n^{-1} u}(\cdot)\, du+O(\vartheta^n)\nonumber\\ &=&\frac 1{(2\pi)^d\det{\Theta_n}}\int_{\mathbb R^d} e^{-iu*(\Theta_
n^{-1}\ell)}e^{-\psi(u)} \Pi_{0}(\cdot)\, du+\varepsilon_{n,\ell}\nonumber\\ &=&\frac{\Phi(\Theta_n^{-1} \ell)}{\det{\Theta_n}}\Pi_0 +\varepsilon_{n,\ell}\, ,\label{egalitecentrale} \end{eqnarray} with $\sup_\ell \Vert\varepsilon_{n,\ell}\Vert_{\mathcal L(\mathcal B,\mathcal B_0)}=o(\det\Theta_n^{-1})$ due to the dominated convergence theorem applied to $\left\Vert\lambda^n_{{}^t\Theta_n^{-1}u}\Pi_{{}^t\Theta_n^{-1}u}-e^{-\psi(u)}\Pi_0\right\Vert_{\mathcal L(\mathcal B,\mathcal B_0)}\mathbf 1_{{}^t\Theta_n[-b,b]^d}(u)$. Setting $u_\ell:=f(\cdot,\ell)$ and $v_\ell:=g(\cdot,\ell)$ and using \eqref{FORMULECLEF}, we obtain \begin{eqnarray} \int_Mf.g\circ T^n\, d\nu &=&\sum_{\ell,\ell'\in\mathbb Z^d}\left(\frac{\Phi(\Theta_n^{-1}( {\ell'-\ell}))}{\det\Theta_n}\mathbb E_{\bar\mu}[u_\ell]\, \mathbb E_{\bar\mu}[v_{\ell'}]+\mathbb E_{\bar\mu}[v_{\ell'}\varepsilon_{n,\ell}(u_\ell)]\right)\nonumber\\ &=&\sum_{\ell,\ell'\in\mathbb Z^d}\left(\frac{\Phi(\Theta_n^{-1}( {\ell'-\ell}))}{\det\Theta_n}\mathbb E_{\bar\mu}[u_\ell]\, \mathbb E_{\bar\mu}[v_{\ell'}]\right)+O\left(\sum_{\ell,\ell'\in\mathbb Z^d}\Vert v_{\ell'}\Vert_{\mathcal B_0'}\, \Vert\varepsilon_{n,\ell}\Vert_{\mathcal L(\mathcal B,\mathcal B_0)}\Vert u_\ell\Vert\right)\, \nonumber\\ &=&\sum_{\ell,\ell'\in\mathbb Z^d}\frac{\Phi(\Theta_n^{-1} (\ell'-\ell))}{\det\Theta_n}\mathbb E_{\bar\mu}[u_\ell]\, \mathbb E_{\bar\mu}[v_{\ell'}] +\tilde\varepsilon_n(f,g)\, ,\label{controlecentral} \end{eqnarray} with $ \lim_{n\rightarrow +\infty}\sup_{f,g}\frac{\det\Theta_{n}\, \tilde\varepsilon_n(f,g)}{\Vert g\Vert_{+,\mathcal B_0'}\Vert f\Vert_{+}}=0$. 
Now, due to the dominated convergence theorem and since $\Phi$ is continuous and bounded, $$ \lim_{n\rightarrow +\infty}\sum_{\ell,\ell'\in\mathbb Z^d}{\Phi\left(\Theta_n^{-1}(\ell'-\ell)\right)}\mathbb E_{\bar\mu}[u_\ell]\, \mathbb E_{\bar\mu}[v_{\ell'}]=\Phi(0)\sum_{\ell,\ell'\in\mathbb Z^d}\mathbb E_{\bar\mu}[u_\ell]\, \mathbb E_{\bar\mu}[v_{\ell'}]= \Phi(0)\int_Mf\, d\nu\, \int_Mg\, d\nu\, ,$$ which ends the proof. \end{proof} We now reinforce Hypothesis \ref{HHH1}. The notations $\lambda_0^{(k)}$, $a_0^{(k)}$, $\Pi_0^{(k)}$ stand for the $k$-th derivatives of $\lambda$, $a$ and $\Pi$ at $0$. \begin{theorem}\label{THMGENE} Assume Hypothesis \ref{HHH} with $\mathcal B_0=\mathcal B$. Let $K, M,P$ be three integers such that $K\ge d/2$, $ 3\le P\le M+1$ and \begin{equation}\label{MAJOM} -\left\lfloor \frac M P\right\rfloor+ \frac{M}2\ge K\, . \end{equation} Assume moreover that $\lambda_\cdot$ is $C^{M}$-smooth and that there exists a symmetric positive definite matrix $\Sigma^2$ such that \begin{equation}\label{DLlambda} \lambda_u -1\sim -\psi(u):=-\frac 12\Sigma^2*u^{\otimes 2}\, ,\quad\mbox{as}\ u\rightarrow 0\, . \end{equation} Assume that, for every $k<P$, $\lambda^{(k)}_0=a^{(k)}_0$, where $a_t=e^{-\psi(t)}$. Assume moreover that the functions $\Pi$ and $R$ are $C^{2K}$-smooth. Let $f,g:M\rightarrow\mathbb C$ be such that \begin{equation}\label{hypofg} \sum_{\ell\in\mathbb Z^d} (\Vert f(\cdot,\ell)\Vert+ \Vert g(\cdot,\ell)\Vert_{\mathcal B '})<\infty\, . \end{equation} Write $u_\ell:=f(\cdot,\ell)$ and $v_{\ell'}:=g(\cdot,\ell')$. Then \begin{equation}\label{FFFF1} \int_Mf.g\circ T^n\, d\nu=\sum_{\ell,\ell'\in\mathbb Z^d} \sum_{m=0}^{2K}\frac {1}{m!}\sum_{j=0}^{M} \frac {i^{m+j}}{j!}\frac{\Phi^{(m+j)}\left(\frac {\ell'-\ell}{\sqrt{n}}\right)}{n^{\frac{d+m+j}2}}*(\mathbb E_{\bar\mu}[v_{\ell'} \Pi^{(m)}_0(u_\ell)]\otimes(\lambda^n/a^n)_0^{(j)})+o(n^{-K-\frac d2})\, .
\end{equation} If moreover $\sum_{\ell\in\mathbb Z^d}|\ell|^{2K}(\Vert f(\cdot,\ell)\Vert+\Vert g(\cdot,\ell)\Vert_{\mathcal B'})<\infty$, then \begin{eqnarray} \int_Mf.g\circ T^n\, d\nu &=&\sum_{m,j,r}\frac {i^{j+m}}{m!\, r!\, j!} \left(\frac{\Phi^{(j+m+r)}(0)}{n^{\frac{j+d+m+r}2}}*(\lambda^n/a^n)_0^{(j)}\right)\nonumber\\ &\ &*\sum_{\ell,\ell'\in\mathbb Z^d}(\ell'-\ell)^{\otimes r}\otimes \mathbb E_{\bar\mu}[v_{\ell'} \Pi^{(m)}_0(u_\ell)] +o(n^{-K-\frac d2})\, ,\label{decorrelation2b} \end{eqnarray} where the sum is taken over the triples $(m,j,r)$ of nonnegative integers such that $j+m+r\in 2\mathbb Z$ and $\frac{r+m+j}2-\lfloor \frac j P\rfloor\le K$. \end{theorem} Observe that $$ (\lambda^n/a^n)^{(j)}_0=\sum_{k_1m_1+...+k_rm_r=j}\frac{n!}{m_1!\cdots m_r!(n-m_1-...-m_r)!}((\lambda/a)_0^{(k_1)})^{m_1}\cdots ((\lambda/a)_0^{(k_r)})^{m_r}\, ,$$ where the sum is taken over $r\ge 1$, $m_1,...,m_r\ge 1$, $k_r>...>k_1\ge P$ (this implies that $m_1+...+m_r\le j/P$). Hence $(\lambda^n/a^n)^{(j)}_0$ is polynomial in $n$ with degree at most $\lfloor j/P\rfloor$. \begin{remark} Note that \eqref{MAJOM} holds true as soon as $M\ge 2KP/(P-2)$ and $M$ in \eqref{FFFF1} can be replaced by $(2K-m)P/(P-2)$. Moreover \eqref{decorrelation2b} provides an expansion of the following form: $$\int_Mf.g\circ T^n\, d\nu=\sum_{m=0}^K\frac{c_m(f,g)}{n^{\frac d2+m}}+o(n^{-K-\frac d2}) \, . $$ \end{remark} \begin{remark}\label{DEVASYMP} If $\Pi$ is $C^M$-smooth, using the fact that $(\lambda^n/a^n)^{(j)}_0=O(n^{\lfloor j/P\rfloor})$, if $\sum_{\ell\in\mathbb Z^d}|\ell|^{M}(\Vert f(\cdot,\ell)\Vert+\Vert g(\cdot,\ell)\Vert_{\mathcal B'})<\infty$, then the right-hand side of \eqref{decorrelation2b} can be rewritten \[ \frac 1{n^{\frac d2}} \sum_{\ell,\ell'\in\mathbb Z^d}\sum_{L=0}^{M}\frac 1{n^{L/2}}\frac{\Phi^{(L)}(0)}{L!} i^L\frac{\partial^L}{\partial t^L}\left(\mathbb E_{\bar\mu}\left[v_{\ell'}.
e^{-it*(\ell'-\ell)}.\lambda_{t}^n\Pi_{t}.u_{\ell}\right]e^{\frac{n}2\Sigma^2*t^{\otimes 2}}\right)_{|t=0} +o(n^{-K-\frac d2})\, . \] If moreover $\sup_{u\in [-b,b]^d} \Vert{(R_u^n)^{(m)}}\Vert_{\mathcal L(\mathcal B,\mathcal B)} =O( \vartheta^n)$ for every $m=0,...,M$, then it can also be rewritten \[ \frac 1{n^{\frac d2}}\sum_{\ell,\ell'\in\mathbb Z^d}\sum_{L=0}^{M}\frac{\Phi^{(L)}(0)}{L!}i^L \frac{\partial^L}{\partial t^L}\left(\mathbb E_{\bar\mu}\left[u_\ell.e^{it*\frac{S_n-(\ell'-\ell)}{\sqrt n}}.v_{\ell'}\circ\bar T^n\right]e^{\frac{1}2\Sigma^2*t^{\otimes 2}}\right)_{|t=0} +o(n^{-K-\frac d2})\, , \] where we used \eqref{decomp2}. \end{remark} \begin{proof}[Proof of Theorem \ref{THMGENE}] We assume, up to a change of $b$, that Hypothesis \ref{HHH1} holds true. Due to \eqref{formulespectrale} and to \eqref{decomp2}, in $\mathcal L(\mathcal B,\mathcal B)$, we have \begin{eqnarray*} P^n(\mathbf 1_{S_n=\ell}\cdot) &=&\frac 1{(2\pi)^d}\int_{[-\pi,\pi]^d}e^{-it*\ell}P_t^n(\cdot)\, dt\\ &=&\frac 1{(2\pi)^d}\int_{[-b,b]^d}e^{-it*\ell}\lambda_t^{n}\Pi_t(\cdot)\, dt +O(\vartheta^{n})\\ &=&\frac 1{(2\pi)^dn^{\frac{d}2}}\int_{[-b\sqrt{n},b\sqrt{n}]^d}e^{-it*\frac{\ell}{\sqrt{n}}}\lambda_{t/\sqrt{n}}^{n}\Pi_{t/\sqrt{n}}(\cdot)\, dt +O(\vartheta^{n})\\ &=&\frac 1{(2\pi)^dn^{\frac{d}2}}\int_{[-b\sqrt{n},b\sqrt{n}]^d}e^{-it*\frac{\ell}{\sqrt{n}}}\lambda_{t/\sqrt{n}}^{n} \sum_{m=0}^{2K}\frac 1{m!}\Pi^{(m)}_0(\cdot)*\frac{t^{\otimes m}}{n^{\frac m2}}\, dt +o(n^{-K-\frac d2})\, , \end{eqnarray*} due to the dominated convergence theorem since there exists $x_{t/\sqrt{n}}\in(0,t/\sqrt{n})$ such that $\Pi_{t/\sqrt{n}}(\cdot)= \sum_{m=0}^{2K-1}\frac 1{m!}\Pi^{(m)}_0(\cdot)*\frac{t^{\otimes m}}{n^{\frac m2}}+\frac 1{(2K)!}\Pi^{(2K)}_0(x_{t/\sqrt{n}})*\frac{t^{\otimes 2K}}{n^{K}}$.
Recall that $(\lambda^n/a^n)_0^{(j)}=O(n^{\lfloor j/P\rfloor})$, so \[ \left\vert \lambda_{t/\sqrt{n}}^{n} - a_t\sum_{j=0}^{M}\frac 1{j!}(\lambda^n/a^n)_0^{(j)}*\frac{t^{\otimes j}}{n^{\frac j2}} \right\vert\\ \le n^{\lfloor \frac M P\rfloor}a_t \frac{|t|^{M}}{n^{\frac M2}} \eta(t/\sqrt{n})\, , \] with $\lim_{t\rightarrow 0}\eta(t)=0$ and $\sup_{[-b,b]^d}|\eta|<\infty$. Due to \eqref{MAJOM}, we obtain \begin{eqnarray*} P^n(\mathbf 1_{S_n=\ell}\cdot)&=&\frac 1{(2\pi)^dn^{\frac d2}}\int_{[-b\sqrt{n},b\sqrt{n}]^d}e^{-it*\frac{\ell}{\sqrt{n}}}e^{-\frac 12\Sigma^2* t^{\otimes 2}} \sum_{m=0}^{2K}\frac 1{m!}\Pi^{(m)}_0(\cdot)*\frac{t^{\otimes m}}{n^{\frac m2}}\\ &\ &\left(1+\sum_{j=P}^{M}\frac 1{j!} (\lambda^n/a^n)_0^{(j)}*\frac{t^{\otimes j}}{n^{\frac j2}}\right)\, dt + o\left({n^{-K-\frac d2}}\right)\\ &=&\sum_{m=0}^{2K}\sum_{j=0}^{M}\frac{i^{m+j}}{n^{\frac {m+j+d}2}\, m!\, j!} \Phi^{(m+j)}\left(\frac \ell {\sqrt{n}}\right)*\left(\Pi^{(m)}_0(\cdot)\otimes (\lambda^n/a^n)_0^{(j)}\right)+o(n^{-K-\frac d2}).\label{INT22} \end{eqnarray*} This combined with \eqref{FORMULECLEF} and \eqref{hypofg} gives \eqref{FFFF1}. We assume from now on that $\sum_{\ell\in\mathbb Z^d}|\ell|^{2K}(\Vert f(\cdot,\ell)\Vert+\Vert g(\cdot,\ell)\Vert_{\mathcal B'})<\infty$. Recall that $(\lambda^n/a^n)^{(j)}_0$ is polynomial in $n$ of degree at most $\lfloor j/P\rfloor$. Hence, due to the dominated convergence theorem, we can replace $\Phi^{(m+j)}\left(\frac {\ell'-\ell} {\sqrt{n}}\right)$ in \eqref{FFFF1} by $$\sum_{r=0}^{2K-m-j+2\lfloor\frac{j}P\rfloor}\frac 1{r!\, n^{\frac r 2}}\Phi^{(m+j+r)}(0)*(\ell'-\ell)^{\otimes r}\, .$$ Hence we have proved \eqref{decorrelation2b}. \end{proof} Now, we come back to the case of $\mathbb Z^2$-periodic Sinai billiards, with the notations of Section \ref{sec:model}.
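Before turning to the billiard itself, the Gaussian local limit mechanism underlying the proof above can be sanity-checked on the simplest $\mathbb Z$-extension, a lazy random walk; the step law and all parameters below are our own choices, purely for illustration.

```python
# Sanity check of the local limit behaviour P(S_n = l) ~ Phi(l/sqrt(n))/sqrt(n)
# on a lazy random walk with steps -1, 0, 1 and probabilities 1/4, 1/2, 1/4
# (so sigma^2 = 1/2).  Exact probabilities are computed by convolution.
import math

import numpy as np

step = np.array([0.25, 0.5, 0.25])  # law of one step on {-1, 0, 1}
n = 400
dist = np.array([1.0])              # distribution of S_0 = 0
for _ in range(n):
    dist = np.convolve(dist, step)  # support of S_k grows to [-k, k]

l = 10
exact = dist[l + n]                 # P(S_n = l)

sigma2 = 0.5
phi = math.exp(-(l / math.sqrt(n)) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)
predicted = phi / math.sqrt(n)      # Gaussian local limit prediction

rel_err = abs(exact - predicted) / predicted
assert rel_err < 0.02
```

The agreement at the percent level for $n=400$ reflects the fact that, for a symmetric aperiodic walk, the first correction to the local limit theorem is of order $n^{-1}$ relative to the main term.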
\section{Young towers for billiards}\label{sec:young} Recall that, in \cite{young}, Young constructed two dynamical systems $(\tilde M,\tilde T, \tilde\mu)$ and $(\hat M, \hat T, \hat \mu)$ and two measurable functions $\tilde\pi\colon \tilde M\to \bar M$ and $\hat \pi\colon \tilde M\to \hat M$ such that $$\tilde\pi\circ \tilde T=\bar T\circ \tilde\pi,\ \tilde\pi_*\tilde\mu=\bar\mu,\ \hat\pi\circ \tilde T=\hat T\circ \hat\pi,\ \hat\pi_*\tilde\mu=\hat\mu$$ and such that, for every measurable $f\colon \bar M\to\mathbb{C}$ constant on every stable manifold, there exists $\hat f\colon \hat M\to\mathbb{C}$ such that $\hat f\circ \hat\pi=f\circ \tilde\pi$. We consider the partition $\hat{\mathcal{D}}$ on $\hat M$ constructed by Young in \cite{young} together with the separation time given, for every $x,y$, by \[ s_0(x,y):=\min\{n\ge -1:\ \hat{\mathcal{D}}(\hat T^{n+1}x)\ne \hat {\mathcal{D}}(\hat T^{n+1}y)\}. \] It will be worth noting that, for any $x,y$, the sets $\tilde\pi\hat\pi^{-1}\{x\}$ and $\tilde\pi\hat\pi^{-1}\{y\}$ are contained in the same connected component of $\bar M\setminus\bigcup_{k=0}^{s_0(x,y)}\bar T^{-k}\mathcal S_0$. Let $p>1$ and set $q$ such that $\frac1p+\frac1q=1$. Let $\eps>0$ and $\beta\in(0,1)$ be suitably chosen and let us define \[ \|\hat f\| = \sup_{\ell} e^{-\ell\eps}\|\hat f_{|\hat \Delta_\ell}\|_\infty + \sup_\ell e^{-\ell \eps}\sup_{\hat A\in\hat{\mathcal D},\ \hat A\subset\hat\Delta_\ell} \esssup_{x,y\in \hat A} \frac{|\hat f(x)-\hat f(y)|}{\beta^{s_0(x,y)}}. \] Let $\mathcal{B}:=\{\hat f\in L^q_\mathbb{C}(\hat M,\hat\mu)\colon \|\hat f\|<\infty\}$. Young proved that the Banach space $(\mathcal{B},\|\cdot\|)$ satisfies $\|\cdot\|_{q}\le \|\cdot\|$ and that the transfer operator $\hat P$ on $\mathcal B$ ($\hat P$ being defined on $L^q$ as the adjoint of the composition by $\hat T$ on $L^p$) is quasicompact on $\mathcal{B}$.
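The separation time $s_0$ and the weight $\beta^{s_0(x,y)}$ entering this norm can be made concrete on symbolic itineraries. The toy sketch below is our own illustration, with a full shift on two symbols standing in for Young's partition $\hat{\mathcal D}$.

```python
# Separation time on a full binary shift (a stand-in for Young's partition):
# s0(x, y) = min{ n >= -1 : symbol of T^{n+1}x differs from that of T^{n+1}y },
# i.e. the index of the first disagreement of the itineraries, minus 1.

def s0(x, y):
    # x, y: finite sequences of partition symbols.
    for n, (a, b) in enumerate(zip(x, y)):
        if a != b:
            return n - 1
    return len(x) - 1  # not separated within the observed window

beta = 0.5
x = [0, 1, 1, 0, 1, 0, 0, 1]
y = [0, 1, 1, 0, 0, 0, 0, 1]  # first disagreement at index 4

assert s0(x, y) == 3
weight = beta ** s0(x, y)  # = 0.125
```

Points that stay in the same atoms longer have larger $s_0$, hence a smaller weight $\beta^{s_0}$: a function with finite norm must oscillate little on pairs that separate late, which is exactly the dynamical Lipschitz condition.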
We assume without any loss of generality (up to an adaptation of the construction of the tower) that the dominating eigenvalue of $\hat P$ on $\mathcal{B}$ is $1$ and is simple. Since $\kappa\colon \bar M\to\mathbb{Z}^2$ is constant on the stable manifolds, there exists $\hat \kappa\colon\hat M\to\mathbb{Z}^2$ such that $\hat \kappa\circ \hat \pi=\kappa\circ\tilde\pi$. We set $\hat S_n:=\sum_{k=0}^{n-1}\hat\kappa\circ\hat T^k$. For any $u\in{\mathbb R}^2$ and $\hat f\in\mathcal B$, we set $\hat P_u(\hat f):=\hat P(e^{i u*\hat\kappa}\hat f)$. \begin{proposition}\label{pro:pertu2} $t\mapsto \lambda_t$ is an even function. \end{proposition} \begin{proof} Let $\Psi:\bar M\rightarrow \bar M$ be the map which sends $(q,\vec v)\in\bar M$ to $(q,\vec v')\in\bar M$ such that $\widehat{(\vec n(q),\vec v')}=-\widehat{(\vec n(q),\vec v)}$. Then $\kappa\circ \bar T^k\circ\Psi=-\kappa\circ \bar T^{-k-1}$. Hence, $S_n$ has the same distribution (with respect to $\bar\mu$) as $-S_n$ and so $$\forall t\in[-b,b]^2,\quad \mathbb E_{\bar\mu}[e^{-it*S_n}]=\mathbb E_{\bar\mu}[e^{it*S_n}]\sim \lambda_t^n \mathbb E_{\hat\mu}[\Pi_t \mathbf 1]\sim \lambda_{-t}^n \mathbb E_{\hat\mu}[\Pi_{-t} \mathbf 1]$$ as $n$ goes to infinity, and so $\lambda$ is even. \end{proof} Let $\mathcal Z_{k}^m$ be the partition of $\bar M\setminus \bigcup_{j=k}^m\bar T^{-j}(\mathcal S_0)$ into its connected components. We also write $\mathcal Z_k^{\infty}:=\bigvee _{j\ge k}\mathcal Z_k^j$. \begin{proposition}\label{pro:pertu2b} Let $k$ be a nonnegative integer and let $u,v:\bar M\rightarrow\mathbb C$ be respectively $\mathcal Z_{-k}^k$-measurable and $\mathcal Z_{-k}^\infty$-measurable functions. Then there exist $\hat u,\hat v:\hat M\rightarrow \mathbb C$ such that $u\circ\bar T^k\circ\tilde\pi=\hat u\circ\hat \pi$ and $v\circ\bar T^k\circ\tilde\pi=\hat v\circ\hat \pi$.
Moreover, $\hat u\in\mathcal B$ and for every $t\in\mathbb R^2$, $ \hat P_t^{2k}(e^{-it*\hat S_k}\hat u)=\hat P^{2k}(e^{it*\hat S_k\circ\hat T^k}\hat u)$ and \begin{equation}\label{P2ku} \Vert \hat P^{2k}(e^{it*\hat S_k\circ\hat T^k}\hat u)\Vert\le (1+2\beta^{-1})\Vert u\Vert_\infty\, , \end{equation} and \begin{equation}\label{EqClef} \forall n>k,\quad \mathbb E_{\bar\mu}[u.e^{it*S_n}.v\circ\bar T^n]= \mathbb E_{\hat\mu}[\hat v.e^{it*\hat S_k}\hat P_t^n(e^{-it*\hat S_k}\hat u)]\, . \end{equation} \end{proposition} \begin{proof} Using several times $\hat P^m(f.g\circ T^m)=g.\hat P^mf$ and $\hat P_t^m=\hat P^m(e^{it*\hat S_m}\cdot)$, we obtain \begin{eqnarray*} \mathbb E_{\bar\mu}[u.e^{it*S_n}.v\circ\bar T^n]&=& \mathbb E_{\bar\mu}[u\circ \bar T^k.e^{it*S_n}\circ \bar T^k.v\circ\bar T^{n+k}]\\ &=&\mathbb E_{\hat\mu}[\hat u. e^{it* \hat S_n}\circ \hat T^k.\hat v\circ\hat T^n]\\ &=&\mathbb E_{\hat\mu}[\hat P^{n+k}(\hat u. e^{it* (\hat S_{n-k}\circ \hat T^k +\hat S_k\circ \hat T^n)}.\hat v\circ\hat T^n)]\\ &=&\mathbb E_{\hat\mu}[\hat P^{k}(e^{it*\hat S_k}\hat v.\hat P^{n}( e^{it* (\hat S_{n-k}\circ \hat T^k)}.\hat u))]\\ &=&\mathbb E_{\hat\mu}[\hat P^{k}(e^{it*\hat S_k}\hat v.\hat P_t^{n}(e^{-it*\hat S_k}\hat u))]\, , \end{eqnarray*} since $\hat S_{n-k}\circ\hat T^k=\hat S_n-\hat S_k$. Hence, we have proved \eqref{EqClef} (since $\hat P$ preserves $\hat\mu$). \end{proof} \section{Proofs of our main results in the finite horizon case}\label{sec:MAIN} We assume throughout this section that the billiard has finite horizon. The Nagaev-Guivarc'h method \cite{nag1,nag2,GH} has been applied in this context by Sz\'asz and Varj\'u \cite{SV1} (see also \cite{FP09a}) to prove that Hypotheses \ref{HHH} and \ref{HHH1} hold for $\mathcal B_0=\mathcal B$ the Young Banach space. More precisely, we have the following.
\begin{proposition}[\cite{SV1,FP09a}]\label{pro:pertu} There exist a real $b\in(0,\pi)$ and three $C^\infty$ functions $t\mapsto \lambda_t$, $t\mapsto \Pi_t$ and $t\mapsto N_t$ defined on $[-b,b]^2$ and with values in $\mathbb C$, $\mathcal L(\mathcal B,\mathcal B)$ and $\mathcal L(\mathcal B,\mathcal B)$ respectively such that \begin{itemize} \item[(i)] for every $ t\in[-b,b]^2$, $\hat P_t^n=\lambda_t^n \Pi_t + N_t^n$ and $\Pi_0=\mathbb E_{\hat\mu}[\cdot]$, $\Pi_t\hat P_t=\hat P_t\Pi_t=\lambda_t\Pi_t$, $\Pi_t^2=\Pi_t$; \item[(ii)] there exists $\vartheta\in(0,1)$ such that, for every positive integer $m$, \[ \sup_{t\in[-b,b]^2} \| (N^n)^{(m)}_t\|_{\mathcal L(\mathcal B,\mathcal B)} = O(\vartheta^n) \quad\text{and}\quad \sup_{t\in[-\pi,\pi]^2\setminus[-b,b]^2} \| \hat P_t^n\|_{\mathcal L(\mathcal B,\mathcal B)} = O(\vartheta^n); \] \item[(iii)] we have $\lambda_t = 1-\frac12\Sigma^2*t^{\otimes 2} + O(|t|^3)$; \item[(iv)] there exists $\sigma>0$ such that, for any $t\in[-b,b]^2$, $|\lambda_t|\le e^{-\sigma|t|^2} $ and $e^{-\frac 12\Sigma^2*t^{\otimes 2}}\le e^{-\sigma|t|^2} $. \end{itemize} \end{proposition} Our first step consists in stating a high order expansion of the following quantity $$\mathbb E_{\bar\mu}[u.\mathbf 1_{S_n=\ell}.v\circ\bar T^n]$$ for $u$ and $v$ dynamically Lipschitz on $\bar M$. Let us recall that, due to \eqref{FORMULECLEF0}, this result corresponds to a mixing result for observables supported on a single cell. We start by studying this quantity for some locally constant observables. This result is a refinement of \cite[Prop. 4.1]{ps10} (see also \cite[Prop. 3.1]{FP17}). Let $\Phi$ be the density function of the limit Gaussian random variable $B$, which is given by $\Phi(x)=\frac {e^{-\frac {(\Sigma^2)^{-1}*x^{\otimes 2}}2}}{2\pi\, \sqrt{\det\Sigma^2}}$. \subsection{A first local limit theorem} We set $a_t:= e^{-\frac{1}2\Sigma^2*t^{\otimes 2}}$. Note that the odd derivatives of $\lambda/a$ at $0$ vanish, as do its first three derivatives.
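In dimension one the Gaussian integrals producing the derivatives $\Phi^{(m)}$ in the expansions below reduce to the identity $\frac 1{2\pi}\int_{\mathbb R}t^m e^{-itx}e^{-\sigma^2t^2/2}\, dt=i^m\varphi^{(m)}(x)$, where $\varphi$ is the $\mathcal N(0,\sigma^2)$ density. A quick numerical check for $m=2$, $\sigma=1$ (our own sketch, with an arbitrary grid and evaluation point):

```python
# Check (1/2pi) * int t^2 e^{-itx} e^{-t^2/2} dt = i^2 phi''(x) = (1 - x^2) phi(x)
# for the standard Gaussian density phi; grid bounds and x are arbitrary choices.
import math

import numpy as np

x = 0.7
dt = 1e-3
t = np.arange(-30.0, 30.0, dt)
# The imaginary part (odd in t) integrates to 0, so only cos(t x) contributes.
integrand = t**2 * np.cos(t * x) * np.exp(-t**2 / 2)
lhs = integrand.sum() * dt / (2 * math.pi)

phi = math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)
rhs = (1 - x**2) * phi  # equals -phi''(x), i.e. i^2 phi''(x)

assert abs(lhs - rhs) < 1e-6
```

The Riemann sum is spectrally accurate here because the integrand and all its derivatives vanish at the truncation points; this is the one-dimensional prototype of the passage from \eqref{MAJOO}-type integrals to derivatives of $\Phi$.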
\begin{proposition}\label{TLL} Let $K$ be a positive integer and let $p>1$ be a real number. There exists $c>0$ such that, for any $k\ge 1$, if $u,v:\bar M\rightarrow \mathbb C$ are respectively $\mathcal{Z}_{-k}^{k}$-measurable and $\mathcal{Z}_{-k}^\infty$-measurable, then for any $n > 3k$ and $\ell\in\mathbb{Z}^2$ \begin{eqnarray} &\ &\left| \mathbb E_{\bar\mu}\left[u\mathbf 1_{\{S_n=\ell\}}.v\circ \bar T^{n}\right] -\sum_{m=0}^{2K-2}\frac {1}{m!}\sum_{j=0}^{2K-2-m} \frac {i^{m+2j}}{(2j)!}\frac{\Phi^{(m+2j)}\left(\frac \ell{\sqrt{n}}\right)}{n^{j+1+\frac{m}2}}*(A_m(u,v)\otimes(\lambda^n/a^n)_0^{(2j)}) \right|\nonumber\\ &\ & \le \frac{ck^{2K-1}\Vert v\Vert_p\, \Vert u\Vert_\infty}{n^{K+\frac14}}\, , \label{formuleTLL} \end{eqnarray} with, for every $m\in\{0,...,4K-4\}$, \begin{equation}\label{Am} \left| A_m(u,v)- \frac{\partial^m}{\partial t^m}\left(\frac{\mathbb E_{\bar\mu}[u.e^{it*S_n}.v\circ \bar T^n]}{\lambda_t^n}\right)_{|t=0} \right|\le cn^m\vartheta^{n-2k}\Vert v\Vert_p\Vert u\Vert_\infty\, , \end{equation} \begin{equation}\label{MAJO} \left| A_m(u,v)\right|\le c\, k^m\Vert v\Vert_p\Vert u\Vert_\infty\quad\mbox{and}\quad(\lambda^n/a^n)_0^{(m)}=O(n^{m/4}). \end{equation} In particular, for $K=2$, we obtain \begin{eqnarray*} &\ &\left| \mathbb E_{\bar\mu}\left[u\mathbf 1_{\{S_n=\ell\}}.v\circ \bar T^{n}\right] -\frac{\Phi(\frac\ell{\sqrt{n}})}{n}A_0(u,v) - \frac i{n^{\frac 32}}\Phi'(\frac\ell{\sqrt{n}}) * A_1(u,v)\right.\\ &\ &\left. + \frac 1{2n^2}\Phi''(\frac \ell{\sqrt{n}}) * A_2(u,v) -\frac {A_0(u,v)}{24\, n^3}\Phi^{(4)}\left(\frac \ell{\sqrt{n}}\right)*(\lambda^n/a^n)^{(4)}_0\right|\\ &\ & \le \frac{ck^3\Vert v\Vert_p\, \Vert u\Vert_\infty}{n^\frac94}\, .
\end{eqnarray*} \end{proposition} \begin{remark}\label{FORMULETLL2} Due to \eqref{Am} and \eqref{MAJO}, \eqref{formuleTLL} can be rewritten as follows: \begin{eqnarray*} &\ &\left| \mathbb E_{\bar\mu}\left[u\mathbf 1_{\{S_n=\ell\}}.v\circ \bar T^{n}\right] -\sum_{m=0}^{4K-4}\frac {i^m}{m!} \frac{\Phi^{(m)}(\frac\ell{\sqrt{n}})}{n^{1+\frac m2}}*\left(e^{\frac n2\Sigma^2*t^{\otimes 2}}\mathbb E_{\bar\mu}[u.e^{it*S_n}.v\circ \bar T^n]\right)^{(m)}_{|t=0}\right|\\ &\ & \le \frac{ck^{4K-4}\Vert v\Vert_p\, \Vert u\Vert_\infty}{n^{K+\frac14}}\, . \end{eqnarray*} \end{remark} \begin{proof}[Proof of Proposition \ref{TLL}] Since $u\circ \bar T^k$ is $\mathcal{Z}_{0}^{2k}$-measurable and $v\circ\bar T^{k}$ is $\mathcal{Z}_0^\infty$-measurable, there exist $\hat u,\hat v:\hat M\rightarrow\mathbb C$ such that $\hat u\circ\hat\pi = u\circ\bar T^k\circ \tilde\pi$ and $\hat v\circ\hat\pi = v\circ\bar T^k\circ\tilde\pi$, with $\hat u\in\mathcal B$. As in the proof of \cite[Prop. 4.1]{ps10}, we set \[ C_n(u,v,\ell):=\mathbb E_{\bar\mu}[u.\mathbf 1_{\{S_n=\ell\}}. v\circ \bar T^n]\, . \] Due to \eqref{EqClef}, we obtain \begin{eqnarray} C_n(u,v,\ell)&=&\frac 1{(2\pi)^2}\int_{[-\pi,\pi]^2}e^{-it*\ell}\mathbb E_{\bar\mu}[u. e^{it *S_n}.v\circ\bar T^n]\, dt\nonumber\\ &=&\frac 1{(2\pi)^2}\int_{[-\pi,\pi]^2}e^{-it*\ell}\mathbb E_{\hat\mu}[e^{it *\hat S_k}\hat v.\hat P_t^{n}(e^{-it*\hat S_k}\hat u)]\, dt \, . \end{eqnarray} Let $\Xi_{k,t}:=e^{it*\hat S_k}\Pi_t(e^{-it*\hat S_k}\cdot)$. We will write $\Xi^{(m)}_{k,0}$ for $\frac{\partial^m}{\partial t^m}(\Xi_{k,t})_{|t=0}$. 
Due to items (i) and (ii) of Proposition~\ref{pro:pertu} and due to \eqref{P2ku}, we obtain \begin{eqnarray} C_n(u,v,\ell)&=&\frac 1{(2\pi)^2}\int_{[-b,b]^2}e^{-it*\ell}\lambda_t^{n-2k} \mathbb E_{\hat\mu}[e^{it *\hat S_k}\hat v.\Pi_t\hat P_t^{2k}(e^{-it*\hat S_k}\hat u)]\, dt+O(\vartheta^{n-2k}\Vert u\Vert_\infty.\Vert v\Vert_p )\nonumber\\ &=&\frac 1{(2\pi)^2}\int_{[-b,b]^2}e^{-it*\ell}\lambda_t^{n}\mathbb E_{\hat\mu}[\hat v.{\Xi}_{k,t}\hat u]\, dt+O(\vartheta^{n-2k}\Vert u\Vert_\infty.\Vert v\Vert_p )\, , \end{eqnarray} since $\Pi_t \hat P_t=\lambda_t\Pi_t$ and $\Pi_t^2=\Pi_t$ so that \begin{equation}\label{formuleXi} \Xi_{k,t}=\lambda_t^{-2k}e^{it*\hat S_k}\Pi_t\hat P_t^{2k}(e^{-it*\hat S_k}\cdot)\, . \end{equation} Observe that \begin{equation}\label{MAJOO} \frac{1}{(2\pi)^2}\int_{[-b,b]^2}|t|^{j}|\lambda_t|^{n}\, dt\le \frac{1}{(2\pi)^2 n^{\frac {j+2}2}}\int_{[-b\sqrt{n},b\sqrt{n}]^2}|t|^{j}e^{-\sigma |t|^2}\, dt\, , \end{equation} and so \begin{equation} C_n(u,v,\ell)=\frac 1{(2\pi)^2}\int_{[-b,b]^2}e^{-it*\ell}\lambda_t^{n} \sum_{m=0}^{2K-2}\frac 1{m!}A_m(u,v)*t^{\otimes m}\, dt+ O\left(\frac{k^{2K-1}\Vert v\Vert_p\, \Vert u\Vert_\infty} {n^{K+\frac 12}}\right)\, , \end{equation} with $A_m(u,v):=\mathbb E_{\hat\mu}[\hat v.\Xi^{(m)}_{k,0}\hat u]$. Indeed $\Xi^{(2K-1)}_{k,s}\hat u$ is a linear combination of terms of the form $$e^{is*\hat S_k}.(i\hat S_k)^{\otimes a}\otimes \Pi^{(b)}_s\hat P^{2k}((i\hat S_k\circ\hat T^k)^{\otimes c}e^{is*\hat S_k\circ\hat T^k}\hat u)\otimes(\lambda^{-2k})^{(d)}_s$$ over nonnegative integers $a,b,c,d$ such that $a+b+c+d=2K-1$, and these terms are in $O(k^{2K-1}\Vert u\Vert_\infty)$ in $\mathcal B$, uniformly in $k$.
Moreover, due to \eqref{formuleXi}, to \eqref{EqClef} and to Item (i) of Proposition \ref{pro:pertu}, we obtain \begin{eqnarray*} \forall t\in[-b,b]^2,\quad \mathbb E_{\hat\mu}[\hat v.\Xi_{k,t}.\hat u]&=&\frac{\lambda_t^{n-2k}\mathbb E_{\hat\mu}[\hat v.e^{it*\hat S_k}\Pi_t\hat P_t^{2k}(e^{-it*\hat S_k}\hat u)]}{\lambda_t^n}\\ &=& \frac{\mathbb E_{\bar\mu}[u.e^{it* S_n}. v\circ\bar T^n]-\mathbb E_{\hat\mu}[e^{it*\hat S_k}\hat v.N_{t}^{n-2k}\hat P_t^{2k}(e^{-it*\hat S_k}\hat u)]}{\lambda_t^n}\, , \end{eqnarray*} so that \[ \mathbb E_{\hat\mu}[\hat v.\Xi_{k,0}^{(m)}.\hat u] =\left(\frac{\mathbb E_{\bar\mu}[u.e^{it*S_n}.v\circ \bar T^n]}{\lambda_t^n}\right)^{(m)}_{|t=0} +O\left(n^m\vartheta^{n-2k}\Vert v\Vert_p\Vert u\Vert_\infty\right)\, . \] Recall that $a_t= e^{-\frac{1}2\Sigma^2*t^{\otimes 2}}$. Since the first three derivatives of $\lambda$ and $a$ coincide, we have $(\lambda^n/a^n)_0^{(j)}=O(n^{j/4})$ and \[ \left\vert \lambda_t^{n} - a_t^{n}\sum_{j=0}^{4K-4-2m}\frac 1{j!}(\lambda^n/a^n)_0^{(j)}*t^{\otimes j} \right\vert\\ \le c_K n^{\frac{4K-3-2m}4}a_t^n |t|^{4K-3-2m}. \] Due to the analogue of \eqref{MAJOO} with $\lambda_t$ replaced by $a_t$, we obtain \begin{eqnarray*}C_n(u,v,\ell)&=&\frac 1{(2\pi)^2}\int_{[-b,b]^2}e^{-it*\ell}e^{-\frac{n}2\Sigma^2* t^{\otimes 2}} \sum_{m=0}^{2K-2}\frac 1{m!}A_m(u,v)*t^{\otimes m}\\ &\ &\left(1+\sum_{j=4}^{4K-4-2m}\frac 1{j!} (\lambda^n/a^n)_0^{(j)}*t^{\otimes j}\right)\, dt + O\left(\frac{k^{2K-1}\Vert v\Vert_p\, \Vert u\Vert_\infty} {n^{K+\frac14}}\right).
\end{eqnarray*} Note that \begin{eqnarray} &\ &\frac 1{(2\pi)^2}\int_{[-b,b]^2}e^{-it*\ell}e^{-\frac{n}2\Sigma^2* t^{\otimes 2}}t^{\otimes m}\, dt\nonumber\\ &=&\frac 1{(2\pi)^2\, n^{\frac m2+1}}\int_{[-b\sqrt{n},b\sqrt{n}]^2}e^{-it*\frac\ell{\sqrt{n}}}e^{-\frac{1}2\Sigma^2* t^{\otimes 2}}t^{\otimes m}\, dt\nonumber\\ &=&\frac {i^m}{n^{\frac m2+1}} \Phi^{(m)}\left(\frac \ell {\sqrt{n}}\right)+o(n^{-K-\frac 14}).\label{INT} \end{eqnarray} Hence we have proved that \begin{eqnarray*} &\ &\left| \mathbb E_{\bar\mu}\left[u\mathbf 1_{\{S_n=\ell\}}.v\circ \bar T^{n}\right] -\sum_{m=0}^{2K-2}\frac{i^m}{m!}\frac{\Phi^{(m)}(\frac\ell{\sqrt{n}})}{n^{1+\frac m2}}*A_m(u,v)\right. \\ &\ &\left.-\sum_{m=0}^{2K-2} \sum_{j=4}^{4K-4-2m}\frac{i ^{m+j}}{m!\, j!\, n^{1+\frac{m+j}2}}\Phi^{(m+j)}\left(\frac \ell{\sqrt{n}}\right)*\left( A_m(u,v)\otimes (\lambda^n/a^n)_0^{(j)}\right)\right|\\ &\ & \le \frac{ck^{2K-1}\Vert v\Vert_p\, \Vert u\Vert_\infty}{n^{K+\frac14}}\, , \end{eqnarray*} and so \eqref{formuleTLL} using \eqref{INT} and the fact that the odd derivatives of $\lambda/a$ at $0$ vanish. \end{proof} \subsection{Generalization} \begin{proposition}\label{TLL2} Let $K$ be a positive integer. Let $\xi_0\in\left(\max(\xi,\vartheta),1\right)$.
There exists $\mathfrak c_0>0$ such that, for all functions $u,v:\bar M\rightarrow \mathbb C$ that are dynamically Lipschitz continuous with respect to $d_\xi$ with $\xi\in(0,1)$, and for every $\ell\in\mathbb{Z}^2$, \begin{eqnarray} &\ &\left| \mathbb E_{\bar\mu}\left[u\mathbf 1_{\{S_n=\ell\}}.v\circ \bar T^{n}\right] -\sum_{m=0}^{2K-2}\frac 1{m!}\sum_{j=0}^{2K-2-m} \frac {i^{m+2j}}{(2j)!}\frac{\Phi^{(m+2j)}\left(\frac \ell{\sqrt{n}}\right)}{n^{j+1+\frac{m}2}}*(A_{m}(u,v)\otimes(\lambda^n/a^n)_0^{(2j)}) \right|\nonumber\\ &\ & \ \le\mathfrak c_0\frac{(\log n)^{4K-2}}{n^{K+\frac 14}}{\Vert v\Vert_{(\xi)}\, \Vert u\Vert_{(\xi)}}\, ,\label{eq:TLLGENE} \end{eqnarray} with $A_m(u,v)$ such that \begin{equation}\label{Amlim} \left|A_{m}(u,v)-\left(\mathbb E_{\bar\mu}[u.e^{it*S_n}.v\circ \bar T^n]/\lambda_t^n\right)^{(m)}_{|t=0}\right| \le \mathfrak c_0\Vert u\Vert_{(\xi)}\, \Vert v\Vert_{(\xi)}\xi_0^{(\log n)^2} \end{equation} and $|A_m(u,v)|\le \mathfrak c_0\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}$. \end{proposition} \begin{proof} For every positive integer $k$, we define $$u_{k}:= \mathbb E_{\bar\mu}[u|\mathcal Z_{-k}^k]\quad\mbox{and}\quad v_{k}:= \mathbb E_{\bar\mu}[v|\mathcal Z_{-k}^k]\, .$$ Note that $$\Vert u-u_{k}\Vert_\infty\le L_\xi (u)\xi^{k},\quad \Vert v-v_{k}\Vert_\infty\le L_\xi (v)\xi^{k} \, ,$$ and \[ \left|\mathbb E_{\bar\mu}\left[u\mathbf 1_{\{S_n=\ell\}}.v\circ \bar T^{n}\right]-\mathbb E_{\bar\mu}\left[u_{k}\mathbf 1_{\{S_n=\ell\}}.v_{k}\circ \bar T^{n}\right]\right|\le \Vert u\Vert_{(\xi)} \, \Vert v\Vert_{(\xi)}\xi^k\, . \] Now we take $k=k_n=\lceil (\log n)^2\rceil$. Note that, for $n$ large enough, $n>3k_n$.
We set $$ A_{m,n}(u,v):=\left(\mathbb E_{\bar\mu}[u.e^{it*S_n}.v\circ \bar T^n]/\lambda_t^n\right)^{(m)}_{|t=0}\, .$$ Note that, for all integers $k,n>0$, \begin{eqnarray*} \left|A_{m,n}(u,v)-A_{m,n}(u_{k},v_{k})\right| &\le& \left\Vert \frac{\partial^m}{\partial t^m}\left(\frac {e^{it*S_n}}{\lambda_t^n}\right)_{|t=0}\right\Vert_{L^1(\bar\mu)}\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}\xi^{k}\\ &\le& \tilde c_mn^m\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}\xi^{k}\, . \end{eqnarray*} For all integers $n,n'$ such that $0<n\le n'\le 2n$, we have \begin{eqnarray*} &\ &\left|A_{m,n}(u,v)-A_{m,n'}(u,v)\right|\\ &\le&\left|A_{m,n}(u_{k_n},v_{k_n})-A_{m,n'}(u_{k_{n}},v_{k_{n}})\right|+(1+2^{m})\tilde c_mn^m\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}\xi^{k_n} \\ &\le&K_m\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}\xi_0^{k_n} \, , \end{eqnarray*} due to \eqref{Am}. Hence, we conclude that $(A_{m,n}(u,v))_n$ is a Cauchy sequence, so that $A_m(u,v)$ is well defined and $$|A_{m}(u,v)-A_{m,n}(u,v)|\le K_m\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}\sum_{j\ge 0}\xi_0^{k_{2^jn}}=O\left(\Vert u\Vert_{(\xi)}\, \Vert v\Vert_{(\xi)}\xi_0^{k_n}\right).$$ Applying Proposition \ref{TLL} to the couple $(u_{k_{n}},v_{k_{n}})$ then leads to \eqref{eq:TLLGENE}. \end{proof} \subsection{Proofs of our main results} \begin{theorem}\label{MAIN} Let $f,g:M\rightarrow\mathbb R$ be two bounded observables such that \[ \sum_{\ell\in\mathbb Z^2}\left(\Vert f \mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)}+\Vert g \mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)}\right)<\infty\, .
\] Then \begin{eqnarray} &\ &\int_Mf.g\circ T^n\, d\nu\nonumber\\ &=&\sum_{m=0}^{2K-2}\frac 1{m!}\sum_{j=0}^{2K-2-m} \frac {i^{m+2j}}{(2j)!}\frac{\sum_{\ell,\ell'\in\mathbb Z^2}\Phi^{(m+2j)}\left(\frac {\ell'-\ell}{\sqrt{n}}\right)*(A_{m}(u_\ell,v_{\ell'}))}{n^{j+1+\frac{m}2}}*(\lambda^n/a^n)_0^{(2j)} +o(n^{-K})\, ,\label{decorrelation1} \end{eqnarray} with $u_{\ell}(q,\vec v)=f(q+\ell,\vec v)$ and $v_{\ell}(q,\vec v)=g(q+\ell,\vec v)$ and with $A_m(u,v)$ given by \eqref{Amlim}. If moreover, $\sum_{\ell\in\mathbb Z^2}|\ell|^{2K-2}(\Vert f\mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)}+\Vert g\mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)})<\infty$, then \begin{eqnarray} \int_Mf.g\circ T^n\, d\nu =\sum_{L=0}^{K-1}\frac{\tilde c_L}{n^{1+L}} \sum_{j=0}^{2K-2-2L}(-1)^j \frac {\Phi^{(2j+2L)}(0)}{(2j)!n^j}*(\lambda^n/a^n)_0^{(2j)}+o(n^{-K})\, \label{decorrelation2} \end{eqnarray} with \begin{eqnarray*} \tilde c_L(f,g)&:=&\sum_{r,m\ge 0\ :\ r+m=2L}\frac {i^m}{m!\, r!}\sum_{\ell,\ell'\in\mathbb Z^2}(\ell'-\ell)^{\otimes r}\otimes A_{m}(u_\ell,v_{\ell'})\, . \end{eqnarray*} \end{theorem} Since $(\lambda^n/a^n)_0^{(2j)}=O(n^{j/2})$, we conclude that: \begin{remark}\label{RQE} Assume $\sum_{\ell\in\mathbb Z^2}|\ell|^{2K-2}(\Vert f\mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)}+\Vert g\mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)})<\infty$ and $\int_Mf.g\circ T^n\, d\nu=O(n^{-K})$. Then $$\int_Mf.g\circ T^n\, d\nu=\frac{\Phi^{(2K-2)}(0)*\tilde c_{K-1}(f,g)}{n^{K}}+o(n^{-K}) \, ,$$ and $\tilde c_{K-1}(f,g)=\lim_{n\rightarrow +\infty}\frac {(-1)^{K-1}}{(2K-2)!}\sum_{\ell,\ell'\in\mathbb Z^2}\mathbb E_{\bar\mu}\left[u_\ell.\frac{\partial^{2K-2}}{\partial t^{2K-2}}\left(\lambda_t^{-n}e^{it*(S_n-(\ell'-\ell))}\right)_{|t=0}. v_{\ell'}\circ\bar T^n\right]$.
\end{remark} \begin{corollary}\label{coroMAIN} Under the assumptions of Theorem \ref{MAIN} ensuring \eqref{decorrelation2}, if $\sum_{\ell\in\mathbb Z^2}|\ell|^{4K-4}(\Vert f\mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)}+\Vert g\mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)})<\infty$, then, using the fact that $(\lambda^n/a^n)_0^{(2j)}=O(n^{j/2})$ as in Remarks \ref{DEVASYMP} and \ref{FORMULETLL2}, the right hand side of \eqref{decorrelation2} can be rewritten as \[ n^{-\frac d2} \sum_{\ell,\ell'\in\mathbb Z^2}\sum_{L=0}^{4K-4}\frac{\Phi^{(L)}(0)}{L!} i^L\frac{\partial^L}{\partial t^L}\left(\mathbb E_{\bar\mu}\left[u_\ell. e^{it*\frac{(S_n-(\ell'-\ell))}{\sqrt{n}}}.v_{\ell'}\circ\bar T^n\right]e^{\frac{1}2\Sigma^2*t^{\otimes 2}}\right)_{|t=0} +o(n^{-K})\, . \] \end{corollary} \begin{proof}[Proof of Theorem \ref{MAIN}] We have $$\int_M f.g\circ T^n \, d\nu=\sum_{\ell,\ell'\in\mathbb Z^2} \mathbb E_{\bar \mu}[u_\ell \mathbf 1_{\{S_n=\ell'-\ell\}}v_{\ell'}\circ \bar T^n].$$ Hence, \eqref{decorrelation1} follows directly from Proposition \ref{TLL2}. Due to the dominated convergence theorem, \begin{eqnarray*} &\ &\lim_{n\rightarrow+\infty}n^{K-1-\frac{m+j}2}\sum_{\ell,\ell'\in\mathbb Z^2}\left(\Phi^{(m+j)}\left(\frac{\ell'-\ell}{\sqrt{n}}\right)-\sum_{r=0}^{2K-2-m-(j/2)}\frac {\Phi^{(m+j+r)}(0)}{r!}*\left(\frac{\ell'-\ell}{\sqrt{n}}\right)^{\otimes r}\right)\\ &\ &\ \ \ \ \ \ \ *\left((\lambda^n/a^n)_0^{(j)}\otimes A_{m}(u_\ell,v_{\ell'})\right)=0\, , \end{eqnarray*} (where we used \eqref{MAJO}) and to the fact that the odd derivatives of $\Phi$ are null and that $\Phi^{(2k)}(0)=(-\Sigma^{-2})^{\otimes k}\Phi(0)$.
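For orientation, we recall the one-dimensional analogue of this Gaussian-derivative identity: if $\varphi_\sigma(x)=e^{-\frac{x^2}{2\sigma^2}}/(\sigma\sqrt{2\pi})$, then

```latex
\varphi_\sigma'(x)=-\frac{x}{\sigma^2}\,\varphi_\sigma(x)\qquad\mbox{and}\qquad \varphi_\sigma^{(2k)}(0)=(-1)^k\frac{(2k)!}{2^k\, k!}\,\sigma^{-2k}\,\varphi_\sigma(0)\, ,
```

while the odd derivatives of $\varphi_\sigma$ vanish at $0$ by symmetry; in the multivariate setting, $\sigma^{-2}$ is replaced by the inverse covariance.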
Therefore \begin{eqnarray*} \int_Mf.g\circ T^n\, d\nu &=&\sum_{m=0}^{2K-2}\sum_{r=0:r+m\in 2\mathbb Z}^{2K-2-m}\frac {\Phi(0)}{m!\, r!}\sum_{j=0}^{2K-2-m-r} \frac {(-1)^{j}}{(2j)!}\left(\frac{(-\Sigma^{-2})^{\otimes (j+\frac{m+r}2)}}{n^{j+1+\frac{m+r}2}}*(\lambda^n/a^n)_0^{(2j)}\right)\nonumber\\ &\ &*\sum_{\ell,\ell'\in\mathbb Z^2}i^m(\ell'-\ell)^{\otimes r}\otimes A_{m}(u_\ell,v_{\ell'}) +o(n^{-K})\, , \end{eqnarray*} which ends the proof of \eqref{decorrelation2}. \end{proof} \begin{proof}[Proof of Theorem \ref{PRINCIPAL}] This comes from \eqref{decorrelation2} combined with the fact that $(\lambda^n/a^n)_0^{(2j)}$ is a polynomial in $n$ of degree bounded by $j/2$. \end{proof} \begin{proof}[Proof of Theorem \ref{MAINbis}] Due to \eqref{decorrelation2} of Theorem \ref{MAIN}, we obtain \eqref{MAIN2} with $$ \tilde{\mathfrak A}_2(f,g)=\mathfrak a_{2,0,0}(f,g)+\mathfrak a_{0,2,0}(f,g)+\mathfrak a_{1,1,0}(f,g)\, , $$ where $\mathfrak a_{m,r,j}(f,g)$ corresponds to the contribution of the $(m,r,j)$-term in the sum of the right hand side of \eqref{decorrelation2}. Moreover, due to Proposition \ref{Amn!!}, \begin{eqnarray*} \mathfrak a_{2,0,0}(f,g)&= &\sum_{\ell,\ell'\in\mathbb Z^2}A_2(u_\ell,v_{\ell'})\\ &=&-\lim_{n\rightarrow+\infty}\left\{ \int_Mf\, d\nu\, \sum_{j,m= -n}^{-1} \int_M g.(\kappa\circ T^j\otimes \kappa\circ T^m-\mathbb E_{\bar\mu}[\kappa\circ \bar T^j\otimes \kappa\circ\bar T^m])\, d\nu\right.\\ &\ &+\int_M g\, d\nu\, \sum_{j,m= 0}^{n-1}\int_M f .[\kappa\circ T^j\otimes\kappa\circ T^m -\mathbb E_{\bar\mu}[\kappa\circ T^j\otimes\kappa\circ T^m]]\, d\nu\nonumber\\ &\ & +2\sum_{r= 0}^{n-1}\int_M f.\kappa \circ T^r\, d\nu \otimes\sum_{m= -n}^{-1}\int_M g.\kappa\circ T^m\, d\nu\nonumber\\ &\ & \left.
+\int_Mf\, d\nu\, \int_Mg\, d\nu (\mathbb E_{\bar\mu}[S_n^{\otimes 2}]-n\Sigma^2)\right\}\, , \end{eqnarray*} \[ \mathfrak a_{0,2,0}(f,g) =-\sum_{\ell,\ell'\in\mathbb Z^2}A_0(u_\ell,v_{\ell'}).(\ell'-\ell)^{\otimes 2}=- \sum_{\ell,\ell'\in\mathbb Z^2}(\ell'-\ell)^{\otimes 2}\int_{\mathcal C_\ell}f\, d\nu\, \int_{\mathcal C_{\ell'}}g\, d\nu\, , \] \begin{eqnarray*} \mathfrak a_{1,1,0}(f,g)&=&-2i\sum_{\ell,\ell'\in\mathbb Z^2}A_1(u_\ell,v_{\ell'})\otimes(\ell'-\ell)\\ &= &2\lim_{n\rightarrow +\infty}\left\{\sum_{\ell,\ell'\in\mathbb Z^2}\int_{\mathcal C_{\ell'}}g\, d\nu\sum_{r= 0}^{n-1}\int_{\mathcal C_\ell}f.((\ell'-\ell)\otimes \kappa\circ T^r)\, d\nu\right.\\ &\ &\left.+\sum_{\ell,\ell'\in\mathbb Z^2}\int_{\mathcal C_{\ell}}f\, d\nu\sum_{m=-n}^{-1}\int_{\mathcal C_{\ell'}}g.((\ell'-\ell)\otimes \kappa\circ T^m)\, d\nu\right\}\, . \end{eqnarray*} For the contribution of the term with $(m,r,j)=(0,0,2)$, note that $$(\lambda^n/a^n)^{(4)}_{0}=n(\lambda/a)_0^{(4)}=n(\lambda_0^{(4)}-3(\Sigma^2)^{\otimes 2}).$$ Moreover, due to Proposition \ref{lambda04}, \[ \lambda_0^{(4)}-3(\Sigma^2)^{\otimes 2}= \lim_{n\rightarrow +\infty}\frac{\mathbb E_{\bar\mu}[S_n^{\otimes 4}]-3n^2(\Sigma^2)^{\otimes 2}}{n}+6\Sigma^2\otimes B_0=\Lambda_4\, . \] Note that \begin{eqnarray*} \mathfrak a_{2,0,0}(f,g)&= &-\lim_{n\rightarrow+\infty}\left\{ \int_Mf\, d\nu\, \int_M g((\mathcal I_0-\mathcal I_{-n})^{\otimes 2}-\mathbb E_{\bar\mu}[S_n^{\otimes 2}])\, d\nu\right.\\ &\ &+\int_M g\, d\nu\, \int_M f((\mathcal I_n-\mathcal I_{0})^{\otimes 2}-\mathbb E_{\bar\mu}[S_n^{\otimes 2}])\, d\nu\nonumber\\ &\ & +2\int_Mf(\mathcal I_n-\mathcal I_0)\, d\nu \otimes\int_Mg(\mathcal I_0-\mathcal I_{-n})\, d\nu\nonumber\\ &\ & \left. 
-\int_Mf\, d\nu\, \int_Mg\, d\nu \, \mathfrak B_0\right\}\, , \end{eqnarray*} \begin{eqnarray*} \mathfrak a_{0,2,0}(f,g) &=& -\int_M f.\mathcal I_0^{\otimes 2}\, d\nu\, \int_Mg\, d\nu- \int_Mf\, d\nu\, \int_M g.\mathcal I_0^{\otimes 2}\, d\nu+2 \int_Mf\mathcal I_0\, d\nu\otimes\int_Mg\mathcal I_0\, d\nu \end{eqnarray*} and \begin{eqnarray*} \mathfrak a_{1,1,0}(f,g) &= &\lim_{n\rightarrow +\infty}\left\{2\int_M g\mathcal I_0\, d\nu\otimes \int_M f(\mathcal I_n-\mathcal I_0)\, d\nu-2\int_M g\, d\nu\, \int_M f.\mathcal I_0\otimes(\mathcal I_n-\mathcal I_0)\, d\nu\right.\\ &\ &\left.+2\int_Mf\, d\nu\, \int_Mg.\mathcal I_0\otimes(\mathcal I_0-\mathcal I_{-n})\, d\nu-2\int_Mf\mathcal I_0\, d\nu\otimes \int_Mg(\mathcal I_0-\mathcal I_{-n})\, d\nu\right\}\, . \end{eqnarray*} Hence we have proved \eqref{MAIN2} with \[ \tilde{\mathfrak A}_2(f,g):= -\int_Mf\, d\nu \, \tilde{\mathfrak B}_2^-(g)-\int_Mg\, d\nu\, \tilde{\mathfrak B}_2^+(f)+\int_Mf\, d\nu\int_Mg\, d\nu\, {\mathfrak B}_0+2\, {\mathfrak B}_1^+(f)\otimes{\mathfrak B}_1^-(g)\, , \] with \begin{eqnarray*} \tilde{\mathfrak B}_2^+(f)&:=&\lim_{m\rightarrow +\infty} \int_Mf\left(\mathcal I_m^{\otimes 2}-\mathbb E_{\bar\mu}[S_m^{\otimes 2}]\right)\, d\nu \, , \end{eqnarray*} \begin{eqnarray*} \tilde{\mathfrak B}_2^-(g)&:=&\lim_{m\rightarrow -\infty}\int_M g\left(\mathcal I_m^{\otimes 2}-\mathbb E_{\bar\mu}[S_m^{\otimes 2}]\right)\, d\nu\, . \end{eqnarray*} \end{proof} \begin{remark}\label{MAINter} Let $f,g:M\rightarrow\mathbb R$ be two bounded observables such that \begin{equation} \sum_{\ell\in\mathbb Z^2}|\ell|^4\left(\Vert f \mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)}+\Vert g \mathbf 1_{\mathcal C_\ell}\Vert_{(\xi)}\right)<\infty\, . \end{equation} Assume moreover that $\int_Mf\, d\nu\, \int_Mg\, d\nu=0$ and that $\tilde {\mathfrak A}_2(f,g)=0$.
Due to Remark \ref{RQE}, \begin{eqnarray*} &\ &\int_Mf.g\circ T^n\, d\nu\\ &\ &= \frac {(\Sigma^{-2})^{\otimes 2}} {2\pi\sqrt{\det \Sigma^2}n^3} *\sum_{\ell,\ell'\in\mathbb Z^2}\left(\frac{A_4(u_\ell,v_{\ell'})}{24} +\frac {A_0(u_\ell,v_{\ell'})}{24}({\ell'-\ell})^{\otimes 4}+\frac {i\, A_1(u_\ell,v_{\ell'})}6\otimes(\ell'-\ell)^{\otimes 3}\right. \\ &\ &\left.-\frac 14A_2(u_\ell,v_{\ell'})\otimes(\ell'-\ell)^{\otimes 2} - \frac i6 A_3(u_\ell,v_{\ell'})\otimes(\ell'-\ell)\right) +o(n^{-3})\, , \end{eqnarray*} where $u_\ell(q,\vec v):=f(q+\ell,\vec v)$ and $v_\ell(q,\vec v):=g(q+\ell,\vec v)$. \end{remark} \begin{proof}[Proof of Proposition \ref{casparticulier}] We apply Remark \ref{MAINter}. Using the definitions of $A_0$ and $A_1$, we observe that $$\forall \ell,\ell'\in\mathbb Z^2,\quad A_0(u_\ell,v_{\ell'})=A_1(u_{\ell},v_{\ell'})=0 $$ (since $\mathbb E_{\bar\mu}[u_\ell]=\mathbb E_{\bar\mu}[v_{\ell'}]=0$) and $$\sum_{\ell,\ell'\in\mathbb Z^2}A_4(u_{\ell},v_{\ell'})=A_4\left(\sum_{\ell\in\mathbb Z^2}u_\ell,\sum_{\ell'\in\mathbb Z^2}v_{\ell'}\right) =0\, .$$ Moreover $$\sum_{\ell,\ell'\in\mathbb Z^2}A_3(u_\ell,v_{\ell'})\otimes(\ell'-\ell) = \sum_{\ell,\ell'\in\mathbb Z^2}h_\ell q_{\ell'}A_3(f_0,g_0)\otimes(\ell'-\ell)=0$$ since $\sum_{\ell\in\mathbb Z^2} h_\ell=\sum_{\ell}q_\ell=0$. 
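The interchange of the sums over $\ell,\ell'$ with $A_4$ used here rests on the bilinearity of $(u,v)\mapsto A_m(u,v)$ together with the bound $|A_m(u,v)|\le \mathfrak c_0\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}$ of Proposition \ref{TLL2}, which ensures absolute convergence:

```latex
\sum_{\ell,\ell'\in\mathbb Z^2}\left|A_4(u_\ell,v_{\ell'})\right|\le \mathfrak c_0\sum_{\ell\in\mathbb Z^2}\Vert u_\ell\Vert_{(\xi)}\, \sum_{\ell'\in\mathbb Z^2}\Vert v_{\ell'}\Vert_{(\xi)}<\infty\, ,
```

so that the double series may be summed in any order and factorized as above.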
Therefore \begin{eqnarray*} &\ &\int_Mf.g\circ T^n\, d\nu\\ &\ &= -\frac 14\frac {(\Sigma^{-2})^{\otimes 2}} {2\pi\sqrt{\det \Sigma^2}n^3} *\sum_{\ell,\ell'\in\mathbb Z^2}A_2(u_\ell,v_{\ell'})\otimes(\ell'-\ell)^{\otimes 2} +o(n^{-3})\\ &\ &= \frac 12\frac {(\Sigma^{-2})^{\otimes 2}} {2\pi\sqrt{\det \Sigma^2}n^3} *\sum_{\ell,\ell'\in\mathbb Z^2}h_\ell q_{\ell'}A_2(f_0,g_0)\otimes \ell\otimes\ell' +o(n^{-3})\\ &\ &= \frac 12\frac {(\Sigma^{-2})^{\otimes 2}} {2\pi\sqrt{\det \Sigma^2}n^3} A_2(f_0,g_0)\otimes \sum_{\ell\in\mathbb Z^2}h_\ell.\ell \otimes\sum_{\ell'\in\mathbb Z^2} q_{\ell'}.\ell' +o(n^{-3})\\ &\ &= -\frac {(\Sigma^{-2})^{\otimes 2}} {2\pi\sqrt{\det \Sigma^2}n^3}*\left( \sum_{j\ge 0}\mathbb E_{\bar\mu}[f_0.\kappa\circ\bar T^j]\otimes \sum_{m\le -1}\mathbb E_{\bar\mu}[g_0.\kappa\circ \bar T^m]\otimes \sum_{\ell\in\mathbb Z^2}h_\ell.\ell \otimes\sum_{\ell'\in\mathbb Z^2} q_{\ell'}.\ell' \right) +o(n^{-3})\\ &\ &= -\frac {(\Sigma^{-2})^{\otimes 2}} {2\pi\sqrt{\det \Sigma^2}n^3}*\left( \sum_{j\ge 0}\int_Mf\mathcal I_0\otimes \kappa\circ T^j\, d\nu\otimes \sum_{m\le -1}\int_Mg\mathcal I_0\otimes \kappa\circ T^m\, d\nu\right)+o(n^{-3})\, . 
\end{eqnarray*} \end{proof} \section{Proof of the mixing result in the infinite horizon case}\label{sec:infinite} \begin{proof}[Proof of Theorem \ref{horizoninfini}] In \cite{SV2}, Sz\'asz and Varj\'u implemented the Nagaev-Guivarc'h perturbation method via the Keller-Liverani theorem \cite{KL} to prove that Hypothesis \ref{HHH} holds true for the dynamical system $(\hat M,\hat\mu,\hat T)$ with the Young Banach space $\mathcal B$, with $\mathcal B_0:=\mathbb L^1(\hat \mu)$ and with $\lambda$ having the following expansion: $$ \lambda_t-1\sim \Sigma_\infty^2*(t^{\otimes 2})\log |t|\, .$$ Hence Hypothesis \ref{HHH1} also holds true, with $\Theta_n=\sqrt{n\log n}\, Id$ and with $Y$ a Gaussian random variable with distribution $\mathcal N(0,\Sigma_\infty^2)$ with density function $\Phi(x)=\exp(-\frac 12(\Sigma_\infty^2)^{-1}*x^{\otimes 2})/ (2\pi\sqrt{\det\Sigma_\infty^2})$. Let $k_n:=\lceil \log^2n\rceil$. Let $u_{n}(x)$ and $v_{n}(x)$ denote the conditional expectations of $f$ and $g$, respectively, over the connected component of $M\setminus\bigcup_{m=-k_n}^{k_n}T^{-m}\mathcal S_0$ containing $x$. First note that \begin{equation}\label{horinf1} \int_M f.g\circ T^n\, d\nu =\int_M u_n.v_n\circ T^n\, d\nu+O\left( \left(L_\xi(f)\int_M|g|\, d\nu+L_\xi(g)\int_M|f|\, d\nu\right)\xi^{k_n}\right)\, . \end{equation} As noticed in Proposition \ref{pro:pertu2b}, there exist $\hat f_n,\hat g_n:\hat M\times\mathbb Z^2\rightarrow\mathbb C$ such that $$\forall \tilde x\in\tilde M,\quad\hat f_n (\hat \pi(\tilde x),\ell)=u_{n}(\bar T^{k_n}(\tilde\pi (\tilde x))+\ell)\, ,$$ $$\forall \tilde x\in\tilde M,\quad\hat g_n (\hat \pi(\tilde x),\ell)=v_{n}(\bar T^{k_n}(\tilde\pi (\tilde x))+\ell)\, ,$$ with the notation $(q,\vec v)+\ell=(q+\ell,\vec v)$ for every $(q,\vec v)\in \bar M$.
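Before carrying out the computation, let us sketch heuristically why this expansion of $\lambda_t$ dictates the normalization $\mathfrak a_n:=\sqrt{(n-2k_n)\log (n-2k_n)}$ used below: for fixed $u\ne 0$,

```latex
(n-2k_n)\left(\lambda_{u/\mathfrak a_n}-1\right)\sim (n-2k_n)\, \Sigma_\infty^2*\frac{u^{\otimes 2}}{\mathfrak a_n^2}\, \log\frac{|u|}{\mathfrak a_n}=\Sigma_\infty^2* u^{\otimes 2}\, \frac{\log |u|-\log \mathfrak a_n}{\log (n-2k_n)}\longrightarrow -\frac 12\, \Sigma_\infty^2*u^{\otimes 2}\, ,
```

since $\log \mathfrak a_n\sim\frac 12\log (n-2k_n)$, so that $\lambda_{u/\mathfrak a_n}^{n-2k_n}\rightarrow e^{-\frac 12\Sigma_\infty^2*u^{\otimes 2}}$, the Gaussian factor appearing in the computation below.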
For $n$ large enough, $n>3k_n$ and, due to \eqref{EqClef}, \begin{eqnarray*} &\ &\int_M\, u_{n}.v_{n}\circ T^n\, d\nu =\sum_{\ell,\ell'\in\mathbb Z^2}\mathbb E_{\bar\mu}[u_{n}(\cdot+\ell).\mathbf 1_{S_n=\ell'-\ell}.v_{n}(\bar T^n(\cdot)+\ell')]\nonumber\\ &\ &=\sum_{\ell,\ell'\in\mathbb Z^2}\frac 1{(2\pi)^2}\int_{[-\pi,\pi]^2}e^{-it*(\ell'-\ell)}\mathbb E_{\bar\mu}[u_{n}(\cdot+\ell).e^{it*S_n} .v_{n}(\bar T^n(\cdot)+\ell')]\, dt\nonumber\\ &\ &=\sum_{\ell,\ell'\in\mathbb Z^2}\frac 1{(2\pi)^2}\int_{[-\pi,\pi]^2}e^{-it*(\ell'-\ell)}\mathbb E_{\hat\mu}[\hat G_{n,t}(\cdot,\ell')\hat P_t^{n-2k_n}\hat P^{2k_n}(\hat F_{n,t}(\cdot,\ell))]\, dt\, ,\nonumber\end{eqnarray*} where $\hat F_{n,t},\hat G_{n,t}:\hat M\times\mathbb Z^2\rightarrow\mathbb C$ are the functions defined by $$\hat F_{n,t}(\hat x,\ell):=\hat f_n(\hat x,\ell).e^{it*\hat S_{k_n}(\hat T^{k_n}(\hat x))},$$ $$\hat G_{n,t}(\hat x,\ell):=\hat g_n(\hat x,\ell).e^{it*\hat S_{k_n}(\hat x)}.$$ Moreover $\sup_{n,t}\Vert \hat P^{2k_n}\hat F_{n,t}(\cdot,\ell)\Vert\le (1+2\beta^{-1})\Vert f\mathbf 1_{\mathcal C_\ell}\Vert_\infty$. Hence, due to Hypothesis \ref{HHH}, \begin{eqnarray*} &\ &\int_M\, u_{n}.v_{n}\circ T^n\, d\nu\\ &\ &=O(\vartheta^{n-2k_n})+\sum_{\ell,\ell'\in\mathbb Z^2}\frac 1{(2\pi)^2}\int_{[-\pi,\pi]^2}e^{-it*(\ell'-\ell)}\mathbb E_{\hat\mu}[\hat G_{n,t}(\cdot,\ell')\lambda_t^{n-2k_n}\Pi_t \hat P^{2k_n}(\hat F_{n,t}(\cdot,\ell))]\, dt\nonumber\\ &\ &=O(\vartheta^{n-2k_n})+\sum_{\ell,\ell'\in\mathbb Z^2}\frac 1{\mathfrak a_n^2(2\pi)^2}\int_{[-\mathfrak a_n\pi,\mathfrak a_n\pi]^2}e^{-iu*\frac{\ell'-\ell}{\mathfrak a_n}}\mathbb E_{\hat\mu}[\hat G_{n,u/\mathfrak a_n}(\cdot,\ell')\lambda_{u/\mathfrak a_n}^{n-2k_n}\Pi_{u/\mathfrak a_n} \hat P^{2k_n}(\hat F_{n,u/\mathfrak a_n}(\cdot,\ell))]\, du\nonumber\\ &\ &=o(\mathfrak a_n^{-2})+\sum_{\ell,\ell'\in\mathbb Z^2}\frac 1{\mathfrak a_n^2(2\pi)^2}\int_{[-\mathfrak a_n\pi,\mathfrak a_n\pi]^2}\!\!\!\!\!\!\!
\mathbb E_{\hat\mu}[\hat G_{n,0}(\cdot,\ell')e^{-\frac12 \Sigma_\infty^2*u^{\otimes 2}}\Pi_0 \hat P^{2k_n}(\hat F_{n,u/\mathfrak a_n}(\cdot,\ell))]\, du\, \\ &\ &=o(\mathfrak a_n^{-2})+\sum_{\ell,\ell'\in\mathbb Z^2}\frac 1{\mathfrak a_n^2(2\pi)^2}\int_{[-\mathfrak a_n\pi,\mathfrak a_n\pi]^2}\!\!\!\!\!\!\! \mathbb E_{\hat\mu}[\hat G_{n,0}(\cdot,\ell')]e^{-\frac12 \Sigma_\infty^2*u^{\otimes 2}} \mathbb E_{\hat\mu}[\hat F_{n,0}(\cdot,\ell)]\, du\, , \end{eqnarray*} where we used the change of variable $u=\mathfrak a_n\, t$ with $\mathfrak a_n:=\sqrt{(n-2k_n)\log(n-2k_n)}$, and twice the dominated convergence theorem. Therefore \[ \int_M\, u_{n}.v_{n}\circ T^n\, d\nu=\frac {\Phi\left(0\right)}{\mathfrak a_n^2}\int_Mu_n\, d\nu\, \int_M v_n\, d\nu+o(\mathfrak a_n^{-2})\, . \] The conclusion of the theorem follows from this last formula combined with \eqref{horinf1} and with the facts that $\mathfrak a_n^2\sim n\log n$ and that $$\int_Mu_n\, d\nu\, \int_Mv_n\, d\nu=\int_Mf\, d\nu \, \int_Mg\, d\nu\, ,$$ due to the dominated convergence theorem. \end{proof} \begin{appendix} \section{Billiard with finite horizon: about the coefficients $A_{m}$}\label{sec:coeff} Let $\mathcal W^s$ (resp. $\mathcal W^u$) be the set of stable (resp. unstable) $H$-manifolds.
In \cite{Chernov}, Chernov defines two separation times $s_+$ and $s_-$ which are dominated by $s$ and such that, for every positive integer $k$, $$\forall W^u\in\mathcal W^u,\ \forall \bar x,\bar y\in W^u,\quad s_+(\bar T^{-k}\bar x,\bar T^{-k}\bar y)=s_+(\bar x,\bar y)+k,$$ $$\forall W^s\in\mathcal W^s,\ \forall \bar x,\bar y\in W^s,\quad s_-(\bar T^k\bar x,\bar T^{k}\bar y)=s_-(\bar x,\bar y)+k.$$ \begin{proposition}[\cite{Chernov}, Theorem 4.3 and remark after]\label{decoChernov} There exist $C_0>0$ and $\vartheta_0\in(0,1)$ such that, for every positive integer $n$, for every bounded measurable $u,v:\bar M\rightarrow\mathbb R$, $$\left |\mathbb E_{\bar\mu}[u.v\circ\bar T^n] -\mathbb E_{\bar\mu}[u]\mathbb E_{\bar\mu}[v] \right| \le C_0\left(L_u^+\Vert v\Vert_\infty+L_v^-\Vert u\Vert_\infty+\Vert u\Vert_\infty\Vert v\Vert_\infty\right)\vartheta_0^n\, , $$ with $$ L_u^+:=\sup_{W^u\in \mathcal W^u}\sup_{x,y\in W^u,\, x\ne y}(|u(x)-u(y)|\xi^{-s_+(x,y)}),$$ and $$L_v^-:=\sup_{W^s\in \mathcal W^s}\sup_{x,y\in W^s,\, x\ne y}(|v(x)-v(y)|\xi^{-s_-(x,y)})\, .$$ \end{proposition} Note that $$ L_u^+\le L_\xi(u\mathbf 1_{\bar M }),\quad L_u^-\le L_\xi(u\mathbf 1_{\bar M })\, ,$$ $$L^+_{u\circ \bar T^{-k}}\le L_u^+\xi^k\quad \mbox{and}\quad L^-_{v\circ \bar T^{k}}\le L_v^-\xi^k\, . $$ We will set $\tilde u:=u-\mathbb E_{\bar\mu}[u]$ and $\tilde v:=v-\mathbb E_{\bar\mu}[v]$.
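This exponential decorrelation ensures that all the series introduced below converge absolutely. For instance, taking $v=\kappa$ (which is bounded, dynamically Lipschitz and centered) in Proposition \ref{decoChernov} yields

```latex
\left|\mathbb E_{\bar\mu}[\tilde u.\kappa\circ\bar T^j]\right|\le C_0\left(L_{\tilde u}^+\Vert \kappa\Vert_\infty+L_\kappa^-\Vert \tilde u\Vert_\infty+\Vert \tilde u\Vert_\infty\Vert \kappa\Vert_\infty\right)\vartheta_0^j\, ,
```

so that the series $B_1^\pm$ converge geometrically, and the weighted series $B_0^\pm$ converge as well since $\sum_{k\ge 1}k\vartheta_0^k<\infty$; the multiple sums $B_2^\pm$, $B_{0,2}^\pm$ and $B_3^\pm$ converge by the same kind of estimates, applied to the largest time gap between the factors.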
We will express the terms $A_m(u,v)$ for $m\in\{1,2,3,4\}$ in terms of the following quantities: $$B_1^+(u):= \sum_{j\ge 0}\mathbb E_{\bar\mu}[u.\kappa\circ \bar T^j] \, ,\quad B_1^-(v):=\sum_{m\le -1}\mathbb E_{\bar\mu}[v.\kappa\circ \bar T^m]\, ,$$ $$B_2^+(u):=\sum_{j,m\ge 0}\mathbb E_{\bar\mu}[\tilde u.\kappa\circ \bar T^j\otimes\kappa\circ\bar T^m]\, ,\quad B_2^-(v):=\sum_{j,m\le -1} \mathbb E_{\bar\mu}[\tilde v.\kappa\circ \bar T^j\otimes \kappa\circ \bar T^m]\, ,$$ $$B_0^-(v):=\sum_{k\le -1}|k| \mathbb E_{\bar\mu}[\tilde v.\kappa\circ\bar T^k]\, ,\quad B_0^+(u)=\sum_{k\ge 0}k \mathbb E_{\bar\mu}[\tilde u.\kappa\circ\bar T^k]\, ,$$ $$B_0:=B_0^-(\kappa)+B_0^+(\kappa)=\sum_{m\in\mathbb Z}|m|\mathbb E_{\bar\mu}[\kappa\otimes\kappa\circ\bar T^m]\, , $$ $$B_{0,2}^+(u):=\sum_{k,m\ge 0}\max(k,m) \mathbb E_{\bar\mu}[\tilde u.\kappa \circ \bar T^k\otimes \kappa\circ\bar T^m]\, ,$$ $$B_{0,2}^-(v):= \sum_{k,m\ge 1}\max(k,m) \mathbb E_{\bar\mu}[\tilde v.\kappa \circ \bar T^{-k}\otimes \kappa\circ\bar T^{-m}]\, ,$$ \begin{eqnarray*} B_3^+(u)&:=&\sum_{k,r, m\ge 0} \mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^{\min(k,r,m)}\otimes\\ &\ &\left(\kappa\circ \bar T^{\max(k,r,m)}\otimes\kappa\circ\bar T^{med(k,r,m)}-\mathbb E_{\bar\mu}[\kappa\circ \bar T^{\max(k,r,m)}\otimes\kappa\circ\bar T^{med(k,r,m)}]\right)]\, , \end{eqnarray*} \begin{eqnarray*} B_3^-(v)&:=&\sum_{m, r, s\le -1}\mathbb E_{\bar\mu}[ \tilde v .\kappa\circ\bar T^{\max(m,r,s)}\otimes\\ &\ & \left(\kappa\circ \bar T^{\min(m,r,s)} \otimes\kappa\circ\bar T^{med(m,r,s)}-\mathbb E_{\bar\mu}[\kappa\circ \bar T^{\min(m,r,s)} \otimes\kappa\circ\bar T^{med(m,r,s)}]\right)]\, , \end{eqnarray*} with $med(m,r,s)$ the median of $(m,r,s)$. \begin{proposition}\label{Amn!!} Let $u,v:\bar M\rightarrow \mathbb C$ be two dynamically Lipschitz continuous functions, with respect to $d_\xi$ with $\xi\in(0,1)$.
Then \begin{eqnarray} A_{0}(u,v)&=&\mathbb E_{\bar\mu}[u].\mathbb E_{\bar\mu}[v]\\ A_{1}(u,v)&=& i\lim_{n\rightarrow +\infty}\mathbb E_{\bar\mu}[u.S_n.v\circ\bar T^n]=i\,B_1^+(u)\mathbb E_{\bar \mu}[v]+i\,B_1^-(v)\mathbb E_{\bar\mu}[u]\\ A_{2}(u,v)&=&\lim_{n\rightarrow +\infty}(n\, \mathbb E_{\bar\mu}[u]\mathbb E_{\bar\mu}[v]\Sigma^2-\mathbb E_{\bar\mu}[u.S_n^{\otimes 2}.v\circ \bar T^n])\\ &=&-2\, B_1^+(u)\otimes B_1^-(v)-\mathbb E_{\bar\mu}[v]B_2^+(u)-\mathbb E_{\bar\mu}[u]B_2^-(v) +\mathbb E_{\bar\mu}[u]\mathbb E_{\bar\mu}[v]\, B_0\, .\label{EEEE}\label{Pi"0} \end{eqnarray} Moreover \begin{eqnarray} A_{3}(u,v)&=&\lim_{n\rightarrow +\infty}\left(3in\Sigma^2\otimes \mathbb E_{\bar\mu}[u.S_n.v\circ\bar T^n]-i\mathbb E_{\bar\mu}[u.S_n^{\otimes 3}.v\circ\bar T^n]\right)\nonumber\\ &=&3 A_1(u,v)\otimes B_0+3i\Sigma^2 \otimes\left(\mathbb E_{\bar\mu}[u]B_0^-(v)+ \mathbb E_{\bar \mu}[ v]B_0^+(u)\right)\nonumber\\ &\ &-i\mathbb E_{\bar\mu}[v]B_3^+(u)-i\mathbb E_{\bar\mu}[u]B_3^-(v)-3iB_2^-(v)\otimes B_1^+(u)-3iB_2^+(u)\otimes B_1^-(v)\label{Pi'''0} \end{eqnarray} and \begin{eqnarray*} A_{4}(u,v)&=& \lim_{n\rightarrow +\infty}\left(\mathbb E_{\bar\mu}[u.S_n^{\otimes 4}.v\circ\bar T^n]+(\lambda^{-n})_0^{(4)}\mathbb E_{\bar\mu}[u]\mathbb E_{\bar\mu}[v]+6n\Sigma^2\otimes\mathbb E_{\bar\mu}[u.S_n^{\otimes 2}.v\circ\bar T^n]\right)\\ &=&6B_0A_2(u,v)-6\Sigma^2\otimes\left(\mathbb E_{\bar \mu}[ u] B_{0,2}^-(v)+\mathbb E_{\bar \mu}[ v] B_{0,2}^+(u)\right)\\ &\ &+\mathbb E_{\bar \mu}[ u]\mathbb E_{\bar \mu}[ v](A_{4}(\mathbf 1,\mathbf 1)-6B_0^{\otimes 2})\\ &\ &-12\Sigma^2\otimes (B_1^+(u)\otimes B_0^-(v)+B_1^-(v)\otimes B_0^+(u) -B_1^+(u)\otimes B_1^-(v))\\ &\ &+4B_1^+(u)\otimes B_3^-(v)+6 B_2^+(u)\otimes B_2^-(v) +4\ B_1^-(v)\otimes B_3^+(u)\, .\label{formuleA4} \label{Pi40} \end{eqnarray*} \end{proposition} \begin{proof} As in the proof of Proposition \ref{TLL2}, we set $$ A_{m,n}(u,v):=\left(\mathbb E_{\bar\mu}[u.e^{it*S_n}.v\circ \bar T^n]/\lambda_t^n\right)^{(m)}_{|t=0}.$$ We will only use
Proposition \ref{decoChernov} and the fact that $\lambda_t=1-\frac 12\Sigma^2*t^{\otimes 2}+\frac 1{4!}\lambda_0^{(4)}*t^{\otimes 4}+o(|t|^4)$ to compute $A_m(u,v)=\lim_{n\rightarrow +\infty}A_{m,n}(u,v)$. \begin{itemize} \item First we observe that $ A_{0,n}(u,v)=\mathbb E_{\bar\mu}[u.v\circ\bar T^n] $ and we apply Proposition \ref{decoChernov}. \item Second, \begin{eqnarray*} A_{1,n}(u,v)&=&i\, \mathbb E_{\bar\mu}[u.S_n.v\circ\bar T^n] = i\, \sum_{k=0}^{n-1}\mathbb E_{\bar\mu}[u.\kappa\circ\bar T^k.v\circ\bar T^n]\\ &=& i\, \sum_{k=0}^{\lfloor n/2\rfloor}\mathbb E_{\bar\mu}[u.\kappa\circ \bar T^k]\mathbb E_{\bar\mu}[v]+i\, \sum_{k=\lfloor n/2\rfloor+1}^{n-1}\mathbb E_{\bar\mu}[u]\mathbb E_{\bar\mu}[v.\kappa\circ \bar T^{-(n-k)}]+O\left(n\vartheta_0^{n/2}\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}\right)\\ &=& i\, \mathbb E_{\bar\mu}[v]\sum_{k\ge 0} \mathbb E_{\bar\mu}[u.\kappa\circ \bar T^k]+i\, \mathbb E_{\bar\mu}[u]\sum_{m\le -1}\mathbb E_{\bar\mu}[v.\kappa\circ \bar T^{m}]+O\left(n\vartheta_0^{n/2}\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}\right), \end{eqnarray*} where we used several times Proposition \ref{decoChernov}, combined with the fact that $\mathbb E_{\bar\mu}[\kappa]=0$. \item Third, \begin{eqnarray} A_{2,n}(u,v)&=&-\mathbb E_{\bar\mu}[ u . S_n^{\otimes 2}. v\circ \bar T^n] +n \Sigma^2\mathbb E_{\bar\mu}[ u]\mathbb E_{\bar\mu}[ v]\label{formuleA2n}\\ &=&-\sum_{k,m=0}^{n-1}\mathbb E_{\bar\mu}[ u.(\kappa\circ \bar T^k\otimes \kappa\circ\bar T^m).
v\circ \bar T^n] +n \Sigma^2\mathbb E_{\bar\mu}[ u]\mathbb E_{\bar\mu}[ v]\nonumber\\ &=&-\sum_{k,m=0}^{n-1}\mathbb E_{\bar\mu} [ \tilde u\kappa\circ\bar T^k\otimes\kappa\circ \bar T^m.\tilde v\circ\bar T^n]\nonumber\\ &\ &-\sum_{k,m=0}^{n-1}\left(\mathbb E_{\bar\mu}[ u]\mathbb E_{\bar\mu}[ \kappa\circ\bar T^k\otimes \kappa\circ\bar T^m \tilde v\circ\bar T^n]+\mathbb E_{\bar\mu}[\tilde u.\kappa\circ\bar T^k\otimes\kappa\circ \bar T^m]\mathbb E_{\bar\mu}[ v]\right)\nonumber\\ &\ &\ \ \ \ \ +(n\Sigma^2-\sum_{k,m=0}^{n-1}\mathbb E_{\bar\mu}[ \kappa\circ \bar T^k\otimes \kappa\circ \bar T^m])\mathbb E_{\bar\mu}[ u]\mathbb E_{\bar\mu}[ v] \end{eqnarray} \begin{itemize} \item On the one hand \begin{eqnarray*} n\Sigma^2-\sum_{k,m=0}^{n-1}\mathbb E_{\bar\mu}[\kappa\circ \bar T^k\otimes\kappa\circ \bar T^m] &=&n\sum_{k\in\mathbb Z}\mathbb E_{\bar\mu}[ \kappa\otimes\kappa\circ \bar T^k] -\sum_{k=-n}^n(n-|k|)\mathbb E_{\bar\mu}[ \kappa\otimes\kappa\circ \bar T^k]\\ &=&\sum_{k\in\mathbb Z}\min(n,|k|)\mathbb E_{\bar\mu}[\kappa\otimes\kappa\circ \bar T^k], \end{eqnarray*} which converges to $\sum_{k\in\mathbb Z}|k|\mathbb E_{\bar\mu}[\kappa\otimes\kappa\circ\bar T^k]$. \item On the other hand, for $0\le k\le m\le n$, due to Proposition \ref{decoChernov} (treating separately the cases $k\ge n/3$, $m-k\ge n/3$ and $n-m\ge n/3$), \begin{equation}\label{Esp4termes} \mathbb E_{\bar\mu} [ \tilde u.\kappa\circ \bar T^k\otimes \kappa \circ\bar T^m.\tilde v\circ \bar T^n]=\mathbb E_{\bar\mu}[\tilde u.\kappa \circ \bar T^k]\otimes\mathbb E_{\bar\mu}[ \tilde v. \kappa\circ \bar T^{-(n-m)}] +O(\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}\vartheta_0^{n/3}).
\end{equation} Analogously \begin{equation} \mathbb E_{\bar\mu}[ \kappa\circ \bar T^k\otimes\kappa\circ\bar T^m \tilde v\circ\bar T^n]=O(\Vert v\Vert_{(\xi)}\vartheta_0^{(n-k)/2}) \end{equation} and \begin{equation}\label{Esp3termes} \mathbb E_{\bar\mu}[\tilde u.\kappa\circ \bar T^k\otimes\kappa\circ \bar T^m]=O(\Vert u\Vert_{(\xi)}\vartheta_0^{m/2})\, . \end{equation} Hence $$\sum_{k,m=0}^{n-1}\mathbb E_{\bar\mu}[\tilde u.\kappa\circ\bar T^k\otimes\kappa\circ\bar T^m] =B_2^+(\tilde u)+O(\vartheta_0^{n/2}\Vert u\Vert_{(\xi)})\, ,$$ \begin{equation} \sum_{k,m=0}^{n-1}\mathbb E_{\bar\mu}[\kappa\circ\bar T^k\otimes\kappa\circ\bar T^m\tilde v\circ \bar T^n] =B_2^-(v)+O(\vartheta_0^{n/2}\Vert v\Vert_{(\xi)})\, , \end{equation} and \begin{eqnarray*} &\ & \sum_{k,m=0}^{n-1}\mathbb E_{\bar\mu}[\tilde u.\kappa\circ\bar T^k\otimes\kappa\circ\bar T^m.\tilde v\circ\bar T^n]\\ &=&\left(\sum_{k=0}^{n-1} \mathbb E_{\bar\mu}[\tilde u.\kappa^{\otimes 2}\circ \bar T^k.\tilde v\circ \bar T^n]+2\sum_{0\le k<m< n}\mathbb E_{\bar\mu}[\tilde u.\kappa\circ \bar T^k\otimes\kappa\circ \bar T^m.\tilde v\circ \bar T^n]\right)\\ &=& 2\sum_{0\le k< m<n}\mathbb E_{\bar\mu}[\tilde u.\kappa\circ \bar T^k]\otimes\mathbb E_{\bar\mu}[\tilde v.\kappa\circ\bar T^{-(n-m)}]+O(\vartheta_0^{n/2}\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)})\\ &=&2B_1^+(u)\otimes B_1^-(v)+O(\vartheta_0^{n/2}\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}) \, , \end{eqnarray*} where we used the fact that $\mathbb E_{\bar\mu}[\tilde u.\kappa^{\otimes 2}\circ \bar T^k.\tilde v\circ \bar T^n]=O(\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}\vartheta_0 ^{n/2})$. \end{itemize} Therefore we have proved \eqref{Pi"0}. \item Let us prove \eqref{Pi'''0}. By bilinearity, we have \begin{equation}\label{A311} A_{3,n}(u,v)=A_{3,n}(\tilde u,\tilde v)+\mathbb E_{\bar \mu}[ u]A_{3,n}(\mathbf 1,\tilde v)+\mathbb E_{\bar \mu}[ v]A_{3,n}(\tilde u,\mathbf 1)+\mathbb E_{\bar\mu}[u]\mathbb E_{\bar\mu}[v]A_{3,n}(\mathbf 1,\mathbf 1).
\end{equation} Note that $$A_{3,n}(\mathbf 1,\mathbf 1)=-i\mathbb E_{\bar\mu}[S_n^{\otimes 3}]=0\, ,$$ since $(S_n)_n$ has the same distribution as $(-S_n)_n$ (see the beginning of the proof of Proposition \ref{pro:pertu2}). We will use the following notations: $c_{(k,m,r)}$ denotes the number of ordered tuples with entries $k,m,r$ (counted with their multiplicities) and we will write $\widetilde{\overbrace{F}}$ for $F-\mathbb E_{\bar\mu}[F]$ when $F$ is given by a long formula. \begin{itemize} \item We start with the study of $A_{3,n}(\tilde u,\mathbf 1)$. \begin{eqnarray} A_{3,n}(\tilde u,\mathbf 1)&=&-i\mathbb E_{\bar\mu}[\tilde u.S_n^{\otimes 3}]+3in\Sigma^2\otimes\mathbb E_{\bar\mu}[\tilde u.S_n]\nonumber\\ &=&-i\sum_{0\le k\le m\le r\le n-1}c_{k,m,r}\mathbb E_{\bar\mu}[\tilde u.\kappa\circ\bar T^k\otimes \kappa\circ\bar T^m\otimes\kappa\circ \bar T^r]+3in\Sigma^2\otimes\mathbb E_{\bar\mu}[\tilde u.S_n]\nonumber\\ &=&-i\sum_{0\le k\le m\le r\le n-1}c_{k,m,r}\mathbb E_{\bar\mu}[\tilde u.\kappa\circ\bar T^k]\otimes \mathbb E_{\bar\mu}[\kappa\circ\bar T^m\otimes\kappa\circ \bar T^r]+3in\Sigma^2\otimes\mathbb E_{\bar\mu}[\tilde u.S_n]\nonumber\\ &\ &-i\sum_{0\le k\le m\le r\le n-1}c_{k,m,r}\mathbb E_{\bar\mu} \left[\widetilde{\overbrace{\tilde u.\kappa\circ\bar T^k}}\otimes\widetilde{\overbrace{\kappa\circ\bar T^m\otimes\kappa\circ\bar T^r}}\right]\nonumber \end{eqnarray} \begin{eqnarray} &\ &A_{3,n}(\tilde u,\mathbf 1)\\ &=&-3i\sum_{k\ge 0}\sum_{m\in\mathbb Z}\max(0,n-|m|-k) \mathbb E_{\bar\mu}[\tilde u.\kappa\circ\bar T^k]\otimes \mathbb E_{\bar\mu}[\kappa.\kappa\circ\bar T^m] +3in\Sigma^2\otimes\mathbb E_{\bar\mu}[\tilde u.S_n]\nonumber\\ &\ &-i\sum_{k,m,r=0}^{n-1}\mathbb E_{\bar\mu} \left[\widetilde{\overbrace{\tilde u.\kappa\circ\bar T^{\min(k,m,r)}}}\otimes\widetilde{\overbrace{\kappa\circ\bar T^{med(k,m,r)}\otimes\kappa\circ\bar T^{\max(k,m,r)}}}\right] \nonumber\\ &=&3i\sum_{k\ge 0}\sum_{m\in\mathbb Z}(|m|+k) \mathbb E_{\bar\mu}[\tilde u.\kappa\circ\bar T^k]\otimes \mathbb
E_{\bar\mu}[\kappa.\kappa\circ\bar T^m]\nonumber\\ &\ &-3in\, (B_1^+(u)-\mathbb E_{\bar\mu}[\tilde u.S_n])\otimes \Sigma^2-iB_{3}^+(\tilde u)+O(\vartheta_0^{n/3}\Vert u\Vert_{(\xi)})\nonumber \end{eqnarray} and so \begin{equation} A_{3}(\tilde u,\mathbf 1)=\lim_{n\rightarrow +\infty}A_{3,n}(\tilde u,\mathbf 1)=-iB_3^+(\tilde u)+3iB_0^+(\tilde u)\otimes\Sigma^2+3iB_0\otimes B_1^+(\tilde u)\, .\label{A322} \end{equation} \item Analogously, \begin{equation} A_{3}(\mathbf 1,\tilde v)=\lim_{n\rightarrow +\infty}A_{3,n}(\mathbf 1,\tilde v)=-iB_3^-(\tilde v)+3iB_0^-(\tilde v)\otimes\Sigma^2+3iB_0\otimes B_1^-(\tilde v)\, .\label{A333} \end{equation} \item Finally \begin{eqnarray*} A_{3,n}(\tilde u,\tilde v)&=&-i \mathbb E_{\bar\mu}[\tilde u.S_n^{\otimes 3}.\tilde v \circ \bar T^n]+ 3i\, n \Sigma^2\otimes\mathbb E_{\bar\mu}[\tilde u.S_n.\tilde v \circ \bar T^n]\nonumber\\ &=&-i\sum_{k,m,r=0}^{n-1}\mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k\otimes\kappa\circ\bar T^m\otimes\kappa\circ\bar T^r.\tilde v\circ\bar T^n]+3i\, n\Sigma^2\otimes\mathbb E_{\bar\mu}[\tilde u.S_n.\tilde v \circ \bar T^n]\nonumber\\ &=&-i\sum_{k,m,r=0}^{n-1}\mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k\otimes\kappa\circ\bar T^m\otimes\kappa\circ\bar T^r.\tilde v\circ\bar T^n]+O(n^2\vartheta_0^{n/2}\Vert u\Vert_{(\xi)} \Vert v\Vert_{(\xi)})\, . \end{eqnarray*} Assume $0\le k\le m\le r\le n-1$.
Considering separately the cases $k\ge n/4$, $m-k\ge n/4$, $r-m\ge n/4$ and $n-r\ge n/4$, we observe that \begin{eqnarray} &\ &\mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k\otimes\kappa\circ\bar T^m\otimes\kappa\circ\bar T^r.\tilde v\circ\bar T^n]\nonumber\\ &\ &=\mathbb E_{\bar\mu}[\tilde u.\kappa\circ \bar T^k]\otimes \mathbb E_{\bar\mu}[\tilde v.\kappa\circ\bar T^{-(n-r)}\otimes \kappa\circ\bar T^{-(n-m)}]\nonumber\\ &\ &+ \mathbb E_{\bar\mu}[\tilde v .\kappa\circ\bar T^{-(n-r)}]\otimes \mathbb E_{\bar\mu}[\tilde u.\kappa\circ\bar T^k\otimes\kappa\circ \bar T^m]+O(\vartheta_0^{n/4} \Vert v\Vert_{(\xi)}\, \Vert u\Vert_{(\xi)})\, .\label{Esp5termes} \end{eqnarray} And so \begin{equation} A_{3}(\tilde u,\tilde v)=\lim_{n\rightarrow +\infty}A_{3,n}(\tilde u,\tilde v) =-3iB_1^+(\tilde u)\otimes B_2^-(\tilde v)-3iB_1^-(\tilde v)\otimes B_2^+(\tilde u)\, . \end{equation} \end{itemize} This combined with \eqref{A311}, \eqref{A322} and \eqref{A333} leads to \eqref{Pi'''0}. \item It remains to prove \eqref{Pi40}. Observe first that \begin{eqnarray} A_{4,n}(u,v)&=&(\lambda^{-n})^{(4)}_0\mathbb E_{\bar \mu}[u]\mathbb E_{\bar \mu}[v]+6n\Sigma^2\otimes\mathbb E_{\bar \mu}[u.S_n^{\otimes 2}.v\circ\bar T^n]+\mathbb E_{\bar \mu}[u.S_n^{\otimes 4}.v\circ\bar T^n]\nonumber\\ &=&(\lambda^{-n})^{(4)}_0\mathbb E_{\bar \mu}[u]\mathbb E_{\bar \mu}[v]+6n\Sigma^2\otimes \left(n\Sigma^2\mathbb E_{\bar\mu}[u]\mathbb E_{\bar\mu}[v]-A_{2,n}(u,v)\right)+\mathbb E_{\bar \mu}[u.S_n^{\otimes 4}.v\circ\bar T^n]\label{decompA4} \end{eqnarray} where we used \eqref{formuleA2n}.
Note that \begin{eqnarray} \mathbb E_{\bar \mu}[u.S_n^{\otimes 4}.v\circ\bar T^n] &=& \mathbb E_{\bar \mu}[\tilde u.S_n^{\otimes 4}.\tilde v\circ\bar T^n] +\mathbb E_{\bar \mu}[u]\mathbb E_{\bar \mu}[S_n^{\otimes 4}.\tilde v\circ\bar T^n]\nonumber\\ &\ &+\mathbb E_{\bar \mu}[v]\mathbb E_{\bar \mu}[\tilde u.S_n^{\otimes 4}]+\mathbb E_{\bar \mu}[u]\mathbb E_{\bar \mu}[v]\mathbb E_{\bar \mu}[S_n^{\otimes 4}].\label{A4bilin} \end{eqnarray} We now study separately each term of the right hand side of this last formula. \begin{itemize} \item First: \begin{eqnarray} &\ &\mathbb E_{\bar\mu}[\tilde u.S_n^{\otimes 4} .\tilde v \circ \bar T^n]\nonumber\\ &\ &= \sum_{k,m,r,s=0}^{n-1}\mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k\otimes\kappa\circ\bar T^m\otimes\kappa\circ\bar T^r\otimes\kappa\circ\bar T^s.\tilde v\circ\bar T^n]\,\nonumber\\ &\ &= \sum_{0\le k\le m\le r\le s\le n-1}c_{(k,m,r,s)} \mathbb E_{\bar\mu}\left[ \tilde u. \kappa\circ \bar T^k\otimes\widetilde{\overbrace{\kappa\otimes\kappa\circ\bar T^{r-m}}}\circ \bar T^m\otimes\kappa\circ\bar T^s.\tilde v\circ\bar T^n\right]\,\nonumber\\ &\ & \ \ + \sum_{0\le k\le m\le r\le s\le n-1}c_{(k,m,r,s)} \mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k\otimes\kappa\circ\bar T^s.\tilde v\circ\bar T^n]\otimes\mathbb E_{\bar\mu}[\kappa\otimes\kappa\circ \bar T^{r-m}]\label{CCC2} \end{eqnarray} with $c_{(k,m,r,s)}$ the number of 4-tuples made of $k,m,r,s$ (with the same multiplicities).
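Explicitly (a standard multinomial count, stated here for convenience): if the sorted tuple $(k,m,r,s)$ takes $d$ distinct values with multiplicities $n_1,\dots,n_d$ (so that $n_1+\dots+n_d=4$), then
\[
c_{(k,m,r,s)}=\frac{4!}{n_1!\,n_2!\cdots n_d!}\, ,
\]
so $c_{(k,m,r,s)}=24$ when the four indices are pairwise distinct, $c_{(k,m,r,s)}=4$ when exactly three of them coincide, $c_{(k,m,r,s)}=6$ for two distinct pairs, and $c_{(k,m,r,s)}=1$ when $k=m=r=s$.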
Due to \eqref{Esp4termes}, \begin{eqnarray} &\ &\sum_{0\le k\le m\le r\le s\le n-1}c_{(k,m,r,s)} \mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k\otimes\kappa\circ\bar T^s.\tilde v\circ\bar T^n]\otimes\mathbb E_{\bar\mu}[\kappa\otimes\kappa\circ\bar T^{r-m}]\nonumber\\ &\ & =\sum_{0\le k\le m\le r\le s\le n-1}c_{(k,m,r,s)} \mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k]\otimes\mathbb E_{\bar \mu}[ \tilde v.\kappa\circ\bar T^{-(n-s)}]\otimes\mathbb E_{\bar\mu}[\kappa\otimes\kappa\circ\bar T^{r-m}]+O(n^4\vartheta_0^{n/3}\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)})\nonumber\\ &\ & =\sum_{k\ge 0}\mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k] \otimes\sum_{s\ge 1} \mathbb E_{\bar \mu}[\tilde v.\kappa \circ \bar T^{-s}]\otimes \sum_{m=k}^{n-s}\sum_{r=m}^{n-s}c_{(k,m,r,n-s)}\mathbb E_{\bar\mu}[\kappa\otimes\kappa\circ \bar T^{r-m}]+O(n^4\vartheta_0^{n/3}\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)})\nonumber\\ &\ & =\sum_{k\ge 0}\mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k] \otimes\sum_{s\ge 1} \mathbb E_{\bar \mu}[\tilde v.\kappa\circ\bar T^{-s}] \otimes 12\mathbb E_{\bar\mu}[ S_{n-s-k+1}^{\otimes 2}]+O(n^4\vartheta_0^{n/3}\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)})\nonumber\\ &\ & =\sum_{k\ge 0}\mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k] \otimes\sum_{s\ge 1} \mathbb E_{\bar \mu}[\tilde v.\kappa\circ\bar T^{-s}]\otimes 12\left((n-s-k+1)\Sigma^2-\sum_{r\in\mathbb Z}|r|\mathbb E_{\hat\mu}[\hat\kappa\otimes \hat\kappa\circ \hat T^r]\right)+O(n^4\vartheta_0^{n/3}\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)})\nonumber\\ &\ &=12 B_1^+(\tilde u)B_1^-(\tilde v)\left(n\Sigma^2-\sum_{r\in\mathbb Z}|r|\mathbb E_{\hat\mu}[\hat\kappa\otimes \hat\kappa\circ \hat T^r]\right)\nonumber\\ &\ &-12 \sum_{k\ge 0}\mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k]\otimes \sum_{s\ge 1} \mathbb E_{\bar \mu}[\tilde v.\kappa\circ\bar T^{-s}](s+k-1)\otimes\Sigma^2 +O(n^4\vartheta_0^{n/3}\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)})\, .\label{CCC3} \end{eqnarray} But, on the other hand, treating separately the
cases $k\ge n/5$, $m-k\ge n/5$, $r-m\ge n/5$, $s-r\ge n/5$ and $n-s\ge n/5$ (one of which always holds, since these five gaps sum to $n$), we obtain that, for every $0\le k\le m\le r\le s\le n$, \begin{eqnarray} &\ & \mathbb E_{\bar\mu}\left[ \tilde u.\kappa\circ \bar T^k\otimes\widetilde{\overbrace{\kappa\otimes\kappa\circ\bar T^{r-m}}}\circ \bar T^m\otimes\kappa\circ\bar T^s.\tilde v\circ\bar T^n\right]\nonumber\\ &\ &= \mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k]\otimes\mathbb E_{\bar\mu}[\widetilde{\overbrace{\kappa\otimes\kappa\circ\bar T^{r-m}}}\circ \bar T^m\otimes\kappa\circ\bar T^s.\tilde v\circ\bar T^n]\nonumber\\ &\ & +\ \mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k\otimes\kappa\circ \bar T^m]\otimes \mathbb E_{\bar\mu}[\kappa\circ\bar T^{r}\otimes\kappa\circ\bar T^s.\tilde v\circ\bar T^n]\nonumber\\ &\ & +\ \mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k\otimes\widetilde{\overbrace{\kappa\otimes\kappa\circ\bar T^{r-m}}}\circ \bar T^m] \otimes\mathbb E_{\bar\mu}[\kappa\circ\bar T^s.\tilde v\circ\bar T^n]+O(\vartheta_0^{n/5}\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}).
\end{eqnarray} Due to \eqref{Esp3termes}, $$ \mathbb E_{\bar\mu}\left[\widetilde{\overbrace{\kappa.\kappa\circ\bar T^{r-m}}}\circ \bar T^m.\kappa\circ\bar T^s.\tilde v\circ\bar T^n\right]=O(\vartheta_0^{n-m}\Vert v\Vert_{(\xi)})\, ,$$ $$ \mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k.\widetilde{\overbrace{\kappa.\kappa\circ\bar T^{r-m}}}\circ \bar T^m]\mathbb E_{\bar\mu}[\kappa\circ\bar T^s.\tilde v\circ\bar T^n]=O(\vartheta_0^{m}\vartheta_0^{n-s}\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)})\, ,$$ $$\mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k\otimes\kappa\circ \bar T^m]=O(\vartheta_0^{m}\Vert u\Vert_{(\xi)}).$$ Therefore \begin{eqnarray} &\ &\sum_{0\le k\le m\le r\le s\le n-1}c_{(k,m,r,s)} \mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k\otimes\widetilde{\overbrace{\kappa\otimes\kappa\circ\bar T^{r-m}}}\circ \bar T^m\otimes\kappa\circ\bar T^s.\tilde v\circ\bar T^n]\nonumber\\ &\ &=4\sum_{k\ge 0} \mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k]B_3^-(\tilde v)+4 B_3^+(\tilde u)\otimes\sum_{s\ge 1}\mathbb E_{\bar\mu}[\kappa\circ \bar T^{-s}.\tilde v]\nonumber\\ &\ & +6 \sum_{m, k\ge 0} \mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k\otimes\kappa\circ \bar T^m]\otimes \sum_{r,s\ge 1} \mathbb E_{\bar\mu}[\tilde v.
\kappa\circ\bar T^{-r}\otimes \kappa\circ\bar T^{-s}]\nonumber\\ &\ & +O\left(\vartheta_0^{n/5}\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}\right)\, .\label{CCC4} \end{eqnarray} Putting together \eqref{decompA4}, \eqref{CCC2}, \eqref{CCC3} and \eqref{CCC4} leads to \begin{eqnarray} A_{4,n}(\tilde u,\tilde v)&=&-12\sum_{k\ge 0}\mathbb E_{\bar\mu}[ \tilde u.\kappa\circ \bar T^k] \sum_{s\ge 1} \mathbb E_{\bar \mu}[ \kappa\circ\bar T^{-s}.\tilde v] (s+k-1)\otimes\Sigma^2\nonumber\\ &\ &+4B_1^+(\tilde u)\otimes B_3^-(\tilde v)+4\ B_1^-(\tilde v)\otimes B_3^+(\tilde u)\nonumber\\ &\ & +6 B_2^+(\tilde u)\otimes B_2^-(\tilde v)-12B_1^+(\tilde u)\otimes B_1^-(\tilde v)\otimes B_0 +O\left(\vartheta_0^{n/5}\Vert u\Vert_{(\xi)}\Vert v\Vert_{(\xi)}\right)\,\label{A4tilde} \end{eqnarray} \item Second: \begin{equation} \mathbb E_{\bar \mu}[\tilde u.S_n^{\otimes 4}] =\sum_{0\le k\le m\le r\le s\le n-1}c_{(k,m,r,s)}\mathbb E_{\bar\mu}\left[\tilde u.\kappa\circ\bar T^k\otimes\kappa\circ\bar T^m\otimes\kappa\circ\bar T^r\otimes\kappa\circ\bar T^s\right]. \end{equation} But, due to \eqref{Esp5termes}, for $0\le k\le m\le r\le s\le n-1$, we have \begin{eqnarray*} &\ &\mathbb E_{\bar\mu} [ \tilde u.\kappa\circ \bar T^k\otimes\kappa\circ \bar T^m\otimes \kappa \circ\bar T^r\otimes\kappa\circ \bar T^s]\\ &\ &=\mathbb E_{\bar\mu}[\tilde u.\kappa \circ \bar T^k]\otimes\mathbb E_{\bar\mu}[ \kappa\otimes\kappa\circ\bar T^{r-m}\otimes \kappa\circ \bar T^{s-m}] \\ &\ &+\mathbb E_{\bar\mu}[\tilde u.\kappa \circ \bar T^k\otimes \kappa\circ\bar T^m]\otimes\mathbb E_{\bar\mu}[ \kappa\otimes \kappa\circ \bar T^{s-r}]+O(\Vert u\Vert_{(\xi)}\vartheta_0^{s/3})\, .
\end{eqnarray*} Therefore \begin{eqnarray*} &\ &\mathbb E_{\bar \mu}[\tilde u.S_n^{\otimes 4}]=4\sum_{k\ge 0}\mathbb E_{\bar\mu}[S_{n-k}^{\otimes 3}]\\ &\ &+6\sum_{k,m\ge 0}\sum_{r\in\mathbb Z}\max(0,(n-\max(k,m)-|r|)) \mathbb E_{\bar\mu}[\tilde u.\kappa \circ \bar T^k\otimes \kappa\circ\bar T^m]\otimes\mathbb E_{\bar\mu}[ \kappa\otimes \kappa\circ \bar T^{r}]\\ &=&6nB_2^+(\tilde u)\otimes\Sigma^2-6 \sum_{k,m\ge 0}\sum_{r\in\mathbb Z}(\max(k,m)+|r|) \mathbb E_{\bar\mu}[\tilde u.\kappa \circ \bar T^k\otimes \kappa\circ\bar T^m]\otimes\mathbb E_{\bar\mu}[ \kappa\otimes \kappa\circ \bar T^{r}] \end{eqnarray*} since $\mathbb E_{\bar\mu}[S_n^{\otimes 3}]=0$. It follows that \begin{equation} \mathbb E_{\bar \mu}[\tilde u.S_n^{\otimes 4}] =6nB_2^+(\tilde u)\otimes\Sigma^2-6 B_{0,2}^+(\tilde u)\otimes \Sigma^2-6 B_2^+(\tilde u)\otimes B_0+O(\vartheta_0^{n/2})\label{A4u1} \end{equation} \item Analogously, \begin{equation} \mathbb E_{\bar \mu}[\tilde v\circ\bar T^n.S_n^{\otimes 4}] =6nB_2^-(\tilde v)\otimes\Sigma^2-6 B_{0,2}^-(\tilde v)\otimes \Sigma^2-6 B_2^-(\tilde v)\otimes B_0+O(\vartheta_0^{n/2})\, .\label{A41v} \end{equation} \end{itemize} Formula \eqref{formuleA4} follows from \eqref{A4bilin}, \eqref{A4tilde}, \eqref{A4u1} and \eqref{A41v}. \end{itemize} \end{proof} \begin{proposition}\label{lambda04} The fourth derivatives of $\lambda$ at $0$ are given by \[ \lambda_0^{(4)}=\lim_{n\rightarrow +\infty}\frac{\mathbb E_{\bar\mu}[S_n^{\otimes 4}]-3n^2(\Sigma^2)^{\otimes 2}}{n}+3(\Sigma^2)^{\otimes 2}+6\Sigma^2\otimes B_0\, .
\] \end{proposition} \begin{proof} Differentiating four times $\mathbb E_{\bar\mu}[e^{it*S_n}]=\lambda_t^n \mathbb E_{\bar\mu}[e^{it*S_n}/\lambda_t^n]$ leads to \begin{eqnarray*} \mathbb E_{\bar\mu}[S_n^{\otimes 4}]&=&(\lambda^n)_0^{(4)} +6(\lambda^n)_0^{(2)}\otimes A_{2,n}(\mathbf 1,\mathbf 1)+A_{4,n}(\mathbf 1,\mathbf 1)\\ &=& n\lambda_0^{(4)}+3n(n-1)(\lambda_0^{(2)})^{\otimes 2}+6n\lambda_0^{(2)}\otimes A_{2,n}(\mathbf 1,\mathbf 1) +A_{4,n}(\mathbf 1,\mathbf 1)\, , \end{eqnarray*} and we conclude due to \eqref{Amlim} and due to $\lambda_0^{(2)}=-\Sigma^2$ (coming from Item (iii) of Proposition \ref{pro:pertu}). \end{proof} \noindent{\bf Acknowledgment.\/}{ The author wishes to thank Damien Thomine for interesting discussions that led to an improvement of the assumption for the mixing result in the infinite horizon billiard case. } \end{appendix}
\section{\textcolor{Sepia}{\textbf{ \Large Introduction}}} \label{sec:introduction} Squeezed states in quantum optics have emerged as non-classical states of light whose existence is a consequence of Heisenberg's uncertainty relations. When the uncertainty in one conjugate variable is reduced below the symmetric limit at the expense of the other conjugate variable, while the Heisenberg uncertainty relation remains satisfied, the resulting state is called a squeezed state. For a review of the fundamentals of squeezed states see \cite{Ph:31,ssol,book,sql,30yrof,RosasOrtiz2019CoherentAS,Schumaker:1985zz,Garcia-Chung:2020gxy}. These states have been used in the field of quantum optics \cite{PhysRevA.31.3068,Zubairy_2005,Furusawa:07,ncsqo,ZELAYA20183369} for many experimental purposes. From an application perspective, squeezed states of light are used in quantum computing \cite{PhysRevLett.97.110501,PhysRevA.103.062405} and quantum cryptography \cite{PhysRevA.61.022309}, because these non-classical states of light can serve as an elementary resource in quantum information processing for continuous variable systems. These states are also being used in gravitational wave physics \cite{PhysRevLett.97.011101,Chua_2014,Choudhury:2013jya} to enhance the sensitivity of gravitational wave detectors \cite{Darsow-Fromm:21,qmetr,Barsotti:2018hvm,Schnabel:2016gdi}, such as LIGO \cite{enhans}, and to improve measurement techniques in quantum metrology \cite{Vahlbruch_2007,PhysRevLett.104.103602,advinqm,doi:10.1080/00107510802091298,Xu:19,atomchp,Choudhury:2020dpf}. For a broader class of applications of squeezed states see \cite{1981PhRvD..23.1693C,RevModPhys.77.513,contivar,braunstein2005quantum,PhysRevLett.117.110801,RevModPhys.58.1001,Adhikari:2021ked,Ando:2020kdz,Martin:2021znx,Bhargava:2020fhl,Choudhury:2020hil,Adhikari:2021pvv,Choudhury:2021brg}.
From the cosmological point of view, the formalism of squeezed states was introduced in early works by Grishchuk and Sidorov \cite{Grishchuk:1990cm,Grishchuk:1991cm} on inflationary cosmology, where they analysed the features of relic gravitons and phenomena such as particle creation and black-hole evaporation using the two-mode squeezed state formalism. They showed that the amplification of quantum fluctuations into macroscopic perturbations which occurs during cosmic inflation is a process of quantum squeezing; in the cosmological scenario these describe primordial density perturbations amplified by gravitational instability from the quantum vacuum fluctuations. Another important work by Andreas Albrecht et al. \cite{Albrecht:1992kf} used the two-mode squeezed state formalism (for a single field) to understand inflationary cosmology and the amplification process of quantum fluctuations during the inflationary epoch. A more recent work \cite{Colas2021FourmodeSS} discussed four-mode squeezed states for two quantum systems using the symplectic group theory and its Lie algebra. These works have motivated us to explore both the theoretical and numerical features of two coupled scalar fields in the planar patch of the de Sitter space by developing the formalism and constructing the corresponding four-mode squeezed operator. For further applications of the squeezed state formalism in high energy physics and in cosmology see \cite{Hasebe:2019ibg,Choudhury:2011jt,Choudhury:2012yh,Choudhury:2011sq,Choudhury:2013zna,Choudhury:2015hvr,Bhattacharyya:2020kgu,Choudhury:2017cos,Akhtar:2019qdn,Choudhury:2016pfr,Choudhury:2016cso,Bhattacharyya:2020rpy,Choudhury:2017bou,Einhorn:2003xb,Choudhury:2017qyl,Baumann:2014nda,Grain:2019vnq,Grishchuk:1992tw,Bhargava:2020fhl,Choudhury:2020hil,Adhikari:2021pvv,Choudhury:2021brg,Martin:2021qkg}.
The structure of the paper can be broadly divided into two parts, {section \ref{sec:E2}} and {section \ref{LCDE}}. In {section \ref{sec:E2}}, we start our analysis by considering two massive scalar fields in the planar patch of de Sitter space with $K$ as the weak coupling constant between the fields. In {subsection \ref{motiv}}, we will quantize the modes of the two coupled scalar fields. Here for simplicity we will neglect the ``back action'' of the scalar fields on the de Sitter background geometry. We will find the quantized position and momentum variables for the two coupled scalar fields on the planar patch of the de Sitter space and then compute the quantized version of the Hamiltonian for our case. In {subsection \ref{Inverte}}, we will briefly present the connection between the two coupled inverted quantum harmonic oscillator system and the four-mode squeezed state formalism constructed for our model under consideration. In {subsection \ref{fmss1}}, we will construct the four-mode squeezed state operator, which will be useful for understanding the cosmological implications for two interacting scalar fields and also for other systems which can be explained in terms of two coupled inverted quantum harmonic oscillators. In {section \ref{LCDE}}, we will define the time evolution operator for the four-mode squeezed states, which we will use to calculate the time dependent (Heisenberg picture) annihilation and creation operators for two coupled scalar fields in the planar patch of the de Sitter space. In {subsection \ref{eqdiff1}}, we use the Heisenberg equation of motion to calculate the coupled differential equations for the mode functions of the two coupled scalar fields in the planar patch of the de Sitter space. We will also calculate the position and momentum operators in the Heisenberg representation for this model. After that we give the expression for the mode functions by using the set of coupled differential equations for the mode functions.
We give the expressions for the squeezing parameters, $R_{1,\bf{k}}$, $\Theta_{1,\bf{k}}$, $\Phi_{1,\bf{k}}$, $R_{2,\bf{k}}$, $\Theta_{2,\bf{k}}$ and $\Phi_{2,\bf{k}}$, which govern the evolution of the quantum state for two coupled scalar fields in the planar patch of de Sitter space. \section{\textcolor{Sepia}{\textbf{ \Large Four-mode Squeezed State Formalism for two field interacting model}}} \label{sec:E2} In the following section, our prime objective is to construct a formalism for two scalar fields $\phi_1$ and $\phi_2$, in the planar patch of de Sitter space, which are weakly interacting with each other through a coupling strength $K$. The action for the two scalar fields contains the usual gravitational part with $R$ the Ricci scalar, a matter source term $T$, and the kinetic and potential terms of both fields. The corresponding action for two coupled interacting scalar fields \cite{PhysRevD.42.3413} in the planar patch of de Sitter space can be written as: \begin{multline} S=\int d^{3}x dt \sqrt{-g} \bigg[-\frac{R}{16 \pi G} + T + \frac{1}{2}\left( \dot{\phi}_{1}^{2}- m_{1}^{2} \phi_{1}^{2}\right) \\ + \frac{1}{2}\left( \dot{\phi}_{2}^{2}- m_{2}^{2} \phi_{2}^{2}\right)-12KR\phi_{1}\phi_{2} \bigg] \label{acti} \end{multline} Here, the corresponding de Sitter metric in the planar patch is described by the following line element: \begin{equation} ds^{2}=-dt^{2}+a^{2}(t)d{\bf x}^{2}\quad\quad {\rm with}\quad a(t)=\exp(Ht). \end{equation} Here $a$ is the scale factor, which is a function of time, and $H$ is the Hubble constant. Also, $m_1$ is the mass of the field $\phi_1$ and $m_2$ is the mass of the field $\phi_2$. Due to the homogeneity and isotropy of the spatially flat FLRW background with a planar de Sitter solution, both fields are functions of time only and have no space dependence. For this reason no kinetic terms involving spatial derivatives appear.
Before moving further we introduce some conditions which will help us in simplifying the calculations. We define the integral over the matter source term to be proportional to the potential function $V(a)$: \begin{equation} \int d^{3} x \sqrt{-g}~T=\frac{1}{2} V(a) \end{equation}\\ We will set the Planck length $l_{p}=1$ for the rest of the computation and we also introduce some dimensionless field variables, which are given by: \begin{equation} \begin{aligned} &\mu_{1}(t)= \phi_{1}(t) a^{3 / 2}(t)=\exp(3Ht/2) \phi_{1}(t), \\ &\mu_{2}(t)= \phi_{2}(t) a^{3 / 2}(t)=\exp(3Ht/2) \phi_{2}(t). \\ \end{aligned} \end{equation} After these manipulations and using the field redefinition, the Lagrangian for two coupled scalar fields in the planar patch of de Sitter space can be recast in the following simplified form: \begin{eqnarray} L=-\frac{a \dot{a}^{2}}{2}+\frac{V(a)}{2}+l_{1}+l_{2}+l_{3}\label{eq3c} \end{eqnarray} where $l_1$, $l_2$ and $l_3$ are given by the following expressions: \begin{eqnarray} &&l_{1}=\frac{\dot{\mu}_{1}^{2}}{2}-\frac{1}{2} \mu_{1}^{2} m_{1}^{2}-\frac{3}{2} \frac{\dot{a}}{a} \dot{\mu_{1}} \mu_{1} +\frac{9}{8}\left(\frac{\dot{a}}{a}\right)^{2} \mu_{1}^{2},\\ &&l_{2}=\frac{\dot{\mu}_{2}^{2}}{2}-\frac{1}{2} \mu_{2}^{2} m_{2}^{2}-\frac{3}{2} \frac{\dot{a}}{a} \dot{\mu_{2}} \mu_{2}+\frac{9}{8}\left(\frac{\dot{a}}{a}\right)^{2} \mu_{2}^{2},\\ &&l_{3}=K\left[\left(\frac{\dot{a}}{a}\right)^{2} \mu_{1} \mu_{2}-\frac{1}{2} \frac{\dot{a}}{a}\left(\dot{\mu_{1}} \mu_{2}+\mu_{1} \dot{\mu_{2}}\right)\right].
\end{eqnarray} Using the Euler-Lagrange equations we get the two equations of motion for the $\mu_1$ and $\mu_2$ fields: \begin{eqnarray} &&\ddot{\mu_{1}}+\left[m_{1}^{2}-\frac{3}{2} \frac{\ddot{a}}{a}-\frac{3}{4}\left(\frac{\dot{a}}{a}\right)^{2}\right] \mu_{1} \nonumber\\ &&\quad\quad\quad-\frac{K}{2}\left[\frac{\ddot{a}}{a}+\left(\frac{\dot{a}}{a}\right)^{2}\right] \mu_{2}=0,\\ &&\ddot{\mu}_{2}+\left[m_{2}^{2}-\frac{3}{2} \frac{\ddot{a}}{a}-\frac{3}{4}\left(\frac{\dot{a}}{a}\right)^{2}\right] \mu_{2} \nonumber\\ &&\quad\quad\quad-\frac{K}{2}\left[\frac{\ddot{a}}{a}+\left(\frac{\dot{a}}{a}\right)^{2}\right] \mu_{1}=0. \end{eqnarray} Here we can clearly see that the above two equations are coupled differential equations of motion, due to the interaction present in the original theory. Now we can construct the Hamiltonian for our theory from the Lagrangian given in Eq\eqref{eq3c}. The Hamiltonian for two interacting scalar fields in the planar patch of de Sitter background is given by: \begin{equation} H=-\frac{\pi^{2}}{2 M} -\frac{V(a)}{2} + \frac{1}{2}\left(\pi_{1}^{2}+m_{1}^{2} v_{1}^{2}\right) +\frac{1}{2}\left(\pi_{2}^{2}+m_{2}^{2} v_{2}^{2}\right) \end{equation} where we define: \begin{align} M&=a+\frac{K}{a^{2}}\left[\mu_{1} \mu_{2}+\frac{K}{4}\left(\mu_{1}^{2}+\mu_{2}^{2}\right)\right] \\ \pi&=p_{a}+\frac{p_{1}}{2 a}\left(3 \mu_{1}+K \mu_{2}\right)+\frac{p_{2}}{2 a}\left(3 \mu_{2}+K \mu_{1}\right) \end{align} Here $p_{a}$ is the canonically conjugate momentum of the scale factor $a$, and $\pi_{1}$, $\pi_{2}$ are the canonically conjugate momenta of the redefined fields $\mu_{1}$ and $\mu_{2}$ respectively. \subsection{\textcolor{Sepia}{\textbf{Quantizing the Hamiltonian}}} \label{motiv} We will be quantizing the fields $\mu_{1}$ and $\mu_{2}$, and will be treating gravity classically in this computation. In our analysis the back reaction from the fields is neglected.
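As an illustrative specialization (a consistency check added here, not part of the original derivation): for the pure de Sitter scale factor $a(t)=\exp(Ht)$ one has $\dot{a}/a=H$ and $\ddot{a}/a=H^{2}$, so the coupled equations of motion above reduce to the constant-coefficient system
\begin{eqnarray}
&&\ddot{\mu}_{1}+\left(m_{1}^{2}-\frac{9}{4}H^{2}\right)\mu_{1}-KH^{2}\mu_{2}=0,\\
&&\ddot{\mu}_{2}+\left(m_{2}^{2}-\frac{9}{4}H^{2}\right)\mu_{2}-KH^{2}\mu_{1}=0.
\end{eqnarray}
For $m_{i}^{2}<9H^{2}/4$ the effective frequency squared becomes negative, which makes the connection with a pair of coupled inverted harmonic oscillators explicit.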
Since the back reaction is neglected, we expand $\frac{\pi^{2}}{2 M}$ in a series and retain only the term $\frac{\pi_{a}^{2}}{a}$ (which, together with the potential $V(a)$, governs the semi-classical behaviour of the background geometry). With these conditions defined above we get the approximated form of the Hamiltonian as follows: \begin{equation} \begin{aligned} H &\sim \frac{1}{2}\left(\pi_{1}^{2}+v_{1}^{2} m_{1}^{2}\right)+\frac{1}{2}\left(\pi_{2}^{2}+v_{2}^{2} m_{2}^{2}\right)\\ &-\frac{p_{a}^{2}}{2 a^{2}}\left[3\left(v_{1} \pi_{1}+v_{2} \pi_{2}\right)+ K\left(v_{1} \pi_{2}+v_{2}\pi_{1}\right)\right]\\ &+\frac{K p_{a}^{2}}{2 a^{4}}\left(v_{1} v_{2}+\frac{K}{4}\left(v_{1}^{2}+v_{2}^{2}\right)\right) \end{aligned} \end{equation} where we define $p_{a}\sim - a\dot{a}$. Now we will quantize this Hamiltonian. For this purpose we promote the fields to operators and take the Fourier decomposition. We use the following ansatz for the Fourier decomposition: \begin{equation} \begin{aligned} \hat{v}_{1} &=\int \frac{d^{3} k}{(2 \pi)^{3}} \hat{v}_{\textbf{1,k}} e^{i \textbf{k} \cdot \textbf{x}}, \\ \hat{\pi}_{1} &=\int \frac{d^{3} k}{(2 \pi)^{3}} \hat{\pi}_{\textbf{1,k}} e^{i \textbf{k} \cdot \textbf{x}},\\ \hat{v}_{2} &=\int \frac{d^{3} k}{(2 \pi)^{3}} \hat{v}_{\textbf{2,k}} e^{i \textbf{k} \cdot \textbf{x}}, \\ \hat{\pi}_{2} &=\int \frac{d^{3} k}{(2 \pi)^{3}} \hat{\pi}_{\textbf{2,k}} e^{i \textbf{k} \cdot \textbf{x}}. \end{aligned} \end{equation} For our purpose we will be working in the Schr\"{o}dinger picture, where the operators $\hat{v}_{1,\textbf{k}}$, $\hat{\pi}_{1,\textbf{k}}$, $\hat{v}_{2,\textbf{k}}$ and $\hat{\pi}_{2,\textbf{k}}$ are fixed at an initial time.
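The classical evolution encoded in the coupled equations of motion above can also be explored numerically. The following is a minimal sketch (not taken from the paper; the function name and all parameter values are illustrative assumptions), which integrates the two coupled modes for the pure de Sitter scale factor $a(t)=\exp(Ht)$, for which $\dot a/a=H$ and $\ddot a/a=H^2$ so the coefficients are constant:

```python
import numpy as np
from scipy.integrate import solve_ivp

def coupled_modes(t, y, m1, m2, K, H):
    """Right-hand side of the coupled mode equations for a(t) = exp(H t),
    where adot/a = H and addot/a = H**2, so that
        mu1'' + (m1**2 - 9 H**2 / 4) mu1 - K H**2 mu2 = 0   (and 1 <-> 2)."""
    mu1, dmu1, mu2, dmu2 = y
    ddmu1 = -(m1**2 - 2.25 * H**2) * mu1 + K * H**2 * mu2
    ddmu2 = -(m2**2 - 2.25 * H**2) * mu2 + K * H**2 * mu1
    return [dmu1, ddmu1, dmu2, ddmu2]

# Illustrative parameters; the second mode starts at rest and is
# excited only through the coupling K.
m1, m2, K, H = 2.0, 1.5, 0.1, 0.5
sol = solve_ivp(coupled_modes, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0],
                args=(m1, m2, K, H), max_step=0.01)
mu2_max = np.max(np.abs(sol.y[2]))  # amplitude leaked into the second field
```

For $K=0$ the modes decouple and a mode starting at rest stays at rest; a small non-zero $K$ transfers oscillation amplitude between the two fields, the classical counterpart of the interaction terms $g_1$, $g_2$ appearing after quantization.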
We define modes and the associated canonically conjugate momenta for the two fields with initial frequency equal to $k$ which, suitably normalized, give us: \begin{equation} \begin{aligned} &\hat{v}_{1,\textbf{k}}=\frac{1}{\sqrt{2 k_{1}}}\left(b_{1,\textbf{k}}+b_{1,\textbf{k}}^{\dagger}\right),~~~~~~~\hat{\pi}_{1,\textbf{k}}=i \sqrt{\frac{k_{1}}{2}}\left(b_{1,\textbf{k}}^{\dagger}-b_{1,\textbf{k}}\right) \\ &\hat{v}_{2,\textbf{k}}=\frac{1}{\sqrt{2 k_{2}}}\left(b_{2,\textbf{k}}+b_{2,\textbf{k}}^{\dagger}\right),~~~~~~~\hat{\pi}_{2,\textbf{k}}=i \sqrt{\frac{k_{2}}{2}}\left(b_{2,\textbf{k}}^{\dagger}-b_{2,\textbf{k}}\right) \end{aligned} \end{equation} The Four-mode Hamiltonian operator after quantization for the two scalar fields interacting with each other via coupling constant $K$ in the planar patch of de Sitter space can be written in the following simple form: \begin{equation}\label{hamilto} \begin{aligned} H(\tau)&=\left(l_{1}(\tau) b_{1 ,\mathbf{-k}} b_{1, \mathbf{k}}+l_{1}^{*}(\tau) b_{1, \mathbf{k}}^{\dagger} b_{1 ,\mathbf{-k}}^{\dagger}\right)\\ &+\left(l_{2}(\tau) b_{2, \mathbf{-k}} b_{2 ,\mathbf{k}}+l_{2}^{*}(\tau) b_{2, \mathbf{k}}^{\dagger} b_{2, \mathbf{-k}}^{\dagger}\right)\\ &+\left\{\omega_{1}(\tau)\left(b^{\dagger}_{1 ,\mathbf{-k}}b_{1 ,\mathbf{k}}+b^{\dagger}_{1, \mathbf{k}} b_{1 ,\mathbf{-k}}\right)\right.\\ &+\omega_{2}(\tau)\left(b_{2, \mathbf{-k}}^{\dagger}b_{2 ,\mathbf{k}}+b_{2,\mathbf{k}}^{\dagger} b_{2 ,\mathbf{-k}}\right)\\ &+g_{1}(\tau)\left(b_{1 ,\mathbf{-k}} b_{2 ,\mathbf{k}}+b_{1 ,\mathbf{k}} b_{2, \mathbf{-k}}\right)\\ &+g_{1}^{*}(\tau)\left(b_{2 ,\mathbf{k}}^{\dagger} b_{1, \mathbf{-k}}^{\dagger}+ b_{2, \mathbf{-k}}^{\dagger} b_{1, \mathbf{k}}^{\dagger}\right)\\ &+g_{2}(\tau)\left( b_{1 ,\mathbf{k}} b_{2 ,\mathbf{-k}}^{\dagger}+b_{1 ,\mathbf{-k}} b_{2 ,\mathbf{k}}^{\dagger}\right)\\ &\left.+g_{2}^{*}(\tau)\left(b_{2, \mathbf{-k}} b_{1, \mathbf{k}}^{\dagger}+b_{2,\mathbf{k}} b_{1, \mathbf{-k}}^{\dagger}\right)\right\} \end{aligned}
\end{equation} where the terms $l_{1}$, $l_{2}$, $\omega_1$, $\omega_2$, $g_1$ and $g_2$ are defined below: \begin{equation} \begin{aligned} &l_{1}(\tau)=\frac{{K^{2}} \pi_{a}^{2}}{16 a^{4} m_{1}}+i \frac{3}{4 a^{2}} \pi_{a} \\ &l_{2}(\tau)=\frac{{K^{2}} \pi_{a}^{2}}{16 a^{4} m_{2}}+i \frac{3}{4 a^{2}} \pi_{a} \\ &\omega_{1}(\tau)=\frac{1}{2}\left(m_{1}+\frac{{K^{2}} \pi_{a}^{2}}{8 a^{4} m_{1}}\right) \\ &\omega_{2}(\tau)=\frac{1}{2}\left(m_{2}+\frac{{K^{2}} \pi_{a}^{2}}{8 a^{4} m_{2}}\right) \\ &g_{1}(\tau)=\frac{{K^{2}} \pi_{a}^{2}}{4 a^{4}} \frac{1}{\sqrt{m_{1} m_{2}}} \\ &~~~~~~~~+i \frac{{K^{2}} \pi_{a}^{2}}{4 a^{4}} \frac{\left(m_{1}+m_{2}\right)}{\sqrt{m_{1} m_{2}}} =g_{2}(\tau) \end{aligned} \end{equation} Here $\tau$ represents the conformal time, which is related to the physical time $t$ by the following expression: \begin{eqnarray} \tau= \int \frac{dt}{a(t)}=-\frac{1}{H}\exp(-Ht). \end{eqnarray} Consequently, in terms of the conformal time coordinate the de Sitter metric in the planar patch can be recast as: \begin{eqnarray} ds^{2}=a^{2}(\tau)\left(-d\tau^{2}+d{\bf x}^{2}\right)\quad {\rm where}\quad a(\tau)=-\frac{1}{H\tau}.\quad \end{eqnarray} \subsection{\textcolor{Sepia}{\textbf{The Two Coupled Inverted Harmonic Oscillator}}} \label{Inverte} The system of two coupled inverted quantum harmonic oscillators \cite{Tarzi_1988,yuce2006inverted,SUBRAMANYAN2021168470} can be described by the Hamiltonian $\text{H}$, which is the sum of the free Hamiltonians of the two inverted quantum harmonic oscillators and the interaction Hamiltonian $\text{H}_{int}$. The latter depends on the type of interaction between them, and the coupling constant $K$ accounts for the strength of the interaction between the two coupled inverted quantum harmonic oscillators: \begin{equation}\label{iqhc} \text{H}=\sum_{i=1}^{2} \text{H}_{i} + K \text{H}_{int} \end{equation} Here, ${\text{H}}_{i}$ is the free Hamiltonian of each of the two inverted quantum harmonic oscillators: \begin{equation}
\begin{aligned} \text{H}_{i} &=\frac{\hat{p}_{i}^{2}}{2}-\frac{\hat{q}_{i}^{2}}{2}\quad {\rm where}\quad i=1,2\\ &=i \frac{\hbar}{2}\left(\hat{b}_{i}^{2} e^{2 i \frac{\pi}{4}}-\text { h.c. }\right) . \end{aligned} \end{equation} Here $b_i$ corresponds to the annihilation operator of the quantum harmonic oscillator, with the index $i$ running from 1 to 2. In the present context we will consider the construction of the general squeeze Hamiltonian. Writing the free part of the Hamiltonian in this form facilitates the comparison with the more general squeeze Hamiltonian, which we will be considering. The setup of the two coupled inverted harmonic oscillators can be mapped to the case of two coupled scalar fields in the de Sitter background, as indicated in Eq\eqref{iqhc} and Eq\eqref{hamilto}. The free part of the Hamiltonian for the two coupled inverted quantum harmonic oscillators gets mapped to the terms containing $l_{1}$, $l^{*}_{1}$ for the first scalar field with modes $(1,{\bf{k}},1,{\bf{-k}})$ and the terms containing $l_{2}$, $l^{*}_{2}$ for the second scalar field with modes $(2,{\bf{k}},2,{\bf{-k}})$. The free terms with $\omega_{1}$ and $\omega_{2}$ mimic the role of a rotation operator, which is absent in Eq\eqref{iqhc} but can be introduced in it explicitly. The interaction Hamiltonian $H_{int}$ corresponds to the terms containing $g_1$, $g^*_1$, $g_2$ and $g^*_2$. \subsection{\textcolor{Sepia}{\textbf{Four mode Squeezed State operator}}} \label{fmss1} Let us consider a state $\ket{\phi}_{\text{in}}$ which is the initial reference vacuum state of the two scalar fields. The final out state $\ket{\phi}_{\text{out}}$ can be obtained by applying the operator given in Eq \eqref{eq10} on the initial reference vacuum state of the two scalar fields.
\begin{equation} \begin{aligned} \ket{\phi}_{\text {out }} = S_{1}^{(1)}\left(r_{1}, \theta_{1}\right) S_{1}^{(2)}\left(r_{2}, \theta_{2}\right) &S\left(r_{1},r_{2}, \theta_{1}, \theta_{2}\right) \\ &\mathcal{R}\left(\phi_{1}, \phi_{2}\right) \ket{\phi}_{\text{in}} \end{aligned} \label{eq10} \end{equation} Here, the initial vacuum state is $\ket{\phi}_{\text{in}}$ and we have \begin{equation} \mathcal{R}\left(\phi_{1}, \phi_{2}\right)|0,0\rangle =e^{i\left(\phi_{1}+\phi_{2}\right)}|0,0\rangle \end{equation} Here, we can see that the action of the total rotation operator on the vacuum state introduces only an overall phase factor, which we neglect in the further calculation of the total squeezed operator for the two scalar fields in the de Sitter background space. We write the most general squeezed state in the present context, which is defined as: \begin{equation} \begin{aligned} \left|\Psi_{\text {sq }}\right\rangle=S_{1}^{(1)}\left(r_{1}, \theta_{1}\right) S_{1}^{(2)}\left(r_2, \theta_{2}\right) S\left(r_{1},r_{2}, \theta_{1}, \theta_{2}\right)|0,0\rangle \end{aligned} \end{equation} Here it is important to note that \begin{equation} \begin{aligned} \underbrace{S_{1}^{(1)}\left(r_{1}, \theta_{1}\right) S_{1}^{(2)}\left(r_{2}, \theta_{2}\right)}_{\textcolor{blue}{\rm with~interaction~1~and~2}}& \neq &\underbrace{S_{1}^{(1)}\left(r_{1}, \theta_{1}\right) S_{1}^{(2)}\left(r_{2}, \theta_{2}\right)}_{\textcolor{blue}{\rm without~interaction~1~and~2}} \end{aligned} \end{equation} The following are the important points to be noted for the operators $S_{1}^{(1)}\left(r_{1}, \theta_{1}\right) ,~ S_{1}^{(2)}\left(r_{2}, \theta_{2}\right)$ and $S\left(r_{1},r_{2}, \theta_{1}, \theta_{2}\right)$: \begin{itemize} \item The contribution from the gravitational part through $a(t)$, as well as from the interaction part of the fields, appears in $S\left(r_{1},r_{2}, \theta_{1}, \theta_{2}\right)$. \item The contribution of $\mu_{1}$ and $\mu_{2}$ with interaction appears in $S_{1}^{(1)}\left(r_{1},
\theta_{1}\right) ,~ S_{1}^{(2)}\left(r_{2}, \theta_{2}\right)$, which makes them different from the contribution obtained in the case when there is no interaction at all. \end{itemize} \subsection{\textcolor{Sepia}{\textbf{ Technical details of the Constructions}}} Here we will give the calculation of the total squeezed operator for two scalar fields interacting with each other in the de Sitter background space. First we consider the operator $S_{1}^{(1)}\left(r_{1}, \theta_{1}\right)$ and write it in the form of Eq\eqref{eq14}, which is the product of the usual squeezed operator for the first field and the interaction term given in Eq\eqref{eq15}. \begin{equation} \begin{aligned} &S_{1}^{(1)}\left(r_{1}, \theta_{1}\right) =\exp \Big[\frac{r_{1}}{2} \left(e^{-2 i \theta_{1}} b_{1}^{2}-e^{2 i \theta_{1}} b_{1}^{\dagger 2}\right)\\ &+\frac{r_{1}}{2}\left\{e^{-2 i \theta_{1}}\left(b_{1} b_{2}+b_{1}^{\dagger} b_{2}^{\dagger}\right)-e^{2 i \theta_{1}}\left(b_{2}^{\dagger} b_{1}^{\dagger}+b_{2} b_{1}\right)\right\}\Big]\\ &=S_{1}^{\text{\tiny (non-int.)}}\left(r_{1}, \theta_{1}\right) \underbrace{S_{1}^{ \text{\tiny(1 int. with 2)}}\left(r_{1}, \theta_{1}\right)}_{\textcolor{blue}{\rm New~Contribution}} \end{aligned} \label{eq14} \end{equation} where, $S_{1}^{ \text{\tiny(1 int. with 2)}}\left(r_{1}, \theta_{1}\right)$ is the part of the operator $S_{1}^{(1)}\left(r_{1}, \theta_{1}\right)$ containing interaction, which is given by: \begin{equation} \begin{aligned} S_{1}^{ \text{\tiny(1 int.
with 2)}}\left(r_{1}, \theta_{1}\right) = & \exp \Big[ \frac{r_{1}}{2} \Big\{ e^{-2 i \theta_{1}}\left(b_{1} b_{2} +b_{1}^{\dagger} b_{2}^{\dagger}\right) \\ & -e^{2 i \theta_{1}}\left(b_{2}^{\dagger} b_{1}^{\dagger} + b_{2} b_{1}\right)\Big\} +\ldots \Big] \end{aligned} \label{eq15} \end{equation} Similarly, we can write the operator $S_{1}^{(2)}\left(r_{2}, \theta_{2}\right)$ as the product of the usual squeezed operator for the second field and the term coming from the interaction. \begin{equation} \begin{aligned} S_{1}^{(2)}\left(r_{2}, \theta_{2}\right) &= \\ &S_{1}^{\text{\tiny (non-int.)}}\left(r_{2}, \theta_{2}\right) \underbrace{S_{1}^{ \text{\tiny(2 int. with 1)}}\left(r_{2}, \theta_{2}\right)}_{\textcolor{blue}{\rm New~Contribution}} \end{aligned} \end{equation} where Eq\eqref{eq17} represents the interaction part of the operator $ S_{1}^{(2)}\left(r_{2}, \theta_{2}\right)$ coming from the second field. \begin{equation} \begin{aligned} S_{1}^{\text{\tiny (2 int. with 1)}}\left(r_{2}, \theta_{2}\right) = & \exp \Big[ \frac{r_{2}}{2} \Big\{ e^{-2 i \theta_{2}}\left(b_{1} b_{2} +b_{1}^{\dagger} b_{2}^{\dagger}\right) \\ & -e^{2 i \theta_{2}}\left(b_{2}^{\dagger} b_{1}^{\dagger} + b_{2} b_{1}\right)\Big\} +\ldots \Big] \end{aligned} \label{eq17} \end{equation} Now, the full squeezed state operator is the product of the usual squeezed operators of both fields and the operators with interaction between fields 1 and 2; a third additional term is also present, denoted $S\left(r_{1},r_{2}, \theta_{1}, \theta_{2}\right)$. This is the contribution which appears due to the commutators between $b_{1}$, $b_{1}^{\dagger}$, and $b_{2}$, $b_{2}^{\dagger}$.
\begin{equation} \begin{aligned} S_{\text{Full}}=&\\ &\underbrace{S_{1}\left(r_{1}, \theta_{1}\right)S_{1}\left(r_{2}, \theta_{2}\right)}_{\textcolor{blue}{\rm Without~interaction}}\\ &\times ~~ \underbrace{S_{1}^{\text{int.}}\left(r_{1}, \theta_{1}\right)S_{1}^{\text{int.}}\left(r_{2}, \theta_{2}\right)}_{\textcolor{blue}{\rm Interaction~between~1~and~2}}~~\times\underbrace{S\left(r_{1}, \theta_{1}, r_{2}, \theta_{2}\right)}_{\textcolor{blue}{\rm Additional~terms}} \end{aligned} \label{eq18} \end{equation} In order to construct the additional term $S\left(r_{1},r_{2}, \theta_{1}, \theta_{2}\right)$ of the total squeezed operator we will use the Baker–Campbell–Hausdorff formula. Let us consider $S_{1}^{(1)}$ and $S_{1}^{(2)}$ given in Eq \eqref{eq19} and Eq \eqref{eq20}. \begin{equation} \begin{aligned} S_{1}^{(1)}\left(r_{1}, \theta_{1}\right)= \exp &\bigg[\frac{r_{1}}{2} \left(e^{-2 i \theta_{1}} b_{1}^{2} - e^{2 i \theta_{1}} b_{1}^{\dagger 2}\right) \\ + &\frac{r_{1}}{2} \bigg\{e^{-2 i \theta_{1}}\left(b_{1} b_{2}+b_{1}^{\dagger}b_{2}^{\dagger}\right)\\ &~~~~~~~~~- e^{2 i \theta_{1}}\left(b_{2}^{\dagger} b_{1}^{\dagger}+b_{2} b_{1}\right)\bigg\}\bigg] \end{aligned} \label{eq19} \end{equation} and \begin{equation} \begin{aligned} S_{1}^{(2)}\left(r_{2}, \theta_{2}\right)= \exp &\bigg[\frac{r_{2}}{2} \left(e^{-2 i \theta_{2}} b_{2}^{2} - e^{2 i \theta_{2}} b_{2}^{\dagger 2}\right) \\ + &\frac{r_{2}}{2} \bigg\{e^{-2 i \theta_{2}}\left(b_{1} b_{2}+b_{1}^{\dagger}b_{2}^{\dagger}\right)\\ &~~~~~~~~~- e^{2 i \theta_{2}}\left(b_{2}^{\dagger} b_{1}^{\dagger}+b_{2} b_{1}\right)\bigg\}\bigg] \end{aligned} \label{eq20} \end{equation} Now we define, \begin{equation} \begin{aligned} &~~~~~~~~~~\frac{r_{1}}{2} \left(e^{-2 i \theta_{1}} b_{1}^{2} - e^{2 i \theta_{1}} b_{1}^{\dagger 2}\right) &= \alpha_{1}\\ &\frac{r_{1}}{2} \bigg\{e^{-2 i \theta_{1}}\left(b_{1} b_{2}+b_{1}^{\dagger}b_{2}^{\dagger}\right)\\ &~~~~~~~~~~~- e^{2 i \theta_{1}}\left(b_{2}^{\dagger}
b_{1}^{\dagger}+b_{2} b_{1}\right)\bigg\} &= \alpha_{2} \end{aligned} \end{equation} Similarly we have, \begin{equation} \begin{aligned} &~~~~~~~~~~\frac{r_{2}}{2} \left(e^{-2 i \theta_{2}} b_{2}^{2} - e^{2 i \theta_{2}} b_{2}^{\dagger 2}\right) &= \beta_{1}\\ &\frac{r_{2}}{2} \bigg\{e^{-2 i \theta_{2}}\left(b_{1} b_{2}+b_{1}^{\dagger}b_{2}^{\dagger}\right)\\ &~~~~~~~~~~~- e^{2 i \theta_{2}}\left(b_{2}^{\dagger} b_{1}^{\dagger}+b_{2} b_{1}\right)\bigg\} &= \beta_{2} \end{aligned} \label{eq22} \end{equation} such that \begin{equation} \begin{aligned} \alpha_{1}+\alpha_{2}=A_{1} \\ \beta_{1}+\beta_{2}=A_{2} \end{aligned} \label{A1A2} \end{equation} Now we apply the Baker–Campbell–Hausdorff formula between $S_{1}^{(1)}$ and $S_{1}^{(2)}$, where we have defined the terms in the exponentials as $A_1$ and $A_2$. \begin{equation} \begin{aligned} e^{A_{1}} \cdot e^{A_{2}}&=\\ &e^{\{ A_{1} + A_{2} +\frac{1}{2} [A_{1},A_{2}] + \frac{1}{12}\left([A_{1},[A_{1},A_{2}]]-[A_{2},[A_{1},A_{2}]]\right) + \ldots \}}\\ &=\underbrace{~e^{A_{1}}~}_{\textcolor{blue}{S_{1}^{(1)}}}~ \underbrace{~e^{A_{2}}~}_{\textcolor{blue}{S_{1}^{(2)}}} \underbrace{e^{f(A_{1},A_{2})}}_{\textcolor{blue}{S~ \rm New~Contribution}} \end{aligned} \label{eq24} \end{equation} We will only consider the BCH expansion up to $\left[A_{1},\left[A_{1}, A_{2}\right]\right]$ and $\left[A_{2},\left[A_{1}, A_{2}\right]\right]$.
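This truncation can be sanity-checked numerically: for generators of size $\epsilon$, the first neglected terms of the BCH series in Eq\eqref{eq24} are of fourth order in $\epsilon$. A minimal sketch with random $4\times 4$ matrices standing in for $A_{1}$ and $A_{2}$ (all names and values here are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.linalg import expm

# Random 4x4 generators standing in for A1, A2; eps sets their size.
rng = np.random.default_rng(0)
eps = 1e-2
A1 = eps * rng.standard_normal((4, 4))
A2 = eps * rng.standard_normal((4, 4))

comm = lambda X, Y: X @ Y - Y @ X

# f(A1, A2) truncated after the double-commutator terms
f = 0.5 * comm(A1, A2) + (comm(A1, comm(A1, A2)) - comm(A2, comm(A1, A2))) / 12

lhs = expm(A1) @ expm(A2)
rhs = expm(A1 + A2 + f)
err = np.max(np.abs(lhs - rhs))   # first neglected BCH term is O(eps^4)
```

The residual `err` is fourth order in the generator size, far below the retained double-commutator terms, which justifies dropping the higher nested commutators.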
\begin{equation} \begin{aligned} f\left(A_{1}, A_{2}\right)&= \\ &\frac{1}{2}\left[A_{1}, A_{2}\right]+\frac{1}{12} \big(\left[A_{1},\left[A_{1}, A_{2}\right]\right] \\ &~~~~~~~~~~~~~~~~~~~~~~-\left[A_{2},\left[A_{1}, A_{2}\right]\right]\big) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~+\cdots\\ \end{aligned} \label{eq25} \end{equation} First, we evaluate the first commutator in Eq\eqref{eq25}: \begin{equation} \begin{aligned} \left[A_{1}, A_{2}\right]=&[\alpha_{1}+\alpha_{2},\beta_{1}+\beta_{2}]\\ =&\left[\alpha_{1}, \beta_{1}\right]+\left[\alpha_{2}, \beta_{2}\right] +\left[\alpha_{2}, \beta_{1}\right]+\left[\alpha_{1}, \beta_{2}\right] \end{aligned} \end{equation} where, \begin{equation} \begin{aligned} \left[\alpha_{1},\beta_{1}\right]&=0 \\ \left[\alpha_{2},\beta_{2}\right]&=0 \\ \left[\alpha_{2}, \beta_{1}\right]&=\frac{r_{1} r_{2}}{2}\Big\{\left(e^{2 i\left(\theta_{1}+\theta_{2}\right)}-e^{-2 i\left(\theta_{1}-\theta_{2}\right)}\right) b_{1} b_{2}^{\dagger}\\ &~~~~~~~~~~~~+\left(e^{2 i\left(\theta_{1}-\theta_{2}\right)}-e^{-2 i\left(\theta_{1}+\theta_{2}\right)}\right) b_{1}^{\dagger}b_{2}\Big\}\\ \left[\alpha_{1}, \beta_{2}\right]&=\frac{r_{1} r_{2}}{2}\Big\{\left(e^{-2 i\left(\theta_{1}+\theta_{2}\right)}-e^{-2 i\left(\theta_{1}-\theta_{2}\right)}\right) b_{1} b_{2}^{\dagger}\\ &~~~~~~~~~~~~+\left(e^{2 i\left(\theta_{1}+\theta_{2}\right)}-e^{2 i\left(\theta_{1}-\theta_{2}\right)}\right) b_{1}^{\dagger}b_{2}\Big\} \end{aligned} \end{equation} So, \begin{equation} \begin{aligned} \left[A_{1},A_{2}\right]&=[\alpha_{1},\beta_{2}] +[\alpha_{2},\beta_{1}]\\ &=\frac{r_{1} r_{2}}{2}\left\{f_{1}\left(\theta_{1}, \theta_{2}\right)~b_{1} b_{2}^{\dagger}+f_{2}\left(\theta_{1}, \theta_{2}\right)~b_{1}^{\dagger} b_{2}\right\} \end{aligned} \end{equation} where we have defined $f_{1}\left(\theta_{1}, \theta_{2}\right)$ and $f_{2}\left(\theta_{1}, \theta_{2}\right)$ as: \begin{equation} \begin{aligned}
f_{1}\left(\theta_{1}, \theta_{2}\right)&=2 \cos \left(2\left(\theta_{1}+\theta_{2}\right)\right) -2 e^{-2 i\left(\theta_{1}-\theta_{2}\right)} \\ &=f_{1}^{\operatorname{Real}}\left(\theta_{1}, \theta_{2}\right)+i f_{1}^{\rm Im}\left(\theta_{1}, \theta_{2}\right) \end{aligned} \end{equation} with the real and the imaginary parts being, \begin{equation} \begin{aligned} f_{1}^{\operatorname{Real}}\left(\theta_{1}, \theta_{2}\right)&=2\left(\cos \left(2\left(\theta_{1}+\theta_{2}\right)\right)-\cos(2(\theta_{1}-\theta_{2}))\right) \\ f_{1}^{\operatorname{Im}}\left(\theta_{1}, \theta_{2}\right)&=2 \sin \left(2\left(\theta_{1}-\theta_{2}\right)\right) \end{aligned} \end{equation} and \begin{equation} \begin{aligned} f_{2}\left(\theta_{1}, \theta_{2}\right) &=e^{2 i\left(\theta_{1}+\theta_{2}\right)}-e^{-2 i\left(\theta_{1}+\theta_{2}\right)} \\ &= 2 i\sin\left(2\left(\theta_{1}+\theta_{2}\right)\right) \\ &=f_{2}^{\rm Real}\left(\theta_{1}, \theta_{2}\right) +i f_{2}^{\rm Im}\left(\theta_{1}, \theta_{2}\right) \end{aligned} \end{equation} where, \begin{equation} \begin{aligned} f_{2}^{\rm Real} \left(\theta_{1}, \theta_{2}\right)&=0 \\ f_{2}^{\rm Im}\left(\theta_{1}, \theta_{2}\right)&=2 \sin \left(2\left(\theta_{1}+\theta_{2}\right)\right) \end{aligned} \end{equation} So, the first commutator becomes: \begin{equation} \begin{aligned} \left[A_{1}, A_{2}\right]&=r_{1} r_{2} \left[\left\{ \cos \left(2\left(\theta_{1}+\theta_{2}\right)\right)-\cos(2(\theta_{1}-\theta_{2})) \right.\right.\\ &~~~~~~~~~~~~\left.+i\sin(2(\theta_{1}-\theta_{2})) \right\} b_{1} b_{2}^{\dagger}\\ &~~~~~~~~~~~~~~~~~\left.+\left\{i \sin(2(\theta_{1}+\theta_{2})) \right\} b_{1}^{\dagger}b_{2}\right] \\ \end{aligned} \end{equation} We can express it as follows: \begin{equation} \begin{aligned} \left[A_{1}, A_{2}\right]=&\left[f_{1}\left(r_{1}, r_{2}, \theta_{1}, \theta_{2}\right) b_{1}b_{2}^{\dagger}\right.\\ &~~~~~~~~~~~~~~~~\left.+f_{2}\left(r_{1}, r_{2}, \theta_{1}, \theta_{2}\right)
b_{1}^{\dagger} b_{2}\right] \end{aligned} \end{equation} where we define $f_{1}\left(r_{1}, r_{2}, \theta_{1}, \theta_{2}\right)$ and $f_{2}\left(r_{1}, r_{2}, \theta_{1}, \theta_{2}\right)$ as: \begin{equation} \begin{aligned} f_{1}\left(r_{1}, r_{2}, \theta_{1}, \theta_{2}\right) &= r_{1}r_{2}\Big[ \cos \left(2\left(\theta_{1}+\theta_{2}\right)\right)-\cos(2(\theta_{1}-\theta_{2})) \\ & ~~~~~~~~~~+i \sin(2(\theta_{1}-\theta_{2}))\Big] \\ f_{2}\left(r_{1}, r_{2}, \theta_{1}, \theta_{2}\right)&= i r_{1} r_{2} \sin(2(\theta_{1}+\theta_{2})) \\ \end{aligned} \end{equation} Let us denote the following, \begin{equation} \begin{aligned} P= f_{1}\left(r_{1}, r_{2}, \theta_{1}, \theta_{2}\right)b_{1}b_{2}^{\dagger}\\ Q=f_{2}\left(r_{1}, r_{2}, \theta_{1}, \theta_{2}\right)b_{1}^{\dagger}b_{2} \end{aligned} \end{equation} So, $[A_{1},A_{2}]=P+Q$. We will now compute the second commutator in the expansion, which is $\left[A_{1},\left[A_{1}, A_{2}\right]\right]$. The first part of this commutator is $\left[\alpha_{1},\left[A_{1},A_{2}\right]\right]$, which becomes: \begin{equation} \begin{aligned} \left[\alpha_{1},\left[A_{1},A_{2}\right]\right]&= \left[\alpha_{1},P + Q \right]\\ &=\left(\tilde{f_{2}}b_{1}b_{2}+\tilde{f_{1}}b_{1}^{\dagger}b_{2}^{\dagger}\right) \end{aligned} \end{equation} where \begin{equation} \begin{aligned} &\tilde{f_{2}} = r_{1}f_{2} e^{-2 i \theta_{1}}\\ &\tilde{f_{1}}=r_{1}f_{1}e^{2i\theta_{1}} \end{aligned} \end{equation} Now we will compute the second part of the commutator $\left[A_{1},\left[A_{1}, A_{2}\right]\right]$, which is given as $[\alpha_{2},[A_{1},A_{2}]]$. Here, first we write $\alpha_{2}$ as, \begin{equation} \alpha_{2} = \Theta_{1}-\Theta_{2} \end{equation} where, \begin{equation} \begin{aligned} \Theta_{1} &=\frac{r_{1}}{2} e^{-2 i \theta_{1}}\left(b_{1} b_{2}+b_{1}^{\dagger}b_{2}^{\dagger}\right)\\ \Theta_{2} &=\frac{r_{1}}{2}e^{2 i \theta_{1}}\left(b_{2}^{\dagger} b_{1}^{\dagger}+b_{2} b_{1}\right) \end{aligned}
\end{equation} Using this we have, \begin{equation} \begin{aligned} \left[\alpha_{2},\left[A_{1}, A_{2}\right]\right]&\\ &=\left[\Theta_{1}-\Theta_{2}, P+Q\right]\\ \end{aligned} \end{equation} where the commutators are as follows: \begin{equation} \begin{aligned} \left[\Theta_{1}, P\right]&=\frac{r_{1} f_{1}}{2} e^{-2 i \theta_{1}}\left(b_{1} b_{1}-b_{2}^{\dagger} b_{2}^{\dagger}\right)\\ \left[\Theta_{1}, Q\right]&=\frac{r_{1} f_{2}}{2} e^{-2 i \theta_{1}}\left(b_{2} b_{2}-b_{1}^{\dagger} b_{1}^{\dagger}\right)\\ \left[\Theta_{2}, P\right]&=\frac{r_{1} f_{1}}{2} e^{2 i \theta_{1}}\left(b_{1} b_{1}-b_{2}^{\dagger} b_{2}^{\dagger}\right)\\ \left[\Theta_{2}, Q\right]&=\frac{r_{1} f_{2}}{2} e^{2 i \theta_{1}}\left(b_{2} b_{2}-b_{1}^{\dagger}b_{1}^{\dagger}\right)\\ \end{aligned} \end{equation} The second part of the commutator $\left[A_{1},\left[A_{1}, A_{2}\right]\right]$ becomes: \begin{equation} \begin{aligned} &\left[\alpha_{2},\left[A_{1},A_{2}\right]\right]=\\ &\frac{r_{1}}{2}\left[e^{2 i \theta_{1}}\left\{f_{2}\left(b_{1}^{\dagger}b_{1}^{\dagger}-b_{2} b_{2}\right) -f_{1}\left(b_{1} b_{1}-b_{2}^{\dagger} b_{2}^{\dagger}\right)\right\}\right.
\\ &~~~~~~~~~~~~\left.+e^{-2 i \theta_{1}}\left\{f_{1}\left(b_{1} b_{1}-b_{2}^{\dagger}b_{2}^{\dagger}\right) +f_{2}\left(b_{2} b_{2}-b_{1}^{\dagger} b_{1}^{\dagger}\right)\right\}\right] \end{aligned} \end{equation} For the third and last commutator of our truncated BCH expansion, we need to calculate $\left[A_{2},\left[A_{1}, A_{2}\right]\right]$; recall from Eq\eqref{A1A2} that $A_{2}=\beta_{1}+\beta_{2}$, where we now decompose $\beta_1$ and $\beta_2$ as: \begin{equation} \begin{gathered} \beta_{1}=M_{1}-M_{2} \\ \beta_{2}=N_{1}-N_{2} \\ \end{gathered} \label{eq44} \end{equation} Hence, from Eq\eqref{eq22} and Eq\eqref{eq44}, $M_{1,2}$ and $N_{1,2}$ become: \begin{equation} \begin{gathered} M_{1}=\frac{r_{2}}{2} e^{-2 i \theta_{2}} b_{2}^{2}, \quad N_{1}=\frac{r_{2}}{2}\left(e^{-2 i \theta_{2}}\left(b_{1} b_{2}+b_{1}^{\dagger} b_{2}^{\dagger}\right)\right) \\ M_{2}=\frac{r_{2}}{2} e^{2 i \theta_{2}} b_{2}^{\dagger 2}, \quad N_{2}=\frac{r_{2}}{2}\left(e^{2 i \theta_{2}}\left(b_{2}^{\dagger} b_{1}^{\dagger}+b_{2} b_{1}\right)\right) \end{gathered} \end{equation} With the commutation relations in Eq\eqref{eq46}, we can write the first part of the $\left[A_{2},\left[A_{1}, A_{2}\right]\right]$ commutator.
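Operator identities of this type, e.g. the expression for $[\Theta_{1},P]$ above, can be spot-checked numerically in a truncated two-mode Fock space; the truncation is harmless for matrix elements between low-lying states. A minimal sketch (the cutoff, the parameter values and the stand-in value for $f_{1}$ are arbitrary):

```python
import numpy as np

# Truncated annihilation operator for one mode; two modes via Kronecker products
N = 12
a = np.diag(np.sqrt(np.arange(1, N)), 1)
I = np.eye(N)
b1, b2 = np.kron(a, I), np.kron(I, a)
dag = lambda M: M.conj().T
comm = lambda A, B: A @ B - B @ A

r1, th1, f1 = 0.3, 0.7, 1.2 - 0.4j     # arbitrary test values for r1, theta1, f1
Theta1 = (r1 / 2) * np.exp(-2j * th1) * (b1 @ b2 + dag(b1) @ dag(b2))
P = f1 * b1 @ dag(b2)

lhs = comm(Theta1, P)
rhs = (r1 * f1 / 2) * np.exp(-2j * th1) * (b1 @ b1 - dag(b2) @ dag(b2))

# Truncation only corrupts matrix elements near the cutoff, so compare the
# identity on states with low occupation numbers in both modes
keep = [n1 * N + n2 for n1 in range(N - 3) for n2 in range(N - 3)]
err = np.max(np.abs((lhs - rhs)[np.ix_(keep, keep)]))
```

For quadratic operators the intermediate states reach at most two quanta above the bra/ket, so restricting both indices to the low-excitation block makes the comparison exact up to floating-point rounding.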
\begin{equation} \begin{gathered} \left[M_{2}, P\right]=0 \quad \left[M_{1}, P\right]=r_{2} f_{1} e^{-2 i \theta_{2}} b_{1} b_{2} \\ \left[M_{1}, Q\right]=0 \quad \left[M_{2}, Q\right]=-r_{2} f_{2} e^{2 i \theta_{2}} b_{1}^{\dagger} b_{2}^{\dagger} \end{gathered} \label{eq46} \end{equation} This first part is given as follows: \begin{equation} \begin{aligned} \left[\beta_{1},\left[A_{1}, A_{2}\right]\right]&=\left[\beta_{1}, P\right]+\left[\beta_{1}, Q\right] \\ &=r_{2}\left(f_{1} e^{-2 i \theta_{2}} b_{1} b_{2}+f_{2} e^{2 i \theta_{2}} b_{1}^{\dagger} b_{2}^{\dagger}\right) \end{aligned} \end{equation} Similarly, we calculate the second term of the commutator $\left[A_{2},\left[A_{1}, A_{2}\right]\right]$ using the commutation results given in Eq\eqref{eq48} below, \begin{equation} \begin{aligned} &\left[N_{1}, P\right]=\frac{r_{2} f_{1}}{2} e^{-2 i \theta_{2}}\left(b_{1} b_{1}-b_{2}^{\dagger} b_{2}^{\dagger}\right)\\ &\left[N_{1}, Q\right]=\frac{r_{2} f_{2}}{2} e^{-2 i \theta_{2}}\left(b_{2} b_{2}-b_{1}^{\dagger} b_{1}^{\dagger}\right)\\ &\left[N_{2}, P\right]=\frac{r_{2} f_{1}}{2} e^{2 i \theta_{2}}\left(b_{1} b_{1}-b_{2}^{\dagger} b_{2}^{\dagger}\right)\\ &\left[N_{2}, Q\right]=\frac{r_{2} f_{2}}{2} e^{2 i \theta_{2}}\left(b_{2} b_{2}-b_{1}^{\dagger} b_{1}^{\dagger}\right)\\ \end{aligned} \label{eq48} \end{equation} Finally we get the second term of the $\left[A_{2},\left[A_{1}, A_{2}\right]\right]$ commutator: \begin{equation} \begin{aligned} &\left[\beta_{2},\left[A_{1}, A_{2}\right]\right]=\left[\beta_{2}, P+Q\right]\\ &=\left[N_{1}, P\right]-\left[N_{2}, P\right]+\left[N_{1}, Q\right]-\left[N_{2}, Q\right]\\ &= i~r_{2} \sin \left(2 \theta_{2}\right)\left[f_{2}\left(b_{1}^{\dagger}b_{1}^{\dagger}-b_{2}b_{2}\right)\right.\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\left.-f_{1}\left(b_{1} b_{1}-b_{2}^{\dagger}b_{2}^{\dagger}\right)\right] \end{aligned} \end{equation} After calculating the terms in Eq\eqref{eq25}, we now rewrite the combination of
$\left[A_{1},\left[A_{1}, A_{2}\right]\right]$ and $\left[A_{2},\left[A_{1}, A_{2}\right]\right]$ commutators in terms of $X_1$, $X_2$, $X_3$ and $X_4$. \begin{equation} \begin{aligned} &{\left[A_{1},\left[A_{1}, A_{2}\right]\right]-\left[A_{2},\left[A_{1}, A_{2}\right]\right]} \\ &=X_{1} b_{1} b_{2}+X_{2} b_{1}^{\dagger} b_{2}^{\dagger} +X_{3}\left(b_{1}^{\dagger} b_{1}^{\dagger}-b_{2} b_{2}\right) \\ &~~~+X_{4}\left(b_{1} b_{1}-b_{2}^{\dagger} b_{2}^{\dagger}\right) \\ \end{aligned} \end{equation} where $X_1$, $X_2$, $X_3$ and $X_4$ are given below: \begin{equation} \begin{aligned} &X_{1}=\left(r_{1} f_{2} e^{-2 i \theta_{1}}-r_{2} f_{1} e^{-2 i \theta_{2}}\right) \\ &X_{2}=\left(r_{1} f_{1} e^{2 i \theta_{1}}-r_{2} f_{2} e^{2 i \theta_{2}}\right) \\ &X_{3}=i f_{2}\left(r_{1} \sin \left(2 \theta_{1}\right)-r_{2} \sin \left(2 \theta_{2}\right)\right) \\ &X_{4}=i f_{1}\left(r_{2} \sin \left(2 \theta_{2}\right)-r_{1} \sin \left(2 \theta_{1}\right)\right) \\ \end{aligned} \end{equation} Note that the ratio of $X_3$ and $X_4$ satisfies the following relation, given in Eq\eqref{eq52}: \begin{equation} \frac{X_{3}}{X_{4}}=-\frac{f_{2}}{f_{1}} \label{eq52} \end{equation} To get the full squeezed operator for our case, whose form is mentioned in Eq\eqref{eq18}, we multiply the usual free squeezed operators of both fields with the BCH expansion given in Eq\eqref{eq24}. The total squeezed operator for two scalar fields in the de Sitter background space after applying the Baker–Campbell–Hausdorff formula becomes: \begin{equation} S_{\text{Full}}\left(r_{1}, \theta_{1}, r_{2}, \theta_{2}\right)=e^{\left(M_{1}+M_{2}+M_{3}\right)} \end{equation} where $\exp M_1$ and $\exp M_2$ are the usual free squeezed operators (the exponents $M_{1}$, $M_{2}$, $M_{3}$ here should not be confused with the operators $M_{1}$, $M_{2}$ defined below Eq\eqref{eq44}), given as: \begin{equation} \begin{aligned} \exp M_{1}&=\underbrace{\exp \left[\frac{r_{1}}{2}\left(e^{-2 i \theta_{1}} b_{1}^{2}-e^{2 i \theta_{1}} b_{1}^{\dagger 2}\right)\right]}_{\textcolor{blue}{\rm for~``1''}}\\ \exp M_{2}&=\underbrace{\exp \left[\frac{r_{2}}{2}\left(e^{-2 i \theta_{2}} b_{2}^{2}-e^{2 i \theta_{2}} b_{2}^{\dagger 2}\right)\right]}_{\textcolor{blue}{\rm for~``2''}}\\ \end{aligned} \end{equation} and $\exp M_3$ is the operator term coming from the interaction between the two fields. We introduce the coupling constant $K$ as an overall factor in the exponential, which will help us to carry out the further analysis perturbatively in the limit $K\ll 1$. \begin{equation} \begin{aligned} \exp {KM_{3}}&=\exp {K(B_{1}+B_{2}+B_{3}+B_{4}+B_{5}+B_{6})} \end{aligned} \end{equation} The various $B$ terms in the exponential of $M_3$ are: \begin{equation} \begin{aligned} &B_{1}=\frac{r_{1}}{2} \left[e^{-2 i \theta_{1}}\left(b_{1} b_{2}+b_{1}^{\dagger} b_{2}^{\dagger}\right) -e^{2 i \theta_{1}}\left(b_{2}^{\dagger} b_{1}^{\dagger}+b_{2} b_{1}\right)\right] \\ &B_{2}=\frac{r_{2}}{2}\left[e^{-2 i \theta_{2}}\left(b_{1} b_{2}+b_{1}^{\dagger} b_{2}^{\dagger}\right)-e^{2 i \theta_{2}}\left(b_{2}^{\dagger} b_{1}^{\dagger}+b_{2} b_{1}\right)\right] \\ &B_{3}= \left[f_{1}\left(r_{1}, r_{2}, \theta_{1}, \theta_{2}\right) b_{1} b_{2}^{\dagger}+f_{2}\left(r_{1}, r_{2}, \theta_{1}, \theta_{2}\right) b_{1}^{\dagger} b_{2}\right]\\ &B_{4}=\frac{1}{12}\left[X_{1}\left(r_{1}, r_{2}, \theta_{1}, \theta_{2}\right) b_{1} b_{2}+X_{2}\left(r_{1}, r_{2}, \theta_{1}, \theta_{2}\right) b_{1}^{\dagger} b_{2}^{\dagger}\right]\\ &B_{5}=X_{3}\left(r_{1}, r_{2}, \theta_{1}, \theta_{2}\right)\left(b_{1}^{\dagger} b_{1}^{\dagger}-b_{2} b_{2}\right) \\ &B_{6}=X_{4}\left(r_{1}, r_{2}, \theta_{1}, \theta_{2}\right)\left(b_{1} b_{1}-b_{2}^{\dagger} b_{2}^{\dagger}\right)\\ \end{aligned} \end{equation} From the next section we will change the notation for the squeezing parameters and replace $r_1\xrightarrow{}R_1$,
$\phi_1\xrightarrow{}\Phi_1$, $\theta_1\xrightarrow{}\Theta_1$ and similarly for the $r_2,~\phi_2,~\theta_2$ parameters. \section{\textcolor{Sepia}{\textbf{ \Large Calculation for unitary evolution}}}\label{LCDE} To understand the dynamics of the two scalar fields with interaction in the de Sitter background space, we construct the most generic evolution operator $\mathcal{U}(\eta_{1},\eta_{2})$, which is the product of the total squeezed operator and the total rotation operator for both fields. The total rotation operator is defined as follows: \begin{equation} \begin{aligned} \mathcal{R}_{\text{Total}}(\Phi_{1}, \Phi_{2})= \mathcal{R}_{1}(\Phi_{1}) \mathcal{R}_{2}(\Phi_{2}) \end{aligned} \end{equation} \begin{equation}\label{unievo} \begin{aligned} \mathcal{U}(\eta_{1},\eta_{2})&=\\ &S_{1}\left(R_{1}, \Theta_{1}\right) S_{2}\left(R_{2}, \Theta_{2}\right) S_{12}\left(R_{1},R_{2}, \Theta_{1}, \Theta_{2}\right)\times\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\mathcal{R}_{1}(\Phi_{1}) \mathcal{R}_{2}(\Phi_{2}) \end{aligned} \end{equation} In the Heisenberg picture the operators $\hat{v}_{1}(\mathbf{x}, \eta)$, $\hat{\pi}_{1}(\mathbf{x}, \eta)$, $\hat{v}_{2}(\mathbf{x}, \eta)$ and $\hat{\pi}_{2}(\mathbf{x}, \eta)$ can be written as follows \begin{equation} \begin{aligned} &\hat{v}_{1}(\mathbf{x}, \eta)=\mathcal{U}^{\dagger}\left(\eta, \eta_{0}\right) \hat{v}_{1}\left(\mathbf{x}, \eta_{0}\right) \mathcal{U}\left(\eta, \eta_{0}\right)\\ &=\int \frac{d^{3} k}{(2 \pi)^{3 / 2}} e^{i \mathbf{k} \cdot \mathbf{x}}\left(u_{1,\mathbf{k}}^{*}\left(\eta\right) b_{1,\mathbf{k}}+u_{1, -\mathbf{k}}\left(\eta\right) b_{1,-\mathbf{k}}^{\dagger}\right) \\ &\hat{\pi}_{1}(\mathbf{x}, \eta)=\mathcal{U}^{\dagger}\left(\eta, \eta_{0}\right) \hat{\pi}_{1}\left(\mathbf{x}, \eta_{0}\right) \mathcal{U}\left(\eta, \eta_{0}\right)\\ &=\int \frac{d^{3} k}{(2 \pi)^{3 / 2}} e^{i \mathbf{k} \cdot \mathbf{x}}\left(w_{1,\mathbf{k}}^{*}\left(\eta\right) b_{1,\mathbf{k}}+w_{1,
-\mathbf{k}}\left(\eta\right) b_{1,-\mathbf{k}}^{\dagger}\right)\\ &\hat{v}_{2}(\mathbf{x}, \eta)=\mathcal{U}^{\dagger}\left(\eta, \eta_{0}\right) \hat{v}_{2}\left(\mathbf{x}, \eta_{0}\right) \mathcal{U}\left(\eta, \eta_{0}\right)\\ &=\int \frac{d^{3} k}{(2 \pi)^{3 / 2}} e^{i \mathbf{k} \cdot \mathbf{x}}\left(u_{2,\mathbf{k}}^{*}\left(\eta\right) b_{2,\mathbf{k}}+u_{2, -\mathbf{k}}\left(\eta\right) b_{2,-\mathbf{k}}^{\dagger}\right) \\ &\hat{\pi}_{2}(\mathbf{x}, \eta)=\mathcal{U}^{\dagger}\left(\eta, \eta_{0}\right) \hat{\pi}_{2}\left(\mathbf{x}, \eta_{0}\right) \mathcal{U}\left(\eta, \eta_{0}\right)\\ &=\int \frac{d^{3} k}{(2 \pi)^{3 / 2}} e^{i \mathbf{k} \cdot \mathbf{x}}\left(w_{2,\mathbf{k}}^{*}\left(\eta\right) b_{2,\mathbf{k}}+w_{2, -\mathbf{k}}\left(\eta\right) b_{2,-\mathbf{k}}^{\dagger}\right) \end{aligned} \label{mo1} \end{equation} In the Schr\"odinger representation the position and momentum operators $v_{1,\textbf{k}}$, $\pi_{1,\textbf{k}}$ and $v_{2,\textbf{k}}$, $\pi_{2,\textbf{k}}$ for both scalar fields can be written in terms of the annihilation and creation operators in the following manner: \begin{equation} \begin{aligned} &v_{1,\textbf{k}}\left(\eta\right)=\frac{1}{\sqrt{2 k_{1}}}\left(b_{1,\textbf{k}}\left(\eta\right)+b_{1,-\textbf{k}}^{\dagger}\left(\eta\right)\right)\\ &\pi_{1,\textbf{k}}\left(\eta\right)=i \sqrt{\frac{k_{1}}{2}}\left(b_{1,\textbf{k}}\left(\eta\right)-b_{1,-\textbf{k}}^{\dagger}\left(\eta\right)\right) \\ &v_{2,\textbf{k}}\left(\eta\right)=\frac{1}{\sqrt{2 k_{2}}}\left(b_{2,\textbf{k}}\left(\eta\right)+b_{2,-\textbf{k}}^{\dagger}\left(\eta\right)\right)\\ &\pi_{2,\textbf{k}}\left(\eta\right)=i \sqrt{\frac{k_{2}}{2}}\left(b_{2,\textbf{k}}\left(\eta\right)-b_{2,-\textbf{k}}^{\dagger}\left(\eta\right)\right) \end{aligned} \label{60eq} \end{equation} where $b_{1,\textbf{k}}\left(\eta\right)$, $b_{1,-\textbf{k}}^{\dagger}(\eta)$, $b_{2,\textbf{k}}(\eta)$ and $b_{2,-\textbf{k}}^{\dagger}(\eta)$ are the annihilation and creation operators in the time-dependent
Heisenberg representation for the two coupled scalar fields in the de Sitter background space. Using the factorized representation of the unitary time evolution operator introduced in Eq\eqref{unievo}, the expression for the annihilation operator for field 1 can be written at any arbitrary time scale as: \begin{widetext} \begin{equation}\label{eq63e} \begin{aligned} b_{1}(\eta) &\equiv \mathcal{U}^{\dagger}\left(\eta, \eta_{0}\right) b_{1} \mathcal{U}\left(\eta, \eta_{0}\right)\\ &={\mathcal{R}}^{\dagger}_{1}(\Phi_{1}) \mathcal{R}^{\dagger}_{2}(\Phi_{2}) S_{12}^{\dagger}\left(R_{1},R_{2}, \Theta_{1}, \Theta_{2}\right) S_{2}^{\dagger}\left(R_{2}, \Theta_{2}\right) S_{1}^{\dagger}\left(R_{1}, \Theta_{1}\right) b_{1} S_{1}\left(R_{1}, \Theta_{1}\right) S_{2}\left(R_{2}, \Theta_{2}\right) S_{12}\left(R_{1},R_{2}, \Theta_{1}, \Theta_{2}\right)\\ &\times \mathcal{R}_{1}(\Phi_{1})\mathcal{R}_{2}(\Phi_{2})\\ &={\mathcal{R}}^{\dagger}_{1}(\Phi_{1}) \mathcal{R}^{\dagger}_{2}(\Phi_{2})(\left(\cosh R_{1}\right) S_{12}^{\dagger} b_{1} S_{12}-\left(\sinh R_{1}\right) e^{2 i \Theta_{1}} S_{12}^{\dagger} b_{1}^{\dagger} S_{12}){\mathcal{R}}_{1}(\Phi_{1}) \mathcal{R}_{2}(\Phi_{2}) \end{aligned} \end{equation} \end{widetext} Using the Baker–Campbell–Hausdorff formula up to linear order in the coupling constant $K$, the annihilation operator $b_{1}$ becomes Eq\eqref{b1evo}. The total expression is the sum of the free part, which is the usual time dependent expression for the annihilation operator on applying the squeezed operator (without the interaction term), and extra terms with a factor of $K$, arising due to the interaction between both fields. For the creation operator $b^{\dagger}_1$ of field 1, one can take the conjugate of Eq\eqref{eq63e} or \eqref{b1evo}.
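The single-mode squeeze conjugation rule used in Eq\eqref{eq63e}, $S^{\dagger}(r,\theta)\, b\, S(r,\theta)=\cosh r\, b - e^{2 i \theta}\sinh r\, b^{\dagger}$, can be checked numerically in a truncated Fock space; the cutoff and parameter values below are arbitrary, and truncation artifacts are confined to high occupation numbers:

```python
import numpy as np
from scipy.linalg import expm

# Truncated Fock-space check of S^dagger(r, theta) b S(r, theta)
N = 60                                  # cutoff; arbitrary but large
a = np.diag(np.sqrt(np.arange(1, N)), 1)
dag = lambda M: M.conj().T

r, th = 0.3, 0.9                        # arbitrary squeeze parameters
S = expm((r / 2) * (np.exp(-2j * th) * a @ a
                    - np.exp(2j * th) * dag(a) @ dag(a)))

lhs = dag(S) @ a @ S
rhs = np.cosh(r) * a - np.exp(2j * th) * np.sinh(r) * dag(a)

# Compare only the low-lying block, far from the cutoff
err = np.max(np.abs((lhs - rhs)[:8, :8]))
```

Since squeeze-operator matrix elements decay like $(\tanh r)^{|m-n|/2}$, the low-lying block agrees with the analytic rule to numerical precision well before the cutoff is felt.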
\begin{widetext} \begin{equation}\label{b1evo} \begin{aligned} b_{1}(\eta) &=\left(\cosh R_{1}e^{-i\Phi_{1}} b_{1}-e^{i(\Phi_1+2 \Theta_{1})} \sinh R_{1} b_{1}^{\dagger}\right)+K\bigg\{\Big(-\cosh R_{1}\left(i \left(R_{1} \sin 2 \Theta_{1}+R_{2} \sin 2 \Theta_{2}\right)\right)-\frac{X_{2}}{12}\\ &-e^{2 i \Theta_{1}}\left(\sinh R_{1}\right)\left(f_{1}\right)\Big) e^{i\Phi_{2}} b_{2}^{\dagger}+ \left[\left(\cosh R_{1}\right)f_{2}+e^{2 i \Theta_{1}}\left(i\left(R_{1} \sin 2 \Theta_{1}+R_{2} \sin 2 \Theta_{2}\right)\right)-\frac{X_{1}}{12}\right](\sinh R_{1})e^{-i\Phi_{2}} b_{2}\\ &+\left[\frac{\cosh R_{1}}{2}\left(R_{1} \sin 2 \Theta_{1}+R_{2} \sin 2 \Theta_{2}\right)^{2}-\sinh R_{1} e^{2i \Theta_{1}}\left(\frac{X_{4}}{6}\right)\right] e^{-i\Phi_{1}}b_{1}+\left[\cosh R_{1}\left(\frac{X_{3}}{6}\right)-\frac{e^{2 i \Theta_{1}}}{2} \sinh R_{1}\left(R_{1} \sin 2\Theta_{1}+R_{2} \sin 2 \Theta_{2}\right)^{2}\right] e^{i\Phi_{1}}b_{1}^{\dagger}\bigg\}+\ldots \end{aligned} \end{equation} \end{widetext} Now for the operator $b_2$ we apply the same factorized representation of the unitary time evolution operator mentioned in Eq\eqref{unievo}; the expression for the annihilation operator for field 2 can be written at any arbitrary time scale as Eq\eqref{65eqe}. For the creation operator $b^{\dagger}_2$ of field 2, one can take the conjugate of Eq\eqref{65eqe} or \eqref{b2evo}.
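The coefficient functions $f_{1}$, $f_{2}$ and the ratio $X_{3}/X_{4}$ of Eq\eqref{eq52}, which enter the evolved operators, can be verified symbolically. A quick sketch using sympy (the symbols $g_{1}$, $g_{2}$ stand in for $f_{1}$, $f_{2}$ in the ratio check; all names are local to this sketch):

```python
import sympy as sp

th1, th2 = sp.symbols('theta_1 theta_2', real=True)
r1, r2 = sp.symbols('r_1 r_2', positive=True)

f1 = 2*sp.cos(2*(th1 + th2)) - 2*sp.exp(-2*sp.I*(th1 - th2))
f2 = sp.exp(2*sp.I*(th1 + th2)) - sp.exp(-2*sp.I*(th1 + th2))

# Real/imaginary decomposition of f1 and the closed form of f2
assert sp.simplify(sp.re(f1) - 2*(sp.cos(2*(th1 + th2)) - sp.cos(2*(th1 - th2)))) == 0
assert sp.simplify(sp.im(f1) - 2*sp.sin(2*(th1 - th2))) == 0
assert sp.simplify(sp.expand_complex(f2 - 2*sp.I*sp.sin(2*(th1 + th2)))) == 0

# Ratio X3/X4 = -f2/f1; g1, g2 stand in for f1, f2
g1, g2 = sp.symbols('g_1 g_2', nonzero=True)
X3 = sp.I*g2*(r1*sp.sin(2*th1) - r2*sp.sin(2*th2))
X4 = sp.I*g1*(r2*sp.sin(2*th2) - r1*sp.sin(2*th1))
ratio_check = sp.simplify(X3/X4 + g2/g1)
```

The common factor $r_{1}\sin 2\theta_{1}-r_{2}\sin 2\theta_{2}$ cancels in the ratio, leaving exactly $-g_{2}/g_{1}$, as stated in Eq\eqref{eq52}.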
\begin{widetext} \begin{equation}\label{65eqe} \begin{aligned} b_{2}(\eta) &\equiv \mathcal{U}^{\dagger}\left(\eta, \eta_{0}\right) b_{2} \mathcal{U}\left(\eta, \eta_{0}\right)\\ &={\mathcal{R}}^{\dagger}_{1}(\Phi_{1}) \mathcal{R}^{\dagger}_{2}(\Phi_{2})S_{12}^{\dagger}\left(R_{1},R_{2}, \Theta_{1}, \Theta_{2}\right) S_{2}^{\dagger}\left(R_{2}, \Theta_{2}\right) S_{1}^{\dagger}\left(R_{1}, \Theta_{1}\right) b_{2} S_{1}\left(R_{1}, \Theta_{1}\right) S_{2}\left(R_{2}, \Theta_{2}\right) S_{12}\left(R_{1},R_{2}, \Theta_{1}, \Theta_{2}\right)\times\\&{\mathcal{R}}_{1}(\Phi_{1}) \mathcal{R}_{2}(\Phi_{2})={\mathcal{R}}^{\dagger}_{1}(\Phi_{1}) \mathcal{R}^{\dagger}_{2}(\Phi_{2})(\left(\cosh R_{2}\right) S_{12}^{\dagger} b_{2} S_{12}-\left(\sinh R_{2}\right) e^{2 i \Theta_{2}} S_{12}^{\dagger} b_{2}^{\dagger} S_{12}){\mathcal{R}}_{1}(\Phi_{1}) \mathcal{R}_{2}(\Phi_{2}) \end{aligned} \end{equation} \end{widetext} Using the same Baker–Campbell–Hausdorff formula up to linear order in the coupling constant $K$, the annihilation operator $b_{2}$ becomes Eq\eqref{b2evo}. Here also we see that the total expression is the sum of the free part, which is the usual time dependent expression for the annihilation operator on applying the squeezed operator, and the terms with a factor of $K$, coming due to the presence of the $S\left(R_{1}, \Theta_{1}, R_{2}, \Theta_{2}\right)$ operator, which accounts for the interaction between the two fields. \begin{widetext} \begin{equation}\label{b2evo} \begin{aligned} b_{2}(\eta) &=\left(\cosh R_{2}e^{-i\Phi_{2}} b_{2}-e^{i(\Phi_{2}+2\Theta_{2})} \sinh R_{2} b_{2}^{\dagger}\right)+K\Big(\Big\{-i \cosh R_{2}\left(R_{1} \sin 2 \Theta_{1}+R_{2} \sin 2 \Theta_{2}\right)-\cosh R_{2}\left(\frac{X_{2}}{12}\right)\\&-e^{2 i \Theta_{2}} \sinh R_{2}\left(f_{2}\right)\Big\} e^{i\Phi_{1}} b_{1}^{\dagger}+\Big(f_{1} \cosh R_{2}+e^{2 i \Theta_{2}} \sinh R_{2}\left(-i\left(R_{1} \sin 2 \Theta_{1}+R_{2} \sin 2 \Theta_{2}\right)\right.
\left.+\frac{X_{1}}{12}\right)\Big) e^{-i\Phi_{1}} b_{1}+ \\ &\left\{\left(R_{1} \sin 2 \Theta_{1}+R_{2} \sin 2 \Theta_{2}\right)^{2} \frac{\cosh R_{2}}{2}+e^{2 i \Theta_{2}} \sinh R_{2}\left(\frac{X_{3}}{6}\right)\right\} e^{-i\Phi_{2}} b_{2}+\Big\{-\left(R_{1} \sin 2 \Theta_{1}+R_{2} \sin 2 \Theta_{2}\right)^{2}\sinh R_{2}\times \\ &\left(\frac{e^{2 i \Theta_{2}}}{2}\right)-\frac{X_{4}}{6} \cosh R_{2}\Big\} e^{i\Phi_{2}} b_{2}^{\dagger}\Big)+\ldots \end{aligned} \end{equation} \end{widetext} \section{\textcolor{Sepia}{\textbf{ \Large Evolution equations}}} \label{eqdiff1} After getting the time evolution of the operators $b_1$ and $b_2$, we compute the differential equations for the mode functions $u_{1,\bf{k}}$, $u_{2,\bf{k}}$, $w_{1,\bf{k}}$ and $w_{2,\bf{k}}$ of the two coupled scalar fields in the de Sitter background space, because this is a crucial step for obtaining the evolution equations for the squeeze factors $R_{1,\bf{k}},R_{2,\bf{k}}$, squeeze phases $\Theta_{1,\bf{k}},\Theta_{2,\bf{k}}$ and squeeze angles $\Phi_{1,\bf{k}},\Phi_{2,\bf{k}}$. For calculating the differential equations we apply the Heisenberg equation of motion Eq\eqref{heis} for the different position and momentum operators. Using the Heisenberg equation of motion we find that the mode functions $u_{1,\bf{k}}$, $u_{2,\bf{k}}$, $w_{1,\bf{k}}$ and $w_{2,\bf{k}}$ satisfy the Hamilton equations Eq\eqref{modediff}. This set of four differential equations constitutes the classical equations of motion. There are two pairs, one for each scalar field; the two pairs are symmetric with each other (they differ only in the indexing variable) but are coupled differential equations with coupling constant $K$. These equations tell us the dynamics of the position and momentum variables of the classical theory of two interacting scalar fields given by the action in Eq\eqref{acti}.
\begin{equation} \label{heis} \frac{d}{d t} \mathcal{O}_{\mathrm{i}}=i\left[\mathcal{H}, \mathcal{O}_{\mathrm{i}}\right] \end{equation} where $\mathcal{O}_{\mathrm{i}} \in \{v_{1}, \pi_{1}, v_{2}, \pi_{2}\}$. \begin{equation} \begin{aligned} u_{1,\bf{k}}^{\prime}=&~~ w_{1,\bf{k}}-\left(\frac{P_{a}^{2}}{2 a^{2}}\right)\left(3 u_{1,\bf{k}}+K u_{2,\bf{k}}\right) . \\ u_{2,\bf{k}}^{\prime}=&~~ w_{2,\bf{k}}-\left(\frac{P_{a}^{2}}{2 a^{2}}\right)\left(3 u_{2,\bf{k}}+K u_{1,\bf{k}}\right) . \\ w_{1,\bf{k}}^{\prime}=&~~-\left(m_{1}^{2}+\frac{K^{2} P_{a}^{2}}{4 a^{4}}\right) u_{1,\bf{k}}-\left(\frac{P_{a}^{2}}{2 a^{2}}\right)\left(3 w_{1,\bf{k}}+K w_{2,\bf{k}}\right) \\ &+\left(\frac{K P_{a}^{2}}{2 a^{4}}\right) u_{2,\bf{k}} .\\ w_{2,\bf{k}}^{\prime}=&~~-\left(m_{2}^{2}+\frac{K^{2} P_{a}^{2}}{4 a^{4}}\right) u_{2,\bf{k}}-\left(\frac{P_{a}^{2}}{2 a^{2}}\right)\left(3 w_{2,\bf{k}}+K w_{1,\bf{k}}\right) \\ &+\left(\frac{K P_{a}^{2}}{2 a^{4}}\right) u_{1,\bf{k}} . \end{aligned} \label{modediff} \end{equation} We can notice that if we switch off the interaction between the two fields by setting $K=0$, we get two independent scalar-field sectors, which are symmetric and decoupled from each other. Having these mode-function equations is very useful because from them one can calculate the equations of motion for $R$, $\Theta$ and $\Phi$ for both interacting fields. Before moving further we give a summary of the various calculations done so far. We have quantized the Hamiltonian of the two coupled scalar fields in the de Sitter background space, Eq\eqref{hamilto}, and we have shown how the quantized Hamiltonian can be mapped to the two-mode coupled harmonic oscillator.
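The decoupling at $K=0$ noted above can be illustrated by integrating Eq\eqref{modediff} numerically. The sketch below uses placeholder profiles for $a(\eta)$ and $P_{a}(\eta)$ and toy real initial data (the mode functions are in general complex); all values are illustrative, not the paper's exact inputs:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder profiles: de Sitter scale factor in conformal time (eta < 0)
# and a constant stand-in for P_a(eta); illustrative choices only.
H = 1.0
a_of = lambda eta: -1.0 / (H * eta)
Pa_of = lambda eta: 1.0

m1, m2 = 1.0, 1.5                        # toy masses

def rhs(eta, y, K):
    u1, u2, w1, w2 = y
    a, Pa = a_of(eta), Pa_of(eta)
    c = Pa**2 / (2 * a**2)               # common damping coefficient
    d = K * Pa**2 / (2 * a**4)           # cross-coupling coefficient
    du1 = w1 - c * (3 * u1 + K * u2)
    du2 = w2 - c * (3 * u2 + K * u1)
    dw1 = -(m1**2 + K**2 * Pa**2 / (4 * a**4)) * u1 - c * (3 * w1 + K * w2) + d * u2
    dw2 = -(m2**2 + K**2 * Pa**2 / (4 * a**4)) * u2 - c * (3 * w2 + K * w1) + d * u1
    return [du1, du2, dw1, dw2]

# Start field 2 at rest: with K = 0 its sector must remain identically zero
y0 = [1.0, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, (-2.0, -0.5), y0, args=(0.0,), rtol=1e-9, atol=1e-12)
u2_final = abs(sol.y[1, -1])             # stays at zero: sectors decouple
```

With $K=0$ the subspace $u_{2}=w_{2}=0$ is invariant, so field 2 never gets excited; switching on a small $K$ in `args` sources the second sector at first order in the coupling.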
We calculated the four-mode squeezed operator for this quantized Hamiltonian and, using it together with the total rotation operator, we defined the evolution operator, which has $R_{1,\bf{k}}$, $\Theta_{1,\bf{k}}$, $\Phi_{1,\bf{k}}$, $R_{2,\bf{k}}$, $\Theta_{2,\bf{k}}$ and $\Phi_{2,\bf{k}}$ as functional variables and which governs the properties of the vacuum state in the present cosmological setup. Using the expansion of the position and momentum operators in terms of annihilation and creation operators in the Heisenberg representation, we calculated the coupled differential equations for the mode functions. From Eq.~\eqref{mo1} we obtain the time-dependent position and momentum operators, where we have used the Heisenberg representation of $b_{1,\bf{k}}$ and $b_{2,\bf{k}}$. Eq.~\eqref{69eqt} contains the time-dependent expressions for the operators $\hat{v}_{1,\bf{k}}$ and $\hat{\pi}_{1,\bf{k}}$ \begin{widetext} \begin{equation} \begin{aligned} \hat{v}_{1,\bf{k}}(\eta)=&\frac{1}{\sqrt{2 k}}\Bigg[ b_{1,\bf{k}}\left(\cosh R_{1,\bf{k}} e^{-i \Phi_{1,\bf{k}}}-\sinh R_{1,\bf{k}} e^{-i\left(\Phi_{1,\bf{k}}+2 \Theta_{1,\bf{k}}\right)}+Ke^{i \Phi_{1,\bf{k}}}\left(-\sinh R_{1,\bf{k}} e^{2 i \Theta_{1,\bf{k}}}\frac{{X_4}}{6}+\cosh R_{1,\bf{k}} \frac{{X_3}^{\dagger}}{6}\right)\right) \\ &\left.+b_{1,-\bf{k}}^{\dagger}\left(\cosh R_{1,\bf{k}} e^{i \Phi_{1,\bf{k}}}-\sinh R_{1,\bf{k}} e^{i\left(\Phi_{1,\bf{k}}+2 \Theta_{1,\bf{k}}\right)}+Ke^{i \Phi_{1,\bf{k}}}\left(-\sinh R_{1,\bf{k}} e^{2 i \Theta_{1,\bf{k}}}\frac{{{X_4}}^{\dagger}}{6}+\cosh R_{1,\bf{k}} \frac{{X_3}}{6}\right)\right)\right]\\&+\cdots\\ \hat{\pi}_{1,\bf{k}}(\eta)=&-i \sqrt{\frac{k}{2}}\Bigg[ b_{1,\bf{k}}\left(\cosh R_{1,\bf{k}} e^{-i \Phi_{1,\bf{k}}}+\sinh R_{1,\bf{k}} e^{-i\left(\Phi_{1,\bf{k}}+2 \Theta_{1,\bf{k}}\right)}-Ke^{i \Phi_{1,\bf{k}}}\left(\sinh R_{1,\bf{k}} e^{2 i \Theta_{1,\bf{k}}}\frac{{X_4}}{6}+\cosh R_{1,\bf{k}} \frac{{X_3}^{\dagger}}{6}\right)\right) \\ 
&\left.+b_{1,-\bf{k}}^{\dagger}\left(\cosh R_{1,\bf{k}} e^{i \Phi_{1,\bf{k}}}+\sinh R_{1,\bf{k}} e^{i\left(\Phi_{1,\bf{k}}+2 \Theta_{1,\bf{k}}\right)}+Ke^{i \Phi_{1,\bf{k}}}\left(-\sinh R_{1,\bf{k}} e^{2 i \Theta_{1,\bf{k}}}\frac{{X_4}^{\dagger}}{6}-\cosh R_{1,\bf{k}} \frac{{X_3}}{6}\right)\right)\right]\\ &+\cdots \end{aligned} \label{69eqt} \end{equation} \end{widetext} Eq.~\eqref{71eq} contains the time-dependent expressions for the operators $\hat{v}_{2,\bf{k}}$ and $\hat{\pi}_{2,\bf{k}}$; the ``$\cdots$'' in Eqs.~\eqref{69eqt} and \eqref{71eq} represent the contributions from the other field, which we neglect because in Eqs.~\eqref{mo1} and \eqref{60eq} the position and momentum operators do not contain the creation and annihilation operators of the other field. \begin{widetext} \begin{equation} \begin{aligned} \hat{v}_{2,\bf{k}}(\eta)=&\frac{1}{\sqrt{2 k}}\Bigg[ b_{2,\bf{k}}\left(\cosh R_{2,\bf{k}} e^{-i \Phi_{2,\bf{k}}}-\sinh R_{2,\bf{k}} e^{-i\left(\Phi_{2,\bf{k}}+2 \Theta_{2,\bf{k}}\right)}+Ke^{i \Phi_{2,\bf{k}}}\left(\sinh R_{2,\bf{k}} e^{2 i \Theta_{2,\bf{k}}}\frac{{X_4}}{6}-\cosh R_{2,\bf{k}} \frac{{X_3}^{\dagger}}{6}\right)\right) \\ &\left.+b_{2,-\bf{k}}^{\dagger}\left(\cosh R_{2,\bf{k}} e^{i \Phi_{2,\bf{k}}}-\sinh R_{2,\bf{k}} e^{i\left(\Phi_{2,\bf{k}}+2 \Theta_{2,\bf{k}}\right)}+Ke^{i \Phi_{2,\bf{k}}}\left(\sinh R_{2,\bf{k}} e^{2 i \Theta_{2,\bf{k}}}\frac{{X_4}^{\dagger}}{6}-\cosh R_{2,\bf{k}} \frac{X_3}{6}\right)\right)\right] \\&+\cdots\\ \hat{\pi}_{2,\bf{k}}(\eta)=&-i \sqrt{\frac{k}{2}}\Bigg[ b_{2,\bf{k}}\left(\cosh R_{2,\bf{k}} e^{-i \Phi_{2,\bf{k}}}+\sinh R_{2,\bf{k}} e^{-i\left(\Phi_{2,\bf{k}}+2 \Theta_{2,\bf{k}}\right)}+Ke^{i \Phi_{2,\bf{k}}}\left(\sinh R_{2,\bf{k}} e^{2 i \Theta_{2,\bf{k}}}\frac{{X_4}}{6}+\cosh R_{2,\bf{k}} \frac{{X_3}^{\dagger}}{6}\right)\right) \\ &\left.+b_{2,-\bf{k}}^{\dagger}\left(\cosh R_{2,\bf{k}} e^{i \Phi_{2,\bf{k}}}+\sinh R_{2,\bf{k}} e^{i\left(\Phi_{2,\bf{k}}+2 \Theta_{2,\bf{k}}\right)}+Ke^{i 
\Phi_{2,\bf{k}}}\left(\sinh R_{2,\bf{k}} e^{2 i \Theta_{2,\bf{k}}}\frac{{X_4}^{\dagger}}{6}+\cosh R_{2,\bf{k}} \frac{X_3}{6}\right)\right)\right] \\&+\cdots \end{aligned} \label{71eq} \end{equation} \end{widetext} On comparing the time-dependent form of the position and momentum operators of both fields, given in Eqs.~\eqref{69eqt} and \eqref{71eq}, with Eq.~\eqref{mo1}, we can identify the mode functions as \onecolumngrid \begin{equation} \begin{aligned} &u_{1,\bf{k}}(\eta)=\frac{1}{\sqrt{2 k}}\left[\cosh R_{1,\bf{k}} e^{i \Phi_{1,\bf{k}}}-\sinh R_{1,\bf{k}} e^{i (\Phi_{1,\bf{k}}+2\Theta_{1,\bf{k}})}+Ke^{i \Phi_{1,\bf{k}}}\left(-\sinh R_{1,\bf{k}} e^{2 i \Theta_{1,\bf{k}}}\frac{{X_4}^{\dagger}}{6}+\cosh R_{1,\bf{k}} \frac{X_3}{6}\right)\right] \\ &w_{1,\bf{k}}(\eta)=i\sqrt{\frac{k}{2}}\left[\cosh R_{1,\bf{k}} e^{i \Phi_{1,\bf{k}}}+\sinh R_{1,\bf{k}} e^{i (\Phi_{1,\bf{k}}+2\Theta_{1,\bf{k}})}+Ke^{i \Phi_{1,\bf{k}}}\left(-\sinh R_{1,\bf{k}} e^{2 i \Theta_{1,\bf{k}}}\frac{{X_4}^{\dagger}}{6}-\cosh R_{1,\bf{k}} \frac{X_3}{6}\right)\right] \\ &u_{2,\bf{k}}(\eta)=\frac{1}{\sqrt{2 k}}\left[\cosh R_{2,\bf{k}} e^{i \Phi_{2,\bf{k}}}-\sinh R_{2,\bf{k}} e^{i (\Phi_{2,\bf{k}}+2\Theta_{2,\bf{k}})}+Ke^{i \Phi_{2,\bf{k}}}\left(\sinh R_{2,\bf{k}} e^{2 i \Theta_{2,\bf{k}}}\frac{{X_4}^{\dagger}}{6}-\cosh R_{2,\bf{k}} \frac{X_3}{6}\right)\right] \\ &w_{2,\bf{k}}(\eta)=i\sqrt{\frac{k}{2}}\left[\cosh R_{2,\bf{k}} e^{i \Phi_{2,\bf{k}}}+\sinh R_{2,\bf{k}} e^{i (\Phi_{2,\bf{k}}+2\Theta_{2,\bf{k}})}+Ke^{i \Phi_{2,\bf{k}}}\left(\sinh R_{2,\bf{k}} e^{2 i \Theta_{2,\bf{k}}}\frac{{X_4}^{\dagger}}{6}+\cosh R_{2,\bf{k}} \frac{X_3}{6}\right)\right] \end{aligned} \end{equation} These equations define the transformation between the variables in the Schr\"{o}dinger representation and the mode functions in the Heisenberg representation. 
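As a sanity check on these identifications, at $K=0$ each pair of mode functions satisfies the canonical Wronskian normalization $u_{i,\bf{k}}\, w_{i,\bf{k}}^{*}-u_{i,\bf{k}}^{*}\, w_{i,\bf{k}}=-i$, which follows from $\cosh^{2}R-\sinh^{2}R=1$. A minimal numerical verification (the parameter values below are arbitrary illustrative choices):

```python
import cmath

def mode_functions(R, Theta, Phi, k):
    """K = 0 part of the identified mode functions (u, w) for one field."""
    A = cmath.cosh(R) * cmath.exp(1j * Phi)
    B = cmath.sinh(R) * cmath.exp(1j * (Phi + 2 * Theta))
    u = (A - B) / cmath.sqrt(2 * k)
    w = 1j * cmath.sqrt(k / 2) * (A + B)
    return u, w

u, w = mode_functions(R=0.7, Theta=0.3, Phi=1.1, k=0.0001)
wronskian = u * w.conjugate() - u.conjugate() * w
assert abs(wronskian + 1j) < 1e-12   # u w* - u* w = -i, independent of R, Theta, Phi
```

The check passes for any real $R$, $\Theta$, $\Phi$, confirming that the squeezing parametrization preserves the canonical normalization at zeroth order in $K$.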
It is now a matter of algebra to show that Hamilton's equations for the mode functions \eqref{modediff} give the equations of motion \begin{equation}\label{76eqt} \begin{aligned} R_{1,\bf{k}}^{\prime} &=\lambda_{1,\bf{k}} \cos 2\left(\varphi_{1,\bf{k}}-\Theta_{1,\bf{k}}\right) + K{Y_1}\\ \Theta_{1,\bf{k}}^{\prime} &=-\Omega_{1,\bf{k}}+\frac{\lambda_{1,\bf{k}}}{2}\left(\tanh R_{1,\bf{k}}+\operatorname{coth} R_{1,\bf{k}}\right) \sin 2\left(\varphi_{1,\bf{k}}-\Theta_{1,\bf{k}}\right) \\ &+ K{Y_2} \\ \Phi_{1,\bf{k}}^{\prime} &=\Omega_{1,\bf{k}}-\lambda_{1,\bf{k}} \tanh R_{1,\bf{k}} \sin 2\left(\varphi_{1,\bf{k}}-\Theta_{1,\bf{k}}\right) + K{Y_3}\\ R_{2,\bf{k}}^{\prime} &=\lambda_{2,\bf{k}} \cos 2\left(\varphi_{2,\bf{k}}-\Theta_{2,\bf{k}}\right) + K{Y_4} \\ \Theta_{2,\bf{k}}^{\prime} &=-\Omega_{2,\bf{k}}+\frac{\lambda_{2,\bf{k}}}{2}\left(\tanh R_{2,\bf{k}}+\operatorname{coth} R_{2,\bf{k}}\right) \sin 2\left(\varphi_{2,\bf{k}}-\Theta_{2,\bf{k}}\right) \\ &+ K{Y_5} \\ \Phi_{2,\bf{k}}^{\prime} &=\Omega_{2,\bf{k}}-\lambda_{2,\bf{k}} \tanh R_{2,\bf{k}} \sin 2\left(\varphi_{2,\bf{k}}-\Theta_{2,\bf{k}}\right) + K{Y_6} \end{aligned} \end{equation} \twocolumngrid where the $Y_i$ are the contributions coming from the interaction between the two scalar fields; they are given in Appendix~\ref{sec:appendixA}. Here we have considered the effects of the perturbation terms only up to $\mathcal{O}(R^{2})$. Moreover, \begin{align} &\Omega_{i,\bf{k}}=\frac{k}{2}\left(1+c_{si}^{2}\right) \\ &\lambda_{i,\bf{k}}=\left[\left(\frac{k}{2}\left(1-c_{si}^{2}\right)\right)^{2}+\left(\frac{3 P_{a}^{2}}{2a^2}\right)^{2}\right]^{\frac{1}{2}} \\ &\varphi_{i,\bf{k}}=-\frac{\pi}{2}+\frac{1}{2} \arctan \left(\frac{k a^2}{3 {P_a}^{2}}\left(1-c_{si}^{2}\right)\right) . \end{align} The index $i$ runs over 1 and 2. 
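These auxiliary quantities are simple closed-form functions of $k$, $c_{si}$, $P_a$ and $a$ and can be evaluated directly. A short Python helper, with purely illustrative numerical values (and reading the second term in $\lambda_{i,\bf{k}}$ as $3P_a^{2}/(2a^{2})$, consistently with the argument of the arctangent in $\varphi_{i,\bf{k}}$):

```python
import math

def freq_params(k, c_s, P_a, a):
    """Omega, lambda and varphi for one field (field index suppressed)."""
    Omega = 0.5 * k * (1 + c_s**2)
    lam = math.hypot(0.5 * k * (1 - c_s**2), 3 * P_a**2 / (2 * a**2))
    phi = -math.pi / 2 + 0.5 * math.atan(k * a**2 * (1 - c_s**2) / (3 * P_a**2))
    return Omega, lam, phi

# c_s = 1 collapses the expressions: Omega = k, lambda = 3 P_a^2/(2 a^2), varphi = -pi/2
Omega, lam, phi = freq_params(k=0.0001, c_s=1.0, P_a=0.1, a=10.0)
assert abs(Omega - 0.0001) < 1e-15
assert abs(lam - 0.00015) < 1e-15
assert abs(phi + math.pi / 2) < 1e-12
```

The $c_{si}=1$ limit used in the first numerical run below therefore makes $\lambda_{i,\bf{k}}$ purely gravitational (set by $P_a$ and $a$) and fixes $\varphi_{i,\bf{k}}=-\pi/2$.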
Here $c_{si}$ is the effective sound speed of the two fields, given by \begin{equation} \begin{aligned} k^{2}~c_{si}^{2} =\left[m_{i}^{2}+\frac{K^{2} P_{a}^{2}}{4 a^{4}}\right] \end{aligned} \end{equation} Note that we do not keep the $K^{2}$ term here, since we retain only contributions that are first order in $K$. The differential equations given in Eq.~\eqref{76eqt} are very useful because they govern the dynamics of the variables $R_{1,\bf{k}}$, $\Theta_{1,\bf{k}}$, $\Phi_{1,\bf{k}}$, $R_{2,\bf{k}}$, $\Theta_{2,\bf{k}}$ and $\Phi_{2,\bf{k}}$, which determine the evolution operator for the four-mode squeezed states in the de Sitter background. Hence, these equations can be used for numerical studies of the two coupled scalar fields in de Sitter space. \section{\textcolor{Sepia}{\textbf{ \Large Numerical Analysis}}}\label{Numeri} In this Section we present the numerical analysis for two coupled scalar fields that interact weakly, with coupling strength $K\ll 1$, in the limit where the interaction terms are linear. We study the behaviour of the squeezing parameters with respect to the conformal time for this particular setup. The differential equations for the squeezing parameters of the two coupled scalar fields in the de Sitter background are given in Eq.~\eqref{linear_eqt}; the interacting parts of this set of six equations contain only terms that are linear in the squeezing parameters. We have chosen this limit in order to understand the numerical behaviour of the squeezing parameters in a compact manner. For the numerical analysis we set the momentum $P_{a}=0.1$, the Hubble parameter $H=0.1$ in natural units, $k=0.0001\, {\rm Mpc}^{-1}$ and $a=-\frac{1}{H\eta}$. 
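Before turning to the full system Eq.~\eqref{linear_eqt}, it is useful to sanity-check the decoupled $K=0$ limit of Eq.~\eqref{76eqt}, where each field closes on its own triple $(R,\Theta,\Phi)$. The sketch below (with illustrative values of $\Omega$, $\lambda$, $\varphi$ taken from the setup above) checks the right-hand side at $\Theta=\varphi$, where the trigonometric factors collapse to $R'=\lambda$, $\Theta'=-\Omega$ and $\Phi'=\Omega$:

```python
import math

def squeeze_rhs(R, Theta, Phi, Omega, lam, phi):
    """K = 0 right-hand side of the squeezing-parameter equations (76eqt)."""
    s = math.sin(2 * (phi - Theta))
    dR = lam * math.cos(2 * (phi - Theta))
    dTheta = -Omega + 0.5 * lam * (math.tanh(R) + 1 / math.tanh(R)) * s
    dPhi = Omega - lam * math.tanh(R) * s
    return dR, dTheta, dPhi

# at Theta == phi: cos factor is 1 and sin factor vanishes
dR, dTheta, dPhi = squeeze_rhs(R=0.5, Theta=0.2, Phi=0.0, Omega=1e-4, lam=1.5e-4, phi=0.2)
assert abs(dR - 1.5e-4) < 1e-15
assert abs(dTheta + 1e-4) < 1e-15
assert abs(dPhi - 1e-4) < 1e-15
```

Note the $\operatorname{coth}R$ term, which is singular as $R\to 0$; any time integration of these equations should therefore start from a small but nonzero squeezing amplitude.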
\begin{widetext} \begin{equation}\label{linear_eqt} \begin{aligned} R_{1,\bf{k}}^{\prime}(\eta)&=\lambda_{1,\bf{k}}(\eta) \operatorname{cos}[2(\varphi_{1,\bf{k}}(\eta)+\Theta_{1,\bf{k}}(\eta))]+\frac{K}{{6 k}}\bigg(\mathrm{i}(2 k P(\eta) R_{1,\bf{k}}(\eta)+(-2 k P(\eta)+3 \mathrm{i} S(\eta)) R_{2,\bf{k}}(\eta)-3 S(\eta)(\mathrm{i}+\Phi_{1,\bf{k}}(\eta)\\ &\left.-\Phi_{2,\bf{k}}(\eta))\bigg)\right)\\ R_{2,\bf{k}}^{\prime}(\eta)&=\lambda_{2,\bf{k}}(\eta) \operatorname{cos}[2(\varphi_{2,\bf{k}}(\eta)+\Theta_{1,\bf{k}}(\eta))]+K\left(\frac{1}{3}\left(P(\eta)-P(\eta) R_{1,\bf{k}}(\eta)+P(\eta) R_{2,\bf{k}}(\eta)+\frac{3 S(\eta) \Theta_{2,\bf{k}}(\eta)}{k}\right)\right)\\ \Theta_{1,\bf{k}}^{\prime}(\eta)&=-\Omega_{1,\bf{k}}-(\lambda_{1,\bf{k}} \operatorname{sin}[2(\varphi_{1,\bf{k}}+\Theta_{1,\bf{k}}(\eta))]) \operatorname{coth}[2 R_{1,\bf{k}}(\eta)]+ K\left(\frac{1}{12 kR_{1,\bf{k}}(\eta)}\left(3 S(\eta)+(-2 i k P(\eta)-3 S(\eta)) R_{2,\bf{k}}(\eta)\right.\right.\\ &~~~\left.\left.-4 k P(\eta) \Theta_{1,\bf{k}}(\eta)-2 k P(\eta) \Phi_{1,\bf{k}}(\eta)-3 \mathrm{i} S(\eta) \Phi_{1,\bf{k}}(\eta)+2 k P(\eta) \Phi_{2,\bf{k}}(\eta)+3 \mathrm{i} S(\eta) \Phi_{2,\bf{k}}(\eta)+\right.\right. 
R_{1,\bf{k}}(\eta)(2 \mathrm{i} kP(\eta)-3 S(\eta)\\ &-4 kP(\eta) \Theta_{1,\bf{k}} (\eta)+6 i S(\eta) \Theta_{1,\bf{k}} (\eta)+R_{2,\bf{k}}(\eta)(3 S(\eta)+8 kP(\eta) \Theta_{1,\bf{k}}(\eta))+3 i S(\eta) \Phi_{1,\bf{k}}(\eta)-3 i S(\eta) \Phi_{2,\bf{k}}(\eta)))\bigg)\\ \Theta_{2,\bf{k}}^{\prime}(\eta)&=-\Omega_{2,\bf{k}}-(\lambda_{2,\bf{k}} \operatorname{sin}[2(\varphi_{2,\bf{k}}+\Theta_{2,\bf{k}}(\eta))]) \operatorname{coth}[2 R_{2,\bf{k}}(\eta)]+ K\bigg(\frac{1}{12 k R_{2,\bf{k}}(\eta)}(3 S(\eta)+R_{1,\bf{k}}(\eta)(-2 \mathrm{i} k P(\eta)-3 S(\eta)+\\ &3 S(\eta) R_{2,\bf{k}}(\eta))-4 kP(\eta)\Theta_{2,\bf{k}}(\eta)+2 kP(\eta) \Phi_{1,\bf{k}}(\eta)+3 \mathrm{i} \mathrm{S}(\eta) \Phi_{1,\bf{k}}(\eta)-2 kP(\eta) \Phi_{2,\bf{k}}(\eta)- 3 i S(\eta) \Phi_{2,\bf{k}}(\eta)\\ &+R_{2,\bf{k}}(\eta)(2 i k P(\eta)-3 S(\eta)-4 k P(\eta) \Theta_{2,\bf{k}}(\eta)+6 i S(\eta) \Theta_{2,\bf{k}}(\eta)-3 i S(\eta) \Phi_{1,\bf{k}}(\eta)+3 i S(\eta) \Phi_{2,\bf{k}}(\eta)))\bigg)\\ \Phi_{1,\bf{k}}^{\prime}(\eta)&=\Omega_{1,\bf{k}}-\lambda_{1,\bf{k}}\operatorname{tanh}[R_{1,\bf{k}}(\eta)] \operatorname{sin} [2(\varphi_{1,\bf{k}}+\Theta_{1,\bf{k}}(\eta))]\\ &+K\left(\frac{i(2 k P(\eta) R_{1,\bf{k}}(\eta)+(-2 k P(\eta)+3 iS(\eta)) R_{2,\bf{k}}(\eta)-3 S(\eta)(i+\Phi_{1,\bf{k}}(\eta)-\Phi_{2,\bf{k}}(\eta)))}{6 k}\right)\\ \Phi_{2,\bf{k}}^{\prime}(\eta)&=\Omega_{2,\bf{k}}-\lambda_{2,\bf{k}}\operatorname{tanh}[R_{2,\bf{k}}(\eta)] \operatorname{sin} [2(\varphi_{2,\bf{k}}+\Theta_{2,\bf{k}}(\eta))]\\ &-K\left(\frac{{i}((2 k P(\eta)-3 i S(\eta)) R_{1,\bf{k}}(\eta)-2 k P(\eta) R_{2,\bf{k}}(\eta)+3 S(\eta)(i-\Phi_{1,\bf{k}}(\eta)+\Phi_{2,\bf{k}}(\eta)))}{6 k}\right)\\ \end{aligned} \end{equation} \end{widetext} \begin{figure*}[htb!] 
\centering \subfigure{ \includegraphics[width=8cm] {R.pdf}\label{entvsacs1} } \subfigure{ \includegraphics[width=8.2cm] {theta.pdf}\label{entvsacs2} } \subfigure{ \includegraphics[width=8.2cm] {phi.pdf}\label{entvsacs3} } \caption{Behaviour of the squeezing parameters, namely the squeezing amplitude in fig (1(a)), the squeezing angle in fig (1(b)) and the squeezing phase in fig (1(c)), with respect to the conformal time $\eta$ for two coupled scalar fields with coupling strength $K=0.01$; the effective sound speed parameters $c_{s1}$ and $c_{s2}$ of both fields are equal to 1.} \label{entvsacs} \end{figure*} In fig (1) we have plotted three graphs showing the behaviour of the squeezing parameters with respect to $\eta$ for the two coupled scalar fields. Fig (1(a)) represents the behaviour of the squeezing amplitudes, $R_{1,\bf{k}}$ for the first field and $R_{2,\bf{k}}$ for the second field, with respect to the conformal time $\eta$. We observe that the squeezing amplitudes of the two fields differ at early times, because the contributions of the interaction terms to the two fields are different. We see that $R_{1,\bf{k}}$ dominates over $R_{2,\bf{k}}$ at early times, while the behaviour of both squeezing amplitudes is similar, i.e., both decrease as we move towards more negative values of the conformal time $\eta$; at $\eta \approx 0$, the present time of the universe, the distinguishability between the squeezing amplitudes of the two fields vanishes. In fig (1(b)) we have plotted the squeezing (rotation) angles, $\Theta_{1,\bf{k}}$ for the first field and $\Theta_{2,\bf{k}}$ for the second field. At early times, i.e. at more negative values of the conformal time, the squeezing angle of the second field dominates over that of the first field, but as we move towards the right of the graph the first field dominates and decreases, and then the second field dominates again. 
As we move towards the present time $\eta=0$, these interchanging dominance effects vanish and the squeezing angles of both fields saturate to a constant value. The behaviour of the squeezing phase parameters with respect to the conformal time $\eta$ is shown in fig (1(c)). Here the squeezing phase $\Phi_{1,\bf{k}}$ is for the first field and $\Phi_{2,\bf{k}}$ for the second field. We observe that their behaviour is similar to that of the squeezing amplitudes shown in fig (1(a)), with one difference: the squeezing phase of the second field dominates over that of the first field, which was the other way around for the squeezing amplitudes. In the case of the squeezing phase, too, the difference between the two fields vanishes and a constant value is attained. \begin{figure*}[htb!] \centering \subfigure{ \includegraphics[width=8cm] {Rsecond.pdf}\label{entvsacs4} } \subfigure{ \includegraphics[width=8.2cm] {Thetasecond.pdf}\label{entvsacs5} } \subfigure{ \includegraphics[width=8.2cm] {phisecond.pdf}\label{entvsacs6} } \caption{Behaviour of the squeezing parameters, namely the squeezing amplitude in fig (2(a)), the squeezing angle in fig (2(b)) and the squeezing phase in fig (2(c)), with respect to the conformal time $\eta$ for two coupled scalar fields with coupling strength $K=0.1$; the effective sound speed parameters $c_{s1}$ and $c_{s2}$ of both fields are equal to 0.024.} \label{entvsacsb} \end{figure*} In fig (2) we again plot the squeezing parameters, but with a different value of the effective sound speed parameters $c_{s1}$ and $c_{s2}$. We set the effective sound speed of both fields to its lowest bound, which is 0.024, and the coupling strength to $K=0.1$. We observe a similar behaviour for all three squeezing parameters, but one new observation, visible in fig (2(a)) and fig (2(c)), is that the squeezing amplitude and the squeezing phase tend to zero as we move towards more negative values of the conformal time. 
All three graphs in fig (2) are shifted towards the right, i.e. towards the origin, compared with the graphs in fig (1). \section{\textcolor{Sepia}{\textbf{ \Large Conclusion}}}\label{CC} This work dealt with the mathematical formalism of four-mode squeezed states in cosmology. From the cosmological perspective we considered two scalar fields, with the background metric set to de Sitter. The motivation for choosing this particular metric is that the setup of the action we have chosen can be helpful for understanding the dynamics of two scalar fields in an FRW cosmological universe. The analysis of the present work is valid in the limit where the two scalar fields are weakly coupled, $K\ll 1$, and the perturbative effects are taken only up to linear order $\mathcal{O}(R)$. \\ This work can be summarised in the following points: \begin{itemize} \item We have quantized the modes of the two coupled scalar fields $\mu_1$ and $\mu_2$. We have also calculated the position and momentum variables for them in the de Sitter background and obtained the quantized Hamiltonian $H$. We made a connection between the system of two coupled inverted quantum harmonic oscillators and the four-mode squeezed state formalism. \item We have given a detailed calculation for constructing the four-mode squeezed state operator, which is useful for understanding the cosmological action for the two interacting scalar fields and also for other systems that can be described in terms of two coupled inverted quantum harmonic oscillators. \item The time evolution operator for the four-mode squeezed states is also given, and we have used it to calculate the time-dependent (Heisenberg picture) annihilation and creation operators for the two coupled scalar fields in the de Sitter background. Using the Heisenberg equation of motion, we have calculated the coupled differential equations for the mode functions of the two coupled scalar fields in de Sitter space. 
\item We presented the expressions for $R_{1,\bf{k}}$, $\Theta_{1,\bf{k}}$, $\Phi_{1,\bf{k}}$, $R_{2,\bf{k}}$, $\Theta_{2,\bf{k}}$ and $\Phi_{2,\bf{k}}$, the parameters of the evolution operator for the four-mode squeezed state, which govern the evolution of the state of the two coupled scalar fields in the de Sitter metric. \item We concluded the work by analysing the behaviour of the squeezing parameters for the two coupled scalar fields. We found that during early conformal times the distinction between the squeezing parameters is noticeable, whereas as we move to the current time, $\eta\approx0$, the parameters converge and become indistinguishable. \end{itemize} With these tools in hand it would be interesting to compute quantum-information quantities such as the entanglement entropy, quantum discord, circuit complexity and many other quantum-information-theoretic measures for two coupled scalar fields in de Sitter space. This will help us understand the features and the behaviour of the long-range quantum correlations of the system under consideration. These crucial aspects were not studied in previous works because the formalism for four-mode squeezed states had not yet been developed. Now that the formalism is developed and we know how to handle the system numerically, it would be natural to study the aforementioned aspects in the near future. \textbf{Acknowledgement:} ~~~The Visiting Post Doctoral research fellowship of SC is supported by the J. C. Bose National Fellowship of Director, Professor Rajesh Gopakumar, ICTS, TIFR, Bengaluru. The research of SP is supported by the J. C. Bose National Fellowship. SC would also like to thank ICTS, TIFR, Bengaluru for providing a work-friendly environment. SC would also like to thank all the members of our newly formed virtual international non-profit consortium Quantum Structures of the Space-Time \& Matter (QASTM) for elaborate discussions. 
AR and NP would like to thank the members of the QASTM Forum for useful discussions. Last but not least, we would like to acknowledge our debt to the people belonging to the various parts of the world for their generous and steady support for research in natural sciences. \widetext \section{Appendix} \textcolor{Sepia}{\subsection{\sffamily Interacting part of differential equations}\label{sec:appendixA}} $Y_{1}=$ $\left(-\left(A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3}\right)\right. (C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}$ $+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6}) (B_{2} E_{1} a_{0}-B_{1} E_{2} a_{0}-A_{2} E_{1} b_{0}+A_{1} E_{2} b_{0}+A_{2} B_{1} e_{0}-A_{1} B_{2} e_{0})+ (x_{21} A_{2} B_{1}-x_{21} A_{1} B_{2}-x_{0} A_{2} E_{1}$ $+x_{3} B_{2} E_{1}+x_{0} A_{1} E_{2}-x_{3} B_{1} E_{2}) (A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3}) (D_{6} F_{5} C_{0}-D_{5} F_{6} C_{0}$ $-C_{6} F_{5} D_{0}+C_{5} F_{6} D_{0}+C_{6} D_{5} F_{0}-C_{5} D_{6} F_{0}) R_{1}{ }^{2}+ (C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6})$ $\left(B_{3} E_{2} A_{0}-B_{2} E_{3} A_{0}-A_{3} E_{2} B_{0}+A_{2} E_{3} B_{0}+A_{3} B_{2} E_{0}-A_{2} B_{3} E_{0}\right) R_{2} (-x_{7} A_{2} E_{1}+x_{7} A_{1} E_{2}+ (-x_{18} A_{2} B_{1}+x_{18} A_{1} B_{2}$ $-x_{6} A_{2} E_{1}+x_{2} B_{2} E_{1}+x_{6} A_{1} E_{2}-x_{2} B_{1} E_{2}) R_{2}) R_{1} (x_{5}(A_{2} E_{1}-A_{1} E_{2}) (C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}$ $-C_{4} D_{5} F_{6}) (-B_{3} E_{2} A_{0}+B_{2} E_{3} A_{0}+A_{3} E_{2} B_{0}-A_{2} E_{3} B_{0}-A_{3} B_{2} E_{0}+A_{2} B_{3} E_{0})+ (-x_{19}\left(A_{2} B_{1}-A_{1} B_{2}\right)(C_{6} D_{5} F_{4}$ $-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6}) (-B_{3} E_{2} A_{0}+B_{2} E_{3} A_{0}+A_{3} E_{2} 
B_{0}-A_{2} E_{3} B_{0}-A_{3} B_{2} E_{0}$ $+A_{2} B_{3} E_{0})- x_{1} (B_{2} E_{1}-B_{1} E_{2}) (C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6}) (-B_{3} E_{2} A_{0} +B_{2} E_{3} A_{0}$ $+A_{3} E_{2} B_{0}-A_{2} E_{3} B_{0}-A_{3} B_{2} E_{0} +A_{2} B_{3} E_{0})- (x_{20} A_{2} B_{1}-X_{20} A_{1} B_{2}+ x_{8} A_{2} E_{1}-x_{4} B_{2} E_{1}-x_{8} A_{1} E_{2} +x_{4} B_{1} E_{2})$ $~(A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2} + A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3}) (D_{6} F_{5} C_{0}-D_{5} F_{6} C_{0}-C_{6} F_{5} D_{0} +C_{5} F_{6} D_{0}+C_{6} D_{5} F_{0}$ $\left.-C_{5} D_{6} F_{0})) R_{2})\right) / ((A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3})^{2} (C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}$ $+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6}))$ $Y_{4}=$ $((-C_{6} D_{5} F_{4}+C_{5} D_{6} F_{4}+C_{6} D_{4} F_{5}-C_{4} D_{6} F_{5}-C_{5} D_{4} F_{6}+C_{4} D_{5} F_{6}) (B_{3}((-A_{2} E_{1}+A_{1} E_{2})(D_{6} F_{5}-D_{5} F_{6}) c_{0}+$ $C_{6} d_{1} F_{5}(E_{2} A_{0}-A_{2} E_{0})+C_{5} d_{1} F_{6}(-E_{2} A_{0}+A_{2} E_{0})+ C_{6} (A_{2} E_{1}-A_{1} E_{2}) (F_{5} d_{0}-D_{5} f_{0})-C_{5}(A_{2} E_{1}-A_{1} E_{2})$ $(F_{6} d_{0}-D_{6} f_{0}))+ B_{2}((A_{3} E_{1}-A_{1} E_{3})(D_{6} F_{5}-D_{5} F_{6}) c_{0}+C_{5} d_{1} F_{6}(E_{3} A_{0}-A_{3} E_{0})+C_{6} d_{1} F_{5}(-E_{3} A_{0}+A_{3} E_{0})-$ $ C_{6}(A_{3} E_{1}-A_{1} E_{3})(F_{5} d_{0}-D_{5} f_{0})+C_{5}(A_{3} E_{1}-A_{1} E_{3})(F_{6} d_{0}-D_{6} f_{0}))+ (A_{3} E_{2}-A_{2} E_{3})(B_{1}(-D_{6} F_{5}+D_{5} F_{6}) c_{0}$ $-C_{6}(d_{1} F_{5} B_{0}-B_{1} F_{5} d_{0}+B_{1} D_{5} f_{0})+ C_{5}(d_{1} F_{6} B_{0}-B_{1} F_{6} d_{0}+B_{1} D_{6} f_{0})))+ (A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}$ $+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3}) (x_{25} C_{6} D_{5}-x_{25} C_{5} D_{6}-x_{17} C_{6} F_{5}+x_{12} D_{6} F_{5}+x_{17} C_{5} F_{6}-x_{12} D_{5} 
F_{6}) (D_{6} F_{5} C_{0}$ $-D_{5} F_{6} C_{0}-C_{6} F_{5} D_{0}+C_{5} F_{6} D_{0}+C_{6} D_{5} F_{0}-C_{5} D_{6} F_{0}) R_{1}^{2}+ (x_{23}(C_{6} D_{5}-C_{5} D_{6})(C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}$ $+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6}) (B_{3} E_{2} A_{0}-B_{2} E_{3} A_{0}-A_{3} E_{2} B_{0}+A_{2} E_{3} B_{0}+A_{3} B_{2} E_{0}-A_{2} B_{3} E_{0})+ x_{11}(D_{6} F_{5}-D_{5} F_{6})$ $(C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6}) (B_{3} E_{2} A_{0}-B_{2} E_{3} A_{0}-A_{3} E_{2} B_{0}+A_{2} E_{3} B_{0}+A_{3} B_{2} E_{0}$ $-A_{2} B_{3} E_{0})- (A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3}) (x_{26} C_{6} D_{5}-x_{26} C_{5} D_{6} +x_{16} C_{6} F_{5}$ $-x_{13} D_{6} F_{5}-x_{16} C_{5} F_{6}+x_{13} D_{5} F_{6}) (D_{6} F_{5} C_{0}-D_{5} F_{6} C_{0}-C_{6} F_{5} D_{0}+C_{5} F_{6} D_{0}+C_{6} D_{5} F_{0}-C_{5} D_{6} F_{0}))R_1 R_2$ $+(x_{24} C_{6} D_{5}-x_{24} C_{5} D_{6}+x_{10} D_{6} F_{5}-x_{10} D_{5} F_{6}) (C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6})$ $~(B_{3} E_{2} A_{0}-B_{2} E_{3} A_{0}-A_{3} E_{2} B_{0}+A_{2} E_{3} B_{0}+A_{3} B_{2} E_{0}-A_{2} B_{3} E_{0}) R_2^{2}) / ((A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}$ $+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3}) (C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6})^{2})$ $Y_{2}=$ $((A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3})(B_{3} E_{1} a_{0}-B_{1} E_{3} a_{0}-A_{3} E_{1} b_{0}+A_{1} E_{3} b_{0}$ $+A_{3} B_{1} e_{0}-A_{1} B_{3} e_{0})+ ((x_{21} A_{3} B_{1}-x_{21} A_{1} B_{3}-x_{0} A_{3} E_{1}+x_{3} B_{3} E_{1}+x_{0} A_{1} E_{3}-x_{3} B_{1} E_{3})(-A_{3} B_{2} E_{1}+A_{2} B_{3} E_{1}$ $+A_{3} B_{1} E_{2}-A_{1} B_{3} E_{2}-A_{2} B_{1} E_{3}+A_{1} B_{2} 
E_{3})(D_{6} F_{5} C_{0}-D_{5} F_{6} C_{0}-C_{6} F_{5} D_{0}+C_{5} F_{6} D_{0}+C_{6} D_{5} F_{0}-C_{5} D_{6} F_{0}) R_1^{2}) /$ $(C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6})+x_{7}(A_{3} E_{1}-A_{1} E_{3})(B_{3} E_{2} A_{0}-B_{2} E_{3} A_{0}-A_{3} E_{2} B_{0}$ $+A_{2} E_{3} B_{0}+A_{3} B_{2} E_{0}-A_{2} B_{3} E_{0}) R_2+ (x_{18} A_{3} B_{1}-x_{18} A_{1} B_{3}+x_{6} A_{3} E_{1}-x_{2} B_{3} E_{1}-x_{6} A_{1} E_{3}+x_{2} B_{1} E_{3})(B_{3} E_{2} A_{0}-B_{2} E_{3} A_{0}$ $-A_{3} E_{2} B_{0}+A_{2} E_{3} B_{0}+A_{3} B_{2} E_{0}-A_{2} B_{3} E_{0}) R_2^{2}+ R_1(x_{5}(A_{3} E_{1}-A_{1} E_{3})(B_{3} E_{2} A_{0}-B_{2} E_{3} A_{0}-A_{3} E_{2} B_{0}+A_{2} E_{3} B_{0}$ $+A_{3} B_{2} E_{0} -A_{2} B_{3} E_{0})+ (-x_{1}(B_{3} E_{1}-B_{1} E_{3})(B_{3} E_{2} A_{0}-B_{2} E_{3} A_{0}-A_{3} E_{2} B_{0}+A_{2} E_{3} B_{0}+A_{3} B_{2} E_{0}-A_{2} B_{3} E_{0})$ $+x_{19}(A_{3} B_{1}-A_{1} B_{3}) (-B_{3} E_{2} A_{0}+B_{2} E_{3} A_{0}+A_{3} E_{2} B_{0}-A_{2} E_{3} B_{0}-A_{3} B_{2} E_{0}+A_{2} B_{3} E_{0})- ((x_{20} A_{3} B_{1}-x_{20} A_{1} B_{3}$ $+x_{8} A_{3} E_{1}-x_{4} B_{3} E_{1} -x_{8} A_{1} E_{3}+x_{4} B_{1} E_{3})(-A_{3} B_{2} E_{1}+A_{2} B_{3} E_{1}+A_{3} B_{1} E_{2}-A_{1} B_{3} E_{2}-A_{2} B_{1} E_{3}+A_{1} B_{2} E_{3})$ $(D_{6} F_{5} C_{0}-D_{5} F_{6} C_{0}-C_{6} F_{5} D_{0}+C_{5} F_{6} D_{0}+C_{6} D_{5} F_{0}-C_{5} D_{6} F_{0})) / (C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}$ $+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6})) R_2)) /(A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3})^{2}$ $Y_{5}=$ $((C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6}) (B_{3}((-A_{2} E_{1}+A_{1} E_{2})(D_{6} F_{4}-D_{4} F_{6}) c_{0} +C_{6} d_{1} F_{4}$ $(E_{2} A_{0}-A_{2} E_{0})+C_{4} d_{1} F_{6}(-E_{2} A_{0}+A_{2} E_{0})+C_{6}(A_{2} E_{1}-A_{1} E_{2})(F_{4} d_{0}-D_{4} f_{0})-C_{4}(A_{2} E_{1}-A_{1} 
E_{2})(F_{6} d_{0}-D_{6} f_{0}))$ $+B_{2}((A_{3} E_{1}-A_{1} E_{3})(D_{6} F_{4}-D_{4} F_{6}) c_{0}+C_{4} d_{1} F_{6}(E_{3} A_{0}-A_{3} E_{6})+C_{6} d_{1} F_{4}(-E_{3} A_{0}+A_{3} E_{6})-C_{6}(A_{3} E_{1}-A_{1} E_{3})$ $(F_{4} d_{0}-D_{4} f_{0})+C_{4}(A_{3} E_{1}-A_{1} E_{3})(F_{6} d_{0}-D_{6} f_{0}))+ (A_{3} E_{2}-A_{2} E_{3})(B_{1}(-D_{6} F_{4}+D_{4} F_{6}) c_{0}-C_{6}(d_{1} F_{4} B_{0} -B_{1} F_{4} d_{0}$ $+B_{1} D_{4} f_{0})+C_{4}(d_{1} F_{6} B_{0}-B_{1} F_{6} d_{0}+B_{1} D_{6} f_{0})))+ (A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3})$ $(x_{25} C_{6} D_{4}-x_{25} C_{4} D_{6}-x_{17} C_{6} F_{4}+x_{12} D_{6} F_{4}+x_{17} C_{4} F_{6}-x_{12} D_{4} F_{6})(-D_{6} F_{5} C_{6}+D_{5} F_{6} C_{6}+C_{6} F_{5} D_{0}-C_{5} F_{6} D_{0}$ $-C_{6} D_{5} F_{6}+C_{5} D_{6} F_{6}) R_1^{2}+ (x_{23}(C_{6} D_{4}-C_{4} D_{6})(-C_{6} D_{5} F_{4}+C_{5} D_{6} F_{4}+C_{6} D_{4} F_{5}-C_{4} D_{6} F_{5}-C_{5} D_{4} F_{6} +C_{4} D_{5} F_{6})$ $(B_{3} E_{2} A_{0}-B_{2} E_{3} A_{0} -A_{3} E_{2} B_{0}+A_{2} E_{3} B_{0}+A_{3} B_{2} E_{0}-A_{2} B_{3} E_{0})+ x_{11}(D_{6} F_{4}-D_{4} F_{6})(-C_{6} D_{5} F_{4}+C_{5} D_{6} F_{4} +C_{6} D_{4} F_{5}$ $-C_{4} D_{6} F_{5}-C_{5} D_{4} F_{6}+C_{4} D_{5} F_{6})(B_{3} E_{2} A_{0}-B_{2} E_{3} A_{0}-A_{3} E_{2} B_{0}+A_{2} E_{3} B_{0}+A_{3} B_{2} E_{0}-A_{2} B_{3} E_{0})- (A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}$ $-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3})(x_{26} C_{6} D_{4}-x_{26} C_{4} D_{6}+x_{16} C_{6} F_{4}-x_{13} D_{6} F_{4}-x_{16} C_{4} F_{6}+x_{13} D_{4} F_{6})(-D_{6} F_{5} C_{0}$ $+D_{5} F_{6} C_{0}+C_{6} F_{5} D_{0}-C_{5} F_{6} D_{0}-C_{6} D_{5} F_{0}+C_{5} D_{6} F_{0})) R_{1} R_2+(x_{24} C_{6} D_{4}-x_{24} C_{4} D_{6}+x_{16} D_{6} F_{4}-x_{10} D_{4} F_{6})(-C_{6} D_{5} F_{4}$ $+C_{5} D_{6} F_{4}+C_{6} D_{4} F_{5}-C_{4} D_{6} F_{5}-C_{5} D_{4} F_{6}+C_{4} D_{5} F_{6})(B_{3} E_{2} A_{0}-B_{2} E_{3} A_{0}-A_{3} E_{2} B_{0}+A_{2} E_{3} B_{0}+A_{3} B_{2} E_{0} -A_{2} 
B_{3} E_{0})$ $ R_2^{2}) / ((A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3})(C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5} +C_{4} D_{6} F_{5}$ $+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6})^{2})$ $Y_{3}=$ $(-(A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3})(C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}$ $-C_{4} D_{5} F_{6}) (B_{2} E_{1} a_{0}-B_{1} E_{2} a_{0}-A_{2} E_{1} b_{0}+A_{1} E_{2} b_{0}+A_{2} B_{1} e_{0}-A_{1} B_{2} e_{0})+ (x_{21} A_{2} B_{1} -x_{21} A_{1} B_{2}-x_{9} A_{2} E_{1}+x_{3} B_{2} E_{1}$ $+x_{9} A_{1} E_{2}-x_{3} B_{1} E_{2})(A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3}) ({D}_{6} {F}_{5}{C}_{0}-{D}_{5}{F}_{6} {C}_{0} -{C}_{6} {F}_{5} {D}_{0}$ $+{C}_{5} {F}_{6} {D}_{0}+{C}_{6} {D}_{5} {F}_{0}-{C}_{5} {D}_{6} {F}_{0}) {R}_1^{2}+ ({C}_{6} {D}_{5} {F}_{4}-{C}_{5} {D}_{6} {F}_{4}-{C}_{6} {D}_{4} {F}_{5}+{C}_{4} {D}_{6} {F}_{5}+{C}_{5} {D}_{4} {F}_{6}-{C}_{4} {D}_{5} {F}_{6}) ({B}_{3} {E}_{2} {A}_{0}$ $-{B}_{2} {E}_{3} {A}_{0}-{A}_{3} {E}_{2} {B}_{0}+{A}_{2} {E}_{3} {B}_{0}+{A}_{3} {B}_{2} {E}_{0}-{A}_{2} {B}_{3} {E}_{0}) {R}_2 (-{x}_{7} {A}_{2} {E}_{1}+{x}_{7} {A}_{1} {E}_{2}+(-{x}_{18} {A}_{2} {B}_{1}+{x}_{18} {A}_{1} {B}_{2}-{x}_{6} {A}_{2} {E}_{1}$ $+{x}_{2} {B}_{2} {E}_{1}+{x}_{6} {A}_{1} {E}_{2}-{x}_{2} {B}_{1} {E}_{2}) {R}_2)+ {R_1}({x}_{5}({A}_{2} {E}_{1}-{A}_{1} {E}_{2})({C}_{6} {D}_{5} {F}_{4}-{C}_{5} {D}_{6} {F}_{4}-{C}_{6} {D}_{4} {F}_{5}+{C}_{4} {D}_{6} {F}_{5}+{C}_{5} {D}_{4} {F}_{6}$ $-{C}_{4} {D}_{5} {F}_{6})(-{B}_{3} {E}_{2} {A}_{0}+{B}_{2} {E}_{3} {A}_{0}+{A}_{3} {E}_{2} {B}_{0}-{A}_{2} {E}_{3} {B}_{0}-{A}_{3} {B}_{2} {E}_{0}+{A}_{2} {B}_{3} {E}_{0})+ (-{x}_{19}({A}_{2} {B}_{1}-{A}_{1} {B}_{2})({C}_{6} {D}_{5} {F}_{4}$ $-{C}_{5} {D}_{6} {F}_{4}-{C}_{6} {D}_{4} {F}_{5}+{C}_{4} {D}_{6} {F}_{5}+{C}_{5} {D}_{4} 
{F}_{6}-{C}_{4} {D}_{5} {F}_{6})(-{B}_{3} {E}_{2} {A}_{0}+{B}_{2} {E}_{3} {A}_{0}+{A}_{3} {E}_{2} {B}_{0}-{A}_{2} {E}_{3} {B}_{0}-{A}_{3} {B}_{2} {E}_{0}+{A}_{2} {B}_{3} {E}_{0})$ $-x_{1}(B_{2} E_{1}-B_{1} E_{2})(C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6})(-B_{3} E_{2} A_{0}+B_{2} E_{3} A_{0}+A_{3} E_{2} B_{0}-$ $A_{2} E_{3} B_{0}-A_{3} B_{2} E_{0}+A_{2} B_{3} E_{0})-$ $(x_{20} A_{2} B_{1}-x_{20} A_{1} B_{2}+x_{8} A_{2} E_{1}-x_{4} B_{2} E_{1}-x_{8} A_{1} E_{2}+x_{4} B_{1} E_{2})(A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-$ $A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3}) (D_{6} F_{5} C_{0}-D_{5} F_{6} C_{0}-C_{6} F_{5} D_{0}+C_{5} F_{6} D_{0}+C_{6} D_{5} F_{0}-C_{5} D_{6} F_{0})) R_2)) /$ $((A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3})^{2}(C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}$ $-C_{4} D_{5} F_{6}))$ $Y_{6}=$ $((-C_{6} D_{5} F_{4}+C_{5} D_{6} F_{4}+C_{6} D_{4} F_{5}-C_{4} D_{6} F_{5}-C_{5} D_{4} F_{6}+C_{4} D_{5} F_{6}) (B_{3}((-A_{2} E_{1}+A_{1} E_{2})(D_{5} F_{4}-D_{4} F_{5}) c_{0} +C_{5} d_{1} F_{4}$ $(E_{2} A_{0}-A_{2} E_{0})+C_{4} d_{1} F_{5}(-E_{2} A_{0}+A_{2} E_{0})+C_{5}(A_{2} E_{1}-A_{1} E_{2})(F_{4} d_{0}-D_{4} f_{0})- C_{4}(A_{2} E_{1}-A_{1} E_{2})(F_{5} d_{0}-D_{5} f_{0}))+$ $ B_{2}((A_{3} E_{1}-A_{1} E_{3})(D_{5} F_{4}-D_{4} F_{5}) c_{0}+C_{4} d_{1} F_{5}(E_{3} A_{0}-A_{3} E_{0})+C_{5} d_{1} F_{4}(-E_{3} A_{0}+A_{3} E_{0})-C_{5}(A_{3} E_{1}-A_{1} E_{3})(F_{4} d_{0}-$ $D_{4} f_{0})+C_{4}(A_{3} E_{1}-A_{1} E_{3})(F_{5} d_{0}-D_{5} f_{0})) (A_{3} E_{2}-A_{2} E_{3}) (B_{1} (-D_{5} F_{4}+D_{4} F_{5} ) c_{0}-C_{5} (d_{1} F_{4} B_{0}-B_{1} F_{4} d_{0}+B_{1} D_{4} f_{0} )+$ $C_{4} (d_{1} F_{5} B_{0}-B_{1} F_{5} d_{0}+B_{1} D_{5} f_{0} ) ) )- (A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3})(x_{25} C_{5} 
D_{4}-$ $ x_{25} C_{4} D_{5}-x_{17} C_{5} F_{4}+x_{12} D_{5} F_{4}+x_{17} C_{4} F_{5}-x_{12} D_{4} F_{5})$ $(-D_{6} F_{5} C_{0}+D_{5} F_{6} C_{0}+C_{6} F_{5} D_{0}-C_{5} F_{6} D_{0}-C_{6} D_{5} F_{0}+$ $C_{5} D_{6} F_{0}) R_1^{2}+ (x_{23}(C_{5} D_{4}-C_{4} D_{5})(C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6})(B_{3} E_{2} A_{0}-B_{2} E_{3} A_{0}-$ $A_{3} E_{2} B_{0}+A_{2} E_{3} B_{0}+A_{3} B_{2} E_{0}-A_{2} B_{3} E_{0})+ x_{11}(D_{5} F_{4}-D_{4} F_{5})(C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+ C_{5} D_{4} F_{6}-$ $C_{4} D_{5} F_{6})(B_{3} E_{2} A_{0}-B_{2} E_{3} A_{0}-A_{3} E_{2} B_{0}+A_{2} E_{3} B_{0}+A_{3} B_{2} E_{0}-A_{2} B_{3} E_{0})+ (A_{3} B_{2} E_{1}-A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+$ $A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3})(x_{26} C_{5} D_{4}-x_{26} C_{4} D_{5}+x_{16} C_{5} F_{4}-x_{13} D_{5} F_{4}-x_{16} C_{4} F_{5}+x_{13} D_{4} F_{5})$ $(-D_{6} F_{5} C_{0}+D_{5} F_{6} C_{0}+$ $C_{6} F_{5} D_{0}-C_{5} F_{6} D_{0}-C_{6} D_{5} F_{0}+C_{5} D_{6} F_{0})) R_1 R_2+ (x_{24} C_{5} D_{4}-x_{24} C_{4} D_{5}+x_{10} D_{5} F_{4}-x_{10} D_{4} F_{5})(C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-$ $C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}-C_{4} D_{5} F_{6}) (B_{3} E_{2} A_{0}-B_{2} E_{3} A_{0}-A_{3} E_{2} B_{0}+A_{2} E_{3} B_{0}+A_{3} B_{2} E_{0}- A_{2} B_{3} E_{0}) R_2^{2}) / $ $((A_{3} B_{2} E_{1}- A_{2} B_{3} E_{1}-A_{3} B_{1} E_{2}+A_{1} B_{3} E_{2}+A_{2} B_{1} E_{3}-A_{1} B_{2} E_{3})(C_{6} D_{5} F_{4}-C_{5} D_{6} F_{4}-C_{6} D_{4} F_{5}+C_{4} D_{6} F_{5}+C_{5} D_{4} F_{6}-$ $C_{4} D_{5} F_{6})^{2})$ \textcolor{Sepia}{\subsection{\sffamily Coefficients}\label{sec:appendixB}} The constants appearing in the expressions for the $Y$'s, where $S=\frac{{P_{a}}}{3a^{2}}$ and $P=\frac{3{P_{a}}^2}{2a^{2}}$, are given by: \begin{align*} A_{0}=&\frac{1}{\sqrt{2\mathrm{k}}} \cosh \left[R_{1,k}\left(\eta\right)\right] (P \cos
\left[\Phi_{1,k}\left(\eta\right)\right]+\mathrm{k} \sin \left[\Phi_{1,k}\left(\eta\right)]\right)+\left(-P \cos \left[2 \Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right]+\mathrm{k} \sin \left[2 \Theta_{1,k}\left(\eta\right)\right.\right.\\&\left.\left.+\Phi_{1,k}\left(\eta\right)\right]\right) \sinh \mathrm{R}_{1,k}\left(\eta\right)\\ a_{0}=&\frac{P}{3 \sqrt{2} \sqrt{\mathrm{k}}}\left(\cos \left[\Phi_{2,k}\left(\eta\right)\right] \cosh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]-\cos \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right] \sinh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]\right)\\ A_{1}=&\frac{1}{\sqrt{2} \sqrt{\mathrm{k}}}\left(-\cos \left[2 \Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right] \cosh \left[\mathrm{R}_{1,k}\left(\eta\right)\right]+\cos \left[\Phi_{1,k}\left(\eta\right)\right] \sinh \left[\mathrm{R}_{1,k}\left(\eta\right)\right]\right)\\ A_{2}=&\frac{1}{\sqrt{\mathrm{k}}}\left(\sqrt{2} \sin \left[2 \Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right] \sinh \left[\mathrm{R}_{1,k}\left(\eta\right)\right]\right)\\ A_{3}=&\frac{1}{\sqrt{2} \sqrt{\mathrm{k}}}\left(-\cosh \left[R_{1,k}\left(\eta\right)\right] \sin \left[\Phi_{1,k}\left(\eta\right)\right]+\sin \left[2 \Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right] \sinh \left[R_{1,k}\left(\eta\right)\right]\right)\\ B_{0}=&-\frac{1}{\sqrt{2} \sqrt{\mathrm{k}}}\left(\cosh \left[R_{1,k}\left(\eta\right)\right]\left(-\mathrm{k} \cos \left[\Phi_{1,k}\left(\eta\right)\right]+P \sin \left[\Phi_{1,k}\left(\eta\right)\right]\right)-\left(\mathrm{k} \cos \left[2 \Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right]\right.\right.\\ &\left.\left.+P \sin \left[2 \Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right]\right) \sinh \left[R_{1,k}\left(\eta\right)\right]\right)\\ b_{0}=&\frac{1}{3 \sqrt{2} \sqrt{\mathrm{k}}}\left(P\left(\cosh \left[R_{2,k}\left(\eta\right)\right] \sin 
\left[\Phi_{2,k}\left(\eta\right)\right]-\sin \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right] \sinh \left[R_{2,k}\left(\eta\right)\right]\right)\right)\\ B_{1}=&\frac{1}{\sqrt{2} \sqrt{k}}\left(-\cosh \left[R_{1,k}\left(\eta\right)\right] \sin \left[2 \Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right]+\sin \left[\Phi_{1,k}\left(\eta\right)\right] \sinh \left[R_{1,k}\left(\eta\right)\right]\right)\\ B_{2}=&\frac{-1}{\sqrt{k}}\left(\sqrt{2} \cos \left[2 \Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right] \sinh \left[R_{1,k}\left(\eta\right)\right]\right)\\ B_{3}=&\frac{1}{\sqrt{2} \sqrt{k}}\left(\cos \left[\Phi_{1,k}\left(\eta\right)\right] \cosh \left[R_{1,k}\left(\eta\right)\right]-\cos \left[2 \Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right] \sinh \left[R_{1,k}\left(\eta\right)\right]\right)\\ \mathrm{C}_{0}=&\frac{1}{\sqrt{2} \sqrt{\mathrm{k}}}\left(\cosh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]\left(P \cos \left[\Phi_{2,k}\left(\eta\right)\right]+\mathrm{k} \sin \left[\Phi_{2,k}\left(\eta\right)\right]\right)+\left(-P\cos \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right]\right.\right.\\ &\left.\left.+\mathrm{k} \sin \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right]\right) \sinh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]\right)\\ \mathrm{c}_{0}=&\frac{P}{3 \sqrt{2} \sqrt{\mathrm{k}}}\left(\cos \left[\Phi_{1,k}\left(\eta\right)\right] \cosh \left[\mathrm{R}_{1,k}\left(\eta\right)\right]-\cos \left[2 \Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right] \sinh \left[\mathrm{R}_{1,k}\left(\eta\right)\right]\right)\\ \mathrm{C}_{4}=&\frac{1}{\sqrt{2} \sqrt{\mathrm{k}}}\left(-\cos \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right] \cosh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]+\cos \left[\Phi_{2,k}\left(\eta\right)\right] \sinh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]\right)\\ 
\mathrm{C}_{5}=&\frac{1}{\sqrt{\mathrm{k}}}\left(\sqrt{2} \sin \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right] \sinh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]\right)\\ \mathrm{C}_{6}=&\frac{1}{\sqrt{2} \sqrt{\mathrm{k}}}\left(-\cosh \left[\mathrm{R}_{2,k}\left(\eta\right)\right] \sin \left[\Phi_{2,k}\left(\eta\right)\right]+\sin \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right] \sinh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]\right)\\ {D}_{0}=&\frac{1}{\sqrt{2} \sqrt{\mathrm{k}}}\left(\cosh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]\left(-\mathrm{k} \cos \left[\Phi_{2,k}\left(\eta\right)\right]+P\sin \left[\Phi_{2,k}\left(\eta\right)\right]\right)-\left(\mathrm{k} \cos \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right]\right.\right.\\ &\left.\left.+P\sin \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right]\right) \sinh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]\right)\\ \mathrm{d}_{0}=&\frac{1}{3 \sqrt{2} \sqrt{\mathrm{k}}}\left(P\left(\cosh \left[\mathrm{R}_{1,k}\left(\eta\right)\right] \sin \left[\Phi_{1,k}\left(\eta\right)\right]-\sin \left[2 \Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right] \sinh \left[\mathrm{R}_{1,k}\left(\eta\right)\right]\right)\right)\\ \mathrm{D}_{4}=&\frac{1}{\sqrt{2} \sqrt{\mathrm{k}}}\left(-\cosh \left[\mathrm{R}_{2,k}\left(\eta\right)\right] \sin \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right]+\sin \left[\Phi_{2,k}\left(\eta\right)\right] \sinh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]\right) \\ \mathrm{D}_{5}=&\frac{-1}{\sqrt{\mathrm{k}}}\left(\sqrt{2} \cos \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right] \sinh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]\right)\\ \mathrm{D}_{6}=&\frac{1}{\sqrt{2} \sqrt{\mathrm{k}}}\left(\cos \left[\Phi_{2,k}\left(\eta\right)\right] \cosh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]-\cos \left[2 
\Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right] \sinh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]\right)\\ \mathrm{E}_{0}=&\frac{e^{i \Phi_{1,k}\left(\eta\right)}}{\sqrt{2}} \sqrt{\mathrm{k}}\left(\cosh \left[\mathrm{R}_{1,k}\left(\eta\right)\right]\left(-i P+k c_{s 1}^{2}\right)-e^{2 i \Theta_{1,k}\left(\eta\right)} \sinh \left[\mathrm{R}_{1,k}\left(\eta\right)\right]\left(i P+k c_{s1}^{2}\right)\right)\\ \mathrm{e}_{0}=&\frac{e^{i \Phi_{2,k}\left(\eta\right)}}{3 \sqrt{2} \sqrt{k}}\left((-i k P+3 \mathrm{~S}) \cosh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]+e^{2 i \Theta_{2,k}\left(\eta\right)} i(-k P+3 i \mathrm{~S}) \sinh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]\right)\\ \mathrm{E}_{1}=&\frac{e^{i \Phi_{1,k}\left(\eta\right)}}{\sqrt{2}} i \sqrt{\mathrm{k}}\left(e^{2 i \Theta_{1,k}\left(\eta\right)} \cosh \left[\mathrm{R}_{1,k}\left(\eta\right)\right]+\sinh \left[\mathrm{R}_{1,k}\left(\eta\right)\right]\right) \\ \mathrm{E}_{2}=&-\sqrt{2} e^{i\left(2 \Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right)} \sqrt{\mathrm{k}} \sinh \left[\mathrm{R}_{1,k}\left(\eta\right)\right] \\ \mathrm{E}_{3}=&-\frac{1}{\sqrt{2}} e^{i\left(\Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right)} \sqrt{\mathrm{k}}\left(\cosh \left[\mathrm{R}_{1,k}\left(\eta\right)+i \Theta_{1,k}\left(\eta\right)\right]+\sinh \left[\mathrm{R}_{1,k}\left(\eta\right)-i \Theta_{1,k}\left(\eta\right)\right]\right)\\ f_{0}=&\frac{e^{i \Phi_{1,k}\left(\eta\right)}}{3 \sqrt{2} \sqrt{k}}\left((-i k P+3 S) \cosh \left[R_{1,k}\left(\eta\right)\right]+e^{2 i \Theta_{1,k}\left(\eta\right)} i(-k P+3 i S) \sinh \left[R_{1,k}\left(\eta\right)\right]\right) \\ F_{0}=&\frac{-e^{i \Phi_{2,k}\left(\eta\right)}}{\sqrt{2}} \sqrt{k}\left(e^{2 i \Theta_{2,k}\left(\eta\right)} \sinh \left[R_{2,k}\left(\eta\right)\right]\left(i P+k c_{s 2}^{2}\right)+i \cosh \left[R_{2,k}\left(\eta\right)\right]\left(P+i k c_{s 2}^{2}\right)\right) \\ F_{4}=&\frac{e^{i 
\Phi_{2,k}\left(\eta\right)}}{\sqrt{2}} i \sqrt{k}\left(e^{2 i \Theta_{2,k}\left(\eta\right)} \cosh \left[R_{2,k}\left(\eta\right)\right]+\sinh \left[R_{2,k}\left(\eta\right)\right]\right) \\ F_{5}=&-\sqrt{2} e^{i\left(2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right)} \sqrt{k} \sinh \left[R_{2,k}\left(\eta\right)\right]\\ F_{6}=&-\frac{1}{\sqrt{2}} e^{i \Phi_{2,k}\left(\eta\right)} \sqrt{k}\left(\cosh \left[R_{2,k}\left(\eta\right)\right]+e^{2 i \Theta_{2,k}\left(\eta\right)} \sinh \left[R_{2,k}\left(\eta\right)\right]\right)\\ x_{1}=&\frac{1}{6 \sqrt{2\mathrm{k}}} \sin\left[2 \Theta_{1,k}\left(\eta\right)\right]\left(-2 \cos \left[\Phi_{1,k}\left(\eta\right)\right] \cosh \left[R_{1,k}\left(\eta\right)\right] \sin \left[2\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right]\right.\\ &+\left(\sin \left[2 \Theta_{2,k}\left(\eta\right)-\Phi_{1,k}\left(\eta\right)\right]+2 \sin \left[4 \Theta_{1,k}\left(\eta\right)-2\Theta_{2,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right]-\sin \left[4 \Theta_{1,k}\left(\eta\right)+2 \Theta_{2,k}\left(\eta\right)\right.\right.\\ &\left.\left.\left.+\Phi_{1,k}\left(\eta\right)\right]\right) \sinh \left[R_{1,k}\left(\eta\right)\right]\right)\\ x_{2}=&\frac{1}{12 \sqrt{2\mathrm{k}}} \sin \left[2 \Theta_{2,k}\left(\eta\right)\right]\left(2 \cos \left[\Phi_{1,k}\left(\eta\right)\right] \cosh \left[R_{1,k}\left(\eta\right)\right] \sin \left[2\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right]\right.\\ &+\left(\sin \left[2 \Theta_{2,k}\left(\eta\right)-\Phi_{1,k}\left(\eta\right)\right]+2 \sin \left[4 \Theta_{1,k}\left(\eta\right)-2 \Theta_{2,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right]\right.\\ &\left.\left.-\sin \left[4 \Theta_{1,k}\left(\eta\right)+2 \Theta_{2,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right]\right) \sinh \left[R_{1,k}\left(\eta\right)\right]\right)\\ x_{3}=&-\frac{1}{12 \sqrt{2} \sqrt{k}} \sin \left[2 
\Theta_{1,k}\left(\eta\right)\right]\left(2 \cos \left[\Phi_{1,k}\left(\eta\right)\right] \cosh \left[R_{1,k}\left(\eta\right)\right] \sin \left[2\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right]\right.\\ &+\left(\sin \left[2 \Theta_{2,k}\left(\eta\right)-\Phi_{1,k}\left(\eta\right)\right]+2 \sin \left[4 \Theta_{1,k}\left(\eta\right)-2 \Theta_{2,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right]-\sin \left[4 \Theta_{1,k}\left(\eta\right)+2 \Theta_{2,k}\left(\eta\right)\right.\right.\\ &\left.\left.\left. +\Phi_{1,k}\left(\eta\right)\right]\right) \sinh \left[R_{1,k}\left(\eta\right)\right]\right)\\ x_{4}=&\frac{1}{6 \sqrt{2 k}} \sin \left[2 \Theta_{2,k}\left(\eta\right)\right]\left(2 \cos \left[\Phi_{1,k}\left(\eta\right)\right] \cosh \left[R_{1,k}\left(\eta\right)\right] \sin \left[2\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right]\right.\\ &+\left(\sin \left[2 \Theta_{2,k}\left(\eta\right)-\Phi_{1,k}\left(\eta\right)\right]+2 \sin \left[4 \Theta_{1,k}\left(\eta\right)-2 \Theta_{2,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right]-\sin \left[4 \Theta_{1,k}\left(\eta\right)+2 \Theta_{2,k}\left(\eta\right)\right.\right.\\ &\left.\left.\left.+\Phi_{1,k}\left(\eta\right)\right]\right) \sinh \left[R_{1,k}\left(\eta\right)\right]\right)\\ x_{6}=&\frac{1}{6 \sqrt{2} \sqrt{k}}\left(\cosh \left[R_{1,k}\left(\eta\right)\right] \sin \left[2 \Theta_{2,k}\left(\eta\right)\right] \sin \left[2\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right] \sin [\Phi_{1,k}\left(\eta\right)]\right)\\ x_{7}=&\left(\cos \left[2\Theta_{2,k}\left(\eta\right)-\Phi_{1,k}\left(\eta\right)\right]-2 \cos \left[4 \Theta_{1,k}\left(\eta\right)-2 \Theta_{2,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right]+\cos \left[4 \Theta_{1,k}\left(\eta\right)+2 \Theta_{2,k}\left(\eta\right)\right.\right.\\ &\left.\left.+\Phi_{1,k}\left(\eta\right)\right]\right) \sin \left[2 \Theta_{2,k}\left(\eta\right)\right]\\ 
x_{8}=&\frac{1}{6 \sqrt{2} \sqrt{\mathrm{k}}} \sin \left[2 \Theta_{2,k}\left(\eta\right)\right] \left(\cosh \left[R_{1,k}\left(\eta\right)\right]\left(-\cosh \left[i\left(2\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)+\Phi_{1,k}\left(\eta\right)\right)\right]\right.\right.\\ &\left.+\cosh \left[2 i\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)-i \Phi_{1,k}\left(\eta\right)\right]\right)+\left(\cos \left[2 \Theta_{2,k}\left(\eta\right)-\Phi_{1,k}\left(\eta\right)\right]-2 \cos \left[4 \Theta_{1,k}\left(\eta\right)-2 \Theta_{2,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right]\right.\\ &\left.\left.+\cos \left[4 \Theta_{1,k}\left(\eta\right)+2 \Theta_{2,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right]\right) \sinh \left[R_{1,k}\left(\eta\right)\right]\right)\\ x_{9}=&\frac{1}{12 \sqrt{2} \sqrt{\mathrm{k}}} \sin \left[2 \Theta_{1,k}\left(\eta\right)\right] \left(\cosh \left[R_{1,k}\left(\eta\right)\right]\left(\cosh \left[i\left(2\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)+\Phi_{1,k}\left(\eta\right)\right)\right]-\cosh \left[2 i\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right.\right.\right.\\ &\left.\left.-i \Phi_{1,k}\left(\eta\right)\right]\right)-\left(\cos \left[2 \Theta_{2,k}\left(\eta\right)-\Phi_{1,k}\left(\eta\right)\right]-2 \cos \left[4 \Theta_{1,k}\left(\eta\right)-2 \Theta_{2,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right]\right.\\ &\left.\left.+\cos \left[4 \Theta_{1,k}\left(\eta\right)+2 \Theta_{2,k}\left(\eta\right)+\bar{\Phi}_{1,k}\left(\eta\right)\right]\right) \sinh \left[R_{1,k}\left(\eta\right)\right]\right)\\ x_{10}=&\frac{1}{6 \sqrt{2}(\sqrt{k})} \sin \left[2 \Theta_{2,k}\left(\eta\right)\right]\left(\cosh \left[\mathrm{R}_{2,k}\left(\eta\right)\right]\left(\cos \left[\Phi_{2,k}\left(\eta\right)\right] \sin \left[2\left(\Theta_{1,k}\left(\eta\right)-\Theta_{2,k}\left(\eta\right)\right)\right]\right.\right.\\ 
&\left.\left.-2 \sin \left[2 \Theta_{1,k}\left(\eta\right)\right] \sin \left[2 \Theta_{2,k}\left(\eta\right)\right] \sin \left[\Phi_{2,k}\left(\eta\right)\right]\right)+\cos \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right] \sin \left[2\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right] \sinh \left[R_{2,k}\left(\eta\right)\right]\right)\\ x_{11}=& \frac{-1}{3 \sqrt{2}\left(\sqrt{k}\right)} \sin \left[2 \Theta_{1,k}\left(\eta\right)\right]\left(\cosh \left[R_{2,k}\left(\eta\right)\right]\left(\cos \left[\Phi_{2,k}\left(\eta\right)\right] \sin \left[2\left(\Theta_{1,k}\left(\eta\right)-\Theta_{2,k}\left(\eta\right)\right)\right]\right.\right.\\ &\left.\left.-2 \sin \left[2 \Theta_{1,k}\left(\eta\right)\right] \sin \left[2 \Theta_{2,k}\left(\eta\right)\right] \sin \left[\Phi_{2,k}\left(\eta\right)\right]\right)+\cos \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right] \sin \left[2\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right] \sinh \left[R_{2,k}\left(\eta\right)\right]\right)\\ x_{12}=&-\frac{1}{6 \sqrt{2} \sqrt{k}} \sin \left[2 \Theta_{1,k}\left(\eta\right)\right] \left(\cosh \left[R_{2,k}\left(\eta\right)\right]\left(\cos \left[\Phi_{2,k}\left(\eta\right)\right] \sin \left[2\left(\Theta_{1,k}\left(\eta\right)-\Theta_{2,k}\left(\eta\right)\right)\right]\right.\right.\\ &\left.\left.-2 \sin \left[2 \Theta_{1,k}\left(\eta\right)\right] \sin \left[2 \Theta_{2,k}\left(\eta\right)\right] \sin \left[\Phi_{2,k}\left(\eta\right)\right]\right)+\cos \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\right] \sin \left[2\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right] \sinh \left[R_{2,k}\left(\eta\right)\right]\right)\\ x_{13}=& \frac{1}{3 \sqrt{2} \sqrt{k}} \sin \left[2 \Theta_{2,k}\left(\eta\right)\right] (\cosh \left[R_{2,k}\left(\eta\right)\right](\cos [\Phi_{2,k}\left(\eta\right)] \sin 
[2(\Theta_{1,k}\left(\eta\right)-\Theta_{2,k}\left(\eta\right))]\\ &\left.\left.-2 \sin \left[2 \Theta_{1,k}\left(\eta\right)\right] \sin \left[2 \Theta_{2,k}\left(\eta\right)\right] \sin \left[\Phi_{2,k}\left(\eta\right)\right]\right)+\cos \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right] \sin \left[2\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right] \sinh \left[R_{2,k}\left(\eta\right)\right]\right)\\ x_{14}=&-\frac{1}{3 \sqrt{2k}} \sin \left[2 \Theta_{1,k}\left(\eta\right)\right] \left(\cosh \left[R_{2,k}\left(\eta\right)\right]\left(2 \cos \left[\Phi_{2,k}\left(\eta\right)\right] \sin \left[2 \Theta_{1,k}\left(\eta\right)\right] \sin \left[2 \Theta_{2,k}\left(\eta\right)\right]+\sin \left[2\left(\Theta_{1,k}\left(\eta\right)\right.\right.\right.\right.\\ &\left.\left.\left.\left.-\Theta_{2,k}\left(\eta\right)\right)\right] \sin \left[\Phi_{2,k}\left(\eta\right)\right]\right)+\sin \left[2\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right] \sin \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right] \sinh \left[R_{2,k}\left(\eta\right)\right]\right)\\ x_{15}=& \frac{1}{6 \sqrt{2} \sqrt{k}} \sin \left[2 \Theta_{2,k}\left(\eta\right)\right] \left(\cosh \left[R_{2,k}\left(\eta\right)\right]\left(2 \cos \left[\Phi_{2,k}\left(\eta\right)\right] \sin \left[2 \Theta_{1,k}\left(\eta\right)\right] \sin \left[2 \Theta_{2,k}\left(\eta\right)\right]\right.\right.\\ &\left.\left.+\sin \left[2\left(\Theta_{1,k}\left(\eta\right)-\Theta_{2,k}\left(\eta\right)\right)\right] \sin \left[\Phi_{2,k}\left(\eta\right)\right]\right)+\sin \left[2\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right] \sin \left[2 \Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right] \sinh \left[R_{2,k}\left(\eta\right)\right]\right)\\ x_{16}=&\frac{1}{12 \sqrt{2}} \mathrm{i} \mathrm{e}^{\mathrm{i}(\Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right))} \sqrt{\mathrm{k}} 
\sin \left[2 \Theta_{1,k}\left(\eta\right)\right] \left({e}^{-\mathrm{i} \Theta_{1,k}\left(\eta\right)} \sin \left[2 \Theta_{2,k}\left(\eta\right)\right]\left(2 \cos \left[2 \Theta_{1,k}\left(\eta\right)\right] \cosh \left[\mathrm{R}_{1,k}\left(\eta\right)\right]\right.\right.\\ &\left.-\left(1-3 e^{4 i \Theta_{1,k}\left(\eta\right)}\right) \sinh \left[\mathrm{R}_{1,k}\left(\eta\right)\right]\right)+ 2 \cos \left[2 \Theta_{2,k}\left(\eta\right)\right] \sin \left[2 \Theta_{1,k}\left(\eta\right)\right]\left(\cosh \left[\mathrm{R}_{1,k}\left(\eta\right)-\mathrm{i} \Theta_{1,k}\left(\eta\right)\right]\right.\\ &\left.\left.-\sinh \left[\mathrm{R}_{1,k}\left(\eta\right)+i \Theta_{1,k}\left(\eta\right)\right]\right)\right)\\ x_{17}=&-\frac{1}{6 \sqrt{2}} i e^{i\left(\Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right)} \sqrt{k} \sin \left[2 \Theta_{2,k}\left(\eta\right)\right] \left(e^{-i \Theta_{1,k}\left(\eta\right)} \sin \left[2 \Theta_{2,k}\left(\eta\right)\right]\left(2 \cos \left[2 \Theta_{1,k}\left(\eta\right)\right] \cosh \left[R_{1,k}\left(\eta\right)\right]\right.\right.\\&\left.-\left(1-3 e^{4 i \Theta_{1,k}\left(\eta\right)}\right) \sinh \left[R_{1,k}\left(\eta\right)\right]\right)+ 2 \cos \left[2 \Theta_{2,k}\left(\eta\right)\right] \sin \left[2 \Theta_{1,k}\left(\eta\right)\right]\left(\cosh \left[R_{1,k}\left(\eta\right)-i \Theta_{1,k}\left(\eta\right)\right]\right.\\ &\left.\left.-\sinh \left[R_{1,k}\left(\eta\right)+i \Theta_{1,k}\left(\eta\right)\right]\right)\right)\\ x_{18}=& \frac{1}{6 \sqrt{2}} e^{i \Phi_{1,k}\left(\eta\right)} i \sqrt{\mathrm{k}} \sin \left[2 \Theta_{1,k}\left(\eta\right)\right] \left(\sin \left[2 \Theta_{2,k}\left(\eta\right)\right]\left(2 \cos \left[2 \Theta_{1,k}\left(\eta\right)\right] \cosh \left[R_{1,k}\left(\eta\right)\right]-\left(1-3 e^{4 i \Theta_{1,k}\left(\eta\right)}\right) \sinh \left[R_{1,k}\left(\eta\right)\right]\right)\right.\\ &\left.+2 e^{i \Theta_{1,k}\left(\eta\right)} \cos \left[2 
\Theta_{2,k}\left(\eta\right)\right] \sin \left[2 \Theta_{1,k}\left(\eta\right)\right]\left(\cosh \left[R_{1,k}\left(\eta\right)-i \Theta_{1,k}\left(\eta\right)\right]-\sinh \left[R_{1,k}\left(\eta\right)+i \Theta_{1,k}\left(\eta\right)\right]\right)\right)\\ x_{19}=& \frac{1}{12 \sqrt{2}} e^{i \Phi_{1,k}\left(\eta\right)} i \sqrt{k} \sin \left[2 \Theta_{2,k}\left(\eta\right)\right] \left(\sin \left[2 \Theta_{2,k}\left(\eta\right)\right]\left(2 \cos \left[2 \Theta_{1,k}\left(\eta\right)\right] \cosh \left[R_{1,k}\left(\eta\right)\right]-\left(1-3 e^{4 i \Theta_{1,k}\left(\eta\right)}\right) \sinh \left[R_{1,k}\left(\eta\right)\right]\right)\right.\\ &\left.+2 e^{i \Theta_{1,k}\left(\eta\right)} \cos \left[2 \Theta_{2,k}\left(\eta\right)\right] \sin \left[2 \Theta_{1,k}\left(\eta\right)\right]\left(\cosh \left[R_{1,k}\left(\eta\right)-i \Theta_{1,k}\left(\eta\right)\right]-\sinh \left[R_{1,k}\left(\eta\right)+i \Theta_{1,k}\left(\eta\right)\right]\right)\right)\\ x_{20}=& \frac{1}{6 \sqrt{2}} i e^{i\left(\Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right)} \sqrt{k} \sin \left[2 \Theta_{2,k}\left(\eta\right)\right] \left(e^{-i \Theta_{1,k}\left(\eta\right)} \sin \left[2 \Theta_{2,k}\left(\eta\right)\right]\left(2 \cos \left[2 \Theta_{1,k}\left(\eta\right)\right] \cosh \left[R_{1,k}\left(\eta\right)\right]\right.\right.\\ &\left.-\left(1-3 e^{4 i \Theta_{1,k}\left(\eta\right)}\right) \sinh \left[R_{1,k}\left(\eta\right)\right]\right)+2 \cos \left[2 \Theta_{2,k}\left(\eta\right)\right] \sin \left[2 \Theta_{1,k}\left(\eta\right)\right]\left(\cosh \left[R_{1,k}\left(\eta\right)-i \Theta_{1,k}\left(\eta\right)\right]\right.\\&\left.\left.-\sinh \left[R_{1,k}\left(\eta\right)+i \Theta_{1,k}\left(\eta\right)\right]\right)\right)\\ x_{21}=& \frac{1}{12 \sqrt{2}} i e^{i\left(\Theta_{1,k}\left(\eta\right)+\Phi_{1,k}\left(\eta\right)\right)} \sqrt{k} \sin \left[2 \Theta_{1,k}\left(\eta\right)\right] \left(e^{-i \Theta_{1,k}\left(\eta\right)} \sin \left[2 
\Theta_{2,k}\left(\eta\right)\right]\left(2 \cos \left[2 \Theta_{1,k}\left(\eta\right)\right] \cosh \left[R_{1,k}\left(\eta\right)\right]\right.\right.\\ &\left.-\left(1-3 e^{4 i \Theta_{1,k}\left(\eta\right)}\right) \sinh \left[R_{1,k}\left(\eta\right)\right]\right)+2 \cos \left[2 \Theta_{2,k}\left(\eta\right)\right] \sin \left[2 \Theta_{1,k}\left(\eta\right)\right]\left(\cosh \left[R_{1,k}\left(\eta\right)-i \Theta_{1,k}\left(\eta\right)\right]\right.\\ &\left.\left.-\sinh \left[R_{1,k}\left(\eta\right)+i \Theta_{1,k}\left(\eta\right)\right]\right)\right)\\ x_{23}=& \frac{1}{6 \sqrt{2}} e^{i\left(\Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right)} \sqrt{k} \sin \left[2 \Theta_{1,k}\left(\eta\right)\right] \\& \left(\cosh \left[R_{2,k}\left(\eta\right)-2 i \Theta_{1,k}\left(\eta\right)-3 i \Theta_{2,k}\left(\eta\right)\right]-\cosh \left[R_{2,k}\left(\eta\right)+2 i \Theta_{1,k}\left(\eta\right)-i \Theta_{2,k}\left(\eta\right)\right]\right.\\&-\cosh \left[R_{2,k}\left(\eta\right)-2 i \Theta_{1,k}\left(\eta\right)+i \Theta_{2,k}\left(\eta\right)\right] +\cosh \left[R_{2,k}\left(\eta\right)-i\left(2 \Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right] \\&\left.+2 i \cosh \left[R_{2,k}\left(\eta\right)\right] \sin \left[2 \Theta_{1,k}\left(\eta\right)-\Theta_{2,k}\left(\eta\right)\right]-2 i \cosh \left[R_{2,k}\left(\eta\right)+2 i\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right] \sin \left[\Theta_{2,k}\left(\eta\right)\right]\right)\\ x_{24}=& \frac{1}{12 \sqrt{2}} e^{i\left(\Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right)} \sqrt{k}\\ &\left(-\cosh \left[R_{2,k}\left(\eta\right)-2 i \Theta_{1,k}\left(\eta\right)-3 i \Theta_{2,k}\left(\eta\right)\right]+\cosh \left[R_{2,k}\left(\eta\right)+2 i \Theta_{1,k}\left(\eta\right)-i \Theta_{2,k}\left(\eta\right)\right]\right.\\&+\cosh \left[R_{2,k}\left(\eta\right)-2 i \Theta_{1,k}\left(\eta\right)+i \Theta_{2,k}\left(\eta\right)\right]- \cosh 
\left[R_{2,k}\left(\eta\right)-i\left(2 \Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right]\\&-2 i \cosh \left[R_{2,k}\left(\eta\right)\right] \sin \left[2 \Theta_{1,k}\left(\eta\right)-\Theta_{2,k}\left(\eta\right)\right]+2 i \cosh \left[R_{2,k}\left(\eta\right)\right.\\&\left.\left.+2 i\left(\Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right)\right] \sin \left[\Theta_{2,k}\left(\eta\right)\right]\right) \sin \left[2 \Theta_{2,k}\left(\eta\right)\right]\\ x_{25}=& \frac{1}{12 \sqrt{2}} e^{-R_{2,k}\left(\eta\right)+i\left(\Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right)} \sqrt{k} \sin \left[2 \Theta_{1,k}\left(\eta\right)\right] \left(-\cos \left[2 \Theta_{1,k}\left(\eta\right)-\Theta_{2,k}\left(\eta\right)\right]+\cos \left[2 \Theta_{1,k}\left(\eta\right)+3 \Theta_{2,k}\left(\eta\right)\right]\right.\\ &+2 i \cos \left[\Theta_{2,k}\left(\eta\right)\right] \sin \left[2 \Theta_{1,k}\left(\eta\right)\right]-2 e^{2 R_{2,k}\left(\eta\right)}\left(2 i \cos \left[\Theta_{2,k}\left(\eta\right)\right] \cos \left[2 \Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right]\right.\\ &\left.\left.+\sin \left[2 \Theta_{1,k}\left(\eta\right)\right]\right) \sin \left[\Theta_{2,k}\left(\eta\right)\right]\right)\\ x_{26}=& \frac{1}{6 \sqrt{2}} e^{-R_{2,k}\left(\eta\right)+i\left(\Theta_{2,k}\left(\eta\right)+\Phi_{2,k}\left(\eta\right)\right)} \sqrt{k} \left(-\cos \left[2 \Theta_{1,k}\left(\eta\right)-\Theta_{2,k}\left(\eta\right)\right]+\cos \left[2 \Theta_{1,k}\left(\eta\right)+3 \Theta_{2,k}\left(\eta\right)\right]\right.\\ &+2 i \cos \left[\Theta_{2,k}\left(\eta\right)\right] \sin \left[2 \Theta_{1,k}\left(\eta\right)\right]-2 e^{2 R_{2,k}\left(\eta\right)}\left(2 i \cos \left[\Theta_{2,k}\left(\eta\right)\right] \cos \left[2 \Theta_{1,k}\left(\eta\right)+\Theta_{2,k}\left(\eta\right)\right]\right.\\ &\left.\left.+\sin \left[2 \Theta_{1,k}\left(\eta\right)\right]\right) \sin 
\left[\Theta_{2,k}\left(\eta\right)\right]\right) \sin \left[2 \Theta_{2,k}\left(\eta\right)\right] \end{align*} \twocolumngrid \textbf{Corresponding author address:}\\ E-mail:~sayantan.choudhury@icts.res.in, \\ $~~~~~~~~~~~~$sayanphysicsisi@gmail.com
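For readers who wish to evaluate these coefficients numerically, the following sketch implements two of the simpler ones, $A_{0}$ and $A_{2}$, in Python. The sample values of $R_{1,k}$, $\Theta_{1,k}$, $\Phi_{1,k}$, $P$ and $\mathrm{k}$ are arbitrary placeholders (not taken from the text), and it is assumed that the overall $1/\sqrt{2\mathrm{k}}$ prefactor in $A_{0}$ multiplies both terms.

```python
import math

def A0(R, Theta, Phi, P, k):
    """A_0 = (1/sqrt(2k)) [cosh(R)(P cos(Phi) + k sin(Phi))
             + (-P cos(2*Theta + Phi) + k sin(2*Theta + Phi)) sinh(R)],
    assuming the 1/sqrt(2k) prefactor applies to both terms."""
    return (math.cosh(R) * (P * math.cos(Phi) + k * math.sin(Phi))
            + (-P * math.cos(2 * Theta + Phi) + k * math.sin(2 * Theta + Phi))
            * math.sinh(R)) / math.sqrt(2 * k)

def A2(R, Theta, Phi, k):
    """A_2 = (sqrt(2)/sqrt(k)) sin(2*Theta + Phi) sinh(R)."""
    return math.sqrt(2) / math.sqrt(k) * math.sin(2 * Theta + Phi) * math.sinh(R)

# Illustrative placeholder values:
print(A0(R=0.2, Theta=0.3, Phi=0.5, P=1.0, k=2.0))
print(A2(R=0.2, Theta=0.3, Phi=0.5, k=2.0))
```

Note that for $R_{1,k}=0$ the $\sinh$ terms vanish, so $A_{0}$ reduces to $(P\cos\Phi_{1,k}+\mathrm{k}\sin\Phi_{1,k})/\sqrt{2\mathrm{k}}$ and $A_{2}$ vanishes, which provides a quick consistency check.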
\section{Introduction} Thermally or compositionally-driven convection remains a fascinating area of research with diverse applications from geophysics, where it plays a key role in stirring the Earth's atmosphere \citep{Stevens2005} or inner core \citep{Roberts201557}, to nonlinear physics, where it is a canonical example of pattern formation and self-organization \citep{Cross1993}. Convection is often studied in the classical Rayleigh-B\'enard (RB) configuration both due to its simplicity and well-defined control parameters \citep{Bodenschatz2000}. Although this configuration is still actively studied and contributes to our fundamental understanding of convection processes, it is highly idealized compared to more realistic natural configurations involving non-uniform heating \citep{ROSSBY19659,killworth_manins_1980}, unsteady buoyancy forcing \citep{venezian_1969,Roppo1984,Singh2015}, complex geometries \citep{gastine_wicht_aurnou_2015,Toppa2015}, non-constant transport coefficients \citep{Tackley1996,Davaille1999}, compressible effects \citep{matthews_proctor_weiss_1995,Kogan1999,Verhoven2015}, overshooting and interactions with a stably-stratified region \citep{moore_weiss_1973,Couston2017}, etc. Among the phenomena neglected in classical RB convection, the possibility of a non-planar boundary is particularly interesting. The case of rough boundaries has been extensively studied due to its application to laboratory experiments \citep{du_tong_2000} while the case of large-scale topographies can significantly change the nature of convection both close to onset \citep{Kelly1978,Weiss2014} and in the super-critical regime \citep{Toppa2015,zhang_sun_bao_zhou_2018}. While the topography is usually fixed initially, many natural mechanisms can dynamically generate non-trivial topographies. 
The two-way coupling between a flow and an evolving boundary, whether due to erosion, melting or dissolution, has recently received some attention \citep{claudin_duran_andreotti_2017,ristroph_2018}, and is at the origin of many geological patterns \citep{Meakinrspa20090189}. Of interest here is the case of melting, where a natural mechanism able to dynamically generate non-trivial topographies is thermal convection itself. It can locally melt or freeze the solid boundaries as a result of non-uniform heat fluxes. This coupling between thermal convection and melting or freezing boundaries finds applications in various fields, from geophysics where it can affect the dynamics of the Earth's mantle and inner core \citep{Alboussiere2010,Labrosse2017}, the thermal evolution of magma oceans \citep{Ulvrova2012} or the melting of ice in oceans \citep{Martin1977,Keitzl2016}; to dendritic growth where it affects the structure of the growing solid phase \citep{BECKERMANN1999468}. Of particular interest to the present study is the work of \cite{Vasil2011}. They considered the gradual melting of a pure isothermal solid at the melting temperature, heated from below. As the solid melts, the liquid layer grows vertically until it reaches the critical height above which convection sets in. The linear stability of this system is not trivial since the equilibrium background is evolving with time due to the continuous melting \citep{Walton1982}. This has led previous authors to focus on the limit of large Stefan numbers, for which there is a time scale separation between the growth rate of the convection instability and the evolution of the background state \citep{Vasil2011}. Many theoretical and numerical studies concerned with this problem focus on a one-way coupling where the release of latent heat affects the buoyancy of the fluid, but the dynamical effect of the topography created by this phase change is often neglected \citep{Keitzl2016}.
There exists a variety of methods to take into account the evolving phase change boundary: enthalpy methods \citep{Voller90,Ulvrova2012}, Lattice-Boltzmann approaches \citep{Jiaung2001,Babak2018}, level set methods \citep{GIBOU2007536} and Arbitrary Lagrangian-Eulerian schemes \citep{MACKENZIE2002526,Ulvrova2012}. Here we consider a self-consistent framework where the free-boundary problem associated with the Stefan boundary condition is solved implicitly using a phase-field method \citep{Boettinger2002}. Adding moderate complexity to the regular Boussinesq equations, our approach is applied to the case of Rayleigh-B\'enard convection with a melting upper boundary. We focus on the particular case where the temperature of the solid is initially close to its melting temperature. This simple configuration does not allow for an equilibrium, since the solid phase is not cooled and will therefore continuously melt. Note also the simultaneous and independent study by \cite{Babak2018}, who considered a similar configuration, but mostly focused on global quantities such as the heat flux or the statistical properties of the interface. In addition to independently confirming some of their findings, we also present a detailed description of the transition between diffusive and convective regimes, discuss the secondary bifurcation which destabilizes the initial set of convective rolls and derive scalings for the melting velocity as a function of the Stefan number. The case where the system is both heated from below and cooled from above, which can lead to quasi-steady states, as in the experimental study of \cite{davis_muller_dietsche_1984}, will be studied later. The paper is structured as follows. The general formulation of the physical problem is presented in section~\ref{sec:model}. We then discuss how the free-boundary conditions are treated using a phase-field method in section~\ref{sec:methods}.
The phenomenology of the melting dynamics is described in section~\ref{sec:phenomeno} and we describe quantitatively the effect of varying the Stefan number in section~\ref{sec:st}. We finally conclude in section~\ref{sec:conclu}. \section{Formulation of the problem\label{sec:model}} We consider the evolution of a horizontal layer of a pure incompressible substance, heated from below. The domain is bounded above and below by two impenetrable, no-slip walls, a distance $H$ apart. The layer is two-dimensional with the $x$-axis in the horizontal direction and the $z$-axis in the vertical direction, pointing upwards. The gravity is pointing downwards $\bm{g}=-g\bm{e}_z$. The horizontal size of the domain is defined by the aspect ratio $\lambda$ so that the substance occupies the domain $0<z<H$ and $0<x<\lambda H$ and we consider periodic boundary conditions in the horizontal direction. We impose the temperature $T=T_1$ at the bottom rigid boundary and $T=T_0$ at the top rigid boundary with $T_0<T_1$. The melting temperature $T_M$ of the substance is such that $T_0<T_M<T_1$. Both liquid and solid phases of the substance are therefore coexisting inside the domain (see Figure~\ref{fig:schema}). In this paper, we focus on the particular case where the solid is isothermal so that $T_M=T_0$. For simplicity, we assume that both density $\rho$ and thermal diffusivity $\kappa_T$ are constant and equal in both phases. The kinematic viscosity of the fluid phase $\nu$ is also assumed constant. 
In the Boussinesq approximation, using the thermal diffusion time $H^2/\kappa_T$ as a reference time scale and the total depth of the layer $H$ as a reference length scale, the dimensionless equations for the fluid phase read~: \begin{align} \label{eq:momentum} \frac{1}{\sigma}\left(\frac{\partial\bm{u}}{\partial t}+\bm{u}\cdot\nabla\bm{u}\right) & =-\nabla P+ Ra \; \theta \; \bm{e}_z+\nabla^2\bm{u} \\ \frac{\partial \theta}{\partial t}+\bm{u}\cdot\nabla \theta & =\nabla^2\theta \\ \label{eq:div} \nabla\cdot\bm{u} & =0 \end{align} where $\bm{u}=\left(u, w\right)$ is the velocity, $\theta=(T-T_0)/(T_1-T_0)$ is the dimensionless temperature and the pressure $P$ has been made dimensionless according to $P_0=\rho \kappa_T \nu / H^2$. $Ra$ is the Rayleigh number and $\sigma$ is the Prandtl number defined in the usual way by~: \begin{equation} Ra=\frac{g\alpha_t\Delta TH^3}{\nu\kappa_T} \quad \textrm{and} \quad \sigma=\frac{\nu}{\kappa_T} \ . \end{equation} These dimensionless quantities involve $g$ the constant gravitational acceleration, $\alpha_t$ the coefficient of thermal expansion and $\Delta T=T_1-T_0$ the temperature difference between the two horizontal plates. For numerical convenience, the Prandtl number is fixed to be unity throughout the paper. Note that relevant applications such as the melting of ice shelves or geophysical situations involving liquid metals are respectively at high and very low Prandtl numbers. We nevertheless choose to reduce the large parameter space by considering the standard case $\sigma=1$, leaving the study of varying the Prandtl number to future work. In the solid phase, which we assume to be non-deformable, the dimensionless heat equation simplifies to~: \begin{equation} \label{eq:stefan1} \frac{\partial \theta}{\partial t}=\nabla^2\theta \ . \end{equation} \begin{figure} \vspace{5mm} \centering \includegraphics[width=0.75\textwidth]{./fig1} \caption{Schematic description of the problem considered.
The blue region corresponds to the solid phase and the white region to the liquid phase. $T_M$ is the melting temperature of the pure substance. In this paper, we focus on the particular case where the solid is isothermal so that $T_M=T_0$.\label{fig:schema}} \end{figure} The specificity of this configuration, compared to classical Rayleigh-B\'enard convection with a liquid phase only, lies in the boundary conditions at the interface between solid and liquid phases. They are given by the classical Stefan conditions \citep{woods_1992,batchelor2002perspectives} which we write in dimensionless form as \begin{align} \label{eq:st1} \theta & = \theta_M\\ \label{eq:st2} St \ \bm{v}\cdot\bm{n} & =\left(\nabla \theta^{(S)}-\nabla \theta^{(L)}\right) \cdot \bm{n} \ , \end{align} where $\theta_M=(T_M-T_0)/(T_1-T_0)$ is the dimensionless melting temperature ($0<\theta_M<1$), $\bm{n}$ is the local normal to the interface (pointing towards the liquid phase), $\bm{v}$ is the interface velocity and the superscript $^{(S)}$ (resp. $^{(L)}$) denotes the solid (resp. liquid) phase. $St$ is the Stefan number and corresponds to the ratio between latent and specific heats \begin{equation} \label{eq:stefan} St=\frac{\mathcal{L}}{c_p\Delta T} \ , \end{equation} where $\mathcal{L}$ is the latent heat per unit mass associated with the solid-liquid transition and $c_p$ is the specific heat capacity at constant pressure of the liquid. Since we assume that there are no density variations between the solid and liquid phases, and by continuity of the normal velocity, the interface is effectively impenetrable \citep{davis_muller_dietsche_1984}. We additionally consider the realistic case of no-slip boundary conditions on the interface. Finally, in this general formulation, we neglect the so-called Gibbs-Thomson effects associated with the surface energy of the solid-liquid interface \citep{batchelor2002perspectives}.
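As a concrete illustration of the two control parameters defined in this section, the sketch below evaluates $Ra$ and $\sigma$ from dimensional inputs. All numerical values are illustrative assumptions (loosely water-like), not parameters used in this paper.

```python
# Hedged sketch: evaluate the Rayleigh and Prandtl numbers from
# dimensional quantities. The input values are illustrative assumptions.

def rayleigh(g, alpha_t, delta_T, H, nu, kappa_T):
    """Ra = g * alpha_t * DeltaT * H^3 / (nu * kappa_T)."""
    return g * alpha_t * delta_T * H**3 / (nu * kappa_T)

def prandtl(nu, kappa_T):
    """sigma = nu / kappa_T."""
    return nu / kappa_T

# Loosely water-like values in SI units (assumed for illustration):
Ra = rayleigh(g=9.81, alpha_t=2e-4, delta_T=1.0, H=0.1,
              nu=1e-6, kappa_T=1.4e-7)
sigma = prandtl(nu=1e-6, kappa_T=1.4e-7)
```

With these values the layer is strongly supercritical ($Ra\sim10^7$) and $\sigma\approx7$, which is why the paper's choice $\sigma=1$ is a deliberate simplification of the parameter space.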
Note however that the phase-field model described in the following section includes such thermodynamical effects in order to derive a continuous model of the interface dynamics. \section{Phase-field and numerical methods\label{sec:methods}} In this paper, we focus on a fixed-grid method \citep{Voller90} where the spatial discretization of the physical domain is fixed with time and the interface is not explicitly tracked. Our motivation is to derive a model which can be directly implemented into any numerical code able to solve the Navier--Stokes equations in the Boussinesq approximation without major alterations. \subsection{Phase field approach for the interface\label{sec:pf}} In order to solve the previous dimensionless equations without having to impose the internal boundary conditions related to the interface, we introduce the continuous phase field or order parameter $\phi(x,z,t)$ such that $\phi=0$ in the solid phase and $\phi=1$ in the liquid. A thin interface of finite width in which $\phi$ takes values between zero and unity exists between the pure solid and liquid phases. Writing an evolution equation for the phase-field parameter can be done in several ways. The simpler derivation, which we briefly explain here, is the geometrical approach described in \cite{BECKERMANN1999468} and starts from the Gibbs--Thomson effect \begin{equation} \label{eq:gt1} \frac{v_n}{\mu} = T_M - T - \frac{\sigma_s T_M}{\mathcal{L}} \kappa \ , \end{equation} where $\mu$ is the mobility, $\sigma_s$ the surface tension, $\kappa$ the mean curvature of the front, and $v_n$ the normal velocity of the interface between the solid and the liquid phases. Although $\phi$ represents a finite-thickness interface, the normal velocity of the front can be related to the time-evolution of $\phi$ at a fixed value (for instance $\phi=1/2$), through the equation \begin{equation} \label{eq:v_n} v_n = \frac{\partial \phi / \partial t}{\left| \nabla \phi \right|} \ . 
\end{equation} Moreover, the curvature of the front can be computed in terms of $\phi$ through \begin{equation} \label{eq:curvature} \kappa = \nabla \cdot \bm{n} = \nabla \cdot \left( \frac{\nabla \phi}{\left| \nabla \phi \right|} \right)_{\phi=1/2} \ . \end{equation} Substituting equations~\eqref{eq:v_n} and \eqref{eq:curvature} into equation~\eqref{eq:gt1}, we obtain an evolution equation for $\phi$, in which the right-hand side depends only on $\nabla \phi$ and $\nabla^2 \phi$. However, this equation does not have a unique stationary solution. Therefore, the profile for $\phi$ has to be specified, a choice motivated by thermodynamic considerations. This leads us to the second approach for deriving the evolution equation for $\phi$, based on thermodynamics and described in detail in \cite{Wang1993} among others \citep{PENROSE199044,Karma1996}. The entropy of a given volume $V$ is represented by the functional \begin{equation} \label{eq:entropy} \mathcal{S}=\int_V \left[s-\frac{\delta^2}{2}\left(\nabla\phi\right)^2\right] \textrm{d}V \ , \end{equation} where $s(e,\phi)$ is the entropy density, $e$ is the internal energy density, $\phi$ the phase field and $\delta$ a constant. The second term on the right-hand side of equation~\eqref{eq:entropy} is analogous to the Landau-Ginzburg gradient term in the free energy and accounts for contributions from the liquid-solid interface. In order to ensure that the local entropy production is positive \citep{Wang1993}, the phase-field must evolve according to \begin{equation} \tau\frac{\partial\phi}{\partial t}= \left. \frac{\partial s}{\partial\phi} \right|_e + \delta^2\nabla^2\phi, \end{equation} where $\tau$ is a positive constant.
Following the thermodynamically-consistent derivation of \cite{Wang1993}, this leads to the following dimensional phase field equation \begin{equation} \label{eq:dimpf} \tau\frac{\partial\phi}{\partial t}=\delta^2\nabla^2\phi+Q(T)\frac{d p(\phi)}{d\phi}-\frac{1}{4a}\frac{d g(\phi)}{d\phi} \ , \end{equation} where $Q(T)$ is defined as \begin{equation} \label{eq:qto} Q(T)=\int_{T_M}^T\frac{\mathcal{L}(\zeta)}{\zeta^2} \textrm{d}\zeta \ . \end{equation} In the following, we assume that the latent heat $\mathcal{L}$ does not depend on temperature and that the temperature close to the interface is always approximately the melting temperature $T_M$, \textit{i.e.} $|T-T_M|\ll T_M$, so that equation~\eqref{eq:qto} can be simplified to \begin{equation} \label{eq:qt2} Q(T)\approx\frac{\mathcal{L}}{T_M^2}\left(T-T_M\right) \ . \end{equation} Note that the validity of this simplification can be questionable in our case since thermal boundary layers will develop close to the interface. We nevertheless checked its impact on our results by comparing the original function defined by \eqref{eq:qto} to its simplified version \eqref{eq:qt2}, and found no significant differences. The two functions $p(\phi)$ and $g(\phi)$ must be prescribed in order to close the model. While several choices exist in the literature, we use the prescription of \cite{Wang1993} which ensures that the solid and liquid phases correspond to $\phi=0$ and $\phi=1$, irrespective of the temperature distribution across both phases~: \begin{equation} g(\phi)=\phi^2\left(1-\phi\right)^2 \end{equation} and \begin{equation} \label{eq:pdp} p(\phi)=\frac{\int_0^{\phi}g(\xi)\textrm{d}\xi}{\int_0^{1}g(\xi)\textrm{d}\xi}=\phi^3\left(10-15\phi+6\phi^2\right) \ . \end{equation} The function $g(\phi)$ corresponds to a double-well and ensures that the phase-field is either equal to $0$ or $1$ everywhere except close to the liquid-solid interface where the phase change occurs.
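The closed form in equation~\eqref{eq:pdp} can be checked directly against its defining integral; a short sketch using a midpoint quadrature (pure Python, illustrative only):

```python
# Sketch: verify p(phi) = phi^3 (10 - 15 phi + 6 phi^2) against the
# defining normalized integral of g(phi) = phi^2 (1 - phi)^2,
# for which int_0^1 g = 1/30. Midpoint rule quadrature.

def g(phi):
    return phi**2 * (1.0 - phi)**2

def p_closed(phi):
    return phi**3 * (10.0 - 15.0 * phi + 6.0 * phi**2)

def p_quadrature(phi, n=10000):
    h = phi / n
    integral = sum(g((k + 0.5) * h) for k in range(n)) * h
    return integral / (1.0 / 30.0)   # normalize by int_0^1 g(xi) dxi

# p interpolates monotonically between p(0) = 0 and p(1) = 1:
assert p_closed(0.0) == 0.0 and p_closed(1.0) == 1.0
```

Both endpoints of $p$ are independent of the temperature field, which is precisely the property motivating this prescription in the text.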
The positive constant $a$ in equation~\eqref{eq:dimpf} is related to the amplitude of the potential barrier between the two equilibria. The function $p(\phi)$ ensures a continuous transition between each extremum value of $\phi$. Note that in a steady one-dimensional configuration, and assuming that $T=T_M$, equation~\eqref{eq:dimpf} leads to a simple analytical profile for the phase variable around the interface located at $x=x_i$ given by \begin{equation} \label{eq:eqp} \phi(x)=\frac12\Big[1-\tanh\left(\frac{x-x_i}{2\sqrt{2a}\delta}\right)\Big] \ , \end{equation} assuming that $\phi=1$ as $x\rightarrow-\infty$ and $\phi=0$ as $x\rightarrow+\infty$. The diffuse interface has therefore a characteristic thickness equal to $\delta \sqrt{a}$. The corresponding dimensional temperature equation is given by \citep{Wang1993}: \begin{equation} \label{eq:tempeq_dim} \frac{\partial T}{\partial t} +\bm{u}\cdot\nabla T = \kappa_T\nabla^2 T - \frac{\mathcal{L}}{c_p}\frac{\partial p(\phi)}{\partial t} \end{equation} where the last term corresponds to the release or absorption of latent heat as the phase field varies in time. Note that the fluid is assumed to be at rest in \cite{Wang1993}, but other phase field models have since included the advection term \citep{BECKERMANN1999468,ANDERSON2000175}. 
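The equilibrium profile~\eqref{eq:eqp} can also be verified numerically: setting $T=T_M$ (so that $Q=0$) and $\partial\phi/\partial t=0$ in equation~\eqref{eq:dimpf} leaves $\delta^2\phi''=\frac{1}{4a}\,\mathrm{d}g/\mathrm{d}\phi$. A sketch with arbitrary illustrative values of $\delta$ and $a$:

```python
# Sketch: check that phi(x) = (1/2)[1 - tanh((x - x_i)/(2 sqrt(2a) delta))]
# satisfies the stationary 1D phase-field equation at T = T_M,
#   delta^2 phi'' = (1/(4a)) dg/dphi,  with g(phi) = phi^2 (1 - phi)^2.
# delta, a, x_i are arbitrary illustrative values.
import math

delta, a, x_i = 0.7, 0.3, 0.0

def phi(x):
    return 0.5 * (1.0 - math.tanh((x - x_i)
                                  / (2.0 * math.sqrt(2.0 * a) * delta)))

def dg_dphi(p):
    return 2.0 * p * (1.0 - p) * (1.0 - 2.0 * p)

h = 1e-4  # step for the centred second difference
residual = max(
    abs(delta**2 * (phi(x + h) - 2.0 * phi(x) + phi(x - h)) / h**2
        - dg_dphi(phi(x)) / (4.0 * a))
    for x in (-1.0, -0.3, 0.0, 0.4, 1.2)
)
```

The residual vanishes to finite-difference accuracy at every sample point, confirming the tanh profile as the exact stationary solution.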
Using the same non-dimensionalization as in section \ref{sec:model}, the phase-field equation \eqref{eq:dimpf} and the temperature equation \eqref{eq:tempeq_dim} read \begin{eqnarray} \label{eq:pf} \frac{\epsilon^2}{m}\frac{\partial\phi}{\partial t} & = & \epsilon^2\nabla^2\phi + \frac{\alpha\epsilon}{St} \left(\theta-\theta_M\right)\frac{dp}{d\phi} - \frac{1}{4}\frac{d g}{d\phi} \ , \\ \label{eq:tempeq_adim} \frac{\partial\theta}{\partial t} & = & -\bm{u}\cdot\nabla \theta + \nabla^2\theta-St\frac{dp}{d\phi}\frac{\partial\phi}{\partial t} \ , \end{eqnarray} where \begin{equation} \alpha=\frac{\mathcal{L}^2 H \sqrt{a}}{\delta c_p T_M^2} \end{equation} is the coupling parameter between the phase field and the temperature field. The dimensionless interface thickness and mobility are respectively: \begin{equation} \epsilon=\frac{\delta \sqrt{a}}{H} \ , \qquad m=\frac{\delta^2}{\tau\kappa_T} \ , \end{equation} and the Stefan number is defined in equation~\eqref{eq:stefan}. It is clear from equation~\eqref{eq:eqp} that $\epsilon$ represents the typical interface thickness in the dimensionless space. Since we only consider cases where the bottom boundary is in the liquid phase while the top boundary is in the solid phase, we impose Dirichlet boundary conditions on the phase field \begin{equation} \phi\vert_{z=0}=1 \quad \text{and} \quad \phi\vert_{z=1}=0 \ , \end{equation} and we recall that we impose the temperature at the boundaries \begin{equation} \theta\vert_{z=0}=1 \quad \text{and} \quad \theta\vert_{z=1}=0 \ . \end{equation} This phase-field model was initially derived in a much more general context than the classical Stefan problem, focusing on the micro-physics of solidification. It is indeed consistent with the Gibbs--Thomson effects where the temperature at the interface is not exactly the melting temperature, but additionally depends on the local curvature and velocity of the interface.
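The way the phase-field parameters are chosen in practice can be sketched as follows: $\epsilon$ is tied to the grid spacing and $\alpha$ taken of order $St/\epsilon$, so that the sharp-interface limits $\epsilon\ll1$ and $St/\alpha\ll1$ required to recover the Stefan problem are both satisfied. The proportionality constants below are illustrative assumptions.

```python
# Sketch: choose epsilon and alpha so that the sharp-interface limits
# epsilon << 1 and St/alpha << 1 hold. Constants are illustrative.

def phase_field_parameters(dz, St, c_eps=1.0, c_alpha=1.0):
    epsilon = c_eps * dz               # interface thickness ~ grid spacing
    alpha = c_alpha * St / epsilon     # then St/alpha ~ epsilon << 1
    return epsilon, alpha

# Roughly case-C-like numbers: 1024 vertical grid points, St = 1
St = 1.0
epsilon, alpha = phase_field_parameters(dz=1.0 / 1024, St=St)
```

With these inputs $\epsilon\approx10^{-3}$ and $St/\alpha\approx10^{-3}$, comparable to the values listed for case C in table~\ref{tab:one}.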
Following the asymptotic analysis of \cite{Caginalp1989}, \cite{Wang1993} showed that in the limit of a vanishing interface thickness $\epsilon\rightarrow0$, the following boundary condition applies at the interface \begin{equation} \label{eq:stefan_bc} \theta-\theta_M=-\frac{St}{\alpha}\left(\kappa+\frac{v_i}{m}\right) \ , \end{equation} where the parameter $St/\alpha$ can be seen as a dimensionless capillary length, $\kappa$ is the dimensionless interfacial curvature and $v_i$ is the normal velocity of the interface. Thus, in the additional limit where $St/\alpha\rightarrow0$, and for finite curvature and interface velocity, we recover the original Stefan boundary condition \eqref{eq:st1} where $\theta=\theta_M$ at the interface, as predicted by \cite{Caginalp1989}. The value of the mobility is irrelevant provided that the two limits above are respected. In conclusion, the original Stefan problem is recovered provided that \begin{align} \label{eq:limit1} \epsilon & \ll 1 \\ \label{eq:limit2} \frac{St}{\alpha} & \ll 1 \end{align} while the mobility is fixed to be unity here. In practice, all the additional parameters introduced by the phase-field formulation are in fact strongly constrained by the limits~\eqref{eq:limit1}-\eqref{eq:limit2}. $\epsilon$ is typically proportional to the numerical grid size in order to accurately solve for the interface region whereas $\alpha$ is limited by stability constraints. For the interested reader, the effect of these parameters is discussed in more detail in Appendix~\ref{sec:appA}. In the following, the interface thickness $\epsilon$ is comparable to the smallest grid size whereas $\alpha$ is typically of order $St/\epsilon$ so that both limits~\eqref{eq:limit1} and \eqref{eq:limit2} are satisfied. \subsection{Navier--Stokes and heat equations} The phase-field model described above satisfies the thermal Stefan conditions at the interface given by equations~\eqref{eq:st1}-\eqref{eq:st2}.
We also have to ensure that the interface corresponds to a no-slip boundary condition for the velocity. Here we choose an immersed boundary method \citep{Mittal2005} called the volume penalization method. The no-slip boundary condition at the liquid-solid interface is implicitly taken into account by adding a volume force to the classical Navier--Stokes equations, solved simultaneously in both liquid and solid domains, leading to \begin{equation} \label{eq:mompen} \frac{1}{\sigma}\left(\frac{\partial\bm{u}}{\partial t}+\bm{u}\cdot\nabla\bm{u}\right)=-\nabla P+Ra\, \theta \, \bm{e}_z+\nabla^2\bm{u}-\frac{\left(1-\phi\right)^2\bm{u}}{\eta} \ , \end{equation} where the last term is the penalization term and $\eta$ is a positive parameter. The incompressibility condition \eqref{eq:div} is imposed everywhere so that the total volume is necessarily conserved. The penalized equation \eqref{eq:mompen} converges towards the Navier--Stokes equations with a no-slip boundary condition imposed at the interface \citep{Angot1999}. The error between the original Navier--Stokes equations and their penalized version scales like $\sqrt{\eta}$ so that $\eta$ is taken as small as possible. Taking $\eta$ small also ensures that this term is dominant when $\phi=0$ (\textit{i.e.} in the solid), so that the velocity decays exponentially to zero on a timescale proportional to $\eta$. When $\phi=1$ (\textit{i.e.} in the liquid), the penalization term vanishes and the regular Navier--Stokes equations \eqref{eq:momentum} are solved. Note that the particular choice $(1-\phi)^2$ in the numerator of the penalization term is arbitrary and any continuous function that is zero in the liquid and unity in the solid is adequate. For example, in the case of porous media, a more complex Carman--Kozeny permeability function $(1-\phi)^2/\phi^2$ can be prescribed, and the momentum and mass conservation equations can also be modified \citep{BECKERMANN1999468,LeBars2006}.
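The exponential damping inside the solid can be illustrated in isolation: with $\phi=0$, $\sigma=1$ and the pressure, buoyancy and viscous terms neglected, equation~\eqref{eq:mompen} reduces to $\mathrm{d}u/\mathrm{d}t=-u/\eta$. A minimal sketch with illustrative values of $\eta$ and $\mathrm{d}t$ (chosen to respect the explicit stability constraint $\mathrm{d}t<\eta$ quoted in the text):

```python
# Sketch: velocity damping by the penalization term alone in the solid
# (phi = 0), du/dt = -u/eta, integrated with explicit Euler.
# eta and dt are illustrative values satisfying dt < eta.

eta = 1e-3
dt = 2e-4
u = 1.0
for _ in range(50):        # integrate up to t = 50 dt = 10 eta
    u -= dt * u / eta
# after ten decay times the residual velocity is negligible
```

This is the mechanism by which the solid region is kept effectively at rest without ever imposing the interface position explicitly.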
Here we choose the simplest approach of using the phase variable directly to prescribe our penalization term. The quadratic form is chosen in order to match the Carman--Kozeny permeability for $\phi\rightarrow1$ while recovering a value of unity for $\phi\rightarrow0$. Finally, note that the results discussed in this paper do not depend qualitatively on this particular prescription of the penalization term; it only affects the detailed structure of the transition between the solid and liquid phase which occurs on length scales typically smaller than the thermal and viscous boundary layers. A comparative study of the different possible expressions for the penalization term, expressed as a function of the phase field parameter, is beyond the scope of this paper but would nevertheless prove useful to improve the convergence properties of the current model. The penalization parameter is chosen as small as possible, noting that an explicit treatment of the penalization term leads to stability constraints, typically $\textrm{d}t<\eta$ \citep{KOLOMENSKIY20095687} where $\textrm{d}t$ is the time step. In the following, the penalization parameter is chosen so that $\eta=2\textrm{d}t$. We now suppose that the system is two-dimensional which naturally leads to a stream-function formulation of the Navier--Stokes equations. The stream-function $\psi$ is defined by $\bm{u}=-\nabla\times\left(\psi\bm{e}_y\right)$ or \begin{equation} u=\frac{\partial\psi}{\partial z} \quad \textrm{and} \quad w=-\frac{\partial\psi}{\partial x} \ .
\end{equation} Taking the curl of equation~\eqref{eq:momentum} and projecting onto the $y$-direction leads to the vorticity equation for a two-dimensional flow in the $(x,z)$ plane \begin{equation} \label{eq:sf} \frac{\partial\nabla^2\psi}{\partial t}+\frac{\partial\psi}{\partial z}\frac{\partial\nabla^2\psi}{\partial x}-\frac{\partial\psi}{\partial x}\frac{\partial\nabla^2\psi}{\partial z}=-\sigma Ra \frac{\partial \theta}{\partial x}+\sigma\nabla^4\psi-\frac{\sigma}{\eta}\left(\nabla\times(1-\phi)^2\bm{u}\right)\cdot\bm{e}_y \ . \end{equation} The no-slip boundary conditions at the top and bottom boundaries correspond to \begin{equation} \left.\psi\right\vert_{z=0,1}=0 \quad \text{and} \quad \left.\frac{\partial\psi}{\partial z}\right\vert_{z=0,1}=0 \ . \end{equation} Similarly, the heat equation, including the phase-field term and the stream-function decomposition, leads to \begin{equation} \label{eq:temp} \frac{\partial\theta}{\partial t}+\frac{\partial\psi}{\partial z}\frac{\partial\theta}{\partial x}-\frac{\partial\psi}{\partial x}\frac{\partial\theta}{\partial z}=\nabla^2\theta-St\frac{dp}{d\phi}\frac{\partial\phi}{\partial t} \ . \end{equation} \subsection{Spatial and temporal discretizations} Equations~\eqref{eq:sf}, \eqref{eq:temp} and \eqref{eq:pf} are solved using a mixed pseudo-spectral finite-difference code. This code has been used in various contexts, from fully-compressible convection \citep{matthews_proctor_weiss_1995,favier2012} to rapidly-rotating Boussinesq convection \citep{favier2014}. Each variable is assumed to be periodic in the $x$-direction and is written as \begin{equation} f(x,z)=\sum_{n_x}\hat{f}(n_x,z)\exp\left(\textrm{i}k_xx\right)+\textrm{cc} \ , \end{equation} where $n_x$ is an integer, $\textrm{cc}$ stands for conjugate terms and the wave number is defined as \begin{equation} k_x=\frac{2\pi n_x}{\lambda} \ .
\end{equation} Horizontal spatial derivatives are computed in spectral space whereas vertical derivatives are discretized using a fourth-order finite-difference scheme. For the stream-function, the dissipative fourth-order term is solved implicitly whereas the advective, temperature and penalization terms are solved explicitly. This is achieved using a classical second-order Crank-Nicolson scheme for the implicit part coupled with a third-order Adams-Bashforth scheme for the explicit part. For the temperature and the phase field, we use a fully explicit third-order Adams-Bashforth scheme. An explicit treatment of these equations is indeed easier due to the nature of the coupling term on the right-hand side of equation~\eqref{eq:temp}. Note that an implicit scheme could be used to solve these equations (see for example \cite{Andersson2002}) but the stability constraint associated with solving explicitly both diffusive terms in equations~\eqref{eq:temp}-\eqref{eq:pf} is not very limiting in our two-dimensional case. We have tested the convergence of our numerical scheme in Appendix~\ref{sec:a3}. \section{Phenomenology of the melting dynamics\label{sec:phenomeno}} We consider the following set of initial conditions, which only depends on the vertical coordinate $z$: \begin{align} \label{eq:uinit} \bm{u}(t=0) & = \bm{0} \\ \label{eq:tinit} \theta(t=0) & = \left\{ \begin{aligned} 1+\left(\theta_M-1\right)z/h_0 & \quad \textrm{if} \quad z\le h_0\\ \theta_M\left(z-1\right)/\left(h_0-1\right) & \quad \textrm{if} \quad z>h_0\\ \end{aligned} \right. \\ \label{eq:pinit} \phi(t=0) & = \frac12\Bigg[1-\tanh\left(\frac{z-h_0}{2\sqrt{2}\epsilon}\right)\Bigg] \end{align} where $h_0$ is the initial position of the planar solid-liquid interface. It corresponds to a simple piece-wise linear temperature profile with a heat flux discontinuity at $z=h_0$. Depending on the values of $\theta_M$ and $h_0$, this can lead to situations dominated by freezing or melting.
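The horizontal spectral differentiation described above can be sketched with a plain discrete Fourier transform: each coefficient is multiplied by $\mathrm{i}k_x$ with $k_x=2\pi n_x/\lambda$. The naive $O(N^2)$ transform below is for illustration only; a production code would use an FFT.

```python
# Sketch: d/dx of an x-periodic field via the DFT, multiplying each
# Fourier coefficient by i*k_x with k_x = 2*pi*n_x/lambda.
import cmath
import math

def ddx_spectral(f, lam):
    """Spectral x-derivative of real samples f on [0, lam)."""
    N = len(f)
    # forward DFT
    F = [sum(f[j] * cmath.exp(-2j * math.pi * k * j / N)
             for j in range(N)) / N
         for k in range(N)]
    # multiply by i*k_x and transform back
    df = []
    for j in range(N):
        s = 0j
        for k in range(N):
            n_x = k if k <= N // 2 else k - N     # signed mode index
            s += 1j * (2.0 * math.pi * n_x / lam) * F[k] \
                 * cmath.exp(2j * math.pi * k * j / N)
        df.append(s.real)
    return df

# single-mode check: d/dx sin(2 pi x / lam) = (2 pi / lam) cos(2 pi x / lam)
lam, N = 8.0, 32
x = [lam * j / N for j in range(N)]
df = ddx_spectral([math.sin(2.0 * math.pi * xx / lam) for xx in x], lam)
```

For a resolved single mode the derivative is exact to round-off, which is the usual motivation for treating the periodic direction spectrally.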
In this paper, we focus on the gradual melting of a solid that is initially nearly isothermal with a temperature close to the melting temperature. In our dimensionless system, this corresponds to $\theta_M\ll1$. In that configuration, no equilibrium is expected and the solid phase continuously melts until the top boundary $z=1$ is reached and only the liquid phase remains. Note that we do not consider the limit case $\theta_M=0$ in order to avoid numerical issues in the phase field equation~\eqref{eq:pf}. In that case, the coupling term proportional to $\theta-\theta_M$ vanishes in the whole solid, which can lead to issues in the localization of the interface. The results discussed in this paper are obtained using a typical value of $\theta_M=0.05$. This ensures that the heat conduction in the solid plays a negligible role in the dynamics so that the evolution of the interface is solely due to the heat flux in the liquid phase (this has been checked by varying the value of $\theta_M$). We now define several quantities that will prove useful later. The position of the interface $h(x,t)$, which we assume to be single-valued, evolves in space and time and is implicitly defined as \begin{equation} \phi(x,z=h,t)=1/2 \ . \end{equation} It is useful to define the effective Rayleigh number of the fluid layer, based on the actual temperature gradient across the depth of the fluid layer \begin{equation} \label{eq:era} Ra_e=Ra\left(1-\theta_M\right)\overline{h}^3 \ , \end{equation} where we introduce the averaged fluid height defined as \begin{equation} \overline{h}(t)=\frac{1}{\lambda}\int_0^{\lambda}h(x,t)\textrm{d}x \ , \label{meanheight} \end{equation} where $\lambda$ is the dimensionless horizontal length of the domain. In the following, the operator $\overline{\ \cdot \ }$ corresponds to a horizontal spatial average. For simplicity, and by analogy with classical Rayleigh-B\'enard convection, we only work with the heat flux injected at the bottom boundary.
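Equations~\eqref{eq:era} and \eqref{meanheight} are straightforward to evaluate from samples of the interface position; a minimal sketch (a plain mean over equispaced samples stands in for the integral, and the numbers reproduce the initial effective Rayleigh number quoted in section~\ref{sec:crit} for $h_0=0.45$):

```python
# Sketch: horizontally averaged fluid height and effective Rayleigh
# number Ra_e = Ra (1 - theta_M) hbar^3, from equispaced samples of h(x).

def mean_height(h_samples):
    return sum(h_samples) / len(h_samples)

def effective_rayleigh(Ra, theta_M, h_samples):
    return Ra * (1.0 - theta_M) * mean_height(h_samples)**3

# flat interface at h = 0.45 with Ra = 15180 and theta_M = 0.1 (case A)
Ra_e = effective_rayleigh(15180.0, 0.1, [0.45] * 16)
```

This recovers $Ra_e\approx1245$, the upper value of the initial effective Rayleigh numbers used in the threshold study below.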
The heat flux consumed at the solid-liquid interface to melt the solid could equally be used, although it is more complicated to measure numerically. A detailed discussion of the different measures of the heat flux in this system can be found in \cite{Babak2018}. The heat flux injected into the fluid is \begin{equation} \label{eq:thf} Q_W=-\frac{1}{\lambda}\int_{0}^{\lambda} \frac{\partial\theta}{\partial z} \bigg\rvert_{z=0} \textrm{d}x \end{equation} so that the Nusselt number can be defined in first approximation (see section~\ref{sec:hflux} for a more detailed discussion) by \begin{equation} \label{eq:nuss} Nu=\frac{Q_W}{Q_D}=\frac{Q_W\overline{h}}{1-\theta_M} \end{equation} where we have introduced the reference diffusive heat flux $Q_D$ which can be approximated for now by $(1-\theta_M)/\overline{h}$. \begin{table} \begin{center} \begin{tabular}{ccccccccc} Case & $N_x$ & $N_z$ & $Ra$ & $St$ & $\theta_M$ & $\lambda$ & $\epsilon$ & $\alpha$ \\[3pt] A & $\quad512\quad$ & $\quad256\quad$ & $\quad15180\quad$ & $10$ & $\quad0.1\quad$ & $\quad8\quad$ & $\quad4\times10^{-3}\quad$ & $2500$\\ B & $256$ & $256$ & $6\times10^5$ & $10$ & $0.05$ & $1$ & $2\times10^{-3}$ & $5000$\\ C & $4096$ & $1024$ & $10^8$ & $1$ & $0.05$ & $8$ & $10^{-3}$ & $1000$\\ D & $1024$ & $512$ & $10^7$ & $[0.02:50]$ & $0.05$ & $6$ & $3\times10^{-3}$ & $[10:20000]$\\ \end{tabular} \caption{List of numerical parameters for the different cases discussed in this study.\label{tab:one}} \end{center} \end{table} \subsection{Critical Rayleigh number\label{sec:crit}} We focus here on the transition between a purely diffusive regime and a convection regime as the fluid depth increases with time. We therefore consider the case where the initial height $h_0$ is small enough so that the initial conditions given by equations~\eqref{eq:uinit}-\eqref{eq:pinit} are stable. 
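The wall heat flux~\eqref{eq:thf} and the Nusselt number~\eqref{eq:nuss} can be sketched as follows; as a sanity check, a purely conductive profile across a layer of depth $\overline{h}$ gives $Nu=1$ by construction (the one-sided finite difference over the first grid spacing is an illustrative choice, not necessarily the discretization of the actual code):

```python
# Sketch: Nusselt number from the bottom-wall heat flux,
# Nu = Q_W * hbar / (1 - theta_M), where Q_W is the horizontal average
# of -dtheta/dz at z = 0 (one-sided difference over the first spacing).

def wall_flux(theta_z0, theta_z1, dz):
    """theta_z0, theta_z1: theta on the first two z-levels, sampled in x."""
    grads = [(b - a) / dz for a, b in zip(theta_z0, theta_z1)]
    return -sum(grads) / len(grads)

def nusselt(Q_W, hbar, theta_M):
    return Q_W * hbar / (1.0 - theta_M)

# sanity check: conductive profile theta = 1 - (1 - theta_M) z / hbar
hbar, theta_M, dz, nx = 0.5, 0.05, 1e-2, 8
t0 = [1.0] * nx
t1 = [1.0 - (1.0 - theta_M) * dz / hbar] * nx
Nu = nusselt(wall_flux(t0, t1, dz), hbar, theta_M)
```

Departures of $Nu$ from unity therefore measure the convective enhancement of the heat transport across the fluid layer.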
It has been shown by \cite{Vasil2011} that the convection threshold can be modified compared to the classical Rayleigh-B\'enard problem and that a morphological mode grows as soon as $Ra(1-\theta_M)\overline{h}^3>Ra_c\approx1295.78$. This corresponds to a significant modification of the stability criterion compared to the case of classical no-slip RB convection, for which $Ra_c\approx1707.76$ \citep{Chandra1961}. The most unstable wave number is also reduced from $k_c\approx3.116$ to $k_c\approx2.552$. These results are however only valid in the asymptotic limit of large Stefan numbers. While this regime is virtually impossible to reach numerically using the current approach (there exists a time-scale separation between the dynamics of the flow and that of the interface), we nevertheless explore this critical transition for finite Stefan numbers in the following. We start from the initial conditions defined by equations~\eqref{eq:uinit}-\eqref{eq:pinit} with various initial heights from $h_0=0.33$ to $h_0=0.45$, and $\theta_M=0.1$. Using a global Rayleigh number of $Ra=15180$, this leads to an initial effective Rayleigh number varying between $Ra_e(t=0)=491$ and $1245$. We start from infinitesimal temperature perturbations in the liquid layer only. We consider a case where $St=10$, which is large enough to get a reasonable timescale separation while still being accessible numerically. The other numerical parameters are given in Table~\ref{tab:one} and correspond to case A. We define the kinetic energy density in the system by \begin{equation} \mathcal{K}(t)=\frac{1}{V_f}\int_{V_f}\bm{u}^2 \textrm{d}V \ , \end{equation} where $V_f(t)$ is the volume of fluid as a function of time. The time evolution of the kinetic energy density for various initial heights $h_0$ is shown in Figure~\ref{fig:threshold}. Initially, the kinetic energy in the system briefly increases.
This is a consequence of our choice of initial conditions for which the fluid is at rest and temperature perturbations only are added. After this short transient, the kinetic energy density decreases with time for all cases. Surprisingly, the kinetic energy starts to grow for different effective Rayleigh numbers in each case, as early as $Ra_e\approx650$ for the smallest initial height of $h_0=0.33$. The growth rate of this first phase is however much weaker than the typical growth rate when the effective Rayleigh number becomes larger than the classical value of $1707.76$. We therefore do not observe a clear transition between stable and unstable behaviours at a given critical Rayleigh number, which seems to indicate that perturbations can grow at any value of the effective Rayleigh number. \begin{figure} \vspace{5mm} \centering \includegraphics[width=0.8\textwidth]{./fig2} \caption{Kinetic energy density in the fluid domain versus the time-varying Rayleigh number defined by equation~\eqref{eq:era}. Each curve corresponds to a different initial fluid depth $h_0$ in equations~\eqref{eq:tinit}-\eqref{eq:pinit}. The dashed curve corresponds to the case where only the horizontally-averaged component of the phase field is solved (\textit{i.e.} the upper boundary is effectively planar).\label{fig:threshold}} \end{figure} In order to show that this is a consequence of the upper boundary not being exactly planar, we perform an additional simulation with the exact same parameters as above and for $h_0=0.41$. The only difference is that we artificially smooth the upper boundary by only solving the horizontally-averaged value of the phase field (this is performed in Fourier space by truncating all modes except $k_x=0$). This is of course artificial but nevertheless useful to understand the origin of this early growth. The time evolution of the kinetic energy density is shown in Figure~\ref{fig:threshold} as a dashed curve.
At early times, there are no noticeable differences between the regular simulation and the artificial planar case. However, at later times, there is no growth in the planar case until the critical value of $Ra_e\approx1710$ is reached. This clearly shows that the very early growth of the kinetic energy is associated with the presence of a topography. This topography is very small in amplitude, since it is generated by the initial perturbations, but is nevertheless measurable numerically. It is known that any non-planar topography drives a baroclinic flow at any Rayleigh number \citep{Kelly1978}. The amplitude of this gravity current scales linearly with both the Rayleigh number and the amplitude of the topography; it is directly forced by the misalignment between the hydrostatic pressure gradient and the inclined temperature gradient normal to the boundary. As we get closer to the threshold $Ra_e=1707.76$, we eventually recover the classical instability mechanism of convection through an imperfect bifurcation \citep{Coullet1986}. This is consistent with the slow growth of kinetic energy observed in Figure~\ref{fig:threshold}, followed by an exponential phase (the growth rate for $Ra_e>1707.76$ is actually super-exponential since the Rayleigh number keeps increasing while the instability develops). There are several reasons why we do not recover the result of \cite{Vasil2011}, who found a critical transition at $Ra_e\approx1295$. First, we are not in the asymptotic regime of large Stefan numbers. We repeated the previous simulations at higher $St$, up to $St=100$, without qualitative changes in the results discussed above. It is however possible that the asymptotic regime discussed by \cite{Vasil2011} is only achieved at much higher Stefan numbers.
In addition, \cite{Vasil2011} used slightly different boundary conditions and focused on modes growing on the very slow melting timescale, which is difficult to isolate in our finite Stefan number simulations. Finally, even though we varied all the numerical parameters of our model to confirm that the results discussed above are numerically converged, we cannot discard the possibility that the phase-field approach is inappropriate for studying the evolution of infinitesimal perturbations of the topography, as is the case here. One must remember that the interface is continuous here, with a typical width much larger than the perturbations responsible for driving the baroclinic flow. We note however that once the classical convection instability sets in, all the previous simulations starting from various initial heights lead to the same nonlinear state (apart from a temporal shift, as seen in Figure~\ref{fig:threshold} at late times), which is discussed in the following sections. \subsection{Nonlinear saturation close to onset and secondary bifurcation\label{sec:bif}} We now explore the evolution of the system once the initial instability saturates and leads to a steady set of convective rolls. In the following, we consider a particular case with a relatively large Stefan number $St=10$, so that we get a reasonable timescale separation between the flow and the interface dynamics. This particular choice is made to simplify the analysis of the bifurcation discussed below. For the same reason, we consider a laterally confined case where $\lambda=1$. The other parameters are given in Table~\ref{tab:one} for case B. We start from an initial height of $h_0=0.13$ so that the initial fluid layer is stable ($Ra_e(h_0)\approx1245$). After the transient growth discussed in the previous section, we observe at saturation a steady flow and a significant topography which is now clearly non-planar (see Figure~\ref{fig:bifurc}).
Perhaps unsurprisingly, the wavelength of this topography is equal to that of the convective rolls below. The solid is locally melting just above the rising hot plumes but less so above the sinking cold plumes. Once this nonlinearly equilibrated set of rolls and their associated topography exist, the horizontal wavenumber of the rolls is fixed while the average fluid depth keeps increasing with time. This can be seen by measuring the typical horizontal wavelength of the topography as the distance between two local minima of $h(x,t)$. In order to compare with classical RB convection, we normalize the corresponding wavenumber $k$ by the time-dependent averaged fluid depth $\overline{h}(t)$. We show in Figure~\ref{fig:bifurc}(a) the effective Rayleigh number of the fluid layer as a function of this normalized wavenumber $\overline{h}k$. The marginal stability curve of classical RB convection with no-slip and fixed-temperature walls is shown for reference \citep{Chandra1961}. Since the average fluid depth $\overline{h}$ increases while the horizontal wave number of the convection remains constant, the effective Rayleigh number continuously increases like $Ra_e\sim\overline{h}^3$. Our simulation closely follows this prediction, as shown in Figure~\ref{fig:bifurc}(a). \begin{figure} \vspace{5mm} \centering \includegraphics[width=0.44\textwidth]{./fig3a} \hfill \includegraphics[width=0.53\textwidth]{./fig3b}\\ \vspace{3mm} \includegraphics[width=0.28\textwidth]{./fig3c} \hfill \includegraphics[width=0.28\textwidth]{./fig3d} \hfill \includegraphics[width=0.28\textwidth]{./fig3e} \caption{Generation of mean horizontal shear flows during the collapse of steady convective rolls. (a) Effective Rayleigh number as a function of the normalized wave number $\overline{h}k$. The red curve corresponds to the marginal curve for classical RB \citep{Chandra1961}. The oblique dashed lines correspond to a constant horizontal wave number and follow $Ra_e\sim \overline{h}^3$.
(b) Horizontally-averaged flow $\overline{u}$ versus $z$ and time. The black line corresponds to the maximum height $\textrm{max}(h(x,t))$. The onset of convection and the nonlinear saturation are indicated with vertical dashed lines. Bottom row: temperature field at three successive instants, shown as empty symbols and vertical dotted lines in (a) and (b). The white line corresponds to the interface defined as $\phi=1/2$ and the grey lines correspond to streamlines.\label{fig:bifurc} } \end{figure} A simple question now arises: how long can this dynamically evolving set of convective rolls persist against the continuous vertical stretching of the fluid domain? One possibility would be for the rolls to be vertically elongated until they become stable again. This is indeed possible since the marginal curve behaves like $Ra_c\sim(\overline{h}k)^4$ for large wave numbers whereas rolls with a fixed horizontal wave number follow a $Ra_e\sim(\overline{h}k)^3$ scaling, so that they would eventually become stable, as shown in Figure~\ref{fig:bifurc}(a). This is not what is observed in the simulation, however, and a bifurcation occurs well before the possible restabilisation of the initially unstable mode. This bifurcation occurs after the rolls have been elongated vertically by an approximate factor of $3$ and corresponds to an abrupt reduction in the horizontal wavenumber $k(t)$ of the convection rolls. The detailed nature of this bifurcation can be qualitatively understood by following the time evolution of the horizontally-averaged mean flow $\overline{u}(z,t)$, shown in Figure~\ref{fig:bifurc}(b). This mean flow remains negligible at early times but abruptly grows when the rolls become elongated enough. It first appears as a shear flow with one vertical wavelength, effectively shearing the first set of rolls, as can be seen in the temperature field shown in the bottom middle panel of Figure~\ref{fig:bifurc}.
Once the initial set of rolls has been disrupted, the mean flow undergoes damped oscillations. A new set of rolls is then generated by the convection instability, with a larger horizontal wavelength, therefore maintaining the unit aspect ratio of the convective cells. This bifurcation is also visible in Figure~\ref{fig:bifurc}(a), where a jump between two $k\sim\textrm{const}$ curves is observed. This is reminiscent of the generation of mean shear flows in laterally confined classical RB convection \citep{Busse1983,Prat1995,goluskin_johnston_flierl_spiegel_2014,Fitzgerald2014}. Our case is however slightly more complicated since the volume of fluid increases with time and the upper topography is non-planar. The generation of mean horizontal shear flows is nevertheless a generic mechanism in RB convection that we also observe in our particular system. Note that the detailed properties of this bifurcation depend on the aspect ratio of the numerical domain $\lambda$, the Stefan number $St$ and the nature of the initial perturbations. This transition between convection rolls of different sizes is expected to repeat itself as the fluid depth keeps increasing, with the additional complication that the effective Rayleigh number keeps increasing due to the gradual melting of the solid, so that the successive bifurcations should become more and more complicated. \subsection{Behaviour at large $Ra$} We now focus on a representative example at high Rayleigh number for which we fix $St=1$, $Ra=10^8$ and $\sigma=1$. This simulation corresponds to case C in Table~\ref{tab:one}. We consider a domain with a large aspect ratio of $\lambda=8$. Doing so, we aim at minimizing the horizontal confinement effects associated with our periodic boundary conditions. Note that this is the global aspect ratio including the solid domain; the actual aspect ratio of the liquid domain is initially much larger.
We therefore expect the liquid phase to display spatio-temporal chaos instead of the purely temporal chaos typical of laterally-confined systems \citep{Manneville2006}. The initial position of the interface is $h_0\approx0.02$ so that the effective Rayleigh number of the fluid layer is $Ra_e(t=0)=Ra(1-\theta_M)h_0^3\approx760$, well below the critical value. Using a horizontal resolution of $N_x=4096$ and a vertical resolution of $N_z=1024$, the smallest grid size is typically $\text{d}x\approx2\times10^{-3}$, so that the interface width is fixed at $\epsilon=10^{-3}$. Assuming a Prandtl-Blasius scaling \citep{grossmann_lohse_2000}, the dimensionless width of the viscous boundary layers $\delta_v$ scales like $Ra^{-1/3}$, which typically leads to $\delta_v\ge2.2\times10^{-3}$ in our case. Thus, the interface width $\epsilon$ is always significantly smaller than the viscous boundary layer width $\delta_v$. \begin{figure} \vspace{4mm} \centering \includegraphics[width=1.0\textwidth]{./fig4a}\\ \vspace{1mm} \includegraphics[width=1.0\textwidth]{./fig4b}\\ \vspace{1mm} \includegraphics[width=1.0\textwidth]{./fig4c}\\ \vspace{1mm} \includegraphics[width=1.0\textwidth]{./fig4d}\\ \vspace{1mm} \includegraphics[width=1.0\textwidth]{./fig4e}\\ \vspace{1mm} \includegraphics[width=1.0\textwidth]{./fig4f} \caption{\label{fig:visus} Visualizations of the total numerical domain for case C in Table~\ref{tab:one}. The temperature is shown on the left (dark red corresponds to $\theta=1$ while dark blue corresponds to $\theta=\theta_M$) while vorticity is shown on the right (blue and red colors correspond to $\pm 0.25 \ \omega_{\text{max}}$ respectively). The grey line corresponds to the interface defined by the isosurface $\phi=1/2$. Time is increasing from top to bottom: $t=5\times10^{-4}$, $1.5\times10^{-3}$, $6\times10^{-3}$, $1.2\times10^{-2}$, $2.4\times10^{-2}$ and $3\times10^{-2}$.
See also Movie1 in Supplementary materials.} \end{figure} The simulation is run until the interface reaches the upper boundary $z=1$. For the parameters of this simulation, this takes approximately $0.03$ thermal diffusive timescales. We first show in Figure~\ref{fig:visus} (see also Movie1 in Supplementary materials) visualizations of the temperature and vorticity distributions at different times during the simulation. At early times, the solution is purely diffusive until the liquid depth reaches its critical value, above which convection sets in. Convection is initially steady and laminar, as observed previously, with approximately $132$ convective cells across the whole domain. As the interface progresses, this initial set of convective rolls is vertically stretched, eventually forcing a secondary transition leading to larger convective cells, as discussed previously in section~\ref{sec:bif}, although the nature of the bifurcation appears to be different in this large aspect ratio domain (see below). This alternation between quasi-stationary phases of melting, where the number of convective cells is conserved, and more violent transitions associated with a reordering of the convective cells continues until the upper boundary is eventually reached. \begin{figure} \vspace{5mm} \centering \includegraphics[width=0.9\textwidth]{./fig5a} \includegraphics[width=0.95\textwidth]{./fig5b} \caption{(a) Position of the interface $h(x,t)$ as a function of $x$ for different times separated by approximately $5\times10^{-4}$ thermal diffusion times. The color of the curves corresponds to the signed curvature defined by equation~\eqref{eq:curv}. Dark colors correspond to small negative curvatures whereas light colors correspond to cusps with large positive curvatures. (b) Three-dimensional view of the spatio-temporal evolution of the interface $h(x,t)$.
The color corresponds to the local value of $h(x,t)$.\label{fig:htime}} \end{figure} Let us first discuss the shape of the interface as the solid continuously melts. We first show in Figure~\ref{fig:htime}(a) the interface position as a function of the horizontal coordinate $x$ at different times. The interface is obtained by interpolating the phase-field variable in order to find the iso-contour $\phi=1/2$. Equivalently, the interface can be defined as the isotherm $\theta=\theta_M$, which leads to the same results, apart from very localized regions of high curvature where a slight mismatch between the two isocontours is observed, as expected from the Gibbs-Thomson relation~\eqref{eq:stefan_bc}. These discrepancies are however negligible here (\textit{i.e.} the maximum distance between the isocontours $\phi=1/2$ and $\theta=\theta_M$ is smaller than the thickness of the boundary layers), as expected from our choice of a large coupling parameter $\alpha$. The color of the curves in Figure~\ref{fig:htime}(a) corresponds to the signed curvature, derived from the interface position $h(x,t)$ following \begin{equation} \label{eq:curv} \kappa(x,t)=\frac{\partial_{xx}h}{\left[1+\left(\partial_xh\right)^2\right]^{3/2}} \ . \end{equation} The maximum value of the curvature corresponds to the cusps joining two cavities of the topography and is approximately $\kappa_{\text{max}}\approx300$. This is of the same order as the largest curvature achievable by our phase-field approach, which can be approximated by the inverse of the interface width $\epsilon^{-1}=10^3$. We can therefore be confident that the cusps are numerically resolved and not artificially smoothed by our diffuse-interface approach. The horizontal positions of these cusps appear to be very stable, which corresponds to a spatial locking between the convection rolls and the topography \citep{Vasil2011}.
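The signed curvature~\eqref{eq:curv} is straightforward to evaluate from a sampled interface. A short sketch of ours (centred finite differences via \texttt{numpy}; the circular-arc test is illustrative, not taken from the paper):

```python
import numpy as np

def signed_curvature(h, dx):
    """Signed curvature kappa = h'' / (1 + h'^2)^(3/2) of an interface
    z = h(x) sampled on a uniform grid, using centred differences."""
    hx = np.gradient(h, dx)
    hxx = np.gradient(hx, dx)
    return hxx / (1.0 + hx**2) ** 1.5

# Sanity check: an arc of a circle of radius R (concave down, like a
# cavity ceiling) has constant curvature kappa = -1/R.
R = 0.2
x = np.linspace(-0.05, 0.05, 2001)
h = np.sqrt(R**2 - x**2)
kappa = signed_curvature(h, x[1] - x[0])
print(kappa[1000])  # mid-point value, close to -1/R = -5
```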
One can also note that these cusps often correspond to small melting rates (\textit{i.e.} the successive profiles of $h(x,t)$ are close) compared to the much larger cavities with negative curvature, where intense localized melting events driven by the underlying hot thermal plumes are observed. An alternative three-dimensional view of the spatio-temporal evolution of the interface is also shown in Figure~\ref{fig:htime}(b). The successive bifurcations between different roll sizes are clearly visible. We observe various types of cell-merging events, from two adjacent cells merging into one, to more complicated behaviours where one cell disappears, leading to the merging of its neighbouring cells. \begin{figure} \vspace{5mm} \centering \includegraphics[width=0.98\textwidth]{./fig6a}\\ \includegraphics[width=0.99\textwidth]{./fig6b} \caption{Spatio-temporal evolution of the temperature at the middle of the fluid layer $\theta(x,\overline{h}/2,t)$. (a) Full duration of the simulation $0<t<3\times10^{-2}$ and full spatial extent $0<x<8$. The dashed line corresponds to the zoom shown below. (b) Zoom in at early times $0<t<6\times10^{-3}$, the dashed lines follow the propagation of a defect in the initially-steady array of convective rolls.\label{fig:tmid}} \end{figure} We now describe the dynamics of the fluid flow, which is strongly correlated with that of the interface. Our system is not laterally confined so that there is no significant horizontal mean flow, in contrast with the case of section~\ref{sec:bif}. Instead, we observe a reorganization of the convection cells through local merging events. Figure~\ref{fig:tmid} shows the temperature profile at the mid-height of the fluid domain, $\theta(x,z=\overline{h}(t)/2,t)$. At early times, as shown in Figure~\ref{fig:tmid}(b), the temperature profile is initially purely diffusive and uniform, $\theta(z=\overline{h}/2)=(1+\theta_M)/2\simeq1/2$.
The first network of steady convective cells eventually appears and we again observe that the typical horizontal wavelength remains constant after the first nonlinear saturation of the convection instability. The secondary instability, which involved a horizontal mean flow in section~\ref{sec:bif}, now appears to be more local since the system is not laterally confined. Interestingly, these local transitions tend to propagate horizontally to neighbouring cells in a percolation-like process. This is indicated in Figure~\ref{fig:tmid}(b) by the inclined dashed lines. The typical propagation speed of this defect in the lattice of convection cells, estimated directly from the slope of the dashed lines in Figure~\ref{fig:tmid}(b), is approximately the same as the vertical fluid velocity. Each cell is therefore destabilized after approximately one turnover time. In addition, the thermal plumes are clearly oscillatory just after the bifurcation but eventually stabilize and become steady after a short transient. This observation is especially interesting since the Rayleigh number of the system is continuously increasing with time. One might therefore expect the dynamics to become more and more complex, transitioning from periodic to chaotic and eventually turbulent solutions, as is the case in classical RB convection without topography. This transition from oscillatory to steady convection clearly shows the stabilizing effect that the topography exerts on the flow, locking two counter-rotating convection rolls inside each cavity. At later times and higher Rayleigh numbers, shown in Figure~\ref{fig:tmid}(a), although the stabilization of the thermal plumes by the topography is still observed, significant temporal fluctuations are nevertheless visible, indicating that the convective cells will eventually transition to more chaotic behaviours.
The inevitable transitions between steady, periodic and chaotic solutions observed in classical RB convection \citep{gollub_benson_1980,curry_herring_loncaric_orszag_1984,goldhirsch_pelz_orszag_1989} are therefore probably just delayed by the presence of the topography, but will eventually reappear at much larger Rayleigh numbers. This conclusion remains speculative at this stage since this particular simulation is limited to $Ra_e<10^8$. It is nevertheless reasonable to expect a different, potentially reduced, interaction between the topography and the underlying flow in the fully developed turbulent regime. A final interesting observation concerns the clear asymmetry between rising and sinking plumes. Sinking plumes are extremely stable and do not seem to move horizontally, apart from the sudden transitions associated with the reorganisation of the convective cells. As seen in Figure~\ref{fig:visus}, cold plumes are generated by the merging of two boundary layers descending along the topography, leading to the formation of a high-curvature cusp. This cusp is therefore protected by the continuous supply of cold fluid generated by the melting of the neighbouring dome by hot rising fluid. The sinking plumes are therefore always found to be emitted by the cusps. In contrast, rising plumes tend to slowly drift horizontally until they eventually collide with an adjacent sinking plume, leading to the destabilisation of both convection rolls. The reason for this drift is probably associated with the baroclinic gravity currents, infinitesimal at low Rayleigh numbers and small topography as in section~\ref{sec:crit}, but much stronger at large effective Rayleigh numbers and finite topography slopes. Once a rising thermal plume slightly moves horizontally, it is continuously dragged by the topographic current until a merger occurs.
The competition between thermal convection driven by unstable bulk temperature gradients and gravity currents driven by a baroclinic forcing close to an inclined slope is interesting in itself, although we postpone the study of its detailed dynamics to future studies. \section{Statistical description\label{sec:st}} We now describe the evolution of the convection and of the topography in a more quantitative way by systematically varying the Stefan number and measuring the averaged response of the interface $h(x,t)$ and of the heat flux $Q_W$. In order to reach the large Stefan number regime, for which the solid melts at a much slower rate, we reduce the Rayleigh number to $Ra=10^7$, the other parameters being shown in Table~\ref{tab:one}, case D. \subsection{Melting velocity} We now consider the effect of varying the Stefan number on the dynamics of the convective flow and interface. The parameters are the same as previously but we now vary the Stefan number from $St=2\times10^{-2}$ to $St=50$. The main consequence of increasing the Stefan number is to increase the timescale separation between the turnover time of the convective cells and the typical timescale of evolution of the topography. As $St$ increases, it takes much more time for a given set of convection rolls to form or alter a topography, due to the larger amount of latent heat necessary to do so. This can be seen in Figure~\ref{fig:hbar}(a), where the averaged fluid depth is shown versus time for three different Stefan numbers. The dotted lines show the purely diffusive solution in the absence of motions in the fluid phase (\textit{i.e.} for $Ra_e<Ra_c$ at all times). These purely diffusive solutions all display the scaling $\overline{h}\sim t^{1/2}$, as expected for diffusive Stefan problems \citep{Vasil2011}. One observes a departure from this prediction which marks the onset of convection. The larger the Stefan number, the longer it takes to reach the threshold of convection.
However, all cases follow the scaling $\overline{h}\sim t$ after the onset of convection. The prefactor depends however on the Stefan number, as shown in Figure~\ref{fig:hbar}(b), where the melting velocity, obtained by a best fit of the previous linear law, is plotted against the Stefan number. The melting velocity seems to be independent of the Stefan number at low values and scales as $St^{-1}$ at large values. \begin{figure} \vspace{6mm} \centering \includegraphics[width=0.53\textwidth]{./fig7a} \hspace{2mm} \includegraphics[width=0.44\textwidth]{./fig7b} \caption{(a) Time evolution of the horizontally averaged fluid depth $\overline{h}$ for different Stefan numbers. The two scalings $\overline{h}\sim t$ and $\overline{h}\sim t^{1/2}$ are shown as continuous black lines. The two horizontal dotted lines correspond to the critical heights above which convection sets in, estimated from $Ra_c\approx1707$. (b) Melting velocity $\dot{\overline{h}}$ for different Stefan numbers. The theoretical estimate is derived from equation~\eqref{eq:hdot} using $\gamma\approx0.115$ and $\beta=1/3$.\label{fig:hbar} } \end{figure} These behaviours can be understood by simple energetic arguments. By integrating equation~\eqref{eq:tempeq_adim} over the whole volume $\mathcal{V}$, we find the following relation \begin{equation} \label{eq:cons} \frac{d}{dt}\int_{\mathcal{V}} \Big[\theta+St \ p(\phi)\Big] \textrm{d}\mathcal{V} = \int_{\textrm{upper}}\frac{\partial \theta}{\partial z} \textrm{d}S - \int_{\textrm{lower}}\frac{\partial \theta}{\partial z} \textrm{d}S \ , \end{equation} where the left-hand side corresponds to the total rate of change of the enthalpy in the system whereas the right-hand side corresponds to the heat fluxes entering and leaving the domain. This conservation constraint must be exactly satisfied at all times during the simulations.
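The discrete analogue of this budget is easy to illustrate: for a purely diffusive 1D layer with fixed boundary temperatures (no phase change, so the $St$ term is constant) advanced by explicit finite differences, the change in $\int\theta\,\textrm{d}z$ matches the accumulated difference of the boundary fluxes to machine precision. A toy sketch of ours, not the paper's spectral scheme:

```python
import numpy as np

# 1D heat equation with fixed boundary temperatures, as a toy version of
# the enthalpy budget (eq:cons): d/dt int(theta) = flux(top) - flux(bottom).
N, theta_M = 128, 0.1
dz = 1.0 / N
dt = 0.25 * dz**2                             # stable explicit time step

z = np.linspace(0.0, 1.0, N + 1)
theta = 1.0 - (1.0 - theta_M) * z             # conductive profile...
theta[1:-1] += 0.1 * np.sin(np.pi * z[1:-1])  # ...plus a perturbation

def energy(th):
    """Enthalpy of the interior nodes (the boundary nodes are held fixed)."""
    return dz * th[1:-1].sum()

E0, budget = energy(theta), 0.0
for _ in range(2000):
    # one-sided boundary fluxes d(theta)/dz, evaluated before the update
    budget += dt * ((theta[-1] - theta[-2]) / dz - (theta[1] - theta[0]) / dz)
    theta[1:-1] += dt / dz**2 * np.diff(theta, 2)

print(abs(energy(theta) - (E0 + budget)))  # machine-precision residual
```

The identity holds exactly at the discrete level because the Laplacian stencil telescopes to the two one-sided boundary fluxes.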
Since we work with the temperature as a variable, the latent heat being viewed as an external forcing term, the enthalpy is not explicitly conserved by our scheme. We therefore have to check \textit{a posteriori} that the enthalpy is indeed conserved in our system. We typically observe a relative error in the total enthalpy of the system of the order of $1\%$ at the final time of the simulations, when all the solid has melted. Equation~\eqref{eq:cons} can also be used to estimate the rate of change of the average fluid height $\overline{h}(t)$. The internal heat associated with the solid can be neglected in a first approximation since we consider the limit where $\theta_M\ll1$. In addition, assuming that the fluid layer is fully convective and behaves as in classical RB convection, its average temperature can be approximated by $1/2$. This leads to the following relation (ignoring heat losses through the quasi-isothermal solid above) \begin{equation} \label{eq:tb} \left(\frac12+St\right)\frac{dV_f}{dt} = -\int_{\textrm{lower}}\frac{\partial \theta}{\partial z} \textrm{d}S\approx\lambda\frac{Nu}{\overline{h}} \ , \end{equation} where the heat flux from the lower boundary has been replaced by the Nusselt number $Nu$ as defined in equation~\eqref{eq:nuss} (and we have assumed that $\theta_M\ll1$). In equation~\eqref{eq:tb}, the volume of fluid \begin{equation} V_f=\int_{\mathcal{V}} p(\phi) \textrm{d}\mathcal{V} \end{equation} can be approximated by $\overline{h}\lambda$ and the Nusselt number can be replaced by the usual scaling law involving the effective Rayleigh number, of the form $Nu\sim \gamma Ra_e^{\beta}$ (see Section~\ref{sec:hflux} for a more detailed discussion of the heat flux), leading to \begin{equation} \left(\frac12+St\right) \frac{d\overline{h}}{dt} \approx \frac{\gamma Ra_e^{\beta}}{\overline{h}}\approx\gamma Ra^{\beta}\overline{h}^{3\beta-1} \ , \end{equation} where we have again assumed that $\theta_M\ll1$.
The solution to this equation reads \begin{equation} \label{eq:hdot} \overline{h}(t)\approx \left[h_0^{2-3\beta}+\frac{(2-3\beta)\gamma Ra^{\beta}}{1/2+St}t\right]^{1/(2-3\beta)} \ , \end{equation} where $h_0=h(t=0)$. Using the typical value of $\beta=1/3$ \citep{grossmann_lohse_2000}, we find that $\overline{h}\sim t$, which is indeed recovered by our simulations. Note that assuming $\beta=1/4$ leads to $\overline{h}\sim t^{4/5}$, which is also in reasonable agreement with our simulations. In addition, assuming that $\beta=1/3$, the melting velocity $\dot{\overline{h}}$ can be estimated for different Stefan numbers from equation~\eqref{eq:hdot} and compared with the numerical results. The only adjustable parameter is the prefactor $\gamma$ linking the Nusselt number to the Rayleigh number. We show in Figure~\ref{fig:hbar} the best fit to our numerical data, which gives $\gamma\approx0.115$. The agreement is very good over nearly four decades of Stefan numbers. In particular, we recover the fact that the melting velocity behaves like $St^{-1}$ in the limit of large Stefan numbers. \subsection{Horizontal and vertical scales of the topography} Let us now discuss some properties of the topography as time evolves. We can track the number of local minima $N_{\textrm{min}}(t)$ of the function $h(x,t)$ as a function of time. The typical length of the cavities generated by the flow can then be estimated as $l_c(t)=\lambda/N_{\textrm{min}}(t)$, where $\lambda$ is the aspect ratio of the numerical domain. This length-scale is plotted in Figure~\ref{fig:lc} for different Stefan numbers as a function of the average fluid depth $\overline{h}(t)$. As expected, the typical size of the cavities grows with time. Additionally, as seen in Figure~\ref{fig:visus}, each cavity contains two convective rolls, each having an opposite circulation.
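Returning briefly to the melting law: the closed-form solution~\eqref{eq:hdot} can be cross-checked by integrating the underlying ODE directly. A sketch of ours with purely illustrative parameter values (not those of any case in Table~\ref{tab:one}):

```python
# Cross-check of the closed-form melting law (eq:hdot), which solves
#   (1/2 + St) dh/dt = gamma * Ra**beta * h**(3*beta - 1),
# by explicit Euler integration.  Parameter values are illustrative only.
St, gamma, Ra, beta, h0 = 10.0, 0.115, 1.0e7, 0.25, 0.1

def h_exact(t):
    """Closed-form solution (eq:hdot)."""
    e = 2.0 - 3.0 * beta
    return (h0**e + e * gamma * Ra**beta * t / (0.5 + St)) ** (1.0 / e)

def h_euler(t, n=100000):
    """Direct explicit Euler integration of the melting ODE."""
    dt, h = t / n, h0
    for _ in range(n):
        h += dt * gamma * Ra**beta * h ** (3.0 * beta - 1.0) / (0.5 + St)
    return h

t = 0.05
print(h_exact(t), h_euler(t))  # the two values agree closely
```

For $\beta=1/3$ the exponent $1/(2-3\beta)$ equals one and the growth is exactly linear in $t$, as stated in the text.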
For classical RB convection, the convective rolls typically have a unit aspect ratio (which is related to the fact that the critical wave number is approximately $k\approx\pi$). This corresponds to the typical relation $l_c(t)\approx2\overline{h}(t)$, which we also plot in Figure~\ref{fig:lc}. All the curves remain below this prediction, showing that our convective rolls are always slightly elongated in the vertical direction despite the successive dynamical transitions that eventually destabilize them. Note that the convective cells tend to be less elongated (\textit{i.e.} our results get closer to the prediction $l_c=2\overline{h}$) as the Stefan number increases. This is a direct consequence of the timescale separation typical of the large Stefan number regime, for which the flow can quickly bifurcate to a new, more unstable, set of convective rolls without any significant change in the average fluid depth. \begin{figure} \vspace{5mm} \centering \includegraphics[width=0.455\textwidth]{./fig8a} \hfill \includegraphics[width=0.48\textwidth]{./fig8b} \caption{(a) Typical horizontal extent of the cavities as a function of the averaged fluid depth. The dashed line corresponds to the limit case where each convective roll has a unit aspect ratio. (b) Root-mean-square value of the topography defined by equation~\eqref{eq:rmsh} as a function of the effective Rayleigh number and for different Stefan numbers. The scaling $Ra_e^{1/2}$ is shown for reference. \label{fig:lc}} \end{figure} Finally, it is interesting to consider the typical amplitude of the topography. One can compute the root-mean-square depth as \begin{equation} \label{eq:rmsh} h_{\textrm{rms}}(t)=\sqrt{\overline{\left(h(x,t)-\overline{h}(t)\right)^2}} \ . \end{equation} Figure~\ref{fig:lc} shows this quantity as a function of the effective Rayleigh number of the fluid layer.
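For a sinusoidal interface of the form used later in equation~\eqref{h_exp}, $h(x)=\overline{h}(1+\varepsilon\cos kx)$, the definition~\eqref{eq:rmsh} gives $h_{\textrm{rms}}=\varepsilon\overline{h}/\sqrt{2}$; a small numerical sketch of ours confirming this (amplitude values are illustrative):

```python
import numpy as np

# rms amplitude of the topography (eq:rmsh) for a sinusoidal interface
# h(x) = hbar*(1 + eps*cos(k x)); analytically h_rms = eps*hbar/sqrt(2).
hbar, eps = 0.4, 0.05
x = np.linspace(0.0, 1.0, 4096, endpoint=False)   # one horizontal period
h = hbar * (1.0 + eps * np.cos(2.0 * np.pi * x))

h_rms = np.sqrt(np.mean((h - np.mean(h)) ** 2))
print(h_rms, eps * hbar / np.sqrt(2.0))  # the two agree
```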
Interestingly, there is little dependence on the Stefan number, except close to onset, where the saturation of convection occurs later for small Stefan numbers. The typical amplitude of the topography, as measured by $h_\textrm{rms}$, seems to depend mostly on the effective Rayleigh number of the layer, following an approximate scaling of $Ra_e^{1/2}$. Note that the fluctuations of our results, a consequence of the dynamical transitions discussed above, together with the relatively small range of effective Rayleigh numbers covered, prevent us from obtaining a conclusive scaling. It is however interesting to note that the local Reynolds number of RB convection typically scales as $Ra^{1/2}$ \citep{grossmann_lohse_2000}, so that there might be a link between the amplitude of the topography and the Reynolds number of the underlying flow, and thus the thickness of the boundary layers developing along the topography. \subsection{Effect of the topography on the heat flux\label{sec:hflux}} The previous section has shown the non-trivial back-reaction that the topography imprints on the convective flow. The effect of non-uniform boundary conditions on the heat flux in a Rayleigh-B\'enard system has a long history. Roughness of the horizontal plates is an obvious candidate to trigger boundary layer instabilities, possibly leading to an enhancement of the heat flux \citep{Ciliberto1999} and possibly to the so-called ultimate regime predicted by Kraichnan \citep{Kraichnan1962,Roche2010}. The wavelength and typical amplitude of our topography are however much larger than the typical roughness used in experiments \citep{rusa_2018}. Note also that roughness does not always lead to a heat transfer enhancement \citep{zhang_sun_bao_zhou_2018}.
In the present case, the horizontal wavelength of the topography is precisely the most unstable wavelength of the idealized Rayleigh-B\'enard problem in the absence of topography, a situation sometimes referred to as the resonant case \citep{Kelly1978,Bhatt1991,Weiss2014}. In this section, we consider the heat flux at the bottom boundary $Q_W$ defined by equation~\eqref{eq:thf}. We show the evolution with time of this heat flux for $St=10$ and $Ra=10^7$ as a function of the average fluid depth $\overline{h}$ in Figure~\ref{fig:qw}. Before the onset of convection, the purely diffusive heat flux is simply given by $Q_W\approx(1-\theta_M)/\overline{h}$, which is indeed observed initially. After convection sets in, we observe a rapid increase of the heat flux associated with the nonlinear overshoot of the instability. The heat flux then tends to decrease with time, but we also observe a succession of plateaus characterized by an approximately constant heat flux $Q_W$, separated by sudden decays. The plateaus correspond to the quasi-steady phases where the convection is locked inside the topography, whereas the sudden decays correspond to the secondary bifurcation where the mean flow disrupts the convection rolls and inhibits the heat flux across the fluid layer. Starting from the classical relation $Nu\sim \gamma Ra_e^{\beta}$, where $Nu$ is the Nusselt number defined by equation~\eqref{eq:nuss}, leads to the relation $Q_W\sim\overline{h}^{3\beta-1}$, so that for $\beta<1/3$ the convective heat flux is indeed a decreasing function of the fluid height, all other parameters being fixed, whereas it is independent of the fluid height when $\beta=1/3$. These different scalings are shown in Figure~\ref{fig:qw} for reference. \begin{figure} \vspace{5mm} \centering \includegraphics[width=0.7\textwidth]{./fig9} \caption{Heat flux averaged over the bottom surface $Q_W$, as defined in equation~\eqref{eq:thf}, as a function of the averaged fluid depth $\overline{h}(t)$.
The results are only shown for the case $St=10$ for clarity. The dotted lines correspond to various power law scalings.\label{fig:qw}} \end{figure} We now consider the delicate question of the normalization of this heat flux. In classical RB convection, the diffusive flux across a plane-layer domain is trivially derived from the solution of the purely diffusive heat equation. In our case however, the diffusive flux is not trivial since the fluid domain is fully two-dimensional, with a deformed upper boundary of finite amplitude. Formally, one should therefore solve the heat equation in order to know the diffusive heat flux across the layer. This refinement has negligible consequences when the topography is of very small amplitude, but this is not the case here, where the topography is of an amplitude comparable to the fluid depth. We therefore derive below a second-order correction of the diffusive heat flux at the bottom boundary. We consider the purely diffusive case of a fluid layer heated from below ($\theta=1$ in $z=0$), with the upper surface at temperature $\theta=\theta_M$ located at \begin{equation} z = h(x,t) = \overline{h}(t) \left( 1+ \varepsilon \cos{k x} \right) \ , \label{h_exp} \end{equation} where $\varepsilon \ll 1$, $\overline{h}$ is the mean height given by equation \eqref{meanheight}, and $k$ is the wave number of the topography. We assume that the evolution of $\overline{h}(t)$ is much slower than the diffusion (which is formally justified in the large Stefan limit, see~\cite{Vasil2011}). Therefore, we write $\overline{h} = h_0$, $\Delta \theta=1-\theta_M$, and we look only for stationary solutions of the diffusion equation \begin{equation} \nabla^2 \theta = 0, \qquad z \in [0,h(x)] \ , \label{laplace} \end{equation} with boundary conditions \begin{eqnarray} \theta(x,0) & = & 1\\ \theta(x,h(x)) & = & \theta_M \ .
\label{upper_bound} \end{eqnarray} We expand $\theta$ in a power series of $\varepsilon$ \begin{equation} \theta(x,z) = \theta_0(x,z) + \varepsilon \theta_1(x,z) + \varepsilon^2 \theta_2(x,z) + \cdots \end{equation} and solve equation \eqref{laplace} at each order in $\varepsilon$. After some algebra (cf. Appendix \ref{sec:appC}), we obtain at second order \begin{equation} Q_D =-\frac{1}{\lambda}\int_0^{\lambda}\frac{\partial\theta}{\partial z}(x,0)\text{d}x= \frac{\Delta \theta}{h_0} +\varepsilon^2 \frac{k \Delta \theta}{2} \coth{k h_0} \ . \label{eq:th_hf2} \end{equation} On the other hand, the area of the topography per unit horizontal length $A$ (a length per unit length in our 2D geometry) can be computed using equation \eqref{h_exp} \begin{equation} A = \frac{1}{\lambda} \int_0^\lambda {\sqrt{1+\left(\frac{\partial h(x,t)}{\partial x}\right)^2}}dx = \frac{1}{\lambda} \int_0^\lambda {\sqrt{1+\left(-kh_0 \varepsilon \sin{kx}\right)^2}}dx \ . \end{equation} At order $\varepsilon^2$, we obtain \begin{equation} \Delta A \equiv A - 1 = \frac{1}{4} k^2 h_0^2 \varepsilon^2 \ . \end{equation} Finally, the diffusive heat flux can be expressed in terms of this surface-area increase \begin{equation} Q_D = \frac{\Delta \theta}{h_0} \left( 1 + \frac{2 \Delta A}{k h_0} \coth{k h_0} \right) \ . \end{equation} The typical wavelength in our simulations being of the order of $2h_0$, we can estimate this diffusive flux by writing $k \simeq \pi/h_0$, leading to \begin{equation} Q_D \simeq \frac{\Delta \theta}{h_0} \left( 1 + \frac{2\Delta A}{\pi} \right) \ . \end{equation} We can now rescale the heat flux through the fluid layer $Q_W$ at each time knowing the averaged fluid depth $\overline{h}$ and the area increase $\Delta A$ computed numerically at each time step.
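The second-order result $\Delta A = \frac{1}{4}k^2h_0^2\varepsilon^2$ can be checked numerically (our sketch, with arbitrary small-amplitude parameters) by averaging the exact arc-length element over one wavelength:

```python
# Numerical sanity check (ours) of the second-order surface-area increase
# Delta A = (1/4) k^2 h0^2 eps^2 for the interface h = h0 (1 + eps cos kx),
# averaging the arc-length element over one wavelength lambda = 2 pi / k.
import numpy as np

h0, eps = 1.0, 0.01
k = np.pi / h0                      # resonant choice k ~ pi/h0 used in the text
lam = 2.0 * np.pi / k

x = np.linspace(0.0, lam, 200_000, endpoint=False)
dhdx = -h0 * eps * k * np.sin(k * x)
A = np.mean(np.sqrt(1.0 + dhdx ** 2))   # uniform grid: mean = integral / lambda

dA_numeric = A - 1.0
dA_theory = 0.25 * (k * h0 * eps) ** 2
print(dA_numeric, dA_theory)            # agree up to O(eps^4) corrections
```

The two values agree to a relative accuracy of order $\varepsilon^2$, consistent with the neglected $\mathcal{O}(\varepsilon^4)$ terms in the expansion of the square root.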
The diffusive heat flux through the fluid layer is approximately \begin{equation} \label{eq:diff*} Q_D(t)=\frac{1-\theta_M}{\overline{h}(t)}\left(1+\frac{2\Delta A(t)}{\pi}\right) \end{equation} and the Nusselt number is finally defined as the ratio between the total heat flux~\eqref{eq:thf} and the diffusive flux estimated using equation~\eqref{eq:diff*}. \begin{figure} \vspace{5mm} \centering \includegraphics[width=0.8\textwidth]{./fig10} \caption{Nusselt number as a function of the effective Rayleigh number $Ra_e(t)$ for a Stefan number of $St=10$. The empty symbols correspond to the classical Rayleigh-B\'enard case whereas the dashed line corresponds to the optimum steady solution of \cite{sondak_smith_waleffe_2015}. The thin line is shown for reference and is obtained by computing the effective Rayleigh number using the maximum fluid height instead of its horizontally-averaged value.\label{fig:nussf}} \end{figure} The result of this normalization is shown in Figure~\ref{fig:nussf}. For the case $St=10$, we show the time evolution of the effective Nusselt number as a function of the effective Rayleigh number. For reference, we also show some typical values obtained for classical Rayleigh-B\'enard convection (each point corresponding in that case to the time average of a single simulation at fixed Rayleigh number). We also indicate the results of \cite{sondak_smith_waleffe_2015}, which correspond to the optimal heat transfer for a 2D steady solution giving $Nu\approx0.125 Ra^{0.31}$ \footnote{In \cite{sondak_smith_waleffe_2015}, the best fit is actually $Nu\approx0.115Ra^{0.31}$ but corresponds to a Prandtl number of 7. They obtained a slightly larger prefactor for $\sigma=1$ but the same power law.}. Interestingly, although our simulation departs significantly from classical Rayleigh-B\'enard convection, our renormalization shows that the Nusselt number follows that of classical RB convection in a quasi-static manner. This is of course only true at large Stefan numbers.
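The normalization just described can be sketched in a few lines (the instantaneous values below are hypothetical, chosen only to illustrate the formula):

```python
# Sketch of the normalization described above (illustrative numbers only):
# the measured bottom flux Q_W is divided by the corrected diffusive flux
# Q_D of equation (eq:diff*), which accounts for the enlarged interface area.
import numpy as np

def nusselt(q_w, h_bar, delta_a, theta_m=0.0):
    """Nu = Q_W / Q_D, with Q_D = (1 - theta_m) / h_bar * (1 + 2 dA / pi)."""
    q_d = (1.0 - theta_m) / h_bar * (1.0 + 2.0 * delta_a / np.pi)
    return q_w / q_d

# Hypothetical instantaneous values: Q_W = 8, mean depth 0.5, 5% area increase.
print(nusselt(8.0, 0.5, 0.05))
```

Neglecting the area correction ($\Delta A=0$) would overestimate the Nusselt number by the factor $1+2\Delta A/\pi$.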
For lower Stefan numbers (not shown), the curves are much more erratic and no clear trend can be derived. There are however significant differences between our case and the predictions specific to RB convection. Although the exponent is not significantly altered by the presence of the melting interface, the prefactor appears larger than for regular RB convection. This is marginally true at low Rayleigh numbers ($Ra_e<10^6$) but quite clear at higher Rayleigh numbers. This can be attributed to the back-reaction of the topography on the convective rolls, which has a stabilizing effect, delaying the transition to periodic convection. In 2D, this transition typically occurs around $Ra\approx10^5$ and reduces the Nusselt number, as can be seen in Figure~\ref{fig:nussf}. The presence of the topography induced by the convective flow itself seems to favor stable quasi-steady rolls as opposed to oscillatory ones. This leads to an increase in heat flux when compared to classical RB convection, closer to the optimal solution of \cite{sondak_smith_waleffe_2015}, derived assuming steady laminar solutions. This marginal increase in the Nusselt number was also recently reported in the independent study by \cite{Babak2018}, both in two and three dimensions, although for a Prandtl number of $10$. Note finally that although we carefully normalized the heat flux, our choice of Rayleigh number is rather arbitrary. One could argue for example that the effective Rayleigh number based on the averaged fluid depth is barely relevant and that only the maximum depth, where the Rayleigh number is effectively maximum, matters for the heat flux. The results corresponding to this particular choice are shown in Figure~\ref{fig:nussf} as the thin line. Although it does reduce the overall heat flux for a given Rayleigh number, our conclusions drawn above remain qualitatively valid.
Since we are limited in the maximum value of our effective Rayleigh number (a consequence of the finite vertical extent of our numerical domain), it is not clear how the topography affects the heat flux in the fully chaotic regime reached at much higher Rayleigh numbers. \section{Conclusion\label{sec:conclu}} Numerical simulations of Rayleigh-B\'enard convection in two dimensions with an upper melting boundary have been performed. We have shown that the fact that the upper boundary dynamically becomes non-planar has interesting consequences for the development of convection in the fluid layer. The onset of convection becomes imperfect due to baroclinic effects close to the topography, so that it is difficult to study the transition between a purely diffusive regime and thermal convection, even when the Stefan number is large. The initial saturation of the instability leads to steady convective rolls carving a topography with the same wavelength. As the fluid depth increases and when the flow is laterally confined, the steady rolls eventually feed a mean horizontal shear flow, as observed in supercritical RB convection, which disrupts convection until a new array of convective rolls grows with an aspect ratio close to unity. For large horizontal aspect ratios, the transition is replaced by local merging events propagating to neighbouring cells in a percolation process. Finally, at higher Rayleigh numbers, we observe that the convection rolls remain locked into the topography, delaying bifurcations to periodic and chaotic orbits, and effectively increasing the heat flux compared to classical RB convection. Many aspects of this apparently simple system remain to be explored. We focused in this paper on the particular case $\theta_M\rightarrow0$. It is however obvious that the system can reach a quasi-steady equilibrium with both liquid and solid phases present when $0<\theta_M<1$.
In that case, the solid is cooled from above, effectively balancing the heat flux brought by thermal convection in the liquid phase below. Depending on the values of $\theta_M$, $Ra$ and $St$, this regime is expected to lead to interesting dynamics which we are currently exploring. The Prandtl number was also fixed to unity for simplicity, but it is well known that classical RB convection crucially depends on this parameter and we expect the same to be true of our system. It is also worth recalling that liquid metals typically have very low Prandtl numbers, for which we expect a different melting or solidifying dynamics. Finally, while it would be very difficult to generalize our approach to variable densities between the solid and liquid phases, a natural extension involving non-uniform thermal diffusivities is nevertheless possible \citep{Almgren1999}. Based on the phase-field method, our approach is relatively simple to implement in existing numerical codes capable of solving the usual Boussinesq equations. As of now, it remains numerically expensive due to the fact that some diffusive terms are solved explicitly. This could easily be improved by considering a linearized version of the last term on the right-hand side of equation~\eqref{eq:temp}, allowing for a fully-implicit coupling between the temperature and the phase field equations. This would allow us to consider similar problems in three dimensions, thus extending the early experimental works by \cite{davis_muller_dietsche_1984} and the recent numerical study of \cite{Babak2018}. Some of the results discussed in this paper might not be relevant to the three-dimensional case since the stability of 3D convection patterns is notoriously different from their 2D equivalents. The effects of an upper melting boundary on the development of 3D convection cells remain to be fully explored. The framework developed in this paper could finally be used to study other free-boundary problems.
The dynamical creation of non-trivial topographies by dissolution \citep{claudin_duran_andreotti_2017} or erosion \citep{Matthew2013} is also accessible using the current approach. The Stefan boundary condition, which depends on the temperature gradient, can be generalized to incorporate gradients of concentration or tangential velocity. Several academic configurations could therefore be revisited using a continuous interface approach such as the phase field model coupled with the Navier--Stokes equations. \vspace{1cm} \textbf{Acknowledgments.} We gratefully acknowledge the computational hours provided on Turing and Ada (Projects No. A0020407543 and A0040407543) of IDRIS. This work was granted access (Project No. 017B020) to the HPC resources of Aix-Marseille Universit\'e financed by the project Equip@Meso (ANR-10-EQPX-29-01) of the program ``Investissements d'Avenir'' supervised by the Agence Nationale de la Recherche. We gratefully acknowledge financial support from the Programme National de Plan\'etologie (PNP) of the Institut National des Sciences de l'Univers (INSU, CNRS). We have finally benefited from many discussions with Geoffrey Vasil.
\section{Introduction} Classical electrodynamics is well known to be a linear theory leading to the superposition principle. At the quantum level the basic QED Lagrangian remains quadratic in the electromagnetic fields, so that the theory still appears to be linear. However, quantum fluctuations in the QED vacuum induce nonlinear effects that lead to a breakdown of the superposition principle \cite{Heisenberg/1936}. In particular, these QED fluctuations make the vacuum appear as an electrically and magnetically polarizable medium. The size of these corrections in nonlinear QED (NLQED) is very tiny, so that experiments with ultra-high intensity lasers have been proposed to search for these effects, e.g. $\mathrm{e}^{+}\mathrm{e}^{-}$ pair production from the vacuum \cite{Schwinger/1951}-\cite{Di Piazza/2009}, vacuum birefringence \cite{Heinzl/2006}-\cite{Adler/2007}, light diffraction by a strong standing electromagnetic wave \cite{Di Piazza/2008}, and nonlinear Compton scattering \cite{Fried/1966}. A different proposal, involving quasistatic external electromagnetic fields interacting with given electric or magnetic sources, has been made recently \cite{Dominguez/2009/1}-\cite{Dominguez/2009}. In \cite{Dominguez/2009/1} general expressions were obtained for the induced electric and magnetic fields in such circumstances, and applied to the case of an electrically charged sphere in the presence of an external, quasistatic magnetic field. As a result of QED nonlinearity there appears an induced magnetic dipole moment, as well as corrections to the Coulomb field of the sphere. In spite of this being a dramatic effect, experimental detection appears very challenging. The complementary case of a purely magnetic dipole moment placed in an external, quasistatic electric field ${\mathbf{E}_{0}}$ was considered in \cite{Dominguez/2009}.
The result is an induced electric dipole moment ${\mathbf{p}}_{\mathrm{IND}}$, plus corrections to the magnetic field produced by the magnetic dipole. It was then suggested that the neutron could be used as a probe in the presence of large electric fields of order $|{\mathbf{E}_{0}}|\simeq 10^{10}$ V/m, such as present in certain crystals. A distinctive feature of this induced electric dipole moment, which should help in its detection, is its peculiar dependence on the angle between ${\mathbf{p}}_{\mathrm{IND}}$ and ${\mathbf{E}_{0}}$, or equivalently the angle between ${\mathbf{p}}_{\mathrm{IND}}$ and the neutron spin.\\ In this paper we follow up on the experimental observability of such an induced electric dipole moment of the neutron. On the theoretical side we complete the analysis of \cite{Dominguez/2009} by computing the interaction Hamiltonian of the neutron immersed in a large external quasistatic electric field ${\mathbf{E}_{0}}$, and an external, quasistatic, magnetic field ${\mathbf{B}_{0}}$ of ordinary strength. Given the nonlinearity of the problem one needs to check (a) that the magnetic interaction energy is of the usual form (nonlinear magnetic corrections due to ${\mathbf{B}_{0}}$ being negligible), and (b) that the induced electric dipole does interact with the electric field ${\mathbf{E}_{0}}$ that generates it. The latter interaction energy is expected to have the standard functional form $H_{\mathrm{int}}\propto {\mathbf{p}}_{\mathrm{IND}}\cdot {\mathbf{E}_{0}}$, albeit with an a-priori unknown coefficient which we determine. Next, we study the quantum behaviour of ${\mathbf{p}}_{\mathrm{IND}}$ by means of the Heisenberg equation of motion. This is important for experiments based on potential changes in the Larmor frequency of the neutron spin around an external magnetic field due to the presence of ${\mathbf{p}}_{\mathrm{IND}}$.
We find no effect here, thus ruling out experiments of this type to detect an induced electric dipole moment of the neutron. Finally, we discuss in some detail a different approach based on neutron-nucleus scattering and conclude that this experiment offers an excellent opportunity to observe such an effect. This is due to the peculiar angular dependence of ${\mathbf{p}}_{\mathrm{IND}}$. We find that for sufficiently large momentum transfers, a scattering asymmetry is induced with such particular characteristics that it would be easy to distinguish from other standard effects. The experimental discovery of such an asymmetry would be the first ever signal of a nonlinear effect in electrodynamics due to quantum fluctuations in the QED vacuum. \section{Induced electric dipole moment of the neutron} An appropriate framework to discuss nonlinear effects induced by quantum fluctuations in the QED vacuum is that of the Euler-Heisenberg Lagrangian \cite{Heisenberg/1936}. This is obtained from the weak field asymptotic expansion of the QED effective action at one loop order leading to \begin{equation} \mathcal{L}_{\mathrm{EH}}^{\left( 1\right) }=\zeta \left( 4\mathcal{F}^{2}+7\mathcal{G}^{2}\right) +..., \label{L-EH} \end{equation} where the omitted terms are of higher order in the expansion parameter $\zeta $. In SI units \begin{equation} \zeta =\frac{2\alpha _{\mathrm{EM}}^{2}\epsilon _{0}^{2}\hbar ^{3}}{45m_{e}^{4}c^{5}}\simeq 1.3\times 10^{-52}\,\frac{\mathrm{J\,m}}{\mathrm{V}^{4}}, \end{equation} with $\alpha _{\mathrm{EM}}=e^{2}/(4\pi \epsilon _{0}\hbar c)$ the electromagnetic fine structure constant, $m_{\mathrm{e}}$ and $e$ the mass and charge of the electron, respectively, and $c$ the speed of light.
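The quoted numerical value of $\zeta$ can be reproduced directly from CODATA constants (our check, not from the paper):

```python
# Numerical check (ours) of the Euler-Heisenberg coefficient
# zeta = 2 alpha^2 eps0^2 hbar^3 / (45 m_e^4 c^5), CODATA SI values.
alpha = 7.2973525693e-3    # fine structure constant
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J s
m_e = 9.1093837015e-31     # electron mass, kg
c = 2.99792458e8           # speed of light, m/s

zeta = 2 * alpha**2 * eps0**2 * hbar**3 / (45 * m_e**4 * c**5)
print(zeta)   # ~1.3e-52 J m / V^4, as quoted in the text
```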
The invariants $\mathcal{F}$ and $\mathcal{G}$ are defined as \begin{equation} \mathcal{F}=\frac{1}{2}\left( \mathbf{E}^{2}-c^{2}\mathbf{B}^{2}\right) =-\frac{1}{4}F_{\mu \nu }F^{\mu \nu }, \end{equation} \begin{equation} \mathcal{G}=c\,\mathbf{E}\cdot \mathbf{B}=-\frac{1}{4}F_{\mu \nu }\widetilde{F}^{\mu \nu }, \end{equation} with $F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }$ and $\widetilde{F}^{\mu \nu }=\frac{1}{2}\epsilon ^{\mu \nu \rho \sigma }F_{\rho \sigma }$. The so-called critical field $E_{\mathrm{c}}$, which plays the role of a reference field strength for the onset of nonlinearity, is given by \begin{equation} E_{\mathrm{c}}=\frac{m_{\mathrm{e}}^{2}c^{3}}{\hbar e}\simeq 1.3\times 10^{18}\,\frac{\mathrm{V}}{\mathrm{m}}. \label{E-crit} \end{equation} This estimate is obtained by computing the electric field needed to produce an electron-positron pair in a spatial length of one Compton wavelength. For fields stronger than $E_{\mathrm{c}}$ the weak-field asymptotic expansion leading to Eq. (\ref{L-EH}) breaks down. In \cite{Dominguez/2009/1} general expressions were obtained for electric and magnetic fields induced by nonlinearity, to leading order in $\zeta $, in the presence of external quasistatic weak fields (smaller than $E_{\mathrm{c}}$) and arbitrary sources.
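The critical field estimate of Eq. (\ref{E-crit}) is likewise a one-line computation (our check):

```python
# Check (ours) of the critical field E_c = m_e^2 c^3 / (hbar e), CODATA SI values.
hbar = 1.054571817e-34     # reduced Planck constant, J s
m_e = 9.1093837015e-31     # electron mass, kg
c = 2.99792458e8           # speed of light, m/s
e = 1.602176634e-19        # elementary charge, C

E_c = m_e**2 * c**3 / (hbar * e)
print(E_c)   # ~1.3e18 V/m, the Schwinger critical field
```

This is some eight orders of magnitude above the crystal fields $|{\mathbf{E}_{0}}|\simeq 10^{10}$ V/m considered later, so the weak-field expansion is safely applicable there.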
These induced fields are \begin{equation} \mathcal{E}\left( \mathbf{x}\right) =\frac{\zeta }{2\pi \epsilon _{0}^{2}}\nabla _{\mathbf{x}}\int \frac{\mathrm{d}^{3}y}{\left\vert \mathbf{x}-\mathbf{y}\right\vert }\nabla _{\mathbf{y}}\cdot \left( 4\mathcal{F}_{\mathrm{M}}\mathbf{D}_{\mathrm{M}}+\frac{7}{c}\mathcal{G}_{\mathrm{M}}\mathbf{H}_{\mathrm{M}}\right) , \label{E-ind} \end{equation} \begin{equation} \mathcal{B}\left( \mathbf{x}\right) =\frac{\zeta }{2\pi \epsilon _{0}^{2}c^{2}}\nabla _{\mathbf{x}}\times \int \frac{\mathrm{d}^{3}y}{\left\vert \mathbf{x}-\mathbf{y}\right\vert }\nabla _{\mathbf{y}}\times \left( -4\mathcal{F}_{\mathrm{M}}\mathbf{H}_{\mathrm{M}}+7c\,\mathcal{G}_{\mathrm{M}}\mathbf{D}_{\mathrm{M}}\right) , \label{B-ind} \end{equation} where $\mathbf{D}_{\mathrm{M}}$ and $\mathbf{H}_{\mathrm{M}}$ are the Maxwell (classical) fields produced by the arbitrary sources. Notice that these fields vanish as $\hbar \rightarrow 0$ ($\zeta \rightarrow 0$). In the case of a current density uniformly distributed on the surface of a sphere of radius $a$, or equivalently, for a uniformly magnetized sphere of the same radius, the Maxwell magnetic dipole type field is given by \begin{equation} \mathbf{B}_{\mathrm{d}}=\frac{\mu _{0}}{4\pi }\left\{ \frac{3\left( \mathbf{m}\cdot \mathbf{e}_{r}\right) \mathbf{e}_{r}-\mathbf{m}}{r^{3}}\;\Theta \left( r-a\right) +\frac{2\mathbf{m}}{a^{3}}\;\Theta \left( a-r\right) \right\} , \label{B-d} \end{equation} where $\mathbf{m}$ is identified with the magnetic dipole moment of the source, and $\mathbf{e}_{r}$ is a unit vector in the radial direction. Since the central expressions, Eqs.(\ref{E-ind}) and (\ref{B-ind}), were derived assuming $E\equiv cB<E_{\mathrm{c}}$, the following constraint follows \begin{equation} \frac{\left\vert \mathbf{m}\right\vert }{a^{3}}<\frac{2\pi m_{\mathrm{e}}^{2}c^{2}}{\hbar e\mu _{0}}.
\label{a-constraint} \end{equation} For instance, if $|\mathbf{m}|=0.96\times 10^{-26}\,\mathrm{A\,m}^{2}$, as for the neutron, then it follows that $a\gtrsim 10\,\mathrm{fm}$. If this magnetic source is placed in an external, constant electric field $\mathbf{E}_{0}$ it has been shown \cite{Dominguez/2009} that there is an induced electric field of the dipole type \begin{equation} \mathcal{E}\left( \mathbf{x}\right) =-\nabla _{\mathbf{x}}\left[ \frac{1}{4\pi \epsilon _{0}}\frac{\mathbf{p}\left( \psi \right)_{\mathrm{IND}} \cdot \mathbf{e}_{r}}{\left\vert \mathbf{x}\right\vert ^{2}}\right] +\mathcal{O}\left( \left\vert \mathbf{x}\right\vert ^{-6}\right) , \label{E-induced} \end{equation} where $\psi $ is the angle between the external electric field, lying in the $x$-$z$ plane, and the magnetic dipole moment pointing along the $z$ axis, i.e. $\mathbf{E}_{0}=\left\vert \mathbf{E}_{0}\right\vert \left( \sin \psi \,{\mathbf{e}}_{x}+\cos \psi \,{\mathbf{e}}_{z}\right) $, and $\mathbf{m}=|\mathbf{m}|\,\mathbf{e}_{z}$. The induced electric dipole moment $\mathbf{p}(\psi )_{\mathrm{IND}}$ is given by \begin{equation} \mathbf{p}\left( \psi \right)_{\mathrm{IND}} =\frac{\zeta \mu _{0}\left\vert \mathbf{m}\right\vert ^{2}\left\vert \mathbf{E}_{0}\right\vert }{10\pi \epsilon _{0}a^{3}}\left[ 36\,\frac{\mathbf{E}_{0}}{|\mathbf{E}_{0}|}-49\left( \frac{\mathbf{E}_{0}}{|\mathbf{E}_{0}|}\cdot {\mathbf{e}}_{x}\right) {\mathbf{e}}_{x}\right] \,. \label{p-ind} \end{equation} This induced electric field is of the electric dipole type in its radial $1/|\mathbf{x}|^{3}$ dependence, but it has a manifestly peculiar angular dependence. For instance, along the $z$ axis, and unlike a standard electric dipole field, it has a non-zero component along $\mathbf{e}_{\theta }$ that depends on the azimuthal angle $\phi $.
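Two quick numerical sanity checks (ours, not from the paper) are possible here: the minimum radius implied by Eq. (\ref{a-constraint}) for the neutron moment, and the orientation of the induced dipole of Eq. (\ref{p-ind}), which is parallel to $\mathbf{E}_{0}$ at $\psi=0$ but anti-parallel at $\psi=\pi/2$ since $36-49=-13<0$:

```python
# Sanity checks (ours): minimum radius from the constraint (a-constraint),
# and the direction of p_IND from (p-ind) for E_0 in the x-z plane.
import numpy as np

# CODATA SI values
hbar = 1.054571817e-34
e = 1.602176634e-19
mu0 = 1.25663706212e-6
m_e = 9.1093837015e-31
c = 2.99792458e8

m_n = 0.96e-26                       # neutron magnetic moment, A m^2
a_min = (m_n * hbar * e * mu0 / (2.0 * np.pi * m_e**2 * c**2)) ** (1.0 / 3.0)
print(a_min)                         # ~8e-15 m, i.e. of order 10 fm

def p_dir(psi):
    """Direction of p_IND (up to its positive prefactor)."""
    e0 = np.array([np.sin(psi), 0.0, np.cos(psi)])   # unit vector E_0/|E_0|
    return 36.0 * e0 - 49.0 * e0[0] * np.array([1.0, 0.0, 0.0])

print(p_dir(0.0))                    # along +z: parallel to E_0
print(p_dir(np.pi / 2.0))            # along -x: anti-parallel to E_0
```

The sign flip of the component along ${\mathbf{e}}_{x}$ is precisely the peculiar angular dependence exploited later for the scattering asymmetry.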
It also has a non-zero component along the direction of $\mathbf{e}_{\phi }$, as may be appreciated by writing the induced electric field in spherical coordinates ($r$, $\theta $, $\phi $), i.e. \begin{eqnarray} \mathcal{E}(\mathbf{x}) &=&\frac{\zeta \mu _{0}|\mathbf{m}|^{2}|\mathbf{E}_{0}|}{40\pi ^{2}\epsilon _{0}^{2}a^{3}|\mathbf{x}|^{3}} \left\{ 2\,\left[ 36\cos \theta \cos \psi -13\sin \theta \cos \phi \sin \psi \right] \,{\mathbf{e}_{r}}\right. \nonumber \\ &&+\left. \left[ 13\cos \theta \cos \phi \sin \psi +36\sin \theta \cos \psi \right] {\mathbf{e}_{\theta }}-13\,\sin \phi \sin \psi \;{\mathbf{e}_{\phi }}\right\} \;. \end{eqnarray} In addition to the induced electric field Eq.(\ref{E-induced}), there is an induced magnetic field (a correction to the field produced by the magnetic dipole source), which can be derived from a vector potential, i.e. $\mathcal{B}(\mathbf{x})=\nabla \times \mathcal{A}(\mathbf{x})$, where after a lengthy calculation one finds \begin{eqnarray} \mathcal{A}\left( \mathbf{x}\right) &=&\frac{\zeta \mu _{0}}{4\pi \epsilon _{0}\left\vert \mathbf{x}\right\vert ^{2}}\left\{ 4\left\vert \mathbf{E}_{0}\right\vert ^{2}\left( \mathbf{e}_{r}\times \mathbf{m}\right) -7\left[ \mathbf{m}\cdot \mathbf{E}_{0}+3\left( \mathbf{E}_{0}\cdot \mathbf{e}_{r}\right) \left( \mathbf{m}\cdot \mathbf{e}_{r}\right) \right] \left( \mathbf{e}_{r}\times \mathbf{E}_{0}\right) \right. \label{A-calli} \\ &&\left. +7\left( \mathbf{E}_{0}\cdot \mathbf{e}_{r}\right) \left( \mathbf{m}\times \mathbf{E}_{0}\right) \right\} \left[ 1+\mathcal{O}\left( \frac{\mu _{0}\left\vert \mathbf{m}\right\vert ^{2}}{a^{6}\epsilon _{0}\left\vert \mathbf{E}_{0}\right\vert ^{2}}\right) \right] +\mathcal{O}\left( \left\vert \mathbf{x}\right\vert ^{-4}\right) .
\nonumber \end{eqnarray} Notice that while $\mathcal{E}$ grows linearly with $|\mathbf{E}_{0}|$, $\mathcal{B}$ depends quadratically on $|\mathbf{E}_{0}|$.\\ We proceed to discuss the interaction energy of the magnetic dipole source and its induced electric dipole with the external constant field $\mathbf{E}_{0}$, and with an external uniform magnetic field $\mathbf{B}_{0}$ weak enough not to induce nonlinear effects, i.e. $c|\mathbf{B}_{0}|\ll E_{\mathrm{c}}$. Given the nonlinearity of the problem it is important to verify that the magnetic interaction Hamiltonian has the expected form $-\mathbf{m}\cdot \mathbf{B}_{0}$, given the strength of $\mathbf{B}_{0}$. In addition, the electric interaction energy of the induced electric dipole and the external field $\mathbf{E}_{0}$ is a-priori unknown. This need not be exactly of the form \begin{equation} H_{\mathrm{int}}=-\frac{1}{2}\mathbf{p}\cdot \mathbf{E}_{0}\,, \label{H-el-pol} \end{equation} as one would obtain for a linearly polarizable particle immersed in an external electric field, e.g. for a polarizable neutron on account of its quark substructure. In fact, the electric interaction Hamiltonian due to nonlinearity lacks the factor $1/2$, as shown next. The canonical energy-momentum tensor is defined as \begin{equation} T_{\text{ }\nu }^{\mu }=\frac{\partial \mathcal{L}_{\mathrm{tot}}}{\partial \left( \partial _{\mu }A_{\alpha }\right) }\left( \partial _{\nu }A_{\alpha }\right) -\mathcal{L}_{\mathrm{tot}}\;\delta _{\text{ }\nu }^{\mu }, \end{equation} where the total Lagrangian density is $\mathcal{L}_{\mathrm{tot}}=\mathcal{L}-j_{\mu }A^{\mu }$, with $\mathcal{L}=\epsilon _{0}\mathcal{F}+\mathcal{L}_{\mathrm{EH}}^{\left( 1\right) }$ and $\mathcal{L}_{\mathrm{EH}}^{\left( 1\right) }$ given in Eq.(\ref{L-EH}).
This equation can be rewritten as \begin{eqnarray} T_{\text{ }\nu }^{\mu } &=&\left( \frac{\partial \mathcal{L}}{\partial \mathcal{F}}F^{\mu \alpha }+\frac{\partial \mathcal{L}}{\partial \mathcal{G}}\widetilde{F}^{\mu \alpha }\right) F_{\alpha \nu }-\mathcal{L}\,\delta _{\text{ }\nu }^{\mu }+\left( j\cdot A\right) \,\delta _{\text{ }\nu }^{\mu } \nonumber \\ &&+A_{\nu }\partial _{\alpha }\left( \frac{\partial \mathcal{L}}{\partial \mathcal{F}}F^{\mu \alpha }+\frac{\partial \mathcal{L}}{\partial \mathcal{G}}\widetilde{F}^{\mu \alpha }\right) -\partial _{\alpha }\left[ \left( \frac{\partial \mathcal{L}}{\partial \mathcal{F}}F^{\mu \alpha }+\frac{\partial \mathcal{L}}{\partial \mathcal{G}}\widetilde{F}^{\mu \alpha }\right) A_{\nu }\right] . \end{eqnarray} Since the last term on the right hand side above is the total divergence of an anti-symmetric tensor, employing the equations of motion \begin{equation} \partial _{\beta }\left( \frac{\partial \mathcal{L}}{\partial \mathcal{F}}F^{\beta \alpha }+\frac{\partial \mathcal{L}}{\partial \mathcal{G}}\widetilde{F}^{\beta \alpha }\right) =j^{\alpha }, \end{equation} one can define another energy-momentum tensor as \begin{eqnarray} \theta _{\text{ }\nu }^{\mu } &=&T_{\text{ }\nu }^{\mu }+\partial _{\alpha }\left[ \left( \frac{\partial \mathcal{L}}{\partial \mathcal{F}}F^{\mu \alpha }+\frac{\partial \mathcal{L}}{\partial \mathcal{G}}\widetilde{F}^{\mu \alpha }\right) A_{\nu }\right] \nonumber \\ &=&\left( \frac{\partial \mathcal{L}}{\partial \mathcal{F}}F^{\mu \alpha }+\frac{\partial \mathcal{L}}{\partial \mathcal{G}}\widetilde{F}^{\mu \alpha }\right) F_{\alpha \nu }-\mathcal{L}\,\delta _{\text{ }\nu }^{\mu }+\left( j_{\alpha }A^{\alpha }\right) \delta _{\text{ }\nu }^{\mu }-j^{\mu }A_{\nu }. \end{eqnarray} Notice that this tensor is symmetric and gauge invariant except for the last two terms.
The total energy density of the system is defined as the component $\theta _{\text{ }0}^{0}$, \begin{equation} \mathcal{H}_{\mathrm{tot}}=\theta _{\text{ }0}^{0}=\frac{\partial \mathcal{L}}{\partial \mathbf{E}}\cdot \mathbf{E}-\mathcal{L}-\mathbf{j}\cdot \mathbf{A}=\mathbf{D}\cdot \mathbf{E}-\mathcal{L}-\mathbf{j}\cdot \mathbf{A}. \end{equation} In general, for a given configuration of the fields the interaction Hamiltonian is defined as the difference of the total Hamiltonian with and without the sources. In a quantum theory it is defined as the difference of the total Hamiltonian evaluated at the fields in the interaction picture, with and without the external sources. Then, the interaction Hamiltonian $H_{\mathrm{int}}$, i.e. the volume integral of the interaction Hamiltonian density ${\mathcal{H}}_{\mathrm{int}}$, is \begin{equation} H_{\mathrm{int}}=\int \mathcal{H}_{\mathrm{int}}\,\mathrm{d}^{3}r=-\int \mathbf{j}\cdot \mathbf{A}\,\mathrm{d}^{3}r=-\int \mathbf{j}\cdot \left( \mathbf{A}_{0}+\mathcal{A}\right) \,\mathrm{d}^{3}r, \label{H-int} \end{equation} where $\mathbf{A}=\mathbf{A}_{0}+\mathcal{A}$, with $\mathcal{A}$ given in Eq.(\ref{A-calli}), and $\mathbf{A}_{0}$ is the vector potential associated with $\mathbf{B}_{0}$, i.e. $\mathbf{B}_{0}=\nabla \times \mathbf{A}_{0}$, and $\mathbf{A}_{0}=\frac{1}{2}\,\mathbf{B}_{0}\times \mathbf{r}$. The current $\mathbf{j}$ corresponding to the magnetized sphere producing the field, Eq.(\ref{B-d}), is $\mathbf{j}=\frac{3}{4\pi a^{3}}\,\mathbf{m}\times \mathbf{e}_{r}\,\delta (r-a)$. In Eq.(\ref{H-int}) the self energy of the magnetized sphere, independent of the external field, has been omitted. After performing the integration in Eq.(\ref{H-int}) one finds \begin{equation} H_{\mathrm{int}}=-\mathbf{m}\cdot \mathbf{B}_{0}-\mathbf{p}\left( \psi \right)_{\mathrm{IND}}\cdot \mathbf{E}_{0}, \label{pot-energy} \end{equation} which has the correct magnetic interaction term as in the linear theory.
The electric interaction energy is of the expected form, but it involves a coefficient different from the case of linear QED, as a result of nonlinearity. In the absence of the external magnetic field $\mathbf{B}_{0}$, and using Eq.(\ref{p-ind}), the interaction Hamiltonian becomes \begin{equation} H_{\mathrm{int}}=-\frac{\zeta \mu _{0}\left\vert \mathbf{m}\right\vert ^{2}\left\vert \mathbf{E}_{0}\right\vert ^{2}}{10\pi \epsilon _{0}a^{3}}\left( 36-49\sin ^{2}\psi \right) =\frac{\zeta \mu _{0}\left\vert \mathbf{m}\right\vert ^{2}\left\vert \mathbf{E}_{0}\right\vert ^{2}}{10\pi \epsilon _{0}a^{3}}\left( 13-49\cos ^{2}\psi \right) . \label{Hamiltonian} \end{equation} Notice the dependence of $H_{\mathrm{int}}$ on $a^{-3}$. It should be pointed out that in an experimental situation one would typically be interested in a point magnetic dipole. This source would produce very strong fields in its proximity, so that the limit $a\rightarrow 0$ would obviously not be allowed. Instead, we assume that even in such a case the large-distance solution for the fields is well described by the first order approximation to the effective Lagrangian $\mathcal{L}_{\mathrm{EH}}^{\left( 1\right) }$ in Eq.(\ref{L-EH}). We also assume that this solution is robust against short-distance modifications of the source, as long as its symmetry is preserved. In this sense the parameter $a$ is to be considered as a measure of our ignorance about the higher order corrections to this effective Lagrangian, something necessary when dealing with strong fields. The specific value of $a$ will be discussed later in Section 4. The fact that $H_{\mathrm{int}}$ depends on the orientation of $\mathbf{m}$ with respect to $\mathbf{E}_{0}$ through the angle $\psi $ can be used as a distinctive feature in the design of an experimental asymmetry, as described below in Section 5.\\ We consider next the quantum behaviour of $\mathbf{p}_{\mathrm{IND}}$ using the Heisenberg equation of motion.
To this end we consider a particle with magnetic dipole moment $\mathbf{m}$ related to the spin through the standard relation $\mathbf{m}=g\hbar \mathbf{S}$, where $g$ is the gyromagnetic ratio. Assuming that the dynamics of this particle is described by the Hamiltonian Eq.(\ref{pot-energy}), and given that $\mathbf{p}_{\mathrm{IND}}\propto |\mathbf{m}|^{2}$, $H_{\mathrm{int}}$ to first order in $\zeta $ contains only quadratic terms in the spin, whose components are the dynamical variables of the problem. The effective Hamiltonian involving these dynamical variables must be symmetrized in order to ensure Hermiticity. Hence, the quadratic terms in the spin entering the Heisenberg equation of motion lead to the commutator \begin{equation} \left[ \left\{ S_{i},S_{j}\right\} ,S_{k}\right] =i\epsilon _{jkl}\left\{ S_{i},S_{l}\right\} +i\epsilon _{ikl}\left\{ S_{j},S_{l}\right\} . \end{equation} For a spin $1/2$ particle, such as the neutron, we have $S_{i}=\frac{1}{2}\sigma _{i}$, and $\left\{ S_{i},S_{l}\right\} =\frac{1}{2}\delta _{il}$. In this case, \begin{equation} \left[ \left\{ S_{i},S_{j}\right\} ,S_{k}\right] =i\epsilon _{jkl}\frac{1}{2}\delta _{il}+i\epsilon _{ikl}\frac{1}{2}\delta _{jl}=\frac{i}{2}\left( \epsilon _{jki}+\epsilon _{ikj}\right) =0. \end{equation} Therefore, $d\mathbf{S}/dt=0$, so that if one is interested in the time evolution of a spin $1/2$ particle, and Eq.(\ref{pot-energy}) describes its effective Hamiltonian, we find no contribution from this leading order nonlinear correction. In other words, the precession of the spin is not affected. This is not the case, though, for spin-one particles. This unfortunate feature rules out experiments to detect the induced electric dipole moment of the neutron based on Larmor frequency changes. A different approach, involving neutron scattering off nuclei, is discussed next.
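The vanishing of this commutator for spin $1/2$ can be verified directly with the Pauli matrices; a minimal numerical sketch (not part of the paper's derivation, just a consistency check):

```python
import numpy as np

# Spin-1/2 operators S_i = sigma_i / 2 (units with hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
S = [sx, sy, sz]

def anticomm(a, b):
    return a @ b + b @ a

def comm(a, b):
    return a @ b - b @ a

# [{S_i, S_j}, S_k] vanishes identically for spin 1/2, since the
# anticommutator {S_i, S_j} = (1/2) delta_ij * identity commutes with everything.
residual = max(
    np.abs(comm(anticomm(S[i], S[j]), S[k])).max()
    for i in range(3) for j in range(3) for k in range(3)
)
print(residual)  # 0.0
```

For a spin-one representation the anticommutators are no longer proportional to the identity, and the same loop yields a nonzero residual, in line with the remark about spin-one particles.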
\section{Neutron-atom scattering amplitude and cross section} Scattering of slow neutrons by a free atom can be described by a scattering amplitude in the Born approximation, which in the center of mass system is given by (see e.g. \cite{Turchin/1965}) \begin{equation} f\left( \mathbf{q},\mathbf{s}\right) =-\frac{M}{2\pi \hbar ^{2}}\int \exp \left( i\mathbf{q}\cdot \mathbf{r}\right) H_{\mathrm{int}}\left( \mathbf{r},\mathbf{s}\right) \mathrm{d}^{3}r, \label{f} \end{equation} where $M$ is the reduced mass \begin{equation} M=\frac{m_{\mathrm{n}}m_{\mathrm{A}}}{m_{\mathrm{n}}+m_{\mathrm{A}}}, \label{reduced mass} \end{equation} with $m_{\mathrm{n}}$ the neutron mass and $m_{\mathrm{A}}$ the mass of the atom. The three-momentum transfer is $\mathbf{q}=\mathbf{k}-\mathbf{k}^{\prime }$, with $\mathbf{k}$ and $\mathbf{k}^{\prime }$ the neutron wave vectors before and after scattering, respectively, and $\mathbf{s}$ is the neutron spin in units of $\hbar $. The magnitude of $\mathbf{q}$ will be denoted as $|\mathbf{q}|\equiv q$ in the sequel. The total Hamiltonian $H_{\mathrm{int}}$ involves all known interactions between the neutron and the atom, to which we now add the new interaction due to NLQED given in Eq.(\ref{Hamiltonian}). Correspondingly, the total scattering amplitude can be written as \begin{equation} f\left( \mathbf{q},\mathbf{s}\right) =f_{\mathrm{N}}\left( \mathbf{q},\mathbf{s}\right) +f_{\mathrm{MAG}}\left( \mathbf{q},\mathbf{s}\right) +f_{\mathrm{e}}\left( \mathbf{q}\right) +f_{\mathrm{POL}}\left( \mathbf{q}\right) +f_{\mathrm{SO}}\left( \mathbf{q},\mathbf{s}\right) +f_{\mathrm{PV}}\left( \mathbf{q},\mathbf{s}\right) +f_{\mathrm{IND}}\left( \mathbf{q},\mathbf{s}\right) , \label{scattering amp} \end{equation} where the various contributions are as follows. The term $f_{\mathrm{N}}\left( \mathbf{q},\mathbf{s}\right) $ is due to the hadronic interaction of the neutron with the nucleus, and is usually the dominant term.
The amplitude $f_{\mathrm{MAG}}\left( \mathbf{q},\mathbf{s}\right) $ corresponds to the interaction of the neutron magnetic moment with the atomic magnetic field (for atoms with unpaired electrons). This term can be of a similar size as $f_{\mathrm{N}}\left( \mathbf{q},\mathbf{s}\right) $. The next three terms arise from various electromagnetic interactions, i.e. $f_{\mathrm{e}}\left( \mathbf{q}\right) $ is due to scattering of the neutron charge radius by the electric charges in the atom, $f_{\mathrm{POL}}\left( \mathbf{q}\right) $ arises from the electric polarizability of the neutron due to its quark substructure, and $f_{\mathrm{SO}}\left( \mathbf{q},\mathbf{s}\right) $ corresponds to the spin-orbit interaction of the neutron in the electric field of the nucleus. The term $f_{\mathrm{PV}}\left( \mathbf{q},\mathbf{s}\right) $ is a weak-interaction, parity-violating amplitude which we list separately from $f_{\mathrm{N}}\left( \mathbf{q},\mathbf{s}\right) $ as it has a different dependence on the neutron spin. Finally, $f_{\mathrm{IND}}$ is the new component due to the induced electric dipole moment of the neutron, which we wish to isolate experimentally.\\ The scattering amplitude, Eq.(\ref{scattering amp}), enters the differential cross section for elastic neutron scattering by a single atom in the ground state, \begin{equation} \frac{\mathrm{d}\sigma }{\mathrm{d}\Omega }\left( \mathbf{q},\mathbf{P}\right) =\left\langle \left\vert f\left( \mathbf{q},\mathbf{s}\right) \right\vert ^{2}\right\rangle , \label{sigma} \end{equation} which includes an ensemble average over nuclear and electronic spin degrees of freedom (if present), and over the neutron spin. The incident neutrons are characterized by a polarization defined as $\mathbf{P}=2\left\langle \mathbf{s}\right\rangle $. In the absence of nuclear and electronic polarization of the atom, the case of interest here, the kinematic scattering variables are $\mathbf{q}$ and $\mathbf{P}$.
Experimentally, one determines neutron scattering cross sections using a sample containing a macroscopic number of atoms. Considering a single atomic species, the ensemble average in Eq.(\ref{sigma}) still has to account for the isotopic composition and for the different states of total spin of a neutron scattering off a nucleus with non-zero spin. For slow neutrons with wavelengths much larger than the nuclear radius $R_{\mathrm{N}}$, the hadronic amplitude $f_{\mathrm{N}}$ is practically independent of $\mathbf{q}$, in the absence of nuclear resonances for thermal and epithermal neutrons, i.e. for the energy range of interest here. Scattering thus proceeds in an s-wave and is isotropic in the center of mass system. One defines a neutron scattering length operator as \begin{equation} a_{\mathrm{N}}\left( \mathbf{s}\right) =-\lim_{q\rightarrow 0}f_{\mathrm{N}}\left( \mathbf{q},\mathbf{s}\right) . \label{scattering length} \end{equation} For a nucleus with spin $\hbar \mathbf{I}$ one has \begin{equation} a_{\mathrm{N}}\left( \mathbf{s}\right) =\frac{\left( I+1\right) a_{+}+Ia_{-}}{2I+1}+\frac{2\left( a_{+}-a_{-}\right) }{2I+1}\mathbf{s}\cdot \mathbf{I}, \label{a-spin} \end{equation} where $a_{+}$ and $a_{-}$ are the eigenvalues of $a_{\mathrm{N}}\left( \mathbf{s}\right) $ for the two states of total spin $I\pm 1/2$ (see e.g. \cite{Turchin/1965}). For a sample with all nuclear species unpolarized, scattering by the $i$th isotope enters with statistical weight factors $w_{i+}=\left( I_{i}+1\right) /\left( 2I_{i}+1\right) $ and $w_{i-}=I_{i}/\left( 2I_{i}+1\right) $. The leading term in the cross section is then given by \begin{equation} \overline{\left\vert a_{\mathrm{N}}\right\vert ^{2}}=\sum_{i}c_{i}\left[ w_{i+}\left\vert a_{i+}\right\vert ^{2}+w_{i-}\left\vert a_{i-}\right\vert ^{2}\right] , \end{equation} where the bar indicates the averaging over isotopes and spin states, and $c_{i}$ stands for the relative abundance of the $i$th isotope.
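The spin-statistics weighting above is easy to illustrate numerically; the sketch below uses a single spin-$1/2$ isotope with made-up scattering lengths $a_{\pm}$ (the values are hypothetical, chosen only to show the bookkeeping):

```python
from fractions import Fraction

# Statistical weights w+ = (I+1)/(2I+1) and w- = I/(2I+1) for nuclear spin I.
def weights(I):
    I = Fraction(I)
    return (I + 1) / (2 * I + 1), I / (2 * I + 1)

w_plus, w_minus = weights(Fraction(1, 2))
print(w_plus, w_minus)  # 3/4 1/4

# Spin-averaged scattering length and mean square for one isotope (c_i = 1),
# with hypothetical eigenvalues a+ = 10 fm, a- = 4 fm.
a_plus, a_minus = 10.0, 4.0  # fm, illustrative values only
a_bar = float(w_plus) * a_plus + float(w_minus) * a_minus
a2_bar = float(w_plus) * a_plus**2 + float(w_minus) * a_minus**2
print(a_bar, a2_bar)  # 8.5 79.0
```

Note that $\overline{|a_{\mathrm{N}}|^{2}} > \overline{a_{\mathrm{N}}}^{2}$ whenever $a_{+}\neq a_{-}$; the difference is the spin-incoherent part of the cross section.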
In next-to-leading order the cross section contains interference terms between small amplitudes like $f_{\mathrm{IND}}$ and the usually dominant coherent nuclear scattering length $\overline{a_{\mathrm{N}}}$, which for unpolarized nuclei is given by \begin{equation} \overline{a_{\mathrm{N}}}=\sum_{i}c_{i}\left[ w_{i+}a_{i+}+w_{i-}a_{i-}\right] . \end{equation} Most scattering lengths $\overline{a_{\mathrm{N}}}$ are found to be positive, with typical values of a few $\mathrm{fm}$. Neutron optical measurements determine a coherent {\it bound scattering length} $\overline{b}$, related to the corresponding scattering length $\overline{a}$ of a free atom through \begin{equation} \overline{b}=\overline{a}\left( 1+\frac{m_{\mathrm{n}}}{m_{\mathrm{A}}}\right) . \end{equation} This relation includes the contributions $-\lim_{q\rightarrow 0}\overline{f_{i}}$ due to all amplitudes ($i=\mathrm{N},$ $\mathrm{POL},...$) appearing in Eq.(\ref{scattering amp}). Lacking sufficiently accurate theoretical predictions for the nuclear part, however, one cannot even extract from $\overline{b}$ the sum of all non-hadronic components, which normally contribute less than $1\%$. Instead, one needs to perform measurements for $q\neq 0$. Values for $\overline{b}$ and the total neutron scattering cross section of an atom fixed in space, $\sigma _{\mathrm{s,b}}=4\pi \overline{b}^{2}$, can be found e.g. in \cite{Rauch/2002}. For later use we quote the values for lead with natural isotopic abundances \begin{eqnarray} \overline{b} &=&\left( 9.401\pm 0.002\right) \,\mathrm{fm}, \label{values-Pb} \\ \sigma _{\mathrm{s,b}} &=&\left( 11.187\pm 0.007\right) \times 10^{-24}\,\mathrm{cm}^{2}. \nonumber \end{eqnarray} For low neutron energies one also has to take into account interference effects between the neutron waves scattered from different atoms, as e.g. in condensed-matter studies. Classical examples are Bragg scattering by single crystals and measurements of phonon dispersion relations.
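As a quick numerical cross-check of the quoted lead values, one can evaluate the coherent part $4\pi \overline{b}^{2}$ directly; it comes out slightly below the quoted total $\sigma _{\mathrm{s,b}}$, the small remainder being the incoherent (isotope and spin) contribution:

```python
import math

# Coherent part of the bound-atom cross section for natural lead,
# sigma_coh = 4*pi*b_bar^2, with b_bar = 9.401 fm as quoted in the text.
# 1 fm^2 = 1e-26 cm^2, so dividing by 100 gives units of 1e-24 cm^2 (barn).
b_bar = 9.401  # fm
sigma_coh = 4 * math.pi * b_bar**2 / 100.0  # in 1e-24 cm^2
print(round(sigma_coh, 3))  # 11.106, vs. the quoted total 11.187
```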
However, for sufficiently large momentum transfer $\mathbf{q}$ as considered here, interatomic interferences and eventual nuclear spin correlations between different atoms can be neglected. We thus consider the cross section in the center of mass system as given by \begin{equation} \frac{\mathrm{d}\sigma }{\mathrm{d}\Omega }\simeq \overline{\left\vert a_{\mathrm{N}}\right\vert ^{2}}+\left\langle \left\vert f_{\mathrm{SO}}\left( \mathbf{q},\mathbf{s}\right) \right\vert ^{2}\right\rangle +...-\sum_{j}\left\langle 2\func{Re}\left[ \overline{a_{\mathrm{N}}}f_{j}\left( \mathbf{q,s}\right) \right] \right\rangle , \label{sigma-diff} \end{equation} where the dots stand for the remaining contributions of squared amplitudes from Eq.(\ref{scattering amp}), and the sum is over $j=\mathrm{e}$, MAG, POL, SO, PV and IND. The formulas to transform this expression to the laboratory reference frame can be found in Ref. \cite{Turchin/1965}. As long as the nucleus is free to recoil, the total cross section does not change, and changes in the angular distribution of the scattered neutrons appear only at order $m_{\mathrm{n}}/m_{\mathrm{A}}$. Since we are not interested in angular distributions and will only consider atoms much heavier than the neutron, we may use the scattering cross section as given above in Eq.(\ref{sigma-diff}). \section{Scattering amplitude due to the NLQED induced electric dipole moment} We now turn to the calculation of the new amplitude $f_{\mathrm{IND}}$ due to the NLQED induced electric dipole moment $\mathbf{p}_{\mathrm{IND}}$ given in Eq.(\ref{p-ind}). The magnetic moment of the neutron, $\mathbf{m}$, can be written as \begin{equation} \mathbf{m}=\mu _{\mathrm{n}}\mathbf{\sigma }, \end{equation} where \begin{equation} \mu _{\mathrm{n}}=-9.662\times 10^{-27}\,\mathrm{A\,m}^{2}\;, \label{mag-mom-n} \end{equation} and $\mathbf{\sigma }=2\mathbf{s}$ are the Pauli matrices. From Eq.
(\ref{a-constraint}) there follows the lower bound \begin{equation} a>7.6\,\mathrm{fm}. \label{a} \end{equation} According to Eq.(\ref{pot-energy}), $\mathbf{p}_{\mathrm{IND}}$ interacts with the atomic electrostatic field, which for simplicity we consider as given by a pointlike nucleus with electric charge $Ze$, \begin{equation} \mathbf{E}_{0}=\frac{1}{4\pi \epsilon _{0}}\frac{Ze}{r^{2}}\mathbf{e}_{r}. \label{field nucleus} \end{equation} As discussed later, one can neglect electric field shielding due to the atomic electrons. Using Eqs.(\ref{field nucleus}), (\ref{Hamiltonian}) and (\ref{f}) one obtains \begin{equation} f_{\mathrm{IND}}\left( q,R,a,\beta \right) =\frac{\zeta M\mu _{0}\mu _{\mathrm{n}}^{2}Z^{2}e^{2}}{320\pi ^{4}\hbar ^{2}\epsilon _{0}^{3}a^{3}}\left[ -13\,I_{1}\left( q,R\right) +49\,I_{2}\left( q,R,\beta \right) \right] , \label{f-QED} \end{equation} where $\beta $ is the angle between $\mathbf{s}$ and $\mathbf{q}$. The two integrals $I_{1}\left( q,R\right) $ and $I_{2}\left( q,R,\beta \right) $ can easily be calculated analytically in polar coordinates with $\mathbf{q}$ along the polar axis. They are given by \begin{equation} I_{1}\left( q,R\right) =\int \frac{\exp \left( i\mathbf{q}\cdot \mathbf{r}\right) }{r^{4}}\mathrm{d}^{3}r, \label{I-1} \end{equation} and \begin{equation} I_{2}\left( q,R,\beta \right) =\int \frac{\exp \left( i\mathbf{q}\cdot \mathbf{r}\right) }{r^{4}}\left( \cos \beta \cos \theta +\sin \beta \sin \theta \cos \varphi \right) ^{2}\mathrm{d}^{3}r. \label{I-2} \end{equation} The radial integration extends over all space, excluding a sphere of radius $R$ around the nucleus. For a heavy nucleus like lead, electric fields as strong as $10^{23}\,\mathrm{Vm}^{-1}$ exist close to the nuclear surface. This exceeds by far the critical field, Eq.(\ref{E-crit}), beyond which higher order terms in the one-loop effective Lagrangian in Eq.(\ref{L-EH}) become important \cite{Dunne} and thus cannot be neglected.
Therefore, $R$ has to be much larger than the nuclear radius $R_{\mathrm{N}}$, and we choose it here as the distance from the nucleus at which the critical field is reached, i.e. \begin{equation} E_{\mathrm{c}}=\frac{1}{4\pi \epsilon _{0}}\frac{Ze}{R^{2}}\;. \label{critical field condition} \end{equation} For lead isotopes with $Z=82$ one has $R\simeq 300\,\mathrm{fm}$. The integrals Eqs.(\ref{I-1}) and (\ref{I-2}) can be solved analytically with the result \begin{eqnarray} I_{1}\left( q,R\right) &=&\frac{2\pi }{R}\left\{ \cos \left( qR\right) +\frac{\sin \left( qR\right) }{qR}+\left[ \func{Si}\left( qR\right) -\frac{\pi }{2}\right] qR\right\} \label{I-1-sol} \\[0.4cm] &\simeq &\frac{4\pi }{R}\left[ 1-\frac{\pi }{4}qR+\frac{1}{6}\left( qR\right) ^{2}-...\right] , \nonumber \end{eqnarray} where $\func{Si}\left( x\right) $ is the sine integral, and \begin{eqnarray} I_{2}\left( q,R,\beta \right) &=&\frac{\pi }{4R}\left\{ \frac{1}{\left( qR\right) ^{2}}\left[ 2+3\left( qR\right) ^{2}+\left( 6+\left( qR\right) ^{2}\right) \cos \left( 2\beta \right) \right] \cos \left( qR\right) \right. \nonumber \\ &&-\frac{1}{\left( qR\right) ^{3}}\left[ 2-3\left( qR\right) ^{2}+\left( 6-\left( qR\right) ^{2}\right) \cos \left( 2\beta \right) \right] \sin \left( qR\right) \nonumber \\ &&\left. -qR\left( 3+\cos \left( 2\beta \right) \right) \left( \frac{\pi }{2}-\func{Si}\left( qR\right) \right) \right\} \label{I-2-sol} \\[0.4cm] &\simeq &\frac{4\pi }{3R}\left[ 1-\frac{3\pi }{32}\left( 3+\cos \left( 2\beta \right) \right) qR+\frac{1}{10}\left( 2+\cos \left( 2\beta \right) \right) \left( qR\right) ^{2}-...\right] . \nonumber \end{eqnarray} Using Eqs.(\ref{I-1-sol}) and (\ref{I-2-sol}) in Eq.(\ref{f-QED}) one finds the final expression for the scattering amplitude due to the NLQED induced electric dipole moment \begin{eqnarray} f_{\mathrm{IND}}\left( q,R,a,\beta \right) &=&\frac{\zeta M\mu _{0}\mu _{\mathrm{n}}^{2}Z^{2}e^{2}}{320\pi ^{3}\hbar ^{2}\epsilon _{0}^{3}a^{3}R}\left\{ \frac{49\left[ 1+3\cos \left( 2\beta \right) \right] \left[ qR\cos \left( qR\right) -\sin \left( qR\right) \right] }{2\left( qR\right) ^{3}}\right. \label{f-final} \\ &&\left. +\frac{1}{4}\left[ 43+49\cos \left( 2\beta \right) \right] \left[ \cos \left( qR\right) +\frac{\sin \left( qR\right) }{qR}+\left( \func{Si}\left( qR\right) -\frac{\pi }{2}\right) qR\right] \right\} \nonumber \\[0.4cm] &\simeq &\frac{\zeta M\mu _{0}\mu _{\mathrm{n}}^{2}Z^{2}e^{2}}{24\pi ^{3}\hbar ^{2}\epsilon _{0}^{3}a^{3}R}\left\{ 1-\frac{3\pi }{320}\left[ 43+49\cos \left( 2\beta \right) \right] qR\right. \nonumber \\ &&\left. +\frac{1}{100}\left[ 33+49\cos \left( 2\beta \right) \right] \left( qR\right) ^{2}-...\right\} . \nonumber \end{eqnarray} The scattering amplitude $f_{\mathrm{IND}}$ exhibits a welcome, peculiar dependence on the angle $\beta $ between the neutron spin $\mathbf{s}$ and the three-momentum transfer $\mathbf{q}$. This dependence introduces an asymmetry which, for suitable experimental conditions, is essentially free from {\it background} contributions due to other well known effects. The largest effect is obtained by evaluating $f_{\mathrm{IND}}$ at $\beta =0$ and at $\beta =\pi /2$. This feature will play a crucial role in the experimental detection of the NLQED induced electric dipole moment of the neutron, as discussed in the following section.
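The closed form of $I_{1}$ and its small-$qR$ expansion can be checked against each other numerically; a minimal sketch (working in units of $1/R$, with $x=qR$):

```python
import math
from scipy.special import sici  # returns (Si(x), Ci(x))

# Closed-form I1 from Eq.(I-1-sol), in units of 1/R, with x = qR.
def I1_closed(x):
    si, _ = sici(x)
    return 2 * math.pi * (math.cos(x) + math.sin(x) / x + (si - math.pi / 2) * x)

# Small-x expansion 4*pi*(1 - pi*x/4 + x^2/6) from the same equation.
def I1_series(x):
    return 4 * math.pi * (1 - math.pi * x / 4 + x**2 / 6)

x = 0.05
rel_err = abs(I1_closed(x) - I1_series(x)) / I1_closed(x)
print(rel_err < 1e-3)  # True: the truncated series agrees to O(x^3)
```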
\section{Scattering asymmetry due to the NLQED induced electric dipole moment} We define the scattering asymmetry as \begin{equation} A\left( q,R,a\right) =\frac{\left( \mathrm{d}\sigma /\mathrm{d}\Omega \right) _{\Vert }-\left( \mathrm{d}\sigma /\mathrm{d}\Omega \right) _{\perp }}{\left( \mathrm{d}\sigma /\mathrm{d}\Omega \right) _{\Vert }+\left( \mathrm{d}\sigma /\mathrm{d}\Omega \right) _{\perp }}, \label{def-A} \end{equation} where the differential cross section, Eq.(\ref{sigma-diff}), is evaluated for two neutron polarization states $\mathbf{P}_{\Vert }$ and $\mathbf{P}_{\perp }$, parallel and perpendicular to the scattering vector $\mathbf{q}$, respectively. The interference term between the coherent nuclear amplitude and the amplitude of interest leads to \begin{equation} A_{\mathrm{IND}}\left( q,R,a\right) =\frac{4\pi }{\sigma _{\mathrm{s}}}\overline{a_{\mathrm{N}}}\left( f_{\mathrm{IND}\bot }-f_{\mathrm{IND}\Vert }\right) P, \label{A-new} \end{equation} where $f_{\mathrm{IND}\Vert }=f_{\mathrm{IND}}\left( q,R,a,0\right) $ and $f_{\mathrm{IND}\bot }=f_{\mathrm{IND}}\left( q,R,a,\pi /2\right) $, $P=\left\vert \mathbf{P}_{\Vert }\right\vert =\left\vert \mathbf{P}_{\perp }\right\vert $, and $\sigma _{\mathrm{s}}=\sigma _{\mathrm{s,b}}\,M^{2}/m_{\mathrm{n}}^{2}$ is the total scattering cross section of the free atom. We argue below that in a well-designed experiment possible influences of interference terms other than between $\overline{a_{\mathrm{N}}}$ and $f_{\mathrm{IND}}$ are negligible. Hence, $A\left( q,R,a\right) \simeq A_{\mathrm{IND}}\left( q,R,a\right) $, so that the asymmetry defined in Eq.(\ref{def-A}) should allow for a detection and determination of the new amplitude. Using the values for natural lead from Eq.(\ref{values-Pb}) in Eq.(\ref{A-new}) one obtains \begin{equation} A\left( q,R,a\right) \simeq \frac{f_{\mathrm{IND}\bot }-f_{\mathrm{IND}\Vert }}{9.5\,\mathrm{fm}}P.
\label{A-lead} \end{equation} From Eqs.(\ref{f-final}) and (\ref{A-lead}) it follows that $A\left( q,R,a\right) \propto \chi \left( qR\right) /R$, where $\chi $ is a function of the dimensionless parameter $qR$. The maximum of $A\left( q,R,a\right) $ occurs for (see Fig.~\ref{figure1}) \begin{equation} qR=1.68. \label{qR} \end{equation} Using the value of $a$ given in Eq.(\ref{a}), the asymmetry becomes \begin{equation} A\left( 5.6\times 10^{12}\,\mathrm{m}^{-1},\,300\,\mathrm{fm},\,7.6\,\mathrm{fm}\right) =1.4\times 10^{-3}P, \label{A-epi} \end{equation} a result which appears experimentally accessible. Neglecting nuclear recoil, valid in good approximation for neutron scattering off a heavy target, one may use the relation \begin{equation} q=2k\sin \frac{\Theta }{2}, \end{equation} where $\Theta $ is the angle between $\mathbf{k}$ and $\mathbf{k}^{\prime }$, and $k\equiv \left\vert \mathbf{k}\right\vert =2.197\times 10^{-4}\,\mathrm{fm}^{-1}\sqrt{E(\mbox{eV})}$, with $E$ the neutron kinetic energy in $\mathrm{eV}$. In a backscattering geometry, i.e. for $\Theta \simeq \pi $, the maximum asymmetry, Eq.(\ref{A-epi}), is obtained with epithermal neutrons of energy $E\simeq 165\,\mathrm{eV}$. \begin{figure} \centerline{\includegraphics{Figure1.eps}} \caption{The scattering asymmetry $A\left( q,\,R=300\,\mathrm{fm},\,a=7.6\,\mathrm{fm}\right) $, Eq.(\protect\ref{A-lead}), normalized to the neutron polarization $P$, as a function of the dimensionless variable $qR$.\label{figure1}} \end{figure} The result for the asymmetry depends explicitly on the radii $R$ and $a$ of spheres centered on the nucleus and on the neutron, respectively. These parameters play the role of separating the short-distance physics close to the electromagnetic sources from the long-distance effects involving fields weak enough for the Euler-Heisenberg approximation to be valid.
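The location of the maximum and the corresponding neutron energy can be reproduced numerically from the $\beta$-dependent part of Eq.(\ref{f-final}); the sketch below builds the shape function proportional to $f_{\mathrm{IND}\bot }-f_{\mathrm{IND}\Vert }$ (the $\beta$-dependence enters only through $\cos 2\beta$, which contributes a factor $\cos \pi -\cos 0=-2$ in each bracket):

```python
import math
import numpy as np
from scipy.special import sici

# Shape of the asymmetry in x = qR, proportional to f_IND(beta=pi/2) - f_IND(beta=0):
# the cos(2*beta) coefficients 3*49 and 49 each pick up a factor -2.
def shape(x):
    si = sici(x)[0]
    term1 = -147.0 * (x * math.cos(x) - math.sin(x)) / x**3
    term2 = -24.5 * (math.cos(x) + math.sin(x) / x + (si - math.pi / 2) * x)
    return term1 + term2

xs = np.linspace(0.1, 4.0, 4000)
x_max = xs[np.argmax([shape(x) for x in xs])]
print(round(x_max, 2))  # close to 1.68, cf. Eq.(qR)

# Neutron energy for backscattering (q = 2k) at the maximum, with R = 300 fm
# and k = 2.197e-4 fm^-1 sqrt(E/eV) as quoted in the text:
q = x_max / 300.0                # fm^-1
E = (q / (2 * 2.197e-4)) ** 2    # eV
print(round(E))  # roughly 165 eV
```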
In this regard it is important to point out that electric and magnetic fields induced by quantum fluctuations in the QED vacuum involve various multipolarities \cite{Dominguez/2009/1}. For instance, in the case of a neutron in an external, quasistatic electric field $\left\vert \mathbf{E}_{0}\right\vert <E_{\mathrm{c}}$, the induced electric field, Eq.(\ref{E-induced}), has a dipole-type term of order $\mathcal{O}(\left\vert \mathbf{x}\right\vert ^{-3})$, as well as a higher multipole of order $\mathcal{O}(\left\vert \mathbf{x}\right\vert ^{-6})$, while the induced magnetic field involves terms of order $\mathcal{O}(\left\vert \mathbf{x}\right\vert ^{-3})$, $\mathcal{O}(\left\vert \mathbf{x}\right\vert ^{-5})$, and $\mathcal{O}(\left\vert \mathbf{x}\right\vert ^{-9})$, with $\mathbf{x}=\mathbf{r}-\mathbf{r}_{\mathrm{n}}$. These higher order multipoles can be safely neglected in the interaction energy as long as the weak-field approximation remains valid. However, it is not clear what happens at distances closer to the nucleus or to the neutron. This problem is similar to that of the separation into far- and near-field regions around a localized charge/current distribution in classical electrodynamics, where the lowest-order multipole provides the long-distance solution. Although one cannot compute the interaction energy due to QED vacuum effects stemming from the regions $r<R$ and $\left\vert \mathbf{r}-\mathbf{r}_{\mathrm{n}}\right\vert <a$, their contribution to the asymmetry would presumably have a different dependence on $\mathbf{s}$ and $\mathbf{q}$. For large $q$ it might smear out the oscillations appearing in Fig.~\ref{figure1}. For small $q$, corresponding to small spatial resolution in probing the QED vacuum, the asymmetry should not be affected much by the short-distance physics. This statement is underlined by the fact that at leading order in $qR$ the $\beta $-dependent term in $f_{\mathrm{IND}}$, Eq.(\ref{f-final}), does not depend on $R$.
For $qR\ll 1$, one thus obtains a prediction for the asymmetry which should be robust against variations in the choice of $R$, i.e. \begin{equation} A\left( q\ll R^{-1},R,a\right) \simeq A\left( q,a\right) =\frac{49}{320}\frac{\overline{a_{\mathrm{N}}}}{\sigma _{\mathrm{s}}}\frac{\zeta M\mu _{0}\mu _{\mathrm{n}}^{2}Z^{2}e^{2}}{\pi \hbar ^{2}\epsilon _{0}^{3}a^{3}}\,q\,P\;. \label{A-small-q} \end{equation} For a neutron energy of $1\,\mathrm{eV}$ in a backscattering geometry one expects \begin{equation} A\left( 4.4\times 10^{11}\,\mathrm{m}^{-1},\,7.6\,\mathrm{fm}\right) =2.2\times 10^{-4}P\;. \label{A-hot} \end{equation} From a practical point of view, the polarization of epithermal neutrons with energies of more than $100\,\mathrm{eV}$ requires a spin filter of polarized protons. This is technically demanding if one wishes to polarize a beam with a diameter of several cm. The measured energy-dependent neutron polarization cross section of a polarized-proton spin filter for the energy range of interest may be found in Ref. \cite{Lushchikov/1970}. On the other hand, neutrons with energies up to $\sim 1\,\mathrm{eV}$ are available from a hot moderator in a reactor neutron source with much higher intensity than epithermal neutrons. They can be polarized using magnetic monochromator crystals, or by a spin filter of polarized $^{3}$He gas \cite{Heil/1998}, which has a polarization cross section proportional to $k^{-1}$. For the fluxes available at the Institut Laue-Langevin (ILL) in Grenoble, an asymmetry as in Eq.(\ref{A-hot}) appears experimentally accessible within a few days of beam time.\\ To conclude this section we stress that the neutron scattering asymmetry due to nonlinear QED has two characteristic properties which should make it rather easy to detect and distinguish from other effects. First, the asymmetry attains its maximum value for perpendicular orientations of the neutron polarization and vanishes for opposite orientations.
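The momentum transfer quoted for $1\,\mathrm{eV}$ backscattering follows directly from the kinematic relation above; a one-line check:

```python
# Momentum transfer for 1 eV neutrons in backscattering (Theta ~ pi, so q = 2k),
# using k = 2.197e-4 fm^-1 sqrt(E/eV) from the text; 1 fm^-1 = 1e15 m^-1.
E = 1.0  # eV
q_fm = 2 * 2.197e-4 * E**0.5  # fm^-1
q_m = q_fm * 1e15             # m^-1
print(q_m)  # about 4.4e11 m^-1, as used in Eq.(A-hot)
```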
This is in contrast to most ordinary asymmetries, which become maximal for opposite orientations. Second, $A_{\mathrm{IND}}\left( q,R,a\right) $ exhibits a characteristic $q$ dependence with a broad maximum around the value of $q$ given in Eq.(\ref{qR}). These features are discussed in more detail in the sequel. \section{Analysis of background asymmetries} In this section we study the contributions to the asymmetry $A\left( q,R,a\right) $, Eq.(\ref{def-A}), from the various ordinary scattering amplitudes defined in Eq.(\ref{scattering amp}). The neutron spin-dependent amplitudes can be written as \begin{equation} f\left( \mathbf{q,s}\right) =f_{0}\left( \mathbf{q}\right) +f_{1}\left( \mathbf{q}\right) \left[ \mathbf{s}\cdot \mathbf{w}\left( \mathbf{q}\right) \right] , \label{f-spin-dep} \end{equation} where $f_{0}\left( \mathbf{q}\right) $ is spin independent, and $\mathbf{w}$ is a vector not correlated with the neutron spin. In the case of the weak amplitude $f_{\mathrm{PV}}$ the vector $\mathbf{q}$ must be replaced by $\mathbf{k}$. For instance, for the nuclear amplitude in Eq.(\ref{a-spin}), $\mathbf{w}$ is independent of $\mathbf{q}$ and given by the nuclear spin $\mathbf{I}$. It can be shown in general that the terms proportional to $\langle \left( \mathbf{s}\cdot \mathbf{w}\right) ^{2}\rangle $ in the differential cross section, Eq.(\ref{sigma-diff}), are all independent of the neutron polarization and therefore cannot generate an asymmetry. In principle these terms influence the size of $A\left( q,R,a\right) $ through the total scattering cross section $\sigma _{\mathrm{s}}$, Eq.(\ref{A-new}). For scattering angles $\Theta \rightarrow 0$ the pure spin-orbit cross section, quadratic in the amplitude $f_{\mathrm{SO}}$, might become large enough to have an impact on $\sigma _{\mathrm{s}}$.
However, for sufficiently large $\Theta $, and in the absence of nuclear and electronic polarization, corrections to $\sigma _{\mathrm{s}}$ due to squared-amplitude terms can be safely neglected. The interference terms between the nuclear and the other scattering amplitudes in Eq.(\ref{sigma-diff}) may, however, affect the asymmetry $A\left( q,R,a\right) $ through their dependence on the neutron polarization. This requires careful consideration, and we start with the amplitude $f_{\mathrm{SO}}$ for spin-orbit scattering. It originates in the interaction of the neutron magnetic moment with the magnetic field present in the neutron rest frame due to its motion through the atomic electric fields. Its expression is (see e.g. \cite{Squires/1978}) \begin{equation} f_{\mathrm{SO}}\left( \mathbf{q,s}\right) =i\frac{M}{m_{\mathrm{n}}}\cot \left( \Theta /2\right) \frac{\mu _{\mathrm{n}}\mu _{0}}{2\pi \hbar }eZ\left[ 1-F\left( q\right) \right] \left( \mathbf{s}\cdot \mathbf{n}\right) , \label{f-so} \end{equation} where $eZ\left[ 1-F\left( q\right) \right] $ is the Fourier transform of the electric charge density of the atom. This term involves the nuclear charge $Z$ and the atomic form factor $F\left( q\right) $, normalized to $F\left( 0\right) =1$. This form factor is measured e.g. in X-ray scattering off atoms, and is a real function of the momentum. The unit vector $\mathbf{n}$ points along $\mathbf{k}\times \mathbf{k}^{\prime }$, so that the amplitude can contribute only if the neutron polarization has a component out of the scattering plane. The asymmetry due to the spin-orbit interaction is given by \begin{equation} A_{\mathrm{SO}}=\frac{\func{Im}\;\overline{a_{\mathrm{N}}}}{\sigma _{\mathrm{s}}}\frac{M}{m_{\mathrm{n}}}\cot \left( \Theta /2\right) \frac{\mu _{\mathrm{n}}\mu _{0}}{\hbar }eZ\left[ 1-F\left( q\right) \right] \left( \mathbf{P}_{\Vert }\cdot \mathbf{n}-\mathbf{P}_{\perp }\cdot \mathbf{n}\right) .
\label{A-so} \end{equation} The imaginary part of $\overline{a_{\mathrm{N}}}$ above is due to nuclear absorption and can be calculated using the optical theorem. Lead nuclei absorb neutrons only weakly, so that $\func{Im}\;\overline{a_{\mathrm{N}}}\simeq \left( 4\pi \right) ^{-1}k\sigma _{\mathrm{s}}$ for the neutron energies of interest here. For a scattering angle $\Theta =\pi /2$, a neutron kinetic energy of $E\simeq 330\,\mathrm{eV}$ would be required to observe the maximum asymmetry according to Eq.(\ref{A-epi}). In this case the atomic form factor $F\left( q\right) \simeq 0$, and a maximum asymmetry $A_{\mathrm{SO}}$ should be observed for $\mathbf{P}_{\perp }$ perpendicular to the scattering plane, i.e. $\mathbf{P}_{\perp }\cdot \mathbf{n}=P$ (while $\mathbf{P}_{\Vert }\cdot \mathbf{n}=0$ by definition). As a result, for $E=330\,\mathrm{eV}$ one has $A_{\mathrm{SO}}\simeq 4.8\times 10^{-4}P$, while for $E=1\,\mathrm{eV}$, together with the conservative value $F\left( q\right) =0$, the asymmetry becomes $A_{\mathrm{SO}}\simeq 2.6\times 10^{-5}P$. These values are already smaller than the corresponding values of $A_{\mathrm{IND}}$, but with suitable experimental settings they can be reduced even further. Notice that the condition $\mathbf{P}_{\perp }\cdot \mathbf{q}=0$ required for $f_{\mathrm{IND}\perp }$ can be realized for different orientations of $\mathbf{P}_{\perp }$, while also fulfilling $\mathbf{P}_{\perp }\cdot \mathbf{n}=0$. Hence, choosing a backscattering geometry, for which $\cot \left( \Theta /2\right) \ll 1$ together with $\mathbf{P}_{\perp }\cdot \mathbf{n}\simeq 0$, and using realistic assumptions about the experimental definition of the directions of $\mathbf{q}$ and $\mathbf{P}$, one can easily suppress $A_{\mathrm{SO}}$ by a factor $50$.
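The quoted value $A_{\mathrm{SO}}\simeq 4.8\times 10^{-4}P$ can be reproduced from Eq.(\ref{A-so}) with the constants given in the text; the sketch below takes $m_{\mathrm{A}}\simeq 207\,m_{\mathrm{n}}$ for natural lead (an approximation) and evaluates the magnitude of $A_{\mathrm{SO}}$ at $E=330\,\mathrm{eV}$, $\Theta =\pi /2$, $F(q)=0$, $P=1$:

```python
import math

# Magnitude of A_SO from Eq.(A-so), with Im(a_N) ~ k*sigma_s/(4*pi).
mu_n = 9.662e-27       # A m^2 (magnitude of the neutron magnetic moment)
mu_0 = 4e-7 * math.pi  # T m / A
e = 1.602e-19          # C
hbar = 1.0546e-34      # J s
Z = 82
M_over_mn = 207.0 / 208.0                 # reduced-mass ratio M/m_n for lead
sigma_sb = 11.187e-24 * 1e26              # bound cross section in fm^2
sigma_s = sigma_sb * M_over_mn**2         # free-atom cross section, fm^2

k = 2.197e-4 * math.sqrt(330.0)           # fm^-1 at E = 330 eV
Im_aN = k * sigma_s / (4 * math.pi)       # fm, optical-theorem estimate

length_fm = mu_n * mu_0 * e * Z / hbar * 1e15  # the factor (mu_n mu_0 / hbar) e Z, in fm

# cot(pi/2 / 2) = cot(pi/4) = 1; F(q) = 0; |P_perp . n| = 1
A_SO = (Im_aN / sigma_s) * M_over_mn * 1.0 * length_fm
print(round(A_SO * 1e4, 1))  # 4.8, i.e. |A_SO| ~ 4.8e-4
```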
Hence, the impact of $A_{\mathrm{SO}}$ can be kept well under the $1\%$ level.\\ Next, the amplitude $a_{\mathrm{N}}$ of the neutron-nuclear interaction in Eq.(\ref{a-spin}), when squared, gives rise to interference between its spin-dependent and spin-independent parts. After ensemble averaging this becomes proportional to $P$ and to the nuclear polarization $P_{\mathrm{N}}$. In thermal equilibrium, $P_{\mathrm{N}}=\tanh \left( \mu _{\mathrm{N}}B/\left( k_{\mathrm{B}}T\right) \right) $ for nuclei with magnetic moment $\mu _{\mathrm{N}}$ in a magnetic field $B$ ($k_{\mathrm{B}}$ is the Boltzmann constant). If the target is at room temperature, and given that no magnetic field is needed at the position of the sample, the $P$-dependent cross section is orders of magnitude too small to have an impact on the asymmetry.\\ We consider next the amplitude $f_{\mathrm{MAG}}$, which is due to the interaction of the neutron magnetic moment with the magnetic field produced by unpaired atomic electrons of paramagnetic contaminants in the sample. The operator structure of this amplitude is given by
\begin{equation}
f_{\mathrm{MAG}}\propto \mathbf{s}\cdot \left[ \mathbf{e}_{q}\times \mathbf{M}\left( \mathbf{q}\right) \times \mathbf{e}_{q}\right] ,
\label{f-mag}
\end{equation}
where $\mathbf{M}\left( \mathbf{q}\right) $ is the Fourier transform of the total (spin and orbital) magnetization of the atom, and $\mathbf{e}_{q}=\mathbf{q}/q$. The interference term with $\overline{a_{\mathrm{N}}}$ thus involves the neutron polarization and the sample-averaged magnetization.
It would only influence the asymmetry if (1) the ratio $B/T$ is sufficiently high to result in a sizable magnetization, (2) paramagnetic centers are sufficiently abundant, (3) the two neutron polarization states in the asymmetry have different projections perpendicular to the scattering plane (which is a consequence of the term in brackets in Eq.(\ref{f-mag})), and (4) measurements are performed for sufficiently small $q$, where the magnetic form factor still has a sizable value. Regarding the latter, even for the smaller value of $q$ envisaged in Eq.(\ref{A-hot}), the magnetic form factor leads to a strong suppression of $f_{\mathrm{MAG}}$. Hence, with the conditions (1), (2) and (3) under experimental control one can safely disregard magnetism as a source of an asymmetry.\\ Finally, the parity-violating amplitude $f_{\mathrm{PV}}$ due to the hadronic weak interaction may lead to a different type of asymmetry, which has indeed been observed in neutron transmission experiments. Effects depend on the neutron helicity, hence $\mathbf{w}=\mathbf{k}$ in Eq.(\ref{f-spin-dep}), with a complex coefficient to describe both parity-violating spin rotation and transmission asymmetry. The amplitude is normally so small that it requires special efforts to detect it. For thermal neutrons, transmission asymmetries for longitudinally polarized neutrons \cite{Forte/1980} have typical sizes of a few times $10^{-6}$. However, for neutron energies in the vicinity of p-wave resonances of complex nuclei a strong enhancement due to the weak nuclear interaction may appear. A prominent example is the transmission asymmetry of $7\%$ found at the p-wave resonance of $0.76\,\mathrm{eV}$ in $^{139}$La \cite{Alfimenkov/1983}. However, no effect sufficiently strong to affect the asymmetry $A_{\mathrm{IND}}$ is known for lead in the relevant energy range. In addition, an experimental test can easily be performed.
In fact, taking the neutron polarization $\mathbf{P}$ parallel and anti-parallel to $\mathbf{q}$, i.e. $\beta =0$ and $\beta =\pi $, it follows from Eqs.(\ref{f-final}) and (\ref{A-new}) that $A_{\mathrm{IND}}=0$ for these two polarization orientations. In contrast, for $f_{\mathrm{PV}}$ one has $A_{\mathrm{PV}}\propto \sin \left( \Theta /2\right) $, which could be measured separately and corrected for if the need arises. \section{Comparison of the NLQED amplitude with ordinary electric amplitudes} The electric amplitudes $f_{\mathrm{POL}}$ and $f_{\mathrm{e}}$ do not generate any known scattering asymmetry. However, owing to their characteristic $q$-dependences a comparison with the NLQED amplitude $f_{\mathrm{IND}}$ is needed. Like $f_{\mathrm{IND}}$, the amplitude $f_{\mathrm{POL}}$ due to the electric polarizability of the neutron, $\alpha _{\mathrm{n}}$, is induced by the nuclear electric field, Eq.(\ref{field nucleus}). In SI units $\alpha _{\mathrm{n}}$ is defined by $\mathbf{p}=4\pi \epsilon _{0}\alpha _{\mathrm{n}}\mathbf{E}_{0}$, so that its dimension is $\left[ \alpha _{\mathrm{n}}\right] =\mathrm{m}^{3}$. The calculation of $f_{\mathrm{POL}}$ follows from Eq.(\ref{f}) with the interaction energy given by Eq.(\ref{H-el-pol}). This leads to the integral Eq.(\ref{I-1}) with the result given in Eq.(\ref{I-1-sol}). However, the lower limit of the radial integration is now different from that for $f_{\mathrm{IND}}$. In fact, this lower limit can now be extended down to the nuclear radius $R_{\mathrm{N}}$, since for $r>R_{\mathrm{N}}$ the neutron probes only the long-range electric forces. For $r<R_{\mathrm{N}}$ the electric interaction is small in comparison with the nuclear force, so that in early calculations \cite{Barashenkov/1957}-\cite{Thaler/1959} it has simply been included in the nuclear amplitude.
In SI units and for the electric field given in Eq.(\ref{field nucleus}), the dependence of $f_{\mathrm{POL}}$ on $q$ is given by
\begin{equation}
f_{\mathrm{POL}}\left( q\right) \simeq \frac{1}{4\pi \epsilon _{0}}\frac{M}{\hbar ^{2}}\frac{Z^{2}e^{2}}{R_{\mathrm{N}}}\alpha _{\mathrm{n}}\left\{ 1-\frac{\pi }{4}qR_{\mathrm{N}}+\frac{1}{6}\left( qR_{\mathrm{N}}\right) ^{2}-...\right\} .
\label{f-QCD}
\end{equation}
The term linear in $q$ is characteristic of the $r^{-4}$ dependence of the Hamiltonian, and it also enters the interference term in the cross section. This feature has been exploited in the past to measure $\alpha _{\mathrm{n}}$. Conflicting results from experiments performed during more than three decades show that a proper assessment of all systematic errors has been difficult (see e.g. the table in Ref. \cite{Schmiedmayer/1989}). The most recent result \cite{Schmiedmayer/1991}, derived from energy-dependent neutron transmission through a $^{208}$Pb target, and reporting the smallest uncertainty, is
\begin{equation}
\alpha _{\mathrm{n,}\exp }=\left( 1.20\pm 0.15\pm 0.20\right) \times 10^{-3}\,\mathrm{fm}^{3}.
\label{alpha-exp}
\end{equation}
Calculations using quark bag models \cite{Bernard/1988} agree with this result. An early estimate of Breit and Rustgi \cite{Breit/1959} using data on pion photoproduction already indicated that $\alpha _{\mathrm{n}}<2\times 10^{-3}\,\mathrm{fm}^{3}$. These authors also analyzed other effects which might mimic a signal from the neutron electric polarizability. From an estimate of vacuum polarization effects close to the nucleus, and using the Uehling potential \cite{Uehling/1935}, they concluded that this contribution to neutron scattering can be safely neglected. Turning now to the question of shielding of the nuclear charge by the atomic electrons, one notices that this might quench the amplitude $f_{\mathrm{IND}}$.
Since $f_{\mathrm{POL}}$ and $f_{\mathrm{IND}}$ both depend quadratically on the electric field $\mathbf{E}_{0}$, one can draw parallels with the analysis of $f_{\mathrm{POL}}$. We recall that in our calculation of $f_{\mathrm{IND}}$ one needs to exclude a spherical region of radius $R$ around the nucleus inside which the weak-field expansion of the Euler-Heisenberg Lagrangian breaks down. This procedure was followed (for different reasons) in the early calculations of $f_{\mathrm{POL}}$ \cite{Barashenkov/1957}-\cite{Thaler/1959}, \cite{Breit/1959}, where the nuclear region with radius $R_{\mathrm{N}}$ was excluded. A more recent analysis, which does not rely on a simple model for the nuclear charge distribution, gives \cite{Sears/1986}
\begin{equation}
f_{\mathrm{POL}}\left( q\rightarrow 0\right) =\frac{1}{4\pi \epsilon _{0}}\sqrt{\frac{3}{\pi }}\frac{M}{\hbar ^{2}}\frac{Z^{2}e^{2}}{r_{\mathrm{N}}}\alpha _{\mathrm{n}},
\label{f-pol-q-to-0}
\end{equation}
where $r_{\mathrm{N}}$ is the root mean square charge radius of the nucleus. This shows that slow neutrons are indeed insensitive to details at this length scale. Notice that this result is nearly identical to the leading term in Eq.(\ref{f-QCD}) after replacing $R_{\mathrm{N}}$ by $r_{\mathrm{N}}$. In the derivation of Eq.(\ref{f-pol-q-to-0}) the following intermediate result was obtained in \cite{Sears/1986}:
\begin{equation}
f_{\mathrm{POL}}\left( q\rightarrow 0\right) \propto \int_{0}^{\infty }\left\vert F_{\mathrm{N}}\left( \kappa \right) -F\left( \kappa \right) \right\vert ^{2}\mathrm{d}\kappa =\int_{0}^{\infty }\left\vert F_{\mathrm{N}}\left( \kappa \right) \right\vert ^{2}\mathrm{d}\kappa \left[ 1-\mathcal{O}\left( R_{\mathrm{N}}/R_{\mathrm{A}}\right) \right] ,
\label{f with F}
\end{equation}
where $F_{\mathrm{N}}\left( \kappa \right) $ and $F\left( \kappa \right) $ are the charge form factors of the nucleus and of the electron distribution in the atom, respectively.
With $R_{\mathrm{N}}/R_{\mathrm{A}}\simeq 10^{-5}$, shielding of the nuclear charge can be neglected in $f_{\mathrm{POL}}$. Even in the limit $q\rightarrow 0$ the neutron feels the full unscreened nuclear charge as far as the electric polarizability is concerned. For the NLQED induced electric dipole moment it follows that, with $R\simeq 300\,\mathrm{fm}$ for lead, the corresponding correction term of order $\mathcal{O}\left( R/R_{\mathrm{A}}\right) $ is much larger than the one of order $\mathcal{O}\left( R_{\mathrm{N}}/R_{\mathrm{A}}\right) $. However, $R/R_{\mathrm{A}}\lesssim 10^{-2}$ is still small enough so that one can neglect shielding of the electric field in the region around the nucleus. We now compare the two amplitudes $f_{\mathrm{POL}}$ and $f_{\mathrm{IND}}$ in the limit $q\rightarrow 0$, i.e. their respective contributions to the neutron scattering length. Setting $R_{\mathrm{N}}=1.2\,\mathrm{fm}\,A^{1/3}$ and using Eq.(\ref{alpha-exp}) one can estimate the leading order term in Eq.(\ref{f-QCD}) as
\begin{equation}
f_{\mathrm{POL}}\left( q\rightarrow 0\right) \simeq 0.04\,\mathrm{fm}\,.
\label{f-pol-q0}
\end{equation}
From Eq.(\ref{f-final}) the corresponding leading term in $f_{\mathrm{IND}}$ is
\begin{equation}
f_{\mathrm{IND}}\left(q \rightarrow 0\right) =\frac{1}{4\pi \epsilon _{0}} \frac{M}{\hbar ^{2}}\frac{Z^{2}e^{2}}{R}\alpha _{\mathrm{IND}}\;,
\end{equation}
where
\begin{equation}
\alpha _{\mathrm{IND}}=\frac{\zeta \mu _{0}\mu _{\mathrm{n}}^{2}}{6\pi ^{2}\epsilon _{0}^{2}a^{3}}\simeq \frac{3.3\,\mathrm{fm}^{3}}{\left( a\left[ \mathrm{fm}\right] \right) ^{3}}\,.
\end{equation}
With the value of $a$ from Eq.(\ref{a-constraint}) one obtains
\begin{equation}
\alpha _{\mathrm{IND}}=7.5\times 10^{-3}\,\mathrm{fm}^{3},
\end{equation}
which is larger than $\alpha _{\mathrm{n}}$, Eq.(\ref{alpha-exp}).
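Both numbers can be reproduced with a quick back-of-the-envelope evaluation. A sketch in Python, using nuclear-physics units ($\hbar c\simeq 197.33$ MeV fm, $e^{2}/4\pi\epsilon_{0}\simeq 1.44$ MeV fm) and approximating $M$ by the neutron rest mass:

```python
# Units: lengths in fm, energies in MeV.
hbar_c   = 197.327   # hbar*c [MeV fm]
e2_4pie0 = 1.440     # e^2/(4 pi eps0) [MeV fm]
M_c2     = 939.565   # neutron mass [MeV]; reduced mass ~ m_n for heavy Pb
Z, A     = 82, 208   # lead

# Leading term of Eq. (f-QCD): f_POL(q->0) = (1/4pi eps0)(M/hbar^2)(Z^2 e^2/R_N) alpha_n
R_N     = 1.2 * A ** (1 / 3)   # nuclear radius [fm]
alpha_n = 1.20e-3              # electric polarizability [fm^3], Eq. (alpha-exp)
f_pol_0 = (M_c2 / hbar_c**2) * Z**2 * e2_4pie0 / R_N * alpha_n
print(f"f_POL(q->0) ~ {f_pol_0:.3f} fm")    # ~0.04 fm

# alpha_IND ~ 3.3 fm^3 / (a [fm])^3 with a = 7.6 fm
alpha_ind = 3.3 / 7.6**3
print(f"alpha_IND ~ {alpha_ind:.1e} fm^3")  # ~7.5e-3 fm^3
```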
However, the impact of this polarizability on the amplitude $f_{\mathrm{IND}}$ for $q \rightarrow 0$ is suppressed with respect to $f_{\mathrm{POL}}$ due to $R\gg R_{\mathrm{N}}$. In fact, for lead one obtains
\begin{equation}
f_{\mathrm{IND}}\left(q \rightarrow 0\right) \simeq 0.006\,\mathrm{fm}\;.
\end{equation}
This result, though, is not even an order of magnitude smaller than $f_{\mathrm{POL}}\left( q\rightarrow 0\right) $, Eq.(\ref{f-pol-q0}). It would thus contribute about $5\times 10^{-11}\,\mathrm{eV}$ to the neutron optical potential of solid lead, which is quite substantial given the high precision of some neutron optical methods. The contribution of the NLQED electric dipole moment to the total cross section is $\sigma _{\mathrm{IND}}\simeq -8\pi \overline{a}f_{\mathrm{IND}}\left( q\rightarrow 0\right) $, where $\overline{a_{\mathrm{N}}}\simeq \overline{a}$ in Eq.(\ref{sigma-diff}) has been used. Numerically this becomes
\begin{equation}
\sigma _{\mathrm{IND}}\left( q\rightarrow 0\right) =-0.014\times 10^{-24}\,\mathrm{cm}^{2}.
\end{equation}
\begin{figure}
\centerline{\includegraphics{Figure2.eps}}
\caption{Amplitude $\left\langle f_{\mathrm{IND}}\left( q, R=300\,\mathrm{fm}, a=7.6\,\mathrm{fm}\right) \right\rangle $ for unpolarized neutrons, Eq.(\protect\ref{f_IND-av}), as a function of the dimensionless variable $qR$.\label{figure2}}
\end{figure}
We discuss next the amplitude $f_{\mathrm{e}}$ which describes the interaction of the electric charges of the atom with the internal charge distribution of the neutron as characterized by its mean squared charge radius.
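The same bookkeeping reproduces the NLQED numbers. In the sketch below, $\overline{a}\simeq 9.4\,\mathrm{fm}$ for natural lead is an assumed input (it is not quoted in the text above):

```python
import math

# Units: lengths in fm (100 fm^2 = 1 barn), energies in MeV.
hbar_c   = 197.327   # hbar*c [MeV fm]
e2_4pie0 = 1.440     # e^2/(4 pi eps0) [MeV fm]
M_c2     = 939.565   # neutron mass [MeV]
Z        = 82        # lead

R         = 300.0    # fm, exclusion radius for lead
alpha_ind = 7.5e-3   # fm^3, NLQED induced polarizability
f_ind_0   = (M_c2 / hbar_c**2) * Z**2 * e2_4pie0 / R * alpha_ind
print(f"f_IND(q->0) ~ {f_ind_0:.4f} fm")          # ~0.006 fm

a_bar     = 9.4      # fm, ASSUMED coherent scattering length of natural lead
sigma_ind = -8 * math.pi * a_bar * f_ind_0        # fm^2
print(f"sigma_IND ~ {sigma_ind / 100:.3f} barn")  # ~ -0.014 barn = -0.014e-24 cm^2
```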
The amplitude for a bound nucleus is given by $f_{\mathrm{e}}=-b_{\mathrm{e}}Z\left[ 1-F\left( q\right) \right] $, with the atomic form factor $F\left( q\right) $ given in Eqs.(\ref{f-so}) and (\ref{f with F}), and the neutron-electron scattering length $b_{\mathrm{e}}\simeq -1.35\times 10^{-3}\,\mathrm{fm}$ as determined from measurements of the total cross sections of lead and bismuth at different neutron energies \cite{Koester/1976}-\cite{Kopecky/1997}. For lead and sufficiently large $q$, $f_{\mathrm{e}}=0.11\,\mathrm{fm}$, which leads to a contribution to the total scattering cross section \cite{Sears/1986} $\sigma _{\mathrm{e}}\simeq -8\pi \overline{a}b_{\mathrm{e}}Z\simeq 0.25\times 10^{-24}\,\mathrm{cm}^{2}$. It is also interesting to notice that since $f_{\mathrm{e}}\left( q\rightarrow 0\right) \rightarrow 0$ this amplitude does not contribute to the neutron scattering length. The atomic form factor changes significantly at small $q$, where interference of neutron waves from different atoms cannot be neglected. Hence, the macroscopic state of the sample enters crucially in the analysis of scattering data. In contrast, in the case of $f_{\mathrm{IND}}$, where larger effects show up at much higher values of $q$, interatomic interferences do not play any significant role.\\ To conclude this section we discuss the contribution of $f_{\mathrm{IND}}$ as a potential background in measurements of the amplitudes $f_{\mathrm{POL}}$ and $f_{\mathrm{e}}$ performed with unpolarized neutrons.
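The $f_{\mathrm{e}}$ and $\sigma_{\mathrm{e}}$ values quoted above can be checked in the same way (again with an assumed $\overline{a}\simeq 9.4\,\mathrm{fm}$ for lead, which is not quoted in the text):

```python
import math

b_e   = -1.35e-3   # fm, neutron-electron scattering length
Z     = 82         # lead
a_bar = 9.4        # fm, ASSUMED coherent scattering length of natural lead

f_e_large_q = -b_e * Z                    # F(q) -> 0 at sufficiently large q
sigma_e = -8 * math.pi * a_bar * b_e * Z  # fm^2; 100 fm^2 = 1 barn
print(f"f_e(large q) ~ {f_e_large_q:.2f} fm")  # ~0.11 fm
print(f"sigma_e ~ {sigma_e / 100:.2f} barn")   # ~0.26 barn (quoted: ~0.25e-24 cm^2)
```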
For the polarization averaged $f_{\mathrm{IND}}$ one obtains
\begin{eqnarray}
\left\langle f_{\mathrm{IND}}\left( q,R,a\right) \right\rangle &=&\frac{1}{2}\int_{0}^{\pi }f_{\mathrm{IND}}\left( q,R,a,\beta \right)\, \sin \beta \;\mathrm{d}\beta \nonumber \\
&=&\frac{\zeta M\mu _{0}\mu _{\mathrm{n}}^{2}Z^{2}e^{2}}{48\pi ^{3}\hbar ^{2}\epsilon _{0}^{3}a^{3}R}\left\{ \cos \left( qR\right) +\frac{\sin \left( qR\right) }{qR}+\left[ \func{Si}\left( qR\right) -\frac{\pi }{2}\right] qR\right\} .
\label{f_IND-av}
\end{eqnarray}
To analyze the impact on $f_{\mathrm{e}}$ one may approximate relativistic Hartree-Fock results for $F\left( q\right) $ by the simple function $\left[ 1+3\left( q/q_{0}\right) ^{2}\right] ^{-1/2}$ \cite{Sears/1986}. With $F\left( q_{0}\right) =1/2$ the momentum $q_{0}$ provides a scale at which significant changes in $f_{\mathrm{e}}$ take place. For lead, $q_{0}=8.3\times 10^{10}\,\mathrm{m}^{-1}$, and
\begin{equation}
f_{\mathrm{e}}\left( q_{0}\right) -f_{\mathrm{e}}\left( 0\right) =50\times 10^{-3}\,\mathrm{fm}.
\end{equation}
In contrast, the change of $\left\langle f_{\mathrm{IND}}\right\rangle $ is much smaller due to its milder $q$-dependence and its smaller magnitude (see Fig.~\ref{figure2}),
\begin{equation}
\left\langle f_{\mathrm{IND}}\left( q_{0}\right) \right\rangle -\left\langle f_{\mathrm{IND}}\left( 0\right) \right\rangle =-0.11\times 10^{-3}\,\mathrm{fm}.
\end{equation}
Since the precision of the best measurements of $b_{\mathrm{e}}$ is at the level of a few percent, potential background due to $\left\langle f_{\mathrm{IND}}\right\rangle $ is negligible. A similar argument leads to the same conclusion for the determination of $\alpha _{\mathrm{n}}$ from $f_{\mathrm{POL}}$. \section{Conclusions} Many, if not most, proposals to detect nonlinear effects due to quantum fluctuations in the QED vacuum rely on experiments involving lasers of ultra-high intensities \cite{Schwinger/1951}-\cite{Fried/1966}.
These intensities, though, are at least two orders of magnitude beyond what is currently achievable. An alternative approach has been discussed in this paper, based on the theoretical prediction of an induced electric dipole moment of the neutron, ${\mathbf{p}}_{\mathrm{IND}}$, in an external quasistatic electric field \cite{Dominguez/2009}. The peculiar features of this dipole moment, particularly its dependence on the angle between ${\mathbf{p}}_{\mathrm{IND}}$ and the neutron spin, suggest the definition of an asymmetry which could be detected in the scattering of polarized neutrons from heavy nuclei. We have introduced this asymmetry and discussed all possible sources of background asymmetries. We have also compared the new NLQED amplitude with ordinary electric scattering amplitudes, particularly the one due to the polarization of the neutron in an electric field arising from its quark substructure. The conclusion from this detailed analysis is that the asymmetry due to NLQED should be observable using epithermal neutrons, and even using thermalized neutrons from a hot moderator. This would be the first ever experimental confirmation of nonlinearity in electrodynamics due to QED vacuum fluctuations. The numerical predictions for the asymmetry made in this paper were calculated using definite values for the parameters $R$ and $a$. These were derived from the condition that the electric and magnetic fields should be below their critical values, beyond which the weak-field expansion of the effective Lagrangian breaks down. While the value of the asymmetry $A$ for small $q$ does not depend on $R$, it does depend on $a$, as seen from Eq.(\ref{Hamiltonian}). Hence, the numerical results given here should be correct up to a numerical factor of order one.
\section{Acknowledgments} This work was supported in part by FONDECYT 1095217 (Chile), Proyecto Anillos ACT119 (Chile), by CONICET (PIP 01787) (Argentina), ANPCyT (PICT 00909)(Argentina), and UNLP (Proy.~11/X492) (Argentina), NRF (South Africa), and National Institute for Theoretical Physics (South Africa).
\section{Decomposition into 1-sparse matrices} \label{app:1sparseproof} In \cite{Aharonov2003}, Aharonov and Ta-Shma considered the problem of simulating an arbitrary $d$-sparse Hamiltonian using the ability to query bits of the Hamiltonian. According to their prescription, we should imagine the Hamiltonian as an undirected graph where each basis state corresponds to a node and each nonzero matrix element $H^{\alpha\beta} = H^{\beta\alpha*} \neq 0$ corresponds to an edge which connects node $\ket{\alpha}$ to $\ket{\beta}$. Since an edge coloring of a graph using $\Gamma$ colors is equivalent to the division of that graph into $\Gamma$ sets of disjoint graphs of degree 1, this edge coloring represents a decomposition of the Hamiltonian into $\Gamma$ 1-sparse matrices. Aharonov and Ta-Shma show a procedure for accomplishing the 1-sparse decomposition of any arbitrary $d$-sparse matrix using $\Theta(d^2)$ terms by coloring an arbitrary graph of degree $d$ with $\Theta(d^2)$ colors. This result was tightened from $\Theta(d^2)$ terms to $d^2$ terms in \cite{Berry2013}. Importantly, Aharonov and Ta-Shma also showed how these Hamiltonians can be efficiently simulated using an oracular scheme based on the Trotter-Suzuki decomposition. Toloui and Love used this result to show how the CI matrix can be efficiently simulated under Trotter-Suzuki decomposition with ${\cal O}(N^4)$ colors \cite{Toloui2013}. We provide an improved 1-sparse decomposition into ${\cal O}(\eta^2 N^2)$ terms. For convenience of notation, we denote the occupied spin-orbitals for $\ket{\alpha}$ by $\alpha_1,\ldots,\alpha_\eta$, and the occupied spin-orbitals for $\ket{\beta}$ by $\beta_1,\ldots,\beta_\eta$. We also drop the bra-ket notation for the lists of orbitals (Slater determinants); that is, we denote the list of occupied orbitals for the left portion of the graph by $\alpha$, and the list of occupied orbitals for the right portion of the graph by $\beta$. 
We require both these lists of spin-orbitals to be sorted in ascending order. According to the Slater-Condon rules, the matrix element between two Slater determinants is zero unless the determinants differ by two spin-orbitals or less. Thus, two vertices (Slater determinants) in the Hamiltonian graph are connected if and only if they differ by a single occupied orbital or two occupied orbitals. In order to obtain the decomposition, for each color (corresponding to one of the resulting 1-sparse matrices) we need to be able to obtain $\beta$ from $\alpha$, and vice versa. Using the approach in \cite{Berry2013}, we take the tensor product of the Hamiltonian with a $\sigma_x$ operator. That is, we perform the simulation under the Hamiltonian $\sigma_x\otimes H$, which is bipartite and has the same sparsity as $H$. The $\sigma_x$ operator acts on the ancilla register that determines whether we are in the left ($\alpha$) or right ($\beta$) partition of the graph. We do this without loss of generality as simulation under $H$ can be recovered from simulation under $\sigma_x\otimes H$ using the fact that $e^{-i(\sigma_x\otimes H)t}\ket{+}\ket{\psi}=\ket{+}e^{-iHt}\ket{\psi}$ \cite{Berry2013}. In order for the graph coloring to be suitable for the quantum algorithm, for any given color we must have a procedure for obtaining $\beta$ given $\alpha$, and another procedure for obtaining $\alpha$ given $\beta$. For this to be a valid graph coloring, the procedure must be reversible, and different colors must not give the same $\beta$ from $\alpha$ or vice versa. To explain the decomposition, we will first consider how it works for $\alpha$ and $\beta$ differing by only a single spin-orbital occupation. We are given a 4-tuple $(a,b,\ell,p)$, where $a$ and $b$ are bits, $\ell$ is a number in the sorted list of occupied orbitals, and $p$ is a number that tells us how many orbitals the starting orbital is shifted by. 
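The Slater-Condon connectivity rule stated above (two determinants are joined by an edge only if they differ in at most two occupied spin-orbitals) is straightforward to test on the sorted orbital lists. A minimal Python sketch; the tuple encoding and function names are our own:

```python
def differing_orbitals(alpha, beta):
    """Number of occupied spin-orbitals in which two determinants differ,
    each determinant given as a sorted tuple of occupied orbital indices."""
    return len(set(alpha) ^ set(beta)) // 2

def connected(alpha, beta):
    """Slater-Condon rule: the matrix element can be nonzero only if the
    determinants differ by at most two occupied spin-orbitals."""
    return differing_orbitals(alpha, beta) <= 2

print(connected((1, 3, 5), (1, 3, 7)))  # differ by one orbital -> True
print(connected((1, 3, 5), (2, 4, 6)))  # differ by three orbitals -> False
```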
Our notation here differs slightly from that in \sec{decomp1}, where $i$ and $j$ were used in place of $\ell$ to represent the positions of the two orbitals which differed: here we will use $i$ and $j$ for a different purpose. To simplify the discussion, we do not perform the addition modulo $N$, and instead achieve the same effect by allowing $p$ to take positive and negative values. If adding $p$ takes us beyond the list of allowable orbitals, then the matrix element returned is zero, and the list of occupied orbitals is unchanged (corresponding to a diagonal element of the Hamiltonian). We will also use the convention that $\alpha_0=\beta_0=0$ and $\alpha_{\eta+1}=\beta_{\eta+1}=N+1$. These values are not explicitly stored, but rather are dummy values to use in the algorithm when $\ell$ goes beyond the range $1,\ldots,\eta$. The register $a$ tells us whether $\ell$ indexes a position in $\alpha$ or in $\beta$. To simplify the discussion, when $a=0$ we take $i=\ell$, and when $a=1$ we take $j=\ell$. In either case, we require that $\beta_j=\alpha_i+p$, but in the case $a=0$ we are given $i$ and need to work out $j$, whereas in the case $a=1$ we are given $j$ and need to work out $i$. In particular, for $a=0$ we just take $\alpha_i$ and add $p$ to it. Then $j$ is the new position in the list $\beta$, so that $\beta_j=\alpha_i+p$. The general principle is that, if we are given $i$ for $\alpha$ and need to determine $j$ for $\beta$, we require that $\beta_{j+1}-\beta_{j-1} \ge \alpha_{i+1}-\alpha_{i-1}$ (i.e.\ the spacing between orbitals is larger in $\beta$ than in $\alpha$). Alternatively, if we were given $j$ for $\beta$ and needed to determine a corresponding $i$ for $\alpha$, we would require $\beta_{j+1}-\beta_{j-1} < \alpha_{i+1}-\alpha_{i-1}$ (i.e.\ the spacing between orbitals is larger in $\alpha$ than in $\beta$).
If the inequality is not consistent with the value of $a$ (i.e.\ we are proceeding in the wrong direction), then the matrix element for this term in the decomposition is taken to be zero (in the graph there is no line of that color connecting the nodes). This procedure allows for a unique connection between nodes, without double counting. The reason for requiring these inequalities is that the list of orbitals with a larger spacing will have less ambiguity in the order of occupied orbitals. To reduce the number of terms in the decomposition, we are only given $i$ or $j$, but not both, so we either need to be able to determine $j$ from $i$ given $\beta$, or $i$ from $j$ given $\alpha$. When the spacing between the occupied orbitals for $\beta$ is larger, if we are given $\beta$ and $i$ there is less ambiguity in determining $j$. In particular, when $\beta_{j+1}-\beta_{j-1} \ge \alpha_{i+1}-\alpha_{i-1}$, there can be at most two values of $j$ that could have come from $i$, and the bit $b$ is then used to distinguish between them. There are four different cases that we need to consider. \begin{enumerate} \item We are given $\beta$ and need to determine $\alpha$; $a=0$. \item We are given $\alpha$ and need to determine $\beta$; $a=0$. \item We are given $\alpha$ and need to determine $\beta$; $a=1$. \item We are given $\beta$ and need to determine $\alpha$; $a=1$. \end{enumerate} Next we explain the procedure for each of these cases in detail. In the following we use the terminology ``\textsc{invalid}'' to indicate that we need to return $\alpha=\beta$ and a matrix element of zero. \\ \\ \textbf{1. Given $\beta$ and need to determine $\alpha$; $a=0$.} \\ We are given $\beta$, but $\ell$ is the position in the list of occupied orbitals for $\alpha$. We do not know which is the $\beta_j$ to subtract $p$ from, so we loop through all values as follows to find a list of candidates for $\alpha$, $\tilde\alpha^{(k)}$. We define this as a procedure so we can use it later. 
\\ \\ procedure FindAlphas \begin{adjustwidth}{1em}{0em} $k=0$ \\ For $j=1,\ldots,\eta$: \begin{adjustwidth}{1em}{0em} Subtract $p$ from $\beta_j$ and check that this yields a valid list of orbitals, in that $\beta_j-p$ does not yield an orbital number beyond the desired range, or duplicate another orbital. That is: \\ If $(( \beta_j-p\in\{1,\ldots,N\}) \wedge (\forall j'\in\{1,\ldots,\eta\} : \beta_j-p \ne \beta_{j'})) {\,\vee\, (p=0) }$ then \begin{adjustwidth}{1em}{0em} Sort the list of orbitals to obtain $\tilde\alpha^{(k)}$, and denote by $i$ the new position of $\beta_j-p$ in this list of occupied orbitals. Check that the new value of $i$ corresponds to $\ell$, and that the spacing condition for $a=0$ is satisfied, as follows. \\ If $(i=\ell)\wedge (\beta_{j+1}-\beta_{j-1} \ge \tilde\alpha^{(k)}_{i+1}-\tilde\alpha^{(k)}_{i-1})$ then \begin{adjustwidth}{1em}{0em} $k=k+1$ \end{adjustwidth} end if \end{adjustwidth} end if \end{adjustwidth} end for \end{adjustwidth} end procedure\\ \\ After this procedure there is a list of at most two candidates for $\alpha$, and $k$ will correspond to how many have been found. Depending on the value of $k$ we perform the following: \\ $\mathbf{k=0}$ We return \textsc{invalid}. \\ $\mathbf{k=1}$ If $b=0$ then return $\alpha=\tilde\alpha^{(0)}$, else return \textsc{invalid}. \\ $\mathbf{k=2}$ Return $\alpha=\tilde\alpha^{(b)}$. \\ That is, if we have two possibilities for $\alpha$, then we use $b$ to choose between them. If there is only one, then we only return that one if $b=0$ to avoid obtaining two colors that both link $\alpha$ and $\beta$. \\ \\ \textbf{2. Given $\alpha$ and need to determine $\beta$; $a=0$.} \\ We are given $\alpha$, and $\ell=i$ is the position of the occupied orbital in $\alpha$ that is changed. We therefore add $p$ to $\alpha_i$ and check that it gives a valid list of orbitals. Not only this, we need to check that we would obtain $\alpha$ if we work backwards from the resulting $\beta$. 
\\ If $((\alpha_i+p\in\{1,\ldots,N\}) \wedge (\forall i'\in\{1,\ldots,\eta\} : \alpha_i+p\ne\alpha_{i'})) {\,\vee\, (p=0) }$ then \begin{adjustwidth}{1em}{0em} We sort the new list of occupied orbitals to obtain a candidate for $\beta$, denoted $\tilde\beta$. We next check that the spacing condition for $a=0$ is satisfied. \\ If $(\tilde\beta_{j+1}-\tilde\beta_{j-1} \ge \alpha_{i+1}-\alpha_{i-1})$ then \begin{adjustwidth}{1em}{0em} Perform the procedure FindAlphas to find potential candidates for $\alpha$ that could be obtained from $\tilde\beta$. There can only be 1 or 2 candidates returned from this procedure. \\ If $((k=1)\wedge (b=0)) \vee ((k=2)\wedge (\alpha=\tilde\alpha^{(b)}))$ then \begin{adjustwidth}{1em}{0em} return $\beta=\tilde\beta$ \end{adjustwidth} else return \textsc{invalid} \end{adjustwidth} else return \textsc{invalid} \end{adjustwidth} else return \textsc{invalid} \\ \\ \textbf{3. Given $\alpha$ and need to determine $\beta$; $a=1$.} \\ This case is closely analogous to the case where we need to determine $\alpha$ from $\beta$, but $a=0$. We are given $\alpha$, but $\ell$ is the position in the list of occupied orbitals for $\beta$. We do not know which is the $\alpha_i$ to add $p$ to, so we loop through all values as follows to find a list of candidates for $\beta$, $\tilde\beta^{(k)}$. We define this as a procedure so we can use it later. \\ \\ procedure FindBetas \begin{adjustwidth}{1em}{0em} $k=0$ \\ For $i=1,\ldots,\eta$: \begin{adjustwidth}{1em}{0em} Add $p$ to $\alpha_i$ and check that this yields a valid list of orbitals, in that $\alpha_i+p$ does not yield an orbital number beyond the desired range, or duplicate another orbital. That is: \\ If $((\alpha_i+p\in\{1,\ldots,N\}) \wedge (\forall i'\in\{1,\ldots,\eta\} : \alpha_i+p\ne\alpha_{i'})) { \,\vee\, (p=0) }$ then \begin{adjustwidth}{1em}{0em} Sort the list of orbitals to obtain $\tilde\beta^{(k)}$, and denote by $j$ the new position of $\alpha_i+p$ in this list of occupied orbitals. 
Check that the new value of $j$ corresponds to $\ell$, and that the spacing condition for $a=1$ is satisfied. \\ If $(j=\ell)\wedge (\tilde\beta^{(k)}_{j+1}-\tilde\beta^{(k)}_{j-1} < \alpha_{i+1}-\alpha_{i-1})$ then \begin{adjustwidth}{1em}{0em} $k=k+1$ \end{adjustwidth} end if \end{adjustwidth} end if \end{adjustwidth} end for \end{adjustwidth} end procedure\\ \\ After this procedure there is a list of at most two candidates for $\beta$, and $k$ will correspond to how many have been found. Depending on the value of $k$ we perform the following: \\ $\mathbf{k=0}$ We return \textsc{invalid}. \\ $\mathbf{k=1}$ If $b=0$ then return $\beta=\tilde\beta^{(0)}$, else return \textsc{invalid}. \\ $\mathbf{k=2}$ Return $\beta=\tilde\beta^{(b)}$. \\ That is, if we have two possibilities for $\beta$, then we use $b$ to choose between them. If there is only one, then we only return that one if $b=0$ to avoid obtaining two colors that both link $\alpha$ and $\beta$. \\ \\ \textbf{4. Given $\beta$ and need to determine $\alpha$; $a=1$.} \\ We are given $\beta$, and $\ell=j$ is the position of the occupied orbital in $\beta$ that is changed. We therefore subtract $p$ from $\beta_j$ and check that it gives a valid list of orbitals. Again we also need to check consistency. That is, we work back again from the $\alpha$ to check that we correctly obtain $\beta$. \\ If $ ((\beta_j-p\in\{1,\ldots,N\}) \wedge (\forall j'\in\{1,\ldots,\eta\} : \beta_j-p\ne\beta_{j'})) { \,\vee\, (p=0) }$ then \begin{adjustwidth}{1em}{0em} We sort the new list of occupied orbitals to obtain a candidate for $\alpha$, denoted $\tilde\alpha$. We next check that the spacing condition for $a=1$ is satisfied. \\ If $(\beta_{j+1}-\beta_{j-1} < \tilde\alpha_{i+1}-\tilde\alpha_{i-1})$ then \begin{adjustwidth}{1em}{0em} Perform the procedure FindBetas to find potential candidates for $\beta$ that could be obtained from $\tilde\alpha$. There can only be 1 or 2 candidates returned from this procedure. 
\\ If $((k=1)\wedge (b=0)) \vee ((k=2)\wedge (\beta=\tilde\beta^{(b)}))$ then \begin{adjustwidth}{1em}{0em} return $\alpha=\tilde\alpha$ \end{adjustwidth} else return \textsc{invalid} \end{adjustwidth} else return \textsc{invalid} \end{adjustwidth} else return \textsc{invalid}\\ To prove that this technique gives a valid coloring, we need to show that it is reversible and unique. The most important part to show is that, provided the spacing condition holds, the ambiguity is limited to two candidates that may be resolved by the bit $b$. We will consider the case that $p>0$; the analysis for $p<0$ is equivalent. Consider Case 1, where we are given $\beta$ and need to determine $\alpha$, but $a=0$. Then we take $i=\ell$, and need to determine $j$. Let $j'$ and $j''$ be two potential values of $j$, with $j'<j''$. For these to be potential values of $j$, they must satisfy \begin{align}\label{con1} \beta_{j'}-p &\in (\alpha_{i-1},\alpha_{i+1}) , \\ \label{con2} \beta_{j'+1}-\beta_{j'-1} &\ge \alpha_{i+1}-\alpha_{i-1} , \\ \label{con3} \beta_{j''}-p &\in (\alpha_{i-1},\alpha_{i+1}) , \\ \label{con4} \beta_{j''+1}-\beta_{j''-1} &\ge \alpha_{i+1}-\alpha_{i-1}. \end{align} Condition \eqref{con1} is required because, for $j'$ to be a potential value of $j$, $\beta_{j'}-p$ must correspond to an $\alpha_i$ that is between $\alpha_{i-1}$ and $\alpha_{i+1}$ ($\alpha$ is sorted in ascending order). Condition \eqref{con2} is the spacing condition for $a=0$. Conditions \eqref{con3} and \eqref{con4} are simply the equivalent conditions for $j''$. Next we consider how $\alpha$ is found from $\beta$. In the case where $j'=i$, then we immediately know that $\alpha_{i-1}=\beta_{i-1}$ and $\alpha_{i+1}=\beta_{i+1}$. Then the conditions \eqref{con1} and \eqref{con2} become \begin{align} \label{con5} \beta_{j'}-p &\in (\beta_{i-1},\beta_{i+1}), \\ \label{con6} \beta_{j'+1}-\beta_{j'-1} &\ge \beta_{i+1}-\beta_{i-1}. 
\end{align} In the case that $j'>i$, it is clear that $\alpha_{i-1}=\beta_{i-1}$ still holds. Moreover, in going from the sequence of occupied orbitals for $\alpha$ to the sequence for $\beta$, we have then removed $\alpha_i$, which means that $\alpha_{i+1}$ has moved to position $i$. That is to say, $\beta_{i}$ must be equal to $\alpha_{i+1}$. Therefore, conditions \eqref{con1} and \eqref{con2} become \begin{align} \label{con9} \beta_{j'}-p &\in (\beta_{i-1},\beta_i), \\ \label{con10} \beta_{j'+1}-\beta_{j'-1} &\ge \beta_i-\beta_{i-1}. \end{align} In either case ($j'=i$ or $j'>i$), because $j''>j'$, we know that $j''>i$. Then the same considerations as for $j'>i$ hold, and conditions \eqref{con3} and \eqref{con4} become \begin{align} \label{con7} \beta_{j''}-p &\in (\beta_{i-1},\beta_i), \\ \label{con8} \beta_{j''+1}-\beta_{j''-1} &\ge \beta_i-\beta_{i-1}. \end{align} Using \eqref{con8} we have \begin{align} \beta_{j''+1}-p &\ge \beta_{j''-1}-p + \beta_i-\beta_{i-1} \nonumber \\ &\ge \beta_{j'}-p + \beta_i-\beta_{i-1} \nonumber \\ &> \beta_{i-1}+ \beta_i-\beta_{i-1} \nonumber \\ &= \beta_i. \end{align} In the second-last line we have used $\beta_{j'}-p>\beta_{i-1}$ from \eqref{con5} and \eqref{con9}, and in the second line we have used $j''>j'$. The inequality $\beta_{j''+1}-p>\beta_i$ means that $\beta_{j''+1}-p\notin(\beta_{i-1},\beta_i)$, and therefore $\beta_{j''+1}$ could not have come from $\alpha_i$ by adding $p$. That is because $\beta_{j''+1}$ would have to satisfy a relation similar to \eqref{con7}. In turn, any $j>j''+1$ will satisfy $\beta_j-p>\beta_i$, because the $\beta_k$ are sorted in ascending order. The net result of this reasoning is that, if there are two ambiguous values of $j$, then there can be no third ambiguous value. This is because, if we call the first two ambiguous values $j'$ and $j''$, there can be no more ambiguous values for $j>j''$. 
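The counting argument above can be checked directly. The following sketch (our own illustration, not a procedure from the text; positions are 0-based, with sentinel orbitals $0$ and $N+1$ at the ends) enumerates every position $j$ in $\beta$ that satisfies the conditions for Case 1 with $p>0$:

```python
def find_alpha_candidates(beta, p, i, N):
    """Case 1 (a=0): given the sorted occupied list beta, a shift p > 0 and
    the position i (0-based) of the changed orbital in alpha, return every
    valid alpha; the argument above shows there are at most two."""
    candidates = []
    for j in range(len(beta)):
        val = beta[j] - p                     # undo the shift at position j
        rest = beta[:j] + beta[j + 1:]
        if val < 1 or val in rest:
            continue                          # not a valid orbital list
        alpha = sorted(rest + [val])
        if alpha.index(val) != i:
            continue                          # changed orbital must sit at position i
        # spacing condition for a=0, with sentinels 0 and N+1 at the ends
        bpad = [0] + beta + [N + 1]
        apad = [0] + alpha + [N + 1]
        if bpad[j + 2] - bpad[j] >= apad[i + 2] - apad[i]:
            candidates.append(alpha)
    return candidates
```

In random tests the number of candidates never exceeds two, consistent with the argument above; the bit $b$ then selects among them.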
Hence, if we have a bit $b$ which tells us which of the two ambiguous values to choose, then it resolves the ambiguity and enables us to unambiguously determine $\alpha$, given $\beta$, $p$, and $i$. Next consider Case 3, where we wish to determine $\beta$ from $\alpha$, but $a=1$. In that case, we take $j=\ell$, and need to determine $i$. That is, we wish to determine a value of $i$ such that adding $p$ to $\alpha_i$ gives $\beta_j$, and also require the condition $\beta_{j+1}-\beta_{j-1} < \alpha_{i+1}-\alpha_{i-1}$. Now the situation is reversed; if we start with $\beta$, then we can immediately determine $\alpha$, but if we have $\alpha$ then we potentially need to consider multiple values of $i$ and resolve an ambiguity. In exactly the same way as above, there are at most two possible values of $i$, and we distinguish between these using the bit $b$. In this case, we cannot have $j=i$, because that would imply that $\alpha_k=\beta_k$ for all $k\ne j$, and the condition $\beta_{j+1}-\beta_{j-1} < \alpha_{i+1}-\alpha_{i-1}$ would be violated. Therefore, consider two possible values of $i$, $i'$ and $i''$, with $i''<i'<j$. The equivalents of the conditions in Eqs.~\eqref{con1} to \eqref{con4} are \begin{align}\label{con1b} \alpha_{i'}+p &\in (\beta_{j-1},\beta_{j+1}) , \\ \label{con2b} \beta_{j+1}-\beta_{j-1} &< \alpha_{i'+1}-\alpha_{i'-1} , \\ \label{con3b} \alpha_{i''}+p &\in (\beta_{j-1},\beta_{j+1}) , \\ \label{con4b} \beta_{j+1}-\beta_{j-1} &< \alpha_{i''+1}-\alpha_{i''-1}. \end{align} Because $i''<i'<j$, using similar reasoning as before, we find that $\beta_{j+1}=\alpha_{j+1}$ and $\beta_{j-1}=\alpha_j$.
That means that the conditions \eqref{con1b} to \eqref{con4b} become \begin{align} \alpha_{i'}+p &\in (\alpha_j,\alpha_{j+1}), \\ \alpha_{j+1}-\alpha_j &< \alpha_{i'+1}-\alpha_{i'-1}, \\ \alpha_{i''}+p &\in (\alpha_j,\alpha_{j+1}), \\ \alpha_{j+1}-\alpha_j &< \alpha_{i''+1}-\alpha_{i''-1}.\label{conda} \end{align} Starting with Eq.~\eqref{conda} we obtain \begin{align} \alpha_{i''-1}+p &< \alpha_{i''+1}+p-\alpha_{j+1}+\alpha_j \nonumber \\ &\le \alpha_{i'}+p-\alpha_{j+1}+\alpha_j \nonumber \\ &< \alpha_{j+1}-\alpha_{j+1}+\alpha_j \nonumber \\ &= \alpha_j = \beta_{j-1}. \end{align} Hence $\alpha_{i''-1}+p$ is not in the interval $(\beta_{j-1},\beta_{j+1})$, and therefore cannot give $\beta_j$. Therefore there can be no third ambiguous value, in the same way as above for $a=0$. Hence the single bit $b$ is again sufficient to distinguish between any ambiguous values, and enables us to determine $\beta$ given $\alpha$, $p$, and $j$. We now consider the requirement that the procedure is reversible. In particular, Case 1 needs to be the reverse of Case 2, and Case 3 needs to be the reverse of Case 4. Consider starting from a particular $\beta$ and using the method in Case 1. We have shown that the procedure FindAlphas in Case 1 can yield at most two potential candidates for $\alpha$, and then one is chosen \textit{via} the value of $b$. For the resulting $\alpha$, adding $p$ to $\alpha_i$ will yield the original set of occupied orbitals $\beta$. Moreover, the inequality $\beta_{j+1}-\beta_{j-1} \ge \alpha_{i+1}-\alpha_{i-1}$ must be satisfied (otherwise Case 1 would yield \textsc{invalid}). If Case 1 yields $\alpha$ from $\beta$, then Case 2 should yield $\beta$ given $\alpha$. Case 2 simply adds $p$ to $\alpha_i$ (where $i$ is given), which we know should yield $\beta$. The method in Case 2 also performs some checks, and outputs \textsc{invalid} if those fail.
These checks are: \begin{enumerate} \item It checks that $\beta$ is a valid list of orbitals, which must be satisfied because we started with a valid $\beta$. \item It checks that $\beta_{j+1}-\beta_{j-1} \ge \alpha_{i+1}-\alpha_{i-1}$, which must be satisfied for Case 1 to yield $\alpha$ instead of \textsc{invalid}. \item It checks that using Case 1 on $\beta$ would yield $\alpha$, which must be satisfied here because we considered initially using Case 1 to obtain $\alpha$ from $\beta$. \end{enumerate} Thus we see that, if Case 1 yields $\alpha$ from $\beta$, then Case 2 must yield $\beta$ from $\alpha$. Going the other way, and starting with $\alpha$ and using Case 2 to find $\beta$, a result other than \textsc{invalid} will only be provided if Case 1 would yield $\alpha$ from that $\beta$. Thus we immediately know that if Case 2 provides $\beta$ from $\alpha$, then Case 1 will provide $\alpha$ from $\beta$. This means that the methods for Cases 1 and 2 are the inverses of each other, as required. Via exactly the same reasoning, we can see that the methods in Cases 3 and 4 are the inverses of each other as well. Next, consider the question of uniqueness. The color will be unique if we can determine the color from a pair $\alpha$, $\beta$. Given $\alpha$ and $\beta$, suppose first that all the occupied orbitals are identical except one. Let $i$ and $j$ be the positions of the differing occupied orbitals in $\alpha$ and $\beta$, respectively. We then immediately set $p=\beta_j-\alpha_i$ for the color, and compare $\beta_{j+1}-\beta_{j-1}$ with $\alpha_{i+1}-\alpha_{i-1}$. If $\beta_{j+1}-\beta_{j-1}\ge \alpha_{i+1}-\alpha_{i-1}$ then for the color $a=0$ and $\ell=i$. We can then find how many ambiguous values of $\alpha$ there would be if we started with $\beta$. If $\alpha$ was obtained uniquely from $\beta$, then we would set $b=0$ for the color.
If there were two ambiguous values of $\alpha$ that could be obtained from $\beta$, then if the first was correct we would set $b=0$, and if the second were correct then we would set $b=1$. If $\beta_{j+1}-\beta_{j-1}<\alpha_{i+1}-\alpha_{i-1}$ then for the color $a=1$ and $\ell=j$. We can then find how many ambiguous values of $\beta$ there would be if we started with $\alpha$. If $\beta$ was obtained uniquely from $\alpha$, then we would set $b=0$ for the color. If there were two ambiguous values of $\beta$ that could be obtained from $\alpha$, then if the first was correct we would set $b=0$, and if the second were correct then we would set $b=1$. In this way we can see that the pair $\alpha$, $\beta$ yields a unique color, and therefore we have a valid coloring. So far we have considered the case where $\alpha$ and $\beta$ differ by just one orbital for simplicity. For cases where $\alpha$ and $\beta$ differ by two orbitals, the procedure is similar. We now need to use the above reasoning to go from $\alpha$ to $\beta$ through some intermediate list of orbitals $\chi$. That is, we have one set of numbers $(a_1,b_1,\ell_1,p)$ that tells us how to find $\chi$ from $\alpha$, then a second set of numbers $(a_2,b_2,\ell_2,q)$ that tells us how to obtain $\beta$ from $\chi$. First, it is easily seen that this procedure is reversible, because the steps for going from $\alpha$ to $\chi$ to $\beta$ are reversible. Second, we need to be able to determine the color from $\alpha$ and $\beta$. To do so, we find the two occupied orbitals for $\alpha$ and $\beta$ that differ. Call the positions of the differing occupied orbitals $i_1$ and $i_2$ for $\alpha$, and $j_1$ and $j_2$ for $\beta$ (assumed to be in ascending order, so that the labels are unique). Then there are four different ways that one could go from $\alpha$ to $\beta$, through different intermediate states $\chi$.
\begin{enumerate} \item $\alpha_{i_1} \mapsto \beta_{j_1}$ then $\alpha_{i_2}\mapsto\beta_{j_2}$ \item $\alpha_{i_2}\mapsto\beta_{j_2}$ then $\alpha_{i_1} \mapsto \beta_{j_1}$ \item $\alpha_{i_1} \mapsto \beta_{j_2}$ then $\alpha_{i_2}\mapsto\beta_{j_1}$ \item $\alpha_{i_2}\mapsto\beta_{j_1}$ then $\alpha_{i_1} \mapsto \beta_{j_2}$ \end{enumerate} To resolve this ambiguity we require that the color is obtained by assuming the first alternative, that $\alpha_{i_1} \mapsto \beta_{j_1}$ and then $\alpha_{i_2}\mapsto\beta_{j_2}$. Then $\alpha$ and $\beta$ yield a unique color. This also requires a slight modification of the technique for obtaining $\alpha$ from $\beta$ and vice versa. First the color is used to obtain the pair $\alpha$, $\beta$, then it is checked whether the orbitals were mapped as in the first alternative above. If they were not, then \textsc{invalid} is returned. So that the coloring by an 8-tuple $\gamma = (a_1,b_1,\ell_1,p,a_2,b_2,\ell_2,q)$ also covers the matrix elements where $\alpha$ and $\beta$ differ by a single orbital or by none, we also allow $p=0$ (exactly one differing orbital) and $p=q=0$ ($\alpha=\beta$). The overall number of terms in the decomposition is then ${\cal O}(\eta^2 N^2)$. \section{The CI Matrix Decomposition} \label{sec:decomp1} The molecular Hamiltonian expressed in the basis of Slater determinants is known to chemists as the CI matrix. Elements of the CI matrix are computed according to the Slater-Condon rules \cite{Helgaker2002}, which we will express in terms of the one-electron and two-electron integrals in \eq{single_int1} and \eq{double_int}.
In order to motivate our 1-sparse decomposition, we state the Slater-Condon rules for computing the matrix element \begin{equation} H^{\alpha\beta} = \bra{\alpha} H \ket{\beta} \end{equation} by considering the spin-orbitals which differ between the determinants $\ket{\alpha}$ and $\ket{\beta}$ \cite{Helgaker2002}: \begin{enumerate} \item If $\ket{\alpha}$ and $\ket{\beta}$ contain the same spin-orbitals $\{\chi_i\}_{i=1}^{\eta}$ then we have a diagonal element \begin{equation} \label{eq:slate1} H^{\alpha\beta} = \sum_{i=1}^{\eta} h_{\chi_i\chi_i} + \sum_{i = 1}^{\eta - 1}\sum_{j = i + 1}^{\eta} \left(h_{\chi_i\chi_j \chi_i \chi_j} - h_{\chi_i \chi_j \chi_j \chi_i}\right). \end{equation} \item If $\ket{\alpha}$ and $\ket{\beta}$ differ by exactly one spin-orbital such that $\ket{\alpha}$ contains spin-orbital $k$ where $\ket{\beta}$ contains spin-orbital $\ell$, but otherwise contain the same spin-orbitals $\{\chi_i\}_{i=1}^{\eta - 1}$, then \begin{equation} \label{eq:slate2} H^{\alpha\beta} = h_{k\ell} + \sum_{i=1}^{\eta-1} \left(h_{k\chi_i \ell \chi_i} - h_{k \chi_i \chi_i \ell}\right). \end{equation} \item If $\ket{\alpha}$ and $\ket{\beta}$ differ by exactly two spin-orbitals such that occupied spin-orbital $i$ in $\ket{\alpha}$ is replaced with spin-orbital $k$ in $\ket{\beta}$, and occupied spin-orbital $j$ in $\ket{\alpha}$ is replaced with spin-orbital $\ell$ in $\ket{\beta}$, then \begin{equation} \label{eq:slate3} H^{\alpha\beta} = h_{ijk\ell} - h_{ij\ell k}. \end{equation} \item If $\ket{\alpha}$ and $\ket{\beta}$ differ by more than two spin-orbitals, \begin{equation} \label{eq:slate4} H^{\alpha\beta} = 0. \end{equation} \end{enumerate} These rules assume that $\alpha$ and $\beta$ have the list of occupied orbitals given in a corresponding order, so all corresponding occupied orbitals are listed in the same positions. In contrast, we will be giving the lists of occupied orbitals in ascending order. 
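For concreteness, the four rules translate into a dispatch on the number of differing spin-orbitals. The following is a toy sketch of ours, not code from the text: it assumes the two occupancy lists are already aligned slot-by-slot so that no reordering sign arises, and `h1`/`h2` are hypothetical callables standing in for the one- and two-electron integrals:

```python
import itertools

def ci_element(occ_a, occ_b, h1, h2):
    """Slater-Condon dispatch for <alpha|H|beta>; occ_a and occ_b are aligned
    lists of occupied spin-orbitals, h1(i, j) ~ h_ij, h2(i, j, k, l) ~ h_ijkl."""
    diff = [(x, y) for x, y in zip(occ_a, occ_b) if x != y]
    if len(diff) == 0:       # identical determinants: diagonal element
        return (sum(h1(i, i) for i in occ_a)
                + sum(h2(i, j, i, j) - h2(i, j, j, i)
                      for i, j in itertools.combinations(occ_a, 2)))
    if len(diff) == 1:       # single excitation k -> l
        (k, l), = diff
        common = [x for x in occ_a if x != k]
        return h1(k, l) + sum(h2(k, c, l, c) - h2(k, c, c, l) for c in common)
    if len(diff) == 2:       # double excitation (i, j) -> (k, l)
        (i, k), (j, l) = diff
        return h2(i, j, k, l) - h2(i, j, l, k)
    return 0.0               # more than two differences
```

The permutation sign from re-sorting the occupancy lists, discussed next, is deliberately omitted here.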
In order to use the rules, we therefore need to change the order of the list of occupied orbitals. In changing the order of the occupied orbitals, there is a sign flip on the state for an odd permutation. This sign flip needs to be included when using the above rules. In general, there is no efficient way to decompose the CI matrix into a polynomial number of tensor products of Pauli operators. It is thus inefficient to directly simulate this Hamiltonian in the same fashion with which we simulate local Hamiltonians. However, the CI matrix is sparse, and there exist techniques for simulating arbitrary sparse Hamiltonians. A $d$-sparse matrix is one which contains at most $d$ nonzero elements in each row and column. As discussed in \cite{Toloui2013,Wecker2014}, the Slater-Condon rules imply that the sparsity of the CI matrix is \begin{equation} d = \binom{\eta}{2}\binom{N - \eta}{2} + \binom{\eta}{1}\binom{N - \eta}{1} + 1 = \frac{\eta^4}{4} - \frac{\eta^3 N}{2} + \frac{\eta^2 N^2}{4} + {\cal O}\left(\eta^2N+\eta N^2\right). \end{equation} Because $N$ is always greater than $\eta$, we find that the CI matrix is $d$-sparse where $d \in {\cal O}(\eta^2 N^2)$. This should be compared with the second-quantized Hamiltonian, which is also $d$-sparse, but where $d\in{\cal O}(N^4)$. Our strategy here parallels the second-quantized decomposition, but works with the first-quantized wavefunction. This decomposition is explained in four steps, as follows. \begin{enumerate} \item[A.] Decompose the molecular Hamiltonian into ${\cal O}(\eta^2 N^2)$ 1-sparse matrices. \item[B.] Further decompose each of these 1-sparse matrices into 1-sparse matrices with entries proportional to a sum of a constant number of molecular integrals. \item[C.] Decompose those 1-sparse matrices into sums approximating the integrals in Eqs.~\eqref{eq:double_int} and \eqref{eq:single_int}. \item[D.] Decompose the integrands from those integrals into sums of self-inverse matrices.
\end{enumerate} \subsection{Decomposition into 1-sparse matrices} \label{sec:dec1} In order to decompose the molecular Hamiltonian into 1-sparse matrices, we require a unique and reversible graph coloring between nodes (Slater determinants). We introduce such a graph coloring here, with the details of its construction and proof of its properties given in \app{1sparseproof}. The graph coloring can be summarized as follows. \begin{enumerate} \item Perform the simulation under $\sigma_x \otimes H$, where $\sigma_x$ is the Pauli $x$ matrix, in order to create a bipartite Hamiltonian of the same sparsity as $H$. \item Label the left nodes $\alpha$ and the right nodes $\beta$. We seek a procedure to take $\alpha$ to $\beta$, or vice versa, with as little additional information as possible, and without redundancy or ambiguity. \item Provide an 8-tuple $\gamma = (a_1,b_1,i,p,a_2,b_2,j,q)$ which determines the coloring. The coloring must uniquely determine $\alpha$ given $\beta$ or vice versa. Using the 8-tuples, proceed via either Case 1, 2, 3, or 4 in \app{1sparseproof} to determine the other set of spin-orbitals, using an intermediate list of orbitals $\chi$. The 4-tuples $(a_1,b_1,i,p)$ and $(a_2,b_2,j,q)$ each define a differing orbital. For a single difference, we can set $p = 0$, and for no differences, we can set $p=q=0$. \end{enumerate} The basic idea is that we give the positions $i$ and $j$ of those orbitals which differ in $\alpha$, as well as by how much the occupied orbital indices shift, which we denote by $p$ and $q$. This allows us to determine $\beta$ from $\alpha$. However, it does not allow us to unambiguously determine $\alpha$ from $\beta$. To explain how to resolve this ambiguity, we consider the case of a single differing orbital. We will denote by $i$ the position of the differing orbital in $\alpha$, and by $k$ the position of the differing orbital in $\beta$. 
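The forward direction of this idea is straightforward. A minimal sketch (our own illustration, with 0-based positions; $N$ is the number of spin-orbitals):

```python
def apply_shift(alpha, i, p, N):
    """Forward map of the coloring: shift the occupied orbital at position i
    of the sorted list alpha by p, returning the sorted list beta, or None
    if the result is not a valid occupancy list."""
    val = alpha[i] + p
    rest = alpha[:i] + alpha[i + 1:]
    if not (1 <= val <= N) or val in rest:
        return None                 # out of range or orbital already occupied
    return sorted(rest + [val])
```

For example, with hypothetical occupancies $\alpha=(1,3,4,8)$, shifting the orbital at (0-based) position 1 by $p=4$ gives $\beta=(1,4,7,8)$. The hard direction, recovering $\alpha$ from $\beta$, is what the bits $a$ and $b$ disambiguate.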
\begin{figure} \centering \subfloat[][]{\includegraphics[width=0.4\textwidth,trim={0in 4.75in 6in -0.5in},clip]{figA.pdf} \label{fig:subfigA}} \\ \subfloat[][]{\includegraphics[width=0.4\textwidth,trim={0in 4.75in 6in 0in},clip]{figB.pdf} \label{fig:subfigB}} \caption{Example of the 1-sparse coloring, where $i$ is the position of the occupied orbital in $\alpha$ that must be moved. \protect\subref{fig:subfigA} $i = 2$, $p = 4$ is sufficient to determine $\beta$ from $\alpha$, as well as to determine $\alpha$ from $\beta$. \protect\subref{fig:subfigB} $i = 2$, $p = 5$ is sufficient to determine $\beta$ from $\alpha$, but not the reverse: subtracting $p=5$ from $\beta_2$, $\beta_3$, or $\beta_4$ gives three different valid candidates for $\alpha_i = \alpha_2$. The spacing condition means that we would need to give the position of the occupied orbital in $\beta$ instead.} \end{figure} Consider the example in \hyperref[fig:subfigA]{Figure~1(\protect\subref*{fig:subfigA})}: given $i$, which is the position in $\alpha$, the position $k$ in $\beta$ can be immediately determined. But given $\beta$, multiple potential positions of occupied orbitals would need to be tested to see if they put the occupied orbital in position $i=2$ in $\alpha$. In this case, given $\beta$ there is only one orbital which can be shifted to position 2 for $\alpha$, so the position in $\beta$ is unambiguous. Now consider \hyperref[fig:subfigB]{Figure~1(\protect\subref*{fig:subfigB})}: multiple positions in $\beta$ could lead to position 2 in $\alpha$. The difference between the two cases is that in \hyperref[fig:subfigA]{Figure~1(\protect\subref*{fig:subfigA})} there is a larger spacing between orbitals for $\beta$, whereas in \hyperref[fig:subfigB]{Figure~1(\protect\subref*{fig:subfigB})} there is a larger spacing for $\alpha$.
More specifically, for \hyperref[fig:subfigA]{Figure~1(\protect\subref*{fig:subfigA})} the spacing between $\alpha_1$ and $\alpha_3$ is $3$, whereas the spacing between $\beta_2$ and $\beta_4$ is larger at $5$. For \hyperref[fig:subfigB]{Figure~1(\protect\subref*{fig:subfigB})} the spacing between $\alpha_1$ and $\alpha_3$ is $5$, whereas the spacing between $\beta_2$ and $\beta_4$ is smaller at $2$. It is the spacing between the occupied orbitals adjacent to the one that is moved that should be compared. For the situation in \hyperref[fig:subfigB]{Figure~1(\protect\subref*{fig:subfigB})}, rather than specifying the position in $\alpha$ we should specify the position in $\beta$ to resolve the ambiguity. The bit $a$ determines whether we are specifying the position in $\alpha$ or in $\beta$; the choice depends on the relative spacing of the adjacent occupied orbitals in the two lists. However, this spacing condition does not completely resolve the ambiguity: there are potentially two different choices for the occupied orbital. The choice is made by the bit $b$. The coloring for two differing orbitals is obtained by applying this procedure twice, with an intermediate list of occupied orbitals $\chi$. There are ${\cal O}(\eta^2 N^2)$ possible colors: there are two possible choices of each of the bits $a_1$, $a_2$, $b_1$, and $b_2$, $\eta$ choices each of $i$ and $j$, and $N$ choices each of $p$ and $q$. \subsection{Decomposition into $h_{ij}$ and $h_{ijk\ell}$} \label{sec:dec2} Each 1-sparse matrix from \sec{dec1} is associated with some 8-tuple $\gamma = (a_1,b_1,i,p,a_2,b_2,j,q)$. However, without further modification, some of these 1-sparse matrices have entries given by a sum over a number of molecular integrals that grows with $\eta$, namely the diagonal terms as in \eq{slate1}, and the single-orbital terms as in \eq{slate2}.
Here, we further decompose those matrices into a sum of 1-sparse matrices $H_{\gamma}$, which have entries proportional to the sum of a constant number of molecular integrals, in order to remove this changing upper bound. We want to have a new set of 1-sparse matrices, each with entries corresponding to a single term in the sum over molecular integrals. To be more specific, the combinations of $\gamma$ correspond to terms in \eq{slate1} to \eq{slate3} as follows. \begin{enumerate} \item If $p=q=0$, this indicates that we have a diagonal 1-sparse matrix. In \eq{slate1}, the entries on the diagonal would be a sum of ${\cal O}(\eta^2)$ terms. As we have freedom in how to use $i$ and $j$, we use these to give terms in the sum. When $i=j$ for $p=q=0$, we take the 1-sparse matrix to have diagonal elements given by $h_{\chi_i \chi_i}$. If $i < j$ for $p = q = 0$ we take the 1-sparse matrix to have diagonal entries $h_{\chi_i\chi_j \chi_i \chi_j} - h_{\chi_i \chi_j \chi_j \chi_i}$. We do not allow tuples $\gamma$ such that $i>j$ for $p=q=0$ (alternatively we could just give zero in this case). The overall result is that the sum over $i$ and $j$ for the 1-sparse matrices for $\gamma$ with $p=q=0$ yields the desired sum in \eq{slate1}. \item Next, if $p = 0$ and $q \neq 0$, then this indicates that we have a 1-sparse matrix with entries where $\alpha$ and $\beta$ differ by only one spin-orbital. According to \eq{slate2}, each entry would normally be a sum of ${\cal O}(\eta)$ terms. Instead, when $p = 0$ and $q \neq 0$, we use the value of $i$ to index terms in the sum in \eq{slate2}, though we only yield a nonzero result when $i$ is in the Slater determinant. In particular, the 1-sparse matrix has entries $h_{k \chi_i \ell \chi_i} - h_{k \chi_i \chi_i \ell}$. We allow an additional value of $i$ to indicate a 1-sparse matrix with entries $h_{k \ell}$. Then the sum over 1-sparse matrices for different values of $i$ gives the desired sum \eq{slate2}.
We do not allow $\gamma$ such that $q=0$ but $p\neq 0$. \item Finally, if both $p$ and $q$ are nonzero, then we have a 1-sparse matrix with entries where $\alpha$ and $\beta$ differ by two orbitals. In this case, there is no sum in \eq{slate3}, so there is no additional decomposition needed. \end{enumerate} Combining these three steps we find that the decomposition into 1-sparse matrices $H_\gamma$ can be achieved with the indices $(a_1,b_1,i,p,a_2,b_2,j,q)$. Thus, there are ${\cal O}(\eta^2 N^2)$ terms without any redundancies. Note that sorting of the spin-orbital indices requires only $\widetilde{\cal O}(\eta)$ gates, which is less than the complexity of evaluating the spin-orbitals. In the following sections, we denote the total number of terms given by the above decomposition by $\Gamma$, and the sum over $H_\gamma$ yields the complete CI matrix, \begin{equation} \label{eq:gamma_decomp} H = \sum_{\gamma=1}^{\Gamma} H_\gamma. \end{equation} \subsection{Discretizing the integrals} \label{sec:Riemann} Next we consider discretization of the integrals for $h_{ij}$ and $h_{ijk\ell}$. In \cite{Berry2015} it is shown how to simulate Hamiltonian evolution with an exponential improvement in the scaling with $1/\epsilon$, as compared to methods based on Trotter formulas. In this approach, the time-ordered exponential for the evolution operator is approximated by a Taylor series up to an order $K$. The time $t$ is broken into $r$ segments, and the integrals are discretized in the following way on each segment: \begin{align} {\cal T} \exp\left[-i \int_{0}^{t/r} \!\! H(t) \, d t \right] &\approx \sum_{k=0}^K \frac{(-i)^k}{k!}\int_{0}^{t/r} {\cal T} H(t_k)\dots H(t_1)\, \dd^k\bs t \nonumber \\ &\approx \sum_{k=0}^K \frac{(-it/r)^k}{\mu^k k!}\sum_{j_1,\ldots,j_k=0}^{\mu-1} H(t_{j_k})\dots H(t_{j_1}), \end{align} where ${\cal T}$ is the time-ordering operator. In our case the Hamiltonian does not change in time, so the time-ordering is unimportant.
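Because our Hamiltonian is time-independent, the nested time integrals in each segment collapse to $(t/r)^k/k!$, and a scalar toy example already shows the mechanism (our own illustration, not the algorithm of \cite{Berry2015} itself; the scalar $h$ stands in for the Hamiltonian):

```python
import cmath
import math

def taylor_segment(h, t, r, K):
    """Order-K truncated Taylor approximation of exp(-i h t / r) for a
    time-independent scalar stand-in h for the Hamiltonian."""
    x = -1j * h * t / r
    return sum(x ** k / math.factorial(k) for k in range(K + 1))

def evolve(h, t, r, K):
    """Compose r short-time segments to approximate exp(-i h t)."""
    return taylor_segment(h, t, r, K) ** r
```

The per-segment truncation error scales as $(\|H\|t/r)^{K+1}/(K+1)!$, so a modest order $K$ already gives high accuracy, which is the source of the exponentially improved scaling in $1/\epsilon$.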
The Hamiltonian is expanded as a sum of $H_\gamma$ as in \eq{gamma_decomp}, and each of those terms has matrix entries that can be given in the form of an integral as \begin{align} H^{\alpha\beta}_\gamma = \int \aleph^{\alpha\beta}_\gamma(\vec z) \, \dd \vec z \, . \end{align} In cases where $H^{\alpha\beta}_\gamma$ corresponds to $h_{ij}$, the integral is over a three-dimensional region, and where $H^{\alpha\beta}_\gamma$ corresponds to $h_{ijk\ell}$ the integral is over a six-dimensional region, so $\vec z$ represents six parameters. Ideally, each integral can be truncated to a finite domain $D$ with volume ${\cal V}$. Using a set of grid points $\vec z_{\rho}$, we can approximate the integral by \begin{align} \label{eq:double} H^{\alpha\beta}_\gamma & \approx \int_D \aleph^{\alpha\beta}_\gamma(\vec z) \, \dd \vec z \approx \frac{\cal V}{\mu} \sum_{\rho=1}^{\mu} \aleph^{\alpha\beta}_\gamma(\vec z_{\rho}) \, . \end{align} The complexity will then be logarithmic in the number of points in the sum, $\mu$, and linear in the volume times the maximum value of the integrand. In practice the situation is more complicated than this. That is because the integrals are all different. As well as the dimensionality of the integrals (three for $h_{ij}$ and six for $h_{ijk\ell}$), there will be differences in the regions that the integrals will be over, as well as some integrals being in spherical polar coordinates. To account for these differences, it is better to write the discretized integral in the form \begin{equation} \label{eq:double2} H^{\alpha\beta}_\gamma \approx \sum_{\rho=1}^{\mu} \aleph^{\alpha\beta}_{\gamma,\rho} \, . \end{equation} The Hamiltonian $H_\gamma$ can then be written as the sum \begin{equation} \label{eq:double3} H_\gamma \approx \sum_{\rho=1}^{\mu} \aleph_{\gamma,\rho} \, . \end{equation} As discussed in \cite{BabbushSparse1}, the discretization is possible because the integrands can be chosen to decay exponentially \cite{Helgaker2002}. 
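The truncation-plus-grid step can be illustrated in one dimension with an exponentially decaying toy integrand (our stand-in for $\aleph^{\alpha\beta}_\gamma$; the true integrals are three- or six-dimensional):

```python
import math

def riemann(f, L, mu):
    """Midpoint Riemann sum of f over the truncated domain [-L, L] with mu
    equal cells, as in the discretization of the molecular integrals."""
    h = 2 * L / mu
    return h * sum(f(-L + (k + 0.5) * h) for k in range(mu))

# toy integrand with exponential decay; its integral over all of R is 1
f = lambda x: math.exp(-2 * abs(x))
```

Because $f$ decays exponentially, the truncation error $\sim e^{-2L}$ shrinks so quickly that $L$ need only grow logarithmically with the target accuracy $\delta$; the lemmas below quantify the analogous statement for the actual integrands.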
The required properties of the orbitals are given in \thm{maintheorem}. Here we present a more precise formulation of the required properties, and provide specific results on the number of terms needed. We make the following three assumptions about the spin-orbitals $\varphi_\ell$. \begin{enumerate} \item There exists a positive real number $\varphi_\text{max}$ such that, for all spin-orbital indices $\ell$ and for all $\vec{r} \in \R^3$, \begin{equation} \label{eq:spin-orbital_bound} \abs{\varphi_\ell (\vec{r})} \leq \varphi_\text{max}. \end{equation} \item For each spin-orbital index $\ell$, there exists a vector $\vec{c}_\ell \in \R^3$ (called the center of $\varphi_\ell$) and a positive real number $x_\text{max}$ such that, whenever $\norm{\vec{r} - \vec{c}_\ell} \geq x_\text{max}$ for some $\vec{r} \in \R^3$, \begin{equation} \label{eq:spin-orbital_decay} \abs{\varphi_\ell (\vec{r})} \leq \varphi_\text{max} \exp \left( -\frac{\alpha}{x_\text{max}} \norm{\vec{r}-\vec{c}_\ell} \right), \end{equation} where $\alpha$ is some positive real constant. \item For each spin-orbital index $\ell$, $\varphi_\ell$ is twice-differentiable and there exist positive real constants $\gamma_1$ and $\gamma_2$ such that \begin{equation} \label{eq:spin-orbital_first_derivative_bound} \norm{\nabla \varphi_\ell (\vec{r})} \leq \gamma_1 \frac{\varphi_\text{max}}{x_\text{max}} \end{equation} and \begin{equation} \label{eq:spin-orbital_second_derivative_bound} \abs{\nabla^2 \varphi_\ell (\vec{r})} \leq \gamma_2 \frac{\varphi_\text{max}}{x_\text{max}^2} \end{equation} for all $\vec{r} \in \R^3$. \end{enumerate} Note that $\alpha$, $\gamma_1$ and $\gamma_2$ are dimensionless constants, whereas $x_{\max}$ has units of distance, and $\varphi_\text{max}$ has the same units as $\varphi_\ell$. The conditions of \thm{maintheorem} mean that $\varphi_\text{max}$ and $x_\text{max}$ grow at most logarithmically with the number of spin-orbitals.
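As a sanity check, the three assumptions can be verified numerically for a concrete function. With the 1-D Gaussian stand-in $\varphi(x)=e^{-x^2}$ (our choice, not an orbital from the text) one may take $\varphi_\text{max}=1$, $x_\text{max}=1$, $\alpha=1$, $\gamma_1=1$ and $\gamma_2=2$:

```python
import math

phi   = lambda x: math.exp(-x * x)                    # toy 1-D "orbital"
dphi  = lambda x: -2 * x * math.exp(-x * x)           # first derivative
d2phi = lambda x: (4 * x * x - 2) * math.exp(-x * x)  # second derivative

phi_max, x_max, alpha, gamma1, gamma2 = 1.0, 1.0, 1.0, 1.0, 2.0
eps = 1e-12  # tolerance where a bound is tight (e.g. d2phi at x = 0)

for k in range(-4000, 4001):
    x = k / 1000.0
    assert abs(phi(x)) <= phi_max + eps                          # boundedness
    if abs(x) >= x_max:                                          # exponential decay
        assert abs(phi(x)) <= phi_max * math.exp(-alpha * abs(x) / x_max) + eps
    assert abs(dphi(x)) <= gamma1 * phi_max / x_max + eps        # first derivative
    assert abs(d2phi(x)) <= gamma2 * phi_max / x_max ** 2 + eps  # second derivative
```

The same grid check applies to any proposed basis function; a function that decays faster than exponentially, like this Gaussian, satisfies assumption 2 automatically for a suitable $\alpha$.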
Note that we use $x_{\max}$ in a different way than in \cite{BabbushSparse1}, where it was the size of the cutoff on the region of integrals, satisfying $x_{\max}={\cal O}(\log(Nt/\epsilon))$. Here we take $x_{\max}$ to be the size scale of the orbitals independent of $t$ or $\epsilon$, and the cutoff will be a multiple of $x_{\max}$. We also assume that $x_{\max}$ is bounded below by a constant, so the first and second derivatives of the spin-orbitals grow no more than logarithmically as a function of the number of spin-orbitals. We next define notation used for the integrals for $h_{ij}$ and $h_{ijk\ell}$. These integrals are \begin{equation} S_{ij}^{(0)}\! \left( D_0 \right) := -\frac{1}{2} \int_{D_0} \varphi_i^* (\vec{r}) \nabla^2 \varphi_j (\vec{r}) \dd{\vec{r}}, \end{equation} \begin{equation} S_{ij}^{(1,\,q)}\! \left( D_{1,q} \right) := -Z_q \int_{D_{1,q}} \frac{ \varphi_i^* (\vec{r}) \, \varphi_j (\vec{r}) }{ \|\vec{R}_q - \vec{r}\| } \dd{\vec{r}}, \end{equation} and \begin{equation} S_{ijk\ell}^{(2)}\! \left( D_2 \right) := \int_{D_2} \frac{ \varphi_i^*\!\left(\vec{r}_1\right) \varphi_j^*\!\left(\vec{r}_2\right) \varphi_k\!\left(\vec{r}_2\right) \varphi_\ell\!\left(\vec{r}_1\right) }{ \|\vec{r}_1 - \vec{r}_2\| } \dd{\vec{r}_1} \dd{\vec{r}_2}, \end{equation} for any choices of $D_0, D_{1,q} \subseteq \R^3$ and $D_2 \subseteq \R^6$. Thus \begin{equation} h_{ij} = S_{ij}^{(0)}\! \left( \R^3 \right) + \sum_q S_{ij}^{(1,\,q)}\! \left( \R^3 \right) \end{equation} and \begin{equation} h_{ijk\ell} = S_{ijk\ell}^{(2)}\! \left( \R^6 \right). \end{equation} Using the assumptions on the properties of the orbitals, we can bound the number of terms needed in a Riemann sum that approximates each integral to within a specified accuracy, $\delta$ (which is distinct from the accuracy of the overall simulation, $\epsilon$). These bounds are summarized in the following three lemmas. 
\begin{lemma} \label{lem:int0} Let $\delta$ be any real number that satisfies \begin{equation} \label{eq:sensible0} 0 < \delta \leq e^{-\alpha/2} K_0 \varphi_\text{max}^2 x_\text{max}\, , \end{equation} where \begin{equation} K_0 := \frac{26 \gamma_1}{\alpha^2} + \frac{8\pi\gamma_2}{\alpha^3} + 32\sqrt{3} \gamma_1 \gamma_2 \, . \end{equation} Then $S_{ij}^{(0)}\! \left( \R^3 \right)$ can be approximated to within error $\delta$ using a Riemann sum with \begin{equation} \label{eq:lem1mu} \mu \le \left\lceil \frac{K_0 \varphi_\text{max}^2 x_\text{max}}{\delta} \left[ \frac{2}{\alpha} \log \left( \frac{K_0 \varphi_\text{max}^2 x_\text{max}}{\delta} \right) \right]^4 \right\rceil^3 \end{equation} terms, where the terms in the sum have absolute value no larger than \begin{equation} \label{eq:lem1bnd} \frac{1}{\mu} \times 32 \frac{\gamma_1^2}{\alpha^3} \varphi_\text{max}^2 x_\text{max} \left[ \log \left( \frac{K_0 \varphi_\text{max}^2 x_\text{max}}{\delta} \right) \right]^3. \end{equation} \end{lemma} \begin{lemma} \label{lem:int1} Let $\delta$ be any real number that satisfies \begin{equation} \label{eq:sensible1} 0 < \delta \leq e^{-\alpha/2} K_1 Z_q \varphi_\text{max}^2 x_\text{max}^2 \, , \end{equation} where \begin{equation} K_1 := \frac{8\pi^2}{\alpha^3}\left( \alpha+2 \right) + 1121 \left( 8 \gamma_1 + \sqrt{2} \right). \end{equation} Then $S_{ij}^{(1,q)}\! 
\left( \R^3 \right)$ can be approximated to within error $\delta$ using a Riemann sum with \begin{equation}\label{eq:lem2mu} \mu \le \left\lceil \frac{K_1 Z_q \varphi_\text{max}^2 x_\text{max}^2}{\delta} \left[ \frac{2}{\alpha} \log \left( \frac{K_1 Z_q \varphi_\text{max}^2 x_\text{max}^2}{\delta} \right) \right]^4 \right\rceil^3 \end{equation} terms, where the terms in the sum have absolute value no larger than \begin{equation}\label{eq:lem2bnd} \frac{1}{\mu} \times \frac{256\pi^2}{\alpha^3} Z_q \varphi_\text{max}^2 x_\text{max}^2 \left[ \log \left( \frac{K_1 Z_q \varphi_\text{max}^2 x_\text{max}^2}{\delta} \right) \right]^3. \end{equation} \end{lemma} \begin{lemma} \label{lem:int2} Let $\delta$ be any real number that satisfies \begin{equation} \label{eq:sensible2} 0 < \delta \leq e^{-\alpha} K_2 \varphi_\text{max}^4 x_\text{max}^5 \, , \end{equation} where \begin{equation} K_2 := \frac{128\pi}{\alpha^6}(\alpha+2) + 2161 \pi^2 \left( 20 \gamma_1 + \sqrt{2} \right). \end{equation} Then $S_{ijk\ell}^{(2)} \! \left( \R^6 \right)$ can be approximated to within error $\delta$ using a Riemann sum with \begin{equation}\label{eq:lem3mu} \mu \le \left\lceil \frac{K_2 \varphi_\text{max}^4 x_\text{max}^5}{\delta} \left[ \frac{1}{\alpha} \log \left( \frac{K_2 \varphi_\text{max}^4 x_\text{max}^5}{\delta} \right) \right]^7 \right\rceil^6 \end{equation} terms, where the terms in the sum have absolute value no larger than \begin{equation}\label{eq:lem3bnd} \frac{1}{\mu} \times \frac{672\pi^2}{\alpha^6} \varphi_\text{max}^4 x_\text{max}^5 \left[ \log \left( \frac{K_2 \varphi_\text{max}^4 x_\text{max}^5}{\delta} \right) \right]^6. \end{equation} \end{lemma} The conditions in Eqs.~\eqref{eq:sensible0}, \eqref{eq:sensible1} and \eqref{eq:sensible2} are just used to ensure that we are considering a reasonable combination of parameters, and not for example a very large allowable error $\delta$ or a small value of $x_{\max}$. We prove these Lemmas in \app{integral_discretization}. 
Specifically, we prove \lem{int0} in \app{integral_discretization/int0_proof}, \lem{int1} in \app{integral_discretization/int1_proof} and \lem{int2} in \app{integral_discretization/int2_proof}. In discretizing these integrals, it is important that the integrands are Hermitian, because we need $\aleph_{\gamma,\rho}$ to be Hermitian. As discretized in the proofs in \app{integral_discretization}, however, the integrands are not Hermitian, because the regions of integration are chosen in a non-symmetric way. For example, the region of integration for $S_{ij}^{(0)}$ is chosen centered on the orbital $\varphi_i$, so the integrand is not symmetric. It is simple to symmetrize the integrands, however. For example, for $S_{ij}^{(0)}$ we can add $(S_{ji}^{(0)})^*$ and divide by two. That ensures that the integrand is symmetric, with just a factor of two overhead in the number of terms in the sum. As a consequence of these Lemmas, we see that the terms of any Riemann sum approximation to one of the integrals that define the Hamiltonian coefficients $h_{ij}$ and $h_{ijk\ell}$ have absolute values bounded by \begin{equation} \mathcal{O} \left( \frac{\varphi_\text{max}^4 x_\text{max}^5}{\mu} \left[ \log \left( \frac{\varphi_\text{max}^4 x_\text{max}^5}{\delta} \right) \right]^6 \right), \end{equation} where $\mu$ is the number of terms in the Riemann sum and $\delta$ is the desired accuracy of the approximation. Here we have taken $Z_q$ to be $\mathcal{O}(1)$. \subsection{Decomposition into self-inverse operators} \label{sec:decomp2} The truncated Taylor series strategy introduced in \cite{Berry2015} requires that we can represent our Hamiltonian as a weighted sum of unitaries. To do so, we follow a procedure in \cite{Berry2013} which shows how 1-sparse matrices can be decomposed into a sum of self-inverse matrices with eigenvalues $\pm 1$.
Specifically, we decompose each $\aleph_{\gamma,\rho}$ into a sum of $M \in \Theta\big(\max_{\gamma,\rho}\big\|\aleph_{\gamma,\rho}\big\|_\textrm{max}/\zeta\big)$ 1-sparse matrices of the form \begin{equation} \aleph_{\gamma,\rho} \approx {\widetilde{\aleph}}_{\gamma,\rho} \equiv \zeta\sum_{m=1}^{M} C_{\gamma,\rho,m} \end{equation} where $\zeta$ is the desired precision of the decomposition. First, we construct a new matrix ${\widetilde{\aleph}}_{\gamma,\rho}$ by rounding each entry of $\aleph_{\gamma,\rho}$ to the nearest multiple of $2\,\zeta$, so that $\big\|\aleph_{\gamma,\rho}-{\widetilde{\aleph}}_{\gamma,\rho}\big\|_{\textrm{max}} \leq \zeta$. We define $C_{\gamma,\rho}\equiv{\widetilde{\aleph}}_{\gamma,\rho}/\zeta$ so that $\left \| C_{\gamma,\rho} \right\|_{\textrm{max}}\leq 1+ \|\aleph_{\gamma,\rho} \|_\textrm{max}/\zeta$. We decompose each $C_{\gamma,\rho}$ into $\|C_{\gamma,\rho} \|_{\textrm{max}}/2$ 1-sparse matrices, indexed by $m$, with entries in $\{0, -2, 2\}$, as follows: \begin{equation} \label{eq:Cthing} C_{\gamma,\rho,m}^{\alpha\beta}\equiv \begin{cases} +2 & C_{\gamma,\rho}^{\alpha\beta}\geq 2m\\ -2 & C_{\gamma,\rho}^{\alpha\beta} \leq -2m\\ 0 & \text{otherwise}. \end{cases} \end{equation} Finally, we remove zero eigenvalues by further dividing each $C_{\gamma,\rho,m}$ into two matrices $C_{\gamma,\rho,m,1}$ and $C_{\gamma,\rho,m,2}$ with entries in $\{0, -1, +1\}$. For every all-zero column $\beta$ in $C_{\gamma,\rho,m}$, we choose $\alpha$ so that $(\alpha, \beta)$ is the location of the nonzero entry in column $\beta$ in the original matrix $\aleph_{\gamma,\rho}$. Then the matrix $C_{\gamma,\rho,m,1}$ has $+1$ in the $(\alpha, \beta)$ position, and $C_{\gamma,\rho,m,2}$ has $-1$ in the $(\alpha, \beta)$ position. Thus, we have decomposed each $\aleph_{\gamma,\rho}$ into a sum of 1-sparse, unitary matrices with eigenvalues $\pm 1$.
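The rounding and $\pm 2$ splitting steps can be sketched numerically as follows; the matrix and the value of $\zeta$ are made up, and the final $\pm 1$ split is omitted for brevity.

```python
import numpy as np

# Numerical sketch (not the paper's circuit) of the rounding and +/-2
# splitting steps of the self-inverse decomposition.
def split_layers(A, zeta):
    A_tilde = 2 * zeta * np.round(A / (2 * zeta))   # round to multiples of 2*zeta
    C = A_tilde / zeta                              # entries are even integers
    M = int(np.max(np.abs(C)) // 2)                 # number of m-indexed layers
    layers = []
    for m in range(1, M + 1):
        Cm = np.zeros_like(C)
        Cm[C >= 2 * m] = 2.0                        # C^{ab} >= 2m  -> +2
        Cm[C <= -2 * m] = -2.0                      # C^{ab} <= -2m -> -2
        layers.append(Cm)
    return A_tilde, layers

A = np.diag([0.93, -1.31, 0.42])                    # a toy 1-sparse Hermitian matrix
zeta = 0.1
A_tilde, layers = split_layers(A, zeta)
assert np.max(np.abs(A - A_tilde)) <= zeta          # rounding error at most zeta
assert np.allclose(zeta * sum(layers), A_tilde)     # layers reconstruct A_tilde
```

Each layer here still has entries $\pm 2$; in the scheme above each is then halved into two signed permutation pieces, with the zero columns filled by a cancelling $+1/-1$ pair.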
We now use a simplified notation where $\ell$ corresponds to the triples $(s,m,\gamma)$, and ${\cal H}_{\ell,\rho} \equiv C_{\gamma,\rho,m,s}$. We denote the number of values of $\ell$ by $L$, and can write the Hamiltonian as a sum of ${\cal O}(N^4 \mu M )$ unitary, 1-sparse matrices \begin{equation}\label{eq:selfinvdec} H = \zeta \sum_{\ell=1}^{L} \sum_{\rho=1}^{\mu} {\cal H}_{\ell,\rho}. \end{equation} That is, the decomposition is of the form in \eq{unit_sum}, but in this case $W_\ell $ is independent of $\ell$. To summarize, we decompose the molecular Hamiltonian into a sum of self-inverse matrices in four steps: \begin{enumerate} \item Decompose the molecular Hamiltonian into a sum of 1-sparse matrices using the bipartite graph coloring given in \app{1sparseproof}, summarized in \sec{dec1}. \item Decompose these 1-sparse matrices further, such that each entry corresponds to a single term in the sum over molecular integrals. This does not change the number of terms, but simplifies calculations. \item Discretize the integrals over a finite region of space, subject to the constraints and bounds given in \cite{BabbushSparse1}. \item Decompose into self-inverse operators by the method proposed in \cite{Berry2013}. \end{enumerate} This decomposition gives an overall gate count scaling contribution of ${\cal O}(\eta^2 N^2)$. \section{The CI Matrix Encoding} \label{sec:encoding} The molecular electronic structure Hamiltonian describes electrons interacting in a nuclear potential that is fixed under the Born-Oppenheimer approximation. Except for the proposals in \cite{Kassal2008,Toloui2013,Whitfield2013b,Whitfield2015,Kivlichan2016}, all prior quantum algorithms for chemistry use second quantization. While in second quantization antisymmetry is enforced by the fermionic anti-commutation relations, in first quantization the wavefunction itself is explicitly antisymmetric.
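The explicit antisymmetry of first quantization can be checked numerically with a toy Slater determinant; the orbitals and coordinates below are made up for illustration.

```python
import numpy as np
from math import factorial, sqrt

# Toy check that a Slater determinant wavefunction is explicitly
# antisymmetric: swapping two electron coordinates flips the sign.
orbitals = [lambda r: np.exp(-r @ r),
            lambda r: r[0] * np.exp(-r @ r),
            lambda r: r[1] * np.exp(-r @ r)]

def slater(coords):
    mat = np.array([[phi(r) for phi in orbitals] for r in coords])
    return np.linalg.det(mat) / sqrt(factorial(len(coords)))

r0, r1, r2 = (np.array([0.1, 0.2, 0.0]),
              np.array([-0.3, 0.5, 0.1]),
              np.array([0.4, -0.1, 0.2]))
assert np.isclose(slater([r0, r1, r2]), -slater([r1, r0, r2]))
```

The sign flip is exact because exchanging two electrons exchanges two rows of the determinant.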
The representation of \eq{electronic} in second quantization is \begin{equation} H = \sum_{ij} h_{ij} a_i^{\dagger}a_j + \frac{1}{2} \sum_{ijk\ell} h_{ijk\ell} a_i^{\dagger} a_j^{\dagger} a_k a_\ell \label{eq:2nd} \end{equation} where the operators $a_i^\dagger$ and $a_j$ in \eq{2nd} obey antisymmetry due to the fermionic anti-commutation relations, \begin{align} \label{eq:anticomm} \{a_i^\dagger, a_j\} = \delta_{ij} \quad \quad \quad \quad \{a_i^\dagger, a_j^\dagger\} = \{a_i, a_j\} = 0. \end{align} The one-electron and two-electron integrals in \eq{2nd} are \begin{align} &h_{ij} = \int \varphi_i^*(\vec r) \left(-\frac{\nabla^2}{2} - \sum_{q} \frac{Z_q}{ \|\vec R_q - \vec r \|} \right)\varphi_j(\vec r) \,\dd\vec r , \label{eq:single_int1}\\ &h_{ijk\ell} = \int \frac{ \varphi_i^*(\vec r_1) \, \varphi_j^*(\vec r_2) \, \varphi_\ell(\vec r_1) \, \varphi_k(\vec r_2) }{\|\vec r_1 - \vec r_2\|} \, \dd\vec r_1\, \dd\vec r_2. \label{eq:double_int} \end{align} where (throughout this paper), $\vec r_j$ represents the position of the $j^\textrm{th}$ electron, and $\varphi_i(\vec r_j)$ represents the $i^\textrm{th}$ spin-orbital when occupied by that electron. To ensure that the integrand in \eq{single_int1} is symmetric, we can alternatively write the integral for $h_{ij}$ as \begin{equation}\label{eq:single_int} h_{ij} = \frac 12\int \nabla \varphi_i^*(\vec r) \cdot \nabla\varphi_j(\vec r) \,\dd\vec r - \int \sum_{q} \varphi^*_i(\vec r) \frac{Z_q}{ \|\vec R_q - \vec r \|}\varphi_j(\vec r) \,\dd\vec r. \end{equation} The second-quantized Hamiltonian in \eq{2nd} is straightforward to simulate because one can explicitly represent the fermionic operators as tensor products of Pauli operators, using either the Jordan-Wigner transformation \cite{Jordan1928,Somma2002} or the Bravyi-Kitaev transformation \cite{Bravyi2002,Seeley2012,Tranter2015}. 
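As a quick numerical check of \eq{anticomm}, one can build the Jordan-Wigner operators explicitly for a made-up three-mode system:

```python
import numpy as np
from functools import reduce

# Verify the fermionic anticommutation relations under the Jordan-Wigner
# mapping on a toy 3-mode system (not the paper's construction).
I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])           # single-mode annihilation

def lower(i, n):
    """a_i = Z^(i) (x) a (x) I^(n-i-1): the Z string supplies the signs."""
    return reduce(np.kron, [Z] * i + [a] + [I2] * (n - i - 1))

def acomm(x, y):
    return x @ y + y @ x

n = 3
a0, a2 = lower(0, n), lower(2, n)
assert np.allclose(acomm(a0.conj().T, a0), np.eye(2 ** n))   # {a_i^dag, a_i} = 1
assert np.allclose(acomm(a0.conj().T, a2), 0)                # {a_i^dag, a_j} = 0
assert np.allclose(acomm(a0, a2), 0)                         # {a_i, a_j} = 0
```

The Pauli-$Z$ string to the left of each annihilation operator is what converts qubit operators, which commute on different sites, into operators that anticommute.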
With the exception of real-space algorithms described in \cite{Kassal2008,Kivlichan2016}, all quantum algorithms for chemistry represent the system in a basis of $N$ single-particle spin-orbital functions, usually obtained as the solution to a classical mean-field treatment such as Hartree-Fock \cite{Helgaker2002}. However, the conditions of \thm{maintheorem} only hold when actually performing the simulation in the atomic orbital basis\footnote{The basis of atomic orbitals is not necessarily orthogonal. However, this can be fixed using the efficient L\"{o}wdin symmetric orthogonalization procedure which seeks the closest orthogonal basis \cite{Helgaker2002, McClean2014}.} (i.e.\ the basis prescribed by the model chemistry). The canonical Hartree-Fock orbitals are preferred over the atomic orbitals because initial states are easier to represent in the basis of Hartree-Fock orbitals. These orbitals are actually a unitary rotation of the orthogonalized atomic orbitals prescribed by the model chemistry. This unitary basis transformation takes the form \begin{align} \tilde{\varphi}_i & = \sum_{j=1}^N \varphi_j U_{ij}\\ U = e^{- \kappa}, \quad & \quad \kappa = - \kappa^\dagger = \sum_{ij} \kappa_{ij} a^\dagger_i a_j, \end{align} and $\kappa$ is anti-Hermitian. For $\kappa$ and $U$, the quantities $\kappa_{ij}$ and $U_{ij}$ respectively correspond to the matrix elements of these operators in the basis of spin orbitals. It is a consequence of the Thouless theorem that this unitary transformation is efficient to apply. The canonical Hartree-Fock orbitals and $\kappa$ are obtained by performing a self-consistent field procedure to diagonalize a mean-field Hamiltonian for the system which is known as the Fock matrix. Because the Fock matrix describes a system of non-interacting electrons it can be expressed as the following $N$ by $N$ matrix: \begin{equation} \label{eq:fock} f_{ij} = h_{ij} + \frac{1}{2} \sum_{k} \left( h_{i k k j} - h_{i k j k} \right).
\end{equation} The integrals which appear in the Fock matrix are defined by \eq{single_int1} and \eq{double_int}. Importantly, the canonical orbitals are defined to be the orbitals which diagonalize the Fock matrix. Thus, the integrals in the definition of the Fock matrix are defined in terms of the eigenvectors of the Fock matrix, so \eq{fock} is a recursive definition. The canonical orbitals are obtained by repeatedly diagonalizing this matrix until it is consistent with its own eigenvectors. The Hartree-Fock procedure is important because the Hartree-Fock state (which is a product state in the canonical basis with the lowest $\eta$ eigenvectors of the Fock matrix occupied and the rest unoccupied) has particularly high overlap with the ground state of $H$. As stated before, the conditions of \thm{maintheorem} do not apply if we represent the Hamiltonian in the basis of canonical orbitals. But this is not a problem for us because we can still prepare the Hartree-Fock state in the basis of orthogonalized atomic orbitals (which do satisfy the conditions) and then apply the operator $U = e^{- \kappa}$ to our initial state at cost $\widetilde{\cal O}(N^2)$. Note that the use of a local basis has other advantages, as pointed out in \cite{McClean2014}. In particular, in the limit of certain large molecules, use of a local basis allows one to truncate terms from the Hamiltonian so that there are $\widetilde{\cal O}(N^2)$ terms instead of ${\cal O}(N^4)$ terms. However, \thm{maintheorem} exploits an entirely different property of basis locality which does not require any approximation from truncating terms. The spatial encoding of \eq{2nd} requires $\Theta(N)$ qubits, one for each spin-orbital; under the Jordan-Wigner transformation, the state of each qubit indicates the occupation of a corresponding spin-orbital. Many states representable in second quantization are inaccessible to molecular systems due to symmetries in the Hamiltonian.
For instance, molecular wavefunctions are eigenstates of the total spin operator so the total angular momentum is a good quantum number, and this insight can be used to find a more efficient spatial encoding \cite{Whitfield2013b,Whitfield2015}. Similarly, the Hamiltonian in \eq{2nd} commutes with the number operator, $\nu$, whose expectation value gives the number of electrons, $\eta$, \begin{equation} \nu = \sum_{i=1}^N a_i^{\dagger} a_i, \quad \quad \left[H, \nu\right] = 0, \quad \quad \eta = \avg{\nu}. \end{equation} Following the procedure in \cite{Toloui2013}, our algorithm makes use of an encoding which reduces the number of qubits required by recognizing $\eta$ as a good quantum number. Conservation of particle number implies there are only $\xi = \binom{N}{\eta}$ valid configurations of these electrons, but the second-quantized Hilbert space has dimension $2^N$, which is exponentially larger than $\xi$ for fixed $\eta$. We work in the basis of Slater determinants, which are explicitly antisymmetric functions of both space and spin associated with a particular $\eta$-electron configuration. We denote these states as $\ket{\alpha} = \ket{\alpha_0, \alpha_1, \cdots, \alpha_{\eta-1}}$, where $\alpha_i \in \{1,\ldots, N\}$ and $\alpha \in \{1,\ldots, N^{\eta}\}$. We emphasize that $ \alpha_i$ is merely an integer which indexes a particular spin-orbital function $ \varphi_{\alpha_i}(\vec r)$. While each configuration requires a specification of $\eta$ occupied spin-orbitals, there is no sense in which $\alpha_i$ is associated with ``electron $i$'' since fermions are indistinguishable. 
Specifically, \begin{equation} \label{eq:det} \braket{\vec r_0,\ldots,\vec r_{\eta-1}}{\alpha} = \braket{\vec r_0,\ldots,\vec r_{\eta-1}}{\alpha_0, \alpha_1, \cdots, \alpha_{\eta-1}} = \frac{1}{\sqrt{\eta!}} \begin{vmatrix} \varphi_{ \alpha_0}\!\left(\vec r_0\right) & \varphi_{ \alpha_1}\!\left(\vec r_0\right) & \cdots & \varphi_{\alpha_{\eta-1}} \!\left(\vec r_0\right) \\ \varphi_{\alpha_0}\!\left(\vec r_1\right) & \varphi_{ \alpha_1}\!\left(\vec r_1\right) & \cdots & \varphi_{ \alpha_{\eta-1}} \!\left(\vec r_1\right) \\ \vdots & \vdots & \ddots & \vdots\\ \varphi_{\alpha_0}\!\left(\vec r_{\eta-1}\right) & \varphi_{ \alpha_1}\!\left(\vec r_{\eta-1}\right) & \cdots & \varphi_{ \alpha_{\eta-1}} \!\left(\vec r_{\eta-1}\right) \end{vmatrix} \end{equation} where the bars enclosing the matrix in \eq{det} denote a determinant. Because determinants have the property that they are antisymmetric under exchange of any two rows, this construction ensures that our wavefunction obeys the Pauli exclusion principle. We note that although this determinant can be written equivalently in different orders (e.g.~by swapping any two pairs of orbital indices), we avoid this ambiguity by requiring the Slater determinants to only be written in ascending order of spin-orbital indices. The representation of the wavefunction introduced in \cite{Toloui2013} uses $\eta$ distinct registers to encode the occupied set of spin-orbitals, thus requiring $\Theta(\eta \log N) = \widetilde{\cal O}(\eta)$ qubits. However, it would be possible to use a further-compressed representation of the wavefunction based on the direct enumeration of all Slater determinants, requiring only $\Theta(\log \xi)$ qubits. When using very small basis sets (such as the minimal basis), it will occasionally be the case that the spatial overhead of $\Theta(N)$ for the second-quantized algorithm is actually less than the spatial complexity of our algorithm. 
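The qubit-count comparison above can be made concrete with a small script; the basis sizes below are illustrative.

```python
from math import comb, ceil, log2

# Compare the three spatial costs discussed in the text: second quantization
# (N qubits), the eta-register encoding of Toloui and Love (eta * ceil(log2 N)
# qubits), and direct Slater-determinant enumeration (ceil(log2 binom(N, eta))).
def qubit_counts(N, eta):
    return N, eta * ceil(log2(N)), ceil(log2(comb(N, eta)))

assert qubit_counts(8, 4) == (8, 12, 7)          # tiny basis: 2nd quantization can win
assert qubit_counts(4096, 4) == (4096, 48, 44)   # large N, fixed eta: CI encoding wins
```

For fixed $\eta$ the second entry grows only logarithmically in $N$, which is the source of the exponential qubit saving claimed in the text.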
However, for a fixed $\eta$, the CI matrix encoding requires exponentially fewer qubits. \section{The CI Matrix Oracle} \label{sec:oracle} In this section, we discuss the construction of the circuit referred to in our introduction as $\textsc{select}({\cal H})$, which applies the self-inverse operators in a controlled way. As discussed in \cite{BabbushSparse1}, the truncated Taylor series technique of \cite{Berry2015} can be used with a selection oracle for an integrand which defines the molecular Hamiltonian. This method will then effect evolution under this Hamiltonian with an exponential increase in precision over Trotter-based methods. For clarity of exposition, we describe the construction of $\textsc{select}({\cal H})$ in terms of two smaller oracle circuits which can be queried to learn information about the 1-sparse unitary integrands. This information is then used to evolve an arbitrary quantum state under a specific 1-sparse unitary. The first of the oracles described here is denoted as $Q^{\textrm{col}}$ and is used to query information about the sparsity pattern of a particular 1-sparse Hermitian matrix from \eq{gamma_decomp}. The second oracle is denoted as $Q^{\textrm{val}}$ and is used to query information about the value of integrands for elements in the CI matrix. We construct $Q^{\textrm{val}}$ by making calls to a circuit constructed in \cite{BabbushSparse1} where it is referred to as ``$\textsc{sample}(w)$''. The purpose of $\textsc{sample}(w)$ is to sample the integrands of the one-electron and two-electron integrals $h_{ij}$ and $h_{ijk\ell}$ in \eq{double_int} and \eq{single_int}. The construction of $\textsc{sample}(w)$ in \cite{BabbushSparse1} requires $\widetilde{\cal O}(N)$ gates. The oracle $Q^{\textrm{col}}$ uses information from the index $\gamma$. 
The index $\gamma$ is associated with the indices $(a_1,b_1,i,p,a_2,b_2,j,q)$ which describe the sparsity structure of the 1-sparse Hermitian matrix $H_{\gamma}$ according to the decomposition in \sec{dec2}. $Q^{\textrm{col}}$ acts on a register specifying a color $\ket{\gamma}$ as well as a register containing an arbitrary row index $\ket{\alpha}$ to reveal a column index $\ket{\beta}$ so that the ordered pair ($\alpha$, $\beta$) indexes the nonzero element in row $\alpha$ of $H_{\gamma}$, \begin{align} Q^\textrm{col} \ket{\gamma} \ket{\alpha}\ket{0}^{\otimes \eta \log N} & = \ket{\gamma}\ket{\alpha}\ket{\beta}. \end{align} From the description in \sec{dec2}, implementation of the unitary oracle $Q^\textrm{col}$ is straightforward. To construct $\textsc{select}({\cal H})$ we need a second oracle that returns the value of the matrix elements in the decomposition. This selection oracle is queried with a register $\ket{\ell} = \ket{s}\ket{m}\ket{\gamma}$ which specifies which part of the 1-sparse representation we want, as well as a register $\ket{\rho}$ which indexes the grid point $\rho$ and registers $\ket{\alpha}$ and $\ket{\beta}$ specifying the two Slater determinants. Specifically, the entries in the tuple identify the color ($\gamma$) of the 1-sparse Hermitian matrix from which the 1-sparse unitary matrix originated, which positive integer index ($m \leq M$) it corresponds to in the further decomposition of $\aleph_{\gamma,\rho}$ into $C_{\gamma,\rho,m}$, and which part it corresponds to in the splitting of $C_{\gamma,\rho,m}$ into $C_{\gamma,\rho,m,s}$ (where $s\in\{1,2\}$). As a consequence of the Slater-Condon rules shown in Eqs.~\eqref{eq:slate1}, \eqref{eq:slate2}, \eqref{eq:slate3}, and \eqref{eq:slate4}, $Q^\textrm{val}$ can be constructed given access to $\textsc{sample}(w)$, which samples the integrand of the integrals in Eqs.~\eqref{eq:double_int} and \eqref{eq:single_int} \cite{BabbushSparse1}.
Consistent with the decomposition in \sec{dec2}, the $i$ and $j$ indices in the register containing $\gamma = (i,p,j,q)$ specify the dissimilar spin-orbitals in $\ket{\alpha}$ and $\ket{\beta}$ that are needed in the integrands defined by the Slater-Condon rules; therefore, the determination of which spin-orbitals differ between $\ket{\alpha}$ and $\ket{\beta}$ can be made in ${\cal O}(\log N)$ time (only the time needed to read their values from $\gamma$). As $\textsc{sample}(w)$ is comprised of $\widetilde{\cal O}(N)$ gates, $Q^{\textrm{val}}$ has time complexity $\widetilde{\cal O}(N)$ and acts as \begin{align} &\quad Q^\textrm{val}\ket{\ell}\ket{\rho}\ket{\alpha}\ket{\beta}= {\cal H}_{\ell,\rho}^{\alpha \beta} \ket{\ell}\ket{\rho}\ket{\alpha}\ket{\beta}, \end{align} where ${\cal H}_{\ell,\rho}^{\alpha \beta}$ is the value of the matrix entry at $(\alpha,\beta)$ in the self-inverse matrix ${\cal H}_{\ell,\rho}$. When either $\ket{\alpha}$ or $\ket{\beta}$ represents an invalid Slater determinant (with more than one occupation on any spin-orbital), we take ${\cal H}_{\ell,\rho}^{\alpha \beta} = 0$ for $\alpha \ne \beta$. This ensures there are no transitions into Slater determinants which violate the Pauli exclusion principle. The choice of ${\cal H}_{\ell,\rho}^{\alpha \alpha}$ for invalid $\alpha$ will not affect the result, because the state will have no weight on the invalid Slater determinants. Having constructed the column and value oracles, we are finally ready to construct $\textsc{select} ({\cal H})$. This involves implementing 1-sparse unitary operations. The method we describe is related to the scheme presented in \cite{Aharonov2003} for evolution under 1-sparse Hamiltonians, but is simplified due to the simpler form of the operators. As in \eq{selectH}, $\textsc{select}({\cal H})$ applies the term ${\cal H}_{\ell,\rho}$ in the 1-sparse unitary decomposition to the wavefunction $\ket{\psi}$.
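Classically, the action of one such 1-sparse self-inverse term via column and value lookups can be sketched as follows; the column and value oracles below are made-up 4-state toys standing in for $Q^\textrm{col}$ and $Q^\textrm{val}$, not the CI matrix oracles.

```python
import numpy as np

# Classical sketch of applying a self-inverse 1-sparse term:
# col_oracle plays the role of Q^col (row -> paired column) and
# val_oracle the role of Q^val (the +/-1 entry value).
col_oracle = {0: 1, 1: 0, 2: 3, 3: 2}                 # an involution, as required
val_oracle = {(0, 1): 1.0, (1, 0): 1.0, (2, 3): -1.0, (3, 2): -1.0}

def apply_term(psi):
    out = np.zeros_like(psi)
    for alpha, c in enumerate(psi):
        beta = col_oracle[alpha]                      # Q^col: locate the column
        k = val_oracle[(alpha, beta)]                 # Q^val: +/-1 phase
        out[beta] += k * c                            # SWAP: amplitude lands on beta
    return out

H = np.zeros((4, 4))
for (a, b), v in val_oracle.items():
    H[a, b] = v                                       # the same term as an explicit matrix
psi = np.array([0.5, 0.5, 0.5, 0.5])
assert np.allclose(apply_term(psi), H @ psi)          # matches the matrix action
assert np.allclose(apply_term(apply_term(psi)), psi)  # self-inverse: H^2 = I
```

The quantum circuit below implements the same three steps coherently, with a final $Q^\textrm{col}$ call to uncompute the ancilla register.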
Writing $\ket{\psi}=\sum_\alpha c_\alpha\ket{\alpha}$, we require that $\textsc{select}({\cal H})$ first call $Q^\textrm{col}$ to obtain the columns, $\beta$, corresponding to the rows, $\alpha$, for the nonzero entries of the Hamiltonian: \begin{align} \ket{\ell}\ket{\rho}\ket{\psi}\ket{0}^{\otimes \eta \log N} & \mapsto \sum_\alpha c_\alpha Q^\textrm{col}\ket{\ell}\ket{\rho}\ket{\alpha}\ket{0}^{\otimes \eta \log N}\nonumber\\ & =\sum_\alpha c_\alpha \ket{\ell}\ket{\rho}\ket{\alpha}\ket{\beta}. \end{align} Now that we have the row and column of the matrix element, we apply $Q^\textrm{val}$ which causes each Slater determinant to accumulate the phase factor $k_\alpha = {\cal H}_{\ell,\rho}^{\alpha \beta} = \pm 1$: \begin{align} \sum_\alpha c_\alpha \ket{\ell}\ket{\rho}\ket{\alpha}\ket{\beta} & \mapsto \sum_\alpha c_\alpha Q^\textrm{val}\ket{\ell}\ket{\rho}\ket{\alpha}\ket{\beta}\\ & =\sum_\alpha c_\alpha k_\alpha\ket{\ell}\ket{\rho}\ket{\alpha}\ket{\beta}\nonumber. \end{align} Next, we swap the locations of $\alpha$ and $\beta$ in order to complete multiplication by the 1-sparse unitary, \begin{align} \sum_\alpha c_\alpha k_\alpha\ket{\ell}\ket{\rho}\ket{\alpha}\ket{\beta} & \mapsto \sum_\alpha c_\alpha k_\alpha\ket{\ell}\ket{\rho}\text{SWAP}\ket{\alpha}\ket{\beta}\nonumber\\ & =\sum_\alpha c_\alpha k_\alpha\ket{\ell}\ket{\rho}\ket{\beta}\ket{\alpha}. \end{align} Finally we apply $Q^\textrm{col}$ again but this time $\beta$ is in the first register. Since $Q^\textrm{col}$ is self-inverse and always maps $\ket{\alpha}\ket{b}$ to $\ket{\alpha}\ket{b \oplus \beta}$ and $\ket{\beta}\ket{b}$ to $\ket{\beta}\ket{b \oplus \alpha}$, this allows us to uncompute the ancilla register. 
\begin{align} \sum_\alpha c_\alpha k_\alpha\ket{\ell}\ket{\rho}\ket{\beta}\ket{\alpha} & \mapsto \sum_\alpha c_\alpha k_\alpha Q^\textrm{col}\ket{\ell}\ket{\rho}\ket{\beta}\ket{\alpha}\nonumber\\ & = \sum_\alpha c_\alpha k_\alpha\ket{\ell}\ket{\rho}\ket{\beta}\ket{0}^{\otimes \eta \log N} \nonumber \\ &= \ket{\ell}\ket{\rho}{\cal H}_{\ell,\rho}\ket{\psi}\ket{0}^{\otimes \eta \log N}. \end{align} Note that this approach works regardless of whether the entry is on-diagonal or off-diagonal; we do not need separate schemes for the two cases. The circuit for $\textsc{select}({\cal H})$ is depicted in \fig{select_circuit}. \begin{figure}[h!] \[\xymatrix @*=<0em> @R 1em @C 1em { \lstick{\kets{\rho}} & \qw & \multigate{5}{Q^\textrm{val}} & \qw & \qw &\rstick{\kets{\rho}} \qw\\ \lstick{\kets{s}} & \qw & \ghost{Q^\textrm{val}} & \qw & \qw & \rstick{\kets{s}} \qw\\ \lstick{\kets{m}} & \qw & \ghost{Q^\textrm{val}} & \qw & \qw & \rstick{\kets{m}} \qw\\ \lstick{\kets{\gamma}} & \multigate{2}{Q^\textrm{col}} & \ghost{Q^\textrm{val}} & \qw & \multigate{2}{Q^\textrm{col}}& \rstick{\kets{\gamma}} \qw\\ \lstick{\kets{\psi}} & \ghost{Q^\textrm{col}} & \ghost{Q^\textrm{val}} & *=<0em>{\times} \qw & \ghost{Q^\textrm{col}} &\rstick{{\cal H}_{\ell,\rho}\kets{\psi}} \qw\\ \lstick{\kets{0}^{\otimes \eta \log N}} & \ghost{Q^\textrm{col}} & \ghost{Q^\textrm{val}} & *=<0em>{\times} \qw \qwx & \ghost{Q^\textrm{col}} & \rstick{\kets{0}^{\otimes \eta \log N}} \qw\\ }\] \caption{\label{fig:select_circuit} The circuit implementing $\textsc{select}({\cal H})$, which applies the term ${\cal H}_{\ell,\rho}$ labeled by $\ell=(\gamma, m, s)$ and the grid point $\rho$ in the unitary 1-sparse decomposition to the wavefunction $\ket{\psi}$.} \end{figure} \section{Discussion} \label{sec:conclusion} We have outlined a method to simulate the quantum chemistry Hamiltonian in a basis of Slater determinants using recent advances from the universal simulation literature.
We find an oracular decomposition of the Hamiltonian into 1-sparse matrices based on an edge coloring routine first described in \cite{Toloui2013}. We use that oracle to simulate evolution under the Hamiltonian using the truncated Taylor series technique described in \cite{Berry2015}. We discretize the integrals which define entries of the CI matrix, and use the sum of unitaries approach to exponentially compress evaluation of these discretized integrals. Asymptotic scalings suggest that the algorithms described in this paper series will allow for the quantum simulation of much larger molecular systems than would be possible using a Trotter-Suzuki decomposition. Recent work \cite{Wecker2014,Hastings2015,Poulin2014,McClean2014,BabbushTrotter} has demonstrated markedly more efficient implementations of the original Trotter-Suzuki-based quantum chemistry algorithm \cite{Aspuru-Guzik2005,Whitfield2010}; similarly, we believe the implementations discussed here can still be improved upon, and that numerical simulations will be crucial to this task. Finally, we note that the CI matrix simulation strategy discussed here opens up the possibility of an interesting approach to adiabatic state preparation. An adiabatic algorithm for quantum chemistry was suggested in second quantization in \cite{BabbushAQChem} and studied further in \cite{Veis2014}. However, those works did not suggest a compelling adiabatic path to take between an easy-to-prepare initial state (such as the Hartree-Fock state) and the ground state of the exact Hamiltonian. We note that one could start the system in the Hartree-Fock state, and use the CI matrix oracles discussed in this paper to ``turn on'' a Hamiltonian having support over a number of configuration basis states which increases smoothly with time. \section*{Acknowledgements} The authors thank Jhonathan Romero Fontalvo, Jarrod McClean, Borzu Toloui, and Nathan Wiebe for helpful discussions.
D.W.B.\ is funded by an Australian Research Council Future Fellowship (FT100100761). P.J.L.\ acknowledges the support of the National Science Foundation under grant number PHY-0955518. A.A.-G.\ and P.J.L.\ acknowledge the support of the Air Force Office of Scientific Research under award number FA9550-12-1-0046. A.A.-G.\ acknowledges the Army Research Office under Award: W911NF-15-1-0256. \subsection{Proof of \lem{int0}} \label{app:integral_discretization/int0_proof} Our proof for \lem{int0} roughly follows the three stages presented in \app{integral_discretization/preliminaries/proof_structure}. Here we give the proof in summary form and relegate some of the details to the later subsections. \subsubsection{First stage for \lem{int0}} The first part of the proof corresponds to the first stage discussed in \app{integral_discretization/preliminaries/proof_structure}. We choose \begin{align} \label{eq:xvallem1} x_0 &:= \frac{2}{\alpha} x_\text{max} \log \left( \frac{K_0 \varphi_\text{max}^2 x_\text{max}}{\delta} \right), \nonumber \\ D_0 &:= \mathcal{C}_{x_0} \left( \vec{c}_i \right). \end{align} The condition \eq{sensible0} ensures that $x_0 \geq x_\text{max}$. We show in \app{integral_discretization/int0_proof/trunc} that the error due to this truncation can be bounded as \begin{equation} \delta_\text{trunc}^{(0)} := \abs{ S_{ij}^{(0)} \!\left( \R^3 \right) - S_{ij}^{(0)} \!\left( D_0 \right) } < \frac{8\pi\gamma_2}{\alpha^3} \varphi_\text{max}^2 x_\text{max} \exp \left( -\frac{\alpha}{2} \frac{x_0}{x_\text{max}} \right). \end{equation} \subsubsection{Green's identity for \lem{int0}} The next part of the proof is specific to \lem{int0}, and is not one of the general stages outlined in \app{integral_discretization/preliminaries/proof_structure}. The integral is given in the form with a second derivative of an orbital, which means that to bound the error we would need additional bounds on the third derivatives of the orbitals.
We have not assumed such bounds, so we would like to reexpress the integral in terms of first derivatives before approximating it as a Riemann sum. We have already truncated the domain, though, so we will obtain terms from the boundary of the truncated domain. We reexpress the integral via Green's first identity, which gives \begin{equation}\label{eq:greensfirst} S_{ij}^{(0)} \!\left( D_0 \right) = \frac{1}{2} \int_{D_0} \nabla \varphi_i^* (\vec{r}) \cdot \nabla \varphi_j (\vec{r})\ \dd{V} - \frac{1}{2} \oint_{\partial D_0} \varphi_i^* (\vec{r}) \nabla \varphi_j (\vec{r}) \cdot \dd{\vec{S}}, \end{equation} where $\dd{V}$ and $\dd{\vec{S}}$ are the volume and oriented surface elements, respectively, and $\partial D_0$ is the boundary of $D_0$. The reason why we do not make this change before truncating the domain is that we have not made any assumptions on the rate of decay of the derivatives of the orbitals. We define \begin{equation} \widetilde{S}_{ij}^{(0)} \!\left( D_0 \right) := \frac{1}{2} \int_{D_0} \nabla \varphi_i^* (\vec{r}) \cdot \nabla \varphi_j (\vec{r}) \ \dd{V}. \end{equation} We show (in \app{integral_discretization/int0_proof/Green}) that \begin{equation}\label{eq:greenserror} \delta_\text{Green}^{(0)} := \abs{S_{ij}^{(0)}\! \left( D_0 \right) - \widetilde{S}_{ij}^{(0)}\! \left( D_0 \right)} < \frac{26 \gamma_1}{\alpha^2} \varphi_\text{max}^2 x_\text{max} \exp \left( - \frac{\alpha}{2} \frac{x_0}{x_\text{max}} \right). \end{equation} \subsubsection{Second stage for \lem{int0}} Next we consider the discretization into a Riemann sum for \lem{int0}. 
We define \begin{equation} \label{eq:partition_size_int0} N_0 := \left\lceil \left( \frac{x_0}{x_\text{max}} \right)^4 \exp \left( \frac{\alpha}{2} \frac{x_0}{x_\text{max}} \right) \right\rceil, \end{equation} so that \begin{equation} N_0 = \left\lceil \frac{K_0 \varphi_\text{max}^2 x_\text{max}}{\delta} \left[ \frac{2}{\alpha} \log \left( \frac{K_0 \varphi_\text{max}^2 x_\text{max}}{\delta} \right) \right]^4 \right\rceil. \end{equation} The Riemann sum is then \begin{equation} \mathcal{R}_0 := \sum_{\vec{k}} \frac{1}{2} \nabla \varphi_i^* \!\left( \vec{r}_{\vec{k}} \right) \cdot \nabla \varphi_j \!\left( \vec{r}_{\vec{k}} \right) \text{vol} \left( T_{\vec{k}}^{(0)} \right), \end{equation} where, for every triple of integers $\vec{k} = \left( k_1, k_2 , k_3 \right)$ such that $0 \leq k_1, k_2, k_3 < N_0$, we define \begin{equation} \vec{r}_{\vec{k}} = \vec{c}_i + \frac{x_0}{N_0} \left[ 2\vec{k} - \left(N_0-1,N_0-1,N_0-1\right) \right] \end{equation} and \begin{equation} T_{\vec{k}}^{(0)} := \mathcal{C}_{x_0/N_0} \left( \vec{r}_{\vec{k}} \right). \end{equation} Thus we have partitioned $D_0$ into $\mu = N_0^3$ equal-sized cubes $T_{\vec{k}}^{(0)}$ that overlap on sets of measure zero. The expression in \eq{lem1mu} of \lem{int0} then follows immediately. Each term of $\mathcal{R}_0$ satisfies \begin{align} \label{eq:int0_summand_bound} \left\| \frac{1}{2} \nabla \varphi_i^*\! \left( \vec{r}_{\vec{k}} \right) \cdot \nabla \varphi_j\!
\left( \vec{r}_{\vec{k}} \right) \text{vol} \left( T_{\vec{k}}^{(0)} \right) \right\| & \leq \frac{1}{2} \left\|\nabla \varphi_i^* \!\left( \vec{r}_{\vec{k}} \right)\right\| \left\|\nabla \varphi_j \!\left( \vec{r}_{\vec{k}} \right)\right\| \text{vol} \left( T_{\vec{k}}^{(0)} \right) \nonumber \\ & \leq \frac{1}{2} \left( \gamma_1 \frac{\varphi_\text{max}}{x_\text{max}} \right)^2 \left( \frac{2x_0}{N_0} \right)^3 \nonumber \\ & = \frac{1}{\mu} \times 4 \gamma_1^2 \left( \frac{x_0}{x_\text{max}} \right)^3 \varphi_\text{max}^2 x_\text{max}, \end{align} where the second inequality follows from \eq{spin-orbital_first_derivative_bound}. Using the value of $x_0$ in \eq{xvallem1} in \eq{int0_summand_bound}, each term in the sum has the upper bound on its absolute value (corresponding to \eq{lem1bnd} in \lem{int0}) \begin{equation} \frac{1}{\mu} \times32 \frac{\gamma_1^2}{\alpha^3} \left[ \log \left( \frac{K_0 \varphi_\text{max}^2 x_\text{max}}{\delta} \right) \right]^3 \varphi_\text{max}^2 x_\text{max}. \end{equation} We show (in \app{integral_discretization/int0_proof/Riemann}) that \begin{equation} \delta_\text{Riemann}^{(0)} := \abs{ \widetilde{S}_{ij}^{(0)} \!\left( D_0 \right) - \mathcal{R}_0 } < 16\sqrt{3} \gamma_1 \gamma_2 \varphi_\text{max}^2 x_\text{max} \exp \left( - \frac{\alpha}{2} \frac{x_0}{x_\text{max}} \right). \end{equation} \subsubsection{Third stage for \lem{int0}} In the final part of the proof of \lem{int0} we show that the total error is properly bounded. By the triangle inequality, we have \begin{equation} \delta_\text{total}^{(0)} := \abs{S_{ij}^{(0)} \!\left( \R^3 \right) - \mathcal{R}_0} \leq \delta_\text{trunc}^{(0)} + \delta_\text{Green}^{(0)} + \delta_\text{Riemann}^{(0)}. 
\end{equation} We therefore arrive at a simple bound on the total error: \begin{equation} \label{eq:total_error_int0} \delta_\text{total}^{(0)} < K_0 \varphi_\text{max}^2 x_\text{max} \exp \left( - \frac{\alpha}{2} \frac{x_0}{x_\text{max}} \right), \end{equation} where \begin{equation} K_0 := \frac{26 \gamma_1}{\alpha^2} + \frac{8\pi\gamma_2}{\alpha^3} + 16\sqrt{3} \gamma_1 \gamma_2. \end{equation} To ensure that $\delta^{(0)}_\text{total} \leq \delta$, we should have \begin{equation} K_0 \varphi_\text{max}^2 x_\text{max} \exp \left( - \frac{\alpha}{2} \frac{x_0}{x_\text{max}} \right) \le \delta. \end{equation} We can satisfy this inequality with $x_0$ given by \eq{xvallem1}. This last step completes our proof. The remainder of this subsection gives the details for some of the steps above. \subsubsection{Bounding $\delta_\text{trunc}^{(0)}$ for \lem{int0}} \label{app:integral_discretization/int0_proof/trunc} Observe first that \begin{equation} \delta_\text{trunc}^{(0)} = \abs{ S_{ij}^{(0)} \!\left(\R^3\right) - S_{ij}^{(0)} \!\left( D_0 \right) } \leq \frac{1}{2} \int_{\R^3 \setminus D_0} \abs{\varphi_i^* \!\left(\vec{r}\right)} \abs{\nabla^2 \varphi_j \!\left(\vec{r}\right)} \dd{\vec{r}} \leq \frac{1}{2} \int_{\R^3 \setminus \mathcal{B}_x \left(\vec{c}_i\right)} \abs{\varphi_i^* \!\left(\vec{r}\right)} \abs{\nabla^2 \varphi_j \!\left(\vec{r}\right)} \dd{\vec{r}}, \end{equation} where the last inequality follows from the fact that $\mathcal{B}_x \left(\vec{c}_i\right) \subset D_0$. Using this fact together with assumptions 2 and 3 from \sec{Riemann}, we have \begin{equation} \delta_\text{trunc}^{(0)} \leq \frac{\gamma_2}{2} \frac{\varphi_\text{max}^2}{x_\text{max}^2} \int_{\R^3 \setminus \mathcal{B}_{x_0} \left(\vec{c}_i\right)} \exp \left( -\alpha \frac{\norm{\vec{r} - \vec{c}_i}}{x_\text{max}} \right) \dd{\vec{r}}. \end{equation} We simplify this expression by expressing $\vec{r}$ in polar coordinates with center $\vec{c}_i$. 
After integrating over the solid angles, we have \begin{equation} \delta_\text{trunc}^{(0)} \leq 2\pi \gamma_2 \frac{\varphi_\text{max}^2}{x_\text{max}^2} \int_{x_0}^\infty s^2 e^{-\alpha s / x_\text{max}} \dd{s}. \end{equation} Noting that, for any $\lambda, x_0 > 0$, \begin{equation} \int_{x_0}^\infty s^2 e^{-\lambda s} \dd{s} = \left( \frac{x_0^2}{\lambda} + \frac{2x_0}{\lambda^2} + \frac{2}{\lambda^3} \right) e^{-\lambda x_0} < \frac{4}{\lambda^3} e^{-\lambda x_0/2}, \end{equation} we have \begin{equation} \delta_\text{trunc}^{(0)} < \frac{8\pi\gamma_2}{\alpha^3} \varphi_\text{max}^2 x_\text{max} \exp \left( -\frac{\alpha}{2} \frac{x_0}{x_\text{max}} \right). \end{equation} \subsubsection{Bounding $\delta_\text{Green}^{(0)}$ for \lem{int0}} \label{app:integral_discretization/int0_proof/Green} Using \eq{greensfirst} and \eq{greenserror} we have \begin{equation} \delta_\text{Green}^{(0)} = \abs{\frac{1}{2} \oint_{\partial D_0} \varphi_i^* (\vec{r}) \nabla \varphi_j (\vec{r}) \cdot \dd{\vec{S}}}. \end{equation} Then using \eq{spin-orbital_decay} and \eq{spin-orbital_first_derivative_bound} gives us \begin{equation} \delta_\text{Green}^{(0)} \leq \frac{\gamma_1}{2} \frac{\varphi_\text{max}^2}{x_\text{max}} \oint_{\partial D_0} \exp \left( -\alpha \frac{\norm{\vec{r}-\vec{c}_i}}{x_\text{max}} \right) \dd{S}. \end{equation} We further observe that $\norm{\vec{r}-\vec{c}_i} \geq x_0$ for all $\vec{r} \in \partial D_0$, and the cube with side length $2x_0$ has surface area $24 x_0^2$, giving \begin{equation} \delta_\text{Green}^{(0)} \leq 12 \gamma_1 \frac{\varphi_\text{max}^2}{x_\text{max}} x_0^2 \exp \left( -\alpha \frac{x_0}{x_\text{max}} \right) < \frac{26 \gamma_1}{\alpha^2} \varphi_\text{max}^2 x_\text{max} \exp \left( - \frac{\alpha}{2} \frac{x_0}{x_\text{max}} \right), \end{equation} where we have noted $12 z^2 e^{-z} < 26 e^{-z/2}$ for all $z > 0$. \subsubsection{Bounding $\delta_\text{Riemann}^{(0)}$} \label{app:integral_discretization/int0_proof/Riemann} First we bound the derivative of the integrand.
We use the product rule, the triangle inequality, \eq{spin-orbital_first_derivative_bound} and \eq{spin-orbital_second_derivative_bound} to find \begin{equation} \norm{ \nabla \left( \nabla \varphi_i^* (\vec{r}) \cdot \nabla \varphi_j (\vec{r}) \right) } \leq \abs{\nabla^2 \varphi_i^* (\vec{r})} \norm{\nabla \varphi_j (\vec{r})} + \abs{\nabla^2 \varphi_j (\vec{r})} \norm{\nabla \varphi_i^* (\vec{r})} \leq 2 \gamma_1 \gamma_2 \frac{\varphi_\text{max}^2}{x_\text{max}^3}. \end{equation} We have \begin{equation} \text{vol} \left( D_0 \right) = 8 x_0^3 \end{equation} and \begin{equation} \text{diam} \left( T_{\vec{k}}^{(0)} \right) = 2 \sqrt{3} x_0 / N_0 \end{equation} for all $\vec{k}$. Using \eq{Riemann_error} and \eq{partition_size_int0}, and noting that $1/\lceil z \rceil \leq 1/z$ for any $z > 0$, we have \begin{equation} \delta_\text{Riemann}^{(0)} \leq 16\sqrt{3} \gamma_1 \gamma_2 \frac{\varphi_\text{max}^2}{x_\text{max}^3} \frac{x_0^4}{N_0} \leq 16\sqrt{3} \gamma_1 \gamma_2 \varphi_\text{max}^2 x_\text{max} \exp \left( - \frac{\alpha}{2} \frac{x_0}{x_\text{max}} \right). \end{equation} \subsection{Proof of \lem{int1}} \label{app:integral_discretization/int1_proof} For this proof, the discretization into the Riemann sum will be performed differently depending on whether spin-orbital $i$ is distant from or close to nucleus $q$. If the nucleus is far from the spin-orbital, the singularity in the integrand is not inside our truncated domain of integration and we need not take special care with it. Otherwise, we can remove the singularity by defining spherical polar coordinates centred at the nucleus. In each case, we select different truncated integration domains and therefore different Riemann sums. We focus on the centre of spin-orbital $i$ for simplicity; in principle, the centre of spin-orbital $j$ could also be taken into account. \subsubsection{First stage for \lem{int1}} We again start by truncating the domain of integration.
We select \begin{equation}\label{eq:xvallem2} x_1=\frac{2}{\alpha} x_\text{max} \log \left( \frac{K_1 Z_q \varphi_\text{max}^2 x_\text{max}^2}{\delta}\right). \end{equation} The condition in \eq{sensible1} ensures that $x_1 \geq x_\text{max}$. We regard the spin-orbital as distant from the nucleus if \begin{equation} \norm{\vec{R}_q - \vec{c}_i} \geq \sqrt{3}x_1 + x_\text{max}. \end{equation} If so, we use the truncated domain \begin{equation} D_{1, q, \text{non-singular}} := \mathcal{C}_{x_1}\! \left( \vec{c}_i \right) . \end{equation} Otherwise, we use \begin{equation} D_{1, q, \text{singular}} := \mathcal{B}_{4x_1}\! \left( \vec{R}_q \right). \end{equation} We define \begin{align} \delta_\text{trunc}^{(1, q, \text{non-singular})} & := \abs{ S_{ij}^{(1,q)}\! \left( \R^3 \right) - S_{ij}^{(1,q)}\! \left( D_{1, q, \text{non-singular}} \right) }, \\ \delta_\text{trunc}^{(1, q, \text{singular})} & := \abs{ S_{ij}^{(1,q)}\! \left( \R^3 \right) - S_{ij}^{(1,q)}\! \left( D_{1, q, \text{singular}} \right) }, \\ \delta_\text{trunc}^{(1, q)} & := \max \left\{ \delta_\text{trunc}^{(1, q, \text{non-singular})}, \delta_\text{trunc}^{(1, q, \text{singular})} \right\}, \end{align} and show in \app{integral_discretization/int1_proof/trunc} that \begin{equation} \delta_\text{trunc}^{(1, q)} < \frac{8\pi^2}{\alpha^3}\left( \alpha+2 \right) Z_q \varphi_\text{max}^2 x_\text{max}^2 \exp \left( - \frac{\alpha}{2} \frac{x_1}{x_\text{max}} \right). \end{equation} \subsubsection{Second stage for \lem{int1} with Cartesian coordinates} Now we consider the discretization of the integral in the case that $\norm{\vec{R}_q - \vec{c}_i} \geq \sqrt{3}x_1 + x_\text{max}$, so orbital $i$ can be regarded as distant from the nucleus.
We set \begin{equation} \label{eq:partition_size_int1} N_1 := \left\lceil \left( \frac{x_1}{x_\text{max}} \right)^4 \exp \left( \frac{\alpha}{2} \frac{x_1}{x_\text{max}} \right) \right\rceil \end{equation} and define two different Riemann sums containing $\mu = N_1^3$ terms. We also use this expression for $N_1$ in the case that the spin-orbital is near the nucleus. Using our value of $x_1$ in \eq{xvallem2}, \begin{equation} N_1 = \left\lceil \frac{K_1 Z_q \varphi_\text{max}^2 x_\text{max}^2}{\delta} \left[ \frac{2}{\alpha} \log \left( \frac{K_1 Z_q \varphi_\text{max}^2 x_\text{max}^2}{\delta} \right) \right]^4 \right\rceil . \end{equation} Then, noting that $\mu = N_1^3$ is the number of terms in either Riemann sum, we obtain the bound on $\mu$ in \eq{lem2mu} of \lem{int1}. We approximate $S_{ij}^{(1,q)} \!\left( D_{1, q, \text{non-singular}} \right)$ with the sum \begin{equation} \mathcal{R}_{1, q, \text{non-singular}} := \sum_{\vec{k}} -Z_q \frac{ \varphi_i^* \!\left( \vec{r}_{\vec{k}} \right) \varphi_j \!\left( \vec{r}_{\vec{k}} \right) }{ \|\vec{R}_q - \vec{r}_{\vec{k}}\| } \text{vol} \left( T_{\vec{k}}^{(1, q, \text{non-singular})} \right), \end{equation} where, for every triple of integers $\vec{k} = \left( k_1, k_2 , k_3 \right)$ such that $0 \leq k_1, k_2, k_3 < N_1$, we define \begin{equation} \vec{r}_{\vec{k}} = \frac{x_1}{N_1} \left[ 2\vec{k} - \left(N_1-1, N_1-1, N_1-1\right) \right] \end{equation} and \begin{equation} T_{\vec{k}}^{(1, q, \text{non-singular})} := \mathcal{C}_{x_1/N_1}\! \left( \vec{r}_{\vec{k}} \right). \end{equation} Thus we have partitioned $D_{1, q, \text{non-singular}}$ into $N_1^3$ equal-sized cubes $T_{\vec{k}}^{(1, q, \text{non-singular})}$ that overlap on sets of measure zero. 
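As a purely illustrative aside (not part of the proof), the midpoint construction above is easy to realize numerically. In the sketch below a Gaussian stands in for the orbital product $\varphi_i^* \varphi_j$ (a hypothetical integrand, chosen only because its integral over $\R^3$ is known to be $\pi^{3/2}$), and the sample points are exactly the $\vec{r}_{\vec{k}}$ defined in the text:

```python
import math
from itertools import product

def midpoint_riemann_3d(f, x_trunc, n):
    """Midpoint Riemann sum of f over the cube [-x_trunc, x_trunc]^3,
    partitioned into n^3 equal sub-cubes; the sample points are
    r_k = (x_trunc/n) * (2k - (n-1, n-1, n-1)), as in the text."""
    h = x_trunc / n                 # half the side length of one sub-cube
    vol = (2.0 * h) ** 3            # volume of one sub-cube
    return sum(
        f(tuple(h * (2 * ki - (n - 1)) for ki in k)) * vol
        for k in product(range(n), repeat=3)
    )

# Toy integrand exp(-|r|^2): its integral over all of R^3 is pi^(3/2),
# and truncating at x_trunc = 4 costs only an exponentially small tail.
gauss = lambda r: math.exp(-(r[0] ** 2 + r[1] ** 2 + r[2] ** 2))
approx = midpoint_riemann_3d(gauss, x_trunc=4.0, n=24)
print(abs(approx - math.pi ** 1.5))
```

Both the truncation and the Riemann errors decay rapidly here, mirroring the exponential factors in the bounds above.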
Each term of $\mathcal{R}_{1, q, \text{non-singular}}$ satisfies \begin{align} \label{eq:int1_summand_bound_non-sing} \abs{ -Z_q \frac{ \varphi_i^* \!\left( \vec{r}_{\vec{k}} \right) \varphi_j \!\left( \vec{r}_{\vec{k}} \right) }{ \|\vec{R}_q - \vec{r}_{\vec{k}}\| } \text{vol} \left( T_{\vec{k}}^{(1, q, \text{non-singular})} \right) } & \leq Z_q \frac{ \abs{\varphi_i^* \!\left( \vec{r}_{\vec{k}} \right)} \abs{\varphi_j \!\left( \vec{r}_{\vec{k}} \right)} }{\|\vec{R}_q - \vec{r}_{\vec{k}}\|} \text{vol} \left( T_{\vec{k}}^{(1, q, \text{non-singular})} \right) \nonumber \\ & \leq Z_q \frac{\varphi_\text{max}^2}{x_\text{max}} \left( \frac{2x_1}{N_1} \right)^3 \nonumber \\ & = \frac{1}{\mu} \times 8 x_1^3 \frac{Z_q \varphi_\text{max}^2}{x_\text{max}}, \end{align} where we have used \eq{spin-orbital_bound} and the fact that $\|\vec{R}_q - \vec{r}\| \geq x_\text{max}$ for every $\vec{r} \in D_{1, q, \text{non-singular}}$. This upper bound is no greater than \begin{equation}\label{eq:bndlem2} \frac{1}{\mu} \times 32 \pi^2 x_1^3 \frac{Z_q \varphi_\text{max}^2}{x_\text{max}}. \end{equation} Now substituting our value of $x_1$ from \eq{xvallem2} shows that no term has absolute value greater than (corresponding to \eq{lem2bnd} in \lem{int1}) \begin{equation} \frac{1}{\mu} \times \frac{256\pi^2}{\alpha^3} Z_q \varphi_\text{max}^2 x_\text{max}^2 \left[ \log \left( \frac{K_1 Z_q \varphi_\text{max}^2 x_\text{max}^2}{\delta} \right) \right]^3. \end{equation} We show in \app{integral_discretization/int1_proof/Riemann_non-sing} that \begin{align} \delta_\text{Riemann}^{(1, q, \text{non-singular})} & := \abs{ S_{ij}^{(1, q)}\! \left( D_{1, q, \text{non-singular}} \right) - \mathcal{R}_{1, q, \text{non-singular}} } \nonumber \\ & \leq 8 \sqrt{3} \left( 2 \gamma_1 + 1 \right) Z_q \varphi_\text{max}^2 x_\text{max}^2 \exp \left(-\frac{\alpha}{2} \frac{x_1}{x_\text{max}}\right).
\end{align} \subsubsection{Second stage for \lem{int1} with spherical polar coordinates} Next we consider discretization of the integral for the case where $\norm{\vec{R}_q - \vec{c}_i} < \sqrt{3}x_1 + x_\text{max}$, so orbital $i$ is near the nucleus. We express \begin{equation} S_{ij}^{(1,q)} \! \left( D_{1, q, \text{singular}} \right) = - 16 x_1^2 Z_q \int_0^{2\pi} \dd{\phi} \int_0^\pi \dd{\theta} \int_0^1 \dd{s}\ f_1 (s, \theta, \phi), \end{equation} where we define $\vec{s} := ( \vec{r} - \vec{R}_q )/(4x_1)$ and \begin{equation} f_1 (s, \theta, \phi) := \varphi_i^* \! \left( 4x_1 \vec{s} + \vec{R}_q \right) \varphi_j \! \left( 4x_1 \vec{s} + \vec{R}_q \right) s \sin\theta. \end{equation} Here we use $\theta$ and $\phi$ to refer to the polar and azimuthal angles, respectively, of the vector $\vec{s}$. Note that the singularity in the nuclear Coulomb potential has been absorbed into the spherical polar volume form $s^2 \sin\theta\ \dd{s}\ \dd{\theta}\ \dd{\phi}$. For every triple of natural numbers $\vec{k} = \left( k_s, k_\theta, k_\phi \right)$ such that $0 \leq k_s, k_\theta, k_\phi < N_1$, we define \begin{equation} T_{\vec{k}}^{(1, q, \text{singular})} := \left\{ \vec{s} \left| \begin{array}{c} k_s/N_1 \leq s \leq \left(k_s + 1\right)/N_1 \\ k_\theta \pi/N_1 \leq \theta \leq \left(k_\theta + 1\right) \pi/N_1 \\ 2 k_\phi \pi/N_1 \leq \phi \leq 2 \left(k_\phi + 1\right) \pi/N_1 \\ \end{array} \right. \right\} \end{equation} so that $\bigcup_{\vec{k}} T_{\vec{k}}^{(1, q, \text{singular})} = D_{1, q, \text{singular}}$. We select \begin{equation} \left( s_{\vec{k}}, \theta_{\vec{k}}, \phi_{\vec{k}} \right) = \left( \frac{1}{N_1} \left(k_s + \tfrac{1}{2}\right), \frac{\pi}{N_1} \left(k_\theta + \tfrac{1}{2}\right), \frac{2\pi}{N_1} \left(k_\phi + \tfrac{1}{2}\right) \right). \end{equation} Thus our Riemann sum approximation is \begin{equation} \mathcal{R}_{1, q, \text{singular}} := \sum_{\vec{k}} -16x_1^2 Z_q f_1\!
\left( s_{\vec{k}},\theta_{\vec{k}},\phi_{\vec{k}} \right) \text{vol} \left( T_{\vec{k}}^{(1, q, \text{singular})} \right), \end{equation} where we emphasize that \begin{equation} \label{eq:int1_nonstandard_volume} \text{vol} \left( T_{\vec{k}}^{(1, q, \text{singular})} \right) = \int_{T_{\vec{k}}^{(1, q, \text{singular})}} \dd{s}\, \dd{\theta}\, \dd{\phi} = \frac{2 \pi^2}{N_1^3} \end{equation} is \emph{not} the volume of $T_{\vec{k}}^{(1, q, \text{singular})}$ when considered as a subset of $\R^3$ under the usual Euclidean metric. The reason for this discrepancy is that we absorbed the Jacobian introduced by switching from Cartesian to spherical polar coordinates into the definition of $f_1$. Thus we are integrating $f_1$ with respect to the volume form $\dd{s}\,\dd{\theta}\,\dd{\phi}$, not $s^2 \sin\theta\,\dd{s}\,\dd{\theta}\,\dd{\phi}$. The terms of $\mathcal{R}_{1, q, \text{singular}}$ are bounded by \begin{align} \label{eq:integrandlem22} \abs{ -16x_1^2 Z_q f_1 \!\left( s_{\vec{k}}, \theta_{\vec{k}}, \phi_{\vec{k}} \right) \text{vol} \left( T_{\vec{k}}^{(1, q, \text{singular})} \right) } & = 16x_1^2 Z_q \abs{ f_1 \!\left( s_{\vec{k}}, \theta_{\vec{k}}, \phi_{\vec{k}} \right) } \text{vol} \left( T_{\vec{k}}^{(1, q, \text{singular})} \right) \nonumber \\ & \leq \frac{1}{\mu} \times 32 \pi^2 x_1^2 Z_q \varphi_\text{max}^2, \end{align} where the inequality follows from \eq{spin-orbital_bound}. Again this expression is upper bounded by \eq{bndlem2}, so substituting our value of $x_1$ from \eq{xvallem2} gives the upper bound in \eq{lem2bnd}. We show in \app{integral_discretization/int1_proof/Riemann-sing} that \begin{align} \delta_\text{Riemann}^{(1, q, \text{singular})} & := \abs{ S_{ij}^{(1, q)} \left( D_{1, q, \text{singular}} \right) - \mathcal{R}_{1, q, \text{singular}} } \nonumber \\ & < 1121 \left( 8 \gamma_1 + \sqrt{2} \right) Z_q \varphi_\text{max}^2 x_\text{max}^2 \exp \left( -\frac{\alpha}{2} \frac{x_1}{x_\text{max}} \right). 
\end{align} \subsubsection{Third stage for \lem{int1}} We again finish the proof by showing that the total error is bounded by $\delta$. From \eq{total_error_bound_trunc_Riemann}, we have \begin{align} \delta_\text{total}^{(1, q, \text{non-singular})} & := \abs{ S_{ij}^{(1,q)} \left( \R^3 \right) - \mathcal{R}_{1, q, \text{non-singular}} } \leq \delta_\text{trunc}^{(1, q, \text{non-singular})} + \delta_\text{Riemann}^{(1, q, \text{non-singular})},\\ \delta_\text{total}^{(1, q, \text{singular})} & := \abs{ S_{ij}^{(1,q)} \left( \R^3 \right) - \mathcal{R}_{1, q, \text{singular}} } \leq \delta_\text{trunc}^{(1, q, \text{singular})} + \delta_\text{Riemann}^{(1, q, \text{singular})}. \end{align} We have given a bound that holds simultaneously for both $\delta_\text{trunc}^{(1, q, \text{non-singular})}$ and $\delta_\text{trunc}^{(1, q, \text{singular})}$, and we have given a bound for $\delta_\text{Riemann}^{(1, q, \text{singular})}$ that is larger (as a function of $x_1$) than our bound for $\delta_\text{Riemann}^{(1, q, \text{non-singular})}$. We are therefore able to assert that the error of our Riemann sum approximation, no matter which we choose, is always bounded above by \begin{equation} \label{lem2errbnd} K_1 Z_q \varphi_\text{max}^2 x_\text{max}^2 \exp \left( -\frac{\alpha}{2} \frac{x_1}{x_\text{max}} \right), \end{equation} where \begin{equation} K_1 := \frac{2\pi^2}{\alpha^3}\left( 5\alpha+1 \right) + 1121 \left( 8 \gamma_1 + \sqrt{2} \right). \end{equation} Finally, we note that by substituting our value of $x_1$ from \eq{xvallem2}, this expression is upper bounded by $\delta$. Together with the two upper bounds on the magnitudes of the terms in the Riemann sums, given in Eqs.~\eqref{eq:int1_summand_bound_non-sing} and \eqref{eq:integrandlem22}, this completes our proof of \lem{int1}. The remainder of this subsection gives the details for some of the steps above.
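Before filling in the details, we note as an illustrative aside (not part of the proof) that the spherical-coordinate device of the singular case is easy to test numerically. In the sketch below the factor $e^{-s}$ is a hypothetical stand-in for the orbital product, and the exact value $\int_{\mathcal{B}_1(\vec{0})} e^{-\norm{\vec{r}}}/\norm{\vec{r}} \dd{\vec{r}} = 4\pi(1 - 2/e)$ follows from the radial integral $\int_0^1 s e^{-s} \dd{s} = 1 - 2/e$:

```python
import math
from itertools import product

def singular_ball_riemann(n):
    """Midpoint Riemann sum, on an n x n x n grid in (s, theta, phi),
    of the integral of exp(-|r|)/|r| over the unit ball.  The 1/|r|
    singularity cancels against the Jacobian s^2 sin(theta), leaving
    the bounded integrand exp(-s) * s * sin(theta)."""
    cell = (1.0 / n) * (math.pi / n) * (2.0 * math.pi / n)  # coordinate-cell volume
    total = 0.0
    for ks, kt, _kp in product(range(n), repeat=3):
        s = (ks + 0.5) / n
        theta = math.pi * (kt + 0.5) / n
        total += math.exp(-s) * s * math.sin(theta) * cell  # phi-independent
    return total

exact = 4.0 * math.pi * (1.0 - 2.0 / math.e)  # 4*pi * int_0^1 s*exp(-s) ds
print(abs(singular_ball_riemann(30) - exact))
```

Even though the original integrand diverges at the origin, the sum converges without any special handling, which is precisely the point of the change of variables.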
\subsubsection{Bounding $\delta_\text{trunc}^{(1,q)}$ for \lem{int1}} \label{app:integral_discretization/int1_proof/trunc} Note first that $\mathcal{B}_{x_1}\! \left( \vec{c}_i \right) \subset D_{1, q, \text{non-singular}}$ and $\mathcal{B}_{x_1}\! \left( \vec{c}_i \right) \subset D_{1, q, \text{singular}}$. To see the latter, note that we only consider $D_{1, q, \text{singular}}$ in the case that $\|\vec{R}_q - \vec{c}_i \| < \sqrt{3}x_1 + x_\text{max}$, which implies that \begin{equation} \max_{\vec{r} \in \mathcal{B}_{x_1} \left( \vec{c}_i \right)} \|\vec{R}_q - \vec{r}\| \leq x_1 + \|\vec{R}_q - \vec{c}_i\| \leq (\sqrt{3} + 1)x_1 + x_\text{max} < 4x_1 . \end{equation} As we have \begin{equation} \abs{S_{ij}^{(1,q)}\! \left(\R^3\right) - S_{ij}^{(1,q)} (D)} \leq Z_q \int_{\R^3 \setminus D} \dd{\vec{r}}\ \frac{ \abs{\varphi_i^* (\vec{r}) \varphi_j (\vec{r})} }{ \|\vec{R}_q - \vec{r}\| } \leq Z_q \int_{\R^3 \setminus \mathcal{B}_{x_1} \left( \vec{c}_i \right)} \dd{\vec{r}}\ \frac{ \abs{\varphi_i^* (\vec{r}) \varphi_j (\vec{r})} }{ \|\vec{R}_q - \vec{r}\| } \end{equation} for any $D$ such that $\mathcal{B}_{x_1} \left( \vec{c}_i \right) \subset D$, we may compute \begin{equation} \delta_\text{trunc}^{(1,q)} \leq Z_q \varphi_\text{max}^2 \int_{\R^3 \setminus \mathcal{B}_{x_1} \left( \vec{c}_i \right)} \frac{ \exp \left( - \alpha \frac{\norm{\vec{r} - \vec{c}_i}}{x_\text{max}} \right) }{\|\vec{R}_q - \vec{r}\|} \dd{\vec{r}} = Z_q \varphi_\text{max}^2 \Lambda_{\alpha/x_\text{max}, x_1}\! \left( \vec{R}_q - \vec{c}_i \right), \end{equation} where the function $\Lambda$ is as defined in \eq{exp_Coulomb_potential} and the inequality follows from \eq{spin-orbital_decay}.
By \lem{exp_Coulomb_potential_bound}, in the case $\|\vec{R}_q - \vec{c}_i\|>x_1$ we have \begin{align} \delta_\text{trunc}^{(1,q)} &< \frac{16 \pi^2}{\alpha^3} Z_q \varphi_\text{max}^2 \frac{x_\text{max}^3}{\|\vec{R}_q - \vec{c}_i\|} \exp \left( -\frac{\alpha}{2} \frac{x_1}{x_\text{max}} \right)\nonumber \\ &< \frac{16 \pi^2}{\alpha^3} Z_q \varphi_\text{max}^2 x_\text{max}^2 \exp \left( -\frac{\alpha}{2} \frac{x_1}{x_\text{max}} \right), \end{align} where the second inequality follows from $x_1\ge x_\text{max}$. In the case $\|\vec{R}_q - \vec{c}_i\|\le x_1$ we can use \begin{equation} \delta_\text{trunc}^{(1,q)} < \frac{8 \pi^2}{\alpha^2} Z_q \varphi_\text{max}^2 x_\text{max}^2 \exp \left( -\frac{\alpha}{2} \frac{x_1}{x_\text{max}} \right). \end{equation} We can add the bounds to find, in general, that \begin{equation} \delta_\text{trunc}^{(1,q)} < \frac{8 \pi^2}{\alpha^3} (\alpha+2) Z_q \varphi_\text{max}^2 x_\text{max}^2 \exp \left( -\frac{\alpha}{2} \frac{x_1}{x_\text{max}} \right). \end{equation} \subsubsection{Bounding $\delta_\text{Riemann}^{(1, q, \text{non-singular})}$ for \lem{int1}} \label{app:integral_discretization/int1_proof/Riemann_non-sing} Following \app{integral_discretization/preliminaries/proof_structure}, we note that \begin{equation} \text{vol} \left( D_{1, q, \text{non-singular}} \right) = 8x_1^3 \end{equation} and \begin{equation} \text{diam} \left( T_{\vec{k}}^{(1, q, \text{non-singular})} \right) = 2 \sqrt{3} x_1 / N_1 \end{equation} for each $\vec{k}$. 
We can bound the derivative of the integrand using the product rule and the triangle inequality as follows: \begin{equation} \norm{ \nabla \frac{ \varphi_i^* (\vec{r}) \varphi_j (\vec{r}) }{ \|\vec{R}_q - \vec{r}\| } } \leq \frac{ \norm{\nabla \varphi_i^* (\vec{r})} \abs{\varphi_j (\vec{r})} }{ \|\vec{R}_q - \vec{r}\| } + \frac{ \abs{\varphi_i^* (\vec{r})} \norm{\nabla \varphi_j (\vec{r})} }{ \|\vec{R}_q - \vec{r}\| } + \frac{ \abs{\varphi_i^* (\vec{r}) \varphi_j (\vec{r})} }{ \|\vec{R}_q - \vec{r}\|^2 } \leq \left( 2\gamma_1 + 1 \right) \frac{\varphi_\text{max}^2}{x_\text{max}^2}, \end{equation} where the last inequality follows from \eq{spin-orbital_bound} and \eq{spin-orbital_first_derivative_bound}, as well as the fact that $\|\vec{R}_q - \vec{r}\| \geq x_\text{max}$ for any $\vec{r} \in D_{1, q, \text{non-singular}}$. From \eq{Riemann_error} and \eq{partition_size_int1}, and noting $1/\lceil z \rceil \leq 1/z$, we have \begin{equation} \delta_\text{Riemann}^{(1, q, \text{non-singular})} \leq 8 \sqrt{3} Z_q \left( 2 \gamma_1 + 1 \right) \frac{\varphi_\text{max}^2}{x_\text{max}^2} \frac{x_1^4}{N_1} \leq 8 \sqrt{3} \left( 2 \gamma_1 + 1 \right) Z_q \varphi_\text{max}^2 x_\text{max}^2 \exp \left(-\frac{\alpha}{2} \frac{x_1}{x_\text{max}}\right). \end{equation} \subsubsection{Bounding $\delta_\text{Riemann}^{(1, q, \text{singular})}$ for \lem{int1}} \label{app:integral_discretization/int1_proof/Riemann-sing} Recalling that we are using a non-standard metric to evaluate the volumes and diameters of sets, we find \begin{equation} \text{vol} \left( D_{1, q, \text{singular}} \right) = 2\pi^2 \end{equation} and \begin{equation} \text{diam} \left( T^{(1, q, \text{singular})}_{\vec{k}} \right) = \frac{1}{N_1}\sqrt{5\pi^2 + 1}. \end{equation} By \eq{Riemann_error}, it remains to find a bound on the derivative of $f_1$. Throughout this subsection, we write $f^\prime_\text{max}$ for this bound. To bound this derivative, we consider the gradient in three different ways.
First there is $\nabla$, which is the gradient with respect to the unscaled position coordinates. Second there is $\nabla_s$, which is the gradient with respect to the spherical polar coordinates, but just taking the derivatives with respect to each coordinate. That is, \begin{equation} \nabla_s := \left( \frac{\partial}{\partial s} , \frac{\partial}{\partial \theta}, \frac{\partial}{\partial \phi} \right). \end{equation} We use this because the discretized integral treats the coordinates as if they were Euclidean. Third, there is the usual gradient in spherical polar coordinates, \begin{equation} \nabla'_s := \left( \frac{\partial}{\partial s} , \frac 1s \frac{\partial}{\partial \theta}, \frac 1{s\sin\theta} \frac{\partial}{\partial \phi} \right). \end{equation} Because we consider $s\in[0,1]$, the components of $\nabla_s$ are upper bounded in magnitude by the corresponding components of $\nabla'_s$. Therefore \begin{equation} \norm{\nabla_s \left[\varphi_j \! \left( 4x_1 \vec{s} + \vec{R}_q \right)\right]} \le \norm{\nabla'_s \left[\varphi_j \! \left( 4x_1 \vec{s} + \vec{R}_q \right)\right]}. \end{equation} The restriction on the magnitude of the gradient in \eq{spin-orbital_first_derivative_bound} applies to the usual gradient in spherical polar coordinates. This means that \begin{equation} \norm{\nabla'_s \left[\varphi_j \! \left( 4x_1 \vec{s} + \vec{R}_q \right)\right]} = 4x_1 \norm{\nabla \left[\varphi_j \! \left( 4x_1 \vec{s} + \vec{R}_q \right)\right]} \le 4x_1 \gamma_1 \frac{\varphi_{\max}}{x_{\max}}. \end{equation} Using these results, we have \begin{align} f^\prime_\text{max} &= \norm{\nabla_s \left[\varphi_i^* \! \left( 4x_1 \vec{s} + \vec{R}_q \right) \varphi_j \! \left( 4x_1 \vec{s} + \vec{R}_q \right) s \sin\theta\right]}\nonumber \\ &\le \left|\varphi_j \! \left( 4x_1 \vec{s} + \vec{R}_q \right) s \sin\theta\right| \norm{\nabla_s \left[\varphi_i^* \! \left( 4x_1 \vec{s} + \vec{R}_q \right)\right] }+ \left|\varphi_i^* \!
\left( 4x_1 \vec{s} + \vec{R}_q \right) s \sin\theta\right| \norm{ \nabla_s \left[\varphi_j \! \left( 4x_1 \vec{s} + \vec{R}_q \right)\right] }\nonumber \\ & \quad +\left| \varphi_i^* \! \left( 4x_1 \vec{s} + \vec{R}_q \right) \varphi_j \! \left( 4x_1 \vec{s} + \vec{R}_q \right) \right| \norm{ \nabla_s \left[s \sin\theta\right]}\nonumber \\ &\le 4x_1\varphi_{\max}\norm{\nabla \left[\varphi_i^* \! \left( 4x_1 \vec{s} + \vec{R}_q \right)\right]} + 4x_1\varphi_{\max} \norm{ \nabla \left[\varphi_j \! \left( 4x_1 \vec{s} + \vec{R}_q \right)\right] } + \sqrt{2} \varphi_{\max}^2 \nonumber \\ &\le 8x_1\gamma_1 \frac{\varphi_{\max}^2}{x_{\max}} + \sqrt{2} \varphi_{\max}^2. \end{align} Thus we have the bound \begin{equation} f^\prime_\text{max} \leq \left( 8 \gamma_1 \frac{x_1}{x_\text{max}} + \sqrt{2} \right) \varphi_\text{max}^2. \end{equation} We now can give a bound for our approximation to $S^{(1,q)}_{ij} \!\left( D_{1, q, \text{singular}} \right)$. Using the above definitions of $f^\prime_\text{max}$, $\text{vol} \left( D_{1, q, \text{singular}} \right)$ and $\text{diam} \left( T^{(1, q, \text{singular})}_{\vec{k}} \right)$, we have \begin{equation} \delta_\text{Riemann}^{(1, q, \text{singular})} \leq \frac{1}{2} \cdot 16x_1^2 Z_q f^\prime_\text{max} \text{diam} \left( T^{(1, q, \text{singular})}_{\vec{k}} \right) \text{vol} \left( D_{1, q, \text{singular}} \right) \leq 16 \pi^2 \sqrt{5\pi^2 + 1} \left( 8 \gamma_1 \frac{x_1}{x_\text{max}} + \sqrt{2} \right) \frac{Z_q x_1^2 \varphi_\text{max}^2}{N_1}.
\end{equation} Using \eq{partition_size_int1} and noting $1/\lceil z \rceil \leq 1/z$, we have \begin{equation} \begin{split} \delta_\text{Riemann}^{(1, q, \text{singular})} & < 16 \pi^2 \sqrt{5\pi^2 + 1} \left( 8 \gamma_1 + \sqrt{2} \frac{x_\text{max}}{x_1} \right) Z_q \varphi_\text{max}^2 x_\text{max}^2 \frac{x_\text{max}}{x_1} \exp \left( -\frac{\alpha}{2} \frac{x_1}{x_\text{max}} \right) \\ & \leq 1121 \left( 8 \gamma_1 + \sqrt{2} \right) Z_q \varphi_\text{max}^2 x_\text{max}^2 \exp \left( -\frac{\alpha}{2} \frac{x_1}{x_\text{max}} \right), \end{split} \end{equation} where we have used $x_1 \geq x_\text{max}$. \subsection{Proof of \lem{int2}} \label{app:integral_discretization/int2_proof} As in \app{integral_discretization/int1_proof}, we separate our proof into two cases, depending on whether the singularity of the integrand is relevant or not. If the orbitals $i$ and $j$ are distant, then the singularity is unimportant and we can use rectangular coordinates. If these orbitals are nearby, then we use spherical polar coordinates to eliminate the singularity from the integrand. We do not consider the distance between the orbitals $k$ and $\ell$ in order to simplify the analysis. \subsubsection{First stage for \lem{int2}} Again the first stage is to truncate the domain of integration. We take \begin{equation}\label{xlem3} x_2 := \frac{x_\text{max}}{\alpha} \log \left( \frac{K_2 \varphi_\text{max}^4 x_\text{max}^5}{\delta} \right) \end{equation} to be the size of the truncation region. The condition in \eq{sensible2} ensures that $x_2 \geq x_\text{max}$. We regard the orbitals as distant if $\norm{\vec{c}_i - \vec{c}_j} \geq 2\sqrt{3}x_2 + x_\text{max}$. Then we take the truncation region \begin{equation} D_{2, \text{non-singular}} := \mathcal{C}_{x_2} \!\left( \vec{c}_i \right) \times \mathcal{C}_{x_2} \! \left( \vec{c}_j \right).
\end{equation} Otherwise, if the orbitals are nearby, we take the truncation region \begin{equation} D_{2, \text{singular}} := \left\{ \vec{r}_1 \oplus \vec{r}_2 \left| \vec{r}_1 \in \mathcal{C}_{x_2} \!\left( \vec{c}_i \right), \vec{r}_1 - \vec{r}_2 \in \mathcal{B}_{\zeta x_2} ( \vec{0} ) \right. \right\}, \end{equation} where $\zeta:=2\sqrt{3}+3$. The error in the first case is \begin{equation} \delta_\text{trunc}^{(2, \text{non-singular})} := \abs{ S_{ijk\ell}^{(2)}\! \left( \R^6 \right) - S_{ijk\ell}^{(2)}\! \left( D_{2, \text{non-singular}} \right) } \end{equation} and the error in the second case is \begin{equation} \delta_\text{trunc}^{(2, \text{singular})} := \abs{ S_{ijk\ell}^{(2)} \!\left( \R^6 \right) - S_{ijk\ell}^{(2)} \!\left( D_{2, \text{singular}} \right) }. \end{equation} The maximum error for either case is denoted \begin{equation} \delta_\text{trunc}^{(2)} := \max \left\{ \delta_\text{trunc}^{(2, \text{non-singular})}, \delta_\text{trunc}^{(2, \text{singular})} \right\} . \end{equation} We upper bound this error in \app{integral_discretization/int2_proof/trunc} as \begin{equation} \delta_\text{trunc}^{(2)} < \frac{128\pi}{\alpha^6} (\alpha+2) \varphi_\text{max}^4 x_\text{max}^5 \exp\left( -\alpha \frac{x_2}{x_\text{max}} \right). \end{equation} \subsubsection{Second stage for \lem{int2} with Cartesian coordinates} The second stage for the proof of \lem{int2} is to discretize the integrals into Riemann sums. In this subsection we consider the case that orbitals $i$ and $j$ are distant, so we wish to approximate the truncated integral $S_{ijk\ell}^{(2)} \!\left( D_{2, \text{non-singular}} \right)$. In the next subsection we consider discretization in the case where orbitals $i$ and $j$ are nearby, and we wish to approximate $S_{ijk\ell}^{(2)} \!\left( D_{2, \text{singular}} \right)$.
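Both discretizations are again controlled by the generic bound \eq{Riemann_error}. Assuming it takes the form $\delta \leq \tfrac{1}{2} f^\prime_\text{max} \, \text{diam}(T) \, \text{vol}(D)$, consistent with how it is applied above, a one-dimensional sanity check (illustrative only, with a toy integrand) is:

```python
import math

def midpoint_sum_1d(f, a, b, n):
    """Midpoint Riemann sum of f over [a, b] with n equal cells."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

a, b, n = 0.0, math.pi, 50
h = (b - a) / n                            # diam(T) for a 1-D cell
f_prime_max = 1.0                          # max |cos x| bounds |d/dx sin x|
bound = 0.5 * f_prime_max * h * (b - a)    # (1/2) * f'_max * diam * vol
error = abs(midpoint_sum_1d(math.sin, a, b, n) - 2.0)  # exact integral is 2
print(error, bound)
```

The observed error sits far below the Lipschitz-type bound, as expected: the bound only uses first derivatives, while the midpoint rule gains an extra order from symmetry.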
Each sum contains $\mu = N_2^6$ terms, where \begin{equation} \label{eq:partition_size_int2} N_2 := \left\lceil \left(\frac{x_2}{x_\text{max}}\right)^7 \exp \left( \alpha \frac{x_2}{x_\text{max}} \right) \right\rceil. \end{equation} The same value of $N_2$ will be used for spherical polar coordinates. Using the value of $x_2$ from Eq.~\eqref{xlem3} gives \begin{equation} N_2 = \left\lceil \frac{ K_2 \varphi_\text{max}^4 x_\text{max}^5 }{ \delta } \left[ \frac{1}{\alpha} \log \left( \frac{ K_2 \varphi_\text{max}^4 x_\text{max}^5 }{ \delta } \right) \right]^7 \right\rceil . \end{equation} Since $\mu = N_2^6$ is the number of terms in either Riemann sum, we obtain the bound on $\mu$ in \eq{lem3mu} of \lem{int2}. We approximate $S_{ijk\ell}^{(2)} \!\left(D_{2, \text{non-singular}}\right)$ with the sum \begin{equation} \mathcal{R}_{2, \text{non-singular}} := \sum_{\vec{k}_1, \vec{k}_2} \frac{ \varphi_i^* ( \vec{r}_{\vec{k}_1} )\, \varphi_j^* ( \vec{r}_{\vec{k}_2} )\, \varphi_k ( \vec{r}_{\vec{k}_2} )\, \varphi_\ell ( \vec{r}_{\vec{k}_1} ) }{ \|\vec{r}_{\vec{k}_1} - \vec{r}_{\vec{k}_2}\| } \text{vol} \left( T_{\vec{k}_1, \vec{k}_2}^{(2, \text{non-singular})} \right), \end{equation} where, for every triple of integers $\vec{k} = \left( k_1, k_2 , k_3 \right)$ such that $0 \leq k_1, k_2, k_3 < N_2$, we define \begin{equation} \vec{r}_{\vec{k}} = \frac{x_2}{N_2} \left[ 2\vec{k} - \left(N_2-1, N_2-1, N_2-1\right) \right] \end{equation} and \begin{equation} T_{\vec{k}_1, \vec{k}_2}^{(2, \text{non-singular})} := \mathcal{C}_{x_2/N_2} ( \vec{r}_{\vec{k}_1} ) \times \mathcal{C}_{x_2/N_2} ( \vec{r}_{\vec{k}_2} ). \end{equation} Thus we have partitioned $D_{2, \text{non-singular}}$ into $\mu$ equal-sized regions that overlap on sets of measure zero.
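As an illustrative aside (not part of the proof), the six-dimensional product grid can be exercised on a toy problem in which constant unit ``orbitals'' occupy two well-separated cubes, so the integrand reduces to $1/\norm{\vec{r}_1 - \vec{r}_2}$:

```python
import math
from itertools import product

def two_electron_riemann(c1, c2, x_trunc, n):
    """Midpoint Riemann sum of 1/|r1 - r2| over C(c1) x C(c2), where C(c)
    is the cube of side 2*x_trunc centred at c and each factor is cut into
    n^3 sub-cubes, giving mu = n^6 terms as in the text."""
    h = x_trunc / n
    vol = (2.0 * h) ** 6                        # volume of one 6-D cell
    offsets = [h * (2 * k - (n - 1)) for k in range(n)]
    total = 0.0
    for o1 in product(offsets, repeat=3):
        r1 = tuple(c + o for c, o in zip(c1, o1))
        for o2 in product(offsets, repeat=3):
            r2 = tuple(c + o for c, o in zip(c2, o2))
            total += vol / math.dist(r1, r2)
    return total

# Cubes of side 2 centred 5 apart: the integral is close to
# vol(C1)*vol(C2)/5 = 64/5, since the leading multipole correction
# vanishes by cube symmetry.
approx = two_electron_riemann((0.0, 0.0, 0.0), (5.0, 0.0, 0.0), 1.0, 6)
print(approx)
```

Because the cubes are disjoint, no sample point approaches the singularity, which is exactly the situation the non-singular branch of the proof exploits.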
Each term of $\mathcal{R}_{2, \text{non-singular}}$ has absolute value no greater than \begin{equation} \label{eq:lem3val1} \frac{ \abs{ \varphi_i^* ( \vec{r}_{\vec{k}_1} )\, \varphi_j^* ( \vec{r}_{\vec{k}_2} )\, \varphi_k ( \vec{r}_{\vec{k}_2} )\, \varphi_\ell ( \vec{r}_{\vec{k}_1} ) } }{ \|\vec{r}_{\vec{k}_1} - \vec{r}_{\vec{k}_2}\| } \text{vol} \left( T_{\vec{k}_1, \vec{k}_2}^{(2, \text{non-singular})} \right) \leq \frac{\varphi_\text{max}^4}{x_\text{max}} \left( \frac{2x_2}{N_2} \right)^6 = \frac{1}{\mu} \times 64 \frac{\varphi_\text{max}^4}{x_\text{max}} x_2^6, \end{equation} where the inequality follows from \eq{spin-orbital_bound} and the fact that the distance between $\mathcal{C}_{x_2} \!\left( \vec{c}_i \right)$ and $\mathcal{C}_{x_2} \!\left( \vec{c}_j \right)$ is no smaller than $x_\text{max}$ if $\norm{\vec{c}_i - \vec{c}_j} \geq 2\sqrt{3}x_2 + x_\text{max}$. This expression is upper bounded by \begin{equation}\label{bndlem3} \frac{1}{\mu} \times 672 \pi^2 \frac{\varphi_\text{max}^4}{x_\text{max}} x_2^6. \end{equation} Substituting our value of $x_2$ from Eq.~\eqref{xlem3} shows that no term has absolute value greater than (corresponding to \eq{lem3bnd} in \lem{int2}) \begin{equation} \frac{1}{\mu} \times \frac{672\pi^2}{\alpha^6} \left[ \log \left( \frac{K_2 \varphi_\text{max}^4 x_\text{max}^5}{\delta} \right) \right]^6 \varphi_\text{max}^4 x_\text{max}^5. \end{equation} We show in \app{integral_discretization/int2_proof/Riemann_non-sing} that the error may be bounded as \begin{align} \delta_\text{Riemann}^{(2, \text{non-singular})} & := \abs{ S_{ijk\ell}^{(2)} \!\left( D_{2, \text{non-singular}} \right) - \mathcal{R}_{2, \text{non-singular}} } \nonumber \\ & \leq 256 \sqrt{3} \left( 4\gamma_1 + \sqrt{2} \right) \varphi_\text{max}^4 x_\text{max}^5 \exp \left( -\alpha \frac{x_2}{x_\text{max}} \right).
\end{align} \subsubsection{Second stage for \lem{int2} with spherical polar coordinates} In this subsection we discretize the integral $S_{ijk\ell}^{(2)} \!\left( D_{2, \text{singular}} \right)$ for the case of nearby orbitals. We introduce the following definition for convenience in what follows: \begin{equation} \eta_{\ell \ell^\prime} (\vec{r}) := \varphi_\ell^* (\vec{r}) \, \varphi_{\ell^\prime} (\vec{r}). \end{equation} We define $\vec{s} := \left( \vec{r}_1 - \vec{c}_i \right)/x_2$ and $\vec{t} := \left( \vec{r}_1 - \vec{r}_2 \right)/(\zeta x_2)$. We write $\vec{s} = \left( s_1, s_2, s_3 \right)$ and $\theta$ and $\phi$ for the polar and azimuthal angles of $\vec{t}$. Next we define \begin{equation} f_2 \left( s_1, s_2, s_3, t, \theta, \phi \right) := \eta_{i\ell} \!\left( x_2\vec{s} + \vec{c}_i \right) \eta_{jk} \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) t \sin\theta = \frac 1{\zeta x_2} \eta_{i\ell}\! \left( \vec{r}_1 \right) \eta_{jk} \!\left( \vec{r}_2 \right) \|\vec{r}_1 - \vec{r}_2\| \sin\theta . \end{equation} Then we can write \begin{equation} S^{(2)}_{ijk\ell} \!\left( D_{2, \text{singular}} \right) = \zeta^2 x_2^5 \int_{-1}^1\dd{s_1}\ \int_{-1}^1\dd{s_2}\ \int_{-1}^1\dd{s_3}\ \int_0^1 \dd{t}\ \int_0^\pi \dd{\theta}\ \int_0^{2\pi} \dd{\phi}\ f_2 \left( s_1, s_2, s_3, t, \theta, \phi \right). \end{equation} Let $\vec{k}_{\vec{s}} = \left( k_1, k_2, k_3 \right)$, where $0 \leq k_1, k_2, k_3 < N_2$, and let $\vec{k}_{\vec{t}} = \left( k_t, k_\theta, k_\phi \right)$, where $0 \leq k_t, k_\theta, k_\phi < N_2$.
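Two features of this change of variables can be checked numerically (a standalone sketch with stand-in values; `R` and the grid resolution below are arbitrary): the coordinate box $[-1,1]^3 \times [0,1] \times [0,\pi] \times [0,2\pi]$ has the non-standard volume $16\pi^2$, and absorbing the spherical Jacobian into the integrand tames the Coulomb singularity, just as the factor $t$ in $f_2$ cancels $1/\norm{\vec{r}_1 - \vec{r}_2}$. The model integral $\int_{\mathcal{B}_R} \dd{\vec{r}}/r = 2\pi R^2$ illustrates the latter:

```python
import math

# Volume of the coordinate box [-1,1]^3 x [0,1] x [0,pi] x [0,2*pi].
box_volume = 2**3 * 1 * math.pi * (2 * math.pi)   # = 16*pi^2

# Model of the singularity cancellation: int_{B_R} dr / ||r||.  In spherical
# coordinates the Jacobian r^2 sin(theta) turns the integrand 1/r into the
# bounded function r*sin(theta), so a plain midpoint rule converges without
# any special treatment of the singularity at r = 0.
R, n = 1.3, 200   # stand-in radius and grid resolution
total = 0.0
for kr in range(n):
    r = R * (kr + 0.5) / n
    for kt in range(n):
        theta = math.pi * (kt + 0.5) / n
        # the phi integral contributes a constant factor 2*pi
        total += r * math.sin(theta) * (R / n) * (math.pi / n) * (2 * math.pi)

exact = 2 * math.pi * R**2   # closed form of the model integral
```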
Define $s_{k_\ell} = \left( 2k_\ell +1-N_2 \right)/N_2$ for each $\ell = 1, 2, 3$ so that $\vec{s}_{\vec{k}_{\vec{s}}} = \left( s_{k_1}, s_{k_2}, s_{k_3} \right)$ and define \begin{equation} \left( t_{\vec{k}_{\vec{t}}}, \theta_{\vec{k}_{\vec{t}}}, \phi_{\vec{k}_{\vec{t}}} \right) = \left( \frac{1}{N_2} \left(k_t + \tfrac{1}{2}\right), \frac{\pi}{N_2} \left(k_\theta + \tfrac{1}{2}\right), \frac{2\pi}{N_2} \left(k_\phi + \tfrac{1}{2}\right) \right) \end{equation} for each $\vec{k}_{\vec{t}}$. We then define \begin{equation} T_{\vec{k}_{\vec{s}} \oplus \vec{k}_{\vec{t}}}^{(2, \text{singular})} := \left\{ \left( s_1, s_2, s_3, t, \theta, \phi \right) \left| \begin{array}{c} s_{k_1} - \frac{1}{N_2} \leq s_1 \leq s_{k_1} + \frac{1}{N_2} \\ s_{k_2} - \frac{1}{N_2} \leq s_2 \leq s_{k_2} + \frac{1}{N_2} \\ s_{k_3} - \frac{1}{N_2} \leq s_3 \leq s_{k_3} + \frac{1}{N_2} \\ t_{k_t} - \frac{1}{2 N_2} \leq t \leq t_{k_t} + \frac{1}{2 N_2} \\ \theta_{k_\theta} - \frac{\pi}{2 N_2} \leq \theta \leq \theta_{k_\theta} + \frac{\pi}{2 N_2} \\ \phi_{k_\phi} - \frac{\pi}{N_2} \leq \phi \leq \phi_{k_\phi} + \frac{\pi}{N_2} \\ \end{array} \right. \right\}. \end{equation} Now we define our Riemann sum: \begin{equation} \mathcal{R}_{2,\text{singular}} := \sum_{\vec{k}_{\vec{s}}, \vec{k}_{\vec{t}}} \zeta^2 x_2^5 f_2 \left( s_{k_1}, s_{k_2}, s_{k_3}, t_{k_t}, \theta_{k_\theta}, \phi_{k_\phi} \right) \text{vol} \left( T_{\vec{k}_{\vec{s}} \oplus \vec{k}_{\vec{t}}} ^{(2, \text{singular})} \right), \end{equation} where \begin{equation} \text{vol} \left( T_{\vec{k}_{\vec{s}} \oplus \vec{k}_{\vec{t}}} ^{(2, \text{singular})} \right) = \int_{T_{\vec{k}_{\vec{s}} \oplus \vec{k}_{\vec{t}}} ^{(2, \text{singular})}} \dd{s_1}\ \dd{s_2}\ \dd{s_3}\ \dd{t}\ \dd{\theta}\ \dd{\phi} = \frac{ \text{vol} \left( D_{2, \text{singular}} \right) }{ N_2^6 } = \frac{1}{\mu} \times 16\pi^2. 
\end{equation} Here \begin{equation} \text{vol} \left( D_{2, \text{singular}} \right) = \int_{-1}^1\dd{s_1}\ \int_{-1}^1\dd{s_2}\ \int_{-1}^1\dd{s_3}\ \int_0^1 \dd{t}\ \int_0^\pi \dd{\theta}\ \int_0^{2\pi} \dd{\phi} = 16\pi^2 \end{equation} is \emph{not} the volume of $D_{2, \text{singular}}$ considered under the usual Euclidean metric, as in \eq{int1_nonstandard_volume}. We need to use this non-standard volume because the Jacobian introduced by changing from Cartesian to spherical polar coordinates was absorbed into the definition of our integrand $f_2$. Therefore, each term in the Riemann sum has absolute value no greater than \begin{equation} \label{eq:lem3val2} \abs{ \zeta^2 x_2^5 f_2 \left( s_{k_1}, s_{k_2}, s_{k_3}, t_{k_t}, \theta_{k_\theta}, \phi_{k_\phi} \right) \text{vol} \left( T_{\vec{k}_{\vec{s}} \oplus \vec{k}_{\vec{t}}} ^{(2, \text{singular})} \right) } \leq \frac{1}{\mu} \times 672 \pi^2 x_2^5 \varphi_\text{max}^4, \end{equation} where the inequality follows from \eq{spin-orbital_bound} applied to the definition of $f_2$ in terms of $\eta_{i\ell}$ and $\eta_{jk}$. Again this expression is upper bounded by Eq.~\eqref{bndlem3} and substituting our value of $x_2$ yields the upper bound in \eq{lem3bnd}. We show in \app{integral_discretization/int2_proof/Riemann_sing} that \begin{equation} \delta_\text{Riemann}^{(2, \text{singular})} := \abs{ S_{ijk\ell}^{(2)} \left( D_{2, \text{singular}} \right) - \mathcal{R}_{2,\text{singular}} } < 2161 \pi^2 \left( 20 \gamma_1 + \sqrt{2} \right) \varphi_\text{max}^4 x_\text{max}^5 \exp\left( -\alpha \frac{x_2}{x_\text{max}} \right). \end{equation} \subsubsection{Third stage for \lem{int2}} Lastly we show that the error is properly bounded. 
From \eq{total_error_bound_trunc_Riemann}, we have \begin{equation} \delta_\text{total}^{(2, \text{non-singular})} := \abs{ S_{ijk\ell}^{(2)} \left( \R^6 \right) - \mathcal{R}_{2, \text{non-singular}} } \leq \delta_\text{trunc}^{(2, \text{non-singular})} + \delta_\text{Riemann}^{(2, \text{non-singular})} \end{equation} and \begin{equation} \delta_\text{total}^{(2, \text{singular})} := \abs{ S_{ijk\ell}^{(2)} \left( \R^6 \right) - \mathcal{R}_{2, \text{singular}} } \leq \delta_\text{trunc}^{(2, \text{singular})} + \delta_\text{Riemann}^{(2, \text{singular})}. \end{equation} We have given a bound that holds simultaneously for both $\delta_\text{trunc}^{(2, \text{non-singular})}$ and $\delta_\text{trunc}^{(2, \text{singular})}$, and we have given a bound for $\delta_\text{Riemann}^{(2, \text{singular})}$ that is larger (as a function of $x_2$) than our bound for $\delta_\text{Riemann}^{(2, \text{non-singular})}$. We are therefore able to assert that the error of our Riemann sum approximation, no matter which we choose, is always bounded above by \begin{equation} K_2 \varphi_\text{max}^4 x_\text{max}^5 \exp \left( -\alpha \frac{x_2}{x_\text{max}} \right), \end{equation} where \begin{equation} K_2 := \frac{128\pi^2}{\alpha^6}(\alpha+2) + 2161 \pi^2 \left( 20 \gamma_1 + \sqrt{2} \right). \end{equation} We have also found that the terms in the Riemann sum are upper bounded by Eqs.~\eqref{eq:lem3val1} and \eqref{eq:lem3val2} in the two cases. A bound that holds in both cases is \begin{equation} \frac{1}{\mu} \times 672 \pi^2 \frac{\varphi_\text{max}^4}{x_\text{max}} x_2^6. \end{equation} Substituting our value of $x_2$ from Eq.~\eqref{xlem3} into the total error bound then shows that the error is upper bounded by $\delta$. This last step completes our proof of \lem{int2}. The remainder of this subsection gives the details for some of the steps above.
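The constants collected in this stage can be sanity-checked numerically (with $\zeta = 2\sqrt{3}+3$ from the truncation argument below): $16\zeta^2 \leq 672$ underlies the shared per-term bound, and the singular-case Riemann coefficient dominates the non-singular one for every $\gamma_1 \geq 0$, so a single $K_2$ suffices. A small sketch:

```python
import math

zeta = 2 * math.sqrt(3) + 3

# Per-term bound: |zeta^2 x2^5 f_2| * vol(T) <= (16*pi^2*zeta^2/mu) x2^5 phi^4,
# and 16*zeta^2 = 336 + 192*sqrt(3) ~ 668.6 sits below the rounder 672.
sixteen_zeta_sq = 16 * zeta**2

def riemann_coeff_singular(gamma1):
    # Coefficient of phi_max^4 x_max^5 exp(-alpha x2/x_max), singular case.
    return 2161 * math.pi**2 * (20 * gamma1 + math.sqrt(2))

def riemann_coeff_nonsingular(gamma1):
    # Corresponding coefficient stated for the non-singular case.
    return 256 * math.sqrt(3) * (4 * gamma1 + math.sqrt(2))

dominates = all(
    riemann_coeff_singular(g) > riemann_coeff_nonsingular(g)
    for g in (0.0, 0.5, 1.0, 10.0, 1000.0)
)
```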
\subsubsection{Bounding $\delta_\text{trunc}^{(2)}$ for \lem{int2}} \label{app:integral_discretization/int2_proof/trunc} Note that $\mathcal{B}_{x_2} \!\left( \vec{c}_i \right) \times \mathcal{B}_{x_2} \!\left( \vec{c}_j \right)$ is a subset of both $D_{2, \text{non-singular}}$ and $D_{2, \text{singular}}$. The former is immediately apparent. To see the latter, observe that $\norm{\vec{c}_i - \vec{c}_j} < 2\sqrt{3}x_2 + x_\text{max}$ implies that the maximum possible value of $\norm{\vec{r}_1 - \vec{r}_2}$ for any $\vec{r}_1 \in \mathcal{B}_{x_2} \!\left( \vec{c}_i \right)$ and $\vec{r}_2 \in \mathcal{B}_{x_2} \!\left( \vec{c}_j \right)$ is at most $2\sqrt{3}x_2 + 2x_2+ x_\text{max} \le (2\sqrt{3}+3)x_2=\zeta x_2$, where the last inequality uses $x_\text{max} \leq x_2$. Therefore, \begin{align} \delta_\text{trunc}^{(2)} & \leq \varphi_\text{max}^4 \int_{\R^3 \setminus \mathcal{B}_{x_2} \left( \vec{c}_i \right)} \dd{\vec{r}_1} \int_{\R^3 \setminus \mathcal{B}_{x_2} \left( \vec{c}_j \right)} \dd{\vec{r}_2} \frac{ e^{ -\alpha \norm{\vec{r}_1 - \vec{c}_i}/x_\text{max} } e^{ -\alpha \norm{\vec{r}_2 - \vec{c}_j}/x_\text{max} } }{ \norm{\vec{r}_1 - \vec{r}_2} } \nonumber \\ & = \varphi_\text{max}^4 \int_{\R^3 \setminus \mathcal{B}_{x_2} ( \vec{0} )} \dd{\vec{s}}\ e^{ -\alpha s/x_\text{max} } \Lambda_{\alpha/x_\text{max}, x_2} \left( \vec{s} + \vec{c}_i - \vec{c}_j \right), \end{align} where we have used \eq{spin-orbital_decay} and, with the change of variables $\vec{s} = \vec{r}_1 - \vec{c}_i$, the definition of $\Lambda$ from \eq{exp_Coulomb_potential}.
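The geometric claim above, and the elementary inequality $(z^2+2z+2)e^{-z} < 4e^{-z/2}$ invoked just below, are both easy to check numerically (a standalone sketch; `x2` and `x_max` are arbitrary stand-ins with $x_\text{max} \leq x_2$):

```python
import math
import random

random.seed(1)

x2, x_max = 2.0, 1.0
zeta = 2 * math.sqrt(3) + 3
# Centers at the extreme separation allowed in the singular (nearby) case.
c_i = (0.0, 0.0, 0.0)
c_j = (2 * math.sqrt(3) * x2 + x_max, 0.0, 0.0)

def random_point_in_ball(center, radius):
    # Rejection sampling from the bounding cube.
    while True:
        p = tuple(random.uniform(-radius, radius) for _ in range(3))
        if math.sqrt(sum(x * x for x in p)) <= radius:
            return tuple(c + xi for c, xi in zip(center, p))

# For r1 in B_{x2}(c_i) and r2 in B_{x2}(c_j):
# ||r1 - r2|| <= ||c_i - c_j|| + 2*x2 <= zeta*x2 (since x_max <= x2),
# so the product of balls sits inside D_{2,singular}.
max_dist = max(
    math.dist(random_point_in_ball(c_i, x2), random_point_in_ball(c_j, x2))
    for _ in range(5000)
)

# (z^2 + 2z + 2) e^{-z} < 4 e^{-z/2} for z > 0, checked on a grid.
inequality_holds = all(
    (z * z + 2 * z + 2) * math.exp(-z) < 4 * math.exp(-z / 2)
    for z in (i * 0.01 for i in range(1, 3001))
)
```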
By \lem{exp_Coulomb_potential_bound}, for $\|\vec{s} + \vec{c}_i - \vec{c}_j\|\le x_2$ we get \begin{equation} \Lambda_{\alpha/x_\text{max}, x_2} \left( \vec{s} + \vec{c}_i - \vec{c}_j \right) \leq \frac{8\pi x_\text{max}^2}{\alpha^2} e^{ -\alpha x_2/(2x_\text{max}) }, \end{equation} and for $\norm{\vec{s} + \vec{c}_i - \vec{c}_j} > x_2$ we get \begin{align} \Lambda_{\alpha/x_\text{max}, x_2} \left( \vec{s} + \vec{c}_i - \vec{c}_j \right) & \le \frac{16\pi x_\text{max}^3}{\alpha^3} \frac{e^{ -\alpha x_2/(2x_\text{max}) }}{\norm{\vec{s} + \vec{c}_i - \vec{c}_j}} \nonumber \\ &<\frac{16\pi x_\text{max}^3}{x_2\alpha^3} e^{ -\alpha x_2/(2x_\text{max}) }\nonumber \\ &\le \frac{16\pi x_\text{max}^2}{\alpha^3} e^{ -\alpha x_2/(2x_\text{max}) }. \end{align} In either case we then get \begin{equation} \delta_\text{trunc}^{(2)} \leq \frac{8\pi}{\alpha^3} \varphi_\text{max}^4 x_\text{max}^2 (\alpha+2) \exp\left( -\frac{\alpha}{2} \frac{x_2}{x_\text{max}} \right) \int_{\R^3 \setminus \mathcal{B}_{x_2} ( \vec{0} )} \dd{\vec{s}}\ e^{ -\alpha s/x_\text{max} }. \end{equation} We use the fact that $(z^2 + 2z + 2) e^{-z} < 4 e^{-z/2}$ for any $z>0$ to find \begin{equation} \int_{\R^3 \setminus \mathcal{B}_{x_2}\! \left( \vec{0} \right)} \dd{\vec{s}}\ e^{-\mu s} = 4\pi \int_{x_2}^\infty \dd{s}\ e^{-\mu s} s^2 = 4\pi \left( \frac{x_2^2}{\mu} + \frac{2x_2}{\mu^2} + \frac{2}{\mu^3} \right) e^{-\mu x_2} < \frac{16\pi}{\mu^3} e^{-\mu x_2/2}, \end{equation} which gives us \begin{equation} \delta_\text{trunc}^{(2)} < \frac{128\pi^2}{\alpha^6}(\alpha+2) \varphi_\text{max}^4 x_\text{max}^5 \exp\left( -\alpha \frac{x_2}{x_\text{max}} \right).
\end{equation} \subsubsection{Bounding $\delta_\text{Riemann}^{(2, \text{non-singular})}$ for \lem{int2}} \label{app:integral_discretization/int2_proof/Riemann_non-sing} Following \app{integral_discretization/preliminaries/proof_structure}, we note that \begin{equation} \text{vol} \left( D_{2, \text{non-singular}} \right) = 64x_2^6 \end{equation} and \begin{equation} \text{diam} \left( T_{\vec{k}_1, \vec{k}_2}^{(2, \text{non-singular})} \right) = \sqrt{ \text{diam} \left( \mathcal{C}_{x_2/N_2} ( \vec{r}_{\vec{k}_1} ) \right)^2 + \text{diam} \left( \mathcal{C}_{x_2/N_2} ( \vec{r}_{\vec{k}_2} ) \right)^2 } = 2\sqrt{6} x_2/N_2 \end{equation} for each $\vec{k}_1$ and $\vec{k}_2$. To bound $\delta_\text{Riemann}^{(2, \text{non-singular})}$, it only remains to bound the derivative of the integrand; we do so by first bounding the gradients of the numerator and the denominator separately. The gradient of the numerator can be bounded using the product rule and triangle inequality, as well as \eq{spin-orbital_bound} and \eq{spin-orbital_first_derivative_bound}: \begin{align} & \norm{ \left( \nabla_1 \oplus \nabla_2 \right) \left[ \varphi_i^* ( \vec{r}_1 ) \, \varphi_j^* ( \vec{r}_2 )\, \varphi_k ( \vec{r}_2 ) \,\varphi_\ell ( \vec{r}_1 ) \right] }^2 \nonumber \\ & = \norm{ \nabla_1 \left[ \varphi_i^* ( \vec{r}_1 )\, \varphi_j^* ( \vec{r}_2 )\, \varphi_k ( \vec{r}_2 )\, \varphi_\ell ( \vec{r}_1 ) \right] }^2 + \norm{ \nabla_2 \left[ \varphi_i^* ( \vec{r}_1 ) \varphi_j^* ( \vec{r}_2 ) \varphi_k ( \vec{r}_2 ) \varphi_\ell ( \vec{r}_1 ) \right] }^2 \nonumber \\ & = \left( \abs{\varphi_j^* ( \vec{r}_2 )} \abs{\varphi_k ( \vec{r}_2 )} \norm{ \nabla_1 \left[ \varphi_i^* ( \vec{r}_1 )\, \varphi_\ell ( \vec{r}_1 ) \right] }\right)^2 + \left( \abs{\varphi_i^* ( \vec{r}_1 )} \abs{\varphi_\ell ( \vec{r}_1 )} \norm{ \nabla_2 \left[ \varphi_j^* ( \vec{r}_2 )\, \varphi_k ( \vec{r}_2 ) \right] }\right)^2 \nonumber \\ & \leq \left(
\varphi_\text{max}^2 \norm{ \nabla_1 \left[ \varphi_i^* ( \vec{r}_1 )\, \varphi_\ell ( \vec{r}_1 ) \right] }\right)^2 + \left( \varphi_\text{max}^2 \norm{ \nabla_2 \left[ \varphi_j^* ( \vec{r}_2 )\, \varphi_k ( \vec{r}_2 ) \right] }\right)^2 \nonumber \\ & \leq \left( \varphi_\text{max}^2 \abs{\varphi_\ell ( \vec{r}_1 )} \norm{ \nabla_1 \left[ \varphi_i^* ( \vec{r}_1 ) \right] } + \varphi_\text{max}^2 \abs{\varphi_i^* ( \vec{r}_1 )} \norm{ \nabla_1 \left[ \varphi_\ell ( \vec{r}_1 ) \right] }\right)^2 \nonumber \\ &\qquad +\left( \varphi_\text{max}^2 \abs{\varphi_k ( \vec{r}_2 )} \norm{ \nabla_2 \left[ \varphi_j^* ( \vec{r}_2 ) \right] } + \varphi_\text{max}^2 \abs{\varphi_j^* ( \vec{r}_2 )} \norm{ \nabla_2 \left[ \varphi_k ( \vec{r}_2 ) \right] }\right)^2 \nonumber \\ & \leq 2 \left(2\gamma_1 \frac{\varphi_\text{max}^4}{x_\text{max}}\right)^2. \end{align} The gradient of the denominator can be computed directly: \begin{equation} \norm{\left( \nabla_1 \oplus \nabla_2 \right) \norm{\vec{r}_1 - \vec{r}_2}} = \norm{ \left( \nabla_1 \norm{\vec{r}_1 - \vec{r}_2} \right) \oplus \left( \nabla_2 \norm{\vec{r}_1 - \vec{r}_2} \right) } = \norm{ \left( \frac{\vec{r}_1 - \vec{r}_2}{\norm{\vec{r}_1 - \vec{r}_2}} \right) \oplus \left( \frac{\vec{r}_2 - \vec{r}_1}{\norm{\vec{r}_2 - \vec{r}_1}} \right) } = \sqrt{2} \end{equation} Again by the product rule and the triangle inequality, \begin{equation} \norm{ (\nabla_1 \oplus \nabla_2) \frac{ \varphi_i^* ( \vec{r}_1 )\, \varphi_j^* ( \vec{r}_2 )\, \varphi_k ( \vec{r}_2 )\, \varphi_\ell ( \vec{r}_1 ) }{ \norm{\vec{r}_1 - \vec{r}_2} } } \leq \frac{2\sqrt{2}\gamma_1}{\norm{\vec{r}_1 - \vec{r}_2}} \frac{\varphi_\text{max}^4}{x_\text{max}} + \frac{\sqrt{2}}{\norm{\vec{r}_1 - \vec{r}_2}^2} \varphi_\text{max}^4 \leq \sqrt{2}\left( 2\gamma_1 + 1 \right) \frac{\varphi_\text{max}^4}{x_\text{max}^2}. 
\end{equation} The last inequality follows from our assumption that $\norm{\vec{c}_i - \vec{c}_j} \geq 2\sqrt{3}x_2 + x_\text{max}$, which implies that the distance between $\mathcal{C}_{x_2} \!\left( \vec{c}_i \right)$ and $\mathcal{C}_{x_2} \!\left( \vec{c}_j \right)$ is no smaller than $x_\text{max}$. Therefore, \begin{equation} \delta_\text{Riemann}^{(2, \text{non-singular})} \leq \frac 12 \left( \sqrt{2} \left( 2\gamma_1 + 1 \right) \frac{\varphi_\text{max}^4}{x_\text{max}^2}\right)\left( 64 x_2^6 \right)\left(2\sqrt{6} \frac {x_2}{N_2}\right) \leq 128 \sqrt{3} \left( 2\gamma_1 + 1 \right) \varphi_\text{max}^4 x_\text{max}^5 \exp \left( -\alpha \frac{x_2}{x_\text{max}} \right). \end{equation} \subsubsection{Bounding $\delta_\text{Riemann}^{(2, \text{singular})}$ for \lem{int2}} \label{app:integral_discretization/int2_proof/Riemann_sing} Following \app{integral_discretization/preliminaries/proof_structure}, we again note that $\text{vol} \left( D_{2, \text{singular}} \right) = 16\pi^2$. We also observe that \begin{equation} \text{diam} \left( T_{\vec{k}_{\vec{s}} \oplus \vec{k}_{\vec{t}}} ^{(2, \text{singular})} \right) = \frac{1}{N_2} \sqrt{2^2 + 2^2 + 2^2 + 1^2 + \pi^2 + 4\pi^2} = \frac{1}{N_2} \sqrt{13 + 5\pi^2} < \frac{8}{N_2}, \end{equation} where we are again treating the variables $t$, $\theta$ and $\phi$ formally as Euclidean coordinates instead of spherical polar. It then remains to bound the derivative of the integrand \begin{equation} f_2 \left( s_1, s_2, s_3, t, \theta, \phi \right) = \eta_{i\ell} \!\left( x_2\vec{s} + \vec{c}_i \right) \eta_{jk} \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) t \sin\theta, \end{equation} where $\vec{s} = \left( s_1, s_2, s_3 \right)$.
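Two of the numerical facts used here and in the previous subsection can be verified directly (a standalone sketch): the diameter bound $\sqrt{13+5\pi^2} < 8$, and the constant gradient norm $\norm{(\nabla_1 \oplus \nabla_2)\norm{\vec{r}_1 - \vec{r}_2}} = \sqrt{2}$, the latter via central finite differences at an arbitrary point with $\vec{r}_1 \neq \vec{r}_2$:

```python
import math

# Diameter bound for the singular-case regions.
diam_constant = math.sqrt(13 + 5 * math.pi**2)   # ~ 7.90 < 8

def f(p):
    # f(r1 (+) r2) = ||r1 - r2|| viewed as a function on R^6.
    return math.dist(p[:3], p[3:])

def numeric_grad(p, eps=1e-6):
    # Central finite differences over the six concatenated coordinates.
    g = []
    for i in range(len(p)):
        pp, pm = list(p), list(p)
        pp[i] += eps
        pm[i] -= eps
        g.append((f(pp) - f(pm)) / (2 * eps))
    return g

p = [0.3, -1.2, 0.8, 1.7, 0.4, -0.5]   # arbitrary point with r1 != r2
grad_norm = math.sqrt(sum(x * x for x in numeric_grad(p)))
# The two 3-blocks of the gradient are opposite unit vectors, so the
# concatenated norm is sqrt(2).
```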
Define $\nabla_s = \left( \frac{\partial}{\partial s_1}, \frac{\partial}{\partial s_2}, \frac{\partial}{\partial s_3} \right)$ and $\nabla_t = \left( \frac{\partial}{\partial t}, \frac{\partial}{\partial \phi}, \frac{\partial}{\partial \theta} \right)$. By the product rule and the triangle inequality, \begin{align} \norm{ \left( \nabla_s \oplus \nabla_t \right) f_2 } & \leq \abs{ \eta_{jk}\! \left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) t \sin\theta } \norm{ \left( \nabla_s \oplus \nabla_t \right) \eta_{i\ell} \!\left( x_2\vec{s} + \vec{c}_i \right) } \nonumber \\ &\qquad + \abs{ \eta_{i\ell} \left( x_2\vec{s} + \vec{c}_i \right) } \norm{ \left( \nabla_s \oplus \nabla_t \right) \eta_{jk} \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) t \sin\theta } \nonumber \\ & \leq \varphi_\text{max}^2 \norm{ \left( \nabla_s \oplus \nabla_t \right) \eta_{i\ell} \!\left( x_2\vec{s} + \vec{c}_i \right) } + \varphi_\text{max}^2 \norm{ \left( \nabla_s \oplus \nabla_t \right) \eta_{jk} \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) t \sin\theta }, \end{align} where the last inequality follows from \eq{spin-orbital_bound}. We also have \begin{align} \norm{ \left( \nabla_s \oplus \nabla_t \right) \eta_{i\ell} \! \left( x_2\vec{s} + \vec{c}_i \right) } & = \norm{\nabla_s \eta_{i\ell} \! 
\left( x_2\vec{s} + \vec{c}_i \right)} \nonumber \\ & \leq \abs{\varphi_\ell \!\left( x_2\vec{s} + \vec{c}_i \right)} \norm{\nabla_s \varphi_i^* \!\left( x_2\vec{s} + \vec{c}_i \right)} + \abs{\varphi_i^* \!\left( x_2\vec{s} + \vec{c}_i \right)} \norm{\nabla_s \varphi_\ell \!\left( x_2\vec{s} + \vec{c}_i \right)} \nonumber \\ & \leq x_2 \varphi_\text{max} \norm{\nabla \varphi_i^* \!\left( x_2\vec{s} + \vec{c}_i \right)} + x_2 \varphi_\text{max} \norm{\nabla \varphi_\ell \!\left( x_2\vec{s} + \vec{c}_i \right)} \nonumber \\ & \leq 2 x_2 \gamma_1 \frac{\varphi_\text{max}^2}{x_\text{max}}, \end{align} where $\nabla$ in the second-to-last inequality refers to the gradient operator expressed in the usual basis and the final inequality follows from \eq{spin-orbital_first_derivative_bound}. Finally, we have \begin{align} & \norm{ \left( \nabla_s \oplus \nabla_t \right) \eta_{jk} \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) t \sin\theta }\nonumber \\ & \leq \norm{ \nabla_s \!\left[ \eta_{jk} \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) t \sin\theta \right] } + \norm{ \nabla_t \!\left[ \eta_{jk} \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) t \sin\theta \right] } \nonumber \\ & \leq \abs{t \sin\theta} \norm{ \nabla_s \!\left[ \eta_{jk} \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) \right] } + \abs{t \sin\theta} \norm{ \nabla_t \!\left[ \eta_{jk} \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) \right] } +\abs{ \eta_{jk} \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) } \norm{ \nabla_t (t \sin\theta) } \nonumber \\ & \leq \norm{ \nabla_s \!\left[ \eta_{jk} \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) \right] } + \norm{ \nabla_t \!\left[ \eta_{jk} \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) \right] } + \varphi_\text{max}^2 \norm{ \nabla_t (t \sin\theta) }, \end{align} where we have again used the product rule and the triangle inequality and, in the last inequality, 
\eq{spin-orbital_bound}. We have also used the bounds on the gradient operator $\nabla_t$ in the same way as in \app{integral_discretization/int1_proof/Riemann-sing}. We note that $\norm{\nabla_t (t \sin\theta)} \leq \sqrt{2}$, $\norm{ \nabla_s \left( \eta_{jk} \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) \right) } \leq 2 x_2 \gamma_1 \varphi_\text{max}^2 / x_\text{max}$ (as above) and \begin{align} & \norm{ \nabla_t \eta_{jk} \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) }\nonumber \\ & \leq \abs{\varphi_k \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right)} \norm{ \nabla_t \varphi_j^* \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) } + \abs{\varphi_j^* \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right)} \norm{ \nabla_t \varphi_k \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) } \nonumber \\ & \leq \zeta x_2 \varphi_\text{max} \norm{ \nabla \varphi_j^* \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) } + \zeta x_2 \varphi_\text{max} \norm{ \nabla \varphi_k \!\left( x_2\vec{s} - \zeta x_2\vec{t} + \vec{c}_i \right) } \nonumber \\ & \leq 2\zeta x_2 \gamma_1 \frac{\varphi_\text{max}^2}{x_\text{max}}. \end{align} In summary, we have shown \begin{equation} \norm{ \left( \nabla_s \oplus \nabla_t \right) f_2 } \leq \left( 20 \gamma_1 \frac{x_2}{x_\text{max}} + \sqrt{2} \right) \varphi_\text{max}^4. \end{equation} We can now compute our bound on $\delta_\text{Riemann}^{(2, \text{singular})}$.
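As a check on the constant in this summary bound (with $\zeta = 2\sqrt{3}+3$ as fixed in the truncation argument), the three pieces contribute gradient coefficients $2$, $2$ and $2\zeta$ times $\gamma_1 x_2/x_\text{max}\,\varphi_\text{max}^4$, and their sum $10 + 4\sqrt{3} \approx 16.93$ is indeed below the rounder constant $20$ used above (recall $x_2 \geq x_\text{max}$, so the $\sqrt{2}$ term is unaffected):

```python
import math

zeta = 2 * math.sqrt(3) + 3   # as fixed in the truncation argument

# Coefficients: 2 (gradient of eta_il), 2 (nabla_s of eta_jk), and 2*zeta
# (nabla_t of eta_jk), each multiplying gamma1 * x2/x_max * phi_max^4.
coeff = 2 + 2 + 2 * zeta      # = 10 + 4*sqrt(3)
```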
Including the factor of $\zeta^2 x_2^5$ in the integral and using \eq{Riemann_error}, \begin{align} \delta_\text{Riemann}^{(2, \text{singular})} & \leq \frac 12 \zeta^2 x_2^5 f_\text{max}^\prime \text{vol} \left( D_{2, \text{singular}} \right) \max_{\vec{k}_{\vec{s}}, \vec{k}_{\vec{t}}} \text{diam} \left( T_{\vec{k}_{\vec{s}} \oplus \vec{k}_{\vec{t}}} ^{(2, \text{singular})} \right) \nonumber \\ & < \frac 12 \zeta^2 x_2^5 \times \left( 20 \gamma_1 \frac{x_2}{x_\text{max}} + \sqrt{2} \right) \varphi_\text{max}^4 \times 2\zeta \pi^2 \times 8/N_2 \nonumber \\ & < 2161 \pi^2 \left( 20 \gamma_1 + \sqrt{2} \frac{x_\text{max}}{x_2} \right) \varphi_\text{max}^4 x_\text{max}^5 \frac{x_\text{max}}{x_2} \exp\left( -\alpha \frac{x_2}{x_\text{max}} \right) \nonumber \\ & \leq 2161 \pi^2 \left( 20 \gamma_1 + \sqrt{2} \right) \varphi_\text{max}^4 x_\text{max}^5 \exp\left( -\alpha \frac{x_2}{x_\text{max}} \right). \end{align} \section{Riemann Sum Approximations of Hamiltonian Coefficients} \label{app:integral_discretization} The aim of this Appendix is to prove \lem{int0}, \lem{int1} and \lem{int2}. We begin in \app{integral_discretization/preliminaries} with preliminary matters that are integral to the proofs themselves. We then prove the Lemmas in \app{integral_discretization/int0_proof}, \app{integral_discretization/int1_proof} and \app{integral_discretization/int2_proof} respectively. Throughout this Appendix, we employ the following two notational conventions. \begin{enumerate} \item The vector symbol $\vec{\bullet}$ refers to an element of $\R^3$. We write $\bullet$ for the Euclidean length of $\vec{\bullet}$. Thus $\vec{v}$ refers to a $3$-vector of magnitude $v$. We denote the zero vector as $\vec{0} = (0, 0, 0)$. We use $\oplus$ to denote vector concatenation: if $\vec{v} = \left( v_1, v_2, v_3 \right)$ and $\vec{w} = \left( w_1, w_2, w_3 \right)$, we write $\vec{v} \oplus \vec{w} = \left(v_1, v_2, v_3, w_1, w_2, w_3 \right)$. 
The gradient operator over $\R^6$ is then written as $\nabla_1 \oplus \nabla_2$. \item If $x$ is a positive real number and $\vec{v}$ is a $3$-vector, we write $\mathcal{B}_x (\vec{v})$ for the closed ball of radius $x$ centered at $\vec{v}$ and $\mathcal{C}_x (\vec{v})$ for the closed cube of side length $2x$ centered at $\vec{v}$. Thus $\mathcal{B}_x (\vec{v}) \subset \mathcal{C}_x (\vec{v})$ and $\mathcal{B}_y (\vec{v}) \not\subseteq \mathcal{C}_x (\vec{v})$ whenever $y > x$. \end{enumerate} This notation will be used extensively and without comment in what follows. \input{preliminaries} \input{int0_proof} \input{int1_proof} \input{int2_proof} \section{Introduction} \label{sec:intro} The first quantum algorithm for quantum chemistry was introduced nearly a decade ago \cite{Aspuru-Guzik2005}. That algorithm was based on the Trotter-Suzuki decomposition, which Lloyd and Abrams first applied to quantum simulation in \cite{Lloyd1996,Abrams1997}. The Trotter-Suzuki decomposition has been used in almost all quantum algorithms for quantum chemistry since then \cite{Jones2012,Veis2010,Wang2014,Li2011,Yung2013,Kassal2008, Toloui2013,Whitfield2013b,Whitfield2015,Wecker2014,Hastings2015, Poulin2014,McClean2014,BabbushTrotter}, with the exception of the adiabatic algorithm detailed in \cite{BabbushAQChem}, the variational quantum eigensolver approach described in \cite{Peruzzo2013,McClean2015,OMalley2016}, and in our prior papers using the Taylor series technique \cite{BabbushSparse1,Kivlichan2016}. Recently, there has been substantial renewed interest in these algorithms due to the low qubit requirement compared with other algorithms such as factoring, together with the scientific importance of the electronic structure problem. This led to a series of papers establishing formal bounds on the cost of simulating various molecules \cite{Wecker2014,Hastings2015,Poulin2014,McClean2014,BabbushTrotter}. 
Whereas qubit requirements for the quantum chemistry problem seem modest, using arbitrarily high-order Trotter formulas, the tightest-known upper bound on the gate count of the second-quantized, Trotter-based quantum simulation of chemistry is $\widetilde{\cal O}(N^{8+o(1)} t / \epsilon^{o(1)})$ \cite{Berry2006,Wiebe2011}\footnote{We use the typical computer science convention that $f\in \Theta(g)$, for any functions $f$ and $g$, if $f$ is asymptotically upper and lower bounded by multiples of $g$, ${\cal O}$ indicates an asymptotic upper bound, $\widetilde{{\cal O}}$ indicates an asymptotic upper bound suppressing any polylogarithmic factors in the problem parameters, $\Omega$ indicates the asymptotic lower bound and $f \in o(g)$ implies $f / g \rightarrow 0$ in the asymptotic limit.}, where $N$ is the number of spin-orbitals and $\epsilon$ is the required accuracy. However, using significantly more practical Trotter decompositions, the best known gate complexity for this quantum algorithm is $\widetilde{\cal O}(N^9 \sqrt{t^3 / \epsilon})$ \cite{Hastings2015}. Fortunately, recent numerics suggest that the scaling for real molecules is closer to $\widetilde{\cal O}(N^6 \sqrt{t^3 / \epsilon})$ \cite{Poulin2014} or $\widetilde{\cal O}(Z^3 N^4 \sqrt{t^3 / \epsilon})$ \cite{BabbushTrotter}, where $Z$ is the largest nuclear charge in the molecule. Still, the Trotter-based quantum simulation of many molecular systems remains a costly proposition \cite{Gibney2014,Mueck2015}. In Ref.~\cite{BabbushSparse1}, we introduced two novel quantum algorithms for chemistry based on the truncated Taylor series simulation method of \cite{Berry2015}, which are exponentially more precise than algorithms using the Trotter-Suzuki decomposition. Our first algorithm, referred to as the ``database'' algorithm, was shown to have gate count scaling as $\widetilde{\cal O}(N^4 \| H\| t)$. 
Our second algorithm, referred to as the ``on-the-fly'' algorithm, was shown to have the lowest scaling of any approach to quantum simulation previously in the literature, $\widetilde{\cal O}(N^5 t)$. Both of these algorithms use a second-quantized representation of the Hamiltonian; in this paper we employ a more compressed, first-quantized representation of the Hamiltonian known as the configuration interaction (CI) matrix. We also analyze the on-the-fly integration strategy far more rigorously, by making the assumptions explicit and rigorously deriving error bounds. Our approach combines a number of improvements: \begin{itemize} \item a novel 1-sparse decomposition of the CI matrix (improving over that in \cite{Toloui2013}), \item a self-inverse decomposition of 1-sparse matrices as introduced in \cite{Berry2013}, \item the exponentially more precise simulation techniques of \cite{Berry2015}, \item and the on-the-fly integration strategy of \cite{BabbushSparse1}. \end{itemize} The paper is outlined as follows. In \sec{summary}, we summarize the key results of this paper, and note the improvements presented here over previous approaches. In \sec{encoding}, we introduce the configuration basis encoding of the wavefunction. In \sec{decomp1}, we show how to decompose the Hamiltonian into 1-sparse unitary matrices. In \sec{oracle}, we use the decomposition of \sec{decomp1} to construct a circuit which provides oracular access to the Hamiltonian matrix entries, assuming access to $\textsc{sample}(w)$ from \cite{BabbushSparse1}. In \sec{simulation}, we review the procedures in \cite{Berry2015} and \cite{BabbushSparse1} to demonstrate that this oracle circuit can be used to effect a quantum simulation which is exponentially more precise than using a Trotter-Suzuki decomposition approach. In \sec{conclusion}, we discuss applications of this algorithm and future research directions. 
\subsection{Preliminaries} \label{app:integral_discretization/preliminaries} The purpose of this subsection is to present two key discussions that will be needed at many points in the proofs of this Appendix. First, in \app{integral_discretization/preliminaries/proof_structure}, we discuss the general structure of the proofs of \lem{int0}, \lem{int1} and \lem{int2}. Second, in \app{integral_discretization/preliminaries/exp_Coulomb_potential}, we prove an ancillary Lemma (\lem{exp_Coulomb_potential_bound}) that we use several times. The ancillary Lemma offers bounds on the function \begin{equation} \label{eq:exp_Coulomb_potential} \Lambda_{\mu, x} (\vec{c}) := \int_{\R^3 \setminus \mathcal{B}_x (\vec{0})} \frac{\exp(-\mu r)}{\norm{\vec{r}-\vec{c}}} \dd{\vec{r}}, \end{equation} where $\mu$ is a positive real constant and $\vec{c}$ is a constant vector. The Lemma is stated as follows. \begin{lemma} \label{lem:exp_Coulomb_potential_bound} Suppose $\mu$ and $x$ are positive real numbers and $\vec{c} \in \R^3$ is some constant vector. Then, for $c > x$, \begin{equation} \label{eq:exp_Coulomb_potential_bound_c>x} \Lambda_{\mu, x} (\vec{c}) < \frac{16\pi}{\mu^3 c} e^{-\mu x/2}, \end{equation} and for $c \leq x$, \begin{equation} \label{eq:exp_Coulomb_potential_bound_c<x} \Lambda_{\mu, x} (\vec{c}) < \frac{8\pi}{\mu^2} e^{-\mu x/2}. \end{equation} \end{lemma} The function $\Lambda_{\mu, x} (\vec{c})$ appears in bounds derived in the proofs of \lem{int1} and \lem{int2}. Although it is possible to compute an analytic formula for the value of the integral, the result is unwieldy. The bounds of \lem{exp_Coulomb_potential_bound} are then used to ensure meaningfully expressed bounds on the Riemann sum approximations. \subsubsection{Structure of the Proofs} \label{app:integral_discretization/preliminaries/proof_structure} The proofs of \lem{int0}, \lem{int1} and \lem{int2} each roughly follow a general structure consisting of the following three stages, though with minor deviations.
\textbf{First stage:} The domain of integration is truncated to a domain $D$. The size of $D$ is specified by a positive real parameter $x$, which the conditions of the lemmas ensure is at least $x_{\max}$. We then bound the error due to the truncation \begin{equation} \delta_\text{trunc} := \abs{ \int_{\R^d} f\left(\vec{r}\right) \dd{\vec{r}} - \int_{D} f\left(\vec{r}\right) \dd{\vec{r}} }, \end{equation} where $f: \R^d \rightarrow \R$ refers to the relevant integrand. \textbf{Second stage:} We specify a Riemann sum that is designed to approximate this truncated integral and give a bound on the error $\delta_\text{Riemann}$ of this Riemann sum approximation. We specify the number of terms in the Riemann sum in order to give the bound on $\mu$ in the lemma. We also give a bound on the absolute value of each term in the Riemann sum using the value of $x$ specified in the first stage. \textbf{Third stage:} In the final stage of each proof, we bound the total error \begin{equation} \delta_\text{total} := \abs{ \int_{\R^d} f(\vec{r}) \, \dd{\vec{r}} - \sum_T f(\vec{r}_T) \text{vol}(T) } \end{equation} via the triangle inequality as \begin{equation} \label{eq:total_error_bound_trunc_Riemann} \delta_\text{total} \leq \delta_\text{trunc} + \delta_\text{Riemann}. \end{equation} Our choice of $x$ then ensures that the error is bounded by $\delta$. To be more specific about the approach in the second stage, we partition $D$ into regions $T$, and the Riemann sum approximates the integral over each $T$ with the value of the integrand multiplied by the volume of $T$. The error due to this approximation is bounded by observing the following. Suppose $f: \R^d \rightarrow \R$ is once-differentiable and $f^\prime_{\max}$ is a bound on its first derivative. 
If $\vec{r}_T$ is any element of $T$, we will seek to bound the error of the approximation \begin{equation} \int_T f\left(\vec{r}\right) \dd{\vec{r}} \approx f\left(\vec{r}_T\right) \text{vol}(T), \end{equation} where $\text{vol}(T)$ is the $d$-dimensional hypervolume of the set $T$. The error of this approximation is \begin{equation} \delta_T := \abs{ \int_T \left[ f\!\left(\vec{r}\right) - f\!\left(\vec{r}_T\right) \right] \dd{\vec{r}} }, \end{equation} which can be bounded as follows: \begin{equation} \delta_T \leq \int_T \abs{f\!\left(\vec{r}\right) - f\!\left(\vec{r}_T\right)}\dd{\vec{r}} \leq \max_{\vec{r} \in T} \abs{f\!\left(\vec{r}\right) - f\!\left(\vec{r}_T\right)} \text{vol}(T) \leq f^\prime_{\max}\max_{\vec{r} \in T} \norm{\vec{r} -\vec{r}_T} \text{vol}(T), \end{equation} where \begin{equation} \text{vol}(T) = \int_T \dd{\vec{r}}, \qquad f^\prime_{\max} = \max_{\vec r} \norm{\nabla f(\vec r)}. \end{equation} We will choose the points $\vec{r}_T$ at the centers of the regions $T$, so that $\max_{\vec{r} \in T} \norm{\vec{r} - \vec{r}_T} \leq \tfrac{1}{2}\text{diam}(T)$ and hence \begin{equation} \delta_T \leq \frac 12 f^\prime_{\max} \text{diam}(T) \text{vol} (T), \end{equation} where \begin{equation} \text{diam}(T) := \max_{\vec{r}_1,\vec{r}_2 \in T} \norm{\vec{r}_1-\vec{r}_2}. \end{equation} The Riemann sum approximations we define will then take the form \begin{equation} \int_D f(\vec{r})\, \dd{\vec{r}} \approx \sum_T f(\vec{r}_T) \text{vol}(T), \end{equation} and the error of this approximation is \begin{equation} \delta_\text{Riemann} := \abs{\int_D f(\vec{r}) \,\dd{\vec{r}}- \sum_T f(\vec{r}_T) \text{vol}(T)}, \end{equation} which can be bounded via the triangle inequality as \begin{equation} \label{eq:Riemann_error} \delta_\text{Riemann} \leq \sum_T \delta_T \leq \frac 12 f^\prime_{\max} \text{vol}(D) \left(\max_T \text{diam}(T)\right).
\end{equation} \subsubsection{Proof of \lem{exp_Coulomb_potential_bound}} \label{app:integral_discretization/preliminaries/exp_Coulomb_potential} We prove the Lemma by deriving exact formulae for $\Lambda_{\mu, x} (\vec{c})$ in the cases $c \leq x$ and $c > x$ and then deriving bounds on these formulae that have simpler functional forms. To derive exact formulae for $\Lambda_{\mu, x} (\vec{c})$, we use the Laplace expansion \begin{equation} \frac{1}{\norm{\vec{r}-\vec{c}}} = \sum_{\ell = 0}^\infty \sum_{m=-\ell}^\ell (-1)^m I_{\ell, -m} (\vec{r}) R_{\ell, m} (\vec{c}), \end{equation} where $R_{\ell, m}$ and $I_{\ell, m}$ refer to the regular and irregular solid spherical harmonic functions, respectively, and $r \geq c$. That is to say, \begin{equation} R_{\ell, m} (\vec{r}) := \sqrt{\frac{4\pi}{2\ell+1}} r^\ell Y_{\ell, m} (\theta,\phi) \end{equation} and \begin{equation} I_{\ell, m} (\vec{r}) := \sqrt{\frac{4\pi}{2\ell+1}} \frac{1}{r^{\ell+1}} Y_{\ell, m} (\theta,\phi), \end{equation} where \begin{equation} Y_{\ell, m} (\theta, \phi) := \sqrt{\frac{2\ell + 1}{4\pi} \frac{(\ell-m)!}{(\ell+m)!}} e^{i m \phi} P_\ell^m (\cos\theta) \end{equation} are the spherical harmonics (see \S 14.30(i) in \cite{NIST:DLMF}), $P_\ell^m$ are the associated Legendre polynomials, and $\theta$ and $\phi$ are respectively the polar and azimuthal angles of $\vec{r}$. Via Eq.~(8) of \S 14.30(ii) in \cite{NIST:DLMF}, we have \begin{equation} \int_0^{2\pi} \dd{\phi} \int_0^\pi \dd{\theta}\ I_{\ell, m} (\theta,\phi) \sin\theta = \frac{4\pi}{r} \delta_{m,0} \delta_{\ell,0} \end{equation} and \begin{equation} \int_0^{2\pi} \dd{\phi} \int_0^\pi \dd{\theta}\ R_{\ell, m} (\theta,\phi) \sin\theta = 4\pi \delta_{m,0} \delta_{\ell,0}, \end{equation} where $\delta_{a,b}$ denotes the Kronecker delta. 
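As an aside that is easy to check numerically (not part of the proof): summing the Laplace expansion over $m$ via the spherical harmonic addition theorem gives, for two points separated by angle $\gamma$ with $r > c$, the familiar form $1/\norm{\vec{r}-\vec{c}} = \sum_{\ell} (c^\ell/r^{\ell+1}) P_\ell(\cos\gamma)$. A short sketch, with illustrative values only:

```python
import math

def legendre(l, x):
    """P_l(x) via the Bonnet recurrence (n+1)P_{n+1} = (2n+1)x P_n - n P_{n-1}."""
    if l == 0:
        return 1.0
    p_prev, p = 1.0, x
    for n in range(1, l):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

def coulomb_series(r, c, gamma, lmax):
    """Truncated multipole expansion of 1/|r - c|, valid for r > c."""
    return sum((c ** l / r ** (l + 1)) * legendre(l, math.cos(gamma))
               for l in range(lmax + 1))

# direct evaluation via the law of cosines, |r - c|^2 = r^2 + c^2 - 2rc cos(gamma)
r, c, gamma = 2.0, 0.5, 0.8
exact = 1.0 / math.sqrt(r * r + c * c - 2 * r * c * math.cos(gamma))
approx = coulomb_series(r, c, gamma, 40)
print(abs(exact - approx))  # converges geometrically in (c/r)^l
```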
If $c \leq x$: \begin{align} \label{eq:Lamba_c<x_exact} \Lambda_{\mu, x} (\vec{c}) & = \sum_{\ell = 0}^\infty \sum_{m=-\ell}^\ell (-1)^m R_{\ell, m} (\vec{c}) \int_{\R^3 \setminus \mathcal{B}_x (\vec{0})} \dd{\vec{r}} \, e^{-\mu r} I_{\ell, -m} (\vec{r}) \nonumber \\ & = 4 \pi R_{0,0} (\vec{c}) \int_x^\infty \dd{r}\ r e^{-\mu r} \nonumber \\ & = 4\pi \left( \frac{x}{\mu} + \frac{1}{\mu^2} \right) e^{-\mu x}. \end{align} \eq{exp_Coulomb_potential_bound_c<x} follows from the fact that $(1 + z) e^{-z} < 2 e^{-z/2}$ for all $z > 0$. If $c > x$: \begin{align} \Lambda_{\mu, x} (\vec{c}) - \Lambda_{\mu, c} (\vec{c}) & = \int_{\mathcal{B}_c (\vec{0})\setminus\mathcal{B}_x (\vec{0})}\dd{\vec{r}} \ \frac{\exp(-\mu r)}{\norm{\vec{r}-\vec{c}}} \nonumber \\ & = \sum_{\ell = 0}^\infty \sum_{m=-\ell}^\ell (-1)^m I_{\ell,m} (\vec{c}) \int_{\mathcal{B}_c (\vec{0}) \setminus \mathcal{B}_x (\vec{0})} \dd{\vec{r}} \ e^{-\mu r} R_{\ell,-m} (\vec{r}) \nonumber \\ & = \frac{4\pi}{c} \left[ \left( \frac{x^2}{\mu} + \frac{2x}{\mu^2} + \frac{2}{\mu^3} \right) e^{-\mu x} - \left( \frac{c^2}{\mu} + \frac{2c}{\mu^2} + \frac{2}{\mu^3} \right) e^{-\mu c} \right]. 
\end{align} Therefore, \begin{align} \Lambda_{\mu, x} (\vec{c}) & = \Lambda_{\mu, c} (\vec{c}) + \frac{4\pi}{c} \left[ \left( \frac{x^2}{\mu} + \frac{2x}{\mu^2} + \frac{2}{\mu^3} \right) e^{-\mu x} - \left( \frac{c^2}{\mu} + \frac{2c}{\mu^2} + \frac{2}{\mu^3} \right) e^{-\mu c} \right] \nonumber \\ & = \frac{4\pi}{c} \left[ \left( \frac{x^2}{\mu} + \frac{2x}{\mu^2} + \frac{2}{\mu^3} \right) e^{-\mu x} - \left( \frac{c}{\mu^2} + \frac{2}{\mu^3} \right) e^{-\mu c} \right] \nonumber \\ & < \frac{4\pi}{c} \left( \frac{x^2}{\mu} + \frac{2x}{\mu^2} + \frac{2}{\mu^3} \right) e^{-\mu x} \nonumber \\ & < \frac{16\pi}{\mu^3 c} e^{-\mu x/2}, \end{align} where we use the fact that $(z^2 + 2z + 2) e^{-z} < 4 e^{-z/2}$ for any $z>0$ and the fact that \begin{equation} \Lambda_{\mu, c} (\vec{c}) = 4\pi \left( \frac{c}{\mu} + \frac{1}{\mu^2} \right) e^{-\mu c}, \end{equation} which follows from \eq{Lamba_c<x_exact}. This gives us \eq{exp_Coulomb_potential_bound_c>x} for $c>x$. In the case that $c\le x$, \begin{equation} \frac{16\pi}{\mu^3 c} e^{-\mu x/2} \ge \frac{16\pi}{\mu^3 x} e^{-\mu x/2}, \end{equation} and, since $(z^2 + z) e^{-z} < 4e^{-z/2}$ for all $z > 0$, we have \begin{equation} \frac{16\pi}{\mu^3 x} e^{-\mu x/2} > 4\pi \left( \frac{x}{\mu} + \frac{1}{\mu^2} \right) e^{-\mu x}. \end{equation} Therefore the bound \eq{exp_Coulomb_potential_bound_c>x} holds for $c\le x$ as well. \section{Simulating Hamiltonian Evolution} \label{sec:simulation} The simulation technique we now discuss is based on that of Ref.~\cite{Berry2015}. We partition the total time $t$ into $r$ segments of duration $t/r$. For each segment, we expand the evolution operator $e^{-iHt/r}$ in a Taylor series up to order $K$, \begin{align} \label{eq:Ur} U_r := e^{-i H t / r} \approx \sum_{k=0}^K\frac{\left(-i Ht/r\right)^k}{k!}. 
\end{align} The error due to truncating the series at order $K$ is bounded by \begin{equation} \label{eq:error} {\cal O}\left(\frac{\left(\left\| H \right\| t / r\right)^{K + 1}}{\left(K+1\right)!}\right). \end{equation} In order to ensure the total simulation has error no greater than $\epsilon$, each segment should have error bounded by $\epsilon/r$. Therefore, provided $r \ge \| H\| t$, the total simulation will have error no more than $\epsilon$ for \begin{equation} \label{eq:K} K \in {\cal O} \left(\frac{\log\left(r/\epsilon\right)}{\log\log\left(r/\epsilon\right)}\right). \end{equation} Using our full decomposition of the Hamiltonian from \eq{selfinvdec} in the Taylor series formula of \eq{Ur}, we obtain \begin{align} \label{eq:Ur3} U_r \approx \sum_{k=0}^K\frac{\left(-i t \zeta \right)^k}{r^k k!} \! \! \sum_{\ell_1, \cdots, \ell_k=1}^{L} \sum_{\rho_1, \cdots, \rho_k=1}^{\mu} \!\!\! {\cal H}_{\ell_1,\rho_1} \cdots {\cal H}_{\ell_k,\rho_k}. \end{align} The sum in \eq{Ur3} takes the form \begin{align} \label{eq:bV} & \widetilde{U} =\sum_{j} \beta_j V_j, \quad \quad j = \left(k, \ell_1, \cdots, \ell_k,{\rho_1} \cdots {\rho_k} \right),\nonumber \\ & V_j =\left(-i\right)^k{\cal H}_{\ell_1,\rho_1} \cdots {\cal H}_{\ell_k,\rho_k}, \quad \beta_j = \frac{t^k \zeta^k }{r^k k!}, \end{align} where $\widetilde{U}$ is close to unitary and the $V_j$ are unitary. Note that in contrast to \cite{BabbushSparse1}, $\beta_j\ge 0$, consistent with the convention used in \cite{Berry2015}. Our simulation will make use of an ancillary ``selection'' register $\ket{j} = \ket{k}\ket{\ell_1} \cdots \ket{\ell_K} \ket{\rho_1} \cdots \ket{\rho_K}$ for $0\leq k\leq K$, with $1\leq \ell_\upsilon\leq L$ and $1 \leq \rho_\upsilon \leq \mu$ for all $\upsilon$. It is convenient to encode $k$ in unary, as $\ket{k}=\ket{1^k0^{K-k}}$, which requires $\Theta(K)$ qubits. 
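The choice of $K$ in \eq{K} can be made concrete by searching for the smallest order whose Taylor remainder bound falls below $\epsilon/r$. The sketch below is illustrative only; it takes $\|H\|t/r = \ln 2$, the value arising from the choice of $r$ made below.

```python
import math

def truncation_order(x, eps_per_segment):
    """Smallest K with x^(K+1)/(K+1)! <= eps_per_segment, bounding the
    Taylor remainder of exp(-iHt/r) for ||H|| t / r <= x."""
    K, term = 0, x  # term = x^(K+1)/(K+1)! at K = 0
    while term > eps_per_segment:
        K += 1
        term *= x / (K + 1)
    return K

# each of the r segments may contribute error at most eps/r
x = math.log(2)  # ||H|| t / r when r = zeta*L*mu*t/ln(2)
for target in (1e-6, 1e-10, 1e-14):
    print(target, truncation_order(x, target))
```

The printed orders grow very slowly as the target error shrinks, illustrating the $\log(r/\epsilon)/\log\log(r/\epsilon)$ scaling.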
Additionally, we encode each $\ket{\ell_k}$ in binary using $\Theta (\log L )$ qubits and each $\ket{\rho_k}$ in binary using $\Theta(\log \mu)$ qubits. We denote the total number of ancilla qubits required as $J$, which scales as \begin{equation} \label{eq:J} J \in \Theta\left(K\log \left(\mu L\right)\right) = {\cal O}\left(\frac{\log \left(\mu L\right)\log\left(r/\epsilon\right)}{\log\log\left(r/\epsilon\right)}\right). \end{equation} To implement the truncated evolution operator in \eq{bV}, we wish to prepare a superposition state over $j$, then apply a controlled $V_j$ operator. Following the notation of Ref.~\cite{Berry2015}, we denote the state preparation operator as $B$, and it has the effect \begin{equation} \label{eq:B} B\ket{0}^{\otimes J} = \sqrt{\frac{1}{\lambda}}\sum_{j} \sqrt{\beta_j} \ket{j}, \end{equation} where $\lambda$ is a normalization factor. We can implement $B$ by applying Hadamard gates (which we denote as $H\!d$) to every qubit in the $\ket{\ell_\upsilon}$ and $\ket{\rho_\upsilon}$ registers, in addition to $K$ (controlled) single-qubit rotations applied to each qubit in the $\ket{k}$ register. We need to prepare the superposition state over $\ket k$ \begin{equation} \left(\sum_{k=0}^K \frac{(\zeta L \mu t)^k}{r^k k!}\right)^{-1/2}\sum_{k=0}^K \sqrt{\frac{(\zeta L \mu t)^k}{r^k k!}} \ket{k}. \end{equation} Using the convention that $R_y(\theta_k) := \exp [-i\, \theta_k\, \sigma_y / 2 ]$, this can be prepared by applying $R_y(\theta_1)$ followed by $R_y (\theta_k)$ controlled on qubit $k-1$ for all $k \in [2, K]$, where \begin{equation} \label{eq:theta_k} \theta_k := 2\arcsin{\left(\sqrt{ 1- \frac{(\zeta L \mu t)^{k-1}}{r^{k-1}\left(k-1\right)!} \left( \sum_{s = k-1}^K \frac{(\zeta L \mu t)^{s}}{r^s s!} \right)^{-1}} \right)}. \end{equation} The Hadamard gates are applied to each of the qubits in the $2 K$ remaining components of the selection register $\ket{\ell_1} \cdots \ket{\ell_K} \ket{\rho_1} \cdots \ket{\rho_K}$. 
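As a sanity check of this preparation step (illustrative only; here $w_k$ stands for $(\zeta L \mu t)^k/(r^k k!)$): choosing the angles so that $\sin^2(\theta_k/2)$ equals the conditional probability that the unary counter advances past $k-1$ reproduces amplitudes proportional to $\sqrt{w_k}$.

```python
import math

x, K = math.log(2), 8  # x plays the role of zeta*L*mu*t/r
w = [x ** k / math.factorial(k) for k in range(K + 1)]
tail = [sum(w[s:]) for s in range(K + 1)]  # tail[s] = sum_{j >= s} w_j

# sin^2(theta_k/2) = P(advance past k-1) = tail[k]/tail[k-1]
theta = [2 * math.asin(math.sqrt(1 - w[k - 1] / tail[k - 1]))
         for k in range(1, K + 1)]

# amplitude of unary |1^k 0^(K-k)>: sin through the first k rotations,
# cos of the (k+1)-st rotation stops the counter
amp = []
for k in range(K + 1):
    a = 1.0
    for j in range(k):
        a *= math.sin(theta[j] / 2)
    if k < K:
        a *= math.cos(theta[k] / 2)
    amp.append(a)

target = [math.sqrt(w[k] / tail[0]) for k in range(K + 1)]
print(max(abs(a - t) for a, t in zip(amp, target)))
```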
This requires ${\cal O}(K\log (\mu L ))$ gates; parallelizing the Hadamard transforms leads to circuit depth ${\cal O} (K)$ for $B$. A circuit implementing $B$ is shown in \fig{B_circuit}. \begin{figure} \[\xymatrix @*=<0em> @R 1em @C 0.25em { &&&&&\lstick{\kets{0}} &\gate{R_y\left(\theta_1\right)} & \ctrl{1} & \qw &&&& \cdots &&&& & \qw & \qw & \qw \\ &&&&&\lstick{\kets{0}} &\qw & \gate{R_y\left(\theta_2\right)} & \qw &&&& \cdots &&&& & \qw & \qw & \qw \\ &\vdots &&&&&&&&&&& \ddots&& && & \vdots & \\\\ &&&&&\lstick{\kets{0}} &\qw & \qw & \qw &&&& \cdots& &&& & \ctrl{1} & \qw & \qw \\ &&&&&\lstick{\kets{0}} &\qw & \qw & \qw &&&& \cdots&&& && \gate{R_y\left(\theta_K\right)} & \qw & \qw \\ &&&&&\lstick{\kets{0}^{\otimes \log L}} & {/} \qw & \gate{H\!d^{\otimes \log L}} &\qw & \qw& \qw & \qw &\qw & \qw& \qw & \qw &\qw & \qw& \qw & \qw \\ &&&&&\lstick{\kets{0}^{\otimes \log\mu}} & {/} \qw & \gate{H\!d^{\otimes \log \mu}} &\qw & \qw & \qw & \qw &\qw & \qw& \qw & \qw&\qw & \qw& \qw & \qw \\ }\] \caption{\label{fig:B_circuit} The circuit for $B$ as described in \eq{B}. An expression for $\theta_k$ is given in \eq{theta_k}.} \end{figure} We then wish to implement an operator to apply the $V_j$ which is referred to in \cite{BabbushSparse1,Berry2015} as $\textsc{select}(V)$, \begin{equation} \label{eq:selectV} \textsc{select}\left(V\right)\ket{j}\ket{\psi} =\ket{j}V_j\ket{\psi}. \end{equation} This operation can be applied using ${\cal O}(K)$ queries to a controlled form of the oracle $\textsc{select}({\cal H})$ defined in \sec{oracle}. One can apply $\textsc{select}({\cal H})$ $K$ times, using each of the $\ket{\ell_\upsilon}$ and $\ket{\rho_\upsilon}$ registers. Thus, given that $\textsc{select}({\cal H})$ requires $\widetilde{\cal O}(N)$ gates, our total gate count for $\textsc{select}(V)$ is $\widetilde {\cal O}(K N)$. \tab{parameters} lists relevant parameters along with their bounds, in terms of chemically relevant variables. 
\tab{operators} lists relevant operators and their gate counts. \begin{table*}[ht] \caption{Taylor series simulation parameters and bounds} \label{tab:parameters} \begin{tabular}{ c | c | c } \hline\hline Parameter & Explanation & Bound\\ \hline $r$ & number of time segments, \eq{r} & $ \zeta L\mu t/\ln(2)$\\ $L$ & terms in unitary decomposition, \eq{L} & ${\cal O}\left(\eta^2 N^2 \max_{\gamma,\rho}\|\aleph_{\gamma,\rho} \|_{\max}/\zeta\right)$\\ $K$ & truncation point for Taylor series, \eq{K} & ${\cal O}\left(\frac{\log(r/\epsilon)}{\log\log(r/\epsilon)}\right)$\\ $J$ & ancilla qubits in selection register, \eq{J} & $\Theta\left(K\log \left(\mu L\right)\right)$\\ \hline \end{tabular} \end{table*} \begin{table*}[ht] \caption{Taylor series simulation operators and complexities} \label{tab:operators} \begin{tabular}{ c | c | c } \hline\hline Operator & Purpose & Gate Count\\ \hline $\textsc{select}\left({\cal H}\right)$ & applies specified terms from decomposition, \eq{selectH} & $\widetilde{\cal O}\left(N\right)$\\ $\textsc{select}\left(V\right)$ & applies specified strings of terms, \eq{selectV} & $\widetilde{\cal O}\left(N K\right) $\\ $B$ & prepares superposition state, \eq{B} & $ {\cal O}\left(K \log \left(\mu L\right) \right) $\\ ${\cal W}$ & probabilistically performs simulation under $H t / r$, \eq{W} & $ \widetilde{\cal O}\left(N K\right) $\\ $P$ & projector onto $\ket{0}^{\otimes J}$ state of selection register, \eq{P} & $ {\cal O}\left(K \log \left(\mu L\right)\right) $\\ $G$ & amplification operator to implement sum of unitaries, \eq{G} & $ \widetilde{\cal O}\left(N K \right) $\\ $\left(PG\right)^r$ & entire algorithm & $ \widetilde{\cal O}\left(r N K\right) $\\ \hline \end{tabular} \end{table*} A strategy for implementing the evolution operator in \eq{bV} becomes clear if we consider the combined action of $B$ followed by $\textsc{select}(V)$ on state $\ket{\psi}$, \begin{equation} \label{eq:BV} \textsc{select}\left(V\right) B \ket{0}^{\otimes J} 
\ket{\psi} = \sqrt{\frac{1}{\lambda}}\sum_{j} \sqrt{\beta_j} \ket{j} V_j \ket{\psi}. \end{equation} This state is similar to that desired for $\widetilde{U}\ket{\psi}$ except for the presence of the register $\ket{j}$. The need to disentangle from that register motivates the operator \cite{Berry2015} \begin{align} \label{eq:W} & {\cal W} := \left( B\otimes \openone \right)^\dagger \textsc{select}\left(V\right)\left(B \otimes \openone\right), \\ & {\cal W} \ket{0}^{\otimes J}\ket{\psi} = \frac{1}{\lambda}\ket{0}^{\otimes J}\widetilde{U}\ket{\psi} + \sqrt{1 - \frac{1}{\lambda^2}}\ket{\Phi}, \end{align} where $\ket{\Phi}$ is a state for which the selection register ancilla qubits are orthogonal to $\ket{0}^{\otimes J}$. Accordingly, we define a projection operator $P$ so that \begin{align} \label{eq:P} P = \left(\ket{0}\!\bra{0}\right)^{\otimes J} \otimes \openone , \\ P {\cal W} \ket{0}^{\otimes J} \ket{\psi} = \frac{1}{\lambda}\ket{0}^{\otimes J} \widetilde{U} \ket{\psi}. \end{align} Using these definitions, we follow the procedure for oblivious amplitude amplification of \cite{Berry2015} to deterministically execute the intended unitary. To this end, we define the oblivious amplitude amplification operator \begin{equation} \label{eq:G} G \equiv - {\cal W} \left( \openone - 2 P\right) {\cal W}^\dagger \left( \openone - 2 P\right) {\cal W}. \end{equation} We use the amplification operator $G$ in conjunction with the projector onto the empty ancilla state $P$ so that \begin{equation} PG\ket{0}\ket{\psi}=\ket{0}\left(\frac{3}{\lambda}\widetilde{U}-\frac{4}{\lambda^3}\widetilde{U}\widetilde{U}^\dagger\widetilde{U}\right)\ket{\psi}. \end{equation} The sum of the absolute values of the coefficients in the self-inverse decomposition of the Hamiltonian in \eq{selfinvdec} is $\zeta L\mu$. 
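The algebra behind \eq{G} can be verified directly on small matrices. The sketch below embeds a random unitary $U$ as the top-left block of a two-block unitary with amplitude $1/\lambda$ (a generic stand-in for ${\cal W}$, not the circuit above); for $\lambda = 2$ we have $3/\lambda - 4/\lambda^3 = 1$, so the amplified operator is exact.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 4, 2.0

# random unitary U via QR decomposition
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

s = np.sqrt(1 - 1 / lam ** 2)
I = np.eye(n)
# one ancilla qubit; the top-left (ancilla |0>) block of W is U/lam
W = np.block([[U / lam, -s * U], [s * I, I / lam]])
assert np.allclose(W.conj().T @ W, np.eye(2 * n))  # W is unitary

P = np.zeros((2 * n, 2 * n)); P[:n, :n] = np.eye(n)  # |0><0| on the ancilla
R = np.eye(2 * n) - 2 * P
G = -W @ R @ W.conj().T @ R @ W

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
state0 = np.concatenate([psi, np.zeros(n)])  # |0>|psi>

out = P @ G @ state0
# PG|0>|psi> = |0>(3/lam U - 4/lam^3 U U^dag U)|psi> = |0> U |psi> for lam = 2
print(np.linalg.norm(out[:n] - U @ psi))
```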
If we choose the number of segments as $r=\zeta L\mu t / \ln(2)$, then our choice of $K$ as in \eq{K} ensures that $\big\|\widetilde{U}-U_r\big\|_{\textrm{max}} \in {\cal O}(\epsilon/r)$, and hence \cite{Berry2015} \begin{equation} \left\|PG\ket{0}\ket{\psi}-\ket{0}U_r\ket{\psi}\right\|_{\textrm{max}} \in {\cal O}\left(\epsilon/r\right). \end{equation} Then the total error due to oblivious amplitude amplification on all segments will be ${\cal O}(\epsilon)$. Therefore, the complexity of the total algorithm is $r$ times the complexity of implementing $\textsc{select}(V)$ and $B$. While we implement $B$ with gate count ${\cal O}(K \log (\mu L))$, our construction of $\textsc{select}(V)$ has gate count $\widetilde{\cal O}(N K)$. The gate count of our entire algorithm depends crucially on $r$. Above we have taken $r \in {\cal O}(\zeta L \mu t)$ where \begin{align} \label{eq:L} L & \in {\cal O}\left(M \Gamma\right),\\ M & \in \Theta\left(\max_{\gamma,\rho}\left \|\aleph_{\gamma,\rho}\right\|_{\max}/\zeta\right),\\ \Gamma & \in {\cal O}\left(\eta^2 N^2 \right). \end{align} As a result, we may bound $r$ as \begin{equation} \label{eq:r} r\in{\cal O}\left(\eta^2 N^2 t \mu \max_{\gamma,\rho}\left \|\aleph_{\gamma,\rho}\right\|_{\max}\right). \end{equation} As a consequence of \lem{int0}, \lem{int1} and \lem{int2}, $\mu \max_{\gamma,\rho}\left \|\aleph_{\gamma,\rho}\right\|_{\max}$ can be replaced with \begin{equation} \mathcal{O} \left( \varphi_{\max}^4 x_{\max}^5 \left[ \log \left( \frac{\varphi_{\max}^4 x_{\max}^5}{\delta} \right) \right]^6 \right), \end{equation} where $\varphi_{\max}$ is the maximum value taken by the orbitals, and $x_{\max}$ is the scaling of the spatial size of the orbitals. To relate $\epsilon$ to $\delta$, in \sec{dec2} the Hamiltonian is broken up into a sum of ${\cal O}(\eta^2 N^2)$ terms, each of which contains one or two of the integrals. Therefore, the error in the Hamiltonian is ${\cal O}(\delta \eta^2 N^2)$. 
The Hamiltonian is simulated for time $t$, so the resulting error in the simulation will be ${\cal O}(\delta t \eta^2 N^2)$. To ensure that the error is no greater than $\epsilon$, we should therefore choose $\delta=\Theta(\epsilon/( t \eta^2 N^2))$. Since we are considering scaling with large $\eta$ and $N$, $\delta$ will be small and the conditions \eq{sensible0}, \eq{sensible1} and \eq{sensible2} will be satisfied. In addition, the conditions of \thm{maintheorem} mean that $\varphi_{\max}$ and $x_{\max}$ are logarithmic in $N$. Hence one can take, omitting logarithmic factors, \begin{equation} r\in \widetilde{\cal O}(\eta^2 N^2 t). \end{equation} The complexity of $B$ does not affect the scaling, because it is lower order in $N$. Therefore, our overall algorithm has gate count \begin{equation} \widetilde{\cal O}\left(r N K \right) = \widetilde{\cal O}\left(\eta^2 N^3 t\right), \end{equation} as stated in \thm{maintheorem}. This scaling represents an exponential improvement in precision as compared to Trotter-based methods. However, we suspect that the actual scaling of these algorithms is much better for real molecules, just as has been observed for the Trotter-based algorithms \cite{Poulin2014,BabbushTrotter}. Furthermore, the approach detailed here requires fewer qubits than any other approach to quantum simulation of chemistry in the literature. \section{Summary of Results} \label{sec:summary} In our previous work \cite{BabbushSparse1}, simulation of the molecular Hamiltonian was performed in second quantization using Taylor series simulation methods to give a gate count scaling as $\widetilde{\cal O}(N^5 t)$. In this work, we use the configuration interaction representation of the Hamiltonian to provide an improved scaling of $\widetilde{\cal O}(\eta^2 N^3 t)$. This result is summarized by the following Theorem. 
\begin{theorem} \label{thm:maintheorem} Using atomic units in which $\hbar$, Coulomb's constant, and the charge and mass of the electron are unity, we can write the molecular Hamiltonian as \begin{align} \label{eq:electronic} H = - \sum_i \frac{\nabla_{\vec r_i}^2}{2} - \sum_{i,j} \frac{Z_i}{\|\vec R_i - \vec r_j \|} + \sum_{i, j>i} \frac{1}{\|\vec r_i - \vec r_j \|} \end{align} where $\vec R_i$ are the nuclear coordinates, $\vec r_j$ are the electron coordinates, and $Z_i$ are the nuclear atomic numbers. Consider a basis set of $N$ spin-orbitals satisfying the following conditions: \begin{enumerate} \item each orbital takes significant values up to a distance at most logarithmic in $N$, \item beyond that distance the orbital decays exponentially, \item the maximum value of each orbital, and its first and second derivatives, scale at most logarithmically in $N$, \item and the value of each orbital can be evaluated with complexity $\widetilde {\cal O}(1)$. \end{enumerate} Evolution under the Hamiltonian of \eq{electronic} can be simulated in this basis for time $t$ within error $\epsilon>0$ with a gate count scaling as $\widetilde{\cal O}(\eta^2 N^3 t)$, where $\eta$ is the number of electrons in the molecule. \end{theorem} We note that these conditions will be satisfied for most, but not all, quantum chemistry simulations. To understand the limitations of these conditions, we briefly discuss the concept of a model chemistry (i.e.\ standard basis set specifications) and how model chemistries are typically selected for electronic structure calculations. There are thousands of papers which study the effectiveness of various basis sets developed for the purpose of representing molecules \cite{Huzinaga85}. These model chemistries associate specific orbital basis functions with each atom in a molecule. 
For example, wherever Nitrogen appears in a molecule a model chemistry would mandate that one add to the system certain basis functions which are centered on Nitrogen and have been pre-optimized for Nitrogen chemistry; different basis functions would be associated with each Phosphorus, and so on. In addition to convenience, the use of standardized model chemistries helps chemists to compare different calculations and reproduce results. Within a standard model chemistry, orbital basis functions are almost always represented as linear combinations of pre-fitted Gaussians which are centered on each atom. Examples of such model chemistries include Slater Type Orbitals (e.g.\ STO-3G), Pople Basis Sets (e.g.\ 6-31G*) and correlation consistent basis sets (e.g.\ cc-pVTZ). We note that all previous studies on quantum algorithms for quantum chemistry in an orbital basis have advocated the use of one of these models. Simulation within any of these model chemistries would satisfy the conditions of our theorem because the basis functions associated with each atom have bounded maximum values and derivatives, and fixed distances beyond which each orbital decays exponentially. Similarly, when molecular instances grow because more atoms are added to the system it is standard practice to perform these progressively larger calculations using the same model chemistry, so the conditions of Theorem 1 are satisfied. For instance, in a chemical series such as progressively larger Hydrogen rings or progressively longer alkane chains or protein sequences, these conditions would be satisfied. We note that periodic systems such as conducting metals might require basis sets (e.g.\ plane waves) violating the conditions of Theorem 1. When systems grow because atoms in the molecule are replaced with heavier atoms, the orbitals do tend to grow in volume and their maximum values might increase (even within a model chemistry). 
However, there are only a finite number of elements on the periodic table so this is irrelevant for considerations of asymptotic complexity. Finally, we point out that these conditions do not hold if the simulation is performed in the canonical molecular orbital basis, but this is not a problem for our approach since the Hartree-Fock state can easily be prepared in the atomic orbital basis at a cost that is quadratic in the number of spin-orbitals. We discuss this procedure further in \sec{encoding}. The simulation procedure of Ref.~\cite{Berry2015} requires a decomposition of the Hamiltonian into a weighted sum of unitary matrices. In \cite{BabbushSparse1}, we decomposed the molecular Hamiltonian in such a way that all the coefficients were integrals, i.e. \begin{align} \label{eq:unit_sum} H = \sum_\ell W_\ell H_\ell \quad \quad \quad \quad W_\ell = \int \! w_\ell \!\left(\vec z\right) \, \dd\vec z, \end{align} where the $H_\ell$ are unitary operators, and the $w_\ell \!\left(\vec z\right)$ are determined by the procedure. We then showed how one could evolve under $H$ while simultaneously computing these integrals. In this paper, we investigate a different representation of the molecular Hamiltonian with the related property that the Hamiltonian matrix elements $H^{\alpha\beta}$ can be expressed as integrals, \begin{equation} \label{eq:int_H2} H^{\alpha\beta} = \int \! \aleph^{\alpha\beta}(\vec z) \, \dd\vec z, \end{equation} or a sum of a limited number of integrals. We first decompose the Hamiltonian into a sum of one-sparse Hamiltonians, each of which has only a single integral in its matrix entries. We then discretize these integrals and further decompose the Hamiltonian into a sum of self-inverse operators, ${\cal H}_{\ell,\rho}$. 
Using this decomposition, we construct a circuit called $\textsc{select}({\cal H})$ which selects and applies the self-inverse operators so that \begin{equation} \label{eq:selectH} \textsc{select}\left({\cal H}\right) \ket{\ell} \ket{\rho} \ket{\psi} = \ket{\ell} \ket{\rho} {\cal H}_{\ell,\rho} \ket{\psi}. \end{equation} By repeatedly calling $\textsc{select}({\cal H})$, we are able to evolve under $H$ with an exponential improvement in precision over Trotter-based algorithms. The CI matrix is a compressed representation of the molecular Hamiltonian that requires asymptotically fewer qubits than all second-quantized algorithms for chemistry. Though the CI matrix cannot be expressed as a sum of polynomially many local Hamiltonians, a paper by Toloui and Love \cite{Toloui2013} demonstrated that the CI matrix can be decomposed into a sum of ${\cal O}(N^4)$ 1-sparse Hermitian operators, where $N$ is the number of spin-orbitals. We provide in this paper a new decomposition of the CI matrix into a sum of ${\cal O}(\eta^2 N^2)$ 1-sparse Hermitian operators, where $\eta \ll N$ is the number of electrons in the molecule. This new decomposition enables our improved scaling. Using techniques introduced in \cite{Berry2013}, we further decompose these 1-sparse operators into unitary operators which are also self-inverse. As a consequence of the self-inverse decomposition, the Hamiltonian is an equally weighted sum of unitaries. $\textsc{select}({\cal H})$ requires the ability to compute the entries of the CI matrix; accordingly, we can use the same strategy for computing integrals on-the-fly that was introduced in \cite{BabbushSparse1}, but this time our Hamiltonian is of the form in \eq{int_H2}. Using this approach, the simulation of evolution over time $t$ then requires $\widetilde{\cal O}(\eta^2 N^2 t)$ calls to $\textsc{select}({\cal H})$. 
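As a toy illustration of \eq{selectH} (with two stand-in self-inverse operators rather than the actual CI matrix decomposition, and a single index $\ell$), $\textsc{select}({\cal H})$ is simply the block-diagonal operator $\sum_{\ell} \ket{\ell}\!\bra{\ell} \otimes {\cal H}_\ell$:

```python
import numpy as np

# two stand-in self-inverse (Hermitian and unitary) operators
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
ops = [X, Z]
L, d = len(ops), 2

# select(H) = sum_l |l><l| (x) H_l, block diagonal in the selection index
selectH = np.zeros((L * d, L * d), dtype=complex)
for l, H_l in enumerate(ops):
    proj = np.zeros((L, L)); proj[l, l] = 1
    selectH += np.kron(proj, H_l)

psi = np.array([0.6, 0.8], dtype=complex)
for l in range(L):
    ket_l = np.zeros(L); ket_l[l] = 1
    out = selectH @ np.kron(ket_l, psi)
    assert np.allclose(out, np.kron(ket_l, ops[l] @ psi))
print("select(H)|l>|psi> = |l> H_l |psi> verified")
```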
To implement $\textsc{select}({\cal H})$, we make calls to the CI matrix oracle as described in \sec{oracle}, which requires $\widetilde{\cal O}(N)$ gates. This scaling is due to using a database approach to computing the orbitals, where a sequence of $N$ controlled operations is performed. This causes our overall approach to require $\widetilde{\cal O}(\eta^2 N^3 t)$ gates. As in \cite{Toloui2013}, the number of qubits is $\widetilde{\cal O}(\eta)$ rather than $\widetilde{\cal O}(N)$, because the compressed representation stores only the indices of occupied orbitals, rather than occupation numbers of all orbitals. To summarize, our algorithm with improved gate count scaling of $\widetilde{\cal O}(\eta^2 N^3 t)$ proceeds as follows: \begin{enumerate} \item Represent the molecular Hamiltonian in \eq{electronic} in first quantization using the CI matrix formalism. This requires selection of a spin-orbital basis set, chosen such that the conditions in \thm{maintheorem} are satisfied. \item Decompose the Hamiltonian into sums of self-inverse matrices approximating the required molecular integrals \textit{via} the method of \sec{decomp1}. \item Query the CI matrix oracle to evaluate the above self-inverse matrices, which we describe in \sec{oracle}. \item Simulate the evolution of the system over time $t$ using the method of \cite{Berry2015}, which is summarized in \sec{simulation}. \end{enumerate}
\section{Introduction} Processor sharing (PS) is one of the most interesting service disciplines in queueing theory. The $M/M/1$-PS queue assumes Poisson arrivals with rate $\lambda$ and exponential i.i.d. service times with mean $1/\mu$. The traffic intensity is $\rho=\lambda/\mu$. Here we consider a system which can accommodate at most $K$ customers. If the system is filled to capacity, we assume that further arrivals are turned away and lost. Most past work on PS models deals with systems that have an infinite capacity of customers. In \cite{COF}, Coffman, Muntz, and Trotter analyzed the $M/M/1$-PS queue, and derived an expression for the Laplace transform of the sojourn time distribution, conditioned on both the number of other customers seen, and the amount of service required, by an arriving customer. Using the results in \cite{COF}, Morrison \cite{MO} obtains asymptotic expansions for the unconditional sojourn time distribution in the heavy traffic limit, where the Poisson arrival rate $\lambda$ is nearly equal to the service rate $\mu$ (thus $\rho=\lambda/\mu\uparrow 1$). A service discipline seemingly unrelated to PS is that of random order service (ROS), where customers are chosen for service at random. In \cite{PO} Pollaczek derives an explicit integral representation for the generating function of the waiting time distribution $\mathbf{W}_{\mathrm{ROS}}$, from which the following tail behavior is obtained \begin{equation}\label{S1_tail} \Pr\left[\mathbf{W}_{\mathrm{ROS}}>t\right]\sim e^{-\alpha t-\beta t^{1/3}}\gamma t^{-5/6}, t\rightarrow\infty. \end{equation} Here $\alpha$, $\beta$ and $\gamma$ are explicitly computed constants, with $\alpha=(1-\sqrt{\rho})^2$. 
Cohen \cite{COH} establishes the following relationship between the sojourn time $\mathbf{V}_{\mathrm{PS}}$ in the PS model and the waiting time $\mathbf{W}_{\mathrm{ROS}}$ in the ROS model, \begin{equation}\label{S1_equi} \rho\Pr\left[\mathbf{V}_{\mathrm{PS}}>t\right]=\Pr[\mathbf{W}_{\mathrm{ROS}}>t], \end{equation} which extends also to the more general $G/M/1$ case. In \cite{BO} relations of the form (\ref{S1_equi}) are explored for other models, such as finite capacity queues, repairman problems, and networks. The present finite capacity model will have purely exponential behavior for $t\to\infty$, and thus the subexponential ($e^{-\beta\,t^{1/3}}$) and algebraic ($t^{-5/6}$) factors that appear in (\ref{S1_tail}) will be absent. Writing $\Pr[\mathbf{V}>t]\sim C\,e^{-\delta t}$ for $t\to\infty$ for the finite capacity model, the relaxation rate $\delta=\delta(K)$ depends on the capacity, and we shall study its behavior for $K\to\infty$ and for various values of $\rho$. In particular, if $\rho<1$ and $K\to\infty$ it will prove instructive to compare the tail behaviors when $K=\infty$ (where (\ref{S1_tail}), (\ref{S1_equi}) apply) and when $K$ is large but finite. Knessl \cite{knessl1990} obtains asymptotic expansions for the first two conditional sojourn time moments for the finite capacity model. The assumption is that $K\gg 1$ and separate analyses are carried out for the cases $\rho<1$, $\rho=1$ and $\rho>1$. In \cite{knessl1993} he obtains expansions for the sojourn time distribution by performing asymptotic analyses for the three scales $\rho-1=O(K^{-1})$; $\rho-1=bK^{-1/2},\, b>0$ and $\rho>1$. The main purpose of this note is to give an explicit exact expression for the Laplace transform of the sojourn time distribution in the finite capacity $M/M/1$-PS model. We apply a Laplace transform to the basic evolution equation and solve it using a discrete Green's function. The solution is summarized in Theorem 2.1. 
Then in Theorem 2.2 we locate the dominant singularity, which leads to the tail behavior, for $K\gg 1$. \section{Summary of results} We set the service rate $\mu=1$. Then the traffic intensity is $\rho=\lambda>0$. We define the conditional density of the sojourn time $\mathbf{V}$ by \begin{equation*}\label{S2_def_pnt} p_n(t)dt=\Pr\big[\mathbf{V}\in(t,t+dt)\big|\mathbf{N(0^-)}=n], \quad 0\leq n\leq K-1. \end{equation*} Here $\mathbf{N(0^-)}$ is the number of customers present in the system immediately before the tagged customer arrives. From \cite{knessl1993}, the quantity $p_n(t)$ satisfies the evolution equation \begin{equation}\label{S2_recu} p'_n(t)=\rho\; p_{n+1}(t)-(1+\rho)\; p_n(t)+\frac{n}{n+1}\;p_{n-1}(t),\quad 0\leq n\leq K-1, \end{equation} for $t>0$, with the initial condition $p_n(0)=\frac{1}{n+1}$. If $n=K-1$ the term $\rho\, p_{n+1}(t)$ is absent, and we shall use the ``artificial'' boundary condition $p_{_{K-1}}(t)=p_{_K}(t)$. Taking the Laplace transform of (\ref{S2_recu}) with $\widehat{p}_n(\theta)=\int^\infty_0 p_n(t)e^{-\theta t}dt$ and multiplying by $n+1$, we have \begin{equation}\label{S2_recu2} (n+1)\;\rho\;\widehat{p}_{n+1}(\theta)-(n+1)\;(1+\rho+\theta)\;\widehat{p}_n(\theta)+n\;\widehat{p}_{n-1}(\theta)=-1 \end{equation} for $n=0,1,...,K-1$ with the boundary condition \begin{equation}\label{S2_bc2} \widehat{p}_{_{K-1}}(\theta)=\widehat{p}_{_K}(\theta). \end{equation} Solving the recurrence equation (\ref{S2_recu2}) with (\ref{S2_bc2}), we obtain the following result. \begin{theorem} \label{th1} The Laplace-Stieltjes transform of the conditional sojourn time density has the following form: \begin{equation}\label{S2_th1_phat} \widehat{p}_n(\theta)=MG_n\sum_{l=0}^n\rho^lH_l+MH_n\sum_{l=n+1}^{K-1}\rho^lG_l-\frac{\Delta G_K}{\Delta H_K}MH_n\sum_{l=0}^{K-1}\rho^lH_l \end{equation} for $n=0,1,...,K-2$ and \begin{equation}\label{S2_th1_phat_K} \widehat{p}_{_{K-1}}(\theta)=\frac{1}{K\rho^K\Delta H_K}\sum_{l=0}^{K-1}\rho^lH_l. 
\end{equation} Here \begin{equation}\label{S2_th1_M} M=M(\theta)\equiv z_-\Big(\frac{z_+}{z_-}\Big)^\alpha, \end{equation} \begin{equation}\label{S2_th1_G} G_n=G_n(\theta)\equiv\int_0^{z_-}z^n(z_+-z)^{-\alpha}(z_--z)^{\alpha-1}dz, \end{equation} \begin{equation}\label{S2_th1_H} H_n=H_n(\theta)\equiv\frac{e^{i\alpha\pi}}{2\pi i}\int_{\mathcal{C}}z^n(z_+-z)^{-\alpha}(z-z_-)^{\alpha-1}dz, \end{equation} $\mathcal{C}$ is a closed contour in the complex $z$-plane that encircles the segment $[z_-,z_+]$ of the real axis counterclockwise and \begin{equation*}\label{S2_th1_deltag} \Delta G_K=G_K-G_{K-1}, \end{equation*} \begin{equation}\label{S2_th1_deltah} \Delta H_K=H_K-H_{K-1}, \end{equation} \begin{equation}\label{S2_th1_z} z_\pm=z_\pm(\theta)\equiv\frac{1}{2\rho}\Big[1+\rho+\theta\pm\sqrt{(1+\rho+\theta)^2-4\rho}\Big], \end{equation} \begin{equation}\label{S2_th1_alpha} \alpha=\alpha(\theta)\equiv\frac{z_+}{z_+-z_-}. \end{equation} \end{theorem} The singularities of $\widehat{p}_n(\theta)$ are poles, which are all real, and solutions of $H_K(\theta)=H_{K-1}(\theta)$. A spectral expansion can be given in terms of these poles, but we believe that (\ref{S2_th1_phat}) is more useful for certain asymptotic analyses, such as $K\to\infty$. Below we give the least negative pole $\theta_s$, which is also the tail exponent, i.e. $\lim_{t\to\infty}\{t^{-1}\log\Pr[\mathbf{V}>t]\}$. \begin{theorem} \label{th2} The dominant singularity $\theta_s$ in the $\theta$-plane has the following asymptotic expansions, for $K\to\infty$: \begin{enumerate} \item $\rho<1$, \begin{equation}\label{S2_th2_rho<1} \theta_s=-(1-\sqrt{\rho})^2-\frac{\sqrt{\rho}}{K}+\frac{\sqrt{\rho}\,r_0}{K^{4/3}}-\frac{8\sqrt{\rho}\,r_0^2}{15K^{5/3}}+O(K^{-2}), \end{equation} where $r_0\approx -2.3381$ is the largest root of the Airy function $Ai(z)$. 
\item $\rho=1+\eta\,K^{-2/3}$ with $\eta=O(1)$, \begin{equation}\label{S2_th2_rho=1} \theta_s=-\frac{1}{K}+\frac{r_1}{K^{4/3}}-\frac{16r_1^3+8\eta^2\,r_1^2+(\eta^4+19\eta)r_1+\eta^3+9}{30r_1\,K^{5/3}}+O(K^{-2}), \end{equation} where $r_1=r_1(\eta)$ is the largest root of \begin{equation}\label{S2_th2_rho=1_r1} \frac{Ai'(r_1+\eta^2/4)}{Ai(r_1+\eta^2/4)}=-\frac{\eta}{2}. \end{equation} \item $\rho>1$, \begin{equation}\label{S2_th2_rho>1} \theta_s=-\frac{1}{K}-\frac{1}{(\rho-1)K^{2}}-\frac{1}{(\rho-1)^2K^{3}}+\frac{\rho^2+1}{(\rho-1)^4K^4}+O(K^{-5}). \end{equation} \end{enumerate} \end{theorem} Comparing $e^{\theta_s t}$, with $\theta_s$ as in (\ref{S2_th2_rho<1}), to (\ref{S1_tail}), we see that both contain the dominant factor $e^{-(1-\sqrt{\rho})^2t}$, but whereas (\ref{S1_tail}) has the subexponential factor $e^{-\beta\,t^{1/3}}$, (\ref{S2_th2_rho<1}) leads to purely exponential correction terms that involve the maximal root of the Airy function. \section{Brief derivations} We use a discrete Green's function to derive (\ref{S2_th1_phat}) and (\ref{S2_th1_phat_K}). Consider the recurrence equation (\ref{S2_recu2}) and (\ref{S2_bc2}). The discrete Green's function $\mathcal{G}(\theta;n,l)$ satisfies \begin{eqnarray}\label{S3_dgf} &&(n+1)\rho\,\mathcal{G}(\theta;n+1,l)-(n+1)(1+\rho+\theta)\,\mathcal{G}(\theta;n,l)\nonumber\\ &&\quad\quad\quad+n\,\mathcal{G}(\theta;n-1,l)=-\delta(n,l),\quad (n,l=0,1,...,K-1) \end{eqnarray} and \begin{equation}\label{S3_dgf_bc} \mathcal{G}(\theta;K,l)=\mathcal{G}(\theta;K-1,l),\quad (l=0,1,...,K), \end{equation} where $\delta(n,l)=1_{\{n=l\}}$ is the Kronecker delta. Constructing the Green's function requires two linearly independent solutions of \begin{equation}\label{S3_dgf_Homo} (n+1)\rho\,G(\theta;n+1,l)-(n+1)(1+\rho+\theta)\,G(\theta;n,l)+n\,G(\theta;n-1,l)=0, \end{equation} which is the homogeneous version of (\ref{S3_dgf}). 
We seek solutions of (\ref{S3_dgf_Homo}) in the form $ G_n=\int_\mathcal{D}z^ng(z)dz,$ where the function $g(z)$ and the path $\mathcal{D}$ of integration in the complex $z$-plane are to be determined. Using the above form in (\ref{S3_dgf_Homo}) and integrating by parts yields \begin{eqnarray}\label{S3_IBP} &&z^ng(z)\big[\rho z^2-(1+\rho+\theta)z+1\big]\Big|_\mathcal{D}\nonumber\\ &&\quad\quad -\int_\mathcal{D}z^n\big[(\rho z^2-(1+\rho+\theta)z+1)g'(z)+\rho zg(z)\big]dz=0. \end{eqnarray} The first term represents contributions from the endpoints of the contour $\mathcal{D}$. If (\ref{S3_IBP}) is to hold for all $n=0,1,...,K-1$ the integrand must vanish, so that $g(z)$ must satisfy the differential equation $$\big[\rho z^2-(1+\rho+\theta)z+1\big]g'(z)+\rho zg(z)=0.$$ We denote the roots of $\rho z^2-(1+\rho+\theta)z+1=0$ by $z_+$ and $z_-$, with $z_+>z_->0$ for real $\theta$. These are given by (\ref{S2_th1_z}) and if $\alpha$ is defined by (\ref{S2_th1_alpha}), the solution for $g(z)$ is $ g(z)=(z_+-z)^{-\alpha}(z_--z)^{\alpha-1}.$ By appropriate choices of the contour $\mathcal{D}$, we obtain the two linearly independent solutions $G_n$ (cf. (\ref{S2_th1_G})) and $H_n$ (cf. (\ref{S2_th1_H})). We note that $G_n$ decays as $n\to\infty$, and is asymptotically given by \begin{equation}\label{S3_Gasymp} G_n\sim\frac{\Gamma(\alpha)}{n^\alpha}z_-^{\alpha+n}(z_+-z_-)^{-\alpha},\quad n\to\infty. \end{equation} However, $G_n$ becomes infinite as $n\to-1$, and $nG_{n-1}$ goes to a nonzero limit as $n\to 0$. Thus $G_n$ is not an acceptable solution at $n=0$. $H_n$ is finite as $n\to-1$, but grows as $n\to\infty$, with \begin{equation}\label{S3_Hasymp} H_n\sim\frac{n^{\alpha-1}}{\Gamma(\alpha)}z_+^{n+1-\alpha}(z_+-z_-)^{\alpha-1},\quad n\to\infty. 
\end{equation} Thus, the discrete Green's function can be represented by \begin{eqnarray}\label{S3_G_step} \mathcal{G}(\theta;n,l) = \left\{ \begin{array}{ll} AH_n+BG_n & \textrm{if $l\leq n\leq K$}\\ CH_n & \textrm{if $0\leq n\leq l\leq K$}. \end{array} \right. \end{eqnarray} Here $A$, $B$ and $C$ depend upon $\theta$, $l$ and $K$. Using continuity of $\mathcal{G}$ at $n=l$ and the boundary condition (\ref{S3_dgf_bc}), we obtain from (\ref{S3_G_step}) $$ A=\frac{H_l\Delta G_{{K}}}{H_l\Delta G_{{K}}-G_l\Delta H_{{K}}}C,\quad \quad B=-\frac{H_l\Delta H_{K}}{H_l\Delta G_{{K}}-G_l\Delta H_{{K}}}C.$$ Hence, (\ref{S3_G_step}) can be rewritten as \begin{eqnarray*}\label{S3_G_step2} \mathcal{G}(\theta;n,l) = \left\{ \begin{array}{ll} {\displaystyle\frac{H_n\Delta G_K-G_n\Delta H_K}{H_l\Delta G_{{K}}-G_l\Delta H_{{K}}}\,CH_l} & \textrm{if $l\leq n\leq K$}\bigskip\\ CH_n & \textrm{if $0\leq n\leq l\leq K$}. \end{array} \right. \end{eqnarray*} To determine $C$, we let $n=l$ in (\ref{S3_dgf}) and use the fact that both $G_l$ and $H_l$ satisfy (\ref{S3_dgf_Homo}) with $n=l$, which gives \begin{equation}\label{S3_C} C=\frac{G_l\Delta H_{{K}}-H_l\Delta G_{{K}}}{(l+1)\,\rho\,\Delta H_K(G_l\,H_{l+1}-G_{l+1}\,H_l)}. \end{equation} From (\ref{S3_dgf_Homo}) we can infer a simple difference equation for the discrete Wronskian $G_l\,H_{l+1}-G_{l+1}\,H_l$, whose solution we write as \begin{equation}\label{S3_G1} G_l\,H_{l+1}-G_{l+1}\,H_l=\frac{1}{C_1\,(l+1)\,\rho\,^l}, \end{equation} where $C_1=C_1(\theta)$ depends upon $\theta$ only. Letting $l\to\infty$ in (\ref{S3_G1}) and using the asymptotic results in (\ref{S3_Gasymp}) and (\ref{S3_Hasymp}), we determine $C_1$ as $C_1=\rho M$ and then by (\ref{S3_C}) obtain $$C=\frac{G_l\Delta H_{{K}}-H_l\Delta G_{{K}}}{\Delta H_K}\rho^lM,$$ where $M=M(\theta)$ is defined by (\ref{S2_th1_M}). Then, we multiply (\ref{S3_dgf}) by the solution $\widehat{p}_l(\theta)$ to (\ref{S2_recu2}) and sum over $0\leq l\leq K-1$. 
After some manipulation and using (\ref{S3_dgf_bc}), this yields $$\widehat{p}_n(\theta)=\sum_{l=0}^{K-1}\mathcal{G}(\theta;n,l),\quad n=0,1,...,K-1,$$ which is equivalent to (\ref{S2_th1_phat}) and (\ref{S2_th1_phat_K}). In the remainder of this section, we obtain the dominant singularity of $\widehat{p}_n(\theta)$ in the $\theta$-plane. From (\ref{S2_th1_phat}), the dominant singularity comes from solving $\Delta H_{{K}}=H_{{K}}-H_{{K-1}}=0$. We first consider $\rho<1$. We evaluate $H_n$ in (\ref{S2_th1_H}) by branch cut integration, which yields \begin{equation}\label{S3_Hn} H_n=\frac{\sin({\alpha\pi})}{\pi}\int_{z_-}^{z_+}\xi^n\,(\xi-z_-)^{\alpha-1}(z_+-\xi)^{-\alpha}d\xi. \end{equation} Thus, we have \begin{equation}\label{S3_deltah} \Delta H_K=\frac{\sin({\alpha\pi})}{\pi}\int_{z_-}^{z_+}\frac{\xi-1}{\xi-z_-}\xi^{K-1}\left(\frac{\xi-z_-}{z_+-\xi}\right)^\alpha d\xi. \end{equation} Since the factor $\sin(\alpha \pi)$ appears in both the numerators and denominators in (\ref{S2_th1_phat}) and (\ref{S2_th1_phat_K}), zeros of $\sin(\alpha \pi)$ do not correspond to poles of $\widehat{p}_n(\theta)$. To be at a pole we must have the integral in (\ref{S3_deltah}) vanish. When $\rho<1$ and $K\gg 1$, the effects of finite capacity should become asymptotically negligible and the finite capacity model may be approximated by the corresponding infinite capacity model. Thus, from (\ref{S1_tail}) the dominant singularity $\theta_s$ should be close to $-\alpha=-(1-\sqrt{\rho})^2$. We scale $\theta=-(1-\sqrt{\rho})^2+s/K$ and let \begin{equation}\label{S3_xi} \xi=\frac{z_++z_-}{2}+\frac{z_+-z_-}{2}\,w \end{equation} in (\ref{S3_deltah}). 
After some calculation, the integral in (\ref{S3_deltah}) is approximately given by \begin{equation}\label{S3_delta_rho<1} \frac{1-\sqrt{\rho}}{\rho^{K/2}}\int_{-1}^1\Big[g(w,s)+O(K^{-1/2})\Big]\,e^{\sqrt{K}\,f(w,s)}\,dw, \end{equation} where \begin{equation*}\label{S3_f} f(w,s)=\rho^{-1/4}\,w\,\sqrt{s}+\frac{\rho^{1/4}}{2\sqrt{s}}\log\Big(\frac{1+w}{1-w}\Big), \end{equation*} \begin{equation*}\label{S3_g} g(w,s)=\frac{1}{\sqrt{1-w^2}}\exp\Big[\frac{s(1-w^2)}{2\sqrt{\rho}}\Big]. \end{equation*} By solving $\frac{\partial}{\partial w}f(w,s)=0$, we find that there are two saddle points, at $w_*=\pm\sqrt{1+\sqrt{\rho}/s}$. In order to be in the oscillatory range of (\ref{S3_delta_rho<1}) the two saddles must coalesce, which occurs at $w=0$ when $s=-\sqrt{\rho}$. Then we introduce the scaling $w=uK^{-1/6}$ and $s=\sqrt{\rho}(-1+rK^{-1/3})$ and expand the functions $f(w,s)$ and $g(w,s)$ as \begin{equation}\label{S3_f_asym} f(w,s)=-i\,\Big(ru+\frac{u^3}{3}\Big)K^{-1/2}-i\,\Big(\frac{r^2u}{2}+\frac{ru^3}{6}+\frac{u^5}{5}\Big)K^{-5/6}+O\big(K^{-7/6}\big), \end{equation} \begin{equation}\label{S3_g_asym} g(w,s)=e^{-1/2}\Big[1+\Big(\frac{r}{2}+u^2\Big)K^{-1/3}+O(K^{-2/3})\Big]. \end{equation} By using the definition of the Airy function \begin{equation*} Ai(z)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-i(t^3/3+zt)}dt, \end{equation*} along with (\ref{S3_f_asym}) and (\ref{S3_g_asym}), the integral in (\ref{S3_delta_rho<1}) can be evaluated as \begin{equation}\label{S3_delta_rho<1_2} 2\pi e^{-1/2}K^{-1/6}\bigg\{Ai(r)+K^{-1/3}\Big[\frac{2}{15}rAi(r)+\frac{8}{15}r^2Ai'(r)\Big]+O(K^{-2/3})\bigg\}. \end{equation} To find the dominant singularity, (\ref{S3_delta_rho<1_2}) must vanish. It follows that $r$ is asymptotic to the maximal root of $Ai$. To get a refined approximation, we expand $r$ as $r=r_0+\alpha_0K^{-1/3}+O(K^{-2/3})$ and calculate $\alpha_0$ from (\ref{S3_delta_rho<1_2}). 
Then using $\theta=-(1-\sqrt{\rho})^2-\sqrt{\rho}/K+\sqrt{\rho}\,r\,K^{-4/3}$ we obtain (\ref{S2_th2_rho<1}). Next, we consider $\rho=1+\eta K^{-2/3}$ with $\eta=O(1)$. We now scale $\theta=-1/K+R\,K^{-4/3}$ and $w=u\,K^{-1/6}$, with $R$ and $u=O(1)$. Then the leading order approximation to the integrand in (\ref{S3_deltah}) is \begin{equation}\label{S3_integrand_rho=1} e^{-K^{1/3}\eta/2-1/2}K^{-5/6}\Big[\big(i\,u-\frac{\eta}{2}\big)+O(K^{-1/3})\Big]\exp\Big\{-i\Big[\frac{u^3}{3}+\big(R+\frac{\eta^2}{4}\big)\,u\Big]\Big\}. \end{equation} Integrating (\ref{S3_integrand_rho=1}) over $u$ now leads to \begin{equation}\label{S3_delta_rho=1_2} 2\pi e^{-K^{1/3}\eta/2-1/2}K^{-5/6}\Big[-\frac{\eta}{2}Ai\big(R+\frac{\eta^2}{4}\big)-Ai'\big(R+\frac{\eta^2}{4}\big)+O(K^{-1/3})\Big]. \end{equation} Setting (\ref{S3_delta_rho=1_2}) equal to zero we obtain the expansion $R=r_1+\alpha_1K^{-1/3}+O(K^{-2/3})$, which leads to $\theta_s\sim-1/K+r_1K^{-4/3}$, where $r_1$ satisfies (\ref{S2_th2_rho=1_r1}). We note that to obtain the $O(K^{-5/3})$ term in (\ref{S2_th2_rho=1}), we needed the $O(K^{-1/3})$ correction terms in (\ref{S3_integrand_rho=1}) and (\ref{S3_delta_rho=1_2}), which we do not give here. Finally, we consider $\rho>1$. We use (\ref{S2_th1_H}) and rewrite $\Delta H_K$ as \begin{equation}\label{S3_delta_rho>1} \Delta H_K=\frac{e^{i\alpha\pi}}{2\pi i}\int_{\mathcal{C}}z^{K-1}(z-1)\frac{(z-z_-)^{\alpha-1}}{(z_+-z)^\alpha}dz. \end{equation} Scaling $z=z_++u/K$, the contour $\mathcal{C}$ may be approximated by the contour $\mathcal{E}$, which starts at $-\infty-i0$, below the real axis, circles the origin in the counterclockwise direction and ends at $-\infty+i0$, above the real axis. 
Thus, (\ref{S3_delta_rho>1}) becomes \begin{equation}\label{S3_delta_rho>1_2} \Delta H_K\sim K^{\alpha-1}z_+^{K-1}(z_+-z_-)^{\alpha-1}\Big[I_1+I_2/K+I_3/K^2+O(K^{-3})\Big], \end{equation} where the $I_j$ are expressible in terms of the $\Gamma$ function, with \begin{eqnarray*} I_1&\equiv&\frac{1}{2\pi i}\int_{\mathcal{E}}e^{u/z_+}(z_+-1)u^{-\alpha}du\\ &=&z_+^{1-\alpha}(z_+-1)/\Gamma(\alpha), \end{eqnarray*} and $I_2$ and $I_3$ can be given as similar contour integrals. To find the dominant singularity, (\ref{S3_delta_rho>1_2}) must vanish. Viewing the $I_j=I_j(\theta)$ as functions of $\theta$, $I_1$ will vanish when $z_+=z_+(\theta)=1$, which occurs at $\theta=0$ if $\rho>1$. Thus we expand $\theta$ as $\theta=\alpha_2K^{-1}+\beta_2K^{-2}+O(K^{-3})$. Then (\ref{S3_delta_rho>1_2}) yields $$I_1'(0)\theta+\frac{1}{2}I_1''(0)\theta^2+K^{-1}\big[I_2(0)+I_2'(0)\theta\big]+K^{-2}I_3(0)=O(K^{-3}),$$ so that $\alpha_2=-I_2(0)/I_1'(0)$ and $$\beta_2=-\Big[\frac{1}{2}I_1''(0)\alpha_2^2+I_2'(0)\alpha_2+I_3(0)\Big]\Big/I_1'(0),$$ which leads to $\theta_s\sim-1/K-1/[(\rho-1)K^2]$. We note that to obtain the higher order approximations in (\ref{S2_th2_rho>1}), we needed to compute two more terms in (\ref{S3_delta_rho>1_2}). This concludes our derivation.
\section{Introduction} Motion of particles in the vicinity of black holes is a subject that continues to attract interest. In doing so, a special role is played by circular orbits - see, e.g., recent papers \cite{rufk}, \cite{ruf3} and references therein. Especially, this concerns the innermost stable circular orbit (ISCO). It is important in phenomena connected with accretion discs and properties of cosmic plasma \cite{ac1}, \cite{ac2}. Apart from astrophysics, such orbits possess a number of nontrivial features and, therefore, are interesting from the theoretical viewpoint. In a ``classic'' paper \cite{72} it was shown that in the extremal limit the ISCO approaches the horizon. As a result, some subtleties arise here since the horizon is a lightlike surface, so a massive particle cannot lie within it exactly. Nowadays, near-horizon circular orbits for near-extremal and extremal rotating black holes are still a subject of debate \cite{ted11}, \cite{ind2}, \cite{m}, \cite{extc}. Quite recently, a new circumstance came into play that makes the properties of the ISCO important in a new context. Namely, it is the ISCO that turns out to be a natural venue for realization of the so-called BSW\ effect. Several years ago, it was shown by Ba\~{n}ados, Silk and West that if two particles collide near the black hole horizon of the extremal Kerr metric, their energy $E_{c.m.}$ in the centre of mass (CM) frame can grow without bound \cite{ban}. This is called the BSW effect, after the names of its authors. These findings stimulated further study of high-energy collisions near black holes. The validity of the BSW effect was extended to more general extremal and nonextremal black holes. It was also found that there exists a version of this effect near nonrotating electrically charged black holes \cite{jl}. 
Another version of ultra-high energy collisions reveals itself in the magnetic field, even if a black hole is neutral, vacuum and nonrotating, so it is described by the Schwarzschild metric \cite{fr}. Generalization to the case when the background is described by the Kerr metric was done in \cite{weak}. In the BSW effect, one of the colliding particles should be so-called critical. This means that the energy and the angular momentum (or electric charge) of this particle should be fine-tuned. In particular, the corresponding critical condition is realized with good accuracy if a particle moves on a circular orbit close to the horizon. Therefore, an innermost stable circular orbit (ISCO) can play a special role in ultra-high energy collisions in astrophysical conditions. Without the magnetic field, this was considered in \cite{circkerr} for the Kerr black hole and in \cite{circ} for more general rotating black holes. Kinematically, the effect is achieved due to collision of a rapid so-called usual particle (without fine-tuning) and the slow fine-tuned particle on the ISCO \cite{k} (see also below). In \cite{fr} and \cite{weak} collisions were studied just near the ISCO in the magnetic field. In both cases, a black hole was taken to be a vacuum one. Meanwhile, in astrophysical conditions, black holes are surrounded by matter. By definition, such black holes are called ``dirty'', according to the terminology suggested in Ref. \cite{vis}. (We would like to stress that it is matter, not the electromagnetic field, that makes a black hole dirty.) The aim of our work is two-fold since two different issues overlap here. The first one is the properties of the ISCO near dirty black holes in the magnetic field, so both matter and the magnetic field are present. The second issue is the scenarios of high-energy particle collisions near such orbits. 
We derive general asymptotic formulas for the position of the ISCO in the magnetic field, which are used further for the evaluation of $E_{c.m.}$ and the examination of two scenarios of the BSW effect near the ISCO. In both works \cite{fr} and \cite{weak}, it was assumed that the magnetic field is weak in the sense that backreaction of the magnetic field on the metric is negligible but, at the same time, it is strong in the sense that it affects motion of test particles. Such a combination is self-consistent since the dimensionless parameter $b$ that controls the magnetic field strength contains a large factor $q/m$ relevant for motion of particles. Our approach is model-independent and is not restricted to some explicit background metric. Therefore, most of the formulas apply also to metrics which are affected by the magnetic field. On the other hand, if the magnetic field is too strong, its backreaction on the metric can change the properties of $E_{c.m.}$ itself, as will be seen below. Thus we discuss two new features absent from previous works, in the sense that both matter and the magnetic field are taken into account in a model-independent way. It is worth noting that high-energy collisions in the magnetic field were studied also in another context, including scenarios not connected with the ISCO - see \cite{string} - \cite{tur2}. In general, it is hard to find and analyze the ISCO even in the Kerr or Kerr-Newman cases \cite{rufk}, \cite{ruf3}. However, it is the proximity to the horizon that enables us to describe some properties of the ISCO, even without specifying the metric (so in a model-independent way) and even with the magnetic field. This can be considered as one manifestation of the universality of black hole physics. The paper is organized as follows. In Sec. II, the metric and equations of motion are presented. In Section III, we give basic equations that determine the ISCO. In Sec. 
IV, we consider the ISCO in the magnetic field for near-extremal black holes and analyze the cases of small and large fields. In Sec. V the cases of a nonrotating (but dirty) and a slowly rotating black hole are discussed. As we have two small parameters (slow rotation and inverse field strength), we consider different relations between them separately. In Sec. VI we show that if a black hole rotates, even in the limit of strong magnetic field the ISCO radius does not tend to the horizon radius. In Sec. VII, it is shown that for extremal nonrotating black holes, for large $b$, the ISCO approaches the horizon radius. In Sec. VIII, it is shown that this property is destroyed by rotation. In Sec. IX, general formulas for $E_{c.m.}$ for particle collisions in the magnetic field are given. In Sec. X, we find the velocity of a particle on the ISCO and argue that the kinematic explanation of high-energy collisions is similar to that for the BSW effect \cite{k}. In Sec. XI, we apply general formulas of collision to different black hole configurations and different scenarios. In Sec. XII, the exact solution of the Einstein-Maxwell equations (static Ernst black hole) is chosen as background for collisions. This enables us to evaluate the role of backreaction of the magnetic field on $E_{c.m.}$. In Sec. XIII, the main results are summarized. Some technical points connected with cumbersome formulas are put in the Appendix. Throughout the paper we use units in which fundamental constants are $G=c=1$. \section{Metric and equations of motion} Let us consider the metric of the form \begin{equation} ds^{2}=-N^{2}dt^{2}+\frac{dr^{2}}{A}+R^{2}(d\phi -\omega dt)^{2}+g_{\theta }d\theta ^{2}\text{,} \end{equation} where the metric coefficients do not depend on $t$ and $\phi $. The horizon corresponds to $N=0$. 
We also assume that there is an electromagnetic field with the four-vector $A^{\mu }$ whose only nonvanishing component equals \begin{equation} A^{\phi }=\frac{B}{2}\text{.} \label{ab} \end{equation} In vacuum, this is an exact solution with $B=const$ \cite{wald}. We consider a configuration with matter (in this sense a black hole is ``dirty''), so in general $B$ may depend on $r$ and $\theta $. Let us consider motion of test particles in this background. The kinematic momentum is $p^{\mu }=mu^{\mu }$, where $m$ is the particle's mass and $u^{\mu }=\frac{dx^{\mu }}{d\tau }$ is the four-velocity, $\tau $ being the proper time and $x^{\mu }$ the coordinates. Then, the kinematic and generalized momenta are related by \begin{equation} p_{\mu }=P_{\mu }-qA_{\mu }\text{,} \end{equation} where $q$ is the particle's electric charge. Due to the symmetry of the metric, $P_{0}=-E$ and $P_{\phi }=L$ are conserved, where $E$ is the energy and $L$ is the angular momentum. We consider motion constrained within the equatorial plane, so $\theta =\frac{\pi }{2}$. Redefining the radial coordinate $r\rightarrow \rho $, we can always achieve that \begin{equation} A=N^{2} \label{an} \end{equation} within this plane. Then, equations of motion give us \begin{equation} \dot{t}=\frac{X}{N^{2}m}\text{,} \label{t} \end{equation} \begin{equation} \dot{\phi}=\frac{\beta }{R}+\frac{\omega X}{mN^{2}}\text{,} \label{ft} \end{equation} \begin{equation} m^{2}\dot{\rho}^{2}+V=0\text{,} \label{rt} \end{equation} \begin{equation} X=E-\omega L\text{,} \label{X} \end{equation} \begin{equation} \beta =\frac{\mathcal{L}}{R}-\frac{qBR}{2m}\text{,} \label{b} \end{equation} \begin{equation} V=m^{2}N^{2}(1+\beta ^{2})-X^{2}\text{.} \label{v} \end{equation} The dot denotes differentiation with respect to the proper time $\tau $. As usual, we assume the forward-in-time condition $\dot{t}>0$, so $X\geq 0$. 
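As a simple consistency check of (\ref{t})--(\ref{v}), one can verify symbolically that the four-velocity they define obeys the normalization $u_{\mu }u^{\mu }=-1$, which is precisely the content of (\ref{rt}). A minimal sketch (our own illustration; the metric functions are treated as fixed numbers at a point of the equatorial plane):

```python
import sympy as sp

# metric functions and constants of motion at a fixed equatorial point
N, R, omega, B = sp.symbols('N R omega B', positive=True)
m, E, L, q = sp.symbols('m E L q', positive=True)

X = E - omega*L                       # eq. (X)
beta = L/(m*R) - q*B*R/(2*m)          # eq. (b), with script-L = L/m
tdot = X/(N**2*m)                     # eq. (t)
phidot = beta/R + omega*X/(m*N**2)    # eq. (ft)
V = m**2*N**2*(1 + beta**2) - X**2    # eq. (v)
rhodot2 = -V/m**2                     # eq. (rt): m^2 rhodot^2 + V = 0

# ds^2 = -N^2 dt^2 + drho^2/N^2 + R^2 (dphi - omega dt)^2 on theta = pi/2
norm = -N**2*tdot**2 + rhodot2/N**2 + R**2*(phidot - omega*tdot)**2
assert sp.simplify(norm + 1) == 0     # u_mu u^mu = -1
```

The identity holds for arbitrary values of the metric functions, confirming that (\ref{rt}) with (\ref{v}) is exactly the normalization condition.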
Hereafter, we use the notations \begin{equation} \mathcal{L}\equiv \frac{L}{m}\text{, }\mathcal{E}=\frac{E}{m},\text{ }b\equiv \frac{qB_{+}R_{+}}{2m}\text{.} \label{bb} \end{equation} Subscripts ``+'', ``0'' denote quantities calculated on the horizon and on the ISCO, respectively. In what follows, we will use the Taylor expansion of the quantity $\omega $ near the horizon. We denote $x=\rho -\rho _{+}$, where $\rho _{+}$ is the horizon radius. Then, \begin{equation} \omega =\omega _{+}-a_{1}x+a_{2}x^{2}+... \label{om} \end{equation} \section{Equations determining ISCO} By definition, the ISCO is determined by the equations \cite{72} \begin{equation} V(\rho _{0})=0\text{,} \label{0} \end{equation} \begin{equation} \frac{dV}{d\rho }(\rho _{0})=0\text{,} \label{fd} \end{equation} \begin{equation} \frac{d^{2}V}{d\rho ^{2}}(\rho _{0})=0. \label{sd} \end{equation} Eqs. (\ref{v}), (\ref{0}) entail \begin{equation} X(\rho _{0})=mN(\rho _{0})\sqrt{1+\beta ^{2}(\rho _{0})} \label{xx} \end{equation} and eqs. (\ref{fd}), (\ref{sd}) turn into \begin{equation} \frac{1}{m^{2}}\frac{dV_{eff}}{d\rho }=\frac{d}{d\rho }[N^{2}(1+\beta ^{2})]+2\mathcal{L}\omega ^{\prime }\sqrt{1+\beta ^{2}}N=0\text{,} \label{1d} \end{equation} \begin{equation} \frac{1}{m^{2}}\frac{d^{2}V_{eff}}{d\rho ^{2}}=\frac{d^{2}}{d\rho ^{2}}[N^{2}(1+\beta ^{2})]-2\mathcal{L}^{2}\omega ^{\prime 2}+2\mathcal{L}\omega ^{\prime \prime }N\sqrt{1+\beta ^{2}}=0, \label{2d} \end{equation} where all quantities in (\ref{1d}), (\ref{2d}) are to be taken at $\rho =\rho _{0}$. Prime denotes the derivative with respect to $\rho $ (or, equivalently, $x$). In general, it is impossible to find exact solutions of eqs. (\ref{1d}), (\ref{2d}). Therefore, in the next sections we analyze separately different particular situations, with the main emphasis on the near-horizon region. In doing so, we develop different versions of the perturbation theory that generalize the ones of \cite{weak}. 
The ISCO radius, energy and angular momentum are represented as series with respect to the corresponding small parameter, truncated at the leading or subleading terms, similarly to \cite{weak}. \section{Near-extremal black holes} Let a black hole be nonextremal. In what follows, we are interested in the immediate vicinity of the horizon and use the Taylor series for the corresponding quantities. Then, near the horizon we have the expansion \begin{equation} N^{2}=2\kappa x+Dx^{2}+Cx^{3}+...\text{,} \label{N} \end{equation} where $\kappa $ has the meaning of the surface gravity. By definition, we call a black hole near-extremal if \begin{equation} \kappa \ll Dx_{0}\text{,} \end{equation} where $x_{0}=\rho _{0}-\rho _{+}$. Then, for the lapse function we have the expansion near the ISCO \begin{equation} N=x\sqrt{D}+\frac{\kappa }{\sqrt{D}}-\frac{\kappa ^{2}}{2D^{3/2}x}+\frac{C}{2\sqrt{D}}x^{2}+... \label{nx} \end{equation} Taking into account (\ref{2d}), after straightforward (but somewhat cumbersome) calculations, one can find that \begin{equation} -\frac{1}{2}\frac{dV_{eff}}{d\rho }=A_{2}x^{2}+A_{3}\frac{\kappa ^{2}}{x}+... \label{1} \end{equation} \begin{equation} \mathcal{L}a_{1}\approx \sqrt{DP}\text{,} \label{ld} \end{equation} \begin{equation} P\equiv 1+\beta ^{2}\text{,} \label{p} \end{equation} \begin{equation} A_{2}\approx \frac{D}{2}\frac{dP}{dx}+\frac{CP}{2}+\frac{a_{2}}{a_{1}}PD, \end{equation} \begin{equation} A_{3}=-\frac{P}{2D}, \end{equation} where $P$ and $\frac{dP}{dx}$ are to be taken at $x=x_{0}$ or, with the same accuracy, at $x=0$ (i.e., on the horizon). 
Then, \begin{equation} x_{0}^{3}\approx -\frac{A_{3}}{A_{2}}\kappa ^{2}=H^{3}\kappa ^{2}\text{,} \label{h} \end{equation} \begin{equation} H=\Big(\frac{P_{0}}{2DA_{2}}\Big)^{1/3}=\frac{1}{[D(2\frac{a_{2}}{a_{1}}D+C+\frac{D}{P}\frac{dP}{dx})]^{1/3}}\text{.} \label{H} \end{equation} From (\ref{xx}), (\ref{nx}) we have \begin{equation} N_{0}\approx \sqrt{D}H\kappa ^{2/3}\text{,} \label{nk} \end{equation} \begin{equation} X_{0}\approx m\sqrt{P_{+}}\sqrt{D}H\kappa ^{2/3}\text{.} \label{x23} \end{equation} Using (\ref{ld}), (\ref{p}) and (\ref{b}) we derive the equation for the value of the angular momentum $L_{0}$ on the ISCO: \begin{equation} \frac{\mathcal{L}_{0}^{2}}{R_{+}^{2}}+2b\frac{D}{d-D}\frac{\mathcal{L}_{0}}{R_{+}}-\frac{D(1+b^{2})}{d-D}=0, \end{equation} where \begin{equation} d\equiv R_{+}^{2}a_{1}^{2}. \label{da} \end{equation} To have a well-defined limit $b=0$, we demand $d-D>0$. We are interested in the positive root according to (\ref{ld}). Then, \begin{equation} \frac{\mathcal{L}_{0}(b)}{R_{+}}=-\frac{bD}{d-D}+\frac{\sqrt{D}}{d-D}\sqrt{d(1+b^{2})-D}, \label{l0} \end{equation} and, in the given approximation, \begin{equation} \beta _{0}=\frac{1}{d-D}[\sqrt{D}\sqrt{d(1+b^{2})-D}-bd]\text{, } \end{equation} \begin{equation} P_{0}=\frac{d}{d-D}-2\frac{b\sqrt{D}d}{(d-D)^{2}}\sqrt{d(1+b^{2})-D}+\frac{b^{2}d(d+D)}{(d-D)^{2}}\text{,} \end{equation} \begin{equation} \left( \frac{d\beta }{dx}\right) _{+}=-\frac{R_{+}^{\prime }}{R_{+}}\Big[b+\frac{\mathcal{L}_{0}(b)}{R_{+}}\Big]-\frac{B_{+}^{\prime }}{B_{+}}b\text{,} \end{equation} \begin{equation} A_{2}\approx D\beta _{0}\left( \frac{d\beta }{dx}\right) _{+}+\frac{CP_{0}}{2}+\frac{a_{2}}{a_{1}}P_{0}D\text{,} \label{a2} \end{equation} where we neglected the difference between $\left( \frac{d\beta }{dx}\right) _{+}$ and $\left( \frac{d\beta }{dx}\right) _{0}$. Eqs. (\ref{l0})--(\ref{a2}) give the expression for $H$ after substitution into (\ref{H}). To avoid cumbersome expressions, we leave it in the implicit form. 
Now, two different limiting cases can be considered. \subsection{Small magnetic field} If $B=0$, \begin{equation} \mathcal{L}_{0}(0)=\frac{\sqrt{D}}{\sqrt{a_{1}^{2}-\frac{D}{R_{+}^{2}}}}, \end{equation} \begin{equation} \beta (0)=\frac{\sqrt{D}}{\sqrt{d-D}}\text{,} \label{bsm} \end{equation} \begin{equation} P_{0}(0)\approx \frac{d}{d-D}=\frac{R_{+}^{2}a_{1}^{2}}{R_{+}^{2}a_{1}^{2}-D}, \end{equation} which agrees with eq. (44) of Ref. \cite{circ}. It follows from (\ref{xx}) and (\ref{x23}) that \begin{equation} \mathcal{E}(0)\approx \omega _{+}\frac{\sqrt{D}}{\sqrt{a_{1}^{2}-\frac{D}{R_{+}^{2}}}}\text{.} \end{equation} Let us consider small but nonzero $b$. We can find from (\ref{l0}) that \begin{equation} \mathcal{L}_{0}(b)\approx \mathcal{L}_{0}(0)-\mathcal{L}_{0}^{2}(0)\frac{b}{R_{+}}+O(b^{2}), \end{equation} \begin{equation} \mathcal{E}_{0}(b)\approx \omega _{0}\mathcal{L}_{0}+O(\kappa ^{2/3},b^{2})\text{.} \end{equation} \subsection{Large magnetic field} Let $b\gg 1$. Now, $P_{0}\sim b^{2}$, $A_{2}\sim b^{2}$. According to (\ref{h}), there exists a finite $\lim_{b\rightarrow \infty }H=H_{\infty }$. 
In doing so, we find from (\ref{l0}), (\ref{b}), (\ref{x23}) \begin{equation} \frac{\mathcal{L}_{0}}{R_{+}}\approx \frac{b\sqrt{D}}{\sqrt{d}+\sqrt{D}}=\frac{b\sqrt{D}}{a_{1}R_{+}+\sqrt{D}}\text{,} \label{lal} \end{equation} \begin{equation} \beta \approx -\frac{\sqrt{d}b}{\sqrt{d}+\sqrt{D}}=-\frac{R_{+}a_{1}b}{R_{+}a_{1}+\sqrt{D}}\text{,} \label{bs} \end{equation} \begin{equation} P_{0}\approx b^{2}\frac{d}{(\sqrt{d}+\sqrt{D})^{2}}, \end{equation} \begin{equation} X_{0}\approx m\sqrt{D}H_{\infty }\kappa ^{2/3}\frac{ba_{1}R_{+}}{a_{1}R_{+}+\sqrt{D}}\text{,} \end{equation} \begin{equation} \mathcal{E}\approx b\sqrt{D}\Big[\frac{\omega _{+}R_{+}}{a_{1}R_{+}+\sqrt{D}}+\sqrt{D}H_{\infty }\kappa ^{2/3}\frac{a_{1}R_{+}}{a_{1}R_{+}+\sqrt{D}}\Big]\text{.} \label{el} \end{equation} Thus according to (\ref{h}), in general the radius of the ISCO depends on the value of the magnetic field via the coefficient $H$. However, there is an exception. Let \begin{equation} C=0\text{, }R_{+}^{\prime }=0\text{, }B^{\prime }=0. \label{c} \end{equation} Then, \begin{equation} H^{3}=-\frac{A_{3}}{A_{2}}=\frac{a_{1}}{2a_{2}}\frac{1}{D^{2}}\text{,} \end{equation} so the dependence on $b$ drops out from the quantity $H$ and, correspondingly, from the ISCO radius (\ref{h}). One can check easily that the conditions (\ref{c}) are satisfied for the near-extremal Kerr metric in the magnetic field. This agrees with eq. (38) of \cite{weak}, where the observation was made that the magnetic field does not show up in the main corrections of the order $\kappa ^{2/3}$. Thus this is the point where dirty black holes behave qualitatively differently from the Kerr metric, in that the dependence of the ISCO radius on $b$ is much stronger than in the Kerr case. It is instructive to evaluate the relation between $H(0)$ and $H(\infty )$ for vanishing and large magnetic fields that results, according to (\ref{h}), in different values of the corresponding radii $x_{0}$. 
The dependence on the magnetic field is due to the term $\frac{1}{P}\frac{dP}{dx}$ in the denominator: \begin{equation} \frac{H^{3}(0)}{H^{3}(\infty )}=\frac{2\frac{a_{2}}{a_{1}}D+C+Dw_{b=\infty }}{2\frac{a_{2}}{a_{1}}D+C+Dw_{b=0}}\text{,\quad }w\equiv \frac{1}{P}\frac{dP}{dx}\text{.} \end{equation} One can find that \begin{equation} w_{b=0}=-\frac{2R_{+}^{\prime }}{R_{+}}\frac{D}{d}\text{,} \end{equation} \begin{equation} w_{b=\infty }=-2\Big\{\frac{R_{+}^{\prime }}{R_{+}}\Big[\frac{2\sqrt{D}+\sqrt{d}}{\sqrt{d}+\sqrt{D}}\Big]+\frac{B_{+}^{\prime }}{B_{+}}\Big\}\frac{\sqrt{d}+\sqrt{D}}{\sqrt{d}}. \end{equation} Thus for $d\sim D\sim C$, $H(0)\sim H(\infty )$. However, in general they can differ significantly. Say, for $C=0=B_{+}^{\prime }$ and $d\ll D$, $d\ll D\frac{R_{+}^{\prime }}{R_{+}}\frac{a_{1}}{a_{2}}$, we have $\frac{H^{3}(0)}{H^{3}(\infty )}\approx \frac{2d}{D}\ll 1$. As a result, the ISCO radius (\ref{h}) also may vary over a wide range. \section{Slowly rotating black hole} Now, we assume that $\kappa $ is not small, so the first term in (\ref{N}) dominates. Here, we will consider different cases separately. \subsection{Non-rotating black hole} Here, we generalize the results known for the Schwarzschild black hole \cite{ag}, \cite{fs} to a more general metric of a dirty static black hole. In eqs. (\ref{fd}), (\ref{sd}) we should put $a_{1}=0=a_{2}$. For a finite value of the magnetic field parameter $b$, the ISCO lies at some finite distance from the horizon. However, we will now show that in the limit $b\rightarrow \infty $ the radius of the ISCO tends to that of the horizon, with $x_{0}\sim b^{-1}$. This indeed happens provided the term with $L$ in (\ref{b}) is large and compensates the second one with $b$. 
Correspondingly, we write
\begin{equation}
\mathcal{L}=\mathcal{L}_{0}+\mathcal{L}_{1}\text{,}  \label{l01}
\end{equation}
where
\begin{equation}
\frac{\mathcal{L}_{0}}{R_{+}}=b\text{.}  \label{lb}
\end{equation}
For what follows, we introduce the quantity
\begin{equation}
\alpha =\frac{\mathcal{L}_{1}}{R_{+}}\text{,}  \label{al}
\end{equation}
with $\alpha =O(1)$. Then, near the horizon, where $x$ is small, we can use the Taylor expansion
\begin{equation}
\beta =\alpha -2\frac{\beta _{0}}{R_{+}}x-x\alpha \frac{R_{+}^{\prime }}{R_{+}}+\frac{\beta _{2}}{R_{+}^{2}}bx^{2}+...\text{,}  \label{expb}
\end{equation}
\begin{equation}
\beta _{2}=R_{+}^{\prime 2}-R_{+}R_{+}^{\prime \prime }-\frac{R_{+}^{\prime }R_{+}B_{+}^{\prime }}{B_{+}}-\frac{R_{+}^{2}B_{+}^{\prime \prime }}{2B_{+}}\text{,}  \label{b2}
\end{equation}
where
\begin{equation}
\beta _{0}=bs\text{, }  \label{bbs}
\end{equation}
\begin{equation}
s=R_{+}^{\prime }+\frac{1}{2}\frac{B_{+}^{\prime }R_{+}}{B_{+}}\text{.}  \label{sb}
\end{equation}
Now, $\beta _{0}\gg 1$ but, by assumption, $\beta $ is finite. In terms of the variable
\begin{equation}
u=\frac{\beta _{0}}{R_{+}}x\text{,}  \label{ux}
\end{equation}
it can be rewritten as
\begin{equation}
\beta =\alpha -2u+\frac{u}{\beta _{0}}(\frac{\beta _{2}u}{s}-\alpha c)+O(\beta _{0}^{-2})\text{,}  \label{bu}
\end{equation}
\begin{equation}
c=R_{+}^{\prime }\text{.}  \label{cr}
\end{equation}
It is clear from the above formulas that the expansion with respect to the coordinate $x$ is equivalent to the expansion with respect to inverse powers of the magnetic field, $b^{-1}$, so for $b\gg 1$ this procedure is reasonable.
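Since (\ref{bu}) is just the expansion (\ref{expb}) rewritten in the variable $u$ with $\beta _{0}=bs$ and $c=R_{+}^{\prime }$, the step can be verified symbolically. The following sketch is our addition (symbol names are ours) and simply checks the substitution:

```python
import sympy as sp

# Symbols: alpha, beta0 = b*s, and the near-horizon coefficients; Rp stands for R_+'.
u, alpha, b0, s, R, Rp, beta2 = sp.symbols('u alpha beta0 s R_plus Rp beta2', positive=True)

b = b0 / s        # beta_0 = b*s, eq. (bbs)
x = u * R / b0    # eq. (ux)

# Taylor expansion (expb) truncated at O(x^2), with x expressed through u
beta_x = alpha - 2*(b0/R)*x - x*alpha*Rp/R + (beta2/R**2)*b*x**2

# Claimed form (bu) with c = R_+'
beta_u = alpha - 2*u + (u/b0)*(beta2*u/s - alpha*Rp)

assert sp.simplify(beta_x - beta_u) == 0
print("rewriting (expb) -> (bu) confirmed")
```

The retained terms match identically, so at this order no $O(\beta _{0}^{-2})$ remainder is even generated.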
Then, after substitution of (\ref{bu}), we can represent (\ref{fd}) and (\ref{sd}) in the form of an expansion with respect to $\beta _{0}^{-1}$:
\begin{equation}
\frac{1}{m^{2}}\frac{dV_{eff}}{d\rho }=C_{0}+\frac{C_{1}}{\beta _{0}}+O(\beta _{0}^{-2})=0,  \label{ff}
\end{equation}
\begin{equation}
-\frac{1}{2m^{2}}\frac{d^{2}V_{eff}}{d\rho ^{2}}=-S_{1}\beta _{0}-S_{0}+O(\beta _{0}^{-1})=0\text{.}  \label{ss}
\end{equation}
Here, the coefficients at the leading powers are equal to
\begin{equation}
C_{0}=2\kappa (12u^{2}-8u\alpha +1+\alpha ^{2})\text{,}
\end{equation}
\begin{equation}
S_{1}=16\kappa (3u-\alpha )\text{.}
\end{equation}
Then, in the main approximation we have the equations $C_{0}=0$ and $S_{1}=0$ which give us
\begin{equation}
u=\frac{1}{\sqrt{3}},\alpha =\sqrt{3},\beta =\frac{1}{\sqrt{3}}.  \label{main}
\end{equation}
To find the corrections $O(b^{-1})$, we solve eqs. (\ref{ff}) and (\ref{ss}) perturbatively. In doing so, it is sufficient to substitute these values into the further coefficients $C_{1}$ and $S_{0}\,$. The results are listed in the Appendix. In the particular case of the Schwarzschild metric, $c=\beta _{2}=s=1$, $D=-r_{+}^{-2}$, $\kappa =(2R_{+})^{-1}$, $R_{+}=r_{+}$. Writing $r_{+}=2M$, where $M$ is the black hole mass, we have from (\ref{l01}), (\ref{al}), (\ref{ux}), (\ref{b1}), (\ref{lb3}) and (\ref{en})
\begin{equation}
\frac{r_{0}-r_{+}}{M}\approx \frac{2}{\sqrt{3}b}-\frac{8}{3b^{2}}\text{,}  \label{r1}
\end{equation}
\begin{equation}
\frac{\mathcal{L}}{R_{+}}\approx b+\sqrt{3}-\frac{1}{3b}\text{,}  \label{lsw}
\end{equation}
\begin{equation}
\mathcal{E}_{0}\approx \frac{2}{3^{3/4}\sqrt{b}}.  \label{ens}
\end{equation}
Eqs. (\ref{r1}), (\ref{ens}) agree with \cite{fr} and \cite{weak}. It is interesting that in terms of the variables $u,$ $\frac{\mathcal{L}_{0}}{R_{+}}$ and $b$ the result (\ref{main}) looks model-independent in the main approximation. This can be thought of as a manifestation of the universality of black hole physics near the horizon.
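The leading-order solution (\ref{main}) can be checked directly. The sketch below is our addition; it reads the constant term in $C_{0}$ as $1+\alpha ^{2}$, which is what the quoted solution requires:

```python
import sympy as sp

u, alpha, kappa = sp.symbols('u alpha kappa', positive=True)

# Leading coefficients of (ff), (ss); the constant term in C0 is taken as 1 + alpha**2
C0 = 2*kappa*(12*u**2 - 8*u*alpha + 1 + alpha**2)
S1 = 16*kappa*(3*u - alpha)

sol = {u: 1/sp.sqrt(3), alpha: sp.sqrt(3)}
assert sp.simplify(C0.subs(sol)) == 0
assert sp.simplify(S1.subs(sol)) == 0

# In the main approximation beta = alpha - 2u, eq. (bu)
beta = (alpha - 2*u).subs(sol)
assert sp.simplify(beta - 1/sp.sqrt(3)) == 0
print("u = 1/sqrt(3), alpha = sqrt(3), beta = 1/sqrt(3) verified")
```

Both leading equations vanish at the quoted values, and $\beta =\alpha -2u$ indeed gives $1/\sqrt{3}$.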
Dependence on a model reveals itself in the higher-order corrections.

\subsection{Extremely slow rotation}

Now, we consider rotation as a perturbation. Here, the angular velocity of rotation is the smallest parameter. Correspondingly, in the expressions (\ref{2d}), (\ref{sd}) we neglect the term with $L^{2}$ since it contains $\omega ^{\prime 2}.$ More precisely, we assume
\begin{equation}
\mathcal{L}a_{1}^{2}\ll a_{2}N\text{,}  \label{ln}
\end{equation}
so from (\ref{n0}), (\ref{lb3}) we have
\begin{equation}
R_{+}b^{3/2}a_{1}^{2}\ll a_{2}\sqrt{\kappa R_{+}}\text{.}  \label{lan}
\end{equation}
In the particular case of the slowly rotating Kerr metric, $\kappa \approx \frac{1}{2R_{+}}$, $a_{1}\sim \frac{a}{M^{3}}=\frac{a^{\ast }}{M^{2}}$, $a_{2}\sim \frac{a}{M^{4}}=\frac{a^{\ast }}{M^{3}}$, where $a=J/M$, $J$ is the angular momentum of a black hole, and $a^{\ast }=\frac{a}{M}$. Then, (\ref{ln}) reads
\begin{equation}
a^{\ast }b^{3/2}\ll 1\text{.}  \label{abk}
\end{equation}
There are two kinds of corrections: due to the magnetic field and due to rotation. One can check that the presence of rotation leads to the appearance in the series (\ref{ff}), (\ref{ss}) of half-integer inverse powers of $\beta _{0}$, in addition to the integer ones. In the main approximation, we consider both kinds of corrections as additive contributions. Omitting details, we list the results:
\begin{equation}
u\approx \frac{1}{\sqrt{3}}+\frac{1}{3bs}(\frac{5}{6}\frac{\beta _{2}}{s}+\frac{1}{3}\frac{DR_{+}}{\kappa }-\frac{3c}{2})-\frac{\sqrt{2}\sqrt{R_{+}}}{3^{5/4}\sqrt{\kappa }\sqrt{s}}a_{1}R_{+}\sqrt{b}\text{,}
\end{equation}
\begin{equation}
\frac{\mathcal{L}}{R_{+}}\approx b+\sqrt{3}-\sqrt{2}\frac{1}{3^{3/4}\sqrt{\kappa }\sqrt{s}}a_{1}R_{+}^{3/2}\sqrt{b},
\end{equation}
\begin{equation}
N_{0}\approx \frac{\sqrt{2\kappa R_{+}}}{3^{1/4}\sqrt{bs}}.
\label{n1}
\end{equation}
It follows from (\ref{X}), (\ref{xx}) that
\begin{equation}
X\approx \frac{2^{3/2}m}{3^{3/4}}\sqrt{\frac{\kappa R_{+}}{bs}},  \label{xsl}
\end{equation}
\begin{equation}
\mathcal{E}\approx R_{+}\omega _{+}b+\frac{\sqrt{\kappa R_{+}}}{\sqrt{b}\sqrt{s}}\frac{2^{3/2}}{3^{3/4}}\,\text{.}  \label{esr}
\end{equation}
For the slowly rotating Kerr metric, $R_{+}\approx 2M$,
\begin{equation}
\omega =\frac{R_{+}a}{r^{3}}+O(a^{2})\text{.}  \label{kom}
\end{equation}
In the main approximation the difference between the Boyer-Lindquist coordinate $r$ and the quasiglobal one $\rho $ has the order $a^{2}$ and can be neglected. Then,
\begin{equation}
a_{1}=\frac{3a^{\ast }}{2R_{+}^{2}}\text{,}  \label{a1}
\end{equation}
\begin{equation}
u\approx \frac{1}{\sqrt{3}}+\frac{1}{\sqrt{3}b}-\frac{4}{3b^{2}}-\frac{1}{3^{1/4}}\frac{a^{\ast }}{\sqrt{b}}\text{,}  \label{ua}
\end{equation}
\begin{equation}
\mathcal{E}\approx \frac{1}{\sqrt{b}}\frac{2}{3^{3/4}}+\frac{a^{\ast }b}{2}\text{,}  \label{eab}
\end{equation}
\begin{equation}
\frac{\mathcal{L}}{R_{+}}\approx b+\sqrt{3}-3^{3/4}a^{\ast }\sqrt{b}.  \label{la}
\end{equation}
They agree with the results of Sec. 3 B 2 of \cite{weak}. It is seen from (\ref{ua})--(\ref{la}) that the fractional corrections have the order $a^{\ast }b^{3/2}$ and are small in accordance with (\ref{abk}). In a more general case, the small parameter of expansion corresponds to (\ref{lan}), so it is the quantity $\frac{\sqrt{R_{+}}b^{3/2}a_{1}^{2}}{a_{2}\sqrt{\kappa }}$.

\subsection{Modestly slow rotation}

Let now, instead of (\ref{ln}), (\ref{lan}), the opposite inequalities hold:
\begin{equation}
\mathcal{L}a_{1}^{2}\gg a_{2}N\text{,}
\end{equation}
\begin{equation}
R_{+}b^{3/2}a_{1}^{2}\gg a_{2}\sqrt{\kappa R_{+}},
\end{equation}
or
\begin{equation}
a^{\ast }b^{3/2}\gg 1  \label{ab3}
\end{equation}
in the Kerr case.
Correspondingly, in what follows the small parameter of expansion is $\frac{a_{2}\sqrt{\kappa }}{\sqrt{R_{+}}b^{3/2}a_{1}^{2}}$, which reduces to $(a^{\ast }b^{3/2})^{-1}$ in the Kerr case. Additionally, we assume that
\begin{equation}
ba_{1}^{\ast 2}\gg 1\text{.}  \label{ba1}
\end{equation}
It turns out (see the details in the Appendix) that
\begin{equation}
x_{0}\approx \frac{R_{+}\delta ^{2}}{36}a_{1}^{\ast 2}\text{,}
\end{equation}
\begin{equation}
\omega _{0}\approx \omega _{+}-\frac{\delta ^{2}}{36R_{+}}a_{1}^{\ast 3}\text{,}  \label{oma}
\end{equation}
\begin{equation}
\frac{\mathcal{L}}{R_{+}}=b(1-\frac{\delta ^{2}}{6}sa_{1}^{\ast 2})\text{,}  \label{lm}
\end{equation}
\begin{equation}
N_{0}\approx \frac{1}{3\sqrt{2}}\sqrt{\kappa R_{+}}a_{1}^{\ast }\delta \text{,}  \label{n2}
\end{equation}
where $\delta =\frac{1}{s\sqrt{2\kappa R_{+}}}$. It follows from (\ref{49}), (\ref{hy}), (\ref{hh}) that
\begin{equation}
\beta _{+}=\beta (0)\approx -\frac{1}{2}\beta _{0}a_{1}^{\ast 2},  \label{be0}
\end{equation}
\begin{equation}
\beta (x_{0})\approx -2\frac{\delta ^{2}}{9}bsa_{1}^{\ast 2}\text{,}  \label{bh1}
\end{equation}
\begin{equation}
X_{0}\approx m\frac{1}{27}\sqrt{2\kappa R_{+}}\delta ^{3}bsa_{1}^{\ast 3}\text{,}  \label{x27}
\end{equation}
\begin{equation}
\mathcal{E}_{0}\approx \omega _{+}R_{+}b+\nu ba_{1}^{\ast 3}\text{,}  \label{ea}
\end{equation}
\begin{equation}
\nu =\frac{1}{27}\sqrt{2\kappa R_{+}}s\delta ^{3}-R_{+}s\delta ^{2}\frac{\omega _{+}}{a_{1}^{\ast }}-\frac{\delta ^{2}}{36}\text{.}  \label{nu}
\end{equation}

\subsubsection{Kerr metric}

In the case of the slowly rotating Kerr black hole, eq. (\ref{a1}) entails
\begin{equation}
a_{1}^{\ast }=\frac{3}{2}a^{\ast }\text{,}  \label{3a}
\end{equation}
$\delta =1=s$,
\begin{equation}
x_{0}\approx \frac{R_{+}}{36}a_{1}^{\ast 2}=\frac{R_{+}}{16}a^{\ast 2}\text{,}  \label{x16}
\end{equation}
where we used (\ref{3a}). One should compare this result to that in \cite{weak}.
Now, $R_{+}=2M$, while the horizon radius of the Kerr metric is $r_{+}\approx 2M(1-\frac{a^{\ast 2}}{4})$. Eq. (53) of \cite{weak} gives us
\begin{equation}
r_{0}\approx 2M(1-\frac{3a^{\ast 2}}{16})\text{,}
\end{equation}
whence $x_{0}=r_{0}-r_{+}\approx \frac{R_{+}}{16}a^{\ast 2}$, which coincides with (\ref{x16}). It is seen from (\ref{lm}), (\ref{3a}) that the angular momentum takes the value
\begin{equation}
\frac{\mathcal{L}}{R_{+}}\approx b(1-\frac{3}{8}a^{\ast 2})
\end{equation}
that coincides with eq. (55) of \cite{weak}. Also, one finds that
\begin{equation}
X_{0}\approx \frac{m}{8}ba^{\ast 3}\text{.}
\end{equation}
In eq. (\ref{oma}) one should take into account that $\omega _{+}$ depends on $r_{+}$, which itself can be expressed in terms of $a^{\ast }$ and $M$. Collecting all terms, one obtains from (\ref{ea})
\begin{equation}
\mathcal{E}_{0}\approx \frac{a^{\ast }b}{2}-\frac{ba^{\ast 3}}{32}
\end{equation}
that agrees with eq. (54) of \cite{weak}.

\section{ISCO for rotating nonextremal black holes in the strong magnetic field}

In the previous section we saw that in the limit $b\rightarrow \infty $ the ISCO radius does not coincide with that of the horizon, which generalizes the corresponding observation made in Sec. III B 3 of \cite{weak}. Now, we will see that this is a general result which is valid for an arbitrary degree of rotation and finite $\kappa $ (so, for generic nonextremal black holes). It is worth noting that for $b=0$ it was noticed that near-horizon ISCOs are absent \cite{piat}, \cite{circ}. However, for $b\gg 1$ the corresponding reasoning does not apply, so we must consider this issue anew. We have to analyze eqs. (\ref{1d}), (\ref{2d}) in which (\ref{xx}) is taken into account.
Neglecting higher order corrections, we can rewrite them in the form
\begin{equation}
(2\kappa +2Dx)(1+\beta ^{2})+(2\kappa x+Dx^{2})\frac{d\beta ^{2}}{dx}-\frac{2L\sqrt{1+\beta ^{2}}}{m}N(a_{1}-2a_{2}x)=0\text{,}  \label{d1}
\end{equation}
\begin{equation}
\frac{2L^{2}}{m^{2}}(a_{1}-2a_{2}x)^{2}-2a_{2}N\sqrt{1+\beta ^{2}}\frac{L}{m}-W=0\text{,}  \label{d2}
\end{equation}
\begin{equation}
W=(2D+6Cx)(1+\beta ^{2})+2\frac{d\beta ^{2}}{dx}(2\kappa +2Dx)+(2\kappa x+Dx^{2})\frac{d^{2}\beta ^{2}}{dx^{2}}\text{.}  \label{w}
\end{equation}
1) Let us suppose that $\beta $ is finite or, at least, $\beta \ll b$. Then, it follows from (\ref{expb}), (\ref{b}), (\ref{lb}) that $\frac{d\beta }{dx}\sim b$ and $L\sim b$. Also, $x\sim b^{-1}$ according to (\ref{ux}), $N\sim \sqrt{x}\sim b^{-1/2}$. However, it is impossible to compensate the term with $L^{2}$ in (\ref{d2}) having the order $b^{2}$. 2) Let $\beta \sim L\sim b$. Then, in (\ref{d1}) the first term has the order $b^{2}$ and cannot be compensated. 3) $\beta \gg b$. Then, (\ref{b}) gives us that $L\sim \beta $. Again, the first term in (\ref{d1}) cannot be compensated. Thus we see that, indeed, in the limit $b\rightarrow \infty $ the assumption $x\rightarrow 0$ leads to contradictions, so the ISCO radius does not approach the horizon.

\section{Extremal nonrotating black hole}

Up to now, we considered the case of a nonextremal black hole, so the surface gravity $\kappa $ was an arbitrary or small quantity but was nonzero anyway. Let us discuss now the case of the extremal black hole, with $\kappa =0$ exactly. We pose the question: is it possible to get an ISCO such that for $b\rightarrow \infty $ the ISCO radius tends to that of the horizon? Now, we will see that this is indeed possible for a nonrotating black hole ($\omega =0$). We assume that the electric charge that can affect the metric is negligible.
The extremal horizon appears due to the properties of matter that surrounds the horizon, which is possible even in the absence of the electric charge, provided the equation of state obeys some special conditions \cite{bron}. For an ISCO close to the horizon we can use the expansion
\begin{equation}
N^{2}=Dx^{2}+...  \label{nd}
\end{equation}
in which we drop the terms of the order $x^{3}$ and higher. Now we show that the case under discussion does exist with a finite quantity $\beta $. We can use now (\ref{bu}) in which only the first term is retained, so
\begin{equation}
\beta \approx \alpha -2u\text{,}
\end{equation}
where $u$ is given by eq. (\ref{ux}). Then, (\ref{v}) reads
\begin{equation}
V\approx m^{2}\frac{DR_{+}^{2}}{b^{2}s^{2}}f(u)-E^{2}\text{,}
\end{equation}
where
\begin{equation}
f(u)=u^{2}(1+\alpha ^{2}-4u\alpha +4u^{2}).
\end{equation}
Eqs. (\ref{fd}), (\ref{sd}) reduce to
\begin{equation}
\frac{df}{du}(u_{0})=0\text{,}
\end{equation}
\begin{equation}
\frac{d^{2}f}{du^{2}}(u_{0})=0\text{.}
\end{equation}
They have the solution
\begin{equation}
u_{0}=\frac{3}{2^{3/2}}\text{, }\alpha =\frac{4}{\sqrt{2}}=2\sqrt{2}\text{,}
\end{equation}
whence
\begin{equation}
\beta \approx \frac{1}{\sqrt{2}}.  \label{eb2}
\end{equation}
Correspondingly, eqs. (\ref{nd}), (\ref{0}) give us
\begin{equation}
N(x_{0})\approx \frac{3}{2^{3/2}}\sqrt{D}\frac{R_{+}}{bs},  \label{ne}
\end{equation}
\begin{equation}
\frac{X_{0}}{m}=\mathcal{E}_{0}\approx \frac{3^{3/2}}{4}\sqrt{D}\frac{R_{+}}{bs}\text{.}  \label{ee}
\end{equation}
We can also find the angular momentum on the ISCO:
\begin{equation}
\frac{\mathcal{L}_{0}}{R_{+}}\approx b+\frac{1}{2}\sqrt{2}.
\end{equation}
Thus for large $b$ there is an ISCO outside the horizon that tends to it in the limit $b\rightarrow \infty $, so that the quantity $x_{0}\rightarrow 0$.
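The extremal-case solution can be confirmed by differentiating $f(u)$ directly; the short symbolic check below is our addition:

```python
import sympy as sp

u, alpha = sp.symbols('u alpha', positive=True)
f = u**2 * (1 + alpha**2 - 4*u*alpha + 4*u**2)

u0 = 3 / 2**sp.Rational(3, 2)   # u_0 = 3/2^(3/2)
a0 = 2*sp.sqrt(2)               # alpha = 2*sqrt(2)

# Both the first and the second derivative of f must vanish at the ISCO
assert sp.simplify(f.diff(u).subs({u: u0, alpha: a0})) == 0
assert sp.simplify(f.diff(u, 2).subs({u: u0, alpha: a0})) == 0

# beta ~ alpha - 2u then gives 1/sqrt(2), eq. (eb2)
assert sp.simplify(a0 - 2*u0 - 1/sp.sqrt(2)) == 0
print("u0 = 3/2^(3/2), alpha = 2*sqrt(2), beta = 1/sqrt(2) verified")
```

Both stationarity conditions hold exactly, and $\beta =\alpha -2u_{0}=1/\sqrt{2}$ follows.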
\section{Extremal rotating black hole}

We now consider the same question for rotating black holes: is it possible to have an ISCO in the near-horizon region (as closely as we like) for the extremal black hole, when $\kappa =0$? Mathematically, it would mean that
\begin{equation}
\lim_{b\rightarrow \infty }x_{0}=0\text{.}
\end{equation}
Then, (\ref{1d}), (\ref{nx}) with $\kappa =0$ give us for small $x$ that
\begin{equation}
x_{0}D[(1+\beta ^{2})-\frac{La_{1}\sqrt{1+\beta ^{2}}}{m\sqrt{D}}]+\frac{x_{0}^{2}}{2}\{C[3(1+\beta ^{2})-\frac{La_{1}\sqrt{1+\beta ^{2}}}{m\sqrt{D}}]+D\left( \beta ^{2}\right) ^{\prime }\}=0.  \label{s1}
\end{equation}
Eq. (\ref{2d}), with terms of the order $x_{0}^{2}$ and higher neglected, gives rise to
\begin{equation}
D(1+\beta ^{2})-\mathcal{L}^{2}a_{1}^{2}+x_{0}[2D\left( \beta ^{2}\right) ^{\prime }+2a_{2}\mathcal{L}^{2}\sqrt{D}\sqrt{1+\beta ^{2}(x_{0})}+2\mathcal{L}^{2}a_{1}a_{2}+3C(1+\beta ^{2})]=0.  \label{s}
\end{equation}
Then, the main terms in (\ref{s1}), (\ref{s}) entail
\begin{equation}
\mathcal{L}a_{1}=\sqrt{D}\sqrt{1+\beta ^{2}}\text{.}  \label{lad}
\end{equation}
For $b\gg 1$, assuming for definiteness that $d>D$ ($d$ is defined according to (\ref{da})), one finds from (\ref{b}) and (\ref{lad}) that
\begin{equation}
\beta _{+}\approx -b\frac{\sqrt{d}}{\sqrt{d}+\sqrt{D}}\text{,}
\end{equation}
\begin{equation}
\frac{\mathcal{L}}{R_{+}}=b\frac{\sqrt{D}}{\sqrt{d}+\sqrt{D}}\text{,}
\end{equation}
\begin{equation}
\frac{L^{2}}{m^{2}}a_{1}^{2}=D+D(\frac{L^{2}}{m^{2}R_{+}^{2}}-2\frac{L}{mR_{+}}b+b^{2}).
\end{equation}
The terms $x_{0}^{2}$ in (\ref{s1}) and $x_{0}$ in (\ref{s}) give us, with (\ref{lad}) taken into account,
\begin{equation}
(1+\beta ^{2})C+D\left( \beta ^{2}\right) ^{\prime }=0,
\end{equation}
\begin{equation}
\left( \beta ^{2}\right) ^{\prime }+2\frac{a_{2}}{a_{1}}(1+\beta ^{2})+\frac{3C}{2D}(1+\beta ^{2})=0\text{,}
\end{equation}
whence
\begin{equation}
C=-4D\frac{a_{2}}{a_{1}}\text{.}  \label{cd}
\end{equation}
The system is
overdetermined: eq. (\ref{cd}) cannot be satisfied in general. In principle, one can consider (\ref{cd}) as a restriction on the black hole parameters. This is similar to the situation for the extremal Kerr-Newman metric ($b=0$), where an ISCO near the horizon exists only for a selected value of the angular momentum parameter, approximately equal to $\frac{a}{M}\approx \frac{1}{\sqrt{2}}$ \cite{m}, \cite{extc}. However, we will not discuss such exceptional cases further. Generically, the answer to our question is negative, so the ISCO radius does not approach the horizon in the limit $b\rightarrow \infty .$

\section{Particle collisions: general formulas}

Let two particles collide. We label their characteristics by indices 1 and 2. Then, at the point of collision, one can define the energy in the centre of mass (CM) frame as
\begin{equation}
E_{c.m.}^{2}=-p_{\mu }p^{\mu }=m_{1}^{2}+m_{2}^{2}+2m_{1}m_{2}\gamma \text{.}
\end{equation}
Here,
\begin{equation}
p^{\mu }=m_{1}u_{1}^{\mu }+m_{2}u_{2}^{\mu }
\end{equation}
is the total momentum and
\begin{equation}
\gamma =-u_{1\mu }u_{2}^{\mu }
\end{equation}
is the Lorentz factor of their relative motion. For motion in the equatorial plane in the external magnetic field (\ref{ab}), one finds from the equations of motion (\ref{ft}), (\ref{rt}) that
\begin{equation}
\gamma =\frac{X_{1}X_{2}-\varepsilon _{1}\varepsilon _{2}\sqrt{V_{1}V_{2}}}{m_{1}m_{2}N^{2}}-\beta _{1}\beta _{2}\text{.}  \label{ga}
\end{equation}
Here, $\varepsilon =+1$ if the particle moves away from the horizon and $\varepsilon =-1$ if it moves towards it. Now, there are two scenarios relevant in our context. We call them the O-scenario and the H-scenario, according to the terminology of \cite{circ}. Correspondingly, we will use the superscripts ``O'' and ``H''.

\subsection{O - scenario}

Particle 1 moves on ISCO.
As $V_{1}(\rho _{0})=0$ on ISCO, the formula simplifies to
\begin{equation}
(E_{c.m.}^{O})^{2}=m_{1}^{2}+m_{2}^{2}+2(\frac{X_{1}X_{2}}{N^{2}}-m_{1}m_{2}\beta _{1}\beta _{2})\text{.}  \label{o}
\end{equation}
As we are interested in the possibility to get $\gamma $ as large as one likes, we will consider the case when the ISCO is close to the horizon, so $N$ is small. In doing so, we will assume that $(X_{2})_{+}\neq 0$, so particle 2 is usual according to the terminology of \cite{circ}. We also must take into account eq. (\ref{xx}), whence
\begin{equation}
(E_{c.m.}^{O})^{2}=m_{1}^{2}+m_{2}^{2}+2(m_{1}\frac{X_{2}\sqrt{1+\beta _{1}^{2}}}{N}-m_{1}m_{2}\beta _{1}\beta _{2})\text{.}  \label{gis}
\end{equation}
For an ISCO close to the horizon, the first term dominates and we have
\begin{equation}
(E_{c.m.}^{O})^{2}\approx 2m_{1}\frac{(X_{2})_{0}(\sqrt{1+\beta _{1}^{2}})_{0}}{N_{0}}\text{.}  \label{go}
\end{equation}

\subsection{H - scenario}

Now, particle 1 leaves ISCO (say, due to an additional collision) with the energy $E=E(x_{0})$ and angular momentum $L=L(x_{0})$ that correspond precisely to ISCO. This particle moves towards the horizon, where it collides with particle 2. Mathematically, it means that we should take the horizon limit $N\rightarrow 0$ first in formula (\ref{ga}). We assume that both particles move towards the horizon, so $\varepsilon _{1}\varepsilon _{2}=+1$. Then,
\begin{equation}
(E_{c.m.}^{H})^{2}=m_{1}^{2}+m_{2}^{2}+m_{1}^{2}(1+\beta _{1}^{2})\frac{X_{2}}{X_{1}}+m_{2}^{2}(1+\beta _{2}^{2})\frac{X_{1}}{X_{2}}-2m_{1}m_{2}\beta _{1}\beta _{2},  \label{hor}
\end{equation}
where all quantities are to be calculated on the horizon.
For small $X_{1}$, when
\begin{equation}
X_{1}\ll X_{2}\frac{m_{1}}{m_{2}}\sqrt{\frac{1+\beta _{1}^{2}}{1+\beta _{2}^{2}}},
\end{equation}
we see from (\ref{hor}) that
\begin{equation}
(E_{c.m.}^{H})^{2}\approx m_{1}^{2}(1+\beta _{1}^{2})_{+}\frac{(X_{2})_{+}}{(X_{1})_{+}}\text{.}  \label{e1}
\end{equation}
Now,
\begin{equation}
X_{1}=E_{0}-\omega _{+}L_{0}=X_{0}+(\omega _{0}-\omega _{+})L_{0}\text{,}
\end{equation}
where $X_{0}=E_{0}-\omega _{0}L_{0}$ corresponds to ISCO. With (\ref{om}), (\ref{xx}) taken into account, in the main approximation
\begin{equation}
X_{1}\approx m_{1}N_{0}\sqrt{1+\beta _{1}^{2}(x_{0})}-a_{1}x_{0}L_{0}\text{.}  \label{xis}
\end{equation}
Now, we apply these formulas to the different cases considered above.

\section{Kinematics of motion on ISCO}

It is instructive to recall that the general explanation of high $E_{c.m.}$ consists in the simple fact that a rapid usual particle, having a velocity close to the speed of light, hits a slow particle whose parameters are approximately equal to the critical values. This was explained in detail in \cite{k} for the standard BSW effect (without considering collision near ISCO). Does this explanation retain its validity in the present case? One particle that participates in the collision is usual, so it would cross the horizon with a velocity approaching the speed of light in an appropriate stationary frame (see below). We consider the near-horizon ISCO, so the velocity of a usual particle is close to the speed of light. Now, we must check what happens to the velocity of a particle on ISCO. To describe the kinematic properties, it is convenient to introduce tetrads that in the local tangent space enable us to use formulas similar to those of special relativity. A natural and simple choice is the tetrad of the so-called zero-angular-momentum observer (ZAMO) \cite{72}.
It reads
\begin{equation}
h_{(0)\mu }=-N(1,0,0,0)\text{, }  \label{h0}
\end{equation}
\begin{equation}
h_{(1)\mu }=N^{-1}(0,1,0,0),
\end{equation}
\begin{equation}
h_{(2)\mu }=\sqrt{g_{\theta }}(0,0,1,0),
\end{equation}
\begin{equation}
h_{(3)\mu }=R(-\omega ,0,0,1)\text{.}  \label{h3}
\end{equation}
Here, $x^{0}=t$, $x^{1}=r$, $x^{2}=\theta $, $x^{3}=\phi $. It is also convenient to define the local three-velocity \cite{72} according to
\begin{equation}
v^{(a)}=v_{(a)}=\frac{u^{\mu }h_{\mu (a)}}{-u^{\mu }h_{\mu (0)}},  \label{vi}
\end{equation}
$a=1,2,3$. From the equations of motion (\ref{0})--(\ref{1}) and the formulas for the tetrad components, we obtain
\begin{equation}
-u^{\mu }h_{\mu (0)}=\frac{X}{mN},
\end{equation}
\begin{equation}
u^{\mu }h_{\mu (3)}=\beta \text{,}
\end{equation}
\begin{equation}
v^{(3)}=\frac{m\beta N}{X},  \label{v3}
\end{equation}
\begin{equation}
v^{(1)}=\sqrt{1-\frac{m^{2}N^{2}}{X^{2}}(1+\beta ^{2})},  \label{v1}
\end{equation}
while the component $v^{(2)}=0$ for equatorial motion. Then, introducing also the absolute value of the velocity $v$ according to
\begin{equation}
v^{2}=\left[ v^{(1)}\right] ^{2}+\left[ v^{(3)}\right] ^{2},
\end{equation}
one can find that
\begin{equation}
X=m\gamma _{0}N\text{, }\gamma _{0}=\frac{1}{\sqrt{1-v^{2}}}\text{.}  \label{xv}
\end{equation}
Eq. (\ref{xv}) was derived in \cite{k} for the case when the magnetic field is absent. We see that its general form does not depend on the presence of such a field. For a circular orbit, eq. (\ref{xx}) should hold. Comparing it with (\ref{xv}), we find that
\begin{equation}
\gamma _{0}=\sqrt{1+\beta ^{2}},
\end{equation}
which has the same form as in the static case \cite{fr}. Now we can consider different cases depending on the value of the magnetic field and the kind of black hole.

\subsection{Near-extremal black holes}

For small $b$, eq. (\ref{bsm}) shows that $\beta $ is finite, and so is the quantity $\gamma _{0}$. Therefore, $v<1$.
For the Kerr metric, in the main approximation, $\beta =\frac{L_{0}}{mR_{+}}=\frac{1}{\sqrt{3}}$ on ISCO (see eq. 4.7 of \cite{circkerr} and eq. 77 of \cite{circ}, $R_{+}=2M$), so $\gamma _{0}=\frac{2}{\sqrt{3}}$, $v=\frac{1}{2}$, which is a known result (see the discussion after eq. 3.12 b in \cite{72}). For large $b$, according to (\ref{bs}), the quantity $\beta $ is proportional to $b$ and grows, so $v\rightarrow 1$. However, this case is not very interesting since the individual energy (\ref{el}) itself diverges.

\subsection{Non-rotating or slowly rotating nonextremal black holes}

According to eq. (\ref{main}), $\beta \approx \frac{1}{\sqrt{3}}$. Slow rotation adds only small corrections to this value. Thus, rather unexpectedly, we again obtain that on ISCO
\begin{equation}
v\approx \frac{1}{2}.  \label{12}
\end{equation}
This value coincides for the near-extremal Kerr metric without a magnetic field and for a nonrotating or slowly rotating dirty black hole in the strong magnetic field.

\subsection{Modestly rotating nonextremal black hole}

It follows from (\ref{be0}), (\ref{ba1}) that $\left\vert \beta \right\vert \gg 1$. However, as the energy of a particle on ISCO (\ref{ea}) tends to infinity, this case is also not so interesting. To summarize, in all cases of interest (when an individual energy is finite), $\beta $ remains finite even in the strong magnetic field. Correspondingly, $v<1$ on ISCO and the previous explanation of the high $E_{c.m.}$ \cite{k} applies. For the less interesting cases, when an individual energy diverges, we have a collision between two rapid particles, but their velocities are not parallel, and this also gives rise to high $\gamma _{0}$ (see eq. 20 in \cite{k}).
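The kinematic statements above follow from (\ref{v3}), (\ref{v1}) and the circular-orbit condition (\ref{xx}); the minimal symbolic check below is our addition:

```python
import sympy as sp

m, N, beta = sp.symbols('m N beta', positive=True)

# On a circular orbit X = m*N*sqrt(1+beta^2) (eqs. (xx), (xv)), so gamma_0 = sqrt(1+beta^2)
X = m * N * sp.sqrt(1 + beta**2)
v3 = m * beta * N / X                                    # eq. (v3)
v1 = sp.sqrt(1 - (m**2 * N**2 / X**2) * (1 + beta**2))   # eq. (v1)

v2 = sp.simplify(v1**2 + v3**2)
# The radial component vanishes on the orbit and v^2 = beta^2/(1+beta^2)
assert sp.simplify(v2 - beta**2/(1 + beta**2)) == 0

# beta = 1/sqrt(3) on ISCO gives v = 1/2
assert sp.simplify(v2.subs(beta, 1/sp.sqrt(3))) == sp.Rational(1, 4)
print("v = 1/2 on ISCO for beta = 1/sqrt(3)")
```

In particular, $v^{(1)}=0$ on the orbit, so $v=\beta /\sqrt{1+\beta ^{2}}$, which stays below $1$ for any finite $\beta $.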
\section{Centre-of-mass energy of collision}

\subsection{Near-extremal black hole}

\subsubsection{O - scenario}

Using (\ref{go}), (\ref{nx}) and (\ref{nk}), one obtains
\begin{equation}
(E_{c.m.}^{O})^{2}\approx 2m_{1}\frac{(X_{2})_{+}\sqrt{1+\beta _{1}^{2}}}{\sqrt{D}H\kappa ^{2/3}}\text{.}
\end{equation}
In the strong magnetic field, with $b\gg 1$, using the expression (\ref{bs}) for $\beta $, we obtain
\begin{equation}
(E_{c.m.}^{O})^{2}\approx \frac{2m_{1}(X_{2})_{+}}{\sqrt{D}H\kappa ^{2/3}}\frac{R_{+}a_{1}b}{R_{+}a_{1}+\sqrt{D}}\text{.}
\end{equation}
In the near-extremal Kerr case, $D=M^{-2}$, $R_{+}=2M$, $\kappa \approx \frac{1}{2}\sqrt{1-a^{\ast 2}},$ $H=M^{-5/3}$, $a_{1}=M^{-2}$. As a result,
\begin{equation}
(E_{c.m.}^{O})^{2}\approx \frac{2^{8/3}m_{1}(X_{2})_{+}b}{3(1-a^{\ast 2})^{1/3}},
\end{equation}
which coincides with eq. (61) of \cite{weak} in which the limit $b\rightarrow \infty $ should be taken.

\subsubsection{H - scenario}

Now, due to (\ref{lad}), eq. (\ref{xis}) gives us $X_{1}=0$. It means that in the expansion (\ref{nx}) we must retain the first correction in the expression for $N$ when it is substituted into (\ref{xis}). As a result, we have
\begin{equation}
X_{1}\approx \frac{m_{1}\kappa \sqrt{1+\beta _{1}^{2}(x_{0})}}{\sqrt{D}}\text{.}
\end{equation}
There are also terms of the order $x_{0}^{2}\sim \kappa ^{4/3}$ but they are negligible as compared to $\kappa $. Correspondingly, (\ref{hor}) gives us
\begin{equation}
(E_{c.m.}^{H})^{2}\approx m_{1}\sqrt{D}\sqrt{1+\beta _{1}^{2}}\left( X_{2}\right) _{+}\kappa ^{-1}\text{.}
\end{equation}
In the strong magnetic field, with $b\gg 1$, using (\ref{bs}) again we obtain
\begin{equation}
(E_{c.m.}^{H})^{2}\approx m_{1}\sqrt{D}\frac{R_{+}a_{1}b}{R_{+}a_{1}+\sqrt{D}}\left( X_{2}\right) _{+}\kappa ^{-1}\text{.}
\end{equation}
Thus in both versions, for $b\gg 1$ the effect is enhanced due to the factor $b$. For $b=0$ we return to \cite{circ}.
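As a numerical sanity check (our addition; we set $M=m_{1}=(X_{2})_{+}=1$), the strong-field O-scenario formula with $\beta $ taken from (\ref{bs}) reproduces the quoted Kerr coefficient $2^{8/3}/3$:

```python
import math

# Near-extremal Kerr with M = 1: D = 1, H = 1, R_+ = 2, a_1 = 1,
# kappa ~ sqrt(1 - a*^2)/2; m_1 and (X_2)_+ are set to 1 for the comparison.
def E2_general(b, a_star):
    D, H, R, a1 = 1.0, 1.0, 2.0, 1.0
    kappa = 0.5 * math.sqrt(1 - a_star**2)
    beta = -R * a1 * b / (R * a1 + math.sqrt(D))        # eq. (bs)
    return 2 * math.sqrt(1 + beta**2) / (math.sqrt(D) * H * kappa**(2/3))

def E2_kerr(b, a_star):
    return 2**(8/3) * b / (3 * (1 - a_star**2)**(1/3))

b, a_star = 1e8, 0.999
assert abs(E2_general(b, a_star) / E2_kerr(b, a_star) - 1) < 1e-6
print("Kerr O-scenario coefficient 2^(8/3)/3 confirmed")
```

For large $b$ the two expressions agree to machine precision, since $\sqrt{1+\beta ^{2}}\rightarrow |\beta |=2b/3$.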
In the Kerr case,
\begin{equation}
(E_{c.m.}^{H})^{2}\approx \frac{4}{3}m_{1}\frac{\left( X_{2}\right) _{+}b}{(1-a^{\ast 2})^{1/2}},
\end{equation}
which corresponds to eq. (59) of \cite{weak} in which $b\gg 1$.

\subsection{Extremely slowly rotating or nonrotating black hole}

\subsubsection{O - scenario}

Now, (\ref{n0}) and (\ref{go}) give us
\begin{equation}
(E_{c.m.}^{O})^{2}\approx 3^{-1/4}\frac{4m_{1}(X_{2})_{+}}{\sqrt{2\kappa R_{+}}}\sqrt{bs}.  \label{os}
\end{equation}
In the Schwarzschild case, $2\kappa R_{+}=1=s$,
\begin{equation}
(E_{c.m.}^{O})^{2}\approx \frac{4m_{1}(X_{2})_{+}}{3^{1/4}}\sqrt{b},
\end{equation}
which coincides with eq. (63) of \cite{weak}.

\subsubsection{H - scenario}

Using (\ref{xsl}) and neglecting in (\ref{xis}) the second term (the rotational part), we get
\begin{equation}
(E_{c.m.}^{H})^{2}\approx \frac{2}{3^{3/4}}\frac{m_{1}(X_{2})_{+}}{\sqrt{2\kappa R_{+}}}\sqrt{bs}\text{.}  \label{hs}
\end{equation}

\subsection{Modestly rotating black holes in strong magnetic field}

\subsubsection{O - scenario}

With $\beta _{1}\gg 1$, it follows from (\ref{n2}), (\ref{bh1}) and (\ref{go}) that
\begin{equation}
(E_{c.m.}^{O})^{2}\approx \frac{4}{3}\sqrt{2}\frac{m_{1}}{\sqrt{\kappa R_{+}}}\delta bs(X_{2})_{+}a_{1}^{\ast }\text{.}
\end{equation}
In the Kerr case, taking into account (\ref{3a}), we obtain
\begin{equation}
(E_{c.m.}^{O})^{2}\approx 4m_{1}(X_{2})_{+}a^{\ast }b,
\end{equation}
which agrees with eq. (67) of \cite{weak}.

\subsubsection{H - scenario}

In a similar manner, one can obtain from (\ref{e1}), (\ref{xis}) and (\ref{x27}) that $(E_{c.m.}^{H})^{2}\sim b$ with a somewhat cumbersome coefficient that we omit here. Both these scenarios are less interesting since, according to (\ref{ea}), the individual energy $\mathcal{E}_{0}\sim b$ diverges in the limit $b\rightarrow \infty $.
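The Kerr reduction of the modestly rotating O-scenario result can also be checked numerically (our addition; $m_{1}=(X_{2})_{+}=1$):

```python
import math

# Kerr values: delta = s = 1, kappa*R_+ = 1/2, a_1* = (3/2) a*  (eq. (3a));
# m_1 and (X_2)_+ are set to 1 for the comparison.
def E2_general(b, a_star):
    delta, s, kappaR = 1.0, 1.0, 0.5
    a1s = 1.5 * a_star
    return (4/3) * math.sqrt(2) * delta * b * s * a1s / math.sqrt(kappaR)

def E2_kerr(b, a_star):
    return 4 * a_star * b

b, a_star = 50.0, 0.1
assert abs(E2_general(b, a_star) - E2_kerr(b, a_star)) < 1e-9
print("Kerr O-scenario result 4 m1 (X2)_+ a* b recovered")
```

The prefactor collapses exactly: $(4\sqrt{2}/3)\cdot (3/2)/\sqrt{1/2}=4$.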
\subsection{Extremal nonrotating black holes}

\subsubsection{O - scenario}

Using (\ref{ne}), (\ref{eb2}), we find from (\ref{go}) that
\begin{equation}
(E_{c.m.}^{O})^{2}\approx \frac{4m_{1}(X_{2})_{+}bs}{\sqrt{3D}R_{+}}\text{.}
\end{equation}

\subsubsection{H - scenario}

Now, it follows from (\ref{ee}), (\ref{eb2}) and (\ref{e1}) that
\begin{equation}
(E_{c.m.}^{H})^{2}\approx 2m_{1}\frac{(X_{2})_{+}bs}{\sqrt{3D}R_{+}}\text{.}
\end{equation}

\section{Backreaction of magnetic field: Ernst static black hole}

Now, we illustrate the obtained results using the metric of a static magnetized black hole \cite{ernst} that can be considered as a generalization of the Schwarzschild solution. This will also allow us to elucidate the role of the backreaction of the magnetic field on the behavior of $E_{c.m.}$, which bounds the BSW effect. The metric reads
\begin{equation}
ds^{2}=\Lambda ^{2}[-(1-\frac{r_{+}}{r})dt^{2}+\frac{dr^{2}}{1-\frac{r_{+}}{r}}+r^{2}d\theta ^{2}]+\frac{r^{2}\sin ^{2}\theta }{\Lambda ^{2}}d\phi ^{2}\text{, }\Lambda ^{2}=1+\frac{B^{2}r^{2}}{4}\sin ^{2}\theta ,  \label{me}
\end{equation}
\begin{equation}
A^{\phi }=\frac{\tilde{B}}{2}\text{, }\tilde{B}=B\Lambda \text{,}
\end{equation}
where $r_{+}=2M$ is the horizon radius and $B$ is a constant parameter. It follows from (\ref{bb}) (with $B$ replaced with $\tilde{B}$) and (\ref{me}) that
\begin{equation}
\beta =\frac{\mathcal{L}}{R}-b\frac{r}{2M}\text{, }b=\frac{qBM}{m}\text{.}
\end{equation}
Many important details of particle motion in this background can be found in Ref. \cite{galpet}.
Calculating the corresponding coefficients according to (\ref{b2})--(\ref{sb}) and substituting them into (\ref{u1})--(\ref{en}), we obtain
\begin{equation}
\frac{(r-r_{+})}{r_{+}}=\frac{(1+\xi )}{(1+\xi ^{2})b}[\frac{1}{\sqrt{3}}+\frac{-8+18\xi -3\xi ^{2}-2\xi ^{3}-\xi ^{4}}{18b(1+\xi ^{2})^{2}}],  \label{re}
\end{equation}
\begin{equation}
\xi =B^{2}M^{2}\text{, }
\end{equation}
\begin{equation}
\frac{\mathcal{L}}{2M}=\frac{1}{(1+\xi )}[b+\sqrt{3}+\frac{-1+3\xi -\xi ^{3}-\xi ^{4}}{3b(1+\xi ^{2})^{2}}]+O(b^{-2}),  \label{le}
\end{equation}
\begin{equation}
\mathcal{E}_{0}\approx \frac{2(1+\xi )^{3/2}}{3^{3/4}\sqrt{b}}\frac{1}{\sqrt{1+\xi ^{2}}}\text{.}
\end{equation}
It follows from (\ref{bet}) that
\begin{equation}
\beta \approx \frac{1}{\sqrt{3}}+\frac{(\xi -1)(-\xi ^{3}-\xi ^{2}-2\xi +2)}{3b(1+\xi ^{2})^{2}}.
\end{equation}
For the energy of collision we have from (\ref{os}), (\ref{hs})
\begin{equation}
(E_{c.m.}^{O})^{2}\approx 3^{-1/4}4m_{1}(X_{2})_{+}z,  \label{oe}
\end{equation}
\begin{equation}
(E_{c.m.}^{H})^{2}\approx \frac{2}{3^{3/4}}m_{1}(X_{2})_{+}z\text{,}  \label{he}
\end{equation}
where
\begin{equation}
z=\sqrt{\frac{b(1+\xi ^{2})}{(1+\xi )^{3}}}\text{.}
\end{equation}
When $\xi \ll 1$, there is agreement with the results for the Schwarzschild metric \cite{fr}, \cite{weak}, since eq. (\ref{re}) turns into (\ref{r1}) and (\ref{le}) turns into (\ref{lsw}).
It is interesting that for any $\xi $, the velocity of a particle on the ISCO is equal to $1/2$, just as happens for $\xi \ll 1$. The approach under discussion works well also for $\xi \gg 1$, provided that the ISCO lies close to the horizon to ensure large $E_{c.m.}$, i.e. $N^{2}\ll 1$. According to (\ref{me}), (\ref{re}), this requires
\begin{equation}
b\gg \xi
\end{equation}
or, equivalently,
\begin{equation}
\frac{q}{m}\gg BM\gg 1\text{.}
\end{equation}
Otherwise, both energies (\ref{oe}), (\ref{he}) contain the factor $\sqrt{\frac{b}{\xi }}\sim \sqrt{\frac{q}{mBM}}$ that bounds $E_{c.m.}$, which begins to decrease when $B$ increases. One should also bear in mind that it is impossible to take the limit $\xi \rightarrow \infty $ literally since the geometry becomes singular. In particular, the component of the curvature tensor $R_{\theta \phi }^{\theta \phi }$ grows like $\xi ^{2}$. The maximum possible $E_{c.m.}$ is achieved when $\xi \sim 1$; then $z\sim \sqrt{\frac{q}{m}}$. The example with the Ernst metric shows that strong backreaction of the magnetic field on the geometry may restrict the growth of $E_{c.m.}$ to such an extent that, even in spite of large $b$, the effect disappears because of the factor $\xi $ that enters the metric. It is of interest to consider the exact rotating magnetized black hole \cite{ek} that generalizes the Kerr metric, but this problem certainly needs separate treatment.

\section{Summary and conclusion}

We obtained the characteristics of the ISCO and the energy in the CM frame in two different situations. For the near-extremal case, we considered the BSW effect. Previous results applied to the weakly magnetized Kerr metric or to dirty black holes without the magnetic field. Now, we took into account both factors and thus generalized the previous results to the case when both matter and a magnetic field are present. In doing so, there is a qualitative difference between dirty rotating black holes and the Kerr one.
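The statement that $E_{c.m.}$ peaks at $\xi \sim 1$ can be checked directly. Since $b=qBM/m$ and $\xi =B^{2}M^{2}$, at fixed charge-to-mass ratio one has $b=(q/m)\sqrt{\xi }$, so $z^{2}=(q/m)\,g(\xi )$ with $g(\xi )=\sqrt{\xi }(1+\xi ^{2})/(1+\xi )^{3}$. The following sketch (Python/SymPy; our own verification, not part of the original derivation) locates the critical points of $g$:

```python
import sympy as sp

xi = sp.symbols('xi', positive=True)

# z^2 in units of q/m: b = qBM/m and xi = B^2 M^2 give b = (q/m)*sqrt(xi),
# hence z^2 = b*(1 + xi^2)/(1 + xi)^3 = (q/m)*g(xi) with
g = sp.sqrt(xi) * (1 + xi**2) / (1 + xi)**3

# critical points of g(xi)
crit = sorted(sp.solve(sp.Eq(sp.diff(g, xi), 0), xi), key=lambda c: float(c))
vals = [sp.simplify(g.subs(xi, c)) for c in crit]
```

The critical points come out as $\xi =2-\sqrt{3},\,1,\,2+\sqrt{3}$ (two shallow maxima flanking a local minimum at $\xi =1$), all of order unity with $g=O(1)$ there, consistent with $z\sim \sqrt{q/m}$ at $\xi \sim 1$.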
Namely, the radius of the ISCO depends on the magnetic field strength $b$ already in the main approximation with respect to the small surface gravity $\kappa $, in contrast to the case of the vacuum metric \cite{weak}, where this dependence reveals itself in the small corrections only. For extremal black holes, we showed that, due to the strong magnetic field, there exists a near-horizon ISCO that does not have a counterpart in the absence of this field. Correspondingly, we described the effect of high-energy collisions near these ISCO. We demonstrated that rotation destroys the near-horizon ISCO for both the nonextremal and extremal horizons, so that $\lim_{b\rightarrow \infty }r_{0}(b)\neq r_{+}$. However, if the parameter responsible for rotation is small, $E_{c.m.}$ is large in this limit. For slowly rotating black holes we analyzed two different regimes of rotation, thus generalizing previous results on the Kerr metric \cite{weak}. The parameters of expansion used in the calculations and the results agree with the Kerr case. In particular, for modestly slow rotation the individual energy of the particle on the ISCO is unbounded. In the main approximation, the expressions for the ISCO radius and angular momentum in dimensionless variables are model-independent, so here one can see the universality of black hole physics. We also found the three-velocity of a particle on the ISCO in the ZAMO frame. It turned out that for slowly rotating dirty black holes in the magnetic field it coincides with the value typical of the Kerr metric without a magnetic field, $v\approx \frac{1}{2}$. Correspondingly, the previous explanation of the high $E_{c.m.}$ as the result of a collision of very fast and slow particles \cite{k} retains its validity in the scenarios under discussion as well. In previous studies of the BSW effect in the magnetic field \cite{fr}, \cite{weak}, \cite{tur2}, some fixed background was chosen.
In this sense, the magnetic field was supposed to be weak in that it did not affect the metric significantly (although it strongly influenced the motion of charged particles). Meanwhile, most of the formulas obtained in the present work apply to a generic background, and only the asymptotic behavior of the metric near the horizon was used. Therefore, they apply to backgrounds in which the magnetic field enters the metric itself, with the reservation that the surface gravity $\kappa =\kappa (b)$, etc. In particular, we considered the static magnetized Ernst black hole and showed that strong backreaction of the magnetic field on the geometry bounds the growth of $E_{c.m.}$. Thus we embedded previous scenarios of high-energy collisions in the magnetic field near the ISCO in the vicinity of black holes \cite{fr}, \cite{weak} and took into account the influence of the magnetic field on the metric. Throughout the paper, it was assumed that the effect of the electric charge on the metric is negligible. It is of interest to extend the approach of the present work to the case of charged black holes.

\begin{acknowledgments}
This work was funded by the subsidy allocated to Kazan Federal University for the state assignment in the sphere of scientific activities.
\end{acknowledgments}

\section{Appendix}

Here, we list some rather cumbersome formulas which are excluded from the main text.
\subsection{Non-rotating black holes}

\begin{equation}
C_{1}=4\kappa \frac{\beta _{2}}{3\sqrt{3}s}+\frac{4}{3\sqrt{3}}DR_{+},
\end{equation}
\begin{equation}
S_{0}\approx 8\frac{\kappa }{R_{+}}(-\frac{\beta _{2}}{s}+3c)\text{.}
\end{equation}
The results with the leading term and subleading corrections read
\begin{equation}
u\approx \frac{1}{\sqrt{3}}+\varepsilon _{1}\text{, }\varepsilon _{1}=\frac{1}{3bs}(\frac{5}{6}\frac{\beta _{2}}{s}+\frac{1}{3}\frac{DR_{+}}{\kappa }-\frac{3c}{2})\text{,} \label{u1}
\end{equation}
\begin{equation}
\alpha \approx \sqrt{3}+\delta _{1}\text{, }\delta _{1}=\frac{1}{3bs}(\frac{\beta _{2}}{s}+\frac{DR_{+}}{\kappa })\text{,} \label{b1}
\end{equation}
\begin{equation}
N\approx \sqrt{2\kappa x_{0}}\approx \sqrt{\frac{2\kappa R_{+}}{bs}}\frac{1}{3^{1/4}}\text{,} \label{n0}
\end{equation}
\begin{equation}
\frac{\mathcal{L}}{R_{+}}\approx b+\sqrt{3}+\delta _{1}\text{,} \label{lb3}
\end{equation}
\begin{equation}
\beta =\frac{1}{\sqrt{3}}+\frac{1}{bs}(\frac{\beta _{2}}{3s}-c)+O(b^{-2})\text{,} \label{bet}
\end{equation}
\begin{equation}
\mathcal{E}_{0}=X_{0}\approx \frac{2^{3/2}}{3^{3/4}}\sqrt{\frac{\kappa R_{+}}{bs}}. \label{en}
\end{equation}

\subsection{Modestly slow rotation}

Now, one can check that, in contrast to the previous case, a finite $\beta $ is inconsistent with eqs. (\ref{ff}), (\ref{ss}). Instead, $\beta \sim b$ for large $b$. By a trial-and-error approach, one can find that the suitable ansatz reads
\begin{equation}
x=\frac{4}{9}R_{+}ya_{1}^{\ast 2}\text{, } \label{xe}
\end{equation}
where we introduced the dimensionless quantity
\begin{equation}
a_{1}^{\ast }=R_{+}^{2}a_{1},
\end{equation}
and the coefficient $\frac{4}{9}$ to facilitate comparison to the case of the Kerr metric (otherwise, this coefficient can be absorbed by $y$).
In doing so,
\begin{equation}
\beta =\frac{4}{9}\beta _{0}a_{1}^{\ast 2}h(y)\text{, } \label{49}
\end{equation}
\begin{equation}
h\approx h_{1}-2y \label{hy}
\end{equation}
which is the analogue of (\ref{bu}), $\beta _{0}=bs$ according to (\ref{bbs}). By definition, here $h\neq 0$. For the angular momentum we have from (\ref{b})
\begin{equation}
\frac{\mathcal{L}}{R_{+}}=b+\beta _{+}=b+\frac{4}{9}\beta _{0}a_{1}^{\ast 2}h_{1}\text{.} \label{hl}
\end{equation}
Let us consider the main approximation with respect to the parameter $\varepsilon =\frac{4}{9}a_{1}^{\ast 2}$ and take into account that $\left\vert \beta \right\vert \gg 1$. Then, eq. (\ref{1d}) gives us
\begin{equation}
\left\vert \beta \right\vert \mathcal{L}\sqrt{2\kappa x}a_{1}\approx \kappa \beta ^{2}+\kappa x\frac{d\beta ^{2}}{dx}. \label{l1}
\end{equation}
Eq. (\ref{2d}) reads
\begin{equation}
\mathcal{L}^{2}a_{1}^{2}\approx 2\frac{d\beta ^{2}}{dx}\kappa +\kappa x\frac{d^{2}\beta ^{2}}{dx^{2}}\text{,} \label{l2}
\end{equation}
where now, with the given accuracy,
\begin{equation}
\frac{d\beta ^{2}}{dx}=\frac{4}{R_{+}}\beta _{0}^{2}\varepsilon (2y-h_{1}),
\end{equation}
\begin{equation}
\frac{d^{2}\beta ^{2}}{dx^{2}}=\frac{\beta _{0}^{2}}{R_{+}^{2}}\frac{d^{2}h^{2}}{dy^{2}}=\frac{8\beta _{0}^{2}}{R_{+}^{2}}\text{.}
\end{equation}
Substituting $\frac{\mathcal{L}}{R_{+}}\approx b$ into (\ref{l1}), (\ref{l2}) and assuming $h<0$, one finds the system of two equations
\begin{equation}
6y-h_{1}=3\delta \sqrt{y}\text{,} \label{root}
\end{equation}
\begin{equation}
h_{1}=3y-\frac{9}{16}\delta ^{2}, \label{h1}
\end{equation}
$\delta =\frac{1}{s\sqrt{2\kappa R_{+}}}$. This system can be solved easily.
There are two roots here but only one of them satisfies the condition $h\neq 0$:
\begin{equation}
y_{0}=\frac{\delta ^{2}}{16}\text{, }h_{1}=-\frac{3}{8}\delta ^{2}\text{, }h(y_{0})=-\frac{\delta ^{2}}{2}\text{.} \label{hh}
\end{equation}
(For $\beta >0$, one can obtain the equation $6y-h_{1}=-3\delta \sqrt{y}$, but in combination with (\ref{h1}) it would give $y<0$, which is unacceptable since outside the horizon we should have $y>0$. Thus this case should be rejected.) Then, using (\ref{xe}), (\ref{hl}), (\ref{om}) we find the results (\ref{be0}) - (\ref{nu}).
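The root selection above is easy to verify symbolically. The sketch below (Python/SymPy; our own check, not part of the original text) solves (\ref{root}), (\ref{h1}) with $y=u^{2}$ and confirms that one root yields $h=0$ (and must be discarded), while the other reproduces (\ref{hh}):

```python
import sympy as sp

d = sp.symbols('delta', positive=True)    # delta = 1/(s*sqrt(2*kappa*R_+))
u, h1 = sp.symbols('u h1', real=True)     # u = sqrt(y), u > 0 outside the horizon

# eqs. (root) and (h1) with y = u^2
sols = sp.solve([6*u**2 - h1 - 3*d*u,
                 h1 - 3*u**2 + sp.Rational(9, 16)*d**2], [u, h1], dict=True)

# for each root, compute y = u^2 and h(y) = h1 - 2y
results = [(sp.simplify(s[u]**2), sp.simplify(s[h1]),
            sp.simplify(s[h1] - 2*s[u]**2)) for s in sols]
kept = [r for r in results if r[2] != 0]  # the root with h = 0 is rejected
```

The surviving root gives $y_{0}=\delta^{2}/16$, $h_{1}=-3\delta^{2}/8$, $h(y_{0})=-\delta^{2}/2$, in agreement with (\ref{hh}).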
\section{Introduction} Quantum spin liquids represent a class of exotic quantum phases of matter beyond the traditional Landau symmetry-breaking paradigm. Besides being conceptually interesting and experimentally relevant on their own~\cite{balents_2010,savary_2017,Zhou_QSL_review}, they are also connected to various deep problems ranging from high-temperature superconductivity to topological order and strongly coupled gauge theories, to name a few~\cite{LeeNagaosaWen,wenbook}. A particularly interesting quantum spin liquid in two spatial dimensions is the Dirac spin liquid (DSL)~\cite{LeeNagaosaWen,hastings_2000,hermele_2005_mother,ran_2007,hermele_2008,he_2017,Iqbal_triangular,zhu_2018}. The DSL is described by fermionic spinons -- emergent particles carrying spin-$1/2$ -- whose dispersion at low energies is described by the massless Dirac equation. These Dirac spinons interact with an emergent photon (a $U(1)$ gauge field); the resulting effective field theory is known as QED$_3$. This spin liquid state was originally discussed on the square lattice in the context of high-$T_c$ cuprates\cite{LeeNagaosaWen} and as a ``mother state'' of different competing orders\cite{hermele2005}. On the kagom\'e lattice the DSL is a candidate ground state for the Heisenberg antiferromagnet~\cite{ran_2007,hermele_2008,Iqbal_kagome}, as supported by recent DMRG calculations~\cite{he_2017,zhu_2018}, and may potentially be relevant for experimental systems such as herbertsmithite \cite{Kagomeexpt,Norman}, although gapped spin liquids have also been proposed in this context. On the triangular lattice, a spin liquid is observed in DMRG studies when a small second-neighbor spin coupling $J_2$ is added, in the range $0.07<J_2/J_1<0.15$~\cite{Zhu_triangular}, and variational Monte Carlo simulations suggest it to be a Dirac spin liquid~\cite{Iqbal_triangular}.
As predicted for a Dirac spin liquid, a chiral spin liquid is obtained in this parameter range as soon as a time-reversal symmetry breaking perturbation is applied~\cite{Sheng,Lauchli}. Further support comes from recent lattice gauge theory simulations, which report that QED$_3$, even with a relatively small number of fermion flavors, may exhibit a stable critical phase, at least when symmetry-lowering perturbations and monopoles are absent (sometimes called `non-compact' QED$_3$)~\cite{qedcft0,qedcft}. This raises the remarkable possibility that the DSL may be realized as a stable phase (or perhaps a critical point, see ~\cite{jian_2017}) on the triangular lattice. Intriguingly, candidate quantum spin liquid materials have recently emerged on the triangular lattice \cite{triangle_sl, Law6996}. To make progress, however, one needs a rigorous understanding of monopoles~\cite{polyakov_1977,hermele_20041,kapustin_2002}, an important class of excitations (or more accurately critical fluctuations) in the DSL. If symmetry-allowed and relevant (in the renormalization-group sense), the proliferation of monopole instantons leads to instabilities of the DSL~\cite{shortpaper}. The properties of the monopoles also decide the nature of other, more conventional, phases in proximity to the DSL~\cite{shortpaper}. However, a fundamental aspect of the monopoles, their quantum numbers under the microscopic symmetries (lattice, time-reversal, etc.), has long been an unresolved issue. In the simpler case of a semiclassical theory of fluctuating Neel order, or equivalently a theory based on {\em bosonic} spinons (Schwinger bosons) coupled to a $U(1)$ gauge field, which naturally appears on bipartite lattices, the lattice symmetry properties of monopoles were calculated in~\cite{HaldaneBerry,ReSaSUN}.
This played an important role in predicting valence bond solid (VBS) order as a competing singlet state, and in the development of deconfined criticality~\cite{senthil_20031,senthil_20041} for the Neel-VBS phase transition. However, for fermionic spinons, which provide an intrinsically quantum mechanical description, such an analytic understanding is still absent. Some progress was made in Ref.~\cite{alicea_2008}, in which the monopole quantum numbers on the square lattice were shown to be constrained by group-theoretic considerations~\cite{AliceaFVortex} and were eventually calculated numerically. Subsequently an analysis of the honeycomb~\cite{ran_2008} and kagom\'e~\cite{hermele_2008} lattices was also initiated. We report the numerical computation of monopole quantum numbers for several symmetries on these lattices in a parallel paper\cite{shortpaper}, where we also discuss consequences for the stability and the phenomenology of the DSL. In contrast, in this work we uncover a close and unexpected connection between the symmetry properties of monopoles and fermion band topology. This allows us to build on recent progress in understanding band topology protected by crystalline symmetries to develop a systematic analytical approach to calculating the monopole symmetry quantum numbers on essentially any lattice -- although we focus on the physically relevant ones, including square, honeycomb, triangular and kagom\'e. Armed with this deeper understanding and analytical machinery, we are able to obtain a complete understanding of the symmetry action on monopoles. Along with several new results, we clarify some misconceptions in previous work, and also verify consistency with generalized Lieb-Schultz-Mattis theorems. Our new understanding was enabled by developments in the theories of topological insulators and topological crystalline insulators over the past decade.
Essentially, the symmetry properties of the monopoles are fixed by the ``band topology'' of the underlying fermionic spinons. By establishing the precise connection between band topology and monopole quantum numbers, the latter can be calculated using the technology of topological band theory. An analogous approach has long been used to determine monopole quantum numbers associated with continuous, on-site symmetries. For example, when fermions fill a Chern band, a Chern-Simons term is generated that represents charge-flux attachment. Similarly, the $S_z$-spin is carried by the monopole (a flux quantum) in the presence of a quantized spin Hall conductance\cite{ran_2008}. In this work we leverage the full power of topological band theory to determine monopole quantum numbers, including those under lattice symmetries and time-reversal. It turns out that the monopoles' lattice momenta and angular momenta (the most challenging part of the problem) are related to an old concept in band theory: the charge (or Wannier) centers of an occupied band. The basic idea is extremely simple: if a charge sits at a point in space (the Wannier center), a monopole (magnetic flux) picks up a Berry phase when moving around it. Recent developments\cite{PoIndicators,Pofragile,Cano_2018,bouhon_2018}, especially the clarification of the notion of ``fragile topology'', enabled us to calculate the charge centers even when there is an obstruction to obtaining localized Wannier states. In fact, this is a frequent occurrence in the states we will discuss; nevertheless we are able to obtain the locations of the charge centers efficiently, which then feature sites with both negative and positive charges. We note that a similar calculation has been applied to a particular spinon mean field theory on the square lattice in Ref.~\cite{thomson_2018}, where the charges could be localized on lattice sites.
We will also see that monopoles behave very differently on bipartite (honeycomb and square) and non-bipartite (triangular and kagom\'e) lattices: on bipartite lattices there always exists a monopole that transforms trivially under all the microscopic symmetries, making it an allowed perturbation to the theory, thereby likely destabilizing the DSL; on non-bipartite lattices this does not happen, at least in the examples we considered. The difference can be traced to the fact that on bipartite lattices one can continuously tune the DSL state to another spin liquid state with $SU(2)$ (instead of $U(1)$) gauge group. This connection leads to a different, and simpler, way of calculating monopole quantum numbers on bipartite lattices, with results that are consistent with the band topology approach. The rest of this paper is organized as follows. In Sec.~\ref{generalities} we review, for completeness, aspects of $U(1)$ Dirac spin liquids and define precisely the problem of monopole quantum numbers. We also derive some general results on time-reversal and reflection symmetries. In Sec.~\ref{MQNBipartite} we calculate the monopole quantum numbers on bipartite lattices (honeycomb and square) by two different methods -- which give identical results. In Sec.~\ref{wanniercenter} we develop a more general method, based on spinon charge centers, that is applicable to both bipartite and non-bipartite lattices. In Sec.~\ref{calculation} we apply this method to calculate monopole quantum numbers for the DSL on the triangular and kagom\'e lattices. In Sec.~\ref{anomalyLSM} we discuss the connection between some of the monopole quantum numbers and generalized Lieb-Schultz-Mattis theorems and their corresponding field theory anomalies. We conclude in Sec.~\ref{Discussion} with a discussion of open issues. The spinon mean-field theories used on the four types of lattices and the fermion bilinear transformations are relegated to the Appendices.
\section{Generalities} \label{generalities} \subsection{$U(1)$ Dirac spin liquid and monopole operators} We start with the standard parton decomposition of the spin-$1/2$ operators on the lattice \begin{equation} \label{eqn:parton} \vec{S}_i=\frac{1}{2}f^{\dagger}_{i,\alpha}\vec{\sigma}_{\alpha\beta}f_{i,\beta}, \end{equation} where $f_{i,\alpha}$ is a fermion (spinon) on site $i$ with spin $\alpha\in\{\uparrow,\downarrow\}$ and $\vec{\sigma}$ are Pauli matrices. This re-writing is exact if we stay in the physical Hilbert space, defined by the constraint $\sum_{\alpha}f_{i,\alpha}^{\dagger}f_{i,\alpha}=1$. We now relax the constraint and allow the fermionic spinons to hop on the lattice (for more details see Ref.~\cite{wenbook}), according to a mean field Hamiltonian \begin{equation} \label{eqn:ansatz} H_{MF}=-\sum_{ij}f^{\dagger}_it_{ij}f_j. \end{equation} There is a gauge redundancy $f_i\to e^{i\alpha_i}f_i$ in the parton decomposition Eq.~\eqref{eqn:parton}, which results in the emergence of a dynamical $U(1)$ gauge field $a_{\mu}$ that couples to the fermions $f$. Each site carries a gauge charge $q_i=\sum_{\alpha}f_{i,\alpha}^{\dagger}f_{i,\alpha}-1$. In the strong coupling limit, where the gauge field simply enforces the constraint $q_i=0$ on each site, the physical spin Hilbert space is recovered. However, if the gauge coupling does not flow to infinity in the low energy limit (this can almost be viewed as the definition of a spin liquid phase), the gauge charge only needs to vanish on average, $\langle q\rangle=0$. We now arrange the hopping amplitudes $t_{ij}$ so that the spinons form four Dirac cones at low energy: two Dirac valleys per spin, as required by fermion doubling. For example, on the honeycomb lattice one can just take a uniform, real, and non-bipartite hopping, and two Dirac valleys will generically appear.
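As a consistency check on Eq.~\eqref{eqn:parton}, one can verify on a single site that the bilinears $S^{a}=\frac{1}{2}f^{\dagger}\sigma^{a}f$ satisfy the $SU(2)$ spin algebra and carry spin-$1/2$ precisely on the singly occupied states. A minimal numerical sketch (Python/NumPy; our own illustration, not part of the original text):

```python
import numpy as np

# Fock space of two fermionic modes (spin up, spin down) on one site;
# basis ordering |n_up, n_dn>, index = 2*n_up + n_dn
a = np.array([[0, 1], [0, 0]], dtype=complex)   # single-mode annihilation
Z = np.diag([1.0, -1.0]).astype(complex)        # Jordan-Wigner sign string
I2 = np.eye(2, dtype=complex)

f = [np.kron(a, I2),        # f_up
     np.kron(Z, a)]         # f_dn (string enforces anticommutation)

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# S^a = (1/2) f^dag_alpha sigma^a_{alpha beta} f_beta
S = [0.5 * sum(sigma[k][i, j] * (f[i].conj().T @ f[j])
               for i in range(2) for j in range(2)) for k in range(3)]

# spin algebra [S^x, S^y] = i S^z holds on the whole Fock space
ok_algebra = np.allclose(S[0] @ S[1] - S[1] @ S[0], 1j * S[2])

# S.S equals 3/4 exactly on the singly occupied states |01>, |10>
S2_diag = np.real(np.diag(sum(Sk @ Sk for Sk in S)))
```

The empty and doubly occupied states carry no spin, which is why the single-occupancy constraint is needed for the rewriting to be exact.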
The non-bipartite nature (second-neighbor hopping) is needed to make sure that the gauge group is $U(1)$ rather than $SU(2)$\cite{wenbook} -- this will play an important role later in Sec.~\ref{MQNQCD}. The relevant mean field Hamiltonians on the square, honeycomb, triangular and kagom\'e lattices are described in detail in Appendix~\ref{bilinears}. Taking the continuum limit, the $U(1)$ Dirac spin liquid in the low-energy, long-wavelength (IR) limit can effectively be described by the following (Euclidean) Lagrangian: \begin{equation} \label{eqn:qedL} \mathcal{L}=\sum_{i=1}^{4}\bar{\psi}_ii\slashed{D}_a\psi_i+\frac{1}{4e^2}f_{\mu\nu}^2, \end{equation} where $\psi_i$ is a two-component Dirac fermion and $a$ is a dynamical $U(1)$ gauge field. We choose $(\gamma_0, \gamma_1, \gamma_2)=(\mu^2, \mu^3, \mu^1)$, where the $\mu$ are Pauli matrices. This theory is also known as QED$_3$ with $N_f=4$. The theory flows to strong coupling at energy scales below $e^2$, and its ultimate IR fate is not completely known. In this work we assume that when monopole instantons are suppressed (to be explained in more detail below), this QED theory flows to a stable critical fixed point in the IR, as supported by recent numerics\cite{qedcft0,qedcft}. Naively there is a conserved current in the theory \begin{equation} j_{\mu}=\frac{1}{2\pi}\epsilon_{\mu\nu\lambda}\partial_{\nu}a_{\lambda}, \end{equation} which corresponds to a global $U(1)$ symmetry sometimes called $U(1)_{top}$. The conserved charge is simply the magnetic flux of the emergent $U(1)$ gauge field. One can then define operators that carry this global $U(1)_{top}$ charge, i.e. operators that create or annihilate a total gauge flux of $2\pi$. We denote these operators by $\mathcal{M}$ -- one can pictorially think of such an operator as a point in space-time surrounded by a $2\pi$-flux. This operator is not included in Eq.~\eqref{eqn:qedL}, but in principle may be included as a perturbation which explicitly breaks the $U(1)_{top}$ symmetry.
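The representation $(\gamma_0,\gamma_1,\gamma_2)=(\mu^2,\mu^3,\mu^1)$ chosen above indeed satisfies the Euclidean Clifford algebra $\{\gamma_\mu,\gamma_\nu\}=2\delta_{\mu\nu}$, which is easy to confirm numerically (Python/NumPy sketch; our own check, not part of the original text):

```python
import numpy as np

mu = [np.array([[0, 1], [1, 0]], dtype=complex),     # mu^1
      np.array([[0, -1j], [1j, 0]]),                 # mu^2
      np.array([[1, 0], [0, -1]], dtype=complex)]    # mu^3

# (gamma_0, gamma_1, gamma_2) = (mu^2, mu^3, mu^1)
gamma = [mu[1], mu[2], mu[0]]

# Euclidean Clifford algebra: {gamma_mu, gamma_nu} = 2 delta_{mu nu}
clifford_ok = all(np.allclose(gamma[i] @ gamma[j] + gamma[j] @ gamma[i],
                              2 * (i == j) * np.eye(2))
                  for i in range(3) for j in range(3))

# in this representation gamma_0 gamma_1 gamma_2 = i * identity
prod = gamma[0] @ gamma[1] @ gamma[2]
```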
In the absence of the gapless Dirac fermions (or other matter fields), it is known that such a perturbation\cite{polyakov_1977} will open a gap for the Maxwell photon and confine gauge charges. With gapless matter fields (like the Dirac fermions here) the effect of the monopole perturbation is more subtle: at large enough $N_f$ (fermion flavor number) the monopole becomes an irrelevant perturbation, but the lower critical $N_f$ is not completely known (some bounds were estimated from the F-theorem~\cite{TarunF}). Let us look at the monopoles in more detail. It is helpful to think in the large-$N_f$ limit, where gauge fluctuations are suppressed. In this case the monopole simply creates a static $2\pi$ magnetic flux in which the Dirac fermions move freely. The most relevant monopole corresponds to the ground state of these Dirac fermions, with all negative-energy levels filled and all positive-energy levels empty\footnote{All these can be made more precise through radial quantization and the state-operator correspondence, but for the purpose of this work we do not need to use those machineries.}. However, each Dirac cone also contributes a zero-energy mode (guaranteed by the Atiyah-Singer index theorem) in a $2\pi$ flux background. The filling of any of these four zero modes does not affect the energetics. However, gauge-invariance (i.e. vanishing of the overall gauge charge) requires exactly half of the zero modes to be filled\cite{kapustin_2002}. This gives in total $C_4^2=6$ distinct (but equally relevant) monopoles, schematically written as \begin{equation} \label{eqn:Mzero} \Phi\sim f^{\dagger}_if^{\dagger}_j\mathcal{M}_{bare}, \end{equation} where $f^{\dagger}_i$ creates a fermion in the zero-mode associated with $\psi_i$, and $\mathcal{M}_{bare}$ creates a ``bare'' flux quantum without filling any zero mode.
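The counting $C_4^2=6$ is just the dimension of the space of antisymmetric pairings of the four zero modes. Indeed, the six $4\times 4$ pairing matrices $(\epsilon\tau^{i})\otimes\epsilon$ and $i\,\epsilon\otimes(\epsilon\sigma^{i})$ that enter the explicit monopole definitions below are all antisymmetric (as required by Fermi statistics) and mutually orthogonal, so they form a basis of this space. A NumPy sketch of this check (our own illustration, not part of the original text):

```python
import numpy as np

pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
eps = 1j * pauli[1]     # antisymmetric rank-2 tensor epsilon

# six pairing matrices on the combined (valley x spin) index:
# three valley-triplet/spin-singlet and three valley-singlet/spin-triplet
M = [np.kron(eps @ t, eps) for t in pauli] + \
    [1j * np.kron(eps, eps @ s) for s in pauli]

# Fermi statistics requires each pairing matrix to be antisymmetric
all_antisymmetric = all(np.allclose(m.T, -m) for m in M)

# mutual orthogonality: the Gram matrix Tr(M_i^dag M_j) is proportional to
# the identity, so the six matrices span the full 6-dimensional space of
# antisymmetric 4x4 matrices (C(4,2) = 6 pairings)
gram = np.array([[np.trace(Mi.conj().T @ Mj) for Mj in M] for Mi in M])
```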
For later convenience, we define the six monopoles as \begin{align} \label{eqn:monopole_type} \Phi^\dagger_{1/2/3}=f^\dagger_{\alpha,s} (\epsilon \tau^{1/2/3})^{\alpha\beta} \epsilon^{ss'} f^\dagger_{\beta,s'}\mathcal{M}_{bare}\nonumber\\ \Phi^\dagger_{4/5/6}=i f^\dagger_{\alpha,s} (\epsilon )^{\alpha\beta}( \epsilon\sigma^{1/2/3})^{ss'} f^\dagger_{\beta,s'}\mathcal{M}_{bare} \end{align} where we refine the labels of the zero modes by valley indices $\alpha=1,2$ and spin indices $s=\uparrow,\downarrow$; $\epsilon$ is the antisymmetric rank-$2$ tensor, which is necessary because of the anticommutation relations of the $f$ operators, and $\tau,\sigma$ act on the valley/spin indices as standard Pauli matrices. The factor $i$ in the second line is necessary so that the six monopoles are related by $SU(4)$ rotations of the Dirac fermions (to be explained in more detail later). From our construction, it is straightforward to see that the first three monopoles are spin singlets, while the latter three monopoles are spin triplets. Both the Dirac sea and the zero modes contribute to properties of monopoles such as symmetry quantum numbers. One can likewise define ``anti-monopoles'' as operators inserting $-2\pi$ flux and appropriately filling Dirac zero modes. However, it is more convenient for us to simply view such operators as the ``anti-particles'', or hermitian conjugates, of the monopole operators defined above. Notice that under a $2\pi$-flux, each zero-mode behaves as a Lorentz scalar \cite{kapustin_2002} (since there is no other index responsible for higher spin), in contrast to its parent Dirac fermion (originally a spinor). This makes the monopole operator also a Lorentz scalar. \subsection{Symmetries} We now carefully examine the global symmetries of the continuum QED$_3$ theory. Clearly we have the Lorentz symmetry, together with the standard charge conjugation $\mathcal{C}$, time-reversal $\mathcal{T}$ and space reflection $\mathcal{R}_x$.
As we have discussed already, the conservation of the gauge flux corresponds to a global $U(1)$ symmetry known as the topological $U(1)_{top}$. The fermion flavor symmetry is naively $SU(4)$: $\psi_i\to U_{ij}\psi_j$ where $U\in SU(4)$, but we should remember that global symmetries, properly defined, should only act on gauge-invariant local operators. Naively one would consider fermion bilinear operators like $\bar{\psi}\sigma^{\mu}\tau^{\nu}\psi$ as the simplest gauge-invariant operators, which transform as ($15$-dimensional) $SU(4)$ adjoints. However, the monopole operators (defined in Eq.~\eqref{eqn:Mzero} or \eqref{eqn:monopole_type}) transform as a $6$-dimensional vector under $SU(4)$, or more precisely $SO(6)=SU(4)/\mathbb{Z}_2$. Notice that the monopole operator is odd under both the $SO(6)$ center ($-I_{6\times6}$) and a $\pi$-rotation in $U(1)_{top}$. More generally one can show that any local operator has to be simultaneously odd or even under the two operations -- for example a fermion bilinear $\bar{\psi}_i\psi_j$ carries no gauge flux and is even under the $SO(6)$ center. So the proper global symmetry group should be \begin{equation} \frac{SO(6)\times U(1)_{top}}{\mathbb{Z}_2} \end{equation} together with $\mathcal{C, T, R}_x$ and Lorentz symmetry. One can certainly consider $2\pi$-monopoles in higher representations of $SO(6)$, but in this work we will assume that the leading monopoles (with lowest scaling dimension) are the ones forming an $SO(6)$ vector -- this is physically reasonable and can be justified in the large-$N_f$ limit. Instead of working with the explicit definition of monopoles from Eq.~\eqref{eqn:Mzero}, we shall simply think of the monopoles as six operators $\{\Phi_1,...\Phi_6\}$ that carry unit charge under $U(1)_{top}$ and transform as a vector under $SO(6)$: $\Phi_i\to O_{ij}\Phi_j$.
Likewise we define ``anti-monopoles'' as six operators $\{\Phi_1^{\dagger},...\Phi_6^{\dagger}\}$ that also transform as an $SO(6)$ vector, but carry $-1$ charge under $U(1)_{top}$. The virtue of defining the monopole operators abstractly based on symmetry representations is that we can easily fix the $\mathcal{C, T, R}_x$ symmetry actions on the monopoles completely based on the group structure. Consider a ``bare'' time-reversal symmetry \begin{equation} \label{eqn:T0} \mathcal{T}_0: \psi\to i\gamma_0\sigma^2\tau^2\psi, \end{equation} where $\gamma$ acts on the Dirac index, $\sigma$ acts on the physical spin index, and $\tau$ acts on the ``valley'' index. The physical time-reversal symmetry $\mathcal{T}$ (to be discussed later) is in general a combination of $\mathcal{T}_0$ and some additional $SU(4)$ rotation $U_T$. Now consider the group structure of the $SO(6)\times U(1)_{top}/\mathbb{Z}_2$ symmetry and $\mathcal{T}_0$. Clearly $\mathcal{T}_0$ commutes with $U(1)_{top}$, which simply means that monopoles become anti-monopoles. $\mathcal{T}_0$ also commutes with $SU(4)$ rotations generated by $\{\sigma^{i},\tau^{i}\}$, namely the spin-valley subgroup $SO(3)_{spin}\times SO(3)_{valley}$. But for those generated by $\{\sigma^{i}\tau^{j}\}$ we have $\mathcal{T}_0U=U^{\dagger}\mathcal{T}_0$. One can then show that the only consistent implementations on the monopoles are $\mathcal{T}_0: \Phi\to \pm O_T\Phi^{\dagger}$, where \begin{equation} \label{eqn:OT} O_T=\left(\begin{array}{cc} I_{3\times 3} & 0 \\ 0 & -I_{3\times 3} \end{array}\right). \end{equation} The basis is chosen so that $\Phi_{1,2,3}$ rotate under the $SO(3)$ generated by $\tau^{i}$, and $\Phi_{4,5,6}$ rotate under that generated by $\sigma^{i}$. Importantly, $O_T\in O(6)$ but not $SO(6)$. One can likewise consider a ``bare'' reflection \begin{equation} \label{eqn:R0} \mathcal{R}_{0}: \psi(x)\to i\gamma_1\psi(\mathcal{R}x).
\end{equation} Since this symmetry commutes with $SO(6)$ rotations but flips the $U(1)_{top}$ charge, we have for the monopoles $\mathcal{R}_{0}: \Phi_i\to \Phi_i^{\dagger}$ (up to a phase factor which can be absorbed through a re-definition of $\Phi$). Finally, for the ``bare'' charge-conjugation \begin{equation} \label{eqn:C0} \mathcal{C}_0: \psi\to \sigma^2\tau^2\psi^{*}, \end{equation} we notice that it has the same commutation relation with $SO(6)$ as $\mathcal{T}_0$ and also flips the $U(1)_{top}$ charge. Therefore $\mathcal{C}_0: \Phi\to \pm O_T\Phi^{\dagger}$. {This analysis also shows that the fermion mass operators $\bar{\psi}_iT_{ij}\psi_j$ that form an adjoint representation of $SU(4)$ ($T$ is an $SU(4)$ generator) are indistinguishable from $i\Phi_i^{\dagger}A_{ij}\Phi_j$ in terms of symmetry quantum numbers, where $A$ is a real $6\times 6$ anti-symmetric matrix.} Clearly the lattice spin Hamiltonians would not have the full continuum symmetry -- typically we only have spin rotation, lattice translation and rotation, reflection and time-reversal symmetries. It was argued in Ref.~\cite{hermele2005} that the enlarged symmetry (such as $SO(6)\times U(1)_{top}/\mathbb{Z}_2$) would emerge in the IR theory, since terms breaking this symmetry down to the microscopic ones are likely to be irrelevant (as justified in a large-$N_f$ analysis). In this work we will assume that the enlarged symmetry does emerge, at least before the monopole tunnelings are explicitly added to the Lagrangian. The central question in this paper is: given a realization of a $U(1)$ Dirac spin liquid on some lattice, how do the monopoles transform under the microscopic symmetries (such as lattice translation)? Since we already know how the monopoles transform under the IR emergent symmetries (such as $SO(6)\times U(1)/\mathbb{Z}_2$), the question can be equivalently formulated as: how are the microscopic symmetries embedded into the enlarged symmetry group?
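The commutation pattern quoted above for $\mathcal{T}_0$ can be verified explicitly: for an antiunitary operation acting on the flavor indices as $\psi\to\sigma^2\tau^2\psi^{*}$, an $SU(4)$ generator $g$ is mapped to $-\sigma^2\tau^2 g^{*}\sigma^2\tau^2$, which should equal $+g$ for $g\in\{\sigma^i,\tau^i\}$ and $-g$ for $g\in\{\sigma^i\tau^j\}$; and $O_T$ of Eq.~\eqref{eqn:OT} is orthogonal with determinant $-1$. A NumPy sketch of these checks (our own, not part of the original text):

```python
import numpy as np

pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

U = np.kron(pauli[1], pauli[1])    # sigma^2 tau^2 on the (spin x valley) index

def t0_image(g):
    """Image of an SU(4) generator under conjugation by T0 = (sigma^2 tau^2) K."""
    return -U @ g.conj() @ U.conj().T

spin_valley = [np.kron(s, I2) for s in pauli] + [np.kron(I2, t) for t in pauli]
mixed = [np.kron(s, t) for s in pauli for t in pauli]

t0_commutes = all(np.allclose(t0_image(g), g) for g in spin_valley)
t0_conjugates = all(np.allclose(t0_image(g), -g) for g in mixed)

# O_T = diag(1,1,1,-1,-1,-1) has det = -1: an element of O(6) but not SO(6)
O_T = np.diag([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
det_OT = np.linalg.det(O_T)
```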
Clearly spin-rotation can only be embedded as an $SO(3)$ subgroup of the $SO(6)$ flavor group, meaning that three of the six monopoles ($\Phi_{4,5,6}$ from Eq.~\eqref{eqn:monopole_type}) form a spin-$1$ vector, and the other three ($\Phi_{1,2,3}$ from Eq.~\eqref{eqn:monopole_type}) are spin singlets. Other discrete symmetries can be realized, in general, as combinations of certain $SO(6)$ rotations followed by a nontrivial $U(1)_{top}$ rotation, and possibly some combinations of $\mathcal{C}_0, \mathcal{T}_0, \mathcal{R}_0$. (Remember that the Lorentz group acts trivially on the $2\pi$-monopoles.) In fact, in all the examples we are interested in, all those discrete symmetries commute with the spin $SO(3)$ rotation. This means that for most purposes we can focus on the $SO(3)_{spin}\times SO(3)_{valley}$ subgroup of $SO(6)$, and the realization of the discrete symmetries should only involve $SO(3)_{valley}$ and possibly $\mathcal{C,P,T}$. Many of these group elements in a symmetry realization can be fixed from the symmetry transformations of the Dirac fermions $\psi_i$, which are in turn fixed by the symmetry of the mean field ansatz in Eq.~\eqref{eqn:ansatz}, under the name of projective symmetry group (PSG)\cite{wenbook}. For example, if the symmetry operation acts on $\psi$ as $\psi\to U\psi$ with a nontrivial $U\in SU(4)$, then we know that the monopoles should also be multiplied by an $SO(6)$ matrix $O$ that corresponds to $U$. This $SO(6)$ matrix $O$ can be uniquely identified up to an overall sign, which can also be viewed as a $\pi$-rotation in $U(1)_{top}$. In practice the $SO(6)$ flavor rotation involved in a symmetry realization is always within the $SO(3)_{spin}\times SO(3)_{valley}$ subgroup. The six monopoles transform as $(1,0)\oplus (0,1)$ under this subgroup, which is the same representation as that of the six fermion bilinears $\{\bar{\psi}\sigma^i\psi,\bar{\psi}\tau^i\psi\}$.
Therefore to fix the $SO(6)$ rotation of the monopoles in a given symmetry realization it is sufficient to fix that for the six masses, also known as quantum spin Hall and quantum valley Hall masses, respectively. For the examples we are interested in, this information is also reviewed in Appendix~\ref{bilinears}. Similar logic applies to operations like $\mathcal{C, T, R}_x$. The only exception is the flux symmetry $U(1)_{top}$: there is no information regarding $U(1)_{top}$ in the PSG. Fixing the possible $U(1)_{top}$ rotations in the implementations of the microscopic discrete symmetries is the main task of this work. The difficulty in fixing the $U(1)_{top}$ factor in a symmetry transform lies in its UV nature: intuitively, the $U(1)_{top}$ phase factor comes from the Berry phase accumulated when a monopole moves on the lattice scale, in a nontrivial ``charge background"\cite{HaldaneBerry, ReSaSUN}. This lattice-scale feature is not manifested directly in simple objects in the IR (such as the Dirac fermions). In previous studies such phase factors were decided numerically\cite{alicea_2008,ranvishwanathlee,shortpaper}. In this paper we will develop several different analytical methods to calculate such phase factors. We should emphasize here that the questions addressed in this work are kinematic (rather than dynamical) in nature: i.e. we are interested in the qualitative properties of the monopoles, such as symmetry representations, rather than quantitative properties such as scaling dimensions. We will introduce, at various stages of our argument, assumptions that are only justified in certain limits (such as at large-$N_f$); importantly, although these assumptions will not be completely satisfied, they will provide a rationale for selecting an answer, typically from a discrete set of possibilities.
Some of the particularly important assumptions that follow from this treatment are: (1) the most relevant monopole operators are those that transform as an $SO(6)$ vector and Lorentz scalar -- this is physically reasonable but justifiable only in large-$N_f$, (2) when perturbed by an adjoint mass $\bar{\psi}\sigma^{\mu}\tau^{\nu}\psi$, the small mass limit is adiabatically connected with the large mass limit which describes lattice scale physics and the Wannier limit (roughly speaking, this means that the adjoint mass is not only relevant, but flows all the way to infinity in the IR)\footnote{ The flow to infinity is smooth with no singularity/fixed point.} -- another physically reasonable assumption that is justified in large-$N_f$, and (3) the $U(1)_{top}$ phase factors in the microscopic symmetry realizations are decided completely by the mean-field theory of the spinons (which is a free fermion theory) -- gauge fluctuations only modify other quantitative features (such as scaling dimensions) but not the (discrete) symmetry properties. In particular, assumption (3) may not always be valid (depending on microscopic details), but when it is not valid, the parton construction combined with the mean-field description is itself not likely to provide a reasonable starting point to describe the phase. \subsection{Time-reversal, reflection and band topology} \label{dimred} We now derive some general results for time-reversal and reflection symmetries that will be generally applicable in all the systems we are interested in. Essentially, with the help of the exact $SO(3)$ spin rotation symmetry, the $U(1)_{top}$ phase factors associated with time-reversal and reflection can be uniquely fixed. Since the $U(1)_{top}$ phase factor comes from UV physics, we can deform the QED$_3$ theory with a fermion mass to make the theory IR trivial, so that we can focus on the UV part.
Consider perturbing with a mass term \begin{equation} \label{eqn:adjointmass} \Delta\mathcal{L}=m\bar{\psi}_iT_{ij}\psi_j, \end{equation} where $T_{ij}$ is chosen to be an $SU(4)$ generator without loss of generality. The fermions are now gapped, and there is a pure Maxwell $U(1)$ gauge theory left in the IR. The zero-modes associated with the monopoles are lifted (according to the signs of the fermion masses), which removes the six-fold degeneracy of the monopole completely, so that there is only one gapless monopole left in the Maxwell theory. The identity of the surviving monopole is fixed again by symmetry. The mass term breaks the flavor symmetry from $SO(6)$ to $SO(4)\times SO(2)$. If we probe the theory with an $SO(4)\times SO(2)$ gauge field $\mathcal{A}^{SO(4)}+A^{SO(2)}$, the massive fermions generate a topological term \begin{equation} \label{eqn:qshtop} \mathcal{L}_{top}=\frac{{\rm sgn}(m)}{2\pi}A^{SO(2)}da. \end{equation} This means that the monopole (now unique) carries $\pm 1$ charge under the $SO(2)$ generated by $T$ in Eq.~\eqref{eqn:adjointmass} and is a singlet under the remaining $SO(4)$ -- this uniquely fixes the identity of the monopole among the six degenerate ones in the gapless phase. The argument will also be useful for deciding the nature of the symmetry-breaking phase when a mass perturbation is turned on, since the monopole will eventually spontaneously condense in the Maxwell theory, possibly breaking further symmetries (such as the $SO(2)$ here). Now as long as the relevant symmetry, let us call it $g$, is not broken by the mass perturbation, the $U(1)_{top}$ phase factor $U_{top}^g$ associated with the implementation of $g$ will not be affected. We are thus left with the simpler problem of finding the Berry phase of a non-degenerate monopole moving in an insulating (gapped) charge background, which is essentially determined by the topology of the insulator.
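The counting behind the residual $SO(4)\times SO(2)$ flavor symmetry can be checked directly at the level of the fermion representation: of the 15 generators of $SU(4)$, exactly $7=\dim[SO(4)\times SO(2)]=6+1$ commute with the adjoint mass. A minimal sketch (Python/NumPy; the representative choice $T=\sigma^3\otimes\mathbb{1}$, i.e. the quantum spin Hall mass, is ours):

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
pauli = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

# The 15 SU(4) generators: sigma^i x 1, 1 x tau^j, sigma^i x tau^j
mats = [I2] + pauli
gens = [np.kron(a, b) for a, b in product(mats, mats)][1:]  # drop 1 x 1
assert len(gens) == 15

# A representative adjoint mass: the quantum spin Hall mass sigma^3 x 1
T = np.kron(pauli[2], I2)

# Generators commuting with T span the unbroken flavor algebra:
# dim[SO(4) x SO(2)] = 6 + 1 = 7
commutant = [G for G in gens if np.allclose(G @ T, T @ G)]
assert len(commutant) == 7
```

Any other adjoint mass is related to this one by an $SU(4)$ rotation, so the count is representative.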
Since the other monopoles are related to this monopole by some $SO(6)$ flavor rotations, their symmetry transformations are fixed once we obtain the transformation of this particular monopole. It turns out to be particularly useful, for all the examples to be considered in this work, to consider a ``quantum spin Hall" (QSH) mass perturbation \begin{equation} \Delta\mathcal{L}_{QSH}=m\bar{\psi}\sigma^3\psi, \end{equation} where $\sigma^3$ acts in the spin but not valley index. This term breaks the spin $SO(3)$ rotation down to $SO(2)$ and generates a mutual spin-charge Hall response as in Eq.~\eqref{eqn:qshtop}, so that the low energy monopole operator transforms as $S_z={\rm sgn}(m)$ -- in our notation this monopole is denoted as $\Phi_4\pm i\Phi_5$. Crucially, this term leaves all other discrete symmetries unbroken -- except for reflection symmetries $\mathcal{R}$, but it is still a symmetry when combined with a spin-flip operation ${\mathcal R}'=\sigma^2{\mathcal R}$. Since the $U(1)_{top}$ factors can be nontrivial only for lattice symmetries and time reversal, they can all be determined by considering monopole quantum numbers in this QSH insulator. For example, it is well known that the QSH insulator is also a $\mathbb{Z}_2$ topological insulator\cite{KaneMele}, protected by the Kramers time-reversal symmetry $\mathcal{T}: f\to i\sigma^2f$. This fixes the transformation of the monopole under time-reversal to be: \begin{equation} \label{eqn:monopoleTR} \mathcal{T}: \mathcal{M}\to -\mathcal{M}^{\dagger}, \end{equation} where the non-triviality of the topological insulator is manifested in the minus sign. This can be seen most easily through the edge state of the QSH insulator \begin{equation} \label{eqn:edge} H_{edge}=\int dx \chi^{\dagger}i\sigma^2\partial_x\chi, \end{equation} where $\chi$ is a two-component Dirac fermion in $(1+1)d$, with time-reversal acting as $\chi\to i\sigma^2\chi$ that forbids any mass term.
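That this Kramers time-reversal forbids any edge mass can be verified in a few lines: an antiunitary $\chi\to U\chi$ sends the bilinear $\chi^{\dagger}M\chi$ to $\chi^{\dagger}U^{\dagger}M^{*}U\chi$, and with $U=i\sigma^2$ the kinetic structure is preserved while both gapping masses $\sigma^1,\sigma^3$ change sign ($\sigma^2$ itself only shifts the band crossing and does not open a gap). A minimal sketch (Python/NumPy, our conventions):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]])

U = 1j * s2  # unitary part of T: chi -> i sigma^2 chi (plus conjugation)

def T_image(M):
    # an antiunitary T maps chi^dag M chi to chi^dag U^dag M* U chi
    return U.conj().T @ M.conj() @ U

# the kinetic structure i sigma^2 (in chi^dag i sigma^2 d_x chi) is preserved...
assert np.allclose(T_image(1j * s2), 1j * s2)
# ...while both gapping masses are odd: no T-invariant edge mass exists
for M in (s1, s3):
    assert np.allclose(T_image(M), -M)
```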
In the QSH state a monopole tunneling event will transfer one left-moving fermion into a right-moving one, which is nothing but the physics of axial anomaly. This leads to the operator identification on the edge $\mathcal{M}\sim \chi^{\dagger}(\sigma^1+i\sigma^3)\chi$, from which Eq.~\eqref{eqn:monopoleTR} follows. Reflection symmetry can be discussed in a similar manner. If the reflection does not involve charge conjugation, the monopole simply transforms as \begin{equation} \mathcal{R}_x: \mathcal{M}\to e^{i\theta_{\mathcal{R}}}\mathcal{M}^{\dagger}, \end{equation} where the overall phase factor can change by a re-definition of $\mathcal{M}$ and is therefore physically meaningless -- unless there is more than one reflection axis, in which case the relative phases in the transforms become meaningful. Charge conjugation symmetry (if it exists) alone is not interesting for the same reason. It becomes more interesting, and turns out to be much simpler, when a reflection involves an extra charge conjugation operation, denoted as $\mathcal{C}{\mathcal R}$. Under this symmetry, the monopole is mapped to itself, possibly with a sign \begin{equation} \label{eqn:CRmonopole} \mathcal{CR}: \mathcal{M}\to \pm \mathcal{M}, \end{equation} where we have assumed that $(\mathcal{CR})^2=1$ on local operators. The two different signs in the above transformation are physically distinct and represent different topologies of the underlying insulators. In a quantum spin Hall insulator, it turns out to be particularly simple to tell if the insulator is also nontrivial under an additional $\mathcal{CR}$: it is trivial if $(\mathcal{CR})^2\psi=-\psi$, and nontrivial if $(\mathcal{CR})^2\psi=+\psi$ (which then leads to the nontrivial sign in Eq.~\eqref{eqn:CRmonopole}). In particular, the ``bare" $\mathcal{CR}$ symmetry defined in Eq.~\eqref{eqn:C0} and \eqref{eqn:R0} squares to one and therefore has a nontrivial transformation.
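The correlation between $(\mathcal{CR})^2$ and the band topology can be made concrete at the edge. For an implementation $\chi(x)\to U\chi^{\dagger}(-x)$, one has $(\mathcal{CR})^2=UU^{*}$ on the fermions, and fermion reordering shows that a mass $\chi^{\dagger}M\chi$ is invariant (up to a constant) iff $U^{T}M^{T}U^{*}=-M$. The hedged sketch below (Python/NumPy; the two implementations and sign conventions are ours) checks that the choice squaring to $+1$ forbids both gapping masses, while the choice squaring to $-1$ allows them:

```python
import numpy as np

s2 = np.array([[0, -1j], [1j, 0]])
masses = [np.array([[0, 1], [1, 0]]),    # sigma^1
          np.array([[1, 0], [0, -1]])]   # sigma^3
I2 = np.eye(2)

def CR_squared(U):
    # applying chi -> U chi* twice gives chi -> U U* chi
    return U @ U.conj()

def mass_allowed(U, M):
    # under chi(x) -> U chi*(-x), fermion reordering gives
    # chi^dag M chi -> const - chi^dag U^T M^T U* chi,
    # so invariance requires U^T M^T U* = -M
    return np.allclose(U.T @ M.T @ U.conj(), -M)

U_plus, U_minus = I2, s2                      # squaring to +1 and -1
assert np.allclose(CR_squared(U_plus), I2)    # (CR_+)^2 = +1
assert np.allclose(CR_squared(U_minus), -I2)  # (CR_-)^2 = -1
for M in masses:
    assert not mass_allowed(U_plus, M)  # forbids both masses: nontrivial
    assert mass_allowed(U_minus, M)     # allows a mass: trivial
```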
The easiest way to understand the above statement is to consider the edge state Eq.~\eqref{eqn:edge} that preserves charge, spin-$S_z$ and $\mathcal{CR}$ symmetry. There are two different ways to implement a $\mathcal{CR}$ symmetry: $\mathcal{CR}_+: \chi(x)\to \chi^{\dagger}(-x)$ or $\mathcal{CR}_-: \chi(x)\to \sigma^2\chi^{\dagger}(-x)$, where $(\mathcal{CR}_+)^2=1$ and $(\mathcal{CR}_-)^2=-1$ on the fermions. It is easy to see that $\mathcal{CR}_+$ forbids a Dirac mass term, making the insulator also nontrivial under $\mathcal{CR}_+$, while $\mathcal{CR}_-$ does not forbid any mass term and is therefore trivial. The monopole transformation under $\mathcal{CR}$ in Eq.~\eqref{eqn:CRmonopole} can be obtained through the operator identification $\mathcal{M}\sim \chi^{\dagger}(\sigma^1+i\sigma^3)\chi$. This result is perhaps natural when we think of $\mathcal{CR}$ as obtained from Wick-rotating time-reversal symmetry $\mathcal{T}$ using the $\mathcal{C}{\mathcal R}\mathcal{T}$ theorem\cite{wittenreview}, where $\mathcal{T}^2=(-1)^F$ rotates to $(\mathcal{C}{\mathcal R})^2=+1$. It is now natural to ask what would happen if we had a ``quantum valley Hall" mass $\bar{\psi}\tau^i\psi$ instead. Now since the full spin $SO(3)$ is unbroken, the insulator cannot have the band topology of topological insulators (under either $\mathcal{T}$ or $\mathcal{CR}$) -- this is simply the famous statement that a topological insulator requires spin-orbit coupling. Therefore time-reversal and reflection (or $\mathcal{CR}$) should act trivially on the spin singlet monopoles selected by the valley Hall masses.\footnote{One might wonder why our previous argument for the quantum spin Hall insulator cannot be used for the quantum valley Hall insulator -- after all the time-reversal action $\mathcal{T}_0$ appeared to be democratic between valley and spin indices. Crucially, in our systems the continuous valley symmetry is never exact in the UV, but can only be emergent in the IR.
This has to do with a mixed anomaly between the valley $SO(3)_{valley}$ and spin $SO(3)_{spin}$ in the continuum field theory, which is one manifestation of the parity anomaly. This means that one cannot use the previous argument here, since the link between quantum spin Hall and time-reversal topological insulating behavior relies crucially on the exactness of the spin rotation symmetry -- in particular, the edge state argument assumes that there is no anomaly associated with the symmetries involved (a point dubbed ``edgability" in Ref.~\cite{WS2013}). } To summarize, the two symmetries $\mathcal{T}_0$ and $\mathcal{CR}_0$ defined in Eq.~\eqref{eqn:T0},~\eqref{eqn:C0} and \eqref{eqn:R0} act on the six monopoles as \begin{eqnarray} \label{T0CR0} \mathcal{T}_0: && \left(\begin{array}{c} \Phi_{1,2,3}\\ \Phi_{4,5,6} \end{array} \right) \to \left(\begin{array}{c} \Phi_{1,2,3}^{\dagger}\\ -\Phi_{4,5,6}^{\dagger} \end{array} \right), \nonumber\\ \mathcal{CR}_0: && \left(\begin{array}{c} \Phi_{1,2,3}\\ \Phi_{4,5,6} \end{array} \right) \to \left(\begin{array}{c} \Phi_{1,2,3}\\ -\Phi_{4,5,6} \end{array} \right). \end{eqnarray} The physical time-reversal and reflection may further involve additional $SO(6)$ rotations or charge conjugations, which can be included straightforwardly. Next we turn to the more complicated symmetries including lattice translation and rotations. We first discuss the simpler cases on bipartite lattices. \section{Monopole quantum numbers I: bipartite lattices} \label{MQNBipartite} \subsection{Monopole quantum numbers constrained by QCD$_3$} \label{MQNQCD} On bipartite lattices, at least for the examples considered in this work, we can always continuously tune the mean field Hamiltonian Eq.~\eqref{eqn:ansatz}, without breaking any symmetry or changing the low-energy Dirac dispersion, to reach a point with particle-hole symmetry: \begin{equation} \mathcal{C}: f_{i,\alpha}\to (-1)^ii\sigma^2_{\alpha\beta}f^{\dagger}_{i,\beta}.
\end{equation} This theory will then have a larger gauge symmetry of $SU(2)_g$, with $(f_{\alpha},i\sigma^y_{\alpha\beta}f^{\dagger}_{\beta})^T$ forming an $SU(2)_g$ fundamental (anti-fundamental) on each site in A-sublattice (B-sublattice) for each spin $\alpha$. The low energy theory again has four Dirac cones, with two valleys, each forming a fundamental under both gauge $SU(2)_g$ and spin $SU(2)_s$. The continuum field theory of such a state, described by an $SU(2)$ gauge field coupled to four Dirac cones, is also known as QCD$_3$ with $N_f=2$. The Lagrangian is given by \begin{equation} \mathcal{L}_{QCD}=\sum_{i=1}^{2}\bar{\chi}_i(i\slashed{\partial}+\slashed{a}^{SU(2)})\chi_i. \end{equation} At the lattice scale the $SU(2)$ gauge symmetry can be Higgsed down to $U(1)$ by reinstating the particle-hole symmetry breaking hopping (which could be weak), and our familiar $U(1)$ Dirac spin liquid will be recovered at low energy. However, it turns out to be very useful to consider an intermediate theory in which the $SU(2)$ gauge symmetry is Higgsed down to $U(1)$, but the particle-hole symmetry survives as a global $\mathbb{Z}^C_2$ symmetry. At low energy this theory also flows to QED$_3$, but with an extra $\mathbb{Z}_2^C$ symmetry compared to the $U(1)$ Dirac spin liquid. In fact this theory does not faithfully represent a lattice spin system due to the extra $\mathbb{Z}^C_2$. However, it is a perfectly well-defined lattice gauge theory, and can represent a lattice spin system if the spin-rotation symmetry is enlarged from $SO(3)$ to $O(3)\sim SO(3)\times\mathbb{Z}^C_2$. In the continuum theory this intermediate QED$_3$ can be obtained from the QCD$_3$ theory by condensing a Higgs field that carries spin-$1$ of the gauge $SU(2)$ and is even under $\mathbb{Z}_2^C$. This Higgs condensation does not break any global symmetry of the QCD$_3$ and the low energy Dirac dispersion is not affected.
We can then safely view the continuum QCD$_3$ field theory (which is free in the UV), instead of the original lattice theory, as the UV completion of the intermediate QED$_3$ theory. As we shall see below, the virtue of this alternative UV completion is that the QCD$_3$ theory is much easier to understand than the full lattice theory. The QCD theory has the standard Lorentz and $\mathcal{T, R}_x$ symmetries. It may not necessarily flow to a conformal fixed point\cite{KNQCD}, but this should not be important for our discussion since we are not interested in the ultimate IR fate of this theory. The flavor symmetry of QCD$_3$ at $N_f=2$ is $SO(5)$, which acts on the fermions as $Spin(5)=Sp(4)$. Crucially, there is no additional topological symmetry since the flux of $SU(2)$ gauge field is not conserved. Now we notice that the implementation of the microscopic (continuous or discrete) symmetries in the QCD$_3$ theory is completely fixed by the symmetry transform of the Dirac fermions $\chi$, due to the absence of any gauge flux conservation. For example, a nontrivial $Sp(4)$ transform on $\chi$ maps to a unique $SO(5)$ transform on gauge-invariant operators such as fermion bilinears. Notice that in the $SU(4)\to SO(6)$ mapping discussed in the QED$_3$ context we still had a sign ambiguity due to the existence of $SO(6)$ center $-I_{6\times6}$. That ambiguity is absent here since $SO(5)$ has no center. The bottom line is that we know completely how the microscopic symmetries are embedded into the symmetries of the continuum QCD$_3$ field theory. Once we reach the QCD description, the exact nature of these symmetries at the lattice scale is no longer important -- we simply view them as part of the $SO(5)\times Lorentz\times \mathcal{T,R}_x$ symmetry. 
Now we Higgs the $SU(2)$ gauge symmetry down to $U(1)$, and far below the Higgs scale we obtain the intermediate QED$_3$ theory which has a larger emergent symmetry including the $SO(6)\times U(1)/\mathbb{Z}_2$ and $\mathbb{Z}_2^C$ symmetries. We now show that there is a unique way to embed the symmetries of QCD ($SO(5)$ and $\mathcal{R, T}$) into the symmetries of QED. This will in turn fix the embedding of the microscopic symmetries into the symmetries of QED. First, it is obvious that there is a unique way to embed the continuous $SO(5)$ symmetry of QCD to $SO(6)\times U(1)/\mathbb{Z}_2$ of QED, up to re-ordering of operators: five of the six monopoles should transform as an $SO(5)$ vector and the remaining one (call it $\Phi_{trivial}$) should be an $SO(5)$ singlet, or $6=1\oplus 5$. Since the microscopic spin rotation symmetry $SO(3)_{spin}$ must be part of the $SO(5)$, the $SO(5)$ singlet monopole $\Phi_{trivial}$ must also be a spin singlet, i.e. it is a combination of $\Phi_{1,2,3}$. Crucially, there is no nontrivial $U(1)_{top}$ phase factor involved in the realization of the $SO(5)$ symmetry on the monopoles. Now consider the bare time-reversal and reflection-conjugation defined in Eq.~\eqref{eqn:T0},~\eqref{eqn:C0} and \eqref{eqn:R0}. As we argued before they act on the monopoles as Eq.~\eqref{T0CR0}. This immediately implies that the $SO(5)$ singlet monopole $\Phi_{trivial}$ also transforms trivially under $\mathcal{T}_0$ and $\mathcal{CR}_0$. The physical time-reversal and reflection symmetry may involve a further $SO(5)$ flavor rotation, but this will not affect $\Phi_{trivial}$ since it is an $SO(5)$ singlet. For the charge-conjugation symmetry $\mathcal{C}$, we expect $\Phi_{trivial}\to e^{i\theta}\Phi_{trivial}^\dagger$ for some phase factor $e^{i\theta}$ since $\mathcal{C}$ cannot mix $SO(5)$ singlet with $SO(5)$ vector, and the phase can be chosen to be trivial by redefining the monopole operators. 
In this case ${\rm Re}\Phi_{trivial}\equiv(\Phi_{trivial}+\Phi_{trivial}^{\dagger})/2$ is trivial under $\mathcal{C}$ (in later examples sometimes the opposite convention is chosen). We then conclude that ${\rm Re}\Phi_{trivial}$ is trivial under all microscopic symmetries in the intermediate QED$_3$ theory. We now consider the actual $U(1)$ Dirac spin liquid of interest to us. This can be obtained from the intermediate QED$_3$ by explicitly breaking the $\mathcal{C}$ symmetry. It is also possible, as we shall see on square lattice, that some other symmetries such as translation $T_{1,2}$ and time-reversal $\mathcal{T}$ are also broken, but the combinations $T_{1,2}\mathcal{C}$ and $\mathcal{T}\mathcal{C}$ are preserved. In any case, the symmetry-breaking term does not change the low-energy Dirac dispersion (except velocity anisotropy) and is expected to be irrelevant. Therefore we do not expect any change in monopole symmetry quantum numbers -- as long as the symmetries are still unbroken. In particular, the trivial monopole $\Phi_{trivial}$, or at least ${\rm Re}\Phi_{trivial}\equiv(\Phi_{trivial}+\Phi_{trivial}^{\dagger})/2$, should still transform trivially under all global symmetries. In summary, on bipartite lattices there is always one monopole operator (at least the real or imaginary part of it) that transforms trivially under all microscopic symmetries. The reasoning is summarized in Fig.~\ref{fig:Flow}. Although we have emphasized the bipartiteness of the lattices in our argument, the discussion above showed that what really mattered was whether the $U(1)$ spin liquid mean-field ansatz could be upgraded to an $SU(2)$ gauge theory. One could certainly consider mean-field ansatz on bipartite lattices that are ``intrinsically non-bipartite", meaning they cannot be adiabatically tuned to have an $SU(2)$ gauge structure.
One could also consider ansatz on non-bipartite lattices that are compatible with $SU(2)$ gauge structure, for example by making all the hoppings imaginary (giving up time-reversal symmetry; although this would typically induce a gap). Our dichotomy on bipartiteness should be applied with care in those scenarios. \begin{figure} \captionsetup{justification=raggedright} \begin{center} \adjustbox{trim={0.12\width} {0\height} {0\width} {0\height},clip} {\includegraphics[width=1.3\columnwidth]{Flow.pdf}} \end{center} \caption{For the Dirac spin liquid on bipartite lattices (honeycomb and square) we can view the continuum QCD$_3$ $+$ Higgs field theory (instead of the original lattice theory) as the UV completion of the QED$_3$ theory. The implementation of microscopic symmetries (at the lattice scale) in the QCD theory can be fixed by PSG analysis, while the implementation of QCD symmetries in the QED theory is also uniquely fixed by field theory analysis. This uniquely fixes the implementation of microscopic symmetries in the QED ($U(1)$ Dirac spin liquid) theory.} \label{fig:Flow} \end{figure} The above argument also completely fixes all the monopole quantum numbers on bipartite lattices. We now look at honeycomb and square lattices in detail. \subsubsection{Honeycomb lattice} The uniform hopping mean-field ansatz on honeycomb gives the $N_f=4$ QED$_3$ low-energy theory. The Dirac points stay at momenta $\mathbf K=(\frac{2\pi}{3},\frac{2\pi}{3}),\mathbf K'=-\mathbf K$. In an appropriate basis, the low-energy effective Hamiltonian takes the standard Dirac form.
The physical symmetries act as \begin{align} \label{eqn:honey_psg} T_{1/2}&: \psi\rightarrow e^{-i\frac{2\pi}{3}\tau^3} \psi\quad C_6: \psi\rightarrow -i e^{-i\frac{\pi}{6}\mu^3} \tau^1 e^{-i\frac{2\pi}{3}\tau^3} \psi \nonumber\\ R_x&: \psi\rightarrow -\mu^2\tau^2 \psi\quad R_y: \psi\rightarrow \mu^1\tau^3 \psi \nonumber\\ \mathcal T&: \psi\rightarrow -i\sigma^2\mu^2\tau^2 \psi \quad \mathcal C: \psi\rightarrow i\mu^1\tau^1\sigma^2\psi^* \end{align} where $\mu^i$ are Pauli matrices acting on the Dirac spinor index, $T_{1/2}$ is the translation along two basis vectors with $2\pi/3$ angle between them, $C_6$ is $\pi/3$ rotation around a center of a honeycomb plaquette, and $R_{x/y}$ denote the reflections perpendicular to the unit-cell direction and to the axis orthogonal to it, respectively. As an illustrative example, let us consider the $C_6$ rotation (the most nontrivial symmetry here). In general the six monopoles could transform as $\Phi_i\to e^{i\theta_{C_6}}O_{ij}\Phi_j$ with $e^{i\theta_{C_6}}$ an overall phase and the $SO(6)$ matrix $O$ given by \begin{equation} O=\left( \begin{array}{cccccc} -\cos(2\pi/3) & \sin(2\pi/3) & 0 & 0 & 0 & 0 \\ \sin(2\pi/3) & \cos(2\pi/3) & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 \end{array}\right). \end{equation} The form of the matrix $O_{ij}$ is fixed by the symmetry transforms of the Dirac fermions $C_6: \psi\to \tau^1\exp\left(-i\frac{2\pi}{3} \tau^3 \right)\psi$, up to an overall sign that can be absorbed into the overall phase factor. Our argument implies that the phase factor $e^{i\theta_{C_6}}$ should be chosen so that the total transform takes the form \begin{equation} \left( \begin{array}{cc} 1 & 0 \\ 0 & SO(5) \end{array} \right), \end{equation} with the trivial monopole being a spin singlet.
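As a standalone numerical sanity check on this matrix (Python/NumPy, entries copied from above): $O$ is orthogonal with $\det O=+1$, hence an element of $SO(6)$, and it already leaves the spin-singlet monopole $\Phi_3$ invariant.

```python
import numpy as np

c6, s6 = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
O = np.zeros((6, 6))
O[:2, :2] = [[-c6, s6], [s6, c6]]   # mixes Phi_1, Phi_2
O[2, 2] = 1.0                       # fixes Phi_3
O[3, 3] = O[4, 4] = O[5, 5] = -1.0  # flips the spin triplet

assert np.allclose(O @ O.T, np.eye(6))    # orthogonal
assert np.isclose(np.linalg.det(O), 1.0)  # proper: O lies in SO(6)
e3 = np.eye(6)[2]
assert np.allclose(O @ e3, e3)            # Phi_3 is left invariant
```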
Since $O$ already takes the above form (with $\Phi_3$ being a trivial singlet), the additional $U(1)_{top}$ phase factor must be trivial. Focusing on the spin triplet monopoles ($\Phi_{4,5,6}$), this reproduces the result previously obtained through numerical calculations\cite{ran_2008}. The other symmetry actions can be determined using the same logic. The results are tabulated in Table~\ref{table:honeycomb_monopole}. \begin{widetext} \begin{table*} \captionsetup{justification=raggedright} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline
& $T_1$ & $T_2$ & $ R$ & $ C_6$ & $\mathcal T$\\%& note \\
\hline
$\Phi_1^\dagger$ & \multicolumn{2}{c|}{$ \cos(\frac{2\pi}{3}) \Phi_1^\dagger-\sin(\frac{2\pi}{3}) \Phi_2^\dagger$} &$\Phi_1$ & $ -\cos(\frac{2\pi}{3}) \Phi_1^\dagger+\sin(\frac{2\pi}{3}) \Phi_2^\dagger$ & $\Phi_1$\\%& VBS order, $ Im[\Phi_1]$ transforms as $\overline\psi\tau^2\psi$\\
$\Phi_2^\dagger$ & \multicolumn{2}{c|}{$ \cos(\frac{2\pi}{3}) \Phi_2^\dagger+\sin(\frac{2\pi}{3}) \Phi_1^\dagger$} & $-\Phi_2$ & $ \cos(\frac{2\pi}{3}) \Phi_2^\dagger+\sin(\frac{2\pi}{3}) \Phi_1^\dagger$& $\Phi_2$\\%&VBS order, $Im[\Phi_2]$ transforms as $\overline\psi\tau^1\psi$\\
$\Phi_3^\dagger$ & $\Phi_3^\dagger$& $\Phi_3^\dagger$&$\Phi_3$ & $\Phi_3^\dagger$ & $\Phi_3$\\%& $Re[\Phi_3]$ trivial, $ Im[\Phi_3]$ odd under $R$\\
$\Phi_{4/5/6} ^\dagger$ & $\Phi_{4/5/6}^\dagger $& $\Phi_{4/5/6}^\dagger $& $-\Phi_{4/5/6} $ &$-\Phi_{4/5/6}^\dagger $ & $-\Phi_{4/5/6}$\\%& Spin order, $ Im[\Phi_1]$ as $\overline \psi\tau^3\ \sigma^{1/2/3}\psi$\\
\hline \end{tabular}
\caption{The transformation of monopoles on honeycomb lattice. $T_{1/2}$ is translation along two basic lattice vectors, $R$ is reflection perpendicular to the axis defined by a unit cell (shown in Fig.~\ref{fig:honeycomb2}), $C_6$ is the six-fold rotation around the center of a plaquette. There is a trivial monopole, i.e., the third monopole $\Phi_3$.
} \label{table:honeycomb_monopole} \end{center} \end{table*} \end{widetext} An important conclusion is that the operator $\Phi_3+\Phi_3^{\dagger}$ transforms trivially under all physical symmetries, and is therefore allowed as a perturbation to the QED$_3$ theory. \subsubsection{Square lattice} We consider the staggered flux state on the square lattice, with lattice hopping amplitudes (in a certain gauge) \begin{equation} \label{eqn:SFhopping} t_{i,i+\hat{y}}=\exp[i(-1)^{x+y}\theta/2]t, \hspace{5pt} t_{i,i+\hat{x}}=(-1)^yt. \end{equation} The staggered flux state can be continuously tuned (without changing the low-energy Dirac dispersion) to have $\theta=0$, also known as the $\pi$-flux state. The $\pi$-flux state has all the symmetries of the staggered flux state, together with an additional particle-hole symmetry. In fact the $\pi$-flux state has an $SU(2)$ gauge symmetry and the unit cell contains two sites (sublattice $A,B$) with a vertical link. As far as monopole quantum number is concerned, there is no distinction between the two except for the particle-hole symmetry which does not exist in the staggered flux state. We will therefore calculate the monopole quantum number in the $\pi$-flux state, which is simpler. There are two gapless points in the reduced Brillouin zone at $\bf Q=(\pi/2,\pi), \bf Q'=-\bf Q$. The low-energy theory takes the standard Dirac form in an appropriate basis.
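The flux pattern behind these names can be read off directly from Eq.~\eqref{eqn:SFhopping}: the product of hoppings around an elementary plaquette carries phase $\pi\mp\theta$ on the two plaquette sublattices, reducing to a uniform $\pi$ flux at $\theta=0$. A minimal numerical sketch (Python; the counterclockwise loop orientation is our convention):

```python
import cmath
import math

def flux(x, y, theta, t=1.0):
    """Phase of the product of hoppings (counterclockwise) around the square
    plaquette with lower-left site (x, y), for the staggered-flux ansatz
    t_y = exp[i (-1)^(x+y) theta/2] t and t_x = (-1)^y t."""
    tx = lambda x, y: (-1) ** y * t
    ty = lambda x, y: cmath.exp(1j * (-1) ** (x + y) * theta / 2) * t
    P = (tx(x, y) * ty(x + 1, y)
         * tx(x, y + 1).conjugate() * ty(x, y).conjugate())
    return cmath.phase(P)

# flux is pi -+ theta on the two plaquette sublattices...
theta = 0.3
for x in range(4):
    for y in range(4):
        expected = math.pi - (-1) ** (x + y) * theta
        d = (flux(x, y, theta) - expected) % (2 * math.pi)
        assert min(d, 2 * math.pi - d) < 1e-12
# ...and uniformly pi in the pi-flux limit theta = 0
assert abs(abs(flux(0, 0, 0.0)) - math.pi) < 1e-12
```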
In the $\pi$-flux phase, we can write the (projective) symmetry realizations on the Dirac fermions as \begin{align} \label{eqn:square_psg} T_1&: \psi\rightarrow i\tau^3\psi\quad T_2: \psi\rightarrow i\tau^1\psi\nonumber\\ R_x&: \psi\rightarrow \mu^3\ \tau^3\ \psi\quad C_4: \psi\rightarrow e^{i\frac{\pi}{4}\mu^1}e^{-i\frac{\pi}{4}\tau^2} \psi\nonumber\\ \mathcal T&: \psi\rightarrow i\mu^2 \sigma^2\ \tau^2\ \psi, \quad \end{align} together with a particle-hole symmetry: \begin{equation} \mathcal C: \psi\rightarrow i \mu^3 \sigma^2\ \psi^*, \end{equation} where $\mu^i$ are Pauli matrices acting on the Dirac spinor index, and $C_4$ means a four-fold rotation around a lattice site. If we turn on a nonzero $\theta$ in Eq.~\eqref{eqn:SFhopping} and convert the state to the staggered flux phase, $\mathcal{C},T_{1,2}, C_4, \mathcal{T}$ will be broken, but $\mathcal{C}T_{1,2}, \mathcal{C}C_4, \mathcal{C}\mathcal{T}$ will be preserved and provide the realizations of the physical symmetries. The monopole quantum numbers can now be deduced using the argument outlined above. The results are tabulated in Table~\ref{table:square_monopole}. To be concrete, we start with the $\pi$-flux state (first four rows in Table~\ref{table:square_monopole}). The $C_4$ operation, based on its action on the Dirac fermions, should act on the monopoles as $C_4: \Phi_i\to e^{i\theta_{C_4}}O^{C_4}_{ij}\Phi_j$, where \begin{equation} O^{C_4}=\left( \begin{array}{cccccc} 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right). \end{equation} Based on the argument outlined before, we should have $e^{i\theta_{C_4}}=1$ and $\Phi_2$ is the $SO(5)$ singlet monopole. This in turn fixes the actions of $T_{1,2}$ on monopoles.
For example, the action of $T_1$ on the Dirac fermions requires that $T_1: \Phi_i\to e^{i\theta_{T_1}}O^{T_1}_{ij}\Phi_j$ where \begin{equation} O^{T_1}=\left( \begin{array}{cccccc} -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right). \end{equation} Since $\Phi_2$ should be a singlet under flavor symmetries, we must have $e^{i\theta_{T_1}}=-1$, which gives the transformation tabulated in Table~\ref{table:square_monopole}. The combined symmetry action $\mathcal{CR}_x$, as we discussed before, can act on the monopoles with potentially nontrivial Berry phases. For this purpose we can view this symmetry as a combination of $\mathcal{CR}_0$ (as defined in Eq.~\eqref{eqn:C0} and \eqref{eqn:R0}) and a flavor rotation $\psi\to i\tau^1\psi$, followed by a Lorentz rotation which does not affect the scalar monopoles. The $\mathcal{CR}_0$ transforms the monopoles as Eq.~\eqref{T0CR0} and the flavor rotation is essentially the $T_2$ transformation. Therefore under $\mathcal{CR}_x$ we should have $\Phi_1\to -\Phi_1$ and $\Phi_i\to\Phi_i$ for $i\neq1$. For $\mathcal{R}_x$ and $\mathcal{C}$ separately, the overall phase depends on the definition of the monopoles as we discussed, but the relative transformation between the six monopoles is still meaningful. Since $\mathcal{R}_x$ involves a flavor rotation, it should act on the monopoles as shown in Table~\ref{table:square_monopole} up to an overall phase which we fix to be trivial. The transformation of $\mathcal{C}$ then follows immediately. Finally the transformation of $\mathcal{T}$ on the monopoles is simply given by Eq.~\eqref{T0CR0}.
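Both flavor matrices can be checked mechanically (Python/NumPy, matrices copied from above): $O^{C_4}$ is an order-four element of $SO(6)$ that fixes $\Phi_2$, while $O^{T_1}$ fixes $\Phi_2$ only after including the $U(1)_{top}$ phase $e^{i\theta_{T_1}}=-1$.

```python
import numpy as np

# C_4 flavor matrix on the monopoles (rotating the Phi_1, Phi_3 plane)
O_C4 = np.eye(6)
O_C4[0, 0] = O_C4[2, 2] = 0.0
O_C4[0, 2], O_C4[2, 0] = -1.0, 1.0

# T_1 flavor matrix, before the U(1)_top phase factor
O_T1 = np.diag([-1., -1., 1., 1., 1., 1.])

e2 = np.eye(6)[1]
assert np.allclose(O_C4 @ O_C4.T, np.eye(6))
assert np.isclose(np.linalg.det(O_C4), 1.0)                     # in SO(6)
assert np.allclose(np.linalg.matrix_power(O_C4, 4), np.eye(6))  # order four
assert np.allclose(O_C4 @ e2, e2)      # Phi_2 fixed: the SO(5) singlet
assert not np.allclose(O_T1 @ e2, e2)  # bare O_T1 flips Phi_2...
assert np.allclose(-O_T1 @ e2, e2)     # ...the phase -1 restores it
```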
\begin{table*} \captionsetup{justification=raggedright} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & $T_1$ & $T_2$ & $R_x$ & $C_4$ &$\mathcal C$& $\mathcal T$\\%& note \\ \hline $\Phi_1^\dagger$ & $+$ & $-$ &$-\Phi_1$& $-\Phi_3^\dagger$&$\Phi_1$&$\Phi_1$\\% &VBS order, $Im[\Phi_1]$ as $\overline\psi\tau^3\psi$\\ $\Phi_2^\dagger$ & $+$ & $+$ &$-\Phi_2$& $\Phi_2^\dagger$&$-\Phi_2$&$\Phi_2$\\%&$Im[\Phi_2]$ trivial$\\ $\Phi_3^\dagger$ & $-$ & $+$ &$\Phi_3$& $\Phi_1^\dagger$&$\Phi_3$ &$\Phi_3$ \\%&VBS order, $ Im[\Phi_3]$ as $\overline\psi\tau^1\psi$\\ $\Phi_{4/5/6}^\dagger$ &$-$ & $-$ &$\Phi_{4/5/6}$& $\Phi_{4/5/6}^{\dagger}$ &$\Phi_{4/5/6}$ &$-\Phi_{4/5/6}$ \\%&canting/N\'eel order, $ Im[\Phi_1]$ as $\overline \psi\tau^2 \sigma^{1/2/3}\psi$\\ \hline $\Phi_1^\dagger$ & $\Phi_1$ & $-\Phi_1$ &$-\Phi_1$& $-\Phi_3$& &$\Phi_1^{\dagger}$ \\%&VBS order, $Re[\Phi_1]$ as $\overline\psi\tau^3\psi$\\ $\Phi_2^\dagger$ & $-\Phi_2$ & $-\Phi_2$ &$-\Phi_2$& $-\Phi_2$& &$-\Phi_2^{\dagger}$\\%&$Im[\Phi_2]$ trivial$\\ $\Phi_3^\dagger$ & $-\Phi_3$ & $\Phi_3$ &$\Phi_3$& $\Phi_1$& &$\Phi_3^{\dagger}$ \\%&VBS order, $ Re[\Phi_3]$ as $\overline\psi\tau^1\psi$\\ $\Phi_{4/5/6}^\dagger$ &$-\Phi_{4/5/6}$ & $-\Phi_{4/5/6}$ &$\Phi_{4/5/6}$& $\Phi_{4/5/6}$& &$-\Phi_{4/5/6}^{\dagger}$ \\%&N\'eel order, $ Re[\Phi_1]$ as $\overline \psi\tau^2\sigma^{1/2/3}\psi$\\ \hline \end{tabular} \caption{The transformation of the monopoles on the square lattice. $T_{1/2}$ are translations along the two basic lattice vectors, $R_x$ is the reflection perpendicular to the horizontal axis, and $C_4$ is the four-fold rotation around a site (shown in Fig.~\ref{fig:honeycomb2}). There is a trivial monopole by our definition, i.e., the second monopole $\Phi_2$. The first four rows and the last four rows of monopole transformations correspond to the $\pi$-flux state and the staggered flux state, respectively, which differ by a charge conjugation for $T_{1/2},C_4$ and $\mathcal{T}$.
The last four rows align with the results for the $M$ transformations of Ref.~\onlinecite{alicea_2008} after making the identification $\Phi_{1/3}=M_{3/2},\Phi_2=iM_1,\Phi_4\mp i\Phi_5=M_{4/6},\Phi_6=M_5$, where the $M^\dagger$'s are the ``monopole operators'' defined in Ref.~\onlinecite{alicea_2008}. We emphasize that the ``$\pi$-flux state'' with $U(1)$ gauge symmetry does not actually represent a spin system, and should be viewed as an intermediate state for our purpose.} \label{table:square_monopole} \end{center} \end{table*} Now we proceed to consider the staggered-flux state (the actual $U(1)$ Dirac spin liquid). The only difference now is that $\mathcal{C}$ is no longer a symmetry, and the actions of $T_{1,2}, C_4, \mathcal{T}$ should all be combined with $\mathcal{C}$. The results are tabulated in the last four rows of Table~\ref{table:square_monopole}. Again we see that there is a trivial monopole ${\rm Im}\Phi_2$ that can be added as a perturbation. \subsection{Another approach: lattice symmetries generated by $\mathcal{CR}$} There is another trick to obtain the lattice symmetry quantum numbers of the monopoles on bipartite lattices, thanks to the extra $\mathcal{C}$ symmetry. The key is to realize that rotation and translation symmetries can all be generated by repeatedly applying $\mathcal{CR}$ symmetries with respect to different reflection axes (a simple $\mathcal{R}$ will not work since the overall phase is not well defined). We will show below that this approach gives results identical to those obtained in Sec.~\ref{MQNQCD}, which gives us more confidence since the two approaches appear to be very different from each other. Following Sec.~\ref{dimred}, we calculate the monopole lattice quantum numbers when the fermions form a quantum spin Hall insulator (which preserves all lattice symmetries). Recall that in the quantum spin Hall phase, $\mathcal{CR}: \mathcal{M}\to \pm\mathcal{M}$ if $(\mathcal{CR})^2=\mp1$ on the fermions.
This gives us the quantum numbers of a particular monopole (say $\Phi_4+i\Phi_5$), and those of the other monopoles can be fixed by the $SO(6)$ flavor symmetry. Therefore our task below is to show that the quantum numbers of $\Phi_4+i\Phi_5$ calculated in the quantum spin Hall phase agree with those tabulated in Tables~\ref{table:honeycomb_monopole} and \ref{table:square_monopole}. \subsubsection{Honeycomb} Here we find $(\mathcal{C}R_{x/y})^2=-1\,(+1)$ when acting on the fermions (see Fig.~\ref{fig:Honeycombreflection}). This means (from Sec.~\ref{dimred}) that the spin-triplet monopoles ($\Phi_{4,5,6}$) stay invariant / acquire a minus sign under $\mathcal{C}R_x$, $\mathcal{C}R_y$, respectively. All other space symmetries are generated from reflections, and we get the transformation results in Table~\ref{table:honeycomb_monopole}. For example, the $C_6$ operation can be obtained by applying two different $\mathcal{CR}$ reflections in succession, and we immediately see that $C_6: \Phi_{4,5,6}\to \Phi_{4,5,6}$. Likewise, using two different reflections to generate translations leads to the result $T_{1,2}: \Phi_{4,5,6}\to \Phi_{4,5,6}$. These are all in agreement with Table~\ref{table:honeycomb_monopole}. \begin{center} \begin{figure} \captionsetup{justification=raggedright} \adjustbox{trim={0\width} {0\height} {0\width} {0\height},clip} {\includegraphics[width=1\columnwidth]{HoneycombReflection.png}} \caption{Reflection axes considered in the main text.} \label{fig:Honeycombreflection} \end{figure} \end{center} We emphasize here that the above logic works because if two different reflection axes are related by a symmetry operation, then the two reflections should act on the (unique) monopole $\mathcal{M}$ in the same way. More precisely, $g(\mathcal{CR}_1)g^{-1}=\mathcal{CR}_1$ when acting on a one-dimensional representation. \subsubsection{Square} As discussed before, it suffices to consider the $\pi$-flux phase, which has a charge conjugation symmetry.
Now consider $\mathcal{CR}_1\equiv\mathcal{C}R_x$, $\mathcal{CR}_2\equiv \mathcal{C}T_1C_4^2R_x$ and $\mathcal{CR}_3\equiv \mathcal{C}C_4T_1C_4^2R_x$ -- the last two are reflections across the axes labeled in Fig.~\ref{fig:reflections}. \begin{center} \begin{figure} \captionsetup{justification=raggedright} \adjustbox{trim={.18\width} {.05\height} {0\width} {.05\height},clip} {\includegraphics[width=1.2\columnwidth]{reflection.pdf}} \caption{Reflection axes considered in the main text.} \label{fig:reflections} \end{figure} \end{center} It is easy to check that $(\mathcal{CR}_1)^2=(\mathcal{CR}_3)^2=-1$ and $(\mathcal{CR}_2)^2=+1$ on the fermions, which immediately leads to \begin{eqnarray} \mathcal{CR}_1:&& \mathcal{M}\to \mathcal{M} \nonumber\\ \mathcal{CR}_2:&& \mathcal{M}\to -\mathcal{M} \nonumber\\ \mathcal{CR}_3:&& \mathcal{M}\to \mathcal{M}. \end{eqnarray} It is now straightforward to read off the other symmetry transformations of this monopole $\mathcal{M}\sim \Phi_4+i\Phi_5$. For example $C_4\sim \mathcal{CR}_3\cdot \mathcal{CR}_1: \mathcal{M}\to \mathcal{M}$ and $T_{1,2}\sim \mathcal{CR}_1\cdot\mathcal{CR}_2: \mathcal{M}\to-\mathcal{M}$. We also make a phase choice so that \begin{equation} \mathcal{C}: \mathcal{M}\to \mathcal{M}^{\dagger}. \end{equation} The transformation under charge conjugation $\mathcal C$ depends on the phase choice of the monopole, but the relative phases between the six monopoles are fixed and hence meaningful. We have then essentially reproduced Table~\ref{table:square_monopole} for the $\pi$-flux state. The transformations of $\Phi_{1,2,3}$ can be fixed by further applying the emergent $SO(6)$ symmetry. \section{A more general scheme: atomic (Wannier) centers} \label{wanniercenter} We have seen that on bipartite lattices (honeycomb and square), with the help of particle-hole symmetry (a hallmark of bipartite lattices), the monopole quantum numbers under lattice rotation and translation can be uniquely fixed.
Here we shall discuss a method applicable to all lattices, including non-bipartite ones such as the triangular and kagom\'e lattices. Notice that in this problem lattice rotation plays a more fundamental role than lattice translation, in two ways. First, if lattice rotation symmetry is absent, there is no reason for the monopole to have a quantized lattice momentum, which could then take a continuous value anywhere in the Brillouin zone. If we impose a rotation symmetry $C_n$ ($n=2,3,4,6$), the momentum can only take certain (discrete) rotationally invariant values, which are robust as the system is adiabatically deformed. Second, once we know how the monopoles transform under rotations around different centers, we automatically know how they transform under translations, since a translation can be generated by successive rotations around different centers. Presumably the lattice momentum and angular momentum of the monopole operator are also decided by the ``band topology'' of the spinon insulator, just as for time-reversal symmetry -- but how? Before answering this question systematically, let us consider a more familiar example, the bosonic spinon (Schwinger boson) theory, where it is well known that on bipartite lattices the monopole carries $l=\pm1$ angular momentum under site-centered rotations, leading to valence bond solid (VBS) order when the spinons are gapped\cite{HaldaneBerry, ReSaSUN} -- a fact important in the context of the deconfined quantum phase transition between N\'eel and VBS states\cite{senthil_20031,senthil_20041}. We now review this fact with a physically intuitive picture, which can then be generalized to the more complicated cases with fermionic spinons. For the fermionic spinons, we will illustrate the new method in this section with the bipartite examples.
We will obtain the same results as in Sec.~\ref{MQNBipartite} with considerably more involved calculations -- the goal being to establish the method in a setting where the answers are independently known, so that we can apply it to the DSL on non-bipartite lattices (and further confirm the results on bipartite lattices). \subsection{Warm-up: bosonic spinons and valence bond solid} \label{warmup} For concreteness let us consider a honeycomb lattice with spin-$1/2$ per site. In the Schwinger boson formulation we decompose the spin operator as \begin{equation} \vec{S}_i=(-1)^i\frac{1}{2}b^{\dagger}_{i,\alpha}\vec{\sigma}_{\alpha\beta}b_{i,\beta}, \end{equation} where $i$ is the site label and $(-1)^i$ takes the values $\pm1$ on the two sub-lattices, respectively. $b_{\alpha}$ is a hard-core boson ($(b^{\dagger}_{\alpha})^2=0$) with spin $\alpha\in\{\uparrow,\downarrow\}$. The physical Hilbert space has $\sum_{\alpha}b^{\dagger}_{\alpha}b_{\alpha}=1$. Similar to the fermionic spinon theory, this constraint only needs to be satisfied on average in a spin liquid phase. There is again a dynamical $U(1)$ gauge field $a_{\mu}$ coupled to the Schwinger bosons, with the gauge charge on each site defined as $q_i=\sum_{\alpha}b^{\dagger}_{\alpha}b_{\alpha}-1$. The $(-1)^i$ factor in the parton decomposition is chosen so that when the Schwinger bosons condense in a uniform manner, the spins order as a collinear anti-ferromagnet (N\'eel state). Due to this $(-1)^i$ factor, under the $C_6$ rotation (which exchanges the two sub-lattices) the Schwinger bosons should transform (in addition to the coordinate change) as \begin{equation} b_{\alpha}\to i\sigma^y_{\alpha\beta}b^{\dagger}_{\beta}. \end{equation} A similar transformation occurs on the square lattice under translation. We would like to construct a state in which the Schwinger bosons are gapped, i.e. they form a bosonic Mott insulator.
The simplest such state respecting all the symmetries -- especially spin rotation and $C_6$ rotation -- is shown pictorially in Fig.~\ref{fig:honeycomb1}. In this state every site in the A sub-lattice is empty, and every site in the B sub-lattice has both bosonic orbitals occupied. The wavefunction is simply a product state \begin{equation} \prod_{i\in A}|0\rangle_i\ \prod_{j\in B}b^{\dagger}_{j, \uparrow}b^{\dagger}_{j,\downarrow}|0\rangle_j. \end{equation} \begin{center} \begin{figure} \captionsetup{justification=raggedright} \adjustbox{trim={.18\width} {.15\height} {0\width} {.15\height},clip} {\includegraphics[width=1.5\columnwidth]{honeycomb1.pdf}} \caption{The simplest spinon (boson) Mott insulator, with every site in the A sub-lattice empty, and every site in the B sub-lattice completely filled (recall that the bosons $b_{\alpha}$ are hard-core). This respects the $C_6$ rotation since it acts as $b_{\alpha}\to i\sigma^y_{\alpha\beta}b^{\dagger}_{\beta}$. As a monopole (flux) moves in this background, it sees a fixed gauge charge pattern with $q_A=-1$ and $q_B=+1$. The amount of gauge charge sitting on each rotation center dictates the nontrivial Berry phase accumulated by the monopole as it moves around the center, according to $\theta(C_n^r)=e^{iq_r2\pi/n}$. The translation quantum numbers of the monopole can be easily obtained once the rotation quantum numbers are known.} \label{fig:honeycomb1} \end{figure} \end{center} Now what happens when a monopole (a flux quantum) moves in this simple charge background? What the flux sees is a fixed gauge charge pattern, with $q_A=-1$ on each A-site and $q_B=+1$ on each B-site. Therefore as the flux moves around each site, a non-trivial Berry phase is picked up. The amount of gauge charge $q_r$ sitting on each rotation center $r$ dictates the Berry phase under the $C_n$ rotation to be \begin{equation} \label{eqn:berryphaseformula} \theta(C_n^r)=e^{iq_r2\pi/n}.
\end{equation} Effectively the monopole gains an angular momentum $l_A=-1$ under the A-site-centered $C^A_3$ rotation (corresponding to a Berry phase $\omega^{-1}=e^{-i2\pi/3}$ under a $C^A_3$ rotation), $l_B=+1$ under the B-site-centered $C^B_3$ rotation (Berry phase $\omega$), and $l_c=0$ under the plaquette-centered $C^c_6$ rotation, since there is no charge placed at the centers of the hexagonal plaquettes. One may ask how robust these Berry phases are -- for example, will the results change if the flux is inserted far away from the rotation center? The answer is no, since by $C_n$ symmetry the total charge enclosed by a closed rotation trajectory will always be $q_r$ (mod $n$). Therefore the monopole Berry phase will always be given by Eq.~\eqref{eqn:berryphaseformula} under a $C_n$ rotation. This is essentially the spirit of the dimensional reduction introduced in \cite{song_2017}. Translation symmetry quantum numbers are now easily obtained: $T_1=C_A^{-1}C_B=\omega^2$ and $T_2=C_BC_A^{-1}=\omega^2$. This makes the monopole operator the Kekul\'e VBS order parameter, as expected. The above argument can be extended straightforwardly to the square lattice, from which the standard results follow, namely the monopole carries $\pm1$ angular momentum under site-centered $C_4$ rotations and is therefore identified with the columnar VBS order parameter\cite{HaldaneBerry, ReSaSUN}. \subsection{Wannier centers on honeycomb lattice: a case study} \label{Wannier} We now return to fermionic spinons in a quantum spin Hall insulator band structure. Let us first consider the simpler honeycomb lattice. The band structure is given by the Haldane model\cite{HaldaneHoneycomb} for each spin component, with opposite Chern numbers for the two spin species. The hopping amplitudes include a real nearest-neighbor hopping and an imaginary second-neighbor hopping that preserve all the lattice symmetries and the time-reversal symmetry $f\to i\sigma^2 f$.
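Before proceeding, the Berry-phase bookkeeping of the warm-up section is simple enough to verify in a few lines; a minimal sketch applying Eq.~\eqref{eqn:berryphaseformula} to the charge pattern $q_A=-1$, $q_B=+1$, $q_{\mathrm{hex}}=0$:

```python
import cmath

def berry_phase(q, n):
    """Monopole Berry phase under a C_n rotation about a center holding gauge charge q."""
    return cmath.exp(1j * q * 2 * cmath.pi / n)

omega = cmath.exp(2j * cmath.pi / 3)

C_A = berry_phase(-1, 3)    # A-site centered C3: angular momentum l_A = -1
C_B = berry_phase(+1, 3)    # B-site centered C3: l_B = +1
C_hex = berry_phase(0, 6)   # plaquette centered C6: trivial

assert abs(C_A - omega**-1) < 1e-12
assert abs(C_B - omega) < 1e-12
assert abs(C_hex - 1) < 1e-12

# Translations from composing rotations about different centers
T1 = C_A**-1 * C_B
T2 = C_B * C_A**-1
assert abs(T1 - omega**2) < 1e-12 and abs(T2 - omega**2) < 1e-12
```

The translation phase $\omega^2$ for both $T_1$ and $T_2$ is what identifies the monopole with the Kekul\'e VBS order parameter.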
One may ask whether we can deform this insulator to an ``atomic limit'' as we did above in Sec.~\ref{warmup}, and then trivially read off the monopole's lattice quantum numbers. Obviously this is impossible if time-reversal and spin rotation (now only $SO(2)$) symmetries are preserved, since the spinons form a strong topological insulator and therefore (almost by definition) an atomic limit does not exist. However, since the question of the monopole lattice quantum numbers has nothing to do with spin rotation and time-reversal symmetry, we may as well explicitly break all the symmetries except lattice translation, lattice rotation, and the $U(1)$ charge conservation. In the simplest setting, the resulting insulator can be deformed into an ``atomic limit'', which can be pictured as particles completely localized in real space. One nontrivial aspect, compared to the bosonic spinon case in Sec.~\ref{warmup}, is that the effective centers of the localized orbitals do not have to sit on the original lattice sites (where the microscopic fermions are defined). For insulators described by free fermion band theory these are simply the Wannier centers. One can always deform all the Wannier centers, without further breaking any symmetry, to one of the rotation centers, such as lattice sites or plaquette centers (see Fig.~\ref{fig:honeycomb2} for an example). Once such a configuration is reached, we can simply follow the procedure in Sec.~\ref{warmup} to obtain the monopole Berry phase under a $C_n$ rotation centered at $r$ according to Eq.~\eqref{eqn:berryphaseformula}. The lattice momentum of the monopole can then be obtained by composing rotations around different centers. Let us illustrate this with the QSH insulator on the honeycomb lattice -- for concreteness we assume that the nearest-neighbor hopping $t>0$. To determine the exact nature of the Wannier limit, we employ the techniques developed in Refs.~\cite{PoIndicators,Pofragile,Cano_2018}.
The basic idea is to look at high symmetry points in momentum space, namely momentum points that are invariant under various lattice rotations. The fermion Bloch states at these high symmetry points form certain representations of the rotation symmetries, and our task is to determine what kind of atomic (Wannier) limit would give rise to such representations. Consider the inversion symmetry $I^c_2$ with respect to a hexagon center. There are four inversion-invariant momentum points: $\Gamma$ ($T_1=T_2=1$), $M$ ($T_1=T_2=-1$), and two other points $M'$, $M''$ related to $M$ by $C_6$ rotations. Since there are in total two bands occupied (one per spin), the Bloch states form a two-dimensional reducible representation of $I^c_2$ at each of these momentum points. The spectrum of inversion eigenvalues $\{\lambda_{I^c}\}$ at each momentum is \begin{eqnarray} \label{eqn:hexagoninversion} \lambda_{I^c}^{\Gamma}&:& \{+1, +1\}, \nonumber\\ \lambda_{I^c}^{M}&:& \{-1, -1\}. \end{eqnarray} Now what kind of atomic insulator can form such representations at $\Gamma$ and $M$? Let us consider several candidates. First, consider the simplest insulator, with exactly one Wannier center sitting on each lattice site. It is easy to show that this insulator has $\lambda_{I^c}: \{+1,-1\}$ at both $\Gamma$ and $M$. Next consider an insulator with one Wannier center sitting at the center of each edge. This has $\lambda_{I^c}^{\Gamma}: \lambda_0\times\{1,1,1\}$ and $\lambda_{I^c}^{M}: \lambda_0\times\{1,-1,-1\}$, where $\lambda_0$ is the intrinsic inversion eigenvalue of the Wannier orbital, which can be either $\pm1$. There is yet another insulator, with one Wannier center sitting at the center of each hexagonal plaquette. This has $\lambda_{I^c}^{\Gamma}=\lambda_{I^c}^M=\lambda_0$ for some intrinsic $\lambda_0=\pm1$.
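The eigenvalue bookkeeping above can be made explicit with a small multiset comparison; a minimal sketch (a `Counter` holds the inversion-eigenvalue content at $\Gamma$ and $M$ for each candidate) showing that no single atomic insulator matches the occupied bands:

```python
from collections import Counter

# Inversion eigenvalue content of the two occupied QSH bands at Gamma and M
occupied = {"G": Counter([+1, +1]), "M": Counter([-1, -1])}

# Candidate atomic insulators; l0 is the intrinsic eigenvalue of the Wannier orbital
def site():
    return {"G": Counter([+1, -1]), "M": Counter([+1, -1])}

def edge(l0):
    return {"G": Counter([l0, l0, l0]), "M": Counter([l0, -l0, -l0])}

def hexagon(l0):
    return {"G": Counter([l0]), "M": Counter([l0])}

# No single candidate reproduces the occupied-band content, for either lambda_0
candidates = [site(), edge(+1), edge(-1), hexagon(+1), hexagon(-1)]
assert all(c["G"] != occupied["G"] or c["M"] != occupied["M"] for c in candidates)
```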
We have exhausted all distinct types of atomic (Wannier) insulators, but it appears that none of them fits the representation content of our occupied bands, Eq.~\eqref{eqn:hexagoninversion}! The key concept we need here is that of fragile topology\cite{Pofragile}: the occupied bands, even though ``topologically trivial'', can be deformed into an atomic (Wannier) limit only when combined with another atomic insulator. In our case, if we combine the occupied bands with an atomic insulator with three Wannier centers sitting on each hexagon center, with intrinsic inversion eigenvalues $\lambda_0=\{1,1,-1\}$, the total system will form inversion representations at $\Gamma$ and $M$ that resemble an atomic insulator with one Wannier center on each site and at the center of each edge (with $\lambda_0=1$). Formally we can write this relation as (upper left of Fig.~\ref{fig:honeycomb2}) \begin{eqnarray} \text{honeycomb QSH occupied band}&=&\text{site}+\text{edge}^{\lambda_0=1} \nonumber\\ &&-2\times \text{hexagon}^{\lambda_0=1}-\text{hexagon}^{\lambda_0=-1}, \end{eqnarray} where the minus signs indicate the fragile nature of the topology. From the monopole point of view, the bands that are formally subtracted in the above relation can simply be viewed as negative gauge charges sitting on their Wannier centers. So the above relation produces the gauge charge pattern shown in Fig.~\ref{fig:honeycomb2} (remember that there is always a $-1$ background gauge charge sitting on each site). \begin{center} \begin{figure} \captionsetup{justification=raggedright} \adjustbox{trim={.10\width} {.22\height} {.10\width} {.02\height},clip} {\includegraphics[width=1.5\columnwidth]{wanniercenter.pdf}} \caption{The Wannier limit of a quantum spin Hall insulator on four types of lattices, when all symmetries are broken except for charge $U(1)$ conservation and lattice translation/rotation. A solid dot indicates a Wannier center and an empty dot indicates a ``minus'' charge pertinent to the fragile nature of the band topology.
The pattern of $U(1)$ gauge charge is shown in the figure, with the background $-1$ charge per site included.} \label{fig:honeycomb2} \end{figure} \end{center} One should also check that the above Wannier pattern reproduces the representations at the high symmetry points of the other point group symmetries, such as the site-centered $C_3$. Since the $C_3$-invariant momentum points include the Dirac points ($Q$ and $Q''$), it is important for us to include the Dirac mass term that corresponds to the QSH mass. It is a relatively straightforward exercise to check that the above Wannier pattern indeed reproduces the representations of the occupied bands of the QSH insulator. The gauge charge pattern thus produced gives rise to some nontrivial quantum numbers for the monopole. Using Eq.~\eqref{eqn:berryphaseformula}, we conclude that the monopole transforms under $C_6$ as $\mathcal{M}\to -\mathcal{M}$, and transforms trivially under lattice translations. This is exactly what we found in Sec.~\ref{MQNBipartite} and what was found numerically in Ref.~\onlinecite{ran_2008}. We comment that we can alternatively focus on the un-occupied bands of the QSH insulator (holes) instead of the occupied bands (particles), and ask about the Wannier centers of the holes. Since the occupied and un-occupied bands together produce two trivial Wannier orbitals on each lattice site, it can be shown straightforwardly that we would obtain the identical gauge charge pattern by considering holes instead of particles. \subsection{Wannier centers on Square lattice} \label{square} Now consider the $\pi$-flux state on the square lattice. We deform the band structure to the QSH regime. One can obtain how the monopoles transform under various rotations by deforming the spinon bands to a Wannier insulator, as we did above on the honeycomb lattice.
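The honeycomb bookkeeping just referenced can be replayed in a few lines; a sketch (assumed net charges read off from Fig.~\ref{fig:honeycomb2}: $0$ on each site and $-3$ on each hexagon center, once the $-1$ background per site is included):

```python
import cmath

def berry_phase(q, n):
    # Berry phase formula: phase under C_n about a center with gauge charge q
    return cmath.exp(1j * q * 2 * cmath.pi / n)

q_site, q_hex = 0, -3   # net charges: (+1 - 1) per site; (-2 - 1) per hexagon center

# C6 about a hexagon center: the monopole picks up -1, i.e. M -> -M
assert abs(berry_phase(q_hex, 6) - (-1)) < 1e-12

# C3 about a site: trivial
assert abs(berry_phase(q_site, 3) - 1) < 1e-12

# A translation can be composed from C3 rotations about two adjacent hexagon
# centers; their Berry phases cancel since both centers carry the same charge
T = berry_phase(q_hex, 3) * berry_phase(q_hex, 3).conjugate()
assert abs(T - 1) < 1e-12
```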
One finds that on the square lattice (upper right of Fig.~\ref{fig:honeycomb2}) \begin{eqnarray} \text{square QSH occupied band}&=&\text{site}\nonumber\\ &&+\,\text{edge center} -2\times \text{plaquette}, \end{eqnarray} where the RHS denotes Wannier insulators with particles lying on the sites, edge centers or plaquette centers. Accounting for the background $-1$ gauge charge per site, this configuration means that the on-site gauge charge vanishes, so the spin-triplet monopoles stay invariant under the site-centered $C_4$ in Table~\ref{table:square_monopole}, while under $C_4$ around a plaquette center the monopoles acquire a $-1$, since they see a gauge charge $-2$ sitting at the rotation center. This implies that, for example, under $T_1\cdot T_2$ the spin-triplet monopoles should change sign, in agreement with Table~\ref{table:square_monopole}. We note that similar results were obtained in Ref.~\cite{thomson_2018} by considering a VBS insulator formed by the spinons, in which the effective Wannier centers simply sit on the lattice sites and the computation is simpler. \section{Monopole quantum numbers II: non-bipartite lattices} \label{calculation} We are finally ready to calculate the monopole lattice quantum numbers on the non-bipartite lattices, i.e., the triangular and kagom\'e lattices.
\subsection{Triangular lattice: Wannier centers from projective symmetry group} \label{triangular_calculation} \begin{table} \captionsetup{justification=raggedright} \begin{tabular}{|c| c| c| c |c| c|} \hline & $T_1$& $T_2$& $R$ &$C_6$ &$\mathcal T$\\%& sym-equiv item \\ \hline $\Phi_1^\dagger$ &$e^{-i\frac{\pi}{3}} \Phi_1^\dagger$&$e^{i\frac{\pi}{3}} \Phi_1^\dagger$&$-\Phi_3^\dagger$ &$\Phi_2$& $\Phi_1$ \\%& VBS spin order\\ $\Phi_2^\dagger$ &$e^{i\frac{2\pi}{3}} \Phi_2^\dagger$&$e^{i\frac{\pi}{3}} \Phi_2^\dagger$&$\Phi_2^\dagger$ &$-\Phi_3$ & $\Phi_2$\\%& VBS spin order\\ $\Phi_3^\dagger$ &$e^{-i\frac{\pi}{3}} \Phi_3^\dagger$&$e^{-i\frac{2\pi}{3}} \Phi_3^\dagger$&$-\Phi_1^\dagger$ &$-\Phi_1$& $\Phi_3$ \\%& VBS spin order\\ $\Phi_{4/5/6}^\dagger$& $e^{i\frac{2\pi}{3}}\Phi_{4/5/6}^\dagger$&$e^{-i\frac{2\pi}{3}}\Phi_{4/5/6}^\dagger$&$\Phi_{4/5/6}^\dagger$&$-\Phi_{4/5/6}$ & $-\Phi_{4/5/6}$ \\%& $120$ magnetic order\\ \hline \end{tabular} \caption{Monopole transformation laws on the triangular lattice. $C_6$ is the six-fold rotation around a site; the other symmetries are marked in Fig.~\ref{fig:honeycomb2}. There are nontrivial Berry phases for the translations.} \label{table:triangular_monopole} \end{table} \subsubsection{Mean-field and PSG} On the triangular lattice we focus on a particular mean-field ansatz, with the Hamiltonian \begin{equation} \mathcal H=J\sum_{\langle ij\rangle} t_{ij} \sum_\alpha f^\dagger_{i\alpha}f_{j\alpha}+\mathrm{h.c.} \end{equation} where $t_{ij}=\pm 1$ and there is a ``staggered $\pi$ flux'' configuration of $t_{ij}$ on the triangular lattice, with a $\pi$ flux through each upward triangle and zero flux through each downward triangle. More details can be found in Appendix~\ref{bilinears}. In an appropriate basis the low-energy Hamiltonian takes the standard Dirac form, with two spins (denoted by Pauli matrices $\sigma$) and two valleys (denoted by Pauli matrices $\tau$).
The projective symmetry representations of the Dirac fermions (translations $T_{1/2}$, reflection $R_x$, six-fold rotation $C_6$ and time-reversal) are calculated by standard methods, and the results are listed below: \begin{eqnarray} \psi&&\xrightarrow{T_1}-i\tau^2\psi, \nonumber\\ \psi&& \xrightarrow{T_2}i\tau^3\psi, \nonumber\\ \psi&& \xrightarrow{\mathcal T}i\sigma^2 \mu^2\tau^2\psi(-k), \nonumber\\ \psi&&\xrightarrow{C_6}i\sigma^2 W_{C_6} \psi^* \nonumber\\ \psi&&\xrightarrow{R}i\sigma^2 W_{R} \psi^* \end{eqnarray} where we have suppressed the coordinate transformations and \begin{eqnarray} W_{C_6}&&=e^{-i\gamma^0 \frac{\pi}{6}} W_c \tau^2 e^{i\frac{\pi}{3}\tau^C}\quad \tau^C=\frac{1}{\sqrt{3}}(\tau^1+\tau^2+\tau^3) \nonumber\\ W_R&&=\frac{(\gamma^1-\sqrt{3}\gamma^2)}{2} W_c \frac{\tau^3-\tau^1}{\sqrt{2}} \nonumber\\ W_c&&=\frac{1}{\sqrt{3}}(-iI_{4\times4}-\mu^3+\mu^1) \end{eqnarray} where $\mu^i$ are Pauli matrices acting on the Dirac spinor index (i.e., superpositions of $\gamma$ matrices). Again, more details can be found in Appendix~\ref{bilinears}. The above transformations of the Dirac fermions fix the monopole transformations up to overall $U(1)_{top}$ phase factors. The results are listed in Table~\ref{table:triangular_monopole}. Let us provide some more explanations here: \begin{itemize} \item {Translations $T_{1/2}$: $\psi\xrightarrow{T_1}-i\tau^2\psi, \psi\xrightarrow{T_2}i\tau^3\psi$, which gives a relative minus sign to $\Phi_{1/3},\Phi_{1/2}$, respectively. The overall phase factor is undetermined, but it is constrained by the invariance of $\Phi_{4/5/6}$ under $C_3=C_6^2$ to take values in $\{1,\omega\equiv e^{i\frac{2\pi}{3}}, \omega^{-1}\}$.} \item{ Time reversal $\mathcal T$: $\psi\xrightarrow{\mathcal T}i\sigma^2 \mu^2\tau^2\psi=\mathcal T_0 \psi$.
Then from Sec.~\ref{generalities} we know that $\Phi\rightarrow O_T \Phi^\dagger$, which fixes the overall phase factor.} \item{Six-fold rotation around a site, $C_6$: \begin{equation} \psi\xrightarrow{C_6}e^{-i\gamma^0 \frac{\pi}{6}}\{i\sigma^2 W_c \tau^2\}\exp\left[i\frac{\pi}{3}\tau^C\right] \psi^*, \end{equation} where the part inside $\{\}$ is identical to $\mathcal C_0$ defined in Eq.~\eqref{eqn:C0}}. We disregard the first Lorentz rotation since the monopole is a scalar anyway. The last part involves a certain $SO(3)$ rotation of $\Phi_{1/2/3}$ induced by $\tau^2 [\frac{1}{2}( I_{4\times 4}-i\tau^3-i\tau^2-i\tau^1)]$. From Sec.~\ref{generalities} we know that $\mathcal C_0: \Phi\rightarrow \pm O_T \Phi^\dagger$, where the overall phase depends on convention. Fixing the overall phase, we get the $C_6$ column of Table~\ref{table:triangular_monopole}. \item{ Reflection $R_x$: \begin{equation} \psi\xrightarrow{R}\frac{(\gamma^1-\sqrt{3}\gamma^2)}{2}\{i\sigma^2 W_c\tau^2\}\tau^2 \frac{\tau^3-\tau^1}{\sqrt{2}}\psi^*. \end{equation} This is really a $\mathcal{CR}$ symmetry. Since the quantum spin Hall mass $\overline \psi\vec\sigma\psi$ stays invariant under $R$, we can invoke the reflection-protected topological phase argument of Sec.~\ref{dimred}. It is easy to check that $R^2=(-1)^F$, which means that $R: \Phi_{4/5/6}\to \Phi_{4/5/6}$, since this is not a nontrivial $R$-protected topological phase. Fixing how $\Phi_{1/2/3}$ transform requires a bit of caution: first, they differ from the $\Phi_{4/5/6}$ transformations by a minus sign from $\mathcal C_0$ (the part in the curly brackets in the transformation); on top of that, they are rotated by $\tau^2 \frac{\tau^3-\tau^1}{\sqrt{2}}$. Combining these steps, one gets the full reflection transformation.} \end{itemize} So at this point the only unfixed phase factors are those associated with $T_{1,2}$. We now calculate these phases using the Wannier center technique developed in Sec.~\ref{wanniercenter}.
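The flavor part of the $C_6$ action can be checked independently; a minimal numerical sketch (working only in the $2\times2$ valley space) verifying that conjugation by $e^{i\frac{\pi}{3}\tau^C}$ is an order-three rotation that cyclically permutes the three flavor Pauli matrices, and hence $\Phi_{1/2/3}$, without extra signs:

```python
import numpy as np

t1 = np.array([[0, 1], [1, 0]], dtype=complex)
t2 = np.array([[0, -1j], [1j, 0]])
t3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

# U = exp(i pi/3 tau^C) with tau^C = (t1 + t2 + t3)/sqrt(3), via the closed form
# exp(i a n.tau) = cos(a) I + i sin(a) n.tau for a unit vector n
U = 0.5 * (I + 1j * (t1 + t2 + t3))

assert np.allclose(U @ U.conj().T, I)                 # unitary
assert np.allclose(np.linalg.matrix_power(U, 3), -I)  # order 3 as an SO(3) rotation

# Conjugation by U is a 2pi/3 rotation about the (1,1,1) axis in flavor space,
# so it permutes the Pauli matrices cyclically
conj = [U @ t @ U.conj().T for t in (t1, t2, t3)]
perm = [[np.allclose(c, t) for t in (t1, t2, t3)] for c in conj]
perm_map = [row.index(True) for row in perm]
assert sorted(perm_map) == [0, 1, 2]            # a genuine permutation
assert all(perm_map[a] != a for a in range(3))  # with no fixed point
```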
\subsubsection{Monopole angular momenta from Wannier centers} Below, we calculate the Wannier centers in the triangular lattice setting to deduce the monopole angular momenta for rotations around different centers. We use the approach described in Sec.~\ref{wanniercenter} and consider three kinds of $3$-fold rotations: around a site ($C_3^\cdot$), around an upward triangle center ($C_3^\triangle$) and around a downward triangle center ($C_3^\triangledown$), respectively (we omit $C_6$, since it changes the staggered flux pattern). Employing the dimensional reduction principles of Ref.~\onlinecite{song_2017}, we first find the high symmetry points and calculate the representations of the rotation groups. First one takes a four-site unit cell with a $C_3$-invariant Brillouin zone, illustrated in Fig.~\ref{fig:tri_psg}. Under the three types of $3$-fold rotations, the microscopic spinon fields transform as \begin{equation} \phi_{(\vec r,i)}\rightarrow g[C_3(\vec r,i)] \phi_{C_3(\vec r,i)} \end{equation} where $\vec r$ labels the Bravais lattice vector and $i$ labels the $A,B,C,D$ sublattices within a unit cell. The accompanying gauge transformation is $g[\vec r,i] =g[0,i]\exp(i \delta k\cdot \vec r)$, i.e., the gauge transformation has momentum $\delta k=(0,\pi)$, shown in Fig.~\ref{fig:tri_psg}. The momenta transform under the $C_3$'s as \begin{equation} C_3: \vec k\rightarrow C_3(\vec k) +(0,\pi) \end{equation} which leads to three rotation-invariant momenta: $M=(\pi,\pi)$, which is the four-fold degenerate Dirac point, and $k=(\frac{-\pi}{3},\frac{\pi}{3})$, $k'=-k$ (right panel of Fig.~\ref{fig:tri_psg}). One can diagonalize the $C_3$ rotation matrix at these high symmetry points; the eigenvalues are listed in the last two columns of Table~\ref{table:tri_rep}. To find the Wannier limit of the spinon bands, we compare the band representation to those of the Wannier insulators centered on sites, upward triangles and downward triangles, respectively.
The representation of the site-centered Wannier insulator follows directly from the projective symmetry group, since we are using wavefunctions localized on sites as our basis after all. For the other two types of Wannier limits, we first define the localized fermionic wavefunction basis. We stipulate the Wannier function localized at the center of a given plaquette to be an equal-amplitude superposition of the wavefunctions on the sites surrounding the plaquette. Each Wannier center then holds $n$ linearly independent wavefunctions, where $n$ is the number of vertices on the boundary of the plaquette. \footnote{For the edge-centered Wannier functions on the square and honeycomb lattices, the Wannier function is a superposition of the wavefunctions on the endpoints of the edge.} In principle, one could use this new basis and diagonalize the $C_3$ matrix at the high symmetry points to calculate the representation. Here we present a more direct and physical way to obtain it. Consider the site-centered $C_3^v$ operation acting on the upward-triangle-centered Wannier basis. As shown in Fig.~\ref{fig:tri_psg}, we take the four upward triangles with the lower right site as reference point to form the unit cell, marked by $\triangle_{A/B/C/D}$. Under $C_3$ around site $A$, the triangles $\triangle_{A/B/D}$ permute among one another; since $(C_3^v)^3=1$, the matrix for these three triangles always takes the form (under an appropriate phase choice of the superposition coefficients) \begin{equation} \left(\begin{array}{ccc} 0&1&0\\ 0&0&1\\1&0&0\end{array}\right ) \end{equation} whose eigenvalues are $\{1,\omega,\omega^2\}$ ($\omega\equiv e^{i\frac{2\pi}{3}}$). The nine-dimensional subspace spanned by the Wannier functions of $\triangle_{A/B/D}$ at the high symmetry points therefore constitutes a representation with eigenvalues $3\{1,\omega,\omega^2\}$, where the prefactor $3$ denotes a three-fold direct sum of the set of eigenvalues. $\triangle_C$, on the other hand, goes to its equivalent under $C_3^v$.
Consider the Wannier function localized at $\triangle_C$: \begin{equation} \psi(\triangle_C)=\frac{1}{\sqrt{3}} (-|1\rangle+|2\rangle-|3\rangle) \end{equation} where the site labels are marked in figure \ref{fig:tri_psg}. Under $C_3^\cdot$, \begin{equation} \psi(\triangle_C)\rightarrow \psi(\triangle_{C'})=-\frac{1}{\sqrt{3}} (-|1'\rangle+|2'\rangle-|3'\rangle). \end{equation} Since $C,C'$ differ by a lattice vector $\vec r_2$, the eigenvalue reads $-\exp(i \vec k\cdot \vec r_2)$. Similarly, for the other two $\triangle_C$ wave functions \begin{equation} \psi(\triangle_C)=\frac{1}{\sqrt{3}} (-|1\rangle+\omega^\eta|2\rangle-\omega^{2\eta}|3\rangle) \quad (\eta=1,2) \end{equation} the eigenvalues are $-\omega^\eta \exp(i \vec k\cdot \vec r_2)$. \begin{figure}[htbp] \begin{center} \captionsetup{justification=raggedright} \includegraphics[width=0.45\textwidth]{tri_psg.pdf} \caption{Left: the 4-site unit cell used to calculate Wannier centers on the triangular lattice. The $-1$ gauge transformation for the $C_3$'s is labeled by a solid blue circle on a site (sites without a circle carry the trivial gauge transformation); this gauge transformation has momentum $(0,\pi)$. Right: the rotation-invariant Brillouin zone of the 4-site unit cell parton Hamiltonian, with the three rotation invariant momenta marked. }\label{fig:tri_psg} \end{center} \end{figure} Since the Wannier insulator has to respect rotation symmetry, each legitimate representation should involve all $4$ types of Wannier centers $\triangle_{A/B/C/D}$; namely, one should combine the above block-diagonalized representation for $\triangle_{A/B/D}$ with a $\triangle_C$ wave function of a certain angular momentum $L$, leading to the second column of table \ref{table:tri_rep}. We proceed in a similar fashion to produce the rest of the table.
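The eigenvalue pattern for the $\triangle_C$ wave functions has a simple linear-algebra core: the coefficient vectors $(1,\omega^\eta,\omega^{2\eta})$ are eigenvectors of a $3$-cycle with eigenvalue $\omega^\eta$. A minimal numerical check (the additional $-e^{i\vec k\cdot \vec r_2}$ translation phase, which depends on lattice details, is omitted here):

```python
import numpy as np

omega = np.exp(2j * np.pi / 3)

# 3-cycle permuting the three site amplitudes of a plaquette-centered
# Wannier function under a C_3 rotation about the plaquette center.
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=complex)

for eta in (0, 1, 2):
    v = np.array([1, omega**eta, omega**(2 * eta)]) / np.sqrt(3)
    # Each superposition carries definite angular momentum: P v = omega^eta v.
    assert np.allclose(P @ v, omega**eta * v)
```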
\begin{table*}[ht] \renewcommand{\arraystretch}{1.4} \captionsetup{justification=raggedright} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \hline sym. & $\Gamma^\triangle_{k/M}$&$\Gamma^\triangledown_{k/M}$&$\Gamma^\circ_{k/M}$ &$\Gamma^{PSG}_k$({2-fold particle},{2-fold hole})&$\Gamma^{PSG}_M$(4-fold deg.)\\ \hline $C_3^{v,\chi_c=0}$ & $-\omega^\eta p_1^*\oplus[1,\omega,\omega^2]$& $-\omega^\eta p_1\oplus[1,\omega,\omega^2]$&$1\oplus[1,\omega,\omega^2]$& $\Gamma^{particle}_k=[\omega^2,1],\Gamma^{hole}_k=[\omega,1]$& $[1,1,\omega,\omega^2]$\\ $C_3^{u,\chi_c=1}$ & $\omega^\eta\oplus[1,\omega,\omega^2]$& $-\omega^\eta p_1^*\oplus[1,\omega,\omega^2]$&$-p_1\oplus[1,\omega,\omega^2]$& $\Gamma^{particle}_k=[\omega,1],\Gamma^{hole}_k=[\omega,\omega^2]$& $[1,1,\omega,\omega^2]$\\ $C_3^{d,\chi_c=2}$ & $-\omega^\eta p_1\oplus[1,\omega,\omega^2]$& $\omega^\eta\oplus[1,\omega,\omega^2]$&$- p_2\oplus[1,\omega,\omega^2]$& $\Gamma^{particle}_k=[\omega^2,\omega],\Gamma^{hole}_k=[\omega^2,1]$& $[1,1,\omega,\omega^2]$\\ \hline \multicolumn{2}{|c|} {$\Gamma^{particle}_k=2\{\omega^{\chi_c}[\omega^2,1]\}$}&\multicolumn{2}{c|}{$\Gamma^{particle}_M=[1,1,\omega,\omega^2]$}&\multicolumn{2}{c|}{$\Gamma^{particle}_{k,M} =\Gamma^\circ_{k,M}+\Gamma^\triangle_{k,M}-\Gamma^\triangledown_{k,M}$}\\ \multicolumn{2}{|c|} {$\Gamma^{hole}_k=2\{\omega^{\chi_c}[\omega,1]\}$}&\multicolumn{2}{c|}{$\Gamma^{hole}_M=[1,1,\omega,\omega^2]$}&\multicolumn{2}{c|}{$\Gamma^{hole}_{k,M} =\Gamma^\circ_{k,M}+\Gamma ^\triangledown_{k,M}-\Gamma^\triangle_{k,M}$}\\ \hline \hline \end{tabular} \caption{The point space group representation of $C_3$ rotations (superscripts v, u, d denote the rotation center as a site, an upward triangle center and a downward triangle center, respectively; $\chi_c$ is an index assigned to each rotation for notational convenience) on the triangular lattice at high symmetry points $k=(\frac{-\pi}{3},\frac{\pi}{3}),M=(\pi,\pi)$.
$p_{1/2}=\exp(i \vec k\cdot \vec r_{1/2})$ is the phase factor along $T_{1/2}$ translations at the high-symmetry momenta of interest. $\Gamma^\triangle,\Gamma^\triangledown,\Gamma^\circ$ denote representations of Wannier functions centered at upward triangles, downward triangles and on sites, respectively. $\omega=e^{i\frac{2\pi}{3}}$, and the integer $\eta$ is related to the orbital angular momentum of the Wannier function and can differ from case to case ($\eta=0$ for the site-centered Wannier function by our Hilbert space choice). The quantum spin Hall mass opens a gap at the $M$ point, and the filled bands for spin up and down are related by time reversal. Hence the filled band at the Dirac point has eigenvalues $\{1,1,\omega,\omega^2\}$. \label{table:tri_rep}} \end{center} \end{table*} Comparing the representations of the spinon bands with those of the $3$ Wannier insulators, we identify a unique decomposition of the spinon bands into Wannier centers (lower left of Fig.~\ref{fig:honeycomb2}): \begin{equation} \textrm{triangular QSH occupied band = site}+\triangle-\triangledown \end{equation} where again the minus sign denotes a formal difference in light of fragile topology. Taking into account the background gauge charge $-1$ per site, the spin triplet monopoles see no gauge charge rotating around a site and $\pm 1$ gauge charge rotating around the upward/downward triangle centers, respectively. Since $T_2=(C_3^d)^{-1}C_3^u$ (d, u denote rotations around the downward and upward triangle centers, respectively), the monopoles $\mathcal S_i$ transform with a phase $-\frac{-2\pi}{3}+\frac{2\pi}{3}=\frac{4\pi}{3}$, and similarly they get a phase of $\omega$ under $T_1$. To sum up, we get the transformation of the monopoles as tabulated in Table~\ref{table:triangular_monopole}. The minimal symmetry-allowed monopole is a three-fold monopole, as discussed in Ref.~\cite{shortpaper}. Note that in principle, it is not sufficient to simply match the eigenvalues because of the complicated PSG structure.
A full calculation should compare the full representations of the lattice PSG (rotation and translation) at high symmetry points in the Brillouin zone, which are generically non-Abelian. We do not perform such a calculation here, since the much simpler calculation is already sufficiently constraining to essentially fix the Wannier centers. \subsection{Kagom\'e lattice} \label{kagome_calculation} \begin{table} \captionsetup{justification=raggedright} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline & $T_1$ & $T_2$ & $R_y$ & $ C_6$ & $\mathcal T$ \\ \hline $\Phi_1^\dagger$ & $-\Phi_1^\dagger$ &$- \Phi_1^\dagger$ & $-\Phi_3$ & $e^{i\frac{2\pi}{3}}\Phi_2^\dagger$ & $\Phi_1$\\ $\Phi_2^\dagger$ & $\Phi_2^\dagger$ &$-\Phi_2^\dagger$ & $\Phi_2 $ & $-e^{i\frac{2\pi}{3}}\Phi_3^\dagger$ & $\Phi_2 $\\ $\Phi_3^\dagger$ & $-\Phi_3^\dagger$& $\Phi_3^\dagger$&$-\Phi_1 $ & $-e^{i\frac{2\pi}{3}}\Phi_1^\dagger$ & $\Phi_3 $\\ $\Phi_{4/5/6}^\dagger $ & $\Phi_{4/5/6}^\dagger $& $\Phi_{4/5/6}^\dagger $& $-\Phi_{4/5/6} $ &$e^{i\frac{2\pi}{3}}\Phi_{4/5/6}^\dagger $ & $-\Phi_{4/5/6} $\\ \hline \end{tabular} \caption{The transformation of monopoles on the kagom\'e lattice. $R_y, C_6$ denote the reflection perpendicular to the vertical direction and the six-fold rotation around a hexagon center in Fig.~\ref{fig:honeycomb2}. It is impossible to incorporate the $6$-fold rotation of the monopoles into a vector representation of $SO(6)$, owing to the nontrivial Berry phase, in line with the magnetic ordering pattern on the kagom\'e lattice.
\label{table:kagome_monopole}} \end{center} \end{table} On the kagom\'e lattice, similar to the triangular case, Ref.~\onlinecite{hermele_2008} obtained the kagom\'e DSL from a staggered-flux mean-field ansatz, with the three gamma matrices $\gamma_\nu=(\mu^3,\mu^2,-\mu^1)$; the PSG of the Dirac fermions reads \begin{align} T_1: \psi\rightarrow (i\tau^2)\psi\quad T_2: \psi\rightarrow (i\tau^3)\psi\quad R_y: \psi\rightarrow (i\mu^1)e^{\frac{i\pi}{2}\tau_{ry}} \psi\nonumber\\ C_6: \psi\rightarrow e^{\frac{i\pi}{3}\mu^3} e^{\frac{2\pi i}{3}\tau_R}\psi\quad \mathcal T:\psi\rightarrow (i\sigma^2)(i\mu^2)(-i\tau^2)\psi. \end{align} where \begin{align} \tau_{ry}=\frac{-1}{\sqrt{2}} (\tau^1+\tau^3)\quad \tau_R=\frac{1}{\sqrt{3}}(\tau^1+\tau^2-\tau^3). \end{align} The PSG again fixes the monopole transformations, up to overall phase factors to be determined, as listed in Table~\ref{table:kagome_monopole}. The overall phase for time reversal is fixed through the argument in Sec.~\ref{generalities}. The overall phase for $R_y$ is convention-dependent and we fix it as in Table~\ref{table:kagome_monopole}. The fact that $\Phi_{4,5,6}$ are invariant (up to a phase) under $C_6$ requires their momenta to be zero, which in turn fixes the overall phases associated with $T_{1,2}$ as in Table~\ref{table:kagome_monopole}. The only undetermined phase is that in $C_6$, which we fix below. The calculation is essentially the same as in the triangular case, so we will be brief here. One calculates the symmetry representations of the $3$-fold rotations around upward/downward triangle centers and the $6$-fold rotation around hexagon centers, listed in table \ref{table:kagome_rep}. The spinon band is represented as (lower right of Fig.~\ref{fig:honeycomb2}) \begin{equation} \textrm{kagome QSH occupied band = 3 site} -\triangle -\triangledown- 4\hexagon \end{equation} where $\triangle,\triangledown,\hexagon$ denote Wannier insulators localized on upward triangles, downward triangles and hexagons, respectively.
Note that the numerical factors only denote the numbers of occupied particles localized at each Wannier center; they may have different Wannier wave functions. This means the $C_6$ rotation sees a $\frac{-4\pi}{3}$ Berry phase, while translations carry no Berry phase, since a translation is composed of $C_3^\triangle (C_3^\triangledown)^{-1}$ and the two Berry phases cancel. These are the results listed in Table~\ref{table:kagome_monopole}. The most relevant (in the RG sense) symmetry-allowed monopole is a two-fold monopole, as discussed in Ref.~\cite{shortpaper}. \begin{table*}[] \renewcommand{\arraystretch}{1.4} \captionsetup{justification=raggedright} \begin{center} \begin{tabular}{|c|p{18mm}|p{18mm}|p{30mm}|p{18mm}|p{40mm}|p{40mm}|} \hline sym. & $\Gamma^\triangle_{k/Q}$&$\Gamma^\triangledown_{k/Q}$&$\Gamma^{\hexagon}_{k/Q}$ &$\Gamma^\circ_{k/Q}$ &$\Gamma^{PSG}_k$({4-fold particle, 2-fold particle, 4-fold hole, 2-fold hole})&$\Gamma^{PSG}_Q$({4-fold particle, 4-fold (Dirac fermion), 4-fold hole})\\ \hline $C_3^{u}$ & $\omega^\eta\oplus[1,\omega,\omega^2]$ &$\omega^\eta p_1\oplus[1,\omega,\omega^2]$& $\omega^\eta p_1^*\oplus[1,\omega,\omega^2]$ & $4[ 1,\omega,\omega^2]$ & $[(\omega,\omega^2),(1,\omega)]\oplus[1,\omega^2]\oplus[(\omega,\omega^2),(1,\omega)]\oplus[1,\omega^2]$ & $[\omega,\omega,\omega^2,\omega^2]\oplus [1,1,\omega,\omega^2] \oplus [1,1,\omega,\omega^2]$\\ $C_3^{d}$ & $\omega^\eta p_1^*\oplus[1,\omega,\omega^2]$ &$\omega^\eta \oplus[1,\omega,\omega^2]$& $\omega^\eta p_1\oplus[1,\omega,\omega^2]$ & $4[ 1,\omega,\omega^2]$ & $[(1,\omega^2),(\omega^2,\omega)]\oplus[1,\omega]\oplus[(1,\omega^2),(\omega^2,\omega)]\oplus[1,\omega]$ & $[\omega,\omega,\omega^2,\omega^2]\oplus [1,1,\omega,\omega^2] \oplus [1,1,\omega,\omega^2]$\\ $C_3^{h}$ & $\omega^\eta p_1\oplus[1,\omega,\omega^2]$ &$\omega^\eta p_1^*\oplus[1,\omega,\omega^2]$& $\omega^\eta \oplus[1,\omega,\omega^2]$ & $4[ 1,\omega,\omega^2]$ &
$[(\omega,1),(1,\omega^2)]\oplus[\omega,\omega^2]\oplus[(\omega,1),(1,\omega^2)]\oplus[\omega,\omega^2]$ & $[\omega,\omega,\omega^2,\omega^2]\oplus [1,1,\omega,\omega^2] \oplus [1,1,\omega,\omega^2]$\\ \hline $C_6$ &\multicolumn{2}{c|} {$[\Omega^{2\eta+1} ,-\Omega^{2\eta+1} ]\oplus [\textrm{Sextet}]$} &$\Omega^{2\eta+1} \oplus(-1)^{\eta}[e^{i\frac{\pi}{6}},e^{i\frac{5\pi}{6}},-i]$ &$2[\textrm{Sextet}]$ &\multicolumn{2}{c|} {$\Gamma^{PSG}_Q: [ \Omega,\Omega^5,\Omega^{-5},\Omega^{*}]\oplus [\pm i,\Omega,\Omega^{*}]\oplus [ \pm i,\Omega^5,\Omega^{-5}] $}\\ \hline \multicolumn{7}{|c|} {$\Gamma^{particle}=3\Gamma^\circ_{L=0}-\Gamma^\triangle_{L=0}-\Gamma^\triangledown_{L=0}-\Gamma^{\hexagon}_{L=0}-\Gamma^{\hexagon}_{L=3}-\Gamma^{\hexagon}_{L=4}-\Gamma^{\hexagon}_{L=5}$}\\ \multicolumn{7}{|c|} {$\Gamma^{hole}=-\Gamma^\circ_{L=0}+\Gamma^\triangle_{L=0}+\Gamma^\triangledown_{L=0}+\Gamma^{\hexagon}_{L=0}+\Gamma^{\hexagon}_{L=3}+\Gamma^{\hexagon}_{L=4}+\Gamma^{\hexagon}_{L=5}$}\\ \hline \end{tabular} \end{center} \caption{The space group representation of rotations on the kagom\'e lattice at high symmetry points $k=(\frac{2\pi}{3},\frac{\pi}{3}),Q=(0,\pi)$. Superscripts u, d, h denote rotations around upward triangle, downward triangle and hexagon centers, respectively. We have $\Omega=e^{i\frac{\pi}{6}}$, $\omega=e^{i2\pi/3}$, $[\textrm{Sextet}]=i[e^{\pm i\frac{\pi}{6}},\pm i,e^{\pm i\frac{5\pi}{6}}]$, and $\eta$ is an integer that can vary from case to case. $p_{1}=e^{i \vec k\cdot \vec r_{1}}$ is the phase factor under $T_1$ translation at the high-symmetry momenta of interest. $\Gamma^\triangle,\Gamma^\triangledown,\Gamma^\circ, \Gamma^{\hexagon}$ denote representations of Wannier functions centered at upward triangles, downward triangles, sites and hexagons, respectively. At the $k$ point, there are two sets of $4$-fold degenerate states, which can be simultaneously block-diagonalized into two $2\times 2$ matrices for all three $C_3$ rotations, and we list the pairwise eigenvalues in brackets.
The multiplicity of a representation is denoted simply by a number in front of the set of eigenvalues. } \label{table:kagome_rep} \end{table*} \section{Anomalies and Lieb-Schultz-Mattis} \label{anomalyLSM} In the $CP^1$ (slave boson) representation of spin-$1/2$ systems on the square lattice, the monopole quantum numbers were fixed by Lieb-Schultz-Mattis (LSM) constraints\cite{MetlitskiThorngren}. Essentially, the LSM theorem requires the low energy effective field theory to have certain symmetry anomalies, which can be matched only if the $CP^1$ monopole carries the right quantum number, e.g. $\pm1$ angular momentum under $C_4$. It is then natural to ask to what extent the monopole quantum numbers in Dirac spin liquids are determined by LSM-anomaly constraints. In this section we show that monopole quantum numbers associated with $\mathbb{Z}_2$ symmetries (such as inversion) are indeed determined by LSM-anomaly constraints in the DSL, while those associated with $\mathbb{Z}_3$ symmetries (such as $C_3$) are not. Let us start from the QED$_3$ theory, and try to gauge the $SO(3)_{spin}\times SO(3)_{valley}\times U(1)_{top}$ symmetry\footnote{Notice that this symmetry is simpler than $SO(6)\times U(1)/\mathbb{Z}_2$ since $SO(3)\times SO(3)$ has no center.}, as we will be interested in those microscopic symmetries that can be embedded into this group. The anomaly associated with these symmetries can be calculated. One way to interpret the anomaly is to imagine a $(3+1)d$ SPT state that hosts the QED$_3$ theory on its boundary, with the bulk SPT characterized by a response theory \begin{equation} \label{Anomaly} S_{bulk}=i\pi\int_{X_4}\left[ w_2^s\cup w_2^v+\left(w_2^s+w_2^v+\frac{dA_{top}}{2\pi} \right)\cup \frac{dA_{top}}{2\pi} \right], \end{equation} where $w_2^{s,v}$ are the second Stiefel-Whitney classes of the $SO(3)_{spin/valley}$ bundles, respectively, and $A_{top}$ is the $U(1)$ gauge field that couples to the $U(1)_{top}$ charge.
We outline the derivation of the above expression (which is similar to that in Ref.~\cite{wang_2017}) in Appendix~\ref{SSUanomaly}. The first term is essentially a descendant of the parity anomaly of Dirac fermions, and the terms involving $A_{top}$ simply represent the fact that an $A_{top}$ monopole -- the spinon $\psi$ in the original QED$_3$ -- carries half-integer spin under $SO(3)_{spin/valley}$ and is a fermion. Our remaining task is simply to embed the lattice symmetries into the $SO(3)_{valley}\times U(1)_{top}$ group and see if the correct anomaly from LSM is reproduced. The PSG determines the $SO(3)_{valley}$ part of the lattice symmetries, which in turn determines $w_2^v$. The only unknown is the relation between the lattice symmetries and $A_{top}$, determined by the $U(1)_{top}$ Berry phase in the symmetry realizations. Let us first consider $\mathbb{Z}_2$ inversion symmetries. On the triangular lattice this is not interesting, since it involves charge conjugation. We shall consider the other three lattices in detail. On the square lattice the site-centered inversion involves a nontrivial $SO(3)_{valley}$ rotation $\psi\to i \tau^2\psi$. So if we gauge the inversion symmetry (call the $\mathbb{Z}_2$ connection $\gamma$), the $\pi w_2^s w_2^v$ term in Eq.~\eqref{Anomaly} becomes $\pi w_2^s\gamma^2$ (shorthand for $\gamma\cup\gamma$), which is exactly the anomaly imposed by LSM (simply reflecting the fact that there is a spin-$1/2$ Hilbert space at the inversion center on the lattice). Therefore the other terms in Eq.~\eqref{Anomaly} should not contribute further anomalies. This means that $dA_{top}=0$, i.e. there is no additional $U(1)_{top}$ phase factor associated with inversion. This is indeed what we have in Table~\ref{table:square_monopole}, where inversion only implements the $SO(3)_{valley}$ rotation $\Phi_{1,3}\to-\Phi_{1,3}$.
On the honeycomb lattice the $\pi w_2^s w_2^v$ term likewise gives a $\pi w_2^s\gamma^2$ anomaly, where $\gamma$ is again the inversion gauge field. However, since on the lattice there is no spin at the inversion center, there should be no actual anomaly. This means that the other terms in Eq.~\eqref{Anomaly} should contribute another $\pi w_2^s\gamma^2$ term to the anomaly, which can be done by having $dA_{top}=2\pi\gamma^2$. This means that under inversion the monopole should pick up an additional $(-1)$ phase, exactly in accordance with Table~\ref{table:honeycomb_monopole}. On the kagom\'e lattice, the hexagon-centered inversion does not involve any nontrivial $SO(3)_{valley}$ rotation for the Dirac fermions, therefore the $\pi w_2^s w_2^v$ term does not contribute an anomaly for inversion. The LSM constraint also requires no anomaly, since on the lattice there is no spin at the inversion center. Therefore the terms involving $A_{top}$ in Eq.~\eqref{Anomaly} should not give rise to any anomaly either. This means that under inversion the monopoles stay invariant, again in accordance with Table~\ref{table:kagome_monopole}. We now consider $\mathbb{Z}_3$ (or any $\mathbb{Z}_{2k+1}$) symmetries like the $C_3$ rotations. In this case the anomalies become trivial no matter how we embed the symmetry into $SO(3)_{valley}\times U(1)_{top}$. This can be seen most easily by writing the anomalies involving $SO(3)_{spin}$ as $\pi w_2^s w_2^{SO(3)_{valley}\times U(1)_{top}}$ (recall that $w_2^{SO(2)}=dA/2\pi$ (mod $2$)), and using the fact that $w_2=0$ for a $\mathbb{Z}_3$ bundle. Therefore the anomaly-based argument does not say anything about monopole quantum numbers for these symmetries. As a final example, let us consider the translation symmetries $T_{1,2}$ on the triangular lattice.
These symmetries act as $\mathbb{Z}_2\times \mathbb{Z}_2$ on the Dirac fermions, but could also involve a $\mathbb{Z}_3$ subgroup of $U(1)_{top}$ (one can show that translation involves a Berry phase $2n\pi/3$ $(n\in\mathbb Z)$ from the algebraic relations of the space group\cite{shortpaper}). The $\mathbb{Z}_2\times \mathbb{Z}_2$ part gives an anomaly $\pi w_2^{s}xy$, where $x,y$ are the $\mathbb{Z}_2$ forms associated with $T_{1,2}$, and this is exactly the LSM anomaly since we have one spin-$1/2$ per unit cell. The $\mathbb{Z}_3$ part, however, will not further modify the anomaly, so the two inequivalent choices of Berry phase ($e^{i2\pi/3}$ or trivial) are both allowed by LSM. \section{Three dimensions: Monopole PSG from charge centers} \label{3Dchargecenter} The charge (Wannier) center approach can also be generalized to three dimensions. Consider a $3D$ $U(1)$ quantum spin liquid with gapped spinons (charges) and magnetic monopoles. Recall that in $3D$ a monopole is a point excitation, and the monopole creation operator is a nonlocal operator. Such spin liquids have been extensively discussed in the context of quantum spin ice materials\cite{savary_2017}. The non-local nature of monopoles in $3D$ implies that, unlike in the 2D case, they can transform projectively under physical symmetries, like the spinons. The relevant question then is how the spinon band topology (or SPT-ness) affects the monopole projective symmetry quantum numbers (or simply the monopole PSG). We now show that the monopole PSG associated with lattice rotation symmetries is determined by the effective gauge charges sitting at the rotation centers, in cases where the rotation symmetries do not involve charge conjugation (namely when the monopole flux is invariant under the lattice rotations). To see this, first consider a single charge, or any odd-integer charge, in space with the full $SO(3)$ rotation symmetry around the charge.
By examining the Aharonov-Bohm phase for a monopole moving around the charge, one concludes that the monopole carries half-integer angular momentum under the spatial $SO(3)$ rotation\footnote{One choice of vector potential for a $2\pi$ Dirac monopole reads $\vec A =\frac{(1-\cos(\theta))}{2r\sin(\theta)}\hat\varphi$ in polar coordinates parametrized by $(r,\theta,\varphi)$, which on the unit sphere is identical to the Berry connection of a spin-$1/2$.}, i.e. the monopole transforms projectively under $SO(3)$. Now on a lattice the $SO(3)$ is broken down to a discrete subgroup, but as long as the remaining rotation group $G$ admits a projective representation $\omega_2\in H^2(G,U(1))$ that is a descendant of the spin-$1/2$ representation when $G$ is embedded into $SO(3)$, the monopole will transform projectively under $G$ according to $\omega_2$. The simplest example is the dihedral group $D_2=\mathbb{Z}_2\times \mathbb{Z}_2$, corresponding to $\pi$ rotations about three orthogonal axes. If an odd number of gauge charges effectively sit at the rotation center, the monopole will transform under $D_2$ such that different $\mathbb{Z}_2$ rotations anti-commute. \section{Discussion and Future Directions} \label{Discussion} We have demonstrated a precise mapping between Landau order parameters and the symmetry protected band topology of fermions. The link is established by studying the properties of magnetic monopoles in a Dirac spin liquid. Knowledge of the spinon band topology allows us to analytically calculate monopole symmetry quantum numbers. This in turn allows us to identify the set of order parameters that are enhanced in the vicinity of a Dirac spin liquid. Thus, results involving gapped and noninteracting fermions, which constitute an analytically well-controlled limit, are used to extract key information about a strongly interacting and gapless system, the Dirac spin liquid.
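The identification, in the footnote of Sec.~\ref{3Dchargecenter}, of the monopole gauge potential with a spin-$1/2$ Berry connection can be made quantitative: the holonomy of $\vec A$ around a circle of colatitude $\theta$ on the unit sphere equals half the enclosed solid angle, which is precisely the Berry phase of a spin-$1/2$. A numerical sketch of this check (the discretization is illustrative):

```python
import numpy as np

def monopole_holonomy(theta, n=4000):
    """Line integral of the 2*pi Dirac-monopole potential
    A_phi = (1 - cos(theta)) / (2 r sin(theta))
    around the circle of colatitude theta on the unit sphere (r = 1)."""
    dphi = 2 * np.pi / n
    # On the circle, dl = sin(theta) * dphi, so A . dl is constant in phi.
    a_dot_dl = (1 - np.cos(theta)) / (2 * np.sin(theta)) * np.sin(theta) * dphi
    return n * a_dot_dl

for theta in (0.3, 1.0, 2.5):
    solid_angle = 2 * np.pi * (1 - np.cos(theta))  # area enclosed on S^2
    # Holonomy = half the enclosed solid angle: the spin-1/2 Berry phase.
    assert np.isclose(monopole_holonomy(theta), solid_angle / 2)
```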
We also showed that on bipartite lattices there always exists a symmetry-trivial monopole, due to the existence of a parent SU(2) gauge theory. In a separate publication we have discussed the physical consequences of the monopole properties, as well as signatures of a Dirac spin liquid which can be accessed in numerics and in scattering experiments on candidate materials\cite{shortpaper}. Even though all our calculations of the Wannier centers were done in the free-fermion framework, notions like the monopole quantum numbers are certainly well-defined beyond free fermions. In fact, the connection between Wannier centers and monopole quantum numbers gives intrinsically interacting definitions of notions like Wannier centers and fragile topology (in accordance with the recent works\cite{ElsePoWatanabe,shiftinsulator}). The monopole angular momenta arising from charge (Wannier) centers are consistent with another simple fact in lattice gauge theory: in the strong-coupling limit of a lattice gauge theory (which is fully confined), all the gauge charges simply sit on the defining sites of the lattice and do not fluctuate. Therefore if the gapped charge fields have their effective charge centers (Wannier centers) off the sites (say at plaquette centers), the strong-coupling limit must destroy such a state and pull the charges back to the lattice sites, implying that the lattice symmetries (which prevent the charges from moving away from the nontrivial centers) must be broken along the way. One can use this as an intuitive way to understand the following well-known statement: the $SU(2)$ Yang-Mills theory in $(3+1)$ dimensions with $\theta=\pi$ cannot confine to a trivial phase without breaking time-reversal symmetry (for a recent exposition see Ref.~\onlinecite{GKKS}). When put on the lattice, $\theta=\pi$ can be generated by introducing fundamental fermions and putting the fermions into a topological superconductor state (protected by $SU(2)\times\mathcal{T}$).
By definition the topological superconductor cannot be deformed (without closing the gap or breaking symmetries) to a trivial product state with local $SU(2)$ singlets on lattice sites -- but this is exactly what the strong-coupling limit demands. Therefore if the theory indeed flows to infinite coupling without closing the fermion gap, then time-reversal symmetry must be broken in the IR. We end with a discussion of open issues: \begin{itemize} \item It is also straightforward to apply the charge center method to $\mathbb{Z}_2$ spin liquids (or other discrete gauge theories) in $(2+1)d$. For example, if a nontrivial $\mathbb{Z}_2$ charge (effectively) sits at the center of a plaquette, then the gauge flux (vison) will transform projectively under the lattice rotation around the plaquette center. This is similar to the situation for $3D$ $U(1)$ spin liquids discussed in Sec.~\ref{3Dchargecenter}. \item One natural question is: given a microscopic Hilbert space (say, one spin-$1/2$ per site on the triangular lattice), is it possible to realize a different $U(1)$ Dirac spin liquid, with the same field content and symmetry realizations (PSG), but different monopole quantum numbers from the one discussed in this work? For example, is there a spinon mean-field ansatz that gives identical Dirac dispersion and PSG, but with monopoles at different momenta, say $T_{1,2}: \Phi_{4/5/6}\to\Phi_{4/5/6}$ (as opposed to that in Table~\ref{table:triangular_monopole})? Unlike in the bosonic spinon theory for deconfined criticality, some of the monopole quantum numbers in our examples are not linked to Lieb-Schultz-Mattis (LSM) constraints\cite{MetlitskiThorngren}, as discussed in Sec.~\ref{anomalyLSM}. So an alternative $U(1)$ Dirac spin liquid (with the same PSG but different monopole quantum numbers) seems allowed on general grounds -- finding a concrete example is left for future work.
\item In Sec.~\ref{wanniercenter} and \ref{calculation} we calculated the angular momenta of monopoles using charge centers, and then obtained their momenta by composing different rotations. In the absence of lattice rotation symmetries the angular momenta are not defined, but the momentum of a monopole is still well defined (although it may not be quantized). It is then natural to ask what determines the monopole momentum when lattice rotation symmetries are not considered. Recall that with rotation symmetries, the momentum is given by the difference between angular momenta around different rotation centers, which is in turn given by the difference between the gauge charges sitting at the different rotation centers. The latter appears to be nothing but the dipole moment in the unit cell, which in the more familiar language is just the polarization density. In fact, by comparing Fig.~\ref{fig:honeycomb2} and the monopole momenta for each spinon insulator, we can infer that the monopole momentum $\vec{k}_{\mathcal{M}}$ is given by the polarization density $\vec{P}$ (dipole density per unit cell) through $\vec{k}_{\mathcal{M}}=2\pi\hat{z}\times\vec{P}$. In a forthcoming work we will develop the connection between polarization and monopole momentum in detail, without assuming lattice rotation symmetries\cite{toappear}. \end{itemize} {\it Note added:} In a different parallel work \cite{shiftinsulator}, monopole quantum numbers were used as a probe of free fermion band topology for a specific class of models. Here, however, the monopoles are dynamical objects, and the underlying models are strongly interacting quantum magnets. \section*{Acknowledgements} We gratefully acknowledge helpful discussions with Chao-Ming Jian, Eslam Khalaf, Shang Liu, Jacob McNamara, Max Metlitski, Adrian H. C. Po, Ying Ran, Subir Sachdev, Cenke Xu, Yi-Zhuang You and Liujun Zou. X-Y~S acknowledges the hospitality of the Kavli Institute for Theoretical Physics (NSF PHY-1748958).
YCH was supported by the Gordon and Betty Moore Foundation under the EPiQS initiative, GBMF4306, at Harvard University. A.V. was supported by a Simons Investigator award. CW was supported by the Harvard Society of Fellows. Research at Perimeter Institute (YCH and CW) is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science. \begin{widetext}
\section{Introduction} Millisecond oscillations in the X-ray brightness during thermonuclear bursts, ``burst oscillations'', have been observed from six low mass X-ray binaries (LMXB) with the Rossi X-ray Timing Explorer (RXTE) (see \markcite{SSZ}Strohmayer, Swank \& Zhang 1998 for a recent review). Considerable evidence points to rotational modulation as the source of these pulsations (see, for example, \markcite{SZS}Strohmayer, Zhang \& Swank 1997; \markcite{SM99}Strohmayer \& Markwardt 1999). Anisotropic X-ray emission caused by either localized or inhomogeneous nuclear burning produces either one or a pair of hot spots on the surface, which are then modulated by rotation of the neutron star. A remarkable property of these oscillations is the frequency evolution which occurs in the cooling tail of some bursts. Recently, \markcite{SM99}Strohmayer \& Markwardt (1999) have shown that the frequency in the cooling tail of bursts from 4U 1728-34 and 4U 1702-429 is well described by an exponential chirp model whose frequency increases asymptotically toward a limiting value. \markcite{SJGL}Strohmayer et al. (1997) have argued that this evolution results from angular momentum conservation of the thermonuclear shell, which cools, shrinks and spins up as the surface radiates away the thermonuclear energy. To date, only frequency increases have been reported in the cooling tails of bursts, consistent with settling of the shell as its energy is radiated away. In this Letter we report observations of a {\it decreasing} burst oscillation frequency in the tail of an X-ray burst. We find that an episode of spin down in the cooling tail of a burst observed on December 31, 1996 at 17:36:52 UTC (hereafter, burst A, or the ``spin down'' burst) from 4U 1636-53 is correlated with the presence of an extended tail of thermal emission. In \S 2 we present an analysis of the frequency evolution in this burst, with emphasis on the spin down episode.
In \S 3 we present time-resolved energy spectra of the spin down burst, and we investigate the energetics of the extended tail. Throughout, we compare the temporal and spectral behavior of the spin down burst with a different burst observed on December 29, 1996 at 23:26:46 UTC (hereafter, burst B), which shows neither a spin down episode nor an extended tail of emission, but which is similar to the spin down burst in most other respects. We conclude in \S 4 with a summary and discussion of the spin down episode and extended emission in the context of an additional, delayed thermonuclear energy release which might re-expand the thermonuclear shell and perhaps account for both the spin down and the extended tail of thermal emission. \section{Evidence for Spin Down} Oscillations at 580 Hz were discovered in thermonuclear bursts from 4U 1636-53 by Zhang et al. (1996). More recently, Miller (1999a) has reported evidence of a significant modulation at half the 580 Hz frequency during the rising phase of bursts, suggesting that 580 Hz is twice the neutron star spin frequency and that a pair of antipodal spots produce the oscillations. Here we focus on a burst from 4U 1636-53 which shows a unique decrease in the $\approx 580$ Hz oscillation frequency. To study the evolution in frequency of burst oscillations we employ the $Z_n^2$ statistic (\markcite{B83}Buccheri et al. 1983). We have described this method previously, and details can be found in \markcite{SM99}Strohmayer \& Markwardt (1999). We first constructed for both bursts A and B a dynamic ``variability'' spectrum by computing $Z_1^2$ as a function of time on a grid of frequency values in the vicinity of 580 Hz. We used 2 second intervals to compute $Z_1^2$ and started a new interval every 0.25 seconds.
This variability spectrum is very similar to a standard dynamic power spectrum; however, the $Z_1^2$ statistic allows for a more densely sampled frequency grid than a standard Fast Fourier Transform power spectrum. The results are shown in Figure 1 (bursts A and B are in the top and bottom panels, respectively) as contour maps of constant $Z_1^2$ through each burst. The contour map for the spin down burst (top panel) suggests that the oscillation began with a frequency near 579.6 Hz at burst onset, reappeared later in the burst after ``touchdown'' of the photosphere at an increased frequency, $\approx 580.7$ Hz, but then, beginning near 11 seconds, dropped to $\approx 579.6$ Hz over several seconds. For comparison, we also show in Figure 1 (bottom panel) a similar variability spectrum for burst B, which also shows strong oscillations near 580 Hz but no evidence of a similar spin down episode. To investigate the evolution of the oscillation frequency more quantitatively we fit a model for the temporal evolution of the frequency, $\nu(t)$, to the 4.5 second interval during which the oscillation is evident in the dynamic variability spectrum (Figure 1, top panel). Our model is composed of two linear segments, each with its own slope, joined continuously at a break time $t_b$. This is similar to the model employed by \markcite{M99b}Miller (1999b), and has four free parameters: the initial frequency, $\nu_0$, the two slopes, $d_{\nu}^1$ and $d_{\nu}^2$, and the break time, $t_b$. We used this frequency model to compute phases $\phi_{t_j}$ for each X-ray event, {\it viz.} $\phi_{t_j} = \int_0^{t_j} \nu(t') dt'$, where $t_j$ are the photon arrival times, and then varied the model parameters to maximize the $Z_1^2$ statistic. We used a downhill simplex method for the maximization (see Press et al. 1989). Figure 2 compares $Z_1^2$ vs.
parameter $\nu_0$ for the best fitting two segment model (solid histogram) and a simple constant frequency model ($\nu(t) = \nu_0$, dashed histogram). The two segment model produces a significant increase in the maximum $Z_1^2$ of about 40 compared with no frequency evolution, and it also yields a single, narrower peak in the $Z_1^2$ distribution. The increase of 40 in $Z_1^2$, which for a purely random process is distributed as $\chi^2$ with 2 degrees of freedom, argues convincingly that the frequency drop is significant. We note that \markcite{M99b}Miller (1999b) has also identified the same spin down episode during this burst using a different, but related method. The best fitting two segment model is shown graphically as the solid curve in Figure 1 (top panel). \section{Time history, spectral evolution and burst energetics} A comparison of the 2 - 20 keV time history of the spin down burst with other bursts from the same observations reveals that this burst is also unique in having an extended tail of thermal emission. This is well illustrated in Figure 3, which compares the 2 - 20 keV time histories of the spin down burst and burst B. To further investigate the energetics of the thermal burst emission we performed a spectral evolution analysis. We accumulated energy spectra for varying time intervals through both bursts. Using XSPEC we fit blackbody spectra to each interval by first subtracting a pre-burst interval as background, and then investigated the temporal evolution of the blackbody temperature, $kT$, inferred radius, $R_{BB}$, and bolometric flux, $F$. In most intervals we obtained acceptable fits with the blackbody model. The results for both bursts are summarized in Figure 4; we have aligned the burst profiles in time for direct comparison. Both bursts show evidence for radius expansion shortly after onset in that $kT$ drops initially and then recovers. Their peak fluxes are also similar, consistent with being Eddington limited.
Out to about 7 seconds post-onset both bursts show the same qualitative behavior; after this, however, the spin down burst (solid curve) shows a much more gradual decrease in both the blackbody temperature $kT$ and the bolometric flux $F$ than is evident in burst B. We integrated the flux versus time profile for each burst in order to estimate fluences and establish the energy budget in the extended tail. We find fluences of $1.4 \times 10^{-6}$ and $5.1 \times 10^{-7}$ ergs cm$^{-2}$ for bursts A and B respectively. That is, the spin down burst has about 2.75 times as much energy as burst B. Put another way, most of the energy in the spin down burst is in the extended tail. In Figure 4 we also indicate with a vertical dotted line the time $t_b$ associated with the beginning of the spin down episode based on our modelling of the 580 Hz oscillations. The spectral evolution analysis indicates that at about the same time the spin down episode began there was also a change in its spectral evolution as compared with that of burst B. This behavior is evident in Figure 5, which shows the evolution of $kT$ (dashed curve) and the inferred blackbody radius $R_{BB}$ (solid curve) for the spin down burst. Notice the secondary increase in $R_{BB}$ and an associated dip in $kT$ near time $t_b$ (vertical dotted line). This behavior is similar to the signature of radius expansion seen earlier in both bursts, but at a weaker level. It suggests that at this time there may have been an additional thermonuclear energy input in the accreted layers, perhaps at greater depth, which then diffused out on a longer timescale, producing the extended tail. That this spectral signature occurred near the same time as the onset of the spin down episode suggests that the two events may be causally related. \section{Discussion and Summary} The observation of thermonuclear bursts with extended tails is not a new phenomenon.
\markcite{CCG}Czerny, Czerny, \& Grindlay (1987) reported on a burst from the soft X-ray transient Aql X-1 which showed a long, relatively flat X-ray tail. Bursts following this one were found to have much shorter, weaker tails. \markcite{F92}Fushiki et al. (1992) argued that such long tails were caused by an extended phase of hydrogen burning due to electron captures at high density ($\rho \approx 10^7$ g cm$^{-3}$) in the accreted envelope. Such behavior is made possible because of the long time required to accumulate an unstable pile of hydrogen-rich thermonuclear fuel when the neutron star is relatively cool ($\approx 10^7$ K) prior to the onset of accretion. This, they argued, could occur in transients such as Aql X-1, which have long quiescent periods during which the neutron star envelope can cool, and thus the first thermonuclear burst after the onset of an accretion driven outburst should show the longest extended tail. Other researchers have shown that the thermal state of the neutron star envelope, the abundance of CNO materials in the accreted matter, and variations in the mass accretion rate all can have profound effects on the character of bursts produced during an accretion driven outburst (see \markcite{T93}Taam et al. 1993; \markcite{WW85}Woosley \& Weaver 1985; and \markcite{AJ82}Ayasli \& Joss 1982, and references therein). For example, \markcite{T93}Taam et al. (1993) showed that for low CNO abundances and cool neutron star envelopes the subsequent bursting behavior can be extremely erratic, with burst recurrence times varying by as much as two orders of magnitude. They also showed that such conditions produce dwarf bursts, with short recurrence times and peak fluxes less than a tenth Eddington, and that many bursts do not burn all the fuel accumulated since the last X-ray burst. Thus residual fuel, in particular hydrogen, can survive and provide energy for subsequent bursts. 
These effects lead to a great diversity in the properties of X-ray bursts observed from a single accreting neutron star. Some of these effects were likely at work during the December, 1996 observations of 4U 1636-53 discussed here, as both a burst with a long extended tail and a dwarf burst were observed (see \markcite{M99b}Miller 1999b). The spin up of burst oscillations in the cooling tails of thermonuclear bursts from 4U 1728-34 and 4U 1702-429 has been discussed in terms of angular momentum conservation of the thermonuclear shell (see \markcite{SJGL}Strohmayer et al. 1997; \markcite{SM99}Strohmayer \& Markwardt 1999). Expanded at burst onset by the initial thermonuclear energy release, the shell spins down due to its larger rotational moment of inertia compared to its pre-burst value. As the accreted layer subsequently cools, its scale height decreases and it comes back into co-rotation with the neutron star over $\approx$ 10 seconds. To date the putative initial spin down at burst onset has not been observationally confirmed, perhaps due to the radiative diffusion delay, on the order of a second, which can hide the oscillations until after the shell has expanded and spun down (see, for example, \markcite{B98}Bildsten 1998). We continue to search for such a signature, however. Although the initial spin down at burst onset has not been seen, the observation of a spin down episode in the {\it tail} of a burst begs the question: can it be understood in a similar context, that is, by invoking a second, thermal expansion of the burning layers? The supporting evidence is the presence of spin down associated with the spectral evidence for an additional energy source, the extended tail, as well as the spectral variation observed at the time the spin down commenced (see Figure 5).
Based on these observations we suggest that the spin down began with a second episode of thermonuclear energy release, perhaps in a hydrogen-rich layer underlying that responsible for the initial instability, and built up over several preceding bursts. Such a scenario is not so unlikely based on previous theoretical work (see \markcite{T93}Taam et al. 1993, \markcite{F92}Fushiki et al. 1992). The observed rate of spin down, $d_{\nu}^2 = -1.01\times 10^{-3}$ s$^{-1}$, interpreted as an increase in the height of the angular momentum conserving shell gives $dr/dt \approx (\Delta\nu / 2\Delta T \nu_0)R \approx 5.25$ m s$^{-1}$, for a neutron star radius of $R = 10$ km. Calculations predict increases in the scale height of the bursting layer on the order of 20-30 m during thermonuclear flashes (see \markcite{J77}Joss 1977; and \markcite{B95}Bildsten 1995). Based on this and the energy evident in the extended tail, the additional expansion of about 12 m does not appear excessive. If correct, this scenario would require that the oscillation frequency eventually increase again later in the tail. Unfortunately, the oscillation dies away before another increase is seen. It is interesting to note that bursts from 4U 1636-53 do not appear to show the same systematic evolution of the oscillation frequency as is evident in bursts from 4U 1728-34 and 4U 1702-429 (see, for example, \markcite{M99b}Miller 1999b; and \markcite{SM99}Strohmayer \& Markwardt 1999). In particular, there is no strong evidence for the exponential-like recovery that is often seen in 4U 1728-34 and 4U 1702-429. Rather, in 4U 1636-53, when the burst oscillation frequency reappears after photospheric touchdown in many bursts it appears almost immediately at the higher frequency. In the context of a spinning shell this might suggest that the shell recouples to the underlying star more quickly than in 4U 1728-34 or 4U 1702-429.
Interestingly, 4U 1636-53 is also the only source to show significant pulsations at the sub-harmonic of the strongest oscillation frequency, and this has been interpreted as revealing the presence of a pair of antipodal hot spots (see \markcite{M99a}Miller 1999a). These properties may be related and could, for example, indicate the presence of a stronger magnetic field in 4U 1636-53 than in the other sources. Another physical process which could alter the observed frequency is related to general relativistic (GR) time stretching. If the burst evolution modulates the location in radius of the photosphere, then the rotation at that radius, as seen by a distant observer, is affected by a redshift such that $\Delta r /R = (R/r_g)(1-r_g/R)(1-1/(\nu_{h}/\nu_{l})^2)$, where $\Delta r$ is the change in height of the photosphere, $r_g = 2GM/c^2$ is the Schwarzschild radius, and $\nu_h/\nu_l$ is the ratio of the highest and the lowest observed frequencies. If this were the sole cause of the frequency changes, then it would imply a height change for the photosphere of $\approx 120$ m, which is much larger than the increases predicted theoretically for bursting shells. Note that this effect works counter to angular momentum conservation of the shell, since increasing the height makes the frequency higher compared to deeper layers. Since the thicknesses of pre- and post-burst shells are on the order of 20 - 50 m, we estimate from the above that the GR correction amounts to about 10 - 20\% of the observed frequency change, and, if the angular momentum conservation effect is at work, requires a modest {\it increase} in the height of the shell over that estimated non-relativistically. We have reported in detail on the first observations of a spin down in the frequency of X-ray brightness oscillations in an X-ray burst.
We have shown that this event is coincident with the occurrence of an extended tail as well as a spectral signature, which both suggest a secondary release of thermonuclear energy in the accreted layer. It is always difficult to draw conclusions based on a single event; however, if the association of spin down episodes with extended X-ray tails can be confirmed in additional bursts, this will provide strong evidence in support of the hypothesis that angular momentum conservation of the thermonuclear shell is responsible for the observed frequency variations during bursts. The combination of spectral and timing studies during bursts with oscillations can then give us a unique new probe of the physics of thermonuclear burning on neutron stars. \acknowledgements We thank Craig Markwardt and Jean Swank for many helpful discussions. \vfill\eject
\section{Introduction} \label{sec:Introduction} Strongly correlated electron systems attract enormous attention because of the multitude of remarkable phenomena they exhibit, such as the Kondo effect, magnetic or charge ordering, and unconventional superconductivity. The situation becomes particularly interesting when different effects are either competing or reinforcing each other. A class of compounds that exhibit all these effects are the heavy fermion materials \cite{lit:Hewson:KondoProblem93,lit:Grewe1984,lit:Stewart1984,lit:Stewart2001,lit:Stewart2006,lit:Lohnensysen2007,lit:Barzykin2005,lit:Si2001,lit:Si2006,lit:Coleman2005,lit:Gegenwart2008,lit:Knafo2009,lit:Si2010,lit:Paschen2014}, where strongly interacting $f$-electrons hybridize with conduction $spd$ bands. Heavy fermion superconductors are usually considered to be nodal unconventional superconductors in which the nonlocal Cooper pairing is mediated by magnetic fluctuations \cite{lit:Ott1983, lit:Stewart1984_2,lit:Pfleiderer2009,lit:Nunes2003,lit:Steglich2016,lit:Masuda2015}. However, the pairing mechanism of the first heavy fermion superconductor $\mathrm{Ce}\mathrm{Cu}_2\mathrm{Si}_2$ has very recently become a matter of controversy \cite{lit:Kittaka2014,lit:Yamashitae2017,lit:Kitagawa2017,lit:Takenaka2017,lit:Li2018,lit:Pang2018}. While $\mathrm{Ce}\mathrm{Cu}_2\mathrm{Si}_2$ was generally believed to be a prototypical $d$-wave superconductor \cite{lit:Steglich1979}, recent low-temperature experiments have found no evidence of gap nodes at any point of the Fermi surface \cite{lit:Yamashitae2017}. These results indicate that, contrary to the long-standing belief, $\mathrm{Ce}\mathrm{Cu}_2\mathrm{Si}_2$ is a heavy-fermion superconductor with a fully gapped $s$-wave superconducting (SC) state which may be caused by an on-site attractive pairing interaction.
Since it is generally believed that the coupling between conduction electrons and strongly interacting $f$-electrons, which causes the Kondo effect and magnetism, strongly suppresses superconductivity, heavy fermion superconductors with an attractive on-site pairing interaction have barely been studied theoretically \cite{lit:Bertussi2009,lit:Bodensiek2010,lit:Maska2010,lit:Bodensiek2013,lit:Karmakar2016,lit:Costa2018}. Furthermore, besides the possibility of fully gapped superconductivity in $\mathrm{Ce}\mathrm{Cu}_2\mathrm{Si}_2$, $s$-wave superconductivity might always be induced in heavy fermion systems via the proximity effect \cite{lit:Otop2003,lit:Chen1992,lit:Han1985}, making it possible to study the interplay between superconductivity, magnetic ordering, charge ordering, and the Kondo effect. One of the simplest models comprising all these effects is a Kondo lattice \cite{lit:Doniach77,lit:Lacroix1979,lit:Fazekas1991,lit:Assaad1999,lit:Peters2007} with an additional attractive Hubbard interaction $U<0$ \cite{lit:Bauer2009}: \begin{align} H=& t\sum_{<i,j>,\sigma} \left( c^\dagger_{i,\sigma} c^{\phantom{\dagger}}_{j,\sigma} + \mathrm{H.c.} \right) - \mu \sum_{i,\sigma} n_{i,\sigma} \nonumber \\ &+ U\sum_i n_{i,\uparrow} n_{i,\downarrow} + J \sum_{i} \vec{S}_i \cdot \vec{s}_i, \label{eq:lattice_H} \end{align} where $\mu$ is the chemical potential, $t$ denotes the hopping parameter between nearest neighbors and $J>0$ is a Kondo coupling. $c^\dagger_{i,\sigma}$ creates a conduction electron on site $i$ with spin $\sigma$ and $n_{i,\sigma} = c^\dagger_{i,\sigma} c^{\phantom{\dagger}}_{i,\sigma}$. The last term in Eq. \eqref{eq:lattice_H} describes the spin-spin interaction between the conduction electron spins $\vec{s}_i=\sum_{\sigma,\sigma'}c^\dagger_{i,\sigma} \vec{\sigma}_{\sigma,\sigma'} c^{\phantom{\dagger}}_{i,\sigma'}$ and the localized $f$-electron spins $\vec{S}_i$, with the Pauli matrices $\vec{\sigma}_{\sigma,\sigma'}$.
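The basic competition encoded in Eq. \eqref{eq:lattice_H} can already be seen in the atomic limit ($t=0$, $\mu=0$) of a single site coupled to one localized spin: the attractive $U$ favors a local pair with energy $U$, while the Kondo term favors a local singlet with energy $-3J/4$. The following exact-diagonalization sketch is our own illustration (not part of the paper's method); both spins are represented by standard spin-$\tfrac12$ operators carrying the usual factors of $\tfrac12$, a normalization choice that merely rescales $J$ relative to the paper's definition of $\vec{s}_i$.

```python
import numpy as np

# Fock basis of one orbital: |0>, |up>, |dn>, |updn> with |updn> = d_up^+ d_dn^+ |0>
d_up_dag = np.zeros((4, 4)); d_up_dag[1, 0] = 1.0; d_up_dag[3, 2] = 1.0
d_dn_dag = np.zeros((4, 4)); d_dn_dag[2, 0] = 1.0; d_dn_dag[3, 1] = -1.0  # fermionic sign
d_up, d_dn = d_up_dag.T, d_dn_dag.T
n_up, n_dn = d_up_dag @ d_up, d_dn_dag @ d_dn

# spin-1/2 operators for the localized f spin (S) and the conduction spin (s)
Sz = np.diag([0.5, -0.5]); Sp = np.array([[0.0, 1.0], [0.0, 0.0]]); Sm = Sp.T
sz = 0.5 * (n_up - n_dn); sp = d_up_dag @ d_dn; sm = sp.T

def atomic_ground_energy(U, J):
    """Ground-state energy of H = U n_up n_dn + J S.s for one site (t = 0, mu = 0)."""
    H = np.kron(np.eye(2), U * n_up @ n_dn)
    H += J * (np.kron(Sz, sz) + 0.5 * (np.kron(Sp, sm) + np.kron(Sm, sp)))
    return np.linalg.eigvalsh(H).min()
```

With this normalization the Kondo singlet wins for $|U| < 3J/4$ (ground energy $-3J/4$) and the local pair wins otherwise (ground energy $U$), a caricature of the pairing-versus-screening competition studied on the full lattice below.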
This model has been investigated in one dimension by means of the density matrix renormalization group (DMRG) for a filling of $n=1/3$ \cite{lit:Bertussi2009}, for different fillings in three dimensions using static mean-field theory \cite{lit:Costa2018}, and for ferromagnetic couplings $J<0$ in two dimensions with the aid of variational minimization and Monte Carlo methods \cite{lit:Karmakar2016}. For $U=0$, the model reduces to the ordinary Kondo lattice model, exhibiting a competition between spin-density waves (SDWs) and the Kondo effect, while for $J=0$ one obtains the attractive Hubbard model with an on-site pairing term. This on-site pairing may give rise to superconductivity and, at half filling, also to a charge density wave (CDW) state which is energetically degenerate with the SC state \cite{lit:Huscroft1997,lit:Bauer2009}. Although CDWs play a crucial role at half filling, previous investigations of the model Eq. \eqref{eq:lattice_H} have ignored possible CDWs \cite{lit:Karmakar2016,lit:Costa2018}. A finite $J$ and an attractive $U$ allow us to examine the interplay between all these effects. Such an attractive on-site term can arise in different ways. In solid state systems, it can be mediated by bosons, e.g., phonons \cite{lit:Raczkowski2010} or excitons, while in ultracold atom systems \cite{lit:Review_UltraColdGases2008} the effective interaction between optically trapped fermionic atoms can be tuned using Feshbach resonances \cite{lit:Tiesinga1993,lit:Stwalley1976,lit:Inouye1998,lit:Courteille1998} so that it is well described by a local attractive potential. In such systems, $s$-wave superfluidity has already been observed \cite{lit:Greiner2003,lit:Zwierlein2004,lit:Zwierlein2005,lit:Chin2006}. In this paper, we investigate the interplay between magnetic ordering, charge ordering, the Kondo effect, and superconductivity for the Kondo lattice Hamiltonian with an attractive Hubbard interaction [Eq. \eqref{eq:lattice_H}] on a two-dimensional square lattice.
To analyze this system, we employ the real-space dynamical mean-field theory (RDMFT), which is a generalization of the dynamical mean-field theory (DMFT) \cite{lit:Metzner89,lit:DMFTreview}. The DMFT has proven very suitable for investigating the properties of strongly correlated lattice systems in cases where the momentum dependence of the self-energy can be neglected. In the RDMFT, each lattice site of a finite cluster is mapped onto its own impurity model. This allows us to study incommensurate CDWs or SDWs; however, nonlocal interactions such as intersite SC pairing mechanisms cannot be described with the RDMFT. Therefore, only $s$-wave superconductivity, mediated by a local pairing, is investigated in this paper. The effective impurity models have to be solved self-consistently. For this purpose we develop a new self-consistent NRG \cite{lit:WilsonNRG,lit:BullaReview} scheme which allows us to combine superconductivity with spin symmetry broken solutions and is, hence, more general than the one by Bauer \textit{et al.} \cite{lit:Bauer2009}. We obtain a rich phase diagram at half filling and demonstrate that, depending on $J$ and $U$, superconductivity, CDWs, SDWs, Kondo screening, or a combination of these effects may occur. Contrary to recent static mean-field calculations \cite{lit:Costa2018}, we find a novel phase at half filling where CDWs and SDWs coexist. It is shown that in this phase, the SDWs lift the degeneracy between the SC state and the CDW state such that superconductivity is suppressed. The spectral functions reveal that the system becomes a half metal in the CDW phase near the phase boundary to the N\'eel phase. Away from half filling, CDWs are suppressed and superconductivity survives for much larger couplings $J$. Instead of a homogeneous N\'eel state, we observe incommensurate SDWs; however, we find no evidence that superconductivity has an influence on the pattern of these SDWs.
We show that the CDWs at half filling, as well as the superconductivity away from half filling, enhance the magnetic ordering of the localized spins since the emergent gaps in the density of states (DOS) mitigate the Kondo screening. These results resemble recent observations in cuprate superconductors \cite{lit:Gabovich2010,lit:Tu2016,lit:Jang2016}, where one also finds a rich phase diagram in which superconductivity, CDWs, and SDWs coexist or compete with each other. Similar to our model, the appearance of CDWs also strongly depends on the doping of the system. Note, however, that cuprate superconductors are usually considered to be $d$-wave superconductors with a nonlocal pairing mechanism, while in this paper we only consider a local pairing. The rest of the paper is organized as follows. The RDMFT approach and its generalization to Nambu space are described in Sec. \ref{sec:ModelMethod}. Furthermore, the new self-consistent NRG scheme, which is used to solve the effective impurity models, is explained in detail. In Sec. \ref{sec:Phase diagram_half_filling}, we present the results for half filling, while the properties of the system away from half filling are described in Sec. \ref{sec:away_from_half-filling}. We give a short conclusion in Sec. \ref{sec:Conclusion}. \section{Method} \label{sec:ModelMethod} \subsection{RDMFT setup in Nambu space} To solve the model of Eq. \eqref{eq:lattice_H}, we employ the RDMFT, which is an extension of the conventional DMFT \cite{lit:Metzner89,lit:DMFTreview} to inhomogeneous situations \cite{lit:Potthoff1999}. It is based on the assumption of a local self-energy matrix $\underline{\Sigma}_{i,j}(\omega)=\underline{\Sigma}_i(\omega) \delta_{i,j}$, with \begin{align} \underline{\Sigma}_i(\omega)= \begin{pmatrix} \Sigma^i_{11}(\omega) & \Sigma^i_{12}(\omega) \\ \Sigma^i_{21}(\omega) & \Sigma^i_{22}(\omega) \end{pmatrix} \end{align} being the self-energy matrix of site $i$ in Nambu space.
Within this approximation, correlations between different sites of the cluster are not included, but the self-energy may be different for each lattice site and therefore allows, e.g., SDWs and CDWs. In the RDMFT, each site $i$ in a finite cluster is mapped onto its own effective impurity model with an SC symmetry breaking term \begin{align} H_\mathrm{Eff} =& H_\mathrm{Imp} + \sum_{\vec{k},\sigma} \epsilon_{\vec{k},\sigma} c^\dagger_{\vec{k},\sigma} c^{\phantom{\dagger}}_{\vec{k},\sigma} +\sum_{\vec{k},\sigma} V_{\vec{k},\sigma} \left( c^\dagger_{\vec{k},\sigma} d^{\phantom{\dagger}}_{\sigma} + \mathrm{H.c.} \right) \nonumber \\ & - \sum_{\vec{k}} \Delta_{\vec{k}} \left[ c^\dagger_{\vec{k},\uparrow} c^{\dagger}_{-\vec{k},\downarrow} + c^{\phantom{\dagger}}_{-\vec{k},\downarrow} c^{\phantom{\dagger}}_{\vec{k},\uparrow} \right], \label{eq:impurity_model} \end{align} where \begin{align} H_\mathrm{Imp} =& \sum_{\sigma} \epsilon_d n_{d,\sigma} + U n_{d,\uparrow}n_{d,\downarrow} + J \vec{S} \cdot \vec{s}_{d}, \label{eq:impurity} \end{align} with $\epsilon_d=\mu$, $n_{d,\sigma} = d^\dagger_{\sigma} d^{\phantom{\dagger}}_{\sigma}$, $\vec{s}_{d}=\sum_{\sigma,\sigma'} d^\dagger_{\sigma} \vec{\sigma}_{\sigma,\sigma'} d^{\phantom{\dagger}}_{\sigma'}$, and $d_\sigma$ being the fermionic operator of the impurity site. The parameters $\epsilon_{\vec{k},\sigma}$, $V_{\vec{k},\sigma}$, and $\Delta_{\vec{k}}$ are those for the medium and may be different for each site in the RDMFT cluster. The mapping of the lattice model of Eq. \eqref{eq:lattice_H} to the impurity model of Eq. \eqref{eq:impurity_model} is achieved by calculating the local Green's function in Nambu space: \begin{align} {G}_\mathrm{loc}(z) =& \int \int \left[ z \ensuremath{\mathbb{1}} - H_{k_x,k_y} - \Sigma(z) \right]^{-1} dk_x dk_y, \label{eq:local_Green} \end{align} where $H_{k_x,k_y}$ is the hopping Hamiltonian of the finite RDMFT cluster and the momentum dependence arises from the periodic boundary conditions.
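In the simplest scalar limit (no Nambu structure, a site-independent self-energy, and a single-site "cluster"), Eq. \eqref{eq:local_Green} reduces to a Brillouin-zone average over the square-lattice dispersion $\epsilon_{\vec{k}} = 2t(\cos k_x + \cos k_y)$ implied by the hopping term of Eq. \eqref{eq:lattice_H}. The following sketch (our illustration, evaluated on a discrete $k$ grid) shows this structure:

```python
import numpy as np

def local_green(z, sigma, t=1.0, nk=200):
    """Scalar local Green's function G_loc(z) = (1/N) sum_k 1/(z - eps_k - Sigma)
    for the 2D square-lattice dispersion eps_k = 2t(cos kx + cos ky)."""
    k = 2.0 * np.pi * np.arange(nk) / nk
    kx, ky = np.meshgrid(k, k)
    eps = 2.0 * t * (np.cos(kx) + np.cos(ky))
    return np.mean(1.0 / (z - eps - sigma))
```

The result obeys the usual analytic properties: $\mathrm{Im}\,G_\mathrm{loc}(z) < 0$ in the upper half plane, and $G_\mathrm{loc}(z) \to 1/z$ for $|z| \to \infty$. In the full RDMFT calculation the scalar $1/(z-\epsilon_{\vec k}-\Sigma)$ is replaced by the matrix inverse over the cluster and Nambu indices.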
The medium-dependent parameters of the effective impurity model for each site $i$ are then extracted from the site-diagonal Green's function matrix in Nambu space \begin{align} \underline{G}_{\mathrm{loc},ii}(z) = \begin{pmatrix} \langle d^{\dagger}_{\uparrow} d^{\phantom{\dagger}}_{\uparrow} \rangle_i(z) & \langle d^{\phantom{\dagger}}_{\uparrow} d^{\phantom{\dagger}}_{\downarrow} \rangle_i(z) \\ \langle d^{\dagger}_{\downarrow} d^{\dagger}_{\uparrow} \rangle_i(z) & \langle d^{\phantom{\dagger}}_{\downarrow} d^{\dagger}_{\downarrow} \rangle_i(z) \label{eq:local_Green_i} \end{pmatrix}, \end{align} which will be discussed in detail below. For a typical DMFT calculation, one starts with self-energies $\Sigma_i(\omega)$ for each site of the cluster, which should break $U(1)$ gauge symmetry to obtain an SC solution. Afterward, the local Green's function of Eq. \eqref{eq:local_Green} is computed, which is used to set up the effective impurity problems. Solving these impurity models yields new self-energies $\Sigma_i(\omega)$, which are again used to calculate the local Green's functions. This procedure is repeated until a converged solution is found. To solve the impurity models, a variety of methods such as quantum Monte Carlo, exact diagonalization, or NRG \cite{lit:WilsonNRG,lit:BullaReview} can be used. We employ the NRG to compute the self-energy and local thermodynamic quantities of the effective impurity models since it has been proven to be a reliable tool for calculating dynamical properties such as real-frequency Green's functions \cite{lit:Peters2006} and self-energies \cite{lit:Bulla98} with high accuracy around the Fermi level. The combination of NRG and DMFT has already been successfully applied to superconductivity in interacting lattice systems \cite{lit:Bauer2009,lit:Bodensiek2013,lit:Peters2015}, although only SU(2) spin symmetric systems without magnetic ordering have been treated \cite{lit:Galitski2002,lit:Yao2014}.
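The iterate-until-converged structure just described can be illustrated in the most elementary setting where the self-consistency closes analytically: the non-interacting Bethe lattice, for which the medium obeys $\Delta(z)=t^2 G(z)$ and the converged local Green's function is the semicircular one. This toy loop is our own illustration; the paper's square-lattice, Nambu-space version replaces the trivial "impurity solve" below by an NRG calculation at every site.

```python
import numpy as np

def dmft_loop_bethe(z, t=1.0, tol=1e-12, max_iter=1000):
    """Toy DMFT self-consistency on the Bethe lattice with Sigma = 0:
    iterate G -> Delta = t^2 G -> G = 1/(z - Delta) to a fixed point."""
    g = 1.0 / z  # initial guess
    for _ in range(max_iter):
        delta = t * t * g          # new medium from the current local Green's function
        g_new = 1.0 / (z - delta)  # "impurity" solution (here non-interacting)
        if abs(g_new - g) < tol:
            return g_new
        g = 0.5 * (g + g_new)      # linear mixing for stability
    return g

def g_exact(z, t=1.0):
    """Exact semicircular result G(z) = (z - sqrt(z^2 - 4t^2)) / (2t^2)."""
    s = np.sqrt(z * z - 4.0 * t * t)
    if s.imag * z.imag < 0:  # pick the branch that decays as 1/z
        s = -s
    return (z - s) / (2.0 * t * t)
```

Linear mixing of old and new solutions, as in the sketch, is a common stabilization of such fixed-point iterations and is equally useful when the impurity step is a full NRG solve.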
\subsection{Self-consistent NRG Scheme with SU(2) spin symmetry breaking and superconductivity} To employ the DMFT, we still have to specify how to calculate the parameters of the NRG Wilson chain, which depend on the local Green's function of Eq. \eqref{eq:local_Green_i} at each lattice site. Bauer \textit{et al.} \cite{lit:Bauer2009} have shown how the DMFT+NRG setup can be extended to SC symmetry breaking. This approach, however, requires SU(2) spin symmetry for the up and down conduction band channels. Therefore, we propose a new and different ansatz: Instead of directly discretizing the impurity model of Eq. \eqref{eq:impurity_model}, we first perform a Bogoliubov transformation and afterward discretize the model logarithmically into intervals $I^\alpha_n$ with $I^+_n=(x_{n+1},x_n)$ and $I^-_n=-(x_{n},x_{n+1})$ with $x_n=D\Lambda^{-n}$, where $\Lambda>1$ is the discretization parameter of the NRG and $D$ is the half bandwidth of the conduction band. After retaining only the lowest Fourier component \cite{lit:BullaReview} in Eq. \eqref{eq:impurity_model}, the Bogoliubov transformed and discretized impurity model can be written as \begin{align} H_\mathrm{Eff} =& H_\mathrm{Imp} + \sum_{\sigma,n,\alpha} \xi_{\sigma,n}^\alpha a^\dagger_{\alpha,n,\sigma} a^{\phantom{\dagger}}_{\alpha,n,\sigma} \nonumber \\ &+\sum_{n,\alpha} \left( \gamma_{n,\uparrow}^{\alpha} a^\dagger_{\alpha,n,\uparrow}d_{\uparrow} + \gamma_{n,\uparrow \downarrow}^{\alpha} a^{{\dagger}}_{\alpha,n,\uparrow}d^\dagger_{\downarrow} \right. \nonumber \\ & \left. + \gamma_{n,\downarrow \uparrow}^{\alpha} a^{{\dagger}}_{\alpha,n,\downarrow}d^\dagger_{\uparrow} + \gamma_{n,\downarrow}^{\alpha} a^{{\dagger}}_{\alpha,n,\downarrow}d_{\downarrow} + \mathrm{H.c.} \right). \label{eq:discretized_model} \end{align} The advantage of Eq. \eqref{eq:discretized_model} over the direct discretization in Ref.
\cite{lit:Bauer2009} is that in each interval, the up and down conduction band channels are not directly coupled and the U(1) gauge symmetry breaking instead occurs due to the new interval-dependent hybridizations $\gamma^\alpha_{n,\uparrow \downarrow}$ and $\gamma^\alpha_{n,\downarrow \uparrow}$. Since the conduction band channels are no longer directly coupled, we are able to choose the bath parameters $\xi_{\uparrow,n}^\alpha$ and $\xi_{\downarrow,n}^\alpha$ independently of each other and, afterward, adjust the hybridizations such that they lead to the same effective action for the impurity degree of freedom as in the original model \cite{lit:Bulla97}. As usual in the NRG \cite{lit:BullaReview}, we can, therefore, choose $\xi_{\uparrow,n}^+ = \xi_{\downarrow,n}^+ = E^+_n = E_n$ and $\xi_{\uparrow,n}^- = \xi_{\downarrow,n}^- = E^-_n = -E_n$, where $E_n=|x_n+x_{n+1}|/2$ is the midpoint of the $n$th interval. The remaining parameters for each site $i$ of the finite cluster are determined from the generalized matrix hybridization function $\underline{K}(\omega)$ in Nambu space, which can be calculated from the local impurity Green's function matrix of Eq. \eqref{eq:local_Green_i}: \begin{align} \underline{K}(z) =& z \underline{\mathbb{1}} - \underline{G}_{\mathrm{loc}}(z)^{-1} - \underline{\Sigma}(z), \end{align} where we have omitted the site index $i$ since the procedure is the same for every site.
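For concreteness, the logarithmic grid $x_n = D\Lambda^{-n}$ and the interval midpoints $E_n$ used for the bath energies $\xi^{\pm}_{\sigma,n}$ can be generated in a few lines (our sketch; the function name and default values are ours):

```python
import numpy as np

def discretization_grid(D=1.0, Lam=2.0, n_max=30):
    """Logarithmic discretization points x_n = D * Lam**(-n) and the interval
    midpoints E_n = (x_n + x_{n+1}) / 2 that serve as bath energies +/- E_n."""
    n = np.arange(n_max + 1)
    x = D * Lam ** (-n.astype(float))
    E = 0.5 * (x[:-1] + x[1:])
    return x, E
```

Each interval is a factor $\Lambda$ narrower than the previous one, which is what gives the NRG its exponentially fine energy resolution around the Fermi level.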
To calculate the remaining parameters, we demand, as usual in the DMFT, that the local hybridization function of the lattice $\underline{K}(z)$ and the hybridization function of the discretized model are equal: \begin{align} \underline{K}(z) =& \begin{pmatrix} K^{}_{11}(z) & K^{}_{12}(z) \\ K^{}_{21}(z) & K^{}_{22}(z) \end{pmatrix} \nonumber \\ = \sum_{n,\alpha} \frac{1}{z - E_n^\alpha} & \begin{pmatrix} \gamma^{\alpha}_{n,\uparrow} & \gamma^{\alpha}_{n,\uparrow\downarrow} \\ \gamma^{\alpha}_{n,\downarrow\uparrow} & \gamma^{\alpha}_{n,\downarrow} \end{pmatrix}^\dagger \begin{pmatrix} \gamma^{\alpha}_{n,\uparrow} & \gamma^{\alpha}_{n,\uparrow\downarrow} \\ \gamma^{\alpha}_{n,\downarrow\uparrow} & \gamma^{\alpha}_{n,\downarrow} \end{pmatrix}. \end{align} Since $K_{12}(z) = K_{21}(z)$ must apply, we can choose $\gamma^{\alpha}_{n,\downarrow\uparrow} = \gamma^{\alpha}_{n,\uparrow\downarrow}=\gamma^{\alpha}_{n,\mathrm{off}}$. Using only the imaginary parts $\Delta_\uparrow(\omega)=-\mathrm{Im}\, K_{11}(\omega+i\eta)/\pi$, $\Delta_\downarrow(\omega)=-\mathrm{Im}\, K_{22}(\omega+i\eta)/\pi$, and $\Delta_\mathrm{off}(\omega)=-\mathrm{Im} \, K_{12}(\omega+i\eta)/\pi$, the equation can be rewritten as a sum of delta functions \begin{align} \Delta_\uparrow(\omega) =& \sum_{n,\alpha} ( {\gamma^{\alpha}_{n,\uparrow}}^2 + {\gamma^{\alpha}_{n,\mathrm{off}}}^2 ) \delta(\omega - E_n^\alpha), \\ \Delta_\downarrow(\omega) =& \sum_{n,\alpha} ( {\gamma^{\alpha}_{n,\downarrow}}^2 + {\gamma^{\alpha}_{n,\mathrm{off}}}^2 ) \delta(\omega - E_n^\alpha), \\ \Delta_\mathrm{off}(\omega) =& \sum_{n,\alpha} \gamma^\alpha_{n,\mathrm{off}} ( \gamma^{\alpha}_{n,\uparrow} + \gamma^{\alpha}_{n,\downarrow} ) \delta(\omega - E_n^\alpha). 
\end{align} Integration over the energy intervals $I^\alpha_n$, \begin{align} w^\alpha_{n,\sigma} = \int_{I^\alpha_n} \Delta_{\sigma}(\omega) d\omega \quad w^\alpha_{n,\mathrm{off}} = \int_{I^\alpha_n} \Delta_{\mathrm{off}}(\omega) d\omega, \end{align} yields the equation system \begin{align} w^\alpha_{n,\uparrow} =& {\gamma^{\alpha}_{n,\uparrow}}^2 + {\gamma^{\alpha}_{n,\mathrm{off}}}^2, \\ w^\alpha_{n,\downarrow} =& {\gamma^{\alpha}_{n,\downarrow}}^2 + {\gamma^{\alpha}_{n,\mathrm{off}}}^2, \\ w^\alpha_{n,\mathrm{off}} =& \gamma^\alpha_{n,\mathrm{off}} ( \gamma^{\alpha}_{n,\uparrow} + \gamma^{\alpha}_{n,\downarrow} ). \end{align} One possible solution of this system is given by \begin{align} \gamma^{\alpha}_{n,\uparrow} =& \frac{ w^\alpha_{n,\uparrow} + \sqrt{ w^\alpha_{n,\uparrow} w^\alpha_{n,\downarrow} - {w^\alpha_{n,\mathrm{off}}}^2 } }{ \sqrt{w^\alpha_{n,\uparrow} + w^\alpha_{n,\downarrow} +2\sqrt{w^\alpha_{n,\uparrow} w^\alpha_{n,\downarrow} - {w^\alpha_{n,\mathrm{off}}}^2} } }, \\ \gamma^{\alpha}_{n,\downarrow} =& \frac{ w^\alpha_{n,\downarrow} + \sqrt{ w^\alpha_{n,\uparrow} w^\alpha_{n,\downarrow} - {w^\alpha_{n,\mathrm{off}}}^2 } }{ \sqrt{w^\alpha_{n,\uparrow} + w^\alpha_{n,\downarrow} +2\sqrt{w^\alpha_{n,\uparrow} w^\alpha_{n,\downarrow} - {w^\alpha_{n,\mathrm{off}}}^2} } }, \\ \gamma^{\alpha}_{n,\mathrm{off}} =& \frac{ w^\alpha_{n,\mathrm{off}}}{ \sqrt{w^\alpha_{n,\uparrow} + w^\alpha_{n,\downarrow} +2\sqrt{w^\alpha_{n,\uparrow} w^\alpha_{n,\downarrow} - {w^\alpha_{n,\mathrm{off}}}^2} } }. \end{align} Note that in the case of vanishing superconductivity $w^\alpha_{n,\mathrm{off}}=0$, the equations reduce to the standard NRG solution \cite{lit:BullaReview} ${\gamma^{\alpha}_{n,\sigma}}^2= w^\alpha_{n,\sigma}$ and $\gamma^{\alpha}_{n,\mathrm{off}}=0$. 
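The closed-form solution just given can be verified numerically. The following sketch (with hypothetical weight values) implements the three expressions; it is valid whenever the weight matrix is positive semidefinite, i.e., $w^\alpha_{n,\uparrow} w^\alpha_{n,\downarrow} \ge (w^\alpha_{n,\mathrm{off}})^2$:

```python
import numpy as np

def bath_couplings(w_up, w_down, w_off):
    """One solution (gamma_up, gamma_down, gamma_off) of the system
         w_up   = gamma_up**2   + gamma_off**2
         w_down = gamma_down**2 + gamma_off**2
         w_off  = gamma_off * (gamma_up + gamma_down)
    valid for w_up * w_down >= w_off**2."""
    s = np.sqrt(w_up * w_down - w_off**2)
    d = np.sqrt(w_up + w_down + 2.0 * s)
    return (w_up + s) / d, (w_down + s) / d, w_off / d
```

Substituting the returned values back into the system confirms the solution, and setting `w_off = 0` reproduces the standard NRG result $\gamma_{n,\sigma}^\alpha = \sqrt{w^\alpha_{n,\sigma}}$.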
\begin{figure}[t] \includegraphics[width=0.49\textwidth]{Fig_1_NRG_WilsonChain} \caption{ New Wilson chain with the superconducting symmetry breaking terms $\delta_{n}$ (blue lines), $\tilde\delta_{n,\downarrow\uparrow}$ and $\tilde\delta_{n,\uparrow\downarrow}$ (green lines). $\tilde\delta_{n,\downarrow\uparrow}$ and $\tilde\delta_{n,\uparrow\downarrow}$ vanish in the case of SU(2) spin symmetry. } \label{fig:WilsonChain} \end{figure} Now that we have calculated all model parameters from a given hybridization function $\underline{K}(\omega)$, the next step is to map the impurity model of Eq. \eqref{eq:impurity_model} via a Householder transformation to a linear chain model of the form \begin{align} H_\mathrm{Eff} =& H_\mathrm{Imp} + \sum_{n=0,\sigma}^N \epsilon_{n,\sigma} f^\dagger_{n,\sigma} f^{\phantom{\dagger}}_{n,\sigma} + \sum_{n=0}^N \delta_{n} \left(f^\dagger_{n,\uparrow} f^{{\dagger}}_{n,\downarrow} + \mathrm{H.c.} \right) \nonumber \\ &+ \sum_{n=-1}^{N-1} \left( \tilde\delta_{n,\uparrow\downarrow} f^\dagger_{n,\uparrow} f^{{\dagger}}_{n+1,\downarrow} + \tilde\delta_{n,\downarrow\uparrow} f^\dagger_{n,\downarrow} f^{{\dagger}}_{n+1,\uparrow} + \mathrm{H.c.} \right) \nonumber \\ & + \sum_{n=-1,\sigma}^{N-1} t_{n,\sigma} \left( f^\dagger_{n,\sigma} f^{\phantom{\dagger}}_{n+1,\sigma} + \mathrm{H.c.} \right). \end{align} The new Wilson chain is illustrated in Fig. \ref{fig:WilsonChain}. In addition to the usual hopping parameters $t_{n,\sigma}$ and on-site energies $\epsilon_{n,\sigma}$ of an ordinary Wilson chain, this chain exhibits the SC symmetry breaking terms $\delta_{n}$ (blue lines), $\tilde\delta_{n,\downarrow\uparrow}$ and $\tilde\delta_{n,\uparrow\downarrow}$ (green lines). In the case of SU(2) spin symmetry the terms $\tilde\delta_{n,\downarrow\uparrow}$ and $\tilde\delta_{n,\uparrow\downarrow}$ vanish and the chain reduces to the form of Bauer \textit{et al.} \cite{lit:Bauer2009}. 
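The chain mapping can be illustrated for a single spinless channel without the anomalous terms: the Householder reduction of the star Hamiltonian (impurity coupled directly to all discretized bath levels) to tridiagonal form yields the on-site energies and hoppings of the chain. The bath energies and hybridizations below are hypothetical; the full Nambu problem treated in the paper additionally generates the $\delta_n$ and $\tilde\delta_n$ terms:

```python
import numpy as np
from scipy.linalg import hessenberg

# Star geometry: impurity level coupled to N bath levels E_n with amplitudes V_n.
eps_d = 0.0
E = np.array([-0.8, -0.4, 0.4, 0.8])   # bath on-site energies (hypothetical)
V = np.array([0.3, 0.2, 0.2, 0.3])     # hybridizations (hypothetical)

H = np.diag(np.concatenate(([eps_d], E)))
H[0, 1:] = V
H[1:, 0] = V

# Householder reduction to tridiagonal (chain) form; the first basis vector,
# i.e., the impurity orbital, is left invariant by the transformation.
T, Q = hessenberg(H, calc_q=True)
# diag(T) -> chain on-site energies, first subdiagonal of T -> chain hoppings
```

Since the transformation is orthogonal, the single-particle spectrum of the star and the chain coincide, and the impurity remains site 0 of the chain.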
Since $\tilde \delta_{n,\downarrow\uparrow}$ and $\tilde \delta_{n,\uparrow\downarrow}$ link different energy scales, it is important to emphasize that both terms decay exponentially with increasing $n$ and, therefore, ensure the separation of energy scales, which is vital for the NRG. Note also that the two terms need not be equal; they depend on the details of the Householder transformation, and it is even possible that one of them vanishes identically. Since the described NRG scheme is completely independent of $H_\mathrm{Imp}$, which incorporates all impurity degrees of freedom, we have tested it for the exactly solvable case of vanishing Hubbard $U=0$ and Kondo coupling $J=0$ and found good agreement. \section{Half Filling} \label{sec:Phase diagram_half_filling} \subsection{Phase diagram} \begin{figure}[t] \includegraphics[width=0.49\textwidth]{Fig_2_Phase_diagram} \caption{ The phase diagram at half filling as a function of the attractive $U$ and the antiferromagnetic Kondo coupling $J$. A detailed explanation is given in the text. Lines indicate phase boundaries (see Fig. \ref{fig:Critical_J}). The step structures around the phase boundaries are caused by the finite resolution of the data. } \label{fig:phase_diagram} \end{figure} \begin{figure}[t] \flushleft{(a)} \includegraphics[width=0.49\textwidth]{Fig_3a_JScan_UScan_cc} \flushleft{(b)} \includegraphics[width=0.49\textwidth]{Fig_3b_JScan_UScan_zeta} \flushleft{(c)} \includegraphics[width=0.49\textwidth]{Fig_3c_JScan_UScan_Sz} \caption{ Different order parameters of the system at half filling as a function of $U$ and $J$: (a) The anomalous expectation value $\Phi=\langle d^\dagger_\uparrow d^\dagger_\downarrow \rangle$. (b) The CDW order parameter $\zeta=|n_{d,i}-n_{d,i+1}|/2$. (c) The polarization of the localized $f$-electron spins $|\langle S_z \rangle|$. The step structures around the phase boundaries are caused by the finite resolution of the data. 
} \label{fig:phase_diagram_properties} \end{figure} \begin{figure}[t] \includegraphics[width=0.49\textwidth]{Fig_4_Critical_J} \caption{ Critical couplings $J_c(U)$ separating the phases plotted against $|U|$. We find a linear behavior for all three phase boundaries: $J_c/t = 0.038 |U/t| - 0.011$ for the transition from SC+CDW to CDW+N\'eel (red solid line), $J_c/t = 0.544 |U/t| - 0.638$ for the transition from CDW+N\'eel to N\'eel (pink dashed line), and $J_c/t = 0.284 |U/t| + 2.053$ for the transition from N\'eel to paramagnetism (black dashed-dotted line). } \label{fig:Critical_J} \end{figure} Figure \ref{fig:phase_diagram} summarizes our main results and depicts the phase diagram as a function of the strength of the attractive $U$ and antiferromagnetic Kondo coupling $J$ for half filling. The calculations are performed for $T/t=4\cdot10^{-5}$. For a vanishing coupling $J$, our observations are in agreement with the previous results for an attractive Hubbard model \cite{lit:Huscroft1997,lit:Bauer2009}. At half filling and $J=0$, the SC state is energetically degenerate with a CDW state so that an arbitrary superposition of both states yields a stable solution in the DMFT. For a CDW state, the occupation of each lattice site may differ from half filling, but the average of two neighboring sites yields $n_d=1$, with $n_d=n_{d,\uparrow} + n_{d,\downarrow}$, such that on average the whole lattice is half filled. This behavior does not change for very weak couplings $J$; namely, we also see SC solutions for finite couplings. This is demonstrated in Fig. \ref{fig:phase_diagram_properties}(a), which shows the anomalous expectation value $\Phi=\langle d^\dagger_\uparrow d^\dagger_\downarrow \rangle$ as a color contour plot. In this regime, the system behaves exactly as in the $J=0$ case and we do not observe magnetic ordering for the localized spins since the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction is too weak. 
Notice that for these small coupling strengths $J$, the spins and the conduction electrons are effectively decoupled at this temperature. For larger couplings, the system undergoes a first-order transition. The superconductivity breaks down and the anomalous expectation value exhibits a jump to $\Phi\approx 0$. The small residual value of $\Phi$ might be caused by a finite spectral resolution due to numerical noise and broadening of the NRG spectra. Note, however, that a finite temperature in a real experiment would have a similar effect and could also lead to a finite SC expectation value $\Phi$. The reason for this is that the energy difference between the SC state and the CDW state is very small so that the SC state is partially occupied due to the finite temperature effect. For these larger coupling strengths $J$, the CDW phase is energetically favored over the superconductivity without the need for a nonlocal density-density interaction. We thus observe a CDW phase, which is revealed in Fig. \ref{fig:phase_diagram_properties}(b) that depicts the CDW order parameter $\zeta=|n_{d,i}-n_{d,i+1}|/2$, measuring the difference in the occupation of two neighboring sites. Figure \ref{fig:phase_diagram_properties}(c) displays the polarization of the localized $f$-electron spins. In addition to the onset of the CDWs, we also find SDWs where the localized spins are ordered in an antiferromagnetic N\'eel state. The bright yellow area in Fig. \ref{fig:phase_diagram_properties}(c) indicates that in this regime the spins are almost completely polarized since the Kondo screening is suppressed due to the relatively large gap created by the CDW at the Fermi energy in the DOS. Note that although the degeneracy is lifted in this phase, the energy difference between CDW and superconductivity is very small such that it may take a large number of DMFT iterations to go from an SC solution to a CDW solution. 
The critical coupling separating the two phases displays a linear dependence on $U$, as depicted in Fig. \ref{fig:Critical_J} (red solid line). However, the gradient is very small such that the critical couplings are very similar for a wide range of $U$. The reason why the CDW state has a lower energy than the SC state is that the antiferromagnetically ordered $f$-electron spins generate magnetic fields which oscillate from site to site. In an SC state, a magnetic field always decreases the gap size while in a CDW state it is possible to retain the size of the gap in at least one of the conduction band channels, i.e., up- or down-spin channel. Consequently, the system becomes a half metal in this phase since the gap closes only in one of the conduction band channels. Thus, antiferromagnetically ordered spins can cooperate with a CDW order in the conduction electrons, but not with SC conduction electrons. This will be discussed in more detail in Sec. \ref{sec:Dynamic_properties}. We point out that this CDW+N\'eel phase and the breakdown of superconductivity have not been observed in recent static mean-field theory calculations \cite{lit:Costa2018}. Instead, a phase combining SDWs and superconductivity has been found because CDWs have not been considered in this static mean-field approach, whereas CDWs emerge in our RDMFT framework without any additional assumptions. Upon further increasing the Kondo coupling, another first-order transition, indicated by discontinuous jumps in physical properties, is observed and the CDW vanishes. The critical coupling shows again a linear dependence on $U$ as indicated by the dashed pink line in Fig. \ref{fig:Critical_J}. Note that in this phase, the polarization of the localized spins decreases [see Fig. \ref{fig:phase_diagram_properties}(c)]. The reason for this is a change in the size of the gap in the DOS at the transition from the CDW to the N\'eel phase. 
The Kondo temperature $T_k=D\mathrm{e}^{-1/\rho J}$ depends exponentially on the coupling $J$ and the DOS around the Fermi energy $\rho$. In the CDW phase, the gap is rather large, which impedes the Kondo effect, while in the N\'eel phase the gap in the DOS becomes significantly smaller. This leads to an increased Kondo screening in the N\'eel phase and, hence, a decrease of the spin polarization. For larger couplings, the Kondo temperature increases exponentially and we obtain the results of a standard Kondo lattice model without an additional attractive interaction, $U=0$ \cite{lit:Peters2015_2}. Close to half filling, the Kondo lattice is dominated by the interplay between the RKKY interaction $\propto J^2$ and the Kondo effect as described by the Doniach phase diagram \cite{lit:Doniach77}. For relatively small couplings $J$, the localized $f$-electrons are antiferromagnetically ordered in a N\'eel state, thus suppressing the Kondo effect. On the other hand, with increasing coupling the Kondo screening becomes more dominant such that the polarization of the localized spins decreases. At strong couplings, the Kondo effect dominates and the system undergoes a continuous transition from a magnetically ordered N\'eel state to a paramagnetic state \cite{lit:Peters2015_2}. Compared to the $U=0$ case, the critical coupling at which the transition from the magnetically ordered to the paramagnetic state occurs increases for a finite attractive $U$. Again, a linear dependence on $U$ is found for the critical Kondo coupling, which is depicted in Fig. \ref{fig:Critical_J} as a black dashed-dotted line. Note that the constant offset of about $2.053$, which indicates the critical coupling $J_c$ for the case of vanishing interaction $U=0$, is in good agreement with the results of a standard Kondo lattice without additional attractive interactions \cite{lit:Otsuki2015,lit:Peters2015_2}. 
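The exponential sensitivity of $T_k$ to the coupling and to the DOS at the Fermi energy, which underlies the suppression of screening in the gapped phases, can be made explicit in a short sketch (units with $D=1$; parameter values purely illustrative):

```python
import math

def kondo_temperature(J, rho, D=1.0):
    """Kondo temperature T_K = D * exp(-1/(rho*J)) for antiferromagnetic J > 0."""
    return D * math.exp(-1.0 / (rho * J))

# Doubling J from 0.1 to 0.2 (at rho = 1) raises T_K by a factor exp(5) ~ 148,
# while halving rho at fixed J suppresses T_K by the same mechanism.
```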
The reason for the increasing critical coupling is that with increasing attractive $U$, either the doubly occupied or empty state with total spin $s=0$ is favored over the singly occupied state with $s=1/2$ and, consequently, the effective magnetic moment in the conduction band, which screens the localized spins, vanishes. Therefore, an attractive interaction $U$ inhibits Kondo screening of the localized spins \cite{lit:Raczkowski2010}. \subsection{Static properties and phase transitions} \label{sec:Static_properties} \begin{figure}[t] \includegraphics[width=0.49\textwidth]{Fig_5_U-0100_JScan} \caption{ Occupation of both spin channels $|n_{d,\sigma}-0.5|$, CDW order parameter $\zeta$, polarization of the conduction band $2|s^z_d|=|n_{d,\uparrow} - n_{d,\downarrow}|$, anomalous expectation value $\Phi$, and $f$-electron spin polarization $|S_z|$ as functions of the coupling $J$ for a constant $U/t=-2$ exactly at half filling. } \label{fig:U-0100_JScan_half-filling} \end{figure} We now discuss the static properties of the system in greater detail. Figure \ref{fig:U-0100_JScan_half-filling} shows the deviation of the occupation numbers $n_{d,\uparrow}$ and $n_{d,\downarrow}$ from half filling, the CDW order parameter $\zeta$, the anomalous expectation value $\Phi=\langle d^\dagger_\uparrow d^\dagger_\downarrow \rangle$, and the spin polarization of the localized $f$-electrons for a constant attractive $U/t=-2$ as a function of the coupling $J$. For small couplings up to $J/t\approx 0.065$, the system behaves exactly in the same way as for $J=0$; the SC and CDW states are degenerate (CDW state not explicitly shown). The anomalous expectation value is constant and since we start with a non-CDW self-energy, the occupation of all sites is exactly at half filling, $|n_{d,\sigma}-0.5|=0$ and $\zeta=0$. Due to the small coupling, the RKKY interaction is very weak and we do not observe a magnetic ordering of the localized $f$-electron spins. 
Note, however, that the localized spins are completely unscreened due to the SC gap (see Sec. \ref{sec:Dynamic_properties} below). Therefore, the localized $f$-electrons essentially behave like free spins and even very weak perturbations can polarize them. In the current model, however, there is no mechanism other than the RKKY interaction to mediate a coupling between the spins. In a real material, it is very likely that different long-range interactions would produce a stable magnetic ordering in this phase. For larger couplings $J$, all properties show a discontinuous jump indicating a first-order transition. The degeneracy between superconductivity and CDWs is lifted so that superconductivity almost completely vanishes and instead a CDW state with $\zeta \neq 0$ appears. Upon further increasing the coupling, the conduction-band polarization $2|s^z_d|=|n_{d,\uparrow} - n_{d,\downarrow}|$ continuously increases due to the magnetic fields induced by the localized spins and, consequently, the small residual superconductivity eventually vanishes. Note, however, that only the occupation $n_{d,\downarrow}$ changes while $n_{d,\uparrow}$ remains almost constant. The reason for this behavior lies in the alternating magnetic fields originating from the antiferromagnetically ordered $f$-electrons, which enable the system to preserve the gap in at least one spin channel. Around $J/t\approx 0.45$, another first-order transition occurs and most physical properties display a discontinuous jump. The occupation number at each lattice site jumps to half filling such that $\zeta=0$ and $|n_{d,\uparrow}-0.5|=|n_{d,\downarrow}-0.5|$. For the localized spins, we find the typical N\'eel state of a Kondo lattice model at half filling without any CDW or SC order, $\Phi=0$. Note the small jump in $S_z$ ($s_d^z$) at the phase transition around $J/t\approx 0.5$, indicating that the polarization is slightly smaller (larger) than the one in the CDW phase. 
For even larger couplings, the well-known second-order transition for the standard Kondo lattice from a magnetically ordered to a paramagnetic state occurs \cite{lit:Peters2015_2} and the N\'eel state vanishes continuously. \subsection{Dynamical properties} \label{sec:Dynamic_properties} \begin{figure}[t] \flushleft{(a)} \includegraphics[width=0.49\textwidth]{Fig_6a_Local_Green11_0_0_U010} \flushleft{(b)} \includegraphics[width=0.49\textwidth]{Fig_6b_Local_Green22_0_0_U010} \flushleft{(c)} \includegraphics[width=0.49\textwidth]{Fig_6c_Local_Green12_0_0_U010} \caption{ The local DOS (a) $\rho_{11}(\omega)$ for the spin-up and (b) $\rho_{22}(\omega)$ for the spin-down channel for $U/t=-2$. The spectrum at the neighboring sites is mirrored on the $\omega=0$ axis. (c) Real part of the off-diagonal Green's function. The system is in the SC phase for $J/t=0.06$, in the CDW phase for $J/t=0.2$, in the N\'eel phase for $J/t=0.45$ and $J/t=2.4$, and in the paramagnetic phase for $J/t=3.0$. } \label{fig:Local_Greens_function} \end{figure} The local DOSs at a lattice site for the spin-up and spin-down channel are depicted in Figs. \ref{fig:Local_Greens_function}(a) and \ref{fig:Local_Greens_function}(b), respectively, for different couplings $J$ and constant $U/t=-2$. The spectrum at neighboring sites is mirrored on the $\omega=0$ axis. Figure \ref{fig:Local_Greens_function}(c) displays the real part of the off-diagonal Green's function, where finite values indicate superconductivity. For the very weak coupling $J/t=0.06$ (red line), we observe a gap with two symmetric peaks for both spin channels. Since the magnetic order is absent for this small coupling, the spectra of the spin-up and spin-down channels are identical. The pronounced value of $\mathrm{Re}[G_{12}(\omega)]$ inside the gap shows that this gap originates from superconductivity. 
In the CDW phase for $J/t=0.2$ (blue line), both spin channels have just one peak below the Fermi energy, which is shifted to energies above the Fermi energy for neighboring sites due to the CDW. Note that the positions and heights of the peaks in the two channels are not identical. The value of the off-diagonal Green's function is very small, in accordance with the observation of a very small $\Phi$ (see Fig. \ref{fig:U-0100_JScan_half-filling}). In this phase, the localized spins are almost completely polarized since the gap at the Fermi energy suppresses the screening of the $f$-electrons due to the Kondo effect. \begin{figure}[t] \flushleft{(a)} \includegraphics[width=0.49\textwidth]{Fig_7a_Local_Spectrum_11_U-0100_JScan} \flushleft{(b)} \includegraphics[width=0.49\textwidth]{Fig_7b_Local_Spectrum_22_U-0100_JScan} \caption{ Local DOS (a) $\rho_{11}(\omega)$ for the spin-up and (b) $\rho_{22}(\omega)$ for the spin-down channel in the CDW + N\'eel state for different couplings close to the phase transition to the magnetically ordered phase and $U/t=-2$. For neighboring sites, the spectrum is mirrored on the $\omega=0$ axis. } \label{fig:Local_Greens_function_CDW} \end{figure} For the two larger couplings $J/t=0.45$ (green line) and $J/t=2.4$ (black line), the system is in the magnetically ordered N\'eel phase already known for the ordinary Kondo lattice. The off-diagonal Green's function completely vanishes, showing that there is no superconductivity anymore in this phase. In this phase, the size of the gap strongly depends on the strength of the coupling $J$ \cite{lit:Coleman2006} so that the gap is very small for $J/t=0.45$ while it is rather large for $J/t=2.4$. The small gap for weak couplings $J$ in the N\'eel phase leads to a sudden increase of the DOS around the Fermi energy compared to the CDW phase and, consequently, results in an enhancement of the Kondo effect. 
The increasing influence of the Kondo effect causes a decrease of the polarization of the localized spins, which can be seen as a small jump in $S_z$ at the phase transition [see, e.g., Fig. \ref{fig:phase_diagram_properties}(c)]. At the phase transition point from the CDW to the N\'eel phase, the peak in $\rho_{22}(\omega)$ discontinuously jumps from below to above the Fermi energy, once again indicating a first-order transition. For the coupling $J/t=3.0$, the system is in the paramagnetic phase and we observe a gap with two symmetric peaks. Since there is no polarization anymore, the DOSs of the spin-up and spin-down channels are identical. As before, we do not observe superconductivity and $\mathrm{Re}[G_{12}(\omega)]$ is completely zero. Since, for very small couplings, the system behaves just like an attractive Hubbard model with $J=0$ while, for large couplings, the results of a standard Kondo lattice with $U=0$ are obtained, the CDW phase is the most interesting phase. Figure \ref{fig:Local_Greens_function_CDW}, therefore, depicts the local DOS in the CDW phase for couplings close to the phase transition point to the magnetically ordered state in more detail. When approaching the phase transition point to the SDW phase, we observe that the position of the peak for the spin-up channel is almost unchanged, indicating an insulating system. On the other hand, the peak in the spin-down channel is shifted towards the Fermi energy, leading to a closing of the gap. In addition to the shift, the spectral weight at the Fermi energy increases and two small peaks develop around $\omega=0$. The energy scale on which these additional peaks appear agrees very well with the energy of $J \langle \vec{S}\cdot\vec{s}_{d}\rangle$, indicating that these peaks originate from spin-flip excitations. A comparison between Figs. 
\ref{fig:Local_Greens_function_CDW}(a) and \ref{fig:Local_Greens_function_CDW}(b) reveals that the system becomes a half metal close to the quantum phase transition where only the gap in one conduction band channel disappears. This behavior arises from the combination of the CDW and the oscillating magnetic fields caused by the localized spins. While at the site shown in Fig. \ref{fig:Local_Greens_function_CDW} the effective magnetic field tends to shift the peak in $\rho_{11}(\omega)$ to lower energies and away from the Fermi energy, the peak in $\rho_{22}(\omega)$ is displaced toward the Fermi energy. At the neighboring sites, the situation is the same. Because of the spin-flip of the localized spin in the N\'eel state, the effective magnetic field now shifts the peak of $\rho_{11}(\omega)$ to higher energies. However, due to the CDW, the peak is now located above $\omega=0$ so that it is again displaced away from the Fermi energy. For the same reason, the peak in $\rho_{22}(\omega)$ of the neighboring sites is shifted toward the Fermi energy so that the gap closes only for the spin-down channel. \begin{figure}[t] \flushleft{(a)} \includegraphics[width=0.49\textwidth]{Fig_8a_Spectrum_11_J0020_U-0100} \flushleft{(b)} \includegraphics[width=0.49\textwidth]{Fig_8b_Spectrum_22_J0020_U-0100} \caption{ Momentum-dependent spectral functions of (a) the spin-up channel and (b) spin-down channel for $U/t=-2$ and a coupling $J/t=0.4$ close to the quantum phase transition. } \label{fig:Momentum_Spectral_function} \end{figure} The half-metallic behavior is once again shown in Fig. \ref{fig:Momentum_Spectral_function} where the momentum-dependent spectral functions close to the quantum phase transition are depicted. While the spectrum for the spin-up channel (panel a) is almost indistinguishable from the spectrum in the SC phase at $J=0$ (not shown) and exhibits a gap, the spectrum of the spin-down channel does not show any gaps and instead displays the properties of a metal. 
Since the system can preserve the gap in one conduction band channel, the CDW+N\'eel state yields a small energy gain compared to the SC state where the effective magnetic fields always decrease the size of the gap in both channels. This opens the opportunity to use the combination of CDWs and N\'eel ordering, which originates from the interplay between an attractive $U$ and a Kondo coupling $J$, for spin-filter applications. For the momentum-dependent spectral functions of the magnetically ordered phase and the paramagnetic phase, we have not observed any differences compared to the standard Kondo lattice \cite{lit:Peters2015_2}. \section{Away from Half Filling} \label{sec:away_from_half-filling} \begin{figure}[t] \flushleft{(a)} \includegraphics[width=0.49\textwidth]{Fig_9a_U-01_EScan_occ_and_cc} \flushleft{(b)} \includegraphics[width=0.49\textwidth]{Fig_9b_Doped_Mag_Order} \caption{ (a) CDW order parameter $\zeta$, averaged occupation $\overline{n}_d=1/N\sum_i n_{d,i}$, superconducting expectation value $\Phi$, and polarization of the localized spins $|S_z|$ for $U/t=-2$ and $J/t=0.2$ as a function of the chemical potential $\mu$. For $\mu/t=1$, the lattice is half filled and CDWs occur. (b) Site-dependent polarization $S_z$ of the localized $f$-electron spins for $U/t=-2$, $J/t=0.2$ and $\mu/t=1.4$. Around $\mu/t=1.4$, the antiferromagnetic N\'eel state is not stable anymore and instead SDWs as shown in panel (b) appear. } \label{fig:EScan} \end{figure} In the attractive Hubbard model with $J=0$, the SC state and the CDW state are degenerate only at half filling. Away from half filling, this degeneracy is lifted and the SC state becomes the ground state. Figure \ref{fig:EScan}(a) shows different properties of the system for $U/t=-2$ and a finite coupling $J/t=0.2$ as a function of the chemical potential $\mu$. 
For the particle-hole symmetric case $\mu/t=1$, the average occupation number $\overline{n}_d=1$ indicates that the system is at half filling and, consequently, we observe CDWs with $\zeta \neq 0$. For a small critical deviation away from $\mu/t=1$, the CDW order parameter $\zeta$ shows a discontinuous jump to a value close to zero, indicating a first-order transition. At the same time, the SC expectation value also jumps from $\Phi=0$ to a finite value. At this point, the SC state, instead of the CDW + N\'eel state, becomes the new ground state. The finite residual $\zeta$ is presumably caused by a finite energy resolution and broadening effects of the NRG spectra, which have the same effect as a finite temperature in real experiments. Using an applied voltage to change the chemical potential, it is, therefore, possible to drive the system from the insulating CDW phase at half filling to the SC phase. This could be interesting for a possible future implementation of SC transistors. Away from half filling, the superconductivity persists up to much larger couplings $J$ compared to the case of half filling. The almost free $f$-electron spins are stabilized in an SDW state by the RKKY interaction. In this phase, we observe superconductivity combined with magnetic ordering, confirming previous results \cite{lit:Bertussi2009,lit:Karmakar2016,lit:Costa2018}. Upon further increasing the chemical potential, the small residual $\zeta$ rapidly disappears, leading to a complete breakdown of the CDWs while the deviation from half filling $|\overline{n}_d-1|$ continuously increases. The SC expectation value $\Phi$ decreases almost linearly with increasing $\mu$ until it vanishes around $\mu/t \approx 1.75$. Away from half filling, however, the homogeneous N\'eel state becomes unstable and changes into a phase of SDWs, as depicted in Fig. \ref{fig:EScan}(b), in which the polarization of the localized spins is lattice-site dependent. 
Exactly the same kind of SDWs has also been found for the normal Kondo lattice with $U=0$ \cite{lit:Peters2015_2,lit:Peters_2017}. Although the superconductivity is still significant ($\Phi\approx 0.04$) in this regime, we can therefore conclude that it has no influence on the structure of the SDWs. A finite attractive $U$ just changes the critical Kondo coupling at which the N\'eel state becomes unstable \cite{lit:Bertussi2009,lit:Costa2018}. \begin{figure}[t] \flushleft{(a)} \includegraphics[width=0.49\textwidth]{Fig_10a_new_sc} \flushleft{(b)} \includegraphics[width=0.49\textwidth]{Fig_10b_new_Sz} \caption{ (a) Superconducting expectation value $\Phi$ and (b) polarization of the localized spins as a function of the coupling $J$ and the filling $n$ for $U/t=-2$. } \label{fig:JScan_EScan} \end{figure} Figure \ref{fig:JScan_EScan} depicts the anomalous expectation value $\Phi$ and the polarization of the localized $f$-electron spins as a function of the coupling $J$ and the filling $n$ for $U/t=-2$. Note that in contrast to the case of half filling, $\Phi$ continuously decreases with increasing Kondo coupling $J$ and no discontinuity occurs \cite{lit:Bertussi2009,lit:Karmakar2016,lit:Costa2018}. Away from half filling, superconductivity can be observed for couplings up to $J/t \approx 0.5$ and it is largest for fillings around $n\approx 0.92$. For fillings above $n\approx 0.92$, superconductivity is suppressed because the CDW state is the ground state at half filling, while for lower fillings it decreases because the electron density is reduced. \begin{figure}[t] \flushleft{(a)} \includegraphics[width=0.49\textwidth]{Fig_11a_Spectrum11_J0025_E0060} \flushleft{(b)} \includegraphics[width=0.49\textwidth]{Fig_11b_Spectrum11_J0035_E0060} \caption{ Momentum-dependent spectral functions for (a) $J/t=0.5$ and (b) $J/t=0.7$ for $n \approx 0.85$ and $U/t=-2$. The blue dashed line indicates the Fermi energy. 
} \label{fig:Doped_Spectrum_small_and_large_J} \end{figure} The momentum-dependent spectral functions for two different couplings $J/t=0.5$ and $0.7$ are shown in Fig. \ref{fig:Doped_Spectrum_small_and_large_J} for the filling $n\approx 0.85$ and $U/t=-2$. For the smaller coupling $J/t=0.5$ [Fig. \ref{fig:Doped_Spectrum_small_and_large_J}(a)], the spectrum exhibits two gaps, one directly at the Fermi energy (indicated by a blue dashed line), and the other at $\omega/t=0.25$. For this coupling strength, we still observe a significant anomalous expectation value of $\Phi \approx 0.05$ and the gap at the Fermi energy is the SC gap. This gap is largest for $J=0$ and becomes continuously smaller with increasing coupling $J$, which agrees with the observation that $\Phi$ continuously decreases with increasing $J$. For the larger coupling $J/t=0.7$ [Fig. \ref{fig:Doped_Spectrum_small_and_large_J}(b)], the SC expectation value vanishes, $\Phi =0$, and, consequently, the gap at the Fermi energy is completely gone so that the system behaves like a metal. On the other hand, the width of the gap at $\omega/t=0.25$ is increased compared to the case for $J/t=0.5$. This gap is already known from the ordinary Kondo lattice \cite{lit:Peters2015_2} and resides at the energy corresponding to half filling. It is caused by the hybridization with the localized electrons and the width increases with increasing coupling $J$ \cite{lit:Coleman2006}. \section{Conclusion} \label{sec:Conclusion} In this paper, we have studied the competition between superconductivity, charge ordering, magnetic ordering, and the Kondo effect in a heavy fermion $s$-wave superconductor which is described by the Kondo lattice model with an attractive on-site Hubbard interaction. To solve this model, we have employed for the first time the combination of RDMFT and a newly developed self-consistent NRG scheme in Nambu space as an impurity solver. 
Compared to the approach of Bauer \textit{et al.} \cite{lit:Bauer2009}, we have chosen a different ansatz for the discretized impurity model which allows SU(2) spin-symmetry-broken solutions; this is essential for studying the competition between SDWs and superconductivity. Using this new approach, we have found a rich phase diagram at half filling where, depending on $J$ and $U$, many different effects may occur. For very small Kondo couplings $J$ compared to the on-site interaction $U$, the system behaves like a Hubbard model with an attractive on-site interaction, while for large couplings the system shows the properties of a usual Kondo lattice with $U=0$. For moderate couplings, we have found a completely new phase where CDWs and magnetic ordering are present at the same time. Interestingly, the N\'eel state of the $f$-electron spins favors the CDW state over the SC state and, hence, lifts the degeneracy between the two phases such that superconductivity is strongly suppressed. Another remarkable feature is that, in this phase, the system may become a half metal close to the quantum phase transition to the non-SC magnetically ordered phase, where the gap in the DOS closes only in one spin channel of the conduction band. Away from half filling, our findings are in good agreement with previous results \cite{lit:Bertussi2009,lit:Karmakar2016,lit:Costa2018}. The CDWs are suppressed and we have found instead a phase where superconductivity exists along with magnetic ordering up to moderate couplings $J$. For the chosen interaction $U/t=-2$, the superconductivity is strongest for fillings around $n\approx 0.9$. The anomalous expectation value as well as the SC gap both decrease continuously with increasing coupling $J$. Instead of the homogeneous N\'eel state, we have observed incommensurate SDWs.
Since the same kind of SDWs has already been seen in the ordinary Kondo lattice away from half filling \cite{lit:Peters2015_2}, we find no evidence that superconductivity has an influence on the structure of these SDWs. A finite attractive $U$ just changes the Kondo coupling at which the incommensurate SDWs occur. Since an applied voltage can change the chemical potential and drive the system from the insulating CDW state at half filling to the SC state away from half filling, this system might be interesting for a possible future implementation of an SC transistor, where superconductivity can be switched on and off simply by applying a voltage. Interestingly, superconductivity away from half filling as well as the CDW at half filling both enhance the magnetic ordering, since the gap in the DOS mitigates the Kondo screening \cite{lit:Raczkowski2010} such that the $f$-electron spins are almost completely polarized in both phases. In future work, our enhanced RDMFT+NRG approach could be used to investigate a variety of other SC systems, since it is not limited to homogeneous SC lattice systems in which localized $f$-electrons reside on every lattice site. One example could be diluted SC systems, where the behavior for different impurity concentrations is examined. In this case, one would randomly place a specific number of impurities on the lattice sites of a large RDMFT cluster such that the desired concentration is achieved. On the other hand, one could also study proximity-induced superconductivity, where a lattice Hubbard model with an attractive on-site potential $U$ is coupled, e.g., to an ordinary Kondo lattice. These issues are now under consideration. \begin{acknowledgments} B.L. thanks the Japan Society for the Promotion of Science (JSPS) and the Alexander von Humboldt Foundation. Computations were performed at the Supercomputer Center, Institute for Solid State Physics, University of Tokyo and the Yukawa Institute for Theoretical Physics, Kyoto.
This work is partly supported by JSPS KAKENHI Grants No. JP15H05855, JP16K05501, JP17F17703, JP18H01140, JP18K03511, and No. JP18H04316. \end{acknowledgments}
\section{Introduction}
The purpose of this paper is to present a certain combinatorial method of constructing invariants of isotopy classes of oriented tame links. This arises as a generalization of the known polynomial invariants of Conway and Jones. These invariants have one striking common feature. If $L_+, L_-$ and $L_0$ are diagrams of oriented links which are identical, except near one crossing point, where they look as in Fig. 1.1\footnote{Added for e-print: we follow here the old Conway's convention \cite{C}; in modern literature the role of $L_+$ and $L_-$ is usually inverted. In \cite{Prz} the new convention is already used.}, then $w_{L_+}$ is uniquely determined by $w_{L_-}$ and $w_{L_0}$, and also $w_{L_-}$ is uniquely determined by $w_{L_+}$ and $w_{L_0}$. Here $w$ denotes the considered invariants (we will often take the liberty of speaking about the value of an invariant for a specific link or diagram of a link rather than for an isotopy class of links). In the above context, we agree to write $L_+^p, L_-^p$ and $L_0^p$ if we need the crossing point $p$ to be explicitly specified. \ \\ \ \\ \centerline{{\psfig{figure=SkeinTriplePT1.eps,height=3.5cm}}}\ \\ \centerline{\footnotesize{Fig. 1.1}} \indent In this paper we will consider the following general situation. Assume we are given an algebra $\A$ with a countable number of $0$-argument operations $a_1, a_2,..., a_n,...$ and two 2-argument operations $|$ and $*$.
We would like to construct invariants satisfying the conditions \begin{align*} w_{L_+}= &\ w_{L_-} | w_{L_0} \text{ and}\\ w_{L_-}= &\ w_{L_+} * w_{L_0} \text{ and} \\ w_{T_n}= &\ a_n \text{ for $T_n$ being a trivial link of n components.} \end{align*} \indent We say that $(\A, a_1, a_2,..., |, *)$ is a Conway algebra if the following conditions are satisfied\footnote{Added for e-print: We were unaware, when writing this paper, that the condition 1.5, $(a\!*\!b)\!*\!(c\!*\!d) = (a\!*\!c)\!*\!(b\!*\!d)$, was used already for over 50 years, first time in \cite{Bu-Ma}, under various names, e.g. entropic condition (see for example \cite{N-P}).}: \\ ${1.1} \quad a_n | a_{n+1} = a_n \\ {1.2} \quad a_n * a_{n+1} = a_n $\\ $\left. \begin{aligned} {1.3} && &(a|b)|(c|d) = (a|c)|(b|d) \\ {1.4} && &(a|b)\!*\!(c|d) = (a\!*\!c)|(b\!*\!d) \\ {1.5} && &(a\!*\!b)\!*\!(c\!*\!d) = (a\!*\!c)\!*\!(b\!*\!d) \end{aligned} \right\} \quad \text{transposition properties}$ \\ ${1.6} \quad (a|b)*b = a \\ {1.7} \quad (a*b)|b = a.$\\ \vspace{1mm} \\ We will prove the following theorem: \\ \begin{theorem}\textbf{1.8.} For a given Conway algebra $\A$ there exists a uniquely determined invariant $w$ which attaches an element $w_L$ from $\A$ to every isotopy class of oriented links and satisfies the conditions\\ ${(1)} \quad w_{T_n} = a_n \hspace{2.5cm} \text{ initial conditions}$\\ $\left. \begin{aligned} {(2)}&& &w_{L_+} = &\ w_{L_-} | w_{L_0}\\ {(3)}&& &w_{L_-} = &\ w_{L_+} * w_{L_0} \end{aligned} \right.\bigg\} \ Conway\ relations$ \end{theorem} It will be proved in \S{2}. \\ \indent Let us write here a few words about the geometrical meaning of the axioms 1.1-1.7 of Conway algebra. Relations 1.1 and 1.2 are introduced to reflect the following geometrical relations between the diagrams of trivial links of $n$ and $n+1$ components: \centerline{\epsfig{figure=Figure12.eps,height=10cm}} \centerline{\footnotesize{Fig. 
1.2}} Relations 1.3, 1.4, and 1.5 arise from rearranging a link at two crossings of the diagram in two different orders. This will become clear in \S{2}. Relations 1.6 and 1.7 reflect the fact that we need the operations $|$ and $*$ to be, in a sense, inverse to one another. \\ \begin{example}\textbf{1.9.} (Number of components). Set $\A=N$, the set of natural numbers; $a_i=i$ and $i|j=i*j=i$. This algebra yields the number of components of a link.\end{example} \begin{example}\textbf{1.10.} Set $\A=$ \big\{0, 1, 2\big\}; the operation $*$ is equal to $|$ and $0|0\!=\!1,\ 1|0\!=\!0,\ 2|0\!=\!2,\ 0|1\!=\!0,\ 1|1\!=\!2,\ 2|1\!=\!1,\ 0|2\!=\!2,\ 1|2\!=\!1,\ 2|2\!=\!0$. Furthermore $a_i \equiv i \mod3.$ The invariant defined by this algebra distinguishes, for example, the trefoil knot from the trivial knot. \end{example} \begin{example}\textbf{1.11.}{(a)} $\A=Z[x^{\mp 1}, y^{\mp 1}, z]$; $a_1 = 1, a_2=x+y+z,\dots, a_i = (x+y)^{i-1} + z(x+y)^{i-2} + \dots+z(x+y)+z=(x+y)^{i-1}+z \big( \frac{(x+y)^{i-1}-1}{x+y-1} \big)$,{\small ...} . We define $|$ and $*$ as follows: $w_2 | w_0 = w_1$ and $w_1 * w_0 = w_2$ where \begin{flalign*} {1.12}&& xw_1+yw_2 &=w_0-z, \ \ w_1,w_2,w_0 \in \A.& \end{flalign*} \end{example} \indent (b) $\A = Z[x^{\mp1}, y^{\mp1}]$ is obtained from the algebra described in (a) by the substitution $z=0$. In particular $a_i=(x+y)^{i-1}$ and 1.12 reduces to: \begin{flalign*} {1.13}&& xw_1+yw_2 &= w_0.& \end{flalign*} We describe this algebra for two reasons: \\ \indent $-$first, the invariant of links defined by this algebra is the simplest generalization of the Conway polynomial \big(substitute $x=- \frac{1}{z}$, $y=\frac{1}{z}$, {[$K$-1]}\big) and the Jones polynomial \big(substitute $x= \frac{1}{t} \frac{1}{\sqrt{t}-\frac{1}{\sqrt{t}}}$, $y= \frac{-t}{\sqrt{t}-\frac{1}{\sqrt{t}}}$ {[$J$]}\big); \\ \indent$-$second, this invariant behaves well under disjoint and connected sums of links: \begin{center} $\qquad P_{L_1 \sqcup L_2} (x,y)\!=\!
(x+y)P_{L_1{\sharp} L_2}(x,y)\!=\!(x+y)P_{L_1}(x,y)\!\cdot\!P_{L_2}(x,y)$ \\ \vspace{2mm} where $P_L\!(x,y)$ is a polynomial invariant of links yielded by $\A$.\\ \vspace{2mm} \begin{example} \textbf{1.14.} (Linking number). Set $\A\!=\!N\!\times\!Z$, $a_i\!=\!(i,0)$ and\end{example} \begin{equation*} (a,b)|(c,d)=\begin{cases} (a,b-1) \ \textup{if} \ a > c \\ (a,b) \qquad \textup{if} \ a \leqslant c \end{cases} \end{equation*} \begin{equation*} (a,b)*(c,d)=\begin{cases} (a,b+1)\ \textup{if} \ a > c \\ (a,b)\ \qquad \textup{if}\ a \leqslant c \end{cases} \end{equation*} \end{center} \indent The invariant associated to a link is a pair (number of components, linking number). \begin{remark}\textbf{1.15.} It may happen that for each pair $u,v\!\in\!\A$ there exists exactly one $w\!\in\!\A$ such that $v|w\!=\!u$ and $u\!*\!w\!=\!v$. Then we can introduce a new operation $\circ$: $\A \times \A \rightarrow \A$ putting $u \circ v=w$ (we have such a situation in Examples $1.10$ and $1.11$ but not in $1.9$ where $2|1\!=\!2\!*\!1\!=\!2\!=\!2|3\!=\!2\!*\!3$). Then $a_n\!=\!a_{n-1}\circ a_{n-1}$. If the operation $\circ$ is well defined we can find an easy formula for invariants of connected and disjoint sums of links. We can interpret $\circ$ as follows: if $w_1$ is the invariant of $L_+$ (Fig 1.1) and $w_2$ of $L_-$ then $w_1 \circ w_2$ is the invariant associated to $L_0$. \end{remark} \begin{remark}\textbf{1.16.} Our invariants often allow us to distinguish between a link and its mirror image. If $P_L (x, y, z)$ is an invariant of $L$ from Example 1.11 (a) and $\overline{L}$ is the mirror image of $L$ then \[P_{\overline{L}}(x,y,z) = P_L(y,x,z).\] \end{remark} \indent We will call a crossing of the type \parbox{1.1cm}{\psfig{figure=PT-Lplus.eps,height=0.9cm}} positive and crossing of the type \parbox{1.1cm}{\psfig{figure=PT-Lmin.eps,height=0.9cm}} negative. This will be denoted by sgn $p= +$ or $-$. Let us consider now the following example. 
\begin{example}\textbf{1.17.} Let $L$ be the figure eight knot represented by the diagram \end{example} \centerline{\epsfig{figure=Figure13.eps,height=7.5cm}} \ \\ \centerline{\footnotesize{Fig. 1.3}} \ \\ To determine $w_L$ let us consider the following binary tree: \\ \centerline{\epsfig{figure=Fig14.eps,height=12cm}} \centerline{\footnotesize{Fig. 1.4}} As is easily seen, the leaves of the tree are trivial links and every branching reflects a certain operation on the diagram at the marked crossing point. To compute $w_L$ it is enough to have the following tree: \\ \ \\ \centerline{\psfig{figure=Skein-tree2.eps,height=3cm}} Here the sign indicates the sign of the crossing point at which the operation was performed, and the leaf entries are the values of $w$ for the resulting trivial links. Now we may conclude that \[w_L\!=\!a_1\!*\!(a_2|a_1).\] Such a binary tree of operations on the diagram resulting in trivial links at the leaves will be called a resolving tree of the diagram. \newline \indent There exists a standard procedure to obtain such a tree for every diagram. It will be described in the next section and it will play an essential role in the proof of Theorem 1.8. It should be admitted that the idea is due to Ball and Mehta {[$B$-$M$]} and we learned it from the Kauffman lecture notes {[$K$-3]}. \\ \section{Proof of the Main Theorem}\label{Section 2} \begin{definition}\textbf{2.1.} Let $L$ be an oriented diagram of $n$ components and let $b=(b_1,\dots,b_n)$ be base points of $L$, one point from each component of $L$, none of them a crossing point.
Then we say that $L$ is untangled with respect to $b$ if the following holds: if one travels along $L$ (according to the orientation of $L$) starting from $b_1$, then, after having returned to $b_1$, from $b_2,\dots,$ and finally from $b_n$, each crossing which is met for the first time is crossed by a bridge.\end{definition} \indent It is easily seen that for every diagram $L$ of an oriented link there exists a resolving tree such that the leaf diagrams are untangled (with respect to appropriately chosen base points). This is obvious for diagrams with no crossings at all, and once it is known for diagrams with fewer than $n$ crossings we can use the following procedure for any diagram with $n$ crossings: choose base points arbitrarily and start walking along the diagram until the first ``bad'' crossing $p$ is met, i.e. the first crossing which is crossed by a tunnel when first met. Then begin to construct the tree by changing the diagram at this point. If, for example, sgn $p\!=\!+$, we get \\ \centerline{{\psfig{figure=Skein-tree.eps,height=2.1cm}}}\ \\ Then we can apply the inductive hypothesis to $L_0^p$ and we can continue the procedure with $L_-^p$ (walking further along the diagram and looking for the next bad point). \\ \vspace{1mm}\\ \indent To prove Theorem 1.8 we will construct the function $w$ as defined on diagrams. In order to show that $w$ is an invariant of isotopy classes of oriented links we will verify that $w$ is preserved by the Reidemeister moves. \\ \vspace{1mm}\\ \indent We use induction on the number $cr(L)$ of crossing points in the diagram. For each $k \geqslant 0$ we define a function $w_k$ assigning an element of $\A$ to each diagram of an oriented link with no more than $k$ crossings. Then $w$ will be defined for every diagram by $w_L = w_k(L)$ where $k \geqslant cr(L)$. Of course the functions $w_k$ must satisfy certain coherence conditions for this to work.
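The leaf-to-root evaluation along a resolving tree can be mirrored in a few lines of code. Below is a minimal sketch (ours, not part of the paper) using the three-element algebra of Example 1.10; the closed form $i|j=(1-i-j)\bmod 3$ for its operation table is our own observation (it reproduces the table given there), and the last lines evaluate the resolving tree of Example 1.17.

```python
# Minimal sketch: the three-element Conway algebra of Example 1.10.
# Assumption (checked against the table of Example 1.10): i|j = (1-i-j) mod 3.
from itertools import product

def bar(i, j):                  # the operation |
    return (1 - i - j) % 3

star = bar                      # in Example 1.10 the operation * equals |

def a(n):                       # a_n, the value on the trivial link of n components
    return n % 3

# Exhaustive check of the Conway-algebra axioms 1.1-1.7.
for x, y, u, v in product(range(3), repeat=4):
    assert bar(bar(x, y), bar(u, v)) == bar(bar(x, u), bar(y, v))        # 1.3
    assert star(bar(x, y), bar(u, v)) == bar(star(x, u), star(y, v))     # 1.4
    assert star(star(x, y), star(u, v)) == star(star(x, u), star(y, v))  # 1.5
    assert star(bar(x, y), y) == x                                       # 1.6
    assert bar(star(x, y), y) == x                                       # 1.7
for n in range(1, 10):
    assert bar(a(n), a(n + 1)) == a(n)                                   # 1.1
    assert star(a(n), a(n + 1)) == a(n)                                  # 1.2

# Leaf-to-root evaluation of the resolving tree of Example 1.17:
w_fig8 = star(a(1), bar(a(2), a(1)))    # w_L = a_1 * (a_2 | a_1)
print(w_fig8)                           # prints 2, while the unknot gets a_1 = 1
```

By this computation the invariant of Example 1.10 takes the value $2$ on the figure eight knot and $1$ on the trivial knot, so it separates these two as well, in the spirit of the trefoil remark made in Example 1.10.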
Finally we will obtain the required properties of $w$ from the properties of the $w_k$'s. \\ \indent We begin with the definition of $w_0$. For a diagram $L$ of $n$ components with $cr(L)=0$ we put \\ \vspace{1mm} \begin{flalign*} {2.2}&& w_0(L) &= a_n.& \end{flalign*} To define $w_{k+1}$ and prove its properties we will use induction several times. To avoid misunderstandings the following will be called the ``Main Inductive Hypothesis'' (M.I.H.): we assume that we have already defined a function $w_k$ attaching an element of $\A$ to each diagram $L$ for which $cr(L) \leqslant k$. We assume that $w_k$ has the following properties: \begin{flalign*} {2.3}&& w_k(U_n) &= a_n & \end{flalign*} for $U_n$ being an untangled diagram of $n$ components (with respect to some choice of base points). \begin{flalign*} {2.4} && w_k(L_+) &= w_k(L_-)|w_k(L_0)&\\ {2.5} && w_k(L_-) &= w_k(L_+)*w_k(L_0) & \end{flalign*} for $L_+$, $L_-$ and $L_0$ related as usual. \begin{flalign*} {2.6}&& w_k(L) &= w_k(R(L))& \end{flalign*} where $R$ is a Reidemeister move on $L$ such that $cr(R(L))$ is still at most $k$. \\ \indent Then, as the reader may expect, we want to make the Main Inductive Step (M.I.S.) to obtain the existence of a function $w_{k+1}$ with analogous properties defined on diagrams with at most $k+1$ crossings. \\ \indent Before dealing with the task of making the M.I.S. let us explain why it will indeed complete the proof of the theorem. It is clear that a function $w_k$ satisfying the M.I.H. is uniquely determined by properties 2.3, 2.4, 2.5 and the fact that for every diagram there exists a resolving tree with untangled leaf diagrams. Thus the compatibility of the functions $w_k$ is obvious, and they define a function $w$ on diagrams. \\ \indent The function $w$ satisfies conditions (2) and (3) of the theorem because the functions $w_k$ satisfy such conditions.
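How the relations 2.3-2.5 pin the invariant down can be illustrated concretely: the resolving-tree formula $w_L=a_1*(a_2|a_1)$ of Example 1.17 can be evaluated in the two-variable algebra of Example 1.11(b), and the substitution $x=-1/z$, $y=1/z$ quoted there should then reproduce the Conway polynomial of the figure eight knot. The numerical check below is our own sketch, not part of the paper.

```python
# Sketch: evaluating w_L = a_1 * (a_2 | a_1) (Example 1.17) in the
# algebra of Example 1.11(b), where | and * are defined by relation 1.13:
# x*w1 + y*w2 = w0.

def bar(w2, w0, x, y):
    # w2 | w0 = w1, solved from x*w1 + y*w2 = w0
    return (w0 - y * w2) / x

def star(w1, w0, x, y):
    # w1 * w0 = w2, solved from x*w1 + y*w2 = w0
    return (w0 - x * w1) / y

def w_figure_eight(x, y):
    a1, a2 = 1.0, x + y                 # a_i = (x+y)^(i-1)
    return star(a1, bar(a2, a1, x, y), x, y)

# Conway specialization x = -1/z, y = 1/z (Example 1.11(b)): the result
# agrees with 1 - z^2, the Conway polynomial of the figure eight knot.
for z in (0.3, 0.5, 1.7):
    val = w_figure_eight(-1.0 / z, 1.0 / z)
    assert abs(val - (1.0 - z * z)) < 1e-9
```

Symbolically the same computation gives $w_L=(1-xy-y^2-x^2)/(xy)$, which under $x=-1/z$, $y=1/z$ indeed collapses to $1-z^2$.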
\\ \indent If $R$ is a Reidemeister move on a diagram $L$, then $cr(R(L))$ is at most $k=cr(L)+2$, whence \\ \indent $w_{R(L)}=w_k(R(L))$, $w_L=w_k(L)$ and by the properties of $w_k$, $w_k(L)=w_k(R(L))$, which implies $w_{R(L)}=w_L$. It follows that $w$ is an invariant of isotopy classes of oriented links. \\ \indent Now it is clear that $w$ has the required property (1) too, since there is an untangled diagram $U_n$ in the same isotopy class as $T_n$ and we have $w_k(U_n)=a_n$. \\ \indent The rest of this section will be occupied by the M.I.S. For a given diagram $D$ with $cr(D)\!\leqslant\!k+1$ we will denote by $\mathcal{D}$ the set of all diagrams which are obtained from $D$ by operations of the kind \parbox{3.1cm}{\psfig{figure=PTplustominus.eps,height=0.9cm}} or \parbox{3.1cm}{\psfig{figure=PTplustozero.eps,height=0.9cm}}. Of course, once base points $b=(b_1,\dots, b_n)$ are chosen on $D$, then the same points can be chosen as base points for any $L\!\in\!\mathcal{D}$, provided $L$ is obtained from $D$ by the operations of the first type only. \\ \indent Let us define a function $w_b$, for a given $D$ and $b$, assigning an element of $\A$ to each $L\!\in\!\mathcal{D}$. If $cr(L)\!<\!k+1$ we put \begin{flalign*} {2.7}&& w_b(L) &= w_k(L) & \end{flalign*} If $U_n$ is an untangled diagram with respect to $b$ we put \begin{flalign*} {2.8}&& w_b(U_n) &= a_{n} & \end{flalign*} ($n$ denotes the number of components). \\ Now we can proceed by induction on the number $b(L)$ of bad crossings in $L$ (in the symbol $b(L)$, the letter $b$ stands both for ``bad'' and for $b=(b_1,\dots,b_n)$; for a different choice of base points $b'=(b'_1,\dots,b'_n)$ we will write $b'(L)$). Assume that $w_b$ is defined for all $L\!\in\!\mathcal{D}$ such that $b(L)\!<\!t$. Then for $L$ with $b(L)=t$, let $p$ be the first bad crossing of $L$ (starting from $b_1$ and walking along the diagram). Depending on $p$ being positive or negative we have $L=L_+^p$ or $L=L_-^p$.
We put \begin{flalign*} {2.9} & \ \ \ \ \ \ \ \ \ w_b(L)= \begin{cases} w_b(L_-^p) | w_b(L_0^p), & \text{if sgn } p = + \\ w_b(L_+^p)*w_b(L_0^p), & \text{if sgn } p = -. \\ \end{cases}& \end{flalign*} We will show that $w_b$ is in fact independent of the choice of $b$ and that it has the properties required from $w_{k+1}$. \\ \vspace{1mm}\\ \textbf{Conway Relations for $\boldsymbol{w_b}$} \\ \vspace{1mm}\\ \indent Let us begin with the proof that $w_b$ has properties $2.4$ and $2.5$. We will denote by $p$ the considered crossing point. We restrict our attention to the case: $b(L_+^p)>b(L_-^p)$. The opposite situation is quite analogous. \\ \indent Now, we use induction on $b(L_-^p)$. If $b(L_-^p)$=$0$, then $b(L_+^p)$=$1$, $p$ is the only bad point of $L_+^p$, and by defining equalities $2.9$ we have \[w_b(L_+^p)\!=\!w_b(L_-^p)|w_b(L_0^p)\] and using 1.6 we obtain \[w_b(L_-^p)\!=\!w_b(L_+^p)\!*\!w_b(L_0^p).\] Assume now that the formulae $2.4$ and $2.5$ for $w_b$ are satisfied for every diagram $L$ such that $b(L_-^p)\!<\!t$, $t\!\geqslant\!1$. Let us consider the case $b(L_-^p)$=$t$. \\ \indent By the assumption $b(L_+^p)\!\geqslant\!2$. Let $q$ be the first bad point on $L_+^p$. Assume that $q$=$p$. Then by $2.9$ we have \[w_b(L_+^p)\!=\!w_b(L_-^p)| w_b(L_0^p).\] Assume that $q \neq p$. Let sgn $q=+$, for example. Then by 2.9 we have \[w_b(L_+^p) = w_b(L{_+^p}{_+^q}) = w_b(L{_+^p}{_-^q}) | w_b(L{_+^p}{_ 0^q}).\] But $b(L{_-^p}{_-^q})\!<\!t$ and $cr(L{_+^p}{_ 0^q})\!\leqslant\!k$, whence by the inductive hypothesis and M.I.H. 
we have \[w_b(L{_+^p}{_-^q})\!=\!w_b(L{_-^p}{_-^q})|w_b(L{_0^p}{_ -^q})\] and \[w_b(L{_+^p}{_0^q})\!=\!w_b(L{_-^p}{_0^q})|w_b(L{_0^p}{_ 0^q})\] whence \[w_b(L_+^p) = (w_b(L{_-^p}{_-^q}) | w_b(L{_0^p}{_-^q}))|( w_b(L{_-^p}{_ 0^q})| w_b(L{_0^p}{_0^q)})\] and by the transposition property 1.3 \begin{flalign*} {2.10} && w_b(L_+^p) &=(w_b(L{_-^p}{_-^q})|w_b(L{_-^p}{_0^q})) |(w_b(L{_0^p}{_ -^q})|w_b(L{_0^p}{_0^q})).& \end{flalign*} On the other hand $b(L{_-^p}{_-^q})\!<\!t$ and $cr(L_ 0^p)\!\leqslant\!k$, so using once more the inductive hypothesis and M.I.H. we obtain \[ {2.11} \begin{split} w_b(L_-^p) &= w_b(L{_-^p}{_+^q}) = w_b(L{_-^p}{_-^q}) | w_b(L{_-^p}{_ 0^q}) \\ w_b(L_0^p) &= w_b(L{_0^p}{_+^q}) = w_b(L{_0^p}{_-^q}) | w_b(L{_0^p}{_ 0^q}) \end{split} \] Putting 2.10 and 2.11 together we obtain \[w_b(L_+^p) = w_b(L_-^p) | w_b(L_0^p)\] as required. If sgn $q=-$ we use $1.4$ instead of $1.3$. This completes the proof of Conway Relations for $w_b$. \\ \vspace{1mm}\\ \textbf{Changing Base Points} \\ \vspace{1mm}\\ \indent We will show now that $w_b$ does not depend on the choice of $b$, provided the order of components is not changed. It amounts to the verification that we may replace $b_i$ by $b'_i$ taken from the same component in such a way that $b'_i$ lies after $b_i$ and there is exactly one crossing point, say $p$, between $b_i$ and $b'_i$. Let $b'$=($b_1, \dots,b'_i,\dots,b_n$). We want to show that $w_b(L)\!=\!w_{b'}(L)$ for every diagram with $k+1$ crossings belonging to $\mathcal{D}$. We will only consider the case sgn $p=+$; the case sgn $p=-$ is quite analogous. \\ \indent We use induction on $B(L)=$max$(b(L),b'(L))$. We consider three cases. \\ \vspace{1mm}\\ \indent \textsc{Cbp 1.} Assume $B(L)\!=\!0$. Then $L$ is untangled with respect to both choices of base points and by 2.8 \[w_b(L) = a_n = w_{b'}(L).\] \indent \textsc{Cbp 2.} Assume that $B(L)\!=\!1$ and $b(L)\!\neq\!b'(L)$. 
This is possible only when $p$ is a self-crossing point of the $i$-th component of $L$. There are two subcases to be considered. \\ \vspace{1mm}\\ \indent \textsc{Cbp 2} (a): $b(L)\!=\!1$ and $b'(L)\!=\!0$. Then $L$ is untangled with respect to $b'$ and by 2.8 $$ w_{b'}(L)=a_n $$ $$ \ \ \ \ w_b(L)=w_b(L_+^p)= w_b(L_-^p)|w_b(L_0^p)$$ \indent Again we have restricted our attention to the case sgn $p\!=\!+$. Now, $w_b(L_-^p)\!=\!a_n$ since $b(L_-^p)\!=\!0$, and $L_0^p$ is untangled with respect to a proper choice of base points. Of course $L_0^p$ has $n+1$ components, so $w_b(L_0^p)\!=\!a_{n+1}$ by 2.8. It follows that $w_b(L)\!=\!a_n|a_{n+1}$ and $a_n|a_{n+1}=a_n$ by $1.1$. \\ \vspace{1mm}\\ \indent \textsc{Cbp 2}(b): $b(L)\!=\!0$ and $b'(L)\!=\!1$. This case can be dealt with like \textsc{Cbp 2}(a).\\ \vspace{1mm}\\ \indent \textsc{Cbp 3.} $B(L)\!=\!t\!>\!1$ or $B(L)\!=\!1\!=\!b(L)\!=\!b'(L)$. We assume by induction that $w_b(K)\!=\!w_{b'}(K)$ for $B(K)\!<\!B(L)$. Let $q$ be a crossing point which is bad with respect to both $b$ and $b'$. We will consider this time the case sgn $q=-$. The case sgn $q=+$ is analogous. \\ \indent Using the already proven Conway relations for $w_b$ and $w_{b'}$ we obtain \[w_b(L)= w_b(L_-^q)=w_b(L_+^q)\!*\!w_b(L_0^q) \] \[w_{b'}(L)= w_{b'}(L_-^q)= w_{b'}(L_+^q)\!*\!w_{b'}(L_0^q)\] But $B(L_+^q)\!<\!B(L)$ and $cr(L_0^q)\!\leqslant\!k$, whence by the inductive hypothesis and M.I.H. we have \[w_b(L_+^q) = w_{b'}(L_+^q)\] \[w_b(L_0^q) = w_{b'}(L_0^q)\] which imply $w_b(L)\!=\!w_{b'}(L)$. This completes the proof of this step (C.B.P.). \\ \indent Since $w_b$ has turned out to be independent of base point changes which preserve the order of components, we may now define a function $w^0$ which attaches an element of $\A$ to every diagram $L$ with $cr(L) \leqslant k+1$ and a fixed ordering of components.
\\ \vspace{1mm}\\ \textbf{Independence of $\boldsymbol{w^0}$ of Reidemeister Moves} (I.R.M) \\ \vspace{1mm}\\ \indent When $L$ is a diagram with fixed order of components and $R$ is a Reidemeister move on $L$, then we have a natural ordering of components on $R(L)$. We will show now that $w^0(L)=w^0(R(L))$. Of course we assume that $cr(L)$, $cr(R(L)) \leqslant k+1$. \\ \indent We use induction on $b(L)$ with respect to properly chosen base points $b=(b_1, \dots , b_n)$. Of course the choice must be compatible with the given ordering of components. We choose the base points to lie outside the part of the diagram involved in the considered Reidemeister move $R$, so that the same points may work for the diagram $R(L)$ as well. We have to consider the three standard types of Reidemeister moves (Fig. 2.1). \\ \ \\ \centerline{\psfig{figure=R123-PT.eps,height=2.5cm}} \centerline{\footnotesize{Fig. 2.1}} \indent Assume that $b(L)=0$. Then it is easily seen that also $b(R(L))=0$, and the number of components is not changed. Thus \[w^0(L)=w^0(R(L))\ \textup{ by\ 2.8.}\] \indent We assume now by induction that $w^0(L)=w^0(R(L))$ for $b(L)<t$. Let us consider the case $b(L)=t$. Assume that there is a bad crossing $p$ in $L$ which is different from all the crossings involved in the considered Reidemeister move. Assume, for example, that sgn $p=+$. Then, by the inductive hypothesis, we have \begin{flalign*} {2.12} && w^0(L_-^p) &= w^0(R(L_-^p))& \end{flalign*} and by M.I.H. \begin{flalign*} {2.13} && w^0(L_0^p) &= w^0(R(L_0^p))& \end{flalign*} Now, by the Conway relation 2.4, which was already verified for $w^0$ we have \[w^0(L)=w^0(L_+^p)=w^0(L_-^p)|w^0(L_0^p) \] \[w^0(R(L))=w^0(R(L)_+^p)=w^0(R(L)_-^p)|w^0(R(L)_0^p) \] whence by 2.12 and 2.13 \[w^0(L)=w^0(R(L))\] Obviously $R(L_-^p)=R(L)_-^p$ and $R(L_0^p)=R(L)_0^p$. \\ \indent It remains to consider the case when $L$ has no bad points, except those involved in the considered Reidemeister move. 
We will consider the three types of moves separately. The most complicated is the case of a Reidemeister move of the third type. To deal with it let us formulate the following observation: \\ \indent Whatever the choice of base points is, the crossing point of the top arc and the bottom arc cannot be the only bad point of the diagram. \\ \centerline{\psfig{figure=TriangleB-PT.eps,height=4.1cm}} \centerline{\footnotesize{Fig. 2.2}} The proof of the above observation amounts to an easy case by case checking and we omit it. The observation makes possible the following induction: we can assume that we have a bad point at the crossing between the middle arc and the lower or the upper arc. Let us consider for example the first possibility; thus $p$ from Fig. 2.2 is assumed to be a bad point. We consider two subcases, according to sgn $p$ being $+$ or $-$. \\ \indent Assume sgn $p=+$. Then by Conway relations \[w^0(L)=w^0(L_+^p)=w^0(L_-^p)|w^0(L_0^p)\] \[w^0(R(L))=w^0(R(L)_+^p)=w^0(R(L)_-^p)|w^0(R(L)_0^p)\] But $R(L)_-^p=R(L_-^p)$ and by the inductive hypothesis \[w^0(L_-^p)=w^0(R(L_-^p))\] Also $R(L)_0^p$ is obtained from $L_0^p$ by two subsequent Reidemeister moves of type two (see Fig. 2.3), whence by M.I.H. \[w^0(R(L)_0^p)=w^0(L_0^p)\] and the equality \[w^0(L)=w^0(R(L))\ \textup{follows.}\] \centerline{\epsfig{figure=Figure23.eps,height=3.5cm}} \centerline{\footnotesize{Fig.2.3}} Assume now that sgn $p=-$. Then by Conway relations \[w^0(L)=w^0(L_-^p)=w^0(L_+^p)*w^0(L_0^p) \] \[w^0(R(L))=w^0(R(L)_-^p)=w^0(R(L)_+^p)*w^0(R(L)_0^p)\] But $R(L)_+^p=R(L_+^p)$ and by the inductive hypothesis \[w^0(L_+^p)=w^0(R(L_+^p))\] Now, $L_0^p$ and $R(L)_0^p$ are essentially the same diagrams (see Fig. 2.4), whence $w^0(L_0^p)=w^0(R(L)_0^p)$ and the equality \[w^0(L)=w^0(R(L))\ \textup{follows.}\] \centerline{\epsfig{figure=Fig24.eps, height=3.5cm}} \centerline{\footnotesize{Fig. 2.4}} \ \\ \ \\ Reidemeister moves of the first type. 
The base points can always be chosen so that the crossing point involved in the move is good. \\ Reidemeister moves of the second type. There is only one case in which we cannot choose base points so as to guarantee that the crossing points involved in the move are good. It happens when the involved arcs are parts of different components and the lower arc is a part of the earlier component. In this case both crossing points involved are bad, and they are of course of different signs. Let us consider the situation shown in Fig. 2.5. \\ \centerline{\epsfig{figure=Fig25.eps, height=3.5cm}} \centerline{\footnotesize{Fig. 2.5}} We want to show that $w^0(R(L))=w^0(L)$. But by the inductive hypothesis we have \[w^0(L')=w^0(R'(L'))=w^0(R(L)).\] Using the already proven Conway relations, formulae 1.6 and 1.7 and M.I.H. if necessary, it can be proved that $w^0(L)=w^0(L')$. Let us discuss in detail the case involving M.I.H. It occurs when sgn $p=-$. Then we have \[w^0(L)=w^0(L_+^q)=w^0(L_-^q)|w^0(L_0^q)=(w^0(L{_-^q}{_+^p})*w^0(L{_-^q}{_0^p}))|w^0(L_0^q)\] But $L{_-^q}{_+^p}= L'$ and by M.I.H. $w^0(L{_-^q}{_0^p})=w^0(L_0^q)$ (see Fig. 2.6, where $L{_-^q}{_0^p}$ and $L_0^q$ are both obtained from $K$ by a Reidemeister move of the first type). \\ \ \\ \centerline{\epsfig{figure=Fig26.eps, height=3.2cm}} \centerline{\footnotesize{Fig. 2.6}} Thus by 1.7: \[w^0(L) = w^0(L')\ \textup{whence}\] \[w^0(L)=w^0(R(L)). \] The case sgn $p=+$ is even simpler and we omit it. This completes the proof of the independence of $w^0$ of Reidemeister moves. To complete the Main Inductive Step it is enough to prove the independence of $w^0$ of the order of components. Then we set $w_{k+1} = w^0$. The required properties have already been checked. \\ \vspace{1mm}\\ \textbf {Independence of the Order of Components} (I.O.C.)
\\ \vspace{1mm}\\ \indent It is enough to verify that for a given diagram $L\ (cr(L) \leqslant k+1)$ and fixed base points $b=(b_1, \dots , b_i, b_{i+1}, \dots , b_n)$ we have \[w_b(L)=w_{b'}(L)\] where $b'=(b_1, \dots , b_{i+1}, b_{i}, \dots , b_n)$. This is easily reduced by the usual induction on $b(L)$ to the case of an untangled diagram. To deal with this case we will choose $b$ in an appropriate way. \\ \indent Before we do it, let us formulate the following observation: If $L_i$ is a trivial component of $L$, i.e. $L_i$ has no crossing points, neither with itself, nor with other components, then the specific position of $L_i$ in the plane has no effect on $w^0(L)$; in particular we may assume that $L_i$ lies separately from the rest of the diagram: \\ \ \\ \centerline{\epsfig{figure=Fig27.eps, height=2.2cm}} \centerline{\footnotesize{Fig. 2.7}} This can be easily achieved by induction on $b(L)$, or better by saying that it is obvious. \\ \indent For an untangled diagram we will be done if we show that it can be transformed into another one with less crossings by a series of Reidemeister moves which do not increase the crossing number. We can then use I.R.M. and M.I.H. This is guaranteed by the following lemma. \begin{lemma}\textbf{2.14.} Let $L$ be a diagram with $k$ crossings and a given ordering of components $L_1, L_2, \dots , L_n$. Then either $L$ has a trivial circle as a component or there is a choice of base points $b=(b_1, \dots , b_n)$; $b_i \in L_i$ such that an untangled diagram $L^{u}$ associated with $L$ and $b$ (that is all the bad crossings of $L$ are changed to good ones) can be changed into a diagram with less than $k$ crossings by a sequence of Reidemeister moves not increasing the number of crossings. \end{lemma} This was probably known to Reidemeister already, however we prove it in the Appendix for the sake of completeness.\\ \indent With I.O.C. proven we have completed M.I.S. 
and the proof of Theorem 1.8.\\ \section{Quasi-algebras}\label{Section 3} \indent We shall now describe a certain generalization of Theorem 1.8. This is based on the observation that it was not necessary to have the operations $|$ and $*$ defined on the whole product $\A \times \A$. Let us begin with the following definition. \begin{definition}\textbf{3.0.} A quasi Conway algebra is a triple $(\A, B_1, B_2)$, $B_1, B_2$ being subsets of $\A \times \A$, together with 0-argument operations $a_1,a_2, \dots ,a_n, \dots$ and two 2-argument operations $|$ and $*$ defined on $B_1$ and $B_2$, respectively, satisfying the conditions: \end{definition} \begin{align*} \left. \begin{aligned} 3.1 && \qquad &a_n | a_{n+1} = a_n \\ 3.2 && \qquad &a_n * a_{n+1} = a_n \\ 3.3&& \qquad &(a|b)|(c|d) = (a|c)|(b|d) \\ 3.4 && \qquad &(a|b)*(c|d) = (a*c)|(b*d) \\ 3.5 && \qquad &(a*b)*(c*d) = (a*c)*(b*d) \\ 3.6 && \qquad &(a|b)*b = a \\ 3.7 && \qquad &(a*b)|b = a. \\ \end{aligned} \right\} \indent \text{whenever both sides are defined} \end{align*} \indent We would like to construct invariants of Conway type using such quasi-algebras. As before, $a_n$ will be the value of the invariant for the trivial link of $n$ components. \\ \indent We say that $\A$ is geometrically sufficient if and only if, for every resolving tree of each diagram of an oriented link, all the operations that are necessary to compute the root value are defined. \begin{theorem}\textbf{3.8.} Let $\A$ be a geometrically sufficient quasi Conway algebra. There exists a unique invariant $w$ attaching to each isotopy class of links an element of $\A$ and satisfying the conditions \end{theorem} \begin{enumerate} \item $w_{T_n}=a_n$ for $T_n$ being a trivial link of $n$ components,\\ \item if $L_+$, $L_-$ and $L_0$ are diagrams from Fig.
1.1, then \[w_{L_+}=w_{L_-}|w_{L_0} \text{ and } \] \[w_{L_-}=w_{L_+}*w_{L_0}.\] \end{enumerate} \noindent The proof is identical to the proof of Theorem 1.8.\\ \indent As an example we will now describe an invariant, whose values are polynomials in an infinite number of variables. \begin{example}\textbf{3.9.} $\A=N \times Z[ y_1^{\mp1},{x'}_2^{\mp 1},{z'}_2,\ x_1^{\mp1},z_1,\ x_2^{\mp1},z_2,x_3^{\mp1},z_3,\dots], B_1=B_2=B=\{ ((n_1,\ w_1),(n_2,w_2))\in\A\times\A:|n_1-n_2|=1\}, a_1=(1,1),a_2=(2, x_1+y_1+z_1),\dots, a_n=(n,\Pi_{i=1}^{n-1}(x_i+y_i)+z_1\Pi_{i=2}^{n-1} (x_i+y_i)+\dots+z_{n-2}(x_{n-1}+y_{n-1})+z_{n-1}),\dots$ where $y_i=x_i\frac{y_1}{x_1}$. To define the operations $|$ and $*$ consider the following system of equations:\end{example} \begin{flalign*} {(1)} && &x_1w_1+y_1w_2=w_0-z_1& \\ {(2)} && &x_2w_1+y_2w_2=w_0-z_2& \\ {(2^\prime)}&& &x'_2w_1+y'_2w_2=w_0-z'_2& \\ {(3)} && &x_3w_1+y_3w_2=w_0 -z_3 \\ {(3^\prime)}&& &x'_3w_1+y'_3w_2=w_0-z'_3 \\ && &\dots \\ {(i)}&& &x_iw_1+y_iw_2=w_0-z_i \\ {(i^\prime)}&& &x'_iw_1+y'_iw_2=w_0-z'_i \\ && &\ldots \end{flalign*} where $y'_i=\frac{x'_iy_1}{x_i},x'_i=\frac{x'_2x_1}{x_{i-1}}$ and $z'_i$ are defined inductively to satisfy \[\frac{z'_{i+1}-z_{i-1}}{x_1x'_2}= \Big(1+\frac{y_1}{x_1}\Big)\Big(\frac{z'_i}{x'_i}-\frac{z_i}{x_i}\Big).\] We define $(n,w)=(n_1,w_1)|(n_2,w_2)$ (resp.\ $(n,w)=(n_1,w_1)*(n_2,w_2)$) as follows: $n=n_1$ and if $n_1=n_2-1$ then we use the equation $(n)$ to get $w$; namely $x_nw+y_nw_1=w_2-z_n$ (resp.\ $x_nw_1+y_nw=w_2-z_n$). If $n_1=n_2+1$ then we use the equation ($n'$) to get $w$; namely $x'_nw+y'_nw_1=w_2-z'_n$ (resp.\ $x'_nw_1+y'_nw=w_2-z'_n$). We can think of Example 1.11 as being a special case of Example 3.9.
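In concrete terms, both operations of Example 3.9 just solve a linear equation for the unknown second coordinate. The following SymPy sketch (our own illustration, not part of the paper; we restrict to the case $n_1=n_2-1$ and treat the coefficients of equation $(n)$ as free symbols) checks that relations 3.6 and 3.7 then hold formally.

```python
import sympy as sp

# Free coefficients of equation (n): x_n*w + y_n*w1 = w2 - z_n
x_n, y_n, z_n, F_a, F_b = sp.symbols('x_n y_n z_n F_a F_b')

def bar(w1, w2):
    # second coordinate of (n1,w1)|(n2,w2) for n1 = n2 - 1:
    # solve x_n*w + y_n*w1 = w2 - z_n for w
    return (w2 - z_n - y_n * w1) / x_n

def star(w1, w2):
    # second coordinate of (n1,w1)*(n2,w2) for n1 = n2 - 1:
    # solve x_n*w1 + y_n*w = w2 - z_n for w
    return (w2 - z_n - x_n * w1) / y_n

# Relations 3.6 and 3.7 on second coordinates: (a|b)*b = a and (a*b)|b = a
assert sp.simplify(star(bar(F_a, F_b), F_b) - F_a) == 0
assert sp.simplify(bar(star(F_a, F_b), F_b) - F_a) == 0
```

Note that the first coordinates are consistent here: for $a=(n,F_a)$ and $b=(n+1,F_b)$, both $a|b$ and $a*b$ again have first coordinate $n$, so the same equation $(n)$ applies in both steps.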
\\ \indent Now we will show that the quasi-algebra $\A$ of Example 3.9 satisfies the relations 1.1-1.7.\\ \indent It is an easy task to check that the first coordinate of elements from $\A$ satisfies the relations 1.1-1.7 (compare with Example 1.9) and to check the relations 1.1, 1.2, 1.6 and 1.7, so we will concentrate our attention on relations 1.3, 1.4, and 1.5.\\ \indent It is convenient to use the following notation: if $w\in\A$ then $w=(\lvert{w}\rvert,F)$ and for \[w_1|w_2=(\lvert{w_1}\rvert,F_1)|(\lvert{w_2}\rvert, F_2)=(\lvert{w}\rvert,F)=w \] to use the notation \begin{equation*} F=\begin{cases} F_1|_nF_2 \ \textup{if} \ n=\lvert{w_1}\rvert=\lvert{w_2}\rvert-1 \\ F_1|_{n'}F_2 \ \textup{if} \ n=\lvert{w_1}\rvert=\lvert{w_2}\rvert+1. \end{cases} \end{equation*} We use similar notation for the operation $*$. \\ \indent In order to verify relations 1.3-1.5 we have to consider three main cases:\\ $1. \quad \lvert{a}\rvert=\lvert{c}\rvert-1=\lvert{b}\rvert+1=n$ \\ Relations 1.3-1.5 make sense iff $\lvert{d}\rvert=n$. The relation 1.3 has the form: \[(F_a|_{n'}F_b)|_n(F_c|_{(n+1)'}F_d)=(F_a|_{n}F_c)|_{n'}(F_b|_{(n-1)}F_d).\] From this we get: \[\frac{1}{x_nx'_{n+1}}F_d-\frac{y'_{n+1}}{x_nx'_{n+1}}F_c-\frac{y_n}{x_nx'_n}F_b + \frac{y_ny'_n}{x_nx'_n} F_a-\frac{z'_{n+1}}{x_nx'_{n+1}}-\frac{z_n}{x_n}+ \frac{y_nz'_n}{x_nx'_n}= \] \[= \frac{1}{x'_nx_{n-1}} F_d-\frac{y_{n-1}}{x'_nx_{n-1}}F_b-\frac{y'_n}{x_nx'_n}F_c + \frac{y_ny'_n}{x_nx'_n}F_a-\frac{z_{n-1}}{x'_nx_{n-1}}-\frac{z'_n}{x'_n}+ \frac{y'_nz_n}{x_nx'_n}\] Therefore: \begin{align*} \text{(i)}&& &x_{n-1}x'_n=x_nx'_{n+1}\\ \text{(ii)}&& &\frac{y'_{n+1}}{x'_{n+1}}=\frac{y'_n}{x'_n} \\ \text{(iii)}&& &\frac{y_n}{x_n}=\frac{y_{n-1}}{x_{n-1}} \\ \text{(iv)}&& &\frac{z'_{n+1}}{x_nx'_{n+1}}+\frac{z_n}{x_n}-\frac{y_nz'_n}{x_nx'_n}=\frac{z_{n-1}}{x'_nx_{n-1}}+\frac{z'_n}{x'_n}-\frac{y'_nz_n}{x_nx'_n} \end{align*} When checking the relations 1.4 and 1.5 we get exactly the same conditions (i)-(iv).\\ $2.
\quad \lvert a \rvert =\lvert b \rvert -1 = \lvert{c} \rvert -1 = n$.\\ \indent (I) $\lvert{d}\rvert=n.$\\ The relation 1.3 has the following form: \[(F_a|_{n}F_b)|_n(F_c|_{(n+1)'}F_d)=(F_a|_{n}F_c)|_{n}(F_b|_{(n+1)'}F_d).\] We get, after some calculations, that it is equivalent to \begin{align*} \text{(v)}&& &\frac{y_n}{x_n}=\frac{y'_{n+1}}{x'_{n+1}}& \end{align*} The relations 1.4 and 1.5 reduce to the same condition (v). \\ \indent (II) $\lvert{d}\rvert=n+2.$ \\ Then the relations 1.3-1.5 reduce to the condition (iii). \\ $3. \quad \lvert{a}\rvert=\lvert{b}\rvert+1=\lvert{c}\rvert+1=n$ \\ \indent (I) $\lvert{d}\rvert = n-2$ \\ \indent (II) $\lvert{d}\rvert = n.$ \\ We get, after some computations, that relations 3 (I) and 3 (II) follow from the conditions (iii) and (v). \\ \indent Conditions (i)-(v) are equivalent to the conditions on $x'_i, y_i, y'_i$ and $z'_i$ described in Example 3.9. Therefore the quasi-algebra $\A$ from Example 3.9 satisfies the relations 1.1-1.7. Furthermore, if $L$ is a diagram and $p$ is a crossing of $L$, then the number of components of $L_0^p$ is always equal to the number of components of $L$ plus or minus one, so the set $B\subset\A\times\A$ is sufficient to define the link invariant associated with $\A$.\\ \section{Final remarks and problems}\label{Section 4} \begin{remark}\textbf{4.1.} Each invariant of links can be used to build a better invariant which will be called the weighted simplex of the invariant. Namely, if $w$ is an invariant and $L$ is a link of $n$ components $L_1,\dots, L_n$, then we consider an $(n-1)$-dimensional simplex $\Delta^{n-1}=(q_1,\dots,q_n)$. We associate with each face ($q_{i_1},\dots,q_{i_k}$) of $\Delta^{n-1}$ the value $w_{L'}$ where $L' = L_{i_1}\cup \cdots \cup L_{i_k}$. \end{remark} \indent We say that two weighted simplices are equivalent if there exists a bijection of their vertices which preserves weights of faces.
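The construction of Remark 4.1 can be phrased computationally. The following Python sketch (ours, purely illustrative; the additive weight function below is a stand-in for a genuine link invariant such as the linking number) builds the weighted simplex of a link from its components and tests equivalence by searching for a weight-preserving bijection of vertices.

```python
from itertools import combinations, permutations

def weighted_simplex(components, w):
    # Face (q_{i_1},...,q_{i_k}) gets the value w(L_{i_1} u ... u L_{i_k});
    # faces are encoded as frozensets of component indices.
    n = len(components)
    return {frozenset(S): w([components[i] for i in S])
            for k in range(1, n + 1)
            for S in combinations(range(n), k)}

def equivalent(ws1, ws2, n):
    # Equivalent iff some bijection of vertices preserves every face weight.
    return any(all(ws1[S] == ws2[frozenset(p[i] for i in S)] for S in ws1)
               for p in permutations(range(n)))
```

For instance, with `w = sum` the simplices of the component lists `[1, 2, 3]` and `[3, 1, 2]` are equivalent, while `[1, 2, 4]` is already distinguished by its vertex weights.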
Of course, the weighted simplex of an invariant of isotopy classes of oriented links is also an invariant of isotopy classes of oriented links. \\ \indent Before we present some examples, we will introduce an equivalence relation $\thicksim_c$ (Conway equivalence relation) on isotopy classes of oriented links ($\mathcal{L}$). \begin{definition}\textbf{4.2.} $\thicksim_c$ is the smallest equivalence relation on $\mathcal{L}$ which satisfies the following condition: let $L'_1$ (resp. $L'_2$) be a diagram of a link $L_1$ (resp. $L_2$) with a given crossing $p_1$ (resp. $p_2$) such that $p_1$ and $p_2$ are crossings of the same sign and \[ \begin{split} (L'_1)_-^{p_1} \thicksim_c (L'_2)_-^{p_2}\ &\text{and} \\ (L'_1)_0^{p_1} \thicksim_c (L'_2)_0^{p_2} \end{split} \] then $L_1\thicksim_c L_2$.\\ \end{definition} \indent It is obvious that an invariant given by a quasi Conway algebra is a Conway equivalence invariant. \begin{example}\textbf{4.3.}{(a)} Two links shown in Fig. 4.1 are Conway equivalent but they can be distinguished by weighted simplices of the linking numbers. \end{example} \centerline{\epsfig{figure= Fig41.eps,height=3.5cm}} \centerline{\footnotesize{Fig. 4.1}} {(b)} J. Birman has found three-braids (we use the notation of [M]): \[ \begin{split} \gamma_1=\sigma_1^{-2}\sigma_2^3\sigma_1^{-1}\sigma_2^4\sigma_1^{-2}\sigma_2^{4}\sigma_1^{-1}\sigma_2 \\ \gamma_2=\sigma_1^{-2}\sigma_2^3\sigma_1^{-1}\sigma_2^4\sigma_1^{-1}\sigma_2\sigma_1^{-2}\sigma_2^4 \end{split} \] whose closures have the same values of all invariants described in our examples and the same signature but which can be distinguished by weighted simplices of the linking numbers [B]. \\ \indent As the referee has kindly pointed out, the polynomial invariants described in 1.11 (a) and 1.11 (b) are equivalent.
Namely, if we denote them by $w_L$ and $w'_L$ respectively, then we have \[w_L(x,\ y,\ z)=\Big(1-\frac{z}{1-x-y}\Big)w'_L(x,\ y)+\frac{z}{1-x-y}.\] \begin{problem}\textbf{4.4.} \begin{enumerate} \item[(a)] Is the invariant described in Example 3.9 better than the polynomial invariant from Example 1.11?\footnote{Added for e-print: Adam Sikora proved, in his Warsaw master's thesis written under the direction of P. Traczyk, that the answer to Problem 4.4 (a) is negative, \cite{Si-1}.} \item[(b)] Find an example of links which have the same polynomial invariant of Example 3.9 but which can be distinguished by some invariant given by a Conway algebra.\footnote{Added for e-print: Adam Sikora proved that no invariant coming from a Conway algebra can distinguish links with the same polynomial invariant of Example 1.11, \cite{Si-2}.} \item[(c)] Do there exist two links $L_1$ and $L_2$ which are not Conway equivalent but which cannot be distinguished using any Conway algebra? \item[(d)] Birman [$B$] described two closed 3-braid knots given by \[ \begin{split} y_1=\sigma_1^{-3}\sigma_2^4\sigma_1^{-1}\sigma_2^5\sigma_1^{-3}\sigma_2^{5}\sigma_1^{-2}\sigma_2 \\ y_2=\sigma_1^{-3}\sigma_2^4\sigma_1^{-1}\sigma_2^5\sigma_1^{-2}\sigma_2\sigma_1^{-3}\sigma_2^5 \end{split} \] which are not distinguished by the invariants described in our examples or by the signature. Are they Conway equivalent? (They are not isotopic because their incompressible surfaces are different.) \end{enumerate} \end{problem} \begin{problem}\textbf{4.5.} Given two Conway equivalent links, do they necessarily have the same signature? \end{problem} \indent The examples of Birman and Lozano [$B$; Prop. 1 and 2] have different signature but the same polynomial invariant of Example 1.11 (b) (see [$B$]). \begin{problem}\textbf{4.6.} Let ($V_1$, $V_2$) be a Heegaard splitting of a closed 3-manifold $M$. Is it possible to modify the above approach using projections of links onto the Heegaard surface $\partial V_1$?
\end{problem} \indent We obtained the results of this paper in early December 1984 and were not aware at the time that an important part of our results (the invariant described in Example 1.11 (b)) had been obtained three months before us by four groups of researchers: R. Lickorish and K. Millett, J. Hoste, A. Ocneanu, P. Freyd and D. Yetter, and that the first two groups used arguments similar to ours. We have been informed about this by J. Birman (letter received on January 28, '85) and by J. Montesinos (letter received on February 11, '85; it included the paper by R. Lickorish and K. Millett and also a small joint paper by all the above-mentioned mathematicians). \\ \section{Appendix} Here we prove Lemma 2.14.\\ \indent A closed part cut out of the plane by arcs of $L$ is called an $i$-gon if it has $i$ vertices (see Fig. 5.1). \\ \centerline{{\psfig{figure=PT-2-2-9.eps,height=4.0cm}}}\ \\ \centerline{\footnotesize{Fig. 5.1}} Every $i$-gon with $i\leqslant2$ will be called an $f$-gon ($f$ stands for ``few''). Now let $X$ be an innermost $f$-gon, that is, an $f$-gon which does not contain any other $f$-gon inside. \\ \indent If $X$ is a 0-gon we are done because $\partial X$ is a trivial circle. If $X$ is a 1-gon then we are done because int $X \cap L = \emptyset$ so we can perform on $L^u$ a Reidemeister move which decreases the number of crossings of $L^u$ (Fig. 5.2). \\ \centerline{{\psfig{figure=PT-2-2-10.eps,height=3.5cm}}}\ \\ \centerline{\footnotesize{Fig. 5.2}}\\ Therefore we assume that $X$ is a 2-gon. Each arc which cuts int $X$ goes from one edge to another. Furthermore, no component of $L$ lies fully in $X$ so we can choose base points $b=(b_1,\dots,b_n)$ lying outside $X$. This has important consequences: if $L^u$ is an untangled diagram associated with $L$ and $b$ then each 3-gon in $X$ supports a Reidemeister move of the third type (i.e.\ the situation of Fig.
5.3 is impossible).\\ \centerline{{\psfig{figure=PT-2-2-11.eps,height=4.5cm}}}\ \\ \centerline{\footnotesize{Fig. 5.3}} \indent Now we will prove Lemma 2.14 by induction on the number of crossings of $L$ contained in the 2-gon $X$ (we denote this number by $c$). \\ \indent If $c=2$ then int $X \cap L=\emptyset$ and we are done by the previous remark (the 2-gon $X$ can be used to make the Reidemeister move of the second type on $L^u$ and to reduce the number of crossings in $L^u$). \\ \indent Assume that $L$ has $c>2$ crossings in $X$ and that Lemma 2.14 is proved for fewer than $c$ crossings in $X$. In order to make the inductive step we need the following fact. \begin{proposition}\textbf{5.1.} If $X$ is an innermost 2-gon with int $X\cap L \neq \emptyset$ then there is a 3-gon $\Delta \subset X$ such that $\Delta \cap \partial X \neq \emptyset$, int $\Delta \cap L = \emptyset$. \end{proposition} Before we prove Proposition 5.1 we will show how Lemma 2.14 follows from it. \\ \indent We can perform the Reidemeister move of the third type using the 3-gon $\Delta$ and reduce the number of crossings of $L^u$ in $X$ (compare Fig. 5.4).\\ \centerline{\epsfig{figure= Fig54.eps, height=3.5cm}} \centerline{\footnotesize{Fig. 5.4}}\\ Now either $X$ is an innermost $f$-gon with fewer than $c$ crossings in $X$ or it contains an innermost $f$-gon with fewer than $c$ crossings in it. In both cases we can use the inductive hypothesis. \\ \indent Instead of proving Proposition 5.1 we will show a more general fact, which has Proposition 5.1 as a special case. \begin{proposition}\textbf{5.2.} Consider a 3-gon $Y=$($a$, $b$, $c$) such that each arc which cuts it goes from the edge $\overline{ab}$ to the edge $\overline{ac}$ without self-intersections (we allow $Y$ to be a 2-gon considered as a degenerate 3-gon with $\overline{bc}$ collapsed to a point). Furthermore, let int $Y$ be cut by some arc.
Then there is a 3-gon $\Delta \subset Y$ such that $\Delta \cap \overline{ab} \neq\emptyset$ and int $\Delta$ is not cut by any arc. \end{proposition} \indent \textsc{Proof of Proposition 5.2:} We proceed by induction on the number of arcs in int $Y\cap L$ (each such arc cuts $\overline{ab}$ and $\overline{ac}$). For one arc it is obvious (Fig. 5.5). Assume it is true for $k$ arcs ($k\geqslant 1$) and consider the $(k+1)$-th arc $\gamma$. Let $\Delta_0=$($a_1$, $b_1$, $c_1$) be a $3$-gon from the inductive hypothesis with an edge $\overline{a_1b_1}\subset \overline{ab}$ (Fig. 5.6). \\ \centerline{{\psfig{figure=PT-2-2-13.eps,height=5.2cm}}}\ \\ \centerline{\footnotesize{Fig. 5.5}}\\ If $\gamma$ does not cut $\Delta_0$ or it cuts $\overline{a_1b_1}$ we are done (Fig. 5.6). Therefore let us assume that $\gamma$ cuts $\overline{a_1c_1}$ (in $u_1$) and $\overline{b_1c_1}$ (in $w_1$). Let $\gamma$ cut $\overline{ab}$ in $u$ and $\overline{ac}$ in $w$ (Fig. 5.7). We have to consider two cases: \\ \indent (a) $\quad \overline{uu_1}\ \cap$ int $\Delta_0=\emptyset$ (so $\overline{ww_1}\ \cap$ int $\Delta_0=\emptyset$); Fig. 5.7.\\ \centerline{\epsfig{figure=Fig56.eps, height=3.3in}}\\ \centerline{\footnotesize{Fig. 5.6}}\\ \ \\ \centerline{{\psfig{figure=PT-2-2-15.eps,height=5.9cm}}}\ \\ \centerline{\footnotesize{Fig. 5.7}} Consider the 3-gon $ua_1u_1$. No arc can cut the edge $\overline{a_1u_1}$ so each arc which cuts the 3-gon $ua_1u_1$ cuts the edges $\overline{ua_1}$ and $\overline{uu_1}$. Furthermore, this 3-gon is cut by fewer than $k+1$ arcs, so by the inductive hypothesis there is a 3-gon $\Delta$ in $ua_1u_1$ with an edge on $\overline{ua_1}$, the interior of which is not cut by any arc. Then $\Delta$ satisfies the conclusion of Proposition 5.2.\\ \indent (b) $\quad \overline{uw_1} \ \cap$ int$\Delta_0=\emptyset$ (so $\overline{wu_1}\ \cap$ int$\Delta_0 =\emptyset$). In this case we proceed as in case (a).
\\ This completes the proof of Proposition 5.2 and hence the proof of Lemma 2.14.
\section*{\large {\bf SUPPLEMENTARY INFORMATION}} \vspace{2cm} \section*{Contents} \begin{enumerate} \item Schematic of an agent's field of view \item Algorithm for computing the number of clusters \item Detailed explanation of the nonlinear term in the model \item Snapshots of flocking patterns observed over a range of parameter values \item Description of the movies \end{enumerate} \renewcommand\thepage{S\arabic{page}} \setcounter{page}{1} \section*{Schematic of an agent's field of view} The field of view of agent $i$ is illustrated in Fig.~\ref{sch}. At each iteration, agent $i$ attempts to select an agent that lies within its field of view, which is delimited by a maximum bearing angle $\theta_{\text{max}}$, for the purposes of an alignment interaction. An agent $j$ within this field of view is picked by $i$ with a probability that is related to the distance between them, as well as the angle between the velocity of $i$ and the line connecting the two agents. If the field of view of agent $i$ is empty, it performs a random rotation. \begin{figure}[!h] \centering \includegraphics{schematic_SI.png} \caption{ Schematic of the field of view of an agent $i$ that picks an agent $j$ lying within this field of view. The intensity of colour in a given region is related to the probability with which agent $i$ chooses an agent that lies in that region. Each agent has the highest probability of interacting with agents that lie at a distance $\sigma$ along its direction of motion. Similarly, the intensity reduces as the angle $\theta_{i,j}$ between the velocity of $i$ and the line connecting the agents approaches the maximum bearing angle $\theta_{\rm max}$. Thus, an agent $i$ is most likely to align with an agent that is near its direct line of sight, and which is separated by a distance of around $\sigma$.
} \label{sch} \end{figure} \clearpage \section*{Algorithm for computing the number of clusters} At any specified time instant, the maximum possible distance between a pair of agents in the flock is denoted by \[ R_{\max}=\max\left(|\mathbf{x}_i(t)-\mathbf{x}_j(t)|\right)\, \forall\, i,j\in [1,N]\,. \] We set the resolution length $R = \lambda\, R_{\max}$ by choosing a value of $\lambda$ in the range $0 < \lambda \leq 1$. Each agent $i=1,2,\ldots,N$ is assigned a label $g_{i}$, an integer value that specifies the cluster to which the agent belongs. The cluster-finding algorithm involves determining the number of distinct clusters $N_{c}$ at resolution length $R$. The label of each agent $i$ thus lies in the range $g_{\min}(=1) \leq g_i \leq g_{\max}(=N_{c})$.\\ ~\\ {\bf{Summary of the variables used:}}\\ ~\\ \begin{tabular}{rl} $N$ : & Total number of agents in the system,\\ $N_c$ : & Total number of clusters found using the algorithm,\\ $R$ : & Resolution length of the flock (defined above),\\ $g_i$ : & Cluster label assigned to agent $i$,\\ $b_i$, $c$ : & Boolean variables,\\ $g_{\min}$ : & Minimum value of the array $g$,\\ $g_{\max}$ : & Maximum value of the array $g$. \end{tabular} ~\\ ~\\ ~\\ {\bf{Pseudocode of the algorithm:}}\\ ~\\ The algorithm is outlined in the following pseudocode. Comments appear in blue italicised text.
\begin{tabbing} \hspace*{1.0cm}\=\hspace*{1.0cm}\=\hspace*{1.0cm}\=\hspace*{1.0cm}\=\hspace*{1.0cm}\= \kill \TT{Initialize:} $g_{\max} = 0$, $g_{\min} = 0$, \TT{and} $g_i = 0$ \TT{for all} $i=1,2,\ldots,N$.\\ \TT{For} $i=1,2,\ldots,N$\\ \>\CMT{If agent $i$ has not been assigned a label, we label it as one plus the maximum value of the array $g$.}\\ \>\TT{If} $g_i = 0$ \TT{Then} $g_i=\max \{ g_{i'}, i'=1,2,\ldots,N \}+1$.\\ \>\CMT{The variable $b$ marks all the agents in the current assignment.}\\ \>\TT{Initialize:} $b_j = 0$ \TT{for all} $j=1,2,\ldots,N$.\\ \>\CMT{Find all agents $j$ that are at a distance $\leq R$ from agent $i$ and assign $j$ the same label as $i$.}\\ \>\TT{For} $j=1,2,\ldots,N$\\ \>\>\TT{If} $|\mathbf{x}_i(t)-\mathbf{x}_j(t)|<R$ \TT{Then}\\ \>\>\>\TT{If} $g_j = 0$ \TT{Then} $g_j=g_i$.\\ \>\>\>$b_j=1$.\\ \>\>\TT{End}\\ \>\TT{End}\\ \>\TT{Initialize:} $g_{\min}=g_i$.\\ \>\CMT{Consider all the marked agents, i.e. all agents $j$ for which $b_j=1$.}\\ \>\CMT{We find the minimum value of $g_j$ and assign it to $g_{\min}$.}\\ \>\TT{For} $j=1,2,\ldots,N$\\ \>\>\TT{If} $b_j = 1$ \TT{Then}\\ \>\>\>\TT{If} $g_j \leq g_{\min}$ \TT{Then} $g_{\min}=g_j$.\\ \>\TT{End}\\ \>\TT{For} $j=1,2,\ldots,N$\\ \>\>\CMT{We assign the minimum value of the array $g$ to all the marked agents.}\\ \>\>\TT{If} $b_j = 1$ \TT{Then}\\ \>\>\>\TT{For} $k=1,2,\ldots,N$\\ \>\>\>\>\TT{If} $g_k = g_j$ \TT{and} $k \neq j$ \TT{Then} $g_k=g_{\min}$.\\ \>\>\>\TT{End}\\ \>\>\>$g_j=g_{\min}$.\\ \>\>\TT{End}\\ \>\TT{End}\\ \TT{End}\\ ~\\ \TT{Compute:} $g_{\max} = \max \{ g_{i'}, i'=1,2,\ldots,N \}$.\\ ~\\ \CMT{If more than one cluster exists, we relabel them so as to remove the value zero.}\\ \TT{If} $g_{\max} > 1$ \TT{Then}\\ \>\TT{For} $i=(g_{\max}-1),~(g_{\max}-2),\ldots,1$\\ \>\>\TT{Set:} $c=0$\\ \>\>\TT{For} $j=1,2,\ldots,N$\\ \>\>\>\TT{If} $g_j = i$ \TT{Then} $c=1$ \TT{and Exit}.\\ \>\>\TT{End}\\ \>\>\CMT{Fix gaps in the label numbers to ensure that the final set is contiguous.}\\
\>\>\TT{If} $c=0$ \TT{Then}\\ \>\>\>\TT{For} $j=1,2,\ldots,N$\\ \>\>\>\>\TT{For} $k=i+1,\ldots,g_{\max}$\\ \>\>\>\>\>\TT{If} $g_j = k$ \TT{Then} $g_j=k-1$.\\ \>\>\>\>\TT{End}\\ \>\>\>\TT{End}\\ \>\>\TT{End}\\ \>\TT{End}\\ \TT{End}\\ ~\\ \TT{Compute:} $g_{\min} = \min \{ g_{i'}, i'=1,2,\ldots,N \}$, $g_{\max} = \max \{ g_{i'}, i'=1,2,\ldots,N \}$. \end{tabbing} Once each $g_i$ has been relabelled, the number of agents in each cluster $i$ is simply the number of agents that are labelled $g_{i}$, and the total number of clusters at the chosen resolution length is $N_{c}=g_{\max}$.\\ ~\\ {\bf{Demonstration of the cluster-finding algorithm at different resolution lengths:}}\\ ~\\ In the following example, we present an implementation of this cluster-finding algorithm at two different resolution lengths, $R$. As displayed in Fig.~\ref{algo_example}, we consider four clusters of agents. Each cluster consists of $50$ agents whose coordinates are chosen randomly within a $10\times 10$ square centered at the coordinates $(0,0)$, $(0,25)$, $(25,0)$, and $(25/\sqrt{2},25/\sqrt{2})$. \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{algo_example.pdf} \caption{ A demonstration of the cluster-finding algorithm. We choose resolution lengths (a) $R=R_{\max}/4$, and (b) $R=R_{\max}/8$. The lines connect the closest agents in each pair of clusters, and the corresponding numerical value denotes the distance between these agents. The bold lines and numbers in panel (a) indicate that the corresponding clusters are categorized as being part of the same cluster (II). In panel (b) four clusters (I-IV) are obtained since all of them are separated by a distance $>R$. } \label{algo_example} \end{figure} \clearpage Upon running our cluster-finding algorithm on this flock, we find that the maximum separation between any pair of agents is $R_{\max}=47.13$. For the choices $\lambda=1/4, 1/8$, we find $R=R_{\max}/4=11.78$ and $R=R_{\max}/8=5.89$.
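For reference, the labelling procedure above can be condensed into a short Python function (our own reformulation of the pseudocode as a connected-component search; it produces the same clusters, with contiguous labels $1,\ldots,N_c$):

```python
import numpy as np

def find_clusters(points, R):
    # Two agents belong to the same cluster when they are joined by a chain
    # of agents with consecutive distances < R (as in the pseudocode above).
    n = len(points)
    labels = [0] * n          # 0 means "not yet assigned"
    n_clusters = 0
    for i in range(n):
        if labels[i] == 0:
            n_clusters += 1
            stack = [i]       # grow the cluster by depth-first search
            labels[i] = n_clusters
            while stack:
                j = stack.pop()
                for k in range(n):
                    if labels[k] == 0 and np.linalg.norm(points[j] - points[k]) < R:
                        labels[k] = n_clusters
                        stack.append(k)
    return labels             # one label per agent; N_c = max(labels)
```

At resolution length $R=\lambda R_{\max}$ this returns one label per agent, and the number of clusters is the maximum label.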
In the displayed realization (Fig.~\ref{algo_example}), we find that the minimum distance between agents in the lower left and upper right clusters is $13.1$. Hence, at resolution length $R=11.78$ these two clusters are categorized as being distinct. In contrast, the minimum distances between the agents in the upper right cluster and those in the remaining clusters are less than $11.78$ and hence they are categorized as being part of the same cluster. Thus, as displayed in Fig.~\ref{algo_example}(a), at resolution length $R=11.78$ we find just two distinct clusters I \& II (coloured red and blue).\\ For the case where a resolution length $R=R_{\max}/8=5.89$ is used, we find that since all four clusters are separated by a value greater than $R$, they are categorized as being distinct. Thus, our method obtains four distinct clusters (I-IV) at this resolution length, as displayed in Fig.~\ref{algo_example}(b) where each cluster is coloured distinctly. \section*{Detailed explanation of the nonlinear term in the model} In Eq. (2) of the main text, we introduce a nonlinear term $f(\mathbf{v}_j+\mathbf{v}_i)$, where $\mathbf{v}_i$ and $\mathbf{v}_j$ are respectively the velocities of agents $i$ and $j$. For the purpose of the current investigation, we consider the functional form $f(\mathbf{v}) := \mathbf{v}(1-|\mathbf{v}|)/(1+|\mathbf{v}|^\beta)$ with $\beta =3$. Note that if the field of view of agent $i$ is nonempty, i.e. $\Omega_{i}\neq\emptyset$, its velocity at time step $t+1$ is: \[ \mathbf{v}_i(t+1) = \mathbf{v}_i(t) + \alpha[\mathbf{v}_j(t)-\mathbf{v}_i(t) + f(\mathbf{v}_j(t)+\mathbf{v}_i(t))] \] For the functional form that we consider, we see that $f(\mathbf{v}_j+\mathbf{v}_i)$ vanishes at $|\mathbf{v}_j+\mathbf{v}_i| = 0$, $1$ and infinity, which implies that the velocity $\mathbf{v}_i(t+1) \simeq \mathbf{v}_i(t) + \alpha (\mathbf{v}_j(t)-\mathbf{v}_i(t))$ near these values.
The case $|\mathbf{v}_j+\mathbf{v}_i| = 0$ corresponds to a situation where the velocities of particles $i$ and $j$ have identical magnitudes and opposite directions. In this scenario, the resulting velocity update effectively prevents a direct collision. To understand the case $|\mathbf{v}_j+\mathbf{v}_i| = 1$, let us assume that $|\mathbf{v}_i+\mathbf{v}_j| = 1 + \epsilon$, where $|\epsilon| \ll 1$. In this situation, we see that \[ f(\mathbf{v}_i+\mathbf{v}_j) = \frac{(\mathbf{v}_i+\mathbf{v}_j)(1-(1+\epsilon))}{1+(1+\epsilon)^3} \simeq \frac{-\epsilon(\mathbf{v}_i+\mathbf{v}_j)}{2}\,. \] Substituting this expression into Eq. (2) of the main text, we find that the acceleration is \[ \mathbf{a}_i \simeq \alpha\left(1 - \frac{\epsilon}{2}\right)\mathbf{v}_j - \alpha\left(1 + \frac{\epsilon}{2}\right)\mathbf{v}_i\,. \] Using the velocity update expression from Eq. (1) of the main text, we see that $|\mathbf{v}_i|_{\epsilon \neq 0} < |\mathbf{v}_i|_{\epsilon=0}$ if $\epsilon >0$, and $|\mathbf{v}_i|_{\epsilon \neq 0} > |\mathbf{v}_i|_{\epsilon=0}$ if $\epsilon <0$. This implies that for $\epsilon > 0$, the agent slows down, whereas for $\epsilon < 0$, it moves faster. In other words, the nonlinear term $f(\mathbf{v}_j+\mathbf{v}_i)$ ensures that the agent's speed remains close to the specified mean value. \clearpage \section*{Snapshots of flocking patterns observed over a range of parameter values} Flocking patterns observed for $\alpha=0.01,0.05,0.1,0.5$, and over a range of $\theta_{\max}$ and $\sigma$, are displayed in Figs.~\ref{snap1}--\ref{snap4}. \begin{figure}[!ht] \centering \includegraphics{SI_Fig_01.pdf} \caption{ Snapshots of flocking patterns exhibited by the model for a system of $N = 10^{3}$ agents, obtained for an interaction strength $\alpha=0.01$, over a range of values of the mean interaction length $\sigma$ and maximum bearing angle $\theta_{\max}$.
The corresponding parameter space diagram from the main text is displayed in the bottom right panel. In this panel, we display (in log-scale) the dependence of the average angular momentum of the flock on $\sigma$ and $\theta_{\max}$. Each of the other $15$ panels displays flocking patterns observed for parameter values denoted by the corresponding numbered red marker on the parameter space diagram. The numbered solid bars in the lower left corner of these panels provide a measure of spatial distance in each case. The solid bar in the lower right corner of each panel indicates the extent of the corresponding resolution length $R$, which we use for our cluster-finding algorithm. } \label{snap1} \end{figure} \clearpage \begin{figure}[!ht] \centering \includegraphics{SI_Fig_05.pdf} \caption{ Snapshots of flocking patterns exhibited by the model for a system of $N = 10^{3}$ agents, obtained for an interaction strength $\alpha=0.05$, over a range of values of the mean interaction length $\sigma$ and maximum bearing angle $\theta_{\max}$. The corresponding parameter space diagram from the main text is displayed in the bottom right panel. In this panel, we display (in log-scale) the dependence of the average angular momentum of the flock on $\sigma$ and $\theta_{\max}$. Each of the other $15$ panels displays flocking patterns observed for parameter values denoted by the corresponding numbered red marker on the parameter space diagram. The numbered solid bars in the lower left corner of these panels provide a measure of spatial distance in each case. The solid bar in the lower right corner of each panel indicates the extent of the corresponding resolution length $R$, which we use for our cluster-finding algorithm.
} \label{snap2} \end{figure} \clearpage \begin{figure}[!ht] \centering \includegraphics{SI_Fig_10.pdf} \caption{ Snapshots of flocking patterns exhibited by the model for a system of $N = 10^{3}$ agents, obtained for an interaction strength $\alpha=0.1$, over a range of values of the mean interaction length $\sigma$ and maximum bearing angle $\theta_{\max}$. The corresponding parameter space diagram from the main text is displayed in the bottom right panel. In this panel, we display (in log-scale) the dependence of the average angular momentum of the flock on $\sigma$ and $\theta_{\max}$. Each of the other $15$ panels displays flocking patterns observed for parameter values denoted by the corresponding numbered red marker on the parameter space diagram. The numbered solid bars in the lower left corner of these panels provide a measure of spatial distance in each case. The solid bar in the lower right corner of each panel indicates the extent of the corresponding resolution length $R$, which we use for our cluster-finding algorithm. } \label{snap3} \end{figure} \clearpage \begin{figure}[!ht] \centering \includegraphics{SI_Fig_50.pdf} \caption{ Snapshots of flocking patterns exhibited by the model for a system of $N = 10^{3}$ agents, obtained for an interaction strength $\alpha=0.5$, over a range of values of the mean interaction length $\sigma$ and maximum bearing angle $\theta_{\max}$. The corresponding parameter space diagram from the main text is displayed in the bottom right panel. In this panel, we display (in log-scale) the dependence of the average angular momentum of the flock on $\sigma$ and $\theta_{\max}$. Each of the other $15$ panels displays flocking patterns observed for parameter values denoted by the corresponding numbered red marker on the parameter space diagram. The numbered solid bars in the lower left corner of these panels provide a measure of spatial distance in each case.
The solid bar in the lower right corner of each panel indicates the extent of the corresponding resolution length $R$, which we use for our cluster-finding algorithm. Note that the pattern in panel $1$ is classified as a single cluster because over $90\%$ of the agents belong to that cluster (see the algorithm for details). Moreover, while the snapshots of patterns in panels $2-4$ may appear reminiscent of the wriggling pattern, their dynamics are in fact qualitatively similar to the meandering pattern. } \label{snap4} \end{figure} \clearpage \section*{Description of the movies} The captions for the four movies are displayed below: \begin{itemize} \item \begin{verbatim} Movie_S1.mp4 \end{verbatim} Evolution of a system of $N=10^{3}$ agents moving in a wriggling pattern for the case $\sigma=5$, $\theta_{\max}=40$ and $\alpha=0.8$. The system is simulated over $2\times10^{4}$ time steps, starting from an initial condition where agents are distributed randomly over a small portion of the computational domain. Each frame of the simulation is separated by $50$ time steps. \item \begin{verbatim} Movie_S2.mp4 \end{verbatim} Evolution of a system of $N=10^{3}$ agents moving in a closed trail for the case $\sigma=3$, $\theta_{\max}=50$ and $\alpha=0.1$. The system is simulated over $2\times10^{4}$ time steps, starting from an initial condition where agents are distributed randomly over a small portion of the computational domain. Each frame of the simulation is separated by $50$ time steps. \item \begin{verbatim} Movie_S3.mp4 \end{verbatim} Evolution of a system of $N=10^{3}$ agents moving in a milling pattern for the case $\sigma=1$, $\theta_{\max}=20$ and $\alpha=0.025$. The system is simulated over $2\times10^{4}$ time steps, starting from an initial condition where agents are distributed randomly over a small portion of the computational domain. Each frame of the simulation is separated by $50$ time steps.
\item \begin{verbatim} Movie_S4.mp4 \end{verbatim} Evolution of a system of $N=10^{3}$ agents moving in a flock with a meandering center of mass for the case $\sigma=3$, $\theta_{\max}=15$ and $\alpha=0.02$. The system is simulated over $2\times10^{4}$ time steps, starting from an initial condition where agents are distributed randomly over a small portion of the computational domain. Each frame of the simulation is separated by $50$ time steps. \end{itemize} \end{document}
\section{Introduction} The question of robustness of a basis or frame is a fundamental problem in functional analysis and in many concrete applications. It has its historical origin in the work of Paley and Wiener~\cite{young80} who studied the perturbation of Fourier bases and was subsequently investigated in many contexts in complex analysis and harmonic analysis. Particularly fruitful was the study of the robustness of structured function systems, such as reproducing kernels, sets of sampling in a space of analytic functions, wavelets, or Gabor systems. In this paper we take a new look at the stability of Gabor frames and Gabor Riesz sequences with respect to general deformations of phase space. To be explicit, let us denote the time-frequency shift\ of a function $g\in L^2(\Rdst) $ along $z= (x,\xi ) \in \Rdst \times \Rdst \simeq \Rtdst $ by $$ \pi (z) g(t) = e^{2\pi i \xi \cdot t} g(t-x) \, . $$ For a fixed non-zero function $g \in {L^2(\Rdst)}$, usually called a ``window function'', and $\Lambda \subseteq {\mathbb{R}^{2d}}$, a Gabor system is a structured function system of the form \begin{align*} \mathcal{G}(g,\Lambda) = \sett{\pi(\lambda)g := e^{2\pi i \xi \cdot}g(\cdot-x): \lambda = (x,\xi) \in \Lambda}\, . \end{align*} The index set $\Lambda $ is a discrete subset of the phase space $\Rtdst $ and $\lambda $ indicates the localization of a time-frequency shift\ $\pi (\lambda )g$ in phase space. The Gabor system $\mathcal{G}(g,\Lambda)$ is called a \emph{frame} (a Gabor frame), if \begin{align*} A \norm{f}_2^2 \leq \sum_{\lambda \in \Lambda} \abs{\ip{f}{\pi(\lambda)g}}^2 \leq B \norm{f}_2^2, \qquad f \in {L^2(\Rdst)}, \end{align*} for some constants $0<A\leq B < \infty$. In this case every function $f \in {L^2(\Rdst)}$ possesses an expansion $f=\sum_{\lambda} c_\lambda \pi(\lambda)g$, for some coefficient sequence $c \in \ell^2(\Lambda)$ such that $\norm{f}_2 \asymp \norm{c}_2$. 
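As a quick numerical illustration of the time-frequency shifts defined above (an editorial sketch, not part of the formal development; the Gaussian window, the quadrature grid, and the shift parameters are ad hoc choices), one can check by a Riemann sum that $\pi(z)$ preserves the $L^2$-norm:

```python
import numpy as np

# (pi(x, xi) g)(t) = exp(2*pi*i*xi*t) * g(t - x), here in dimension d = 1.
def tf_shift(g, x, xi):
    return lambda t: np.exp(2j * np.pi * xi * t) * g(t - x)

# L^2-normalized Gaussian window: the integral of |g|^2 equals 1.
g = lambda t: 2 ** 0.25 * np.exp(-np.pi * t ** 2)

dt = 1e-3
t = np.arange(-8.0, 8.0, dt)          # the Gaussian is negligible outside this window
l2_norm = lambda f: np.sqrt(np.sum(np.abs(f(t)) ** 2) * dt)

print(l2_norm(g))                        # ~ 1.0
print(l2_norm(tf_shift(g, 1.3, -0.7)))   # ~ 1.0: time-frequency shifts are unitary
```

The modulation $e^{2\pi i \xi\cdot t}$ has modulus one and the translation only moves the window, so both norms agree up to quadrature error.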
The Gabor system $\mathcal{G}(g,\Lambda)$ is called a \emph{Riesz sequence} (or Riesz basis for its span), if $\norm{\sum_{\lambda} c_\lambda \pi(\lambda)g}_2 \asymp \norm{c}_2$ for all $c \in \ell^2(\Lambda)$. For meaningful statements about Gabor frames it is usually assumed that \begin{align*} \int_{{\mathbb{R}^{2d}}} \abs{\ip{g}{\pi(z)g}} dz < \infty. \end{align*} This condition describes the modulation space\ $M^1(\Rdst )$, also known as the Feichtinger algebra. Every Schwartz function satisfies this condition. In this paper we study the stability of the spanning properties of $\mathcal{G}(g,\Lambda)$ with respect to a set $\Lambda \subseteq \Rtdst $. If $\Lambda '$ is ``close enough'' to $\Lambda $, then we expect $\mathcal{G} (g,\Lambda ')$ to possess the same spanning properties. In this context we distinguish perturbations and deformations. Whereas a perturbation is local and $\Lambda '$ is obtained by slightly moving every $\lambda \in \Lambda $, a deformation is a global transformation of $\Rtdst $. The existing literature is rich in perturbation results, but not much is known about deformations of Gabor frames. (a) \emph{Perturbation or jitter error:} The jitter describes small pointwise perturbations of $\Lambda $. For every Gabor frame $\mathcal{G}(g,\Lambda) $ with $g\in M^1(\Rdst )$ there exists a maximal jitter $\epsilon >0$ with the following property: if $\sup _{\lambda \in \Lambda } \inf _{\lambda ' \in \Lambda '} |\lambda - \lambda '|<\epsilon$ and $\sup _{\lambda' \in \Lambda' } \inf _{\lambda \in \Lambda} |\lambda - \lambda '| < \epsilon $, then $\mathcal{G} (g, \Lambda ')$ is also a frame. See~\cite{fegr89,gr91} for a general result in coorbit theory, the recent paper~\cite{FS06}, and Christensen's book on frames~\cite{chr03} for more details and references. Conceptually the jitter error is easy to understand, because the frame operator is continuous in the operator norm with respect to the jitter error. 
The proof techniques go back to Paley and Wiener~\cite{young80} and amount to norm estimates for the frame operator. (b) \emph{Linear deformations:} The fundamental deformation result is due to Feichtinger and Kaiblinger~\cite{feka04}. Let $g\in M^1(\Rdst )$, $\Lambda \subseteq \Rtdst $ be a lattice, and assume that $\mathcal{G}(g,\Lambda) $ is a frame for $L^2(\Rdst) $. Then there exists $\epsilon >0$ with the following property: if $A$ is a $2d\times 2d$-matrix with $\|A-\mathrm{I}\| <\epsilon $ (in some given matrix norm), then $\mathcal{G} (g, A\Lambda )$ is again a frame. Only recently, this result was generalized to non-uniform Gabor frames~\cite{asfeka13}. The proof for the case of a lattice~\cite{feka04} was based on the duality theory of Gabor frames, the proof for non-uniform Gabor frames in~\cite{asfeka13} relies on the stability under chirps of the Sj\"ostrand symbol class for pseudodifferential operators, but this technique does not seem to adapt to nonlinear deformations. Compared to perturbations, (linear) deformations of Gabor frames are much more difficult to understand, because the frame operator no longer depends (norm-) continuously on $\Lambda $ and a deformation may change the density of $\Lambda $ (which may affect significantly the spanning properties of $\mathcal{G} (g, \Lambda ) $). Perhaps the main difficulty is to find a suitable notion for deformations that preserves Gabor frames. Except for linear deformations and some preliminary observations in ~\cite{CGN12,dG13} this question is simply unexplored. In this paper we introduce a general concept of deformation, which we call \emph{Lipschitz} deformations. Lipschitz deformations include both the jitter error and linear deformations as a special case. The precise definition is somewhat technical and will be given in Section~6. For simplicity we formulate a representative special case of our main result. 
\begin{theo} \label{th_main_intro} Let $g \in M^1({\mathbb{R}^d})$ and $\Lambda \subseteq {\mathbb{R}^{2d}}$. Let $T_n: {\mathbb{R}^{2d}} \to {\mathbb{R}^{2d}}$ for $n\in {\mathbb{N}} $ be a sequence of differentiable maps with Jacobian $DT_n$. Assume that \begin{align} \label{lipdefintro} \sup_{z\in{\mathbb{R}^{2d}}} \abs{DT_n(z)-I} \longrightarrow 0 \quad \text{ as } n \to \infty \, . \end{align} Then the following holds. \begin{itemize} \item[(a)] If $\mathcal{G}(g,\Lambda)$ is a frame, then $\mathcal{G}(g,T_n(\Lambda) )$ is a frame for all sufficiently large $n$. \item[(b)] If $\mathcal{G}(g,\Lambda)$ is a Riesz sequence, then $\mathcal{G}(g,T_n(\Lambda ))$ is a Riesz sequence for all sufficiently large $n$. \end{itemize} \end{theo} We would like to emphasize that Theorem ~\ref{th_main_intro} is quite general. It deals with \emph{non-uniform} Gabor frames (not just lattices) under \emph{nonlinear} deformations. In particular, Theorem~\ref{th_main_intro} implies the main results of~\cite{feka04, asfeka13}. The counterpart for deformations of Gabor Riesz sequences (item (b)) is new even for linear deformations. Condition~\eqref{lipdefintro} roughly states that the mutual distances between the points of $\Lambda$ are preserved locally under the deformation $T_n$. Our main insight was that the frame property of a deformed Gabor system $\mathcal{G} (g, T_n(\Lambda ))$ does not depend so much on the position or velocity of the sequences $(T_n(\lambda ))_{ n\in {\mathbb{N}} }$ for $\lambda \in \Lambda $, but on the relative distances $|T_n(\lambda ) - T_n (\lambda ') |$ for $\lambda ,\lambda ' \in \Lambda $. For an illustration see Example~\ref{ex_go_wrong}. As an application of Theorem~\ref{th_main_intro}, we derive a \emph{non-uniform Balian-Low theorem} (BLT). 
For this, we recall that the lower Beurling density of a set $\Lambda \subseteq \Rtdst $ is given by $$ D^-(\Lambda ) = \lim _{R\to \infty } \inf _{z\in \Rtdst } \frac{\# \, \Lambda \cap B_R(z)}{\mathrm{vol}(B_R(0))} \, , $$ and likewise the upper Beurling density $D^+(\Lambda )$ (where the infimum is replaced by a supremum). The fundamental density theorem of Ramanathan and Steger~\cite{RS95} asserts that if $\mathcal{G}(g,\Lambda) $ is a frame then $D^-(\Lambda )\geq 1$. Analogously, if $\mathcal{G}(g,\Lambda) $ is a Riesz sequence, then $D^+(\Lambda ) \leq 1$~\cite{bacahela06-1}. The so-called Balian-Low theorem (BLT) is a stronger version of the density theorem and asserts that for ``nice'' windows $g$ the inequalities in the density theorem are strict. For the case when $g\in M^1(\Rdst )$ and $\Lambda $ is a lattice, the Balian-Low theorem is a consequence of~\cite{feka04}. A Balian-Low theorem for non-uniform Gabor frames was open for a long time and was proved only recently by Ascensi, Feichtinger, and Kaiblinger~\cite{asfeka13}. The corresponding statement for Gabor Riesz sequences was open and is settled here as an application of our deformation theorem. We refer to Heil's detailed survey~\cite{heil07} of the numerous contributions to the density theorem for Gabor frames after~\cite{RS95} and to~\cite{CP06} for the Balian-Low theorem. As an immediate consequence of Theorem~\ref{th_main_intro} we obtain the following version of the Balian-Low theorem for non-uniform Gabor systems. \begin{coro}[Non-uniform Balian-Low Theorem] Assume that $g\in M^1(\Rdst )$. (a) If $\mathcal{G}(g,\Lambda) $ is a frame for $L^2(\Rdst) $, then $D^-(\Lambda )>1$. (b) If $\mathcal{G}(g,\Lambda) $ is a Riesz sequence in $L^2(\Rdst) $, then $D^+(\Lambda )<1$. \end{coro} \begin{proof} We only prove the new statement (b); part (a) is similar~\cite{asfeka13}. Assume $\mathcal{G}(g,\Lambda) $ is a Riesz sequence, but that $D^+(\Lambda ) =1$.
Let $\alpha _n <1$ such that $\lim _{n\to \infty } \alpha _n = 1$ and set $T_n z = \alpha _n z$. Then the sequence $T_n$ satisfies condition \eqref{lipdefintro}. On the one hand, we have $D^+(\alpha_n\Lambda) = \alpha _n ^{-2d} >1$, and on the other hand, Theorem~\ref{th_main_intro} implies that $\mathcal{G} (g,\alpha _n \Lambda )$ is a Riesz sequence for $n$ large enough. This contradicts the density theorem, and thus the assumption $D^+(\Lambda ) =1$ cannot hold. \end{proof} \vspace{3 mm} The proof of Theorem \ref{th_main_intro} does not come easily and is technical. It combines methods from the theory of localized frames~\cite{fogr05,gr04}, the stability of operators on $\ell ^p$-spaces~\cite{albakr08,sj95} and weak limit techniques in the style of Beurling~\cite{be89}. We say that $\Gamma \subseteq \Rtdst $ is a weak limit of translates of $\Lambda \subseteq \Rtdst$, if there exists a sequence $(z_n)_{n\in {\mathbb{N}} } \subseteq \Rtdst $, such that $\Lambda+z_n\to \Gamma$ uniformly on compact sets. See Section~4 for the precise definition and more details on weak limits. We will prove the following characterization of non-uniform Gabor frames ``without inequalities''. \begin{theo} \label{th-char-frame} Assume that $g\in M^1(\Rdst ) $ and $\Lambda \subseteq \Rtdst$. Then $\mathcal{G}(g,\Lambda) $ is a frame for $L^2(\Rdst) $, if and only if\ for every weak limit $\Gamma $ of $\Lambda $ the map $f\mapsto \big( \langle f, \pi (\gamma )g\rangle \big) _{\gamma \in \Gamma }$ is one-to-one on $(M^1(\Rdst ))^*$. \end{theo} The full statement with five equivalent conditions characterizing a non-uniform Gabor frame will be given in Section~5, Theorem~\ref{th_char_frame}. An analogous characterization of Gabor Riesz sequences with weak limits is stated in Theorem~\ref{th_char_riesz}. For the special case when $\Lambda $ is a lattice, the above characterization of Gabor frames without inequalities was already proved in~\cite{gr07-2}.
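The effect of scaling on the Beurling density is easy to check numerically. The following Python sketch (an editorial illustration with ad hoc truncation parameters, not part of the paper) estimates the density of a scaled lattice $\alpha\mathbb{Z}^2$ and confirms that a contraction by a factor $\alpha<1$ raises the density by $\alpha^{-2d}$ (here $2d=2$):

```python
import numpy as np

def density_estimate(points, R):
    """#(Lambda intersected with B_R(0)) / vol(B_R(0)) in the plane (2d = 2)."""
    inside = np.sum(np.linalg.norm(points, axis=1) <= R)
    return inside / (np.pi * R ** 2)

# The lattice Z^2, truncated to a box much larger than the counting ball.
n = np.arange(-200, 201)
ii, jj = np.meshgrid(n, n)
lattice = np.column_stack([ii.ravel(), jj.ravel()]).astype(float)

alpha = 0.9                                        # contraction: alpha < 1
print(density_estimate(lattice, R=100.0))          # ~ 1.00
print(density_estimate(alpha * lattice, R=100.0))  # ~ alpha**(-2) ~ 1.23
```

The count of lattice points in a disc of radius $R$ is $\pi R^2 + O(R)$ (Gauss circle problem), so the finite-$R$ estimates already match $\alpha^{-2d}$ to a few decimals.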
In the lattice case, the Gabor system $\mathcal{G}(g,\Lambda) $ possesses additional invariance properties that facilitate the application of methods from operator algebras. The generalization of \cite{gr07-2} to non-uniform Gabor systems was rather surprising for us and demands completely different methods. To make Theorem~\ref{th-char-frame} more plausible, we make the analogy with Beurling's results on balayage in Paley-Wiener space. Beurling~\cite{be89} characterized the stability of sampling in the Paley-Wiener space of bandlimited functions $\{ f\in L^\infty (\Rdst ): \operatorname{supp} \hat{f} \subseteq S\}$ for a compact spectrum $S\subseteq \Rdst $ in terms of sets of uniqueness for this space. It is well-known that the frame property of a Gabor system $\mathcal{G} (g,\Lambda ) $ is equivalent to a sampling theorem for an associated transform. Precisely, let $z\in \Rtdst \to V_gf(z) = \langle f, \pi (z)g\rangle $ be the short-time Fourier transform, for fixed non-zero $g\in M^1(\Rdst )$ and $f\in (M^1(\Rdst ))^*$. Then $\mathcal{G}(g,\Lambda) $ is a frame, if and only if\ $\Lambda $ is a set of sampling for the short-time Fourier transform\ on $(M^1)^*$. In this light, Theorem~\ref{th-char-frame} is the precise analog of Beurling's theorem for bandlimited functions. One may therefore try to adapt Beurling's methods to Gabor frames and the sampling of short-time Fourier transforms. Beurling's ideas have been used for many sampling problems in complex analysis following the pioneering work of Seip on the Fock space \cite{Seip92a},\cite{Seip92b} and the Bergman space \cite{Seip}, see also \cite{BMO} for a survey. A remarkable fact in Theorem~\ref{th-char-frame} is the absence of a complex structure (except when $g$ is a Gaussian). This explains why we have to use the machinery of localized frames and the stability of operators in our proof. 
We mention that Beurling's ideas have been transferred to a few other contexts outside complex analysis, such as sampling theorems with spherical harmonics on the sphere~\cite{MarOrt}, or, more generally, with eigenvectors of the Laplace operator in Riemannian manifolds~\cite{OrtPri}. This article is organized as follows: In Section~2 we collect the main definitions from time-frequency analysis. In Section~3 we discuss time-frequency\ molecules and their $\ell ^p$-stability. Section~4 is devoted to the details of Beurling's notion of weak convergence of sets. In Section~5 we state and prove the full characterization of non-uniform Gabor frames and Riesz sequences without inequalities. In Section~6 we introduce the general concept of a Lipschitz deformation of a set and prove the main properties. Finally, in Section~7 we state and prove the main result of this paper, the general deformation result. The appendix provides the technical proof of the stability of time-frequency\ molecules. \section{Preliminaries} \subsection{General notation} Throughout the article, $\abs{x} := (\abs{x_1}^2+\ldots+\abs{x_d}^2)^{1/2}$ denotes the Euclidean norm, and $B_r(x)$ denotes the Euclidean ball of radius $r$ centered at $x$. Given two functions $f,g:X \to [0,\infty)$, we say that $f \lesssim g$ if there exists a constant $C>0$ such that $f(x) \leq C g(x)$, for all $x \in X$. We say that $f \asymp g$ if $f \lesssim g$ and $g \lesssim f$. \subsection{Sets of points} A set $\Lambda \subseteq {\mathbb{R}^d}$ is called \emph{relatively separated} if \begin{align} \label{eq_rel} \mathop{\mathrm{rel}}(\Lambda) := \sup \{ \#(\Lambda \cap B_1(x)) : x \in {\mathbb{R}^d} \} < \infty. \end{align} It is called \emph{separated} if \begin{align} \label{eq_sep} \mathop{\mathrm{sep}}(\Lambda) := \inf \sett{\abs{\lambda-\lambda'}: \lambda \not = \lambda' \in \Lambda} > 0. \end{align} We say that $\Lambda$ is $\delta$-separated if $\mathop{\mathrm{sep}}(\Lambda) \geq \delta$.
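Both quantities are straightforward to compute for concrete point sets. The Python sketch below (an editorial illustration on a hypothetical four-point set; the finite grid of centers is an ad hoc discretization of the supremum over $x$, so it only yields a lower bound for $\mathop{\mathrm{rel}}(\Lambda)$) evaluates $\mathop{\mathrm{sep}}(\Lambda)$ exactly and estimates $\mathop{\mathrm{rel}}(\Lambda)$:

```python
import numpy as np

def separation(points):
    """sep(Lambda): infimum of |lambda - lambda'| over distinct pairs."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return d[np.triu_indices(len(points), k=1)].min()

def relative_separation(points, centers):
    """Lower bound for rel(Lambda) = sup_x #(Lambda in B_1(x)),
    obtained by maximizing over a finite grid of centers x."""
    d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=-1)
    return int((d < 1.0).sum(axis=1).max())

pts = np.array([[0.0, 0.0], [0.5, 0.0], [3.0, 0.0], [3.0, 0.4]])
xs = np.arange(-1.0, 5.0, 0.05)
grid = np.array([(a, b) for a in xs for b in xs])

print(separation(pts))                 # 0.4, attained by (3, 0) and (3, 0.4)
print(relative_separation(pts, grid))  # 2: some unit ball contains two points
```

Since the two clusters are more than two units apart, no unit ball can capture three of these points, so the reported value $2$ is in fact exact here.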
A separated set is relatively separated and \begin{align} \label{eq_sep_relsep} \mathop{\mathrm{rel}}(\Lambda) \lesssim \mathop{\mathrm{sep}}(\Lambda)^{-d}, \qquad \Lambda \subseteq {\mathbb{R}^d}. \end{align} Relatively separated sets are finite unions of separated sets. The \emph{hole} of a set $\Lambda \subseteq {\mathbb{R}^d}$ is defined as \begin{align} \label{eq_gap} \rho(\Lambda) := \sup_{x\in{\mathbb{R}^d}} \inf_{\lambda \in \Lambda} \abs{x-\lambda}. \end{align} A sequence $\Lambda$ is called \emph{relatively dense} if $\rho(\Lambda) < \infty$. Equivalently, $\Lambda$ is relatively dense if there exists $R>0$ such that \begin{align*} {\mathbb{R}^d} = \bigcup_{\lambda \in \Lambda} B_R(\lambda). \end{align*} In terms of the Beurling densities defined in the introduction, a set $\Lambda$ is relatively separated if and only if $D^{+}(\Lambda)<\infty$ and it is relatively dense if and only if $D^{-}(\Lambda)>0$. \subsection{Amalgam spaces} \label{sec_am} The \emph{amalgam space} $W(L^\infty,L^1)({\mathbb{R}^d})$ consists of all functions $f \in L^\infty({\mathbb{R}^d})$ such that \begin{align*} \norm{f}_{W(L^\infty,L^1)} := \int_{{\mathbb{R}^d}} \norm{f}_{L^\infty(B_1(x))} dx \asymp \sum_{k \in {\mathbb{Z}^d}} \norm{f}_{L^\infty([0,1]^d+k)} <\infty. \end{align*} The space $C_0({\mathbb{R}^d})$ consists of all continuous functions $f: {\mathbb{R}^d} \to {\mathbb{C}}$ such that $\lim_{x \longrightarrow \infty} f(x) = 0$, consequently the (closed) subspace of $W(L^\infty,L^1)({\mathbb{R}^d})$ consisting of continuous functions is $W(C_0,L^1)({\mathbb{R}^d})$. This space will be used as a convenient collection of test functions. We will repeatedly use the following sampling inequality: \emph{Assume that $f\in W(C_0,L^1)({\mathbb{R}^d})$ and $\Lambda \subseteq \Rdst $ is relatively separated, then } \begin{equation} \label{eq:c7} \sum _{\lambda \in \Lambda } |f(\lambda )| \lesssim \mathop{\mathrm{rel}} (\Lambda ) \, \| f\|_{W(L^\infty , L^1)}\, . 
\end{equation} The dual space of $W(C_0,L^1)({\mathbb{R}^d})$ will be denoted ${W(\mathcal{M}, L^\infty)}({\mathbb{R}^d})$. It consists of all the complex-valued Borel measures $\mu: \mathcal{B}({\mathbb{R}^d}) \to {\mathbb{C}}$ such that \begin{align*} \norm{\mu}_{W(\mathcal{M}, L^\infty)} := \sup_{x \in {\mathbb{R}^d}} \norm{\mu}_{B_1(x)} = \sup_{x \in {\mathbb{R}^d}} \abs{\mu}(B_1(x)) < \infty. \end{align*} For the general theory of Wiener amalgam spaces we refer to \cite{fe83}. \subsection{Time-frequency analysis} The \emph{time-frequency shifts} of a function $f: {\mathbb{R}^d} \to {\mathbb{C}}$ are \begin{align*} \pi(z)f(t) := e^{2\pi i \xi t} f(t-x), \qquad z=(x,\xi) \in {\mathbb{R}^d}\times {\mathbb{R}^d}, t \in {\mathbb{R}^d}. \end{align*} These operators satisfy the commutation relations \begin{align} \label{eq_comp_tf} \pi(x,\xi) \pi(x',\xi') = e^{-2\pi i \xi' x} \pi(x+x', \xi+\xi'), \qquad (x,\xi), (x',\xi') \in {\mathbb{R}^d} \times {\mathbb{R}^d}. \end{align} Given a non-zero Schwartz function $g \in \mathcal{S}({\mathbb{R}^d})$, the \emph{short-time Fourier transform} of a distribution $f \in \mathcal{S}'({\mathbb{R}^d})$ with respect to the window $g$ is defined as \begin{align} \label{eq_def_stft} V_g f(z) := \ip{f}{\pi(z) g}, \qquad z \in {\mathbb{R}^{2d}}. \end{align} For $\norm{g}_2=1$ the short-time Fourier transform is an isometry: \begin{align} \label{eq_stft_l2} \norm{V_g f}_{{L^2(\Rtdst)}}=\norm{f}_{{L^2(\Rdst)}}, \qquad f \in {L^2(\Rdst)}. \end{align} The commutation rule \eqref{eq_comp_tf} implies the covariance property of the short-time Fourier transform: \begin{align*} V_g (\pi(x,\xi) f)(x',\xi') = e^{-2\pi i x(\xi '-\xi )} V_g f(x'-x,\xi'-\xi), \qquad (x,\xi), (x',\xi') \in {\mathbb{R}^d} \times {\mathbb{R}^d}. \end{align*} In particular, \begin{align} \label{eq_tf_stft} \abs{V_g \pi(z) f} = \abs{V_g f(\cdot-z)}, \qquad z \in {\mathbb{R}^{2d}}. 
\end{align} We then define the \emph{modulation spaces} as follows: fix a non-zero $g\in \mathcal{S} (\Rdst )$ and let \begin{align} \label{eq_def_mp} M^p({\mathbb{R}^d}) := \set{f \in \mathcal{S}'({\mathbb{R}^d})}{V_g f \in L^p({\mathbb{R}^{2d}})}, \qquad 1 \leq p \leq \infty, \end{align} endowed with the norm $\norm{f}_{M^p} := \norm{V_g f}_{L^p}$. Different choices of non-zero windows $g \in \mathcal{S}({\mathbb{R}^d})$ yield the same space with equivalent norms; see \cite{fe89}. We note that for $g\in M^1(\Rdst )$ and $f\in M^p(\Rdst )$, $1\leq p \leq \infty $, the short-time Fourier transform $V_gf$ is a continuous function; we may therefore argue safely with the pointwise values of $V_gf$. The space $M^1({\mathbb{R}^d})$, known as the Feichtinger algebra, plays a central role. It can also be characterized as \begin{align*} M^1({\mathbb{R}^d}) := \set{f \in L^2({\mathbb{R}^d})}{V_f f \in L^1({\mathbb{R}^{2d}})}. \end{align*} The modulation space\ $M^0({\mathbb{R}^d})$ is defined as the closure of the Schwartz-class with respect to the norm $\| \cdot \|_{M^\infty }$. Then $M^0(\Rdst )$ is a closed subspace of $M^\infty({\mathbb{R}^d})$ and can also be characterized as \begin{align*} M^0({\mathbb{R}^d}) = \set{f \in M^\infty({\mathbb{R}^d})}{V_g f \in C_0({\mathbb{R}^{2d}})}. \end{align*} The duality of modulation spaces is similar to that of sequence spaces; we have $M^0(\Rdst )^* = M^1(\Rdst )$ and $M^1(\Rdst )^* = M^\infty (\Rdst )$ with respect to the duality $\langle f, h \rangle := \langle V_g f , V_g h \rangle $. In this article we consider a fixed function $g \in M^1({\mathbb{R}^d})$ and will be mostly concerned with $M^1({\mathbb{R}^d})$, its dual space $M^\infty({\mathbb{R}^d})$, and $M^2({\mathbb{R}^d})=L^2({\mathbb{R}^d})$. The weak* topology in $M^\infty({\mathbb{R}^d})$ will be denoted by $\sigma(M^\infty,M^1)$ and the weak* topology on $M^1({\mathbb{R}^d})$ by $\sigma(M^1,M^0)$.
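Before continuing, we record a quick numerical sanity check of the commutation relation \eqref{eq_comp_tf} (an editorial sketch; the test function and the shift parameters are arbitrary). Both sides are evaluated pointwise in dimension $d=1$:

```python
import numpy as np

def tf_shift(x, xi):
    """pi(x, xi): f -> exp(2*pi*i*xi*t) f(t - x), acting on callables (d = 1)."""
    return lambda f: (lambda t: np.exp(2j * np.pi * xi * t) * f(t - x))

f = lambda t: np.exp(-np.pi * t ** 2)        # an arbitrary test function
(x, xi), (xp, xip) = (0.3, -1.2), (0.8, 0.5)

t = np.linspace(-5.0, 5.0, 101)
# pi(x, xi) pi(x', xi') f  versus  exp(-2*pi*i*xi'*x) pi(x+x', xi+xi') f:
lhs = tf_shift(x, xi)(tf_shift(xp, xip)(f))(t)
rhs = np.exp(-2j * np.pi * xip * x) * tf_shift(x + xp, xi + xip)(f)(t)
print(np.max(np.abs(lhs - rhs)))             # ~ 0 up to machine precision
```

The identity is exact, so the discrepancy is pure floating-point rounding.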
Hence, a sequence $\sett{f_k:k \geq 1} \subseteq M^\infty({\mathbb{R}^d})$ converges to $f \in M^\infty({\mathbb{R}^d})$ in $\sigma(M^\infty,M^1)$ if and only if for every $h \in M^1({\mathbb{R}^d})$: $\ip{f_k}{h} \longrightarrow \ip{f}{h}$. We mention the following facts that will be used repeatedly (see for example \cite[Theorem 4.1]{fegr89} and \cite[Proposition 12.1.11]{gr01}). \begin{lemma} \label{lemma_stft} Let $g \in M^1({\mathbb{R}^d})$ be nonzero. Then the following hold true. \begin{itemize} \item[(a)] If $f \in M^1({\mathbb{R}^d})$, then $V_g f \in W(C_0,L^1)({\mathbb{R}^{2d}})$. \item[(b)] Let $\sett{f_k:k \geq 1} \subseteq M^\infty({\mathbb{R}^d})$ be a bounded sequence and $f \in M^\infty({\mathbb{R}^d})$. Then $f_k \longrightarrow f$ in $\sigma(M^\infty,M^1)$ if and only if $V_g f_k \longrightarrow V_g f$ uniformly on compact sets. \item[(c)] Let $\sett{f_k:k \geq 1} \subseteq M^1({\mathbb{R}^d})$ be a bounded sequence and $f \in M^1({\mathbb{R}^d})$. Then $f_k \longrightarrow f$ in $\sigma(M^1,M^0)$ if and only if $V_g f_k \longrightarrow V_g f$ uniformly on compact sets. \end{itemize} \end{lemma} In particular, if $f_n \to f $ in $\sigma (M^\infty , M^1)$ and $z_n \to z \in \Rtdst $, then $V_gf_n (z_n) \to V_gf(z)$. \subsection{Analysis and synthesis maps} \label{sec_maps} Given $g \in M^1({\mathbb{R}^d})$ and a relatively separated set $\Lambda \subseteq {\mathbb{R}^{2d}}$, consider the \emph{analysis operator} and the \emph{synthesis operator} that are formally defined as \begin{align*} &C_{g, \Lambda} f := \left( \ip{f}{\pi(\lambda) g} \right)_{\lambda \in \Lambda}, \qquad f \in M^\infty({\mathbb{R}^d}), \\ &C^*_{g, \Lambda} c := \sum_{\lambda \in \Lambda} c_\lambda \pi(\lambda) g, \qquad c \in \ell^\infty(\Lambda). \end{align*} These maps are bounded between $M^p$ and $\ell^p$ spaces~\cite[Cor. 
12.1.12]{gr01} with estimates \begin{align*} &\norm{C_{g, \Lambda} f}_{\ell^p} \lesssim \mathop{\mathrm{rel}}(\Lambda) \norm{g}_{M^1} \norm{f}_{M^p}, \\ &\norm{C^*_{g, \Lambda} c}_{M^p} \lesssim \mathop{\mathrm{rel}}(\Lambda) \norm{g}_{M^1} \norm{c}_{\ell^p}. \end{align*} The implicit constants in the last estimates are independent of $p\in [1,\infty ]$. For $z=(x,\xi) \in {\mathbb{R}^{2d}}$, the \emph{twisted shift} is the operator $\kappa(z): \ell ^\infty (\Lambda ) \to \ell^\infty (\Lambda+z ) $ given by \begin{align*} (\kappa(z) c)_{\lambda+z} := e^{-2\pi i x \lambda_2} c_\lambda, \qquad \lambda=(\lambda_1,\lambda_2) \in \Lambda. \end{align*} As a consequence of the commutation relations~\eqref{eq_comp_tf}, the analysis and synthesis operators satisfy the covariance property \begin{align} \label{eq_cov_c} \pi(z) C^*_{g,\Lambda}=C^*_{g,\Lambda+z} \kappa(z) \mbox{ and } e^{2\pi i x \xi}C_{g,\Lambda}\pi(-z)=e^{-2\pi i x \xi}\kappa(-z)C_{g,\Lambda+z} \, \end{align} for $z=(x,\xi)\in{\mathbb{R}^d}\times{\mathbb{R}^d}$. A Gabor system $\mathcal{G}(g,\Lambda)$ is a \emph{frame} if and only if $C_{g, \Lambda}: {L^2(\Rdst)} \to \ell^2(\Lambda)$ is bounded below, and $\mathcal{G}(g,\Lambda)$ is a \emph{Riesz sequence} if and only if $C^*_{g, \Lambda}: \ell^2(\Lambda) \to {L^2(\Rdst)}$ is bounded below. As the following lemma shows, each of these conditions implies a restriction of the geometry of the set $\Lambda$. \begin{lemma} Let $g \in {L^2(\Rdst)}$ and let $\Lambda \subseteq {\mathbb{R}^{2d}}$ be a set. Then the following holds. \label{lemma_set_must_be} \begin{itemize} \item[(a)] If $\mathcal{G}(g,\Lambda)$ is a frame, then $\Lambda$ is relatively separated and relatively dense. \item[(b)] If $\mathcal{G}(g,\Lambda)$ is a Riesz sequence, then $\Lambda$ is separated. \end{itemize} \end{lemma} \begin{proof} For part (a) see for example \cite[Theorem 1.1]{chdehe99}. For part (b), suppose that $\Lambda$ is not separated. 
Then there exist two sequences $\sett{\lambda_n: n \geq 1}, \sett{\gamma_n: n \geq 1} \subseteq \Lambda$ with $\lambda_n \not= \gamma_n$ such that $\abs{\lambda_n-\gamma_n} \longrightarrow 0$. Hence we derive the following contradiction: $\sqrt{2}=\norm{\delta_{\lambda_n}-\delta_{\gamma_n}}_{\ell^2(\Lambda)} \asymp \norm{\pi(\lambda_n)g-\pi(\gamma_n)g}_{L^2({\mathbb{R}^d})} \longrightarrow 0$. \end{proof} We extend the previous terminology to other values of $p \in [1,\infty]$. We say that $\mathcal{G}(g,\Lambda)$ is a $p$-frame for $M^p({\mathbb{R}^d})$ if $C_{g, \Lambda}: M^p({\mathbb{R}^d}) \to \ell^p(\Lambda)$ is bounded below, and that $\mathcal{G}(g,\Lambda)$ is a $p$-Riesz sequence within $M^p({\mathbb{R}^d})$ if $C^*_{g, \Lambda}: \ell^p(\Lambda) \to M^p({\mathbb{R}^d})$ is bounded below. Since boundedness below and left invertibility are different properties outside the context of Hilbert spaces, there are other reasonable definitions of frames and Riesz sequences for $M^p$. This is largely immaterial for Gabor frames with $g \in M^1$, since the theory of localized frames asserts that when such a system is a frame for $L^2$, then it is a frame for all $M^p$ and moreover the operator $C_{g, \Lambda}:M^p \to \ell^p$ is left invertible \cite{gr04, fogr05, bacahela06, bacahela06-1}. Similar statements apply to Riesz sequences. \section{Stability of time-frequency molecules} We say that $\sett{f_\lambda: \lambda \in\Lambda} \subseteq {L^2(\Rdst)}$ is a set of time-frequency molecules, if $\Lambda \subseteq {\mathbb{R}^{2d}}$ is a relatively separated set and there exists a non-zero $g\in M^1(\Rdst )$ and an envelope function $\Phi \in W(L^\infty,L^1)({\mathbb{R}^{2d}})$ such that \begin{align} \label{eq_env_mol} \abs{V_g f_\lambda (z)} \leq \Phi(z-\lambda), \qquad \mbox{a.e. } z \in {\mathbb{R}^d}, \lambda \in \Lambda . 
\end{align} If \eqref{eq_env_mol} holds for some $g\in M^1(\Rdst )$, then it holds for all $g\in M^1(\Rdst )$ (with an envelope depending on $g$). \begin{rem} \label{rem_gab_mol} {\rm Every Gabor system $\mathcal{G}(g,\Lambda)$ with window $g \in M^1({\mathbb{R}^d})$ and a relatively separated set $\Lambda \subseteq {\mathbb{R}^{2d}}$ is a set of time-frequency molecules. In this case the envelope can be chosen to be $\Phi = \abs{V_gg}$, which belongs to $W(L^\infty,L^1)({\mathbb{R}^{2d}})$ by Lemma \ref{lemma_stft}.} \end{rem} The following stability result will be one of our main technical tools. \begin{theo} \label{th_main_mol} Let $\sett{f_\lambda: \lambda \in \Lambda}$ be a set of time-frequency molecules. Then the following holds. \begin{itemize} \item[(a)] Assume that \begin{align} \label{eq_main_1} \norm{f}_{M^p} \asymp \norm{(\ip{f}{f_\lambda})_{\lambda\in\Lambda}}_p, \qquad \forall \, f \in M^p({\mathbb{R}^d}), \end{align} holds for some $1 \leq p \leq \infty$. Then \eqref{eq_main_1} holds for all $1 \leq p \leq \infty$. In other words, if $\sett{ f_\lambda : \lambda \in \Lambda } $ is a $p$-frame for $M^p(\Rdst )$ for some $p\in [1,\infty]$, then it is a $p$-frame for $M^p(\Rdst )$ for all $p\in [1,\infty]$. \item[(b)] Assume that \begin{align} \label{eq_main_2} \norm{\sum_{\lambda \in \Lambda} c_\lambda f_\lambda}_{M^p} \asymp \norm{c}_p, \qquad c \in \ell^p(\Lambda), \end{align} holds for some $1 \leq p \leq \infty$. Then \eqref{eq_main_2} holds for all $1 \leq p \leq \infty$. \end{itemize} \end{theo} The result is similar in spirit to other results in the literature \cite{su07-5, albakr08, shsu09, te10, su10-2}, but none of these is directly applicable to our setting. We postpone the proof of Theorem \ref{th_main_mol} to the appendix, so as not to interrupt the natural flow of the article. As in the cited references, the proof elaborates on Sj\"ostrand's Wiener-type lemma \cite{sj95}. 
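Remark \ref{rem_gab_mol} can be made concrete for the Gaussian window, where the envelope is available in closed form: for the $L^2$-normalized Gaussian $g(t)=2^{1/4}e^{-\pi t^2}$ in dimension $d=1$, a classical computation gives $|V_gg(x,\xi)| = e^{-\pi(x^2+\xi^2)/2}$. The following Python sketch (an editorial check by naive quadrature; grid and sample points are ad hoc) confirms this formula:

```python
import numpy as np

dt = 1e-3
t = np.arange(-10.0, 10.0, dt)
g = 2 ** 0.25 * np.exp(-np.pi * t ** 2)       # L^2-normalized Gaussian, d = 1

def V_gg(x, xi):
    """V_g g(x, xi) = <g, pi(x, xi) g>, approximated by a Riemann sum."""
    shifted = np.exp(2j * np.pi * xi * t) * 2 ** 0.25 * np.exp(-np.pi * (t - x) ** 2)
    return np.sum(g * np.conj(shifted)) * dt

for x, xi in [(0.0, 0.0), (1.0, 0.5), (-0.7, 1.3)]:
    # The two printed columns agree to quadrature accuracy.
    print(abs(V_gg(x, xi)), np.exp(-np.pi * (x ** 2 + xi ** 2) / 2))
```

In the notation of \eqref{eq_env_mol}, this exhibits $\Phi(z)=e^{-\pi|z|^2/2}$, a Gaussian on phase space and hence an element of $W(L^\infty,L^1)({\mathbb{R}}^{2})$, as an admissible envelope.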
As a special case of Theorem \ref{th_main_mol} we record the following corollary. \begin{coro} \label{coro_main_mol} Let $g \in M^1({\mathbb{R}^d})$ and let $\Lambda \subseteq {\mathbb{R}^{2d}}$ be a relatively separated set. Then the following holds. \begin{itemize} \item[(a)] If $\mathcal{G}(g,\Lambda)$ is a $p$-frame for $M^p({\mathbb{R}^d})$ for some $p \in [1, \infty]$, then it is a $p$-frame for $M^p({\mathbb{R}^d})$ for all $p \in [1,\infty]$. \item[(b)] If $\mathcal{G}(g,\Lambda)$ is a $p$-Riesz sequence in $M^p({\mathbb{R}^d})$ for some $p \in [1, \infty]$, then it is a $p$-Riesz sequence in $M^p({\mathbb{R}^d})$ for all $p \in [1,\infty]$. \end{itemize} \end{coro} The space $M^1(\Rdst )$ is the largest space of windows for which the corollary holds. Under a stronger condition on $g$, statement (a) was already derived in~\cite{albakr08}; the general case was left open. \section{Weak convergence} \subsection{Convergence of sets} The Hausdorff distance between two sets $X,Y \subseteq {\mathbb{R}^d}$ is defined as \begin{align*} d_H(X,Y) := \inf \sett{\varepsilon>0: X \subseteq Y+ B_\varepsilon(0), Y \subseteq X+ B_\varepsilon(0)}. \end{align*} Note that $d_H(X,Y)=0$ if and only if $\overline{X} = \overline{Y}$. Let $\Lambda \subseteq {\mathbb{R}^d}$ be a set. A sequence $\{\Lambda_n: n \geq 1\}$ of subsets of ${\mathbb{R}^d}$ \emph{converges weakly} to $\Lambda$, in short $\Lambda _n \xrightarrow{w} \Lambda$, if \begin{align} \label{eq_weak_conv} d_H \big( (\Lambda_n \cap \bar{B}_R(z)) \cup \partial \bar{B}_R(z) , (\Lambda \cap \bar{B}_R(z) ) \cup \partial \bar{B}_R(z) \big) \to 0, \qquad \forall z\in {\mathbb{R}^d} , R>0 \, . \end{align} (To understand the role of the boundary of the ball in the definition, consider the following example in dimension $d=1$: $\Lambda_n:=\sett{1+1/n}$, $\Lambda:=\sett{1}$ and $B_R(z)=[0,1]$.) The following lemma provides an alternative description of weak convergence.
\begin{lemma} \label{lemma_alt_weak} Let $\Lambda \subseteq {\mathbb{R}^d}$ and $\Lambda_n \subseteq {\mathbb{R}^d}, n \geq 1$ be sets. Then $\Lambda_n \xrightarrow{w} \Lambda$ if and only if for every $R>0$ and $\varepsilon>0$ there exists $n_0 \in {\mathbb{N}}$ such that for all $n \geq n_0$ \begin{align*} \Lambda \cap B_R(0) \subseteq \Lambda_n + B_\varepsilon(0) \quad \text{ and } \quad \Lambda_n \cap B_R(0) \subseteq \Lambda + B_\varepsilon(0). \end{align*} \end{lemma} The following consequence of Lemma \ref{lemma_alt_weak} is often useful to identify weak limits. \begin{lemma} \label{lemma_weak_inc} Let $\Lambda_n \xrightarrow{w} \Lambda$ and $\Gamma_n \xrightarrow{w} \Gamma$. Suppose that for every $R>0$ and $\varepsilon>0$ there exists $n_0 \in {\mathbb{N}}$ such that for all $n \geq n_0$ \begin{align*} &\Lambda_n \cap B_R(0) \subseteq \Gamma_n + B_\varepsilon(0). \end{align*} Then $\overline{\Lambda} \subseteq \overline{\Gamma}$. \end{lemma} The notion of weak convergence will be a technical tool in the proofs of deformation results. \subsection{Measures and compactness} In this section we explain how the weak convergence of sets can be understood via the convergence of some associated measures. First we note the following semicontinuity property, which follows directly from Lemma \ref{lemma_alt_weak}. \begin{lemma} \label{lemma_mes_supp_a} Let $\sett{\mu_n : n \geq 1} \subset {W(\mathcal{M}, L^\infty)}({\mathbb{R}^d})$ be a sequence of measures that converges to a measure $\mu \in {W(\mathcal{M}, L^\infty)}({\mathbb{R}^d})$ in the $\sigma({W(\mathcal{M}, L^\infty)}, {W(C_0,L^1)})$ topology. Suppose that $\operatorname{supp}(\mu_n) \subseteq \Lambda_n$ and that $\Lambda_n \xrightarrow{w} \Lambda$. Then $\operatorname{supp}(\mu) \subseteq \overline{\Lambda}$. \end{lemma} The example $\mu_n=\tfrac{1}{n}\delta$, $\mu = 0$ shows that in Lemma \ref{lemma_mes_supp_a} the inclusions cannot in general be improved to equalities.
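Both the Hausdorff distance and the role of the boundary term in \eqref{eq_weak_conv} can be illustrated on the one-dimensional example mentioned after the definition, $\Lambda_n=\{1+1/n\}$ and $\Lambda=\{1\}$. The Python sketch below is an editorial illustration for finite sets:

```python
import numpy as np

def hausdorff(X, Y):
    """d_H(X, Y) for finite, non-empty sets of reals given as 1-D arrays."""
    d = np.abs(X[:, None] - Y[None, :])
    return max(d.min(axis=1).max(), d.min(axis=0).max())

Lam = np.array([1.0])
for n in [1, 10, 100]:
    print(hausdorff(np.array([1.0 + 1.0 / n]), Lam))   # = 1/n -> 0

# With B_R(z) = [0, 1]: Lambda_n intersected with [0, 1] is empty for every n,
# while Lambda intersected with [0, 1] = {1}; adjoining the boundary {0, 1}
# to both sides removes this artificial discrepancy:
bdry = np.array([0.0, 1.0])
print(hausdorff(bdry, np.concatenate([Lam, bdry])))    # 0.0
```

Without the boundary term one would compare the empty set with $\{1\}$ and wrongly conclude that $\Lambda_n$ does not converge to $\Lambda$ near this ball.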
Such improvement is however possible for certain classes of measures. A Borel measure $\mu$ is called \emph{natural-valued} if for all Borel sets $E$ the value $\mu(E)$ is a non-negative integer or infinity. For these measures the following holds. \begin{lemma} \label{lemma_mes_supp_b} Let $\sett{\mu_n : n \geq 1} \subset {W(\mathcal{M}, L^\infty)}({\mathbb{R}^d})$ be a sequence of natural-valued measures that converges to a measure $\mu \in {W(\mathcal{M}, L^\infty)}({\mathbb{R}^d})$ in the $\sigma({W(\mathcal{M}, L^\infty)}, {W(C_0,L^1)})$ topology. Then $\operatorname{supp}(\mu_n) \xrightarrow{w} \operatorname{supp}(\mu)$. \end{lemma} The proof of Lemma \ref{lemma_mes_supp_b} is elementary and therefore we skip it. Lemma \ref{lemma_mes_supp_b} is useful to deduce properties of weak convergence of sets from properties of convergence of measures, as we now show. For a set $\Lambda \subseteq {\mathbb{R}^d}$, let us consider the natural-valued measure \begin{align} \label{eq_shah} {\makebox[2.3ex][s]{$\sqcup$\hspace{-0.15em}\hfill $\sqcup$}\, \, }_\Lambda := \sum_{\lambda \in \Lambda} \delta_\lambda. \end{align} One can readily verify that $\Lambda$ is relatively separated if and only if ${\makebox[2.3ex][s]{$\sqcup$\hspace{-0.15em}\hfill $\sqcup$}\, \, }_\Lambda \in {W(\mathcal{M}, L^\infty)}({\mathbb{R}^d})$ and moreover, \begin{align} \label{eq_mes_am} &\norm{{\makebox[2.3ex][s]{$\sqcup$\hspace{-0.15em}\hfill $\sqcup$}\, \, }_\Lambda}_{W(\mathcal{M}, L^\infty)} \asymp \mathop{\mathrm{rel}}(\Lambda). 
\end{align} For sequences of sets $\sett{\Lambda_n: n\geq 1}$ with uniform separation, i.e., \begin{align*} \inf_n \mathop{\mathrm{sep}} (\Lambda _n) = \inf\{\abs{\lambda - \lambda'}: \lambda \not= \lambda', \lambda,\lambda' \in \Lambda_n, n \geq 1\} >0, \end{align*} the convergence $\Lambda_n \xrightarrow{w} \Lambda$ is equivalent to the convergence ${\makebox[2.3ex][s]{$\sqcup$\hspace{-0.15em}\hfill $\sqcup$}\, \, }_{\Lambda_n} \to {\makebox[2.3ex][s]{$\sqcup$\hspace{-0.15em}\hfill $\sqcup$}\, \, }_\Lambda$ in \\ $\sigma( {W(\mathcal{M}, L^\infty)}, {W(C_0,L^1)})$. For sequences without uniform separation the situation is slightly more technical because of possible multiplicities in the limit set. \begin{lemma} \label{lemma_compactness} Let $\sett{\Lambda_n: n \geq 1}$ be a sequence of relatively separated sets in ${\mathbb{R}^d}$. Then the following hold. \begin{itemize} \item[(a)] If ${\makebox[2.3ex][s]{$\sqcup$\hspace{-0.15em}\hfill $\sqcup$}\, \, }_{\Lambda_n} \longrightarrow \mu$ in $\sigma({W(\mathcal{M}, L^\infty)}, {W(C_0,L^1)})$ for some measure $\mu \in {W(\mathcal{M}, L^\infty)}$, then $\sup_n \mathop{\mathrm{rel}}(\Lambda_n) < \infty$ and $\Lambda_n \xrightarrow{w} \Lambda := \operatorname{supp}(\mu)$. \item[(b)] If $\limsup_n \mathop{\mathrm{rel}}(\Lambda_n) < \infty$, then there exists a subsequence $\sett{\Lambda_{n_k}: k \geq 1}$ that converges weakly to a relatively separated set. \item[(c)] If $\limsup_n \mathop{\mathrm{rel}}(\Lambda_n) < \infty$ and $\Lambda_n \xrightarrow{w} \Lambda$ for some set $\Lambda \subseteq {\mathbb{R}^d}$, then $\Lambda$ is relatively separated (and in particular closed). \end{itemize} \end{lemma} The lemma follows easily from Lemma \ref{lemma_mes_supp_b}, \eqref{eq_mes_am} and the weak$^*$-compactness of the ball of ${W(\mathcal{M}, L^\infty)}$, and hence we do not prove it. 
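As a simple illustration of this equivalence in the uniformly separated case (our own example), take $d=1$ and $\Lambda_n := (1+\tfrac{1}{n}){\mathbb{Z}}$, so that $\inf_n \mathop{\mathrm{sep}}(\Lambda_n) = 1$. For every $f \in {W(C_0,L^1)}({\mathbb{R}})$ one can check that \begin{align*} \int_{{\mathbb{R}}} f \, d{\makebox[2.3ex][s]{$\sqcup$\hspace{-0.15em}\hfill $\sqcup$}\, \, }_{\Lambda_n} = \sum_{k \in {\mathbb{Z}}} f\big( (1+\tfrac{1}{n})k \big) \longrightarrow \sum_{k \in {\mathbb{Z}}} f(k) = \int_{{\mathbb{R}}} f \, d{\makebox[2.3ex][s]{$\sqcup$\hspace{-0.15em}\hfill $\sqcup$}\, \, }_{{\mathbb{Z}}}, \end{align*} since each term converges and the tails are controlled uniformly by the amalgam norm of $f$; this matches the weak convergence $\Lambda_n \xrightarrow{w} {\mathbb{Z}}$.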
We remark that the limiting measure $\mu$ in the lemma is not necessarily ${\makebox[2.3ex][s]{$\sqcup$\hspace{-0.15em}\hfill $\sqcup$}\, \, }_\Lambda$. For example, if $d=1$ and $\Lambda_n := \sett{0,1/n,1,1+1/n,1-1/n}$, then ${\makebox[2.3ex][s]{$\sqcup$\hspace{-0.15em}\hfill $\sqcup$}\, \, }_{\Lambda_n} \longrightarrow 2 \delta_0 + 3 \delta_1$. The measure $\mu$ in (a) can be shown to be natural-valued, and therefore we can interpret it as representing a set with multiplicities. The following lemma provides a version of \eqref{eq_mes_am} for linear combinations of point measures. \begin{lemma} \label{lemma_norms_mes} Let $\Lambda \subseteq {\mathbb{R}^d}$ be a relatively separated set and consider a measure \begin{align*} \mu := \sum_{\lambda \in \Lambda} c_\lambda \delta_\lambda \end{align*} with coefficients $c_\lambda \in {\mathbb{C}}$. Then \begin{align*} &\norm{\mu} = \abs{\mu}({\mathbb{R}^d}) = \norm{c}_1, \\ &\norm{c}_\infty \leq \norm{\mu}_{{W(\mathcal{M}, L^\infty)}} \lesssim \mathop{\mathrm{rel}}(\Lambda) \norm{c}_\infty. \end{align*} \end{lemma} \begin{proof} The identity $\abs{\mu}({\mathbb{R}^d}) = \norm{c}_1$ is elementary. The estimate for $\norm{\mu}_{{W(\mathcal{M}, L^\infty)}}$ follows from the fact that, for all $\lambda \in \Lambda$, $\abs{c_\lambda} \delta_\lambda \leq \abs{\mu} \leq \norm{c}_\infty {\makebox[2.3ex][s]{$\sqcup$\hspace{-0.15em}\hfill $\sqcup$}\, \, }_\Lambda$, where ${\makebox[2.3ex][s]{$\sqcup$\hspace{-0.15em}\hfill $\sqcup$}\, \, }_\Lambda$ is defined by~\eqref{eq_shah}. \end{proof} \section{Gabor Frames and Gabor Riesz Sequences without Inequalities} \label{sec_char} As a first step towards the main results, we characterize frames and Riesz bases in terms of uniqueness properties for certain limit sequences. The corresponding results for lattices have been derived by different methods in \cite{gr07-2}. For the proofs we combine Theorem \ref{th_main_mol} with Beurling's methods~\cite[p.351-365]{be89}. 
For a relatively separated set $\Lambda \subseteq {\mathbb{R}^{2d}}$, let $W(\Lambda)$ be the set of weak limits of the translated sets $\Lambda+z, z\in {\mathbb{R}^{2d}}$, i.e., $\Gamma \in W(\Lambda)$ if there exists a sequence $\sett{z_n:n \in {\mathbb{N}}}$ such that $\Lambda+z_n \xrightarrow{w} \Gamma$. It is easy to see that then $\Gamma$ is always relatively separated. When $\Lambda$ is a lattice, i.e., $\Lambda=A {\mathbb{Z}^{2d}}$ for an invertible real-valued $2d\times2d$-matrix $A$, then $W(\Lambda)$ consists only of translates of $\Lambda$. Throughout this section we use repeatedly the following special case of Lemma \ref{lemma_compactness}(b,c): given a relatively separated set $\Lambda \subseteq {\mathbb{R}^{2d}}$ and any sequence of points $\sett{z_n: n \geq 1} \subseteq {\mathbb{R}^{2d}}$, there is a subsequence $\sett{z_{n_k}: k \geq 1}$ and a relatively separated set $\Gamma \subseteq {\mathbb{R}^{2d}}$ such that $\Lambda + z_{n_k} \xrightarrow{w} \Gamma$. \subsection{Characterization of frames} In this section we characterize the frame property of Gabor systems in terms of the sets in $W(\Lambda)$. \begin{theo} \label{th_char_frame} Assume that $g\in M^1({\mathbb{R}^d})$ and that $\Lambda \subseteq {\mathbb{R}^{2d}} $ is relatively separated. Then the following are equivalent. \begin{itemize} \item[(i)] $\mathcal{G}(g,\Lambda)$ is a frame for ${L^2(\Rdst)}$. \item[(ii)] $\mathcal{G}(g,\Lambda)$ is a $p$-frame for $M^p({\mathbb{R}^d})$ for some $p \in [1,\infty]$ (for all $p\in [1,\infty ]$). \item[(iii)] $\mathcal{G}(g,\Lambda)$ is an $\infty$-frame for $M^{\infty}({\mathbb{R}^d})$. \item[(iv)] $C_{g,\Lambda }^*$ is surjective from $\ell ^1(\Lambda )$ onto $M^1(\Rdst )$. \item[(v)] $C_{g,\Gamma}$ is bounded below on $M^\infty({\mathbb{R}^d}) $ for every weak limit $\Gamma \in W(\Lambda )$. \item[(vi)] $C_{g,\Gamma}$ is one-to-one on $M^\infty({\mathbb{R}^d}) $ for every weak limit $\Gamma \in W(\Lambda)$. 
\end{itemize} \end{theo} \begin{rem} {\rm 1. When $\Lambda$ is a lattice, then $W(\Lambda)$ consists only of translates of $\Lambda$. In this case, Theorem \ref{th_char_frame} reduces to the main result of \cite{gr07-2}. } \\ {\rm 2. For related work in the context of sampling measures see also \cite{as13}. } \end{rem} \begin{proof} The equivalence of (i), (ii) and (iii) follows immediately from Corollary \ref{coro_main_mol}. In the sequel we will use several times the following version of the closed range theorem~\cite[p.~166]{conway90}: Let $T:X\to Y$ be a bounded operator between two Banach spaces $X$ and $Y$. Then $T$ is onto $Y$ if and only if $T^*: Y^* \to X^*$ is one-to-one on $Y^*$ and has closed range in $X^*$, if and only if $T^*$ is bounded below. Conditions (iii) and (iv) are equivalent by applying the closed range theorem to the synthesis operator $C^*_{g,\Lambda }$ on $\ell ^1(\Lambda )$. For the remaining equivalences we adapt Beurling's methods. {\bf (iv) $\Rightarrow$ (v)}. Consider a convergent sequence of translates $\Lambda - z_n \xrightarrow{w} \Gamma$. Since $C_{g,\Lambda }^*$ maps $\ell ^1(\Lambda )$ onto $M^1(\Rdst )$, it follows from \eqref{eq_cov_c} and the open mapping theorem that the synthesis operators $C_{g,\Lambda -z_n}^*$ are also onto $M^1({\mathbb{R}^d})$, with bounds on preimages independent of $n$. Thus for every $f\in M^1(\Rdst )$ there exist sequences $\sett{c^n_\lambda}_{\lambda \in \Lambda - z_n}$ with $\norm{c^n}_1 \lesssim 1$ such that \begin{align*} f = \sum_{\lambda \in \Lambda - z_n} c^n_\lambda \pi(\lambda) g, \end{align*} with convergence in $M^1({\mathbb{R}^d})$. Consider the measures $\mu_n := \sum_{\lambda \in \Lambda - z_n} c^n_\lambda \delta_{\lambda}$. Note that $\norm{\mu_n} = \norm{c^n}_1 \lesssim 1$. By passing to a subsequence we may assume that $\mu_n \longrightarrow \mu$ in $\sigma(\mathcal{M},C_0)$ for some measure $\mu \in \mathcal{M}({\mathbb{R}^{2d}})$.
By assumption $\operatorname{supp}(\mu_n) \subseteq \Lambda - z_n$, $\Lambda - z_n \xrightarrow{w} \Gamma$, and $\Gamma$ is relatively separated and thus closed. It follows from Lemma \ref{lemma_mes_supp_a} that $\operatorname{supp}(\mu) \subseteq \Gamma$. Hence, \begin{align*} \mu = \sum_{\lambda \in \Gamma} c_\lambda \delta_{\lambda} \end{align*} for some sequence $c$. In addition, $\norm{c}_1 = \norm{\mu} \leq \liminf_n \norm{\mu_n} \lesssim 1$. Let $f' := \sum_{\lambda \in \Gamma} c_\lambda \pi(\lambda) g$. This is well-defined in $M^1(\Rdst )$, because $c \in \ell^1(\Gamma)$. Let $z\in{\mathbb{R}^{2d}}$. Since, by Lemma \ref{lemma_stft}, $V_g \pi(z) g \in {W(C_0,L^1)}({\mathbb{R}^{2d}}) \subseteq C_0({\mathbb{R}^{2d}})$, we can compute \begin{align*} \ip{f}{\pi(z)g} &= \sum_{\lambda \in \Lambda - z_n} c^n_\lambda \overline{V_g \pi(z) g(\lambda)} \\ &=\int_{{\mathbb{R}^{2d}}} \overline{V_g \pi(z) g} \,d\mu_n \longrightarrow \int_{{\mathbb{R}^{2d}}} \overline{V_g \pi(z) g} \,d\mu= \ip{f'}{\pi(z)g}. \end{align*} (Here, the interchange of summation and integration is justified because $c$ and $c^n$ are summable.) Hence $f=f'$ and thus $C^*_{g,\Gamma}: \ell^1(\Gamma) \to M^1({\mathbb{R}^d})$ is surjective. By duality $C_{g,\Gamma }$ is one-to-one from $M^\infty (\Rdst )$ to $\ell ^\infty (\Gamma )$ and has closed range, whence $C_{g,\Gamma }$ is bounded below on $M^\infty (\Rdst )$. {\bf (v) $\Rightarrow$ (vi)} is clear. {\bf (vi) $\Rightarrow$ (iii).} Suppose $\mathcal{G}(g,\Lambda)$ is not an $\infty$-frame for $M^\infty({\mathbb{R}^d})$. Then there exists a sequence of functions $\sett{f_n: n \geq 1} \subset M^\infty({\mathbb{R}^d})$ such that $\norm{V_g f_n}_\infty=1$ and $\sup_{\lambda \in \Lambda} \abs{V_g f_n (\lambda)} \longrightarrow 0$. Let $z_n \in {\mathbb{R}^{2d}}$ be such that $\abs{V_g f_n(z_n)} \geq 1/2$ and consider $h_n := \pi(-z_n)f_n$.
By passing to a subsequence we may assume that $h_n \longrightarrow h$ in $\sigma(M^\infty, M^1)$ for some $h \in M^\infty({\mathbb{R}^d})$, and that $\Lambda - z_n \xrightarrow{w} \Gamma$ for some relatively separated $\Gamma$. Since $\abs{V_g h_n (0)}=\abs{V_g f_n (z_n)} \geq 1/2$ by~\eqref{eq_tf_stft}, it follows from Lemma \ref{lemma_stft} (b) that $h \not= 0$. Given $\gamma \in \Gamma$, there exists a sequence $\sett{\lambda_n: n \geq 1} \subseteq \Lambda$ such that $\lambda_n - z_n \longrightarrow \gamma$. Since, by Lemma \ref{lemma_stft}, $V_g h_n \longrightarrow V_g h$ uniformly on compact sets, we can use \eqref{eq_tf_stft} to obtain that \begin{align*} \abs{V_g h(\gamma)} = \lim_n \abs{V_g h_n(\lambda_n - z_n)} = \lim_n \abs{V_g f_n(\lambda_n)} = 0. \end{align*} As $\gamma \in \Gamma $ is arbitrary, this contradicts (vi). \end{proof} Although Theorem~\ref{th_char_frame} seems to be purely qualitative, it can be used to derive quantitative estimates for Gabor frames. We fix a non-zero window $g$ in $ M^1(\Rdst )$ and assume that $\|g\|_2=1$. We measure the modulation space norms with respect to this window by $\|f\|_{M^p} = \|V_g f\|_p$ and observe that the isometry property of the short-time Fourier transform extends to $M^\infty (\Rdst )$ as follows: if $f\in M^\infty (\Rdst ) $ and $h\in M^1(\Rdst )$, then \begin{equation} \label{eq:c10} \langle f, h\rangle = \int_{\rdd} V_gf(z) \overline{V_gh(z)} \, dz = \langle V_gf, V_gh\rangle. \end{equation} For $\delta >0$, we define the $M^1$-modulus of continuity of $g$ as \begin{equation} \label{eq:c9} \omega _\delta (g)_{M^1} = \sup_{\stackrel{z,w \in \Rtdst}{|z-w|\leq \delta }} \|\pi (z)g- \pi (w)g\|_{M^1} =\sup_{\stackrel{z,w \in \Rtdst}{|z-w|\leq \delta }} \norm{V_g(\pi (z)g- \pi (w)g)}_{L^1}. \end{equation} It is easy to verify that $\lim _{\delta \to 0+} \omega _\delta (g)_{M^1} = 0$, because time-frequency shifts are continuous on $M^1(\Rdst )$. 
Then we deduce the following quantitative condition for Gabor frames from Theorem~\ref{th_char_frame}. \begin{coro} For $g\in M^1(\Rdst )$ with $\norm{g}_2=1$ choose $\delta >0$ so that $ \omega _\delta (g)_{M^1} <1$. If $\Lambda \subseteq \Rtdst $ is relatively separated and $\rho (\Lambda ) \leq \delta $, then $\mathcal{G}(g,\Lambda) $ is a frame for $L^2(\Rdst) $. \end{coro} \begin{proof} We argue by contradiction and assume that $\mathcal{G}(g,\Lambda) $ is not a frame. By condition (vi) of Theorem~\ref{th_char_frame} there exist a weak limit $\Gamma \in W(\Lambda )$ and a non-zero $f\in M^\infty (\Rdst )$ such that $V_gf \big| _\Gamma = 0$. Since $\rho (\Lambda ) \leq \delta $, we also have $\rho (\Gamma )\leq \delta $. By normalizing, we may assume that $\norm{f}_{M^\infty } = \norm{V_g f}_\infty = 1$. For $0< \epsilon < 1 - \omega _\delta (g)_{M^1}$ we find $z\in \Rtdst $ such that $|V_gf(z)| = |\langle f, \pi (z)g\rangle | > 1-\epsilon $. By Lemma \ref{lemma_compactness}, $\Gamma$ is relatively separated and, in particular, closed. Since $\rho (\Gamma ) \leq \delta $, there is a $\gamma \in \Gamma $ such that $|z-\gamma | \leq \delta $. Consequently, since $V_gf \big| _\Gamma = 0$, we find that \begin{align*} 1-\epsilon &< |\langle f, \pi (z)g\rangle - \langle f, \pi (\gamma )g\rangle | = |\langle f, \pi (z)g- \pi (\gamma )g\rangle | \\ & = |\langle V_g f, V_g( \pi (z)g- \pi (\gamma )g)\rangle | \\ &\leq \|V_gf \|_\infty \, \|V_g( \pi (z)g- \pi (\gamma )g)\|_1 \\ &= \|f\|_{M^\infty } \, \|\pi (z)g- \pi (\gamma )g\|_{M^1} \\ &\leq \omega _\delta (g)_{M^1} \, . \end{align*} Since we have chosen $1-\epsilon > \omega _\delta (g)_{M^1} $, we have arrived at a contradiction. Thus $\mathcal{G}(g,\Lambda) $ is a frame. \end{proof} This result is analogous to Beurling's famous sampling theorem for multivariate bandlimited functions~\cite{beurling66}. The proof is in the style of~\cite{OU12}.
\subsection{Characterization of Riesz sequences} We now derive analogous results for Riesz sequences. \begin{theo} \label{th_char_riesz} Assume that $g\in M^1({\mathbb{R}^d})$ and that $\Lambda \subseteq {\mathbb{R}^{2d}}$ is separated. Then the following are equivalent. \begin{itemize} \item[(i)] $\mathcal{G}(g,\Lambda)$ is a Riesz sequence in ${L^2(\Rdst)}$. \item[(ii)] $\mathcal{G}(g,\Lambda)$ is a $p$-Riesz sequence in $M^p(\Rdst )$ for some $p\in [1,\infty ]$ (for all $p\in [1,\infty ]$). \item[(iii)] $\mathcal{G}(g,\Lambda)$ is an $\infty$-Riesz sequence in $M^\infty(\Rdst )$, i.e., $C^* _{g,\Lambda} : \ell ^\infty (\Lambda) \to M^\infty({\mathbb{R}^d})$ is bounded below. \item[(iv)] $C_{g,\Lambda} : M^1 \to \ell^1(\Lambda)$ is surjective. \item[(v)] $C^* _{g,\Gamma } : \ell ^\infty (\Gamma) \to M^\infty({\mathbb{R}^d})$ is bounded below for every weak limit $ \Gamma \in W(\Lambda )$. \item[(vi)] $C^* _{g,\Gamma} : \ell ^\infty (\Gamma) \to M^\infty({\mathbb{R}^d})$ is one-to-one for every weak limit $ \Gamma \in W(\Lambda )$. \end{itemize} \end{theo} \begin{rem}{\rm (i) Note that we are assuming that $\Lambda$ is separated. This is necessarily the case if $\mathcal{G}(g,\Lambda)$ is a Riesz sequence (Lemma \ref{lemma_set_must_be}), but needs to be assumed in some of the other conditions. (ii) For bandlimited functions Beurling \cite[Problem 3, p. 359]{be89} asked whether a characterization analogous to (iii) $\Leftrightarrow$ (vi) in Theorem~\ref{th_char_riesz} holds for interpolating sequences. Note that in the context of bandlimited functions, the properties corresponding to (i) and (ii) are not equivalent. } \end{rem} Before proving Theorem \ref{th_char_riesz}, we prove the following continuity property of $C_{g,\Lambda }^*$ with respect to $\Lambda$. 
\begin{lemma} \label{lemma_zero_seq} Let $g\in M^1({\mathbb{R}^d}), g\neq 0$, and let $\sett{\Lambda_n: n \geq 1}$ be a sequence of uniformly separated subsets of ${\mathbb{R}^{2d}}$, i.e., \begin{equation} \label{unifsep} \inf_n \mathop{\mathrm{sep}}(\Lambda_n) =\delta > 0\, . \end{equation} For every $n \in {\mathbb{N}}$, let $c^n \in \ell^\infty(\Lambda_n)$ be such that $\norm{c^n}_\infty=1$ and suppose that \begin{align*} \sum_{\lambda \in \Lambda_n} c^n_\lambda \pi(\lambda) g \longrightarrow 0 \mbox{ in } M^\infty({\mathbb{R}^d}), \qquad \mbox{as } n \longrightarrow \infty. \end{align*} Then there exist a subsequence $(n_k ) \subset {\mathbb{N}} $, points $\lambda_{n_k} \in \Lambda_{n_k}$, a separated set $\Gamma \subseteq {\mathbb{R}^{2d}}$, and a non-zero sequence $c \in \ell^\infty(\Gamma)$ such that \begin{align*} \Lambda_{n_k}-\lambda_{n_k} & \xrightarrow{w} \Gamma, \qquad \mbox{as } k \longrightarrow \infty \\ \text{and} \qquad \quad \sum_{\lambda \in \Gamma} c_\lambda \pi(\lambda) g &= 0. \end{align*} \end{lemma} \begin{proof} Combining the hypothesis~\eqref{unifsep} and observation~\eqref{eq_sep_relsep}, we also have the uniform relative separation \begin{align} \label{eq_unif_relsep} \sup_n \mathop{\mathrm{rel}}(\Lambda_n) < \infty. \end{align} Since $\norm{c^n}_\infty=1$ for every $n \geq 1$, we may choose $\lambda_n \in \Lambda_n$ such that $\abs{c^n_{\lambda_n}} \geq 1/2$. Let $\theta_{\lambda,n}\in {\mathbb{C}} $ be such that \begin{align*} \theta_{\lambda,n} \pi(\lambda-\lambda_n)=\pi(-\lambda_n) \pi(\lambda), \end{align*} and consider the measures $\mu_n := \sum_{\lambda \in \Lambda_n} \theta_{\lambda,n} c^n_\lambda \delta_{\lambda-\lambda_n}$. Then by Lemma \ref{lemma_norms_mes}, $\norm{\mu_n}_{{W(\mathcal{M}, L^\infty)}} \lesssim \mathop{\mathrm{rel}}(\Lambda_n-\lambda_n) \norm{c^n}_\infty = \mathop{\mathrm{rel}}(\Lambda_n) \norm{c^n}_\infty \lesssim 1$.
Using \eqref{eq_unif_relsep} and Lemma \ref{lemma_compactness}, we may pass to a subsequence such that (i) $\Lambda_n - \lambda_n \xrightarrow{w} \Gamma$ for some relatively separated set $\Gamma \subseteq {\mathbb{R}^{2d}}$ and (ii) $\mu_n \longrightarrow \mu$ in $\sigma({W(\mathcal{M}, L^\infty)}, {W(C_0,L^1)})({\mathbb{R}^{2d}})$ for some measure $\mu \in {W(\mathcal{M}, L^\infty)}({\mathbb{R}^{2d}})$. The uniform separation condition in \eqref{unifsep} implies that $\Gamma$ is also separated. Since $\operatorname{supp}(\mu_n) \subseteq \Lambda_n-\lambda_n$ it follows from Lemma \ref{lemma_mes_supp_a} that $\operatorname{supp}(\mu) \subseteq \overline{\Gamma } = \Gamma $. Hence, \begin{align*} \mu = \sum_{\lambda \in \Gamma} c_\lambda \delta_\lambda, \end{align*} for some sequence of complex numbers $c$, and, by Lemma \ref{lemma_norms_mes}, $\norm{c}_\infty \leq \norm{\mu}_{{W(\mathcal{M}, L^\infty)}} < \infty$. From \eqref{unifsep} it follows that for all $n \in {\mathbb{N}}$, $B_{\delta /2} (\lambda_n) \cap \Lambda_n = \sett{\lambda_n}$. Let $\varphi \in C({\mathbb{R}^{2d}})$ be real-valued, supported on $B_{\delta / 2} (0)$ and such that $\varphi(0)=1$. Then \begin{align*} \abs{\int_{{\mathbb{R}^{2d}}} \varphi \, d\mu}=\lim_n \abs{\int_{{\mathbb{R}^{2d}}} \varphi \, d\mu_n} =\lim_n \abs{c^n_{\lambda_n}} \geq 1/2. \end{align*} Hence $\mu \not = 0$ and therefore $c \not= 0$. Finally, we show that the short-time Fourier transform of $\sum_\lambda c_\lambda \pi(\lambda) g $ is zero. Let $z \in {\mathbb{R}^{2d}}$ be arbitrary and recall that by Lemma \ref{lemma_stft} $V_{g} \pi(z)g \in {W(C_0,L^1)}({\mathbb{R}^{2d}})$. 
Now we estimate \begin{align*} &\abs{\ip{\sum_{\lambda \in \Gamma} c_\lambda \pi(\lambda)g}{\pi(z)g}} = \abs{\sum_{\lambda \in \Gamma} c_\lambda \overline{V_{g} \pi(z)g}(\lambda)} \\ &\quad=\abs{\int_{{\mathbb{R}^{2d}}} \overline{V_{g}\pi(z)g} \,d\mu} =\lim_n \abs{\int_{{\mathbb{R}^{2d}}} \overline{V_{g}\pi(z)g} \,d{\mu_n}} \\ &\quad= \lim_n \abs{\ip{\sum_{\lambda \in \Lambda_n} \theta_{\lambda,n} c^n_\lambda \pi(\lambda-\lambda_n)g}{\pi(z)g}} \\ &\quad\leq \lim_n \bignorm{\sum_{\lambda \in \Lambda_n} \theta_{\lambda,n} c^n_\lambda \pi(\lambda-\lambda_n)g}_{M^\infty} \bignorm{g}_{M^1} \\ &\quad= \lim_n \bignorm{\pi(-\lambda_n) \sum_{\lambda \in \Lambda_n} c^n_\lambda \pi(\lambda)g}_{M^\infty} \bignorm{g}_{M^1} \\ &\quad= \lim_n \bignorm{\sum_{\lambda \in \Lambda_n} c^n_\lambda \pi(\lambda)g}_{M^\infty} \bignorm{g}_{M^1} =0. \end{align*} We have shown that $V_g (\sum_{\lambda \in \Gamma} c_\lambda \pi(\lambda)g) \equiv 0$ and thus $\sum_{\lambda \in \Gamma} c_\lambda \pi(\lambda)g \equiv 0$, as desired. \end{proof} \begin{proof}[Proof of Theorem \ref{th_char_riesz}] The equivalence of (i), (ii), and (iii) follows from Corollary \ref{coro_main_mol}(b), and the equivalence of (iii) and (iv) follows by duality. {\bf (iv) $\Rightarrow$ (v)}. Assume (iv) and consider a sequence $\Lambda - z_n \xrightarrow{w} \Gamma$. Let $\lambda \in \Gamma$ be arbitrary and let $\sett{\lambda_n: n \in {\mathbb{N}}} \subseteq \Lambda$ be a sequence such that $\lambda_n -z_n \longrightarrow \lambda$. By the open mapping theorem, every sequence $c \in \ell^1(\Lambda)$ with $\norm{c}_1=1$ has a preimage $c=C_{g,\Lambda}(f)$ with $\norm{f}_{M^1} \lesssim 1$. With the covariance property~\eqref{eq_cov_c} we deduce that there exist $f_n \in M^1(\Rdst )$ such that $c=C_{g,\Lambda -z_n}(f_n)$ and $\|f_n\|_{M^1} \lesssim 1$.
In particular, for each $n \in {\mathbb{N}}$ there exists an interpolating function $h_n \in M^1({\mathbb{R}^d})$ such that $\norm{V_g h_n}_1 \lesssim 1$, $V_g h_n(\lambda_n-z_n)=1$ and $V_g h_n \equiv 0$ on $\Lambda-z_n \setminus \sett{\lambda_n-z_n}$. By passing to a subsequence we may assume that $h_n \longrightarrow h$ in $\sigma(M^1,M^0)$. It follows that $\norm{h}_{M^1} \lesssim 1$. Since $V_g h_n \longrightarrow V_g h$ uniformly on compact sets by Lemma \ref{lemma_stft}, we obtain that \begin{align*} V_g h(\lambda) = \lim_n V_g h_n(\lambda_n-z_n)=1. \end{align*} Similarly, given $\gamma \in \Gamma \setminus \sett{\lambda}$, there exists a sequence $\sett{\gamma_n: n \in {\mathbb{N}}} \subseteq \Lambda$ such that $\gamma_n -z_n \longrightarrow \gamma$. Since $\lambda \not= \gamma$, for $n\gg 0$ we have that $\gamma_n \not= \lambda_n$ and consequently $V_g h_n(\gamma_n-z_n)=0$. It follows that $V_g h(\gamma)=0$. Hence, we have shown that for each $\lambda \in \Gamma$ there exists an interpolating function $h_\lambda \in M^1({\mathbb{R}^d})$ such that $\norm{h_\lambda}_{M^1} \lesssim 1$, $V_g h_\lambda(\lambda)=1$ and $V_g h_\lambda\equiv 0$ on $\Gamma \setminus \sett{\lambda}$. Given an arbitrary sequence $c \in \ell^1(\Gamma)$ we consider \begin{align*} f := \sum_{\lambda \in \Gamma} c_\lambda h_\lambda. \end{align*} It follows that $f \in M^1({\mathbb{R}^d})$ and that $C_{g,\Gamma} f = c$. Hence, $C_{g,\Gamma}$ is onto $\ell ^1(\Gamma )$, and therefore $C^*_{g,\Gamma}$ is bounded below. {\bf (v) $\Rightarrow$ (vi)} is clear. {\bf (vi) $\Rightarrow$ (iii)}. Suppose that (iii) does not hold. Then there exists a sequence $\sett{c^n: n \in {\mathbb{N}}} \subseteq \ell^\infty(\Lambda)$ such that $\norm{c^n}_\infty = 1$ and \begin{align*} \bignorm{\sum_{\lambda \in \Lambda} c^n_\lambda \pi(\lambda)g}_{M^\infty} \longrightarrow 0, \mbox{ as }n \longrightarrow \infty. 
\end{align*} We now apply Lemma \ref{lemma_zero_seq} with $\Lambda_n := \Lambda$ and obtain a set $\Gamma \in W(\Lambda)$ and a non-zero sequence $c \in \ell^\infty(\Gamma)$ such that $\sum _{\lambda \in \Gamma } c_\lambda \pi (\lambda )g = C^* _{g,\Gamma}(c)=0$. This contradicts (vi). \end{proof} \section{Deformation of sets and Lipschitz convergence} \label{sec_regconv} The characterizations of Theorem~\ref{th_char_frame} suggest that Gabor frames are invariant under ``weak deformations'' of $\Lambda $. One might expect that if $\mathcal{G} (g,\Lambda )$ is a frame and $\Lambda '$ is close to $\Lambda $ in the weak sense, then $\mathcal{G} (g, \Lambda ')$ is also a frame. This view is too simplistic: choose $\Lambda _n = \Lambda \cap B_n(0)$; then $\Lambda _n \xrightarrow{w} \Lambda $, but each $\Lambda _n$ is a finite set and thus $\mathcal{G} (g,\Lambda _n)$ is never a frame. For a deformation result we need to introduce a finer notion of convergence. Let $\Lambda \subseteq {\mathbb{R}^d}$ be a (countable) set. We consider a sequence $\sett{\Lambda_n: n \geq 1}$ of subsets of ${\mathbb{R}^d}$ produced in the following way. For each $n \geq 1$, let $\tau_n: \Lambda \to {\mathbb{R}^d}$ be a map and let $\Lambda_n := \tau_n(\Lambda) = \sett{\mapn{\lambda}: \lambda \in \Lambda}$. We assume that $\mapn{\lambda} \longrightarrow \lambda$, as $n \longrightarrow \infty$, for all $\lambda \in \Lambda$. The sequence of sets $\sett{\Lambda_n: n \geq 1}$ together with the maps $\sett{\tau_n: n \geq 1}$ is called a \emph{deformation} of $\Lambda$. We think of each sequence of points $\sett{\mapn{\lambda}: n \geq 1}$ as a (discrete) path moving towards the endpoint $\lambda$. We will often say that $\sett{\Lambda_n: n \geq 1}$ is a deformation of $\Lambda$, with the understanding that a sequence of underlying maps $\sett{\tau_n: n \geq 1}$ is also given.
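For instance (a simple illustration of the definition), take $d=1$, $\Lambda = {\mathbb{Z}}$ and $\mapn{\lambda} := (1+\tfrac{1}{n})\lambda$. Then $\mapn{\lambda} \longrightarrow \lambda$ for every fixed $\lambda \in \Lambda$, so the sets $\Lambda_n = (1+\tfrac{1}{n}){\mathbb{Z}}$ form a deformation of ${\mathbb{Z}}$, although $\sup_{\lambda \in \Lambda} \abs{\mapn{\lambda} - \lambda} = \infty$ for every $n$: the paths converge pointwise, but in general not uniformly.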
\begin{definition} A deformation $\sett{\Lambda_n: n \geq 1}$ of $\Lambda $ is called \emph{Lipschitz}, denoted by $\Lambda_n \xrightarrow{Lip} \Lambda$, if the following two conditions hold: {\bf (L1) } Given $R>0$, \begin{align*} \sup_{ \stackrel{\lambda, \lambda' \in \Lambda}{\abs{\lambda-\lambda'} \leq R}} \abs{(\mapn{\lambda} - \mapn{\lambda'}) - (\lambda - \lambda')} \rightarrow 0, \quad \mbox {as } n \longrightarrow \infty. \end{align*} {\bf (L2) } Given $R>0$, there exist $R'>0$ and $n_0 \in {\mathbb{N}}$ such that if $\abs{\mapn{\lambda} - \mapn{\lambda'}} \leq R$ for \emph{some} $n \geq n_0$ and some $\lambda, \lambda' \in \Lambda$, then $\abs{\lambda-\lambda'} \leq R'$. \end{definition} Condition (L1)\ means that, for every $R>0$, $\mapn{\lambda} - \mapn{\lambda'} \longrightarrow \lambda - \lambda'$ uniformly over all pairs with $\abs{\lambda-\lambda'} \leq R$. In particular, by fixing $\lambda '$, we see that Lipschitz convergence implies the weak convergence $\Lambda _n \xrightarrow{w} \Lambda $. Furthermore, if $\{\Lambda_n: n \geq 1\}$ is Lipschitz convergent to $\Lambda$, then so is every subsequence $\sett{\Lambda_{n_k}: k \geq 1}$. \begin{example} Jitter error: {\rm Let $\Lambda \subseteq {\mathbb{R}^d}$ be relatively separated and let $\sett{\Lambda_n: n \geq 1}$ be a deformation of $\Lambda$. If $\sup_\lambda \abs{\mapn{\lambda} - \lambda} \longrightarrow 0$, as $n \longrightarrow \infty$, then $\Lambda_n \xrightarrow{Lip} \Lambda$. } \end{example} \begin{example} Linear deformations: {\rm Let $\Lambda = A {\bZ^{2d}} \subseteq \Rtdst$, with $A$ an invertible $2d\times 2d$ matrix, let $\Lambda _n = A_n {\bZ^{2d}} $ for a sequence of invertible $2d\times 2d$ matrices $A_n$, and assume that $\lim A_n = A$. Then $\Lambda_n \xrightarrow{Lip} \Lambda $. In this case conditions (L1)\ and (L2)\ are easily checked by taking $\tau_n = A_n A^{-1}$. } \end{example} The third class of examples contains differentiable, nonlinear deformations. \begin{lemma} \label{lemma_reg_conv} Let $p \in (d, \infty]$.
For each $n \in {\mathbb{N}} $, let $T_n=(T_n^1, \ldots, T_n^d): {\mathbb{R}^d} \to {\mathbb{R}^d}$ be a map such that each coordinate function $T^k_n:{\mathbb{R}^d} \to {\mathbb{R}}$ is continuous, locally integrable and has a weak derivative in $L^p_{loc}({\mathbb{R}^d})$. Assume that \begin{align*} &T_n(0)= 0, \\ &\abs{DT_n-I} \longrightarrow 0 \,\,\, \mbox{ in } L^p({\mathbb{R}^d}). \end{align*} (Here, $DT_n$ is the Jacobian matrix consisting of the partial derivatives of $T_n$ and the second condition means that each entry of the matrix $DT_n-I$ tends to $0$ in $L^p$.) Let $\Lambda \subseteq {\mathbb{R}^d}$ be a relatively separated set and consider the deformation $\Lambda_n := T_n(\Lambda)$ (i.e., $\tau_n := {T_n}\big| _{\Lambda}$). Then $\Lambda_n $ is Lipschitz convergent to $ \Lambda$. \end{lemma} \begin{rem}{\rm In particular, the hypothesis of Lemma \ref{lemma_reg_conv} is satisfied (with $p=\infty$) by every sequence of differentiable maps $T_n: {\mathbb{R}^d} \to {\mathbb{R}^d}$ such that} \begin{align*} &T_n(0)= 0, \\ &\sup_{z\in{\mathbb{R}^d}} \abs{DT_n(z)-I} \longrightarrow 0. \end{align*} \end{rem} \begin{proof}[Proof of Lemma \ref{lemma_reg_conv}] Let $\alpha := 1 - \tfrac{d}{p} \in (0,1]$. We use the following Sobolev embedding known as Morrey's inequality (see for example \cite[Chapter 4, Theorem 3]{evga92}). If $f: {\mathbb{R}^d} \to {\mathbb{R}}$ is locally integrable and possesses a weak derivative in $L^p({\mathbb{R}^d})$, then $f$ is $\alpha $-H\"older continuous (after being redefined on a set of measure zero) and \begin{align*} \abs{f(x)-f(y)} \lesssim \norm{\nabla f}_{L^p({\mathbb{R}^d})} \abs{x-y}^\alpha, \qquad x,y \in {\mathbb{R}^d}.
\end{align*} Applying Morrey's inequality to each coordinate function of $T_n-I$ we obtain that there is a constant $C>0$ such that, for $x,y \in {\mathbb{R}^d}$, \begin{align*} \abs{(T_n x - T_n y) - (x - y)} = \abs{(T_n - I)x - (T_n - I)y} \leq C \norm{DT_n - I}_{L^p({\mathbb{R}^d})}\, \abs{x-y}^\alpha. \end{align*} Let $\varepsilon _n = C\norm{DT_n - I}_{L^p({\mathbb{R}^d})}$, where $\norm{DT_n - I}_{L^p({\mathbb{R}^d})}$ is the $L^p$-norm of $\abs{DT_n(\cdot) - I}$. Then $\varepsilon _n \to 0$ by assumption and \begin{align} \label{eq_lemma_p} \abs{(T_n x - T_n y) - (x - y)} \leq \varepsilon_n \abs{x-y}^\alpha, \qquad x,y \in {\mathbb{R}^d}. \end{align} Choosing $x=\lambda$ and $y=0$, we see that $T_n(\lambda) \longrightarrow \lambda$ for all $\lambda \in \Lambda$ (since $T_n(0)=0$). Hence $\Lambda_n$ is a deformation of $\Lambda$. If $\lambda, \lambda' \in \Lambda$ and $\abs{\lambda-\lambda'} \leq R$, then \eqref{eq_lemma_p} implies that \begin{align*} \abs{(T_n \lambda - T_n \lambda') - (\lambda - \lambda')} \leq \varepsilon_n R^\alpha. \end{align*} Thus condition (L1)\ is satisfied. For condition (L2), choose $n_0$ such that $\varepsilon_n \leq 1/2$ for $n \geq n_0$. If $|\lambda - \lambda '| \leq 1$, there is nothing to show (choose $R' \geq 1$). If $|\lambda -\lambda '| \geq 1$ and $\abs{T_n \lambda -T_n\lambda'} \leq R$ for some $n \geq n_0$, $\lambda, \lambda' \in \Lambda$, then by \eqref{eq_lemma_p} we obtain \begin{align*} \abs{(T_n \lambda - T_n \lambda') - (\lambda - \lambda')} \leq \tfrac{1}{2} \abs{\lambda-\lambda'}^\alpha \leq \tfrac{1}{2} \abs{\lambda-\lambda'}. \end{align*} This implies that \begin{align} \label{eq_super_bound} \abs{\lambda-\lambda'} \leq 2 \abs{T_n \lambda - T_n \lambda'}, \mbox{ for all }n\geq n_0. \end{align} Since $\abs{T_n \lambda -T_n\lambda'} \leq R$, we conclude that $\abs{\lambda -\lambda'} \leq 2R $, and we may actually choose $R'= \max (1,2R) $ in condition (L2).
\end{proof} \begin{rem} {\rm Property (L1)\ can be proved under slightly weaker conditions on the convergence of $DT_n - I$. In fact, we need~\eqref{eq_lemma_p} to hold only for $|x-y|\leq R$. Thus it suffices to assume locally uniform convergence in $L^p$, i.e., $$ \sup _{y\in {\mathbb{R}^d} } \int _{B_R(y)} |DT_n (x) - I|^p \, dx \to 0 $$ for all $R>0$. } \end{rem} We prove some technical lemmas about Lipschitz convergence. \begin{lemma} \label{lemma_dens} Let $\sett{\Lambda_n: n \geq 1}$ be a deformation of a relatively separated set $\Lambda \subseteq {\mathbb{R}^d}$. Then the following hold. \begin{itemize} \item[(a)] If $\Lambda_n$ is Lipschitz convergent to $\Lambda$ and $\mathop{\mathrm{sep}}(\Lambda)>0$, then $\liminf_n \mathop{\mathrm{sep}}(\Lambda _n) >0$. \item[(b)] If $\Lambda_n$ is Lipschitz convergent to $\Lambda$, then $\limsup_n \mathop{\mathrm{rel}}(\Lambda_n) < \infty$. \item[(c)] If $\Lambda_n$ is Lipschitz convergent to $\Lambda$ and $\rho(\Lambda) <\infty$, then $\limsup_n \rho(\Lambda_n)<\infty$. \end{itemize} \end{lemma} \begin{proof} (a) By assumption $\delta := \mathop{\mathrm{sep}}(\Lambda)>0$. Using (L2), let $n_0 \in {\mathbb{N}}$ and $R'>0$ be such that if $\abs{\mapn{\lambda} - \mapn{\lambda'}} \leq \delta/2$ for some $\lambda,\lambda' \in \Lambda$ and $n \geq n_0$, then $\abs{\lambda-\lambda'} \leq R'$. By (L1), choose $n_1 \geq n_0$ such that for $n \geq n_1$ \begin{align*} \sup_{\abs{\lambda-\lambda'} \leq R'} \abs{(\mapn{\lambda} - \mapn{\lambda'}) - (\lambda - \lambda')} < \delta/2. \end{align*} \textbf{Claim.} $\mathop{\mathrm{sep}}(\Lambda_n) \geq \delta/2$ for $n\geq n_1$. If the claim is not true, then for some $n \geq n_1$ there exist two distinct points $\lambda, \lambda' \in \Lambda$ such that $\abs{\mapn{\lambda}-\mapn{\lambda'}} \leq \delta/2$.
Then $\abs{\lambda-\lambda'} \leq R'$ and consequently \begin{align*} \abs{\lambda-\lambda'} \leq \abs{(\mapn{\lambda} - \mapn{\lambda'}) - (\lambda - \lambda')} + \abs{\mapn{\lambda}-\mapn{\lambda'}} < \delta, \end{align*} contradicting the fact that $\Lambda$ is $\delta$-separated. (b) Since $\Lambda$ is relatively separated, we can split it into finitely many separated sets $\Lambda=\Lambda^1 \cup \ldots \cup \Lambda^L$ with $\mathop{\mathrm{sep}} (\Lambda ^k) >0$. Consider the sets defined by restricting the deformation $\tau_n$ to each $\Lambda^k$, \begin{align*} \Lambda^k_n := \sett{\mapn{\lambda}: \lambda \in \Lambda^k}. \end{align*} As proved above in (a), there exist $n_0 \in {\mathbb{N}}$ and $\delta>0$ such that $\mathop{\mathrm{sep}}(\Lambda^k_n) \geq \delta$ for all $n \geq n_0$ and $1 \leq k \leq L$. Therefore, using \eqref{eq_sep_relsep}, \begin{align*} \mathop{\mathrm{rel}}(\Lambda_n) \leq \sum_{k=1}^L \mathop{\mathrm{rel}}(\Lambda^k_n) \lesssim L \delta^{-d}, \qquad n \geq n_0, \end{align*} and the conclusion follows. (c) By (b) we may assume that each $\Lambda_n$ is relatively separated. Assume that $\rho(\Lambda) <\infty$. Then there exists $r>0$ such that every cube $Q_r(z) := z + [-r,r]^{d}$ intersects $\Lambda$. By (L1), there is $n_0 \in {\mathbb{N}}$ such that for $n \geq n_0$, \begin{align} \label{eq_lemma_reg} \sup_{\stackrel{\lambda, \lambda' \in \Lambda}{\abs{\lambda-\lambda'}_\infty \leq 6r}} \abs{(\mapn{\lambda} - \mapn{\lambda'}) - (\lambda - \lambda')}_\infty \leq r. \end{align} Let $R:=8r$ and $n \geq n_0$. We will show that every cube $Q_R(z)$ intersects $\Lambda_n$. This will give a uniform upper bound for $\rho(\Lambda_n)$. Suppose on the contrary that some cube $Q_R(z)$ does not meet $\Lambda_n$ and consider a larger radius $R' \geq R$ such that $\Lambda_n$ intersects the boundary but not the interior of $Q_{R'}(z)$. (This is possible because $\Lambda_n$ is relatively separated and therefore closed.)
Hence, there exists $\lambda \in \Lambda$ such that $\abs{\mapn{\lambda}-z}_\infty=R'$. Let us write \begin{align*} &(z-\mapn{\lambda})_k = \delta_k c_k, \qquad k=1,\ldots,d, \\ &\delta_k \in \{-1,1\}, \qquad k=1,\ldots,d, \\ &0 \leq c_k \leq R', \qquad k=1,\ldots,d \, , \end{align*} and $c_k = R' $ for some $k$. We now argue that we can select a point $\gamma \in \Lambda$ such that \begin{align} \label{eq_lga} (\lambda-\gamma)_k = -\delta_k c'_k, \qquad k=1,\ldots,d, \end{align} with coordinates \begin{align} \label{eq_lgb} 2r \leq c'_k \leq 6r, \qquad k=1,\ldots,d. \end{align} Using the fact that $\Lambda$ intersects each of the cubes $\sett{Q_r(2rj):j\in {\mathbb{Z}^d}}$, we first select an index $j \in {\mathbb{Z}^d}$ such that $\lambda \in Q_r(2rj)$. Second, we define a new index $j' \in {\mathbb{Z}^d}$ by $j_k' = j_k +2\delta _k$ for $k=1, \dots , d$. We finally select a point $\gamma \in \Lambda \cap Q_r(2rj')$. This procedure guarantees that \eqref{eq_lga} and \eqref{eq_lgb} hold true. See Figure~\ref{fig:1}. \begin{figure} \centering \includegraphics[width=0.45\textwidth,natwidth=356,natheight=270]{./points.png} \caption{The selection of the point $\gamma$ satisfying \eqref{eq_lga} and \eqref{eq_lgb}.} \label{fig:1} \end{figure} Since by \eqref{eq_lga} and \eqref{eq_lgb} $\abs{\lambda-\gamma}_\infty \leq 6r$, we can use \eqref{eq_lemma_reg} to obtain \begin{align*} (\mapn{\lambda}-\mapn{\gamma})_k = -\delta_k c''_k, \qquad k=1,\ldots,d, \end{align*} with coordinates \begin{align*} r \leq c''_k \leq 7r , \qquad k=1,\ldots,d. \end{align*} We write $(z-\mapn{\gamma})_k=(z-\mapn{\lambda})_k+(\mapn{\lambda}-\mapn{\gamma})_k=\delta_k(c_k-c''_k)$ and note that $-7r \leq c_k-c''_k \leq R' - r$. Hence, \begin{align*} \abs{z-\mapn{\gamma}}_\infty \leq \max\{R'-r,7r\}=R'-r, \end{align*} since $7r = R-r \leq R'-r$. This shows that $Q_{R'-r}(z)$ intersects $\Lambda_n$, contradicting the choice of $R'$. 
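For completeness, we record the coordinate computation behind \eqref{eq_lga} and \eqref{eq_lgb}: since $\lambda \in Q_r(2rj)$ and $\gamma \in Q_r(2rj')$ with $j'_k = j_k + 2\delta_k$, \begin{align*} (\lambda-\gamma)_k = (\lambda_k - 2rj_k) - (\gamma_k - 2rj_k - 4r\delta_k) - 4r\delta_k = -4r\delta_k + t_k, \qquad \abs{t_k} \leq 2r, \end{align*} so that $(\lambda-\gamma)_k = -\delta_k c'_k$ with $c'_k = 4r - \delta_k t_k \in [2r,6r]$.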
\end{proof} The following lemma relates Lipschitz convergence to the weak-limit techniques. \begin{lemma} \label{lemma_reg_key} Let $\Lambda \subseteq {\mathbb{R}^d}$ be relatively separated and let $\sett{\Lambda_n: n \geq 1}$ be a Lipschitz deformation of $\Lambda$. Then the following hold. \begin{itemize} \item[(a)] Let $\Gamma \subseteq {\mathbb{R}^d}$ and let $\sett{\lambda_n: n \geq 1} \subseteq \Lambda$ be a sequence in $\Lambda$. If $\Lambda_n - \mapn{\lambda_n} \xrightarrow{w} \Gamma$, then $\Gamma \in W(\Lambda)$. \item[(b)] Suppose that $\Lambda$ is relatively dense and $\sett{z_n: n \geq 1} \subseteq {\mathbb{R}^d}$ is an arbitrary sequence. If $\Lambda_n - z_n \xrightarrow{w} \Gamma$, then $\Gamma \in W(\Lambda)$. \end{itemize} \end{lemma} \begin{proof} (a) We first note that $\Gamma$ is relatively separated. Indeed, Lemma \ref{lemma_dens}(b) says that \begin{align*} \limsup_{n \rightarrow \infty} \mathop{\mathrm{rel}}(\Lambda_n - \mapn{\lambda_n}) = \limsup_{n \rightarrow \infty} \mathop{\mathrm{rel}}(\Lambda_n) < \infty, \end{align*} and Lemma~\ref{lemma_compactness}(c) implies that $\Gamma$ is relatively separated (and in particular closed). By extracting a subsequence, we may assume that $\Lambda - \lambda_n \xrightarrow{w} \Gamma'$ for some relatively separated set $\Gamma' \in W(\Lambda )$. We will show that $\Gamma'=\Gamma$ and consequently $\Gamma \in W(\Lambda)$. Let $R>0$ and $0<\varepsilon \leq 1$ be given. By (L1), there exists $n_0 \in {\mathbb{N}}$ such that \begin{align} \label{eq_lam_lamp} \lambda,\lambda' \in \Lambda , \abs{\lambda-\lambda'}\leq R, n \geq n_0 \Longrightarrow \abs{(\mapn{\lambda} - \mapn{\lambda'}) - (\lambda - \lambda')} \leq \varepsilon \, . \end{align} If $n \geq n_0$ and $z \in (\Lambda - \lambda_n) \cap B_R(0)$, then there exists $\lambda \in \Lambda$ such that $z=\lambda-\lambda_n$ and $\abs{\lambda-\lambda_n} \leq R$.
Consequently~\eqref{eq_lam_lamp} implies that \begin{align*} \abs{(\mapn{\lambda} - \mapn{\lambda_n}) - z} = \abs{(\mapn{\lambda} - \mapn{\lambda_n}) - (\lambda-\lambda_n)} \leq \varepsilon. \end{align*} This shows that \begin{align} \label{eq_subsets} (\Lambda - \lambda_n) \cap B_R(0) \subseteq (\Lambda_n - \mapn{\lambda_n})+B_\varepsilon(0) \,\, \mbox{ for } n \geq n_0\, . \end{align} Since $\Lambda - \lambda_n \xrightarrow{w} \Gamma'$ and $\Lambda_n - \mapn{\lambda_n} \xrightarrow{w} \Gamma$, it follows from \eqref{eq_subsets} and Lemma \ref{lemma_weak_inc} that $\Gamma' \subseteq \overline{\Gamma}=\Gamma$. For the reverse inclusion, let again $R>0$ and $0< \varepsilon \leq 1$. Let $R'>0$ and $n_0 \in {\mathbb{N}}$ be the numbers associated with $R$ in (L2). Using (L1), choose $n_1 \geq n_0$ such that \begin{align} \label{eq_lam_lamp_2} \lambda,\lambda' \in \Lambda, \abs{\lambda-\lambda'}\leq R', n \geq n_1 \quad \Longrightarrow \,\, \abs{(\mapn{\lambda} - \mapn{\lambda'}) - (\lambda - \lambda')} \leq \varepsilon \, . \end{align} If $n \geq n_1$ and $z \in (\Lambda_n - \mapn{\lambda_n}) \cap B_R(0)$, then $z=\mapn{\lambda}-\mapn{\lambda_n}$ for some $\lambda \in \Lambda$ and $\abs{\mapn{\lambda}-\mapn{\lambda_n}} \leq R$. Condition (L2)\, now implies that $\abs{\lambda-\lambda_n} \leq R'$ and therefore, using \eqref{eq_lam_lamp_2} with $\lambda'=\lambda_n$, we get \begin{align*} \abs{z- (\lambda - \lambda_n)} = \abs{(\mapn{\lambda} - \mapn{\lambda_n}) - (\lambda-\lambda_n)} \leq \varepsilon. \end{align*} Hence we have proved that \begin{align*} (\Lambda_n - \mapn{\lambda_n}) \cap B_R(0) \subseteq (\Lambda - \lambda_n)+B_\varepsilon(0), \mbox{ for } n\geq n_1\, . \end{align*} Since $\Lambda_n - \mapn{\lambda_n} \xrightarrow{w} \Gamma$ and $\Lambda - \lambda_n \xrightarrow{w} \Gamma'$, Lemma \ref{lemma_weak_inc} implies that $\Gamma \subseteq \overline{\Gamma'}=\Gamma'$. In conclusion $\Gamma ' = \Gamma \in W(\Lambda )$, as desired. 
(b) Since $\rho(\Lambda) < \infty$, Lemma \ref{lemma_dens}(c) implies that $\limsup_n \rho(\Lambda_n) < \infty$. By omitting finitely many $n$, there exists $L>0$ such that $\Lambda_n + B_L(0) = {\mathbb{R}^d}$ for all $n \in {\mathbb{N}}$. This implies the existence of a sequence $\sett{\lambda_n: n \geq 1} \subseteq \Lambda$ such that $\abs{z_n-\mapn{\lambda_n}} \leq L$. By passing to a subsequence we may assume that $z_n-\mapn{\lambda_n} \longrightarrow z_0$ for some $z_0 \in {\mathbb{R}^d}$. Since $\Lambda_n - z_n \xrightarrow{w} \Gamma$ and $z_n-\mapn{\lambda_n} \longrightarrow z_0$, it follows that $\Lambda_n -\mapn{\lambda_n} \xrightarrow{w} \Gamma + z_0$. By (a), we deduce that $\Gamma + z_0 \in W(\Lambda)$ and thus $\Gamma \in W(\Lambda)$, as desired. \end{proof} \section{Deformation of Gabor systems} We now prove the main results on the deformation of Gabor systems. The proofs combine the characterization of non-uniform Gabor frames and Riesz sequences without inequalities and the fine details of Lipschitz convergence. First we formulate the stability of Gabor frames under a class of nonlinear deformations. \begin{theo} \label{th_per_frame} Let $g \in M^1({\mathbb{R}^d})$, $\Lambda \subseteq {\mathbb{R}^{2d}}$ and assume that $\mathcal{G} (g,\Lambda )$ is a frame for $L^2(\Rdst) $. If $\Lambda _n$ is Lipschitz convergent to $\Lambda $, then $\mathcal{G} (g, \Lambda _n)$ is a frame for all sufficiently large $n$. \end{theo} Theorem \ref{th_main_intro}(a) of the Introduction now follows by combining Theorem~\ref{th_per_frame} and Lemma \ref{lemma_reg_conv}. Note that in Theorem~\ref{th_main_intro} we may assume without loss of generality that $T_n(0)=0$, because the deformation problem is invariant under translations. \begin{proof} Suppose that $\mathcal{G}(g,\Lambda)$ is a frame. According to Lemma \ref{lemma_set_must_be}, $\Lambda$ is relatively separated and relatively dense. Now suppose that the conclusion does not hold. 
By passing to a subsequence we may assume that $\mathcal{G}(g,\Lambda_n)$ fails to be a frame for all $n\in {\mathbb{N}} $. By Theorem \ref{th_char_frame}, every $\mathcal{G} (g, \Lambda _n)$ also fails to be an $\infty $-frame for $M^\infty (\Rdst )$. It follows that for every $n \in {\mathbb{N}}$ there exists $f_n \in M^\infty({\mathbb{R}^d})$ such that $\norm{V_g f_n}_{\infty}=1$ and \begin{align*} \norm{C_{g, \Lambda_n}(f_n)}_{\ell^\infty(\Lambda_n)}= \sup_{\lambda \in \Lambda} \abs{V_g f_n(\mapn{\lambda})} \longrightarrow 0, \qquad \mbox{as } n \longrightarrow \infty. \end{align*} For each $n \in {\mathbb{N}}$, let $z_n \in {\mathbb{R}^{2d}}$ be such that $\abs{V_g f_n(z_n)} \geq 1/2$ and let us consider $h_n:=\pi(-z_n) f_n$. By passing to a subsequence we may assume that $h_n \rightarrow h$ in $\sigma(M^\infty,M^1)$ for some function $h \in M^\infty$. Since $\abs{V_g h_n(0)}=\abs{V_g f_n(z_n)} \geq 1/2$, it follows that $\abs{V_g h(0)} \geq 1/2$ and the weak$^*$-limit $h$ is not zero. In addition, by Lemma \ref{lemma_dens}, \begin{align*} \limsup_{n\rightarrow \infty} \mathop{\mathrm{rel}}(\Lambda_n - z_n) = \limsup_{n\rightarrow \infty} \mathop{\mathrm{rel}}(\Lambda_n) < \infty. \end{align*} Hence, using Lemma \ref{lemma_compactness} and passing to a further subsequence, we may assume that $\Lambda_n - z_n \xrightarrow{w} \Gamma$, for some relatively separated set $\Gamma \subseteq {\mathbb{R}^{2d}}$. Since $\Lambda$ is relatively dense, Lemma \ref{lemma_reg_key} guarantees that $\Gamma \in W(\Lambda)$. Let $\gamma \in \Gamma$ be arbitrary. Since $\Lambda_n - z_n \xrightarrow{w} \Gamma$, there exists a sequence $\sett{\lambda_n}_{n\in{\mathbb{N}}} \subseteq \Lambda$ such that $\mapn{\lambda_n} - z_n \rightarrow \gamma$. By Lemma \ref{lemma_stft}, the fact that $h_n \rightarrow h$ in $\sigma(M^\infty,M^1)$ implies that $V_g h_n \rightarrow V_g h$ uniformly on compact sets.
Consequently, by \eqref{eq_tf_stft}, \begin{align*} \abs{V_g h(\gamma)} = \lim_n \abs{V_g h_n(\mapn{\lambda_n} - z_n)} = \lim_n \abs{V_g f_n(\mapn{\lambda_n})}=0. \end{align*} Hence, $h \not\equiv 0$ and $V_g h \equiv 0$ on $\Gamma \in W(\Lambda)$. According to Theorem \ref{th_char_frame}(vi), $\mathcal{G}(g,\Lambda)$ is not a frame, thus contradicting the initial assumption. \end{proof} The corresponding deformation result for Gabor Riesz sequences reads as follows. \begin{theo} \label{th_per_riesz} Let $g \in M^1({\mathbb{R}^d})$, $\Lambda \subseteq {\mathbb{R}^{2d}}$, and assume that $\mathcal{G} (g,\Lambda )$ is a Riesz sequence in $L^2(\Rdst) $. If $\Lambda _n$ is Lipschitz convergent to $\Lambda $, then $\mathcal{G} (g, \Lambda _n)$ is a Riesz sequence for all sufficiently large $n$. \end{theo} \begin{proof} Assume that $\mathcal{G}(g,\Lambda)$ is a Riesz sequence. Lemma~\ref{lemma_set_must_be} implies that $\Lambda$ is separated. With Lemma \ref{lemma_dens} we may extract a subsequence such that each $\Lambda_n$ is separated with a uniform separation constant, i.e., \begin{align} \label{eq_unif_sep} \inf_{n \geq 1} \mathop{\mathrm{sep}}(\Lambda_n) >0. \end{align} We argue by contradiction and assume that the conclusion does not hold. By passing to a further subsequence, we may assume that $\mathcal{G}(g,\Lambda_n)$ fails to be a Riesz sequence for all $n\in {\mathbb{N}}$. As a consequence of Theorem~\ref{th_char_riesz}(iii), there exist sequences $c^n \in \ell^\infty(\Lambda_n)$ such that $\norm{c^n}_\infty=1$ and \begin{align} \label{eq_goestozero} \norm{C^*_{g,\Lambda_n}(c^n)}_{M^\infty}= \Bignorm{\sum_{\lambda \in \Lambda} c^n_{\mapn{\lambda}} \pi(\tau_n(\lambda))g}_{M^\infty} \longrightarrow 0, \mbox{ as }n \longrightarrow \infty. \end{align} Thus $g, \Lambda _n, c^n$ satisfy the assumptions of Lemma~\ref{lemma_zero_seq}.
The conclusion of Lemma~\ref{lemma_zero_seq} yields a subsequence $(n_k)$, a separated set $\Gamma \subseteq {\mathbb{R}^{2d}}$, a non-zero sequence $c \in \ell^\infty(\Gamma)$, and a sequence of points $\sett{\lambda_{n_k}: k \geq 1} \subseteq \Lambda$ such that \begin{align*} \Lambda_{n_k} - \tau _{n_k} (\lambda_{n_k}) \xrightarrow{w} \Gamma \end{align*} and $$\sum _{\gamma \in \Gamma} c_\gamma \pi (\gamma ) g = C^*_{g, \Gamma} (c)=0 \, . $$ By Lemma \ref{lemma_reg_key}, we conclude that $\Gamma \in W(\Lambda)$. According to condition (vi) of Theorem \ref{th_char_riesz}, $\mathcal{G}(g,\Lambda)$ is not a Riesz sequence, which is a contradiction. \end{proof} \begin{rem} Uniformity of the bounds: {\rm Under the assumptions of Theorem \ref{th_per_frame} it follows that there exists $n_0 \in {\mathbb{N}}$ and constants $A,B>0$ such that for $n \geq n_0$, \begin{align*} A \norm{f}_2^2 \leq \sum_{\lambda \in \Lambda} \abs{\ip{f}{\pi(\mapn{\lambda})g}}^2 \leq B \norm{f}_2^2, \qquad f \in L^2({\mathbb{R}^d}). \end{align*} The uniformity of the upper bound follows from the fact that $g \in M^1({\mathbb{R}^d})$ and $\sup_n \mathop{\mathrm{rel}}(\Lambda_n) < \infty$ (cf. Section \ref{sec_maps}). For the lower bound, note that the proof of Theorem \ref{th_per_frame} shows that there exists a constant $A'>0$ such that, for $n \geq n_0$, \begin{align*} A' \norm{f}_{M^\infty} \leq \sup_{\lambda \in \Lambda} \abs{\ip{f}{\pi(\mapn{\lambda})g}}, \qquad f \in M^\infty({\mathbb{R}^d}). \end{align*} This property implies a uniform $L^2$-bound as is made explicit in Remarks~\ref{rem_unif_loc} and \ref{rem_unif_loc_2}.} \end{rem} To show why local preservation of differences is related to the stability of Gabor frames, let us consider the following example. 
\begin{example} \label{ex_go_wrong} {\rm From~\cite{asfeka13} or from Theorem~\ref{th_per_frame} we know that if $g\in M^1(\Rdst )$ and $\mathcal{G}(g,\Lambda)$ is a frame, then $\mathcal{G} (g, (1+1/n) \Lambda)$ is also a frame for sufficiently large $n$. For every $n$ we now construct a deformation of the form $\mapn{\lambda} := \alpha_{\lambda,n} \lambda$, where $\alpha_{\lambda,n}$ is either $1$ or $(1+1/n)$ with roughly half of the multipliers equal to $1$. Since only a subset of $\Lambda $ is moved, one might think that this deformation is ``smaller'' than the full dilation $\lambda \to (1+\tfrac{1}{n}) \lambda $, and thus that it should preserve the spanning properties of the corresponding Gabor system. Surprisingly, this is completely false. We now indicate how the coefficients $\alpha _{\lambda ,n}$ need to be chosen. Let $\Rtdst = \bigcup _{l=0}^\infty B_l$ be a partition of $\Rtdst $ into the annuli \begin{align*} &B_l = \{ z\in \Rtdst : (1+\tfrac{1}{n})^l \leq |z| < (1+\tfrac{1}{n})^{l+1} \}, \qquad l \geq 1, \\ &B_0 := \{ z\in \Rtdst : |z| < (1+\tfrac{1}{n}) \} \end{align*} and define $$ \alpha _{\lambda ,n} = \begin{cases} 1 & \text{ if } \lambda \in B_{2l} \text{ for some } l \geq 0, \\ 1+\tfrac{1}{n} & \text{ if } \lambda \in B_{2l+1} \text{ for some } l \geq 0 \, . \end{cases} $$ Since $(1+\tfrac{1}{n} ) B_{2l+1} = B_{2l+2}$, the deformed set $\Lambda _n= \tau _n (\Lambda ) = \{\alpha _{\lambda ,n} \lambda : \lambda \in \Lambda \} $ is contained in $\bigcup _{l=0}^\infty B_{2l}$ and thus contains arbitrarily large holes. So $\rho (\Lambda _n) = \infty $ and $D^- (\Lambda _n) = 0$. Consequently the corresponding Gabor system $\mathcal{G} (g,\Lambda _n)$ cannot be a frame. See Figure~\ref{figweird} for a plot of this deformation in dimension $1$. } \end{example} \begin{figure}[!t] \includegraphics[width=0.6\textwidth,natwidth=442,natheight=252]{./ex.png} \caption{A deformation ``dominated'' by the dilation $\lambda \to (1+1/n)\lambda $.
} \label{figweird} \end{figure} \section{Appendix} \label{sec_app} We finally prove Theorem \ref{th_main_mol}. Both the statement and the proof are modeled on Sj\"ostrand's treatment of Wiener's lemma for convolution-dominated matrices \cite{sj95}. Several stability results are built on his techniques \cite{su07-5, albakr08, shsu09, te10, su10-2}. The following proposition exploits the flexibility of Sj\"ostrand's methods to transfer lower bounds for a matrix from one value of $p$ to all others, under the assumption that the entries of the matrix decay away from a collection of lines. \subsection{A variation of Sj\"ostrand's Wiener-type lemma} Let $G$ be the group $G := \sett{-1,1}^d$ with coordinatewise multiplication, and let $\sigma \in G$ act on ${\mathbb{R}^d}$ by $\sigma x := (\sigma_1 x_1, \ldots, \sigma_d x_d)$. Each $\sigma \in G$ is its own group inverse, $\sigma^{-1}=\sigma$, and the orbit of $x\in {\mathbb{R}^d}$ under $G$ is $G \cdot x := \sett{\sigma x: \sigma \in G}$. Consequently, $\bZ^d = \bigcup _{k \in {\mathbb{N}} _0^d } G\cdot k$ is a disjoint union. We note that the cardinality of $G\cdot x$ depends on the number of non-zero coordinates of $x\in \Rdst $. \begin{prop} \label{prop_loc} Let $\Lambda$ and $\Gamma$ be relatively separated subsets of ${\mathbb{R}^d}$. Let $A \in {\mathbb{C}}^{\Lambda \times \Gamma}$ be a matrix such that \begin{align*} \abs{A_{\lambda,\gamma}} \leq \sum_{\sigma \in G} \Theta(\lambda-\sigma \gamma), \qquad \lambda \in \Lambda, \gamma \in \Gamma, \qquad \mbox{for some } \Theta \in W(L^\infty,L^1)({\mathbb{R}^d}). \end{align*} Assume that there exist $p\in [1,\infty]$ and $C_0>0$ such that \begin{equation} \label{eq:c4} \norm{Ac}_{p} \geq C_0 \norm{c}_{p} \quad \quad \text{ for all } c\in \ell ^{p} (\Gamma) \, .
\end{equation} Then there exists a constant $C>0$ such that, for all $q \in [1, \infty]$, \begin{equation} \label{eq:c5} \norm{Ac}_{q} \geq C \norm{c}_q \quad \quad \text{ for all } c\in \ell ^q (\Gamma) \, . \end{equation} In other words, if $A$ is bounded below on some $\ell ^p$, then $A$ is bounded below on $\ell ^q$ for every $q\in [1,\infty ]$. \end{prop} \begin{proof} By replacing $\Theta(x)$ with $\sum_{y \in G\cdot x} \Theta(y)$ we may assume that $\Theta$ is $G$-invariant, i.e., $\Theta(x)=\Theta(\sigma x)$ for all $\sigma \in G$. \textbf{Step 1.} \emph{Construction of a partition of unity.} Let $\psi \in C^\infty(\Rdst)$ be $G$-invariant and such that $0 \leq \psi \leq 1$, $\operatorname{supp}(\psi) \subseteq B_2(0)$ and \begin{align*} \sum_{k\in{\mathbb{Z}^d}} \psi(\cdot-k) \equiv 1. \end{align*} For $\varepsilon>0$, define $\psi^\varepsilon_k (x):= \psi(\varepsilon x -k)$, $I:={\mathbb{N}}_0^d$, and \begin{align*} \varphi^\varepsilon_k := \sum_{j \in G\cdot k} \psi^\varepsilon_j. \end{align*} Since $\bZ^d = \bigcup _{k\in I} G\cdot k$ is a disjoint union, it follows that $$ \sum_{k \in I} \varphi^\varepsilon_k = \sum _{k\in I} \sum _{j \in G\cdot k} \psi ^\varepsilon_j \equiv 1 \, . $$ Thus $\{ \varphi^\varepsilon_k : k\in I\}$ is a partition of unity, and it is easy to see that it has the following additional properties: \begin{itemize} \item $\Phi^\varepsilon := \sum_{k \in I} \left( \varphi^\varepsilon_k \right)^2 \asymp 1$, \item $0 \leq \varphi^\varepsilon_k \leq 1,$ \item $\abs{\varphi^\varepsilon_k(x) - \varphi^\varepsilon_k(y)} \lesssim \varepsilon \abs{x-y},$ \item $\varphi^\varepsilon_k(x)=\varphi^\varepsilon_k(\sigma x)$ for all $\sigma \in G$. \end{itemize} Combining the last three properties, we obtain that \begin{align} \label{eq_improved_lip} \abs{\varphi^\varepsilon_k(x) - \varphi^\varepsilon_k(y)} \lesssim \min\{1, \varepsilon d(x,G \cdot y)\}, \end{align} where $d(x,E) := \inf\sett{\abs{x-y}: y \in E}$.
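For completeness, \eqref{eq_improved_lip} can be verified as follows: for every $\sigma \in G$, the $G$-invariance of $\varphi^\varepsilon_k$ together with the boundedness and Lipschitz properties gives \begin{align*} \abs{\varphi^\varepsilon_k(x) - \varphi^\varepsilon_k(y)} = \abs{\varphi^\varepsilon_k(x) - \varphi^\varepsilon_k(\sigma y)} \lesssim \min\{1, \varepsilon \abs{x-\sigma y}\}, \end{align*} and taking the infimum over $\sigma \in G$ yields \eqref{eq_improved_lip}.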
\textbf{Step 2.} \emph{Commutators.} For a matrix $B = (B_{\lambda,\gamma})_{\lambda \in \Lambda, \gamma \in \Gamma} \in {\mathbb{C}}^{\Lambda \times \Gamma}$ we denote the Schur norm by \begin{align*} \bignorm{B}_{\mbox{Schur}(\Gamma \to \Lambda)} := \max\Big\{ \sup_{\gamma \in \Gamma } \sum_{\lambda\in \Lambda } \abs{B_{\lambda,\gamma}}, \sup_{\lambda \in \Lambda } \sum_{\gamma \in \Gamma } \abs{B_{\lambda,\gamma}} \Big\}\, . \end{align*} Let us assume that $A$ is bounded below on $\ell^p(\Gamma )$. After multiplying $A$ by a constant, we may assume that \begin{align*} \norm{c}_p \leq \norm{Ac}_p, \qquad c \in \ell^p(\Gamma). \end{align*} For given $\varepsilon>0$ and $k \in I$ let $\varphi^\varepsilon_k c := {\varphi^\varepsilon_k}\big| _{\Gamma} \cdot c$ denote the multiplication operator by $\varphi ^\varepsilon _k$ and $[A,\varphi^\varepsilon_k] = A \varphi ^\varepsilon _k - \varphi ^\varepsilon _k A $ the commutator with $A$. Now let us estimate \begin{align} \nonumber \norm{\varphi^\varepsilon_k c}_p & \leq \norm{A \varphi^\varepsilon_k c}_p \leq \norm{\varphi^\varepsilon_k Ac}_p + \norm{[A,\varphi^\varepsilon_k] c}_p \\ \nonumber & \leq \norm{\varphi^\varepsilon_k Ac}_p + \sum_{j\in I} \norm{[A,\varphi^\varepsilon_k]\varphi^\varepsilon_j (\Phi^\varepsilon)^{-1} \varphi^\varepsilon_jc}_p \\ \label{eq_bound} & \leq \norm{\varphi^\varepsilon_k Ac}_p + K \sum_{j\in I} V^\varepsilon_{j,k} \norm{\varphi^\varepsilon_jc}_p, \end{align} where $K = \max _x \Phi ^\varepsilon (x)^{-1} $ and \begin{align} \label{eq_def_V} V^\varepsilon_{j,k} := \norm{[A,\varphi^\varepsilon_k]\varphi^\varepsilon_j}_{\mbox{Schur}(\Gamma \to \Lambda)}, \qquad j,k \in I. \end{align} The goal of the following steps is to estimate the Schur norm of the matrix $V^\varepsilon $ with entries $V^\varepsilon _{j,k}$ and to show that $\norm{V^\varepsilon } _{\mathrm{Schur}(I\to I)} \to 0$ as $\varepsilon \to 0^+$. \textbf{Step 3.}
\emph{Convergence of the entries $V^\varepsilon _{j,k}$.} We show that \begin{align} \label{eq_V_unif} \sup_{j,k \in I} V^\varepsilon_{j,k} \longrightarrow 0, \mbox{ as }\varepsilon \longrightarrow 0^+. \end{align} We first note that the matrix entries of $[A,\varphi^\varepsilon_k]\varphi^\varepsilon_j$, for $j,k \in I$, are \begin{align*} ([A,\varphi^\varepsilon_k]\varphi^\varepsilon_j)_{\lambda, \gamma} = -A_{\lambda,\gamma} \varphi^\varepsilon_j(\gamma)(\varphi^\varepsilon_k(\lambda)-\varphi^\varepsilon_k(\gamma)), \qquad \gamma \in \Gamma, \lambda \in \Lambda. \end{align*} Using \eqref{eq_improved_lip} and the hypothesis on $A$, we bound the entries of the commutator by \begin{align*} &\abs{([A,\varphi^\varepsilon_k]\varphi^\varepsilon_j)_{\lambda,\gamma}} \lesssim \sum_{\sigma \in G} \Theta(\lambda-\sigma \gamma) \min\{1, \varepsilon d(\lambda,G\cdot\gamma)\} \\ &\qquad \leq \sum_{\sigma \in G} \Theta(\lambda-\sigma \gamma) \min\{1, \varepsilon \abs{\lambda-\sigma\gamma}\}. \end{align*} Hence, if we define $\Theta^\varepsilon(x) := \Theta(x) \min\{1,\varepsilon\abs{x}\}$, then by \eqref{eq:c7} \begin{align*} V^\varepsilon_{j,k} = \norm{[A,\varphi^\varepsilon_k]\varphi^\varepsilon_j}_{\mbox{Schur}(\Gamma \to \Lambda)} \lesssim \max\{\mathop{\mathrm{rel}}(\Lambda),\mathop{\mathrm{rel}}(\Gamma)\} \norm{\Theta^\varepsilon}_{W(L^\infty,L^1)}. \end{align*} Since $\Theta \in W(L^\infty,L^1)$, it follows that $\norm{\Theta^\varepsilon}_{W(L^\infty,L^1)} \longrightarrow 0$, as $\varepsilon \longrightarrow 0^+$. This proves \eqref{eq_V_unif}. \textbf{Step 4.} \emph{Refined estimates for $V^\varepsilon _{j,k}$.} For $s \in {\mathbb{Z}^d}$ let us define \begin{align} \label{eq_def_menv} \triangle^\varepsilon(s) := \sum_{t \in {\mathbb{Z}^d} : \abs{\varepsilon t-s}_\infty \leq 5} \sup_{z\in[0,1]^{d}+t} \abs{\Theta (z) }\, .
\end{align} \textbf{Claim:} If $|j-k|>4$ and $\varepsilon \leq 1$, then \begin{align*} V^\varepsilon_{j,k} \lesssim \sum_{s \in G \cdot j - G \cdot k} \triangle^\varepsilon(s)\, . \end{align*} If $\abs{k-j}>4$, then $\varphi^\varepsilon_j(\gamma) \varphi^\varepsilon_k(\gamma)=0$. Indeed, if this were not the case, then $\varphi^\varepsilon_j(\gamma) \not= 0$ and $\varphi^\varepsilon_k(\gamma) \not=0$. Consequently, $\abs{\varepsilon \gamma - \sigma j} \leq 2$ and $\abs{\varepsilon \gamma - \tau k} \leq 2$ for some $\sigma, \tau \in G$. Hence, $d(k, G\cdot j) \leq \abs{k-\tau^{-1}\sigma j}=\abs{\tau k-\sigma j} \leq 4$. Since $k,j \in I$, this implies that $\abs{k-j} \leq 4$, contradicting the assumption. As a consequence, for $\abs{k-j}>4$, the matrix entries of $[A,\varphi^\varepsilon_k]\varphi^\varepsilon_j$ simplify to \begin{align*} ([A,\varphi^\varepsilon_k]\varphi^\varepsilon_j)_{\lambda,\gamma} = -A_{\lambda,\gamma} \varphi^\varepsilon_j(\gamma)\varphi^\varepsilon_k(\lambda), \qquad \gamma \in \Gamma, \lambda \in \Lambda \, . \end{align*} Hence, for $\abs{k-j}> 4$ we have the estimate \begin{align*} &\abs{([A,\varphi^\varepsilon_k]\varphi^\varepsilon_j)_{\lambda,\gamma}} \leq \sum_{\sigma \in G} \Theta(\lambda-\sigma \gamma) \varphi^\varepsilon_j(\gamma)\varphi^\varepsilon_k(\lambda) = \sum_{\sigma \in G} \Theta(\lambda-\sigma \gamma) \varphi^\varepsilon_j(\sigma \gamma)\varphi^\varepsilon_k(\lambda). \end{align*} Consequently, for $\abs{k-j}>4$ we have \begin{align*} &\sup_{\lambda \in \Lambda} \sum_{\gamma \in \Gamma} \abs{([A,\varphi^\varepsilon_k]\varphi^\varepsilon_j)_{\lambda,\gamma}} \leq \sup_{\lambda \in \Lambda} \sum_{\gamma \in \Gamma} \sum_{\sigma \in G} \Theta(\lambda-\sigma \gamma) \varphi^\varepsilon_j(\sigma \gamma)\varphi^\varepsilon_k(\lambda) \\ &\quad \lesssim \sup_{\lambda \in \Lambda} \sum_{\gamma \in G \cdot \Gamma} \Theta(\lambda-\gamma) \varphi^\varepsilon_j(\gamma)\varphi^\varepsilon_k(\lambda). 
\end{align*} If $\varphi^\varepsilon_j(\gamma) \varphi^\varepsilon_k(\lambda) \not = 0$, then $\abs{\varepsilon \gamma - \sigma j} \leq 2$ and $\abs{\varepsilon \lambda - \tau k} \leq 2$ for some $\sigma, \tau \in G$. The triangle inequality implies that \begin{align} \label{eq_in} d(\varepsilon (\lambda - \gamma), G \cdot j - G \cdot k) \leq 4. \end{align} Hence, we further estimate \begin{align} \label{eq_four_terms} &\sup_{\lambda \in \Lambda} \sum_{\gamma \in \Gamma} \abs{([A,\varphi^\varepsilon_k]\varphi^\varepsilon_j)_{\lambda,\gamma}} \lesssim \sup_{\lambda \in \Lambda} \sum_{s \in G \cdot j - G \cdot k} \sum_{ \stackrel{\gamma \in G\cdot\Gamma,}{\abs{\varepsilon (\lambda - \gamma)-s}\leq 4} } \Theta(\gamma-\lambda). \end{align} For fixed $s \in G \cdot j - G \cdot k$ and $\varepsilon \leq 1$ we bound the inner sum in \eqref{eq_four_terms} by \begin{align*} &\sum_{\gamma \in G \cdot \Gamma: \abs{\varepsilon (\lambda - \gamma) - s} \leq 4} \Theta(\gamma-\lambda) \leq \sum_{t\in{\mathbb{Z}^d}} \sum_{ \stackrel{\gamma \in G \cdot \Gamma: \abs{\varepsilon (\lambda - \gamma) - s} \leq 4}{(\lambda - \gamma) \in [0,1]^{d}+t}} \Theta(\gamma-\lambda) \\ &\quad \lesssim \mathop{\mathrm{rel}}(\lambda-G\cdot\Gamma) \sum_{t\in{\mathbb{Z}^d} : \abs{\varepsilon t-s}_\infty \leq 5} \sup_{z\in [0,1]^{d}+t} \abs{\Theta (z)} \\ &\quad \lesssim \mathop{\mathrm{rel}}(\Gamma) \sum_{t\in{\mathbb{Z}^d} : \abs{\varepsilon t-s}_\infty \leq 5 } \sup_{z\in [0,1]^{d}+t} \abs{\Theta (z) } \\ &\quad\lesssim \triangle^\varepsilon(s). \end{align*} Substituting this bound in \eqref{eq_four_terms}, we obtain \begin{align*} \sup_{\lambda \in \Lambda} \sum_{\gamma \in \Gamma} \abs{([A,\varphi^\varepsilon_k]\varphi^\varepsilon_j)_{\lambda,\gamma}} \lesssim \sum_{s \in G \cdot j - G \cdot k} \triangle^\varepsilon(s). 
\end{align*} Inverting the roles of $\lambda$ and $\gamma$ we obtain a similar estimate, and the combination yields \begin{align} \label{eq_V_kj} V^\varepsilon_{j,k} \lesssim \sum_{s \in G \cdot j - G \cdot k} \triangle^\varepsilon(s), \quad \mbox{ for }\abs{j-k}>4 \mbox{ and }\varepsilon \leq 1, \end{align} as claimed. \textbf{Step 5.} \emph{Schur norm of $V^\varepsilon $.} Let us show that $\norm{V^\varepsilon }_{\mathrm{Schur}(I\to I)} \to 0$, i.e., \begin{align} \label{eq_schur_jk} \sup_{k\in I} \sum_{j\in I} V^\varepsilon_{j,k}, \quad \sup_{j\in I} \sum_{k\in I} V^\varepsilon_{j,k} \longrightarrow 0, \mbox{ as }\varepsilon \longrightarrow 0^+. \end{align} We only treat the first limit; the second limit is analogous. Using the definition of $\triangle^\varepsilon$ from \eqref{eq_def_menv} and the fact that $\Theta \in W(L^\infty , L^1)$, we obtain that \begin{align} \label{bound_menv} \sum_{s\in{\mathbb{Z}^d},\abs{s}>6\sqrt{d}} \triangle^\varepsilon(s) \leq \sum_{s\in{\mathbb{Z}^d},\abs{s}_\infty>6} \triangle^\varepsilon(s) \lesssim \sum_{t \in {\mathbb{Z}^d}, \abs{t}_\infty>1/\varepsilon} \sup_{z\in[0,1]^{d}+t} \abs{\Theta (z) } \longrightarrow 0, \mbox{ as }\varepsilon \longrightarrow 0^+. \end{align} Fix $k \in I$ and use \eqref{eq_V_kj} to estimate \begin{align*} &\sum_{j: \abs{j-k} > 6\sqrt{d}} V^\varepsilon_{j,k} \lesssim \sum_{j: \abs{j-k} > 6\sqrt{d}} \sum_{s \in G \cdot j - G \cdot k} \triangle^\varepsilon(s) \leq \sum_{\sigma,\tau \in G} \sum_{j: \abs{j-k} > 6\sqrt{d}} \triangle^\varepsilon(\sigma j - \tau k). \end{align*} If $\abs{j-k} > 6\sqrt{d}$ and $j,k\in I$, then also $\abs{s}=\abs{\sigma j - \tau k}>6\sqrt{d}$ for all $\sigma , \tau \in G$. Hence we obtain the bound \begin{align*} \sum_{j\in I: \abs{j-k} > 6\sqrt{d}} V^\varepsilon_{j,k} \lesssim \sum_{\sigma,\tau \in G} \sum_{ \stackrel{s \in {\mathbb{Z}^d}}{\abs{s}>6\sqrt{d}}} \triangle^\varepsilon(s) \lesssim \sum_{ \stackrel{s \in {\mathbb{Z}^d}}{\abs{s}>6\sqrt{d}}} \triangle^\varepsilon(s).
\end{align*} For the sum over $\{j: \abs{j-k} \leq 6 \sqrt{d}\}$ we use the bound \begin{align*} &\sum_{j: \abs{j-k} \leq 6\sqrt{d}} V^\varepsilon_{j,k} \leq \#\{j: \abs{j-k} \leq 6\sqrt{d}\} \sup_{s,t} V^\varepsilon_{s,t} \lesssim \sup_{s,t} V^\varepsilon_{s,t}. \end{align*} Hence, \begin{align*} \sum_{j \in I } V^\varepsilon_{j,k} \lesssim \sup_{s,t} V^\varepsilon_{s,t} + \sum_{\abs{s}>6\sqrt{d}} \triangle^\varepsilon(s), \end{align*} which tends to 0 uniformly in $k$ as $\varepsilon \rightarrow 0^+$ by \eqref{eq_V_unif} and \eqref{bound_menv}. \textbf{Step 6.} \emph{The stability estimate.} According to the previous step we may choose $\varepsilon >0$ such that \begin{align*} \norm{V^\varepsilon a}_q \leq \frac{1}{2K} \norm{a}_q, \qquad a\in \ell ^q (I), \end{align*} uniformly for all $q\in [1,\infty ]$; here we use that the Schur norm dominates the operator norm on $\ell^q(I)$ for every $q \in [1,\infty]$. Using this bound in \eqref{eq_bound}, we obtain that \begin{align*} \left(\sum_{k \in I} \norm{\varphi^\varepsilon_k c}_p^q\right)^{1/q} \leq \left(\sum_{k \in I}\norm{\varphi^\varepsilon_kAc}_p^q\right)^{1/q} + \frac{1}{2} \left(\sum_{k \in I} \norm{\varphi^\varepsilon_k c}_p^q\right)^{1/q}, \end{align*} with the usual modification for $q=\infty$. Hence, \begin{align} \label{eq_sj_a} \left(\sum_{k \in I} \norm{\varphi^\varepsilon_k c}_p^q\right)^{1/q} \leq 2 \left(\sum_{k \in I}\norm{\varphi^\varepsilon_k Ac}_p^q\right)^{1/q}. \end{align} \textbf{Step 7.} \emph{Comparison of $\ell ^p$-norms.} Let us show that for every $1 \leq q \leq \infty$, \begin{align} \label{eq_sj_b} \left(\sum_{k \in I} \norm{\varphi^\varepsilon_k a}_p^q \right)^{1/q} \asymp \norm{a}_q, \qquad a \in \ell^q(\Gamma), \end{align} with constants independent of $p,q$, and the usual modification for $q=\infty$. First note that for fixed $\varepsilon >0$ \begin{align*} N := \sup _{k\in I}\# \operatorname{supp}({\varphi^\varepsilon_k}\big| _{\Gamma}) = \sup_{k\in I} \#\set{\gamma \in \Gamma}{\varphi^\varepsilon_k(\gamma) \not= 0} < \infty.
\end{align*} Then for $q \in [1,\infty]$ we have \begin{align*} \norm{\varphi^\varepsilon_k a}_p \leq \norm{\varphi^\varepsilon_k a}_1 \leq N \norm{\varphi^\varepsilon_k a}_\infty \leq N \norm{\varphi^\varepsilon_k a}_q, \qquad a \in \ell^\infty(\Gamma), \end{align*} and similarly \begin{align*} \norm{\varphi^\varepsilon_k a}_q \leq N \norm{\varphi^\varepsilon_k a}_p, \qquad a \in \ell^\infty(\Gamma). \end{align*} As a consequence, \begin{align} \label{eq_sj_pq} \Big( \sum_{k \in I} \norm{\varphi^\varepsilon_k a}_p^q \Big)^{1/q} \asymp \Big( \sum_{k \in I} \norm{\varphi^\varepsilon_k a}_q^q \Big)^{1/q}, \qquad a \in \ell^q(\Gamma) \end{align} with constants independent of $p$ and $q$ and with the usual modification for $q=\infty$. Next note that \begin{align} \label{eq_sj_covnum} \eta := \sup_{\varepsilon>0} \sup_{x \in {\mathbb{R}^d}} \#\set{k \in I}{\varphi^\varepsilon_k(x) \not= 0} = \sup_{\varepsilon>0} \sup_{x \in {\mathbb{R}^d}} \# \{k \in I\cap B_2(\varepsilon x)\} <\infty, \end{align} because $\operatorname{supp}(\psi) \subseteq B_2(0)$. So we obtain the following simple bound for all $x \in {\mathbb{R}^d}$: \begin{align*} 1 = \sum_{k\in I} \varphi^\varepsilon_k(x) = \sum_{k\in I: \varphi^\varepsilon_k(x) \not= 0} \varphi^\varepsilon_k(x) \leq \eta \, \sup_{k \in I} \varphi^\varepsilon_k(x). \end{align*} Therefore, for all $x \in {\mathbb{R}^d}$, \begin{align} \label{eq_sj_partq} \frac{1}{\eta} \leq \sup _{k\in I} \varphi^\varepsilon_k(x) \leq \Big( \sum_{k\in I} (\varphi^\varepsilon_k(x))^q \Big)^{1/q} \leq \sum_{k\in I} \varphi^\varepsilon_k(x) = 1 \, . 
\end{align} If $q<\infty$ and $a \in \ell^q(\Gamma)$, then \begin{align*} \frac{1}{\eta^q} \sum_{\gamma \in \Gamma} \abs{a_\gamma}^q \leq \sum_{\gamma \in \Gamma} \sum_{k\in I} (\varphi^\varepsilon_k(\gamma))^q \abs{a_\gamma}^q \leq \sum_{\gamma \in \Gamma} \abs{a_\gamma}^q, \end{align*} which implies that \begin{align} \label{eq_sj_qq} \Big(\sum_{k \in I} \norm{\varphi^\varepsilon_k a}_q^q\Big)^{1/q} = \Big(\sum_{\gamma \in \Gamma} \sum_{k\in I} (\varphi^\varepsilon_k(\gamma))^q \abs{a_\gamma}^q \Big)^{1/q} \asymp \norm{a}_q, \end{align} with constants independent of $q$. The corresponding statement for $q=\infty$ follows similarly. Finally, the combination of \eqref{eq_sj_pq} and \eqref{eq_sj_qq} yields \eqref{eq_sj_b}. \Step{Step 8}. We finally combine the norm equivalence~\eqref{eq_sj_b} with \eqref{eq_sj_a} and deduce that for all $1 \leq q \leq \infty$ \begin{align*} \norm{c}_q \lesssim \norm{Ac}_q, \end{align*} with a constant independent of $q$. This completes the proof. \end{proof} \begin{rem} \label{rem_unif_loc} {\rm We emphasize that the lower bound guaranteed by Proposition \ref{prop_loc} is uniform for all $p$. The constant depends only on the decay properties of the envelope $\Theta$, on the lower bound for the given value of $p$, and on upper bounds for the relative separation of the index sets.} \end{rem} \begin{rem}{\rm Note that in the special case $d=1$ and $\Gamma = \Lambda = {\mathbb{Z}} $, the assumption of Proposition~\ref{prop_loc} says that $$ |A_{\lambda, \gamma } | \leq \Theta (\lambda - \gamma ) + \Theta (\lambda +\gamma ) \qquad \forall \lambda, \gamma \in {\mathbb{Z}} \, , $$ i.e., $A$ is dominated by the sum of a Toeplitz matrix and a Hankel matrix. 
} \end{rem} \subsection{Wilson bases} A Wilson basis associated with a window $g \in {L^2(\Rdst)}$ is an orthonormal basis of ${L^2(\Rdst)}$ of the form $\mathcal{W}(g)=\sett{g_\gamma: \gamma=(\gamma_1,\gamma_2) \in \Gamma}$ with $\Gamma \subseteq \tfrac{1}{2} {\mathbb{Z}^d} \times {\mathbb{N}}^d_0$ and \begin{align} \label{eq_wilson} g_\gamma = \sum_{\sigma \in \sett{-1,1}^d} \alpha_{\gamma,\sigma} \pi(\gamma_1,\sigma \gamma_2) g, \end{align} where $\alpha_{\gamma,\sigma} \in {\mathbb{C}}$, $\sup_{\gamma,\sigma} \abs{\alpha_{\gamma,\sigma}} < +\infty$, and, as before, $\sigma x = (\sigma_1 x_1, \ldots, \sigma_d x_d)$, $x \in {\mathbb{R}^d}$. There exist Wilson bases associated with functions $g$ in the Schwartz class \cite{dajajo91} (see also \cite[Chapters 8.5 and 12.3]{gr01}). In this case $\mathcal{W}(g)$ is a $p$-Riesz sequence and a $p$-frame for $M^p({\mathbb{R}^d})$ for all $p \in [1,\infty]$. This means that the associated coefficient operator $C_{\mathcal{W}}$, defined by $C_{\mathcal{W}}f = (\ip{f}{g_\gamma})_\gamma $, is an isomorphism from $M^p(\Rdst ) $ onto $ \ell^p(\Gamma)$ for every $p\in [1,\infty ]$ and that the synthesis operator $C_{\mathcal{W}}^*c = C_{\mathcal{W} }^{-1} c= \sum_\gamma c_\gamma g_\gamma$ is an isomorphism from $\ell ^p(\Gamma )$ onto $M^p(\Rdst )$ for all $p\in [1,\infty ]$. (For $p=\infty$ the series converges in the weak* topology.) \subsection{Proof of Theorem \ref{th_main_mol}} \begin{proof} Let $\mathcal{W}(g)=\sett{g_\gamma: \gamma \in \Gamma}$ be a Wilson basis with $g \in M^1({\mathbb{R}^d})$ and $\Gamma \subseteq \tfrac{1}{2} {\mathbb{Z}^d} \times {\mathbb{N}}^d_0$. (a) We assume that $\{ f_\lambda : \lambda \in \Lambda \}$ is a set of time-frequency\ molecules with associated coefficient operator $S$, $Sf := (\ip{f}{f_\lambda} )_{\lambda \in \Lambda}.$ We need to show that if $S$ is bounded below on $M^p(\Rdst )$ for some $p \in [1, \infty]$, then it is bounded below for all $p \in [1, \infty]$. 
Since the synthesis operator $C_{\mathcal{W} }^*$ associated with the Wilson basis $\mathcal{W}(g)$ is an isomorphism for all $1 \leq p \leq \infty$, $S$ is bounded below on $M^p(\Rdst )$ if and only if $SC_{\mathcal{W} }^* $ is bounded below on $\ell ^p(\Gamma )$. Thus it suffices to show that $SC^*_{\mathcal{W}}$ is bounded below for some $p \in [1,\infty]$ and then apply Proposition~\ref{prop_loc}. The operator $SC^*_{\mathcal{W}}$ is represented by the matrix $A$ with entries \begin{align*} A_{\lambda, \gamma} := \ip{g_\gamma}{f_\lambda}, \qquad \lambda \in \Lambda, \gamma \in \Gamma. \end{align*} In order to apply Proposition \ref{prop_loc} we provide an adequate envelope. Let $\Phi$ be the function from \eqref{eq_env_mol}, $\Phi^\vee(z):=\Phi(-z)$ and $\Theta := \Phi^\vee * \abs{V_g g}$. Then $\Theta \in W(L^\infty,L^1)({\mathbb{R}^{2d}})$ by Lemma~\ref{lemma_stft}. Using \eqref{eq_wilson} and the time-frequency localization of $\sett{f_\lambda: \lambda \in \Lambda}$ and of $g$ we estimate \begin{align*} &\abs{A_{\lambda, \gamma}} = \abs{\ip{f_\lambda}{g_\gamma}} = \abs{\ip{V_g f_\lambda}{V_g g_\gamma}} \\ &\qquad \leq \int_{{\mathbb{R}^{2d}}} \abs{\Phi(z-\lambda)} \abs{V_g g_\gamma(z)} dz \\ &\qquad \lesssim \sum_{\sigma \in \sett{-1,1}^d} \int_{{\mathbb{R}^{2d}}} \abs{\Phi(z-\lambda)} \abs{V_g g(z-(\gamma_1,\sigma \gamma_2))} dz \\ &\qquad= \sum_{\sigma \in \{-1,1\}^d} \Theta(\lambda-(\gamma_1,\sigma\gamma_2)) \leq \sum_{\sigma \in \{-1,1\}^{2d}} \Theta(\lambda-\sigma \gamma). \end{align*} Hence, the desired conclusion follows from Proposition \ref{prop_loc}. (b) Here we assume that $\{f_\lambda: \lambda \in \Lambda \}$ is a set of time-frequency\ molecules such that the associated synthesis operator $S^*c = \sum_{\lambda \in \Lambda} c_\lambda f_\lambda$ is bounded below on some $\ell ^p(\Lambda )$. We must show that $S^*$ is bounded below for all $p \in [1,\infty]$. 
Since $C_{\mathcal{W}} $ is an isomorphism from $M^p(\Rdst )$ onto $\ell^p(\Gamma)$, this is equivalent to saying that the operator $C_{\mathcal{W}} S^*$ is bounded below on some (hence all) $\ell ^p(\Lambda )$. The operator $C_{\mathcal{W}}S^*$ is represented by the matrix $B=A^*$ with entries \begin{align*} B_{\gamma,\lambda} := \ip{g_\gamma}{f_\lambda}, \qquad \gamma \in \Gamma, \lambda \in \Lambda \, , \end{align*} and satisfies \begin{align*} \abs{B_{\gamma,\lambda}} \leq \sum_{\sigma \in \{-1,1\}^{2d}} \Theta(\lambda-\sigma \gamma), \qquad \gamma \in \Gamma, \lambda \in \Lambda. \end{align*} To apply Proposition \ref{prop_loc}, we consider the symmetric envelope $\Theta^*(x) = \sum_{y \in G\cdot x} \Theta(y), x \in {\mathbb{R}^{2d}}.$ Then $\Theta^* \in W(L^\infty,L^1)({\mathbb{R}^{2d}})$, $\Theta^*(\sigma x) = \Theta^*(x)$, and \begin{align*} \abs{B_{\gamma,\lambda}} \leq \sum_{\sigma \in \{-1,1\}^{2d}} \Theta ^*(\lambda-\sigma \gamma) = \sum_{\sigma \in \{-1,1\}^{2d}} \Theta ^* (\gamma-\sigma \lambda). \end{align*} This shows that we can apply Proposition \ref{prop_loc} and the proof is complete. \end{proof} \begin{rem} \label{rem_unif_loc_2} {\rm As in Remark \ref{rem_unif_loc}, we note that the norm bounds for all $p$ in Theorem \ref{th_main_mol} depend only on the envelope $\Theta$, on upper bounds for $\mathop{\mathrm{rel}}(\Lambda)$, and on frame or Riesz basis bounds for a particular value of $p$.} \end{rem}
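The uniform-in-$q$ Schur bound that drives Steps 5--6 of the proof above (small row and column sums give a small operator norm on every $\ell^q$ simultaneously) can be illustrated numerically. The following sketch uses an arbitrary nonnegative matrix with off-diagonal decay, not one arising from the proof:

```python
import numpy as np

# Schur test: for a nonnegative matrix V, the operator norm on l^q is bounded
# by max(sup of row sums, sup of column sums), uniformly in q. The cases
# q = 1 and q = infinity are exactly the column/row sums; intermediate q
# follows by Riesz-Thorin interpolation.
rng = np.random.default_rng(0)
n = 60
idx = np.arange(n)
V = rng.random((n, n)) * np.exp(-np.abs(np.subtract.outer(idx, idx)) / 4.0)

schur_norm = max(V.sum(axis=0).max(), V.sum(axis=1).max())

for q in (1.0, 1.5, 2.0, 4.0, np.inf):
    for _ in range(25):
        a = rng.standard_normal(n)
        assert np.linalg.norm(V @ a, ord=q) <= schur_norm * np.linalg.norm(a, ord=q) + 1e-9
```

In the proof, $\varepsilon$ is chosen so that this quantity for $V^\varepsilon$ drops below $1/(2K)$, which makes the absorption argument of Step 6 work for every $q$ at once.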
\section{Rank--2 $U(1)$ gauge theory} Here, following \cite{XuPRB06,PretkoPRB16}, we derive the relationship between electric, magnetic and gauge fields within the rank--2 $U(1)$ [\mbox{R2--U1}] electrodynamics considered in the main text. Our starting point is an electric field described by a symmetric rank--2 tensor, \begin{equation} E_{ij} = E_{ji}. \end{equation} As in conventional electrodynamics, its conjugate is the rank-two gauge field $\bf A$, which also has to be symmetric to match the degrees of freedom, \begin{equation} A_{ij} = A_{ji}. \end{equation} The low energy sector of the electric field has vanishing vector charge, and is traceless, \begin{equation}\label{EQN_S_low_E_cond} \partial_i E_{ij} = 0, \quad E_{ii} = 0. \end{equation} Here we keep all indices as subscripts but still use the Einstein summation rule. These two conditions determine the form of the gauge transformations. Consider a wave-function \begin{equation} \ket{\Psi({\bf A})}. \end{equation} We take a low energy configuration of $\bf E$ obeying Eq.~\eqref{EQN_S_low_E_cond} and construct a symmetrized operator that is identically zero to act upon the wave-function, \begin{equation} -i(\lambda_j \partial_i E_{ij} + \lambda_i \partial_j E_{ij})\ket{\Psi({\bf A})} = 0. \end{equation} By integration by parts and assuming vanishing boundary terms, we have \begin{equation} i(\partial_i\lambda_j +\partial_j \lambda_i )E_{ij}\ket{\Psi({\bf A})} = 0. \end{equation} Since $E_{ij}$ is conjugate to $A_{ij}$, it generates a transformation of $\bf A$. Thus \begin{equation} i(\partial_i\lambda_j +\partial_j \lambda_i )E_{ij}\ket{\Psi({\bf A})} = \ket{\Psi({\bf A}+\bm{\nabla} \otimes\bm{\lambda} +(\bm{\nabla} \otimes\bm{\lambda} )^T)} - \ket{\Psi({\bf A})} = 0. 
\end{equation} That is, the low energy sector wave-function is invariant under the gauge transformation \begin{equation} \begin{split} {\bf A}+\bm{\nabla} \otimes\bm{\lambda} +(\bm{\nabla} \otimes\bm{\lambda} )^T, \quad \text{ i.e., }\quad A_{ij }\rightarrow A_{ij} + \partial_i\lambda_j +\partial_j \lambda_i. \end{split} \end{equation} Similarly, the traceless condition \begin{equation} -i\gamma\delta_{ij} E_{ij}\ket{\Psi({\bf A})} = 0 \end{equation} leads to another gauge symmetry \begin{equation} A_{ij }\rightarrow A_{ij} +\gamma\delta_{ij}. \end{equation} Finally, the magnetic field is obtained by finding the simplest gauge-invariant quantity. In this case, it has to have three derivatives acting on the gauge field, \begin{equation} \begin{split} B_{ij} = & \frac{1}{2}[ \epsilon_{jab}(\partial_a \partial_k \partial_i A_{bk} - \partial_a \partial^2 A_{bi}) \\ & + \epsilon_{iab} (\partial_a \partial_k \partial_j A_{bk} - \partial_a \partial^2 A_{bj})]. \end{split} \end{equation} Further details of the phenomenology of \mbox{R2--U1} phases can be found in \cite{PretkoPRB16,PretkoPRB17}. \section{Derivation of Effective Field Theory} We show how a rank--2 tensor electric field, satisfying the constraints required for \mbox{R2--U1} electrodynamics [Eqs.~(\ref{eq:constraint.1},\ref{eq:constraint.2})], can be derived from a breathing pyrochlore lattice model [Eq.~\eqref{eq:H}]. The pattern of this derivation closely follows Refs.~\cite{Benton2016NatComm,BentonThesis,YanPRB17}. Our starting point is the breathing pyrochlore lattice with a spin on each of its sites, and nearest neighbor interactions between the spins. ``Breathing" means the lattice is bi-partitioned into A- and B-tetrahedra [Fig.~\ref{fig:breathing.lattice}], and each type of tetrahedron has its own interactions. 
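As a sanity check on the previous section, the gauge invariance of the magnetic field $B_{ij}$ can be verified symbolically. The sketch below is an illustration rather than part of the derivation; it uses sympy, and the polynomial test fields are arbitrary choices (any smooth fields would do):

```python
import sympy as sp

# Symbolic check: B_ij, built from three derivatives of A, is invariant under
# A -> A + grad(lambda) + grad(lambda)^T and under A -> A + gamma * delta.
X = sp.symbols('x0 x1 x2')

def B_field(A):
    lap = lambda f: sum(sp.diff(f, xi, 2) for xi in X)
    def half(i, j):
        # eps_{jab} (d_a d_k d_i A_{bk} - d_a laplacian A_{bi}), summed over a, b, k
        s = 0
        for a in range(3):
            for b in range(3):
                e = sp.LeviCivita(j, a, b)
                if e != 0:
                    t1 = sum(sp.diff(A[b][k], X[a], X[k], X[i]) for k in range(3))
                    s += e * (t1 - sp.diff(lap(A[b][i]), X[a]))
        return s
    return [[sp.expand((half(i, j) + half(j, i)) / 2) for j in range(3)]
            for i in range(3)]

# arbitrary polynomial test fields (hypothetical, for illustration only)
lam = [X[0]**3 * X[1], X[1]**2 * X[2]**2, X[0] * X[2]**3]
gam = X[0]**2 * X[1] * X[2]
A = [[X[0]**4, X[0]**2 * X[1]**2, X[1] * X[2]**3],
     [X[0]**2 * X[1]**2, X[2]**4, X[0]**3 * X[2]],
     [X[1] * X[2]**3, X[0]**3 * X[2], X[0] * X[1] * X[2]**2]]
Ag = [[A[i][j] + sp.diff(lam[j], X[i]) + sp.diff(lam[i], X[j])
       + (gam if i == j else 0) for j in range(3)] for i in range(3)]

B0, B1 = B_field(A), B_field(Ag)
assert all(sp.expand(B0[i][j] - B1[i][j]) == 0 for i in range(3) for j in range(3))
```

The cancellation works because every gauge term hits an antisymmetrized pair of derivatives contracted with $\epsilon_{jab}$, or cancels between the two terms inside the bracket.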
The model that hosts a rank-2 spin liquid has breathing Heisenberg antiferromagnetic interactions on both the A- and B-tetrahedra, and negative Dzyaloshinskii-Moriya (DM) interactions on A-tetrahedra only. The Hamiltonian for the model is \begin{equation}\label{EQN_S_hamiltonian} \mathcal{H} = \sum_{\langle ij \rangle\in \text{A}} \left[J_A {\bf S}_i \cdot {\bf S}_j + D_A\hat{\bf d}_{ij}\cdot( {\bf S}_i \times {\bf S}_j )\right ] + \sum_{\langle ij \rangle\in \text{B}} \left[J_B {\bf S}_i \cdot {\bf S}_j + D_B \hat{\bf d}_{ij}\cdot( {\bf S}_i \times {\bf S}_j )\right ], \end{equation} where $\langle ij \rangle \in \text{A(B)}$ denotes nearest neighbour bonds belonging to the A(B)-tetrahedra. The sites $0,\ 1,\ 2,\ 3$ are at positions relative to the center of an A-tetrahedron \begin{equation} {\bf r}_0 = \frac{a}{8}(1,1,1),\; {\bf r}_1 = \frac{a}{8}(1,-1,-1),\; {\bf r}_2 = \frac{a}{8}(-1,1,-1),\; {\bf r}_3 = \frac{a}{8}(-1,-1,1), \end{equation} where $a$ is the length of the unit cell. The vectors $\hat{\bf d}_{ij}$ are bond dependent, defined in accordance with Refs.~\cite{KotovPRB05,CanalsPRB08,RauPRL16}: \begin{equation} \begin{split} &\hat{\bf d}_{01}= \frac{(0,-1,1)}{\sqrt{2}},\; \hat{\bf d}_{02}= \frac{(1,0,-1)}{\sqrt{2}},\; \hat{\bf d}_{03}= \frac{(-1,1,0)}{\sqrt{2}} , \\ &\hat{\bf d}_{12}= \frac{(-1,-1,0)}{\sqrt{2}},\; \hat{\bf d}_{13}= \frac{(1,0,1)}{\sqrt{2}},\; \hat{\bf d}_{23}= \frac{(0,-1,-1)}{\sqrt{2}}. 
\end{split} \label{EQN_SM_D_Vec} \end{equation} Equivalently, this model can be written in a standard matrix-exchange form for a breathing-pyrochlore lattice model as \begin{equation} \mathcal{H} = \sum_{\langle ij \rangle \in \text{A}} S_i^\alpha \mathcal{J}_\text{A,ij}^{\alpha\beta} S_j^\beta + \sum_{\langle ij \rangle \in \text{B}} S_i^\alpha \mathcal{J}_\text{B}^{\alpha\beta} S_j^\beta \end{equation} where $\mathcal{J}_\text{A,ij}$ is a three-by-three matrix that couples spins on sub-lattice sites $i, j$ whose bond belongs to A-tetrahedra, and $\mathcal{J}_\text{B}$ is the coupling matrix for B-tetrahedra. In the case of $D_B=0$ that we are interested in, $\mathcal{J}_\text{B} $ is identical for any pair of $i, j$, \begin{equation} \mathcal{J}_\text{B} = \begin{bmatrix} J_B & 0 & 0 \\ 0 & J_B & 0 \\ 0 & 0 & J_B \end{bmatrix} . \end{equation} Matrices $\mathcal{J}_\text{A,ij}$ are bond dependent and related to each other by the lattice symmetry. Their values are \begin{equation} \begin{split} & \mathcal{J}_\text{A,01} = \begin{bmatrix} J_A & D_A/\sqrt{2} & D_A/\sqrt{2} \\ -D_A/\sqrt{2} & J_A & 0 \\ -D_A/\sqrt{2} & 0 & J_A \end{bmatrix} ,\; \mathcal{J}_\text{A,02} = \begin{bmatrix} J_A & -D_A/\sqrt{2} & 0 \\ D_A/\sqrt{2} & J_A & D_A/\sqrt{2} \\ 0 & -D_A/\sqrt{2} & J_A \end{bmatrix} ,\; \\ & \mathcal{J}_\text{A,03} = \begin{bmatrix} J_A & 0 & -D_A/\sqrt{2}\\ 0 & J_A & -D_A/\sqrt{2} \\ D_A/\sqrt{2} & D_A/\sqrt{2} & J_A \end{bmatrix} ,\; \mathcal{J}_\text{A,12} = \begin{bmatrix} J_A & 0 & D_A/\sqrt{2} \\ 0 & J_A & -D_A/\sqrt{2} \\ -D_A/\sqrt{2} & D_A/\sqrt{2} & J_A \end{bmatrix} ,\;\\ & \mathcal{J}_\text{A,13} = \begin{bmatrix} J_A & D_A/\sqrt{2} & 0\\ -D_A/\sqrt{2} & J_A & D_A/\sqrt{2} \\ 0 & -D_A/\sqrt{2} & J_A \end{bmatrix} ,\; \mathcal{J}_\text{A,23} = \begin{bmatrix} J_A & -D_A/\sqrt{2} & D_A/\sqrt{2}\\ D_A/\sqrt{2} & J_A & 0 \\ -D_A/\sqrt{2} & 0 & J_A \end{bmatrix} . 
\end{split} \end{equation} \begin{table*} \begin{tabular}{ c c c } \hline \hline \multirow{2}{*}{} order & definition in terms & associated \\ parameter & of spin components & ordered phases \\ \hline \multirow{1}{*}{} $m_{\sf A_2}$ & $\frac{1}{2 \sqrt{3} } \left(S_0^x+S_0^y+S_0^z+S_1^x-S_1^y-S_1^z-S_2^x+S_2^y-S_2^z-S_3^x-S_3^y+S_3^z \right)$ & ``all in-all out'' \\ \multirow{1}{*}{} ${\bf m}_{\sf E}$ & $\begin{pmatrix} \frac{1}{2 \sqrt{6} } \left( -2 S_0^x + S_0^y + S_0^z - 2 S_1^x - S_1^y-S_1^z+2 S_2^x + S_2^y- S_2^z +2 S_3^x-S_3^y +S_3^z \right) \\ \frac{1}{2 \sqrt{2}} \left( -S_0^y+S_0^z+S_1^y-S_1^z-S_2^y-S_2^z+S_3^y+S_3^z \right) \end{pmatrix}$ & $\begin{matrix} \Gamma_{5}, \textrm{including}\\ \Psi_2 \textrm{ and } \Psi_3 \end{matrix}$ \\ \multirow{1}{*}{} ${\bf m}_{\sf T_{1+}}$ & $\begin{pmatrix} \frac{1}{2} (S_0^x+S_1^x+S_2^x+S_3^x) \\ \frac{1}{2} (S_0^y+S_1^y+S_2^y+S_3^y) \\ \frac{1}{2} (S_0^z+S_1^z+S_2^z+S_3^z) \end{pmatrix} $ & collinear FM \\ \multirow{1}{*}{} ${\bf m}_{\sf T_{1, -}}$ & $\begin{pmatrix} \frac{-1}{2\sqrt{2}} (S_0^y+S_0^z-S_1^y-S_1^z-S_2^y+S_2^z+S_3^y-S_3^z) \\ \frac{-1}{2\sqrt{2}} (S_0^x+S_0^z-S_1^x+S_1^z-S_2^x-S_2^z+S_3^x-S_3^z) \\ \frac{-1}{2\sqrt{2}} ( S_0^x+S_0^y-S_1^x+S_1^y+S_2^x-S_2^y-S_3^x-S_3^y) \end{pmatrix}$ & non-collinear FM \\ \multirow{1}{*}{} ${\bf m}_{\sf T_2} $ & $\begin{pmatrix} \frac{1}{2 \sqrt{2}} \left( -S_0^y+S_0^z+S_1^y-S_1^z+S_2^y+S_2^z-S_3^y-S_3^z \right) \\ \frac{1}{2 \sqrt{2}} \left( S_0^x-S_0^z-S_1^x-S^z_1-S_2^x+S_2^z+S_3^x+S_3^z \right) \\ \frac{1}{2 \sqrt{2} } \left( -S_0^x+S_0^y+S_1^x+S_1^y-S_2^x-S_2^y+S_3^x-S_3^y \right) \end{pmatrix} $ & Palmer--Chalker ($\Psi_4$) \\ \hline \end{tabular} \caption{ Order parameters ${\bf m}_\mathsf{X}$, describing how the point-group symmetry of a single tetrahedron within the pyrochlore lattice is broken by magnetic order. 
% Order parameters transform according to irreducible representations of the point-group ${\sf T}_d$, and are expressed in terms of linear combinations of spin-components ${\bf S}_i = (S^x_i, S^y_i, S^z_i)$, in the global frame of the crystal axes --- cf. $\mathcal{H}$~[Eq.~\eqref{EQN_S_hamiltonian}].
% Labelling of spins within the tetrahedron follows the convention of Ross~{\it et al.}~\cite{RossPRX11}.
% The notation $\Psi_i$ for ordered phases is taken from~Ref.~\cite{PooleJPCM07}.
} \label{table:m_lambda_global} \end{table*} The spin degrees of freedom on each tetrahedron can be rewritten in terms of fields forming the irreducible representations of the lattice symmetry, \begin{equation}\label{EQN_S_Irreps} {m}_{\mathsf{A_2}} ,\quad \mathbf{m}_{\mathsf{E}} ,\quad \mathbf{m}_{\mathsf{T_2}} ,\quad \mathbf{m}_{\mathsf{T_{1+}}} ,\quad \mathbf{m}_{\mathsf{T_{1-}}} , \end{equation} whose definitions can be found in Table \ref{table:m_lambda_global}. They are linear combinations of the spin degrees of freedom, in terms of which the Hamiltonian becomes fully quadratic: \begin{equation} \mathcal{H}=\frac{1}{2}\sum_{\mathsf{X}}a_{\mathsf{X},\text{A}} m^2_{\mathsf{X},\text{A}} + \frac{1}{2}\sum_{\mathsf{X}}a_{\mathsf{X},\text{B}} m^2_{\mathsf{X},\text{B}} , \end{equation} where $\mathsf{X}$ runs over the irreps of the group $T_d$, i.e. $\{ \mathsf{A_2}, \mathsf{E}, \mathsf{T_2}, \mathsf{T_{1+}}, \mathsf{T_{1-}} \}$ as listed in Eq.~\eqref{EQN_S_Irreps}, and the subscripts A and B denote the type of tetrahedron on which the fields are defined. 
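As a consistency check on the bond matrices listed above (a sketch, not part of the derivation), one can verify symbolically that each $\mathcal{J}_{\text{A},ij}$ equals $J_A\,\mathbb{1} + D_A K(\hat{\bf d}_{ij})$, where $K$ is the antisymmetric matrix implementing ${\bf s}_1 \cdot K\, {\bf s}_2 = \hat{\bf d}\cdot({\bf s}_1\times{\bf s}_2)$:

```python
import sympy as sp

# Cross-check: each bond matrix J_{A,ij} equals J_A*I + D_A*K(d_ij),
# with K antisymmetric and s1 . K s2 = d_hat . (s1 x s2).
J, D = sp.symbols('J_A D_A')
r2 = sp.sqrt(2)
d = D / r2

dvec = {(0, 1): (0, -1, 1), (0, 2): (1, 0, -1), (0, 3): (-1, 1, 0),
        (1, 2): (-1, -1, 0), (1, 3): (1, 0, 1), (2, 3): (0, -1, -1)}
Jmat = {(0, 1): [[J, d, d], [-d, J, 0], [-d, 0, J]],
        (0, 2): [[J, -d, 0], [d, J, d], [0, -d, J]],
        (0, 3): [[J, 0, -d], [0, J, -d], [d, d, J]],
        (1, 2): [[J, 0, d], [0, J, -d], [-d, d, J]],
        (1, 3): [[J, d, 0], [-d, J, d], [0, -d, J]],
        (2, 3): [[J, -d, d], [d, J, 0], [-d, 0, J]]}

for bond, dv in dvec.items():
    dh = sp.Matrix(dv) / r2  # unit DM vector for this bond
    K = sp.Matrix([[0, dh[2], -dh[1]],
                   [-dh[2], 0, dh[0]],
                   [dh[1], -dh[0], 0]])
    assert sp.Matrix(Jmat[bond]) == J * sp.eye(3) + D * K
```

This confirms that the matrix-exchange form and the Heisenberg-plus-DM form of $\mathcal{H}$ agree bond by bond.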
For the couplings in Eq.~\eqref{EQN_S_hamiltonian}, we have on A-tetrahedra \begin{eqnarray} a_{\mathsf{A_2},\text{A}} & = & -J_A- 4D_A/\sqrt{2} \;,\\ a_{\mathsf{T_2},\text{A}} & = & -J_A -2D_A/\sqrt{2} \;,\\ a_{\mathsf{T_{1+}},\text{A}} & = & 3J_A \;,\\ a_{\mathsf{T_{1-}},\text{A}} = a_{\mathsf{E},\text{A}} & = & -J_A + 2 D_A/\sqrt{2} , \end{eqnarray} and on B-tetrahedra \begin{eqnarray} a_{\mathsf{A_2},\text{B} }= a_{\mathsf{E},\text{B}} = a_{\mathsf{T_2},\text{B}} = a_{\mathsf{T_{1-}},\text{B} } & = & -J_B ,\\ a_{\mathsf{T_{1+}},\text{B} } & = & 3J_B . \end{eqnarray} For $J_{A},J_{B}>0$ and $D_{A}<0$, these parameters are ordered as \begin{eqnarray} && \text{on A-tetrahedra:} \qquad a_{\mathsf{E},\text{A}} = a_{\mathsf{T_{1-}},\text{A}} < a_{\mathsf{A_2},\text{A}} ,\ a_{\mathsf{T_2},\text{A} } , \ a_{\mathsf{T_{1+}},\text{A}} ,\\ && \text{on B-tetrahedra:} \qquad a_{\mathsf{A_2},\text{B}} = a_{\mathsf{E},\text{B}} = a_{\mathsf{T_2},\text{B}} =a_{\mathsf{T_{1-}},\text{B}} < a_{\mathsf{T_{1+}},\text{B}} , \end{eqnarray} an ordering which plays the central role in dictating the low energy physics. The irreducible representation fields are subject to constraints arising from the fixed spin length, \begin{equation} \sum_\mathsf{X} m^2_\mathsf{X} = 1 \end{equation} for both A- and B-tetrahedra. As a consequence, the low energy sector allows the $m^2_\mathsf{X}$ corresponding to the smallest $a_{\mathsf{X}}$ to fluctuate, while all other fields have to vanish. 
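The coefficients $a_{\mathsf{X},\text{A}}$ and their multiplicities can be cross-checked numerically: assuming the bond matrices given earlier, the $12\times 12$ single-tetrahedron exchange matrix must have eigenvalues $a_{\mathsf{A_2}}$, $a_{\mathsf{T_2}}$ ($\times 3$), $a_{\mathsf{T_{1+}}}$ ($\times 3$) and $a_{\mathsf{E}} = a_{\mathsf{T_{1-}}}$ ($\times 5$). A numpy sketch with arbitrary test values:

```python
import numpy as np

# Eigenvalue check of the single-tetrahedron exchange matrix, built from
# the bond matrices J_{A,ij} = J_A*I + D_A*K(d_ij).
J_A, D_A = 1.0, -0.37  # arbitrary test values with D_A < 0
s2 = np.sqrt(2.0)
dvec = {(0, 1): (0, -1, 1), (0, 2): (1, 0, -1), (0, 3): (-1, 1, 0),
        (1, 2): (-1, -1, 0), (1, 3): (1, 0, 1), (2, 3): (0, -1, -1)}

def K(dh):  # antisymmetric matrix with s1 . K s2 = dh . (s1 x s2)
    return np.array([[0, dh[2], -dh[1]],
                     [-dh[2], 0, dh[0]],
                     [dh[1], -dh[0], 0]])

M = np.zeros((12, 12))
for (i, j), dv in dvec.items():
    block = J_A * np.eye(3) + D_A * K(np.array(dv) / s2)
    M[3*i:3*i+3, 3*j:3*j+3] = block
    M[3*j:3*j+3, 3*i:3*i+3] = block.T

a_A2 = -J_A - 4 * D_A / s2
a_T2 = -J_A - 2 * D_A / s2
a_T1p = 3 * J_A
a_E = -J_A + 2 * D_A / s2  # equals a_T1-
expected = sorted([a_A2] + [a_T2] * 3 + [a_T1p] * 3 + [a_E] * 5)
assert np.allclose(np.sort(np.linalg.eigvalsh(M)), expected)
```

The multiplicities $1+3+3+5=12$ exhaust the spin components of one tetrahedron, and for $D_A<0$ the five-fold degenerate $\mathsf{E}\oplus\mathsf{T_{1-}}$ level is indeed the lowest.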
This principle applied to our model leads to \begin{itemize} \item On A-tetrahedra, the fields $ \mathbf{m}_{\mathsf{E}} $ and $\mathbf{m}_{\mathsf{T_{1-}}}$ can fluctuate; \item On A-tetrahedra, the fields ${\bf m}_{\mathsf{T_{1+}}} = {\bf m}_{\mathsf{T_2}} ={m}_{\mathsf{A_2}}=0$; \item On B-tetrahedra, the fields ${m}_{\mathsf{A_2}} ,\; \mathbf{m}_{\mathsf{E}} ,\; \mathbf{m}_{\mathsf{T_2}} ,\; \mathbf{m}_{\mathsf{T_{1-}}}$ can fluctuate; \item On B-tetrahedra, \begin{equation}\label{EQN_S_B_condition} {\bf m}_{\mathsf{T_{1+}}} = 0 \end{equation} \end{itemize} Since every spin is shared by an A- and a B-tetrahedron, the fluctuating fields $ \mathbf{m}_{\mathsf{E}} $ and $\mathbf{m}_{\mathsf{T_{1-}}}$ on A-tetrahedra must obey additional constraints to respect the low-energy sector condition on B-tetrahedra imposed by Eq.~\eqref{EQN_S_B_condition}. Assuming that the fields vary slowly in space, so that the continuum limit can be taken, the constraint Eq.~\eqref{EQN_S_B_condition} can be expressed in terms of the fields living on A-tetrahedra as \begin{equation}\label{EQN_S_E_constraint_1} \frac{2}{\sqrt{3}} \begin{bmatrix} \partial_x m_\mathsf{E}^1 \\ -\frac{1}{2} \partial_y m_\mathsf{E}^1 - \frac{\sqrt{3}}{2} \partial_y m_\mathsf{E}^2 \\ -\frac{1}{2} \partial_z m_\mathsf{E}^1 + \frac{\sqrt{3}}{2} \partial_z m_\mathsf{E}^2 \end{bmatrix} - \begin{bmatrix} \partial_y m_\mathsf{T_{1-}}^z + \partial_z m_\mathsf{T_{1-}}^y \\ \partial_z m_\mathsf{T_{1-}}^x + \partial_x m_\mathsf{T_{1-}}^z \\ \partial_x m_\mathsf{T_{1-}}^y + \partial_y m_\mathsf{T_{1-}}^x \end{bmatrix} = 0 . 
\end{equation} From this constraint we can build the symmetric, traceless, rank-two electric field $E_{ij}$ as \begin{equation} E_{ij} = \begin{bmatrix} \frac{2}{\sqrt{3}}m_\mathsf{E}^1 & m_\mathsf{T_{1-}}^z & m_\mathsf{T_{1-}}^y \\ m_\mathsf{T_{1-}}^z & -\frac{1}{\sqrt{3}}m_\mathsf{E}^1 - m_\mathsf{E}^2 & m_\mathsf{T_{1-}}^x \\ m_\mathsf{T_{1-}}^y & m_\mathsf{T_{1-}}^x & -\frac{1}{\sqrt{3}}m_\mathsf{E}^1 + m_\mathsf{E}^2 \end{bmatrix} \; , \label{eq:S.31} \end{equation} such that Eq.~\eqref{EQN_S_E_constraint_1} becomes \begin{equation} \partial_i E_{ij}= 0 \; , \label{EQN_S_R2_Constraint} \end{equation} with the symmetric and traceless conditions \begin{equation} \label{EQN_S_S_E_constaint} E_{ij} = E_{ji} ,\qquad \Tr {\bf E} = 0 \end{equation} holding by the definition of $E_{ij}$. Hence a rank-2, traceless, vector-charged electric field emerges in the low-energy sector of the microscopic model of Eq.~\eqref{EQN_S_hamiltonian}. Equation~\eqref{EQN_S_S_E_constaint} constrains the form of the correlation functions $\langle E_{ij}({\bf q}) E_{kl}(-{\bf q}) \rangle$, in the same spirit as the two-in-two-out condition constrains the spin-spin correlations of spin ice. It is, however, of a more complicated form. The explicit results for the \textit{traceful} scalar-charged and vector-charged versions of \mbox{R2--U1} are discussed in detail in Ref.~\cite{PremPRB18}. 
The vector-charge field correlation is \begin{equation}\label{EQN_S__Vc_Corr} \begin{split} \langle E_{ij}({\bf q}) E_{kl}(-{\bf q}) \rangle \propto & \frac{1}{2}(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}) + \frac{q_i q_j q_k q_l}{q^4} \\ & - \frac{1}{2} \left(\delta_{ik}\frac{q_j q_l}{q^2} +\delta_{jk}\frac{q_i q_l}{q^2} +\delta_{il}\frac{q_j q_k}{q^2} +\delta_{jl}\frac{q_i q_k}{q^2} \right) . \end{split} \end{equation} In close analogy, we derive the correlation function of our \textit{traceless} vector-charged model by subtracting the trace part, \begin{equation}\label{EQN_S_Corr} \begin{split} \langle E_{ij}({\bf q}) E_{kl}(-{\bf q}) \rangle \propto & \frac{1}{2}(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}) + \frac{q_i q_j q_k q_l}{q^4} \\ & - \frac{1}{2} \left(\delta_{ik}\frac{q_j q_l}{q^2} +\delta_{jk}\frac{q_i q_l}{q^2} +\delta_{il}\frac{q_j q_k}{q^2} +\delta_{jl}\frac{q_i q_k}{q^2} \right) \\ & -\frac{1}{2}\left( \delta_{ij} -\frac{q_iq_j}{q^2} \right)\left( \delta_{kl} -\frac{q_kq_l}{q^2} \right) , \end{split} \end{equation} which encodes a singularity as ${\bf q} \rightarrow 0$. Different choices of the components $E_{ij}$ and $E_{kl}$ show different patterns. A few representative ones can be found in Fig.~\ref{Fig_S_diff_corr}. 
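The constraints translate into exact symmetries of this correlator: it must be transverse ($q_i \langle E_{ij}E_{kl}\rangle = 0$, reflecting $\partial_i E_{ij}=0$), traceless ($\delta_{ij}\langle E_{ij}E_{kl}\rangle = 0$), and symmetric in $(i,j)$. A quick numerical check at a random wavevector (a sketch, using the form written above):

```python
import numpy as np

# Build C_{ijkl}(q) for the traceless vector-charge correlator and verify
# its required symmetries at a random wavevector.
rng = np.random.default_rng(1)
q = rng.standard_normal(3)
qh = q / np.linalg.norm(q)
d = np.eye(3)

C = np.zeros((3, 3, 3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            for l in range(3):
                C[i, j, k, l] = (
                    0.5 * (d[i, k] * d[j, l] + d[i, l] * d[j, k])
                    + qh[i] * qh[j] * qh[k] * qh[l]
                    - 0.5 * (d[i, k] * qh[j] * qh[l] + d[j, k] * qh[i] * qh[l]
                             + d[i, l] * qh[j] * qh[k] + d[j, l] * qh[i] * qh[k])
                    - 0.5 * (d[i, j] - qh[i] * qh[j]) * (d[k, l] - qh[k] * qh[l]))

assert np.allclose(np.einsum('i,ijkl->jkl', qh, C), 0)  # transverse: q_i C = 0
assert np.allclose(np.einsum('iikl->kl', C), 0)          # traceless: delta_ij C = 0
assert np.allclose(C, np.transpose(C, (1, 0, 2, 3)))     # symmetric in (i, j)
```

The angular dependence left over after imposing these symmetries is what produces the four-fold pinch points plotted below.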
\begin{figure}[H] \centering \subfloat[\label{EQN_S_Corr_1}]{\includegraphics[width=0.18\textwidth]{FigS-1-1.png}} \; {\raisebox{3ex}{\includegraphics[height=0.15\textwidth]{FigS-1-legend1.png}}}\;
%
\subfloat[\label{EQN_S_Corr_2}]{\includegraphics[width=0.18\textwidth]{FigS-1-2.png}}\; \subfloat[\label{EQN_S_Corr_3}]{\includegraphics[width=0.18\textwidth]{FigS-1-3.png}} \; {\raisebox{3ex}{\includegraphics[height=0.15\textwidth]{FigS-1-legend2.png}}}\;
%
\subfloat[\label{EQN_S_Corr_4}]{\includegraphics[width=0.18\textwidth]{FigS-1-4.png}}\; \subfloat{\raisebox{3ex}{\includegraphics[height=0.15\textwidth]{FigS-1-legend4.png}}}
%
\caption{Different components of the correlation function $\langle E_{ij}({\bf q})E_{kl}(-{\bf q})\rangle$ in the $q_x$--$q_y$ plane, calculated from Eq.~\eqref{EQN_S_Corr}. } \label{Fig_S_diff_corr} \end{figure} Figs.~\ref{EQN_S_Corr_2} and \ref{EQN_S_Corr_3} show the four-fold pinch-point (4FPP) singularity, which uniquely distinguishes the rank-2 gauge theories from the conventional $U(1)$ gauge theory. It is the key signature to look for in experiments. \section{Connection between Heisenberg Antiferromagnet and Rank--2 $U(1)$ spin liquid} The Heisenberg antiferromagnet (HAF) on a pyrochlore lattice also hosts a spin liquid, but one described by a $U(1)\times U(1)\times U(1)$ gauge theory \cite{isakov05}. These three copies of $U(1)$ originate in the separate flux--conservation laws for the three components of an O(3) spin and, as noted by Henley, can be collected into a single rank--2 tensor field \cite{henley10}. The three independent flux--conservation laws impose a condition of zero vector charge, Eq.~(\ref{EQN_S_R2_Constraint}), one of the requirements for a rank--2 $U(1)$ theory. However, they do not enforce the other requirement, namely that the tensor field be symmetric and traceless, Eq.~(\ref{EQN_S_S_E_constaint}). 
We can gain more insight into the connection between these two spin liquids by generalising the analysis in terms of irrep fields given in the Section above. We find that, in the case of the HAF, low--energy fluctuations can be described by a rank--2 tensor field with the form \begin{equation} \label{EQN_SM_HAF_Electric} {\bf E}^\text{\sf HAF} = \begin{bmatrix} \frac{2}{\sqrt{3}}m_\mathsf{E}^1 -\sqrt{\frac{2}{3}}m_\mathsf{A_2} & m_\mathsf{T_{1,-}}^z + m_\mathsf{T_2}^z& m_\mathsf{T_{1,-}}^y - m_\mathsf{T_2}^y\\ m_\mathsf{T_{1,-}}^z - m_\mathsf{T_2}^z & -\frac{1}{\sqrt{3}}m_\mathsf{E}^1 - m_\mathsf{E}^2 -\sqrt{\frac{2}{3}}m_\mathsf{A_2} & m_\mathsf{T_{1,-}}^x + m_\mathsf{T_2}^x\\ m_\mathsf{T_{1,-}}^y + m_\mathsf{T_2}^y& m_\mathsf{T_{1,-}}^x - m_\mathsf{T_2}^x & -\frac{1}{\sqrt{3}}m_\mathsf{E}^1 + m_\mathsf{E}^2 -\sqrt{\frac{2}{3}}m_\mathsf{A_2} \end{bmatrix} . \end{equation} This satisfies the condition of zero vector charge [Eq.~(\ref{EQN_S_R2_Constraint})], \begin{equation} \partial_i E_{ij}^{\sf HAF} = \rho_j = 0 \; . \label{EQN_SM_HAF_Electric_Constraint} \end{equation} However, as anticipated by the arguments of Henley, ${\bf E}^{\sf HAF}$ also has a finite trace, and a finite anti--symmetric part, and so does not satisfy Eq.~(\ref{EQN_S_S_E_constaint}). 
We can separate the different contributions to ${\bf E}^{\sf HAF}$ as \begin{equation} {\bf E}^{\sf HAF} = {\bf E}^{\sf HAF}_{\sf sym.} + {\bf E}^{\sf HAF}_ {\sf antisym.} + {\bf E}^{\sf HAF}_{\sf trace} \end{equation} where the antisymmetric part is given by \begin{equation} {\bf E}^{\sf HAF}_ {\sf antisym.} = \begin{bmatrix} 0 & m_\mathsf{T_2}^z& - m_\mathsf{T_2}^y\\ - m_\mathsf{T_2}^z & 0 & m_\mathsf{T_2}^x\\ m_\mathsf{T_2}^y& - m_\mathsf{T_2}^x & 0 \end{bmatrix} , \label{eq:S.origin.antisymetric} \end{equation} and the finite trace comes from \begin{equation} {\bf E}^{\sf HAF}_{\sf trace} = \begin{bmatrix} -\sqrt{\frac{2}{3}}m_\mathsf{A_2} & 0& 0\\ 0 & -\sqrt{\frac{2}{3}}m_\mathsf{A_2} & 0\\ 0& 0& -\sqrt{\frac{2}{3}}m_\mathsf{A_2} \end{bmatrix} . \label{eq:S.origin.trace} \end{equation} The remaining components of ${\bf E}^{\sf HAF}$ are identical to the symmetric, traceless rank--2 electric field ${\bf E}$ found in the R2--U1 theory [Eq.~(\ref{eq:S.31})], i.e. \begin{equation} {\bf E}^{\sf HAF}_{\sf sym.} = \begin{bmatrix} \frac{2}{\sqrt{3}}m_\mathsf{E}^1 & m_\mathsf{T_{1-}}^z & m_\mathsf{T_{1-}}^y \\ m_\mathsf{T_{1-}}^z & -\frac{1}{\sqrt{3}}m_\mathsf{E}^1 - m_\mathsf{E}^2 & m_\mathsf{T_{1-}}^x \\ m_\mathsf{T_{1-}}^y & m_\mathsf{T_{1-}}^x & -\frac{1}{\sqrt{3}}m_\mathsf{E}^1 + m_\mathsf{E}^2 \end{bmatrix} \; . \end{equation} Written in this way, it is clear that the finite antisymmetric part [Eq.~(\ref{eq:S.origin.antisymetric})] and the finite trace [Eq.~(\ref{eq:S.origin.trace})] originate from the fluctuations of the irreps ${\bf m}_{\mathsf{T_2}}$ and $m_\mathsf{A_2}$, respectively. Adding DM interactions of the appropriate form $D_A < D_B < 0$ [Eq.~(\ref{EQN_S_hamiltonian})] introduces an energy cost for the fluctuations of ${\bf m}_{\mathsf{T_2}}$ and $m_\mathsf{A_2}$, and so enforces the ``missing'' constraint, Eq.~(\ref{EQN_S_S_E_constaint}). 
This converts the $U(1) \times U(1) \times U(1)$ spin liquid of the HAF into the R2--U1 spin liquid of the breathing--pyrochlore model studied in this article. \section{Predictions for Neutron Scattering} The 4FPP is a unique pattern that differentiates the \mbox{R2--U1} from the vector $U(1)$ gauge theory, which only has the conventional two-fold pinch points. The 4FPPs are most unambiguously presented in the correlation functions of the irreducible representation fields, as discussed in the main text. These correlation functions are, however, not directly accessible in experiments. In magnetism, the neutron scattering technique is widely applied to measure the spin-spin correlation of the form \begin{equation} S({\bf q}) = \sum_{\alpha, \beta, i, j} \left( \delta_{\alpha\beta}-\frac{q^\alpha q^\beta}{q^2} \right) \langle S_i^\alpha({\bf q}) S_j^\beta(-{\bf q}) \rangle , \end{equation} where $\alpha, \beta = x, y, z$ are spin-component indices and $i, j = 0, 1, 2, 3$ are sub-lattice site indices. Furthermore, with neutrons polarized in the direction of a unit vector $\hat{\bf v}$ perpendicular to the scattering plane, one can measure the spin-flip (SF) channel neutron scattering defined by \begin{equation} S({\bf q})_\text{SF} = \sum_{\alpha, \beta, i, j} (v_\perp^\alpha v_\perp^\beta) \langle S_i^\alpha({\bf q}) S_j^\beta(-{\bf q}) \rangle , \end{equation} where \begin{equation} \hat{\bf v}_\perp = \frac{\hat{\bf v}\times {\bf q}}{|\hat{\bf v}\times {\bf q}|} . \end{equation} One can also measure the non-spin-flip (NSF) channel defined by \begin{equation} S({\bf q})_\text{NSF} = \sum_{\alpha, \beta, i, j} (v^\alpha v^\beta) \langle S_i^\alpha({\bf q}) S_j^\beta(-{\bf q}) \rangle . \end{equation} Finally, we show the spin structure factor in the $[hhk]$ plane as a complement of the $[h0k]$-plane neutron scattering results shown in the main text. 
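The SF and NSF channels defined above exhaust the total signal when $\hat{\bf v}\perp{\bf q}$: since $\{\hat{\bf q},\hat{\bf v},\hat{\bf v}_\perp\}$ is then an orthonormal basis, the transverse projector splits as $\delta_{\alpha\beta} - q^\alpha q^\beta/q^2 = v^\alpha v^\beta + v_\perp^\alpha v_\perp^\beta$, i.e.\ $S({\bf q}) = S({\bf q})_\text{SF} + S({\bf q})_\text{NSF}$. A minimal numerical check of this geometric identity:

```python
import numpy as np

# Geometry of the polarized setup: v is the polarization (normal to the
# scattering plane), q lies in the plane, so {q_hat, v, v_perp} is orthonormal
# and the transverse projector splits into NSF + SF contributions.
rng = np.random.default_rng(2)
v = np.array([0.0, 0.0, 1.0])  # polarization direction
q = rng.standard_normal(3)
q[2] = 0.0                     # q in the scattering plane, so q . v = 0
qh = q / np.linalg.norm(q)
vperp = np.cross(v, q) / np.linalg.norm(np.cross(v, q))

proj = np.eye(3) - np.outer(qh, qh)
assert np.allclose(proj, np.outer(v, v) + np.outer(vperp, vperp))
```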
\begin{figure}[h] \includegraphics[width=0.45\textwidth]{Fig_MC_Big_1_hhk.png} \caption{4-Fold Pinch Points (4FPPs) in the spin structure factor in the $[hhk]$ plane of momentum space of the model [Eq.~\eqref{eq:H}] from MC simulations. The exchange parameters are from the idealized theoretical case $J_A = J_B = 1.0,\ D_A = -0.01,\ D_B = 0.0$, at $T = 2.5 \times10^{-3}\ J_A$. (a) Total structure factor. (b) Non-spin-flip (NSF) channel. (c) Spin-flip (SF) channel. The 4FPPs can be clearly observed in the SF channel, centered on [0, 0, 2] (and points related by symmetry), but weaker than in the $[h0k]$ plane. } \label{Fig_thy_SF_hhk} \end{figure} \begin{figure}[h] \includegraphics[width=0.45\textwidth]{Fig_MC_Big_2_hhk.png} \caption{4-Fold Pinch Points (4FPPs) in the spin structure factor in the $[hhk]$ plane of momentum space of the model (cf. Eq.~\eqref{eq:H}) from MC simulations. The exchange parameters are from the experimental case (cf. Eq.~\eqref{eq:experimental.parameter.set}) at $T = 252\ \text{mK}$. (a) Total structure factor. (b) Non-spin-flip (NSF) channel. (c) Spin-flip (SF) channel. The 4FPPs can be observed in the SF channel, centered on [0, 0, 2] (and points related by symmetry), but weaker than in the $[h0k]$ plane. } \label{fig:Sq.experimental.parameters_hhk} \end{figure} \end{document}
\section*{\large \bf Introduction} Ribaucour transformations for hypersurfaces, parametrized by lines of curvature, were classically studied by Bianchi \cite{Bianchi}. They can be applied to obtain surfaces of constant Gaussian curvature and surfaces of constant mean curvature from a given surface of the same type. The first application of this method to minimal and cmc surfaces in $R^3$ was obtained by Corro, Ferreira, and Tenenblat in \cite{CFT1}-\cite{CFT3}. In \cite{TW}, Tenenblat and Wang extended such transformations to surfaces of space forms. For more applications of this method, see \cite{CT}, \cite{NT}, \cite{FT}, \cite{Tenenblat2}, \cite{TL} and \cite{TW1}. Using Ribaucour transformations and applying the theory to the rotational flat surfaces in $H^3$, the authors of \cite{CMT} obtained families of new such surfaces. The study of flat surfaces in $S^3$ traces back to Bianchi's works in the 19th century, and it has a very rich global theory, as evidenced by the existence of a large class of flat tori in $S^3$; see \cite{K1}, \cite{pinkall}, \cite{Spivak} and \cite{W}. Indeed, these flat tori constitute the only examples of compact surfaces of constant curvature in space forms that are not totally umbilical round spheres. Flat surfaces in $S^3$ admit a more explicit treatment than other constant curvature surfaces. Moreover, there are still important open problems regarding flat surfaces in $S^3$, some of them unanswered for more than 40 years. For instance, it remains unknown whether there exists an isometric embedding of $R^2$ into $S^3$. These facts show that the geometry of flat surfaces in $S^3$ is a topic worth studying, although the number of contributions to the theory is not too large. Some important references of the theory are \cite{dadok}, \cite{K1}-\cite{K5}, \cite{pinkall}, \cite{Spivak} and \cite{W}.
In \cite{MJP}, the authors give a complete classification of helicoidal flat surfaces in $S^3$ by means of asymptotic lines coordinates. In \cite{Aledo}, the authors characterized the flat surfaces in the unit 3-sphere that pass through a given regular curve of $S^3$ with a prescribed tangent plane distribution along this curve. In this paper, motivated by \cite{CMT}, we use the Ribaucour transformations to get a family of complete flat surfaces in $S^3$ from a given such surface in $S^3$. As an application of the theory, we obtain families of complete flat surfaces in $S^3$ associated to the flat torus. The families we obtain depend on four parameters. One of these parameters comes from the parametrization of the flat torus; the other parameters appear from integrating the Ribaucour transformation. We show explicit examples of these surfaces. This work is organized as follows. In Section 1, we give a brief description of Ribaucour transformations in space forms. In Section 2, we give an additional condition for the transformed surface to be flat. In Section 3, we describe all flat surfaces of the 3-sphere obtained by applying the Ribaucour transformation to the flat torus. We prove that such surfaces are complete and provide explicit examples. \section*{ \large \bf 1. Preliminary} This section contains the definitions and the basic theory of Ribaucour transformations for surfaces in $S^3$ (for more details see \cite{TW}). Let $M$ be an orientable surface in $S^3$ without umbilic points, whose Gauss map we denote by $N$. Suppose that there exist two orthonormal principal vector fields $e_1$ and $e_2$ defined on $M$.
We say that $\widetilde{M}\subset S^3$ is associated to $M$ by a Ribaucour transformation with respect to $e_1$ and $e_2$, if there exist a differentiable function $h$ defined on $M$ and a diffeomorphism $\psi:M\rightarrow \widetilde{M}$ such that\\ (a) for all $p\in M$, $exp_ph(p)N(p)=exp_{\psi(p)}h(p)\widetilde{N}(\psi(p))$, where $\widetilde{N}$ is the Gauss map of $\widetilde{M}$ and $exp$ is the exponential map of $S^3$.\\ (b) The subset $\{exp_ph(p)N(p)$, $p\in M\}$ is a two-dimensional submanifold of $S^3$.\\ (c) $d\psi(e_i)$, $1\leq i\leq 2$, are orthogonal principal directions of $\widetilde{M}$. \vspace{.1in} The following result gives a characterization of Ribaucour transformations (see \cite{TW} for a proof and more details). \vspace{.1in} \noindent {\bf Theorem 1.1} \textit{ Let $M$ be an orientable surface of $S^3$ parametrized by $X:U\subseteq R^2\rightarrow M$, without umbilic points. Assume $e_i=\frac{X,_i}{a_i}$, $1\leq i\leq 2$, where $a_i=\sqrt{g_{ii}}$, are orthogonal principal directions, $-\lambda_i$ the corresponding principal curvatures, and $N$ is a unit vector field normal to $M$.
A surface $\widetilde{M}$ is locally associated to $M$ by a Ribaucour transformation if and only if there are differentiable functions $W,\Omega,\Omega_i:V\subseteq U\rightarrow R$ which satisfy \begin{eqnarray} \Omega_{i,j}&=&\Omega_j\frac{a_{j,i}}{a_{i}},\hspace{0,5cm} \mbox{for}\hspace{0,2cm}i\neq j, \nonumber\\ \Omega,_i &=&a_i\Omega_i,\label{eq12}\\ W,_i&=&-a_i\Omega_i\lambda_i.\nonumber \end{eqnarray} with $W(W+\lambda_i\Omega)\neq 0$, and $\widetilde{X}:V\subseteq U\rightarrow \widetilde{M}$ is a parametrization of $\widetilde{M}$ given by \begin{eqnarray} \widetilde{X}=\bigg(1-\frac{2\Omega^2}{S}\bigg)X-\frac{2\Omega}{S}\bigg(\sum_{i=1}^2\Omega_ie_i-WN\bigg),\label{eq6} \end{eqnarray} where \begin{eqnarray} S=\sum_{i=1}^2\big(\Omega_i\big)^2+W^2+\Omega^2.\label{eq7} \end{eqnarray} Moreover, the normal map of $\widetilde{X}$ is given by \begin{eqnarray} \widetilde{N}=N+\frac{2W}{S}\bigg(\sum_{i=1}^2\Omega_ie_i-WN+\Omega X\bigg),\label{eq8} \end{eqnarray} and the principal curvatures and coefficients of the first fundamental form of $\widetilde{X}$ are given by \begin{eqnarray} \widetilde{\lambda}_i=\frac{WT_i+\lambda_iS}{S-\Omega T_i},\hspace{1,5cm}\widetilde{g}_{ii}=\bigg(\frac{S-\Omega T_i}{S}\bigg)^2g_{ii}\label{eq9} \end{eqnarray} where $\Omega_i$, $\Omega$ and $W$ satisfy (\ref{eq12}), $S$ is given by (\ref{eq7}), $g_{ii}$, $1\leq i\leq 2$, are the coefficients of the first fundamental form of $X$, and \begin{eqnarray} T_1=2\bigg(\frac{\Omega_{1,1}}{a_1}+\frac{a_{1,2}}{a_1a_{2}}\Omega_2-W\lambda_1+\Omega\bigg),\hspace{0,1cm}T_2=2\bigg(\frac{\Omega_{2,2}}{a_2}+\frac{a_{2,1}}{a_{1}a_2}\Omega_1-W\lambda_2+\Omega\bigg)\label{T1T2first} \end{eqnarray} } \section*{ \large \bf 2. Ribaucour transformation for flat surfaces in $S^3$} In this section we provide a sufficient condition for a Ribaucour transformation to transform a flat surface into another such surface.
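A quick consistency check on the formulas of Theorem 1.1: since $\{X,e_1,e_2,N\}$ is an orthonormal frame of $R^4$ at each point, the expressions (\ref{eq6})--(\ref{eq8}) guarantee that $\widetilde{X}$ lies on $S^3$ and that $\widetilde{N}$ is a unit vector orthogonal to $\widetilde{X}$, for arbitrary values of $\Omega$, $\Omega_i$, $W$. The following numerical sketch (hypothetical random data, NumPy; not from the paper) verifies this pointwise:

```python
import numpy as np

rng = np.random.default_rng(1)

# random orthonormal frame {X, e1, e2, N} of R^4 (rows of an orthogonal matrix)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
X, e1, e2, N = Q.T

# arbitrary values of Omega_1, Omega_2, W, Omega at one point
O1, O2, W, Om = rng.normal(size=4)
S = O1**2 + O2**2 + W**2 + Om**2             # Eq. (eq7)

# Eq. (eq6) and Eq. (eq8)
Xt = (1 - 2*Om**2/S)*X - (2*Om/S)*(O1*e1 + O2*e2 - W*N)
Nt = N + (2*W/S)*(O1*e1 + O2*e2 - W*N + Om*X)

assert np.isclose(Xt @ Xt, 1.0)              # transformed point lies on S^3
assert np.isclose(Nt @ Nt, 1.0)              # transformed normal is unit
assert np.isclose(Xt @ Nt, 0.0)              # and orthogonal to the point
```

The identities follow algebraically from $S=\sum_i\Omega_i^2+W^2+\Omega^2$; the sketch simply confirms that no sign or factor is off in the stated formulas.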
\vspace{.1in} \noindent {\bf Theorem 2.1} \textit{ Let $M$ be a surface of $S^3$ parametrized by $X:U\subseteq R^2\rightarrow M$, without umbilic points, and let $\widetilde{M}$ parametrized by (\ref{eq6}) be associated to $M$ by a Ribaucour transformation, such that the normal lines intersect at a distance function $h$. Assume that $h=\frac{\Omega}{W}$ is not constant along the lines of curvature and the functions $\Omega$, $\Omega_i$ and $W$ satisfy the additional relation \begin{eqnarray} \Omega_1^2+\Omega_2^2=c\big(\Omega^2+W^2\big)\label{eq13} \end{eqnarray} where $c>0$, $S$ is given by (\ref{eq7}) and $W$, $\Omega_i$, $1\leq i\leq 2$, satisfy (\ref{eq12}). Then $\widetilde{M}$ parametrized by (\ref{eq6}) is a flat surface if and only if $M$ is a flat surface. } \vspace{.1in} \noindent {\bf Proof:} See \cite{TW} Theorem 2.1, with $\alpha=\gamma=1$ and $\beta=0$. \vspace{.1in} \noindent {\bf Remark 2.2} Let $X$ be as in the previous theorem. Then the parametrization $\widetilde{X}$ of $\widetilde{M}$, locally associated to $X$ by a Ribaucour transformation, given by (\ref{eq6}), is defined on \begin{eqnarray*} V=\{(u_1,u_2)\in U;\hspace{0,1cm}(\Omega T_1-S)(\Omega T_2-S)\neq0\}.\nonumber \end{eqnarray*} \vspace{.1in} \section*{ \large \bf 3. Families of flat surfaces associated to the Flat Torus in $S^3$.} \vspace{.2in} In this section, by applying Theorem 2.1 to the flat torus in $S^3$, we obtain a three-parameter family of complete flat surfaces in $S^3$. \vspace{.2in} \noindent {\bf Theorem 3.1} \textit{ Consider the flat torus in $S^3$ parametrized by \begin{eqnarray} X(u_1,u_2)=(r_1\cos(r_2u_1),r_1\sin(r_2u_1),r_2\cos(r_1u_2),r_2\sin(r_1u_2)), \hspace{0,3cm}(u_1,u_2)\in R^2\label{torus} \end{eqnarray} where $r_i$, $1\leq i\leq 2$, are positive constants satisfying $r_1^2+r_2^2=1$, whose first fundamental form is $I=r_1^2r_2^2\big(du_1^2+du_2^2\big)$.
A parametrized surface $\widetilde{X}(u_1,u_2)$ is a flat surface locally associated to $X$ by a Ribaucour transformation as in Theorem 2.1, with the additional relation given by (\ref{eq13}), if and only if, up to isometries of $S^3$, it is given by \begin{eqnarray}\label{xtiu} \widetilde{X}=\frac{1}{S}\left[ \begin{array}{c} \big(r_1S-2r_1\Omega^2-2r_2\Omega W\big)\cos(r_2u_1)+2f'\Omega\sin(r_2u_1)\\ \big(r_1S-2r_1\Omega^2-2r_2\Omega W\big)\sin(r_2u_1)-2f'\Omega\cos(r_2u_1)\\ \big(r_2S-2r_2\Omega^2+2r_1\Omega W\big)\cos(r_1u_2)+2g'\Omega\sin(r_1u_2)\\ \big(r_2S-2r_2\Omega^2+2r_1\Omega W\big)\sin(r_1u_2)-2g'\Omega\cos(r_1u_2)\\ \end{array} \right] \end{eqnarray} defined on $V=\{(u_1,u_2)\in R^2;\hspace{0,1cm}(r_1^2g^2-r_2^2f^2-2r_2^2fg)(r_2^2f^2-r_1^2g^2+2r_1^2fg)\neq0\}$, where \begin{eqnarray} \Omega=r_1r_2\big(f(u_1)+g(u_2)\big),\,\,\, W=r_2^2f(u_1)-r_1^2g(u_2),\,\,\,S=(1+c)\big(\Omega^2+W^2\big)\label{omegaeW} \end{eqnarray} $c> 0$, and the functions $f$ and $g$ are given by \begin{eqnarray} &&i)\,\,f(u_1)=\cosh(r_2\sqrt{c}u_1),\,\,\,g(u_2)=\frac{r_2}{r_1}\sinh(r_1\sqrt{c}u_2),\label{fgA1positivo}\\ && \hspace{4cm}or\nonumber\\ &&ii)\,\,f(u_1)=\sinh(r_2\sqrt{c}u_1),\,\,\,g(u_2)=\frac{r_2}{r_1}\cosh(r_1\sqrt{c}u_2),\label{fgA1negativo}\\ &&\hspace{4cm} or\nonumber\\ &&iii)\,\,f(u_1)=a_1e^{\epsilon_1r_2\sqrt{c}u_1},\,\,\,g(u_2)=b_1e^{\epsilon_2r_1\sqrt{c}u_2},\,\, \epsilon_i^2=1, \, 1\leq i\leq 2.\label{fgA1zero} \end{eqnarray} Moreover, the normal map of $\widetilde{X}$ is given by \begin{eqnarray}\label{ntiu} \widetilde{N}=\frac{1}{S}\left[ \begin{array}{c} \big(-r_2S-2r_2W^2+2r_1\Omega W\big)\cos(r_2u_1)+2f'\Omega\sin(r_2u_1)\\ \big(-r_2S-2r_2W^2+2r_1\Omega W\big)\sin(r_2u_1)-2f'\Omega\cos(r_2u_1)\\ \big(r_1S+2r_1W^2-2r_2\Omega W\big)\cos(r_1u_2)+2g'\Omega\sin(r_1u_2)\\ \big(r_1S+2r_1W^2-2r_2\Omega W\big)\sin(r_1u_2)-2g'\Omega\cos(r_1u_2)\\ \end{array} \right] \end{eqnarray} } \vspace{.1in} \noindent {\bf Proof:} Consider the first fundamental form of the Flat
Torus $ds^2=r_1^2r_2^2(du_1^2+du_2^2)$ and the principal curvatures $-\lambda_i$, $1\leq i\leq 2$, given by $\lambda_1=\frac{-r_2}{r_1}$, $\lambda_2=\frac{r_1}{r_2}$. Using (\ref{eq12}), to obtain the Ribaucour transformations we need to solve the following system of equations \begin{eqnarray} \Omega_{i,j}=0,\hspace{0,5cm}\Omega,_i =r_1r_2\Omega_i,\hspace{0,5cm}W,_i=-r_1r_2\Omega_i\lambda_i, \hspace{0,2cm}1\leq i\neq j\leq 2.\label{eqomega} \end{eqnarray} Therefore we obtain \begin{eqnarray} \Omega =r_1r_2\big(f_1(u_1)+f_2(u_2)\big),\hspace{0,5cm}W=-r_1r_2\big(\lambda_1f_1+\lambda_2f_2\big)+\overline{c},\hspace{0,5cm}\Omega_i=f_i', \hspace{0,2cm}1\leq i\neq j\leq 2,\label{omega} \end{eqnarray} where $\overline{c}$ is a real constant. Using (\ref{eq13}), the associated surface will be flat when $\displaystyle{\Omega_1^2+\Omega_2^2=c(\Omega^2+W^2)}$. Therefore, we obtain that $c> 0$ and the functions $f_1$ and $f_2$ satisfy \begin{eqnarray} (f_1')^2+(f_2')^2=c(\Omega^2+W^2).\label{eqf1''} \end{eqnarray} Differentiating this last equation with respect to $u_1$ and $u_2$, and using (\ref{eqomega}) and (\ref{omega}), we get \begin{eqnarray} &&f_1''=cr_2^2f_1+cr_2^2\overline{c},\nonumber\\ &&f_2''=cr_1^2f_2-cr_1^2\overline{c}.\nonumber \end{eqnarray} Defining $f(u_1)=f_1(u_1)+\overline{c}$ and $g(u_2)=f_2(u_2)-\overline{c}$, we have \begin{eqnarray} f''-cr_2^2f=0,&&\,\,\,\,\,\ g''-cr_1^2g=0,\label{eqf1'''}\\ \Omega =r_1r_2\big(f(u_1)+g(u_2)\big),&&\,\,\,\,\,\ W=r_2^2f(u_1)-r_1^2g(u_2).\label{omegafim} \end{eqnarray} By Theorem 1.1, we have that $\widetilde{X}$ and $\widetilde{N}$ are given by (\ref{xtiu}) and (\ref{ntiu}).
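Since $r_1^2+r_2^2=1$ gives $\Omega^2+W^2=r_2^2f^2+r_1^2g^2$, the flatness condition (\ref{eqf1''}) reduces to $(f')^2+(g')^2=c\,(r_2^2f^2+r_1^2g^2)$, which family (i) satisfies thanks to $\cosh^2-\sinh^2=1$. A short numerical verification (an illustrative sketch, not from the paper; the sample values $r_1=3/5$, $r_2=4/5$, $c=4$ match those used in the examples of Section 3):

```python
import numpy as np

r1, r2, c = 0.6, 0.8, 4.0                  # r1^2 + r2^2 = 1, as for the flat torus
s = np.sqrt(c)
u1 = np.linspace(-2.0, 2.0, 201)
u2 = np.linspace(-2.0, 2.0, 201)[:, None]  # broadcast over a grid of (u1, u2)

# family (i): f = cosh(r2 sqrt(c) u1),  g = (r2/r1) sinh(r1 sqrt(c) u2)
f,  g  = np.cosh(r2*s*u1),       (r2/r1)*np.sinh(r1*s*u2)
fp, gp = r2*s*np.sinh(r2*s*u1),  r2*s*np.cosh(r1*s*u2)

Om = r1*r2*(f + g)                         # Omega of Eq. (omegafim)
W  = r2**2*f - r1**2*g

# flatness condition, Eq. (eqf1''), checked on the whole grid
assert np.allclose(fp**2 + gp**2, c*(Om**2 + W**2))
```

Families (ii) and (iii) pass the same check after swapping the hyperbolic functions or inserting the exponentials.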
Using (\ref{T1T2first}) and (\ref{omegafim}), we get $$T_1=\frac{2r_2(1+c)f}{r_1},\,\,\,\,\,\,\,T_2=\frac{2r_1(1+c)g}{r_2}.$$ Thus, from Remark 2.2, $\widetilde{X}$ is defined in $V=\{(u_1,u_2)\in R^2;\hspace{0,1cm}(r_1^2g^2-r_2^2f^2-2r_2^2fg)(r_2^2f^2-r_1^2g^2+2r_1^2fg)\neq0\}$.\\ From (\ref{eqf1'''}), we get \begin{eqnarray} f(u_1)=a_1\cosh(r_2\sqrt{c}u_1)+a_2\sinh(r_2\sqrt{c}u_1),\,\,\ g(u_2)=b_1\cosh(r_1\sqrt{c}u_2)+b_2\sinh(r_1\sqrt{c}u_2). \label{fgprimeiro} \end{eqnarray} Substituting (\ref{fgprimeiro}) and (\ref{omegafim}) in (\ref{eqf1''}), we have \begin{eqnarray} \big(a_1^2-a_2^2\big)r_2^2=\big(b_2^2-b_1^2\big)r_1^2.\label{eqa1b1} \end{eqnarray} Let $A_1=a_1^2-a_2^2$. If $A_1>0$, then from (\ref{eqa1b1}), $b_2^2> b_1^2$. Hence (\ref{fgprimeiro}) can be rewritten as \begin{eqnarray} &&f(u_1)=\sqrt{A_1}\cosh(r_2\sqrt{c}u_1+A_2),\label{fterceiro}\\ &&g(u_2)=\sqrt{A_1}\frac{r_2}{r_1}\sinh(r_1\sqrt{c}u_2+B_2),\label{gterceiro} \end{eqnarray} where $\cosh(A_2)=\frac{a_1}{\sqrt{a_1^2-a_2^2}}$, $\sinh(A_2)=\frac{a_2}{\sqrt{a_1^2-a_2^2}}$, $\cosh(B_2)=\frac{b_2}{\sqrt{b_2^2-b_1^2}}$ and $\sinh(B_2)=\frac{b_1}{\sqrt{b_2^2-b_1^2}}$.\\ The constants $A_2$ and $B_2$ may, without loss of generality, be taken to be zero. One can verify that the surfaces with different values of $A_2$ and $B_2$ are congruent. In fact, using the notation $\widetilde{X}_{A_2B_2}$ for the surface $\widetilde{X}$ with fixed constants $A_2$ and $B_2$, we have $$\widetilde{X}_{A_2B_2}=R_{(\frac{-A_2}{r_2\sqrt{c}},\frac{-B_2}{r_1\sqrt{c}})}\widetilde{X}_{00}\circ h $$ where $\displaystyle{h(u_1,u_2)=\bigg(u_1+\frac{A_2}{r_2\sqrt{c}},u_2+\frac{B_2}{r_1\sqrt{c}}\bigg)}$ with $$R_{(\theta,\phi)}(x_1,x_2,x_3,x_4)=(x_1\cos\theta-x_2\sin\theta,x_1\sin\theta+x_2\cos\theta,x_3\cos\phi-x_4\sin\phi,x_3\sin\phi+x_4\cos\phi).$$ Now substituting (\ref{fterceiro}) with $A_2=0$, (\ref{gterceiro}) with $B_2=0$ and (\ref{omegafim}) in (\ref{xtiu}), we obtain that $\widetilde{X}$ does not depend on $A_1$.
Thus, without loss of generality, we can consider $A_1=1$. Therefore we conclude that $f$ and $g$ are given by (\ref{fgA1positivo}). On the other hand, if $A_1<0$, then from (\ref{eqa1b1}), $b_2^2< b_1^2$. Hence (\ref{fgprimeiro}) can be rewritten as \begin{eqnarray} &&f(u_1)=\sqrt{-A_1}\sinh(r_2\sqrt{c}u_1+A_2),\label{fterceiro1}\\ &&g(u_2)=\sqrt{-A_1}\frac{r_2}{r_1}\cosh(r_1\sqrt{c}u_2+B_2).\label{gterceiro1} \end{eqnarray} Proceeding in a similar way to the previous case, we obtain that $f$ and $g$ are given by (\ref{fgA1negativo}). If $A_1=0$, then $a_2=\epsilon_1a_1$, and from (\ref{eqa1b1}), $b_2=\epsilon_2b_1$, with $\epsilon_i^2=1$, $1\leq i\leq 2$. Thus, substituting this in (\ref{fgprimeiro}), we obtain (\ref{fgA1zero}). \vspace{.2in} \noindent {\bf Remark 3.2} Each flat surface associated to the flat torus as in Theorem 3.1 is parametrized by lines of curvature and, from (\ref{eq9}), the metric is given by $ds^2=\psi_1^2du_1^2+\psi_2^2du_2^2$, where \begin{eqnarray} \psi_1=\frac{(-r_2^2f^2+r_1^2g^2-2r_2^2fg)r_1r_2}{r_1^2g^2+r_2^2f^2},\,\,\,\psi_2=\frac{(r_2^2f^2-r_1^2g^2-2r_1^2fg)r_1r_2}{r_1^2g^2+r_2^2f^2}.\label{metrica} \end{eqnarray} Moreover, from (\ref{eq9}), the principal curvatures of $\widetilde{X}$ are given by \begin{eqnarray} \widetilde{\lambda}_1=\frac{-r_2\psi_2}{r_1\psi_1},\,\,\,\,\,\,\widetilde{\lambda}_2=\frac{r_1\psi_1}{r_2\psi_2}.\label{aa1} \end{eqnarray} \vspace{.1in} \noindent{\bf Proposition 3.3} \textit{Any flat surface associated to the flat torus $\widetilde{X}$, given by Theorem 3.1, is complete.} \vspace{.15in} \noindent {\bf Proof:} For divergent curves $\gamma(t)=(u_1(t),u_2(t))$, such that $\displaystyle{\lim_{t\rightarrow \infty}\big(u_1^2+u_2^2\big)=\infty}$, we have $l(\widetilde{X}\circ\gamma)=\infty$.
In fact, the functions $f$ and $g$ are given by (\ref{fgA1positivo}), (\ref{fgA1negativo}) or (\ref{fgA1zero}), and the coefficients of the first fundamental form $\psi_i$, $1\leq i\leq 2$, of $\widetilde{X}$ are given by (\ref{metrica}). Therefore $\displaystyle{\lim_{|u_1|\rightarrow \infty}|\psi_i|=r_1r_2}$, $1\leq i\leq 2$, uniformly in $u_2$, and $\displaystyle{\lim_{|u_2|\rightarrow \infty}|\psi_i|=r_1r_2}$, $1\leq i\leq 2$, uniformly in $u_1$. Hence, there are $k_1>0$ and $k_2>0$ such that $|\psi_i(u_1,u_2)|>\frac{r_1r_2}{2}$, $1\leq i\leq 2$, for all $(u_1,u_2)\in R^2$ with $|u_1|>k_1$ or $|u_2|>k_2$.\\ Let $$m_i=\min\{|\psi_i(u_1,u_2)|;\,(u_1,u_2)\in [-k_1,k_1 ]\times [-k_2,k_2 ]\}.$$ Therefore $|\psi_i(u_1,u_2)|\geq m_i$ in $[-k_1,k_1 ]\times [-k_2,k_2 ]$. Now consider $m_0=\min\{m_1,m_2,\frac{r_1r_2}{2}\}$; then $|\psi_i(u_1,u_2)|\geq m_0$ in $R^2$. We conclude that $\widetilde{X}$ is a complete surface. \vspace{.2in} In the following, we provide some examples. \vspace{.2in} \noindent {\bf Example 3.4}{ Consider the stereographic projection $\pi:S^3\rightarrow R^3$ $$\pi(x_1,x_2,x_3,x_4)=\frac{1}{1-x_4}\bigg(x_1,x_2,x_3\bigg)$$ where $x_1^2+x_2^2+x_3^2+x_4^2=1$.\\ In Figure 1, we provide the surface parametrized by $\pi\circ\widetilde{X}$, where $\widetilde{X}$ is given by (\ref{xtiu}), locally associated to the flat torus in $S^3$ by a Ribaucour transformation. In this surface, we have\\ $f(u_1)=\cosh\bigg(\frac{8u_1}{5}\bigg)$ and $g(u_2)=\frac{4}{3}\sinh\bigg(\frac{6u_2}{5}\bigg)$.\\ \begin{figure}[!h] \begin{center} {\includegraphics[scale=0.5]{figura2.png}} \caption[!h]{In the figure above we have $r_1=\frac{3}{5}$, $r_2=\frac{4}{5}$, $c=4$.} \end{center} \end{figure}} \vspace{.1in} In Figure 2, we provide the surface parametrized by $\pi\circ\widetilde{X}$, where $\widetilde{X}$ is given by (\ref{xtiu}), locally associated to the flat torus in $S^3$ by a Ribaucour transformation.
In this surface, we have\\ $f(u_1)=\sinh\bigg(\frac{8u_1}{5}\bigg)$ and $g(u_2)=\frac{4}{3}\cosh\bigg(\frac{6u_2}{5}\bigg)$.\\ \begin{figure}[!h] \begin{center} {\includegraphics[scale=0.5]{figura3.png}} \caption[!h]{In the figure above we have $r_1=\frac{3}{5}$, $r_2=\frac{4}{5}$, $c=4$.} \end{center} \end{figure} \vspace{.1in} In Figure 3, we provide two surfaces parametrized by $\pi\circ\widetilde{X}$, where $\widetilde{X}$ is given by (\ref{xtiu}), locally associated to the flat torus in $S^3$ by a Ribaucour transformation. In these surfaces, we have $r_1=\frac{3}{5}$, $r_2=\frac{4}{5}$, $c=\frac{1}{1000}$, with $f$ and $g$ given by (\ref{fgA1positivo}) in the first surface and by (\ref{fgA1negativo}) in the second. \begin{figure}[!h] \begin{center} {\includegraphics[scale=0.5]{figura4.png}} \caption[!h]{} \end{center} \end{figure} \vspace{.1in} In Figure 4, we provide two surfaces parametrized by $\pi\circ\widetilde{X}$, where $\widetilde{X}$ is given by (\ref{xtiu}), locally associated to the flat torus in $S^3$ by a Ribaucour transformation. In these surfaces, we have $r_1=\frac{3}{5}$, $r_2=\frac{4}{5}$, $c=\frac{1}{1000}$, $a_1=b_1=1$, with $f$ and $g$ given by (\ref{fgA1zero}). \begin{figure}[!h] \begin{center} {\includegraphics[scale=0.5]{figura5.png}} \caption[!h]{} \end{center} \end{figure} \vspace{.1in}
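The completeness argument of Proposition 3.3 rests on $|\psi_i|\rightarrow r_1r_2$ far from the origin. A small numerical sanity check of this limit for family (i) (an illustrative sketch, not from the paper; sample values $r_1=3/5$, $r_2=4/5$, $c=4$ as in the examples above):

```python
import numpy as np

r1, r2, c = 0.6, 0.8, 4.0
s = np.sqrt(c)

def psi(u1, u2):
    # family (i) of Theorem 3.1 and the metric coefficients of Remark 3.2
    f = np.cosh(r2*s*u1)
    g = (r2/r1)*np.sinh(r1*s*u2)
    den = r1**2*g**2 + r2**2*f**2
    psi1 = (-r2**2*f**2 + r1**2*g**2 - 2*r2**2*f*g)*r1*r2/den
    psi2 = ( r2**2*f**2 - r1**2*g**2 - 2*r1**2*f*g)*r1*r2/den
    return psi1, psi2

# far from the origin both |psi_i| approach r1*r2, the flat-torus value,
# so the metric stays bounded below along divergent curves
p1, p2 = psi(20.0, 1.0)
assert np.isclose(abs(p1), r1*r2)
assert np.isclose(abs(p2), r1*r2)
```

At $(u_1,u_2)=(20,1)$ the hyperbolic terms in $f$ dominate, and both coefficients are already equal to $r_1r_2=0.48$ to machine precision, in line with the uniform limits claimed in the proof.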
\section{Introduction}\label{sec:intro} General relativity is not perturbatively renormalizable; this can be overcome by including higher-derivative terms in the action, albeit at the cost of introducing ghosts~\cite{Stelle:1977ry}. For this reason, infinite-derivative models are interesting, as they may circumvent this issue~\cite{Pais:1950za,Efimov:1967}. Due to the infinite number of derivatives, these theories are nonlocal; for a more detailed historical overview, see~\cite{Buoninfante:2018mre,Boos:2020qgg}. It is intriguing that nonlocality appears to be an essential ingredient of consistent theories of quantum gravity, since, for example, diffeomorphism-invariant observables form a nonlocal algebra \cite{Donnelly:2015hta}; nonlocality may resolve the black hole information loss problem \cite{Mathur:2009hf,Unruh:2017uaw}; and holography itself features bulk nonlocality \cite{Bousso:2002ju}. Nonlocal quantum field theories with an infinite number of derivatives, however, come with their own complications. The propagator in a ghost-free nonlocal theory involves an entire function of the four-momentum $p^\mu$ that has essential singularities at infinite radius in the complex $p^0$-plane. This makes Wick rotation problematic, a fact that has been shown to be related to the violation of unitarity in theories that are defined in Minkowski spacetime~\cite{Carone:2016eyp,Pius:2016jsl,Briscese:2018oyx,Briscese:2021mob,Koshelev:2021orf,Buoninfante:2022krn}. One may avoid this problem if one declares (or tacitly assumes, as is the case in much of the phenomenological literature; see, for example, Ref.~\cite{Biswas:2014yia}) that a ghost-free nonlocal theory is fundamentally a Euclidean field theory, and that loop amplitudes are to be analytically continued to Minkowski spacetime at the last step.
An alternative approach, however, was developed in Refs.~\cite{Boos:2021chb,Boos:2021jih,Boos:2021lsj}: theories defined in Minkowski spacetime were constructed to closely approximate the desired nonlocal theory as a limit, while avoiding the problems associated with the limit point. This framework, called asymptotic nonlocality, has been studied in the context of $\phi^4$ theory~\cite{Boos:2021chb}, scalar quantum electrodynamics~\cite{Boos:2021jih}, and non-Abelian gauge theories~\cite{Boos:2021lsj}. Asymptotically nonlocal theories interpolate between higher-derivative theories of finite order and ghost-free nonlocal ones via a sequence of theories with $N-1$ massive partner particles, as $N$ becomes large. (Following our past conventions, $N$ refers to the total number of propagator poles~\cite{Boos:2021chb,Boos:2021jih,Boos:2021lsj}.) At finite $N$, each corresponds to a Lee-Wick theory~\cite{LeeWick:1969,LeeWick:1972}, for which contour prescriptions and perturbation theory are understood~\cite{LeeWick:1969,LeeWick:1972,Cutkosky:1969fq}; see also Refs.~\cite{Anselmi:2017yux,Anselmi:2018kgz}. In the limiting case where $N \rightarrow \infty$ with the ratio $m_1^2/N$ held fixed, where $m_1$ is the mass of the lightest Lee-Wick particle, one arrives at a nonlocal theory with form factor $\exp(\ell^2\Box)$; the derived nonlocal scale $\ell^2 \sim {\cal O}( N/m^2_1)$ does not appear as a fundamental parameter in the Lagrangian, but emerges in physical quantities that are computed in the stated limit. For large $N$, the nonlocal scale can be hierarchically separated from the Lee-Wick mass scale, with applications to the hierarchy problem in realistic theories~\cite{Boos:2021jih,Boos:2021lsj}. In the present work, we show how this approach may be extended to gravity, working perturbatively at lowest order in an expansion of the metric about a flat background. 
Nonlocal modifications of Einstein gravity have been of substantial interest in the recent literature \cite{Krasnikov:1987,Kuzmin:1989,Tomboulis:1997gg,Biswas:2005qr,Modesto:2011kw,Biswas:2011ar}. We will see that our construction provides a new way of defining such theories. Our paper is organized as follows. We first review the concept of asymptotic nonlocality in a simple scalar toy model in Sec.~\ref{sec:framework}, and present a field theory for asymptotically nonlocal gravity at the linearized level in Sec.~\ref{sec:gravity}. In Sec.~\ref{sec:examples}, we demonstrate the emergence of the nonlocal regulator scale in our gravitational theory via three examples: (i) the resolution of the singularity in the classical solution for the metric of a point particle, (ii) the same for the nonrelativistic potential extracted from a $t$-channel graviton exchange scattering amplitude, and (iii) the ultraviolet behavior of the gravitational contributions to the self-energy of a real scalar field at one loop. In Sec.~\ref{sec:conc}, we summarize our results and comment on how to extend these studies beyond the linearized approximation by drawing analogies to the construction of asymptotically nonlocal Yang-Mills theory~\cite{Boos:2021lsj}. \section{Framework} \label{sec:framework} In this section, we review the construction of an asymptotically nonlocal theory of a real scalar field. The goal is to find a sequence of higher-derivative quantum field theories, each with a finite number of derivatives, that approach the nonlocal theory defined by \begin{equation} {\cal L} = -\frac{1}{2} \, \phi \, \Box \, e^{\ell^2 \Box} \, \phi - V(\phi) \label{eq:limitpoint} \end{equation} as a limit point. One begins by noting that \begin{equation} {\cal L} = -\frac{1}{2} \, \phi \,\, \Box \, \left(1+\frac{\ell^2 \Box}{N-1}\right)^{N-1} \!\! \phi - V(\phi) \label{eq:toodegen} \end{equation} approaches Eq.~(\ref{eq:limitpoint}) in the limit that $N\rightarrow \infty$. 
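In momentum space, where $\Box \rightarrow -p^2$, this is just the elementary limit $(1+x/n)^n \rightarrow e^x$ applied to the kinetic form factor. A quick numerical illustration of the rate of convergence (a sketch, with an arbitrary test value of $\ell^2 p^2$; not from the paper):

```python
import numpy as np

# in momentum space Box -> -p^2, so the form factor of Eq. (toodegen) becomes
# (1 - l^2 p^2/(N-1))^(N-1), which tends to exp(-l^2 p^2) as N -> infinity
l2p2 = 0.7                                   # arbitrary test value of l^2 p^2
exact = np.exp(-l2p2)

Ns = (10, 100, 1000, 10000)
approx = [(1.0 - l2p2/(N - 1))**(N - 1) for N in Ns]
errs = [abs(a - exact) for a in approx]

# convergence is monotone, with the error shrinking roughly like 1/N
assert all(e1 > e2 for e1, e2 in zip(errs, errs[1:]))
assert errs[-1] < 1e-4
```

The $1/N$ approach to the exponential form factor mirrors the way physical quantities in the finite-$N$ theories approach their nonlocal limits.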
However, the propagator following from Eq.~(\ref{eq:toodegen}) includes an $(N-1)^{\rm th}$ order pole, which has no simple particle interpretation. We remedy this by altering the finite-$N$ theory: \begin{equation} \label{eq:hdN} {\cal L}_N = - \frac{1}{2} \phi \, \Box \left[\prod_{j=1}^{N-1} \left(1+\frac{\ell_j^2 \Box}{N-1} \right) \right] \phi - V(\phi) \, . \end{equation} One obtains the same limiting form of the Lagrangian, Eq.~(\ref{eq:limitpoint}), when $N\rightarrow \infty$, provided that the $\ell_j$ approach a common value, $\ell$, as the limit is taken. For finite $N$, the propagator is now given by \begin{equation} D_F(p^2) = \frac{i}{p^2} \, \prod_{j=1}^{N-1} \left(1-\frac{\ell_j^2 \, p^2}{N-1}\right)^{-1} \, , \label{eq:nondegprop} \end{equation} which has $N$ nondegenerate poles. Let us define $m_0=0$ and $m_k = 1/a_k$ where $a_k^2 \equiv \ell_k^2 / (N-1)$ for $k \geq 1$. Then, one may use a partial fraction decomposition to write Eq.~(\ref{eq:nondegprop}) as \begin{equation} D_F(p^2) = \sum_{j=0}^{N-1} c_j \, \frac{i}{p^2 - m_j^2} \, , \label{eq:sumprops} \end{equation} where \begin{equation} c_0=1 \,\,\,\,\, \mbox{ and } \,\,\,\,\, c_j = -\prod\limits_{\substack{k=1\\k\not=j}}^{N-1} \frac{m_k^2}{m_k^2 - m_j^2} \, \mbox{ for } j>0 \, . \label{eq:pfd0} \end{equation} It follows from Eq.~(\ref{eq:pfd0}) that the residue of each pole alternates in sign, indicating that the spectrum of the theory consists of a tower of ordinary particles and ghosts. This is what one expects to find~\cite{Pais:1950za} in a Lee-Wick theory \cite{Grinstein:2007mp} with additional higher-derivative quadratic terms beyond the minimal set~\cite{Carone:2008iw}. While the theories defined by Eq.~(\ref{eq:hdN}) at any finite $N$ are of the (arguably unitary~\cite{Grinstein:2008bg}) Lee-Wick type, they inherit a desirable feature of the limiting theory as $N$ becomes large, namely the emergence of a nonlocal scale that can serve as a regulator.
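The partial fraction decomposition, Eqs.~(\ref{eq:sumprops}) and (\ref{eq:pfd0}), and the alternating signs of the residues can be verified numerically (an illustrative sketch with arbitrary, nondegenerate mass-squared parameters; factors of $i$ are dropped):

```python
import numpy as np

N = 5
m2 = np.array([1.0, 1.7, 2.3, 4.1])        # m_j^2 for j = 1..N-1, nondegenerate

def prop(p2):
    # higher-derivative propagator, Eq. (nondegprop): (1/p^2) prod_j (1 - p^2/m_j^2)^-1
    return 1.0/p2 * np.prod(1.0/(1.0 - p2/m2))

# residues from the partial fraction decomposition, Eq. (pfd0), with m_0 = 0
c = [1.0] + [-np.prod([mk/(mk - mj) for mk in m2 if mk != mj]) for mj in m2]
masses = np.concatenate(([0.0], m2))

def prop_pfd(p2):
    return sum(cj/(p2 - mj2) for cj, mj2 in zip(c, masses))

p2 = 0.37                                   # generic test momentum
assert np.isclose(prop(p2), prop_pfd(p2))

# residues alternate in sign: ordinary particles interleaved with Lee-Wick ghosts
assert all(c[j]*c[j+1] < 0 for j in range(N - 1))
```

The alternation of residue signs is exactly the particle/ghost tower described below Eq.~(\ref{eq:pfd0}); the sum of all residues also vanishes, reflecting the softened ultraviolet behavior of the full propagator.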
We showed in Refs.~\cite{Boos:2021chb,Boos:2021jih,Boos:2021lsj} that the nonlocal scale, $M_\text{nl} \equiv 1/\ell$, can be hierarchically smaller than the mass of the lightest Lee-Wick partner, $m_1$, \begin{align} M_\text{nl}^2 \sim {\cal O}\left(\frac{m_1^2}{N}\right) \, . \end{align} As we will see, one encounters the same phenomenon in the linearized gravitational theory that is the subject of the present work. The existence of $N$ first-order poles in Eq.~(\ref{eq:sumprops}) suggests that there is a way to rewrite Eq.~(\ref{eq:hdN}) as a theory that includes $N$ propagating fields that do not have higher-derivative quadratic terms. This approach is well known in Lee-Wick theories~\cite{Grinstein:2007mp}, and was extended to asymptotically nonlocal theories in Refs.~\cite{Boos:2021chb,Boos:2021jih}. In the present case, consider the following theory of $N$ real scalar fields $\phi_j$, and $N-1$ real scalar fields $\chi_j$: \begin{equation} {\cal L}_N = -\frac{1}{2} \, \phi_1 \Box \phi_N - V(\phi_1) - \sum_{j=1}^{N-1} \chi_j \, \left[ \Box \phi_j - (\phi_{j+1}-\phi_j)/a_j^2\right] \, . \label{eq:start} \end{equation} The $a_j$ were defined previously, and we have rescaled the $\chi_j$ fields so that each term in the sum has unit coefficient. The $\chi_j$ are auxiliary fields that serve to impose a set of constraints on the theory. Since they appear linearly in the Lagrangian, one may functionally integrate over the $\chi_j$ in the generating functional for the correlation functions of the theory. This leads to functional delta functions, which impose constraints that are exact at the quantum level: \begin{equation} \Box \phi_j -(\phi_{j+1}-\phi_j)/a_j^2=0 \, , \quad \mbox{ for }j=1, \dots, N-1 \, . 
\label{eq:recursive} \end{equation} These constraints allow the $(j+1)^{\rm th}$ field to be eliminated in terms of the $j^{\rm th}$ field; after successive functional integrations, one finds \begin{equation} \phi_N = \left[\prod_{j=1}^{N-1} \left(1+\frac{\ell_j^2 \Box}{N-1} \right) \right] \phi_1 \, . \end{equation} Substituting into Eq.~(\ref{eq:start}), and relabelling $\phi_1 \rightarrow \phi$, one recovers Eq.~(\ref{eq:hdN}), as desired. Alternatively, Eq.~(\ref{eq:start}) can be subject to field redefinitions which lead to a sector with diagonal kinetic and mass terms, corresponding to the expected propagating degrees of freedom in a Lee-Wick theory, and a sector of non-dynamical fields that can be integrated out. The spectrum of propagating fields is found to be identical to that of the higher-derivative theory; we refer the reader to Ref.~\cite{Boos:2021chb} for details. In applications where one is only interested in computing Feynman diagrams with internal scalar lines, there is little practical advantage to using a field-redefined version of Eq.~(\ref{eq:start}) instead of the higher-derivative form in Eq.~(\ref{eq:hdN}). The same will be true in our generalization to linearized gravity; we will rely on the higher-derivative form of the theory, analogous to Eq.~(\ref{eq:hdN}), in the computations we present in Sec.~\ref{sec:examples} that illustrate the emergence of the nonlocal scale. Nevertheless, we will present an auxiliary field formulation analogous to Eq.~(\ref{eq:start}), which is helpful in understanding the spectrum of massive Lee-Wick gravitons. \section{Asymptotically nonlocal gravity}\label{sec:gravity} Let us now construct an asymptotically nonlocal theory in the gravitational sector. 
We work in Cartesian coordinates and consider a small perturbation from $D$-dimensional Minkowski spacetime parametrized as \begin{align} g{}_{\mu\nu} = \eta{}_{\mu\nu} + 2 \, \kappa \, h{}_{\mu\nu} \, , \end{align} where $\kappa = \sqrt{8\pi G}$, we set $\hbar = c = 1$, and we work in the particle physics metric signature $(+,-,\ldots,-)$. We discuss the generalization to the full, nonlinear theory in Sec.~\ref{sec:conc}. As a warmup, recall that the leading-order Einstein-Hilbert Lagrangian can then be written compactly as \begin{align} \label{eq:lagr-eh} \mathcal{L}_\text{EH} &= \frac{1}{2\kappa^2} \sqrt{-g} R = -\frac12 h{}_{\mu\nu} \mathcal{O}^{\mu\nu\rho\sigma} h{}_{\rho\sigma} + \mathcal{O}(\kappa) \, , \end{align} where we defined the symbols\footnote{Here, and in what follows, all indices are raised and lowered with the Minkowski metric.} \begin{align} \begin{split} \mathcal{O}^{\mu\nu}_{\rho\sigma} &\equiv \mathcal{O}^{\mu\nu\alpha\beta} \eta{}_{\rho\alpha} \eta{}_{\sigma\beta} = \left( \mathbb{1}^{\mu\nu}_{\rho\sigma} - \eta{}^{\mu\nu}\eta{}_{\rho\sigma} \right) \Box + \eta{}^{\mu\nu}\partial{}_\rho\partial{}_\sigma + \eta{}_{\rho\sigma}\partial{}^\mu\partial{}^\nu - \mathcal{C}^{\mu\nu}_{\rho\sigma} \, , \\ \mathbb{1}^{\mu\nu}_{\rho\sigma} &\equiv \frac12 \left( \delta{}^\mu_\rho \delta{}^\nu_\sigma + \delta{}^\mu_\sigma \delta{}^\nu_\rho \right) \, , \label{eq:one} \\ \mathcal{C}{}^{\mu\nu}_{\rho\sigma} &\equiv \frac12 \left( \delta{}^\mu_\rho\partial^\nu\partial_\sigma + \delta{}^\mu_\sigma\partial^\nu\partial_\rho + \delta{}^\nu_\rho\partial^\mu\partial_\sigma + \delta{}^\nu_\sigma\partial^\mu\partial_\rho \right) \, , \end{split} \end{align} and $\Box \equiv \eta{}^{\mu\nu} \partial{}_\mu \partial{}_\nu$. 
Observe that the operator $\mathcal{O}^{\mu\nu\rho\sigma}$ satisfies \begin{align} \mathcal{O}^{\mu\nu\rho\sigma} &= \mathcal{O}^{\nu\mu\rho\sigma} = \mathcal{O}^{\mu\nu\sigma\rho} = \mathcal{O}^{\rho\sigma\mu\nu} \, , \\ \partial{}_\mu \mathcal{O}^{\mu\nu\rho\sigma} &= \partial{}_\nu \mathcal{O}^{\mu\nu\rho\sigma} = \partial{}_\rho \mathcal{O}^{\mu\nu\rho\sigma} = \partial{}_\sigma \mathcal{O}^{\mu\nu\rho\sigma} = 0 \, . \label{eq:transverse} \end{align} Under a gauge transformation associated with the infinitesimal diffeomorphism $2\kappa\xi^\mu(x)$, the metric perturbation transforms as $h{}_{\mu\nu} \rightarrow h{}_{\mu\nu} + \partial{}_\mu \xi{}_\nu + \partial{}_\nu \xi{}_\mu$. The gauge invariance of \eqref{eq:lagr-eh} is then guaranteed by Eq.~\eqref{eq:transverse}. In order to construct an asymptotically nonlocal theory of linearized gravity, we proceed in close analogy to Eq.~(\ref{eq:start}): Consider a theory with $N$ fields $h{}_{\mu\nu}^j$ (where $j=1,\dots,N$) and $N-1$ auxiliary fields $\chi{}_{\mu\nu}^j$ (with $j=1,\dots,N-1$). Under a gauge transformation we demand $h{}^j_{\mu\nu} \rightarrow h{}^j_{\mu\nu} + \partial{}_\mu \xi{}_\nu + \partial{}_\nu \xi{}_\mu$ and $\chi{}^j_{\mu\nu} \rightarrow \chi{}^j_{\mu\nu}$. 
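As a quick consistency check, the symmetry and transversality properties of $\mathcal{O}^{\mu\nu\rho\sigma}$ quoted above can be verified numerically by treating $\partial_\mu$ as a formal commuting vector $p_\mu$ (so that $\Box \to p^2$). The sketch below is our own: it fixes $D=4$, picks an arbitrary test vector, and builds $\mathcal{O}$ from the definitions; the function names are ours.

```python
import itertools

D = 4
# Minkowski metric with particle-physics signature (+,-,-,-)
eta = [[(1 if mu == 0 else -1) if mu == nu else 0 for nu in range(D)]
       for mu in range(D)]

# Treat the derivative as a formal commuting vector: d_mu -> p_mu, Box -> p^2
p_lo = [2.0, 1.0, -1.0, 3.0]                       # arbitrary test vector p_mu
p_up = [eta[mu][mu] * p_lo[mu] for mu in range(D)]  # p^mu
p2 = sum(p_up[mu] * p_lo[mu] for mu in range(D))    # p^2

def delta(a, b):
    return 1.0 if a == b else 0.0

def O_mixed(mu, nu, ro, si):
    """O^{mu nu}_{ro si} built from the definitions of 1 and C above."""
    one = 0.5 * (delta(mu, ro) * delta(nu, si) + delta(mu, si) * delta(nu, ro))
    C = 0.5 * (delta(mu, ro) * p_up[nu] * p_lo[si]
               + delta(mu, si) * p_up[nu] * p_lo[ro]
               + delta(nu, ro) * p_up[mu] * p_lo[si]
               + delta(nu, si) * p_up[mu] * p_lo[ro])
    return ((one - eta[mu][nu] * eta[ro][si]) * p2
            + eta[mu][nu] * p_lo[ro] * p_lo[si]
            + eta[ro][si] * p_up[mu] * p_up[nu] - C)

def O_up(mu, nu, ro, si):
    """O^{mu nu ro si}: raise the second index pair with the (inverse) metric."""
    return sum(O_mixed(mu, nu, a, b) * eta[a][ro] * eta[b][si]
               for a in range(D) for b in range(D))

# index symmetries: O^{mu nu ro si} = O^{nu mu ro si} = O^{mu nu si ro} = O^{ro si mu nu}
max_sym = max(abs(O_up(m, n, r, s) - O_up(n, m, r, s))
              + abs(O_up(m, n, r, s) - O_up(m, n, s, r))
              + abs(O_up(m, n, r, s) - O_up(r, s, m, n))
              for m, n, r, s in itertools.product(range(D), repeat=4))

# transversality: p_mu O^{mu nu ro si} = 0 (the other slots follow by symmetry)
max_trans = max(abs(sum(p_lo[m] * O_up(m, n, r, s) for m in range(D)))
                for n, r, s in itertools.product(range(D), repeat=3))
```

Both maxima vanish up to rounding, confirming the transversality relations as polynomial identities in $p_\mu$.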
The Lagrangian is \begin{align} \label{eq:lagrangian} \mathcal{L}_N = -\frac12 h{}^1_{\mu\nu} \mathcal{O}^{\mu\nu\rho\sigma} h{}^N_{\rho\sigma} - \sum\limits_{j=1}^{N-1} \chi_j^{\rho\sigma} \left[ \mathcal{O}^{\mu\nu}_{\rho\sigma} h{}^j_{\mu\nu} - m_j^2 \mathcal{M}{}^{\mu\nu}_{\rho\sigma}( h{}^{j+1}_{\mu\nu} - h{}^j_{\mu\nu} ) \right] \, , \end{align} where the mass matrix $\mathcal{M}$ and its inverse $\mathcal{W}$ are \begin{align} \begin{split} \mathcal{M}{}^{\mu\nu}_{\rho\sigma} &= \mathbb{1}^{\mu\nu}_{\rho\sigma} - b \, \eta{}^{\mu\nu}\eta{}_{\rho\sigma} \, , \\ \mathcal{W}{}^{\mu\nu}_{\rho\sigma} &= \mathbb{1}^{\mu\nu}_{\rho\sigma} - \frac{b}{bD-1} \eta{}^{\mu\nu}\eta{}_{\rho\sigma} \, , \end{split} \label{eq:mandw} \end{align} such that $\mathcal{M}{}^{\mu\nu}_{\alpha\beta}\mathcal{W}{}^{\alpha\beta}_{\rho\sigma} = \mathbb{1}^{\mu\nu}_{\rho\sigma}$, with the parameter $b$ left free for now, to be determined below, and with arbitrary mass parameters $m_j^2>0$. \subsection{Integrating out auxiliary fields} The fields $\chi{}^j_{\mu\nu}$ appear linearly in the Lagrangian and are therefore auxiliary fields. Hence, one may perform the functional integral over each one of these exactly, leading to the iterative functional constraints \begin{align} \mathcal{O}^{\rho\sigma}_{\mu\nu} h{}^j_{\rho\sigma} = m_j^2 \mathcal{M}{}^{\rho\sigma}_{\mu\nu}( h{}^{j+1}_{\rho\sigma} - h{}^j_{\rho\sigma} ) \, .
\end{align} By acting with the inverse mass matrix $\mathcal{W}$ one finds \begin{align} h{}^{j+1}_{\mu\nu} = \left( \mathbb{1}^{\rho\sigma}_{\mu\nu} + \frac{1}{m_j^2} \hat{\mathcal{O}}{}^{\rho\sigma}_{\mu\nu} \right) h{}^j_{\rho\sigma} \, , \quad \hat{\mathcal{O}}{}^{\rho\sigma}_{\mu\nu} \equiv \mathcal{W}{}^{\rho\sigma}_{\alpha\beta} \mathcal{O}^{\alpha\beta}_{\mu\nu} \, , \end{align} where $\hat{\mathcal{O}}$ is given by \begin{align} \hat{\mathcal{O}}{}^{\mu\nu}_{\rho\sigma} = \mathcal{O}^{\mu\nu}_{\rho\sigma} + \frac{b(D-2)}{bD-1} \eta{}^{\mu\nu} (\Box\eta{}_{\rho\sigma} - \partial{}_\rho\partial{}_\sigma) \, . \end{align} After integrating out all constraints one is left with the Lagrangian \begin{align} \mathcal{L}_N = -\frac12 h^1_{\mu\nu} \left[ \mathcal{O} \left( \mathbb{1} + \frac{1}{m_{N-1}^2} \hat{\mathcal{O}} \right) \left( \mathbb{1} + \frac{1}{m_{N-2}^2} \hat{\mathcal{O}} \right) \dots \left( \mathbb{1} + \frac{1}{m_1^2} \hat{\mathcal{O}} \right) \right]^{\mu\nu\rho\sigma} h^1_{\rho\sigma} \, , \label{eq:intermediate} \end{align} where we have suppressed the tensor indices in the intermediate operators. We find that the higher-derivative Lagrangian we seek is obtained for the choice $b=1/2$. This leads to significant simplifications, including the relations \begin{align} \label{eq:master-relations} \hat{\mathcal{O}}{}^{\mu\nu}_{\alpha\beta} \hat{\mathcal{O}}{}^{\alpha\beta}_{\rho\sigma} = \Box \hat{\mathcal{O}}{}^{\mu\nu}_{\rho\sigma} \, , \qquad \mathcal{O}{}^{\mu\nu}_{\alpha\beta} \hat{\mathcal{O}}{}^{\alpha\beta}_{\rho\sigma} = \Box \mathcal{O}{}^{\mu\nu}_{\rho\sigma} \, , \end{align} which hold for any number of spacetime dimensions $D$. These may be used to show that Eq.~(\ref{eq:intermediate}) simplifies to \begin{align} \label{eq:lagrangian-2} \mathcal{L}_N = -\frac12 h^1_{\mu\nu} f(\Box) \mathcal{O}^{\mu\nu\rho\sigma} h^1_{\rho\sigma} \, , \quad f(\Box) \equiv \prod\limits_{j=1}^{N-1} \left( 1 + \frac{\Box}{m_j^2} \right) \, . 
\end{align} Compared to the linearized Einstein-Hilbert Lagrangian \eqref{eq:lagr-eh}, the above implements a higher-derivative modification thereof in terms of the form factor $f(\Box)$, where $h{}^1_{\mu\nu}$ is the graviton field. It is worth noting that the choice $b=1$ in Eq.~(\ref{eq:mandw}) corresponds to the tensor structure of a Pauli-Fierz mass term; in models of massive gravity~\cite{Hinterbichler:2011tt}, this is usually the preferred choice since it renders the massive graviton free of a ghost degree of freedom. In the present context, we retain a massless graviton, and the additional massive states already include a proliferation of ghosts. The extra degree of freedom in each massive mode from the choice $b=1/2$ does nothing more than indicate that the Lee-Wick spectrum includes both spin-two and spin-zero Lee-Wick particles, with the latter quantized like any other Lee-Wick scalar~\cite{Park:2010zw}. All are ultimately decoupled as one takes the asymptotically nonlocal limit. \subsection{Propagator} In order to find the graviton propagator we add a gauge-fixing term to the Lagrangian,\footnote{The required Faddeev-Popov ghosts do not appear in our subsequent computations.} \begin{align} \mathcal{L}_\text{gf} = \frac{1}{2\xi} \left( \partial{}_\mu h{}^{\mu\nu}_1 - \lambda \partial{}^\nu h_1 \right)^2 \, , \quad h_1 \equiv \eta{}^{\alpha\beta} h^1_{\alpha\beta} \, , \end{align} which, after integration by parts, we may rewrite as \begin{align} \mathcal{L}_\text{gf} = -\frac{1}{2\xi} h{}^1_{\mu\nu} \left[ \lambda^2 \eta{}^{\mu\nu}\eta{}_{\rho\sigma} \Box - \lambda( \eta{}^{\mu\nu}\partial{}_\rho \partial{}_\sigma + \eta{}_{\rho\sigma}\partial{}^\mu\partial{}^\nu ) + \frac12 \mathcal{C}^{\mu\nu}_{\rho\sigma} \right] h{}^{\rho\sigma}_1 \, .
\end{align} Then, for $\lambda=\tfrac12$, the propagator takes the form \begin{align} \GravitonPropagator &\equiv D{}^{\mu\nu\rho\sigma}(k) \nonumber \\ &= \frac{i}{2k^2 f(-k^2)} \bigg\{ \eta{}^{\mu\rho}\eta{}^{\nu\sigma}+\eta{}^{\mu\sigma}\eta{}^{\nu\rho} - \frac{2}{D-2}\eta{}^{\mu\nu}\eta{}^{\rho\sigma} \label{eq:gravprop} \\ &\hspace{40pt}-[1-2\xi f(-k^2)]\frac{\eta{}^{\mu\rho}k^\nu k^\sigma + \eta{}^{\mu\sigma}k^\nu k^\rho + \eta{}^{\nu\rho}k^\mu k^\sigma + \eta{}^{\nu\sigma}k^\mu k^\rho}{k^2} \bigg\} \, , \nonumber \\ f(-k^2) &\equiv \prod\limits_{j=1}^{N-1} \left( 1 - \frac{k^2}{m_j^2} \right) \, . \end{align} In a local theory, where $f(-k^2)=1$, the result obtained by setting $\xi=1/2$ in Eq.~(\ref{eq:gravprop}) is usually said to be the graviton propagator in harmonic or de Donder gauge. Note that $i \, [k^2 f(-k^2)]^{-1}$ is identical to Eq.~(\ref{eq:nondegprop}) and has the same partial fraction decomposition, \begin{equation} \frac{1}{k^2 \, f(-k^2)} = \sum_{j=0}^{N-1} \frac{c_j}{k^2 - m_j^2} \,\, , \end{equation} with the coefficients $c_j$ given in Eq.~(\ref{eq:pfd0}). For later convenience, it is useful to note that the $c_j$ satisfy the sum rules \begin{align} \label{eq:sum-rules} \sum\limits_{j=0}^{N-1} c_j = 0 \,\,\,\, \mbox{ and } \,\,\,\,\, \sum\limits_{j=0}^{N-1} m_j^{2n} \, c_j = 0 ~ \quad \text{for} ~ n=1,\dots,N-2 \, . \end{align} \section{Examples of emergent scale}\label{sec:examples} \subsection{Metric of a classical point particle} Taking the Lagrangian \eqref{eq:lagrangian-2}, we may substitute $h{}_{\mu\nu} \equiv h{}^1_{\mu\nu}$ for notational brevity and couple this to matter in the usual way, \begin{align} \mathcal{L} = \mathcal{L}_N - \kappa \, h{}_{\mu\nu} T{}^{\mu\nu} \, , \end{align} such that the classical field equations take the form \begin{align} \mathcal{O}^{\rho\sigma}_{\mu\nu} f(\Box) h{}_{\rho\sigma} = - \kappa\, T{}_{\mu\nu} \, .
\end{align} Working in the harmonic gauge, $\partial{}^\mu h{}_{\mu\nu} = \tfrac12 \partial{}_\nu h$, the field equations take the form \begin{align} \Box f(\Box) \left( h{}_{\mu\nu} - \frac12 h \eta{}_{\mu\nu}\right) = - \kappa \,T{}_{\mu\nu} \, , \end{align} and energy-momentum conservation follows directly from the gauge choice. Let us find the weak-field solution for a point particle of mass $m$ at rest. Let $X{}^\mu = (t, \boldsymbol{x})$, where $\boldsymbol{x}$ denotes a $(D-1)$-dimensional spatial vector in Cartesian coordinates, and define $r \equiv |\boldsymbol{x}|$. The conserved external energy-momentum tensor is \begin{align} T{}_{\mu\nu} = m \, \delta{}^t_\mu \delta{}^t_\nu \, \delta{}^{(D-1)}(\boldsymbol{x}) \, . \end{align} Inserting the static and spherically symmetric ansatz \begin{align} \text{d} s^2 = (\eta{}_{\mu\nu} + 2\kappa h{}_{\mu\nu})\text{d} X{}^\mu \text{d} X{}^\nu \equiv +[1-\phi(r)]\text{d} t^2 - [1 + \psi(r)]\text{d}\boldsymbol{x}^2 \end{align} into the field equations yields \begin{align} \label{eq:twofield} \begin{split} f(\nabla^2) \nabla^2 \left[ \phi + (D-1)\psi \right] &= -4\kappa^2 m \, \delta^{(D-1)}(\boldsymbol{x}) \, , \\ f(\nabla^2) \nabla^2 \left[ (D-3)\psi - \phi \right] &= 0 \, , \end{split} \end{align} where $\nabla^2$ is the spatial part of the $\Box$-operator. Equation~(\ref{eq:twofield}) is equivalent to \begin{align} f(\nabla^2) \nabla^2 \phi = -2 \, \frac{D-3}{D-2} \, \kappa^2 m\, \delta{}^{(D-1)}(\boldsymbol{x}) \, , \label{eq:givesphi} \end{align} and $\psi = \phi/(D-3)$. One may verify that in the case where $D=4$ and $f=1$, the solution of Eq.~(\ref{eq:givesphi}) is $\phi = 2\,G\,m/r$, as expected. In the more general case, we fix $D=4$ and use the partial fraction decomposition Eqs.~(\ref{eq:nondegprop})-(\ref{eq:pfd0}) to evaluate the Fourier transform of Eq.~(\ref{eq:givesphi}).
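The sum rules \eqref{eq:sum-rules} that make this Fourier transform well behaved can be checked in exact rational arithmetic. A minimal sketch (our own; the mass values are hypothetical, chosen only to be distinct), computing the $c_j$ as the residues of $[k^2 f(-k^2)]^{-1}$ at its poles:

```python
from fractions import Fraction

# Hypothetical distinct Lee-Wick masses; m_0 = 0 is the massless graviton pole
N = 5
m2 = [Fraction(0)] + [Fraction(v) for v in (3, 5, 7, 11)]   # m_j^2, j = 0..N-1

def coeff(j):
    """Residue of 1/(k^2 f(-k^2)) at k^2 = m_j^2; c_0 = 1."""
    if j == 0:
        return Fraction(1)
    c = Fraction(-1)
    for k in range(1, N):
        if k != j:
            c *= m2[k] / (m2[k] - m2[j])
    return c

c = [coeff(j) for j in range(N)]
# sum rules: sum_j c_j (m_j^2)^n = 0 exactly for n = 0, ..., N-2
sum_rules = [sum(c[j] * m2[j] ** n for j in range(N)) for n in range(N - 1)]
```

Every entry of `sum_rules` vanishes exactly, as required by Eq.~\eqref{eq:sum-rules}.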
Solving for $\phi$ in this way, one finds \begin{align} \phi(r) = \psi(r) = \frac{2 G m}{r} \left[ 1 + \sum\limits_{j=1}^{N-1} c_j e^{-m_j r} \right] \, , \quad c_{j\ge 1} = -\prod\limits_{\substack{k=1\\k\not=j}}^{N-1} \frac{m_k^2}{m_k^2-m_j^2} \, , \label{eq:phiofr} \end{align} which resembles results encountered in quadratic gravity \cite{Stelle:1978,Quandt:1990gc,Modesto:2014eta}. Performing an expansion near $r=0$ (and recalling that $c_0=1$ and $m_0=0$) one finds \begin{align} \phi(r) \approx 2Gm \left[ \frac{1}{r} \sum\limits_{j=0}^{N-1} c_j - \sum\limits_{j=0}^{N-1} c_j m_j + \frac{r}{2} \sum\limits_{j=0}^{N-1} c_j m_j^2 \right] + \mathcal{O}(r^2) \, . \end{align} The sum rules \eqref{eq:sum-rules} imply that for any $N \geq 2$ the $1/r$-divergence cancels, so the potential is manifestly finite at the origin. Moreover, for $N \ge 3$ the term linear in $r$ also vanishes, which implies the absence of a conical singularity at $r=0$; for a detailed study of regularity properties in higher-derivative gravity models see Ref.~\cite{Burzilla:2020utr}. The emergence of the nonlocal regulator scale can be seen by evaluating Eq.~(\ref{eq:phiofr}) numerically in the limit where $N \rightarrow \infty$ with the $m^2_j/N$ approaching a common value $1/\ell^2$. This is shown in Fig.~\ref{fig:potentials}, which assumes the following mass parametrization~\cite{Boos:2021lsj}: \begin{align} \label{eq:masses} m_j^2 = \frac{N}{\ell^2} \frac{1}{1-\frac{j}{2N^P}} \, , \qquad j \geq 1 \,\,\, , \end{align} where $P>1$ is an arbitrary parameter. The results can be seen to approach the expectation for a limiting nonlocal theory with an $\exp{(\ell^2 \Box)}$ form factor~\cite{Tseytlin:1995uq,Nicolini:2005vd,Modesto:2010uh,Edholm:2016hbt,Giacchini:2016xns,Boos:2018bxf,Akil:2022coa}, \begin{align} \phi(r) = \psi(r) = \frac{2 G m}{r} \text{erf} \left( \frac{r}{2\ell} \right) \, , \end{align} which is regular at $r=0$ and approaches the Newtonian expression for $r \gg \ell$. 
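A rough numerical sketch of this limit (our own, with deliberately loose tolerances): using the parametrization of Eq.~\eqref{eq:masses} at a modest $N$, chosen so that the large alternating $c_j$ stay well inside double precision, the bracketed sum is finite at the origin, Newtonian at large $r$, and already close to the error-function profile.

```python
import math

# Mass parametrization of Eq. (eq:masses) with illustrative choices P = 2, ell = 1
N, P, ell = 6, 2, 1.0
m2 = [0.0] + [(N / ell**2) / (1.0 - j / (2.0 * N**P)) for j in range(1, N)]

def coeff(j):
    """Partial-fraction coefficients: c_0 = 1, c_j as in Eq. (eq:phiofr)."""
    if j == 0:
        return 1.0
    c = -1.0
    for k in range(1, N):
        if k != j:
            c *= m2[k] / (m2[k] - m2[j])
    return c

c = [coeff(j) for j in range(N)]

def bracket(r):
    """phi(r) = (2 G m / r) * bracket(r); the N -> infinity limit is erf(r/(2 ell))."""
    return sum(c[j] * math.exp(-math.sqrt(m2[j]) * r) for j in range(N))

origin = bracket(0.0)          # vanishes by the sum rules, so phi(0) is finite
newton = bracket(20.0 * ell)   # Yukawa terms die off: Newtonian tail, bracket -> 1
gap = abs(bracket(1.0 * ell) - math.erf(0.5))   # proximity to the erf profile
```

Already at this small $N$ the profile tracks $\mathrm{erf}(r/2\ell)$ to within roughly ten percent at $r \sim \ell$, consistent with Fig.~\ref{fig:potentials}.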
\begin{figure}[!ht] \centering \includegraphics[width=0.75\textwidth]{plot-potential-v5.pdf} \caption{The gravitational potential $\phi(r)$ for various choices of $N$, given a typical mass parametrization \eqref{eq:masses}, versus the dimensionless distance $r/\ell$, where $\ell$ is the emergent regulator scale. We choose $P=2$ and display the potentials for $2Gm/\ell = 1$. The case $N=1$ corresponds to the Newtonian potential, whereas $N \ge 2$ corresponds to the Lee-Wick case; $N=\infty$ is the nonlocal case.} \label{fig:potentials} \end{figure} \subsection{Nonrelativistic gravitational potential} Using the propagator developed in Sec.~\ref{sec:gravity}, we next compute the gravitational potential by considering the nonrelativistic limit of a two-into-two scattering amplitude. To make the analogy with the well-known computation of the Coulomb potential in quantum electrodynamics manifest, we take the matter fields to be two distinct Dirac fermions with mass $m$. The single-graviton vertex comes from the part of the Lagrangian that is linear in $h_{\mu\nu}$, \begin{equation} {\cal L} \supset - \kappa \, h_{\mu\nu} \, T^{\mu\nu} \, , \end{equation} where \begin{equation} T^{\mu\nu} = \frac{i}{2} \overline{\psi} \stackrel{\leftrightarrow}{\partial}{}^{(\mu} \gamma^{\nu)} \psi-\eta^{\mu\nu} \left[\frac{i}{2} \, \overline{\psi} \stackrel{\!\!\! \leftrightarrow}{\partial_{\alpha}} \gamma^{\alpha} \psi - m \, \overline{\psi} \, \psi \right] \end{equation} is the flat-space energy-momentum tensor. Here we follow the conventions that $X_{(\mu\nu)} \equiv (X_{\mu\nu} + X_{\nu\mu})/2$ and $A \stackrel{\!\! \leftrightarrow}{\partial^{\mu}} B \equiv A \, \partial^\mu B - (\partial^\mu A) B$.
The vertex Feynman rule is given by \begin{equation} \OneGravitonOneFermionVertex \equiv V(p',p)^{\mu\nu}= -i \, \kappa \left[ \frac{1}{2} \left( \mathbb{1}^{\mu\nu}_{\rho\sigma} - \eta_{\rho\sigma} \eta^{\mu\nu} \right) \gamma^\rho (p+p^\prime)^\sigma + m \, \eta^{\mu\nu} \right] \, , \end{equation} where $\mathbb{1}_{\rho\sigma}^{\mu\nu}$ is defined in Eq.~\eqref{eq:one}. To extract the gravitational potential, it is sufficient for us to study fermion-fermion scattering, whose scattering amplitude is given by \begin{align} \begin{split} i {\cal M} &= \GravitonExchange = \left[ \overline{u}^{(s')}(p^\prime) V(p',p)_{\alpha\beta} \, u^{(s)}(p) \right] D^{\alpha\beta\rho\sigma}(q) \left[\overline{u}^{(r')}(k') V(k',k)_{\rho\sigma} \, u^{(r)}(k)\right] \,\,\, , \label{eq:scatamp} \end{split} \end{align} where $q=p-p'$ is the momentum flowing through the $t$-channel propagator, and $r$, $r'$, $s$, and $s'$ label spin states. The portion of the propagator proportional to $[1-2 \xi f(-q^2)]$ gives a vanishing contribution to the amplitude, which can be verified using $\overline{u}(p^\prime) \slash{\!\!\!q} \, u(p) = 0$ and $\overline{u}(k^\prime) \slash{\!\!\!q} \, u(k)=0$. The remaining part of the amplitude simplifies dramatically in the nonrelativistic limit. At zeroth order in the three-momentum, the spinor $u(p)$ in the Weyl basis may be written as \cite{Peskin:1995ev} \begin{equation} u^{(r)}(p) = \sqrt{m} \left(\begin{array}{c} \xi^{(r)} \\ \xi^{(r)} \end{array}\right) \, , \end{equation} where $\xi^{(r)}$ (with $r=1, 2$) is a set of two-component spinors that describe the spin state of the particle in the rest frame. For example, at lowest order, \begin{equation} \overline{u}^{(r)}(p^\prime) \gamma^0 \, u^{(s)}(p) = \overline{u}^{(r)} (p^\prime) \, u^{(s)}(p) = 2\, m \,\xi^{\prime (r) \dagger} \xi^{(s)} \, . 
\end{equation} The scattering amplitude in Eq.~(\ref{eq:scatamp}) reduces in the same limit to \begin{equation} i {\cal M} = - \frac{\kappa^2 \, m^2 }{2 \, f(|\vec{p}-\vec{p'}|^2)} \frac{i}{|\vec{p}-\vec{p'}|^2} \left(2\, m \,\xi^{\prime (s') \dagger} \xi^{(s)} \right) \left(2\, m \,\xi^{\prime (r') \dagger} \xi^{(r)} \right) \, . \end{equation} One may immediately identify the Fourier transformed potential energy~\cite{Peskin:1995ev}, \begin{equation} \widetilde{V}(\vec{q}) = - 4 \, \pi \, G \, \frac{m^2}{|\vec{q}|^2 f(|\vec{q}|^2)} \, . \end{equation} We decompose $[\,|\vec{q}|^2 f(|\vec{q}|^2)\,]^{-1}$ using partial fractions and then Fourier transform, \begin{equation} V(\vec{x}) = -4\, \pi \, G \,m^2 \int \frac{\text{d}^3 q}{(2 \pi)^3} \, e^{i\vec{q}\cdot\vec{x}} \left[\frac{1}{|\vec{q}|^2}+\sum_{j=1}^{N-1} c_j \, \frac{1}{|\vec{q}|^2 + m_j^2} \right], \end{equation} where the $c_j$ are the same coefficients defined in Eq.~(\ref{eq:pfd0}). Regulating the first term in the usual way, one obtains \begin{equation} V(\vec{x}) = - \frac{G \, m^2}{r}\left[1 + \sum_{j=1}^{N-1} c_j \, e^{-m_j r} \right] \, , \end{equation} where $r \equiv | \vec{x}|$. This potential energy function is proportional to the function $\phi(r)$ discussed in the previous section. Hence, the singularity at the origin is eliminated and the potential energy is regulated by the same emergent scale in the asymptotically nonlocal limit. \subsection{Loop regulator} As a final example, let us now demonstrate that the emergent scale also regulates the otherwise quadratically divergent self-energy of a real scalar field of mass $m$.
The vertex Feynman rules are given by \begin{align} \OneGravitonOneScalarVertex \hspace{16pt} &= -i \kappa \, \left[ p^\mu p'^\nu + p^\nu p'^\mu - (p\cdot p' - m^2)\,\eta{}^{\mu\nu} \right] \, , \\[10pt] \TwoGravitonTwoScalarsVertex \hspace{5pt} &= 4\, i \kappa^2 \, \Big[\mathbb{1}^{\mu\nu}_{\alpha\gamma} \mathbb{1}^{\rho\sigma}_{\beta\delta} \, \eta^{\gamma\delta} \, (p'^\alpha p^\beta + p'^\beta p^\alpha) \nonumber \\[-30pt] & \hspace{40pt} -\frac{1}{2} \,(\mathbb{1}^{\mu\nu\rho\sigma} - \frac{1}{2}\, \eta^{\mu\nu}\eta^{\rho\sigma} )(p\cdot p' - m^2) \nonumber \\ & \hspace{40pt} -\frac{1}{2} \,(\mathbb{1}^{\mu\nu}_{\alpha\beta} \eta^{\rho\sigma} + \mathbb{1}^{\rho\sigma}_{\alpha\beta} \eta^{\mu\nu}) \, p'^\alpha p^\beta \Big] \, , \end{align} where $\mathbb{1}^{\mu\nu\rho\sigma}$ is defined by Eq.~(\ref{eq:one}), with all indices raised using the Minkowski metric $\eta^{\alpha\beta}$. The total self-energy at one loop is \begin{align} -i M^2(p^2) = \BubbleDiagram + \RainbowDiagram \equiv -i \left[ M^2_A(p^2) + M^2_B(p^2) \right] \, . \label{eq:msqrd} \end{align} The physical scalar mass $p^2 = m_{\rm phys}^2$ is determined by the location of the propagator pole, {\it i.e.} it is the solution to $p^2-m^2 -M^2(p^2)=0$. This makes $M^2(m_{\rm phys}^2)$ a quantity of interest; at the order we work in perturbation theory, this is equivalent to $M^2(m^2)$, which we now study. For simplicity, we shall work in $\xi=0$ gauge.\footnote{The on-shell self-energy in Abelian and non-Abelian gauge theories is gauge invariant, with no dependence on the parameter $\xi$. In gravity, this is not the case, so that mass renormalization requires gauge-dependent counterterms. Nevertheless, it can be shown that physical quantities, such as scattering amplitudes, remain independent of gauge~\cite{Antoniadis:1985ub}. 
For an alternative approach using a background field formalism which leads to gauge-invariant counterterms, see Ref.~\cite{Mackay:2009cf}.} On-shell, the second, ``rainbow" diagram gives \begin{align} -i M^2_B(p^2=m^2) = 2\kappa^2 m^2 \int \frac{\text{d}^4 k}{(2\pi)^4} \frac{m^2 k^2-4(k\cdot p)^2}{k^4(k^2-2p\cdot k)f(-k^2)} \, . \end{align} If one were to set $f=1$, this expression would look logarithmically divergent based on naive power counting, provided a factor of $k^2$ survives in the numerator of the integrand. However, this is not quite the case: After combining denominators and shifting the integration variable $ k \rightarrow k + {\rm shift}$, the leading term in the numerator may be replaced by $(1-4/D) \, k^2 m^2$, with vanishing coefficient in $D=4$. Hence, this integral represents a finite correction, even before faster convergence is provided by $f(-k^2)$. Therefore, we focus on the first diagram to see the appearance of the nonlocal scale as a regulator. The first, ``bubble" diagram is given by \begin{align} -i M_A^2(p^2=m^2) = -6 \, \kappa^2 m^2 \int \frac{\text{d}^4 k}{(2\pi)^4} \frac{1}{k^2f(-k^2)} \, . \end{align} This expression is quadratically divergent for $f=1$, so let us now track the influence of the higher-derivative modification. Performing a partial fraction decomposition, we may write \begin{align} \begin{split} -i M_A^2(p^2=m^2) &= 6\, i\kappa^2 m^2 \sum\limits_{j=0}^{N-1} c_j \int \frac{\text{d}^4 k_E}{(2\pi)^4} \frac{1}{k_E^2+m_j^2} \\ &= 6i\kappa^2 m^2 \sum\limits_{j=1}^{N-1} c_j \left[ -\frac{m_j^2}{8\pi^2}\times\frac{1}{\epsilon} + \text{finite} \right] \, , \end{split} \end{align} where in the second line we have restored $m_0=0$ and evaluated the integral using dimensional regularization.
The $1/\epsilon$-divergences cancel due to the sum rules \eqref{eq:sum-rules}, such that \begin{align} -iM_A^2(p^2=m^2) &= \frac{6i\kappa^2}{(4\pi)^2} m^2 \sum\limits_{j=1}^{N-1} c_j m_j^2 \log m_j^2 \approx \frac{6i\kappa^2}{(4\pi)^2} m^2 M_\text{nl}^2 + \mathcal{O}\left(\frac{1}{N}\right) \, . \end{align} The last equality can be found in Eq.~(4.28) of Ref.~\cite{Boos:2021jih}; it follows from the parametrization in Eq.~\eqref{eq:masses} and holds numerically for $P \ge 1$. Therefore, the emergent scale $M_\text{nl}$ acts as the physical regulator for the gravitational corrections to the scalar self-energy as the Lee-Wick spectrum is appropriately decoupled. \section{Conclusions}\label{sec:conc} In Refs.~\cite{Boos:2021chb,Boos:2021jih,Boos:2021lsj}, we introduced a class of theories that interpolate between a Lee-Wick theory, with a finite number of higher-derivative quadratic terms, and a ghost-free nonlocal theory, with infinite-derivative quadratic terms. We call this sequence of theories, with ever increasing numbers of Lee-Wick particles, asymptotically nonlocal. As the number of Lee-Wick particles is increased, their spectrum is taken to decouple in such a way that the Lagrangian approaches that of the limiting theory. However, the limiting theory is fundamentally different from any theory in this sequence. In Minkowski spacetime, the propagator of the nonlocal limiting theory involves an entire function that diverges in certain directions in the complex plane; this prevents Wick rotation and leads to a violation of the optical theorem~\cite{Carone:2016eyp}. The asymptotically nonlocal theories do not have this feature, and unitarity can be established in Minkowski spacetime using prescriptions that have been developed in studies of more conventional Lee-Wick theories~\cite{Cutkosky:1969fq}.
It is interesting that at large $N$, where $N$ is the number of propagator poles, an emergent regulator scale appears in the asymptotically nonlocal theories that does not correspond to any fundamental parameter in the Lagrangian; this scale is identified with the regulator scale that would appear directly in the definition of the nonlocal limiting theory. The derived regulator scale is hierarchically smaller than the lightest Lee-Wick partner, with the suppression in the squared cutoff scale proportional to the number of propagator poles. As asymptotic nonlocality has been previously considered in scalar field theories~\cite{Boos:2021chb}, Abelian gauge theories~\cite{Boos:2021jih}, and non-Abelian gauge theories~\cite{Boos:2021lsj}, it is natural to ask how the same construction might be generalized to the gravitational sector. The present work takes the first step in this direction by developing an asymptotically nonlocal version of linearized gravity. As in our earlier work~\cite{Boos:2021chb,Boos:2021jih,Boos:2021lsj}, we present a higher-derivative and an auxiliary-field formulation of the gravitational theory that preserve the diffeomorphism invariance of the linearized Einstein-Hilbert action. We then consider a number of examples that confirm the emergence of the nonlocal scale, namely, in resolving the singularity at the origin of the nonrelativistic gravitational potential and in regulating the divergences of graviton loop diagrams. As with the quantum field theories that we previously studied~\cite{Boos:2021chb,Boos:2021jih,Boos:2021lsj}, we find that the emergent regulator scale is also suppressed relative to the lightest Lee-Wick particle according to the relation $M_\text{nl}^2 \sim {\cal O}\left(m_1^2 / N \right)$.
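Both statements are easy to illustrate numerically from the mass parametrization in Eq.~\eqref{eq:masses}. The sketch below (our own, with the illustrative choices $P=2$ and $\ell=1$) checks the hierarchy $m_1^2\,\ell^2/N \to 1$ and the approach of the form factor $f(-k^2)$ to $e^{-k^2\ell^2}$ at fixed $k$:

```python
import math

def masses_sq(N, P=2, ell=1.0):
    """Mass parametrization m_j^2 = (N/ell^2)/(1 - j/(2 N^P)), j = 1, ..., N-1."""
    return [(N / ell**2) / (1.0 - j / (2.0 * N**P)) for j in range(1, N)]

N, ell = 2000, 1.0
m2 = masses_sq(N, ell=ell)

# hierarchy: the emergent scale M_nl ~ 1/ell satisfies M_nl^2 ~ m_1^2 / N
hierarchy = m2[0] * ell**2 / N          # -> 1 as N grows

# emergent form factor: f(-k^2) -> exp(-k^2 ell^2) at fixed k as N -> infinity
k2 = 1.0
f = 1.0
for mj2 in m2:
    f *= 1.0 - k2 / mj2
gap = abs(f - math.exp(-k2 * ell**2))
```

At $N = 2000$ both quantities agree with their limiting values to better than a part in $10^3$, even though no parameter $\ell$ multiplies a nonlocal form factor anywhere in the finite-$N$ Lagrangian.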
Generalization of asymptotically nonlocal gravity to the fully nonlinear theory can be formulated most easily by working with a higher-derivative formulation, for example, \begin{equation} \mathcal{L} = \sqrt{-g} \left[\frac{1}{2\kappa^2} R + R F_1(\Box) R + R{}_{\mu\nu} F_2(\Box) R{}^{\mu\nu} + R{}_{\mu\nu\rho\sigma} F_3(\Box) R{}^{\mu\nu\rho\sigma} \right] \, , \label{eq:full} \end{equation} \begin{equation} F_k(\Box) = \prod\limits_{j=1}^{N-1} \left(1 - \frac{\ell_{k,j}^2 \Box}{N-1} \right) \, , \quad \Box = g{}^{\mu\nu} \nabla{}_\mu \nabla{}_\nu \, , \end{equation} where the $\nabla{}_\mu$ are covariant derivatives. Here, the $F_k(\Box)$ can be chosen so that Eq.~(\ref{eq:full}) reduces to Eq.~\eqref{eq:lagrangian-2} in the linearized approximation \cite{Modesto:2011kw,Biswas:2011ar}; see also Ref.~\cite{Frolov:2015usa}. Finding a compact expression for Eq.~(\ref{eq:full}) in terms of auxiliary fields is a more difficult task, but it is not strictly necessary for studying the physics of interest in the full theory. The present work concludes a series of papers in which we systematically developed the notion of asymptotic nonlocality. The higher-derivative theories of finite order that we construct allow the consistent perturbative evaluation of scattering amplitudes, while also approaching a nonlocal theory via a limiting procedure. Thus, the sequence of asymptotically nonlocal theories can be thought of as a means of \textit{defining} nonlocal quantum field theories, and may provide a useful framework for further studies of their properties and phenomenology. \begin{acknowledgments} We thank the NSF for support under Grants PHY-1819575 and PHY-2112460. \end{acknowledgments}
\subsubsection*{Acknowledgements} The author would like to thank Darij Grinberg for his time spent understanding interval-posets and the nice and constructive discussions that followed, which had a positive impact on this paper. She also thanks Grégory Châtel and Frédéric Chapoton for the original work on the interval-posets and bijections which later on led to this result. Finally, the computations and tests needed during this research were done using the open-source mathematical software {\tt SageMath}~\cite{SageMath2017} and its combinatorics features developed by the {\tt Sage-Combinat} community~\cite{SAGE_COMBINAT}. The recent development funded by the OpenDreamKit Horizon 2020 European Research Infrastructures project (\#676541) helped provide the live environment~\cite{JNotebook} which complements this paper. \section{Introduction} The Tamari lattice \cite{Tamari1, Tamari2} is a well-known lattice on Catalan objects, most frequently described on binary trees, Dyck paths, and triangulations of a polygon. Among its many interesting combinatorial properties, we find the study of its intervals. Indeed, it was shown by Chapoton \cite{Chap} that the number of intervals of the Tamari lattice on objects of size $n$ is given by \begin{equation} \label{eq:intervals-formula} \frac{2}{n(n +1)} \binom{4 n + 1}{n - 1}. \end{equation} This is a surprising result. Indeed, it is not common to find a closed formula counting intervals in a lattice. For example, there is no such formula to count the intervals of the weak order on permutations. Even more surprising is that this formula also counts the number of simple rooted triangular maps, which led Bernardi and Bonichon to describe a bijection between Tamari intervals and said maps \cite{BijTriangulations}. This is a strong indication that Tamari intervals have deep and interesting combinatorial properties. One generalization of the Tamari lattice is to describe it on $m$-Catalan objects.
This was done by Bergeron and Préville-Ratelle \cite{BergmTamari}. Again, they conjectured that the number of intervals could be counted by a closed formula, which was later proved in \cite{mTamari}: \begin{equation} \label{eq:m-intervals-formula} \frac{m+1}{n(mn +1)} \binom{(m+1)^2 n + m}{n - 1}. \end{equation} In this case, the connection to maps is still an open question. The rich combinatorics of Tamari intervals and their generalizations has led to a surge of effort in their study. This is motivated by their connections with various subjects such as algebra, representation theory, maps, and more. For example, in~\cite{PRE_RognerudModern}, the author motivates the study of some subfamilies of intervals by connections to operad theory as well as path algebras. Another fundamental example is the work of Bergeron and Préville-Ratelle on diagonal harmonic polynomials~\cite{BergmTamari} which has led to the study of $m$-Tamari lattices and more recently generalized Tamari lattices~\cite{RatelleViennot, FangPrevilleRatelle}. The relation to maps, and more specifically Schnyder woods~\cite{BijTriangulations}, is a motivation for studying the relation between Tamari intervals and certain types of decorated trees (see for example~\cite{WeylChamber} and~\cite{FangBeta10}). A by-product of our paper is to introduce a new family of trees, the \emph{grafting trees}, which are very close to these decorated trees. In fact, they are in bijection with the $(1,1)$ decoration trees of~\cite{CoriSchaefferDescTrees}. The goal of the present paper is to prove a certain equi-distribution of statistics on Tamari intervals related to \emph{contacts} and \emph{rises} of the involved Dyck paths. This was first noticed in \cite{mTamari}. At this stage, the equi-distribution could be seen directly on the generating function of the intervals but there was no combinatorial explanation.
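These closed formulas are easy to experiment with. The short sketch below (ours) checks the first values of Eq.~\eqref{eq:intervals-formula} against the known sequence $1, 3, 13, 68, \dots$ and verifies that Eq.~\eqref{eq:m-intervals-formula} reduces to it for $m=1$, where $(m+1)^2 n + m = 4n+1$:

```python
from math import comb
from fractions import Fraction

def tamari_intervals(n):
    """Chapoton's count: 2/(n(n+1)) * binom(4n+1, n-1)."""
    return Fraction(2, n * (n + 1)) * comb(4 * n + 1, n - 1)

def m_tamari_intervals(m, n):
    """(m+1)/(n(mn+1)) * binom((m+1)^2 n + m, n-1)."""
    return Fraction(m + 1, n * (m * n + 1)) * comb((m + 1) ** 2 * n + m, n - 1)

vals = [tamari_intervals(n) for n in range(1, 5)]       # 1, 3, 13, 68
reduces = all(m_tamari_intervals(1, n) == tamari_intervals(n)
              for n in range(1, 8))
```

Exact rational arithmetic also makes it apparent that the formulas always produce integers for these small cases, as they must.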
In his thesis \cite{PRThesis}, Préville-Ratelle developed the subject and left some open problems and conjectures. The one related to the contacts and rises of Tamari intervals is Conjecture~17, which we propose to prove in this paper. It describes an equi-distribution not only between two statistics (as in \cite{mTamari}) but between two sets of statistics. Basically, in \cite{mTamari}, only the initial rise of a Dyck path was considered, whereas in Conjecture~17, Préville-Ratelle considers all positive rises of the Dyck path. Besides, a third statistic is described, the \emph{distance}, which also appears in many other open conjectures and problems of Préville-Ratelle's thesis: it is related to trivariate diagonal harmonics, which is the original motivation of the $m$-Tamari lattice. According to Préville-Ratelle, Conjecture~17 can be proved both combinatorially\footnote{Gilles Schaeffer says that this derives from a natural involution on maps.} and through the generating function when~$m=1$. But until now, there was no proof of this result when~$m > 1$. To prove this conjecture, we use some combinatorial objects that we introduced in a previous paper on Tamari intervals \cite{IntervalPosetsInitial}: the interval-posets. They are posets on integers, satisfying some simple local rules, and are in bijection with the Tamari intervals. Besides, their structure includes two planar forests (from the two bounds of the Tamari interval), which are very similar to the Schnyder woods of the triangular planar maps. Another quality of interval-posets is that $m$-Tamari intervals are also in bijection with a sub-family of interval-posets, which was the key to proving the result when~$m > 1$. Section~\ref{sec:interval-posets} of this paper gives a proper definition of Tamari interval-posets and re-explores the link with the Tamari lattice in the context of our problem.
In Section~\ref{sec:statistics}, we describe the \emph{rise}, \emph{contact}, and \emph{distance} statistics and their relations to interval-poset statistics. This allows us to state Theorem~\ref{thm:main-result-classical}, which expresses our version of Conjecture~17 in the case $m=1$. Section~\ref{sec:involutions} is dedicated to the proof of Theorem~\ref{thm:main-result-classical} through an involution on interval-posets described in Theorem~\ref{thm:rise-contact-statistics}. However, the main results of our paper lie in our last section, Section~\ref{sec:mtam}, where we are able to generalize the involution to the $m > 1$ case. Theorem~\ref{thm:main-result-general} is a direct reformulation of Conjecture~17 from \cite{PRThesis}. It is a consequence of Theorem~\ref{thm:m-rise-contact-involution}, which describes an involution on intervals of the $m$-Tamari lattice. \begin{Remark} A previous version of this involution was described in an extended abstract~\cite{ME_FPSAC2014}. This was only for the $m=1$ case and did not include the whole set of statistics. Also, in that original description, the map could be proved to be an involution, but this was far from apparent. We leave it to the curious reader to check that the bijection described in~\cite{ME_FPSAC2014} is indeed the same as the one we present in detail now. \end{Remark} \begin{Remark} This paper comes with a complementary {\tt SageMath-Jupyter} notebook~\cite{JNotebook} available on {\tt github} and {\tt binder}. This notebook contains {\tt SageMath} code for all computations and algorithms described in the paper. The {\tt binder} system allows the reader to run and edit the notebook online. \end{Remark} \section{Tamari Interval-posets} \label{sec:interval-posets} \subsection{Definition} Let us first introduce some notations that we will need further on.
In the following, if $P$ is a poset, then we denote by $\trprec_{P}$, $\trpreceq_{P}$, $\trsucc_{P}$ and $\trsucceq_{P}$ the smaller, smaller-or-equal, greater, and greater-or-equal relations of the poset $P$, respectively. When the poset $P$ can be uniquely inferred from the context, we will sometimes leave out the subscript ``$P$''. We write \begin{equation} \rel(P) = \lbrace (x,y) \in P, x \trprec y \rbrace \end{equation} for the set of relations of $P$. A relation $(x,y)$ is said to be a \emph{cover relation} if there is no $z$ in $P$ such that $x \trprec z \trprec y$. The Hasse diagram of a poset $P$ is the directed graph formed by the cover relations of the poset. A poset is traditionally represented by its Hasse diagram. We say that we \emph{add} a relation $(i,j)$ to a poset $P$ when we add $(i,j)$ to $\rel(P)$ along with all relations obtained by transitivity (this requires that neither $i \trprec_P j$ nor $j \trprec_P i$ before the addition). Basically, this means we add an edge to the Hasse diagram. The new poset is then an \emph{extension} of the original one. We now give a first possible definition of interval-posets. \begin{Definition} \label{def:interval-poset} A \emph{Tamari interval-poset} (simply referred to as \emph{interval-poset} in this paper) is a poset $P$ on $\left\{ 1,2,\dots,n\right\} $ for some $n\in\mathbb{N}$, such that all triplets $a < b < c$ in $P$ satisfy the following property, which we call the \emph{Tamari axiom}: \begin{itemize} \item $a \trprec c$ implies $b \trprec c$; \item $c \trprec a$ implies $b \trprec a$. \end{itemize} \end{Definition} Figure \ref{fig:interval-poset-example} shows an example and a counter-example of interval-posets. The first poset is indeed an interval-poset. The Tamari axiom has to be checked on every $a < b < c$ such that there is a relation between $a$ and $c$: we check the axiom on $1 < 2 < 3$ and $3 < 4 < 5$, and it is satisfied.
The second poset of Figure \ref{fig:interval-poset-example} is not an interval-poset: it contains $1 \trprec 3$ but not $2 \trprec 3$, so the Tamari axiom is not satisfied for $1 < 2 < 3$. \begin{figure}[ht] \input{figures/interval-poset-example} \caption{Example and counter-example of interval-poset} \label{fig:interval-poset-example} \end{figure} \begin{Definition} Let $P$ be an interval-poset and $a,b \in P$ such that $a < b$. Then \begin{itemize} \item if $a \trprec b$, then $(a,b)$ is said to be an \emph{increasing relation} of $P$; \item if $b \trprec a$, then $(b,a)$ is said to be a \emph{decreasing relation} of $P$. \end{itemize} \end{Definition} As an example, the increasing relations of the interval-poset of Figure~\ref{fig:interval-poset-example} are $(1,3)$ and $(2,3)$ and the decreasing relations are $(2,1)$, $(4,3)$, and $(5,3)$. Clearly a relation $x \trprec y$ is always either increasing or decreasing, and so one can split the relations of $P$ into two disjoint sets. \begin{Definition} Let $P$ be an interval-poset. Then, the \emph{final forest} of $P$, denoted by $\dec(P)$, is the poset formed by the decreasing relations of $P$, \emph{i.e.}, $b \trprec_{\dec(P)} a$ if and only if $(b,a)$ is a decreasing relation of $P$. Similarly, the \emph{initial forest} of $P$, denoted by $\inc(P)$, is the poset formed by the increasing relations of $P$. \end{Definition} By Definition \ref{def:interval-poset} it is immediate that the final and initial forests of an interval-poset are also interval-posets. By extension, we say that an interval-poset containing only decreasing (resp. increasing) relations is a final forest (resp. initial forest). The designation \emph{forest} comes from the result proved in \cite{IntervalPosetsInitial} that an interval-poset containing only increasing (resp. decreasing) relations has indeed the structure of a planar forest, \emph{i.e.}, every vertex in the Hasse diagram has at most one outgoing edge.
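The Tamari axiom is straightforward to test mechanically. The following Python sketch (function names and the relation-set representation are ours) checks it on a poset given by its set of relations, and splits the relations into increasing and decreasing ones; the example relations are those of the first poset of Figure~\ref{fig:interval-poset-example}.

```python
def is_interval_poset(rel):
    """Check the Tamari axiom on a poset given as a set of pairs (x, y),
    each meaning x is smaller than y in the poset order."""
    for (x, y) in rel:
        a, c = min(x, y), max(x, y)
        for b in range(a + 1, c):  # every b with a < b < c is constrained
            if (a, c) in rel and (b, c) not in rel:
                return False       # a below c must force b below c
            if (c, a) in rel and (b, a) not in rel:
                return False       # c below a must force b below a
    return True

def split_relations(rel):
    """Split the relations into increasing and decreasing ones."""
    inc = {(x, y) for (x, y) in rel if x < y}
    return inc, rel - inc
```

On the relations $\{(1,3),(2,3),(2,1),(4,3),(5,3)\}$ of the first poset, the check succeeds; removing $(2,3)$ breaks the axiom on $1<2<3$, exactly as for the counter-example.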
The increasing and decreasing relations of an interval-poset play a significant role in the structure and properties of the object. We thus follow the convention described in \cite{IntervalPosetsInitial} to draw interval-posets, which differs from the usual representation of posets through their Hasse diagram. Indeed, each interval-poset is represented as an overlay of the Hasse diagrams of both its initial and final forests. By convention, an increasing relation $b \trprec c$ with $b < c$ is represented in blue with $c$ on the right of $b$. A decreasing relation $b \trprec a$ with $a < b$ is represented in red with $a$ above $b$. In general, a relation (either increasing or decreasing) $x \trprec y$ between two vertices is always represented with $y$ above and to the right of $x$. Thus, the color code, even though practical, is not essential to read the figures. Figure~\ref{fig:interval-poset-forests} shows the final and initial forests of the interval-poset of Figure~\ref{fig:interval-poset-example}. A more comprehensive example is shown in Figure \ref{fig:interval-poset-example2}. Following our conventions, you can read off, for example, that $3 \trprec 4 \trprec 5$ and that $9 \trprec 8 \trprec 5$. \begin{figure}[ht] \begin{center} \input{figures/interval-poset-forests} \end{center} \caption{Final and initial forests of an interval-poset} \label{fig:interval-poset-forests} \end{figure} \begin{figure}[ht] \begin{center} \scalebox{0.8}{ \input{figures/interval-poset-example2} } \end{center} \caption{An example of an interval-poset} \label{fig:interval-poset-example2} \end{figure} We also define some vocabulary on the vertices of the interval-posets related to the initial and final forests. \begin{Definition} Let $P$ be an interval-poset.
Then \begin{itemize} \item $b$ is said to be \emph{a decreasing root} of $P$ if there is no $a < b$ with the decreasing relation $b \trprec a$; \item $b$ is said to be \emph{an increasing root} of $P$ if there is no $c > b$ with the increasing relation $b \trprec c$; \item an increasing-cover (resp. decreasing-cover) relation is a cover relation of the initial (resp. final) forest of $P$; \item the \emph{decreasing children} of $b$ are all elements $c > b$ such that $c \trprec b$ is a decreasing-cover relation; \item the \emph{increasing children} of $b$ are all elements $a < b$ such that $a \trprec b$ is an increasing-cover relation. \end{itemize} \end{Definition} As an example, in Figure~\ref{fig:interval-poset-example2}: the decreasing roots are $1,2,5$, the increasing roots are $1,5,7,10$, there are 7 decreasing-cover relations (red edges) and 6 increasing-cover relations (blue edges), the decreasing children of 5 are $6, 7, 8, 10$ and its increasing children are 2 and 4. We also need to refine the notion of extension related to increasing and decreasing relations. \begin{Definition} \label{def:interval-poset-extensions} Let $I$ and $J$ be two interval-posets. We say that \begin{itemize} \item $J$ is an \emph{extension} of $I$ if for all $i,j$ in $I$, $i \trprec_I j$ implies $i \trprec_{J} j$; \item $J$ is a \emph{decreasing-extension} of $I$ if $J$ is an extension of $I$ and $i > j$ for all $i,j$ such that $i \trprec_J j$ and $i \ntriangleleft_I j$; \item $J$ is an \emph{increasing-extension} of $I$ if $J$ is an extension of $I$ and $i < j$ for all $i,j$ such that $i \trprec_J j$ and $i \ntriangleleft_I j$. \end{itemize} \end{Definition} In other words, $J$ is an extension of $I$ if it is obtained by adding relations to $I$; it is a decreasing-extension if it is obtained by adding only decreasing relations, and an increasing-extension if it is obtained by adding only increasing relations.
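Definition~\ref{def:interval-poset-extensions} can be phrased directly on sets of relations; a small Python sketch (the function name and representation are ours):

```python
def extension_kinds(rel_I, rel_J):
    """Which extension notions hold for a poset J over a poset I.
    Posets are given as sets of relations (x, y), meaning x is smaller
    than y in the poset order; returns a set of labels."""
    if not rel_I <= rel_J:
        return set()               # J does not even extend I
    added = rel_J - rel_I
    kinds = {"extension"}
    if all(i > j for (i, j) in added):
        kinds.add("decreasing-extension")  # only decreasing relations added
    if all(i < j for (i, j) in added):
        kinds.add("increasing-extension")  # only increasing relations added
    return kinds
```

Note that an extension with no added relation is vacuously both decreasing and increasing, consistent with the definition.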
\begin{Remark} \label{rem:adding-decreasing} If you add a decreasing relation $(b,a)$ to an interval-poset $I$, all extra relations that are obtained by transitivity are also decreasing. Indeed, suppose that $J$ is obtained from $I$ by adding the relation $b \trprec a$ with $a < b$ (in particular, neither $(a,b)$ nor $(b,a)$ is a relation of $I$). Suppose moreover that the relation $i \trprec_J j$ with $i < j$ is added by transitivity, which means $i \ntriangleleft_I j$, $i \trpreceq_I b$ and $a \trpreceq_I j$. If $i < a$, the Tamari axiom on $(i,a,b)$ implies $a \trprec_I b$, which contradicts our initial statement. So we have $a < i < j$ and $a \trprec_I j$; the Tamari axiom on $(a,i,j)$ implies $i \trprec_I j$ and again contradicts our statement. Note on the other hand that nothing guarantees that the obtained poset is still an interval-poset. Similarly, if you add an increasing relation $(a,b)$ to an interval-poset, you obtain an increasing-extension. \end{Remark} \subsection{The Tamari lattice} \label{sec:tamari} It was shown in \cite{IntervalPosetsInitial} that Tamari interval-posets are in bijection with intervals of the Tamari lattice. The main purpose of this paper is to prove a conjecture of Préville-Ratelle \cite{PRThesis} on Tamari intervals. To do so, we first give a detailed description of the relations between interval-posets and the realizations of the Tamari lattice in terms of trees and Dyck paths. Let us start with a reminder on the Tamari lattice. \begin{Definition} A binary tree is recursively defined by being either \begin{itemize} \item the empty tree, denoted by $\emptyset$, \item a pair of binary trees, respectively called \emph{left} and \emph{right} subtrees, grafted on a node. \end{itemize} If $L$ and $R$ are two binary trees, we denote by $\bullet (L,R)$ the binary tree obtained from $L$ and $R$ grafted on a node.
\end{Definition} What we call a binary tree is often called a \emph{planar binary tree} in the literature (as the order on the subtrees is important). Note that in our representation of binary trees, we never draw the empty subtrees. The \emph{size} of a binary tree is defined recursively: the size of the empty tree is $0$, and the size of a tree $\bullet (L, R)$ is the sum of the sizes of $L$ and $R$ plus 1. It is also the number of nodes. For example, the following tree \scalebox{0.5}{\input{figures/trees/T3-2}} has size 3; it is given by the recursive grafting $\bullet ( \bullet(\emptyset, \bullet(\emptyset, \emptyset) ) , \emptyset)$. It is well known that the unlabeled binary trees of size $n$ are counted by the $n^{th}$ Catalan number \begin{equation} \frac{1}{n+1}\binom{2n}{n}. \end{equation} \begin{Definition}[Standard binary search tree labeling] Let $T$ be a binary tree of size $n$. The \emph{binary search tree labeling} of $T$ is the unique labeling of $T$ with labels $1, \dots, n$ such that for a node labeled $k$, all nodes in the left subtree of $k$ have labels smaller than $k$ and all nodes in the right subtree of $k$ have labels greater than $k$. An example is given in Figure \ref{fig:bst-example}. \end{Definition} \begin{figure}[ht] \input{figures/bst-example} \caption{A binary search tree labeling} \label{fig:bst-example} \end{figure} In other words, the binary search tree labeling of $T$ follows an in-order recursive traversal of $T$: left, root, right. For the rest of the paper, we identify binary trees with their corresponding binary search tree labeling. In particular, we write $v_1, \dots, v_n$ for the nodes of $T$: the index of a node corresponds to its label in the binary search tree labeling. To define the Tamari lattice, we need the following operation on binary trees. \begin{Definition} \label{def:tree-rotation} Let $v_y$ be a node of $T$ with a non-empty left subtree of root $v_x$.
The \emph{right rotation} of $T$ on $v_y$ is a local rewriting which follows Figure~\ref{fig:tree-right-rotation}, that is, replacing $v_y( v_x(A,B), C)$ by $v_x(A,v_y(B,C))$ (note that $A$, $B$, or $C$ might be empty). \begin{figure}[ht] \centering \input{figures/tree-right-rotation} \caption{Right rotation on a binary tree.} \label{fig:tree-right-rotation} \end{figure} \end{Definition} It is easy to check that the right rotation preserves the binary search tree labeling. It is the cover relation of the Tamari lattice \cite{Tamari1,Tamari2}: a binary tree $T$ is said to be greater in the Tamari lattice than a binary tree $T'$ if it can be obtained from $T'$ through a sequence of right rotations. The lattices of sizes 3 and 4 are given in Figure~\ref{fig:tamari-trees}. \begin{figure}[ht] \hspace*{-1.5cm} \begin{tabular}{cc} \scalebox{0.7}{\input{figures/tamari_trees-3}}& \scalebox{0.7}{\input{figures/tamari_trees-4}} \end{tabular} \caption{Tamari lattice of sizes 3 and 4 on binary trees.} \label{fig:tamari-trees} \end{figure} Dyck paths are another common set of objects used to define the Tamari lattice. First, we recall their definition. \begin{Definition} A \emph{Dyck path} of size $n$ is a lattice path from the origin $(0,0)$ to the point $(2n,0)$ made from a sequence of \emph{up-steps} (steps of the form $(x, y) \to (x+1, y+1)$) and \emph{down-steps} (steps of the form $(x, y) \to (x+1, y-1)$) such that the path never goes below the line $y=0$. \end{Definition} A Dyck path can also be considered as a binary word by replacing up-steps by the letter $1$ and down-steps by $0$. We call a Dyck path \emph{primitive} if it only touches the line $y=0$ at its end points. As is widely known, Dyck paths are also counted by the Catalan numbers. There are many ways to define a bijection between Dyck paths and binary trees. The one we use here is the only one which is consistent with the usual definition of the Tamari order on Dyck paths.
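In terms of binary words, both the Dyck condition and primitivity are easy to test; a Python sketch (function names ours):

```python
def is_dyck(word):
    """Test whether a word over {'1', '0'} ('1' = up-step, '0' = down-step)
    codes a Dyck path: it ends at height 0 and never goes below y = 0."""
    height = 0
    for step in word:
        height += 1 if step == '1' else -1
        if height < 0:
            return False
    return height == 0

def is_primitive(word):
    """A primitive Dyck path touches y = 0 only at its end points,
    i.e. its word is '1' + D + '0' for some Dyck word D."""
    return bool(word) and is_dyck(word) and is_dyck(word[1:-1])
```

Counting confirms the Catalan numbers: among the $2^6$ binary words of length 6, exactly $5$ are Dyck words.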
\begin{Definition} \label{def:dyck-tree} We define the $\tree$ map from the set of all Dyck paths to the set of binary trees recursively. Let $D$ be a Dyck path. \begin{itemize} \item If $D$ is empty, then $\tree(D)$ is the empty binary tree. \item If $D$ is of size $n > 0$, then the binary word of $D$ can be written uniquely as $D_1 1 D_2 0$ where $D_1$ and $D_2$ are Dyck paths of size smaller than $n$ (in particular, they can be empty paths). Then $\tree(D)$ is the tree $\bullet (\tree(D_1), \tree(D_2))$. \end{itemize} \end{Definition} Note that the path defined by $1D_2 0$ is primitive; it is the only non-empty right factor of the binary word of $D$ which is a primitive Dyck path. Similarly, the subpath $D_1$ corresponds to the left factor of $D$ up to the last touching point before the end. Consequently, if $D$ is primitive, then $D = 1D_2 0$ with $D_1$ empty, and thus $\tree(D)$ is a binary tree whose left subtree is empty. If both $D_1$ and $D_2$ are empty, then $D =10$, the only Dyck path of size $1$, and $\tree(D)$ is the binary tree formed by a single node. The $\tree$ map is a bijection and preserves the size, as illustrated in Figure~\ref{fig:dyck-tree}. \begin{figure}[ht] \centering \scalebox{0.8}{ \input{figures/dyck-tree} } \caption{Bijection between Dyck paths and binary trees.} \label{fig:dyck-tree} \end{figure} Following this bijection, one can check that the right rotation on binary trees corresponds to the following operation on Dyck paths. \begin{Definition} A \emph{right rotation} of a Dyck path $D$ consists of switching a down-step $d$ followed by an up-step with the primitive Dyck path starting right after $d$. (See Figure \ref{fig:rot-dyck}.)
\begin{figure}[ht] \scalebox{0.8}{ \input{figures/rotation_dyck} } \caption{Rotation on Dyck paths.} \label{fig:rot-dyck} \end{figure} \end{Definition} By extension, we then say that a Dyck path $D$ is greater than a Dyck path $D'$ in the Tamari lattice if it can be obtained from $D'$ through a series of right rotations. The Tamari lattices of sizes 3 and 4 in terms of Dyck paths are given in Figure~\ref{fig:tamari-dyck}. \begin{figure}[ht] \hspace*{-1.5cm} \begin{tabular}{cc} \scalebox{0.7}{\input{figures/tamari_dyck-3}}& \scalebox{0.7}{\input{figures/tamari_dyck-4}} \end{tabular} \caption{Tamari lattices of sizes 3 and 4 on Dyck paths.} \label{fig:tamari-dyck} \end{figure} \subsection{Planar forests} The bijection between interval-posets and intervals of the Tamari lattice uses a classical bijection between binary trees and planar forests. \begin{Definition} Let $T$ be a binary tree of size $n$ and $v_1, \dots, v_n$ its nodes taken in-order so as to follow the binary search tree labeling of~$T$. The final forest of $T$, $\dec(T)$, is the poset on $\left\{ 1, \dots, n \right\}$ whose relations are defined as follows: $b \trprec a$ if and only if $v_b$ is in the right subtree of~$v_a$. (Thus, $b \trprec a$ implies $b > a$.) Similarly, the initial forest of $T$, $\inc(T)$, is the poset on $\left\{ 1, \dots, n \right\}$ whose relations are defined as follows: $a \trprec b$ if and only if $v_a$ is in the left subtree of $v_b$. (Thus, $a \trprec b$ implies $b > a$.) \end{Definition} \begin{figure}[ht] \input{figures/tamari_forests} \caption{A binary tree with its corresponding final and initial forests.} \label{fig:forests} \end{figure} An example of the construction is given in Figure~\ref{fig:forests}.
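The recursive $\tree$ map of Definition~\ref{def:dyck-tree} translates almost literally into code. A Python sketch (ours), where a binary tree is a pair \texttt{(left, right)} and the empty tree is \texttt{None}:

```python
def tree(D):
    """The tree map: Dyck word (string over '1'/'0') -> binary tree.
    Splits D as D1 + '1' + D2 + '0', where '1' + D2 + '0' is the unique
    primitive right factor, i.e. the split point is the last return of
    the path to height 0 strictly before its end."""
    if not D:
        return None
    height, split = 0, 0
    for i, step in enumerate(D[:-1]):
        height += 1 if step == '1' else -1
        if height == 0:
            split = i + 1
    return (tree(D[:split]), tree(D[split + 1:-1]))
```

For instance, $\tree(10)$ is a single node, and the five Dyck words of size 3 are sent to the five binary trees of size 3, as expected from the bijection.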
As explained in~\cite{IntervalPosetsInitial}, both the initial and the final forest constructions give bijections between binary trees and planar forests, \emph{i.e.}, forests of trees where the order on the trees is fixed as well as the orders of the subtrees of each node. Indeed, we first notice that the labeling on both images $\dec(T)$ and $\inc(T)$ is entirely canonical (like the labeling of the binary tree) and can be retrieved by only fixing the order in which to read the trees and subtrees. Then these are actually well-known bijections. The one giving the final forest is often referred to as ``left child = left brother'' because it can be achieved directly on the unlabeled binary tree by transforming every left child node into a left brother and by leaving the right child nodes as children. Thus in Figure \ref{fig:forests}, 2 is the left child of 3 in $T$ and it becomes the left brother of 3 in $\dec(T)$; 9 is a right child of 7 in $T$ and it stays the right-most child of 7 in $\dec(T)$. The initial forest construction is then the ``right child = right brother'' bijection. Also, the initial and final forests of a binary tree $T$ are indeed initial and final forests in the sense of interval-posets. In particular, they are interval-posets. The fact that they contain only increasing (resp. decreasing) relations is given by construction. It is left to check that they satisfy the Tamari axiom on all their elements: this is due to the binary search tree structure. In particular, if you interpret a binary search tree as a poset by pointing all edges toward the root, then it is an interval-poset. \begin{Theorem}[from \cite{IntervalPosetsInitial} Thm 2.8] Let $T_1$ and $T_2$ be two binary trees and $R = \rel(\dec(T_1)) \cup \rel(\inc(T_2))$. Then, $R$ is the set of relations of a poset $P$ if and only if $T_1 \leq T_2$ in the Tamari lattice. In this case, $P$ is an interval-poset.
This construction defines a bijection between interval-posets and intervals of the Tamari lattice. \end{Theorem} There are two ways in which $R$ could fail to define a poset. First, $R$ could be non-transitive. Because of the structure of initial and final forests, this never happens. Second, $R$ could fail to be antisymmetric by containing both $(a,b)$ and $(b,a)$ for some $a,b \leq n$. This happens if and only if $T_1 \not\leq T_2$. You can read more about this bijection in \cite{IntervalPosetsInitial}. Figure~\ref{fig:interval-poset-construction} gives an example. \begin{figure}[ht] \input{figures/forest-intersection} \caption{Two trees $T_1 \leq T_2$ in the Tamari lattice and their corresponding interval-poset.} \label{fig:interval-poset-construction} \end{figure} To better understand the relations between Tamari intervals and interval-posets, we now recall some results from \cite[Prop. 2.9]{IntervalPosetsInitial}, which are immediate from the construction of interval-posets and the properties of initial and final forests. \begin{Proposition}[From \cite{IntervalPosetsInitial} Prop. 2.9] \label{prop:interval-poset-extension} Let $I$ and $I'$ be two interval-posets such that their respective Tamari intervals are given by $[A,B]$ and $[A',B']$, then \begin{enumerate} \item $I'$ is an extension of $I$ if and only if $A' \geq A$ and $B' \leq B$; \label{prop-en:interval-poset-extension} \item $I'$ is a decreasing-extension of $I$ if and only if $A' \geq A$ and $B' = B$; \label{prop-en:interval-poset-decreasing-extension} \item $I'$ is an increasing-extension of $I$ if and only if $A' = A$ and $B' \leq B$. \label{prop-en:interval-poset-increasing-extension} \end{enumerate} \end{Proposition} As the Tamari lattice is also often defined on Dyck paths, it is legitimate to wonder what the direct bijection is between a Tamari interval $[D_1, D_2]$ of Dyck paths and an interval-poset.
Of course, one can just transform $D_1$ and $D_2$ into binary trees through the bijection of Definition~\ref{def:dyck-tree} and then construct the corresponding final and initial forests. But because many statistics we study in this paper are more naturally defined on Dyck paths than on binary trees, we give the direct construction. Recall that for each up-step $d$ in a Dyck path, there is a corresponding down-step $d'$, which is the first step met when drawing a horizontal line starting from $d$. From this, one can define a notion of nesting: an up-step $d_2$ (and its corresponding down-step $d_2'$) is nested in $(d,d')$ if it appears between $d$ and $d'$ in the binary word of the Dyck path. \begin{Proposition} \label{prop:dyck-dec-forest} Let $D$ be a Dyck path on which we apply the following process: \begin{itemize} \item label from 1 to $n$ all pairs of up-steps and their corresponding down-steps by reading the up-steps on the Dyck path from left to right, \item define a poset $P$ by $b \trprec_P a$ if and only if $b$ is nested in $a$ in the previous labeling. \end{itemize} Then $\dec(D) := \dec(\tree(D)) = P$. \end{Proposition} This bijection is actually a very classical one. It consists of shrinking the Dyck path into a tree skeleton. In Figure~\ref{fig:dyck-dec-forest}, we show in parallel the process of Proposition~\ref{prop:dyck-dec-forest} on the Dyck path and the corresponding binary tree. \begin{figure}[ht] \input{figures/dyck-dec-forest} \caption{Bijection between a Dyck path and its final forest.} \label{fig:dyck-dec-forest} \end{figure} \begin{proof} We use the recursive definition of the $\tree$ map. Let $D$ be a Dyck path. If $D$ is empty, then $\tree(D)$ is the empty binary tree and $\dec(D) = \dec(\tree(D))$ is the empty poset of size 0. If $D$ is a non-empty Dyck path, let $T = \tree(D)$. We want to check that $P$ is equal to $F := \dec(T)$.
The path $D$ decomposes into $D = D_1 1 D_2 0$ with $\tree(D_1) = T_1$, the left subtree of $T$, and $\tree(D_2) = T_2$, the right subtree of $T$. We assume by induction that the proposition is true on $\dec(D_1)$ and $\dec(D_2)$. Let $1 \leq k \leq n$ be such that $\size(D_1) = k-1$ (in Figure~\ref{fig:dyck-dec-forest}, $k=5$): then $k$ is the label of the pair $(1,0)$ which appears in the decomposition of $D$. We also have that $v_k$ is the root of $T$. Now, let us choose $a < b \leq n$. Either \begin{itemize} \item $a < b < k$: the pairs of steps labeled by $a$ and $b$ both belong to $D_1$, and we have $b \trprec_P a$ if and only if $b \trprec_F a$ by induction. \item $b = k$: the pair labeled by $a$ belongs to $D_1$. It does not nest $k$, so $b \ntriangleleft_P a$. In $T$, $v_a$ is in $T_1$, the left subtree of $T$, and so we also have $b \ntriangleleft_F a$. \item $ a < k < b$: then $a$ belongs to $D_1$ and $b$ belongs to $D_2$. In particular, $b$ is not nested in $a$ and so $b \ntriangleleft_P a$. In $T$, $v_a$ is in $T_1$ and $v_b$ is in $T_2$. In particular, $v_b$ is not in the right subtree of $v_a$ and so $b \ntriangleleft_F a$. \item $a = k$: the pair labeled by $b$ belongs to $D_2$. It is nested in $k$, so $b \trprec_P a$. In $T$, $v_b$ belongs to $T_2$, the right subtree of $T$, so we have $b \trprec_F a$. \item $ k < a < b$: the pairs of steps labeled by $a$ and $b$ both belong to $D_2$, and we have $b \trprec_P a$ if and only if $b \trprec_F a$ by induction. \end{itemize} \end{proof} On binary trees, the constructions of the final and initial forests are completely symmetrical: the difference between the two only consists of a choice between left subtrees and right subtrees. Because the left-right symmetry of binary trees is not obvious when working on Dyck paths, the construction of the initial forest from a Dyck path gives a different algorithm than the final forest one.
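The process of Proposition~\ref{prop:dyck-dec-forest} amounts to a single left-to-right scan of the word, keeping a stack of the currently open up-steps; a Python sketch (the function name and representation are ours):

```python
def dec_relations(D):
    """Relations of the final forest of a Dyck word: pair up-steps with
    their matching down-steps, label the pairs 1..n from left to right,
    and record (b, a) whenever the pair of b is nested in the pair of a."""
    open_pairs, relations = [], set()
    label = 0
    for step in D:
        if step == '1':
            label += 1
            # every pair still open on the stack nests the new one
            relations.update((label, a) for a in open_pairs)
            open_pairs.append(label)
        else:
            open_pairs.pop()
    return relations
```

For instance, on $D = 110100$ this yields the relations $2 \trprec 1$ and $3 \trprec 1$: in $\tree(D)$, the node $v_1$ is the root and $v_2$, $v_3$ lie in its right subtree.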
\begin{Proposition} \label{prop:dyck-inc-forest} Let $D$ be a Dyck path of size $n$; we construct a directed graph following this process: \begin{itemize} \item label all up-steps of $D$ from 1 to $n$ from left to right, \item for each up-step $a$, find, if any, the first up-step $b$ following the corresponding down-step of $a$ and add the edge $a \longrightarrow b$. \end{itemize} Then the resulting directed graph is the Hasse diagram of the initial forest of $D$. \end{Proposition} The construction is illustrated in Figure~\ref{fig:dyck-inc-forest}. \begin{figure}[ht] \input{figures/dyck-inc-forest} \caption{Bijection between a Dyck path and its initial forest.} \label{fig:dyck-inc-forest} \end{figure} \begin{proof} We use the same induction technique as for the previous proof. The base case is trivial. As before, when $D$ is non-empty, we have $D = D_1 1 D_2 0$ along with the corresponding trees $T$, $T_1$, and $T_2$ and $\size(D_1) = \size(T_1) = k-1$. We set $F := \inc(T)$ and we call $P$ the poset obtained by the algorithm. First, let us prove that for all $a < k$, we have $a \trprec_P k$. Indeed, suppose there exists $a < k$ with $a \ntriangleleft_P k$, and take $a$ maximal among those satisfying this condition. We have $a \in D_1$, so its corresponding down-step appears before $k$; let $a' \leq k$ be the first up-step following the down-step of $a$. If $a' = k$, then $(a,k)$ is in the Hasse diagram of~$P$ and so $a \trprec_P k$. If $a' < k$, we have $a \trprec_P a'$ by definition and the maximality of $a$ gives $a' \trprec_P k$, which implies $a \trprec_P k$ by transitivity. Now let us choose $a < b \leq n$. Either \begin{itemize} \item $a < b < k$: the up-steps labeled by $a$ and $b$ both belong to $D_1$, and we have $a \trprec_P b$ if and only if $a \trprec_F b$ by induction. \item $b = k$: in $T$, $b$ is the root and $a$ is in its left subtree: we have $a \trprec_F b$. In $P$, we have also proved $a \trprec_P b$.
\item $a < k < b$: then $a$ belongs to $D_1$ and $b$ belongs to $D_2$. In particular, $b$ is above $k$ in the path and there cannot be any link with $a$, even by transitivity, which means $a \ntriangleleft_P b$. In $T$, $v_a$ is in $T_1$ and $v_b$ is in $T_2$. In particular, $v_a$ is not in the left subtree of $v_b$ and so $a \ntriangleleft_F b$. \item $a = k$: the corresponding down-step of $a$ is the last step of $D$, which means there is no edge $(a,b)$ in $P$. Similarly, because $a$ is the tree root, there is no edge $(a,b)$ in $F$. \item $ k < a < b$: the up-steps labeled by $a$ and $b$ both belong to $D_2$, and we have $a \trprec_P b$ if and only if $a \trprec_F b$ by induction. \end{itemize} \end{proof} Now that we have described the relation between interval-posets and Tamari intervals both in terms of binary trees and Dyck paths, we will often identify a Tamari interval with its interval-poset. When we refer to Tamari intervals in the future, we consider that they can be given indifferently by an interval-poset or by a pair $[A,B]$ of a lower bound and an upper bound, where $A$ and $B$ can either be binary trees or Dyck paths. \section{Statistics} \label{sec:statistics} \subsection{Statement of the main result} \label{sec:statement} \begin{Definition} \label{def:contact-rise-dw} Let $D$ be a Dyck path. \begin{itemize} \item $\contacts(D)$ is the number of non-final contacts of the path $D$: the number of times the path $D$ touches the line $y=0$ outside of the final point. \item $\rises(D)$ is the initial rise of $D$: the number of initial consecutive up-steps. \item Let $u_i$ be the $i^{th}$ up-step of $D$ and consider the maximal subpath starting right after $u_i$ which is a Dyck path. Then the \emph{contacts of $u_i$}, $\contactsStep{i}(D)$, is defined as the number of non-final contacts of this Dyck path.
\item Let $v_i$ be the $i^{th}$ down-step of $D$; we call the number of consecutive up-steps right after $v_i$ the \emph{rises} of $v_i$ and write it~$\risesStep{i}(D)$. \item $\contactsV(D) := (\contacts(D), \contactsStep{1}(D), \dots, \contactsStep{n-1}(D))$ is the \emph{contact vector} of~$D$. \item $\contactsV^*(D) := (\contactsStep{1}(D), \dots, \contactsStep{n-1}(D))$ is the \emph{truncated contact vector} of~$D$. \item $\risesV(D) := (\rises(D), \risesStep{1}(D), \dots, \risesStep{n-1}(D))$ is the \emph{rise vector} of~$D$. \item $\risesV^*(D) := (\risesStep{1}(D), \dots, \risesStep{n-1}(D))$ is the \emph{truncated rise vector} of~$D$. \item Let $X = (x_0, x_1, x_2, \dots)$ be a commutative alphabet; we write $\contactsP(D,X)$ for the monomial $x_{\contacts(D)} x_{\contactsStep{1}(D)} \cdots x_{\contactsStep{n-1}(D)}$ and we call it the \emph{contact monomial} of $D$. \item Let $Y = (y_0, y_1, y_2, \dots)$ be a commutative alphabet; we write $\risesP(D,Y)$ for the monomial $y_{\rises(D)} y_{\risesStep{1}(D)} \cdots y_{\risesStep{n-1}(D)}$ and we call it the \emph{rise monomial} of $D$. \end{itemize} \end{Definition} \begin{figure}[ht] \input{figures/contact_example} \caption{Contacts and rises of a Dyck path} \label{fig:contact-rise-example} \end{figure} Figure~\ref{fig:contact-rise-example} gives an example of the different contact and rise values computed on a given Dyck path. The Dyck path can be easily reconstructed from $\risesV(D)$. This is also true of $\contactsV(D)$, even though it is less obvious. It will become clear once we express the statistics in terms of planar forests. First, let us use the definitions on Dyck paths to express our main result on Tamari intervals. \begin{Definition} \label{def:contact-rise-intervals} Consider an interval $I$ of the Tamari lattice described by two Dyck paths $D_1$ and $D_2$ with $D_1 \leq D_2$.
Then \begin{enumerate} \item $\contactsStep{i}(I):= \contactsStep{i}(D_1)$ for $0 \leq i \leq n$, $\contactsV(I):=\contactsV(D_1)$, $\contactsV^*(I):=\contactsV^*(D_1)$, and $\contactsP(I,X):=\contactsP(D_1,X)$; \item $\risesStep{i}(I):=\risesStep{i}(D_2)$ for $0 \leq i \leq n$, $\risesV(I):=\risesV(D_2)$, $\risesV^*(I):=\risesV^*(D_2)$ and $\risesP(I,Y) := \risesP(D_2,Y)$. \end{enumerate} To summarize, all the statistics we defined on Dyck paths are extended to Tamari intervals by looking at the \emph{lower bound} Dyck path $D_1$ when considering contacts and at the \emph{upper bound} Dyck path $D_2$ when considering rises. \end{Definition} Most of these statistics have been considered before on both Dyck paths and Tamari intervals. In \cite{mTamari}, one can find the same definitions for the initial rise $\rises(I)$ and the number of non-final contacts $\contacts(I)$. Taking $x_0 = y_0 = 1$ in $\contactsP(I,X)$ and $\risesP(I,Y)$ corresponds to ignoring $0$ values in $\contactsV(I)$ and $\risesV(I)$: we find those monomials in Préville-Ratelle's thesis \cite{PRThesis}. Our definition of $\contactsP(I,X)$ is slightly different from that of Préville-Ratelle: we will explain the correspondence in the more general case of $m$-Tamari intervals in Section~\ref{sec:mtam}. We now describe another statistic from \cite{PRThesis} which is specific to Tamari intervals: it cannot be defined through a Dyck path statistic on the interval endpoints. \begin{Definition} \label{def:distance} Let $I = [D_1, D_2]$ be an interval of the Tamari lattice. A \emph{chain} between $D_1$ and $D_2$ is a list of Dyck paths \begin{equation*} D_1 = P_1 < P_2 < \dots < P_k = D_2 \end{equation*} which connects $D_1$ and $D_2$ in the Tamari lattice. If the chain comprises $k$ elements, we say it is of length $k-1$ (the number of cover relations). The \emph{distance} of $I$, written $\distance(I)$, is the maximal length over all chains between $D_1$ and $D_2$.
\end{Definition} For example, if $I = [D,D]$ is reduced to a single element, then $\distance(I) = 0$. If $I = [D_1, D_2]$ and $D_1 \leq D_2$ is a cover relation of the Tamari lattice, then $\distance(I) = 1$. This statistic was first described in \cite{BergmTamari}. It generalizes the notion of \emph{area} of a Dyck path to an interval. Finally, we need the notation $\size(I)$, which is defined to be the size of the elements of $I$: if $I$ is an interval of Dyck paths of size $n$, then $\size(I) = n$. Note that it is also the number of vertices of the interval-poset representing $I$. We can now state the first version of the main result of this paper. \begin{Theorem}[classical case] \label{thm:main-result-classical} Let $x,y,t,q$ be variables and $X = (x_0, x_1, x_2, \dots)$ and $Y = (y_0, y_1, y_2, \dots)$ be commutative alphabets. Consider the generating function \begin{equation} \Phi(t; x, y, X, Y, q) = \sum_{I} t^{\size(I)} x^{\contacts(I)} y^{\rises(I)} \contactsP(I,X) \risesP(I,Y) q^{\distance(I)} \end{equation} summed over all intervals of the Tamari lattice. Then we have \begin{equation} \Phi(t; x, y, X, Y, q) = \Phi(t; y, x, Y, X, q). \end{equation} \end{Theorem} For $x_0 = y_0 = 1$, this corresponds to a special case of \cite[Conjecture 17]{PRThesis} where $m=1$; the general case will be dealt with in Section~\ref{sec:mtam}. The case where $X,Y,$ and $q$ are set to 1 is proved algebraically in \cite{mTamari}. In this paper, we give a combinatorial proof by describing an involution on Tamari intervals that switches $\contacts$ and $\rises$ as well as $\contactsP$ and $\risesP$. The involution is described in Section~\ref{sec:involutions}.
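The statistics of Definition~\ref{def:contact-rise-dw} are straightforward to compute on a Dyck path encoded as a word of up-steps and down-steps. The following sketch (in Python; the encoding and all function names are ours, chosen for illustration only) follows the definitions literally:

```python
def contacts(word):
    """Number of non-final contacts: lattice points at height 0,
    the final point excluded (the starting point counts)."""
    h, c = 0, 0
    for step in word:          # look at the point *before* each step
        if h == 0:
            c += 1
        h += 1 if step == 'u' else -1
    return c

def max_dyck_prefix(word):
    """Longest prefix of `word` that is itself a Dyck path."""
    h, best = 0, 0
    for i, step in enumerate(word):
        h += 1 if step == 'u' else -1
        if h < 0:
            break
        if h == 0:
            best = i + 1
    return word[:best]

def leading_ups(word):
    """Number of initial consecutive up-steps."""
    n = 0
    for step in word:
        if step != 'u':
            break
        n += 1
    return n

def contact_vector(word):
    """(contacts(D), contacts(u_1), ..., contacts(u_{n-1}))."""
    ups = [i for i, s in enumerate(word) if s == 'u']
    return [contacts(word)] + [contacts(max_dyck_prefix(word[p + 1:]))
                               for p in ups[:-1]]

def rise_vector(word):
    """(rises(D), rises(v_1), ..., rises(v_{n-1}))."""
    downs = [i for i, s in enumerate(word) if s == 'd']
    return [leading_ups(word)] + [leading_ups(word[p + 1:])
                                  for p in downs[:-1]]
```

For instance, the path $uuddud$ has contact vector $(2, 1, 0)$ and rise vector $(2, 0, 1)$.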
One corollary of Theorem~\ref{thm:main-result-classical} is that the symmetry also exists when we restrict the sum to Dyck paths, \begin{equation} \sum_{D} P_D(t,x,y,X,Y) = \sum_{D} P_D(t,y,x,Y,X), \end{equation} where $P_D(t,x,y,X,Y) = t^{\size(D)} x^{\contacts(D)} y^{\rises(D)} \contactsP(D,X) \risesP(D,Y)$ and the sums run over all Dyck paths. Indeed, an interval with distance 0 is reduced to a single element and, in this case, the statistics of the interval correspond to the classical statistics on the Dyck path. This particular case can be proved directly by composing two very classical involutions on Dyck paths: the reversal of the Dyck path and the Tamari symmetry. We illustrate this in Figure~\ref{fig:inv_dyckpaths}. What we call the ``Tamari symmetry'' is the natural involution given by the top-down symmetry of the Tamari lattice itself. It is described more directly on binary trees, where it corresponds to recursively switching left and right subtrees. The Tamari symmetry is by nature compatible with the Tamari order and can be directly generalized to intervals. This is not the case for the reversal of Dyck paths. In other words, if two Dyck paths are such that $D_1 \leq D_2$ in the Tamari lattice, then in general $D_1'$ is not comparable to $D_2'$, where $D_1'$ and $D_2'$ are the reversed Dyck paths of $D_1$ and $D_2$ respectively. This is exactly where the difficulty lies in finding the rise-contact involution on Tamari intervals: the transformations of $D_1$ and $D_2$ are interdependent. In essence, we have found a way to reverse $D_2$ while keeping track of $D_1$. First, let us interpret the statistics directly on interval-posets. \begin{figure}[ht] \input{figures/involution_dyckpaths} \caption{The rise-contact involution on Dyck paths} \label{fig:inv_dyckpaths} \end{figure} \begin{Definition} Let $I$ be an interval-poset of size $n$. We define \begin{itemize} \item $\dcstep{0}(I)$ (resp. $\icinf(I)$) is the number of decreasing (resp. increasing) roots of $I$.
\item $\dcstep{i}(I)$ (resp. $\icstep{i}(I)$) for $1 \leq i \leq n$ is the number of decreasing (resp. increasing) children of the vertex $i$. \item $\DC(I) := (\dcstep{0}(I), \dcstep{1}(I), \dots, \dcstep{n-1}(I))$ is called the \emph{final forest vector} of $I$ and $\DC^*(I) := (\dcstep{1}(I), \dots, \dcstep{n-1}(I))$ is the \emph{truncated final forest vector}. \item $\IC(I) := (\icinf(I), \icstep{n}(I), \dots, \icstep{2}(I))$ is called the \emph{initial forest vector} of $I$ and $\IC^*(I) := (\icstep{n}(I), \dots, \icstep{2}(I))$ is the \emph{truncated initial forest vector}. \end{itemize} \end{Definition} Note that we include neither $\dcstep{n}$ nor $\icstep{1}$ in the corresponding vectors as they are always 0. The vertices of $I$ are read in their natural order in $\DC$ and in reverse order in $\IC$: this follows a natural traversal of the final (resp. initial) forests from roots to leaves. As an example, in Figure~\ref{fig:interval-poset-example2}, we have $\DC(I) = \left(3, 0,2,0,0,4, 0,0,1, 0 \right)$ and $\IC(I) = \left( 4, 2,0,0,1,0,2,1,0,0 \right)$. \begin{Proposition} \label{prop:contact-dc} Let $I$ be an interval-poset, then $\DC(I) = \contactsV(I)$. \end{Proposition} \begin{proof} This is clear from the construction of the final forest from the Dyck path given in Proposition~\ref{prop:dyck-dec-forest}. Indeed, each non-final contact of the Dyck path corresponds to exactly one decreasing root of the interval-poset. Then the decreasing children of a vertex are the contacts of the Dyck path nested in the corresponding (up-step, down-step) pair. \end{proof} \begin{Remark} The vector $\IC(I)$ is not equal to $\risesV(I)$ in general. In fact, the interpretation of rises directly on the interval-poset is not straightforward. What we will prove instead is that the two vectors can be exchanged through an involution on $I$. This involution is described in Section~\ref{sec:involutions} and is a crucial step in proving Theorem~\ref{thm:main-result-classical}.
\end{Remark} \subsection{Distance and Tamari inversions} \label{sec:tamari-inversions} Before describing the involutions used to prove Theorem~\ref{thm:main-result-classical}, we discuss the \emph{distance} statistic on Tamari intervals further in order to give a direct interpretation of it on interval-posets. \begin{Definition} \label{def:tamari-inversions} Let $I$ be an interval-poset of size $n$. A pair $(a,b)$ with $ 1 \leq a < b \leq n$ is said to be a \emph{Tamari inversion} of $I$ when \begin{itemize} \item there is no $a \leq k < b$ with $b \trprec k$; \item there is no $a < k \leq b$ with $a \trprec k$. \end{itemize} We write $\TInv(I)$ for the set of Tamari inversions of the interval-poset $I$. \end{Definition} As an example, the Tamari inversions of the interval-poset of Figure~\ref{fig:interval-poset-example2} are exactly $(1,2), (1,5), (7,8), (7,10)$. As counter-examples, you can see that $(1,6)$ is not a Tamari inversion because we have $1 < 5 < 6$ and $6 \trprec 5$. Similarly, $(6,8)$ is not a Tamari inversion because there is $6 < 7 < 8$ and $6 \trprec 7$. Note also that if $(a,b)$ is a Tamari inversion of~$I$, then $a \ntriangleleft b$ and $b \ntriangleleft a$. Our goal is to prove the following statement. \begin{Proposition} \label{prop:tamari-inversions} Let $I$ be an interval-poset, then $\distance(I)$ is equal to the number of Tamari inversions of $I$. \end{Proposition} The proof of Proposition~\ref{prop:tamari-inversions} requires two intermediate results that we state as lemmas. \begin{Lemma} \label{lem:tamari-inversions-extension} Let $I$ be an interval-poset whose Tamari interval is given by $[T_1, T_2]$ where $T_1$ and $T_2$ are binary trees. Let $I'$ be another interval given by $[T_1',T_2]$ with $T_1' > T_1$ in the Tamari lattice. Then the interval-poset of $I'$ is an extension of $I$ such that whenever $a < b$ and $(b,a)$ is a decreasing-cover relation of $I'$ with $b \ntriangleleft_I a$, the pair $(a,b)$ is a Tamari inversion of $I$.
In other words, $I'$ can be obtained from $I$ by adding only decreasing relations given by some Tamari inversions. \end{Lemma} \begin{proof} By Proposition~\ref{prop:interval-poset-extension}, we know that $I'$ is a decreasing-extension of $I$. This Lemma is then a refinement of Proposition~\ref{prop:interval-poset-extension}: it states in addition that the decreasing relations that have been added come from the Tamari inversions of $I$. Let $(b,a)$ be a decreasing-cover relation of $I'$ such that $b \ntriangleleft_I a$. Because $I'$ is an extension of $I$, we also know that $a \ntriangleleft_I b$. Let $k$ be such that $a < k < b$. Because we have $b \trprec_{I'} a$, the Tamari axiom on $a,k,b$ gives us $k \trprec_{I'} a$. This implies that $b \ntriangleleft_{I'} k$ as $(b,a)$ is a decreasing-cover relation of $I'$ by hypothesis. In particular, we cannot have $b \trprec_I k$ either, as any relation of $I$ is also a relation of $I'$. Similarly, we cannot have $a \trprec_I k$ as this would imply $a \trprec_{I'} k$, contradicting $k \trprec_{I'} a$. \end{proof} \begin{Lemma} \label{lem:tamari-inversions-adding} Let $I$ be an interval-poset such that $\TInv(I) \neq \emptyset$ and let $(a,b)$ be its first Tamari inversion in lexicographic order. Then by adding the relation $(b,a)$ to $I$, we obtain an interval-poset $I'$ whose number of Tamari inversions is the number of Tamari inversions of $I$ minus one. \end{Lemma} \begin{proof} Because $(a,b)$ is a Tamari inversion of $I$, we have $b \ntriangleleft_I a$ and $a \ntriangleleft_I b$, which means the relation $(b,a)$ can be added to~$I$ as a poset. We need to check that the result $I'$ is still an interval-poset. Let us first prove that for all $k$ such that $a < k < b$, we have $k \trprec_I a$. Let us suppose by contradiction that there exists $a < k < b$ with $k \ntriangleleft_I a$ and let us take the minimal such $k$.
Note that $(a,k)$ is smaller than $(a,b)$ in the lexicographic order, which implies that $(a,k)$ is not a Tamari inversion. If there is $k'$ such that $a < k' \leq k$ with $a \trprec_I k'$, then $(a,b)$ is not a Tamari inversion, a contradiction. So there is $k'$ with $a \leq k' < k$ and $k \trprec_I k'$. But because we took $k$ minimal, we get $k' \trpreceq_I a$, which implies $k \trprec_I a$ and contradicts our choice of $k$. Now, we show that the Tamari axiom is satisfied for all $a'$, $k$, and $b'$ such that $a' < k < b'$. By Remark~\ref{rem:adding-decreasing}, we only have to consider decreasing relations. More precisely, the only cases to check are the ones where $b' \ntriangleleft_I a'$ and $b' \trprec_{I'} a'$, which means $a \trpreceq_I a'$ and $b' \trpreceq_I b$ (the relation is either directly added through $(b,a)$ or obtained by transitivity). Let us choose such a pair $(a',b')$ and first prove that $a' \leq a < b \leq b'$. \begin{itemize} \item $b' \neq a$ and $a' \neq b$ because both would imply $a \trprec_I b$, which contradicts the fact that $(a,b)$ is a Tamari inversion. \item If $b' < a$, we have $b' < a < b$ and $b' \trprec_I b$, which implies $a \trprec_I b$ by the Tamari axiom on $(b',a,b)$. This contradicts the fact that $(a,b)$ is a Tamari inversion. \item If $a < b' < b$, we have proved $b' \trprec_I a$, which gives $b' \trprec_I a'$ by transitivity and contradicts our initial hypothesis. \item If $b < a'$, we have $a < b < a'$ with $a \trprec_I a'$, which implies $b \trprec_I a'$ by the Tamari axiom on $(a,b,a')$. This gives $b' \trprec_I a'$ by transitivity and contradicts our initial hypothesis. \item If $a < a' < b$ then we have $a \trprec_I a'$ and $(a,b)$ is not a Tamari inversion, a contradiction. \end{itemize} We now have $a' \leq a < b \leq b'$. Now for $k$ such that $a' < k < b'$, if $k < a$ we get $k \trprec_I a'$ by the Tamari axiom on $(a',k,a)$.
If $a < k < b$, we have proved that $k \trprec_I a$ and so $k \trprec_I a'$ by transitivity. If $b < k < b'$, the Tamari axiom on $(b,k,b')$ gives us $k \trprec_I b$, which gives in $I'$ $k \trprec_{I'} b \trprec_{I'} a \trprec_{I'} a'$ and so $k \trprec_{I'} a'$ by transitivity. In all cases, the Tamari axiom is satisfied in $I'$ for $(a',k,b')$. It remains to prove that the number of Tamari inversions of $I'$ has been reduced by exactly one. More precisely: all Tamari inversions of~$I$ are still Tamari inversions of $I'$ except $(a,b)$. Let $(a',b')$ be another Tamari inversion of $I$. Because $(a,b)$ is minimal in lexicographic order, we have either $a' > a$, or $a' = a$ and $b' > b$. \begin{itemize} \item If $a' > a$, let $k$ be such that $a' \leq k < b'$. We have $b' \ntriangleleft_I k$. Suppose that we have $b' \trprec_{I'} k$, which means that it has been added by transitivity and so we have $b' \trprec_{I} b$ and $a \trprec_I k$. Because $(a,b)$ is a Tamari inversion of $I$, we get that $k > b$. We have $a < b < k$ and $b < k < b'$; the Tamari axioms on $(a,b,k)$ and $(b,k,b')$ lead to a contradiction in $I$. Now, let $k$ be such that $a' < k \leq b'$. We have $a' \ntriangleleft_I k$. No increasing relation has been created in $I'$, and so $a' \ntriangleleft_{I'} k$. \item If $a = a'$ and $b' > b$, first note that $b' \ntriangleleft_I b$. Indeed, we have $a' < b < b'$ and this would contradict the fact that $(a',b')$ is a Tamari inversion. Let $k$ be such that $a \leq k < b'$; then $b' \ntriangleleft_{I} k$. Because $b' \ntriangleleft_{I} b$, the relation $(b',k)$ cannot be obtained by transitivity in $I'$ and so $b' \ntriangleleft_{I'} k$. Now, if $a < k \leq b'$, we have $a \ntriangleleft_{I} k$ and, by the same argument as before (no increasing relation has been created in $I'$), $a \ntriangleleft_{I'} k$.
\end{itemize} \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:tamari-inversions}] Let $I$ be an interval-poset containing $v$ Tamari inversions and whose bounds are given by two binary trees $[T_1, T_2]$. Suppose there is a chain of length $k$ between $T_1$ and $T_2$. In other words, we have $k+1$ binary trees \begin{align*} T_1 = P_1 < P_2 < \dots < P_{k+1} = T_2 \end{align*} which connect $T_1$ and $T_2$ in the Tamari lattice. Let us look at the intervals $[P_i, T_2]$. Lemma~\ref{lem:tamari-inversions-extension} tells us that each of them can be obtained by adding decreasing relations $(b,a)$ to $I$ where $(a,b) \in \TInv(I)$. We now apply Proposition~\ref{prop:interval-poset-extension}. In our situation, it means that, for $1 \leq j \leq k+1$, the interval-poset of $[P_j,T_2]$ is an extension of every interval-poset $[P_i, T_2]$ with $1 \leq i \leq j$: the Tamari inversions that were added as decreasing relations in $[P_i,T_2]$ are kept in $[P_j,T_2]$. In other words, to obtain $P_{i+1}$ from $P_{i}$, one or more Tamari inversions of $I$ are added to $P_i$ as decreasing relations. At least one Tamari inversion is added at each step, which implies that $v \geq k$. This is true for all chains and thus $v \geq \distance(I)$. Now, let us explicitly construct a chain between $T_1$ and $T_2$ of length~$v$. This will give us that $v \leq \distance(I)$ and conclude the proof. We proceed inductively. \begin{itemize} \item If $v = 0$, then $\distance(I) \leq v$ gives $\distance(I) = 0$, which means $T_1 = T_2$: this is a chain of length 0 between $T_1$ and $T_2$. \item We suppose $v > 0$ and we apply Lemma~\ref{lem:tamari-inversions-adding}. We take the first Tamari inversion of $\TInv(I)$ in lexicographic order and add it to~$I$ as a decreasing relation. We obtain an interval-poset $I'$ which is a decreasing-extension of $I$ with $v-1$ Tamari inversions. Then by Proposition~\ref{prop:interval-poset-extension}, the bounds of $I'$ are given by $[T_1',T_2]$ with $T_1' > T_1$.
By induction, we construct a chain of length $v-1$ between $T_1'$ and $T_2$, which gives us a chain of length $v$ between $T_1$ and $T_2$. \end{itemize} \end{proof} The interpretation of the distance of an interval as a direct statistic on interval-posets is very useful for our purpose here, as it gives an explicit way to compute it, and its behavior under our involutions will be easy to state and prove. It is also interesting in itself. Indeed, this statistic appears in other conjectures on Tamari intervals, for example Conjecture~19 of \cite{PRThesis}, which is related to the well-known open $q$-$t$-Catalan problems. \section{Involutions} \label{sec:involutions} \subsection{Grafting of interval-posets} \label{sec:composition} In this section, we revisit some major results of \cite{IntervalPosetsInitial} which will be used to define some new involutions. \begin{Definition} Let $I_1$ and $I_2$ be two interval-posets; we define a \emph{left grafting} operation and a \emph{right grafting} operation depending on a parameter $r$. Let $\alpha$ and $\omega$ be respectively the label of minimal value of $I_2$ (shifted by the size of $I_1$) and the label of maximal value of $I_1$. Let $c = \contacts(I_2)$ and $y_1, \dots, y_c$ be the decreasing roots of $I_2$ (shifted by the size of $I_1$). The left grafting of $I_1$ over $I_2$ with $\size(I_2) > 0$ is written as $I_1 \pleft I_2$. It is defined by the shifted concatenation of $I_1$ and $I_2$ along with the relations $y \trprec \alpha$ for all $y \in I_1$. The right grafting of $I_2$ over $I_1$ with $\size(I_1) > 0$ is written as $I_1 \pright{r} I_2$ with $0 \leq r \leq c$. It is defined by the shifted concatenation of $I_1$ and $I_2$ along with the relations $y_i \trprec \omega$ for $1 \leq i \leq r$. \end{Definition} \begin{figure}[ht] \input{figures/grafting} \caption{Grafting of interval-posets} \label{fig:grafting} \end{figure} Figure~\ref{fig:grafting} gives an example.
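To experiment with these operations, here is a minimal sketch in Python. The encoding is ours, not the paper's: an interval-poset of size $n$ is a pair of $n$ and a transitively closed set of pairs $(x, y)$ meaning $x \trprec y$. We also assume that the decreasing roots $y_1, \dots, y_c$ are listed in increasing label order, which the definition above leaves implicit.

```python
def closure(rels):
    """Transitive closure of a set of pairs (x, y) meaning x ≺ y."""
    rels = set(rels)
    while True:
        new = {(a, c) for (a, b) in rels for (b2, c) in rels if b == b2} - rels
        if not new:
            return rels
        rels |= new

def decreasing_roots(I):
    """Vertices with no decreasing parent, i.e. the roots of the final forest."""
    n, rels = I
    return [b for b in range(1, n + 1)
            if not any(a < b for (b2, a) in rels if b2 == b)]

def shift(rels, k):
    return {(a + k, b + k) for (a, b) in rels}

def left_graft(I1, I2):
    """I1 left-grafted over I2: shifted concatenation plus y ≺ α for every
    vertex y of I1, where α is the smallest (shifted) label of I2."""
    (n1, r1), (n2, r2) = I1, I2
    alpha = n1 + 1
    rels = r1 | shift(r2, n1) | {(y, alpha) for y in range(1, n1 + 1)}
    return (n1 + n2, closure(rels))

def right_graft(I1, I2, r):
    """I2 right-grafted over I1 with parameter r: shifted concatenation plus
    y_i ≺ ω for the first r decreasing roots of (shifted) I2,
    with ω the maximal label of I1."""
    (n1, r1), (n2, r2) = I1, I2
    omega = n1
    roots = [y + n1 for y in decreasing_roots(I2)]
    rels = r1 | shift(r2, n1) | {(y, omega) for y in roots[:r]}
    return (n1 + n2, closure(rels))
```

Since, by Proposition~\ref{prop:contact-dc}, $\contacts(I)$ equals the number of decreasing roots, \texttt{len(decreasing\_roots(I))} computes $\contacts(I)$ and makes it easy to observe how the number of contacts behaves under both graftings.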
Note that the vertices of $I_2$ are always shifted by the size of $I_1$. For simplicity, we do not always recall this shifting: when we mention a vertex $x$ of $I_2$ in a grafting, we mean the shifted version of $x$. These two operations were defined in \cite[Def. 3.5]{IntervalPosetsInitial}. Originally, the right grafting was defined as a single operation $\pright{}$ whose result was a formal sum of interval-posets. In this paper, it is more convenient to cut it into different sub-operations depending on a parameter. We can use these operations to uniquely \emph{decompose} interval-posets: this will be explained in Section~\ref{sec:decomposition}. First, we will study how the different statistics we have defined are affected by the operations. We start with the contact vector $\contactsV$, which is equal to the final forest vector $\DC$. \begin{Proposition} \label{prop:grafting-contact} Let $I_1$ and $I_2$ be two interval-posets of respective sizes $n > 0$ and $m > 0$, then \begin{itemize} \item $\contacts(I_1 \pleft I_2) = \contacts(I_1) + \contacts(I_2)$; \item $\contactsV(I_1 \pleft I_2) = \left( \contacts(I_1) + \contacts(I_2), \contactsStep{1}(I_1), \dots, \contactsStep{n-1}(I_1),0, \contactsStep{1}(I_2), \dots, \contactsStep{m-1}(I_2) \right)$; \item $\contacts(I_1 \pright{i} I_2) = \contacts(I_1) + \contacts(I_2) - i$; \item $\contactsV(I_1 \pright{i} I_2) = \left( \contacts(I_1) + \contacts(I_2) - i, \contactsStep{1}(I_1), \dots, \contactsStep{n-1}(I_1),i, \contactsStep{1}(I_2), \dots, \contactsStep{m-1}(I_2) \right)$. \end{itemize} If $\size(I_1)= 0$ then $I_1 \pleft I_2 = I_2$ and $\contactsV(I_1 \pleft I_2) = \contactsV(I_2)$. If $\size(I_2)= 0$ then $I_1 \pright{i} I_2 = I_1$ and $\contactsV(I_1 \pright{i} I_2) = \contactsV(I_1)$. \end{Proposition} This can be checked on Figure~\ref{fig:grafting}. 
We have $\contactsV(I_1) = \DC(I_1) = (2,1,0)$ because there are two connected components in the final forest ($2 \trprec 1$ and $3$) and $1$ and $2$ have respectively 1 and 0 decreasing children. For $I_2$, we get $\contactsV(I_2) = (2,0,1)$. Now, it can be checked that $\contactsV(I_1 \pleft I_2) = (4,1,0,0,0,1)$, $\contactsV(I_1 \pright{0} I_2) = (4,1,0,0,0,1)$, $\contactsV(I_1 \pright{1} I_2) = (3,1,0,1,0,1)$, $\contactsV(I_1 \pright{2} I_2) = (2,1,0,2,0,1)$. \begin{proof} First, remember that, by Proposition~\ref{prop:contact-dc}, contacts can be directly computed on the final forest of the interval-posets: the number of non-final contacts corresponds to the number of components and $\contactsStep{v}$ for $1 \leq v \leq n$ is the number of decreasing children of the vertex $v$. Now, in the left grafting $I_1 \pleft I_2$, the two final forests are simply concatenated. In particular, $\contacts(I_1 \pleft I_2) = \contacts(I_1) + \contacts(I_2)$. The contact vector $\contactsV(I_1 \pleft I_2)$ is then formed by this initial value followed by the truncated contact vector of $I_1$, then an extra 0, which corresponds to $\contactsStep{n}$, then the truncated contact vector of $I_2$. The contacts of the right grafting $I_1 \pright{i} I_2$ depend on the parameter~$i$. Indeed, each added decreasing relation merges one component of the final forest of $I_2$ with the last component of the final forest of $I_1$ and thus reduces the number of components by one. As a consequence, we have $\contacts(I_1 \pright{i} I_2) = \contacts(I_1) + \contacts(I_2) - i$. The contact vector is formed by this initial value followed by the truncated contact vector of $I_1$, then the new number of decreasing children of $n$, which is $i$ by definition, then the truncated contact vector of $I_2$. \end{proof} Let us now study what happens to the rise vector $\risesV$ and the initial forest vector $\IC$. They both depend only on the initial forest (increasing relations).
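Both forest vectors can be read off the relations of an interval-poset directly. A small Python sketch (the encoding is ours, for illustration only: an interval-poset is a pair of the size $n$ and a transitively closed set of pairs $(x, y)$ meaning $x \trprec y$):

```python
def forest_vectors(I):
    """Return (DC(I), IC(I)) for an interval-poset I = (n, rels)."""
    n, rels = I
    dec = {(x, y) for (x, y) in rels if x > y}   # final forest relations
    inc = {(x, y) for (x, y) in rels if x < y}   # initial forest relations

    def roots(sub):
        # vertices with no parent in the given forest
        parents = {x for (x, y) in sub}
        return [v for v in range(1, n + 1) if v not in parents]

    def children(v, sub):
        # number of cover relations (x, v): x ≺ v with no z strictly between
        return sum(1 for (x, y) in sub if y == v and
                   not any((x, z) in sub and (z, v) in sub
                           for z in range(1, n + 1)))

    dc = [len(roots(dec))] + [children(v, dec) for v in range(1, n)]
    ic = [len(roots(inc))] + [children(v, inc) for v in range(n, 1, -1)]
    return dc, ic
```

For the size-3 interval-poset with relations $2 \trprec 1$ and $2 \trprec 3$, this returns $\DC = (2,1,0)$ and $\IC = (2,1,0)$.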
The vector $\IC$ can be read directly on the interval-poset and we get the following proposition. \begin{Proposition} \label{prop:grafting-ic} Let $I_1$ and $I_2$ be two interval-posets of respective sizes $n >0$ and $m >0$, then \begin{itemize} \item $\icinf(I_1 \pleft I_2) = \icinf(I_2)$; \item $\IC(I_1 \pleft I_2) = \left( \icinf(I_2), \icstep{m}(I_2), \dots, \icstep{2}(I_2), \icinf(I_1), \icstep{n}(I_1), \dots, \icstep{2}(I_1) \right)$; \item $\icinf(I_1 \pright{i} I_2) = \icinf(I_2) + \icinf(I_1)$; \item $\IC(I_1 \pright{i} I_2) = \left(\icinf(I_2) + \icinf(I_1), \icstep{m}(I_2), \dots, \icstep{2}(I_2), 0, \icstep{n}(I_1), \dots, \icstep{2}(I_1) \right)$. \end{itemize} If $\size(I_1)= 0$ then $I_1 \pleft I_2 = I_2$ and $\IC(I_1 \pleft I_2) = \IC(I_2)$. If $\size(I_2)= 0$ then $I_1 \pright{i} I_2 = I_1$ and $\IC(I_1 \pright{i} I_2) = \IC(I_1)$. \end{Proposition} This can be checked on Figure~\ref{fig:grafting}. We initially have $\IC(I_1) = (2,1,0)$ and $\IC(I_2) = (3,0,0)$, and then $\IC(I_1 \pleft I_2) = (3,0,0,2,1,0)$ and $\IC(I_1 \pright{i} I_2) = (5, 0, 0, 0, 1, 0)$ for all $1 \leq i \leq 2$. \begin{proof} When we compute $I_1 \pleft I_2$, we add increasing relations from all vertices of $I_1$ to the first vertex $\alpha$ of the shifted copy of $I_2$. In other words, we attach all increasing roots of $I_1$ to a new root $\alpha$. The number of components in the initial forest of $I_1 \pleft I_2$ is then given by $\icinf(I_2)$ (the last component contains $I_1$) and the number of increasing children of $\alpha$ is given by $\icinf(I_1)$. The other numbers of increasing children are left unchanged and we thus obtain the expected vector. In the computation of $I_1 \pright{i} I_2$, the value of $i$ only impacts the decreasing relations and thus does not affect the vector $\IC$.
No increasing relation is added, which means that the initial forests of $I_1$ and $I_2$ are only concatenated; by looking at connected components, we obtain $\icinf(I_1 \pright{i} I_2) = \icinf(I_1) + \icinf(I_2)$. The vector $\IC$ is formed by this initial value followed by the truncated initial forest vector of $I_2$, then an extra 0, which corresponds to $\icstep{1}(I_2)$, then the truncated initial forest vector of $I_1$. \end{proof} To understand how the rise vector behaves through the grafting operations, we first need to interpret the grafting on the upper bound Dyck path of the interval. We start with the left grafting. \begin{Proposition} \label{prop:grafting-rise-left} Let $I_1$ and $I_2$ be two interval-posets of respective sizes $n > 0$ and $m >0$. Let $D_1$ and $D_2$ be their respective upper bound Dyck paths. Then the upper bound Dyck path of $I_1 \pleft I_2$ is given by $D_1D_2$ and consequently, if $\size(I_1) > 0$, we get \begin{itemize} \item $\rises(I_1 \pleft I_2) = \rises(I_1)$; \item $\risesV(I_1 \pleft I_2) = (\rises(I_1), \risesStep{1}(I_1), \dots, \risesStep{n-1}(I_1), \rises(I_2), \risesStep{1}(I_2), \dots, \risesStep{m-1}(I_2))$. \end{itemize} \end{Proposition} \begin{figure}[ht] \input{figures/grafting-rises-left} \caption{The upper bound Dyck paths in the left grafting} \label{fig:grafting-rise-left} \end{figure} Figure~\ref{fig:grafting-rise-left} gives an example of left grafting with the corresponding upper bound Dyck paths and rise vectors. \begin{proof} The definition of $I_1 \pleft I_2$ states that we add all relations $(i,\alpha)$ with $i \in I_1$ and $\alpha$ the first vertex of $I_2$. This is the same as adding all relations $(i, \alpha)$ where $i$ is an increasing root of $I_1$ (the other relations are obtained by transitivity). The increasing roots of $I_1$ correspond to the up-steps of $D_1$ whose corresponding down-steps are not followed by an up-step, \emph{i.e.}, the up-steps corresponding to final down-steps of $D_1$.
By concatenating $D_1$ and $D_2$, the first up-step of $D_2$ becomes the first up-step following the final down-steps of $D_1$: this indeed adds the relations from the increasing roots of $I_1$ to the first vertex of $I_2$. The expressions for the initial rise and the rise vector follow immediately by definition. \end{proof} The effect of the right grafting on the rise vector is a bit more technical. For simplicity, we only study the case where $I_1$ is of size one, which is the only case we will need in this paper. \begin{Proposition} \label{prop:grafting-rise-right} Let $I$ be an interval-poset of size $n >0$ and $D$ its upper bound Dyck path. Let $u$ be the unique interval-poset with a single vertex. Note that the upper bound Dyck path of $u$ is given by the word $10$. Then the upper bound Dyck path of $u \pright{i} I$ is $1D0$ for all $0 \leq i \leq \contacts(I)$, and we have \begin{itemize} \item $\rises(u \pright{i} I) = \rises(I) + 1$; \item $\risesV(u \pright{i} I) = (\rises(I) + 1, \risesStep{1}(I), \dots, \risesStep{n-1}(I), \risesStep{n}(I) = 0)$. \end{itemize} \end{Proposition} \begin{figure}[ht] \input{figures/grafting-rises-right} \caption{The upper bound Dyck paths in the right grafting} \label{fig:grafting-rise-right} \end{figure} Figure~\ref{fig:grafting-rise-right} gives an example of right grafting with the corresponding upper bound Dyck paths and rise vectors. \begin{proof} The right grafting only adds decreasing relations. On the initial forests, it is then nothing but a concatenation. In particular, in the case of $u \pright{i} I$, no increasing relation is added from vertex $1$ to any vertex of $I$. On the upper bound Dyck path, this means that the down-step corresponding to the initial up-step is not followed by any up-step: the Dyck path of $I$ has to be nested inside this initial up-step. The expressions for the rise vector follow immediately.
\end{proof} \begin{Remark} \label{rem:right-grafting-rise-ic} When applying a right grafting on $u$, the interval-poset of size 1, the rise vector and the initial forest vector have similar expressions: \begin{align} \icinf(u \pright{i} I_2) &= 1 + \icinf(I_2); \\ \rises(u \pright{i} I_2) &= 1 + \rises(I_2); \\ \IC(u \pright{i} I_2) &= \left( 1 + \icinf(I_2), \IC^*(I_2), 0 \right); \\ \risesV(u \pright{i} I_2) &= \left( 1 + \rises(I_2), \risesV^*(I_2), 0 \right). \end{align} This will be a fundamental property when we define our involutions. Note also that if $\size(I_2) = 0$, we have $\icinf(u \pright{i} I_2) = \rises(u \pright{i} I_2) = 1$ and $\IC(u \pright{i} I_2) = \risesV(u \pright{i} I_2) = (1)$. \end{Remark} Now, the only statistic left to study through the grafting operations is the distance. Recall that by Proposition~\ref{prop:tamari-inversions}, it is given by the number of Tamari inversions. As for the $\risesV$ vector, it is more complicated to study for the right grafting, in which case we restrict ourselves to $\size(I_1) = 1$. \begin{Proposition} \label{prop:grafting-distance} Let $I_1$ and $I_2$ be two interval-posets, and $u$ be the interval-poset of size one. Then \begin{itemize} \item $\distance(I_1 \pleft I_2) = \distance(I_1) + \distance(I_2)$, \item $\distance(u \pright{i} I_2) = \distance(I_2) + \contacts(I_2) - i$. \end{itemize} \end{Proposition} Look for example at Figure~\ref{fig:grafting-rise-left}: the Tamari inversions $(1,3)$ of $I_1$ and $(1,2)$ of $I_2$ are kept through $I_1 \pleft I_2$ and no other Tamari inversion is added. For the right grafting, you can look at Figure~\ref{fig:grafting-rise-right}: the interval-poset $I_2$ has only one Tamari inversion $(1,3)$ and we have $\contacts(I_2) = 2$. You can check that $\distance( u \pright{1} I_2) = 2 = 1 + 2 - 1$, the two Tamari inversions being $(2,4)$ and $(1,4)$. \begin{proof} We first prove $\distance(I_1 \pleft I_2) = \distance(I_1) + \distance(I_2)$.
The condition for a pair $(a,b)$ to be a Tamari inversion is local: it depends only on the vertices $a \leq k \leq b$. Thus, because the local structure of $I_1$ and $I_2$ is left unchanged, any Tamari inversion of $I_1$ or $I_2$ is kept in $I_1 \pleft I_2$. Now, suppose that $a \in I_1$ and $b \in I_2$. Let $\alpha$ be the label of minimal value in $I_2$ (which has been shifted by the size of $I_1$). By definition, we have $a < \alpha \leq b$ and $a \trprec \alpha$ in $I_1 \pleft I_2$: $(a,b)$ is not a Tamari inversion. Now, let $I = u \pright{i} I_2$ with $0 \leq i \leq \contacts(I_2)$ and let us prove that $\distance(I) = \distance(I_2) + \contacts(I_2) - i$. Once again, note that the Tamari inversions of $I_2$ are kept through the right grafting. For the same reason, the only Tamari inversions that could be added are of the form $(1,b)$ with $b \in I_2$. Now, let $b$ be a vertex of $I_2$ which is not a decreasing root. This means there is $a < b$ with $b \trprec_{I_2} a$. In $I$, the interval-poset $I_2$ has been shifted by one, so we have $1 < a < b$ with $b \trprec_I a$: $(1,b)$ is not a Tamari inversion of $I$. Let $b$ be a decreasing root of $I_2$. If $b \trprec_{I} 1$ then $(1,b)$ is not a Tamari inversion. If $b \ntriangleleft_{I} 1$, then by construction there is no $a \in I_2$ with $1 \trprec_I a$, and because $b$ is a decreasing root there is no $a \in I_2$ with $a < b$ and $b \trprec a$. In other words, $(1,b)$ is a Tamari inversion of $I$ if and only if $b$ is a decreasing root of $I_2$ and $b \ntriangleleft_I 1$. By the definition of $\pright{i}$, there are exactly $\contacts(I_2) - i$ such vertices.
\end{proof} \subsection{Grafting trees} \label{sec:decomposition} \begin{Proposition} \label{prop:decomposition} An interval-poset $I$ of size $n \geq 1$ is fully determined by a unique triplet $(I_L, I_R, r)$ with $0 \leq r \leq \contacts(I_R)$ and $\size(I_L) + \size(I_R) + 1 = n$ such that $I = I_L \pleft u \pright{r} I_R$ with $u$ the unique interval-poset of size 1. We call this triplet the \emph{grafting decomposition} of $I$. See an example on Figure~\ref{fig:grafting-decomposition}. \end{Proposition} \begin{Remark}~ \begin{itemize} \item It can easily be checked that the operation $I_L \pleft u \pright{r} I_R$ is well defined as we have $(I_L \pleft u) \pright{r} I_R = I_L \pleft (u \pright{r} I_R)$. Indeed $I_L \pleft u$ adds increasing relations from $I_L$ to $u$ while $u \pright{r} I_R$ adds decreasing relations from $I_R$ to $u$. The two operations are independent of each other, see an example on Figure~\ref{fig:grafting-decomposition}. In practice, we think of it as $I_L \pleft (u \pright{r} I_R)$. \item One or both of the intervals in the decomposition can be empty (of size 0). In particular, the decomposition of $u$ is the triplet $(\emptyset, \emptyset, 0)$. \end{itemize} \end{Remark} \begin{figure}[ht] \input{figures/decomposition_example} \caption{Grafting decomposition of an interval-poset} \label{fig:grafting-decomposition} \end{figure} \begin{proof} This is only a reformulation of \cite[Prop. 3.7]{IntervalPosetsInitial}. Indeed, it was proved that each interval-poset $I$ of size $n$ appears exactly once in one \emph{composition} $\mathbb{B}(I_L, I_R)$ of two interval-posets, where $\size(I_L) + \size(I_R) + 1 = n$ and \begin{align*} \mathbb{B}(I_L, I_R) = \sum_{0 \leq r \leq \contacts(I_R)} I_L \pleft u \pright{r} I_R. \end{align*} The parameter $r$ identifies which term of the composition sum is $I$. \end{proof} \begin{Definition} \label{def:grafting-tree} Let $T$ be a binary tree of size $n$.
We write $v_1, \dots, v_n$ the nodes of $T$ taken in in-order (following the binary search tree labeling). Let $\ell : \lbrace v_1, \dots, v_n \rbrace \rightarrow \mathbb{N}$ be a labeling function on $T$. For all subtrees $T'$ of $T$, we write $\size(T')$ the size of the subtree and $\labels(T') := \sum_{v_i \in T'} \ell(v_i)$ the sum of the labels of its nodes. We say that $(T,\ell)$ is a \emph{Tamari interval grafting tree}, or simply \emph{grafting tree} if the labeling $\ell$ satisfies that for every node $v_i$, we have $\ell(v_i) \leq \size(T_R(v_i)) - \labels(T_R(v_i))$ where $T_R(v_i)$ is the right subtree of the node $v_i$. \end{Definition} An example is given in Figure~\ref{fig:grafting-tree}: the vertices $v_1, \dots, v_8$ are written in red above the nodes, whereas the labeling $\ell$ is given inside the nodes. For example, you can check the rule on the root $v_4$, we have $\size(T_R(v_4)) - \labels(T_R(v_4)) = 4 - 1 = 3$ and indeed $\ell(v_4) = 2 \leq 3$. The rule is satisfied on all nodes. Note that if the right subtree of a node is empty (which is the case for $v_1$, $v_3$, $v_6$, and $v_8$) then the label is always 0. \begin{figure}[ht] \begin{tabular}{cc} \scalebox{0.6}{ \input{figures/grafting_tree_example} } & \scalebox{0.8}{\input{figures/interval-posets/I8-ex3}} \end{tabular} \input{figures/grafting-tree-construct} \caption{Example of grafting tree with corresponding interval-poset and grafting decomposition.} \label{fig:grafting-tree} \end{figure} \begin{Proposition} \label{prop:grafting-tree} Intervals of the Tamari lattice are in bijection with grafting trees. The grafting tree of an interval-poset $I$ is written as~$\graftingTree(I)$. 
We compute $\graftingTree(I) = (T, \ell)$ recursively as follows \begin{itemize} \item if $I = \emptyset$, then $T$ is the empty binary tree; \item if $\size(I) > 0$ and $(I_L, I_R, r)$ is the grafting decomposition of $I$, such that $\graftingTree(I_L) = (T_L, \ell_L)$ and $\graftingTree(I_R) = (T_R, \ell_R)$, then $T = \bullet(T_L, T_R)$ and $\ell$ is constructed by keeping unchanged the labels of $T_L$ and $T_R$ given by $\ell_L$ and $\ell_R$ and for the new root $v$ of $T$, $\ell(v) = r$. \end{itemize} Besides, \begin{align} \label{eq:grafting-tree-contact} \contacts(I) = \size(\graftingTree(I)) - \labels(\graftingTree(I)). \end{align} \end{Proposition} Figure~\ref{fig:grafting-tree} illustrates the bijection with the full recursive decomposition. The interval-poset decomposes into the triplet $(I_L, I_R, 2)$ as shown in Figure~\ref{fig:grafting-decomposition}. The left and right subtrees of the grafting tree are obtained recursively by applying the decomposition on $I_L$ and $I_R$. As $\size(I_L) = 3$, the root of $T$ is $v_4$ and we have $\ell(v_4) = 2$, which is indeed the parameter $r$ of the grafting decomposition and also the number of decreasing children of $4$ in the interval-poset. \begin{proof} First, let us check that we can obtain an interval-poset from a grafting tree. We read the grafting tree as an expression tree where each empty subtree is interpreted as an empty interval-poset and each node corresponds to the operation $I_L \pleft u \pright{r} I_R$ where $r$ is the label of the node, $I_L$ and $I_R$ the respective results of the expressions of the left and right subtrees, and $u$ the interval-poset of size 1.
In other words, the interval-poset $I = \graftingTree^{-1}(T, \ell)$ where $(T, \ell)$ is a grafting tree is computed recursively by \begin{itemize} \item if $T$ is empty then $I = \emptyset$; \item if $T = v_k(T_L, T_R)$ then $I = \graftingTree^{-1}(T_L, \ell_L) \pleft u \pright{r} \graftingTree^{-1}(T_R, \ell_R)$ with $\ell(v_k) = r$, and $\ell_L$ and $\ell_R$ the labeling function $\ell$ restricted to respectively $T_L$ and $T_R$. \end{itemize} We need to check that the operation $u \pright{r} \graftingTree^{-1}(T_R, \ell_R)$ is well-defined, \emph{i.e.}, in the case where $T$ is not empty, that we have $0 \leq r \leq \contacts(\graftingTree^{-1}(T_R, \ell_R))$. We do that by also proving by induction that $\contacts(\graftingTree^{-1}(T, \ell)) = \size(T) - \labels(T)$. This is true in the initial case where $T$ is empty: $\contacts(\emptyset) = 0$. Now, suppose that $T = v_k(T_L, T_R)$ with $\ell(v_k) = r$ and that the property is satisfied on $(T_L, \ell_L)$ and $(T_R, \ell_R)$. We write $I_L = \graftingTree^{-1}(T_L, \ell_L)$ and $I_R = \graftingTree^{-1}(T_R,\ell_R)$. In this case, $I' := u \pright{r} I_R$ is well-defined because we have by definition that $r \leq \size(T_R) - \labels(T_R)$, which by induction is $\contacts(I_R)$. Besides, by Proposition~\ref{prop:grafting-contact}, we have $\contacts(I') = 1 + \contacts(I_R) - r$. We now compute $I = I_L \pleft I'$ and we get $\contacts(I) = \contacts(I_L) + 1 + \contacts(I_R) - r$, which is by induction $\size(T_L) - \labels(T_L) + 1 + \size(T_R) - \labels(T_R) - r = \size(T) - \labels(T)$. Conversely, it is clear from Proposition~\ref{prop:decomposition} that the grafting decomposition of an interval-poset $I$ gives a labeled binary tree $(T, \ell)$. By the uniqueness of the decomposition, it is the only labeled binary tree such that $I = \graftingTree^{-1}(T, \ell)$. This proves that $\graftingTree^{-1}$ is injective.
To prove that it is surjective, we need to show that $\graftingTree(I)$ is indeed a grafting tree, \emph{i.e.}, the condition on the labels holds. Once again, this is done inductively. An empty interval-poset gives an empty tree and the condition holds. Now, if $I$ decomposes into the triplet $(I_L, I_R, r)$, we suppose that the condition holds on $(T_L, \ell_L) = \graftingTree(I_L)$ and $(T_R,\ell_R) = \graftingTree(I_R)$. We know that $0 \leq r \leq \contacts(I_R)$ and we have just proved that $\contacts(I_R)$ is indeed $\size(T_R) - \labels(T_R)$. \end{proof} \begin{Proposition} \label{prop:grafting-direct} Let $I$ be an interval-poset and $\graftingTree(I) = (T,\ell)$, then \begin{enumerate} \item $T$ is the upper bound binary tree of $I$; \item $\ell(v_i)$ is the number of decreasing children of the vertex $i$ in $I$. \end{enumerate} \end{Proposition} In other words, the grafting tree of an interval-poset can be obtained directly without using the recursive decomposition. Also, the tree $T$ only depends on the initial forest and the labeling $\ell$ only depends on the final forest. \begin{proof} We prove the result by induction on the size of $I$. If $I$ is empty, there is nothing to prove. We then suppose that $I$ decomposes into a triplet $(I_L, I_R, r)$ with $k = \size(I_L) + 1$. We suppose by induction that the proposition is true on $I_L$ and $I_R$. Let $(T, \ell) = \graftingTree(I)$, $(T_L, \ell_L) = \graftingTree(I_L)$, and $(T_R, \ell_R) = \graftingTree(I_R)$. By induction, $T_L$ and $T_R$ are the upper bound binary trees of $I_L$ and $I_R$ respectively. In \cite[Prop. 3.4]{IntervalPosetsInitial}, we proved $T = v_k(T_L, T_R)$, which by construction of the initial forest is indeed the upper bound binary tree of $I$. The result on the labeling function $\ell$ is obtained by induction on $\ell_L$ and $\ell_R$ for all vertices $v_i$ with $i \neq k$.
Besides, by definition of the grafting tree, we have $\ell(v_k) = r$, which is indeed the number of decreasing children of the vertex $k$ in $I$ by the definition of the right grafting $\pright{r}$. \end{proof} \begin{Remark} Note that the grafting tree of a Tamari interval has similarities with another structure in bijection with interval-posets: \emph{closed flows} on a planar forest, which were described in \cite{ME_FPSAC2014}. The planar forest associated to an interval-poset depends only on the initial forest of the interval, \emph{i.e.}, only on its upper bound binary tree, which also gives the shape of the grafting tree. In other words, given a binary tree~$T$, there is a one-to-one correspondence between the possible labelings $\ell$ such that $(T,\ell)$ is a grafting tree and the closed flows on a certain planar forest $F$. As described in \cite[Fig. 10]{ME_FPSAC2014}, the forest $F$ corresponding to $T$ is obtained by a classical bijection often referred to as the ``left child to left brother'' bijection. It consists of transforming, for each node of the binary tree, the left child into a left brother and the right child into the last child in the planar forest. Now, the flow itself depends on the decreasing forest of the interval-poset just as the labeling $\ell$ of the grafting tree. Each node receiving a $-1$ in the flow corresponds to a node with a positive label in the grafting tree. \end{Remark} \begin{Remark} The ``left child to left brother'' bijection to planar forests also gives a direct bijection between grafting trees and $(1,1)$ description trees of \cite{CoriSchaefferDescTrees} (the planar forest is turned into a tree by adding a root). The labels $\ell'$ of the $(1,1)$ description trees are obtained through a simple transformation from $\ell$: for each node $v$, $\ell'(v) = 1 + \size(T_R(v)) - \labels(T_R(v)) - \ell(v)$.
\end{Remark} \begin{Proposition} \label{prop:grafting-tree-contact} Let $I$ be an interval-poset and $(T,\ell) = \graftingTree(I)$ its grafting tree with $v_1, \dots, v_n$ the vertices of~$T$. Then $\contactsV^*(I) = (\ell(v_1), \dots, \ell(v_{n-1}))$. \end{Proposition} \begin{proof} Remember that $\contactsV^*(I) = \DC^*(I)$ by Proposition~\ref{prop:contact-dc}, \emph{i.e.}, the final forest vector given by reading the number of decreasing children of the vertices in $I$. Then the result is a direct consequence of Proposition~\ref{prop:grafting-direct}. \end{proof} \begin{Proposition} \label{prop:grafting-tree-distance} Let $I$ be an interval-poset and $(T,\ell) = \graftingTree(I)$ its grafting tree with $v_1, \dots, v_n$ the vertices of $T$. Let $d_i = \size(T_R(v_i)) - \labels(T_R(v_i)) - \ell(v_i)$ for all $1 \leq i \leq n$ where $T_R(v_i)$ is the right subtree of the vertex $v_i$ in $T$. Then \begin{align} \distance(I) = \sum_{1 \leq i \leq n} d_i. \end{align} \end{Proposition} For example, on Figure~\ref{fig:grafting-tree}, we have all $d_i = 0$ except for $d_4 = 4 - 1 - 2 = 1$ and $d_5 = 3 - 1 = 2$. This indeed is consistent with $\distance(I) = 3$, the 3 Tamari inversions being $(4,7)$, $(5,6)$, and $(5,7)$. More precisely, the number $d_i$ is the number of Tamari inversions of the form~$(i,*)$. \begin{proof} Once again, we prove the property inductively. This is true for an empty tree where we have $\distance(I) = 0$. Now, let $I$ be a non-empty interval-poset, then $I$ decomposes into a triplet $(I_L, I_R, r)$ with $I = I_L \pleft u \pright{r} I_R$. Proposition~\ref{prop:grafting-distance} gives us \begin{align} \distance(I) &= \distance(I_L \pleft u \pright{r} I_R) \\ &= \distance(I_L) + \distance(u \pright{r} I_R) \\ &= \distance(I_L) + \distance(I_R) + \contacts(I_R) - r. \end{align} Now let $(T, \ell) = \graftingTree(I)$. 
By definition, we have $T = v_k(T_L, T_R)$ with $k = \size(T_L) + 1$, $(T_L, \ell_L) = \graftingTree(I_L)$, and $(T_R, \ell_R) = \graftingTree(I_R)$. Using the induction hypothesis and \eqref{eq:grafting-tree-contact}, we obtain \begin{align} \sum_{1 \leq i \leq n} d_i &= \sum_{1 \leq i < k} d_i + d_k + \sum_{k < i \leq n} d_i \\ &= \distance(I_L) + \size(T_R) - \labels(T_R) - \ell(v_k) + \distance(I_R) \\ &= \distance(I_L) + \contacts(I_R) - r + \distance(I_R). \end{align} \end{proof} \subsection{Left branch involution on the grafting tree} \label{sec:grafting-tree-involution} We now give an interesting involution on the grafting tree, which in turn gives an involution on Tamari intervals. In Section~\ref{sec:statement}, we mentioned that the rise-contact involution on Dyck paths (not intervals) used the reversal of a Dyck path conjugated with the Tamari symmetry. The equivalent of the Dyck path reversal on the corresponding binary tree is also a classical involution, which we call the \emph{left branch involution}. Applying this involution on grafting trees will allow us to generalize it to intervals. A \emph{right hanging binary tree} is a binary tree whose left subtree is empty. An alternative way to see a binary tree is to understand it as a list of right hanging binary trees grafted together on its left-most branch. For example, the tree of Figure~\ref{fig:grafting-tree} can be decomposed into 3 right hanging binary trees: the one with vertex $v_1$, the one with vertices $v_2$ and $v_3$ and the one with vertices $v_4$ to $v_8$. \begin{Definition} The \emph{left branch involution} on binary trees is the operation that recursively reverses the order of right hanging trees on every left branch of the binary tree. We write $\leftbranch(T)$ the image of a binary tree $T$ through the involution. \end{Definition} This operation is a very classical involution on binary trees, see Figure~\ref{fig:left-branch-involution} for an example.
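To make the definition concrete, here is a small Python sketch (our own encoding, not from the paper: a labeled binary tree is \texttt{None} or a triple \texttt{(label, left, right)}, and all function names are ours). It implements the recursive reversal of right hanging trees and checks the involution property on all small trees.

```python
# Hedged sketch of the left branch involution; encoding and names are ours.

def graft_leftmost(t, node):
    """Attach `node` at the bottom of the leftmost branch of `t`."""
    if t is None:
        return node
    label, left, right = t
    return (label, graft_leftmost(left, node), right)

def left_branch_involution(t):
    """Recursively reverse the order of right hanging trees on every left branch."""
    if t is None:
        return None
    label, left, right = t
    # The root becomes a right hanging tree (empty left subtree) grafted at the
    # bottom of the leftmost branch of the image of the left subtree.
    hanging_root = (label, None, left_branch_involution(right))
    return graft_leftmost(left_branch_involution(left), hanging_root)

def all_trees(n):
    """All binary trees with n nodes (all labels set to 0)."""
    if n == 0:
        yield None
        return
    for i in range(n):
        for left in all_trees(i):
            for right in all_trees(n - 1 - i):
                yield (0, left, right)

# Involution property on every tree with at most 5 nodes.
assert all(left_branch_involution(left_branch_involution(t)) == t
           for n in range(6) for t in all_trees(n))
```

Note that the labels play no role in the reversal itself: they simply travel with their nodes, which is the behavior needed on grafting trees.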
It is implemented in SageMath \cite{SageMath2017} as the \texttt{left\_border\_symmetry} method on binary trees. You can also understand it in a recursive manner: if $T$ is a non-empty tree with $T_L$ and $T_R$ as respectively left and right subtrees, then the image of $T$ can be constructed from the respective images $T_R'$ and $T_L'$ of $T_R$ and $T_L$ following the structure of Figure~\ref{fig:left-branch-involution-recursive}. The root is grafted on the left-most branch of $T_L'$ with an empty left subtree and $T_R'$ as a right subtree. \begin{figure}[ht] \input{figures/left-border-symmetry} \caption{The left-branch involution.} \label{fig:left-branch-involution} \end{figure} \begin{figure}[ht] \input{figures/left-border-symmetry-recursive} \caption{The left-branch involution seen recursively.} \label{fig:left-branch-involution-recursive} \end{figure} \begin{Proposition} The left branch involution is an involution on grafting trees. \end{Proposition} \begin{proof} First, let us clarify what the involution means on a grafting tree~$(T, \ell)$: we apply the involution on the binary tree $T$ and the vertices \emph{move along with their labels} as illustrated in Figure~\ref{fig:left-branch-involution}. We obtain a new labeled binary tree $(T', \ell')$ where every vertex $v_i$ of $T$ is sent to a new vertex $v_{i'}$ of $T'$ such that $\ell(v_i) = \ell'(v_{i'})$. For example, in Figure~\ref{fig:left-branch-involution}, the root $v_4$ of $T$ is sent to $v_1$ of $T'$, with $\ell(v_4) = \ell'(v_1) = 2$. The only thing to check is that $\ell'$ still satisfies the grafting tree condition. This is immediate. Indeed, for $v_i \in T$ with right subtree $T_R(v_i)$, we have $\ell(v_i) \leq \size(T_R(v_i)) - \labels(T_R(v_i))$.
Now, if $v_{i'}$ is the image of $v_i$ and $T'_R(v_{i'})$ its right subtree, even though $T'_R$ might be different from $T_R$, the statistics are preserved: $\size(T'_R(v_{i'})) = \size(T_R(v_i))$ and $\labels(T'_R(v_{i'})) = \labels(T_R(v_i))$, because the involution only acts on left branches and the set of labels of the right subtree is preserved. \end{proof} As a consequence, we now have an involution on Tamari intervals. \begin{Definition}[The Left Branch Involution] \label{def:leftbranch-intervals} The left branch involution on Tamari intervals is defined by the left branch involution on their grafting trees. \begin{align} \leftbranch(I) := \graftingTree^{-1}(\leftbranch(\graftingTree(I))) \end{align} \end{Definition} The grafting tree seems to be the most natural object to describe the involution. Indeed, even though it can be easily computed on interval-posets using decomposition and graftings, we have not found any simple direct description of it. Furthermore, if we understand the interval as a couple of a lower and an upper bound, then the action on the upper bound is simple: the shape of the upper bound binary tree is given by the grafting tree and so the involution on the upper bound is only the classical left-branch involution, which corresponds to reversing the Dyck path. Nevertheless, the action on the lower bound cannot be described as an involution on binary trees: it depends on the corresponding upper bound. One way to understand this involution is that we apply the left-branch involution on the upper bound binary tree and the lower bound ``follows'' in the sense given by the labels of the grafting tree.
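The involution on grafting trees can also be checked exhaustively in small sizes. The Python sketch below (the tuple encoding and the names are ours) enumerates all grafting trees with at most 4 nodes; their counts $1$, $3$, $13$, $68$ match the number of Tamari intervals, as expected from Proposition~\ref{prop:grafting-tree}, and the left branch involution sends grafting trees to grafting trees while preserving the quantity $\size(T) - \labels(T)$ of~\eqref{eq:grafting-tree-contact}.

```python
# Exhaustive small-size check; the tuple encoding (label, left, right) is ours.

def size(t):
    return 0 if t is None else 1 + size(t[1]) + size(t[2])

def labels(t):
    return 0 if t is None else t[0] + labels(t[1]) + labels(t[2])

def is_grafting_tree(t):
    """Check ell(v) <= size(T_R(v)) - labels(T_R(v)) at every node."""
    if t is None:
        return True
    ell, left, right = t
    return (ell <= size(right) - labels(right)
            and is_grafting_tree(left) and is_grafting_tree(right))

def grafting_trees(n):
    """All grafting trees with n nodes."""
    if n == 0:
        yield None
        return
    for i in range(n):
        for left in grafting_trees(i):
            for right in grafting_trees(n - 1 - i):
                for ell in range(size(right) - labels(right) + 1):
                    yield (ell, left, right)

def graft_leftmost(t, node):
    if t is None:
        return node
    ell, left, right = t
    return (ell, graft_leftmost(left, node), right)

def left_branch(t):
    """Left branch involution; labels travel with their nodes."""
    if t is None:
        return None
    ell, left, right = t
    return graft_leftmost(left_branch(left), (ell, None, left_branch(right)))

# Counts of grafting trees = numbers of Tamari intervals.
assert [sum(1 for _ in grafting_trees(n)) for n in (1, 2, 3, 4)] == [1, 3, 13, 68]

for n in range(1, 5):
    for t in grafting_trees(n):
        image = left_branch(t)
        assert is_grafting_tree(image)        # still a grafting tree
        assert left_branch(image) == t        # involution
        assert size(image) - labels(image) == size(t) - labels(t)
```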
\begin{Proposition} \label{prop:leftbranch-statistics} Let $I$ be a Tamari interval, then \begin{align} \label{eq:leftbranch-contacts} \contacts(I) &= \contacts(\leftbranch(I)); \\ \label{eq:leftbranch-contactsP} \contactsP(I) &= \contactsP(\leftbranch(I)); \\ \label{eq:leftbranch-distance} \distance(I) &= \distance(\leftbranch(I)); \\ \label{eq:leftbranch-rises} \risesV(I) &= \IC(\leftbranch(I)). \end{align} In other words, the involution exchanges the rise vector and initial forest vector while leaving unchanged the number of contacts, the contact monomial, and the distance. \end{Proposition} \begin{proof} Points \eqref{eq:leftbranch-contacts} and \eqref{eq:leftbranch-contactsP} are immediate. Indeed, \eqref{eq:grafting-tree-contact} tells us that $\contacts(I)$ is given by $\size(\graftingTree(I)) - \labels(\graftingTree(I))$: this statistic is not changed by the involution. Now remember that, by Proposition~\ref{prop:grafting-tree-contact}, the values $\contactsStep{1}(I), \dots, \contactsStep{n}(I)$ are given by $\ell(v_1), \dots, \ell(v_n)$, so $\contactsP(I) = x_{\contacts(I)} x_{\ell(v_1)} \dots x_{\ell(v_{n-1})} = \frac{x_{\contacts(I)} x_{\ell(v_1)} \dots x_{\ell(v_{n})}}{x_0}$. The variables commute and the involution sending $\ell$ to $\ell'$ only applies a permutation on the indices: the monomial itself is not changed. Also, we always have $\ell(v_n) = \ell'(v_n) = 0$ so the division by $x_0$ is still possible after the permutation and still removes the last value $x_{\ell'(v_n)}$. As an example, on Figure~\ref{fig:left-branch-involution}, we have $\contactsP(I) = x_4 x_0 x_1 x_0 x_2 x_0 x_0 x_1 = x_0^4 x_1^2 x_2 x_4 = x_4 x_2 x_0 x_1 x_0 x_0 x_1 x_0 = \contactsP(\leftbranch(I))$. Point \eqref{eq:leftbranch-distance} is also immediate by Proposition~\ref{prop:grafting-tree-distance}.
Indeed, for all $1 \leq i \leq n$, we have $d_i = \size(T_R(v_i)) - \labels(T_R(v_i)) - \ell(v_i) = \size(T'_R(v_{i'})) - \labels(T'_R(v_{i'})) - \ell'(v_{i'}) = d_{i'}$ if $v_i$ is sent to $v_{i'}$ by the involution. Once again, the values $d_1, \dots, d_n$ are only permuted and the sum stays the same. We prove point~\eqref{eq:leftbranch-rises} by induction on the size of the tree. It is trivially true when $\size(I) = 0$ (both vectors are empty). Now suppose that $I$ is an interval-poset of size $n > 0$. Let $(T, \ell) = \graftingTree(I)$, then $T$ is a non-empty binary tree with two subtrees $T_L$ and $T_R$ (which can be empty) and a root node $v$ such that $\ell(v) = i$. Let us call $I_L$ and $I_R$ the interval-posets corresponding to $T_L$ and $T_R$ respectively. By definition, we have that \begin{align} I = I_L \pleft u \pright{i} I_R. \end{align} We call $T_L'$ and $T_R'$ the respective images of $T_L$ and $T_R$ through the left branch involution and $I_L'$ and $I_R'$ the corresponding interval-posets. As both $T_L$ and $T_R$ are of size strictly less than $n$, we have by induction that \begin{align} \risesV(I_L) &= \IC(I_L'), \\ \nonumber \risesV(I_R) &= \IC(I_R'). \end{align} Following the recursive description of the left branch involution given on Figure~\ref{fig:left-branch-involution-recursive}, we obtain that the image $I' := \leftbranch(I)$ is given by \begin{align} \label{eq:leftbranch-intervals-recursive} \left( u \pright{i} I_R' \right) \pleft I_L'. \end{align} We are using a small shortcut here as this expression does not exactly correspond to the definition of the grafting tree. Indeed, $T_L'$ is a whole tree, not a single node. Nevertheless, it can be easily checked that the left product $\pleft$ is associative. Then any tree can be seen as a series of right-hanging trees grafted to each other as in the following picture.
\begin{center} \scalebox{.8}{\input{figures/right-hanging-trees}} \end{center} The definition gives us that the interval-poset is computed by \begin{align} &(\dots((u \pright{c} I_C) \pleft \dots (u \pright{b} I_B)) \pleft (u \pright{a} I_A))\\ \nonumber &= (u \pright{c} I_C) \pleft \dots (u \pright{b} I_B) \pleft (u \pright{a} I_A). \end{align} Using \eqref{eq:leftbranch-intervals-recursive}, we obtain the desired result. Indeed, let $J = u \pright{i} I_R$ and $J' = u \pright{i} I_R'$. If $I_R$ is empty, so is $I_R'$ and we have $\risesV(J) = \IC(J') = (1)$. If not, we use Propositions~\ref{prop:grafting-ic} and~\ref{prop:grafting-rise-right} and Remark~\ref{rem:right-grafting-rise-ic} to obtain \begin{align} \risesV(J) &= (1 + \rises(I_R), \risesV^*(I_R), 0) \\ &= (1 + \icinf(I_R'), \IC^*(I_R'), 0) \\ &= \IC(J'). \end{align} Now by using Propositions~\ref{prop:grafting-rise-left} and~\ref{prop:grafting-ic}, we obtain \begin{align} \risesV(I) = \risesV(I_L \pleft J) &= (\risesV(I_L), \risesV(J)) \\ &= (\IC(I_L'), \IC(J')) \\ &= \IC(J' \pleft I_L') = \IC(I'). \end{align} \end{proof} \subsection{The complement involution and rise-contact involution} As we have seen in Section~\ref{sec:statement}, the rise-contact involution on Dyck paths is a conjugation of the Tamari symmetry involution by the Dyck path reversal involution. The equivalent of the Dyck path reversal on intervals is the left-branch involution on the grafting tree. We now need to describe what is the Tamari symmetry on intervals: this is easy, especially described on interval-posets. \begin{Definition}[The Complement Involution] The \emph{complement} of an interval-poset $I$ of size $n$ is the interval-poset $J$ defined by \begin{align} i \trprec_J j \Leftrightarrow (n+1 -i) \trprec_I (n+1 -j). \end{align} We write $\compl(I)$ the complement of $I$. 
\end{Definition} \begin{figure}[ht] \input{figures/complement} \caption{The complement of an interval-poset} \label{fig:compl} \end{figure} An example is shown on Figure~\ref{fig:compl}. It is clear by Definition~\ref{def:interval-poset} that this is still an interval-poset. Basically, this is an involution exchanging increasing and decreasing relations. This corresponds to the up-down symmetry of the Tamari lattice. It is a well known fact that the Tamari lattice is isomorphic to its inverse by sending every tree $T$ to its reverse $T'$ where the left and right subtrees have been exchanged on every node. Let $T_1$ and $T_2$ be respectively the lower and upper bounds of an interval~$I$. Let $T_1'$ and $T_2'$ be the respective reverses of $T_1$ and $T_2$. Then $T_1'$ is the upper bound of $\compl(I)$ and $T_2'$ is the lower bound. \begin{Proposition} \label{prop:complement-ic-dc} Let $I$ be an interval-poset, then $\IC(I) = \DC(\compl(I))$. \end{Proposition} \begin{proof} Every increasing relation $a \trprec_I b$ is sent to a decreasing relation $(n+1 - a) \trprec_{\compl(I)} (n+1 - b)$. In particular, each connected component of the initial forest of $I$ is sent to exactly one connected component of the final forest of $\compl(I)$ and so $\icinf(I) = \dcstep{0}(\compl(I))$. Now, if a vertex $b$ has $k$ increasing children in $I$, its image $(n+1-b)$ has $k$ decreasing children in $\compl(I)$ so $\icstep{b}(I) = \dcstep{n+1-b}(\compl(I))$. Remember that $\IC^*$ reads the numbers of increasing children in reverse order from $n$ to $2$ whereas $\DC^*$ reads the number of decreasing children in the natural order from $1 = n+1-n$ to $n-1 = n+1 -2$. We conclude that $\IC(I) = \DC(\compl(I))$. \end{proof} \begin{Proposition} \label{prop:complement-distance} Let $I$ be an interval-poset, then $\distance(I) = \distance(\compl(I))$. More precisely, $(a,b)$ is a Tamari inversion of $I$ if and only if $(n+1-b, n+1-a)$ is a Tamari inversion of $\compl(I)$. 
\end{Proposition} \begin{proof} Let $a < b$ be two vertices of $I$, we set $a' = n+1-b$ and $b' = n+1-a$. \begin{itemize} \item There is $a \leq k <b$ with $b \trprec_I k$ if and only if there is $k' = n+1-k$ with $a' < k' \leq b'$ and $a' \trprec_{\compl(I)} k'$. \item There is $a < k \leq b$ with $a \trprec_I k$ if and only if there is $k' = n+1-k$ with $a' \leq k' < b'$ and $b' \trprec_{\compl(I)} k'$. \end{itemize} In other words, $(a,b)$ is a Tamari inversion of $I$ if and only if $(a',b')$ is a Tamari inversion of $\compl(I)$. By Proposition~\ref{prop:tamari-inversions}, this gives us $\distance(I) = \distance(\compl(I))$. \end{proof} You can check on Figure~\ref{fig:compl} that $I$ has 3 Tamari inversions $(1,5)$, $(2,3)$, and $(2,5)$, which give respectively the Tamari inversions $(4,8)$, $(6,7)$, and $(4,7)$ in $\compl(I)$. We are now able to state the following Theorem, which gives an explicit combinatorial proof of Theorem~\ref{thm:main-result-classical}. We give an example computation on Figure~\ref{fig:rise-contact-involution}. You can run more examples and compute tables for all intervals using the provided live {\tt Sage-Jupyter} notebook~\cite{JNotebook}. \begin{figure}[ht] \input{figures/beta} \caption{The rise-contact involution on an example.} \label{fig:rise-contact-involution} \end{figure} \begin{Theorem}[the rise-contact involution] \label{thm:rise-contact-statistics} Let $\risecontact$ be the \emph{rise-contact involution} defined by \begin{align} \risecontact = \leftbranch \circ \compl \circ \leftbranch. \end{align} Then $\risecontact$ is an involution on Tamari intervals such that, for an interval~$I$ and a commutative alphabet $X$, \begin{align} \label{eq:rise-contact-rises} \rises(I) &= \contacts(\risecontact(I)); \\ \label{eq:rise-contact-partitions} \risesP(I,X) &= \contactsP(\risecontact(I),X); \\ \label{eq:rise-contact-distance} \distance(I) &= \distance(\risecontact(I)). 
\end{align} \end{Theorem} \begin{proof} The operation $\risecontact$ is clearly an involution because it is the conjugate of the complement involution $\compl$ by the left-branch involution~$\leftbranch$. We obtain \eqref{eq:rise-contact-distance} immediately as the distance is constant through $\leftbranch$ by~\eqref{eq:leftbranch-distance} and through $\compl$ by Proposition~\ref{prop:complement-distance}. Now, using Propositions~\ref{prop:leftbranch-statistics} and \ref{prop:complement-ic-dc}, we have \begin{align} \contacts(\risecontact(I)) &= \contacts(\leftbranch \circ \compl \circ \leftbranch (I)) = \contacts(\leftbranch \circ \compl(I)) = \dcstep{0}(\leftbranch \circ \compl(I)) \\ &= \icinf(\leftbranch(I)) = \rises(I), \end{align} which proves \eqref{eq:rise-contact-rises}. Now, by Proposition~\ref{prop:leftbranch-statistics}, we have that $\contactsV(\leftbranch \circ \compl(I))$ is a permutation of $\contactsV(\leftbranch \circ \compl \circ \leftbranch (I))$. We then use Proposition~\ref{prop:complement-ic-dc} and again Proposition~\ref{prop:leftbranch-statistics}: \begin{align} \contactsV(\leftbranch \circ \compl(I)) &= \DC (\leftbranch \circ \compl(I)) = \IC(\leftbranch (I)) = \risesV(I). \end{align} This means that $\risesV(I)$ is a permutation of $\contactsV(\risecontact(I))$, and so, because $X$ is a commutative alphabet, \eqref{eq:rise-contact-partitions} holds. \end{proof} \begin{Remark} The reader might notice at this point that the notion of Tamari interval-poset is not completely necessary to the definition of the rise-contact involution. Indeed, one novelty of this paper is the introduction of the grafting tree, which, we believe, truly encapsulates the recursive structure of the Tamari intervals. As an example, it is an interesting (and easy) exercise to recover the functional equation first described in \cite{Chap} and later discussed in \cite{IntervalPosetsInitial} using solely grafting trees.
Nevertheless, please note that the rise-contact involution cannot be described using solely grafting trees. Indeed, grafting trees are the natural object to apply the left-branch involution but they do not behave nicely through the complement involution. In this case, the interval-posets turn out to be the most convenient object. The complement involution can also be described directly on intervals of binary trees but then it makes it more difficult to follow some statistics such as the distance. For these reasons, and also for convenience and reference to previous results, we have kept interval-posets central in this paper. \end{Remark} \begin{Remark} In \cite{DecompBeta10} and \cite{InvolutionBeta10}, the authors describe an interesting involution on $(1,0)$ description trees that leads to the equi-distribution of certain statistics. Their bijection is described recursively through grafting and up-raising of trees. Some similar operations can be defined on $(1,1)$ description trees. An interesting question is then: is there a direct description of the rise-contact involution on $(1,1)$ description trees? The answer is most probably \emph{yes}. Actually, this amounts to understanding the complement involution on $(1,1)$ description trees. We leave that for further research or curious readers. \end{Remark} \section{The $m$-Tamari case} \label{sec:mtam} \subsection{Definition and statement of the generalized result} \label{sec:mtam-def} The $m$-Tamari lattices are a generalization of the Tamari lattice where objects have an $(m+1)$-ary structure instead of a binary one. They were introduced in \cite{BergmTamari} and can be described in terms of $m$-ballot paths. An $m$-ballot path is a lattice path from $(0,0)$ to $(nm,n)$ made from horizontal steps $(1,0)$ and vertical steps $(0,1)$, which always stays above the line $y=\frac{x}{m}$.
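As a quick sanity check of this definition, small $m$-ballot paths can be generated by brute force. In the Python sketch below (the $0/1$ word encoding is our own: $1$ is a vertical step, $0$ a horizontal step), a prefix with $v$ vertical and $h$ horizontal steps stays above the line $y = \frac{x}{m}$ exactly when $mv \geq h$.

```python
# Hedged enumeration sketch; the 0/1 encoding of paths is our own choice.

def m_ballot_paths(n, m):
    """All m-ballot paths from (0,0) to (nm, n), as tuples of steps
    (1 = vertical, 0 = horizontal), staying above the line y = x/m."""
    def extend(v, h, word):
        if v == n and h == n * m:
            yield tuple(word)
            return
        if v < n:                          # a vertical step is always allowed
            yield from extend(v + 1, h, word + [1])
        if h < n * m and m * v >= h + 1:   # stay weakly above y = x/m
            yield from extend(v, h + 1, word + [0])
    yield from extend(0, 0, [])

# m = 1 recovers Dyck paths, counted by the Catalan numbers 1, 2, 5, 14, ...
assert [sum(1 for _ in m_ballot_paths(n, 1)) for n in (1, 2, 3, 4)] == [1, 2, 5, 14]
# For m = 2, the first counts are 1, 3, 12.
assert [sum(1 for _ in m_ballot_paths(n, 2)) for n in (1, 2, 3)] == [1, 3, 12]
```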
When $m=1$, an $m$-ballot path is just a Dyck path where up-steps and down-steps have been replaced by respectively vertical steps and horizontal steps. They are well known combinatorial objects counted by the $m$-Catalan numbers \begin{equation} \frac{1}{mn + 1} \binom{(m+1)n}{n}. \end{equation} They can also be interpreted as words on a binary alphabet and the notion of \emph{primitive path} still holds. Indeed, a primitive path is an $m$-ballot path which does not touch the line $y=\frac{x}{m}$ outside its end points. From this, the definition of the rotation on Dyck paths given in Section~\ref{sec:tamari} can be naturally extended to $m$-ballot paths, see Figure~\ref{fig:mpath-rot}. \begin{figure}[ht] \centering \input{figures/rotation_mpath} \caption{Rotation on $m$-ballot paths.} \label{fig:mpath-rot} \end{figure} When interpreted as a cover relation, the rotation on $m$-ballot paths induces a well-defined order, which is a lattice \cite{BergmTamari}. This is what we call the $m$-Tamari lattice or $\Tamnm$, see Figure~\ref{fig:mTamari} for an example. \begin{figure}[ht] \input{figures/mTamari-3-3-paths} \caption{$m$-Tamari on $m$-ballot paths: $\Tam{3}{2}$.} \label{fig:mTamari} \end{figure} The intervals of $m$-Tamari lattices have also been studied. In \cite{mTamari}, it was proved that they are counted by \begin{equation} \label{eq:m-intervals-formula} I_{n,m} = \frac{m+1}{n(mn +1)} \binom{(m+1)^2 n + m}{n - 1}. \end{equation} They were also studied in \cite{IntervalPosetsInitial} where it was shown that they are in bijection with some specific families of Tamari interval-posets. Our goal here is to use this characterization to generalize Theorem~\ref{thm:main-result-classical} to intervals of $m$-Tamari, thus proving Conjecture 17 of \cite{PRThesis}. First, let us introduce the $m$-statistics, which correspond to the classical statistics defined in Definition~\ref{def:contact-rise-dw}.
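As an aside, both counting formulas above are exact integer expressions and are easy to evaluate; the short Python sketch below (function names are ours) checks them on small values, the case $m = 1$ recovering the classical Catalan numbers and the Tamari interval counts $1, 3, 13, 68$.

```python
# Hedged numerical check of the two counting formulas; names are ours.
from math import comb

def m_catalan(n, m):
    """m-Catalan number: binom((m+1)n, n) / (mn + 1)."""
    num, den = comb((m + 1) * n, n), m * n + 1
    assert num % den == 0
    return num // den

def m_tamari_intervals(n, m):
    """The interval-counting formula for the m-Tamari lattice."""
    num = (m + 1) * comb((m + 1) ** 2 * n + m, n - 1)
    den = n * (m * n + 1)
    assert num % den == 0
    return num // den

assert [m_catalan(n, 1) for n in (1, 2, 3, 4)] == [1, 2, 5, 14]
assert [m_catalan(n, 2) for n in (1, 2, 3)] == [1, 3, 12]
assert [m_tamari_intervals(n, 1) for n in (1, 2, 3, 4)] == [1, 3, 13, 68]
assert [m_tamari_intervals(n, 2) for n in (1, 2)] == [1, 6]
```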
\begin{Definition} \label{def:m-contact-rise-dw} Let $B$ be an $m$-ballot path. We define the following $m$-\emph{statistics}. \begin{itemize} \item $\mcontacts(B)$ is the number of non-final contacts of the path $B$: the number of times the path $B$ touches the line $y=\frac{x}{m}$ outside the last point. \item $\mrises(B)$ is the initial rise of $B$: the number of initial consecutive vertical steps. \item Let $u_i$ be the $i^{th}$ vertical step of $B$, $(a,b)$ the coordinates of its starting point and $j$ an integer such that $1 \leq j \leq m$. We consider the line $\ell_{i,j}$ starting at $(a, b + \frac{j}{m})$ with slope $\frac{1}{m}$ and the portion of path $d_{i,j}$ of $B$ which starts at $(a,b+1)$ and stays above the line $\ell_{i,j}$. From this, we define $\mcontactsStep{i,j}(B)$ as the number of non-final contacts between $\ell_{i,j}$ and $d_{i,j}$. \item Let $v_i$ be the $i^{th}$ horizontal step of $B$; we call the number of consecutive vertical steps right after $v_i$ the \emph{$m$-rise} of $v_i$ and write it $\mrisesStep{i}(B)$. \item $\mcontactsV(B) := (\mcontacts(B), \mcontactsStep{1,1}(B), \dots, \mcontactsStep{1,m}(B), \dots, \mcontactsStep{n,1}(B), \dots, \mcontactsStep{n,m-1}(B))$ is the \emph{$m$-contact vector} of~$B$. \item $\mrisesV(B) := (\mrises(B), \mrisesStep{1}(B), \dots, \mrisesStep{nm-1}(B))$ is the \emph{$m$-rise vector} of~$B$. \item Let $X = (x_0, x_1, x_2, \dots)$ be a commutative alphabet; we write $\mcontactsP(B,X)$ for the monomial $x_{v_0} \dots x_{v_{nm-1}}$ where $\mcontactsV(B) = (v_0, \dots, v_{nm-1})$ and we call it the \emph{$m$-contact monomial} of $B$. \item Let $Y = (y_0, y_1, y_2, \dots)$ be a commutative alphabet; we write $\mrisesP(B,Y)$ for the monomial $y_{w_0} \dots y_{w_{nm-1}}$ where $\mrisesV(B) = (w_0, \dots, w_{nm-1})$ and we call it the \emph{$m$-rise monomial} of $B$. \end{itemize} Besides, we write $\size(B):=n$. An $m$-ballot path of size $n$ has $n$ vertical steps and $nm$ horizontal steps.
\end{Definition} \begin{figure}[ht] \input{figures/m-contacts-rises-example} \caption{The $m$-contacts and $m$-rises of a ballot path.} \label{fig:m-stat} \end{figure} An example is given in Figure~\ref{fig:m-stat}. When $m = 1$, this is the same as Definition~\ref{def:contact-rise-dw}. Note also that we will later define a bijection between $m$-ballot paths and certain families of Dyck paths which also extends to intervals: basically any element of $\Tamnm$ can also be seen as an element of $\Tamk{n \times m}$ but the statistics are not exactly preserved, which is why we use slightly different notations for $m$-statistics to avoid any confusion. Both $\mcontactsV(B)$ and $\mrisesV(B)$ are of size $nm$. Also, note that even though $\ell_{i,j}$ does not always start at an integer point, the contacts with the subpath $d_{i,j}$ only happen at integer points. Because the final contact is not counted, it can happen that $\mcontactsStep{i,j} = 0$ even when $d_{i,j}$ is not reduced to a single point. Indeed, the initial point is a contact only when $j = m$. In this case, the definition of $\mcontactsStep{i,m}$ is similar to the classical case from Definition~\ref{def:contact-rise-dw}. The $m$-rise vector somehow partitions the vertical steps and it is clear that $\sum_{0 \leq i \leq nm} \mrisesStep{i}(B) = n$. Actually, we also have $\sum_{0 \leq i \leq n; 1 \leq j \leq m} \mcontactsStep{i,j}(B) = n$. We see this through another description of the non-zero values of the vector which makes the relation to \cite[Conjecture 17]{PRThesis} explicit. \begin{Proposition} For each vertical step $u_i$ of an $m$-ballot path, let $a_i$ be the number of $1 \times 1$ squares that lie horizontally between the step $u_i$ and the line $y = \frac{x}{m}$. This gives us $a(B) = [a_1, \dots, a_n]$, the \emph{area vector} of~$B$.
We partition the values of $a(B)$ such that $a_i$ and $a_j$ are in the same set if $a_i = a_j$ and $a_{i'} \geq a_i$ for all $i'$ with $i \leq i' \leq j$. Let $\lambda = (\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_k) $ be the integer partition obtained by keeping only the set sizes and let $e(B,X) = x_{\lambda_1} \dots x_{\lambda_k}$ be a monomial on a commutative alphabet $X$. Then $e(B,X) = \mcontactsP(B,X)$ with $x_0 = 1$. \end{Proposition} The definition of $e(B,X)$ comes from \cite[Conjecture 17]{PRThesis}. As an example, the area vector of the path from Figure~\ref{fig:m-stat} is $(0,1,2,4,2,4,4,0)$. The set partition is $\lbrace \lbrace a_1, a_8 \rbrace, \lbrace a_2 \rbrace, \lbrace a_3, a_5 \rbrace, \lbrace a_4 \rbrace, \lbrace a_6, a_7 \rbrace \rbrace$. In particular, the area vector always starts with a 0 and each new 0 corresponds to a contact between the path and the line. Here, we get $\lambda = (2,2,2,1,1)$, which indeed gives $e(B,X) = x_1^2 x_2^3 = \mcontactsP(B,X)$ at $x_0 = 1$. \begin{proof} If the step $u_i$ starts at a point $(x,y)$, then we have by definition $my = x + a_i$. In particular, if $a_i = a_j$, then $u_i$ and $u_j$ both have a contact with a same affine line $s$ of slope $\frac{1}{m}$. Then $a_i$ and $a_j$ belong to the same set in the partition if and only if the path between $u_i$ and $u_j$ stays above the line $s$. More precisely, the line $s$ cuts a section $p$ of the path, starting at some point $(a, b + \frac{j}{m})$ where $(a,b)$ is the starting point of a vertical step and $1 \leq j \leq m$. The non-final contacts of this path $p$ with the line $s$ are exactly the vertical steps $u_k$ with $a_k = a_i$. The final contact corresponds either to the end of the path $B$ or to a horizontal step: it does not correspond to an area $a_k = a_i$. \end{proof} As for the classical case, we now extend those definitions to intervals of the $m$-Tamari lattice.
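The grouping rule of the proposition is straightforward to implement. The sketch below (a helper of our own, in plain Python) recovers the partition $\lambda$ from an area vector; the monomial $e(B,X)$ is then $x_{\lambda_1} \cdots x_{\lambda_k}$.

```python
def area_partition(area):
    # a_i and a_j lie in the same block when a_i = a_j and no entry
    # strictly smaller than a_i occurs between them
    sizes, used = [], [False] * len(area)
    for i, v in enumerate(area):
        if used[i]:
            continue
        used[i], size = True, 1
        for j in range(i + 1, len(area)):
            if area[j] < v:
                break          # the path dips below: the block ends here
            if area[j] == v:
                used[j], size = True, size + 1
        sizes.append(size)
    return sorted(sizes, reverse=True)
```

On the area vector $(0,1,2,4,2,4,4,0)$ of the running example this returns $(2,2,2,1,1)$, as expected.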
\begin{Definition} \label{def:m-contact-rise-intervals} Consider an interval $I$ of $\Tamnm$ described by two $m$-ballot paths $B_1$ and $B_2$ with $B_1 \leq B_2$. Then \begin{enumerate} \item $\mcontacts(I) := \mcontacts(B_1)$, $\mcontactsStep{i,j}(I):= \mcontactsStep{i,j}(B_1)$ for $1 \leq i \leq n$ and $1 \leq j \leq m$, $\mcontactsV(I):=\mcontactsV(B_1)$, and $\mcontactsP(I,X):=\mcontactsP(B_1,X)$; \item $\mrisesStep{i}(I):=\mrisesStep{i}(B_2)$ for $0 \leq i \leq mn$, $\mrisesV(I):=\mrisesV(B_2)$, and $\mrisesP(I,Y) := \mrisesP(B_2,Y)$. \end{enumerate} To summarize, all the statistics we defined on $m$-ballot paths are extended to $m$-Tamari intervals by looking at the \emph{lower bound} $m$-ballot path $B_1$ when considering contacts and the \emph{upper bound} $m$-ballot path $B_2$ when considering rises. Besides, we write $\size(I)$ for the size $n$ of the $m$-ballot paths $B_1$ and $B_2$. \end{Definition} Finally, the definition of \emph{distance} naturally extends to $m$-Tamari. \begin{Definition} \label{def:m-distance} Let $I = [B_1, B_2]$ be an interval of $\Tamnm$. We call the \emph{distance} of $I$, written $\distance(I)$, the maximal length of a chain between $B_1$ and $B_2$ in the $m$-Tamari lattice. \end{Definition} We can now state the generalized version of Theorem~\ref{thm:main-result-classical}. \begin{Theorem}[general case] \label{thm:main-result-general} Let $x,y,t,q$ be variables and $X = (x_0, x_1, x_2, \dots)$ and $Y = (y_0, y_1, y_2, \dots)$ be commutative alphabets. Consider the generating function \begin{equation} \Phi_m(t; x, y, X, Y, q) = \sum_{I} t^{\size(I)} x^{\mcontacts(I)} y^{\mrises(I)} \mcontactsP(I,X) \mrisesP(I,Y) q^{\distance(I)} \end{equation} summed over all intervals of the $m$-Tamari lattices. Then, for all $m$, we have \begin{equation} \Phi_m(t; x, y, X, Y, q) = \Phi_m(t; y, x, Y, X, q).
\end{equation} \end{Theorem} We will give a combinatorial proof of this result, describing an involution on intervals of $m$-Tamari lattices which uses the classical $\risecontact$ involution defined in Theorem~\ref{thm:rise-contact-statistics}. First, we will recall and reinterpret some results of \cite{IntervalPosetsInitial}. In particular, we recall how intervals of the $m$-Tamari lattice can be seen as interval-posets. \subsection{$m$-Tamari interval-posets} The $m$-Tamari lattice $\Tamnm$ is trivially isomorphic to an upper ideal of the classical Tamari lattice $\Tamk{n \times m}$. \begin{Definition} \label{m-dyck-paths} Let $B$ be an $m$-ballot path; we construct the Dyck path $\DD(B)$ by replacing every vertical step of $B$ by $m$ up-steps and every horizontal step of $B$ by a down-step. The images of this map are called the $m$-Dyck paths. \end{Definition} \begin{figure}[ht] \centering \input{figures/m-dyck-ballot} \caption{A $2$-ballot path and its corresponding $2$-Dyck path.} \label{fig:m-dyck} \end{figure} See Figure~\ref{fig:m-dyck} for an example. The $m$-Dyck paths have a trivial characterization: they are the Dyck paths whose rises are divisible by~$m$. In other words, a Dyck path $D$ is an $m$-Dyck path if and only if all values of $\risesV(D)$ are divisible by $m$. We say that they are \emph{rise-$m$-divisible}: the set of $m$-Dyck paths is exactly the set of rise-$m$-divisible Dyck paths. Besides, the set of $m$-Dyck paths is stable under the Tamari rotation. More precisely, they correspond to the upper ideal generated by the Dyck path $(1^m0^m)^n$ which is the image of the initial $m$-ballot path of $\Tamnm$, see Figure~\ref{fig:minimal-m-tam} for an example and \cite{mTamari} for more details. \begin{figure}[ht] \input{figures/minimal-m-tam} \caption{Minimal element of $\Tam{3}{2}$.} \label{fig:minimal-m-tam} \end{figure} We can read the $m$-statistics of an $m$-ballot path on its corresponding $m$-Dyck path.
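The map $\DD$ and the rise-divisibility criterion can be sketched as follows (plain Python; we encode a ballot path as a word on {'V','H'} and a Dyck path as a word on {'1','0'} — an encoding of our own choosing).

```python
import re

def ballot_to_m_dyck(ballot, m):
    # each vertical step becomes m up-steps, each horizontal step one down-step
    return ''.join('1' * m if step == 'V' else '0' for step in ballot)

def is_rise_m_divisible(dyck, m):
    # a Dyck path is an m-Dyck path iff every rise has length divisible by m
    return all(len(run) % m == 0 for run in re.findall('1+', dyck))
```

For instance, the $2$-ballot path VHVHHH maps to the $2$-Dyck path 11011000, whose rises all have even length.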
\begin{Proposition} \label{prop:m-ballot-m-dyck} Let $B$ be an $m$-ballot path of size $n$ and let $D = \DD(B)$. Then \begin{align} \mrisesStep{i}(B) &= \frac{1}{m}\risesStep{i}(D) & \text{ for } 0 \leq i \leq nm; \\ \mcontacts(B) &= \contacts(D); \\ \mcontactsStep{i,m}(B) &= \contactsStep{im}(D) & \text{ for } 1 \leq i \leq n; \\ \mcontactsStep{i,j}(B) &= \contactsStep{(i-1)m+j}(D) - 1 & \text{ for } 1 \leq i \leq n \text{ and } 1 \leq j < m. \end{align} \end{Proposition} \begin{proof} The result is clear for rises. For contacts, note that the $m$-Dyck path can be obtained from the ballot path by sending every point $(x,y)$ of the ballot path to $(my + x, my - x)$. In particular, every contact point between the ballot path and a line of slope $\frac{1}{m}$ is sent to a contact point between the $m$-Dyck path and a horizontal line. When $j\neq m$, the line $\ell_{i,j}$ starts at a non-integer point $(a, b+\frac{j}{m})$ which becomes $(mb + j + a, mb + j - a)$ in the $m$-Dyck path: it now counts for one extra contact when computing $\contactsStep{(i-1)m + j}$ in the $m$-Dyck path. \end{proof} For example, look at Figure~\ref{fig:m-dyck} and its $m$-contact vector in Figure~\ref{fig:m-stat}. The contact vector of its corresponding $2$-Dyck path is given by $\contactsV(D) = (2, \red{2}, 0, \red{3}, 0, \red{1}, 1, \red{1}, 0, \red{1}, 2, \red{1}, 0, \red{1}, 0, \red{1})$: for each even position, the number is the same and for each odd position (in red) the number is increased by 1. The rise-vector of the $m$-Dyck path is $\risesV(D) = (2, 2, 4, 0,0, 0, 4, 0, 2, 0,0,0,0,0, 2, 0)$: it is indeed the $m$-rise-vector of Figure~\ref{fig:m-stat} multiplied by 2.
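In terms of vectors, the proposition says that the $m$-contact vector of $B$ is obtained from the contact vector of $D$ by removing the artificial extra contact at every step position not divisible by $m$; a minimal sketch (the helper name is ours):

```python
def m_contact_vector(contacts_D, m):
    # contacts_D = (contacts(D), c_1(D), ..., c_{nm-1}(D));
    # positions k with k % m != 0 correspond to the lines l_{i,j} with j < m
    head, steps = contacts_D[0], contacts_D[1:]
    return [head] + [c - 1 if k % m else c
                     for k, c in enumerate(steps, start=1)]
```

Applied to the contact vector of the example above, it subtracts 1 at every odd position, consistently with the red entries.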
As the $m$-Tamari lattice can be understood as an upper ideal of the Tamari lattice, it follows that the intervals of $\Tamnm$ are actually a certain subset of intervals of $\Tamk{n \times m}$: they are the intervals whose upper and lower bounds are both $m$-Dyck paths (in practice, it is sufficient to check that the lower bound is an $m$-Dyck path). It is then possible to represent them as interval-posets. This was done in \cite{IntervalPosetsInitial} where the following characterization was given. \begin{Definition} \label{def:m-interval-posets} An $m$-interval-poset is an interval-poset of size $n\times m$ with \begin{align} \label{eq:m-condition} i m \trprec i m-1 \trprec \dots \trprec i m-(m-1) \end{align} for all $1 \leq i \leq n$. \end{Definition} \begin{Theorem}[Theorem 4.6 of \cite{IntervalPosetsInitial}] \label{thm:m-interval-posets} The $m$-interval-posets of size $n \times m$ are in bijection with intervals of $\Tam{n}{m}$. \end{Theorem} In Figure~\ref{fig:m-big-example}, you can see two examples of $m$-interval-posets with $m = 2$ and their corresponding $m$-ballot paths. To construct the interval-posets, you convert the ballot paths into $m$-Dyck paths and use the classical constructions of Propositions~\ref{prop:dyck-dec-forest} and~\ref{prop:dyck-inc-forest}. You can check that the result agrees with Definition~\ref{def:m-interval-posets}: for all $k$, $2k \trprec 2k-1$. The proof that it is a bijection uses the notion of $m$-binary trees. These are the binary trees of size $nm$ which belong to the upper ideal of $\Tamk{n \times m}$ corresponding to the $m$-Tamari lattice. This ideal is generated by the binary tree image of the initial $m$-Dyck path through the bijection of Definition~\ref{def:dyck-tree} as shown in Figure~\ref{fig:minimal-m-tam}. The $m$-binary trees have an $(m+1)$-ary recursive structure: this is the key element to prove Theorem~\ref{thm:m-interval-posets} and we will also use it in this paper.
\begin{Definition} \label{def:m-binary-tree} The $m$-binary trees are defined recursively by being either the empty binary tree or a binary tree $T$ of size $m \times n$ constructed from $m+1$ subtrees $T_L, T_{R_1}, \dots, T_{R_m}$ such that \begin{itemize} \item the sum of the sizes of $T_L, T_{R_1}, \dots, T_{R_m}$ is $mn - m$; \item each subtree $T_L, T_{R_1}, \dots, T_{R_m}$ is itself an $m$-binary tree; \item and $T$ follows the structure below. \end{itemize} \begin{center} \scalebox{0.6}{ \input{figures/m-binary-trees} } \end{center} The left subtree of $T$ is $T_L$. The right subtree of $T$ is constructed from $ T_{R_1}, \dots, T_{R_m}$ by the following process: graft an extra node to the left of the leftmost node of $T_{R_1}$, then graft $T_{R_2}$ to the right of this node, then graft an extra node to the left of the leftmost node of $T_{R_2}$, then graft $T_{R_3}$ to the right of this node, and so on. Note that in total, $m$ extra nodes were added: we call them the $m$-roots of $T$. \end{Definition} Figure~\ref{fig:mbinary} gives two examples of $m$-binary trees for $m=2$ with their decompositions into 3 subtrees. More examples and details about the structure can be found in \cite{IntervalPosetsInitial}. In particular, $m$-binary trees are the images of $m$-Dyck paths through the bijection of Definition~\ref{def:dyck-tree}. \begin{figure}[ht] \centering \begin{tabular}{c|c} \scalebox{0.5}{\input{figures/mbinary-example}} & \scalebox{0.5}{\input{figures/mbinary-example-2}} \end{tabular} \caption{Examples of $m$-binary trees for $m=2$: $T_L$ is in red, $T_{R_1}$ is in dotted blue and $T_{R_2}$ is in dashed green. In the second example, $T_{R_1}$ is empty.} \label{fig:mbinary} \end{figure} When working on the classical case, we could safely identify an interval of the Tamari lattice and its representing interval-poset. For $m\neq 1$, we need to be a bit more careful and clearly separate the two notions.
Indeed, the $m$-statistics from Definition~\ref{def:m-contact-rise-intervals} of an interval of $\Tamnm$ are not equal to the statistics of its corresponding interval-poset from Definition~\ref{def:contact-rise-intervals}. They can nevertheless be retrieved through simple operations. \begin{Proposition} \label{prop:m-statistics-interval-posets} Let $I$ be an interval of $\Tamnm$, and $\tI$ its corresponding interval-poset of size $nm$. Then \begin{align} \mrisesStep{i}(I) &= \frac{1}{m}\risesStep{i}(\tI) & \text{ for } 0 \leq i \leq nm; \\ \mcontacts(I) &= \contacts(\tI); \\ \mcontactsStep{i,m}(I) &= \contactsStep{im}(\tI) & \text{ for } 1 \leq i \leq n; \\ \mcontactsStep{i,j}(I) &= \contactsStep{(i-1)m+j}(\tI) - 1 & \text{ for } 1 \leq i \leq n \text{ and } 1 \leq j < m; \\ \label{eq:m-distance} \distance(I) &= \distance(\tI). \end{align} \end{Proposition} \begin{proof} All identities related to rises and contacts are a direct consequence of Proposition~\ref{prop:m-ballot-m-dyck}. Only~\eqref{eq:m-distance} needs to be proved, which is actually also direct: $\Tamnm$ is isomorphic to the ideal of $m$-Dyck paths in $\Tamk{n \times m}$ and so the distance between two paths in the lattice stays the same. \end{proof} \subsection{The expand-contract operation on $m$-Tamari intervals} \begin{Definition} \label{def:m-divisible} We say that an interval-poset $I$ of size $nm$ is \begin{itemize} \item \emph{contact-$m$-divisible} if all values of $\contactsV(I)$ are divisible by $m$; \item \emph{rise-$m$-divisible} if all values of $\risesV(I)$ are divisible by $m$; \item \emph{rise-contact-$m$-divisible} if it is both contact-$m$-divisible and rise-$m$-divisible. \end{itemize} \end{Definition} In particular, $m$-interval-posets are rise-$m$-divisible but not necessarily contact-$m$-divisible. Besides, we saw that rise-$m$-divisible Dyck paths were exactly $m$-Dyck paths, but the set of rise-$m$-divisible interval-posets is not equal to the set of $m$-interval-posets.
Indeed, an interval whose upper bound is an $m$-Dyck path is rise-$m$-divisible but it can have a lower bound which is not an $m$-Dyck path and so it is not an $m$-interval-poset. Furthermore, it is quite clear that the set of $m$-interval-posets is not stable through the rise-contact involution $\risecontact$. Indeed, the image of an $m$-interval-poset would be contact-$m$-divisible but not necessarily rise-$m$-divisible. In this section, we describe a bijection between the set of $m$-interval-posets and the set of rise-contact-$m$-divisible intervals. This bijection will allow us to define an involution on $m$-interval-posets which proves Theorem~\ref{thm:main-result-general}. \begin{Definition} Let $(T, \ell)$ be a grafting tree of size $nm$ and $v_1, \dots, v_{nm}$ be the nodes of $T$ taken in in-order. We say that $(T, \ell)$ is an $m$-grafting-tree if $\ell(v_i) \geq 1$ for all $i$ such that $i \not \equiv 0 \mod m$. \end{Definition} \begin{Proposition} \label{prop:m-graft-m-interval} An interval-poset $I$ is an $m$-interval-poset if and only if $\graftingTree(I)$ is an $m$-grafting-tree. \end{Proposition} As an example, the top and bottom grafting trees of Figure~\ref{fig:m-big-example} are $m$-grafting trees: you can check that every odd node has a non-zero label. The corresponding $m$-interval-posets are drawn on the same lines. Proposition~\ref{prop:m-graft-m-interval} is a direct consequence of Proposition~\ref{prop:grafting-direct} and Definition~\ref{def:m-interval-posets}. Indeed, to obtain \eqref{eq:m-condition}, it is sufficient to say that every node $i$ of the interval-poset such that $i \not \equiv 0 \mod m$ has at least one decreasing child $j > i$ such that $j \trprec i$. By definition of an interval-poset, this gives $i + 1 \trprec i$. \begin{Proposition} \label{prop:m-graft-m-binary} Let $(T,\ell)$ be an $m$-grafting-tree; then $T$ is an $m$-binary-tree.
\end{Proposition} \begin{proof} This is immediate by Proposition~\ref{prop:grafting-direct}: $(T,\ell)$ corresponds to an $m$-interval-poset $I$. In particular, the upper bound of $I$ is an $m$-binary tree which is equal to $T$. \end{proof} \begin{Proposition} \label{prop:m-exp-cont} Let $(T, \ell)$ be an $m$-grafting-tree, and $v_1, \dots, v_{nm}$ its nodes taken in in-order. The \emph{expansion of $(T,\ell)$} is $\expand(T,\ell) = (T', \ell')$ defined by \begin{itemize} \item $T' = T$; \item $\ell'(v_i) = m \ell(v_i)$ if $i \equiv 0 \mod m$, otherwise, $\ell'(v_i) = m (\ell(v_i) - 1)$. \end{itemize} Then $\expand$ defines a bijection through their grafting trees between $m$-interval-posets and rise-contact-$m$-divisible interval-posets. The reverse operation is called \emph{contraction}; we write $(T,\ell) = \contract(T, \ell')$. Besides, we have \begin{equation} \label{eq:m-exp-contacts} \contacts(T,\ell') = m \contacts(T,\ell). \end{equation} Note that we write $\contacts(T,\ell)$ for $\contacts(\graftingTree^{-1}(T,\ell))$ for short. \end{Proposition} The intuition behind this operation is first that the relations $im \trprec \dots \trprec im - (m-1)$ are not necessary to recover the $m$-interval-poset (because they are always present) and secondly that the structure of the $m$-binary tree makes it possible to replace each remaining decreasing relation by $m$ decreasing relations. Nevertheless, even if the operation is easy to follow on grafting trees (and the proof mostly straightforward), we would very much like to see a ``better'' description of it directly on Tamari intervals. \begin{proof} This proposition contains different results, which we organize as claims and prove separately. \begin{proofclaim} \label{claim:expand} $(T, \ell') = \expand(T, \ell)$ is a grafting tree such that $\contacts(T, \ell') = m\contacts(T, \ell)$.
\end{proofclaim} These two properties are intrinsically linked; we will prove both at the same time by induction on the recursive structure of $m$-binary-trees. Let $(T, \ell)$ be an $m$-grafting tree. By Proposition~\ref{prop:m-graft-m-binary}, $T$ is an $m$-binary tree. If $T$ is empty, then there is nothing to prove. Let us suppose that $T$ is non-empty: it can be decomposed into $m+1$ subtrees $T_L, T_{R_1}, \dots, T_{R_m}$ which are all $m$-grafting trees. By induction, we suppose that they satisfy the claim. Let us first focus on the case where $T_L$ is the empty tree. Then $v_1$ (the first node in in-order) is the root, and moreover, the $m$-roots are $v_1, \dots, v_m$. We call $T_1, T_2, \dots, T_m$ the subtrees of $T$ whose roots are respectively $v_1, \dots, v_m$ (in particular, $T_1 = T$). See Figure~\ref{fig:m-exp-cont} for an illustration. \begin{figure}[ht] \input{figures/proof_exp} \caption{Illustration of $T_1, \dots, T_m$} \label{fig:m-exp-cont} \end{figure} In particular, for $ 1 \leq k < m$, the tree $T_k$ follows a structure that depends on $T_{R_k}$ and $T_{k+1}$ as shown in Figure~\ref{fig:m-exp-cont} and $T_m$ depends only on $T_{R_m}$. Note that $T_2, \dots, T_m$ are grafting trees but they are not $m$-grafting trees whereas $T_{R_1}, \dots, T_{R_m}$ are. Following Definition~\ref{def:grafting-tree}, the structure gives us \begin{align} \label{eq:m-exp-cont-tvk} \ell(v_k) &\leq \size(T_{R_k}) + \size(T_{k+1}) - \labels(T_{R_k}, \ell) - \labels(T_{k+1}, \ell) \\ \nonumber &= \contacts(T_{R_k}, \ell) + \contacts(T_{k+1},\ell) \end{align} for $1 \leq k < m$ and \begin{equation} \label{eq:m-exp-cont-tvm} \ell(v_m) \leq \contacts(T_{R_m}, \ell). \end{equation} Also, for $1 \leq k < m$, we have $\ell'(v_k) = m(\ell(v_k) -1) \geq 0$ (indeed remember that $\ell(v_k) \geq 1$ because $(T,\ell)$ is an $m$-grafting-tree) and $\ell'(v_m) = m \ell(v_m) \geq 0$.
To prove that $(T,\ell')$ is a grafting tree, we need to show \begin{align} \label{eq:m-exp-cont-graft} \ell'(v_k) &\leq \contacts(T_{R_k}, \ell') + \contacts(T_{k+1},\ell'); \\ \label{eq:m-exp-cont-graftm} \ell'(v_m) &\leq \contacts(T_{R_m}, \ell'). \end{align} We simultaneously prove \begin{equation} \label{eq:m-exp-cont-contacts} \contacts(T_k, \ell') = m \contacts(T_k, \ell) - k + 1. \end{equation} The case $k=1$ of \eqref{eq:m-exp-cont-contacts} completes the proof of the claim. We start with $k=m$ and then induct on $k$, decreasing down to $1$. By hypothesis, we know that $(T_{R_m}, \ell)$ satisfies the claim. In particular, $(T_{R_m}, \ell')$ is a grafting tree and $\contacts(T_{R_m}, \ell') = m \contacts(T_{R_m}, \ell)$. By definition, we have $\ell'(v_m) = m \ell(v_m)$ and so \eqref{eq:m-exp-cont-tvm} implies \eqref{eq:m-exp-cont-graftm}. Besides, \begin{align} \contacts(T_m, \ell) &= \size(T_m) - \labels(T_m, \ell) \\ \nonumber &= 1 + \size(T_{R_m}) - \ell(v_m) - \labels(T_{R_m}, \ell) \\ \nonumber &= 1 - \ell(v_m) + \contacts(T_{R_m}, \ell), \\ \label{eq:m-exp-cont-contacts-m} m \contacts(T_m, \ell) - m + 1 &= m - m\ell(v_m) + m \contacts(T_{R_m}, \ell) - m + 1 \\ \nonumber &= 1 - \ell'(v_m) + \contacts(T_{R_m}, \ell') \\ \nonumber &= \contacts(T_m, \ell'), \end{align} \emph{i.e.}, case $k=m$ of \eqref{eq:m-exp-cont-contacts}. Now, we choose $1 \leq i <m$ and assume \eqref{eq:m-exp-cont-graft} and \eqref{eq:m-exp-cont-contacts} to be true for $k > i$. We have $\ell'(v_i) = m \left( \ell(v_i) - 1 \right)$, so \eqref{eq:m-exp-cont-tvk} gives us \begin{align} \ell'(v_i) &\leq m \contacts(T_{R_i}, \ell) + m \contacts(T_{i+1}, \ell) - m \\ \nonumber &= \contacts(T_{R_i}, \ell') + \contacts(T_{i+1}, \ell') + i - m \end{align} using \eqref{eq:m-exp-cont-contacts} with $k = i+1$. As $i < m$, this proves \eqref{eq:m-exp-cont-graft} for $k=i$.
Now, the structure of $T_i$ gives us \begin{equation} \contacts(T_i, \ell) = \contacts(T_{R_i}, \ell) + \contacts(T_{i+1}, \ell) + 1 - \ell(v_i); \end{equation} \begin{align} \label{eq:m-exp-cont-contactsi} \contacts(T_i, \ell') &= \contacts(T_{R_i}, \ell') + \contacts(T_{i+1}, \ell') + 1 - \ell'(v_i) \\ \nonumber &= m \contacts(T_{R_i}, \ell) + m \contacts(T_{i+1}, \ell) - (i+1) + 1 + 1 - m (\ell(v_i) - 1) \\ \nonumber &= m \contacts(T_i, \ell) - i + 1. \end{align} The case where $T_L$ is not the empty tree is left to consider but actually follows directly. The claim is true on $T_L$ by induction as its size is strictly smaller than that of $T$. Let $\tilde{T}$ be the tree $T$ where you remove the left subtree $T_L$. Then $\tilde{T}$ is still an $m$-grafting tree and the above proof applies. The expansion on $T$ consists of applying the expansion independently on $T_L$ and $\tilde{T}$ and we get $\contacts(T, \ell') = \contacts(T_L, \ell') + \contacts(\tilde{T}, \ell') = m \contacts(T, \ell)$. \begin{proofclaim} $(T, \ell') = \expand(T, \ell)$ is rise-contact-$m$-divisible. \end{proofclaim} $T$ is still an $m$-binary tree, which, by Proposition~\ref{prop:grafting-direct}, means that the upper bound of $\graftingTree^{-1}(T, \ell')$ is an $m$-binary tree: it corresponds to an $m$-Dyck path and is then $m$-rise-divisible. We have just proved that $\contacts(T, \ell') = m \contacts(T, \ell)$ is a multiple of $m$. By Proposition~\ref{prop:grafting-contact}, the rest of the contact vector is given by reading the labels on $T$: by definition of $\ell'$, all labels are multiples of $m$. \begin{proofclaim} Let $(T, \ell')$ be a rise-contact-$m$-divisible grafting tree; then $(T, \ell) = \contract(T, \ell')$ is an $m$-grafting tree. \end{proofclaim} We define $(T, \ell) = \contract(T, \ell')$ to make it the inverse of the $\expand$ operation: \begin{align} \ell(v_i) &= \frac{\ell'(v_i)}{m} &\text{if } i \equiv 0 \mod m \\ \ell(v_{i}) &= \frac{\ell'(v_i)}{m} + 1 &\text{otherwise}.
\end{align} As earlier, we simultaneously prove that $(T, \ell)$ is an $m$-grafting tree and that $\contacts(T,\ell) = \frac{\contacts(T,\ell')}{m}$. Our proof follows the exact same scheme as for Claim~\ref{claim:expand}. First note that the fact that $(T,\ell')$ is rise-$m$-divisible implies that $T$ is an $m$-binary tree: indeed, it corresponds to a certain Dyck path which is rise-$m$-divisible. When $T$ is not empty, we can recursively decompose it into $T_L$, $T_{R_1}, \dots, T_{R_m}$. As earlier, the only case to consider is actually when $T_L$ is empty. We use the decomposition of $T$ depicted in Figure~\ref{fig:m-exp-cont} and prove \eqref{eq:m-exp-cont-contacts} and~\eqref{eq:m-exp-cont-tvk} by induction on $k$ decreasing from $m$ to $1$. The case where $k = m$ is straightforward: we have that \eqref{eq:m-exp-cont-graftm} implies \eqref{eq:m-exp-cont-tvm} and \eqref{eq:m-exp-cont-contacts-m} is still true. Now, we choose $1 \leq i <m$ and assume \eqref{eq:m-exp-cont-tvk} and \eqref{eq:m-exp-cont-contacts} to be true for $k > i$. Using \eqref{eq:m-exp-cont-graft}, we get \begin{align} m ( \ell(v_i) - 1) &\leq \contacts(T_{R_i}, \ell') + \contacts(T_{i+1}, \ell') \\ \nonumber &= m \contacts(T_{R_i}, \ell) + m \contacts(T_{i+1}, \ell) -(i+1) +1 \\ \nonumber \ell(v_i) &\leq \contacts(T_{R_i}, \ell) + \contacts(T_{i+1}, \ell) - \frac{i}{m} + 1 . \end{align} Since $0 < \frac{i}{m} < 1$ and $\ell(v_i)$ is an integer, \eqref{eq:m-exp-cont-tvk} holds. Besides, by definition of $\ell$, $\ell(v_i) \geq 1$, which satisfies the $m$-grafting tree condition. The rest of the induction goes smoothly because \eqref{eq:m-exp-cont-contactsi} is still valid. \end{proof} The $\expand$ and $\contract$ operations are the final crucial steps that allow us to define the $m$-rise-contact involution and prove Theorem~\ref{thm:main-result-general}. Before that, we need one last property to understand how the distance statistic behaves through the transformation.
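On the label sequence of a grafting tree (nodes listed in in-order, positions 1-indexed), $\expand$ and $\contract$ act coordinatewise; here is a minimal sketch in plain Python (function names and the list encoding are our own choices):

```python
def expand_labels(labels, m):
    # position i: l -> m*l when m divides i, and m*(l - 1) otherwise
    return [m * l if i % m == 0 else m * (l - 1)
            for i, l in enumerate(labels, start=1)]

def contract_labels(labels, m):
    # inverse rescaling, assuming every input label is a multiple of m
    return [l // m if i % m == 0 else l // m + 1
            for i, l in enumerate(labels, start=1)]
```

One checks that `contract_labels(expand_labels(L, m), m)` returns `L`, and that every expanded label is a multiple of $m$, matching the fact that the image is contact-$m$-divisible.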
\begin{Proposition} \label{prop:m-exp-distance} Let $(T, \ell)$ be an $m$-grafting tree of size $mn$, and $(T,\ell') = \expand(T,\ell)$, then \begin{equation} \distance(T,\ell') = m \distance(T, \ell) + \frac{n m (m-1)}{2} \end{equation} \end{Proposition} \begin{proof} For each vertex $v_i$ of $T$, let $d_i(T,\ell) = \size(T_R(v_i)) - \labels(T_R(v_i), \ell) - \ell(v_i)$ where $T_R(v_i)$ is the right subtree of the vertex $v_i$ in $T$ and remember that $d(T,\ell) = \sum_{i=1}^{nm} d_i$ by Proposition~\ref{prop:grafting-tree-distance}. We claim that \begin{equation} d_{im - j}(T, \ell') = m d_{im - j}(T, \ell) + j \end{equation} for $1 \leq i \leq n$ and $0 \leq j < m$, which gives the result by summation. We prove our claim by induction on $n$. Let us suppose that $T$ is not empty and decomposes into $T_L, T_{R_1}, \dots, T_{R_m}$. The result is true by induction on the subtrees: indeed the index of a given vertex (in in-order) in $T$ and in its corresponding subtree is the same modulo $m$. It remains to prove the property for the $m$-roots of $T$, which are given by $\lbrace v_{im - j} ; 0 \leq j < m \rbrace$ for some $1 \leq i \leq n$. We use the decomposition of Figure~\ref{fig:m-exp-cont}. Remember that $T_{R_m}$ is an $m$-grafting tree and we have by Proposition~\ref{prop:m-exp-cont} that $\contacts(T_{R_m},\ell') = m\contacts(T_{R_m},\ell)$. We get \begin{align} d_{im}(T, \ell') &= \size(T_{R_m}) - \labels(T_{R_m}, \ell') - \ell'(v_{im}) \\ \nonumber &= \contacts(T_{R_m},\ell') - \ell'(v_{im}) \\ \nonumber &= m \contacts(T_{R_m},\ell) - m \ell(v_{im}) \\ \nonumber &= m d_{im}(T,\ell). 
\end{align} Now, remember that by the decomposition of Figure~\ref{fig:m-exp-cont}, $T_R(v_{im-j})$ is made of $T_{R_{m-j}}$ (which is an $m$-grafting tree) with $T_{m-j+1}$ grafted on its leftmost branch. Using this and \eqref{eq:m-exp-cont-contacts}, we get \begin{align} d_{im-j}(T,\ell') &= \size(T_R(v_{im-j})) - \labels(T_R(v_{im-j}),\ell') - \ell'(v_{im-j}) \\ \nonumber &= \size(T_{R_{m-j}}) + \size(T_{m-j+1}) - \labels(T_{R_{m-j}}, \ell') - \labels(T_{m-j+1},\ell') - \ell'(v_{im-j}) \\ \nonumber &= \contacts(T_{R_{m-j}}, \ell') + \contacts(T_{m-j+1},\ell') - \ell'(v_{im-j}) \\ \nonumber &= m \contacts(T_{R_{m-j}}, \ell) + m \contacts(T_{m-j+1},\ell) -(m-j+1) + 1 - m(\ell(v_{im-j}) -1) \\ \nonumber &= m\, d_{im-j}(T,\ell) + j. \end{align} \end{proof} \begin{Theorem}[The $m$-rise-contact involution] \label{thm:m-rise-contact-involution} Let $\mrisecontact$ be the \emph{$m$-rise-contact involution} defined on $m$-interval-posets by \begin{align} \mrisecontact = \contract \circ \risecontact \circ \expand. \end{align} Then $\mrisecontact$ is an involution on intervals of $\Tamnm$, such that for an interval~$I$ and a commutative alphabet $X$, \begin{align} \label{eq:m-rise-contact-rises} \mrises(I) &= \mcontacts(\mrisecontact(I)); \\ \label{eq:m-rise-contact-partitions} \mrisesP(I,X) &= \mcontactsP(\mrisecontact(I),X); \\ \label{eq:m-rise-contact-distance} \distance(I) &= \distance(\mrisecontact(I)). \end{align} \end{Theorem} \begin{proof} Let $I$ be an interval of $\Tamnm$ with $\tilde{I}$ its corresponding $m$-interval-poset in $\Tamk{n \times m}$ and let $(T,\ell) = \expand(\tilde{I})$ be the expansion of its $m$-grafting-tree.
By Propositions~\ref{prop:m-statistics-interval-posets} and~\ref{prop:m-exp-cont}, we have \begin{align} \label{eq:m-rise-contact-proof1} \contacts(T,\ell) &= m \contacts(\tilde{I}) = m (\mcontacts(I)) \\ \label{eq:m-rise-contact-proof2} \contactsStep{im}(T, \ell) &= m \contactsStep{im}(\tilde{I}) = m (\mcontactsStep{i,m}(I)) \\ \label{eq:m-rise-contact-proof3} \contactsStep{(i-1)m + j}(T, \ell) &= m (\contactsStep{(i-1)m +j}(\tilde{I}) - 1) = m (\mcontactsStep{i,j}(I)) \end{align} for $1 \leq i \leq n$ and $1 \leq j < m$. Using again Proposition~\ref{prop:m-statistics-interval-posets} and the fact that the expansion does not affect the initial forest, we have \begin{align} \label{eq:m-rise-contact-proof4} \risesStep{i}(T,\ell) &= \risesStep{i}(\tilde{I}) = m (\mrisesStep{i}(I)) \end{align} for $0 \leq i \leq mn$. In other words, $(T,\ell)$ is rise-contact-$m$-divisible. Let $(T', \ell') = \risecontact(T, \ell)$. By Theorem~\ref{thm:rise-contact-statistics}, we have that \begin{align} \label{eq:m-rise-contact-proof5} \rises(T,\ell) &= \contacts(T',\ell'); \\ \label{eq:m-rise-contact-proof6} \risesP(T,\ell,X) &= \contactsP(T',\ell',X); \\ \label{eq:m-rise-contact-proof7} \distance(T,\ell) &= \distance(T',\ell'). \end{align} In particular, this means that $(T',\ell')$ is still rise-contact-$m$-divisible: we can apply the $\contract$ operation and we get an $m$-interval-poset $\tilde{J}$ of $\Tamk{n \times m}$, which corresponds to some interval $J$ of $\Tamnm$. This proves that $\mrisecontact$ is well-defined and is an involution by construction. Using \eqref{eq:m-rise-contact-proof4} followed by \eqref{eq:m-rise-contact-proof5} then by \eqref{eq:m-rise-contact-proof1} on $(T',\ell')$, $J$ and $\tilde{J}$, we obtain \eqref{eq:m-rise-contact-rises}. The result \eqref{eq:m-rise-contact-partitions} follows in a similar way.
The equality \eqref{eq:m-rise-contact-proof4} tells us that the rise vector of $(T,\ell)$ is the rise vector of $I$ where every value has been multiplied by $m$. Now \eqref{eq:m-rise-contact-proof6} says that the contact vector of $(T',\ell')$ is a permutation of the rise vector of $(T,\ell)$. Finally, we apply \eqref{eq:m-rise-contact-proof1}, \eqref{eq:m-rise-contact-proof2}, and \eqref{eq:m-rise-contact-proof3} to $(T',\ell')$, $\tilde{J}$, and $J$ instead of $(T,\ell)$, $\tilde{I}$, and $I$, and we get the equality \eqref{eq:m-rise-contact-partitions} between the rise and contact partitions. For \eqref{eq:m-rise-contact-distance}, note from \eqref{eq:m-rise-contact-proof7} that the distance statistic is not affected by $\risecontact$. Proposition~\ref{prop:m-exp-distance} tells us that $\expand$ applies an affine transformation which does not depend on the shape of $T$; it is then reverted by the later application of $\contract$. \end{proof} Figure~\ref{fig:m-big-example} shows a complete example of the $\mrisecontact$ involution on an interval of $\Tam{11}{2}$. You can run more examples and compute tables for all intervals using the provided live {\tt Sage-Jupyter} notebook~\cite{JNotebook}. \bibliographystyle{alpha}
\section{Introduction} The advent of social media marked a significant change in media production and distribution, which was previously monopolized by large, often state-owned institutions \cite{Loader2011}. Social media enables users to share their own opinions and present alternative viewpoints, thereby changing users' roles from passive content consumers to prosumers (producers and consumers). Social media helps build online communities, encourages debate, and mobilizes users, and therefore can promote democracy \cite{Price2013}. However, despite the initial enthusiasm about the democracy-promoting role of social media, such media can be manipulated by various parties \cite{Arif2018}. For example, non-democratic governments allegedly hire online commentators to spread speculation, propaganda, and disinformation so as to push agendas and manipulate public opinion \cite{Sobolev2019, Zannettou2019b}. Such paid commentators are referred to as trolls \cite{Mihaylov2015, Llewellyn2019}. An alleged system of professional paid trolls exists in Russia coordinated by a company called Internet Research Agency (IRA) \cite{OfficeoftheDirectorofNationalIntelligence2017}, which is also referred to as a troll factory (e.g., \cite{Evstatieva2018}). The Russian trolls are alleged to have interfered with several international political and social events, most prominently the 2016 US presidential election \cite{OfficeoftheDirectorofNationalIntelligence2017}. Such claims have triggered a new research stream on the operations and impact of paid Internet trolls, particularly those associated with IRA (e.g., \cite{gorrell2019partisanship, Howard2018, Zannettou2019b}). However, such studies are still scarce \cite{Tucker2018}. Furthermore, while foreign-targeted operations of Russian trolls have received some attention, as far as we know, no studies have analyzed their domestic and regional-targeted activities. 
To address this gap, this paper analyzes the domestic and regional operations of Russian trolls by studying their Russian-language Twitter posts (tweets) from a dataset recently released by Twitter \cite{Twitter2019}. Although the procedure of troll detection adopted by Twitter remains unclear, we assume that the identified troll accounts indeed belonged to paid and centrally coordinated commentators. This study is the first to characterize the domestic and regional operations of Russian trolls based on the data provided by Twitter. We discover that the crash of Malaysia Airlines flight MH17 prompted the largest information campaign of Russian-language trolls. Therefore, the paper further focuses on the trolls' reaction to the crash of MH17. Overall, the issue of state-sponsored organized trolls remains important, with novel disinformation campaigns even targeting the COVID-19 pandemic \cite{gabrielle2020}. This paper is structured as follows. Section \ref{data} introduces the data analyzed. Section \ref{relatedwork} presents the related work. Section \ref{content_analysis} analyzes hashtags, retweets, and URLs that Russian trolls used in their domestic and regional operations. Section \ref{temporal_patterns} studies temporal posting patterns of troll tweets. Section \ref{mh17} focuses on the MH17-related activities of trolls. Finally, results are discussed and conclusions are drawn in Section \ref{discussion}. \section{Data} \label{data} In 2017, due to the investigation into Russian involvement in the 2016 U.S. elections and to increase transparency into foreign influence on its platform, Twitter released a list of accounts believed to be associated with the IRA. The troll tweets posted from these accounts were then first collected and released by researchers from Clemson University\footnote{https://github.com/fivethirtyeight/russian-troll-tweets} and later by Twitter itself \cite{Twitter2019}.
Specifically, between October 2018 and September 2019 Twitter released 9,691,682 tweets published between May 2009 and November 2018 by 3843 troll accounts related to the IRA \cite{Twitter2019}. The dataset includes all public, non-deleted tweets from such accounts. The dataset contains the following information: \begin{itemize} \item \textbf{Tweet}: ID, language, text, time, and name of client application used to post the tweet. When applicable, it also includes the ID of the tweet and user that the current tweet replied to, reposted (retweeted), quoted, or mentioned; the number of times the current tweet was quoted, replied to, liked, and retweeted\footnote{The counts of likes and retweets exclude engagements from users who were suspended or deleted at the moment of the data release (e.g., trolls)}; and the list of hashtags, URLs, and tweet geolocation. \item \textbf{User/Profile}: ID, anonymized display and screen names, self-reported location, description of profile, language, creation date, number of followers and followed accounts at the time of suspension. \end{itemize} The language of 50.15\% (around 4.86M) of the troll tweets is Russian. We focus our analyses on these Russian-language tweets and trolls (though we also perform some comparisons against the English-language trolls and tweets). These tweets were posted by 1551 accounts between January 2010 and September 2018, with 97\% of these tweets falling between 2014 and 2017. The number of trolls with at least one tweet in a month exceeded 1000 from September 2014 to October 2015 and reached a maximum of 1115 in April 2015. At the same time, the number of tweets posted during these months varied considerably from 74K to 413K. For reference, Figure \ref{fig:total_number_of_tweets_and_trolls} details the number of tweets and trolls per month over time. Further, we only analyze tweets posted in 2014-2017 to concentrate on the period of trolls' highest activity. \begin{figure}[tbp!]
\centering \includegraphics[width=\columnwidth]{total_number_of_tweets_and_trolls.pdf} \caption{Number of troll tweets (solid blue) and trolls (dashed red) over time} \Description{97\% of tweets were published in 2014-2017. The number of active trolls exceeded 1000 from September 2014 to October 2015 while the number of tweets varied from 74000 to 413000 during these months.} \label{fig:total_number_of_tweets_and_trolls} \end{figure} \section{Related Work} \label{relatedwork} \subsection{Quantitative studies of Russian trolls} The data on troll accounts and tweets released by Twitter has started a wave of quantitative research on Russian trolls. Table \ref{tab:quantitative_studies} provides an overview of such studies. \begin{table*}[tbp!] \caption{Quantitative studies of the activity of organized trolls on social media} \label{tab:quantitative_studies} \begin{tabular}{p{0.5cm}p{5.5cm}p{7cm}p{3cm}} \toprule Ref & Data & Research objectives & Scope (case) \\ \midrule \cite{Addawood2019} & 1.2M tweets by 1148 troll accounts; 12.4M tweets by 1.2M ordinary users & \textit{Detect}: Identify linguistic markers of deception in trolls' posts and test their applicability for troll detection & US Elections; Broad, exploratory \\ \cite{Badawy2019} & 540K "original" tweets by 1148 troll accounts; 13.6M tweets of ordinary users & \textit{Assess the impact}: Understand what users fall for propaganda spread by IRA trolls & US Elections \\ \cite{Boyd} & 3500 Facebook ads ordered by IRA; tweets of 969 trolls and 1078 ordinary users & \textit{Detect}: Conduct linguistic analysis to separate the posts of IRA trolls from native English-speaking users & US Elections; Broad, exploratory \\ \cite{Broniatowski2018} & 899 vaccination-related troll tweets; 1.8M tweets of ordinary users & \textit{Describe}: Understand the role of bots and trolls in vaccination-related discussion & Vaccination debate \\ \cite{Ghanem2019} & 1.8M tweets by 2K trolls; 1.9M tweets by 95K ordinary users & 
\textit{Detect}: Define approach of detecting trolls on Twitter based on textual features & US Elections \\ \cite{gorrell2019partisanship} & 9M tweets by 3.8K trolls; 13.2M tweets by 1.8M ordinary users; & \textit{Describe; assess the impact}: Define the role and impact of "politically-motivated actors", including IRA trolls & Brexit; Broad, exploratory \\ \cite{Howard2018} & Data provided by Facebook, Twitter, and Google to Senate: posts, ads, accounts & \textit{Describe}: Analyze trolls' activity over several social media platforms & US Elections and other political events \\ \cite{Im2019} & 347K tweets by 2.2K trolls; 30M tweets by 171K ordinary users & \textit{Detect}: Develop machine learning models to detect trolls, apply the models on currently active accounts & Broad, exploratory \\ \cite{Kim2019} & 1.7M tweets by 733 trolls & \textit{Describe}: Propose a classification framework to define the trolls' identity and social role & US Elections \\ \cite{Linvill2019a} & 3.2M tweets by 2K trolls & \textit{Describe}: Categorize trolls by role in political discussion & US Elections \\ \cite{Llewellyn2019} & 3485 Brexit-related tweets by 419 trolls & \textit{Describe}: Analyze behavior shifts of trolls over time & Brexit \\ \cite{Sobolev2019} & 500M posts by 700 trolls on Live Journal and 80K discussions with trolls' participation & \textit{Assess the impact}: Identify the impact of trolls on conversations in social media & Political and social events in Russia \\ \cite{Zannettou2019} & 1.8M images from 9M troll tweets & \textit{Describe}: Characterize image-posting activity of trolls & Broad, exploratory \\ \cite{Zannettou2019a} & 27K tweets by 1K trolls; 96K tweets by 1K ordinary users & \textit{Describe; assess the impact}: Characterize trolls' operations and their influence on the greater Web & Broad, exploratory \\ \cite{Zannettou2019b} & 10M posts by 5.5K Russian (IRA) and Iranian trolls on Twitter and Reddit & \textit{Describe; assess the impact}: Identify 
and compare strategies of different groups of trolls (IRA vs Iranian), analyze the impact & Broad, exploratory \\ \bottomrule \end{tabular} \end{table*} Quantitative troll studies can be divided into those (1) focusing on describing the behavior, strategy, and operations of trolls; (2) assessing the impact of trolls' operations; and (3) proposing methods for troll detection. The first group of studies is mostly descriptive and exploratory, characterizing the operations of trolls along multiple dimensions, including the use of hashtags, URLs, and retweets (e.g., \cite{gorrell2019partisanship, Zannettou2019a, Zannettou2019b}). The second group of studies typically assesses the impact of trolls' operations by tracking the engagement with trolls within the social network and in the greater Web \cite{Badawy2019, Sobolev2019, Zannettou2019a, Zannettou2019b}. The third group of studies often uses linguistic features (such as features common to native Russian speakers in English-language conversations) to detect trolls \cite{Boyd, Ghanem2019}, although some studies (e.g., \cite{Im2019}) use a broader set of features for troll detection, including profile-, behavior-, and stop word usage-related features. Quantitative troll studies could further be divided into those taking a broad scope and those focusing on a particular propaganda campaign, most often the 2016 US Elections \cite{Kim2019, Linvill2019a, Howard2018, Addawood2019, Badawy2019, Boyd} and Brexit \cite{gorrell2019partisanship, Llewellyn2019}. This paper belongs to the first group of studies, first taking a broad scope and then focusing on the MH17 campaign. Quantitative troll studies have typically analyzed textual data from Twitter (Table \ref{tab:quantitative_studies}).
However, a few studies also examined Facebook ads purchased by IRA \cite{Boyd}, images posted on Twitter \cite{Zannettou2019}, as well as posts on other social networks, such as Reddit \cite{Zannettou2019b} and the blogging platform LiveJournal \cite{Sobolev2019} (popular in Russia). Furthermore, the first multi-platform troll studies have started to appear \cite{Howard2018}. Previous research suggests that the behavior of IRA trolls significantly differs from that of ordinary social media users in posting patterns and language use. Namely, on Twitter, trolls often exhibit abnormal tweet and retweet rates, use more hashtags, and share more URLs \cite{Addawood2019, Im2019}. Moreover, they post shorter tweets with shorter words, use fewer words that indicate causation, and use fewer emojis \cite{Addawood2019, Boyd}. Furthermore, the trolls differ from each other. For example, \cite{Linvill2019a} identified five groups of trolls based on their behavior: right troll, left troll, news feed, hashtag gamer, and fearmonger. Finally, \cite{Kim2019} observed that the strategic behavior of trolls changes over time. Among the studies of Russian trolls, our work most closely relates to \cite{Zannettou2019a, Zannettou2019b} in taking an exploratory approach and characterizing the activity of trolls across various dimensions. However, unlike \cite{Zannettou2019a, Zannettou2019b}, which analyze the content of English-language troll tweets and other troll activity (e.g., URL sharing) without language separation, we focus on Russian-language troll operations. \subsection{The Crash of MH17} Malaysia Airlines flight MH17 en route from Amsterdam to Kuala Lumpur was shot down in Eastern Ukraine on the 17th of July 2014. Shortly after the incident, Western media accused separatists from the self-proclaimed Donetsk People's Republic (DPR) of shooting down the plane with a Buk missile system.
In turn, the separatists and some Russian media stated that the Ukrainian Armed Forces were to blame \cite{Oates2016}. The results of a criminal investigation by the Joint Investigation Team (JIT) published in September 2016 indicated that the Buk missile system was transported from Russia to territory controlled by separatists the day before the incident \cite{OpenbaarMinisterie2016}. The crash of MH17 sparked an intense debate on social media. In \cite{Golovchenko2018}, the authors collected tweets related to the incident and manually marked a sample of English-language tweets based on the judgements they expressed. 10.3\% of tweets appeared to be pro-Ukrainian, and 5.5\% -- pro-Russian, while the rest did not take a side. The authors found that pro-Russian and pro-Ukrainian tweets were mostly spread by accounts identified as ordinary citizens rather than media figures or politicians. However, they did not consider the role of trolls in the discussion. Furthermore, no academic studies have analyzed how trolls reacted to the crash of MH17. Dutch journalists have analyzed the Twitter-released troll data and found that the two days after the crash of MH17 were the most active ever for the trolls in terms of the number of tweets \cite{VanderNoordaa2019}. 66,000 of the July 18-19 tweets included the hashtags ``Kiev Shot Down Boeing'', ``Kiev's Provocation'', and ``Kiev Tell the Truth''. The authors further found that the first posts after the incident reported that the militia of the DPR brought down an AN-26 transport plane, although no plane other than MH17 was downed on that day. \section{Content Analysis} \label{content_analysis} \subsection{Hashtags and Hashtag Campaigns} We first study the hashtags of the troll tweets.
About 17\% of tweets include at least one hashtag (vs 47\%, $p<0.001$\footnote{Unless noted otherwise, the statistical tests in the paper are two-sided $\chi^2$ proportion tests from Stata 16.0 with clustered standard errors to account for, for example, multiple tweets per troll.}, for English-language troll tweets), with an average of 1.39 hashtags per tweet among tweets that contain hashtags. Table \ref{tab:hashtags} shows the most used hashtags. Similar to previous work on English-language troll tweets \cite{Zannettou2019b}, we find that the most common Russian hashtag is \#News (\foreignlanguage{russian}{Новости}), which accounts for 4.96\% of all hashtags used by trolls (9.47\%, $p<0.01$, for hashtags used by English-language trolls). Other popular apolitical hashtags include \#Auto (\foreignlanguage{russian}{Авто}), \#Sport (\foreignlanguage{russian}{Спорт}), and \#Cinema (\foreignlanguage{russian}{Кино}). Furthermore, several top hashtags relate to geographic locations, such as \#StPetersburg\footnote{More precisely, \textit{\#spb (\foreignlanguage{russian}{спб})}, which stands for Saint Petersburg}, \#Russia (\foreignlanguage{russian}{Россия}), and \#Ukraine (\foreignlanguage{russian}{Украина}). Some hashtags have a political sentiment, such as \#KievsProvocation (\foreignlanguage{russian}{ПровокацияКиева}), which was actively used after the crash of Malaysia Airlines' flight MH17. Other political hashtags relate to the US (\#ReturnCalifornia -- \foreignlanguage{russian}{ВернитеКалифорнию}) and Ukraine (\#PanicInKiev -- \foreignlanguage{russian}{ПаникаВКиеве}). Interestingly, one of the top hashtags (\#RunZelensky -- \foreignlanguage{russian}{ЗеленскийБеги}) refers to Vladimir Zelensky, then a well-known figure in Russian media who was later elected as the president of Ukraine. Overall, we observe more event- and person-specific, as well as politically colored, hashtags than among the 20 hashtags most used in English-language operations. \begin{table}[tbp!]
\caption{Trolls' most used hashtags (translated)} \label{tab:hashtags} \begin{tabular}{p{2.25cm}p{0.9cm}p{2.25cm}p{0.9cm}} \toprule Hashtag & Share & Hashtag & Share \\ \midrule News & 4.96\% & ImageOfRussia & 1.28\% \\ StPetersburg & 4.62\% & Putin & 1.08\% \\ Russia & 3.45\% & Sport & 1.04\% \\ RussianSpirit & 2.65\% & ReturnCalifornia & 0.98\%\\ NevskieNews & 2.18\% & Cinema & 0.94\% \\ KievsProvocation & 1.98\% & BattleOfOligarchs & 0.87\% \\ KievShotDownBoeing & 1.97\% & Music & 0.80\% \\ KievTellTheTruth & 1.95\% & RunZelensky & 0.73\% \\ Ukraine & 1.85\% & Politics & 0.73\% \\ Auto & 1.30\% & Football & 0.72\% \\ \bottomrule \end{tabular}% \end{table} Further, we check whether hashtags can indicate propaganda campaigns run by Russian trolls. To detect such propaganda \textit{hashtag campaigns}, we first select hashtags used by trolls more than 500 times (303 hashtags) with at least 95\% of their occurrences within one month (165 hashtags). Next, we explore tweets containing the selected hashtags and manually classify them based on their subject and sentiment. Some of the campaigns included several hashtags; we remove such duplicates (8 hashtags). Moreover, some hashtag campaigns had two subjects and sentiments, for example, attacking Ukraine and praising Russia simultaneously. In such cases, two subjects and sentiments are marked (7 campaigns). Figure \ref{fig:hashtag_campaigns} illustrates the focus of the trolls' hashtag campaigns over time. Most of the campaigns were run between June 2014 and November 2015, with only one detected outside this range (in April 2016, not shown in the figure). The analysis shows 163 campaigns divided into seven categories. \begin{figure}[tbp!]
\centering \includegraphics[width=\columnwidth]{hashtag_campaigns_over_time5.pdf} \caption{Trolls' hashtag campaigns by category over time (+,-,0 indicates the positive, negative, or neutral sentiment of each campaign category)} \Description{163 hashtag campaigns were detected. 48 praised Russia and Putin, 43 criticized Ukraine, and 15 criticized the US and Obama.} \label{fig:hashtag_campaigns} \end{figure} Figure \ref{fig:hashtag_campaigns} demonstrates that the main focuses of Russian-language trolls were praising Russia and Putin (48 campaigns), criticizing Ukraine (43 campaigns), and criticizing the USA and Obama (15 campaigns). We notice that about half of the anti-Ukrainian and anti-USA-and-Obama campaigns were run in July-September 2014. Anti-Ukrainian campaigns further exhibited an increase in January-February 2015; however, there was a surprising decline in October-December 2014. Patriotic pro-Russian and pro-Putin campaigns ran steadily between July 2014 and September 2015, except for January and February 2015. Eight campaigns criticized other countries: three were related to the European Union sanctions targeting Russia (e.g., \#AgainstSanctions -- \foreignlanguage{russian}{ПротивСанкций}), two were against Canada (e.g., \#nocanada in November 2014), two attacked Turkey and were related to a Russian military jet downed by Turkey in Syria in November 2015 (e.g., \#TurkeyAggressor), and one criticized Armenia for demonstrations in June 2015 (\#YerevanBeSmart). Several campaigns attacked particular Russian and foreign figures (e.g., \#SomeoneWhoKillsChildren against the then president of Ukraine Petro Poroshenko). Further, about 18\% of all hashtag campaigns were apolitical. More than half of such hashtags appeared in October-December 2014, when anti-Ukraine campaigns were on the decline.
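As an illustration, the campaign filter described above (hashtags used more than 500 times, with at least 95\% of the uses concentrated in a single month) can be sketched in a few lines of Python; the input format and function name are our own choices for illustration, not part of the released dataset:

```python
# Sketch of the hashtag-campaign filter: keep hashtags with more than
# `min_uses` occurrences, at least a `concentration` share of which
# fall within a single calendar month.
from collections import Counter, defaultdict

def candidate_campaign_hashtags(occurrences, min_uses=500, concentration=0.95):
    """occurrences: iterable of (hashtag, 'YYYY-MM') pairs, one per use."""
    by_tag = defaultdict(Counter)
    for tag, month in occurrences:
        by_tag[tag][month] += 1
    selected = []
    for tag, months in by_tag.items():
        total = sum(months.values())
        if total <= min_uses:
            continue
        _, top = months.most_common(1)[0]  # busiest month for this hashtag
        if top / total >= concentration:
            selected.append(tag)
    return selected
```

A bursty hashtag such as \#KievsProvocation (almost all uses within a single month) passes this filter, while an evergreen hashtag such as \#News, spread over many months, does not.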
Finally, among the miscellaneous hashtags with varying subjects and sentiments, three were pro-Ukraine campaigns run in February 2015 related to the Minsk II agreement\footnote{https://www.bbc.com/news/world-europe-31436513} (e.g., \#MinskHope -- \foreignlanguage{russian}{МинскаяНадежда}); two were anti-LGBT campaigns from July 2015 (e.g., \#StopLGBT -- \foreignlanguage{russian}{СтопЛГБТ}), and several campaigns addressed internal Russian events. For example, in December 2014 trolls blamed speculators for the Russian Ruble depreciation (\#Speculators -- \foreignlanguage{russian}{Спекулянты}). The hashtag campaign analysis illustrates the wide variation in topics that the trolls addressed. Interestingly, trolls devoted more attention to foreign rather than internal Russian affairs. Furthermore, in some campaigns, particularly anti-USA and -Obama, they addressed events that were not related to Russia. For example, trolls participated in the \#IHaveADream campaign about a black teen killed by a policeman in the US, and \#LatteSalute about Obama saluting Marines while holding a coffee cup. At the same time, only six campaigns criticized internal Russian events and figures. Also, although some of the hashtags were extensively used, only about 3\% of hashtags appeared in the data more than 100 times. Further, about 45\% of hashtags were used only once, as the blue curve in Figure \ref{fig:ECDF_Hashtags_RT_URL} illustrates, compared with 50\% ($p<0.001$) for English-language hashtags. Therefore, although trolls seem to have been somewhat free in choosing hashtags for their tweets, hashtag posting by Russian-language trolls was slightly more centralized than in English-language operations. \subsection{Retweets and Shared URLs} Next, we analyze the use of retweets and URLs in troll tweets. Around 42.5\% of all tweets were retweets (RTs) (vs 44.2\%, $p>0.05$, for English-language troll tweets). 
Around 63.2\% contained a URL (vs 31.5\%, $p<0.001$, for English-language troll tweets and 26\% for ordinary user tweets \cite{Zannettou2019a}). Combined, RTs and tweets with a URL accounted for 75.6\% of all troll tweets. The portion of tweets with an RT or URL varied over time from around 20\% in February 2014 to 94\% in December 2015. Around 35.9\% of the retweets were of other trolls' tweets. Other retweets included posts of mass media accounts (with the top three being RIA Novosti, Gazeta.ru, and RT), public figures, and personal blogs. Furthermore, we find that the average number of retweets of a troll's tweet (with retweets of other trolls excluded) is 3.1. Only about 20\% of the original troll tweets were retweeted at least once, as the orange curve in Figure \ref{fig:ECDF_Hashtags_RT_URL} indicates. In addition, only about 3\% of retweeted tweets were shared more than 100 times. Therefore, engagement with trolls as measured by the number of retweets is relatively small. \begin{figure} \centering \includegraphics[width=\columnwidth]{cdf_rt_url_hashtags.pdf} \caption{ECDF of the counts of hashtag uses, retweets, and shared URLs per domain} \Description{Only 20\% of tweets were retweeted at least once. About 45\% of hashtags were used once only.} \label{fig:ECDF_Hashtags_RT_URL} \end{figure} We further analyze retweeting and URL-sharing activity of trolls through the distribution of tweet types across ``active'' trolls that have at least 100 tweets in the data. Figure \ref{fig:accounts_by_tweet_type_ECDF} illustrates the empirical cumulative distribution functions (ECDF) of trolls by tweet types. The median share of RTs among trolls' tweets is around 54\%, meaning that RTs account for at least 54\% of tweets for half of the trolls. Furthermore, half of the trolls include a URL in at least 66\% of their tweets, which is much larger than the share of 15\% reported for ordinary users \cite{Benevenuto2010}.
This observation is consistent with previous research \cite{Addawood2019}. Finally, 50\% of trolls included an RT or URL (or both) in at least 85\% of their tweets, thereby resharing information either directly (RT) or indirectly (URL). Furthermore, we analyze the most popular domains referred to in troll tweets. Shortened links accounted for around 40\% of all shared URLs, dominated by the services bit.ly (62\% of shortened links), dlvr.it (11.1\%), and goo.gl (9.5\%). Therefore, we create a script that visits each link and captures the final domain after all redirections. However, due to the unreachability of some shortened links, we capture the domain information of around 84\% of URLs. Table \ref{tab:urls} shows the top 20 most common domains of URLs shared by trolls. More than one fifth of URLs link to LiveJournal, a popular Russian blogging platform. Other frequently referred domains include the news agencies FAN (riafan.ru) and Nevsky News (nevnov.ru), which the US government links to IRA\footnote{https://home.treasury.gov/news/press-releases/sm577}; sites publishing news about Ukraine (kievsmi.net, kiev-news.com, and emaidan.com.ua), and social media platforms (youtube.com, vk.com, and twitter.com). We further analyze the number of URLs per domain and find that 40\% of domains are referred to only once, while 6\% of domains are referred to more than 100 times (as the green curve in Figure \ref{fig:ECDF_Hashtags_RT_URL} indicates).
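The resolution script described above can be sketched as follows. The function names are our own; the redirect-following step assumes network access and the third-party {\tt requests} library, while the domain extraction uses only the standard library:

```python
# Resolve shortened URLs (bit.ly, dlvr.it, goo.gl, ...) to their final
# domain. `resolve_final_domain` needs network access and a requests
# session; `registrable_domain` is pure string handling.
from urllib.parse import urlparse

def registrable_domain(url):
    """Strip scheme, path, and a leading 'www.' from a resolved URL."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def resolve_final_domain(short_url, session):
    # session: e.g. requests.Session(); allow_redirects follows the whole
    # redirect chain, and resp.url is the final landing page.
    resp = session.head(short_url, allow_redirects=True, timeout=10)
    return registrable_domain(resp.url)
```

In practice a {\tt GET} fallback is useful for servers that reject {\tt HEAD} requests; dead shorteners and unreachable hosts are one reason a fraction of links stays unresolved.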
\begin{figure} \centering \includegraphics[width=\columnwidth]{CDF_share_of_retweets_active.pdf} \caption{ECDF of active trolls ($\geq$100 tweets/year) by tweet types} \Description{Around 75\% of trolls re-shared content through RT and URLs in more than half of their tweets.} \label{fig:accounts_by_tweet_type_ECDF} \end{figure} \begin{table} \caption{The most common web domains referred to by trolls} \label{tab:urls} \begin{tabular}{p{2.1cm}p{1.15cm}p{2.1cm}p{1.15cm}} \toprule Domain & Share & Domain & Share\\ \midrule livejournal.com & 20.7\% & inforeactor.ru & 1.5\% \\ riafan.ru & 16.8\% & vk.com & 1.5\% \\ gazeta.ru & 5.1\% & tass.ru & 1.3\% \\ ria.ru & 4.5\% & politexpert.net & 1.3\% \\ rt.com & 3.1\% & emaidan.com.ua & 1.1\% \\ nevnov.ru & 2.3\% & lenta.ru & 1.0\% \\ vesti.ru & 2.1\% & rbc.ru & 1.0\% \\ kievsmi.net & 1.8\% & twitter.com & 1.0\%\\ youtube.com & 1.7\% & lifenews.ru & 0.9\%\\ kiev-news.com & 1.7\% & podrobnosti.biz & 0.8\%\\ \bottomrule \end{tabular}% \end{table} \section{Temporal posting patterns} \label{temporal_patterns} First, we analyze the temporal posting patterns of trolls. For each troll, we calculate a common temporal burstiness measure $B$ \cite{Goh2008} of their tweets' inter-arrival times (IATs, the times have a one minute granularity). The measure is defined as \begin{displaymath} \ B= \frac{\sigma_{\tau} - \mu_{\tau}}{\sigma_{\tau} + \mu_{\tau}} \end{displaymath} where $\mu_{\tau}$ is the sample mean and $\sigma_{\tau}$ is the sample standard deviation of the IAT distribution $\tau$. $B$ can vary from -1 to 1, with $B=1$ corresponding to a highly bursty signal, $B=0$ to a random (Poissonian) signal, and $B=-1$ to a highly regular (periodic) signal. We calculate $B$ for each troll that posted at least 30 tweets in a given year. Figure \ref{fig:iat_burstiness} illustrates the ECDF of $B$ across trolls by year. We find that at least 95\% of trolls exhibited quite periodic posting patterns with $B<0$. 
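The measure is straightforward to compute per troll; a minimal pure-Python sketch (IATs in minutes, population standard deviation) is:

```python
# Burstiness B = (sigma - mu) / (sigma + mu) of a troll's tweet
# inter-arrival times: B -> -1 for a periodic signal, 0 for a
# Poissonian one, and +1 for a highly bursty one.
from statistics import mean, pstdev

def burstiness(iats):
    """iats: inter-arrival times in minutes; needs at least two values
    and a nonzero mean."""
    mu, sigma = mean(iats), pstdev(iats)
    return (sigma - mu) / (sigma + mu)
```

A troll posting exactly every 20 minutes gets $B = -1$, while a mostly silent account firing a sudden burst gets a positive $B$, so the strongly negative values observed here point to scheduled posting.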
The burstiness for ordinary Twitter users (and for many other human-driven activities) is around 0.2 to 0.4 \cite{Kim2016,Goh2008}. Therefore, we hypothesize that trolls use automation tools for tweet posting. We further observe that burstiness varied over the years. Namely, in 2014, posting patterns were the most periodic, with a median $B=-0.73$, whereas in 2015 and 2016 the median burstiness increased to $-0.38$ and $-0.36$, respectively, potentially indicating a lower degree of automation use. Next, we analyze the periodicity of tweet posting on an aggregate level. Namely, we examine the frequency distribution for IATs that were calculated separately for each troll and pooled together. We construct the frequency distribution for every year from 2014 to 2017. The distributions in Figure \ref{fig:interarrival_time} show cyclical patterns in tweet posting activity. We observe that posting patterns also differ across years. Namely, in 2014, peaks of IATs were generally multiples of three, particularly after IAT = 90. In 2015, peaks of IATs were spread around multiples of 20, generally in the interval $[20n - 2, 20n+1]$. In 2016-2017, patterns were similar, with IAT peaks at multiples of 30 minutes as well as at 10, 15, 20, and 48 minutes. However, in 2017 the hourly peaks were more prominent than in 2016. Overall, the periodic temporal patterns in Figure \ref{fig:interarrival_time} reinforce the automation findings from Figure \ref{fig:iat_burstiness}. Furthermore, given that the analysis was conducted on an aggregate level, we infer that some trolls used automation tools with the same or similar settings. For example, in 2015, about 47\% of trolls had a larger share of IATs in $[20n - 2, 20n+1]$ (multiples of 20 min) than the 20\% we would expect given uniformly random automation settings.
We also note, however, that the share of such automated tweets rarely exceeded 50\% for any troll, suggesting that trolls likely also posted manually or changed automation settings. \begin{figure} \centering \includegraphics[width=\columnwidth]{burstinesses_years_ecdf.pdf} \caption{ECDF of temporal burstiness of active trolls ($\geq$30 tweets/year) by year} \Description{Burstiness is negative for most of the trolls, suggesting the trolls exhibit periodic patterns.} \label{fig:iat_burstiness} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{IAT_minutes_years.pdf} \caption{Frequency distribution for the inter-arrival times of troll tweets by year} \Description{Inter-arrival time (IAT) of trolls' tweets shows cyclical patterns that change over time.} \label{fig:interarrival_time} \end{figure} \section{Reacting to MH17 crash} \label{mh17} Next, we analyze the reaction of trolls to the crash of Malaysia Airlines flight MH17. Trolls used the hashtags \#KievsProvocation (\foreignlanguage{russian}{ПровокацияКиева}), \#KievShotDownBoeing (\foreignlanguage{russian}{КиевСбилБоинг}), and \#KievTellTheTruth (\foreignlanguage{russian}{КиевСкажиПравду}) during the two days after the incident. We find troll tweets related to the incident using the above-defined hashtags and the keywords ``mh17'' and ``boeing'' with Russian and English spellings, as well as Russian equivalents of the words ``shot down'' (\foreignlanguage{russian}{сбил*, сбит*}), ``buk*'' (\foreignlanguage{russian}{бук*}), ``crash*'' (\foreignlanguage{russian}{катастроф*}), ``Malaysia*'' (\foreignlanguage{russian}{малази*, малайз*}), and ``plane'' (\foreignlanguage{russian}{самолет*}). Furthermore, we only select tweets posted from 17.07.2014 14:00 (the approximate time of the plane crash) until the end of July 2014. This results in 71,023 tweets, 92.4\% of which were posted with at least one of the three mentioned hashtags, making the operation one of the largest run by the Russian trolls.
This observation is in line with \cite{VanderNoordaa2019}. The content of tweets was often exactly the same for each of the three campaign hashtags. Nevertheless, perhaps due to space restrictions, most troll tweets used only one of the hashtags. Figure \ref{fig:MH17_tweet_count} shows the number of troll tweets related to the crash of MH17 published between 14:00 17.07 and 14:00 20.07, as 99\% of such tweets fall into this interval. The hashtag campaigns started at around 9:00 on 18.07 and ended abruptly at around 12:00 on 19.07. The mean tweet rate during the campaign was 2484 tweets per hour, 9.4 times higher than the mean for July 2014. The mean hourly number of active trolls during the MH17 campaign was around 288, compared to 98 over all of July 2014. \begin{figure} \centering \includegraphics[width=\columnwidth]{KTTT_vs_other_hour.pdf} \caption{Number of hourly troll tweets related to MH17 over time} \Description{MH17 hashtag campaign started on 18.07.2014 at around 9:00 and ended abruptly at around 12:00 on 19.07.2014.} \label{fig:MH17_tweet_count} \end{figure} \subsection{First reactions} Table \ref{tab:first_MH17_tweets} shows the tweets that trolls posted during the first several hours after the crash, including the tweet time (UTC) and the number of similar tweets posted within 10 minutes of each first tweet. This can show whether tweet posting was coordinated. The very first tweet related to the crash was posted at 15:01, stating that the militia of the DPR had shot down a Ukrainian transport plane (an AN-26). The tweet referred to a news article published by the Federal News Agency\footnote{http://riafan.ru/2014/07/17/29576-opolchentsyi-dnr-sbili-transportnyiy-an-26-pod-gorodom-snezhnoe/}. The link to the article is still active, although the article has no connection to the original title indicated in the URL. At 15:40, the first tweets informing about the crash of MH17 appeared.
At 15:59, the first tweet that blamed Ukraine for the incident was posted. Surprisingly, starting from 16:14, ten tweets appeared that repeated the statement that the DPR had shot down a Ukrainian transport plane but included the hashtag \#mh17. Moreover, one of the trolls first posted a tweet about the crash of MH17, and later about the shot-down transport plane. Finally, at 16:21 and 16:41 the authorities of the DPR first denied their involvement and then blamed Ukrainian armed forces. At 16:45, a single tweet was posted stating that a plane was shot down by Russia, with a link to a suspended blog post. At 17:19, some trolls posted tweets saying the DPR was framed by Ukrainian air controllers who had sent the plane into the firing zone. Finally, at 17:21 trolls stated, with reference to the Luhansk People's Republic (LPR), that MH17 was shot down by a Ukrainian attack plane that was later destroyed. The analysis of first reactions shows that at first trolls relied on information from separatists, who initially reported the downing of a military AN-26 plane. Even after the crash of MH17 had been confirmed, some trolls did not seem to have realized that these two messages referred to the same incident. After the situation was clarified, trolls started to spread inconsistent messages placing responsibility on Ukraine, separatists, and even Russia. Therefore, it seems that in an initial {\it fog of war} scenario, the trolls were more interested in spreading confusion and mistrust than in waiting to develop a coherent narrative and strategy. This approach allowed them to move quickly (within minutes of the incident) and in a decentralized manner.
\begin{table} \caption{First reactions of trolls to MH17 crash and number of similar tweets within 10 mins} \label{tab:first_MH17_tweets} \begin{tabular}{p{0.7cm}p{5.3cm}p{1.3cm}} \toprule Time & Tweet text (translated) & Tweets in 10 min \\ \midrule 15:01 & The militia of DPR shot down a transport AN-26 near the town of Snizhne \textit{URL} & 14 \\ 15:40 & Malaysian plane Amsterdam – Kuala Lumpur crashed on the border of Ukraine and Russia \textit{URL} & 19 \\ 15:59 & Daily Bacchanalia in Ukraine. Today a plane was accidentally shot down. It's terrible, comrades, how can one live like that? & 1 \\ 16:14 & The militia reports: another plane of Ukrainian air forces was destroyed \#mh17 ukraine-russia & 10 \\ 16:21 & DPR denies the involvement in the crash of Malaysian plane \textit{URL} & 10 \\ 16:41 & The authorities of DPR blamed Ukrainian armed forces for the crash of Malaysian Boeing \textit{URL} & 19 \\ 16:45 & A plane was shot down by Russia \textit{URL} & 1 \\ 17:19 & RT: \#Ukraine frames (makes) \#DPR shoot down an international plane by sending it through air controllers to the firing zone \textit{URL} & 3 \\ 17:21 & Boeing was shot down by Ukrainian attack plane, which was later destroyed - LPR (Luhansk People's Republic) \textit{URL} & 18 \\ \bottomrule \end{tabular}% \end{table} \subsection{Tweet text analysis} We observe that many tweets from different trolls include exactly the same text (while not being retweets). To test this, we tokenize tweets into sentences. We find 94 tokens that were used more than 500 times, with the most common being "why shoot down a civilian plane" (\foreignlanguage{russian}{зачем сбивать гражданский самолет}), "what did you expect" (\foreignlanguage{russian}{а вы чего ждали}), "do you agree with me" (\foreignlanguage{russian}{а вы со мной согласны}), and "here is what my friend posted" (\foreignlanguage{russian}{вот друг мой опубликовал}). 
Many tweets combined two or three of such sentences, e.g., \textit{"There are as many opinions as there are people. Why shoot down a civilian plane? Ukrops went totally nuts"}. Interestingly, some of these common sentences were also used in other anti-Ukrainian and anti-USA hashtag campaigns later on, such as \#ReturnCalifornia (\foreignlanguage{russian}{ВернитеКалифорнию}) in September 2014 and \#BattleOfOligarchs (\foreignlanguage{russian}{БитваОлигархов}) in March 2015. Further, we analyze the text of trolls' tweets related to the MH17 incident to understand their purpose and main message. Specifically, we use a two-stage approach. In the first stage, we cluster tweets using a machine learning approach. In the second stage, we hand-code tweet clusters into six categories using an inductive content analysis approach \cite{Elo2008}. \subsubsection{Clustering stage} Tweets are first lemmatized using the Yandex MyStem 3.1 morphological analyzer\footnote{https://yandex.ru/dev/mystem/; https://github.com/nlpub/pymystem3}. Then stopwords and punctuation are removed. Apart from the common stopwords, we also remove the three most common hashtags discussed above. Further, tweets are tokenized into words, and short tweets with fewer than 7 tokens as well as tokens that occurred only once are filtered out. The remaining 41143 tweets contain 1401 unique tokens. We calculate the term frequency-inverse document frequency (TF-IDF) for the corpus of tweets and the pairwise cosine similarities between the TF-IDF vectors of tweets. We also test embedding-based representations of tweets, but TF-IDF performs well in grouping very similar (almost duplicate) tweets, and therefore we select it as the more intuitive approach. Next, we cluster tweets using agglomerative hierarchical clustering. Average linkage is selected as it produces the most convincing clusters.
We test several cluster cut-off levels from 0.3 to 0.7 in increments of 0.05 and compare the resulting clusters using the silhouette coefficient. Lower cut-off levels result in a higher silhouette score but also larger numbers of clusters, particularly singletons. Therefore, we choose the cut-off level of 0.5 as a trade-off resulting in a reasonably small number of clusters for hand-coding (n = 873, including 200 singletons) and an acceptable silhouette coefficient (0.44)\footnote{At a cut-off level of 0.4, the silhouette score is only 7\% higher (0.47), yet the number of clusters is 115\% higher (1883).}. \subsubsection{Coding stage} The clusters are coded by the main author based on the text of five randomly selected tweets from each cluster. For clusters with fewer than 5 tweets, all available tweets are inspected. The coding scheme is developed iteratively to represent the main purpose and message of tweet clusters. To ensure the reliability of coding, 15\% of clusters (n=131) are also independently coded by another researcher with native Russian language skills. Cohen's kappa score of inter-rater reliability on the sample varies from 0.79 to 0.92 for different coding categories, indicating at least a substantial level of agreement \cite{Landis1977}. For coding, first the tweet text (excluding hashtags) is examined to determine whether the text actually mentions or implies the MH17 incident. If so, the tweet is also coded using the scheme described below. These directly related tweets are referred to as DR tweets. This initial filtering helps to remove hashtag-amplifying tweets with unrelated text. The coding scheme for the DR tweets consists of the following non-exclusive categories: \begin{enumerate} \item \textit{Tweets blame Ukraine for the MH17 incident.} The tweets blame Ukraine either directly or indirectly for downing the plane or enabling the incident.
\item \textit{Tweets include the following narrative of blame on Ukraine.} For tweets in category (1), the narrative of blame is also grouped into several categories, such as Ukrainians downed the plane or Ukrainians are responsible at least indirectly (e.g., for letting the incident happen). \item \textit{Tweets contain news.} The tweets include information related to the incident that is presented as news rather than personal opinion and refers to a source (such as photos or eyewitness reports) or is based on expert opinions. \item \textit{Tweets contain disinformation or propaganda.} For tweets in category (3), the tweets contain disinformation (i.e., contradict the findings of the JIT) or present information in a biased way to support a particular narrative\footnote{News containing erroneous information, which does not suggest a responsible party or an alternative version of the incident, is not considered disinformation.} (propaganda news). \item \textit{Tweets contain the following narrative of disinformation and propaganda.} For tweets in category (4), the narrative of disinformation and propaganda is grouped into four high-level and 16 low-level categories, as shown in Table \ref{tab:fake_news_narratives}. \end{enumerate} \subsubsection{Findings} We find that 68\% of the troll tweets (when hashtags are excluded) do not actually mention or imply the incident (i.e., non-DR tweets). Therefore, these tweets seemingly served only as hashtag amplifiers. Furthermore, 99.9\% of such tweets were published during the hashtag campaign (between 9:00 18.07 and 12:00 19.07), compared to 89.9\% for DR tweets, supporting the hypothesis of hashtag amplification. Nevertheless, almost all of such non-DR tweets discuss Ukraine in a more general context, typically in a negative tone. Surprisingly, tweets blaming Ukraine for the incident account for only 49\% of the DR tweets.
Nevertheless, this share is much larger than the 5.5\% of pro-Russian tweets detected by \cite{Golovchenko2018} for English-language tweets related to MH17. Table \ref{tab:narratives_of_blame} shows the distribution of the narratives of blame among such tweets. About 73\% of such tweets suggest that Ukraine downed the plane and 21\% say that Ukraine is responsible at least indirectly (e.g., by allowing the plane to fly over the war zone or by escalating the conflict in Ukraine). Further, 3\% blame Ukraine for interfering with the crash investigation, mainly by bombing the crash site. Finally, around 3\% point to the potential responsibility of Ukraine without directly blaming it, e.g., by recalling a similar incident with a Russian Tu-155 accidentally shot down by the Ukrainian Air Force in 2001 or stating that at the time of the crash MH17 was in a Ukrainian air-defense zone. \begin{table} \caption{The narratives of troll tweets blaming Ukraine for the crash of MH17} \label{tab:narratives_of_blame} \begin{tabular}{p{2.3cm}p{4cm}p{1cm}} \toprule Narrative & Example tweet & Share \\ \midrule Ukraine downed the plane & News of Ukraine. According to an expert, the Boeing 777 was shot down by Ukrainian air defense with 90\% probability & 73\% \\ Ukraine is responsible at least indirectly & Vladimir Putin: Ukraine is responsible for the Boeing crash & 21\% \\ Ukraine interferes with the investigation & Latest news: Ukraine does not allow Malaysian experts to the Boeing wreckage & 3\% \\ Ukraine may be responsible & Just read it. Experts recall Tu-155 shot down 13 years ago & 3\% \\ \bottomrule \end{tabular}% \end{table} Furthermore, about 57\% of the DR tweets contain news. Most of the news tweets inform about the number of crash victims, their nationality, the progress of evidence collection, or report the statements of officials on the incident. However, 23\% of news tweets (13\% of all DR tweets) contain disinformation or propaganda.
Table \ref{tab:fake_news_narratives} shows the main narratives of such tweets. About half of the disinformation suggests that a Ukrainian fighter aircraft downed the plane. According to trolls, the size of the debris indicated that the plane was downed by an air-to-air missile. Trolls further referred to a statement from a Spanish air-traffic controller working in Kiev, who claimed to have seen a military aircraft escorting MH17\footnote{This statement was later disproven, as Ukrainian laws require air traffic controllers to have Ukrainian citizenship}. About 23.1\% of disinformation tweets suggest that Ukraine downed the plane from the ground, with 11.8\% not openly blaming but stating that Ukraine relocated its Buk missile systems to the location of the missile launch a day before the crash. Interestingly, two contradicting disinformation campaigns (Ukrainians shot down the plane with an air-to-air vs. a ground-to-air missile) were run in parallel. Overall, trolls' tweets seem to reflect the changing and contradicting narratives of the Russian government on MH17 \cite{Toler2018}. Furthermore, 21.4\% of disinformation and propaganda tweets use other narratives to blame Ukraine. Trolls suggested that the incident was a planned operation, and that the President of Ukraine knew about the crash before it happened because his reaction to the crash was so fast. Finally, about 5.6\% of disinformation tweets do not blame Ukraine, including a few posts where trolls confused MH17 with a military AN-26 (see Table \ref{tab:first_MH17_tweets}) and a few tweets with experts supporting conspiracy theories.
\begin{table} \caption{The narratives of disinformation and propaganda of troll tweets} \label{tab:fake_news_narratives} \begin{tabular}{p{6.3cm}p{1cm}} \toprule Narrative & Share \\ \midrule \textit{Ukrainian fighter aircraft downed the plane:} & \textit{49.8\%}\\ - Debris are too large & 24.2\% \\ - Statement of air controller & 10.2\% \\ - Eyewitness report & 10.1\% \\ - Using machine guns & 2.1\% \\ - Other & 3.2\% \\ \midrule \textit{Ukrainians downed the plane from the ground:} & \textit{23.1\%} \\ - Ukrainians relocated Buks shortly before the incident & 11.8\% \\ - Assassination attempt on Putin & 7.1\% \\ - Using air defense systems (general) & 2.9\% \\ - Mistake at military training & 1.3\% \\ \midrule \textit{Other narratives (blaming Ukraine):} & \textit{21.4\%} \\ - Ukraine interferes with the investigation & 7.4\% \\ - President of Ukraine knew about the incident before it happened & 7.1\% \\ - Traffic controllers purposefully diverted the plane to the war zone & 1.7\% \\ - Miscellaneous & 5.2\% \\ \midrule \textit{Other narratives (not blaming Ukraine):} & \textit{5.6\%} \\ - Separatists shot down AN-26 & 1.3\% \\ - Conspiracy theories & 0.5\% \\ - Miscellaneous & 3.8\% \\ \bottomrule \end{tabular}% \end{table} \section{Discussion and conclusion} \label{discussion} This paper has shed light on the domestic and regional Russian-language operations of Russian trolls on Twitter by analyzing trolls' tweets posted between 2014 and 2017. Namely, we first studied the tweets' content through their hashtags, the use of retweets, and shared URLs. Further, we analyzed the tweets' temporal posting patterns. After that, we focused on the trolls' reaction to the Malaysia Airlines MH17 crash.
\textbf{Tweet Content.} Comparing the results of the hashtag analysis with English-language hashtags of Russian trolls from a previous study \cite{Zannettou2019b}, we see that in both cases trolls did not focus solely on political and social issues, but also used apolitical hashtags (e.g., \#Sport), likely to appear similar to ordinary users. Nevertheless, at least eight out of the top 20 hashtags in Russian were related to politics, compared with three for English-language operations \cite{Zannettou2019b}. This difference is mainly due to the ``hashtag campaigns'' that Russian-language trolls often ran, which accounted for six of the eight political hashtags, and which did not seem to be common in English-language operations. A potential motivation for such campaigns is to increase hashtag visibility in the country-specific Twitter trends. Therefore, a hashtag campaign is more sensible in the smaller Russian domain of Twitter than in the larger English-language domain. We discovered 165 hashtag campaigns, which were run between June 2014 and November 2015. Their main sentiments were attacking Ukraine, the USA, and Obama personally, as well as praising Russia and Putin. The targets of such campaigns stayed roughly the same over the 18 months. However, about half of the anti-Ukraine and anti-USA campaigns were run between July and September 2014, when the international pressure on Russia increased following the crash of MH17. Further, no campaigns praising Russia and Putin appeared in January-February 2015. This could be related to the depreciation of the Russian ruble, which peaked during these months. Moreover, there were also no anti-USA campaigns during those months. We further found that trolls actively reshared information by using retweets (RTs) and URLs. The share of original tweets (i.e., without RTs or URLs) was only about 24\%, and only about a quarter of trolls posted original tweets more often than tweets with RTs or URLs.
About two thirds of trolls' retweets were the posts of non-troll accounts. That said, at least half of the top 20 most retweeted non-troll accounts, as well as most referenced internet domains, belonged to mainstream media. However, an average troll tweet received only about 3.1 retweets from ordinary users, showing that the engagement with trolls' posts was relatively low. \textbf{Temporal posting patterns.} We analyzed the burstiness and frequency distribution of tweet inter-arrival times (IATs) to understand the trolls' temporal posting patterns. Using burstiness, we discovered that, unlike normal Twitter users, many trolls exhibited highly periodic posting patterns. Furthermore, the frequency distribution of pooled IATs revealed three distinct cyclical patterns that prevailed during different years. Namely, in 2014, peaks in IATs were detected at multiples of three minutes, in 2015, multiples of 20 minutes, and in 2016-2017, multiples of 30 minutes. Such patterns could indicate the use of tweet posting automation tools with similar settings. Therefore, Russian trolls could more accurately be described as cyborgs or bot-assisted humans \cite{Chu2012}. \textbf{Reacting to MH17 crash.} In reaction to the crash of Malaysia Airlines flight MH17, Russian-language trolls ran their largest single information campaign (by the number of tweets). However, 68\% of the 71K tweets of the campaign had text not directly related to the incident; such tweets were seemingly used only for hashtag amplification. Nearly half of the remaining (related) tweets blamed Ukraine for the crash, either alleging that Ukraine downed the plane (73\%) or suggesting some degree of responsibility (21\%). Surprisingly, only 13\% of such related tweets contained news-like disinformation or propaganda. Furthermore, the narratives of such fake news were not internally consistent.
Namely, approximately half stated that Ukraine shot down the plane with an air-to-air missile, whereas about 23\% suggested the use of a ground-to-air missile. The fake news also reported diverse reasons for the incident, ranging from framing the separatists and Russia, to a mistake in military training, to a failed assassination attempt on Putin. \begin{acks} The authors thank Anar Bazarhanova for helping with coding the tweets. Benjamin Finley is supported by the 5GEAR project (No. 319669) and the FIT project (No. 325570) both funded by the Academy of Finland. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction}\label{section intro} Let $G$ be a connected real reductive group; precisely, a finite cover of a closed connected transpose-stable subgroup of $GL(n,\bR)$ with complexified Lie algebra $\mathfrak{g}$. Let $K$ be a maximal compact subgroup of $G$. Write $\mathfrak{g}=\mathfrak{k}+\mathfrak{p}$ for the corresponding Cartan decomposition of $\mathfrak{g}$, where $\mathfrak{k}$ is the complexified Lie algebra of $K$. \begin{subequations}\label{se:cohintro} Let $T\subset K$ be a maximal torus, so that $H_c = G^T$ is a maximally compact Cartan subgroup, with Lie algebra $\mathfrak{h}_{c}$. Let $\Lambda\subset\widehat{H_{c}}\subset\mathfrak{h}_{c}^{\star}$ be the lattice of weights of finite-dimensional $(\mathfrak{g},K)$-modules. For a fixed $\lambda_{0}\in\mathfrak{h}_{c}^{\star}$ regular, a family of virtual $(\mathfrak{g},K)$-modules $X_\lambda$, $\lambda\in\lambda_{0}+\Lambda$, is called {\it coherent} if for each $\lambda$, $X_\lambda$ has infinitesimal character $\lambda$, and for any finite-dimensional $(\mathfrak{g},K)$-module $F$, and for any $\lambda$, \begin{equation}\label{cohintro} X_\lambda\otimes F = \sum_{\mu\in\Delta(F)} X_{\lambda+\mu}, \end{equation} where $\Delta(F)$ denotes the multiset of all weights of $F$. (A more complete discussion appears in Section \ref{section coherent}.) The reason for studying coherent families is that if $X$ is any irreducible $(\mathfrak{g},K)$-module of infinitesimal character $\lambda_0$, then there is a {\it unique} coherent family with the property that \begin{equation}\label{existscoherent} X_{\lambda_0} = X. \end{equation} For any invariant of Harish-Chandra modules, one can therefore ask how the invariant of $X_\lambda$ changes with $\lambda \in \lambda_0 + \Lambda$. The nature of this dependence is then a new invariant of $X$. 
This idea is facilitated by the fact that \begin{equation} \text{$X_\lambda$ is irreducible or zero whenever $\lambda$ is integrally dominant;} \end{equation} zero is possible only for singular $\lambda$. (See for example \cite{V2}, sections 7.2 and 7.3.) The notion of ``integrally dominant'' is recalled in \eqref{intdom}; we write $(\lambda_0+\Lambda)^+$ for the cone of integrally dominant elements. We may therefore define \begin{equation}\label{annintro} \Ann(X_{\lambda}) = \text{annihilator in $U(\mathfrak{g})$ of $X_\lambda$} \qquad (\lambda \in (\lambda_0 + \Lambda)^+). \end{equation} The ideal $\Ann(X_\lambda)$ is primitive if $X_\lambda$ is irreducible, and equal to $U(\mathfrak{g})$ if $X_\lambda = 0$. Write $\rk(U(\mathfrak{g})/\Ann(X_{\lambda}))$ for the Goldie rank of the algebra $U(\mathfrak{g})/\Ann(X_{\lambda})$. Let $W_\mathfrak{g}$ be the Weyl group of $\mathfrak{g}$ with respect to $\mathfrak{h}_c$. Joseph proved that the $\bN$-valued map defined on integrally dominant $\lambda \in \lambda_{0}+\Lambda$ by \begin{equation}\label{goldieintro} \lambda\mapsto \rk(U(\mathfrak{g})/\Ann(X_{\lambda})) \end{equation} extends to a $W_{\mathfrak{g}}$-harmonic polynomial $P_{X}$ on $\mathfrak{h}_{c}^{\star}$ called the {\it Goldie rank polynomial} for $X$. The polynomial $P_X$ is homogeneous of degree $\sharp R_{\mathfrak{g}}^{+}-\mathop{\hbox {Dim}}\nolimits(X)$, where $\sharp R_{\mathfrak{g}}^{+}$ denotes the number of positive $\mathfrak{h}_{c}$-roots in $\mathfrak{g}$ and $\mathop{\hbox {Dim}}\nolimits(X)$ is the Gelfand-Kirillov dimension of $X$. Moreover, $P_{X}$ generates an irreducible representation of $W_{\mathfrak{g}}$. See \cite{J1I}, \cite{J1II} and \cite{J1III}. There is an interpretation of the $W_{\mathfrak{g}}$-representation generated by $P_X$ in terms of the Springer correspondence.
For all $\lambda\in (\lambda_0+\Lambda)^+$ such that $X_\lambda \ne 0$ (so for example for all integrally dominant regular $\lambda$), the associated variety ${\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(X_{\lambda})))$ (defined by the associated graded ideal of $\Ann(X_{\lambda})$, in the symmetric algebra $S(\mathfrak{g})$) is the Zariski closure of a single nilpotent $G_{\bC}$-orbit ${\mathcal O}$ in $\mathfrak{g}^{\star}$, independent of $\lambda$. (Here $G_\bC$ is a connected complex reductive algebraic group having Lie algebra $\mathfrak{g}$.) Barbasch and Vogan proved that the Springer representation of $W_{\mathfrak{g}}$ attached to ${\mathcal O}$ coincides with the $W_{\mathfrak{g}}$-representation generated by the Goldie rank polynomial $P_{X}$ (see \cite{BV1}). Here is another algebro-geometric interpretation of $P_{X}$. Write \begin{equation}\label{eq:Korbit} {\mathcal O}\cap (\mathfrak{g}/\mathfrak{k})^* = \coprod_{j=1}^r {\mathcal O}^j \end{equation} for the decomposition into (finitely many) orbits of $K_{\mathbb C}$. (Here $K_{\mathbb C}$ is the complexification of $K$.) Then the associated cycle of each $X_\lambda$ is \begin{equation}\label{multintro} \Ass(X_\lambda) = \coprod_{j=1}^r m^j_X(\lambda) \overline{{\mathcal O}^j} \qquad (\lambda \in (\lambda_0 + \Lambda)^+) \end{equation} (see Definition 2.4, Theorem 2.13, and Corollary 5.20 in \cite{V3}). The component multiplicity $m^j_X(\lambda)$ is a function taking nonnegative integer values, and extends to a polynomial function on $\mathfrak{h}_{c}^*$. We call this polynomial the {\it multiplicity polynomial} for $X$ on the orbit ${\mathcal O}^j$. The connection with the Goldie rank polynomial is that each $m^j_X$ is a scalar multiple of $P_X$; this is a consequence of the proof of Theorem 5.7 in \cite{J1II}. 
On the other hand, Goldie rank polynomials can be interpreted in terms of the asymptotics of the global character $\mathop{\hbox {ch}}\nolimits_{\mathfrak{g}}(X_{\lambda})$ of $X_{\lambda}$ on a maximally split Cartan subgroup $H_{s} \subset G$ with Lie algebra $\mathfrak{h}_{s,0}$. Namely, if $x \in \mathfrak{h}_{s,0}$ is a generic regular element, King proved that the map \begin{equation}\label{kingintro} \lambda\mapsto \lim_{t\rightarrow 0}t^{\mathop{\hbox {Dim}}\nolimits(X)}\mathop{\hbox {ch}}\nolimits_{\mathfrak{g}}(X_{\lambda})(\exp tx) \end{equation} on $\lambda_{0}+\Lambda$ extends to a polynomial $C_{X,x}$ on $\mathfrak{h}^{\star}_{c}$. We call this polynomial {\it King's character polynomial}. It coincides with the Goldie rank polynomial $P_{X}$ up to a constant factor depending on $x$ (see \cite{K1}). More precisely, as a consequence of \cite{SV}, one can show that there is a formula \begin{equation}\label{eq:multchar} C_{X,x} = \sum_{j=1}^r a^j_x m^j_{X}; \end{equation} the constants $a^j_x$ are independent of $X$, and this formula is valid for any $(\mathfrak{g},K)$-module whose annihilator has associated variety contained in $\overline{\mathcal O}$. The polynomial $C_{X,x}$ expresses the dependence on $\lambda$ of the leading term in the Taylor expansion of the numerator of the character of $X_\lambda$ on the maximally split Cartan $H_{s}$. \end{subequations} In this paper, we assume that $G$ and $K$ have equal rank. Under this assumption, we use Dirac index to obtain the analog of King's asymptotic character formula (\ref{kingintro}), or equivalently of the Goldie rank polynomial (\ref{goldieintro}), in the case when $H_s$ is replaced by a compact Cartan subgroup $T$ of $G$. In the course of doing this, we first prove a translation principle for the Dirac index. 
To define the notions of Dirac cohomology and index, we first recall that there is a {\it Dirac operator} $D\in U(\mathfrak{g})\otimes C(\mathfrak{p})$, where $C(\mathfrak{p})$ is the Clifford algebra of $\mathfrak{p}$ with respect to an invariant non-degenerate symmetric bilinear form $B$ (see Section \ref{section setting}). If $S$ is a spin module for $C(\mathfrak{p})$ then $D$ acts on $Y\otimes S$ for any $(\mathfrak{g},K)$-module $Y$. The {\it Dirac cohomology} $H_{D}(Y)$ of $Y$ is defined as \begin{equation*} H_{D}(Y)=\mathop{\hbox{Ker}}\nolimits D / \mathop{\hbox{Ker}}\nolimits D\cap\text{Im} D. \end{equation*} It is a module for the spin double cover $\widetilde{K}$ of $K$. Dirac cohomology was introduced by Vogan in the late 1990s (see \cite{V4}) and turned out to be an interesting invariant attached to $(\mathfrak{g},K)$-modules (see \cite{HP2} for a thorough discussion). We would now like to study how Dirac cohomology varies over a coherent family. This is however not possible; since Dirac cohomology is not an exact functor, it cannot be defined for virtual $(\mathfrak{g},K)$-modules. To fix this problem, we will replace Dirac cohomology by the Dirac index. (We note that there is a relationship between Dirac cohomology and translation functors; see \cite{MP}, \cite{MP08}, \cite{MP09}, \cite{MP10}.) \begin{subequations}\label{se:cohindex} Let $\mathfrak{t}$ be the complexified Lie algebra of the compact Cartan subgroup $T$ of $G$. Then $\mathfrak{t}$ is a Cartan subalgebra of both $\mathfrak{g}$ and $\mathfrak{k}$. In this case, the spin module $S$ for $\widetilde{K}$ is the direct sum of two pieces $S^+$ and $S^-$, and the Dirac cohomology $H_D(Y)$ breaks up accordingly into $H_D(Y)^+$ and $H_D(Y)^-$. If $Y$ is admissible and has infinitesimal character, define the {\it Dirac index of $Y$} to be the virtual $\widetilde{K}$-module \begin{equation} I(Y)= H_D(Y)^+-H_D(Y)^-. 
\end{equation} This definition can be extended to arbitrary finite length modules (not necessarily with infinitesimal character), replacing $H_D$ by the higher Dirac cohomology of \cite{PS}. See Section \ref{section index}. Then $I$, considered as a functor from finite length $(\mathfrak{g},K)$-modules to virtual $\widetilde{K}$-modules, is additive with respect to short exact sequences (see Lemma \ref{exact} and the discussion below (\ref{index formula})), so it makes sense also for virtual $(\mathfrak{g},K)$-modules. Furthermore, $I$ satisfies the following property (Proposition \ref{main}): for any finite-dimensional $(\mathfrak{g},K)$-module $F$, \begin{equation*} I(Y\otimes F)=I(Y)\otimes F. \end{equation*} Let now $\{X_{\lambda}\}_{\lambda \in \lambda_{0}+\Lambda}$ be a coherent family of virtual $(\mathfrak{g},K)$-modules. By a theorem of Huang and Pand{\v{z}}i{\'c}, the $\mathfrak{k}$-infinitesimal character of any $\widetilde{K}$-type contributing to the Dirac cohomology $H_D(Y)$ of an irreducible $(\mathfrak{g},K)$-module $Y$ is $W_{\mathfrak{g}}$-conjugate to the $\mathfrak{g}$-infinitesimal character of $Y$ (see Theorem \ref{HPmain}). In terms of the virtual representations $\widetilde{E}$ of $\widetilde{K}$ defined in Section \ref{section coherent}, the conclusion is that we may write \begin{equation}\label{indexformula} I(X_{\lambda_{0}})=\sum_{w\in W_\mathfrak{g}} a_w \widetilde{E}_{w\lambda_{0}} \end{equation} with $a_w\in \bZ$. Then, for any $\nu\in\Lambda$, we have (Theorem \ref{translindex}): \begin{equation} \label{indextransintro} I(X_{\lambda_{0}+\nu})=\sum_{w\in W_\mathfrak{g}} a_w \widetilde{E}_{w(\lambda_{0}+\nu)} \end{equation} with the same coefficients $a_w$. It follows that $I(X_{\lambda_{0}})\neq 0$ implies $I(X_{\lambda_{0}+\nu})\neq 0$, provided both $\lambda_{0}$ and $\lambda_{0}+\nu$ are regular for $\mathfrak{g}$ (Corollary \ref{nonzeroindex}). 
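Both the additivity of $I$ and the identity $I(Y\otimes F)=I(Y)\otimes F$ can be traced to a character-level description of the index that is standard in the equal-rank setting; we record it here as a remark (it is implicit in the discussion around \eqref{indexformula}):

```latex
% Since D is \widetilde{K}-equivariant and interchanges the spaces
% Y \otimes S^{+} and Y \otimes S^{-}, the Dirac index is an Euler
% characteristic; hence, in the virtual representation ring of
% \widetilde{K},
\begin{equation*}
  I(Y) \;=\; Y\otimes S^{+} \,-\, Y\otimes S^{-}
  \qquad\text{as virtual $\widetilde{K}$-modules.}
\end{equation*}
% Additivity in short exact sequences and compatibility with
% tensoring by finite-dimensional modules F are immediate from
% this expression.
```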
Combining the translation principle for Dirac index \eqref{indextransintro} with the Weyl dimension formula for $\mathfrak{k}$, we conclude that the map \begin{equation}\label{indexintro} \lambda_{0}+\Lambda\rightarrow\bZ,\qquad \lambda\mapsto\mathop{\hbox {dim}}\nolimits I(X_{\lambda}) \end{equation} extends to a $W_{\mathfrak{g}}$-harmonic polynomial $Q_{X}$ on $\mathfrak{t}^{\star}$ (see Section \ref{section Weyl group}). We call the polynomial $Q_X$ the {\it index polynomial} attached to $X$ and $\lambda_{0}$. If $Q_X$ is nonzero, its degree is equal to the number $\sharp R_{\mathfrak{k}}^{+}$ of positive $\mathfrak{t}$-roots in $\mathfrak{k}$. More precisely, $Q_X$ belongs to the irreducible representation of $W_{\mathfrak{g}}$ generated by the Weyl dimension formula for $\mathfrak{k}$ (Proposition \ref{harmonic}). Furthermore, the coherent continuation representation generated by $X$ must contain a copy of the index polynomial representation (Proposition \ref{wequi}). We also prove that the index polynomial vanishes for small representations. Namely, if the Gelfand-Kirillov dimension $\mathop{\hbox {Dim}}\nolimits(X)$ is less than the number $\sharp R_{\mathfrak{g}}^{+}-\sharp R_{\mathfrak{k}}^{+}$ of positive noncompact $\mathfrak{t}$-roots in $\mathfrak{g}$, then $Q_{X}=0$ (Proposition \ref{indexzero}). An important feature of the index polynomial is the fact that $Q_{X}$ is the exact analogue of King's character polynomial (\ref{kingintro}), but attached to the character on the compact Cartan subgroup instead of the maximally split Cartan subgroup (see Section \ref{section Goldie rank}).
In fact, $Q_{X}$ expresses the dependence on $\lambda$ of the (possibly zero) leading term in the Taylor expansion of the numerator of the character of $X_\lambda$ on the compact Cartan $T$: for any $y\in \mathfrak{t}_0$ regular, we have \begin{equation*} \lim_{t\to 0+} t^{\sharp R_{\mathfrak{g}}^{+}-\sharp R_{\mathfrak{k}}^{+}} \mathop{\hbox {ch}}\nolimits_\mathfrak{g}(X_\lambda)(\exp ty)=\textstyle{\frac{\prod_{\alpha\in R_\mathfrak{k}^+}\alpha(y)}{\prod_{\alpha\in R_\mathfrak{g}^+}\alpha(y)}}\, Q_X(\lambda). \end{equation*} In particular, if $G$ is semisimple of Hermitian type, and if $X$ is the $(\mathfrak{g},K)$-module of a holomorphic discrete series representation, then the index polynomial $Q_{X}$ coincides, up to a scalar multiple, with the Goldie rank polynomial $P_{X}$ (Proposition \ref{holods}). Moreover, if $X$ is the $(\mathfrak{g},K)$-module of any discrete series representation (for $G$ not necessarily Hermitian), then $Q_X$ and $P_X$ are both divisible by the product of linear factors corresponding to the roots generated by the $\tau$-invariant of $X$ (Proposition \ref{tau}). Recall that the $\tau$-invariant of the $(\mathfrak{g},K)$-module $X$ consists of the simple roots $\alpha$ such that the translate of $X$ to the wall defined by $\alpha$ is zero (see Section 4 in \cite{V1} or Chapter 7 in \cite{V2}). Recall the formula \eqref{eq:multchar} relating King's character polynomial to the multiplicity polynomials for the associated cycle. In Section 7, we conjecture a parallel relationship between the index polynomial $Q_X$ and the multiplicity polynomials. For that, we must assume that the $W_{\mathfrak{g}}$-representation generated by the Weyl dimension formula for $\mathfrak{k}$ corresponds to a nilpotent $G_{\bC}$-orbit ${\mathcal O}_{K}$ via the Springer correspondence. (At the end of Section \ref{orbits}, we list the classical groups for which this assumption is satisfied.) 
Then we conjecture (Conjecture \ref{conj}): if ${\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(X)))\subset \overline{{\mathcal O}_K}$, then \begin{equation}\label{eq:multindex} Q_{X}=\sum_{j}c^j m_{X}^{j}. \end{equation} Here the point is that the coefficients $c^j$ should be integers independent of $X$. We check that this conjecture holds in the case when $G=SL(2,\bR)$ and also when $G=SU(1,n)$ with $n\geq 2$. We now make a few remarks on the significance of the above conjecture. Associated varieties are a beautiful and concrete invariant for representations, but they are too crude to distinguish representations well. For example, all holomorphic discrete series have the same associated variety. Goldie rank polynomials and multiplicity functions both offer more information, but the information is somewhat difficult to compute and to interpret precisely. The index polynomial is easier to compute and to interpret; it can be computed from knowing the restriction to $K$, and conversely, it contains fairly concrete information about the restriction to $K$. In the setting of (\ref{eq:multindex}) (that is, for fairly small representations), the conjecture says that the index polynomial should be built from multiplicity polynomials in a very simple way. The conjecture implies that, for these small representations, the index polynomial must be a multiple of the Goldie rank polynomial. This follows from the fact that each $m^j_X$ is a multiple of $P_X$, mentioned below (\ref{multintro}). The interesting thing about this is that the index polynomial is perfectly well-defined for {\it larger} representations as well. In some sense it defines something like ``multiplicities'' for $\mathcal{O}_K$ even when $\mathcal{O}_K$ is not a leading term. This is analogous to a result of Barbasch, which says that one can define, for any character expansion, a number that for finite-dimensional representations is the multiplicity of the zero orbit.
In the case of discrete series, this number turns out to be the formal degree (and so is something really interesting). This indicates that the index polynomial is an example of an interesting ``lower order term'' in a character expansion. We can hope that a fuller understanding of all such lower order terms could be a path to extending the theory of associated varieties to a more complete invariant of representations. \end{subequations} \section{Setting}\label{section setting} Let $G$ be a finite cover of a closed connected transpose-stable subgroup of $GL(n,\bR)$, with Lie algebra $\mathfrak{g}_{0}$. We denote by $\Theta$ the Cartan involution of $G$ corresponding to the usual Cartan involution of $GL(n,\bR)$ (the transpose inverse). Then $K=G^\Theta$ is a maximal compact subgroup of $G$. Let $\mathfrak{g}_{0}=\mathfrak{k}_{0}\oplus\mathfrak{p}_{0}$ be the Cartan decomposition of $\mathfrak{g}_{0}$, with $\mathfrak{k}_0$ the Lie algebra of $K$. Let $B$ be the trace form on $\mathfrak{g}_{0}$. Then $B$ is positive definite on $\mathfrak{p}_{0}$ and negative definite on $\mathfrak{k}_{0}$, and $\mathfrak{p}_{0}$ is the orthogonal complement of $\mathfrak{k}_{0}$ with respect to $B$. We shall drop the subscript `0' on real vector spaces to denote their complexifications. Thus $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ denotes the Cartan decomposition of the complexified Lie algebra of $G$. The linear extension of $B$ to $\mathfrak{g}$ is again denoted by $B$. Let $G_{\bC}$ be a connected reductive algebraic group over $\bC$ with Lie algebra $\mathfrak{g}$. Let $C(\mathfrak{p})$ be the Clifford algebra of $\mathfrak{p}$ with respect to $B$ and let $U(\mathfrak{g})$ be the universal enveloping algebra of $\mathfrak{g}$. The Dirac operator $D$ is defined as \[ D=\sum_i b_i\otimes d_i\in U(\mathfrak{g})\otimes C(\mathfrak{p}), \] where $\{b_i\}$ is any basis of $\mathfrak{p}$ and $\{d_i\}$ is the dual basis with respect to $B$.
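For instance, in the simplest case $\mathfrak{g}_{0}=\mathfrak{s}\mathfrak{l}(2,\bR)$ (a sketch, with the trace form and conventions as above), $\mathfrak{p}$ is spanned by the symmetric traceless matrices \[ b_1=\begin{pmatrix} 1 & 0\\ 0 & -1\end{pmatrix},\qquad b_2=\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}, \] for which $B(b_i,b_j)=\operatorname{tr}(b_ib_j)=2\delta_{ij}$, so the dual basis is $d_i=\tfrac{1}{2}b_i$ and \[ D=b_1\otimes\tfrac{1}{2}b_1+b_2\otimes\tfrac{1}{2}b_2\in U(\mathfrak{g})\otimes C(\mathfrak{p}). \]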
Then $D$ is independent of the choice of the basis $b_i$ and is $K$-invariant. Moreover, the square of $D$ is given by the following formula due to Parthasarathy \cite{P}: \begin{equation} \label{Dsquared} D^2=-(\mathop{\hbox {Cas}}\nolimits_\mathfrak{g}\otimes 1+\|\rho_\mathfrak{g}\|^2)+(\mathop{\hbox {Cas}}\nolimits_{\mathfrak{k}_\Delta}+\|\rho_\mathfrak{k}\|^2). \end{equation} Here $\mathop{\hbox {Cas}}\nolimits_\mathfrak{g}$ is the Casimir element of $U(\mathfrak{g})$ and $\mathop{\hbox {Cas}}\nolimits_{\mathfrak{k}_\Delta}$ is the Casimir element of $U(\mathfrak{k}_\Delta)$, where $\mathfrak{k}_\Delta$ is the diagonal copy of $\mathfrak{k}$ in $U(\mathfrak{g})\otimes C(\mathfrak{p})$, defined using the obvious embedding $\mathfrak{k}\hookrightarrow U(\mathfrak{g})$ and the usual map $\mathfrak{k}\to\mathfrak{s}\mathfrak{o}(\mathfrak{p})\to C(\mathfrak{p})$. See \cite{HP2} for details. If $X$ is a $(\mathfrak{g},K)$-module, then $D$ acts on $X\otimes S$, where $S$ is a spin module for $C(\mathfrak{p})$. The Dirac cohomology of $X$ is the module \[ H_D(X)=\mathop{\hbox{Ker}}\nolimits D / \mathop{\hbox{Ker}}\nolimits D\cap\text{Im} D \] for the spin double cover $\widetilde{K}$ of $K$. If $X$ is unitary or finite-dimensional, then \[ H_D(X)=\mathop{\hbox{Ker}}\nolimits D=\mathop{\hbox{Ker}}\nolimits D^2. \] The following result of \cite{HP1} was conjectured by Vogan \cite{V3}. Let $\mathfrak{h}=\mathfrak{t}\oplus\mathfrak{a}$ be a fundamental Cartan subalgebra of $\mathfrak{g}$. We view $\mathfrak{t}^*\subset\mathfrak{h}^*$ by extending functionals on $\mathfrak{t}$ by 0 over $\mathfrak{a}$. Denote by $R_{\mathfrak{g}}$ (resp. $R_{\mathfrak{k}}$) the set of $(\mathfrak{g},\mathfrak{h})$-roots (resp. $(\mathfrak{k},\mathfrak{t})$-roots). We fix compatible positive root systems $R^{+}_{\mathfrak{g}}$ and $R^{+}_{\mathfrak{k}}$ for $R_\mathfrak{g}$ and $R_\mathfrak{k}$ respectively. 
In particular, this determines the half-sums of positive roots $\rho_\mathfrak{g}$ and $\rho_\mathfrak{k}$ as usual. Write $W_{\mathfrak{g}}$ (resp. $W_{\mathfrak{k}}$) for the Weyl group associated with $(\mathfrak{g},\mathfrak{h})$-roots (resp. $(\mathfrak{k},\mathfrak{t})$-roots). \begin{thm} \label{HPmain} Let $X$ be a $(\mathfrak{g},K)$-module with infinitesimal character corresponding to $\Lambda\in\mathfrak{h}^*$ via the Harish-Chandra isomorphism. Assume that $H_D(X)$ contains the irreducible $\widetilde{K}$-module $E_\gamma$ with highest weight $\gamma\in\mathfrak{t}^*$. Then $\Lambda$ is equal to $\gamma+\rho_\mathfrak{k}$ up to conjugation by the Weyl group $W_\mathfrak{g}$. In other words, the $\mathfrak{k}$-infinitesimal character of any $\widetilde{K}$-type contributing to $H_D(X)$ is $W_\mathfrak{g}$-conjugate to the $\mathfrak{g}$-infinitesimal character of $X$. \end{thm} \section{Dirac index} \label{section index} Throughout the paper we assume that $\mathfrak{g}$ and $\mathfrak{k}$ have equal rank, i.e., that there is a compact Cartan subalgebra $\mathfrak{h}=\mathfrak{t}$ in $\mathfrak{g}$. In this case, $\mathfrak{p}$ is even-dimensional, so (as long as $\mathfrak{p} \neq\{0\}$) the spin module $S$ for the spin group $\mathop{\hbox {Spin}}\nolimits(\mathfrak{p})$ (and therefore for $\widetilde{K}$) is the direct sum of two pieces, which we will call $S^+$ and $S^-$. To say which is which, it is enough to choose an $SO(\mathfrak{p})$-orbit of maximal isotropic subspaces of $\mathfrak{p}$. We will sometimes make such a choice by fixing a positive root system $\Delta^+(\mathfrak{g},\mathfrak{t})$ for $\mathfrak{t}$ in $\mathfrak{g}$, and writing $\mathfrak{n}=\mathfrak{n}_\mathfrak{k} + \mathfrak{n}_\mathfrak{p}$ for the corresponding sum of positive root spaces. Then $\mathfrak{n}_\mathfrak{p}$ is a choice of maximal isotropic subspace of $\mathfrak{p}$. 
The full spin module may be realized using $\mathfrak{n}_\mathfrak{p}$ as $S \simeq \bigwedge\mathfrak{n}_\mathfrak{p}$, with the $C(\mathfrak{p})$-action defined so that elements of $\mathfrak{n}_\mathfrak{p}$ act by wedging, and elements of the dual isotropic space $\mathfrak{n}_\mathfrak{p}^-$ corresponding to the negative roots act by contracting. (Details may be found for example in \cite{Chev} at the beginning of Chapter 3.) In particular, the action of $C(\mathfrak{p})$ respects parity of degrees: odd elements of $C(\mathfrak{p})$ carry $\bigwedge^{\text{even}}\mathfrak{n}_\mathfrak{p}$ to $\bigwedge^{\text{odd}}\mathfrak{n}_\mathfrak{p}$ and so on. Because $\mathop{\hbox {Spin}}\nolimits(\mathfrak{p}) \subset C_{\text{even}}(\mathfrak{p})$, it follows that $\mathop{\hbox {Spin}}\nolimits(\mathfrak{p})$ preserves the decomposition \[ S \simeq \bigwedge \mathfrak{n}_\mathfrak{p} = \bigwedge\nolimits^{\!\!\text{even}}\mathfrak{n}_\mathfrak{p} \oplus \bigwedge\nolimits^{\!\!\text{odd}}\mathfrak{n}_\mathfrak{p} \overset{\text{def.}}= S^+ \oplus S^-. \] The group $\widetilde{K}$ acts on $S$ as usual, through the map $\widetilde{K}\to\mathop{\hbox {Spin}}\nolimits(\mathfrak{p})\subset C(\mathfrak{p})$, and hence also the Lie algebra $\mathfrak{k}$ acts, through the map $\alpha:\mathfrak{k}\to \mathfrak{s}\mathfrak{o}(\mathfrak{p})\hookrightarrow C(\mathfrak{p})$. We call these actions of $\widetilde{K}$ and $\mathfrak{k}$ the spin actions. It should however be noted that although we wrote $S\simeq \bigwedge\mathfrak{n}_\mathfrak{p}$, the $\mathfrak{t}$-weights of $S$ for the spin action are not the weights of $\bigwedge\mathfrak{n}_\mathfrak{p}$, i.e., the sums of distinct roots in $\mathfrak{n}_\mathfrak{p}$, but rather these weights shifted by $-(\rho_\mathfrak{g}-\rho_\mathfrak{k})$. This difference comes from the construction of the map $\alpha$ and the action of $C(\mathfrak{p})$ on $S$. 
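To illustrate in the simplest case $\mathfrak{g}_{0}=\mathfrak{s}\mathfrak{l}(2,\bR)$ (in the usual normalization in which the $\mathfrak{t}$-weights are the integers, so that the $(\mathfrak{g},\mathfrak{t})$-roots are $\pm 2$, $\rho_\mathfrak{g}=1$ and $\rho_\mathfrak{k}=0$): here $\mathfrak{n}_\mathfrak{p}$ is the one-dimensional root space for the root $2$, and \[ S^{+}=\bigwedge\nolimits^{\!0}\mathfrak{n}_\mathfrak{p}\simeq\mathbb{C}_{-1},\qquad S^{-}=\bigwedge\nolimits^{\!1}\mathfrak{n}_\mathfrak{p}\simeq\mathbb{C}_{1}, \] where $\mathbb{C}_{m}$ denotes the one-dimensional $\widetilde{K}$-module of weight $m$: the weight $0$ of $\bigwedge^{0}\mathfrak{n}_\mathfrak{p}$ is shifted by $-(\rho_\mathfrak{g}-\rho_\mathfrak{k})=-1$, and the weight $2$ of $\bigwedge^{1}\mathfrak{n}_\mathfrak{p}$ is shifted to $1$.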
In particular, the weights of $S^+ \simeq \bigwedge^{\text{even}}\mathfrak{n}_\mathfrak{p} $ are \[ -\rho_\mathfrak{g} + \rho_\mathfrak{k} + \text{(sum of an even number of distinct roots in } \mathfrak{n}_\mathfrak{p}). \] Similarly, the weights of $S^- \simeq \bigwedge^{\text{odd}}\mathfrak{n}_\mathfrak{p}$ are \[ -\rho_\mathfrak{g} + \rho_\mathfrak{k} + \text{(sum of an odd number of distinct roots in } \mathfrak{n}_\mathfrak{p}). \] The Dirac operator $D$ interchanges $X\otimes S^+$ and $X\otimes S^-$ for any $(\mathfrak{g},K)$-module $X$. (That is because it is of degree 1 in the Clifford factor.) It follows that the Dirac cohomology $H_D(X)$ also breaks up into even and odd parts, which we denote by $H_D(X)^+$ and $H_D(X)^-$ respectively. If $X$ is of finite length, then $H_D(X)$ is finite-dimensional, as follows from (\ref{Dsquared}), which implies that $\mathop{\hbox{Ker}}\nolimits D^2$ is finite-dimensional for any admissible module $X$. If $X$ is of finite length and has infinitesimal character, then we define the Dirac index of $X$ as the virtual $\widetilde{K}$-module \begin{equation} \label{defindex wrong} I(X)= H_D(X)^+-H_D(X)^-. \end{equation} The first simple but important fact is the following proposition, which is well known for the case of discrete series or finite-dimensional modules. \begin{prop} \label{propindex} Let $X$ be a finite length $(\mathfrak{g},K)$-module with infinitesimal character. Then there is an equality of virtual $\widetilde{K}$-modules \[ X\otimes S^+ - X\otimes S^- = I(X). \] \end{prop} \begin{proof} By Parthasarathy's formula for $D^2$ (\ref{Dsquared}), $X\otimes S$ breaks into a direct sum of eigenspaces for $D^2$: \[ X\otimes S=\sum_\lambda (X\otimes S)_\lambda. \] Since $D^2$ is even in the Clifford factor, this decomposition is compatible with the decomposition into even and odd parts, i.e., \[ (X\otimes S)_\lambda=(X\otimes S^+)_\lambda \oplus (X\otimes S^-)_\lambda, \] for any eigenvalue $\lambda$ of $D^2$. 
Since $D$ commutes with $D^2$, it preserves each eigenspace. Since $D$ also switches parity, we see that $D$ defines maps \[ D_\lambda:(X\otimes S^{\pm})_\lambda \to (X\otimes S^{\mp})_\lambda \] for each $\lambda$. If $\lambda\neq 0$, then $D_\lambda$ is clearly an isomorphism (with inverse $\frac{1}{\lambda}D_\lambda$), and hence \[ X\otimes S^+ - X\otimes S^- = (X\otimes S^+)_0 - (X\otimes S^-)_0. \] Since $D$ is a differential on $\mathop{\hbox{Ker}}\nolimits D^2$, and the cohomology of this differential is exactly $H_D(X)$, the statement now follows from the Euler-Poincar\'e principle. \end{proof} \begin{cor} \label{exact} Let \[ 0\to U\to V\to W\to 0 \] be a short exact sequence of finite length $(\mathfrak{g},K)$-modules, and assume that $V$ has infinitesimal character (so that $U$ and $W$ must have the same infinitesimal character as $V$). Then there is an equality of virtual $\widetilde{K}$-modules \[ I(V) = I(U) + I(W). \] \end{cor} \begin{proof} This follows from the formula in Proposition \ref{propindex}, since the left hand side of that formula clearly satisfies the additivity property. \end{proof} To study the translation principle, we need to deal with modules $X\otimes F$, where $X$ is a finite length $(\mathfrak{g},K)$-module, and $F$ is a finite-dimensional $(\mathfrak{g},K)$-module. Therefore, Proposition \ref{propindex} and Corollary \ref{exact} are not sufficient for our purposes, because they apply only to modules with infinitesimal character. Namely, if $X$ is of finite length and has infinitesimal character, then $X\otimes F$ is of finite length, but it typically cannot be written as a direct sum of modules with infinitesimal character. Rather, some of the summands of $X\otimes F$ only have generalized infinitesimal character. 
Recall that $\chi:Z(\mathfrak{g})\to\mathbb{C}$ is the generalized infinitesimal character of a $\mathfrak{g}$-module $V$ if there is a positive integer $N$ such that \[ (z-\chi(z))^N=0\quad\text{on }V,\qquad \text{for every }z\in Z(\mathfrak{g}), \] where $Z(\mathfrak{g})$ denotes the center of $U(\mathfrak{g})$. Here is an example showing that Proposition \ref{propindex} and Corollary \ref{exact} can fail for modules with generalized infinitesimal character. \begin{ex}[\cite{PS}, Section 2] {\rm Let $G=SU(1,1)\cong SL(2,\mathbb{R})$, so that $K=S(U(1)\times U(1))\cong U(1)$, and $\mathfrak{g}=\mathfrak{s}\mathfrak{l}(2,\mathbb{C})$. Then there is an indecomposable $(\mathfrak{g},K)$-module $P$ fitting into the short exact sequence \[ 0\to V_0\to P\to V_{-2}\to 0, \] where $V_0$ is the (reducible) Verma module with highest weight $0$, and $V_{-2}$ is the (irreducible) Verma module with highest weight $-2$. One can describe the $\mathfrak{g}$-action on $P$ very explicitly, and see that $\mathop{\hbox {Cas}}\nolimits_\mathfrak{g}$ does not act by a scalar on $P$, so $P$ does not have infinitesimal character. Using calculations similar to \cite{HP2}, 9.6.5, one checks that for the index defined by (\ref{defindex wrong}) the following holds: \begin{equation} \label{indexP} I(P)=-\mathbb{C}_1;\qquad I(V_0)=-\mathbb{C}_1;\qquad I(V_{-2})=-\mathbb{C}_{-1}, \end{equation} where $\mathbb{C}_{1}$ (respectively $\mathbb{C}_{-1}$) denotes the one-dimensional $\widetilde{K}$-module of weight $1$ (respectively $-1$). So Corollary \ref{exact} fails for $P$. It follows that Proposition \ref{propindex} must also fail. This can also be seen directly, by computing $P\otimes S^+-P\otimes S^-$. } \end{ex} The reason for the failure of both Proposition \ref{propindex} and Corollary \ref{exact} is the fact that the generalized 0-eigenspace for $D$ contains two Jordan blocks for $D$, one of length 1 and the other of length 3.
The block of length 3 does contribute to $P\otimes S^+-P\otimes S^-$, but not to $I(P)$. With this in mind, a modified version of Dirac cohomology, called ``higher Dirac cohomology'', was recently defined by Pand\v zi\'c and Somberg \cite{PS}. It is defined as $H(X)=\bigoplus_{k\in\mathbb{Z}_+} H^k(X)$, where \[ H^k(X) = \mathop{\hbox {Im}}\nolimits D^{2k}\cap \mathop{\hbox{Ker}}\nolimits D \big/ \mathop{\hbox {Im}}\nolimits D^{2k+1}\cap\mathop{\hbox{Ker}}\nolimits D. \] For a module $X$ with infinitesimal character, $H(X)$ is the same as $H_D(X)$; in general, $H(X)$ contains $H_D(X)=H^0(X)$. If $X$ is an arbitrary finite length module, then $H(X)$ is composed from contributions from all odd length Jordan blocks in the generalized 0-eigenspace for $D$. It follows that if we let $H(X)^\pm$ be the even and odd parts of $H(X)$, and define the stable index as \begin{equation} \label{defindex right} I(X)=H(X)^+-H(X)^-, \end{equation} then Proposition \ref{propindex} holds for any module $X$ of finite length, i.e., \begin{equation} \label{index formula} I(X)= X\otimes S^+-X\otimes S^- \end{equation} (\cite{PS}, Theorem 3.4). It follows that the index defined in this way is additive with respect to short exact sequences (\cite{PS}, Corollary 3.5), and it therefore makes sense for virtual $(\mathfrak{g},K)$-modules, i.e., it is well defined on the Grothendieck group of the Abelian category of finite length $(\mathfrak{g},K)$-modules. Let us also mention that there is an analogue of Theorem \ref{HPmain} for $H(X)$ (\cite{PS}, Theorem 3.3). There is another way to define the index that completely circumvents the discussion of defining Dirac cohomology in the right way. Namely, one can simply use the statement of Proposition \ref{propindex}, or (\ref{index formula}), as the definition of the index $I(X)$. It is clear that with such a definition the index does make sense for virtual $(\mathfrak{g},K)$-modules.
Moreover, one shows as above that all of the eigenspaces for $D^2$ for nonzero eigenvalues cancel out in (\ref{index formula}), so what is left is a finite combination of $\widetilde{K}$-types, appearing in the 0-eigenspace for $D^2$. Whichever of these two definitions we adopt, from now on we work with the Dirac index $I(X)$, defined for any virtual $(\mathfrak{g},K)$-module $X$ and satisfying (\ref{index formula}). \section{Coherent families} \label{section coherent} Fix $\lambda_0\in\mathfrak{t}^*$ regular and let $T$ be a compact Cartan subgroup of $G$ with complexified Lie algebra $\mathfrak{t}$. We denote by $\Lambda\subset\widehat{T}\subset\mathfrak{t}^*$ the lattice of weights of finite-dimensional representations of $G$ (equivalently, of finite-dimensional $(\mathfrak{g},K)$-modules). A family of virtual $(\mathfrak{g},K)$-modules $X_\lambda$, $\lambda\in\lambda_{0}+\Lambda$, is called {\it coherent} if \begin{enumerate} \item $X_\lambda$ has infinitesimal character $\lambda$; and \item for any finite-dimensional $(\mathfrak{g},K)$-module $F$, and for any $\lambda\in\lambda_{0}+\Lambda$, \begin{equation} \label{coh} X_\lambda\otimes F = \sum_{\mu\in\Delta(F)} X_{\lambda+\mu}, \end{equation} where $\Delta(F)$ denotes the multiset of all weights of $F$. \end{enumerate} See \cite{V2}, Definition 7.2.5. The reason that we may use coherent families based on the compact Cartan $T$, rather than the maximally split Cartan used in \cite{V2}, is our assumption that $G$ is connected. A virtual $(\mathfrak{g},K)$-module $X$ with regular infinitesimal character $\lambda_{0}\in\mathfrak{t}^{\star}$ can be placed in a unique coherent family as above (see Theorem 7.2.7 in \cite{V2}, and the references therein; this is equivalent to \eqref{existscoherent}).
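A sketch of the simplest example: for $G=SL(2,\bR)$, with the weights of $T$ identified with the integers and $\lambda_{0}=1$, the coherent family through the trivial module is obtained by writing $F_{m}$ for the irreducible finite-dimensional module with highest weight $m$ and setting \[ X_{n}=F_{n-1}\ (n\geq 1),\qquad X_{0}=0,\qquad X_{-n}=-F_{n-1}\ (n\geq 1). \] Then $X_{n}$ has infinitesimal character $n$, and on $T$ one has $\mathop{\hbox {ch}}\nolimits X_{n}=(e^{n}-e^{-n})/(e^{1}-e^{-1})$ for every $n\in\bZ$; multiplying this expression by the character of $F$ and regrouping the terms of the numerator verifies the condition (\ref{coh}).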
Using this, one can define an action of the integral Weyl group $W(\lambda_{0})$ attached to $\lambda_{0}$ on the set ${\mathcal M}(\lambda_{0})$ of virtual $(\mathfrak{g},K)$-modules with infinitesimal character $\lambda_{0}$. Recall that $W(\lambda_{0})$ consists of those elements $w\in W_\mathfrak{g}$ for which $\lambda_0-w\lambda_0$ is a sum of roots. If we write $Q$ for the root lattice, then the condition for $w$ to be in $W(\lambda_0)$ is precisely that $w$ preserves the lattice coset $\lambda_{0}+Q$ (see \cite{V2}, Section 7.2). Then for $w\in W(\lambda_0)$, we set \begin{equation*} w\cdot X\overset{\text{def.}}= X_{w^{-1}(\lambda_{0})}. \end{equation*} We view ${\mathcal M}(\lambda_{0})$ as a lattice (a free $\bZ$-module) with basis the (finite) set of irreducible $(\mathfrak{g},K)$-modules of infinitesimal character $\lambda_{0}$. This $W(\lambda_{0})$-representation is known as the {\it coherent continuation} representation; a decomposition of it into irreducible components was obtained by Barbasch and Vogan (see \cite{BV1b}). The study of coherent continuation representations is important for a deeper understanding of coherent families. A weight $\lambda \in \lambda_0 + \Lambda$ is called {\it integrally dominant} if \begin{equation}\label{intdom} \langle\alpha^\vee,\lambda\rangle \ge 0 \ \text{whenever $\langle \alpha^\vee,\lambda_0 \rangle \in {\mathbb{N}}$} \qquad (\alpha \in R_\mathfrak{g}). \end{equation} Recall from the introduction that we write $(\lambda_0 + \Lambda)^+$ for the cone of integrally dominant weights. The notion of coherent families is closely related to the Jantzen-Zuckerman translation principle.
For example, if $\lambda$ is regular and $\lambda+\nu$ belongs to the same Weyl chamber for integral roots (whose definition is recalled below), then $X_{\lambda+\nu}$ can be obtained from $X_\lambda$ by a translation functor, i.e., by tensoring with the finite-dimensional module $F_\nu$ with extremal weight $\nu$ and then taking the component with generalized infinitesimal character $\lambda+\nu$. The following observation is crucial for obtaining the translation principle for Dirac index. \begin{prop} \label{main} Suppose $X$ is a virtual $(\mathfrak{g},K)$-module and $F$ a finite-dimensional $(\mathfrak{g},K)$-module. Then \[ I(X\otimes F)=I(X)\otimes F. \] \end{prop} \begin{proof} By Proposition \ref{propindex} and (\ref{index formula}), \[ I(X\otimes F)=X\otimes F\otimes S^+ - X\otimes F\otimes S^-, \] while \[ I(X)\otimes F= (X\otimes S^+ - X\otimes S^-)\otimes F. \] It is clear that the right hand sides of these expressions are the same. \end{proof} \noindent Combining Proposition \ref{main} with (\ref{coh}), we obtain \begin{cor} \label{cohindex} Let $X_\lambda$, $\lambda\in\lambda_{0}+\Lambda$, be a coherent family of virtual $(\mathfrak{g},K)$-modules and let $F$ be a finite-dimensional $(\mathfrak{g},K)$-module. Then \begin{equation} \label{cohindexformula} I(X_\lambda)\otimes F=\sum_{\mu\in\Delta(F)} I(X_{\lambda+\mu}). \end{equation} \qed \end{cor} \noindent This says that the family $\{I(X_\lambda)\}_{\lambda\in\lambda_{0}+\Lambda}$ of virtual $\widetilde{K}$-modules has some coherence properties, but it is not a coherent family for $\widetilde{K}$, as $I(X_\lambda)$ does not have $\mathfrak{k}$-infinitesimal character $\lambda$. Also, the identity (\ref{cohindexformula}) is valid only for a $(\mathfrak{g},K)$-module $F$, and not for an arbitrary $\widetilde{K}$-module $F$. Using standard reasoning, as in \cite{V2}, Section 7.2, we can now analyze the relationship between Dirac index and translation functors. 
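The following example may help fix ideas. For $G=SL(2,\bR)$, in the normalization where the $(\mathfrak{g},\mathfrak{t})$-roots are $\pm 2$ and $\rho_\mathfrak{g}=1$, let $X$ be a discrete series representation with lowest $K$-type $k\geq 2$ (so with infinitesimal character $k-1$), and write $\mathbb{C}_{m}$ for the one-dimensional $\widetilde{K}$-module of weight $m$. Then $X|_{K}=\bigoplus_{j\geq 0}\mathbb{C}_{k+2j}$, while $S^{\pm}\simeq\mathbb{C}_{\mp 1}$ for a suitable choice of $\mathfrak{n}_\mathfrak{p}$, so the difference in (\ref{index formula}) telescopes: \[ I(X)=X\otimes S^{+}-X\otimes S^{-}=\sum_{j\geq 0}\mathbb{C}_{k-1+2j}-\sum_{j\geq 0}\mathbb{C}_{k+1+2j}=\mathbb{C}_{k-1}. \] The single surviving $\widetilde{K}$-type has $\mathfrak{k}$-infinitesimal character $k-1$, as predicted by Theorem \ref{HPmain}.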
\begin{subequations}\label{Kcoherent} We first define some virtual representations of $\widetilde{K}$. Our choice of positive roots $R^+_\mathfrak{k}$ for $T$ in $K$ defines a Weyl denominator function \begin{equation}\label{Weyldenominator} d_\mathfrak{k}(\exp(y)) = \prod_{\alpha\in R^+_\mathfrak{k}} (e^{\alpha(y)/2} - e^{-\alpha(y)/2}) \end{equation} on an appropriate cover of $T$. For $\gamma\in \Lambda+\rho_\mathfrak{g}$, the Weyl numerator \begin{equation*} N_\gamma = \sum_{w\in W_\mathfrak{k}} \operatorname{sgn}(w) e^{w\gamma} \end{equation*} is a function on another double cover of $T$. According to Weyl's character formula, the quotient \begin{equation} \mathop{\hbox {ch}}\nolimits_{\mathfrak{k},\gamma} = N_\gamma/d_\mathfrak{k} \end{equation} extends to a class function on all of $\widetilde{K}$. Precisely, $\mathop{\hbox {ch}}\nolimits_{\mathfrak{k},\gamma}$ is the character of a virtual genuine representation $\widetilde{E}_\gamma$ of $\widetilde{K}$: \begin{equation} \widetilde{E}_\gamma = \begin{cases} \operatorname{sgn}(x)\left(\text{irr. of highest weight $x\gamma - \rho_\mathfrak{k}$}\right) &\text{if $\gamma$ is regular for $R_\mathfrak{k}$, with $x\in W_\mathfrak{k}$}\\ &\text{the unique element making $x\gamma$ dominant for $R^+_\mathfrak{k}$,}\\ 0 &\text{if $\gamma$ is singular for $R_\mathfrak{k}$.} \end{cases} \end{equation} It is convenient to extend this definition to all of $\mathfrak{t}^*$ by \begin{equation} \widetilde{E}_\lambda = 0 \qquad (\lambda \notin \Lambda+\rho_\mathfrak{g}). \end{equation} With this definition, the Huang-Pand\v zi\'c infinitesimal character result (Theorem \ref{HPmain}) clearly guarantees what we wrote in \eqref{indexformula}: \begin{equation*} I(X_{\lambda_{0}})=\sum_{w\in W_\mathfrak{g}} a_w \widetilde{E}_{w\lambda_{0}}. \end{equation*} We could restrict the sum to those $w$ for which $w\lambda_0$ is dominant for $R^+_\mathfrak{k}$, and get a unique formula in which $a_w$ is the multiplicity of the $\widetilde{K}$-representation of highest weight $w\lambda_0 - \rho_\mathfrak{k}$ in $I(X_{\lambda_0})$.
But for the proof of the next theorem, it is more convenient to allow a more general expression. \end{subequations} \begin{thm} \label{translindex} Suppose $\lambda_0\in\mathfrak{t}^*$ is regular for $\mathfrak{g}$. Let $X_\lambda$, $ \lambda\in\lambda_{0}+\Lambda$, be a coherent family of virtual $(\mathfrak{g},K)$-modules based on $\lambda_0 + \Lambda$. By Theorem \ref{HPmain}, we can write \begin{equation} \label{indexatlambda} I(X_{\lambda_{0}})=\sum_{w\in W_\mathfrak{g}} a_w \widetilde{E}_{w\lambda_{0}}, \end{equation} where $\widetilde{E}$ denotes the family of finite-dimensional virtual $\widetilde{K}$-modules defined in \eqref{Kcoherent}, and $a_w$ are integers. Then for any $\nu\in\Lambda$, \begin{equation} \label{indexatlambda+nu} I(X_{\lambda_{0}+\nu})=\sum_{w\in W_\mathfrak{g}} a_w \widetilde{E}_{w(\lambda_{0}+\nu)}, \end{equation} with the same coefficients $a_w$. \end{thm} \begin{proof} We proceed in three steps. {\it Step 1:} suppose both $\lambda_{0}$ and $\lambda_{0}+\nu$ belong to the same integral Weyl chamber, which we can assume to be the dominant one. Let $F_\nu$ be the finite-dimensional $(\mathfrak{g},K)$-module with extremal weight $\nu$. Let us take the components of (\ref{cohindexformula}), written for $\lambda=\lambda_0$, with $\mathfrak{k}$-infinitesimal characters which are $W_\mathfrak{g}$-conjugate to $\lambda_{0}+\nu$. By Theorem \ref{HPmain}, any summand $I(X_{\lambda_{0}+\mu})$ of the RHS of (\ref{cohindexformula}) is a combination of virtual modules with $\mathfrak{k}$-infinitesimal characters which are $W_\mathfrak{g}$-conjugate to $\lambda_{0}+\mu$. By \cite{V2}, Lemma 7.2.18 (b), $\lambda_{0}+\mu$ can be $W_\mathfrak{g}$-conjugate to $\lambda_{0}+\nu$ only if $\mu=\nu$. Thus we are picking exactly the summand $I(X_{\lambda_{0}+\nu})$ of the RHS of (\ref{cohindexformula}). 
We now determine the components of the LHS of (\ref{cohindexformula}) with $\mathfrak{k}$-infinitesimal characters which are $W_\mathfrak{g}$-conjugate to $\lambda_{0}+\nu$. Since $\widetilde{E}$ is a coherent family for $\widetilde{K}$, and $F_\nu$ can be viewed as a finite-dimensional $\widetilde{K}$-module, one has \[ \widetilde{E}_{w\lambda_{0}}\otimes F_\nu=\sum_{\mu\in\Delta(F_\nu)}\widetilde{E}_{w\lambda_{0}+\mu}. \] The $\mathfrak{k}$-infinitesimal character of $\widetilde{E}_{w\lambda_{0}+\mu}$ is $w\lambda_{0}+\mu$, so the components we are looking for must satisfy $w\lambda_{0}+\mu = u(\lambda_{0}+\nu)$, or equivalently \[ \lambda_{0}+w^{-1}\mu=w^{-1}u(\lambda_{0}+\nu), \] for some $u\in W_\mathfrak{g}$. Using \cite{V2}, Lemma 7.2.18 (b) again, we see that $w^{-1}u$ must fix $\lambda_{0}+\nu$, and $w^{-1}\mu$ must be equal to $\nu$. So $\mu=w\nu$, and the component $\widetilde{E}_{w\lambda_{0}+\mu}$ is in fact $\widetilde{E}_{w(\lambda_{0}+\nu)}$. So (\ref{indexatlambda+nu}) holds in this case. {\it Step 2:} suppose that $\lambda_{0}$ and $\lambda_{0}+\nu$ lie in two neighbouring chambers, with a common wall defined by a root $\alpha$, and such that $\lambda_{0}+\nu=s_{\alpha}(\lambda_{0})$. Assume further that for any weight $\mu$ of $F_\nu$, $\lambda_{0}+\mu$ belongs to one of the two chambers. Geometrically this means that $\lambda_{0}$ is close to the wall defined by $\alpha$ and sufficiently far from all other walls and from the origin. We tensor (\ref{indexatlambda}) with $F_{\nu}$. By (\ref{cohindexformula}) and the fact that $\widetilde{E}$ is a coherent family for $\widetilde{K}$, we get \begin{equation*} \sum_{\mu\in\Delta(F_{\nu})}I(X_{\lambda_{0}+\mu})=\sum_{w\in W_{\mathfrak{g}}}a_{w}\sum_{\mu\in\Delta(F_{\nu})}\widetilde{E}_{w(\lambda_{0}+\mu)}. \end{equation*} By our assumptions, the only $\lambda_{0}+\mu$ conjugate to $\lambda_{0}+\nu$ via $W_{\mathfrak{g}}$ are $\lambda_{0}+\nu$ and $\lambda_{0}$.
Picking the corresponding parts from the above equation, we get \begin{equation*} I(X_{\lambda_{0}+\nu})+cI(X_{\lambda_{0}})=\sum_{w\in W_{\mathfrak{g}}}a_{w}\big(c\widetilde{E}_{w\lambda_{0}}+\widetilde{E}_{w(\lambda_{0}+\nu)}\big) \end{equation*} where $c$ is the multiplicity of the zero weight of $F_\nu$. Subtracting $c$ times (\ref{indexatlambda}) from this equation yields (\ref{indexatlambda+nu}), so the theorem is proved in this case. {\it Step 3:} to get from an arbitrary regular $\lambda_{0}$ to an arbitrary $\lambda_{0}+\nu$, we first apply Step 1 to get from $\lambda_{0}$ to all elements of $\lambda_{0}+\Lambda$ in the same (closed) chamber. Then we apply Step 2 to pass to an element of a neighbouring chamber, then Step 1 again to get to all elements of that chamber, and so on. \end{proof} \begin{cor} \label{nonzeroindex} In the setting of Theorem \ref{translindex}, assume that both $\lambda_{0}$ and $\lambda_{0}+\nu$ are regular for $\mathfrak{g}$. Assume also that $I(X_{\lambda_{0}})\neq 0$, i.e., at least one of the coefficients $a_w$ in (\ref{indexatlambda}) is nonzero. Then $I(X_{\lambda_{0}+\nu})\neq 0$. \end{cor} \begin{proof} This follows immediately from Theorem \ref{translindex} and the fact that $\widetilde{E}_{w(\lambda_{0}+\nu)}$ cannot be zero, since $w(\lambda_{0}+\nu)$ is regular for $\mathfrak{g}$ and hence also for $\mathfrak{k}$. \end{proof} \section{Index polynomial and coherent continuation representation} \label{section Weyl group} As in the previous section, let $\lambda_{0}\in\mathfrak{t}^{\star}$ be regular. For each $X\in {\mathcal M}(\lambda_{0})$, there is a unique coherent family $\{X_{\lambda}\mid \lambda\in\lambda_{0}+\Lambda\}$ such that $X_{\lambda_{0}}=X$. Define a function $Q_X\colon \lambda_{0}+\Lambda \to\mathbb{Z}$ by setting \begin{equation} \label{dim} Q_{X}(\lambda)= \mathop{\hbox {dim}}\nolimits I(X_\lambda)\quad (\lambda\in\lambda_{0}+\Lambda).
\end{equation} Notice that $Q_{X}$ depends both on $X$ {\it and} on the choice of representative $\lambda_{0}$ for the infinitesimal character of $X$; replacing $\lambda_{0}$ by $w_{1}\lambda_{0}$ translates $Q_{X}$ by $w_{1}$. By Theorem \ref{translindex} and the Weyl dimension formula for $\mathfrak{k}$, $Q_{X}$ is a polynomial function in $\lambda$. (Note that taking dimension is additive with respect to short exact sequences of finite-dimensional modules, so it makes sense for virtual finite-dimensional modules.) We call the function $Q_{X}$ the {\it index polynomial} associated with $X$ (or $\{X_{\lambda}\}$). Recall that a polynomial on $\mathfrak{t}^*$ is called $W_\mathfrak{g}$-harmonic if it is annihilated by any $W_\mathfrak{g}$-invariant constant coefficient differential operator on $\mathfrak{t}^*$ without constant term (see \cite{V1}, Lemma 4.3). \begin{prop} \label{harmonic} For any $(\mathfrak{g},K)$-module $X$ as above, the index polynomial $Q_X$ is $W_\mathfrak{g}$-harmonic. If $Q_X\neq 0$, then it is homogeneous of degree equal to the number of positive roots for $\mathfrak{k}$; more precisely, it belongs to the irreducible representation of $W_{\mathfrak{g}}$ generated by the Weyl dimension formula for $\mathfrak{k}$. \end{prop} \begin{proof} The last statement follows from $(\ref{indexatlambda+nu})$; the rest of the proposition is an immediate consequence, since the Weyl dimension formula for $\mathfrak{k}$ is a $W_\mathfrak{g}$-harmonic polynomial, homogeneous of degree equal to the number of positive roots for $\mathfrak{k}$. \end{proof} Recall the natural representation of $W(\lambda_{0})$ (or indeed of all of $W_{\mathfrak{g}}$) on the vector space $S(\mathfrak{t})$ of polynomial functions on $\mathfrak{t}^*$, $$ (w\cdot P)(\lambda)=P(w^{-1}\lambda). $$ The (irreducible) representation of $W(\lambda_{0})$ generated by the dimension formula for $\mathfrak{k}$ is called the {\it index polynomial representation}.
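The harmonicity in Proposition \ref{harmonic} can be illustrated in a toy case. The sketch below (an illustration only, not part of the argument) checks in exact arithmetic that the Vandermonde polynomial $\prod_{p<q}(x_p-x_q)$ for $n=3$, which spans the top-degree harmonic polynomials for $S_3$ and has the shape of the numerator of the Weyl dimension formula for $\mathfrak{u}(3)$, is annihilated by the invariant operators $\sum_i\partial_i^2$ and $\sum_i\partial_i$; since the polynomial has degree $3$, central differences evaluate these operators exactly.

```python
from fractions import Fraction

def vandermonde(x):
    # prod_{p<q} (x_p - x_q): for n = 3 this is the top-degree
    # S_3-harmonic polynomial (degree 3 = number of positive roots).
    v = Fraction(1)
    n = len(x)
    for p in range(n):
        for q in range(p + 1, n):
            v *= x[p] - x[q]
    return v

def second_diff_laplacian(f, x, h=Fraction(1)):
    # Sum of central second differences; exact for polynomials of degree <= 3.
    s = Fraction(0)
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        s += f(xp) - 2 * f(x) + f(xm)
    return s / (h * h)

for pt in [(1, 5, -2), (0, 7, 3), (-4, 2, 9)]:
    x = [Fraction(a) for a in pt]
    # Annihilated by the degree-2 invariant operator (the Laplacian):
    assert second_diff_laplacian(vandermonde, x) == 0
    # Translation invariance: annihilated by the degree-1 invariant operator:
    assert vandermonde([a + 1 for a in x]) == vandermonde(x)
```

Checking these two generators of course does not replace the general statement; it only makes the definition of $W_\mathfrak{g}$-harmonicity concrete in the smallest nontrivial case.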
\begin{prop} \label{wequi} The map \begin{equation*} {\mathcal M}(\lambda_{0})\rightarrow S(\mathfrak{t}),\qquad X\mapsto Q_{X} \end{equation*} intertwines the coherent continuation representation of $W(\lambda_0)$ with the action on polynomials. In particular, if $Q_{X}\neq 0$, then the coherent continuation representation generated by $X$ must contain a copy of the index polynomial representation. \end{prop} \begin{proof} Let $\{X_\lambda\}$ be the coherent family corresponding to $X$. Then for a fixed $w\in W(\lambda_0)$, the coherent family corresponding to $w\cdot X$ is $\lambda_{0}+\nu\mapsto X_{w^{-1}(\lambda_{0}+\nu)}$ (see \cite{V2}, Lemma 7.2.29 and its proof). It follows that \begin{eqnarray*} (w\cdot Q_{X})(\lambda)&=&Q_{X}(w^{-1}\cdot\lambda)\\ &=&\mathop{\hbox {dim}}\nolimits I(X_{w^{-1}\lambda})\\ &=&Q_{w\cdot X}(\lambda), \end{eqnarray*} i.e., the map $X\mapsto Q_{X}$ is $W(\lambda_{0})$-equivariant. The rest of the proposition is now clear. \end{proof} \begin{ex} \label{exfd} {\rm Let $F$ be a finite-dimensional $(\mathfrak{g},K)$-module. The corresponding coherent family is $\{F_\lambda\}$ from \cite{V2}, Example 7.2.12. In particular, every $F_\lambda$ is finite-dimensional up to sign, or 0. By Proposition \ref{propindex} and (\ref{index formula}), for any $F_\lambda$, \[ \mathop{\hbox {dim}}\nolimits I(F_\lambda)=\mathop{\hbox {dim}}\nolimits(F_\lambda\otimes S^+-F_\lambda\otimes S^-)=\mathop{\hbox {dim}}\nolimits F_\lambda(\mathop{\hbox {dim}}\nolimits S^+-\mathop{\hbox {dim}}\nolimits S^-)=0, \] since $S^+$ and $S^-$ have the same dimension (as long as $\mathfrak{p}\neq 0$). It follows that \[ Q_{F}(\lambda)=0. \] (Note that the index itself is a nonzero virtual module, but its dimension is zero. This may be a little surprising at first, but it is quite possible for virtual modules.) 
This means that in this case Proposition \ref{wequi} gives no information about the coherent continuation representation (which is in this case a copy of the sign representation of $W_\mathfrak{g}$ spanned by $F$).} \end{ex} \begin{ex} \label{exds_sl2} {\rm Let $G=SL(2,\mathbb{R})$, so that weights correspond to integers. Let $\lambda_{0}=n_0$ be a positive integer. There are four irreducible $(\mathfrak{g},K)$-modules with infinitesimal character $n_0$: the finite-dimensional module $F$, the holomorphic discrete series $D^+$ of lowest weight $n_0+1$, the antiholomorphic discrete series $D^-$ of highest weight $-n_0-1$, and the irreducible principal series representation $P$. The coherent family $F_n$ corresponding to $F$ is defined by setting $F_n$ to be the finite-dimensional module with highest weight $n-1$ if $n>0$, $F_0=0$, and if $n<0$, $F_n=-F_{-n}$. Thus $s\cdot F=-F$, i.e., $F$ spans a copy of the sign representation of $W(\lambda_{0})=\{1,s\}$. As we have seen, the index polynomial corresponding to $F$ is zero. By \cite{V2}, Example 7.2.13, the coherent family $D_n^+$ corresponding to $D^+$ is given as follows: for $n\geq 0$, $D^+_n$ is the irreducible lowest weight $(\mathfrak{g},K)$-module with lowest weight $n+1$, and for $n<0$, $D^+_n$ is the sum of $D^+_{-n}$ and the finite-dimensional module $F_{-n}$. It is easy to see that for each $n\in\mathbb{Z}$, $I(D^+_n)$ is the one-dimensional $\widetilde K$-module $E_n$ with weight $n$. So the index polynomial $Q_{D^+}$ is the constant polynomial $1$. Moreover, $s\cdot D^+=D^++F$. One similarly checks that the coherent family $D_n^-$ corresponding to $D^-$ is given as follows: for $n\geq 0$, $D^-_n$ is the irreducible highest weight $(\mathfrak{g},K)$-module with highest weight $-n-1$, and for $n<0$, $D^-_n=D^-_{-n}+F_{-n}$. For each $n\in\mathbb{Z}$, $I(D^-_n)=-E_{-n}$, so the index polynomial $Q_{D^-}$ is the constant polynomial $-1$. Moreover, $s\cdot D^-=D^-+F$. 
Finally, one checks that the coherent family corresponding to $P$ consists entirely of principal series representations, that the $W(\lambda_{0})$-action on $P$ is trivial, and that the corresponding index polynomial is 0. Putting all this together, we see that the coherent continuation representation at $n_0$ consists of three trivial representations, spanned by $F+D^++D^-$, $D^+-D^-$ and $P$, and one sign representation, spanned by $F$. The index polynomial representation is the trivial representation spanned by the constant polynomials. The map $X\mapsto Q_X$ sends $P$, $F$ and $F+D^++D^-$ to zero, and $D^+-D^-$ to the constant polynomial $2$. } \end{ex} The conclusion of Example \ref{exfd} about the index polynomials of finite-dimensional representations being zero can be generalized as follows. \begin{prop} \label{indexzero} Let $X$ be a $(\mathfrak{g},K)$-module as above, with Gelfand-Kirillov dimension $\mathop{\hbox {Dim}}\nolimits(X)$. If $\mathop{\hbox {Dim}}\nolimits(X)<\sharp R_{\mathfrak{g}}^{+}-\sharp R_{\mathfrak{k}}^{+}$, then $Q_X=0$. \end{prop} \begin{proof} We need to recall the setting of \cite{BV2}, Section 2, in particular their Theorem 2.6.(b) (taken from \cite{J1II}). Namely, to any irreducible representation $\sigma$ of $W_\mathfrak{g}$ one can associate its degree, i.e., the minimal integer $d$ such that $\sigma$ occurs in the $W_\mathfrak{g}$-representation $S^d(\mathfrak{t})$. Theorem 2.6.(b) of \cite{BV2} says that the degree of any $\sigma$ occurring in the coherent continuation representation attached to $X$ must be at least equal to $\sharp R_{\mathfrak{g}}^{+}-\mathop{\hbox {Dim}}\nolimits(X)$. By assumption, the degree of $Q_X$, $\sharp R_{\mathfrak{k}}^{+} $, is smaller than $\sharp R_{\mathfrak{g}}^{+}-\mathop{\hbox {Dim}}\nolimits(X)$. On the other hand, by Proposition \ref{wequi} the index polynomial representation has to occur in the coherent continuation representation. It follows that $Q_X$ must be zero. 
\end{proof} \begin{ex} {\rm Wallach modules for $Sp(2n,\mathbb{R})$, $SO^*(2n)$ and $U(p,q)$, studied in \cite{HPP}, all have nonzero index, but their index polynomials are zero. This can also be checked explicitly from the results of \cite{HPP}, at least in low-dimensional cases. The situation here is like in Example \ref{exfd}; the nonzero Dirac index has zero dimension. In particular the conclusion $Q_X=0$ in Proposition \ref{indexzero} does not imply that $I(X)=0$. } \end{ex} We note that in the proof of Proposition \ref{indexzero}, we are applying the results of \cite{BV2} to $(\mathfrak{g},K)$-modules, although they are stated in \cite{BV2} for highest weight modules. This is indeed possible by results of Casian \cite{C}. We explain this in more detail. Let ${\mathcal B}$ be the flag variety of $\mathfrak{g}$ consisting of all the Borel subalgebras of $\mathfrak{g}$. For a point $x\in{\mathcal B}$, write $\mathfrak{b}_{x}=\mathfrak{h}_{x}+\mathfrak{n}_{x}$ for the corresponding Borel subalgebra, with nilradical $\mathfrak{n}_{x}$, and Cartan subalgebra $\mathfrak{h}_{x}$. Define a functor $\Gamma_{\mathfrak{b}_{x}}$ from the category of $\mathfrak{g}$-modules into the category of $\mathfrak{g}$-modules which are $\mathfrak{b}_x$-locally finite, by \begin{equation*} \Gamma_{\mathfrak{b}_{x}}M=\big\{\mathfrak{b}_{x}-\text{locally finite vectors in } M\big\}. \end{equation*} Write $\Gamma^{q}_{\mathfrak{b}_{x}}$, $q\geq 0$, for its right derived functors. Instead of considering the various $\mathfrak{b}_{x}$, $x\in{\mathcal B}$, it is convenient to fix a Borel subalgebra $\mathfrak{b}=\mathfrak{h}+\mathfrak{n}$ of $\mathfrak{g}$ and twist the module $M$. By a twist of $M$ we mean that if $\pi$ is the $\mathfrak{g}$-action on $M$ and $\sigma$ is an automorphism of $\mathfrak{g}$ then the twist of $\pi$ by $\sigma$ is the $\mathfrak{g}$-action $\pi\circ\sigma$ on $M$. 
Then Casian's generalized Jacquet functors $J_{\mathfrak{b}_{x}}^{q}$ are functors from the category of $\mathfrak{g}$-modules into the category of $\mathfrak{g}$-modules which are $\mathfrak{b}$-locally finite, given by \begin{equation*} J_{\mathfrak{b}_{x}}^{q}M=\Big\{\Gamma_{\mathfrak{b}_{x}}^{q}\mathop{\hbox {Hom}}\nolimits_{\bC}(M,\bC)\Big\}^{0} \end{equation*} where the superscript `0' means that the $\mathfrak{g}$-action is twisted by some inner automorphism of $\mathfrak{g}$, to make it $\mathfrak{b}$-locally finite instead of $\mathfrak{b}_{x}$-locally finite. In case $\mathfrak{b}_{x}$ is the Borel subalgebra corresponding to an Iwasawa decomposition of $G$, $J^0_{\mathfrak{b}_x}$ is the usual Jacquet functor of \cite{BB}, while the $J_{\mathfrak{b}_{x}}^q$ vanish for $q>0$. The functors $J_{\mathfrak{b}_{x}}^{q}$ make sense on the level of virtual $(\mathfrak{g},K)$-modules and induce an injective map \begin{equation*} X\mapsto \sum_{x\in{\mathcal B}/K}\sum_{q}(-1)^{q}J_{\mathfrak{b}_{x}}^{q}X \end{equation*} from virtual $(\mathfrak{g},K)$-modules into virtual $\mathfrak{g}$-modules which are $\mathfrak{b}$-locally finite. Note that the above sum is well defined, since the $J_{\mathfrak{b}_{x}}^{q}$ depend only on the $K$-orbit of $x$ in $\mathcal{B}$. An important feature of the functors $J_{\mathfrak{b}_{x}}^{q}$ is the fact that they satisfy the following identity relating the $\mathfrak{n}_{x}$-homology of $X$ with the $\mathfrak{n}$-cohomology of the modules $J_{\mathfrak{b}_{x}}^{q}X$ (see page 6 in \cite{C}): \begin{equation*} \sum_{p,q\geq 0}(-1)^{p+q}\mathop{\hbox {tr}}\nolimits_\mathfrak{h} H^{p}(\mathfrak{n},J_{\mathfrak{b}_{x}}^{q}X)=\sum_{q}(-1)^{q}\mathop{\hbox {tr}}\nolimits_\mathfrak{h} H_{q}(\mathfrak{n}_{x},X)^{0}. \end{equation*} Here the superscript `0' is the appropriate twist interchanging $\mathfrak{h}_{x}$ with $\mathfrak{h}$, and $\mathop{\hbox {tr}}\nolimits_\mathfrak{h}$ denotes the formal trace of the $\mathfrak{h}$-action. 
More precisely, if $Z$ is a locally finite $\mathfrak{h}$-module with finite-dimensional weight components $Z_\mu$, $\mu\in\mathfrak{h}^*$, then \[ \mathop{\hbox {tr}}\nolimits_\mathfrak{h} Z=\sum_{\mu\in\mathfrak{h}^*} \mathop{\hbox {dim}}\nolimits Z_\mu\, e^\mu. \] Using this and Osborne's character formula, the global character of $X$ on an arbitrary $\theta$-stable Cartan subgroup can be read off from the characters of the $J^q_{\mathfrak{b}_{x}}X$ (see \cite{C} and \cite{C2}). In particular, we deduce that if $\tau$ is an irreducible representation of the Weyl group $W_{\mathfrak{g}}$ occurring in the coherent continuation representation attached to $X$, then $\tau$ occurs in the coherent continuation representation attached to $J_{\mathfrak{b}_{x}}^{q}X$ for some $q\geq 0$ and some Borel subalgebra $\mathfrak{b}_{x}$. Moreover, from the definitions, one has $\mathop{\hbox {Dim}}\nolimits(X)\geq \mathop{\hbox {Dim}}\nolimits(J_{\mathfrak{b}_{x}}^{q}X)$. Applying the results in \cite{BV2} to the module $J_{\mathfrak{b}_{x}}^{q}X$, we deduce that \begin{equation*} d^{o}(\tau)\geq \sharp R^{+}_{\mathfrak{g}}-\mathop{\hbox {Dim}}\nolimits(J_{\mathfrak{b}_{x}}^{q}X)\geq \sharp R^{+}_{\mathfrak{g}}-\mathop{\hbox {Dim}}\nolimits(X), \end{equation*} where $d^{o}(\tau)$ is the degree of $\tau$. \section{Index polynomials and Goldie rank polynomials} \label{section Goldie rank} Recall that $H_s$ denotes a maximally split Cartan subgroup of $G$ with complexified Lie algebra $\mathfrak{h}_s$. As in Section \ref{section coherent}, we let $X$ be a module with regular infinitesimal character $\lambda_{0}\in\mathfrak{h}_s^{\star}$, and $\{X_{\lambda}\}_{\lambda\in\lambda_0+\Lambda}$ the corresponding coherent family on $H_s$.
With notation from (\ref{annintro}) and (\ref{goldieintro}), Joseph proved that the mapping \begin{equation*} \lambda\mapsto P_{X}(\lambda)=\rk (U(\mathfrak{g})/\Ann(X_{\lambda})), \end{equation*} extends to a $W_\mathfrak{g}$-harmonic polynomial on $\mathfrak{h}_s^*$, homogeneous of degree $\sharp R^{+}_{\mathfrak{g}}-\mathop{\hbox {Dim}}\nolimits(X)$, where $\mathop{\hbox {Dim}}\nolimits(X)$ is the Gelfand-Kirillov dimension of $X$ (see \cite{J1I}, \cite{J1II} and \cite{J1III}). He also found relations between the Goldie rank polynomial $P_X$ and Springer representations, and (less directly) Kazhdan-Lusztig polynomials (see \cite{J2} and \cite{J3}). Recall from \eqref{kingintro} King's analytic interpretation of the Goldie rank polynomial: for $x\in \mathfrak{h}_{s,0}$ regular, the expression \begin{equation} \label{charpol} \lim_{t\to 0+} t^d\mathop{\hbox {ch}}\nolimits_\mathfrak{g}(X_\lambda)(\exp tx) \end{equation} is zero if $d$ is an integer bigger than $\mathop{\hbox {Dim}}\nolimits(X)$; and if $d=\mathop{\hbox {Dim}}\nolimits(X)$, it is (for generic $x$) a nonzero polynomial $C_{X,x}$ in $\lambda$ called the character polynomial. Up to a constant, this character polynomial is equal to the Goldie rank polynomial attached to $X$. In other words, the Goldie rank polynomial expresses the dependence on $\lambda$ of the leading term in the Taylor expansion of the numerator of the character of $X_\lambda$ on the maximally split Cartan $H_{s}$. For more details, see \cite{K1} and also \cite{J1II}, Corollary 3.6. The next theorem shows that the index polynomial we studied in Section \ref{section Weyl group} is the exact analogue of King's character polynomial, but attached to the character on the compact Cartan subgroup instead of the maximally split Cartan subgroup. \begin{thm} \label{ind=char} Let $X$ be a $(\mathfrak{g},K)$-module with regular infinitesimal character and let $X_\lambda$ be the corresponding coherent family on the compact Cartan subgroup.
Write $r_\mathfrak{g}$ (resp. $r_\mathfrak{k}$) for the number of positive $\mathfrak{t}$-roots for $\mathfrak{g}$ (resp. $\mathfrak{k}$). Suppose $y\in \mathfrak{t}_0$ is any regular element. Then the limit \begin{equation} \label{indexpol} \lim_{t\to 0+} t^d \mathop{\hbox {ch}}\nolimits_\mathfrak{g}(X_\lambda)(\exp ty) \end{equation} is zero if $d$ is an integer bigger than $r_\mathfrak{g}-r_\mathfrak{k}$. If $d=r_\mathfrak{g}-r_\mathfrak{k}$, then the limit \eqref{indexpol} is equal to \[ \textstyle{\frac{\prod_{\alpha\in R_\mathfrak{k}^+}\alpha(y)}{\prod_{\alpha\in R_\mathfrak{g}^+}\alpha(y)}}\, Q_X(\lambda), \] where $Q_X$ is the index polynomial attached to $X$ as in (\ref{dim}). In other words, the index polynomial, up to an explicit constant, expresses the dependence on $\lambda$ of the (possibly zero) leading term in the Taylor expansion of the numerator of the character of $X_\lambda$ on the compact Cartan $T$. \end{thm} \begin{proof} The restriction to $K$ of any $G$-representation has a well defined distribution character, known as the $K$-character. The restriction of this $K$-character to the set of elliptic $G$-regular elements in $K$ is a function, equal to the function giving the $G$-character (see \cite{HC}, and also \cite{AS}, (4.4) and the appendix). Therefore Proposition \ref{propindex} and (\ref{index formula}) imply \begin{equation*} \mathop{\hbox {ch}}\nolimits_\mathfrak{g}(X_\lambda)(\exp ty)=\frac{\mathop{\hbox {ch}}\nolimits_\mathfrak{k}(I(X_\lambda))}{\mathop{\hbox {ch}}\nolimits_\mathfrak{k}(S^+-S^-)}(\exp ty). \end{equation*} Also, it is clear that \[ \lim_{t\to 0+} \mathop{\hbox {ch}}\nolimits_\mathfrak{k}(I(X_\lambda))(\exp ty)=\mathop{\hbox {ch}}\nolimits_\mathfrak{k}(I(X_\lambda))(e)=\mathop{\hbox {dim}}\nolimits I(X_\lambda)=Q_X(\lambda). 
\] Therefore the limit (\ref{indexpol}) is equal to \[ \lim_{t\to 0+} t^d \frac{\mathop{\hbox {ch}}\nolimits_\mathfrak{k}(I(X_\lambda))(\exp ty)}{\mathop{\hbox {ch}}\nolimits_\mathfrak{k}(S^+-S^-)(\exp ty)}=Q_X(\lambda)\lim_{t\to 0+} \frac{t^d}{\mathop{\hbox {ch}}\nolimits_\mathfrak{k}(S^+-S^-)(\exp ty)}. \] On the other hand, it is well known and easy to check that \[ \mathop{\hbox {ch}}\nolimits_\mathfrak{k}(S^+-S^-)=\frac{d_\mathfrak{g}}{d_\mathfrak{k}}, \] where $d_\mathfrak{g}$ (resp. $d_\mathfrak{k}$) denotes the Weyl denominator for $\mathfrak{g}$ (resp. $\mathfrak{k}$). It is immediate from the product formula \eqref{Weyldenominator} that \[ d_\mathfrak{g}(\exp ty)=t^{r_\mathfrak{g}}\prod_{\alpha\in R_\mathfrak{g}^+} \alpha(y) + \text{ higher order terms in } t \] and similarly \[ d_\mathfrak{k}(\exp ty)=t^{r_\mathfrak{k}}\prod_{\alpha\in R_\mathfrak{k}^+}\alpha(y) + \text{ higher order terms in } t. \] So we see that \[ \lim_{t\to 0+} \frac{t^d}{\mathop{\hbox {ch}}\nolimits_\mathfrak{k}(S^+-S^-)(\exp ty)}=\lim_{t\to 0+} t^{d-r_\mathfrak{g}+r_\mathfrak{k}} \frac{\prod_{\alpha\in R_\mathfrak{k}^+}\alpha(y)}{\prod_{\alpha\in R_\mathfrak{g}^+} \alpha(y)}. \] The theorem follows. \end{proof} We are now going to consider some examples (of discrete series representations) where we compare the index polynomial and the Goldie rank polynomial. To do so, we identify the compact Cartan subalgebra with the maximally split one using a Cayley transform. Recall that if $X$ is a discrete series representation with Harish-Chandra parameter $\lambda$, then \[ I(X)= \pm H_D(X)= \pm E_\lambda, \] where $E_\lambda$ denotes the $\widetilde{K}$-type with infinitesimal character $\lambda$. (The sign depends on the relation between the positive system defined by $\lambda$ and the fixed one used in Section \ref{section index} to define the index. See \cite{HP1}, Proposition 5.4, or \cite{HP2}, Corollary 7.4.5.)
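The leading-term expansions of the Weyl denominators used in the proof of Theorem \ref{ind=char} are easy to confirm numerically. Below is a minimal sketch for $\mathfrak{g}=\mathfrak{sl}(3,\mathbb{C})$ (the choice of algebra and of the regular element $y$ are ours, made only for concreteness): each factor of the product formula is $2\sinh(t\alpha(y)/2)=t\alpha(y)+O(t^3)$, so $d_\mathfrak{g}(\exp ty)=t^{r_\mathfrak{g}}\prod_{\alpha>0}\alpha(y)+O(t^{r_\mathfrak{g}+2})$.

```python
import math

# Positive roots of sl(3): e_i - e_j for i < j, viewed as functionals
# on the diagonal Cartan {y : y_1 + y_2 + y_3 = 0}.
def pos_roots(n):
    return [(i, j) for i in range(n) for j in range(i + 1, n)]

def weyl_denominator(y, t):
    # d(exp ty) = prod over positive roots of (e^{t a(y)/2} - e^{-t a(y)/2})
    d = 1.0
    for i, j in pos_roots(len(y)):
        a = y[i] - y[j]
        d *= 2.0 * math.sinh(t * a / 2.0)
    return d

y = (1.0, -0.25, -0.75)        # a generic regular element with sum zero
r = len(pos_roots(len(y)))     # r_g = 3 positive roots for sl(3)
lead = 1.0
for i, j in pos_roots(len(y)):
    lead *= y[i] - y[j]        # prod_{a > 0} a(y)

t = 1e-3
ratio = weyl_denominator(y, t) / t**r
# d(exp ty) = t^r * prod a(y) + higher order terms:
assert abs(ratio / lead - 1.0) < 1e-5
```

With $t=10^{-3}$ the relative error is of order $t^2$, consistent with the absence of a $t^{r_\mathfrak{g}+1}$ term in the expansion of the product of hyperbolic sines.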
The index polynomial $Q_X$ is then given by the Weyl dimension formula for this $\widetilde{K}$-type, i.e., by \begin{equation} \label{indexds} Q_X(\lambda)=\prod_{\alpha\in R_\mathfrak{k}^+} \frac{\langle\lambda,\alpha\rangle}{\langle\rho_\mathfrak{k},\alpha\rangle}. \end{equation} Comparing this with \cite{K2}, Proposition 3.1, we get: \begin{prop} \label{holods} Suppose $G$ is linear, semisimple and of Hermitian type. Let $X$ be the $(\mathfrak{g},K)$-module of a holomorphic discrete series representation. Then the index polynomial $Q_X$ coincides with the Goldie rank polynomial $P_X$ up to a scalar multiple. \end{prop} Of course, $Q_X$ is not always equal to $P_X$, since the degrees of these two polynomials are different in most cases. In the following we consider the example of discrete series representations for $SU(n,1)$. The choice is dictated by the existence of explicit formulas for the Goldie rank polynomials computed in \cite{K2}. The discrete series representations for $SU(n,1)$ with a fixed infinitesimal character can be parametrized by integers $i\in [0,n]$. To see how this works, we introduce some notation. First, we take for $K$ the group $S(U(n)\times U(1))\cong U(n)$. The compact Cartan subalgebra $\mathfrak{t}$ consists of diagonal matrices, and we identify it with $\mathbb{C}^{n+1}$ in the usual way. We make the usual choice for the dominant $\mathfrak{k}$-chamber $C$: it consists of those $\lambda\in\mathbb{C}^{n+1}$ for which \[ \lambda_1\geq\lambda_2\geq\dots \geq\lambda_n. \] Then $C$ is the union of $n+1$ $\mathfrak{g}$-chambers $D_0,\dots,D_n$, where $D_0$ consists of $\lambda\in C$ such that $\lambda_{n+1}\leq \lambda_n$, $D_n$ consists of $\lambda\in C$ such that $\lambda_{n+1}\geq \lambda_1$, and for $1\leq i\leq n-1$, \[ D_i=\{\lambda\in C\,\big|\, \lambda_{n-i}\geq \lambda_{n+1}\geq\lambda_{n-i+1}\}. 
\] Now for $i\in [0,n]$, and for $\lambda\in D_i$, which is regular for $\mathfrak{g}$ and analytically integral for $K$, we denote by $X_\lambda(i)$ the discrete series representation with Harish-Chandra parameter $\lambda$. We use the same notation for the corresponding $(\mathfrak{g},K)$-module. For $i=0$, $X_\lambda(i)$ is holomorphic and this case is settled by Proposition \ref{holods}; the result is that both the index polynomial and the Goldie rank polynomial are proportional to the Vandermonde determinant \begin{equation} \label{vandermonde} V(\lambda_1,\dots,\lambda_n)=\prod_{1\leq p<q\leq n}(\lambda_p-\lambda_q). \end{equation} The case $i=n$ of antiholomorphic discrete series representations is analogous. For $1\leq i\leq n-1$, the index polynomial of $X_\lambda(i)$ is still given by (\ref{vandermonde}). On the other hand, the character polynomial is up to a constant multiple given by the formula (6.5) of \cite{K2}, as the sum of two determinants. We note that King's expression can be simplified and that the character polynomial of $X_\lambda(i)$ is in fact equal to \begin{equation} \label{chards} \left|\begin{matrix} \lambda_1^{n-2}&\dots&\lambda_{n-i}^{n-2}&\lambda_{n-i+1}^{n-2}&\dots&\lambda_n^{n-2} \cr \lambda_1^{n-3}&\dots&\lambda_{n-i}^{n-3}&\lambda_{n-i+1}^{n-3}&\dots&\lambda_n^{n-3} \cr \vdots&&\vdots&\vdots&&\vdots \cr \lambda_1&\dots&\lambda_{n-i}&\lambda_{n-i+1}&\dots&\lambda_n \cr 1&\dots&1&0&\dots&0 \cr 0&\dots&0&1&\dots&1 \end{matrix} \right| \end{equation} \smallskip \noindent up to a constant multiple. For $i=1$, (\ref{chards}) reduces to the Vandermonde determinant $V(\lambda_1,\dots,\lambda_{n-1})$. Similarly, for $i=n-1$, we get $V(\lambda_2,\dots,\lambda_n)$. In these cases, the Goldie rank polynomial divides the index polynomial. For $2\leq i\leq n-2$, the Goldie rank polynomial is more complicated. 
For example, if $n=4$ and $i=2$, (\ref{chards}) becomes \[ -(\lambda_1-\lambda_2)(\lambda_3-\lambda_4)(\lambda_1+\lambda_2-\lambda_3-\lambda_4), \] and this does not divide the index polynomial. For $n=5$ and $i=2$, (\ref{chards}) becomes \begin{multline*} -(\lambda_1-\lambda_2)(\lambda_1-\lambda_3)(\lambda_2-\lambda_3)(\lambda_4-\lambda_5) \\ (\lambda_1\lambda_2+\lambda_1\lambda_3-\lambda_1\lambda_4-\lambda_1\lambda_5+\lambda_2\lambda_3 -\lambda_2\lambda_4-\lambda_2\lambda_5-\lambda_3\lambda_4-\lambda_3\lambda_5+\lambda_4^2+\lambda_4\lambda_5+\lambda_5^2), \end{multline*} and one can check that the quadratic factor is irreducible. More generally, for any $n\geq 4$ and $2\leq i\leq n-2$, the Goldie rank polynomial (\ref{chards}) is divisible by $(\lambda_p-\lambda_q)$ whenever $1\leq p<q\leq n-i$ or $n-i+1\leq p<q\leq n$. This is proved by subtracting the $q$th column from the $p$th column. On the other hand, if $1\leq p\leq n-i<q\leq n$, we claim that (\ref{chards}) is not divisible by $(\lambda_p-\lambda_q)$. Indeed, we can substitute $\lambda_q=\lambda_p$ into (\ref{chards}) and subtract the $q$th column from the $p$th column. After this we develop the determinant with respect to the $p$th column. The resulting sum of two determinants is equal to the Vandermonde determinant $V(\lambda_1,\dots,\lambda_{p-1},\lambda_{p+1},\dots,\lambda_n)$, and this is not identically zero. This proves that for $X=X_\lambda(i)$ the greatest common divisor of $P_X$ and $Q_X$ is \begin{equation} \label{gcd} \prod_{1\leq p<q\leq n-i}(\lambda_p-\lambda_q)\prod_{n-i+1\leq r<s\leq n}(\lambda_r-\lambda_s). \end{equation} Comparing with the simple roots $\Psi_i$ corresponding to the chamber $D_i$ described on p. 294 of \cite{K2}, we see that the linear factors of (\ref{gcd}) correspond to roots generated by the compact part of $\Psi_i$. 
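The simplified form (\ref{chards}) of the character polynomial can be spot-checked exactly. The following sketch (our own verification aid, assuming nothing beyond the determinant written above) evaluates the $n=4$, $i=2$ case of (\ref{chards}) at several integer points in exact rational arithmetic and compares it with the factored expression $-(\lambda_1-\lambda_2)(\lambda_3-\lambda_4)(\lambda_1+\lambda_2-\lambda_3-\lambda_4)$.

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    # Determinant by permutation expansion (fine for a 4 x 4 matrix).
    n = len(m)
    total = Fraction(0)
    for p in permutations(range(n)):
        sgn = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sgn = -sgn
        term = Fraction(sgn)
        for i in range(n):
            term *= m[i][p[i]]
        total += term
    return total

def char_poly_matrix(lam):
    # The n = 4, i = 2 case of (chards): power rows lambda^{n-2}, ...,
    # lambda^1, then the two block-indicator rows.
    l1, l2, l3, l4 = lam
    return [[l1**2, l2**2, l3**2, l4**2],
            [l1, l2, l3, l4],
            [1, 1, 0, 0],
            [0, 0, 1, 1]]

def factored(lam):
    l1, l2, l3, l4 = lam
    return -(l1 - l2) * (l3 - l4) * (l1 + l2 - l3 - l4)

samples = [(1, 2, 3, 4), (5, -1, 2, 7), (0, 3, -2, 6), (9, 4, 1, -5)]
for pt in samples:
    lam = tuple(Fraction(a) for a in pt)
    assert det(char_poly_matrix(lam)) == factored(lam)
```

Agreement at a handful of generic points is of course only a spot check; the actual identity follows from the column manipulations described above.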
On the other hand, the set of compact roots in $\Psi_i$ is equal to the $\tau$-invariant of $X_\lambda(i)$, as proved in \cite{HS}, Proposition 3.6 (see also \cite{K1}, Remark 4.5). Recall that the $\tau$-invariant of a $(\mathfrak{g},K)$-module $X$ consists of the simple roots $\alpha$ such that the translate of $X$ to the wall defined by $\alpha$ is 0; see \cite{V1}, Section 4. In particular, we have checked a special case of the following proposition. \begin{prop} \label{tau} Assume that $G$ is a real reductive Lie group in the Harish-Chandra class and that $G$ and $K$ have equal rank. Let $X$ be the discrete series representation of $G$ with Harish-Chandra parameter $\lambda$. Then the index polynomial $Q_X$ and the Goldie rank polynomial $P_X$ are both divisible by the product of linear factors corresponding to the roots generated by the $\tau$-invariant of $X$. \end{prop} \begin{proof} The $\tau$-invariant of $X$ is still given as above, as the compact part of the simple roots corresponding to $\lambda$. In particular, the roots generated by the $\tau$-invariant are all compact, and the corresponding factors divide $Q_X$, which is given by (\ref{indexds}). On the other hand, by \cite{V1}, Proposition 4.9, the Goldie rank polynomial is always divisible by the factors corresponding to roots generated by the $\tau$-invariant. We note that the result in \cite{V1} is about the Bernstein degree polynomial, which is up to a constant factor equal to the Goldie rank polynomial by \cite{J1II}, Theorem 5.7. \end{proof} Note that for $G=SU(n,1)$, the result we obtained is stronger than the conclusion of Proposition \ref{tau}. Namely, we proved that the product of linear factors corresponding to the roots generated by the $\tau$-invariant of $X$ is in fact the greatest common divisor $R$ of $P_X$ and $Q_X$. We note that it is easy to calculate the degrees of all the polynomials involved. Namely, if $2\leq i\leq n-2$, the degree of $R$ is $\binom{i}{2}+\binom{n-i}{2}$. 
Since $\mathop{\hbox {Dim}}\nolimits(X)=2n-1$ (see \cite{K2}), and $\sharp R_{\mathfrak{g}}^{+}=\binom{n+1}{2}$, the degree of $P_X$ is $\binom{n-1}{2}$. It follows that the degree of $P_X/R$ is $i(n-i)-(n-1)$. On the other hand, since the degree of $Q_X$ is $\sharp R_{\mathfrak{k}}^{+}=\binom{n}{2}$, the degree of $Q_X/R$ is $i(n-i)$. \section{Index polynomials and nilpotent orbits} \label{orbits} \begin{subequations}\label{Korbit} Assume again that we are in the setting \eqref{se:cohintro} of the introduction, so that $Y=Y_{\lambda_0}$ is an irreducible $(\mathfrak{g},K)$-module. (We use a different letter from the $X$ in the introduction as a reminder that we will soon be imposing some much stronger additional hypotheses on $Y$.) Recall from \eqref{multintro} the expression \begin{equation} \Ass(Y_\lambda) = \coprod_{j=1}^r m^j_Y(\lambda) \overline{{\mathcal O}^j} \qquad (\lambda \in (\lambda_0 + \Lambda)^+), \end{equation} and the fact that each $m^j_Y$ extends to a polynomial function on $\mathfrak{t}^*$, which is a multiple of the Goldie rank polynomial: \begin{equation} m^j_Y = a^j_Y P_Y, \end{equation} with $a^j_Y$ a nonnegative rational number depending on $Y$. On the other hand, the Weyl dimension formula for $\mathfrak{k}$ defines a polynomial on the dual $\mathfrak{t}^*$ of the compact Cartan subalgebra of $\mathfrak{g}$, with degree equal to the number $\sharp R_{\mathfrak{k}}^{+}$ of positive roots for $\mathfrak{k}$. Write $\sigma_{K}$ for the representation of the Weyl group $W_{\mathfrak{g}}$ generated by this polynomial. Suppose that $\sigma_{K}$ is a Springer representation, i.e., it is associated with a nilpotent $G_{\bC}$-orbit ${\mathcal O}_{K}$: \begin{equation}\label{assum1} \sigma_K \overset{\text{Springer}}\longleftrightarrow {\mathcal O}_K \subset \mathfrak{g}^*. \end{equation} Here $G_\mathbb{C}$ denotes a connected complex reductive algebraic group having Lie algebra $\mathfrak{g}$.
Assume also that there is a Harish-Chandra module $Y$ of regular infinitesimal character $\lambda_0$ such that \begin{equation}\label{assum2} {\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(Y)))=\overline{{\mathcal O}_{K}}. \end{equation} Recall from the discussion before (\ref{eq:Korbit}) that ${\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(Y)))$ is the variety associated with the graded ideal of $\Ann(Y)$ in the symmetric algebra $S(\mathfrak{g})$. Our assumptions force the degree of the Goldie rank polynomial $P_Y$ attached to $Y$ to be \[ \sharp R_{\mathfrak{g}}^{+} - \mathop{\hbox {Dim}}\nolimits(Y)=\sharp R_{\mathfrak{g}}^{+}-\half\mathop{\hbox {dim}}\nolimits {\mathcal O}_{K}=\half(\mathop{\hbox {dim}}\nolimits \mathcal{N}-\mathop{\hbox {dim}}\nolimits\mathcal{O}_K)=\sharp R_{\mathfrak{k}}^{+}, \] where $\mathcal{N}$ denotes the cone of nilpotent elements in $\mathfrak{g}^{\star}$. In other words, the Goldie rank polynomial $P_Y$ has the same degree as the index polynomial $Q_Y$. We conjecture that for representations attached to ${\mathcal O}_K$, the index polynomial admits an expression analogous to \eqref{eq:multchar}. \end{subequations} \begin{conj} \label{conj} Assume that the $W_{\mathfrak{g}}$-representation $\sigma_K$ generated by the Weyl dimension formula for $\mathfrak{k}$ corresponds to a nilpotent $G_{\bC}$-orbit ${\mathcal O}_{K}$ via the Springer correspondence. Then for each $K_\bC$-orbit ${\mathcal O}_{K}^{j}$ on ${\mathcal O}_{K}\cap (\mathfrak{g}/\mathfrak{k})^{\star}$, there exists an integer $c_{j}$ such that for any Harish-Chandra module $Y$ for $G$ satisfying ${\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(Y))) \subset \overline{{\mathcal O}_{K}}$, we have \begin{equation*} Q_{Y}=\sum_{j}c_{j}m_{Y}^j. \end{equation*} Here $Q_{Y}$ is the index polynomial attached to $Y$ as in Section \ref{section Weyl group}. \end{conj} \begin{ex} {\rm Consider $G=SL(2,\bR)$ with $K=SO(2)$.
Then $\sigma_{K}$ is the trivial representation of $W_{\mathfrak{g}}\simeq\bZ/2\bZ$ and ${\mathcal O}_{K}$ is the principal nilpotent orbit. ${\mathcal O}_{K}$ has two real forms ${\mathcal O}_{K}^{1}$ and ${\mathcal O}_{K}^{2}$. One checks from our computations in Example \ref{exds_sl2} and from the table below that $c_{1}=1$ and $c_{2}=-1$. This shows that the conjecture is true in the case when $G=SL(2,\bR)$. \vspace*{0.2cm}\\ \begin{center} \begin{tabular}{|l|c|r|} \hline $\hspace*{2.5cm}Y$ & ${\mathcal V}(Y)$ & $Q_{Y}$ \\ \hline finite-dimensional modules & $\{0\}$ & $0$ \\ \hline holomorphic discrete series & ${\mathcal O}_{K}^{1}$ & $1$ \\ \hline antiholomorphic discrete series & ${\mathcal O}_{K}^{2}$ & $-1$ \\ \hline principal series & ${\mathcal O}_{K}^{1}\cup {\mathcal O}_{K}^{2}$ & $0$ \\ \hline \end{tabular} \end{center} \vspace*{0.5cm} Here ${\mathcal V}(Y)\subset {\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(Y)))$ is the associated variety of $Y$. } \end{ex} \begin{ex} \label{ex_su1n} {\rm Let $n>1$ and let $G=SU(1,n)$ with $K=U(n)$. Then ${\mathcal O}_K$ is the minimal nilpotent orbit of dimension $2n$. It has two real forms ${\mathcal O}_{K}^{1}$ and ${\mathcal O}_{K}^{2}$. The holomorphic and antiholomorphic discrete series representations $Y^1_\lambda$ and $Y^2_\lambda$ all have Gelfand-Kirillov dimension equal to $n$. By \cite{Ch}, Corollary 2.13, the respective associated cycles are equal to \[ \Ass(Y^i_\lambda)=m^i_{Y^i}(\lambda) {\mathcal O}_{K}^{i},\qquad i=1,2, \] with the multiplicity $m^i_{Y^i}(\lambda)$ equal to the dimension of the lowest $K$-type of $Y^i_\lambda$. The index of the holomorphic discrete series representations is the lowest $K$-type shifted by a one dimensional representation of $K$ with weight $\rho(\mathfrak{p}^-)$, so it has the same dimension as the lowest $K$-type. The situation for the antiholomorphic discrete series representations is analogous, but there is a minus sign. 
Hence \[ m^i_{Y^i}(\lambda) = (-1)^{i-1}Q_{Y^i}(\lambda),\qquad i=1,2. \] This already forces the coefficients $c_1$ and $c_2$ from Conjecture \ref{conj} to be $1$ and $-1$, respectively. Since ${\mathcal O}_K$ is the minimal orbit, it follows that for infinite-dimensional $Y$, \[ {\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(Y))) \subseteq \overline{{\mathcal O}_{K}}\quad\Rightarrow\quad {\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(Y))) = \overline{{\mathcal O}_{K}}. \] \medskip If ${\mathcal V}(\mathop{\hbox {gr}}\nolimits(\Ann(Y))) = \overline{{\mathcal O}_{K}}$ and $Y$ is irreducible, then $\mathcal{V}(Y)$ must be either $\overline{\mathcal{O}_K^1}$ or $\overline{\mathcal{O}_K^2}$. This follows from minimality of $\mathcal{O}_K$ and from \cite{V3}, Theorem 1.3. Namely, the codimension of the boundary of $\mathcal{O}_K^i$ in $\overline{\mathcal{O}_K^i}$ is $n\geq 2$. On the other hand, by \cite{KO}, Lemma 3.5, $\mathcal{V}(Y)=\overline{\mathcal{O}_K^i}$ implies $Y$ is holomorphic if $i=1$, respectively antiholomorphic if $i=2$. Let us assume $i=1$; the other case is analogous. It is possible to write $Y$ as a $\mathbb{Z}$-linear combination of generalized Verma modules; see for example \cite{HPZ}, Proposition 3.6. So we see that it is enough to check the conjecture assuming $Y$ is a generalized Verma module. In this case, one easily computes that $I(Y)$ is the lowest $K$-type of $Y$ shifted by the one-dimensional $\widetilde{K}$-module with weight $\rho(\mathfrak{p}^-)$; see \cite{HPZ}, Lemma 3.2. So the index polynomial is the dimension of the lowest $K$-type. By \cite{NOT}, Proposition 2.1, this is exactly the same as the multiplicity $m^1_Y$ of $\overline{\mathcal{O}_K^1}$ in the associated cycle. This proves the conjecture in this case (with $c_1=1$). 
} \end{ex} Whenever $G$ is a simple group with a Hermitian symmetric space, the associated varieties ${\mathcal O}_K^1$ and ${\mathcal O}_K^2$ of holomorphic and antiholomorphic discrete series are real forms of a complex orbit ${\mathcal O}_K$ attached by the Springer correspondence to $\sigma_K$. The argument above proves Conjecture \ref{conj} for holomorphic and antiholomorphic representations. But in general there can be many more real forms of ${\mathcal O}_K$, and the full statement of Conjecture \ref{conj} is not so accessible. \medskip We mention that neither of the two assumptions (\ref{assum1}) and (\ref{assum2}) above is automatically fulfilled. Below, we list the classical groups for which the assumption (\ref{assum1}) is satisfied, i.e.\ the classical groups for which $\sigma_{K}$ is a Springer representation. To check whether $\sigma_K$ is a Springer representation, we proceed as follows (see \cite{Car}, Chapters 11 and 13): \begin{itemize} \item[(i)] we identify $\sigma_K$ as a Macdonald representation; \item[(ii)] we compute the symbol of $\sigma_K$; \item[(iii)] we write down the partition associated with this symbol; \item[(iv)] we check whether the partition corresponds to a complex nilpotent orbit. 
\end{itemize} Recall that complex nilpotent orbits in classical Lie algebras are in one-to-one correspondence with the set of partitions $\lbrack d_1,\cdots,d_k\rbrack$ with $d_1\geq d_2\geq\cdots\geq d_k\geq 1$ such that (see \cite{CM}, Chapter 5): \begin{itemize} \item[$\bullet$] $d_{1}+d_{2}+\cdots+d_{k}=n$, when $\mathfrak{g}\simeq\mathfrak{s}\mathfrak{l}(n,\mathbb{C})$; \item[$\bullet$] $d_{1}+d_{2}+\cdots+d_{k}=2n+1$ and the even $d_j$ occur with even multiplicity, when $\mathfrak{g}\simeq\mathfrak{s}\mathfrak{o} (2n+1,\mathbb{C})$; \item[$\bullet$] $d_{1}+d_{2}+\cdots+d_{k}=2n$ and the odd $d_j$ occur with even multiplicity, when $\mathfrak{g}\simeq\mathfrak{s}\mathfrak{p}(2n,\mathbb{C})$; \item[$\bullet$] $d_{1}+d_{2}+\cdots+d_{k}=2n$ and the even $d_j$ occur with even multiplicity, when $\mathfrak{g}\simeq\mathfrak{s}\mathfrak{o} (2n,\mathbb{C})$; except that the partitions having all the $d_j$ even and occurring with even multiplicity are each associated to {\em two} orbits. \end{itemize} For example, when $G=SU(p,q)$, with $q\geq p\geq 1$, the Weyl group $W_\mathfrak{g}$ is the symmetric group $S_{p+q}$, and $W_\mathfrak{k}$ can be identified with the subgroup $S_p\times S_q$. The representation $\sigma_K$ is parametrized, as a Macdonald representation, by the partition $\lbrack 2^p,1^{q-p}\rbrack$ (see \cite{M} or Proposition 11.4.1 in \cite{Car}). This partition corresponds to a $2pq$-dimensional nilpotent orbit, so $\sigma_K$ is Springer. Note that when $\mathfrak{g}$ is of type $A_n$, there is no symbol to compute, and any irreducible representation of $W_\mathfrak{g}$ is a Springer representation. When $G=SO_e(2p,2p+1)$, with $p\geq 1$, the group $W_\mathfrak{k}$ is generated by a root subsystem of type $D_p\times B_p$. 
In this case, $\sigma_K$ is parametrized by the pair of partitions $(\lbrack \alpha\rbrack,\lbrack\beta\rbrack)=(\lbrack 1^p\rbrack,\lbrack1^p\rbrack)$ and its symbol is the array \[ \begin{pmatrix} 0&&2&&3&&\cdots&&p+1\cr &1&&2&&\cdots &&p& \end{pmatrix}. \] (See \cite{L} or Proposition 11.4.2 in \cite{Car}.) The partition of $4p+1$ associated with this symbol is $\lbrack 3,2^{2p-2},1^2\rbrack$. This partition corresponds to a $2p(2p+1)$-dimensional nilpotent orbit, i.e., $\sigma_K$ is a Springer representation. When $G=Sp(p,q;\mathbb{R})$, with $q> p\geq 1$, the Weyl group $W_\mathfrak{k}$ is generated by a root subsystem of type $C_p\times C_q$ so that $\sigma_K$ is parametrized by the pair of partitions $(\lbrack \alpha\rbrack,\lbrack\beta\rbrack)=(\lbrack \emptyset\rbrack,\lbrack2^p,1^{q-p}\rbrack)$. Its symbol is the array \[ \begin{pmatrix} 0&&1&&2&&\cdots&&q\cr &1&&2&&\cdots &&q+1& \end{pmatrix}, \] where in the second line there is a jump from $q-p$ to $q-p+2$. (See \cite{L} or Proposition 11.4.3 in \cite{Car}.) The partition of $2p+2q$ associated with this symbol is $\lbrack 3,2^{2p-2},1^{2(q-p)+1}\rbrack$. This partition does not correspond to a nilpotent orbit, i.e., $\sigma_K$ is not a Springer representation. 
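The parity and dimension bookkeeping in the examples above is mechanical, so it can be sanity-checked by a short script. The sketch below is illustrative only (the helper names are ours); it encodes the partition conditions of \cite{CM} quoted above together with the standard closed formulas for orbit dimensions, and confirms, for instance, that the $Sp(p,q;\mathbb{R})$ partition $\lbrack 3,2^{2p-2},1^{2(q-p)+1}\rbrack$ fails the symplectic parity test for $p=1$, $q=2$.

```python
from collections import Counter

def transpose(d):
    # Dual partition: s_i = number of parts of d that are >= i.
    return [sum(1 for x in d if x >= i) for i in range(1, max(d) + 1)]

def is_orbit_partition(d, algebra):
    # Parity conditions for nilpotent orbits in classical Lie algebras;
    # algebra in {'sl', 'so', 'sp'}, with ambient algebra sl(N), so(N),
    # sp(N) for N = sum(d), so the total-sum condition holds automatically.
    # (Very even partitions in so(2n) label *two* orbits; we only report
    # that the partition is admissible.)
    if algebra == 'sl':
        return True
    bad_parity = 0 if algebra == 'so' else 1  # parts needing even multiplicity
    return all(m % 2 == 0
               for part, m in Counter(d).items() if part % 2 == bad_parity)

def orbit_dim(d, algebra):
    # Closed formulas for the complex dimension of the orbit labelled by d.
    N, s = sum(d), transpose(d)
    sq = sum(x * x for x in s)
    odd = sum(1 for x in d if x % 2 == 1)
    if algebra == 'sl':
        return N * N - sq
    if algebra == 'sp':
        return (N * N + N - sq - odd) // 2
    return (N * N - N - sq + odd) // 2  # 'so'

# SU(2,3): [2^2,1] in sl(5) is admissible, of dimension 2pq = 12.
assert is_orbit_partition([2, 2, 1], 'sl') and orbit_dim([2, 2, 1], 'sl') == 12
# Sp(1,2;R): [3,1^3] in sp(6) fails (the odd part 3 has odd multiplicity).
assert not is_orbit_partition([3, 1, 1, 1], 'sp')
# SO_e(4,5): [3,2^2,1^2] in so(9) is admissible, of dimension 2p(2p+1) = 20.
assert is_orbit_partition([3, 2, 2, 1, 1], 'so')
assert orbit_dim([3, 2, 2, 1, 1], 'so') == 20
```

The same functions reproduce the dimension column of the table that follows.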
\scriptsize{\begin{table}[ht] \addtolength{\tabcolsep}{-6pt} \centering \scalebox{0.81}{ \begin{tabular}{c c c c c} \hline\hline & & & &\\${\bf G}$ & {\bf Generator for $\sigma_{K}$}& {\bf Springer ?} & ${\bf {\mathcal O}_{K}}$ & ${\bf \mathop{\hbox {dim}}\nolimits_{\bb C}({\mathcal O}_{K})}$\\[0.5ex] & & & &\\ \hline\hline & & & &\\ $SU(p,q)$, $q\geq p\geq 1$&\tiny{$\prod\limits_{\stackrel{1\leq i<j\leq p}{p+1\leq i<j\leq p+q}}(X_{i}-X_{j})$ for $p\geq 2$} &\tiny{Yes}&\tiny{$\lbrack 2^p,1^{q-p}\rbrack$}&\tiny{$2pq$} \\[8ex] & $\prod\limits_{2\leq i<j\leq q+1}(X_{i}-X_{j})$ for $q\geq 2$, $p=1$ & & (minimal orbit if $p=1$)& \\[5ex] & $\sigma_{K}$ is trivial for $p=q=1$& & (principal orbit if $p=q=1$)& \\ & & & &\\ \hline\hline & & & &\\ $SO_{e}(2p,2p+1)$, $p\geq 1$& $\prod\limits_{\stackrel{1\leq i<j\leq p}{p+1\leq i<j\leq 2p}}(X_{i}^{2}-X_{j}^{2})\prod\limits_{p+1\leq i\leq 2p}X_{i}$ for $p\geq 2$&Yes &$\lbrack 3,2^{2p-2},1^{2}\rbrack$& $2p(2p+1)$ \\[8ex] & $X_{2}$ for $p=1$& &(subregular orbit if $p=1$) & \\ & & & &\\ \hline\hline & & & &\\ $SO_{e}(2p,2p-1)$, $p\geq 1$\;\;\;\;\;\;& $\prod\limits_{\stackrel{1\leq i<j\leq p}{p+1\leq i<j\leq 2p-1}}(X_{i}^{2}-X_{j}^{2})\prod\limits_{i=p+1}^{2p-1}X_{i}$ for $p\geq 2$&Yes &$\lbrack 3,2^{2p-2}\rbrack$& $2p(2p-1)$ \\ & $\sigma_{K}$ is trivial for $p=1$& &(principal orbit if $p=1$)& \\ & & & &\\ \hline\hline & & & &\\ $SO_{e}(2,2q+1)$, $q\geq 2$& $\prod\limits_{2\leq i<j\leq q+1}(X_{i}^{2}-X_{j}^{2})\prod\limits_{i=2}^{q+1} X_{i}$&Yes &$\lbrack 3,1^{2q}\rbrack$& $2(2q+1)$ \\ & & & & \\ & & & &\\ \hline\hline & & & &\\ $SO_{e}(2p,2q+1)$ &$\prod\limits_{\stackrel{1\leq i<j\leq p}{p+1\leq i<j\leq p+q}}(X_{i}^{2}-X_{j}^{2})\prod\limits_{i=p+1}^{p+q}X_{i}$ &Yes & $\lbrack 3,2^{2p-2},1^{2(q-p)+2}\rbrack$ &$2p(2q+1)$ \\ $q\geq p+1\geq 3$& & & & \\ & & & &\\ \hline\hline & & & &\\ $SO_{e}(2p,2q+1)$ &$\prod\limits_{\stackrel{1\leq i<j\leq p}{p+1\leq i<j\leq p+q}}(X_{i}^{2}-X_{j}^{2})\prod\limits_{i=p+1}^{p+q} X_{i}$ for $q\geq 
2$&No & {\huge -} &{\huge -} \\[8ex] $p\geq q+2\geq 2$ & $X_{p+1}\prod\limits_{1\leq i<j\leq p}(X_{i}^{2}-X_{j}^{2})$ for $q=1$& & & \\[5ex] & $\prod\limits_{1\leq i<j\leq p}(X_{i}^{2}-X_{j}^{2})$ for $q=0$& & &\\ & & & &\\ \hline\hline & & & &\\ $Sp(2n,\bb R)$, $n\geq 1$ &$\prod\limits_{1\leq i<j\leq n}(X_{i}-X_{j})$ for $n\geq 2$& Yes & $\lbrack 2^n\rbrack$&$n(n+1)$ \\[5ex] & $\sigma_{K}$ is trivial for $n=1$& & (principal orbit if $n=1$)& \\ & & & &\\ \hline\hline & & & &\\ $Sp(p,q;\bb R)$, $q\geq p\geq 1$ & \tiny{$\prod\limits_{\stackrel{1\leq i<j\leq p}{p+1\leq i<j\leq p+q}}(X_{i}^{2}-X_{j}^{2})\prod\limits_{i=1}^{p+q} X_{i}$ for $p\geq 2$} & No & {\huge -} & {\huge -} \\[8ex] & $\prod\limits_{2\leq i<j\leq q+1}(X_{i}^{2}-X_{j}^{2})\prod\limits_{1\leq i\leq q+1}X_{i}$ for $q\geq 2$, $p=1$& & \\[5ex] & $X_{1}X_{2}$ for $p=q=1$ & && \\ & & & &\\ \hline\hline & & & &\\ $SO_{e}(2p,2q)$, $q\geq p\geq 1$ & $\prod\limits_{\stackrel{1\leq i<j\leq p}{p+1\leq i<j\leq p+q}}(X_{i}^{2}-X_{j}^{2})$ for $p\geq 2$& Yes &$\lbrack 3,2^{2p-2},1^{2(q-p)+1}\rbrack$&$4pq$ \\[8ex] & $\prod\limits_{2\leq i<j\leq q+1}(X_{i}^{2}-X_{j}^{2})$ for $q\geq 2$, $p=1$ & & & \\[5ex] & $\sigma_{K}$ is trivial for $p=q=1$& & (principal orbit for $p=q=1$) & \\ & & & &\\ \hline\hline & & & &\\ $SO^\star(2n)$, $n\geq 1$ & $\prod\limits_{1\leq i<j\leq n}(X_{i}-X_{j})$ for $n\geq 2$& Yes & $\lbrack 2^{n}\rbrack$ for $n$ even & $n(n-1)$ \\[5ex] & $\sigma_{K}$ is trivial for $n=1$ & & $\lbrack 2^{n-1},1^{2}\rbrack$ for $n$ odd & \\ & & & (trivial orbit if $n=1$)& \\ & & & (minimal orbit if $n=3$)& \\ [1ex] \hline\hline\end{tabular} }\label{tableSpringer} \end{table}} \normalsize \clearpage The following theorem provides a sufficient condition for both assumptions (\ref{assum1}) and (\ref{assum2}) to hold. In contrast with the previous table, it includes exceptional groups. 
\begin{thm}\label{OK} Suppose $G$ is connected semisimple, $T$ is a compact Cartan subgroup in $G$ contained in $K$, and $\lambda_0$ is the Harish-Chandra parameter for a discrete series representation $Y_0$ of $G$. Assume that the set of integral roots for $\lambda_0$ is precisely the set of compact roots, i.e., \begin{equation}\label{exception} \{\alpha \in \Delta(\mathfrak{g},\mathfrak{t})|\lambda_0(\alpha^\vee) \in {\mathbb Z} \}=\Delta(\mathfrak{k},\mathfrak{t}). \end{equation} Then $\sigma_K$ is the Springer representation for a complex nilpotent orbit ${\mathcal O}_K$. Let $\{Y_{\lambda_0+\mu} | \mu\in\Lambda\}$ be the Hecht-Schmid coherent family of virtual representations corresponding to $Y_0$ and form the virtual representation $$Y \overset{\text{def.}}= \sum_{w \in W_\mathfrak{k}} (-1)^w Y_{w\lambda_0}.$$ Then $Y$ is a nonzero integer combination of irreducible representations having associated variety of annihilator equal to $\overline{{\mathcal O}_K}$. \end{thm} \begin{proof} The character of $Y$ on the compact Cartan $T$ is a multiple (by the cardinality of $W_\mathfrak{k}$) of the character of $Y_0$. Consequently the character of $Y$ on $T$ is not zero, so $Y$ is not zero. By construction the virtual representation $Y$ transforms under the coherent continuation action of the integral Weyl group $W(\lambda_0) = W_\mathfrak{k}$ by the sign character of $W(\lambda_0)$. By the theory of $\tau$-invariants of Harish-Chandra modules, it follows that every irreducible constituent of $Y$ must have every simple integral root in its $\tau$-invariant. At any regular infinitesimal character $\lambda_0$ there is a unique maximal primitive ideal $J(\lambda_0)$, characterized by having every simple integral root in its $\tau$-invariant. 
The Goldie rank polynomial for this ideal is a multiple of \[ q_0(\lambda) = \prod_{\langle\alpha^\vee,\lambda_0 \rangle \in \mathbb{N}} \langle \alpha^\vee,\lambda\rangle; \] so the Goldie rank polynomial for every irreducible constituent of $Y$ is a multiple of $q_0$. The Weyl group representation generated by $q_0$ is $\sigma_K$ (see \eqref{Korbit}); so by \cite{BV1}, it follows that the complex nilpotent orbit ${\mathcal O}_0$ attached to the maximal primitive ideal $J(\lambda_0)$ must correspond to $\sigma_K$ as in \eqref{assum1}. At the same time, we have seen that the (nonempty!) set of irreducible constituents of the virtual representation $Y$ all satisfy \eqref{assum2}. \end{proof} Theorem \ref{OK} applies to any real form of $E_6$, $E_7$ and $E_8$, and more generally to any equal rank real form of one root length. It applies as well to $G_2$ (both split and compact forms). The theorem applies to compact forms for any $G$, and in that case ${\mathcal O}_K =\{0\}$. However, for the split $F_4$ and taking $\lambda_0$ a discrete series parameter for the nonlinear double cover, the integral root system (type $C_4$) strictly contains the compact roots (type $C_3 \times C_1$). So the above theorem does not apply to split $F_4$. Nevertheless the representation $\sigma_K$ does correspond to a (special) nilpotent orbit ${\mathcal O}_K$. At regular integral infinitesimal character, there are (according to the representation-theoretic software {\tt atlas}; see \cite{atlas}) exactly $27$ choices for an irreducible representation $Y$ as in (\ref{assum2}). There are two real forms of the orbit ${\mathcal O}_K$. The $Y$'s come in three families (``two-sided cells'') of nine representations each, with essentially the same associated variety in each family. One of the three families contains an $A_\mathfrak{q}(\lambda)$ (with Levi of type $B_3$) and therefore has associated variety equal to one of the two real forms. 
In particular, the condition (\ref{exception}) is sufficient but not necessary for assumptions (\ref{assum1}) and (\ref{assum2}) to hold. Note that for rank one $F_4$, the representation $\sigma_K$ is not in the image of the Springer correspondence. For the classical groups, Theorem \ref{OK} applies to all the cases of one root length, explaining all the ``yes'' answers in Table \ref{tableSpringer} for types $A$ and $D$. In the case of two root lengths, the hypothesis of Theorem \ref{OK} can be satisfied in the noncompact case exactly when $G$ is Hermitian symmetric (so the cases $SO_e(2,2n-1)$ and $Sp(2n,\mathbb{R})$; more precisely, for appropriate nonlinear coverings of these groups). We do not know a simple general explanation for the remaining ``yes'' answers in the table. Just as for $F_4$, the integral root systems for a discrete series parameter $\lambda_0$ are too large for Theorem \ref{OK}: in the case of $SO_e(2p,2q+1)$, for example, the root system for $K$ is $D_p\times B_q$, but (for $p\ge 2$) the integral root system cannot be made smaller than $B_p \times B_q$.
\section{Conclusions and Future Work} \label{section:conclusion} We present a novel toolchain for implementing refined multiparty session types (RMPST), which enables developers to use \textsc{Scribble}\xspace, a protocol description language for multiparty session types, and \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace, a state-of-the-art verification-oriented programming language, to implement a multiparty protocol and statically verify endpoint implementations. To the best of the authors' knowledge, this is the first work on \emph{statically} verified multiparty protocols with \emph{refinement} types. We extend the theory of multiparty session types with data refinements, and present a toolchain that enables developers to \emph{specify} multiparty protocols with data dependencies, and \emph{implement} the endpoints using generated APIs in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace. We leverage the advanced typing system in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace to encode local session types for endpoints, and validate the data dependencies in the protocol statically. The verified endpoint program in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace is extracted into OCaml, where the refinements are \emph{erased} --- adding \emph{no runtime overhead} for refinements. The callback-style API avoids linearity checks of channel usage by internalising communications in generated code. We evaluate our toolchain and demonstrate that our overhead is small compared to an implementation without session types. While refinement types can express the data dependencies of multiparty protocols, their availability in mainstream general-purpose programming languages is limited. For future work, we wish to study how to mix participants with refined implementations and those without, possibly using a gradual typing system \cite{POPL17GradualRefinement, JFP19GradualSession}. 
\section{Evaluation} \label{section:evaluation} We evaluate the expressiveness and performance of our toolchain \texorpdfstring{\textsc{Session$^{\star}$}}{SessionStar}\xspace. We describe the methodology and setup (\cref{subsection:eval-method}), and comment on the compilation time (\cref{subsection:eval-comp}) and the execution time (\cref{subsection:eval-exec}). We demonstrate the expressiveness of \texorpdfstring{\textsc{Session$^{\star}$}}{SessionStar}\xspace (\cref{subsection:eval-expressive}) by implementing examples from the session type literature and comparing with related work. The source files of the benchmarks used in this section are included in our artifact, along with a script to reproduce the results. \subsection{Methodology and Setup} \label{subsection:eval-method} We measure the time to generate the CFSM representation from a \textsc{Scribble}\xspace protocol (\emph{CFSM}), and the time to generate \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace code from the CFSM representation (\emph{\texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace APIs}). Since the generated APIs in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace need to be type-checked before use, we also measure the type-checking time for the generated code (\emph{Gen.\@ Code}). Finally, we provide a simple implementation of the callbacks and measure the type-checking time for the callbacks against the generated type (\emph{Callbacks}). To execute the protocols, we need a network transport to connect the participants by providing appropriate sending and receiving primitives. In our experiment setup, we use the standard library module \code{FStar.Tcp} to establish TCP connections between participants, and provide a simple serialisation module for base types. Due to the small size of our payloads, we set \code{TCP\_NODELAY} to avoid the delays introduced by the congestion control algorithms. 
Since our entry point to execute the protocol is parameterised by the connection/transport type, the implementation may use other connections if developers wish, e.g.\ an in-memory queue for local setups. We measure the execution time of the protocol (\emph{Execution Time}). \begin{figure} \centering \lstinputlisting[language=Scribble, numbers=none]{fig/PingPong.scr} \vspace{-3mm} \caption{Ping Pong Protocol (Parameterised by Protocol Length $n$)} \label{fig:pingpong} \vspace{-3mm} \end{figure} To measure the overhead of our implementation, we compare against an implementation of the protocol without session types or refinement types, which we call the \emph{bare implementation}. In this implementation, we use the same sending and receiving primitives (i.e.\ \lstinline+connection+) as in the toolchain implementation. The bare implementation consists of a series of direct calls to sending and receiving primitives, for the same communication pattern, but without the generated APIs. We use a Ping Pong protocol (\cref{fig:pingpong}), parameterised by the protocol length, i.e.\ the number of Ping Pong messages $n$ in a protocol iteration. When the protocol length $n$ increases, the number of CFSM states increases linearly, which gives rise to longer generated code and larger generated types. In each Ping Pong message, we include payloads of increasing numbers, and encode the constraints as protocol refinements. We study its effect on the compilation time (\cref{subsection:eval-comp}) and the execution time (\cref{subsection:eval-exec}). We run the experiment on varying sizes of $n$, up to 25. Larger sizes of $n$ lead to unreasonably large resource usage during type-checking in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace. \cref{tab:run-time} reports the results for the Ping Pong protocol in \cref{fig:pingpong}. We run the experiments under a network with a latency of 0.340ms (64-byte ping), and repeat each experiment 30 times. 
Measurements are taken using a machine with Intel i7-7700K CPU (4.20 GHz, 4 cores, 8 threads), 16 GiB RAM, operating system Ubuntu 18.04, OCaml compiler version 4.08.1, \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace compiler commit \code{\href{https://github.com/FStarLang/FStar/commit/8040e34a2c6031276fafd2196b412d415ad4591a}{8040e34a}}, Z3 version 4.8.5. \subsection{Compilation Time} \label{subsection:eval-comp} \mypara{CFSM and \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace Generation Time} We measure the time taken for \textsc{Scribble}\xspace to generate the CFSM from the protocol in \cref{fig:pingpong}, and for the code generation tool to convert the CFSM to \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace APIs. We observe from \cref{tab:run-time} that the generation time for CFSMs and \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace APIs is short. It takes less than 1 second to complete the generation phase for each case. \begin{table} \centering \begin{tabular}{|c|r|r|r|r|c|} \hline Protocol & \multicolumn{2}{c|}{Generation Time} & \multicolumn{2}{c|}{Type Checking Time} & Execution Time \\ \cline{2-5} Length ($n$) & CFSM & \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace APIs & Gen.\@ Code & Callbacks & (100 000 ping-pongs) \\ \hline \hline bare & n/a & n/a & n/a & n/a & 28.79s \\ \hline 1 & 0.38s & 0.01s & 1.28s & 0.34s & 28.75s \\ \hline 5 & 0.48s & 0.01s & 3.81s & 1.12s & 28.82s \\ \hline 10 & 0.55s & 0.01s & 14.83s & 1.34s & 28.84s \\ \hline 15 & 0.61s & 0.01s & 42.78s & 1.78s & n/a \\ \hline 20 & 0.69s & 0.02s & 98.35s & 2.54s & 28.81s \\ \hline 25 & 0.78s & 0.02s & 206.82s & 3.87s & 28.76s \\ \hline \end{tabular} \vspace{1mm} \caption{Time Measurements for Ping Pong Protocol} \label{tab:run-time} \vspace{-7mm} \end{table} \mypara{Type-checking Time of Generated Code and Callbacks} We measure the time taken for the generated APIs to type-check in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace. 
We provide a simple \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace implementation of the callbacks following the generated APIs, and measure the time taken to type-check the callbacks. The increase of type-checking time is non-linear with regard to the protocol length. We encode CFSM states as records corresponding to local typing contexts. As the protocol length grows, the size of the local typing contexts and the number of type definitions grow linearly, which gives rise to a non-linear increase in type-checking time. Moreover, the entry point function is likely to cause non-linear increases in the type-checking time. The long type-checking time of the generated code could be avoided if the developer chooses to trust our toolchain to always generate well-typed \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace code for the entry point. The entry point would be available in an \emph{interface file} (cf.\ OCaml \code{.mli} files), with the actual implementation in OCaml instead of \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace\footnotemark. There would otherwise be no changes in the development workflow. Although the type-checking time of the callback implementation does not fit a linear pattern either, it remains within a reasonable time frame. \footnotetext{Defining a signature in an interface file, and providing an implementation in the target language (OCaml) allows the \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace compiler to \emph{assume} the implementation is correct. This technique is used frequently in the standard library of \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace. This is not to be confused with implementing the endpoints in OCaml instead of \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace, as that would bypass the \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace type-checking. 
} \subsection{Runtime Performance (Execution Time)} \label{subsection:eval-exec} We measure the execution time taken for an exchange of 100,000 ping pongs for the toolchain and bare implementation under the experiment network. The execution time is dominated by network communication, since there is little computation to be performed at each endpoint. We provide a bare implementation using a series of direct invocations of sending and receiving primitives, in a way compatible with the generated APIs. The bare implementation does not involve a record of callbacks, and is anticipated to run faster, since it performs fewer indirect calls through function pointers. Moreover, the bare implementation does not construct \emph{state records}, which record a backlog of the communication as the protocol progresses. To measure the performance impact of the book-keeping of callback and state records, we run the Ping Pong protocol from \cref{fig:pingpong} for a protocol of increasing size (number of states and generated types), i.e.\ for increasing values of $n$. All implementations, including \textit{bare}, are run until 100,000 ping pong messages in total are exchanged\footnote{For $n=1$, we run 100,000 iterations of recursion; for $n=10$, we run 10,000 iterations, etc. The total number of ping pong messages exchanged by the two parties remains the same. }. We summarise the results in \cref{tab:run-time}. Despite the different protocol lengths, there are \emph{no significant changes} in execution time. Since the execution is dominated by time spent on communication, the measurements are subject to network fluctuations, which are difficult to avoid during the experiments. We conclude that our implementation does not impose a large overhead on the execution time. \subsection{Expressiveness} \label{subsection:eval-expressive} We implement examples from the session type literature, and add refinements to encode data dependencies in the protocols. 
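As a concrete illustration of such a data dependency, consider a protocol in which each exchanged value must strictly increase, with the running maximum tracked by a recursion variable. The fragment below is a sketch only: it follows the style of \cref{fig:pingpong}, but the concrete surface syntax for the refinement and recursion-variable annotations shown here is abbreviated rather than taken verbatim from our implementation.

```
global protocol Counter(role A, role B) {
  rec Loop [prev: int = 0] {              // recursion variable: last value sent
    choice at A {
      Next(x: int{x > prev}) from A to B; // refinement: strictly increasing
      continue Loop [prev = x];
    } or {
      Stop() from A to B;
    }
  }
}
```

Roughly, at projection time the refinement on \code{x} becomes an obligation on the callback that produces \code{x} at role \code{A}, and an assumption available to the callback that consumes it at role \code{B}.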
We measure the time taken for code generation and type-checking, and present them in \cref{tab:examples}. The time taken in the toolchain for examples in the session type literature is usually short, and we demonstrate that the examples can be implemented easily with our callback-style API. Moreover, the time taken is incurred at the compilation stage, hence checking refinements incurs no runtime overhead. \begin{table} \centering \small \begin{tabular}{|l|r|c|c|c|c|} \hline Example (Endpoint) & Gen.\@ / TC.\@ Time & MP & RV & IV & STP \\ \hline \hline Two Buyer $^a$ (\code{A}) & 0.46s / 2.33s & \checkmark & \ding{55} & \checkmark & \checkmark ${}^\dagger$ \\ \hline Negotiation $^b$ (\code{C})& 0.46s / 1.59s & \ding{55} & \checkmark & \ding{55} & \ding{55} \\ \hline Fibonacci $^c$ (\code{A}) & 0.44s / 1.58s & \ding{55} & \checkmark & \ding{55} & \ding{55} \\ \hline Travel Agency $^d$ (\code{C})& 0.62s / 2.36s & \checkmark & \ding{55} & \ding{55} & \checkmark${}^\dagger$\\ \hline Calculator $^c$ (\code{C})& 0.51s / 2.30s & \ding{55} & \ding{55} & \ding{55} & \checkmark \\ \hline SH $^e$ (\code{P}) & 1.16s / 4.31s & \checkmark & \ding{55} & \checkmark & \checkmark ${}^\dagger$\\ \hline Online Wallet $^f$ (\code{C}) & 0.62s / 2.67s & \checkmark & \checkmark & \ding{55} & \ding{55} \\ \hline Ticket $^g$ (\code{C}) & 0.45s / 1.90s & \ding{55} & \checkmark & \ding{55} & \ding{55} \\ \hline HTTP $^h$ (\code{S}) & 0.55s / 1.79s & \ding{55} & \ding{55} & \ding{55} & \checkmark ${}^\dagger$\\ \hline \end{tabular}% \footnotesize \begin{tabular}{r|l} MP & Multiparty Protocol \\ RV & Uses Recursion Variables \\ IV & \begin{tabular}{@{}l@{}}Irrelevant Variables \end{tabular}\\ STP & \begin{tabular}{@{}l@{}} Implementable in STP\\ \checkmark ${}^\dagger$ STP requires \emph{dynamic} checks \end{tabular} \\ $^a$ & \cite{JACM16MPST} \\ $^b$ & \cite{DBLP:conf/concur/DemangeonH12}\\ $^c$ & \cite{FASE16EndpointAPI} \\ $^d$ & 
\cite{DBLP:conf/ecoop/HuYH08} \\ $^e$ & \cite{CC18TypeProvider} \\ $^f$ & \cite{RV13SPy} \\ $^g$ & \cite{TGC12MultipartyMultiSession} \\ $^h$ & \cite{RFCHttpSyntax} \end{tabular} \caption{Selected Examples from Literature} \label{tab:examples} \vspace{-10mm} \end{table} We also compare the expressiveness of our work with the two most closely related works, namely \citet{CONCUR10DesignByContract} and \citet{CC18TypeProvider}, which study refinements in MPST (also see \cref{section:related}). \Citet{CC18TypeProvider} (Session Type Provider, STP) implements a limited version of refinements in the \textsc{Scribble}\xspace toolchain. Our version is strictly more expressive than STP for two reasons: (1) support for recursion variables to express invariants and (2) support for irrelevant variables. \cref{fig:new-old-scribble} illustrates those features and \cref{tab:examples} identifies which of the implemented examples use them. \input{fig/fig-new-old-scribble.tex} In STP, when recursion occurs, all information about the variables is lost at the end of an iteration, hence their tool does not support even the simple example in \cref{fig:new-scribble-adder}. In contrast, our work retains the recursion variables, which are available throughout the recursion. Additionally, the endpoint projection in STP is more conservative with regard to refinements. Whilst the refinements attached to a message must not contain variables unknown to the sending role, they may contain variables unknown to the receiving role. The part unknown to the receiving role is discarded (hence weakening the pre-condition). In our work, such information can still be retained and used for type checking, thanks to irrelevant variables. In \citet{CONCUR10DesignByContract}, a global protocol with assertions must be \emph{well-asserted} (\S 3.1). 
In particular, the \emph{history sensitivity} requirement states: \textit{``A predicate guaranteed by a participant $\ppt p$ can only contain those interaction variables that $\ppt p$ knows.''} Our theory lifts this restriction by allowing variables unknown to a sending role to be used in the global or local type, whereas such variables cannot be used in the implementation. For example, \cref{example:gty} fails the well-asserted requirement in \cite{CONCUR10DesignByContract}. In the refinement $\dexp{x = z}$ for variable $\dexp z$ (for message label $\dlbl{Trd}$), the variable $\dexp x$ is not known to $\ppt C$, hence the protocol would not be well-asserted. In our setup, such a protocol is permitted: the endpoint implementation for $\ppt C$ can provide the value $\dexp y$ received from $\ppt B$ to satisfy the refinement type, since the SMT solver can validate the refinement via the transitivity of equality.
\subsection{Targeting \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace and Implementing Endpoints} \label{subsection:fstar-bg} \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace \cite{POPL16FStar} is a verification-oriented programming language with a rich set of features. Our method of API generation and example programs utilise the following \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace features:%
\footnote{A comprehensive \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace{} tutorial is available at \url{https://www.fstar-lang.org/tutorial/}.} \begin{itemize}[leftmargin=*] \item \textbf{Refinement Types.} A \emph{refinement type} has the form \lstinline+x:t{E}+, where \lstinline+t+ is the base type, \lstinline+x+ is a variable that stands for values of this type, and \lstinline+E+ is a pure\footnotemark boolean expression for \emph{refinement}, possibly containing \lstinline+x+.
\footnotetext{Pure in this context means pure terminating computation, i.e.\ no side-effects including global state modifications, I/O actions, infinite recursions, etc.} In short, the values of this refinement type are the \emph{subset} of values of \lstinline+t+ that make \lstinline+E+ evaluate to \lstinline+true+, e.g.\ natural numbers are defined as \lstinline[language=FSharp,basicstyle=\small\ttfamily]+x:int{x+{\small $\geq$}\lstinline+0}+. We use this feature to express data and control flow constraints in protocols. In \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace, refinement types are type-checked with the assistance of the Z3 SMT solver~\cite{TACAS08Z3}. Refinements are encoded into SMT formulae, and the solver decides their satisfiability during type-checking. This feature enables automation in reasoning and removes the need for manual proofs in many scenarios. \item \textbf{Indexed Types.} \emph{Types} can take pure expressions as arguments. For example, a declaration \newline \lstinline[language=FSharp,basicstyle=\small\ttfamily]+type t (i:t') = ...+ prescribes the \emph{family} of types given by applying the type constructor \lstinline+t+ to \emph{values} of type \lstinline+t'+. We use this feature to generate type definitions for payload items in an internal choice, where the refinements in payload types refer to a state type. \item \textbf{Dependent Functions with Effects.} A (dependent) function in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace has a type of the form \linebreak \lstinline+(x:t+${}_1$\lstinline+)+ $\to$\lstinline+ E t+${}_2$, where \lstinline+t+${}_1$ is the argument type, \lstinline+E+ describes the \emph{effect} of the function, and \lstinline+t+${}_2$ is the result type, which may also refer to the argument \lstinline+x+. The default effect is \lstinline+Tot+, for pure total expressions (i.e.\ terminating and side-effect free).
At the other end of the spectrum is the arbitrary effect \lstinline+ML+ (corresponding to all possible side effects in an ML language), which permits state mutation, non-terminating recursion, I/O, exceptions, etc. \item \textbf{The \code{Ghost} Effect and the \code{erased} Type.} A type can be marked \keyword{\small erased} in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace, so that values of such types are not available for computation (after extraction into the target language), but only for proof purposes (during type-checking). The type constructor is accompanied by the \code{Ghost} effect, which marks computationally irrelevant code; the type system prevents the use of erased values in computationally relevant code, so that the values can safely be erased. The following snippet demonstrates this feature: \code{GTot} stands for \code{Ghost} and total, and cannot be mixed with the default pure effect (the function \code{not\_allowed} does not type-check). We use the \keyword{erased} type to mark variables that are known to the endpoint via the protocol specification, but whose values are not known because the endpoint is not a party to the message interaction. For example, in \cref{fig:guess}, the endpoint \code{C} does not know the value of \code{n0}, but knows its type from the protocol.
\begin{center} \begin{minipage}{0.45\textwidth} \begin{lstlisting}[language=FSharp]
type t = { x1: int; x2: erased int; }
(* Definition in standard library *)
val reveal: erased a -> GTot a
\end{lstlisting} \end{minipage} \begin{minipage}{0.5\textwidth} \begin{lstlisting}[language=FSharp]
(* The following access is not allowed *)
let not_allowed (o: t) = reveal o.x2
(* Accessing at type level is allowed *)
val allowed: (o: t{reveal o.x2 >= 0}) ->int
\end{lstlisting} \end{minipage} \end{center} \end{itemize} Our generated code consists of multiple type definitions and an entry point function (as shown in \cref{fig:workflow-detail}, \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace API), including: \begin{description}[leftmargin=*] \item[State Types:] Allowing developers to access variables known at a given CFSM state. \item[Callbacks:] A record of functions corresponding to CFSM transitions, used to implement the program logic of the local endpoint. \item[Connections:] A record of functions for sending and receiving values to and from other roles in the global protocol, used to handle the communication aspects of the local endpoint. \item[Entry Point:] A function taking callbacks and connections to run the local endpoint. \end{description} To implement an endpoint, the developer needs to provide implementations for the generated callback and connection types, using appropriate functions to handle the program logic and communications. The \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace compiler checks whether the implemented functions type-check against the prescribed types. If the endpoint implementation passes type-checking, the developer may choose to extract it to a target language (e.g.\ OCaml, C) for execution.
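As a quick illustration of the \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace features summarised above (this snippet is ours, not generated code; the names \code{even}, \code{vec} and \code{append} are hypothetical):
\begin{lstlisting}[language=FSharp]
(* Illustrative sketch: refinement, indexed and dependent function types *)
type even = x:int{x % 2 = 0}                          (* refinement type *)
type vec (n:nat) = l:list int{List.Tot.length l = n}  (* indexed type *)
(* A dependent function type: the result index depends on the arguments *)
val append: #n:nat -> #m:nat -> vec n -> vec m -> Tot (vec (n + m))
\end{lstlisting}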
\section{Additional Details on Code Generation} \label{section:impl-appendix} \subsection{Communicating Finite State Machines (CFSMs) (Toolchain Internals)} \label{subsection:impl-cfsm} \emph{Communicating Finite State Machines} (CFSMs, \cite{JACM83CFSM}) correspond to local types projected from global types, as shown in \cite{ICALP13CFSM}. We define a CFSM as a tuple $(\mathbb{Q}, q_0, \delta)$, where $\mathbb{Q}$ is a set of states, $q_0 \in \mathbb{Q}$ is an initial state, and $\delta \subseteq \mathbb{Q} \times A \times \mathbb{Q}$ is a transition relation, where $A$ is the set of labelled actions (cf.\ \cref{subsection:theory-semantics}). \mypara{Conversion to Communicating Finite State Machines (CFSMs)} \textsc{Scribble}\xspace follows the projection defined in \cref{subsection:theory-projection}, and projects a global protocol into local types. Local types can be converted easily into a Communicating Finite State Machine (CFSM), such that the resulting CFSM does not have mixed states (i.e.\ no state contains a mixture of sending and receiving outgoing transitions), and its states are directed (i.e.\ they only contain sending or receiving actions towards the same participant)~\cite[Def. 3.4, Prop. 3.1]{ICALP13CFSM}. We follow the same approach to obtain a CFSM from the local types with their typing contexts. The CFSM has the same trace of actions as the local types \cite[Prop. 3.2]{ICALP13CFSM}. We generate \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace code from the CFSM obtained from projection. We generate records for each state to correspond to the typing context (explained in \cref{subsection:impl-typedef-st}), and functions for transitions to correspond to actions (explained in \cref{subsection:impl-handler}). The execution of the CFSM is detailed in \cref{subsection:impl-run-fsm}.
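For illustration only, the tuple $(\mathbb{Q}, q_0, \delta)$ could be sketched as an \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace datatype as follows; the names are ours and do not reflect the toolchain's actual internal representation:
\begin{lstlisting}[language=FSharp]
(* Illustrative sketch of a CFSM, with payload types omitted *)
type act =
  | Send: partner:string -> label:string -> act  (* p!l *)
  | Recv: partner:string -> label:string -> act  (* p?l *)
type cfsm = {
  states: list nat;              (* Q *)
  init: nat;                     (* q0, a member of states *)
  trans: list (nat * act * nat)  (* delta, a subset of Q x A x Q *)
}
\end{lstlisting}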
\subsection{Generated APIs with Refinement Types (Toolchain Output)} \label{subsection:impl-api} Our code generator takes a CFSM as an input to produce type definitions and an entry point to execute the protocol. As previously introduced, our design separates program logic and communications, corresponding to the \emph{callbacks} type (\cref{subsection:impl-handler}) and \emph{connection} type (\cref{subsection:impl-connection}). The generated entry point function takes callbacks and a connection, and runs the protocol; we detail its internals in \cref{subsection:impl-run-fsm}. \subsubsection{Callbacks} \label{subsection:impl-handler} We generate function types for handling transitions in the CFSM, and collect them into a record of \emph{callbacks}. When running the CFSM for a participant, the appropriate callback is invoked when a transition occurs. For sending transitions, the sending callback is invoked to prompt a value to be sent. For receiving transitions, the receiving callback is invoked with the value received, so the developer may use the value for processing. \mypara{Generating Handlers for Receiving States} For a receiving state $q \in \mathbb{Q}$ with receiving action $\ltsmsg{p}{q}{l}{x}{T}$ (assuming the current role is $\ppt q$), the receiving callback is a function that takes the record $\enc{q}$ and the received value of type $T$, and returns \dte{\code{unit}}\ (with possible side-effects). The function signature is given by $$\code{state}q\code{\_receive\_}\dlbl{l}: (\dexp{st}:\enc{q}) \rightarrow \enc{\dte T}_{\dexp{st}} \rightarrow ML~\dte{\code{unit}}$$ The constructor $ML$ is an effect specification in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace, which permits all side effects in an ML language (e.g.\ using references, performing I/O), in contrast to a pure total function permitting no side effects.
$\enc{q}$ is a record corresponding to the local typing context of the state; $\enc{\dte T}_{\dexp{st}}$ is a refinement type whose free variables are bound to the record $\dexp{st}$. We generate one callback for each receiving action, so that it can be invoked upon receiving a message according to the message label. \mypara{Generating Handlers for Sending States} For a sending state $q \in \mathbb{Q}$ with send actions $\ltsmsg{p}{q}{l_i}{x_i}{T_i}$ (assuming the current role is $\ppt p$), for some index $i \in I$, the sending callback is a function that takes the record $\enc{q}$, and returns a disjoint union of allowed messages (with possible side-effects). The constructor of the disjoint union determines the label of the message, and takes the payload as its parameter. The function signature is given by $$\code{state}q\code{\_send} : (\dexp{st}:\enc{q}) \rightarrow ML~\biguplus_{i \in I}\dlbl{l_i}(\enc{\dte T_i}_{\dexp{st}})$$ Unlike receiving callbacks, only one sending callback is generated for each sending state. This reflects the nature of internal choices: the process implementing a sending prefix makes a single selection, whereas the process implementing a receiving prefix must be able to handle all branches. \begin{remark}[Handlers and LTS Transitions] \upshape If the callback returns a choice with the refinements satisfied, the CFSM is able to make a transition to the next state. When a callback is provided against its prescribed type, the callback type is inhabited, and we can invoke the callback to obtain the label and the payload value. A callback function type may be uninhabited, for instance, when none of the choices are applicable. In this case, the endpoint cannot be implemented (we show an example below). Conversely, if the developer provides a callback, then it must be the case that the specified type is inhabited.
In this way, we ensure the protocol is able to make progress, and is not stuck due to empty value types\footnotemark. \end{remark} \footnotetext{Since we use a permissive $ML$ effect in the callback type, the callback may throw exceptions or diverge when it is unable to return a value. Such behaviours are outside the scope of our interest when we discuss progress.} \subsubsection{Connections} \label{subsection:impl-connection} The \emph{connection} type is a record type with functions for sending and receiving base types. The primitives for communications are collected in a record with fields as follows ($\dte S$ ranges over base types \dte{\code{int}}, \dte{\code{bool}}, \dte{\code{unit}}, etc.): \[ \arraycolsep=1pt \begin{array}{lclllcl} \code{send\_}\dte{S} & : & \enc{\ppt{$\mathbb{P}$}} \rightarrow \enc{\dte{S}} \rightarrow ML~\dte{\code{unit}} & \hspace{15mm}\ & \code{recv\_}\dte{S} & : & \enc{\ppt{$\mathbb{P}$}} \rightarrow \dte{\code{unit}} \rightarrow ML~\enc{\dte{S}} \end{array} \] where $\enc{\ppt{$\mathbb{P}$}}$ is a disjoint union of participant roles and $\enc{\dte{S}}$ is the data type for $\dte S$ in the programming language. The communication primitives do not use refinement types in the type signature. We can safely do so by exploiting the property that refinements can be erased at runtime after static type-checking. \subsubsection{State Records with Refinements} \label{subsection:impl-typedef-st} We generate a type $\enc{q}$ for each state $q \in \mathbb{Q}$ in the CFSM. The type $\enc{q}$ is a record type corresponding to the local typing context in the state. For each variable in the local typing context, we define a field in the encoded record type, corresponding to the refinement type in the typing context. Since refinement types allow dependencies in the typing context, we exploit the feature of dependent records in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace to encode the dependencies.
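For instance (an illustrative sketch, with hypothetical names), a typing context containing $\dexp x \mathop{:} \dte{\code{int}}$ followed by $\dexp y \mathop{:} \dte{\code{int}}\{\dexp y > \dexp x\}$ could be encoded as a dependent record where a later field refers to an earlier one:
\begin{lstlisting}[language=FSharp]
(* Sketch: a dependent record encoding a typing context *)
type sQ = {
  x: int;
  y: (y:int{y > x})  (* the refinement refers to the earlier field x *)
}
\end{lstlisting}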
We use the smallest typing context associated with the CFSM state for the generated record type. The typing context can be computed via a graph traversal of the CFSM, accumulating variables in the local type prefix along the traversal. \subsection{Verified Endpoint Implementation (User Input)} To implement an endpoint, a developer needs to provide a record of type \emph{callback}, containing individual callbacks for transitions, and a record of type \emph{connection}, containing functions to send and receive values of different base types. The two records are passed as arguments to the entry point function \code{run} to execute the protocol. The design of the connection record gives the developer the freedom to implement any transport layer satisfying first-in-first-out (FIFO) delivery without loss of messages. These assumptions originate from the network assumptions in MPST. TCP connections are a good candidate for connecting the participants in the protocol, since they satisfy these assumptions. To satisfy the data dependencies specified in the protocol, the provided callbacks must match the generated refinement types. The \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace compiler utilises the Z3 SMT solver \cite{TACAS08Z3} for type-checking, saving developers the need for manual proofs of arithmetic properties. After type-checking, the compiler can extract the \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace code into OCaml (or other targets), where the refinements at the type level are erased. Developers can then compile the extracted OCaml program to execute the protocol. The resulting program has data dependencies verified by \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace using refinement types. Moreover, the MPST theory guarantees that the endpoints are free from deadlocks and communication mismatches, and conform to the global types.
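Concretely, an endpoint program takes roughly the following shape; this is an illustrative sketch, where \code{callbacks\_t}, \code{connect\_tcp} and \code{peers} are hypothetical names rather than the exact generated API:
\begin{lstlisting}[language=FSharp]
(* Sketch: wiring user-supplied callbacks and a connection into run *)
let callbacks: callbacks_t = {
  state2_receive_limit = (fun st t -> ())  (* program logic per transition *)
  (* ... one field per CFSM transition ... *)
}
let main () : ML unit =
  let conn = connect_tcp peers in  (* any lossless FIFO transport suffices *)
  run callbacks conn               (* generated entry point *)
\end{lstlisting}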
\label{subsection:impl-endpoint} \section{Implementing Refined Protocols in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace} \label{section:implementation} In this section, we demonstrate our callback-style, refinement-typed APIs for implementing endpoints in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace \cite{POPL16FStar}. We introduced the workflow of our toolchain earlier (\cref{subsection:overview-overview}). In \cref{subsection:fstar-bg}, we summarise the key features of \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace utilised in our implementation. Using our running example, we explain the generated APIs in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace in \cref{subsection:impl-fstar-api}, and how to implement endpoints in \cref{subsection:impl-fstar-impl}. We outline the function we generate for executing the endpoint in \cref{subsection:impl-run-fsm}. Developers using \texorpdfstring{\textsc{Session$^{\star}$}}{SessionStar}\xspace implement the callbacks in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace, utilising the refinement types provided by the \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace type-checker. The \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace compiler can verify statically whether the implementation provided by the developer satisfies the refinements specified in the protocol. The verified code can be extracted via the \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace compiler to OCaml and executed. The verified implementation enjoys properties not only from the MPST theory, such as \emph{session fidelity} and \emph{deadlock freedom}, but also from refinement types, namely that the data dependencies are verified to be correct with respect to the protocol. Additional details on code generation can be found in \iftoggle{fullversion}{\cref{section:impl-appendix}}{the full version of the paper}.
\input{fstar-background} \subsection{Projection and \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace API Generation -- Communicating Finite State Machine--based Callbacks for Session I/O} \label{subsection:impl-fstar-api} As in the standard MPST workflow, the next step (\cref{fig:workflow-detail}) is to \emph{project} our \emph{refined} global protocol onto each role. This decomposes the global protocol into a set of \emph{local} protocols, prescribing the view of the protocol from each role. Projection is the definitive mechanism in MPST: although all endpoints together must comply with the global protocol, projection allows each endpoint to be \emph{separately} implemented and verified, a key requirement for practical distributed programming. As we shall see, the way projection treats refinements---we must consider the local \emph{knowledge} of values propagated through the multiparty protocol---is crucial to verifying refined implementations, including our simple running example. \mypara{Projection onto \textsf{\textbf{\color{Teal}B}}} \begin{figure}[t] \small \begin{subfigure}{\textwidth} \begin{tikzpicture} [ >=stealth', NODE/.style={draw,circle,minimum width=3mm,inner sep=0pt,fill=lightgray} ] \node[NODE] (B1) {$1$}; \node[NODE,right=18mm of B1] (B2) {$2$}; \node[NODE,right=18mm of B2,label={[align=left,xshift=-13pt,yshift=-4pt]above:{$\mathcolorbox{lightgray}{n\{0 \mathop{\leq} n \mathop{<} 100\},}$\\[-3pt]$\mathcolorbox{lightgray}{t\{0 \mathop{<} t\}}$}}] (B3) {$3$}; \node[NODE,right=18mm of B3,yshift=7mm] (B5) {$5$}; \node[NODE,right=18mm of B3,yshift=-7mm] (B6) {$6$}; \node[NODE,right=18mm of B5,yshift=-7mm] (B4) {$4$}; \node[NODE,right=18mm of B4,yshift=7mm] (B7) {$7$}; \node[NODE,right=18mm of B4,yshift=-7mm] (B8) {$8$}; \node[NODE,right=18mm of B7,yshift=-7mm] (B9) {}; \draw[->] ($(B1.west)-(3mm,0)$) -- (B1); \draw[->] (B1) -- node [above,xshift=-10pt,yshift=4pt]{$A?\mathtt{start(n_0)\{0 \mathop{\leq} n_0 \mathop{<} 100\}}$} (B2); \draw[->] (B2) -- node
[below,xshift=-4pt,yshift=-4pt]{$A?\mathtt{limit(t_0)\{0 \mathop{\leq} t_0\}}$} (B3); \draw[->] (B3) -- node [above,yshift=-1pt]{$C?\mathtt{guess(x)\{0 \mathop{\leq} x \mathop{<} 100\}}$} (B4); \draw[>=angle 60,->,densely dotted] ($(B3.north west)+(-2.5mm,4.5mm)$) -- (B3.north west); \path[->] (B4) edge[bend right=15] node [above,xshift=19pt,yshift=2pt]{$C!\mathtt{higher\{n \mathop{>} x \mathbin{\wedge} t \mathop{>} 1\}}$} (B5); \path[->] (B5) edge[bend right=15] node [above,xshift=10pt,yshift=3pt]{$A!\mathtt{higher}$} (B3); \path[->] (B4) edge[bend left=15] node [below,xshift=19pt,yshift=-2pt]{$C!\mathtt{lower\{n \mathop{<} x \mathbin{\wedge} t \mathop{>} 1\}}$} (B6); \path[->] (B6) edge[bend left=15] node [below]{$A!\mathtt{lower}$} (B3); \path[->] (B4) edge[bend left=15] node [below,xshift=15pt]{$C!\mathtt{win\{n \mathop{=} x\}}$} (B7); \path[->] (B7) edge[bend left=15] node [above,yshift=1pt]{$A!\mathtt{lose}$} (B9); \path[->] (B4) edge[bend right=15] node [above,xshift=28pt,yshift=1pt]{$C!\mathtt{lose\{n \mathop{\neq} x \mathbin{\wedge} t \mathop{=} 1\}}$} (B8); \path[->] (B8) edge[bend right=15] node [below,yshift=-1pt]{$A!\mathtt{win}$} (B9); \end{tikzpicture} \caption{CFSM Representation of the Projection. 
$!$ stands for sending actions, and $?$ for receiving actions on edges.} \label{fig:higherlower_projection_B-cfsm} \end{subfigure} \smallskip \renewcommand{\lstinline[language=FSharp,basicstyle=\small\ttfamily]}{\lstinline[language=FSharp,basicstyle=\small\ttfamily]} {\small \begin{tabular}{l|l} \begin{subfigure}{0.6\textwidth} \begin{tabular}{lll} \multicolumn{3}{l}{\textbf{Generated \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace API}} \\ State & Edge & Generated Callback Type \\ $1$ & $A?\mathtt{start}$ & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+s1 ->(n:int{0<=n<100}) ->ML unit+} \\ $2$ & $A?\mathtt{limit}$ & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+s2 ->(t:int{0<t}) ->ML unit+} \\ $3$ & $C?\mathtt{guess}$ & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+s3 ->(x:int{0<=x}) ->ML unit+} \\ $4$ & [multiple] & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+(s:s4) ->ML (s4Cases s)+} \\ $5$ & $A!\mathtt{higher}$ & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+s5 ->ML unit+} \\ $6$ & $A!\mathtt{lower}$ & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+s6 ->ML unit+} \\ $7$ & $A!\mathtt{lose}$ & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+s7 ->ML unit+} \\ $8$ & $A!\mathtt{win}$ & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+s8 ->ML unit+} \\ \end{tabular} \caption{Generated I/O Callback Types} \label{fig:higherlower_projection_B-types} \end{subfigure} & \begin{subfigure}{0.3\textwidth} \begin{tabular}{l} {\begin{minipage}{\textwidth} {\begin{lstlisting}[language=FSharp,numbers=none,basicstyle={\small\ttfamily}] type s4Cases (s:s4) = | s4_lower of unit{s.n<s.x && s.t>1} | s4_lose of unit{s.n£$\neq$£s.x && s.t=1} | s4_win of unit{s.n=s.x} | s4_higher of unit{s.n>s.x && s.t>1} \end{lstlisting}} \end{minipage}} \end{tabular} \caption{Generated Data Type for the Output Choice} \label{fig:higherlower_projection_B-internal-choice} \end{subfigure} \\ \end{tabular} } \vspace{-3mm} 
\caption{Projection and \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace{} API Generation for \textsf{\textbf{\color{Teal}B}} in \lstinline+HigherLower+} \vspace{-4mm} \label{fig:higherlower_projection_B} \end{figure} We first look at the projection onto $\ppt B$: although it is the largest of the three projections, it is unique among them because $\ppt B$ is involved in \emph{every} interaction of the protocol, and (consequently) $\ppt B$ has \emph{explicit} knowledge of the value of \emph{every} refinement variable during execution. Formally, projection is defined as a syntactic function (explained in detail later in \cref{subsection:theory-projection}); it is a partial function, designed conservatively to reject protocols that are not safely realisable in asynchronous distributed settings. However, we show in \cref{fig:higherlower_projection_B-cfsm} the representation of projections employed in our toolchain based on \emph{communicating finite state machines} (CFSMs)~\cite{JACM83CFSM}, where the transitions are the localised I/O actions performed by $\ppt B$ in this protocol. Projected CFSM actions preserve their refinements: as before, an action refinement serves as a precondition for an output transition to be fired, and a postcondition when an input transition is fired. For example, $A?\mathtt{start}(n_0)\{0\leq n_0 < 100\}$ is an input of a $\mathtt{start}$ message from $\ppt A$, with a refinement on the \lstinline+int+ payload value. Similarly, $C!\mathtt{higher}\{n>x \wedge t>1\}$ expresses a protocol flow refinement on an output of a $\mathtt{higher}$ message to $\ppt C$. For brevity, we omit the payload data types in the CFSM edges, as this example features only \lstinline+int+s; we omit empty payloads ``$()$'' likewise. We show the local state refinements as annotations on the corresponding CFSM states (shaded in grey, with an arrow to the state). 
\mypara{Refined API Generation for \textsf{\textbf{\color{Teal}B}}} CFSMs offer an intuitive understanding of the semantics of endpoint projections. Building on recent work~\cite{FASE16EndpointAPI,CC18TypeProvider,POPL19Parametric}, we use our CFSM-based representation of refined projections to \emph{generate} protocol- and role-specific APIs for implementing each role in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace. We highlight a novel and crucial development: we exploit the approach of \emph{type} generation to produce functional-style \emph{callback}-based APIs that \emph{internalise} all of the actual communication channels and I/O actions. In short, the transitions of the CFSM are rendered as a set of \emph{transition-specific} function types to be implemented by the user --- each of these functions takes and returns only the \emph{user-level data} related to I/O actions and the running of the protocol. The transition function of the CFSM itself is embedded into the API by the generation, exporting a user interface to execute the protocol by calling back the appropriate user-supplied functions according to the current CFSM state and I/O event occurrences. We continue with our example: \cref{fig:higherlower_projection_B-types} lists the function types for $\ppt B$, detailed as follows. Note that a characteristic of MPST-based CFSMs is that each non-terminal state is either input- or output-only. \begin{itemize}[leftmargin=*] \item \textbf{State Types.} For each state, we generate a separate type (named by enumerating the states, by default).
Each is defined as a record containing previously known payload values and its local recursion variables, or \keyword{unit} if none, for example: \vspace{1mm} \centerline{ \lstinline[language=FSharp,basicstyle=\small\ttfamily]+type s3 =+ $\big\{$ \lstinline[language=FSharp,basicstyle=\small\ttfamily]+n0: int\{0<=n0<100\}; t0: int\{0<t0\}; n: int\{0<=n<100\}; t: int\{0<t\}+ $\big\}$ } \vspace{1mm} \item \textbf{Basic I/O Callbacks.} For each input transition we generate a function type \lstinline[language=FSharp,basicstyle=\small\ttfamily]+s ->+$\sigma$\lstinline[language=FSharp,basicstyle=\small\ttfamily]+ ->ML unit+, where \lstinline+s+ is the predecessor state type, and $\sigma$ is the refined payload type received. The return type is \lstinline[language=FSharp,basicstyle=\small\ttfamily]+unit+, and the function can perform side effects, i.e.\ the callback is able to modify global state, interact with the console, etc., rather than performing merely pure computation. If an input transition is fired during execution, the generated runtime invokes a user-supplied function of this type with the appropriately populated value of \lstinline+s+, including any payload values received in the message that triggered this transition. Note that any data or protocol refinements are embedded into the types of these fields. Similarly, for each transition of an output state with a \emph{single} outgoing transition, we generate a function type \lstinline[language=FSharp,basicstyle=\small\ttfamily]+s ->ML +$\tau$, where $\tau$ is the refined type for the output payload. \item \textbf{Internal Choices.} For each output state with more than one outgoing transition, we generate an additional sum type $\rho$ with the cases of the choice, e.g.\ \cref{fig:higherlower_projection_B-internal-choice}. %
This sum type (i.e.\ \lstinline+s4Cases+) is indexed by the corresponding state type (i.e.\ \lstinline+s+) to make any required knowledge available for expressing the protocol flow refinement of each case.
Its constructors indicate the label of the internal choice. We then generate a single function type for this state, \lstinline[language=FSharp,basicstyle=\small\ttfamily]+s ->ML +$\rho$: the user implementation selects which choice case to follow by returning a corresponding $\rho$ value, under the constraints of any refinements. %
For example, an \lstinline+s4_win+ value can only be constructed, and thus this choice case can only be selected, when \lstinline[language=FSharp,basicstyle=\small\ttfamily]+s.n=s.x+ holds for the given \lstinline+s+. The state machine is advanced according to the constructor of the returned value (corresponding to the label of the message), and the generated runtime sends the payload value to the intended recipient. \end{itemize} An asynchronous \emph{output} event, i.e.\ the trigger for the API to call back an output function, requires the communication medium to be ready to accept the message (e.g.\ there is enough memory in the local output buffer). For simplicity, in this work we consider the callbacks of an output state to always be immediately fireable. Concretely, we delegate these concerns to the underlying libraries and runtime system. \mypara{Projection and API Generation for \textsf{\textbf{\color{Teal}C}}} The projection onto $\ppt C$ raises an interesting question related to the refinement of \emph{multiparty} protocols: how should we treat refinements on variables that the target role does \emph{not} itself \emph{know}? $\ppt C$ does not know the value of the secret \lstinline+n+ (otherwise this game would be quite boring), but it does know that this information \emph{exists} in the protocol and is subject to the specified refinement. In standard MPST, it is essentially the main point of projection that interactions not involving the target role can be simply (and safely) \emph{dropped} outright; e.g.\ the communication of the \lstinline+start+ message would simply not appear in the projection of $\ppt C$.
However, naively taking the same approach in RMPST would be inadequate: although the target role may not know some exact value, the role may still need the associated ``latent information'' to fulfil the desired application behaviour. Our framework introduces a notion of \emph{erased} variables for RMPST --- in short, our projection does drop third-party \emph{interactions}, but retains the latent information as refinement-typed \emph{erased} variables, as illustrated by the annotation on state $1$ in \cref{fig:higherlower-impl-c-cfsm}. Thanks to the SMT-based refinement type system of \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace, the type-checker can still take advantage of the refined types of erased variables to \emph{prove} properties of the endpoint implementation; however, these variables cannot actually be used \emph{computationally} in the implementation (since their values are not known). Conveniently, \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace supports erased types (described briefly in \cref{subsection:fstar-bg}), and provides ways (i.e.\ \code{Ghost} effects) to ensure that such variables are not used in the computation. We demonstrate this for our example in the next subsection. Our approach can be considered a version of \emph{irrelevant} variables from \cite{LICS01Irrelavance, LMCS12Irrelevance} for the setting of typed, distributed interactions.
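In our running example, the effect of this treatment on the state type of $\ppt C$ can be sketched as follows (notation simplified, field and type names illustrative; cf.\ the annotations on state $1$ in \cref{fig:higherlower-impl-c-cfsm}):
\begin{lstlisting}[language=FSharp]
(* Sketch: C's state record, with erased fields for unknown values *)
type c1 = {
  n: (n:erased int{0 <= reveal n && reveal n < 100});  (* type known, value not *)
  t: (t:erased int{0 < reveal t})
}
\end{lstlisting}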
\begin{figure}[t] \small \begin{subfigure}{\textwidth} \centering \begin{tikzpicture} [ >=stealth', NODE/.style={draw,circle,minimum width=3mm,inner sep=0pt,fill=lightgray} ] \node[NODE,label={[align=left,xshift=-30pt,yshift=0pt]above:{$\mathcolorbox{lightgray}{n \mathop{:} \mathtt{erased~int} \{0 \mathop{\leq} n \mathop{<} 100\},}$\\[-3pt]$\mathcolorbox{lightgray}{t \mathop{:} \mathtt{erased~int} \{0 \mathop{<} t\}}$}}] (C1) {$1$}; \node[NODE,right=50mm of C1] (C2) {$2$}; \node[NODE,right=20mm of C2] (C3) {}; \node[right=12cm of C1] {}; \draw[>=angle 60,->,densely dotted] ($(C1.north west)+(-5.5mm,2mm)$) -- (C1.north west); \draw[->] ($(C1.west)-(5mm,0)$) -- (C1); \draw[->] (C1) -- node[above,yshift=-2pt]{$B!\mathtt{guess}(x)\{0 \mathop{\leq} x \mathop{<} 100\}$} (C2); \path[->] (C2) edge[bend left=20] node [below]{$B?\mathtt{lower}\{n < x \land t > 1\}$} (C1); \path[->] (C2) edge[bend right=20] node [above]{$B?\mathtt{higher}\{n > x \land t > 1\}$} (C1); \path[->] (C2) edge[bend left] node [above]{$B?\mathtt{win}\{n = x\}$} (C3); \path[->] (C2) edge[bend right] node [below]{$B?\mathtt{lose}\{n \neq x \land t = 1\}$} (C3); \end{tikzpicture} \caption{CFSM Representation of the Projection} \label{fig:higherlower-impl-c-cfsm} \end{subfigure} \begin{subfigure}{\textwidth} \renewcommand{\lstinline[language=FSharp,basicstyle=\small\ttfamily]}{\lstinline[language=FSharp,basicstyle=\footnotesize\ttfamily]} {\small \begin{tabular}{llll} &&& User implementation \\ &&& {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+(* Allocate a refined int reference *)+} \\ State & Edge & Generated type & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+let next: ref (x:int{0<=x<100}) = alloc 50+} \\ \hline $1$ & $B!\mathtt{guess}$ & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+s1 ->ML (x:int{0<=x<100})+} & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+fun _ ->!next (*Deref next*)+} \\ $2$ & $B?\mathtt{higher}$ & 
{\lstinline[language=FSharp,basicstyle=\small\ttfamily]+s2 ->unit{n>x && t>1} ->ML unit+} & \red{{\lstinline[language=FSharp,basicstyle=\small\ttfamily]*fun s ->next := s.x + 1*}} \\ & $B?\mathtt{lower}$ & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+s2 ->unit{n<x && t>1} ->ML unit+} & \red{{\lstinline[language=FSharp,basicstyle=\small\ttfamily]*fun s ->next := s.x - 1*}} \\ & $B?\mathtt{win}$ & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+s2 ->unit{n=x} ->ML unit+} & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]!fun _ ->()!} \\ & $B?\mathtt{lose}$ & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]+s2 ->unit{n<>x && t=1} ->ML unit+} & {\lstinline[language=FSharp,basicstyle=\small\ttfamily]!fun _ ->()!} \end{tabular} } \caption{Generated I/O Callback Types} \label{fig:higherlower-impl-c-types} \end{subfigure} \vspace{-3mm} \caption{Projection and \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace{} API Generation for \textsf{\textbf{\color{Teal}C}} in \lstinline+HigherLower+ } \label{fig:higherlower-impl-c} \vspace{-5mm} \end{figure} \subsection{\texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace Implementation -- Protocol Validation and Verification by Refinement Types} \label{subsection:impl-fstar-impl} Finally, the generated APIs---embodying the refined projections---are used to implement the endpoint processes. As mentioned, the user implements the program logic as callback functions of the generated (refinement) types, supplied to the entry point along with code for establishing the communication channels between the session peers. Assuming a record \lstinline+callbacks+ containing the required functions (static typing ensures all are covered), \cref{fig:c-impl-main} bootstraps a $\ppt C$ endpoint. 
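To illustrate, a \lstinline+callbacks+ record for $\ppt C$ assembling the user functions from \cref{fig:higherlower-impl-c-types} could be sketched as follows (a sketch only; the field names are illustrative of the generated API):
\begin{lstlisting}[language=FSharp,numbers=none,basicstyle={\small\ttfamily}]
(* Illustrative shape of the callbacks record for C *)
let callbacks = {
  state1_send           = (fun _ -> !next);  (* next guess *)
  state2_receive_higher = (fun s () -> next := s.x + 1);
  state2_receive_lower  = (fun s () -> next := s.x - 1);
  state2_receive_win    = (fun _ () -> ());
  state2_receive_lose   = (fun _ () -> ())
}
\end{lstlisting}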
\begin{figure}[h] \centering \begin{subfigure}{0.48\textwidth} {\begin{lstlisting}[language=FSharp,numbers=none,basicstyle={\small\ttfamily}] let main () = (* connect to B via TCP *) let server_B = connect ip_B port_B in (* Setup connection from TCP *) let conn = mk_conn server_B in (* Invoke the Entry Point `run` *) let () = run callbacks conn in (* Close TCP connection *) close server_B \end{lstlisting}} \vspace{-2mm} \caption{Running the Endpoint \textsf{\textbf{\color{Teal}C}}}\label{fig:c-impl-main} \end{subfigure} \begin{subfigure}{0.48\textwidth} {\begin{lstlisting}[language=FSharp,numbers=none,basicstyle={\small\ttfamily}] (* Signature (s:s4) -> ML (s4Cases s) *) fun (s:s4) -> (* Win if guessed number is correct *) if s.x=s.n then s4_win () (* Lose if running out of attempts *) else if s.t=1 then s4_lose () (* Otherwise give hints accordingly *) else if s.n>s.x then s4_higher () else s4_lower () \end{lstlisting}} \vspace{-2mm} \caption{Implementing the Internal Choice for \textsf{\textbf{\color{Teal}B}}}\label{fig:c-impl-internal-choice} \end{subfigure} \vspace{-3mm} \caption{Selected Snippets of Endpoint Implementation} \vspace{-5mm} \end{figure} The API takes care of endpoint execution by monitoring the channels, and calling the appropriate callback based on the current protocol state and I/O event occurrences. For example, a minimal, well-typed implementation of $\ppt B$ could comprise the internal choice callback above (\cref{fig:c-impl-internal-choice}) (implementing the type in \cref{fig:higherlower_projection_B-internal-choice}), cf.\ state $4$, and an empty function for all others (i.e.\ \lstinline[language=FSharp,basicstyle=\small\ttfamily]+fun _ ->()+). We can highlight how protocol violations are ruled out by static refinement typing, which is ultimately the practical purpose of RMPST. 
In the above callback code, changing, say, the condition for the \lstinline+lose+ case to \lstinline[language=FSharp,basicstyle=\small\ttfamily]+s.t=0+ would directly violate the refinement on the \lstinline+s4_lose+ constructor, cf.\ \cref{fig:higherlower_projection_B-internal-choice}. Omitting the \lstinline+lose+ case altogether would break both the \lstinline+lower+ and \lstinline+higher+ cases, as the type checker would not be able to prove \lstinline[language=FSharp,basicstyle=\small\ttfamily]+s.t>1+ as required by the subsequent constructors. Lastly, \cref{fig:higherlower-impl-c-types} implements $\ppt C$ to guess the secret by a simple search, since we know its value is bounded within the specified interval. We draw attention to the input callback for \lstinline+higher+, where we adjust the \lstinline+next+ value. Since the assigned value is one more than the current guess, the new value could fall outside the permitted range (in the case where \lstinline+next+ is 99), violating the prescribed type. However, although the value of \lstinline+n+ is unknown, the refinement attached to the edge guarantees that \lstinline[language=FSharp,basicstyle=\small\ttfamily]+n>x+ holds: our last guess \lstinline+x+ is strictly less than the secret \lstinline+n+, which rules out the possibility that \lstinline+x+ is 99 (the maximal value of \lstinline+n+). Had the refinement and the erased variable not been present, the type-checker would not accept this implementation; this demonstrates that our encoding enables such reasoning with latent information from the protocol. Moreover, the type and effect system of \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace prevents the erased variables from being used in the callbacks.
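The range reasoning above can be sketched as an \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace lemma (our own formulation, for exposition; it is not part of the generated code):
\begin{lstlisting}[language=FSharp,numbers=none,basicstyle={\small\ttfamily}]
(* Under the state refinements, the incremented guess stays in
   range: n < 100 and n > x force x <= 98, hence x + 1 <= 99. *)
let next_guess_in_range (x:int{0 <= x /\ x < 100})
                        (n:int{0 <= n /\ n < 100 /\ n > x})
  : Lemma (0 <= x + 1 /\ x + 1 < 100)
  = () (* discharged automatically by the SMT solver *)
\end{lstlisting}
Without the hypothesis $n > x$ supplied by the refinement on the erased variable, the conclusion $x + 1 < 100$ would not be provable.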
On one hand, \lstinline[language=FSharp,basicstyle=\small\ttfamily]{int} and \lstinline[language=FSharp,basicstyle=\small\ttfamily]{erased int} are distinct, incompatible types. This prevents an irrelevant variable from being used in place of a concrete variable. On the other hand, the function \lstinline[language=FSharp,basicstyle=\small\ttfamily]{reveal} converts a value of type \lstinline[language=FSharp,basicstyle=\small\ttfamily]{erased 'a} to a value of type \lstinline[language=FSharp,basicstyle=\small\ttfamily]{'a} with the \lstinline[language=FSharp,basicstyle=\small\ttfamily]{Ghost} effect. A function with the \lstinline[language=FSharp,basicstyle=\small\ttfamily]{Ghost} effect \emph{cannot} be mixed with a function with the \lstinline[language=FSharp,basicstyle=\small\ttfamily]{ML} effect (as in the case of our callbacks), so irrelevant variables cannot be used in the implementation via the \lstinline[language=FSharp,basicstyle=\small\ttfamily]{reveal} function. Interested readers are invited to try the running example with our accompanying artifact. We propose a few modifications to the implementation code and the protocol, and invite readers to observe the errors when implementations no longer conform to the prescribed protocol. \subsection{Executing the Communicating Finite State Machine (Generated Code)} \label{subsection:impl-run-fsm} As mentioned earlier, our API design separates the concerns of program logic (with callbacks) and communication (with connections). A crucial piece of the generated code involves \emph{threading} the two parts together --- the execution function performs the communication actions and invokes the appropriate callbacks for handling. In this way, we do \emph{not} expose explicit communication channels, so linearity is achieved by construction in our generated code.
The entry point function, named \code{run}, takes callbacks and connections as arguments, and executes the CFSM for the specified endpoint. The signature uses the permissive \keyword{ML} effect, since communicating with the external world performs side effects. We traverse the states (the set of states is denoted $\mathbb{Q}$) in the CFSM and generate appropriate code depending on the nature of the state and its outgoing transitions. Internally, we define mutually recursive functions for each state $q \in \mathbb{Q}$, named \code{run$_q$}, taking the state record $\enc{q}$ as argument ($\enc{q}$ stands for the state record for a given state $q$), which performs the required actions at state $q$. The run state function for a state $q$ either (1) invokes callbacks and communication primitives, then calls the run state function for the successor state $q'$, or (2) returns directly for termination if $q$ is a terminal state (without outgoing transitions). The main entry point invokes the run function for the initial state $q_0$, starting the finite state machine. The internal run state functions are not exposed to the developer, hence it is not possible to tamper with the internal state through the usual means of programming. This allows us to guarantee linearity of communication channels by construction. In the following, we outline how each state is run, depending on whether it is a sending or a receiving state. Note that CFSMs constructed from local types do not have mixed states \cite[Prop.
3.1]{ICALP13CFSM} \begin{figure}[h] \begin{subfigure}{0.48\textwidth} \begin{lstlisting}[language=FSharp,numbers=none] let rec run_£$q$£ (st: state£$q$£) = let choice = callbacks.state£$q$£_send st in match choice with | Choice£$q\dlbl{l_i}$£ payload -> £$\tikzmark{curly-brace-1-start}$£ comm.send_string £$\ppt q$£ "£$\dlbl{l_i}$£"; comm.send_£$\dte S$ \ppt q£ payload; let st = { £$\cdots$£; £$\dexp{x_i}$£=payload } in £$\tikzmark{curly-brace-1-longest}$£ run_£$q'$£ st £$\tikzmark{curly-brace-1-end}$£ \end{lstlisting} \AddNote{curly-brace-1-start}{curly-brace-1-end}{curly-brace-1-longest}{Repeat for $i \in I$} \vspace{-8mm} \caption{Template for Sending State $q$} \label{fig:runq-sending} \end{subfigure} \hfill \begin{subfigure}{0.48\textwidth} \begin{lstlisting}[language=FSharp, numbers=none] let rec run_£$q$£ (st: state£$q$£) = let label = comm.recv_string £$\ppt p$£ () in match label with £$\tikzmark{curly-brace-2-start}$£| "£$\dlbl{l_i}$£" -> let payload = comm.recv_£$\dte S$£ £$\ppt p$£ () in callbacks.state£$q$£_receive_£$\dlbl{l_i}$£ st payload; let st = { £$\cdots$£; £$\dexp{x_i}$£=payload } in £$\tikzmark{curly-brace-2-end}$£ run_£$q'$£ st \end{lstlisting} \AddNoteLeft{curly-brace-2-start}{curly-brace-2-end}{curly-brace-2-end}{} \vspace{-8mm} \caption{Template for Receiving State $q$} \label{fig:runq-receiving} \end{subfigure} \vspace{-3mm} \caption{Template for \code{run}$_q$} \vspace{-3mm} \end{figure} \mypara{Running the CFSM at a Sending State} For a sending state $q \in \mathbb{Q}$, the developer makes an internal choice on how the protocol proceeds, among the possible outgoing transitions. This is done by invoking the sending callback $\code{state}q\code{\_send}$ with the current state record, to obtain a choice with the associated payload. We pattern match on the constructor of the label $\dlbl{l_i}$ of the message, and find the corresponding successor state $q'$. 
The label $\dlbl{l_i}$ is encoded as a \dte{\code{string}}\ and sent via the sending primitive to $\ppt q$. It is followed by the payload specified in the return value of the callback, sent via the corresponding sending primitive for the base type, with refinements erased. We construct a state record $\enc{q'}$ from the existing record $\enc{q}$, adding the new field $\dexp{x_i}$ introduced by the action, using the callback return value. In the case of recursive protocols, we also update the recursion variable according to its definition in the protocol when constructing $\enc{q'}$. Finally, we call the run state function $\code{run}_{q'}$ to continue the CFSM, effectively making the transition to state $q'$. Following this procedure, $\code{run}_q$ is generated as shown in \cref{fig:runq-sending}. \mypara{Running the CFSM at a Receiving State} For a receiving state $q \in \mathbb{Q}$, how the protocol proceeds is determined by an external choice, among the possible outgoing actions. To learn which choice was made by the other party, we first receive a string and decode it into a label $\dlbl{l}$, via the receiving primitive for strings. Subsequently, we look up the label $\dlbl{l}$ in the possible transitions, and find the successor state $q'$. By invoking the appropriate receiving primitive, we obtain the payload value. We note that the receiving primitive has a return type without refinements. In order to re-attach the refinements, we use the \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace builtin \keyword{assume} to reinstate the refinements according to the protocol before using the value. According to the label $\dlbl{l}$ received, we call the corresponding receiving callback with the received value. This allows the developer to process the received value and perform any relevant program logic. The same procedure then follows for constructing the state record for the next state $q'$ and invoking the run function for $q'$.
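For instance, in the \lstinline+higher+ branch of our running example, the reinstatement step could look as follows (a sketch; the primitive and field names are illustrative):
\begin{lstlisting}[language=FSharp,numbers=none,basicstyle={\small\ttfamily}]
| "higher" ->
  (* The `higher' message carries a unit payload. *)
  let payload = comm.recv_unit B () in
  (* Reinstate the protocol refinement on the erased variables. *)
  assume (reveal st.n > st.x /\ reveal st.t > 1);
  callbacks.state2_receive_higher st payload;
  (* ... construct the next state record and recurse ... *)
\end{lstlisting}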
Following this procedure, $\code{run}_q$ is generated as shown in \cref{fig:runq-receiving}. \subsection{Summary} We demonstrated with our running example, \lstinline[language=FSharp,basicstyle=\small\ttfamily]{HigherLower}, how to implement a refined multiparty protocol with our toolchain \texorpdfstring{\textsc{Session$^{\star}$}}{SessionStar}\xspace. Exploiting the powerful type system of \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace, our approach has several key benefits: First, it guarantees \emph{fully static} session type safety in a lightweight, practical manner --- the callback-style API is portable to any statically typed language. Existing work based on code generation has considered only hybrid approaches that supplement static typing with dynamically checked linearity of explicit communication channel usage. Moreover, the separation of program logic and communication leads to a modular implementation of protocols. Second, it is well suited to functional languages like \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace; in particular, the \emph{data}-oriented nature of the user interface allows the refinements in RMPST to be directly mapped to data refinements in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace, allowing the refinement constraints to be discharged at the user implementation level by the \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace compiler --- again, fully statically. Furthermore, our endpoint implementation inherits core communication safety properties such as freedom from deadlock or communication mismatches, based on the original MPST theory. We use the \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace type-checker to validate statically that an endpoint implementation is correctly typed with respect to the prescribed type obtained via projection of the global protocol. The implementation therefore benefits from the additional guarantees provided by the refinement types.
\section{Introduction} Distributed interactions and message passing are fundamental elements of the modern computing landscape. Unfortunately, language features and support for high-level and \emph{safe} programming of communication-oriented and distributed programs remain lacking, in comparison with those enjoyed by more traditional ``localised'' models of computation. One of the research directions towards addressing this challenge is \emph{concurrent behavioural types}~\cite{BettyBook, FTPL16BehavioralSurvey}, which seek to extend the benefits of conventional type systems, as the most successful form of lightweight formal verification, to communication and concurrency. \emph{Multiparty session types} (MPST)~\cite{POPL08MPST, JACM16MPST}, one of the most active topics in this area, offer a theoretical framework for specifying message-passing protocols between multiple participants. MPST use a type system--based approach to \emph{statically} verify whether a system of processes implements a given protocol safely. The type system guarantees key execution properties such as freedom from message reception errors or deadlocks. However, despite much recent progress, there remain large gaps between the current state of the art and (i) powerful and \emph{practical} languages and techniques available to programmers today, and (ii) more advanced type disciplines needed to express the wider variety of interaction constraints found in real-world protocols. This paper presents and combines two main developments: a theory of MPST enriched with \emph{refinement types} \cite{PLDI91RefinementML}, and a practical method, \emph{callback-based programming}, for safe session programming.
The key ideas are as follows: \mypara{\emph{Refined} Multiparty Session Types (RMPST)} The MPST theory~\cite{POPL08MPST, JACM16MPST} provides a core framework for decomposing (or \emph{projecting}) a \emph{global type} structure, describing the collective behaviour of a distributed system, into a set of participant-specific \emph{local types} (see \cref{fig:top-down-mpst}). The local types are then used to implement endpoints. \begin{figure} \begin{tikzpicture} \node[draw, rectangle] (G) { $ \arraycolsep=1pt \begin{array}[t]{@{}ll@{}} \dgt{G} = & \gtbransingle{A}{B}{\dlbl{Count}(\vb{count}{\dte{\code{int}}\esetof{count \geq 0}})}. \\ & \gtrecur{t} {\vb{curr}{\dte{\code{int}}\esetof{curr \geq 0 \land curr \leq count}}} {\dexp{curr := 0}} {}\\ &\begin{array}{@{}l@{}} \gtbran{B}{C}{ \begin{array}{@{}l@{}} \dlbl{Hello}( \vb{it}{\dte{\code{int}}\esetof{curr < count \land it = count})} .\gtvar{t}{\dexp{curr:=curr+1}}\\ \dlbl{Finish}(\vb{it}{\dte{\code{int}}}\esetof{curr = count \land it = count}).\dgt{\code{end}} \end{array} } \end{array} \end{array}$}; \node[below=0.5mm of G] (Gtext) {A Global Type $\dgt G$}; \node[below=-1.5mm of Gtext, xshift=-45mm, align=center] (proj) {\bf Projection onto \\ \bf each Participant}; \node[below=6mm of Gtext, xshift=-40mm] (LA) {\small Local Type for \ppt A~ \boxed{\dtp{L}_{\ppt A}}}; \node[below=6mm of Gtext] (LB) {\small Local Type for \ppt B~\boxed{\dtp{L}_{\ppt B}}}; \node[below=6mm of Gtext, xshift=40mm] (LC) {\small Local Type for \ppt C~ \boxed{\dtp{L}_{\ppt C}}}; \draw[->] (Gtext) -- (LA); \draw[->] (Gtext) -- (LB); \draw[->] (Gtext) -- (LC); \end{tikzpicture} \vspace{-3mm} \caption{Top-down View of (R)MPST} \vspace{-3mm} \label{fig:top-down-mpst} \end{figure} Our theory of RMPST follows the same top-down methodology, but enriches MPST with features from \emph{refinement types}~\cite{PLDI91RefinementML}, to support the elaboration of data types in global and local types. 
Refinement types allow \emph{refinements} in the form of logical predicates and constraints to be attached to a base type. This makes it possible to express various protocol constraints. To motivate our refinement type extension, we use a counting protocol shown in \cref{fig:top-down-mpst}, and leave the details to \cref{section:theory}. Participant $\ppt A$ sends $\ppt B$ a number with a $\dlbl{Count}$ message. In this message, the refinement type $\vb{count}{\dte{\code{int}}\esetof{count \geq 0}}$ restricts the value for $\dexp{count}$ to be a natural number. Then $\ppt B$ sends $\ppt C$ exactly that number of $\dlbl{Hello}$ messages, followed by a $\dlbl{Finish}$ message. We demonstrate how refinement types are used to \emph{better} specify the protocol: The counting part of the protocol is described using a recursive type with two branches, where we use refinement types to restrict the protocol flow. The variable $\dexp{curr}$ is a \emph{recursion variable}, which remembers the current iteration, initialised to $\dexp 0$ and incremented at each recursion ($\dexp{curr := curr + 1}$). The refinement $\dexp{curr = count}$ in the $\dlbl{Finish}$ branch specifies that the branch may only be taken at the last iteration; the refinement $\dexp{it = count}$ in both $\dlbl{Hello}$ and $\dlbl{Finish}$ branches specifies a payload value \emph{dependent} on the recursion variable $\dexp{curr}$ and the variable $\dexp{count}$ transmitted in the first message. We establish the correctness of Refined MPST. In particular, we show that projection is behaviour-preserving and that well-formed global types with refinements satisfy progress, i.e.\ they do not get stuck. Therefore, if the endpoints follow the behaviour prescribed by the local types, derived (via projection) from a well-formed global type with refinements, the system is deadlock-free.
\mypara{\emph{Callback-styled, Refinement-typed} APIs for Endpoint Implementations} One of the main challenges in applying session types in practice is dealing with session \emph{linearity}: a session channel is used \emph{once and only once}. Session typing relies on a linear treatment of communication channels, in order to track the I/O actions performed on the channel against the intended session type. Most existing implementations adopt one of two approaches: monadic interfaces in functional languages~\cite{HaskellSessionBookChapter, SCP18OCaml, ECOOP20OCamlMPST}, or ``hybrid'' approaches that complement static typing with dynamic linearity checks~\cite{FASE16EndpointAPI, scalas17linear}. This paper proposes a fresh approach to session-based programming that does \emph{not} require linearity checking for static safety. We promote a form of session programming where session I/O is \emph{implicitly} implemented by \emph{callback functions} --- we say ``implicitly'' because the user does not write any I/O operations themselves: an \emph{input callback} is written to \emph{take} a received message as a parameter, and an \emph{output callback} is written to simply \emph{return} the label and value of the message to be sent. The callbacks are supported by a runtime, generated along with refinement-typed APIs according to the local type. The runtime performs communication and invokes the user-specified callback functions upon the corresponding communication events. We provide a code generation tool to streamline the writing of callback functions for the projected local type. This inversion of control allows us to dispense with linearity checking, because our approach does not expose communication channels to the user. Our approach is a natural fit for functional programming settings, but is also directly applicable to any statically typed language.
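The contrast with channel-based interfaces can be sketched as follows (illustrative types only, phrased in terms of our running example):
\begin{lstlisting}[language=FSharp,numbers=none,basicstyle={\small\ttfamily}]
(* Channel-based: the user performs I/O on a channel value,
   whose single use must be enforced by linearity checks. *)
val send_guess : chan_s1 -> x:int{0 <= x /\ x < 100} -> ML chan_s2
(* Callback-based: the user only computes the value to send;
   no channel is exposed, so no linearity check is needed. *)
val state1_send : state1 -> ML (x:int{0 <= x /\ x < 100})
\end{lstlisting}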
Moreover, the linearity guarantee is achieved \emph{statically} without the use of a linear type system, a feature usually not supported by mainstream programming languages. We follow the principle of event-based programming via the use of callbacks, prevalent in modern computing. \mypara{A Toolchain Implementation: \texorpdfstring{\textsc{Session$^{\star}$}}{SessionStar}\xspace} To evaluate our proposal, we implement RMPST in a toolchain --- \texorpdfstring{\textsc{Session$^{\star}$}}{SessionStar}\xspace, an extension of the \textsc{Scribble}\xspace toolchain~\cite{ScribbleWebsite, ScribbleBookChapter} (\url{http://www.scribble.org/}) with \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace~\cite{POPL16FStar} as the target endpoint language. Building on our callback approach, we show how to integrate RMPST with the verification-oriented functional programming language \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace, exploiting its capabilities for refinement types and static verification to extend our fully static safety guarantees to data refinements in sessions. Our experience of specifying and implementing protocols drawn from the literature and real-world applications attests to the practicality of our approach and the value of statically verified refinements. Our integration of RMPST and \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace allows developers to utilise advanced type system features to implement safe distributed application protocols. \mypara{Paper Summary and Contributions} \begin{itemize}[leftmargin=*] \item[\cref{section:overview}] We present an overview of our toolchain, \texorpdfstring{\textsc{Session$^{\star}$}}{SessionStar}\xspace, and provide background on \textsc{Scribble}\xspace and MPST. We use a number guessing game, \lstinline{HigherLower}, as our running example. \item[\cref{section:implementation}] We introduce \texorpdfstring{\textsc{Session$^{\star}$}}{SessionStar}\xspace, a toolchain for RMPST.
We describe in detail how our generated APIs can be used to implement multiparty protocols with refinements. \item[\cref{section:theory}] We establish the metatheory of RMPST, which gives semantics to global and local types with refinements. We prove trace equivalence of global and local types w.r.t.\ projection (\cref{thm:trace-eq}), and show progress and type safety for well-formed global types (\cref{thm:progress} and \cref{thm:type-safety}). \item[\cref{section:evaluation}] We evaluate our toolchain on examples from the session type literature, and measure the time taken for compilation and execution. We show that our toolchain keeps compilation times short, and that our runtime incurs only a small execution-time overhead. \end{itemize} We submitted an artifact for evaluation~\cite{sessionstarartifact}, containing the source code of our toolchain \texorpdfstring{\textsc{Session$^{\star}$}}{SessionStar}\xspace, with the examples and benchmarks used in the evaluation. The artifact is available as a Docker image, and can be \href{https://hub.docker.com/layers/sessionstar2020/sessionstar/artifact/images/sha256-4e3bf61238e04c1d2b5854971f0ef78f0733d566bf529a01d9c3b93ffa831193?context=explore}{accessed} on the Docker Hub. The source files are available on GitHub (\url{https://github.com/sessionstar/oopsla20-artifact}). We present the proofs of our theorems, and additional technical details of the toolchain, in the \iftoggle{fullversion}{appendix.}{full version of the paper (to be added in the camera-ready version).} \section{Proofs for \S~\ref{section:theory}} \input{proof.tex} \input{implementation_appendix.tex} \end{appendix} }{} \end{document} \section{Overview of Refined Multiparty Session Types} \label{section:overview} In this section, we give an overview of our toolchain, \texorpdfstring{\textsc{Session$^{\star}$}}{SessionStar}\xspace, describing its key stages.
\texorpdfstring{\textsc{Session$^{\star}$}}{SessionStar}\xspace extends the \textsc{Scribble}\xspace toolchain with refinement types and uses \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace as a target language. We begin with a short background on basic multiparty session types and \textsc{Scribble}\xspace, then demonstrate the specification of distributed applications with refinements using the extended \textsc{Scribble}\xspace. \subsection{Toolchain Overview} \label{subsection:overview-overview} \input{fig/fig-workflow-detail.tex} We present an overview of our toolchain in \cref{fig:workflow-detail}, where we distinguish user provided input by developers in \boxed{\text{solid boxes}}, from generated code or toolchain internals in \dbox{dashed boxes}. Development begins with \emph{specifying a protocol} using an extended \textsc{Scribble}\xspace protocol description language. \textsc{Scribble}\xspace is closely associated with the MPST theory \cite{ScribbleBookChapter, FeatherweightScribble}, and provides a user-friendly syntax for multiparty protocols. We extend the \textsc{Scribble}\xspace toolchain to support RMPST, allowing refinements to be added via annotations. The extended \textsc{Scribble}\xspace toolchain (as part of \texorpdfstring{\textsc{Session$^{\star}$}}{SessionStar}\xspace) validates the well-formedness of the protocol, and produces a representation in the form of a \emph{communicating finite state machine} (CFSM, \cite{JACM83CFSM}) for a given participant. We then use a code generator (also as part of \texorpdfstring{\textsc{Session$^{\star}$}}{SessionStar}\xspace) to generate \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace APIs from the CFSM, utilising a number of advanced type system features available in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace (explained later in \cref{subsection:fstar-bg}). 
The generated APIs, described in detail in \cref{section:implementation}, consist of various type definitions, and an entry point function taking \emph{callbacks} and \emph{connections} as arguments. In our design methodology, we separate the concerns of \emph{communication} and \emph{program logic}. The callbacks, corresponding to program logic, do not involve communication primitives --- they are invoked to prompt a value to be sent, or to process a received value. Separately, developers provide a connection that allows base types to be serialised and transmitted to other participants. Developers implement the endpoint by providing both \emph{callbacks} and \emph{connections}, according to the generated refinement-typed APIs. They can run the protocol by invoking the generated entry point. Finally, the \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace source files can be verified using the \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace compiler, and extracted to an OCaml program (or other supported targets) for efficient execution. \subsection{Global Protocol Specification --- RMPST in Extended \textsc{Scribble}\xspace} \label{subsection:impl-rmpst} The workflow in the standard MPST theory~\cite{POPL08MPST}, as is generally the case in distributed application development, starts from identifying the intended protocol for participant interactions. In our toolchain, a \emph{global protocol}---the description of the whole protocol between all participants from a bird's-eye view---is specified using our RMPST extension of the \textsc{Scribble}\xspace protocol description language~\cite{ScribbleWebsite, ScribbleBookChapter}. \Cref{fig:guess} gives the global protocol for a three-party game, \lstinline+HigherLower+, which we use as a running example.
\mypara{Basic \textsc{Scribble}\xspace/MPST} We first explain basic \textsc{Scribble}\xspace (corresponding to the standard MPST) \emph{without} the \lstinline+@+-annotations (annotations are extensions to the basic \textsc{Scribble}\xspace). \begin{enumerate}[leftmargin=*] \item The main protocol \lstinline+HigherLower+ declares three \emph{roles} $\ppt A$, $\ppt B$ and $\ppt C$, representing the runtime communication session participants. The protocol starts with $\ppt A$ sending $\ppt B$ a \lstinline+start+ message and a \lstinline+limit+ message, each carrying an \keyword{\small int} payload. \item The \keyword{\small do} construct specifies all three roles to proceed according to the (recursive) \lstinline+Aux+ sub-protocol. $\ppt C$ sends $\ppt B$ a \lstinline+guess+ message, also carrying an \keyword{\small int}. (The \keyword{\small aux} keyword simply tells \textsc{Scribble}\xspace that a sub-protocol does not need to be verified as a top-level entry protocol.) \item The \keyword{\small choice at}\code{\small~B} construct specifies at this point that $\ppt B$ should decide (make an \emph{internal} choice) by which one of the four cases the protocol should proceed. This decision is explicitly communicated (as an \emph{external} choice) to $\ppt A$ and $\ppt C$ via the messages in each case. The \lstinline+higher+ and \lstinline+lower+ cases are the recursive cases, leading to another round of \lstinline+Aux+ (i.e.\ another \lstinline+guess+ by $\ppt C$); the other two cases, \lstinline+win+ and \lstinline+lose+, end the protocol. \end{enumerate} \noindent To sum up, $\ppt A$ sends $\ppt B$ two numbers, and $\ppt C$ sends a number (at least one) to $\ppt B$ for as long as $\ppt B$ replies with either \lstinline+higher+ or \lstinline+lower+ to $\ppt C$ (and $\ppt A$). Next we demonstrate how we can express data dependencies using refinements with our extended \textsc{Scribble}\xspace. 
\begin{figure}[t] \centering \begin{minipage}{0.9\textwidth} \lstinputlisting[language=Scribble,basicstyle={\small\ttfamily}]{fig/HighLow.scr} \end{minipage} \vspace{-3mm} \caption{A \emph{Refined} \textsc{Scribble}\xspace Global Protocol for a \lstinline+HigherLower+ Game. } \vspace{-3mm} \label{fig:guess} \end{figure} \input{extended-scribble} \subsection{Auxiliary Lemmas} \begin{lemma} Given a participant $\ppt p$, a global typing context $\Gamma$ and a local typing context $\Sigma$ such that $\ctxproj{\Gamma}{p} = \Sigma$. Then, the projection of the global typing context $\Gamma$ extended with $\vb{x^{\ppt{$\mathbb{P}$}}}{T}$ to $\ppt p$ satisfies $$\ctxproj{\ctxext{\Gamma}{x^{\ppt{$\mathbb{P}$}}}{T}}{p} = \begin{cases} \ctxext{\Sigma}{x^\omega}{T} & \text{if~} \ppt p \in \ppt{$\mathbb{P}$} \\ \ctxext{\Sigma}{x^0}{T} & \text{if}~ \ppt p \notin \ppt{$\mathbb{P}$} \end{cases} $$ \label{lem:project-ext} \end{lemma} \begin{proof} By expanding the definition and case analysis. \end{proof} \begin{lemma} Given a participant $\ppt p$, global typing contexts $\Gamma_1, \Gamma_2$, and a global type $\dgt G$. If the two typing contexts have the same projection on $\ppt p$: $\ctxproj{\Gamma_1}{p} = \ctxproj{\Gamma_2}{p}$, then the projections of the global type under the two contexts are the same: $\gtctxproj{\Gamma_1}{G}{p} = \gtctxproj{\Gamma_2}{G}{p}$. \label{lem:project-env} \end{lemma} \begin{proof} By induction on the derivation of the projection of global types. \end{proof} \begin{lemma} Given a global typing context $\Gamma$, a set of participants $\ppt{$\mathbb{P}$}$, a participant $\ppt p$, a variable $\dexp x$, a well-formed type $\dte T$ and a global type $\dgt G$. If $\gtctxproj{\ctxext{\Gamma}{x^{\varnothing}}{T}}{G}{p} = \ltctx{\Sigma}{L}$, then $\gtctxproj{\ctxext{\Gamma}{x^{\ppt{$\mathbb{P}$}}}{T}}{G}{p} = \ltctx{\Sigma}{L}$.
\label{lem:project-weaken} \end{lemma} \begin{proof} By induction on the derivation of the projection of global types, via weakening of the local typing rules: $$\Sigma_1, \vb{x^0}{T}, \Sigma_2 \vdash \vb{E}{T'} \implies \Sigma_1, \vb{x^\omega}{T}, \Sigma_2 \vdash \vb{E}{T'}$$ \end{proof} \begin{lemma}[Inversion of Projection] Given a global typing context $\Gamma$, a global type $\dgt G$, a participant $\ppt p$, a local typing context $\Sigma$, and a local type $\dtp L$, such that $\gtctxproj{\Gamma}{G}{p} = \ltctx{\Sigma}{L}$. If $\dtp L$ is of the form: \begin{enumerate} \item $\dtp L = \ttake{q}{\dlbl{l_i}(\vb{x_i}{T_i}).L_i}_{i \in I}$, then $\dgt G$ is of the form $\gtbran{p}{q}{\dlbl{l_i}(\vb{x_i}{T_i}).G_i}_{i \in I}$. \item $\dtp L = \toffer{q}{\dlbl{l_i}(\vb{x_i}{T_i}).L_i}_{i \in I}$, then $\dgt G$ is of the form $\gtbran{q}{p}{\dlbl{l_i}(\vb{x_i}{T_i}).G_i}_{i \in I}$. \item $\dtp L = \tphi{l}{x}{T}{L}$, then $\dgt G$ is of the form $\gtbran{s}{t}{\dlbl{l_i}(\vb{x_i}{T_i}).G_i}_{i \in I}$, where $\ppt p \notin \psetof{s, t}$ and $\tphi{l}{x}{T}{L} = \sqcup_{i \in I}{\tphi{l_i}{x_i}{T_i}{L_i}}$, where $\dtp{L_i}$ is obtained via the projection of $\dgt{G_i}$. \item $\dtp L = \trecur{t}{}{\vb{x}{T}}{\dexp{x := E}}{L'}$, then $\dgt G$ is of the form $\gtrecur{t}{\vb{x}{T}}{\dexp{x := E}}{G'}$, and $\ppt p \in \dgt G$. \end{enumerate} \label{lem:proj-inv} \end{lemma} \begin{proof} A direct consequence of the projection rules $\gtctxproj{\Gamma}{G}{p}$ --- the results of the projection rules do not overlap. \end{proof} \begin{lemma}[Determinacy] Let $\gtctx{\Gamma}{G}$ be a global type under a global typing context, and $\alpha$ be a labelled action. If $\gtctx{\Gamma}{G} \stepsto[\alpha] \gtctx{\Gamma_1}{G_1}$ and $\gtctx{\Gamma}{G} \stepsto[\alpha] \gtctx{\Gamma_2}{G_2}$, then $\gtctx{\Gamma_1}{G_1} = \gtctx{\Gamma_2}{G_2}$. \label{lem:gty-determinancy} \end{lemma} \begin{proof} By induction on the global type reduction rules.
\end{proof} \subsection{Proof of \cref{thm:trace-eq}} \begin{quote} Let $\gtctx{\Gamma}{G}$ be a global type under a global typing context and $s \Leftrightarrow \gtctx{\Gamma}{G}$ be a configuration associated with the global type and context. $\gtctx{\Gamma}{G} \stepsto[\alpha] \gtctx{\Gamma'}{G'}$ if and only if $s \stepsto[\alpha] s'$, where $s' \Leftrightarrow \gtctx{\Gamma'}{G'}$. \end{quote} \begin{proof} Soundness ($\Rightarrow$): \input{soundness.tex} Completeness ($\Leftarrow$): \input{completeness.tex} \end{proof} \subsection{Proof of \cref{thm:wf-preservation}} \begin{quote} If $\gtctx{\Gamma}{G}$ is a well-formed global type under a typing context, and $\gtctx{\Gamma}{G} \stepsto[\alpha] \gtctx{\Gamma'}{G'}$, then $\gtctx{\Gamma'}{G'}$ is well-formed. \end{quote} \begin{proof} By induction on the reduction of the global type $\gtctx{\Gamma}{G} \stepsto[\alpha]\gtctx{\Gamma'}{G'}$. \begin{itemize} \item \ruleG{Pfx}~ $\gtctx{\Gamma}{\gtbran{p}{q}{\dlbl{l_i}(\vb{x_i}{T_i}).G_i}_{i \in I}} \stepsto[\ltsmsg{p}{q}{l_j}{x_j}{T_j}] \gtctx{\ctxext{\Gamma}{x_{j}^{\psetof{p, q}}}{T_j}}{\dgt{G_j}}$ There are three cases for projection to consider: \ruleP{Send}, \ruleP{Recv}, and \ruleP{Phi}. In all cases, the premises state that $\gtctxproj{\ctxext{\Gamma}{x_{i}^{\psetof{p, q}}}{T_i}}{G_i}{p} = \ltctx{\Sigma_i}{L_i}$, which indicates that all continuations are projectable for all indices. \item \ruleG{Cnt}~ $\gtctx{\Gamma}{\gtbran{p}{q}{\dlbl{l_i}(\vb{x_i}{T_i}).G_i}_{i \in I}} \stepsto[\alpha] \gtctx{\Gamma'}{\gtbran{p}{q}{\dlbl{l_i}(\vb{x_i}{T_i}).G_i'}_{i \in I}}$ From the inductive hypothesis, we have that for each index $i \in I$, if $\gtctx{\ctxext{\Gamma}{x_{i}^{\varnothing}}{T_i}}{G_i}$ is well-formed, then $\gtctx{\Gamma'}{G_i'}$ is well-formed. In all three cases of projection, the premises state that $\Sigma \vdash \dte{T_i} ~ty$ for each index $i \in I$. Therefore, the context $\gtctx{\ctxext{\Gamma}{x_{i}^{\varnothing}}{T_i}}{G_i}$ is also well-formed.
We are left to show that $\gtctx{\Gamma'}{\gtbran{p}{q}{\dlbl{l_i}(\vb{x_i}{T_i}).G_i'}_{i \in I}}$ is well-formed: we invert the premise of the projection of $\gtctx{\Gamma}{\gtbran{p}{q}{\dlbl{l_i}(\vb{x_i}{T_i}).G_i}_{i \in I}}$ and apply \cref{lem:project-weaken}. \item \ruleG{Rec}~ $\gtctx{\Gamma}{\gtrecur{t}{\vb{x}{T}}{\dexp{ x := E}}{G}} \stepsto[\alpha] \gtctx{\Gamma'}{G'}$ By the inductive hypothesis. \end{itemize} \end{proof} \subsection{Proof of \cref{thm:progress}} \begin{quote} If $\gtctx{\Gamma}{G}$ is a well-formed global type under a context, then $\gtctx{\Gamma}{G}$ satisfies progress. \end{quote} \begin{proof} By induction on the structure of the global type $\dgt G$. Using \cref{thm:trace-eq}, it suffices to show that $\gtctx{\Gamma}{G} \stepsto[\alpha] \gtctx{\Gamma'}{G'}$, and then apply the theorem to obtain progress of the associated configuration. \begin{itemize} \item $\dgt G = \gtbran{p}{q}{\dlbl{l_i}(\vb{x_i}{T_i}). G_i}_{i \in I}$. Since the index set $I$ is non-empty, we can pick an index $i \in I$ and apply \ruleG{Pfx}. \item $\dgt G = \gtrecur{t}{\vb{x}{T}}{\dexp{x := E}}{G'} $ Since recursive types must be contractive, we have that $\dgt G'\subst{\gtrecursimpl{t}{\vb{x}{T}}{G'}}{\dgt{\mathbf{t}}} \neq \dgt{G'}$. Furthermore, the substituted type is closed. We can apply \ruleG{Rec}. \item $\dgt G = \gtvar{t}{\dexp{x := E}}$ Vacuous, since a well-formed global type cannot contain free type variables. \item $\dgt G = \dgt{\code{end}}$ Corresponds to the case where all local types are $\dtp{\code{end}}$. \end{itemize} \end{proof} \section{Related Work} \label{section:related} We summarise the most closely related works in the areas of refinement and session types. For a detailed survey on the theory and implementations of session types, see \citet{BettyBook}.
\mypara{Refinement Types for Verification and Reasoning} \label{subsection:related-refinement} Refinement types were introduced to allow recursive data structures to be specified in more detail using predicates \cite{PLDI91RefinementML}. Subsequent works on the topic \cite{TOPLAS11Refinement, ICFP14LiquidHaskell, SCALA16Refinement, POPL18Refinement} utilise SMT solvers, such as Z3 \cite{TACAS08Z3}, to help the type system decide a semantic subtyping relation \cite{JFP12SemanticSubtyping} using SMT encodings. Refinement types have been applied to numerous domains, such as resource usage analysis \cite{POPL20Refinement, ICFP20RefinementResource}, secure implementations \cite{POPL10F7Protocol, TOPLAS11Refinement}, information flow control enforcement \cite{ICFP20RefinementIFC}, and theorem proving \cite{POPL18Refinement}. Our aim is to utilise refinement types for the specification and verification of distributed protocols, by combining refinement and session types in a single practical framework. \mypara{Implementation of Session Types} \label{subsection:related-session-impl} \Citet{CC18TypeProvider} provides an implementation of MPST with assertions using \textsc{Scribble}\xspace and F\#\xspace. Their implementation, the session type provider (STP), relies on code generation of fluent (class-based) APIs, initially described in \cite{FASE16EndpointAPI}. Each protocol state is implemented as a class, with methods corresponding to the possible transitions from that state. It forces a programming style that not only relies extensively on method chaining, but also requires dynamic checks to ensure the linearity of channel usage. Our work differs from STP in multiple ways. First, we extend the \textsc{Scribble}\xspace toolchain to support \emph{recursion variables}, allowing refinements on recursions, hence improving expressiveness. In this way, developers can specify dependencies across recursive calls, which is not supported in STP.
Second, we depart from the class-based API generation, and generate a callback-based API. Our approach has the advantage that the linear usage of channels is ensured by construction, saving dynamic checks for channels. Third, we use refinement types in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace to verify refinements statically; in contrast, STP performs dynamic evaluations to validate assertions in protocols. Finally, the metatheory of session types extended with refinements was not developed in their work. Several other MPST works follow a similar technique of class-based API generation to overcome limitations of the type system in the target language, e.g. \citet{POPL19Parametric} for Go, \citet{CC15MPI} for C. All of the above works suffer from the same limitation: they detect linearity violations at runtime, and offer no static alternative. Indeed, to our knowledge, \citet{ECOOP20OCamlMPST} provide the only MPST implementation which \textit{statically} checks linearity violations. It relies on specific type-level OCaml features, and a monadic programming style. Our work proposes the generation of a callback-style API from MPST protocols. To our knowledge, it is the first work that ensures linear channel usage by construction. Although our target language is F*, the callback-style API code generation technique is applicable to any mainstream programming language.
\Citet{JLAMP17DependentMPST} extend MPST with value dependent types. Invariants on values are witnessed by proof objects, which may then be erased at runtime. Our work uses refinement types, which follow this principle naturally, since refinements that appear in types are proof-irrelevant and can be erased safely. These works are limited to theory, whereas we provide an implementation. \Citet{PLACES19DependentSession} propose an Embedded Domain Specific Language (EDSL) approach to implementing multiparty sessions (analogous to MPST) in Idris. They use value dependent types in Idris to define combinators, with options to specify data dependencies, contrary to our approach of code generation. However, the combinators only describe the sessions, and how to implement and execute the sessions remains unanswered. Our work provides a complete toolchain from protocol description to implementation and verification. In the setting of binary session types, \citet{CONCUR20SessionRefinement} extend session types with arithmetic refinements, with application to work analysis, computing upper bounds on the work from a given session type. \Citet{POPL20LabelDependent} extend binary session types with label dependent types. In their setup, specifying arithmetic properties involves complicated definitions of inductive arithmetic relations and functions. In contrast, we use SMT solvers, which have built-in functions and relations for arithmetic. Furthermore, there is no need to construct proofs manually, since SMT solvers find the proof automatically, which enhances usability and ergonomics. \Citet{POPL20Actris} combine binary session types with concurrent separation logic, allowing reasoning about mixed-paradigm concurrent programs, with plans to extend the framework to MPST.
Along similar lines, \citet{ICFP20SteelCore} provide a framework of concurrent separation logic in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace, and demonstrate its expressiveness by showing how (dependent) binary session types can be represented in the logic and used in reasoning. Our work is based on the theory of MPST, subsuming binary session types. Furthermore, we implement a toolchain that developers can use. \Citet{CSF09Session} use refinement types to implement a limited form of multiparty session types. Session types are encoded in refinement types via code generation. The specification language they use, albeit similar to MPST, has limited expressive power. Only patterns of interactions where participants alternate between sending and receiving are permitted. Moreover, they do not study data dependencies in protocols, hence they can neither specify nor verify constraints on payloads or recursions. We use refinement types to specify constraints and dependencies in multiparty protocols, and use the \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace compiler \cite{POPL16FStar} for verifying the endpoint implementations. The verified endpoint program not only complies with the multiparty protocol, enjoying the guarantees provided by the original MPST theory (deadlock freedom, session fidelity), but also satisfies additional guarantees provided by refinement types with respect to data constraints. \section{A Theory of Refined Multiparty Session Types (RMPST)} \label{section:theory} In this section, we introduce \emph{refined multiparty session types} (\emph{RMPST} for short). We give the syntax of types in \cref{subsection:theory-syntax}, extending the original multiparty session types (MPST) with \emph{refinement types}. We describe the refinement typing system that we use to type expressions in RMPST in \cref{subsection:typing-exp}. We follow the standard MPST methodology.
\emph{Global session types} describe communication structures of many \emph{participants} (also known as \emph{roles}). \emph{Local session types}, describing communication structures of a single participant, can be obtained via \emph{projection} (explained in \cref{subsection:theory-projection}). Endpoint processes implement local types obtained from projection. We give the semantics of global types and local types in \cref{subsection:theory-semantics}, and show the equivalence of the two semantics with respect to projection. As a consequence, we can compose the endpoint processes implementing the local types projected for each role in a global type to implement the global type correctly. \subsection{Syntax of Types} \label{subsection:theory-syntax} We define the syntax of refined multiparty session types (refined MPST) in \cref{fig:mpst-syntax}. We use different colours for different syntactic categories to aid disambiguation, but the syntax can be understood without colours. We use \dgt{pink}~for global types, \dtp{dark blue}~for local types, \dexp{blue}~for expressions, \dte{purple}~for base types, \dlbl{indigo}~for labels, and \ppt{Teal}~with bold fonts for participants. \begin{figure}[h] \vspace{-1mm} \begin{tabular}{ll} $ \arraycolsep=2pt \begin{array}{rcll} \dte{S} & \mathrel{::=} & \dte{\code{int}} \mathrel{\mid} \dte{\code{bool}} \mathrel{\mid} \dots & \text{\footnotesize Base Types} \\ \dte{T} & \mathrel{::=} & \dexp x:\dte{S}\esetof{E} & \text{\footnotesize Refinement Types} \\ \dexp{E} & \mathrel{::=} & \dexp x \mathrel{\mid} \dexp {\underline n} \mathrel{\mid} op_1~\dexp{E} \mathrel{\mid} \dexp{E} ~op_2~ \dexp{E} \dots & \text{\footnotesize Expressions} \\ \dgt{G} & \mathrel{::=} & & \text{\footnotesize Global Types} \\ & \mathrel{\mid} & \gtbran{p}{q}{\dlbl{l_i}(\vb{x_i}{T_i}).
G_i}_{i \in I} & \text{\footnotesize Message} \\ & \mathrel{\mid} & \gtrecur{t}{\vb{x}{T}}{\dexp{x := E}}{G} & \text{\footnotesize Recursion} \\ & \mathrel{\mid} & \gtvar{t}{\dexp{x := E}} ~~\mathrel{\mid}~~ \dgt{\code{end}} & \text{\footnotesize Type Var., End} \\ \end{array}$ & $ \begin{array}{rcll} \dtp{L} & \mathrel{::=} & & \text{\footnotesize Local Types} \\ & \mathrel{\mid} & \toffer{p}{\dlbl{l_i}(\vb{x_i}{T_i}). L_i}_{i \in I} & \text{\footnotesize Receiving}\\ & \mathrel{\mid} & \ttake{p}{\dlbl{l_i}(\vb{x_i}{T_i}). L_i}_{i \in I} & \text{\footnotesize Sending}\\ & \mathrel{\mid} & \tphi{l}{x}{T}{L} & \text{\footnotesize Silent Prefix} \\ & \mathrel{\mid} & \trecur{t}{}{\vb{x}{T}}{\dexp{x := E}}{L} & \text{\footnotesize Recursion} \\ & \mathrel{\mid} & \tvar{t}{\dexp{x := E}} ~~\mathrel{\mid}~~ \dtp{\code{end}} & \text{\footnotesize Type Var., End} \\ \end{array}$ \end{tabular} \vspace{-1mm} \caption{Syntax of Refined Multiparty Session Types} \vspace{-3mm} \label{fig:mpst-syntax} \end{figure} \mypara{Value Types and Expressions} We use $\dte{S}$ for base types of values, ranging over integers, booleans, etc. Values of the base types must be communicable. The base type $\dte{S}$ can be \emph{refined} by a boolean expression, acting as a predicate on the members of the base type. A \emph{refinement type} is of the form $(\vb{x}{S} \esetof{E})$. A value $\dexp{x}$ of the type has base type $\dte{S}$, and is refined by a boolean expression $\dexp E$. The boolean expression $\dexp E$ acts as a predicate on the members $\dexp x$ (possibly involving the variable $\dexp x$). For example, we can express natural numbers as $(\vb{x}{\dte{\code{int}}} \esetof{x \geq 0})$. We use $\fv{\cdot}$ to denote the free variables in refinement types, expressions, etc. We consider the variable $\dexp x$ to be bound in the refinement expression $\dexp E$, i.e. $\fv{\vb{x}{S}\esetof{E}} = \fv{\dexp E} \setminus \esetof{x}$.
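A language without refinement types can only approximate $(\vb{x}{S} \esetof{E})$ dynamically, e.g.\ with a checked constructor; the following Python sketch makes the contrast concrete (in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace, the predicate is instead discharged statically by the SMT solver).

```python
# Approximating a refinement type (x:S{E}) with a runtime-checked
# constructor: the predicate E is tested once, at construction time.
# In F*, the same predicate is verified statically via SMT instead.
from typing import Callable, Optional, TypeVar

A = TypeVar("A")


def refine(pred: Callable[[A], bool], x: A) -> Optional[A]:
    """Return x as an inhabitant of the refined type, or None."""
    return x if pred(x) else None


def mk_nat(x: int) -> Optional[int]:
    """nat = (x:int{x >= 0})"""
    return refine(lambda v: v >= 0, x)
```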
Where there is no ambiguity, we use the base type $\dte S$ directly as an abbreviation of a refinement type $(\vb{x}{S}\esetof{\dexp{\code{true}}})$, where $\dexp x$ is a fresh variable, and $\dexp{\code{true}}$ acts as a predicate that accepts all values. \mypara{Global Session Types} \emph{Global session types} (\emph{global types} or \emph{protocols} for short) range over $\dgt{G, G', G_i, \dots}$ Global types give an overview of the overall communication structure. We extend the standard global types \cite{ICALP13CFSM} with refinement types and variable bindings in message prefixes. Extensions to the syntax are \shaded{\text{shaded}} in the following explanations. $\gtbran{p}{q}{\dlbl{l_i}(\shaded{\dexp{x_i}:}\dte{T_i}). G_i}_{i \in I}$ is a message from $\ppt p$ to $\ppt q$, which branches into one or more continuations with label $\dlbl{l_i}$, carrying a payload variable $\dexp{x_i}$ with type $\dte{T_i}$. We omit the curly braces when there is only one branch, like $\gtbransingle{p}{q}{\dlbl{l}(\vb{x}{T})}$. We highlight the difference from the standard syntax, i.e.\ the variable binding. The payload variable $\dexp{x_i}$ occurs bound in the continuation global type $\dgt{G_i}$, for all $i \in I$. We sometimes omit the variable if it is not used in the continuations. The free variables are defined as: \vspace{-1mm} $$\fv{\gtbran{p}{q}{\dlbl{l_i}(\vb{x_i}{T_i}). G_i}_{i \in I}} = \bigcup_{i \in I}{\fv{\dte{T_i}}} \cup \bigcup_{i \in I}{(\fv{\dgt{G_i}} \setminus \esetof{x_i})} \vspace{-1mm} $$ We require that the index set $I$ is not empty, and all labels $\dlbl{l_i}$ are distinct. To prevent duplication, we write $\dlbl{l}(\vb{x}{S}\esetof{E})$ instead of $\dlbl{l}(\vb{x}{(\vb{x}{S}\esetof{E})})$ (the first $\dexp x$ occurs as a variable binding in the message, the second $\dexp x$ occurs as a variable representing member values in the refinement types). 
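The free-variable equation for messages can be read executably. In the following Python sketch, a branch carries precomputed free-variable sets for its payload type and continuation; this toy representation is ours, chosen only to mirror the equation.

```python
# fv(p -> q {l_i(x_i:T_i).G_i}) = U_i fv(T_i)  U  U_i (fv(G_i) \ {x_i}):
# each payload variable x_i binds only in its own continuation G_i.
# A branch is (label, payload variable, fv of T_i, fv of G_i).
def fv_message(branches):
    out = set()
    for _label, x, fv_ty, fv_cont in branches:
        out |= set(fv_ty) | (set(fv_cont) - {x})
    return out
```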
We extend the construct of recursive protocols to include a variable carrying a value in the inner protocol. In this way, we enhance the expressiveness of the global types by allowing a recursion variable to be maintained across iterations of global protocols. The recursive global type $\gtrecur{t}{\shaded{\vb{x}{T}}}{\shaded{\dexp{x := E}}}{G}$ specifies a variable $\dexp{x}$ carrying type $\dte{T}$ in the recursive type, initialised with expression $\dexp E$. The type variable $\gtvar{t}{\shaded{\dexp{x := E}}}$ is annotated with an assignment of expression $\dexp E$ to variable $\dexp x$. The assignment updates the variable $\dexp{x}$ in the current recursive protocol to expression $\dexp{E}$. The free variables of a recursive type are defined as \vspace{-1mm} $$\fv{\gtrecur{t}{{\vb{x}{T}}}{{\dexp{x := E}}}{G}} = \fv{\dte T} \cup \fv{\dexp E} \cup (\fv{\dgt G} \setminus \esetof{x})$$ \vspace{-5mm} We require that recursive types are contractive \cite[\S 21]{PierceTAPL}, so that recursive protocols have at least one message prefix, and protocols such as $\gtrecur{t}{\vb{x}{T}}{\dexp{x:=E_1}}{\gtvar{t}{\dexp{x:=E_2}}}$ are not allowed. We also require recursive types to be closed with respect to type variables, e.g.\ protocols such as $\gtvar{t}{\dexp{x:=E}}$ alone are not allowed. We write $\dgt G\subst{\gtrecursimpl{t}{\vb{x}{T}}{G}}{\dgt{\mathbf{t}}}$ for the substitution that replaces all occurrences of type variables with assignments, $\gtvar{t}{\dexp{x:= E}}$, in $\dgt G$ by the recursive type $\gtrecur{t}{\vb{x}{T}}{\dexp{x:=E}}{G}$. We write $\ppt r \in \dgt G$ to say $\ppt r$ is a participating role in the global type $\dgt G$. \begin{example}[Global Types] We give the following examples of global types. \begin{enumerate} \item $ \dgt{G_1} = \gtbransingle{A}{B}{\dlbl{Fst}(\vb{x}{\dte{\code{int}}})}. \gtbransingle{B}{C}{\dlbl{Snd}(\vb{y}{\dte{\code{int}}\esetof{x = y}})}. \gtbransingle{C}{D}{\dlbl{Trd}(\vb{z}{\dte{\code{int}}\esetof{x = z}})}. \dgt{\code{end}}.
$ \vspace{2mm} $\dgt{G_1}$ describes a protocol where $\ppt A$ sends an $\dte{\code{int}}$ to $\ppt B$, and $\ppt B$ relays the same $\dte{\code{int}}$ to $\ppt C$, and similarly from $\ppt C$ to $\ppt D$. Note that we can write $\dexp{x=z}$ in the refinement of $\dexp z$, even though $\dexp x$ is not known to $\ppt C$. \vspace{2mm} \item $ \dgt{G_2} = \gtbransingle{A}{B}{\dlbl{Number}(\vb{x}{\dte{\code{int}}})}. \gtbran{B}{C}{ \begin{array}{@{}l@{}} \dlbl{Positive}({\dte{\code{unit}}}\esetof{x > 0}).\dgt{\code{end}}\\ \dlbl{Zero}({\dte{\code{unit}}}\esetof{x = 0}).\dgt{\code{end}}\\ \dlbl{Negative}({\dte{\code{unit}}}\esetof{x < 0}).\dgt{\code{end}} \end{array} } $ \vspace{2mm} $\dgt{G_2}$ describes a protocol where $\ppt A$ sends an $\dte{\code{int}}$ to $\ppt B$, and $\ppt B$ tells $\ppt C$ whether the $\dte{\code{int}}$ is positive, zero, or negative. We omit the variable here since it is not used later in the continuation. \vspace{2mm} \item $ \arraycolsep=1pt \begin{array}[t]{@{}ll@{}} \dgt{G_3} = & \gtrecur{t} {\vb{try}{\dte{\code{int}}\esetof{try \geq 0 \land try \leq 3}}} {\dexp{try := 0}} {}\\ &\begin{array}{@{}l@{}} \gtbransingle{A}{B}{\dlbl{Password}(\vb{pwd}{\dte{\code{string}}})}.\\ \gtbran{B}{A}{ \begin{array}{@{}l@{}} \dlbl{Correct}({\dte{\code{unit}}}).\dgt{\code{end}}\\ \dlbl{Retry}({\dte{\code{unit}}}\esetof{try < 3}).\gtvar{t}{\dexp{try:=try+1}}\\ \dlbl{Denied}({\dte{\code{unit}}}\esetof{try = 3}).\dgt{\code{end}} \end{array} } \end{array} \end{array} $ \vspace{2mm} $\dgt{G_3}$ describes a protocol where $\ppt A$ authenticates with $\ppt B$ with a maximum of 3 tries. \end{enumerate} \label{example:gty} \end{example} \mypara{Local Session Types} \emph{Local session types} (\emph{local types} for short) range over $\dtp{L, L', L_i, \dots}$ Local types give a view of the communication structure of an endpoint, usually obtained from a global type. In addition to the standard syntax, the recursive types are extended in the same way as those of global types.
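Returning to $\dgt{G_3}$ of \cref{example:gty}, the recursion variable evolves exactly by the annotated assignments. The following Python sketch simulates this evolution; the password oracle passed as \code{check} is an assumption for illustration, not part of the protocol.

```python
# Simulating the recursion variable of G3: try_ starts at 0 (try := 0)
# and is updated by try := try + 1 on each Retry branch. The assertion
# mirrors the refinement 0 <= try <= 3 on the recursion variable. The
# password oracle `check` is an assumption, not part of the protocol.
def authenticate(check, attempts):
    try_ = 0  # try := 0
    for pwd in attempts:
        assert 0 <= try_ <= 3
        if check(pwd):
            return "correct"
        if try_ < 3:
            try_ += 1  # Retry: try := try + 1
        else:
            return "denied"  # try = 3
    return "denied"  # A stops sending passwords
```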
Suppose the current role is $\ppt q$. The local type $\ttake{p}{\dlbl{l_i}(\vb{x_i}{T_i}). L_i}_{i \in I}$ describes that the role $\ppt q$ sends a message to the partner role $\ppt p$ with label $\dlbl{l_i}$ (where $i$ is selected from an index set $I$), carrying payload variable $\dexp{x_i}$ with type $\dte{T_i}$, and continues with $\dtp{L_i}$. It is also said that the role $\ppt q$ takes an \emph{internal choice}. Similarly, the local type $\toffer{p}{\dlbl{l_i}(\vb{x_i}{T_i}). L_i}_{i \in I}$ describes that the role $\ppt q$ receives a message from the partner role $\ppt p$. In this case, it is also said that the role $\ppt q$ offers an \emph{external choice}. We omit curly braces when there is only a single branch (as is done for global messages). We add a new syntax construct, $\tphi{l}{x}{T}{L}$, for \emph{silent local types}. We introduce this new prefix to represent knowledge obtained from the global protocol, but not in the form of a message. Silent local types are useful to model variables obtained with irrelevant quantification \cite{LICS01Irrelavance, LMCS12Irrelevance}. These variables can be used in the construction of a type, but cannot be used in that of an expression, as we explain later in \cref{subsection:typing-exp}. We show an example of a silent local type later in \cref{example:proj}, after we define \emph{endpoint projection}, the process of obtaining local types from a global type. \subsection{Expressions and Typing Expressions} \label{subsection:typing-exp} We use $\dexp{E, E', E_i}$ to range over expressions. Expressions consist of variables $\dexp x$, constants (e.g.\ integer literals $\dexp{\underline{n}}$), and unary and binary operations. We use an SMT-assisted refinement type system for typing expressions, in the style of \cite{PLDI08LiquidTypes}.
The simple syntax of expressions allows all expressions to be encoded into SMT logic, for deciding a semantic subtyping relation of refinement types \cite{JFP12SemanticSubtyping}. \input{fig/rules/typing-exp.tex} \mypara{Typing Contexts} We define two categories of typing contexts, for use in handling global types and local types respectively. $$ \Gamma \mathrel{::=} {\ensuremath{\varnothing}} \mathrel{\mid} \ctxc{\Gamma}{x^{\ppt{$\mathbb{P}$}}}{T} \hspace{8mm} \Sigma \mathrel{::=} {\ensuremath{\varnothing}} \mathrel{\mid} \ctxc{\Sigma}{x^\theta}{T} \hspace{8mm} \theta \mathrel{::=} 0 \mathrel{\mid} \omega $$ We annotate global and local typing contexts differently. For global contexts $\Gamma$, variables carry the annotation of a set of roles $\ppt{$\mathbb{P}$}$, denoting the set of roles that know the value. For local contexts $\Sigma$, variables carry the annotation of their multiplicity $\theta$. A variable with multiplicity $0$ is an irrelevantly quantified variable (irrelevant variable for short), which cannot appear in an expression when typing (also denoted as $\dexp x \div \dte T$ in the literature \cite{LICS01Irrelavance,LMCS12Irrelevance}). Such a variable can only appear in an expression used as a predicate, when defining a refinement type. A variable with multiplicity $\omega$ is a variable without restriction. We often omit the multiplicity $\omega$. \mypara{Well-formedness} Since a refinement type can contain free variables, it is necessary to define well-formedness judgements on refinement types, and hence on typing contexts. We define $\Sigma^+$ to be the local typing context where all irrelevant variables $\dexp{x^0}$ become unrestricted $\dexp{x^\omega}$, i.e.\ $({\ensuremath{\varnothing}})^+ = {\ensuremath{\varnothing}}; (\Sigma, \vb{x^\theta}{T})^+ = \Sigma^+, \vb{x^\omega}{T}$. We show the well-formedness judgement of a refinement type \ruleWf{Rty}~in \cref{fig:typing-expression}.
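The promotion $\Sigma^+$ is directly executable over a list representation of local contexts; the representation below (association lists, string tags for $\theta$) is our own.

```python
# Local contexts as lists of (variable, multiplicity, type); the
# promotion Sigma^+ follows the inductive definition in the text: every
# irrelevant binding x^0 becomes an unrestricted binding x^omega.
OMEGA, ZERO = "omega", "0"


def promote(sigma):
    return [(x, OMEGA, t) for (x, _theta, t) in sigma]
```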
For a refinement type $(\vb{x}{S}\esetof{E})$ to be a well-formed type, the expression $\dexp E$ must have a boolean type under the context $\Sigma^+$, extended with variable $\dexp x$ (representing the members of the type) with type $\dte S$. The typing context $\Sigma^+$ promotes the irrelevantly quantified variables into unrestricted variables, so they can be used in the expression $\dexp E$ inside the refinement type. The well-formedness of a typing context is defined inductively, requiring all refinement types in the context to be well-formed. We omit the judgements for brevity. \mypara{Typing Expressions} We type expressions in local contexts, forming judgements of the form \linebreak \hfill \hfill $\Sigma \vdash \dexp E: \dte T$, and show key typing rules in \cref{fig:typing-expression}. We modify the typing rules of a standard refinement \linebreak type system \cite{PLDI08LiquidTypes, ICFP14LiquidHaskell, POPL18Refinement}, adding a distinction between irrelevant and unrestricted variables. \ruleTE{Const}\ gives constant values in the expression a refinement type that only contains the constant value. Similarly, \ruleTE{Plus}\ gives typing derivations for the plus operator, with a corresponding refinement type encoding the addition. We draw attention to the handling of variables (\ruleTE{Var}). An irrelevant variable in the typing context cannot appear in an expression, i.e.\ there is \emph{no} derivation for $\Sigma_1, \vb{x^0}{T}, \Sigma_2 \vdash \vb{x}{T}$. These variables can only be used in a refinement type (see \ruleWf{Rty}). The key feature of the refinement type system is the semantic subtyping relation decided by SMT \cite{JFP12SemanticSubtyping}; we describe this feature in \ruleTE{Sub}. We use $\enc{\dexp E}$ to denote the encoding of expression $\dexp E$ into the SMT logic.
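As an illustration of the encoding, the semantic subtyping check behind \ruleTE{Sub}\ amounts to asking the solver whether the sub-refinement implies the super-refinement for all values. The following sketch prints the shape of such a query in SMT-LIB surface syntax; it is a deliberately simplified picture, not the actual encoding used by \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace.

```python
# The query shape for (x:int{e1}) <: (x:int{e2}): e1 must imply e2 for
# all x, i.e. (e1 and not e2) must be unsatisfiable. SMT-LIB surface
# syntax; a deliberately simplified sketch of the real encoding.
def subtyping_query(x: str, e1: str, e2: str) -> str:
    return (f"(declare-const {x} Int)\n"
            f"(assert {e1})\n"
            f"(assert (not {e2}))\n"
            "(check-sat)")


# e.g. (x:int{x > 0}) <: (x:int{x >= 0})
query = subtyping_query("x", "(> x 0)", "(>= x 0)")
```

An `unsat` answer from the solver establishes the subtyping; a `sat` answer yields a counterexample value of `x`.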
We encode a type binding $\vb{x^\theta}{(\vb{v}{S}\esetof{E})}$ in a typing context by encoding the term $\dexp{E\subst{x}{v}}$, and define the encoding of a typing context $\enc{\Sigma}$ inductively. We define the extension of typing contexts ($\ctxext{\Gamma}{x^{\ppt{$\mathbb{P}$}}}{T}$; $\ctxext{\Sigma}{x^\theta}{T}$) in \cref{fig:ctx-ext}, used in the definitions of the semantics. We say a global type $\dgt G$ (resp.\ a local type $\dtp L$) is closed under a global context $\Gamma$ (resp.\ a local context $\Sigma$), if all free variables in the type are in the domain of the context. \begin{figure} \scalebox{0.9}{$ \ctxext{\Gamma}{x^{\ppt{$\mathbb{P}$}}}{T} = \begin{cases} \Gamma, \vb{x^{\ppt{$\mathbb{P}$}}}{T} & \text{if}~\dexp{x} \notin \Gamma \\ \Gamma_1, \vb{x^{\ppt{$\mathbb{P}$}}}{T}, \Gamma_2 & \text{if}~\Gamma = \Gamma_1, \vb{x^{\varnothing}}{T}, \Gamma_2\\ \Gamma_1, \vb{x^{\ppt{$\mathbb{P}$}}}{T}, \Gamma_2 & \text{if}~\Gamma = \Gamma_1, \vb{x^{\ppt{$\mathbb{P}$}}}{T}, \Gamma_2\\ \text{undefined} & \text{otherwise} \end{cases} \hspace{5mm} \ctxext{\Sigma}{x^\theta}{T} = \begin{cases} \Sigma, \vb{x^\theta}{T} & \text{if}~\dexp{x} \notin \Sigma \\ \Sigma_1, \vb{x^\theta}{T}, \Sigma_2 & \text{if}~\Sigma = \Sigma_1, \vb{x^0}{T}, \Sigma_2\\ \Sigma_1, \vb{x^\omega}{T}, \Sigma_2 & \text{if}~\Sigma = \Sigma_1, \vb{x^\omega}{T}, \Sigma_2\\ \text{undefined} & \text{otherwise} \end{cases} $} \vspace{-3mm} \caption{Typing Context Extension} \vspace{-3mm} \label{fig:ctx-ext} \end{figure} \begin{remark}[Empty Type] \upshape A refinement type may be \emph{empty}, with no inhabitants. We can construct such a type under the empty context ${\ensuremath{\varnothing}}$ as $(\vb{x}{S}\esetof{\dexp{\code{false}}})$ for any base type $\dte S$. A more specific example is a refinement type for an integer that is both negative and positive: $(\vb{x}{\dte{\code{int}}}\esetof{x > 0 \land x < 0})$.
Similarly, under the context $\vb{x^\omega}{\dte{\code{int}}\esetof{x > 0}}$, the refinement type $\vb{y}{\dte{\code{int}}\esetof{y < 0 \land y > x}}$ is empty. In these cases, the typing context with the specified type becomes inconsistent, i.e.\ the encoded context gives a proof of falsity. An empty type can also occur without inconsistency. For instance, in a typing context of $\vb{x^0}{\dte{\code{int}}}$, the type $\vb{y}{\dte{\code{int}}\esetof{y > x}}$ is empty --- it is not possible to produce such a value without referring to $\dexp x$ (cf.\ \ruleTE{Var}). \label{rem:empty-value-type} \end{remark} \subsection{Endpoint Projection: From Global Contexts and Types to Local Contexts and Types} \label{subsection:theory-projection} In the methodology of multiparty session types, developers specify a global type, and obtain local types for the participants via \emph{endpoint projection} (\emph{projection} for short). In the original theory, projection is a \emph{partial} function that takes a global type $\dgt G$ and a participant $\ppt p$, and returns a local type $\dtp L$. The resulting local type $\dtp L$ describes the local communication behaviour for participant $\ppt p$ in the global scenario. Such a workflow has the advantage that each endpoint can obtain a local type separately, and implement a process of the given type, hence providing modularity and scalability. Projection is defined as a \emph{partial} function, since only \emph{well-formed} global types can be projected to all participants. In particular, a \emph{partial} merge operator $\sqcup$ is used during the projection, for creating a local type $\Sigma \vdash \dtp{L_1} \sqcup \dtp{L_2} = \dtp{L_{\text{merged}}}$ that captures the behaviour of two local types, under context $\Sigma$. In RMPST, we first define the projection of global typing contexts (\cref{fig:proj-gctx}), and then define the projection of global types under a global typing context (\cref{fig:proj-gty}).
We use expression typing judgements in the definition of projection, to type-check expressions against their prescribed types. \mypara{Projection of Global Contexts} We define the judgement $\ctxproj{\Gamma}{p} = \Sigma$ for the projection of global typing context $\Gamma$ to participant $\ppt p$ in \cref{fig:proj-gctx}. In the global context $\Gamma$, a variable $\dexp x$ is annotated with the set of participants $\ppt{$\mathbb{P}$}$ who know the value. If the projected participant $\ppt p$ is in the set $\ppt{$\mathbb{P}$}$, \ruleP{Var-$\omega$}\ is applied to obtain an unrestricted variable in the resulting local context; otherwise, \ruleP{Var-$0$}\ is applied to obtain an irrelevant variable. \mypara{Projection of Global Types with a Global Context} When projecting a global type $\dgt G$, we include a global context $\Gamma$, forming a judgement of the form $\gtctxproj{\Gamma}{G}{p} = \ltctx{\Sigma}{L}$. Projection rules are shown in \cref{fig:proj-gty}. Including a typing context allows us to type-check expressions during projection, hence ensuring that variables attached to recursive protocols are well-typed. \input{fig/rules/proj-gctx.tex} \input{fig/rules/proj-gty.tex} If the prefix of $\dgt G$ is a message from role $\ppt p$ to role $\ppt q$, the projection for role $\ppt p$ (resp.\ $\ppt q$) results in a local type with a send (resp.\ receive) prefix via \ruleP{Send}\ (resp.\ \ruleP{Recv}). For other roles $\ppt r$, the projection results in a local type with a \emph{silent label} via \ruleP{Phi}, with prefix $\tphi{l}{x}{T}{}$. This follows the concept of a coordinated distributed system, where all the processes follow a global protocol, and base their local actions on assumptions about actions of other roles that do not involve them. The projection defined in the original MPST theory does not contain information for role $\ppt r$ about a message between $\ppt p$ and $\ppt q$.
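The two projection operations just described can be sketched in Python. The data layout, function names, and the textual 'w'/'0' annotation tags below are our own illustrative choices, not the paper's formal syntax; the continuation passed to the message projection is assumed to be already projected.

```python
# Sketch of projection: a global-context binding x^P : T projects to an
# unrestricted binding (tag 'w') when the projected participant knows
# the value, and to an irrelevant one (tag '0') otherwise; a message
# prefix projects to a send, a receive, or a silent prefix ([P-Phi]).

def project_ctx(gamma, p):
    # gamma: list of (var, knowing_participants, type)
    return [(x, 'w' if p in ps else '0', t) for (x, ps, t) in gamma]

def project_msg(frm, to, label, var, typ, cont, r):
    # cont: the (already projected) continuation for role r
    if r == frm:
        return ('send', to, label, var, typ, cont)
    if r == to:
        return ('recv', frm, label, var, typ, cont)
    return ('silent', label, var, typ, cont)  # [P-Phi]: keep x : T

assert project_ctx([('x', {'p', 'q'}, 'int')], 'p') == [('x', 'w', 'int')]
assert project_ctx([('x', {'p', 'q'}, 'int')], 'r') == [('x', '0', 'int')]
assert project_msg('p', 'q', 'Hello', 'x', 'int', 'end', 'r')[0] == 'silent'
```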
We use the silent prefix to retain such information, especially the refinement type $\dte T$ of the payload. For merging two local types (as used in \ruleP{Phi}), we use a simple plain merge operator defined as $ \Sigma \vdash \dtp L \sqcup \dtp L = \dtp L $, requiring two local types to be identical in order to be merged.\footnotemark \footnotetext{We build upon the standard MPST theory with plain merging. Full merge~\cite{LMCS12Parameterised}, allowing certain different index sets to be merged, is an alternative, more permissive merge operator. Our implementation \texorpdfstring{\textsc{Session$^{\star}$}}{SessionStar}\xspace uses the more permissive merge operator for better expressiveness.} If the prefix of $\dgt G$ is a recursive protocol $\gtrecur{t}{\vb{x}{T}}{\dexp{x:=E}}{G}$, the projection preserves the recursion construct via \ruleP{Rec-In}\ if the projected role occurs in the inner protocol and the expression $\dexp E$ can be typed with type $\dte T$ under the projected local context. Typing expressions under local contexts ensures that no irrelevant variables $\dexp{x^0}$ are used in the expression $\dexp E$, as no typing derivation exists for irrelevant variables. Otherwise, the projection results in $\dtp{\code{end}}$ via \ruleP{Rec-Out}. If $\dgt G$ is a type variable $\gtvar{t}{\dexp{x:=E}}$, we similarly validate that the expression $\dexp E$ carries the specified type in the corresponding recursion definition, and its projection also preserves the type variable construct. \begin{example}[Projection of Global Types of \cref{example:gty} (1)] We draw attention to the projection of $\dgt{G_1}$ to $\ppt C$, under the empty context ${\ensuremath{\varnothing}}$. \[ \gtctxproj{{\ensuremath{\varnothing}}}{G_1}{C} = \ltctx{{\ensuremath{\varnothing}}}{ \tphi{Fst}{x}{\dte{\code{int}}}{} \toffersingle{B}{\dlbl{Snd}(\vb{y}{\dte{\code{int}}\esetof{x = y}}). \ttakesingle{D}{\dlbl{Trd}(\vb{z}{\dte{\code{int}}\esetof{x = z}}).
\dtp{\code{end}}} } } \] We note that the local type for $\ppt C$ has a silent prefix $\dlbl{Fst}(\vb{x}{\dte{\code{int}}})$, which binds the variable $\dexp x$ in the continuation. The silent prefix adds the variable $\dexp x$ and its type to the ``local knowledge'' of the endpoint $\ppt C$, yet the actual value of $\dexp x$ is unknown. \label{example:proj} \end{example} \begin{remark}[Empty Session Type] \upshape Global types $\dgt G$ and local types $\dtp L$ can be empty because one of the value types in the protocol is an empty type (cf.\ \cref{rem:empty-value-type}). For example, the local type $\ttakesingle{A}{\dlbl{Impossible}(\vb{x}{\dte{\code{int}}\esetof{x > 0 \land x < 0}}).\dtp{\code{end}}}$ cannot be implemented, since such an $\dexp x$ cannot be provided. For the same reason, the local type $\tphi{Pos}{x}{\dte{\code{int}}\esetof{x > 0}}{}\ttakesingle{A}{\dlbl{Impossible}(\vb{y}{\dte{\code{int}}\esetof{y > x}}).\dtp{\code{end}}} $ cannot be implemented. \label{rem:empty-session-type} \end{remark} \begin{remark}[Implementable Session Types] \upshape Consider the following session type: $$\dtp L = \toffersingle{B}{\dlbl{Num}(\vb{x}{\dte{\code{int}}}). \ttake{B}{\begin{array}{@{}l@{}} \dlbl{Pos}({\dte{\code{unit}}\esetof{x > 0}}).\dtp{\code{end}}\\ \dlbl{Neg}({\dte{\code{unit}}\esetof{x < 0}}).\dtp{\code{end}} \end{array} }}. $$ When the variable $\dexp x$ has the value $\eintlit{0}$, neither of the choices $\dlbl{Pos}$ or $\dlbl{Neg}$ can be selected, as the refinements are not satisfied. In this case, the local type $\dtp L$ cannot be implemented, as the internal choice callback cannot be implemented in a \emph{total} way, i.e.\ such that the callback returns a choice label for all possible inputs of integer $\dexp x$.\footnotemark \footnotetext{Since we use a permissive \keyword{ML} effect in the callback type, allowing all side effects to be performed in the callback, the callback may throw exceptions or diverge when it is unable to return a value.
} \end{remark} \subsection{Labelled Transition System (LTS) Semantics} \label{subsection:theory-semantics} We define the labelled transition system (LTS) semantics for global types and local types. We show the trace equivalence of a global type and the collection of local types projected from the global type, to demonstrate that projection preserves LTS semantics. The equivalence result allows us to use the projected local types for the implementation of local roles separately. Therefore, we can implement the endpoints in \texorpdfstring{\textsc{F}$^{\star}$}{FStar}\xspace separately, and they compose into the specified protocol. We also prove a type safety result that well-formed global types cannot be stuck. This, combined with the trace equivalence result, guarantees that endpoints are free from deadlocks. \mypara{Actions} We begin by defining actions in the LTS. We define the label in the LTS as $\alpha \mathrel{::=} \ltsmsg{p}{q}{l}{x}{T}$, a message from role $\ppt p$ to $\ppt q$ with label $\dlbl l$ carrying a value named $\dexp x$ with type $\dte T$. We define $\subj{\alpha} = \psetof{p, q}$ to be the subjects of the action $\alpha$, namely the two roles in the action. \mypara{Semantics of Global Types} We define the LTS semantics of global types in \cref{fig:lts-gty}. In contrast to the original LTS semantics in \cite{ICALP13CFSM}, we include the context $\Gamma$ in the semantics along with the global type $\dgt G$. Therefore, the judgements of global LTS reduction have the form ${\gtctx{\Gamma}{G} \stepsto[\alpha] \gtctx{\Gamma'}{G'}}$. \ruleG{Pfx}\ allows the reduction of the prefix action in a global type. An action matching one of the branches defined in the prefix selects the corresponding continuation. The resulting global type is the matching continuation and the resulting context contains the variable binding in the action.
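The \ruleG{Pfx}\ reduction can be sketched as a small Python function; the encoding of contexts as lists of bindings and of branches as a label-indexed map is our own illustrative choice.

```python
# Sketch of rule [G-Pfx]: a prefix p -> q : {l_i(x_i:T_i).G_i} reduces
# under the action for label l_j to the continuation G_j, extending the
# context with the binding x_j, known to {p, q}.

def step_pfx(gamma, frm, to, branches, label):
    var, typ, cont = branches[label]
    return gamma + [(var, {frm, to}, typ)], cont

gamma1, g1 = step_pfx([], 'p', 'q', {'Hello': ('x', 'int', 'end')}, 'Hello')
assert gamma1 == [('x', {'p', 'q'}, 'int')]
assert g1 == 'end'
```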
\ruleG{Cnt}\ allows the reduction of an action that is causally independent of the prefix action in a global type: here, the subjects of the action are disjoint from those of the prefix. If all continuations of the global type can reduce by that action to the same context, then the resulting context is that context and the resulting global type carries the reduced continuations. When reducing the continuations, we add the variable of the prefix action into the context, but tagged with an empty set of known roles. This addition ensures that relevant information obtainable from the prefix message is not lost when performing reduction. \ruleG{Rec}\ allows the reduction of a recursive type by unfolding the type once. \input{fig/rules/lts-gty.tex} \begin{example}[Global Type Reductions] We demonstrate two reduction paths for a global type $$\dgt G = \gtbransingle{p}{q}{\dlbl{Hello}(\dexp{x}: \dte{\code{int}}\esetof{x < 0}).\gtbransingle{r}{s}{\dlbl{Hola}(\dexp{y}: \dte{\code{int}}\esetof{y > x}).\dgt{\code{end}}}}.$$ Note that the two messages are not causally related (they have disjoint subjects).
We have the following two reduction paths of $\gtctx{{\ensuremath{\varnothing}}}{G}$ (omitting payload in LTS actions): \[ \begin{array}{rl} & \gtctx{{\ensuremath{\varnothing}}}{G} \\ \text\ruleG{Pfx}\stepsto[\ltsmsgsimpl{p}{q}{Hello}] & \gtctx{\vb{{x}^{\psetof{p, q}}}{\dte{\code{int}}\esetof{x < 0}}}{\gtbransingle{r}{s}{\dlbl{Hola}(\vb{y}{\dte{\code{int}}\esetof{y > x}}).\dgt{\code{end}}}} \\ \text\ruleG{Pfx}\stepsto[\ltsmsgsimpl{r}{s}{Hola}] & \gtctx{\vb{{x}^{\psetof{p, q}}}{\dte{\code{int}}\esetof{x < 0}}, \vb{{y}^{\psetof{r, s}}}{\dte{\code{int}}\esetof{y > x}}}{\dgt{\code{end}}} \\[2.5ex] & \gtctx{{\ensuremath{\varnothing}}}{G} \\ \text\ruleG{Cnt}\stepsto[\ltsmsgsimpl{r}{s}{Hola}] & \gtctx{\vb{{x}^{\varnothing}}{\dte{\code{int}}\esetof{x < 0}}, \vb{{y}^{\psetof{r, s}}}{\dte{\code{int}}\esetof{y > x}}}{\gtbransingle{p}{q}{\dlbl{Hello}(\vb{x}{\dte{\code{int}}\esetof{x < 0}}).\dgt{\code{end}}}} \\ \text\ruleG{Pfx}\stepsto[\ltsmsgsimpl{p}{q}{Hello}] & \gtctx{\vb{{x}^{\psetof{p, q}}}{\dte{\code{int}}\esetof{x < 0}}, \vb{{y}^{\psetof{r, s}}}{\dte{\code{int}}\esetof{y > x}}}{\dgt{\code{end}}} \end{array} \] \end{example} \mypara{Semantics of Local Types} We define the LTS semantics of local types in \cref{fig:lts-lty}. Similar to the global type LTS semantics, we include the local context $\Sigma$ in the semantics. Therefore, the judgements of local LTS reductions have the form ${\ltctx{\Sigma}{L} \stepsto[\alpha] \ltctx{\Sigma'}{L'}}$. When defining the LTS semantics, we also use judgements of the form ${\ltctx{\Sigma}{L} \stepsto[\epsilon] \ltctx{\Sigma'}{L'}}$. Such a judgement represents a silent action that can occur without an observed action. We write $\stepsto[\epsilon]^*$ to denote the reflexive, transitive closure of silent actions $\stepsto[\epsilon]$. We first look at silent transitions. \ruleE{Phi}\ allows the variable in a silent type to be added into the local context in the irrelevant form.
This rule allows local roles to obtain knowledge from the messages in the global protocol without their participation. \ruleE{Cnt}\ allows a prefixed local type to make a silent transition, if all of its continuations are allowed to make a silent transition to reach the same context. The rule allows a prefixed local type to obtain new knowledge about irrelevant variables, if such knowledge can be obtained in all possible continuations. \ruleE{Rec}\ unfolds recursive local types, analogous to the unfolding of global types. For concrete transitions, we have \ruleL{Send}\ (resp.\ \ruleL{Recv}) to reduce a local type with a sending (resp.\ receiving) prefix, if the action label is in the set of labels in the local type. The resulting context contains the variable in the message as a concrete variable, since the role knows the value via communication. The resulting local type is the continuation corresponding to the action label. In addition, \ruleL{Eps}\ permits any number of silent actions to be taken before a concrete action. \input{fig/rules/lts-lty.tex} \begin{remark}[Reductions for Empty Session Types] \upshape We consider empty session types to be reducible, since it is not possible to distinguish which types are inhabited. However, this does not invalidate the safety properties of endpoints, since no such endpoints can be implemented for an empty session type. \end{remark} \mypara{Relating Semantics of Global and Local Types} We extend the LTS semantics to a collection of local types in \cref{def:lts-collection}, in order to prove that projection preserves semantics. We define the semantics in a synchronous fashion. The set of local types reduces with an action $\alpha = \ltsmsg{p}{q}{l}{x}{T}$, if the local types for roles $\ppt p$ and $\ppt q$ both reduce with that action $\alpha$. All other roles in the set of the local types are permitted to make silent actions ($\epsilon$ actions). Our definition deviates from the standard definition \cite[Def.
3.3]{ICALP13CFSM} in two ways. First, we use a synchronous semantics, so that one action involves two reductions, namely at the sending and receiving sides. Second, we use contexts and silent transitions in the LTS semantics. The original definition requires all non-action roles to be identical, whereas we relax the requirement to allow silent transitions. \begin{definition}[LTS over a collection of local types] A configuration $s = \setof{\ltctx{\Sigma_{\ppt{r}}}{L_{\ppt{r}}}}_{\ppt r \in \ppt{$\mathbb{P}$}}$ is a collection of local types and contexts, indexed by participants. Let $\ppt p \in \ppt{$\mathbb{P}$}$ and $\ppt q \in \ppt{$\mathbb{P}$}$. We say $s = \setof{\ltctx{\Sigma_{\ppt{r}}}{L_{\ppt{r}}}}_{\ppt r \in \ppt{$\mathbb{P}$}} \stepsto[\alpha = \ltsmsg{p}{q}{l}{x}{T}] s' = \setof{\ltctx{\Sigma'_{\ppt{r}}}{L'_{\ppt{r}}}}_{\ppt r \in \ppt{$\mathbb{P}$}}$ if \begin{enumerate} \item $\ltctx{\Sigma_{\ppt p}}{{L}_{\ppt p}} \stepsto[\alpha] \ltctx{\Sigma'_{\ppt p}}{L'_{\ppt p}}$ and, $\ltctx{\Sigma_{\ppt q}}{{L}_{\ppt q}} \stepsto[\alpha] \ltctx{\Sigma'_{\ppt q}}{L'_{\ppt q}}$ and, \item for all $\ppt s \in \ppt{$\mathbb{P}$}, \ppt s \neq \ppt p, \ppt s \neq \ppt q$. $\ltctx{\Sigma_{\ppt s}}{L_{\ppt s}} \stepsto[\epsilon]^* \ltctx{\Sigma'_{\ppt s}}{L'_{\ppt s}}$ \end{enumerate} \label{def:lts-collection} \end{definition} For a closed global type $\dgt G$ under context $\Gamma$, we show that the global type produces the same traces of reductions as the collection of local types obtained from projection. We prove it in \cref{thm:trace-eq}. \begin{definition}[Association of Global Types and Configurations] Let $\gtctx{\Gamma}{G}$ be a global context. The collection of local contexts \emph{associated} to $\gtctx{\Gamma}{G}$ is defined as the configuration \linebreak $\setof{\gtctxproj{\Gamma}{G}{r}}_{\ppt r \in \dgt G}$. We write $s \Leftrightarrow \gtctx{\Gamma}{G}$ if a configuration $s$ is associated to $\gtctx{\Gamma}{G}$.
\end{definition} \begin{theorem}[Trace Equivalence] Let $\gtctx{\Gamma}{G}$ be a closed global context and $s \Leftrightarrow \gtctx{\Gamma}{G}$ be a configuration associated with the global context. Then $\gtctx{\Gamma}{G} \stepsto[\alpha] \gtctx{\Gamma'}{G'}$ if and only if $s \stepsto[\alpha] s'$, where $s' \Leftrightarrow \gtctx{\Gamma'}{G'}$. \label{thm:trace-eq} \end{theorem} The theorem states that semantics are preserved after projection. Practically, we can implement local processes separately, and run them in parallel with preserved semantics. We also show that a well-formed global type $\dgt G$ has progress. This means that a well-formed global type does not get \emph{stuck}, which implies deadlock freedom. \begin{definition}[Well-formed Global Types] A global type under typing context $\gtctx{\Gamma}{G}$ is well-formed, if \begin{enumerate*} \item $\dgt G$ does not contain free type variables, \item $\dgt G$ is contractive \cite[\S 21]{PierceTAPL}, and \item for all roles in the protocol $\ppt r \in \dgt G$, the projection $\gtctxproj{\Gamma}{G}{r}$ is defined. \end{enumerate*} We also say a global type $\dgt G$ is well-formed, if $\gtctx{\varnothing}{G}$ is well-formed. \end{definition} \begin{theorem}[Preservation of Well-formedness] If $\gtctx{\Gamma}{G}$ is a well-formed global type under typing context, and $\gtctx{\Gamma}{G} \stepsto[\alpha] \gtctx{\Gamma'}{G'}$, then $\gtctx{\Gamma'}{G'}$ is well-formed. \label{thm:wf-preservation} \end{theorem} \begin{definition}[Progress] A configuration $s$ satisfies progress, if either \begin{enumerate*} \item For all participants $\ppt p \in s$, $\dtp{\dtp{L}_{\ppt p}} = \dtp{\code{end}}$, or \item there exists an action $\alpha$ and a configuration $s'$ such that $s \stepsto[\alpha] s'$. \end{enumerate*} A global type under typing context $\gtctx{\Gamma}{G}$ satisfies progress, if its associated configuration $s \Leftrightarrow \gtctx{\Gamma}{G}$ exists and satisfies progress.
We also say a global type $\dgt G$ satisfies progress, if $\gtctx{\varnothing}{G}$ satisfies progress. \end{definition} \begin{theorem}[Progress] If $\gtctx{\Gamma}{G}$ is a well-formed global type under typing context, then $\gtctx{\Gamma}{G}$ satisfies progress. \label{thm:progress} \end{theorem} \begin{theorem}[Type Safety] If $\dgt G$ is a well-formed global type, then for any global type under typing context $\gtctx{\Gamma'}{G'}$ such that $\gtctx{\varnothing}{G} \stepstomany[] \gtctx{\Gamma'}{G'}$, $\gtctx{\Gamma'}{G'}$ satisfies progress. \label{thm:type-safety} \end{theorem} \begin{proof} Direct consequence of \cref{thm:wf-preservation} and \cref{thm:progress}. \end{proof}
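The progress property above can be read directly as a predicate on a configuration: either every role's local type is $\dtp{\code{end}}$, or some action can fire. The sketch below makes this reading concrete; the encoding of a configuration as a dictionary and the stand-in `can_step` oracle are our own illustrative choices, not part of the formal development.

```python
# The Progress definition as a predicate: all roles ended, or some
# action can fire. `can_step` stands in for the existence of a
# reduction in the LTS over the collection of local types.

def satisfies_progress(config, can_step):
    return all(L == 'end' for L in config.values()) or can_step(config)

assert satisfies_progress({'p': 'end', 'q': 'end'}, lambda c: False)
assert not satisfies_progress({'p': 'send q Hello', 'q': 'end'},
                              lambda c: False)
```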
\section{Introduction} \begin{figure*}[t] \begin{center} \includegraphics[width=1\linewidth]{fig/fig0/fig0} \end{center} \caption{A pipeline of the proposed method. Given a pre-trained CNN, we replace the first two fully connected layers with the two equivalent convolutional layers to efficiently obtain a large amount of multi-scale dense activations. The activations are followed by the Multi-scale Pyramid Pooling (MPP) layer we suggest. The resulting image representation is combined with a linear SVM for the target classification task.} \label{FIG_PIPELINE} \end{figure*} Image representation is one of the most important factors that affect performance on visual recognition tasks. Barbu~\etal~\cite{FMRI} presented an interesting experiment in which a simple classifier along with human brain-scan data substantially outperforms the state-of-the-art methods in recognizing actions from video clips. With the success of local descriptors~\cite{SIFT}, many studies have focused on global image representations based on the Bag-of-Words (BOW) model~\cite{VGOOGLE} that aggregates abundant local statistics captured by hand-designed local descriptors. The BOW representation was further improved with VLAD~\cite{ORGVLAD} and the Fisher kernel~\cite{IFK, FK} by adding higher order statistics. One major benefit of these global representations based on local descriptors is their invariance to scale changes, location changes, occlusions and background clutter. In recent computer vision research, drastic advances in visual recognition have been achieved by deep convolutional neural networks (CNNs) \cite{LECUN}, which jointly learn the whole feature hierarchies starting from image pixels to the final class posterior with stacked non-linear processing layers. A deep representation is quite efficient since its intermediate templates are reused. However, a deep CNN is non-linear and has millions of parameters to be estimated.
It requires strong computing power for optimization and large training data to generalize well. The recent availability of the large-scale ImageNet \cite{IMAGENET} database and the rise of parallel computing have contributed to the breakthrough in visual recognition. Krizhevsky~\etal~\cite{ALEX} achieved an impressive result using a CNN in large-scale image classification. Instead of training a CNN for a specific task, intermediate activations extracted from a CNN pre-trained on independent large data have been successfully applied as a generic image representation. Combining the CNN activations with a classifier has shown impressive performance in a wide range of visual recognition tasks such as object classification \cite{OFFTHESHELF, DECAF, SIVIC, SPP, DEVIL}, object detection \cite{RCNN, SPP}, scene classification \cite{OFFTHESHELF, VLAD, MITPLACE}, fine-grained classification \cite{OFFTHESHELF, FINERCNN}, attribute recognition \cite{PANDA}, image retrieval \cite{NEURALCODE}, and domain transfer \cite{DECAF}. For utilizing CNN activations as a generic image representation, a straightforward way is to extract the responses from the first or second fully connected layer of a pre-trained CNN by feeding an image and to represent the image with the responses~\cite{DECAF, NEURALCODE, SPP, RCNN}. However, this representation is vulnerable to geometric variations. There are techniques to address the problem. A common practice is exploiting multiple jittered images (random crops and flips) for data augmentation. Though data augmentation has been used to prevent over-fitting \cite{ALEX}, recent studies show that \textit{average pooling}, augmenting data and averaging the multiple activation vectors at test time, also helps to achieve better geometric invariance of CNNs while improving classification performance by +2.92\% in \cite{DEVIL} and +3.3\% in \cite{OFFTHESHELF} on PASCAL VOC 2007.
A different experiment for enhancing the geometric invariance of CNN activations was also presented. Gong~\etal~\cite{VLAD} proposed a method to exploit multi-scale CNN activations in order to achieve geometric invariance while improving recognition accuracy. They extracted dense local patches at three different scales and fed each local patch into a pre-trained CNN. The CNN activations are aggregated at finer scales via VLAD encoding, introduced in~\cite{ORGVLAD}, and then the encoded activations are concatenated into a single vector to obtain the final representation. In this paper, we introduce a \textit{multi-scale pyramid pooling} to improve the discriminative power of CNN activations while keeping them robust to geometric variations. A pipeline of the proposed method is illustrated in Figure \ref{FIG_PIPELINE}. Similar to \cite{VLAD}, we also utilize multi-scale CNN activations, but present a different pooling method that shows better performance in our experiments. Specifically, we suggest an efficient way to obtain an abundant amount of multi-scale local activations from a CNN, and aggregate them using the state-of-the-art Fisher kernel~\cite{IFK, FK} with a simple but important scale-wise normalization, the so-called \textit{multi-scale pyramid pooling}. Our proposal demonstrates substantial improvements on both scene and object classification tasks compared to the previous representations including a single activation, the average pooling~\cite{OFFTHESHELF, DEVIL}, and the VLAD of activations~\cite{VLAD}. Also, we demonstrate object confidence maps which are useful for object detection/localization though only category-level labels without specific object bounding boxes are used in training.
According to our empirical observations, replacing a VLAD kernel with a Fisher kernel does not have a significant impact by itself; however, it shows meaningful performance improvements when our pooling mechanism that takes an average pooling after scale-wise normalization is applied. This implies that the performance improvement of our representation does not come just from the superiority of the Fisher kernel but from the careful consideration of the scale-dependent properties of neural activations. \begin{figure*} \begin{center} \setlength{\tabcolsep}{1.7pt} \includegraphics[width=1\linewidth]{fig/fig1/fig1} \end{center} \vspace{-3mm} \caption{Obtaining multi-scale local activations densely from a pre-trained CNN. In this figure, the target layer is the first fully connected layer (FC6). Because FC6 can be equivalently implemented by a convolutional layer containing 4,096 filters of 6$\times$6$\times$256 size, we can obtain an activation map where spatial ordering of local descriptors is conserved. A single pre-trained CNN is shared for all scales.} \vspace{-3mm} \label{FIG_LOCALDESC} \end{figure*} \section{Multi-scale Pyramid Pooling} In this section, we first review the Fisher kernel framework and then introduce a \textit{multi-scale pyramid pooling} which adds a Fisher kernel based pooling layer on top of a pre-trained CNN. \subsection{Fisher Kernel Review} \label{FISHER_REVIEW} The Fisher kernel framework on a visual vocabulary was proposed by Perronnin~\etal in \cite{FK}. It extends the conventional Bag-of-Words model to a probabilistic generative model. It models the distribution of low-level descriptors using a Gaussian Mixture Model (GMM) and represents an image by considering the gradient with respect to the model parameters. Although the number of local descriptors varies across images, the resulting Fisher vector has a fixed length; therefore, it is possible to use discriminative classifiers such as a linear SVM.
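The gradient statistics described in this review can be sketched with NumPy. The formulas below follow the standard diagonal-covariance Fisher vector derivation; dimensions are shrunk for speed, and this is an illustrative sketch rather than the authors' implementation (which would additionally apply the improved-FK normalizations discussed in the text).

```python
import numpy as np

# Sketch of Fisher-vector encoding for a diagonal-covariance GMM:
# per-descriptor posteriors weight the gradients w.r.t. each
# Gaussian's mean and standard deviation, yielding a 2Kd vector.

def fisher_vector(X, w, mu, sigma):
    """X: (n,d) descriptors; w: (K,) weights; mu, sigma: (K,d)."""
    n, d = X.shape
    diff = X[:, None, :] - mu[None, :, :]                     # (n,K,d)
    logp = -0.5 * np.sum((diff / sigma) ** 2
                         + np.log(2 * np.pi * sigma ** 2), axis=2)
    logp = logp + np.log(w)
    g = np.exp(logp - logp.max(axis=1, keepdims=True))
    g = g / g.sum(axis=1, keepdims=True)                      # posteriors
    G_mu = (g[:, :, None] * diff / sigma).sum(axis=0) / (n * np.sqrt(w)[:, None])
    G_sig = (g[:, :, None] * ((diff / sigma) ** 2 - 1)).sum(axis=0) \
        / (n * np.sqrt(2 * w)[:, None])
    return np.concatenate([G_mu.ravel(), G_sig.ravel()])      # length 2Kd

K, d = 4, 8
rng = np.random.default_rng(0)
fv = fisher_vector(rng.normal(size=(100, d)), np.full(K, 1.0 / K),
                   rng.normal(size=(K, d)), np.ones((K, d)))
assert fv.shape == (2 * K * d,)
```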
Let $\mathbf{x}$ denote a $d$-dimensional local descriptor and $\mathbf{G_\lambda} = \{\mathbf{g}_k, k \! = \! 1...K\}$ denote a pre-trained GMM with $K$ Gaussians where $\mathbf{\lambda}=\{\omega_k,\mu_k,\sigma_k, k \! = \! 1...K\}$. For each visual word $\mathbf{g}_k$, two gradient vectors, $\mathcal{G}_{\mu_k}\in\Re^d$ and $\mathcal{G}_{\sigma_k}\in\Re^d$, are computed by aggregating the gradients of the local descriptors extracted from an image with respect to the mean and the standard deviation of the $k^\text{th}$ Gaussian. Then, the final image representation, \textit{Fisher vector}, is obtained by concatenating all the gradient vectors. Accordingly, the Fisher kernel framework represents an image with a $2Kd$-dimensional Fisher vector $\mathcal{G}\in\Re^{2Kd}$. Intuitively, a Fisher vector encodes the directions in which the model parameters should move to best fit the local descriptors of an image to the GMM. The Fisher kernel framework is further improved in \cite{IFK} by the additional two-stage normalizations: power-normalization with a factor of 0.5 followed by $\ell_2$-normalization. Refer to \cite{IFK} for the theoretical proofs and details. \subsection{Dense CNN Activations} To obtain multi-scale activations from a CNN without modification, the previous approach cropped local patches and fed them into a network after resizing them to the fixed CNN input size. However, when we extract multi-scale local activations densely, this approach is quite inefficient since many redundant operations are performed in convolutional layers for overlapped regions. To extract dense CNN activations without redundant operations, we simply replace the fully connected layers of an existing CNN with equivalent multiple convolution filters along spatial axes. When an image larger than the fixed size is fed, the modified network outputs multiple activation vectors where each vector contains the CNN activations of the corresponding local patch.
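The FC-to-convolution equivalence behind this trick can be sketched in NumPy. Dimensions are shrunk for speed (a 3$\times$3$\times$4 window with 5 filters stands in for 6$\times$6$\times$256 with 4,096 filters); the function and shapes are our own illustrative choices.

```python
import numpy as np

# A fully connected layer over a k x k x C window equals a convolution
# with F filters of size k x k x C; on a larger input, sliding the same
# weights yields one FC output per spatial location.

def fc_as_conv(feat, W):
    """feat: (H,W,C) feature map; W: (k,k,C,F) filters."""
    k, F = W.shape[0], W.shape[3]
    H, Wd, C = feat.shape
    out = np.empty((H - k + 1, Wd - k + 1, F))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = feat[i:i+k, j:j+k].reshape(-1) @ W.reshape(-1, F)
    return out

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 3, 4, 5))    # stands in for 6x6x256 -> 4096
small = rng.normal(size=(3, 3, 4))   # fixed-size input -> 1x1 output map
large = rng.normal(size=(5, 5, 4))   # larger input     -> 3x3 output map
assert fc_as_conv(small, W).shape == (1, 1, 5)
assert fc_as_conv(large, W).shape == (3, 3, 5)
# top-left output equals the FC applied to the top-left 3x3 window
top_left = large[:3, :3].reshape(-1) @ W.reshape(-1, 5)
assert np.allclose(fc_as_conv(large, W)[0, 0], top_left)
```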
The procedure is illustrated in \Fref{FIG_LOCALDESC}. With this method, thousands of dense local activations (4,410 per image) from multiple scale levels are extracted in a reasonable extraction time (0.46 seconds per image) as shown in \Tref{TAB_TIME}. \begin{table}[t] \setlength{\tabcolsep}{1.6pt} \small \begin{center} \setlength{\tabcolsep}{1mm} \begin{tabular}{|l|c|c|c|c|}\hline Image scales&1$\sim$4&1$\sim$5&1$\sim$6&1$\sim$7\\ Number of activations&270&754&1,910&4,410\\\hline\hline Naive extraction (sec)&1.702&4.941&11.41&27.64\\ Proposed extraction (sec)&\textbf{0.0769}&\textbf{0.1265}&\textbf{0.2420}&\textbf{0.4635}\\\hline \end{tabular} \end{center} \caption{Average time for extracting multi-scale dense activations per image. With the Caffe reference model \cite{CAFFE}, FC7 activations are extracted from 100 random images of PASCAL VOC 2007. All timings are based on a server with a CPU of 2.6GHz Intel Xeon and a GPU of GTX TITAN Black.} \label{TAB_TIME} \end{table} For representing an image, we first generate a scale pyramid for the input image where the minimum scale image has the fixed input size of the CNN and each subsequent scale has twice the resolution of the previous one. We feed all the scaled images into a pre-trained CNN and extract dense CNN activation vectors. Then, all the activation vectors are merged into a single vector by our multi-scale pyramid pooling. If we consider each activation vector as a local descriptor, it is straightforward to aggregate all the local activations into a Fisher vector as explained in \Sref{FISHER_REVIEW}. However, CNN activations have different scale properties compared to SIFT-like local descriptors, as will be explained in \Sref{ANALYSIS_SCALE}. To adapt the Fisher kernel to the characteristics of CNN activations, we add a \textit{multi-scale pyramid pooling layer} on top of the modified CNN as follows.
Given a scale pyramid $S$ containing $N$ scaled images and local activation vectors $\mathbf{x}_s$ extracted from each scale $s\in S$, we first apply PCA to reduce the dimension of activation vectors and obtain $\mathbf{x}'_s$. Then, we aggregate the local activation vectors $\mathbf{x}'_s$ of each scale $s$ into a Fisher vector $\mathcal{G}^{s}$ per scale. After Fisher encoding, we have $N$ Fisher vectors and they are merged into one global vector by average pooling after $\ell_2$-normalization as \begin{equation} \mathcal{G}^{S}=\frac{1}{N}\sum_{s\in S}\frac{\mathcal{G}^{s}}{\left \|\mathcal{G}^{s}\right \|_{2}} \quad \text{s.t.} \quad\mathcal{G}^{s}=\frac{1}{|\mathbf{x}'_s|}\sum_{x \in \mathbf{x}'_s} \nabla_\lambda \log\mathbf{G_\lambda}(x), \label{EQ_SNFK} \end{equation} where $|\cdot|$ denotes the cardinality of a set. We use average pooling rather than vector concatenation since it is a natural pooling scheme for the Fisher kernel. Following the Improved Fisher Kernel framework \cite{IFK}, we finally apply power normalization and $\ell_2$-normalization to the Fisher vector $\mathcal{G}^{S}$. The overall pipeline of MPP is illustrated in Figure \ref{FIG_PIPELINE}. \section{Analysis of Multi-scale CNN Activations} \label{ANALYSIS_SCALE} We compare scale characteristics between traditional local features and CNN activations. The comparison shows that it is not suitable to directly apply the Fisher kernel framework to multi-scale local CNN activations for representing an image. To investigate the best way to aggregate the CNN activations into a global representation, we perform empirical studies and conclude that applying scale-wise normalization of Fisher vectors is very important.
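The aggregation in \Eref{EQ_SNFK} can be sketched in NumPy as follows. The function names are ours, and the per-scale Fisher vectors are taken as given inputs; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

# MPP aggregation of Eq. (EQ_SNFK): each scale's Fisher vector is
# l2-normalized before averaging, so every scale contributes equally
# regardless of how many local activations it produced; the improved-FK
# power and l2 normalizations are then applied to the pooled vector.

def l2(v, eps=1e-12):
    return v / (np.linalg.norm(v) + eps)

def mpp(per_scale_fvs):
    G = np.mean([l2(g) for g in per_scale_fvs], axis=0)
    G = np.sign(G) * np.sqrt(np.abs(G))   # power normalization, factor 0.5
    return l2(G)

fvs = [np.array([3.0, 4.0]), np.array([0.0, 2.0])]
out = mpp(fvs)
assert np.isclose(np.linalg.norm(out), 1.0)   # final l2-normalized vector
```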
\begin{figure} \begin{center} \begin{tabular}{cc} \setlength{\tabcolsep}{1.7pt} \hspace{-3mm} \includegraphics[width=0.45\linewidth]{fig/fig3/fig3-1}& \hspace{-6mm} \includegraphics[width=0.45\linewidth]{fig/fig3/fig3-2}\\ \end{tabular} \end{center} \vspace{-6mm} \caption{Classification performance of {SIFT-Fisher} and {CNN-Fisher} according to image scale on PASCAL VOC 2007. The tick labels of the horizontal axis denote image scales and their average number of local descriptors.} \label{FIG_S_VS_MAP} \end{figure} A naive way to obtain a Fisher vector $\mathcal{G'}^{S}$ given multi-scale local activations $X=\{ x\in \mathbf{x}_s, s \in S\}$ is to aggregate them as \begin{equation} \mathcal{G'}^{S}=\frac{1}{|X|}\sum_{s \in S}\sum_{x\in \mathbf{x}_{s}}\nabla_\lambda \log\mathbf{G_\lambda}(x). \label{EQ_FK} \end{equation} Here, every multi-scale local activation vector is pooled into one Fisher vector with an equal weight of $1/|X|$. To better combine a Fisher kernel with mid-level neural activations, the property of CNN activations according to patch scale should be taken into consideration. In the traditional use of the Fisher kernel on visual classification tasks, hand-designed local descriptors such as SIFT \cite{SIFT} have often been densely computed in multi-scale. Such a local descriptor encodes low-level gradient information within a local region and captures detailed textures or shapes within a small region rather than the global structure within a larger region. In contrast, a mid-level neural activation extracted from a higher layer of CNNs (e.g. FC6 or FC7 of \cite{ALEX}) represents higher-level structural information which is closer to class posteriors. As shown in the CNN visualization proposed by Zeiler and Fergus in \cite{MATTHEW}, image regions strongly activated by a certain CNN filter of the fifth layer usually capture an entire category-level object.
To figure out the different scale properties of the Fisher vector of traditional SIFT ({SIFT-Fisher}) and that of neural activations from FC7 ({CNN-Fisher}), we conduct an empirical analysis with scale-wise classification scores on PASCAL VOC 2007~\cite{VOC2007}. For the analysis, we first rescale the dataset images into seven different scale levels, from the smallest scale of $227\times227$ resolution to the biggest scale of $1,816\times1,816$ resolution, and extract both dense SIFT descriptors and local activation vectors from the seventh layer (FC7) of our CNN. Then, we follow the standard framework to encode Fisher vectors and to train an independent linear SVM for each scale. In \Fref{FIG_S_VS_MAP}, we show the classification performances of {SIFT-Fisher} and {CNN-Fisher} according to scale. The figure demonstrates a clear contrast between {SIFT-Fisher} and {CNN-Fisher}. {CNN-Fisher} performs worst at the largest image scale since local activations come from small image regions in the original image, while {SIFT-Fisher} performs best at the same scale since SIFT properly captures low-level contents within such small regions. If we aggregate the CNN activations of all scales into one Fisher vector by \Eref{EQ_FK}, the poorly performing 2,500 activations will have a dominant influence, with the large weight of 2,500/4,410, on the image representation. \begin{figure} \begin{center} \begin{tabular}{ccc} \setlength{\tabcolsep}{1.7pt} \hspace{-10mm} \includegraphics[width=0.38\linewidth]{fig/fig4/fig4-1}& \hspace{-6mm} \includegraphics[width=0.38\linewidth]{fig/fig4/fig4-2}& \hspace{-6mm} \includegraphics[width=0.38\linewidth]{fig/fig4/fig4-3}\\ \end{tabular} \end{center} \vspace{-4mm} \caption{Classification performance of our \textit{multi-scale pyramid pooling} in \Eref{EQ_SNFK} and the naive Fisher pooling in \Eref{EQ_FK}.
The tick labels of the horizontal axis denote scale levels in a scale pyramid.} \label{FIG_SCOMBINE_VS_MAP} \end{figure} One possible strategy for aggregating multi-scale CNN activations is to choose activations of a set of relatively well-performing scales. However, the selection of good scales depends on the dataset, and the activations from large image scales can also contribute to the geometric invariance property if we balance the influence of each scale. We empirically examined various combinations of pooling, as will be shown in \Sref{sec:exp}, and found that scale-wise Fisher vector normalization followed by a simple average pooling is effective in balancing the influence. We perform an experiment to compare our pooling method in \Eref{EQ_SNFK} to the naive Fisher pooling in \Eref{EQ_FK}. In the experiment, we apply both pooling methods with five different numbers of scales and perform classification on PASCAL VOC 2007. Despite the simplicity of our multi-scale pyramid pooling, it demonstrates superior performance, as depicted in \Fref{FIG_SCOMBINE_VS_MAP}. The performance of the naive Fisher kernel pooling in \Eref{EQ_FK} deteriorates rapidly when finer scale levels are involved. This is because indistinctive neural activations from finer scale levels become dominant in forming a Fisher vector. Our representation, however, exhibits stable performance: the accuracy increases consistently and finally saturates. This verifies that our pooling method aggregates multi-scale CNN activations effectively. \section{Experiments} \label{sec:exp} \subsection{Datasets} To evaluate our proposal as a generic image representation, we conduct three different visual recognition tasks with the following datasets. \paragraph{MIT Indoor 67}\cite{SCENE67} is used for a scene classification task. The dataset contains 15,620 images with 67 indoor scene classes in total. It is a challenging dataset because many indoor classes are characterized by the objects they contain (e.g.
different types of stores) rather than their spatial properties. The performance is measured with top-1 accuracy. \paragraph{PASCAL VOC 2007}\cite{VOC2007} is used for an object classification task. It consists of 9,963 images of 20 object classes in total. The task is quite difficult since the scales of the objects fluctuate and multiple objects of different classes are often contained in the same image. The performance is measured with (11-point interpolated) mean average precision. \paragraph{Oxford 102 Flowers} \cite{FLOWERS} is used for a fine-grained object classification task, which distinguishes sub-classes of the same object class. This dataset consists of 8,189 images with 102 flower classes. Each class consists of various numbers of images, from 20 to 258. The performance is measured with top-1 accuracy. \subsection{Pre-trained CNNs} We use two CNNs pre-trained on the ILSVRC'12 dataset~\cite{IMAGENET} to extract multi-scale local activations. One is the Caffe reference model~\cite{CAFFE}, composed of five convolutional layers and three fully connected layers. This model achieved 19.6\% top-5 error when a single center-crop of each validation image is used for evaluation on the ILSVRC'12 dataset. Henceforth, we denote this model by ``{Alex}'' since it has nearly the same architecture as Krizhevsky~\etal's CNN~\cite{ALEX}. The other is Chatfield~\etal's CNN-S model~\cite{DEVIL} (``{CNNS}'', henceforth). This model, a simplified version of the OverFeat~\cite{OVERFEAT}, is also composed of five convolutional layers (three in \cite{OVERFEAT}) and three fully connected layers. It shows 15.5\% top-5 error on the ILSVRC'12 dataset with the same center-crop. Compared to Alex, it uses smaller 7$\times$7 filters but a denser stride of 2 in the first convolutional layer. Our experiments are conducted mostly with the {Alex} by default.
The {CNNS} is used only for the PASCAL VOC 2007 dataset to compare our method with \cite{DEVIL}, which demonstrates excellent performance with the {CNNS}. Both pre-trained models are available online \cite{MATCONVNET}. \subsection{Implementation Details} We use an image pyramid of seven scales by default, since seven scales can cover a sufficiently large range of scale variations and saturate performance on all datasets, as shown in \Fref{FIG_SCOMBINE_VS_MAP}. The overall procedure of our image representation is as follows. Given an image, we make an image pyramid containing seven scaled images. Each image in the pyramid has twice the resolution of the previous scale, starting from the standard size defined by each CNN (e.g. $227\times227$ for {Alex}). We then feed each scale image to the CNN and obtain 4,410 vectors of 4,096-dimensional dense CNN activations from the seventh layer. The dimensionality of each activation vector is reduced to 128 by PCA, where the projection is trained with 256,000 activation vectors sampled from training images. A visual vocabulary (a GMM of 256 Gaussian distributions) is also trained with the same samples. Consequently, one 65,536-dimensional Fisher vector is computed by \Eref{EQ_SNFK}, and power- and $\ell_2$-normalization follow. One-versus-rest linear SVMs with a quadratic regularizer and a hinge loss are finally trained. Our system is mostly implemented using open source libraries, including VLFeat~\cite{VLFEAT} for the Fisher kernel framework and MatConvNet~\cite{MATCONVNET} for CNNs. \subsection{Results and Analysis} \label{EXPANALYSIS} We perform comprehensive experiments to compare various methods on the three recognition tasks. We first show the performance of our method and baseline methods. Then, we compare our results with state-of-the-art methods for each dataset. For simplicity, we use a notation protocol ``A(B)'', where A denotes a pooling method and B denotes the descriptors to be pooled by A.
The notations are summarized in \Tref{TAB_NOTATION}. We compare our method with several baseline methods. The baseline methods include intermediate CNN activations from a pre-trained CNN with a standard input, average pooling over multiple jittered images, and modified versions of our method. The comparison results for each dataset are summarized in \Tref{TAB_SCENE67}(a), \ref{TAB_VOC2007}(a), \ref{TAB_FLOWERS}(a). As expected, the most basic representation, Alex-FC7, performs the worst on all datasets. The average pooling in AP10 and AP50 improves the performance by +1.39\%$\sim$+3\%; however, the improvement is bounded regardless of the amount of data augmentation. The other two baseline methods (NFK, i.e. MPP without scale-wise normalization, and CSF) exploit multi-scale CNN activations, and they show better results than the single-scale representations. Compared to AP10, the performance gains from multi-scale activations exceed +10\%, +1\%, and +5\% for each dataset. This shows that image representations based on CNN activations can be enriched by utilizing multi-scale local activations. Even though the baseline methods exploiting multi-scale CNN activations show substantial improvements over the single-scale baselines, we can also verify that properly handling multi-scale activations is important for further improvement. Compared to the naive Fisher kernel pooling (NFK) in \Eref{EQ_FK}, our MPP achieves a further significant performance gain of +4.18\%, +4.58\%, and +2.84\% for each dataset. Instead of pooling multi-scale activations as our MPP does, concatenating encoded Fisher vectors can be another option, as done in Gong~\etal's method~\cite{VLAD}. The concatenation (CSF) also improves the performance; however, the CSF without an additional dimension reduction raises the dimensionality proportionally to the number of scales, and the MPP still outperforms the CSF on all datasets.
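The dimensionality trade-off between the CSF and MPP aggregation schemes can be made concrete with a small sketch (toy dimensions; the per-scale Fisher vectors below are random placeholders, not real encodings):

```python
import numpy as np

def l2n(v, eps=1e-12):
    # l2-normalize a vector
    return v / (np.linalg.norm(v) + eps)

# seven per-scale, already-encoded Fisher vectors (toy 64-D instead of 65,536-D)
rng = np.random.default_rng(1)
fishers = [rng.normal(size=64) for _ in range(7)]

# CSF: concatenate scale-wise normalized vectors -> dimension grows with scales
csf = np.concatenate([l2n(g) for g in fishers])   # 7 * 64 = 448-D

# MPP: average scale-wise normalized vectors -> dimension stays fixed
mpp = np.mean([l2n(g) for g in fishers], axis=0)  # 64-D

print(csf.shape, mpp.shape)
```

With the actual settings reported above (7 scales, 65,536-dimensional Fisher vectors), concatenation would yield a 458,752-dimensional vector, while MPP keeps the representation at 65,536 dimensions.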
The comprehensive tests with various pooling strategies so far show that the proposed image representation can be used as a primary image representation in a wide range of visual recognition tasks. We also apply the spatial pyramid (SP) kernel~\cite{SPM} to our representation. We construct a spatial pyramid of four sub-regions (whole, top, middle, bottom), which increases the dimensionality of our representation four times. The results are mixed, but the differences are marginal on all datasets. This result is not surprising because the rich activations from smaller image scales already cover the global layout, which makes the SP kernel redundant. In \Tref{TAB_SCENE67}(b), we compare our result with various state-of-the-art methods on Indoor 67. Similar to ours, Gong~\etal~\cite{VLAD} proposed a pooling method for multi-scale CNN activations. They performed VLAD pooling at each scale and concatenated the results. Compared to \cite{VLAD}, our representation largely outperforms their method, with a gain of +7.07\%. The performance gap possibly comes from 1) the larger number of scales, 2) the superiority of the Fisher kernel, and 3) the details of the pooling strategy. While they use only three scales, we extract seven-scale activations in a quite efficient way (\Fref{FIG_LOCALDESC}). \textit{Though adding local activations from very fine scales such as 6 or 7 in a naive way may harm the performance, they actually contribute to a better invariance property under the proposed MPP}. In addition, as shown in our ``CSF'' experiment, the MPP is more suitable for aggregating multi-scale activations than the concatenation. This implies that our better performance does not just come from the superior Fisher kernel, but from the better handling of multi-scale neural activations. The record holder on the Indoor 67 dataset has been Zuo~\etal~\cite{DSFL}, who combined the Alex-FC6 and their complementary features, the so-called DSFL.
The DSFL learns discriminative and shareable filters with a target dataset. When we stack an additional MPP at the Pool5 layer, we already achieve state-of-the-art performance (77.76\%) with a pre-trained Alex only. We also stack the DSFL feature\footnote{Pre-computed DSFL vectors for the MIT Indoor 67 dataset are provided by the authors.} over our representation, and the result shows a performance of 80.78\%. This shows that our representation is also improved by combining complementary features. The results on VOC 2007 are summarized in \Tref{TAB_VOC2007}(b). There are two methods (\cite{SIVIC} and \cite{OFFTHESHELF}) that use the same Alex network. Razavian~\etal~\cite{OFFTHESHELF} performed target data augmentation, and Oquab \etal \cite{SIVIC} used a multi-layer perceptron (MLP) instead of a linear SVM with ground truth bounding boxes. Our representation outperforms both methods using the pre-trained Alex without data augmentation or the use of bounding box annotations. The gains are +1.84\% and +2.34\%, respectively. There are recent methods outperforming our method. All of them adopt better CNNs for the source task (i.e. ImageNet classification) or the target task, such as the Spatial Pyramid Pooling (SPP) network \cite{SPP}, the Multi-label CNN \cite{NUS}, and the CNNS \cite{DEVIL}. Our basic MPP(Alex-FC7) demonstrates slightly lower precision (79.54\%) compared to them; however, we use the basic Alex CNN without fine-tuning on VOC 2007. When our representation is equipped with the superior CNNS \cite{DEVIL}, which is not fine-tuned on VOC 2007, it reaches nearly state-of-the-art performance (81.40\%), and our method is further improved to 82.13\% by stacking MPP(CNNS-FC8). The performance is still lower than that of \cite{DEVIL}, who conduct target data augmentation and fine-tuning.
We believe our method can be further improved by additional techniques such as fine-tuning, target data augmentation, or the use of ground truth bounding boxes; however, we leave these issues as future work because our major focus is a generic image representation with a pre-trained CNN. \Tref{TAB_FLOWERS}(b) shows the classification performances on 102 Flowers. Our method (91.28\%) outperforms the previous state-of-the-art method~\cite{OFFTHESHELF} (86.80\%). Without the use of a powerful CNN representation, various previous methods show much lower performances. \Tref{TAB_VOC2007_PERCLASS} shows per-class performances on VOC 2007. Compared to state-of-the-art methods, our method performs best in 6 of the 20 classes. It is interesting that these 6 classes include ``bottle'', ``pottedplant'', and ``tvmonitor'', which are the representative small objects in the VOC 2007 dataset. The results clearly demonstrate the benefit of our MPP, which aggregates activations from very fine scales as well; such activations are prone to harm the performance if handled inappropriately. \begin{table*} \setlength{\tabcolsep}{1.6pt} \small \begin{center} \begin{tabular}{|l|l|} \hline Method&Description\\ \hline\hline CNN-FC7& A standard activation vector from FC7 of a CNN with a center-crop of a $256\times256$ size input image.\\\hline AP10(CNN-FC7)&Average pooling of 5 crops and their flips, given a $256\times256$ size input image.\\\hline AP50(CNN-FC7)&Average pooling of 25 crops and their flips, given a $256\times256$ size input image.\\\hline NFK(CNN-FC7)&Naive Fisher kernel pooling without scale-wise vector normalization, given a multi-scale image pyramid.\\\hline CSF(CNN-FC7)&Concatenation of scale-wise normalized Fisher vectors, given a multi-scale image pyramid.\\\hline MPP(CNN-FC7)&The proposed representation, given a multi-scale image pyramid.\\\hline \end{tabular} \end{center} \caption{Summary of our notation protocol.
The resulting image representations produced by the listed methods are finally $\ell_2$-normalized.} \label{TAB_NOTATION} \end{table*} \begin{table} \setlength{\tabcolsep}{1.6pt} \small \begin{center} \begin{tabular}{|l|l|l|c|}\hline Method &Description &CNN&Acc.\\\hline\hline Baseline &Alex-FC7 &Yes.&57.91\\ Baseline &AP10(Alex-FC7) &Yes.&60.90\\ Baseline &AP50(Alex-FC7) &Yes.&60.37\\ Baseline &NFK(Alex-FC7) &Yes.&71.49\\ Baseline &CSF(Alex-FC7) &Yes.&72.24\\ Ours &MPP(Alex-FC7) &Yes.&75.67\\ Ours &MPP(Alex-FC7)+SP &Yes.&\textbf{75.97}\\\hline\hline Ours &MPP(Alex-FC7,Pool5)&Yes.&77.56\\ Ours &MPP(Alex-FC7)+DSFL\cite{DSFL} &Yes.&\textbf{80.78}\\\hline \end{tabular} \\(a) baselines and our methods.\\ \begin{tabular}{|l|l|l|c|}\hline Method&Description&CNN&Acc.\\\hline\hline Singh \etal \cite{UNSUPERMID} '12&Part+GIST+DPM+SP&No.&49.40\\ Juneja \etal \cite{SHOUT} '13&IFK+Bag-of-Parts&No.&63.18\\ Doersch \etal \cite{MIDREP} '13&IFK+MidlevelRepresent.&No.&66.87\\ Zuo \etal \cite{DSFL} '14&DSFL&No.&52.24\\ Zuo \etal \cite{DSFL} '14&DSFL+Alex-FC6&Yes.&\textbf{76.23}\\ Zhou \etal \cite{MITPLACE} '14&Alex-FC7&Yes.&68.24\\ Zhou \etal \cite{MITPLACE} '14&Alex-FC7&Yes.&70.80\\ Razavian \etal~\cite{OFFTHESHELF} '14&AP(Alex)+PT+TargetAug.&Yes.&69.00\\ Gong \etal~\cite{VLAD} '14&VLAD Concat.(Alex-FC7)&Yes.&68.90\\\hline \end{tabular} \\(b) state-of-the-art methods on MIT Indoor 67.\\ \end{center} \caption{Classification performances on MIT Indoor 67. (SP: Spatial Pyramid, DPM: Deformable Part-based Model, PT: Power Transform, IFK: Improved Fisher Kernel, DSFL: Discriminative and Shareable Feature Learning.)} \vspace{20mm} \label{TAB_SCENE67} \end{table} \begin{table} \setlength{\tabcolsep}{1.0pt} \small \begin{center} \begin{tabular}{|l|l|l|l|l|c|}\hline Method &Description &FT&BB &CNN&mAP\\\hline\hline Baseline &Alex-FC7 &No. &No. &Yes.&72.36\\ Baseline &AP10(Alex-FC7) &No. &No. &Yes.&73.75\\ Baseline &AP50(Alex-FC7) &No. &No. &Yes.&73.60\\ Baseline &NFK(Alex-FC7) &No. &No.
&Yes.&74.96\\ Baseline &CSF(Alex-FC7) &No. &No. &Yes.&78.46\\ Ours &MPP(Alex-FC7) &No. &No. &Yes.&\textbf{79.54}\\ Ours &MPP(Alex-FC7)+SP &No. &No. &Yes.&79.29\\\hline\hline Ours &MPP(CNNS-FC7) &No. &No. &Yes.&81.40\\ Ours &MPP(CNNS-FC7,FC8)&No. &No. &Yes.&\textbf{82.13}\\\hline \end{tabular} \\(a) Baselines and our methods.\\ \begin{tabular}{|l|l|l|l|l|c|}\hline Method &Description &FT&BB &CNN&mAP\\\hline\hline \bigcell{l}{Perronnin \etal~\cite{IFK}\;'10}&IFK(SIFT+color)&No.&No.&No.&60.3\%\\ \bigcell{l}{He \etal~\cite{SPP}\;'14}&SPPNET-FC7&No.&No.&Yes.&80.10\%\\ \bigcell{l}{Wei \etal~\cite{NUS}\;'14}&Multi-label CNN&Yes.&No.&Yes.&81.50\%\\ Razavian \etal~\cite{OFFTHESHELF}\;'14&AP(Alex)+PT+TA&No.&No.&Yes.&77.20\%\\ Oquab \etal~\cite{SIVIC}\;'14&Alex-FC7+MLP&No.&Yes.&Yes.&77.70\%\\ \bigcell{l}{Chatfield~\etal \cite{DEVIL}\;'14}&\bigcell{l}{AP(CNNS-FC7)+TA}&No.&No.&Yes.&79.74\%\\ \bigcell{l}{Chatfield~\etal \cite{DEVIL}\;'14}&\bigcell{l}{AP(CNNS-FC7)+TA}&Yes.&No.&Yes.&\textbf{82.42}\%\\\hline \end{tabular} \\(b) state-of-the-art methods on \\PASCAL VOC 2007 classification.\\ \end{center} \vspace{-3mm} \caption{Classification performances on PASCAL VOC 2007 classification. ``FT'' represents fine-tuning of a pre-trained CNN on VOC2007 and ``BB'' denotes the use of ground truth object bounding boxes in training.
(SP: Spatial Pyramid, IFK: Improved Fisher Kernel, SPPNET: Spatial Pyramid Pooling Network, PT: Power Transform, TA: Target data Augmentation in training, MLP: Multilayer Perceptron.)} \vspace{25mm} \label{TAB_VOC2007} \end{table} \begin{table*} \setlength{\tabcolsep}{1.4pt} \footnotesize \begin{center} \begin{tabular}{|l|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}\hline Method &FT& plane & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & motor & person & plant & sheep & sofa & train & tv & mAP \\\hline\hline Alex-FC7&No.&85.0& 79.7& 82.8& 80.4& 39.7& 69.3& 82.9& 81.7& 58.7& 57.8& 68.5& 75.9& 83.0& 72.5& 90.6& 51.7& 71.1& 60.8& 85.0& 70.5& 72.4\\ AP10(Alex-FC7)&No.&85.7& 80.8& 83.3& 80.7& 40.4& 71.5& 83.8& 82.7& 60.7& 60.5& 70.6& 79.0& 84.5& 75.0& 91.3& 53.4& 70.1& 62.6& 86.5& 72.1& 73.7\\ MPP(Alex-FC7)&No.&90.2& 86.9& 86.6& 84.4& 54.0& 80.0& 87.9& 86.0& 63.4& 72.2& 75.7& 83.1& 87.8& 83.9& 93.0& \textbf{64.8}& 75.8& 69.6& 89.9& 75.9& 79.5\\ MPP(CNNS-FC7)&No. 
&90.2 & 88.6 & 89.0 & 84.7 & \textbf{58.2}&\textbf{82.8} & 88.1 & 89.0 & \textbf{64.9} & 77.0 & \textbf{78.4} & 86.9 & 89.2 & 86.7 & 92.8 & 61.2 & 81.3 & \textbf{70.0} & 89.8 & \textbf{79.3} & \textbf{81.4}\\\hline\hline Perronnin \etal\cite{IFK}\;'10&No.&75.7& 64.8& 52.8& 70.6& 30.0& 64.1& 77.5& 55.5& 55.6& 41.8& 56.3& 41.7& 76.3& 64.4& 82.7& 28.3& 39.7& 56.6& 79.7& 51.5& 58.3\\ Razavian \etal\cite{OFFTHESHELF}\;'14&No.&90.1& 84.4& 86.5& 84.1& 48.4& 73.4& 86.7& 85.4& 61.3& 67.6& 69.6& 84.0& 85.4& 80.0& 92.0& 56.9& 76.7& 67.3& 89.1& 74.9& 77.2\\ Oquab \etal\cite{SIVIC}\;'14&No.&88.5& 81.5& 87.9& 82.0& 47.5& 75.5& 90.1& 87.2& 61.6& 75.7& 67.3& 85.5& 83.5& 80.0& 95.6& 60.8& 76.8& 58.0& 90.4& 77.9& 77.7\\ Wei \etal\cite{NUS}\;'14&Yes.&95.1& 90.1&\textbf{92.8}& \textbf{89.9}& 51.5& 80.0& \textbf{91.7}& 91.6& 57.7& \textbf{77.8} & 70.9& 89.3& 89.3& 85.2& 93.0& 64.0& \textbf{85.7}& 62.7& 94.4& 78.3& 81.5\\ Chatfield\etal \cite{DEVIL}\;'14&Yes.&\textbf{95.3}&\textbf{90.4}& 92.5& 89.6& 54.4& 81.9& 91.5& \textbf{91.9}& 64.1& 76.3& 74.9& \textbf{89.7}& \textbf{92.2}& \textbf{86.9}& \textbf{95.2}& 60.7& 82.9& 68.0& \textbf{95.5}& 74.4& \textbf{82.4}\\\hline \end{tabular} \end{center} \caption{Per-class classification performances on PASCAL VOC 2007.
``FT'' represents fine-tuning of a pre-trained CNN on VOC2007.} \label{TAB_VOC2007_PERCLASS} \end{table*} \begin{table*}[!h] \setlength{\tabcolsep}{1.0pt} \small \begin{center} \begin{tabular}{|l|l|l|l|c|}\hline Method &Description &Seg.&CNN&Acc.\\\hline\hline Baseline &Alex-FC7 &No.&Yes.&81.43\\ Baseline &AP10(Alex-FC7) &No.&Yes.&83.40\\ Baseline &AP50(Alex-FC7) &No.&Yes.&83.56\\ Baseline &NFK(Alex-FC7) &No.&Yes.&88.44\\ Baseline &CSF(Alex-FC7) &No.&Yes.&89.35\\ Ours &MPP(Alex-FC7) &No.&Yes.&\textbf{91.28}\\ Ours &MPP(Alex-FC7)+SP &No.&Yes.&90.05\\\hline \end{tabular} \\(a) baselines and our methods.\\ \begin{tabular}{|l|l|l|l|c|}\hline Method &Description &Seg.&CNN&Acc.\\\hline\hline \bigcell{l}{Nilsback and Zisserman \cite{FLOWERS}\;'08}&Multiple kernel learning&Yes.&No.&77.70\\ \bigcell{l}{Angelova and Zhu \cite{SEGCLS}\;'13}&\bigcell{l}{Seg+DenseHoG+LLC+MaxPooling}&Yes.&No.&80.70\\ \bigcell{l}{Murray and Perronnin \cite{GMP}\;'14}&GMP of IFK(SIFT+color)&No.&No.&81.50\\ \bigcell{l}{Fernando \etal~\cite{FLH}\;'14}&Bag-of-FLH&Yes.&No.&72.70\\ Razavian \etal~\cite{OFFTHESHELF}\;'14&AP(Alex)+PT+TA&No.&Yes.&\textbf{86.8}\\\hline \end{tabular} \\(b) state-of-the-art methods on Oxford 102 Flowers.\\ \end{center} \vspace{-3mm} \caption{Classification performances on Oxford 102 Flowers. ``Seg.'' denotes the use of ground truth segmentations in training.
(SP: spatial pyramid, LLC: Locality-constrained Linear Coding, GMP: generalized max pooling, FLH: Frequent Local Histograms, PT: power transform, TA: target data augmentation in training.)} \label{TAB_FLOWERS} \end{table*} \begin{figure*}[!hb] \begin{center} \small \scalebox{0.9}{\begin{tabular}{cccc} \setlength{\tabcolsep}{1.7pt} \includegraphics[width=0.23\linewidth]{fig/fig5/bottle/ID000327.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/bottle/ID003488.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/diningtable/ID001805.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/diningtable/ID003649.jpg}\\ Bottle&Bottle&Dining Table&Dining Table\\ \includegraphics[width=0.23\linewidth]{fig/fig5/car/ID000137.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/car/ID000313.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/sofa/ID007808.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/sofa/ID001868.jpg}\\ Car&Car&Sofa&Sofa\\ \includegraphics[width=0.23\linewidth]{fig/fig5/tvmonitor/ID001905.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/tvmonitor/ID003158.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/train/ID001672.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/train/ID002728.jpg}\\ TV monitor&TV monitor&Train&Train\\ \includegraphics[width=0.23\linewidth]{fig/fig5/pottedplant/ID000397.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/pottedplant/ID002482.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/bus/ID001884.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/bus/ID000858.jpg}\\ Potted plant&Potted plant&Bus&Bus\\ \includegraphics[width=0.23\linewidth]{fig/fig5/bicycle/ID003275.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/bicycle/ID009564.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/aeroplane/ID000846.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/aeroplane/ID007806.jpg}\\ Bicycle&Bicycle&Aeroplane&Aeroplane\\ \includegraphics[width=0.23\linewidth]{fig/fig5/person/ID004844.jpg}& 
\includegraphics[width=0.23\linewidth]{fig/fig5/person/ID001968.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/sheep/ID009031.jpg}& \includegraphics[width=0.23\linewidth]{fig/fig5/sheep/ID009169.jpg}\\ Person&Person&Sheep&Sheep\\ \end{tabular}} \end{center} \vspace{-3mm} \caption{Examples of object confidence maps obtained by our image representation on the PASCAL VOC 2007. All examples are test images, not training images.} \label{FIG_OBJCONF} \end{figure*} \FloatBarrier \clearpage\clearpage\clearpage\clearpage \subsection{Weakly-Supervised Object Confidence Map} \begin{figure} \small \begin{center} \begin{tabular}{cc} \includegraphics[width=0.45\linewidth]{fig/fig6/ID000069_boat.jpg}& \includegraphics[width=0.45\linewidth]{fig/fig6/ID000069_person.jpg}\\ Person&Boat\\ \includegraphics[width=0.45\linewidth]{fig/fig6/ID000634_car.jpg}& \includegraphics[width=0.45\linewidth]{fig/fig6/ID000634_person.jpg}\\ Car&Person\\ \includegraphics[width=0.45\linewidth]{fig/fig6/ID006024_aeroplane.jpg}& \includegraphics[width=0.45\linewidth]{fig/fig6/ID006024_car.jpg}\\ Aeroplane&Car\\ \includegraphics[width=0.45\linewidth]{fig/fig6/ID006312_bicycle.jpg}& \includegraphics[width=0.45\linewidth]{fig/fig6/ID006312_person.jpg}\\ Bicycle&Person\\ \includegraphics[width=0.45\linewidth]{fig/fig6/ID007625_dog.jpg}& \includegraphics[width=0.45\linewidth]{fig/fig6/ID007625_person.jpg}\\ Dog&Person\\ \includegraphics[width=0.45\linewidth]{fig/fig6/ID001720_bottle.jpg}& \includegraphics[width=0.45\linewidth]{fig/fig6/ID001720_person.jpg}\\ Bottle&Person\\ \includegraphics[width=0.45\linewidth]{fig/fig6/ID009786_cat.jpg}& \includegraphics[width=0.45\linewidth]{fig/fig6/ID009786_dog.jpg}\\ Cat&Dog\\ \includegraphics[width=0.45\linewidth]{fig/fig6/ID000693_car.jpg}& \includegraphics[width=0.45\linewidth]{fig/fig6/ID000693_person.jpg}\\ Car&Person\\ \includegraphics[width=0.45\linewidth]{fig/fig6/ID004072_person.jpg}& \includegraphics[width=0.45\linewidth]{fig/fig6/ID004072_sheep.jpg}\\ 
Person&Sheep\\ \end{tabular} \end{center} \caption{Examples of multi-object confidence maps obtained by our image representation on the PASCAL VOC 2007. All examples are test images, not training images.} \label{FIG_OBJCONF_MULTI} \end{figure} One interesting feature of our method is that we can present object confidence maps for object classification tasks, even though we train the SVM classifiers \textit{without bounding box annotations, only with class-level labels}. To recover confidence maps, we trace how much weight is given to each local patch and accumulate all the weights of the local activations. Tracing the weight of local activations is possible because our final representation can be formed regardless of the number of scales and the number of local activation vectors. To trace the weight of each patch, we compute our final representation per patch using the corresponding single activation vector only and compute the score from the pre-trained SVM classifiers we used for object classification. Fig. \ref{FIG_OBJCONF} and Fig. \ref{FIG_OBJCONF_MULTI} show several examples of object confidence maps on the VOC 2007 test images. In the figures, we can verify that our image representation encodes the discriminative image patches well, despite large within-class variations as well as substantial geometric changes. As we discussed in \Sref{EXPANALYSIS}, images containing small-size objects also present accurate confidence maps. These maps may further be utilized as a considerable cue for object detection/localization and may also be useful for analyzing image representations. \section{Discussion} We have proposed the multi-scale pyramid pooling for better use of neural activations from a pre-trained CNN. There are several conclusions we can derive from our study. First, we should take the scale characteristics of neural activations into consideration for the successful combination of a Fisher kernel and a CNN.
The activations become uninformative as the patch size becomes smaller; however, they can contribute to better scale invariance when combined with a simple scale-wise normalization. Second, dense deep neural activations from multiple scale levels can be extracted with reasonable computation by replacing the fully connected layers with equivalent multiple convolution filters. This enables us to pool truly multi-scale activations and to achieve significant performance improvements on the visual recognition tasks. Third, reasonable object-level confidence maps can be obtained from our image representation even though only class-level labels are given for supervision, and these can be further applied to object detection or localization tasks. In the comprehensive experiments on three different recognition tasks, the results suggest that our proposal can be used as a primary image representation for better performance in various visual recognition tasks. {\small \bibliographystyle{ieee}
\section{Introduction} Let ${\mathbb D} = \{z : |z| < 1\}$ be the unit disk in the complex plane and $H({\mathbb D})$ be the space of all analytic functions on ${\mathbb D}$. For $a\in {\mathbb D}$, let $\sigma_a$ be the automorphism of ${\mathbb D}$ exchanging $0$ for $a$, namely $\sigma_a(z)=\frac{a-z}{1-\bar{a}z}, ~z\in {\mathbb D}.$ For $0<p<\infty$, the Bergman space $A^p$ consists of all $f\in H({\mathbb D})$ such that \[ \|f\|_{A^p}^p=\int_{{\mathbb D}}|f(z)|^pdA(z)<\infty, \] where $dA(z)=\frac{1}{\pi}dxdy$ denotes the normalized area Lebesgue measure. The Bloch space, denoted by $\mathcal{B}=\mathcal{B}({\mathbb D})$, is the space of all $f \in H({\mathbb D})$ such that \[ \|f\|_\beta = \sup_{z \in {\mathbb D}}(1-|z|^2) |f'(z)|<\infty. \] Under the norm $\|f\|_{\mathcal{B}}=|f(0)|+ \|f\|_{\beta}$, the Bloch space is a Banach space. From Theorem 1 of \cite{ax}, we see that \[ \|f\|_\beta \approx \sup_{a \in {\mathbb D}}\|f\circ\sigma_a-f(a)\|_{A^2} .\] See \cite{zhu2} for more information on the Bloch space. For $0<p<\infty$, let $H^p$ denote the Hardy space of functions $f\in H({\mathbb D})$ such that $$ \|f\|_{H^p}^p=\sup_{0<r<1}\frac{1}{2\pi}\int_0^{2\pi}|f(re^{i\theta})|^pd\theta<\infty. $$ We say that an $f \in H({\mathbb D})$ belongs to the space $BMOA$ if $$ \|f\|^2_{*} =\sup_{I\subseteq \partial {\mathbb D}} \frac{1}{ |I|} \int_{I}|f(\zeta)-f_I|^2\frac{d\zeta}{2\pi} <\infty, $$ where $f_I=\frac{1}{|I|}\int_If(\zeta)\frac{d\zeta}{2\pi}.$ It is well known that $BMOA$ is a Banach space under the norm $\|f\|_{BMOA}=|f(0)|+\|f\|_{*}$. From \cite{ga}, we have $$\|f\|_{*} \approx \sup_{w\in {\mathbb D}}\|f\circ\sigma_w-f(w)\|_{H^2}. $$ Throughout the paper, $S({\mathbb D})$ denotes the set of all analytic self-maps of ${\mathbb D}$.
Let $u \in H(\mathbb{D})$ and $\varphi\in S({\mathbb D})$. For $f \in H(\mathbb{D})$, the composition operator $C_\varphi$ and the multiplication operator $M_u$ are defined by $$ (C_\varphi f)(z) = f(\varphi(z)) ~~~\mbox{and}~~~~~(M_u f)(z)=u(z)f(z),$$ respectively. The weighted composition operator $uC_\varphi$, induced by $u$ and $\varphi$, is defined as follows. $$ (uC_\varphi f)(z) =u(z) f(\varphi(z)), \ \ f \in H(\mathbb{D}). $$ It is clear that the weighted composition operator $uC_\varphi$ is a generalization of both $C_\varphi$ and $M_u$. It is well known that $C_\varphi$ is bounded on $BMOA$ for any $\varphi\in S({\mathbb D})$ by Littlewood's subordination theorem. The compactness of the operator $C_\varphi :BMOA\rightarrow BMOA$ was studied in \cite{bcm, gll2013, smith, wulan, wzz}. Based on results in \cite{bcm} and \cite{smith}, Wulan in \cite{wulan} showed that $C_\varphi :BMOA\rightarrow BMOA$ is compact if and only if \begin{eqnarray} \lim_{n\rightarrow \infty}\| \varphi^n\|_*=0 \quad\mbox{and}\quad \lim_{|\varphi(a)|\rightarrow 1} \| \sigma_a \circ\varphi\|_{*}=0.\nonumber \end{eqnarray} In \cite{wzz}, Wulan, Zheng and Zhu further showed that $C_\varphi :BMOA\rightarrow BMOA$ is compact if and only if $ \lim_{n\rightarrow \infty} \|\varphi^n\|_*=0$. In \cite{Lj2007}, Laitila gave some function-theoretic characterizations of the boundedness and compactness of the operator $uC_\varphi :BMOA\rightarrow BMOA$. In \cite{col}, Colonna used the idea of \cite{wzz} and showed that $uC_\varphi :BMOA\rightarrow BMOA$ is compact if and only if \[ \lim_{n\rightarrow \infty}\|u\varphi^n\|_*=0\quad\mbox{and}\quad \lim_{|\varphi(a)|\rightarrow 1}\big(\log\frac{2}{1-|\varphi(a)|^2}\big)\|u\circ\sigma_a-u(a)\|_{H^2}=0.
\] Motivated by results in \cite{col}, Laitila and Lindstr\"{o}m gave estimates for the norm and essential norm of the weighted composition operator $uC_\varphi :BMOA\rightarrow BMOA$ in \cite{ll}; among other results, they showed that, under the assumption of the boundedness of $uC_\varphi$ on $BMOA$, \[\|uC_\varphi \|_{e,BMOA\rightarrow BMOA}\approx \limsup_{n\rightarrow \infty}\|u\varphi^n\|_*+\limsup_{|\varphi(a)|\rightarrow 1}\big(\log\frac{2}{1-|\varphi(a)|^2}\big)\|u\circ\sigma_a-u(a)\|_{H^2}.\] Recall that the essential norm of a bounded linear operator $T:X\rightarrow Y$ is its distance to the set of compact operators $K$ mapping $X$ into $Y$, that is, $$\|T\|_{e, X\rightarrow Y}=\inf\{\|T-K\|_{X\rightarrow Y}: K~\mbox{is compact}\},$$ where $X,Y$ are Banach spaces and $\|\cdot\|_{X\rightarrow Y}$ is the operator norm. By the Schwarz-Pick lemma, it is easy to see that $C_\varphi$ is bounded on the Bloch space $\mathcal{B}$ for any $\varphi\in S({\mathbb D})$. The compactness of $C_\varphi$ on $\mathcal{B}$ was studied, for example, in \cite{lou, mm, t, wzz, zhao}. In \cite{wzz}, Wulan, Zheng and Zhu proved that $C_\varphi :{\mathcal{B}}\rightarrow {\mathcal{B}}$ is compact if and only if $\lim_{n\rightarrow\infty}\| \varphi^n \|_{\mathcal{B}}=0.$ In \cite{zhao}, Zhao obtained the exact value of the essential norm of $C_\varphi :{\mathcal{B}} \rightarrow{\mathcal{B}} $ as follows. $$ \|C_\varphi\|_{e,{\mathcal{B}} \rightarrow {\mathcal{B}} } =\Big(\frac{e}{2 }\Big) \limsup_{n\rightarrow\infty} \|\varphi^n \|_{{\mathcal{B}}}. $$ In \cite{osz}, Ohno, Stroethoff and Zhao studied the boundedness and compactness of the operator $u C_\varphi:\mathcal{B} \to \mathcal{B} $. In \cite{col1}, Colonna provided a new characterization of the boundedness and compactness of the operator $u C_\varphi:\mathcal{B} \to \mathcal{B} $ by using $\|u\varphi^n\|_{\mathcal{B}}$. The essential norm of the operator $u C_\varphi:\mathcal{B} \to \mathcal{B} $ was studied in \cite{h1, mz1, mz2}.
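As a consistency check of Zhao's formula (this remark is editorial and not taken from the cited works), a direct computation with the test functions $z^n$ gives \[ \|z^n\|_{\mathcal{B}}=\sup_{0\leq r<1} n r^{n-1}(1-r^2)=n\Big(\frac{n-1}{n+1}\Big)^{\frac{n-1}{2}}\frac{2}{n+1}\longrightarrow \frac{2}{e} \quad (n\rightarrow\infty), \] the supremum being attained at $r^2=\frac{n-1}{n+1}$. Hence for $\varphi(z)=z$ the right-hand side of Zhao's formula equals $\frac{e}{2}\cdot\frac{2}{e}=1$, which agrees with $\|C_\varphi\|_{e}=\|I\|_{e}=1$ on the infinite-dimensional space $\mathcal{B}$.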
In \cite{mz1}, the authors proved that $$ \|u C_\varphi\|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}}} \approx\max\Big( \limsup_{|\varphi(z)|\rightarrow 1} \frac{|u(z)\varphi'(z)|(1-|z|^2) }{ 1-|\varphi(z)|^2 } , ~~ \limsup_{|\varphi(z)|\rightarrow 1} \Big(\log\frac{e}{1-|\varphi(z)|^2}\Big) |u'(z)| (1-|z|^2) \Big). $$ In \cite{h1}, the authors obtained a new estimate for the essential norm of $u C_\varphi:\mathcal{B} \to \mathcal{B} $, i.e., they showed that $$ \|u C_\varphi\|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}} } \approx \max\Big( \limsup_{j\rightarrow\infty} \|I_u (\varphi^j )\|_{{\mathcal{B}} }, ~~ \limsup_{j\rightarrow\infty} \log j\,\|J_u (\varphi^j )\|_{{\mathcal{B}} } \Big), $$ where $ I_u f(z)=\int_0^z f'(\zeta)u(\zeta)d\zeta$ and $J_u f(z)=\int_0^z f(\zeta)u'(\zeta)d\zeta.$ Motivated by the work of \cite{col1, col, ll, wzz}, the aim of this article is to give some new estimates for the norm and essential norm of the operator $u C_\varphi:\mathcal{B} \to \mathcal{B} $. As corollaries, we obtain some new characterizations of the boundedness and compactness of the operator $u C_\varphi:\mathcal{B} \to \mathcal{B} $. Throughout this paper, constants are denoted by $C$; they are positive and may differ from one occurrence to another. The notation $a \lesssim b$ means that there is a positive constant $C$ such that $a \leq C b$. Moreover, if both $a\lesssim b$ and $b\lesssim a$ hold, then one says that $a \approx b$. \section{Norm of $uC_\varphi $ on the Bloch space} In this section we give some estimates for the norm of the operator $u C_\varphi:\mathcal{B} \to \mathcal{B} $. For this purpose, we need some lemmas, which are stated as follows. The following lemma can be found in \cite{zhu2}. \begin{lem} {\it Let $f\in{\mathcal{B}}$.
Then \[ |f(z)|\lesssim \log\frac{2}{1-|z|^2}\|f\|_{\mathcal{B}} , ~~~~~z\in{\mathbb D}.\]} \end{lem} \begin{lem}{\it For $2\leq p< \infty$ and $f\in {\mathcal{B}}$, \[ \sup_{a \in {\mathbb D}}\|f\circ\sigma_a-f(a)\|_{A^2}\approx \sup_{a \in {\mathbb D}}\|f\circ\sigma_a-f(a)\|_{A^p}. \]} \end{lem} \begin{proof} Using H\"{o}lder's inequality, we get \begin{eqnarray} \sup_{a \in {\mathbb D}}\|f\circ\sigma_a-f(a)\|_{A^2}\leq \sup_{a \in {\mathbb D}}\|f\circ\sigma_a-f(a)\|_{A^p}, \end{eqnarray} for $2\leq p <\infty$. On the other hand, there exists a constant $C >0$ such that (see \cite[p.38]{Xi}) \begin{eqnarray} \sup_{a \in {\mathbb D}}\|f\circ\sigma_a-f(a)\|_{A^p} \leq C \|f\|_{{\mathcal{B}}} \lesssim \sup_{a \in {\mathbb D}}\|f\circ\sigma_a-f(a)\|_{A^2}, \end{eqnarray} which, combined with (2.1), implies the desired result. \end{proof} \begin{lem} \cite{Sw1996} {\it For $f\in A^2$, \[ \|f\|_{A^2}^2 \approx |f(0)|^2+\int_{{\mathbb D}}|f'(w)|^2(1-|w|^2)^2dA(w). \]} \end{lem} The classical Nevanlinna counting function $N_\varphi$ and the generalized Nevanlinna counting function $N_{\varphi,\gamma}$ for $\varphi$ are defined by (see \cite{Sj1987}) \[ N_\varphi(w)=\sum_{z\in \varphi^{-1}\{w\}}\log\frac{1}{|z|}\quad\mbox{and}\quad N_{\varphi,\gamma}(w)=\sum_{z\in \varphi^{-1}\{w\}}\big(\log\frac{1}{|z|}\big)^\gamma, \] respectively, where $\gamma>0$ and $w\in{\mathbb D}\backslash \{\varphi(0)\}$. \begin{lem} \cite{Sw1996} {\it Let $\varphi\in S({\mathbb D})$ and $f\in A^2$. Then \[ \|f\circ\varphi\|_{A^2}^2 \approx |f(\varphi(0))|^2+\int_{{\mathbb D}}|f'(w)|^2N_{\varphi,2}(w)dA(w). \]} \end{lem} \begin{lem}\cite{Sj1987} {\it Let $\varphi\in S({\mathbb D})$ and $\gamma>0$. If $\varphi(0)\neq 0$ and $0<r<|\varphi(0)|$, then \[ N_{\varphi,\gamma}(0)\leq \frac{1}{r^2}\int_{r{\mathbb D}}N_{\varphi,\gamma}dA. \]} \end{lem} \begin{lem} {\it Let $\varphi\in S({\mathbb D})$ such that $\varphi(0)=0$.
If $\sup_{0<|w|<1}|w|^2N_{ \varphi,2}(w)<\delta,$ then \begin{eqnarray} N_{ \varphi,2}(w) \leq \frac{4\delta}{(\log 2)^2}\big(\log \frac{1}{|w|} \big)^2 \end{eqnarray} when $\frac{1}{2}\leq |w|<1$.} \end{lem} \begin{proof} See the proof of Lemma 2.1 in \cite{smith}.\end{proof} \begin{lem} {\it For all $g\in A^2$ and $\phi\in S({\mathbb D})$ such that $g(0)=\phi(0)=0$, we have \begin{eqnarray} \|g\circ \phi\|_{A^2} \lesssim \|\phi\|_{A^2}\|g\|_{A^2}. \end{eqnarray} In particular, for all $f\in{\mathcal{B}}$, $a\in{\mathbb D}$ and $\varphi\in S({\mathbb D})$, \begin{eqnarray} \|f\circ\varphi\circ\sigma_a-f(\varphi(a))\|_{A^2} \lesssim \|\sigma_{\varphi(a)}\circ\varphi\circ\sigma_a\|_{A^2} \|f\circ\sigma_a-f(a)\|_{A^2}. \nonumber \end{eqnarray} } \end{lem} \begin{proof} Let $\phi\in S({\mathbb D})$ such that $\phi(0)=0$. Then, \begin{eqnarray} \|\sigma_z\circ \phi-\sigma_z(\phi(0))\|_{A^2}^2= \int_{{\mathbb D}}\frac{(1-|z|^2)^2|\phi(w)|^2}{|1-\bar{z}\phi(w)|^2}dA(w) \leq 4\|\phi\|_{A^2}^2. \end{eqnarray} From Lemmas 2.3 and 2.4 and (2.5), we obtain \begin{eqnarray} \|\sigma_z\circ \phi-\sigma_z(\phi(0))\|_{A^2}^2&=& \int_{{\mathbb D}}|(\sigma_z\circ\phi )'|^2(\log\frac{1}{|w|})^2dA(w)\nonumber\\ &=&\int_{{\mathbb D}} N_{\sigma_z\circ\phi,2} dA(w)\leq 4\|\phi\|_{A^2}^2. \end{eqnarray} For $z\in{\mathbb D}\setminus\{0\}$, from Lemma 4.2 in \cite{Sw1996} and Lemma 2.5, we have \begin{eqnarray} |z|^2N_{\phi,2}(z)= |z|^2N_{\sigma_z\circ\phi,2}(0) \leq \int_{|z|{\mathbb D}} N_{\sigma_z\circ\phi,2}(w) dA(w)\leq 4\|\phi\|_{A^2}^2. \end{eqnarray} So, by Lemma 2.6 we get \begin{eqnarray} N_{\phi,2}(z) \leq \frac{16}{(\log2)^2}\|\phi\|_{A^2}^2(\log\frac{1}{|z|})^2, \end{eqnarray} for $z\in{\mathbb D}\setminus \frac{1}{2}{\mathbb D} $.
Thus, \begin{eqnarray} \int_{{\mathbb D}\setminus \frac{1}{2}{\mathbb D} }|g'(z)|^2N_{\phi,2}(z) dA(z)\leq \frac{16}{(\log2)^2}\|\phi\|_{A^2}^2\|g\|_{A^2}^2. \end{eqnarray} In addition, for $z\in{\mathbb D}$ and $g\in A^2$, from Theorems 4.14 and 4.28 of \cite{zhu2}, we have $|g'(z)|\leq (1-|z|^2)^{-2}\|g\|_{A^2}$. Then, \begin{eqnarray} \int_{\frac{1}{2}{\mathbb D}}|g'(z)|^2N_{\phi,2}(z)dA(z)\leq 16\|g\|_{A^2}^2\int_{\frac{1}{2}{\mathbb D}}N_{\phi,2}(z)dA(z) \leq16\|\phi\|_{A^2}^2\|g\|_{A^2}^2. \end{eqnarray} Since $g(0)=0$, by Lemma 2.4 we have \begin{eqnarray} \|g\circ \phi\|_{A^2}^2 \approx \int_{{\mathbb D}}|g'(z)|^2N_{\phi,2}(z) dA(z). \end{eqnarray} Combining (2.9), (2.10) and (2.11), we obtain \begin{eqnarray} \|g\circ \phi\|_{A^2} \lesssim \|\phi\|_{A^2}\|g\|_{A^2} ,\nonumber \end{eqnarray} as desired. In particular, for all $f\in{\mathcal{B}}$, $a\in{\mathbb D}$ and $\varphi\in S({\mathbb D})$, if we set $$g= f\circ \sigma_{\varphi(a)}-f(\varphi(a)), ~~~\phi=\sigma_{\varphi(a)} \circ \varphi \circ \sigma_a,$$ we get \begin{eqnarray} \|f\circ\varphi\circ\sigma_a-f(\varphi(a))\|_{A^2} \lesssim \|\sigma_{\varphi(a)}\circ\varphi\circ\sigma_a\|_{A^2} \|f\circ\sigma_a-f(a)\|_{A^2}. \nonumber \end{eqnarray} The proof is complete. \end{proof} For the simplicity of the rest of this paper, we introduce the following abbreviations. Set \[ \alpha(u,\varphi,a)=|u(a)|\cdot\|\sigma_{\varphi(a)}\circ\varphi\circ\sigma_a\|_{A^2}, \] \[ \beta(u,\varphi,a)= \log\frac{2}{1-|\varphi(a)|^2} \|u\circ\sigma_a-u(a)\|_{A^2}, \] where $a\in {\mathbb D}$, $u\in H({\mathbb D})$ and $\varphi\in S({\mathbb D})$. \begin{thm} {\it Let $u\in H({\mathbb D})$ and $\varphi\in S({\mathbb D})$.
Then \begin{eqnarray} \|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}}\approx |u(0)|\log\frac{2}{1-|\varphi(0)|^2}+ \sup_{a\in {\mathbb D}}\alpha(u,\varphi,a)+\sup_{a\in {\mathbb D}}\beta(u,\varphi,a).\nonumber \end{eqnarray}} \end{thm} \begin{proof} First we give the upper estimate for $\|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}}$. For all $f\in {\mathcal{B}}$, using the triangle inequality, we get \begin{eqnarray} &&\|(uC_\varphi f)\circ\sigma_a-(uC_\varphi f)(a)\|_{A^2}\nonumber\\ &=&\|(u\circ\sigma_a-u(a))\cdot(f\circ\varphi\circ\sigma_a-f(\varphi(a)))\nonumber\\ &&+u(a)(f\circ\varphi\circ\sigma_a-f(\varphi(a))) +(u\circ\sigma_a-u(a))f(\varphi(a))\|_{A^2}\nonumber\\ &\leq&\|(u\circ\sigma_a-u(a))\cdot(f\circ\varphi\circ\sigma_a-f(\varphi(a)))\|_{A^2}\nonumber\\ &&+|u(a)|\|f\circ\varphi\circ\sigma_a-f(\varphi(a))\|_{A^2}+ |f(\varphi(a))|\|u\circ\sigma_a-u(a)\|_{A^2}. \end{eqnarray} By Lemmas 2.1 and 2.7, we have \begin{eqnarray} &&|u(a)|\|f\circ\varphi\circ\sigma_a-f(\varphi(a))\|_{A^2}+ |f(\varphi(a))|\|u\circ\sigma_a-u(a)\|_{A^2}\nonumber\\ &\lesssim& \alpha(u,\varphi,a)\|f\circ\sigma_a-f(a)\|_{A^2}+ \log\frac{2}{1-|\varphi(a)|^2}\|u\circ\sigma_a-u(a)\|_{A^2} \|f \|_{{\mathcal{B}}}\nonumber \\ &\lesssim& \big(\alpha(u,\varphi,a)+\beta(u,\varphi,a) \big) \|f \|_{{\mathcal{B}}} .
\end{eqnarray} From Lemmas 2.1 and 2.2, we get \begin{eqnarray} &&\sup_{a\in {\mathbb D}}\|(u\circ\sigma_a-u(a))\cdot(f\circ\varphi\circ\sigma_a-f(\varphi(a)))\|_{A^2}\nonumber\\ &\lesssim & \sup_{a\in {\mathbb D}}\log2\,\|u\circ\sigma_a-u(a)\|_{A^2}\|f\circ\varphi\circ\sigma_a-f(\varphi(a))\|_{A^2}\nonumber\\ &\lesssim & \sup_{a\in {\mathbb D}}\log\frac{2}{1-|\varphi(a)|^2}\|u\circ\sigma_a-u(a)\|_{A^2}\|f\circ\varphi\circ\sigma_a-f(\varphi(a))\|_{A^2}\nonumber\\ &\lesssim& \sup_{a\in {\mathbb D}}\beta(u,\varphi,a)\|f\circ \varphi\|_{{\mathcal{B}}} \lesssim \sup_{a\in {\mathbb D}}\beta(u,\varphi,a)\|f\|_{{\mathcal{B}}}. \end{eqnarray} Then, by (2.12), (2.13) and (2.14), we have \begin{eqnarray} \sup_{a\in {\mathbb D}}\|(uC_\varphi f)\circ\sigma_a-(uC_\varphi f)(a)\|_{A^2}\lesssim \big(\sup_{a\in {\mathbb D}}\alpha(u,\varphi,a) +\sup_{a\in {\mathbb D}}\beta(u,\varphi,a)\big)\|f\|_{{\mathcal{B}}}.\nonumber \end{eqnarray} In addition, since by Lemma 2.1 we have $|(uC_\varphi f)(0)|\lesssim |u(0)|\log\frac{2}{1-|\varphi(0)|^2}\|f\|_{{\mathcal{B}}}, $ we get \begin{eqnarray} \|uC_\varphi f\|_{\mathcal{B}} &\approx & |(uC_\varphi f)(0)|+ \sup_{a\in {\mathbb D}}\|(uC_\varphi f)\circ\sigma_a-(uC_\varphi f)(a)\|_{A^2} \nonumber\\ &\lesssim & |u(0)|\log\frac{2}{1-|\varphi(0)|^2}\|f\|_{{\mathcal{B}}}+ \sup_{a\in {\mathbb D}}\alpha(u,\varphi,a)\|f\|_{{\mathcal{B}}}+\sup_{a\in {\mathbb D}}\beta(u,\varphi,a)\|f\|_{{\mathcal{B}}}\nonumber, \end{eqnarray} which implies \begin{eqnarray} \|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}}\lesssim |u(0)|\log\frac{2}{1-|\varphi(0)|^2}+ \sup_{a\in {\mathbb D}}\alpha(u,\varphi,a)+\sup_{a\in {\mathbb D}}\beta(u,\varphi,a). \end{eqnarray} Next we find the lower estimate for $\|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}}$. Let $f=1$. It is easy to see that $\|u\|_{{\mathcal{B}}}\leq \|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}}$.
For any $a\in {\mathbb D}$, set \begin{eqnarray} f_a(z)=\sigma_{\varphi(a)}(z)-\varphi(a),\,\,\,\,\,\,\, z\in{\mathbb D}. \end{eqnarray} Then, $f_a(0)=0$, $f_a(\varphi(a))=-\varphi(a)$, $\|f_a\|_{{\mathcal{B}}}\leq 4$ and $\|f_a\|_{\infty}\leq 2$. Using the triangle inequality, we get \begin{eqnarray} \alpha(u,\varphi,a)&=&|u(a)|\cdot\|\sigma_{\varphi(a)}\circ\varphi\circ\sigma_a-\varphi(a)+\varphi(a) \|_{A^2}\nonumber\\ &=& \|u(a)\cdot(f_a\circ\varphi\circ\sigma_a-f_a(\varphi(a)))\|_{A^2}\nonumber \\ &\leq&\|(u\circ\sigma_a-u(a))\cdot f_a\circ\varphi\circ\sigma_a\|_{A^2}\nonumber\\ &&+\|(u\circ\sigma_a)\cdot f_a\circ\varphi\circ\sigma_a-u(a)f_a(\varphi(a))\|_{A^2}\\ &\leq&2\|u\circ\sigma_a-u(a)\|_{A^2}+\|(uC_\varphi f_a)\circ\sigma_a-(uC_\varphi f_a)(a)\|_{A^2}\nonumber\\ &\leq&2\|u \|_{\mathcal{B}}+4\|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}} \leq 6\|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}} \nonumber. \end{eqnarray} Set \begin{eqnarray} h_a(z)=\log\frac{2}{1-\overline{\varphi(a)}z},\,\,\,\,\,\, z\in{\mathbb D}. \end{eqnarray} Then, $h_a\in {\mathcal{B}}$, $h_a(\varphi(a))=\log\frac{2}{1-|\varphi(a)|^2}$ and $\sup_{a\in {\mathbb D}} \|h_a \|_{{\mathcal{B}}} \leq 2+\log 2$.
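The bound on $\|h_a\|_{\mathcal{B}}$ can be verified directly (a routine computation): writing $b=\varphi(a)$, we have $h_a'(z)=\frac{\bar{b}}{1-\bar{b}z}$, hence \[ (1-|z|^2)|h_a'(z)|\leq \frac{(1-|z|^2)|b|}{1-|b||z|}\leq \frac{(1-|z|)(1+|z|)}{1-|z|}=1+|z|\leq 2, \] and $|h_a(0)|=\log 2$, so that $\|h_a\|_{\mathcal{B}}\leq 2+\log 2$.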
Using the triangle inequality and Lemma 2.7, we obtain \begin{eqnarray} \beta(u,\varphi,a)&=& \|\log\frac{2}{1-|\varphi(a)|^2}\cdot (u\circ\sigma_a-u(a))\|_{A^2}\nonumber\\ &=& \|h_a(\varphi(a))(u\circ\sigma_a-u(a))\|_{A^2}\nonumber\\ &\leq&\|(h_a\circ\varphi\circ\sigma_a-h_a(\varphi(a)))\cdot (u\circ\sigma_a-u(a))\|_{A^2}\nonumber\\ &&+ \| (u\circ\sigma_a)\cdot h_a\circ\varphi\circ\sigma_a- u(a)h_a(\varphi(a))\|_{A^2} \\ &&+ \|u(a) ( h_a\circ\varphi\circ\sigma_a- h_a(\varphi(a)) ) \|_{A^2} \nonumber\\ & \lesssim &\|(h_a\circ\varphi\circ\sigma_a-h_a(\varphi(a)))\cdot (u\circ\sigma_a-u(a))\|_{A^2}\nonumber\\ &&+\|(uC_\varphi h_a)\circ\sigma_a-(uC_\varphi h_a)(a)\|_{A^2}+ \alpha(u,\varphi,a)\|h_a\circ\sigma_a-h_a(a)\|_{A^2}\nonumber\\ & \lesssim &\|(h_a\circ\varphi\circ\sigma_a-h_a(\varphi(a)))\cdot (u\circ\sigma_a-u(a))\|_{A^2}\nonumber \\ &&+(2+\log 2)\|uC_\varphi \|_{{\mathcal{B}} \rightarrow {\mathcal{B}}} + (2+\log 2)\alpha(u,\varphi,a)\nonumber . \end{eqnarray} By Lemmas 2.2 and 2.7, we have \begin{eqnarray} &&\|(h_a\circ\varphi\circ\sigma_a-h_a(\varphi(a)))\cdot (u\circ\sigma_a-u(a))\|_{A^2}\nonumber\\ & \lesssim & \|(h_a\circ\varphi\circ\sigma_a-h_a(\varphi(a)))\|_{A^2}\|u\circ\sigma_a-u(a)\|_{A^2} \nonumber\\ &\leq & \|h_a\circ\varphi \|_{{\mathcal{B}}} \|u\|_{{\mathcal{B}}} \lesssim \|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}}. \end{eqnarray} Combining (2.17), (2.19) and (2.20), we have \begin{eqnarray} \sup_{a\in {\mathbb D}}\alpha(u,\varphi,a)+\sup_{a\in {\mathbb D}}\beta(u,\varphi,a)\lesssim \|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}}\nonumber. \end{eqnarray} Moreover, \begin{eqnarray} |u(0)|\log\frac{2}{1-|\varphi(0)|^2}=|(uC_\varphi h_{0})(0)| \leq (2+\log 2) \|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}} \lesssim \|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}}\nonumber.
\end{eqnarray} Therefore, \begin{eqnarray} |u(0)|\log\frac{2}{1-|\varphi(0)|^2}+\sup_{a\in {\mathbb D}}\alpha(u,\varphi,a)+\sup_{a\in {\mathbb D}}\beta(u,\varphi,a) \lesssim \|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}}.\nonumber \end{eqnarray} This completes the proof of the theorem. \end{proof} As a corollary, we obtain the following new characterization of the boundedness of $uC_\varphi:{\mathcal{B}} \rightarrow {\mathcal{B}}.$ \begin{cor} {\it Let $u\in H({\mathbb D})$ and $\varphi\in S({\mathbb D})$. Then $uC_\varphi:{\mathcal{B}} \rightarrow {\mathcal{B}}$ is bounded if and only if \begin{eqnarray} \sup_{a\in {\mathbb D}}|u(a)|\cdot\|\sigma_{\varphi(a)}\circ\varphi\circ\sigma_a \|_{A^2} <\infty \nonumber\end{eqnarray} and \begin{eqnarray} \sup_{a\in {\mathbb D}} \log\frac{2}{1-|\varphi(a)|^2} \|u\circ\sigma_a-u(a)\|_{A^2} <\infty\nonumber. \end{eqnarray}} \end{cor} In particular, when $\varphi(z)=z$, we obtain the following estimate of the norm of the multiplication operator $M_u: {\mathcal{B}} \rightarrow {\mathcal{B}}$. \begin{cor} {\it Let $u\in H({\mathbb D})$. Then $$ \|M_u\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}} \approx |u(0)|\log2+\sup_{a\in {\mathbb D}} \log\frac{2}{1-|a|^2} \|u\circ\sigma_a-u(a)\|_{A^2} . $$} \end{cor} \begin{lem} {\it Suppose that $uC_\varphi:{\mathcal{B}} \rightarrow {\mathcal{B}}$ is bounded. Then \begin{eqnarray} \sup_{a\in {\mathbb D}}\|uC_\varphi (\sigma_{\varphi(a)}-\varphi(a)) \|_{{\mathcal{B}}} \approx \sup_{n\geq 0}\| u\varphi^n \|_{{\mathcal{B}}} \end{eqnarray} and \begin{eqnarray} \limsup_{|\varphi(a)|\rightarrow 1}\| uC_\varphi (\sigma_{\varphi(a)}-\varphi(a)) \|_{{\mathcal{B}}} \lesssim\limsup_{n\rightarrow \infty}\| u\varphi^n \|_{{\mathcal{B}}} .
\end{eqnarray}} \end{lem} \begin{proof} From Corollary 2.1 of \cite{col1}, we see that \begin{eqnarray} \sup_{a\in {\mathbb D}}\|uC_\varphi \sigma_{\varphi(a)} \|_{{\mathcal{B}}} \approx \sup_{n\geq 0}\| u\varphi^n \|_{{\mathcal{B}}}. \nonumber \end{eqnarray} Then (2.21) follows immediately. The Taylor expansion of $\sigma_{\varphi(a)}-\varphi(a)$ is \begin{eqnarray} \sigma_{\varphi(a)}(z)-\varphi(a)=-\sum_{n=0}^\infty \big(\overline{\varphi(a)}\big)^n(1-|\varphi(a)|^2)z^{n+1}\nonumber. \end{eqnarray} Then, by the boundedness of $uC_\varphi:{\mathcal{B}} \rightarrow {\mathcal{B}}$, we have \begin{eqnarray} \| uC_\varphi (\sigma_{\varphi(a)}-\varphi(a)) \|_{\mathcal{B}} \leq (1-|\varphi(a)|^2)\sum_{n=0}^\infty |\varphi(a)|^n\|u\varphi^{n+1} \|_{{\mathcal{B}}}.\nonumber \end{eqnarray} For each $N$, set \[M_1:=\sum_{n=0}^N|\varphi(a)|^n\|u\varphi^{n+1} \|_{{\mathcal{B}}}.\] Then we get \begin{eqnarray} &&\| uC_\varphi (\sigma_{\varphi(a)}-\varphi(a)) \|_{\mathcal{B}}\nonumber\\ &\leq& (1-|\varphi(a)|^2)\sum_{n=0}^N |\varphi(a)|^n\|u\varphi^{n+1} \|_{{\mathcal{B}}} +(1-|\varphi(a)|^2)\sum_{n=N+1}^\infty |\varphi(a)|^n\|u\varphi^{n+1} \|_{{\mathcal{B}}}\nonumber\\ &\leq& M_1(1-|\varphi(a)|^2) +\Big((1-|\varphi(a)|^2)\sum_{n=N+1}^\infty |\varphi(a)|^n\Big) \sup_{n\geq N+1}\|u\varphi^{n+1} \|_{{\mathcal{B}}}\nonumber\\ &\leq& M_1(1-|\varphi(a)|^2)+2\sup_{n\geq N+1}\|u\varphi^{n+1} \|_{{\mathcal{B}}}\nonumber. \end{eqnarray} Taking $\limsup_{|\varphi(a)|\rightarrow 1}$ in the last inequality and then letting $N\rightarrow\infty$, we get the desired result. \end{proof} \begin{prop} {\it Let $\varphi\in S({\mathbb D})$ and $u\in H({\mathbb D})$. The following statements hold. \begin{itemize} \item[(i)] For $a\in {\mathbb D}$, let $f_a=\sigma_{\varphi(a)}-\varphi(a)$.
Then \begin{eqnarray} \alpha(u,\varphi,a)\lesssim \frac{\beta(u,\varphi,a)}{\log\frac{2}{1-|\varphi(a)|^2}}+\|(uC_\varphi f_a)\circ\sigma_a-(uC_\varphi f_a)(a)\|_{A^2}\nonumber. \end{eqnarray} \item[(ii)] For $a\in {\mathbb D}$, let $g_a=\frac{h^2_a}{h_a(\varphi(a))}$, where $h_a(z)=\log\frac{2}{1-\overline{\varphi(a)}z}$. Then \begin{eqnarray} \beta(u,\varphi,a)&\lesssim &\alpha(u,\varphi,a)+\|(g_a\circ\varphi\circ\sigma_a-g_a(\varphi(a)))\cdot(u\circ\sigma_a-u(a))\|_{A^2}\nonumber\\ &&+ \|(uC_\varphi g_a)\circ\sigma_a-(uC_\varphi g_a)(a)\|_{A^2}\nonumber. \end{eqnarray} \item[(iii)] For all $f\in{\mathcal{B}}$ and $a\in{\mathbb D}$, \begin{eqnarray} \|(uC_\varphi f)\circ\sigma_a-(uC_\varphi f)(a)\|_{A^2}&\lesssim& \|(u\circ\sigma_a-u(a))\cdot(f\circ\varphi\circ\sigma_a-f(\varphi(a)))\|_{A^2}\nonumber\\ &&+\Big(\alpha(u,\varphi,a)+\beta(u,\varphi,a)\Big)\|f\|_{{\mathcal{B}}}\nonumber. \end{eqnarray} \item[(iv)] For all $f\in{\mathcal{B}}$ and $a\in{\mathbb D}$, \begin{eqnarray} &&\|(u\circ\sigma_a-u(a))\cdot(f\circ\varphi\circ\sigma_a-f(\varphi(a)))\|_{A^2}\nonumber\\ &\lesssim&\|f\|_{{\mathcal{B}}} \min \Big\{\sup_{w\in{\mathbb D}}\beta(u,\varphi,w),\frac{\|uC_\varphi\|_{{\mathcal{B}}\rightarrow {\mathcal{B}}}}{\sqrt{\log\frac{2}{1-|\varphi(a)|^2} }}\Big\}\nonumber. \end{eqnarray} \end{itemize}} \end{prop} \begin{proof} (i) It is easy to see that $\|f_a\circ\varphi\circ\sigma_a\|_\infty\leq 2$.
For any $a\in{\mathbb D}$, we get \begin{eqnarray} \alpha(u,\varphi,a)&=&|u(a)|\|f_a\circ\varphi\circ\sigma_a-f_a(\varphi(a))\|_{A^2}\nonumber\\ &=&\|(uC_\varphi f_a)\circ\sigma_a-(uC_\varphi f_a)(a)-(u\circ\sigma_a-u(a))\cdot f_a\circ\varphi\circ\sigma_a\|_{A^2}\nonumber\\ &\lesssim& \|u\circ\sigma_a-u(a)\|_{A^2}+\|(uC_\varphi f_a)\circ\sigma_a-(uC_\varphi f_a)(a)\|_{A^2}\nonumber\\ &\leq& \frac{\beta(u,\varphi,a)}{\log\frac{2}{1-|\varphi(a)|^2}}+\|(uC_\varphi f_a)\circ\sigma_a-(uC_\varphi f_a)(a)\|_{A^2}\nonumber. \end{eqnarray} (ii) It is obvious that $g_a(\varphi(a))=\log\frac{2}{1-|\varphi(a)|^2}$. Since $(g_a\circ\sigma_{\varphi(a)}-g_a(\varphi(a)))(0)=0$ and \[ g_a\circ\varphi\circ\sigma_a-g_a(\varphi(a))=g_a\circ\sigma_{\varphi(a)}\circ(\sigma_{\varphi(a)} \circ\varphi\circ\sigma_a)-g_a(\varphi(a)), \] by Lemma 2.7 and the fact that $\sup_{a\in{\mathbb D}}\|g_a \|_{{\mathcal{B}}}<\infty$, we obtain \[ |u(a)|\|g_a\circ\varphi\circ\sigma_a-g_a(\varphi(a))\|_{A^2}\lesssim \alpha(u,\varphi,a)\sup_{a\in{\mathbb D}}\|g_a \|_{{\mathcal{B}}} \lesssim \alpha(u,\varphi,a). \] By the triangle inequality we get \begin{eqnarray} \beta(u,\varphi,a)&=& \|g_a(\varphi(a))\cdot (u\circ\sigma_a-u(a)) \|_{A^2}\nonumber\\ &=&\|(g_a\circ\varphi\circ\sigma_a-g_a(\varphi(a)))\cdot(u\circ\sigma_a-u(a))\nonumber\\ &&+u(a)(g_a\circ\varphi\circ\sigma_a-g_a(\varphi(a)))-((u\circ\sigma_a)\cdot g_a\circ\varphi\circ\sigma_a-u(a)g_a(\varphi(a)))\|_{A^2}\nonumber\\ &\leq&\|(g_a\circ\varphi\circ\sigma_a-g_a(\varphi(a)))\cdot(u\circ\sigma_a-u(a))\|_{A^2}\nonumber\\ &&+|u(a)|\|g_a\circ\varphi\circ\sigma_a-g_a(\varphi(a))\|_{A^2}+ \|(uC_\varphi g_a)\circ\sigma_a-(uC_\varphi g_a)(a)\|_{A^2}\nonumber\\ &\lesssim &\alpha(u,\varphi,a)+\|(g_a\circ\varphi\circ\sigma_a-g_a(\varphi(a)))\cdot(u\circ\sigma_a-u(a))\|_{A^2}\nonumber\\ &&+ \|(uC_\varphi g_a)\circ\sigma_a-(uC_\varphi g_a)(a)\|_{A^2}\nonumber, \end{eqnarray} as desired. (iii) See the proof of Theorem 2.8.
(iv) Using the fact that $\log 2\leq \log\frac{2}{1-|\varphi(a)|^2}$ and Theorem 2.8, we have \begin{eqnarray} \sup_{a\in{\mathbb D}}\|u\circ\sigma_a-u(a)\|_{A^2}\lesssim \sup_{a\in{\mathbb D}}\beta(u,\varphi,a)\lesssim \|uC_\varphi\|_{{\mathcal{B}}\rightarrow {\mathcal{B}}} . \end{eqnarray} By Lemma 2.2 and H\"{o}lder's inequality, we obtain \begin{eqnarray} &&\|(u\circ\sigma_a-u(a))\cdot(f\circ\varphi\circ\sigma_a-f(\varphi(a)))\|_{A^2}^2\nonumber\\ &=&\|(u\circ\sigma_a-u(a))^2(f\circ\varphi\circ\sigma_a-f(\varphi(a)))^2\|_{A^1}\nonumber\\ &\leq&\|u\circ\sigma_a-u(a)\|_{A^2}\|u\circ\sigma_a-u(a)\|_{A^4}\|f\circ\varphi\circ\sigma_a-f(\varphi(a))\|_{A^8}^2\nonumber\\ &\leq&\|u\circ\sigma_a-u(a)\|_{A^2}\sup_{a\in{\mathbb D}}\|u\circ\sigma_a-u(a)\|_{A^4} \sup_{a\in{\mathbb D}}\|f\circ\varphi\circ\sigma_a-f(\varphi(a))\|_{A^8}^2\nonumber\\ &\lesssim& \beta(u,\varphi,a)\sup_{a\in{\mathbb D}}\|u\circ\sigma_a-u(a)\|_{A^2} \sup_{a\in{\mathbb D}}\|f\circ\varphi\circ\sigma_a-f(\varphi(a))\|_{A^2}^2/\log\frac{2}{1-|\varphi(a)|^2}\nonumber.
\end{eqnarray} Then, by the boundedness of $C_\varphi$ on ${\mathcal{B}}$ and (2.23), we obtain \begin{eqnarray} &&\beta(u,\varphi,a)\sup_{a\in{\mathbb D}}\|u\circ\sigma_a-u(a)\|_{A^2} \sup_{a\in{\mathbb D}}\|f\circ\varphi\circ\sigma_a-f(\varphi(a))\|_{A^2}^2/\log\frac{2}{1-|\varphi(a)|^2}\nonumber\\ &\lesssim&(\sup_{a\in{\mathbb D}}\beta(u,\varphi,a))^2\sup_{a\in{\mathbb D}}\|f\circ\varphi\circ\sigma_a-f(\varphi(a))\|_{A^2}^2/\log\frac{2}{1-|\varphi(a)|^2}\nonumber\\ &\lesssim&\sup_{a\in{\mathbb D}}\|f\circ\varphi\circ\sigma_a-f(\varphi(a))\|_{A^2}^2\min \bigg\{\sup_{a\in{\mathbb D}}\beta(u,\varphi,a), \frac{\|uC_\varphi\|_{{\mathcal{B}}\rightarrow {\mathcal{B}}}}{\sqrt{\log\frac{2}{1-|\varphi(a)|^2}}}\bigg\}^2\nonumber\\ &\lesssim& \|f\circ\varphi \|_{{\mathcal{B}}}^2\min \bigg\{\sup_{a\in{\mathbb D}}\beta(u,\varphi,a), \frac{\|uC_\varphi\|_{{\mathcal{B}}\rightarrow {\mathcal{B}}}}{\sqrt{\log\frac{2}{1-|\varphi(a)|^2}}}\bigg\}^2\nonumber\\ &\lesssim& \|f \|_{{\mathcal{B}}}^2 \min \bigg\{\sup_{a\in{\mathbb D}}\beta(u,\varphi,a), \frac{\|uC_\varphi\|_{{\mathcal{B}}\rightarrow {\mathcal{B}}}}{\sqrt{\log\frac{2}{1-|\varphi(a)|^2}}}\bigg\}^2\nonumber. \end{eqnarray} The proof is complete. \end{proof} \begin{thm} {\it Let $u\in H({\mathbb D})$ and $\varphi\in S({\mathbb D})$. Suppose that $uC_\varphi$ is bounded on ${\mathcal{B}}$. Then \begin{eqnarray} \|uC_\varphi \|_{{\mathcal{B}} \rightarrow {\mathcal{B}}}&\approx & |u(0)|\log\frac{2}{1-|\varphi(0)|^2} + \sup_{n\geq 0}\|u\varphi^n \|_{{\mathcal{B}}}+\sup_{a\in {\mathbb D}} \beta(u,\varphi,a)\nonumber.
\end{eqnarray}} \end{thm} \begin{proof} For any $f\in {\mathcal{B}}$, by (iii) and (iv) of Proposition 2.12, we get \begin{eqnarray} \|uC_\varphi f\|_{\beta}\lesssim \sup_{a\in{\mathbb D}}\big(\alpha(u,\varphi,a) +\beta(u,\varphi,a)\big)\|f\|_{{\mathcal{B}}}.\nonumber \end{eqnarray} By Lemma 2.11 and (i) of Proposition 2.12, we have \begin{eqnarray} \alpha(u,\varphi,a)&\lesssim& \beta(u,\varphi,a)/\log\frac{2}{1-|\varphi(a)|^2} + \sup_{a\in{\mathbb D}}\|uC_\varphi f_a\|_{{\mathcal{B}}}\nonumber \\ & \lesssim & \beta(u,\varphi,a)+ \sup_{n\geq 0} \|u\varphi^n\|_{{\mathcal{B}}}.\nonumber \end{eqnarray} Thus, \begin{eqnarray} \|uC_\varphi f\|_{\beta} \lesssim \big(\sup_{a\in{\mathbb D}} \beta(u,\varphi,a) +\sup_{n\geq 0}\|u\varphi^n\|_{{\mathcal{B}}}\big)\|f\|_{{\mathcal{B}}}\nonumber. \end{eqnarray} In addition, $|(uC_\varphi f)(0)|=|u(0)||f(\varphi(0))|\lesssim |u(0)|\log\frac{2}{1-|\varphi(0)|^2}\|f\|_{{\mathcal{B}}}.$ Therefore, \begin{eqnarray} \|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}}\lesssim |u(0)|\log\frac{2}{1-|\varphi(0)|^2}+\sup_{n\geq 0}\|u\varphi^n\|_{{\mathcal{B}}}+\sup_{a\in {\mathbb D}}\beta(u,\varphi,a). \nonumber \end{eqnarray} On the other hand, let $f(z)=z^n$. Then $f\in{\mathcal{B}}$ for all $n\geq 0$. Thus \[\sup_{n\geq 0}\|u\varphi^n\|_{{\mathcal{B}}}=\sup_{n\geq 0}\|(uC_\varphi)z^n\|_{{\mathcal{B}}}\leq\|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}} <\infty, \] which, together with Theorem 2.8, implies \begin{eqnarray} |u(0)|\log\frac{2}{1-|\varphi(0)|^2}+\sup_{n\geq 0}\|u\varphi^n\|_{{\mathcal{B}}}+\sup_{a\in {\mathbb D}}\beta(u,\varphi,a) \lesssim\|uC_\varphi\|_{{\mathcal{B}} \rightarrow {\mathcal{B}}}\nonumber. \end{eqnarray} The proof of the theorem is complete. \end{proof} \begin{cor} {\it Let $u\in H({\mathbb D})$ and $\varphi\in S({\mathbb D})$.
Then $uC_\varphi:{\mathcal{B}} \rightarrow {\mathcal{B}}$ is bounded if and only if \begin{eqnarray} \sup_{n\geq 0}\|u\varphi^n \|_{{\mathcal{B}}}<\infty ~~~~~\mbox{and}~~~~~~~~~~\sup_{a\in {\mathbb D}} \log\frac{2}{1-|\varphi(a)|^2} \|u\circ\sigma_a-u(a)\|_{A^2}<\infty \nonumber. \end{eqnarray}} \end{cor} \section{Essential norm of $uC_\varphi $ on the Bloch space} In this section we characterize the essential norm of the weighted composition operator $uC_\varphi:{\mathcal{B}} \rightarrow {\mathcal{B}}$ in various forms; in particular, we will use the Bloch norms of $u \varphi^n$. For $t\in(0,1)$, we define \begin{eqnarray} E(\varphi,a,t)=\{z\in{\mathbb D}:|(\sigma_{\varphi(a)}\circ\varphi\circ\sigma_a)(z)|>t\} \nonumber. \end{eqnarray} Similarly to the proof of Lemma 9 of \cite{ll}, we get the following result. Since the proof is similar, we omit the details. \begin{lem} {\it Let $u\in H({\mathbb D})$ and $\varphi\in S({\mathbb D})$. Then \begin{eqnarray} \widetilde{\gamma}:=\limsup_{r\rightarrow 1}\limsup_{t\rightarrow 1}\sup_{|\varphi(a)|\leq r} \Big(\int_{E(\varphi,a,t)}|u(\sigma_a(z))|^4dA(z)\Big)^{1/4} \lesssim \limsup_{n\rightarrow \infty}\|u\varphi^n\|_{{\mathcal{B}}}\nonumber. \end{eqnarray}} \end{lem} \begin{thm} {\it Let $u\in H({\mathbb D})$ and $\varphi\in S({\mathbb D})$ such that $uC_\varphi:{\mathcal{B}} \rightarrow {\mathcal{B}}$ is bounded.
Then \begin{eqnarray} \|uC_\varphi\|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}}}&\approx& \limsup_{n\rightarrow\infty}\|u\varphi^n\|_{{\mathcal{B}}} +\limsup_{|\varphi(a)|\rightarrow 1}\|uC_\varphi g_a\|_{\mathcal{B}}\nonumber\\ &\approx& \widetilde{\alpha}+ \widetilde{\beta}+ \widetilde{\gamma}\nonumber\\ &\approx& \widetilde{\alpha}+ \limsup_{|\varphi(a)|\rightarrow 1}\|uC_\varphi g_a\|_{\mathcal{B}} + \widetilde{\gamma} \nonumber\\ &\approx& \limsup_{n\rightarrow\infty}\|u\varphi^n\|_{{\mathcal{B}}} + \widetilde{\beta} \nonumber, \end{eqnarray} where $\widetilde{\alpha }=\limsup_{|\varphi(a)|\rightarrow 1} \alpha(u,\varphi,a)$, $\widetilde{\beta}=\limsup_{|\varphi(a)|\rightarrow 1} \beta(u,\varphi,a)$ and $$ g_a(z)= \Big(\log\frac{2}{1- \overline{\varphi(a)}z} \Big)^2 \Big(\log\frac{2}{1-|\varphi(a)|^2}\Big)^{-1}.$$} \end{thm} \begin{proof} Set $f_n(z)=z^n$. It is well known that $f_n\in{\mathcal{B}}$ and $f_n\rightarrow 0$ weakly in ${\mathcal{B}}$ as $n\rightarrow \infty$. Then \begin{eqnarray} \|uC_\varphi\|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}}} \gtrsim \limsup_{n\rightarrow\infty} \|uC_\varphi f_n\|_{{\mathcal{B}}}=\limsup_{n\rightarrow\infty}\|u\varphi^n\|_{{\mathcal{B}}}. \end{eqnarray} Choose $a_n\in{\mathbb D}$ such that $|\varphi(a_n)|\rightarrow 1$ as $n\rightarrow \infty$. It is easy to check that the functions $g_{a_n}$ are uniformly bounded in ${\mathcal{B}}$ and converge weakly to zero in ${\mathcal{B}}$ (see \cite{osz}). By these facts we obtain \begin{eqnarray} \|uC_\varphi\|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}}} &\gtrsim& \limsup_{n\rightarrow \infty} \|uC_\varphi g_{a_n}\|_{{\mathcal{B}}} = \limsup_{|\varphi(a)|\rightarrow 1} \|uC_\varphi g_a\|_{{\mathcal{B}}}.
\end{eqnarray} By (3.1) and (3.2), we obtain \begin{eqnarray} \|uC_\varphi\|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}}}\gtrsim \limsup_{n\rightarrow\infty}\|u\varphi^n\|_{{\mathcal{B}}} +\limsup_{|\varphi(a)|\rightarrow 1}\|uC_\varphi g_a\|_{\mathcal{B}}. \end{eqnarray} From (i) of Proposition 2.12, we see that \begin{eqnarray} \alpha(u,\varphi,a) \lesssim \frac{\beta(u,\varphi,a)}{\log\frac{2}{1-|\varphi(a)|^2}}+\|uC_\varphi f_a \|_{{\mathcal{B}}}, \nonumber \end{eqnarray} which together with Lemma 2.11 implies that \begin{eqnarray} \widetilde{\alpha }=\limsup_{|\varphi(a)|\rightarrow 1} \alpha(u,\varphi,a) \lesssim \limsup_{|\varphi(a)|\rightarrow 1}\|uC_\varphi f_a \|_{{\mathcal{B}}} \lesssim \limsup_{n\rightarrow\infty}\|u\varphi^n\|_{{\mathcal{B}}}. \end{eqnarray} From (ii) and (iv) of Proposition 2.12, we see that \begin{eqnarray} \beta(u,\varphi,a)&\lesssim &\alpha(u,\varphi,a)+\|(g_a\circ\varphi\circ\sigma_a-g_a(\varphi(a)))\cdot(u\circ\sigma_a-u(a))\|_{A^2}\nonumber\\ &&+ \|(uC_\varphi g_a)\circ\sigma_a-(uC_\varphi g_a)(a)\|_{A^2} \nonumber\\ & \lesssim & \alpha(u,\varphi,a)+ \|g_a\|_{{\mathcal{B}}}\frac{\|uC_\varphi\|_{{\mathcal{B}}\rightarrow {\mathcal{B}}}}{\sqrt{\log\frac{2}{1-|\varphi(a)|^2} }} + \|uC_\varphi g_a \|_{{\mathcal{B}}},\nonumber \end{eqnarray} which implies that \begin{eqnarray} \widetilde{\beta}=\limsup_{|\varphi(a)|\rightarrow 1} \beta(u,\varphi,a)& \lesssim & \widetilde{\alpha} + \limsup_{|\varphi(a)|\rightarrow 1} \|uC_\varphi g_a \|_{{\mathcal{B}}} .
\end{eqnarray} By Lemma 3.1, (3.3), (3.4) and (3.5), we have \begin{eqnarray} \|uC_\varphi\|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}}}&\gtrsim& \widetilde{\alpha} +\widetilde{\gamma}+\limsup_{|\varphi(a)|\rightarrow 1}\|uC_\varphi g_a\|_{\mathcal{B}}\nonumber\\ &\gtrsim& \widetilde{\alpha} +\widetilde{\gamma}+ \widetilde{\beta}, \nonumber \end{eqnarray} and \begin{eqnarray} \|uC_\varphi\|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}}} &\gtrsim & \limsup_{n\rightarrow\infty}\|u\varphi^n\|_{{\mathcal{B}}} + \widetilde{\alpha} +\limsup_{|\varphi(a)|\rightarrow 1}\|uC_\varphi g_a\|_{\mathcal{B}} \nonumber\\ &\gtrsim &\limsup_{n\rightarrow\infty}\|u\varphi^n\|_{{\mathcal{B}}}+ \widetilde{\beta} \nonumber . \end{eqnarray} Next we give the upper estimate for $\|uC_\varphi\|_{e, {\mathcal{B}} \rightarrow {\mathcal{B}}}$. For $n\geq 0$, we define the linear operator $K_n$ on ${\mathcal{B}}$ by $(K_nf)(z)=f( \frac{n}{n+1}z)$. It is easy to check that $K_n$ is a compact operator on ${\mathcal{B}}$. Thus \[ \|uC_\varphi\|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}}}\leq \limsup_{n \rightarrow \infty}\sup_{\|f\|_{{\mathcal{B}}}\leq 1}\|uC_\varphi(I-K_n)f\|_{{\mathcal{B}}}, \] where $I$ is the identity operator. Let $S_n=I-K_n$. Then, \begin{eqnarray} \|uC_\varphi \|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}}} &\leq& \liminf_{n\rightarrow \infty}\|uC_\varphi S_n\|_{{\mathcal{B}}}\nonumber\\ &=&\liminf_{n\rightarrow \infty}\sup_{\|f\|_{{\mathcal{B}}}\leq 1}\big(|u(0)(S_nf)(\varphi(0))| + \|uC_\varphi S_nf \|_{\beta}\big)\\ &=&\liminf_{n\rightarrow \infty}\sup_{\|f\|_{{\mathcal{B}}}\leq 1} \| uC_\varphi S_nf \|_{\beta}\nonumber. \end{eqnarray} Let $f\in{\mathcal{B}}$ be such that $\|f\|_{{\mathcal{B}}}\leq 1$. Fix $n\geq 0$, $r\in(0,1)$ and $t\in(\frac{1}{2},1)$.
Then \begin{eqnarray} \| uC_\varphi S_nf \|_{\beta}&\approx&\sup_{a\in {\mathbb D}}\|(uC_\varphi S_nf)\circ\sigma_a-(uC_\varphi S_nf)(a)\|_{A^2}\nonumber\\ &\leq& \sup_{|\varphi(a)|\leq r}\|(uC_\varphi S_nf)\circ\sigma_a-(uC_\varphi S_nf)(a)\|_{A^2}\nonumber\\ &&+\sup_{|\varphi(a)|> r}\|(uC_\varphi S_nf)\circ\sigma_a-(uC_\varphi S_nf)(a)\|_{A^2}. \end{eqnarray} By (iii) and (iv) of Proposition 2.12, we have \begin{eqnarray} &&\sup_{|\varphi(a)|> r}\|(uC_\varphi S_nf)\circ\sigma_a-(uC_\varphi S_nf)(a)\|_{A^2}\nonumber\\ &\lesssim& \| S_nf \|_{{\mathcal{B}}}\sup_{|\varphi(a)|> r}\bigg(\alpha(u,\varphi,a)+\beta(u,\varphi,a) +\frac{\|uC_\varphi \|_{{\mathcal{B}}\rightarrow{\mathcal{B}}}}{ \sqrt{\log\frac{2}{1-|\varphi(a)|^2}}}\bigg) . \end{eqnarray} In addition, \begin{eqnarray} &&\sup_{|\varphi(a)|\leq r}\|(uC_\varphi S_nf)\circ\sigma_a-(uC_\varphi S_nf)(a)\|_{A^2}\nonumber\\ &\leq&\sup_{|\varphi(a)|\leq r}\big(|(S_nf)(\varphi(a))|\|u\circ\sigma_a-u(a)\|_{A^2} \nonumber\\ && +\|u\circ\sigma_a\cdot (( C_\varphi S_nf)\circ\sigma_a-( C_\varphi S_nf)(a))\|_{A^2}\big)\nonumber\\ &\leq&\|u \|_{{\mathcal{B}}}\max_{|w|\leq r}|(S_nf)(w)|+I_1^{1/2}+I_2^{1/2} , \end{eqnarray} where \begin{eqnarray} I_1=\sup_{|\varphi(a)|\leq r}\int_{{\mathbb D}\backslash E(\varphi,a,t)}|(u\circ\sigma_a)(z)\cdot((S_nf)\circ\varphi\circ\sigma_a(z)-(S_nf)(\varphi(a)))|^2dA(z), \nonumber \end{eqnarray} \begin{eqnarray} I_2=\sup_{|\varphi(a)|\leq r}\int_{ E(\varphi,a,t)}|(u\circ\sigma_a)(z)\cdot((S_nf)\circ\varphi\circ\sigma_a(z) -(S_nf)(\varphi(a)))|^2dA(z).\nonumber \end{eqnarray} Let $\varphi_a=\sigma_{\varphi(a)}\circ\varphi\circ \sigma_a$.
Then by (3.19) in \cite[p.37]{Lj2007}, we have \begin{eqnarray} |(S_nf)\circ\sigma_{\varphi(a)}\circ\varphi_a(z)-(S_nf\circ\varphi)(a)| \lesssim \sup_{|w|\leq t}| \big((S_nf)\circ\sigma_{\varphi(a)}\big)(w)-(S_nf)(\varphi(a))|\nonumber \end{eqnarray} for $z\in {\mathbb D}\backslash E(\varphi,a,t)$. Since \begin{eqnarray} \|u\circ\sigma_a\cdot\varphi_a\|_{A^2}&\leq&\|u\circ\sigma_a-u(a)\|_{A^2}\|\varphi_a\|_\infty +|u(a)|\|\varphi_a\|_2\nonumber\\ &\lesssim&\sup_{a\in{\mathbb D}}\|u\circ\sigma_a-u(a)\|_{A^2}+\alpha(u,\varphi,a)\nonumber\\ &\lesssim& \|uC_\varphi\|_{{\mathcal{B}}\rightarrow {\mathcal{B}}}\nonumber, \end{eqnarray} we have \begin{eqnarray} I_1& \lesssim& \sup_{|\varphi(a)|\leq r}\sup_{|w|\leq t}|\big((S_nf)\circ\sigma_{\varphi(a)}\big)(w)-(S_nf)(\varphi(a))|^2 \|u\circ\sigma_a\cdot\varphi_a\|_{A^2}^2\nonumber\\ &\lesssim & \|uC_\varphi\|^2_{{\mathcal{B}}\rightarrow {\mathcal{B}}} \sup_{|z|\leq\frac{t+r}{1+tr}}|(S_nf)(z)|^2\nonumber. \end{eqnarray} By Lemma 2.2, we get \begin{eqnarray} \|(S_nf)\circ\varphi\circ\sigma_a-(S_nf)(\varphi(a))\|_{A^4}^2&\leq& \sup_{a\in{\mathbb D}}\|(S_nf)\circ\varphi\circ\sigma_a-(S_nf)(\varphi(a))\|_{A^4}^2\nonumber\\ &\lesssim& \sup_{a\in{\mathbb D}}\|f\circ\sigma_a-f(a)\|_{A^2}^2\leq 1,\nonumber \end{eqnarray} which implies that \begin{eqnarray} I_2&\leq&\sup_{|\varphi(a)|\leq r}\Big(\int_{E(\varphi,a,t)}|u(\sigma_a(z))|^4dA(z)\Big)^{1/2} \|(S_nf)\circ\varphi\circ\sigma_a-(S_nf)(\varphi(a))\|_{A^4}^2\nonumber\\ &\leq&\sup_{|\varphi(a)|\leq r}\Big(\int_{E(\varphi,a,t)}|u(\sigma_a(z))|^4dA(z)\Big)^{1/2}.
\nonumber \end{eqnarray} By combining the above estimates, for $r\in(0,1)$ and $t\in(\frac{1}{2},1)$, we obtain \begin{eqnarray} && \| uC_\varphi S_nf \|_{\beta} \nonumber\\ &\lesssim& \sup_{|\varphi(a)|> r}\Big(\alpha(u,\varphi,a)+\beta(u,\varphi,a) +\frac{\|uC_\varphi \|_{{\mathcal{B}}\rightarrow{\mathcal{B}}}}{\sqrt{ \log\frac{2}{1-|\varphi(a)|^2} }}\Big)\nonumber\\ &&+\sup_{|\varphi(a)|\leq r}\Big(\int_{E(\varphi,a,t)}|u(\sigma_a(z))|^4dA(z)\Big)^{1/4}+\sup_{|z|\leq\frac{t+r}{1+tr}}|(S_nf)(z)| \|uC_\varphi\|_{{\mathcal{B}}\rightarrow{\mathcal{B}}}\nonumber. \end{eqnarray} Taking the supremum over $\|f\|_{\mathcal{B}}\leq 1$ and letting $n\rightarrow \infty$, we obtain \begin{eqnarray} \|uC_\varphi\|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}}} &\lesssim& \sup_{|\varphi(a)|> r}\Big(\alpha(u,\varphi,a)+\beta(u,\varphi,a) +\frac{\|uC_\varphi \|_{{\mathcal{B}}\rightarrow{\mathcal{B}}}}{\sqrt{ \log\frac{2}{1-|\varphi(a)|^2} }}\Big)\nonumber\\ &&+\sup_{|\varphi(a)|\leq r}\Big(\int_{E(\varphi,a,t)}|u(\sigma_a(z))|^4dA(z)\Big)^{1/4}, \nonumber \end{eqnarray} which implies that \begin{eqnarray} \|uC_\varphi\|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}}} \lesssim \widetilde{\alpha}+ \widetilde{\beta}+ \widetilde{\gamma}. \nonumber \end{eqnarray} By (3.4), (3.5) and Lemma 3.1, we get \begin{eqnarray} \|uC_\varphi\|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}}} &\lesssim & \widetilde{\beta}+ \limsup_{n\rightarrow \infty}\| u\varphi^n \|_{{\mathcal{B}}}\nonumber\\ &\lesssim & \widetilde{\alpha}+ \limsup_{|\varphi(a)|\rightarrow 1}\|uC_\varphi g_a\|_{\mathcal{B}}+ \limsup_{n\rightarrow \infty}\| u\varphi^n \|_{{\mathcal{B}}}\nonumber\\ &\lesssim & \limsup_{|\varphi(a)|\rightarrow 1}\|uC_\varphi g_a\|_{\mathcal{B}}+ \limsup_{n\rightarrow \infty}\| u\varphi^n \|_{{\mathcal{B}}}.
\nonumber \end{eqnarray} By (ii), (iv) of Proposition 2.12, we have \begin{eqnarray} &&\|uC_\varphi\|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}}}\nonumber\\ &\lesssim& \sup_{|\varphi(a)|> r}\Big(\alpha(u,\varphi,a)+\beta(u,\varphi,a) +\frac{\|uC_\varphi \|_{{\mathcal{B}}\rightarrow{\mathcal{B}}}}{\sqrt{ \log\frac{2}{1-|\varphi(a)|^2} }}\Big)\nonumber\\ &&+\sup_{|\varphi(a)|\leq r}\Big(\int_{E(\varphi,a,t)}|u(\sigma_a(z))|^4dA(z)\Big)^{1/4} \nonumber\\ &\lesssim& \sup_{|\varphi(a)|> r}\Big(\alpha(u,\varphi,a) +\|(uC_\varphi g_a)\circ\sigma_a-(uC_\varphi g_a)(a)\|_{A^2}\nonumber \\ &&+ \|(u\circ\sigma_a-u(a) ) \cdot (g_a\circ\varphi\circ\sigma_a-g_a(\varphi(a)))\|_{A^2} + \frac{\|uC_\varphi \|_{{\mathcal{B}}\rightarrow{\mathcal{B}}}}{\sqrt{\log\frac{2}{1-|\varphi(a)|^2}}}\Big) \nonumber\\ &&+\Big(\int_{E(\varphi,a,t)}|u(\sigma_a(z))|^4dA(z)\Big)^{1/4} \nonumber\\ &\lesssim& \sup_{|\varphi(a)|> r}\Big(\alpha(u,\varphi,a) +\| uC_\varphi g_a \|_{{\mathcal{B}}} + \|g_a\|_{{\mathcal{B}}} \frac{\|uC_\varphi \|_{{\mathcal{B}}\rightarrow{\mathcal{B}}}}{\sqrt{\log\frac{2}{1-|\varphi(a)|^2}}} + \frac{\|uC_\varphi \|_{{\mathcal{B}}\rightarrow{\mathcal{B}}}}{\sqrt{\log\frac{2}{1-|\varphi(a)|^2}}}\Big) \nonumber\\ &&+\Big(\int_{E(\varphi,a,t)}|u(\sigma_a(z))|^4dA(z)\Big)^{1/4} , \nonumber \end{eqnarray} which implies that \begin{eqnarray} \|uC_\varphi\|_{e,{\mathcal{B}}\rightarrow {\mathcal{B}}} &\lesssim & \widetilde{\alpha}+ \limsup_{|\varphi(a)|\rightarrow 1}\|uC_\varphi g_a\|_{\mathcal{B}}+ \widetilde{\gamma}. \nonumber \end{eqnarray} This completes the proof of the theorem. \end{proof} From Theorem 3.2, we immediately get the following characterizations for the compactness of the operator $uC_\varphi:{\mathcal{B}} \rightarrow {\mathcal{B}}$. \begin{cor} {\it Let $u\in H({\mathbb D})$ and $\varphi\in S({\mathbb D})$ be such that $uC_\varphi$ is bounded on ${\mathcal{B}}$. Then the following statements are equivalent.
\begin{itemize} \item[(i)] The operator $uC_\varphi:{\mathcal{B}} \rightarrow {\mathcal{B}}$ is compact. \item[(ii)]$$\limsup_{n\rightarrow \infty}\|u\varphi^n\|_{\mathcal{B}}=0~~~~\mbox{and}~~~~ \limsup_{|\varphi(a)|\rightarrow 1}\|uC_\varphi g_a\|_{\mathcal{B}}=0.$$ \item[(iii)]$$\limsup_{n\rightarrow \infty}\|u\varphi^n\|_{\mathcal{B}}=0~~~~\mbox{and}~~~~ \limsup_{|\varphi(a)|\rightarrow 1}\beta(u,\varphi,a)=0.$$ \item[(iv)]\begin{eqnarray} \limsup_{|\varphi(a)|\rightarrow 1}\alpha(u,\varphi,a)=0, ~~~~~ \limsup_{|\varphi(a)|\rightarrow 1}\beta(u,\varphi,a)=0,\nonumber \end{eqnarray} and \begin{eqnarray} \limsup_{r\rightarrow 1}\limsup_{t\rightarrow 1}\sup_{|\varphi(a)|\leq r} \Big(\int_{E(\varphi,a,t)}|u(\sigma_a(z))|^4dA(z)\Big)^{1/4} =0.\nonumber \end{eqnarray} \item[(v)]\begin{eqnarray} \limsup_{|\varphi(a)|\rightarrow 1}\alpha(u,\varphi,a)=0, ~~~~~ \limsup_{|\varphi(a)|\rightarrow 1}\|uC_\varphi g_a\|_{\mathcal{B}} =0,\nonumber \end{eqnarray} and \begin{eqnarray} \limsup_{r\rightarrow 1}\limsup_{t\rightarrow 1}\sup_{|\varphi(a)|\leq r} \Big(\int_{E(\varphi,a,t)}|u(\sigma_a(z))|^4dA(z)\Big)^{1/4} =0.\nonumber \end{eqnarray} \end{itemize}} \end{cor}
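The role of the quantities $\|u\varphi^n\|_{\mathcal{B}}$ in condition (ii) can be illustrated numerically. The sketch below (an illustration, not part of the proof; it takes $u\equiv 1$ and the dilations $\varphi(z)=cz$, so that $uC_\varphi=C_\varphi$) evaluates the Bloch seminorm of $\varphi^n$ in closed form: for $c=1$, where $C_\varphi$ is the identity and hence not compact, the seminorms approach $2/e>0$, while for $c=1/2$ they decay to $0$.

```python
import math

def bloch_seminorm_power(c, n):
    # Bloch seminorm of f(z) = (c z)^n on the unit disc:
    #   sup_{0 <= r < 1} (1 - r^2) * n * c^n * r^(n-1),
    # maximized at r^2 = (n-1)/(n+1) (set the derivative in r to zero).
    if n == 1:
        return c  # (1 - r^2) * c is largest at r = 0
    r2 = (n - 1) / (n + 1)
    return n * (c ** n) * (1 - r2) * r2 ** ((n - 1) / 2)

# phi(z) = z: the seminorms approach 2/e, so limsup_n ||phi^n||_B > 0
print(bloch_seminorm_power(1.0, 2000))   # close to 2/e
# phi(z) = z/2: the factor (1/2)^n forces geometric decay to 0
print(bloch_seminorm_power(0.5, 50))
```

Since $\varphi^n(0)=0$, the seminorm here equals the Bloch norm of $\varphi^n$.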
\section{Introduction} The oscillating cantilever driven adiabatic reversals (OSCAR) technique in magnetic resonance force microscopy (MRFM) introduced in \cite{stipe} has been used to successfully detect a single electron spin below the surface of a solid \cite{rug1}. In the OSCAR MRFM technique the vibrations of the cantilever tip (CT) with an attached ferromagnetic particle in presence of a {\it rf} magnetic field cause the periodic reversals of the effective magnetic field acting on the single electron spin. If the conditions of adiabatic motion are satisfied the spin follows the effective magnetic field. The back action of the spin on the CT causes a small frequency shift of the CT vibrations, which can be measured with high precision. The quasiclassical theory of OSCAR MRFM has been developed in \cite{bkt}. This theory contains two important limitations. First, it assumes that the external magnetic field $\vec{B}_{ext}$ at the spin is much greater than the dipole field $\vec{B}_d$ produced by the ferromagnetic particle. In real experiments, in order to increase the frequency shift $\delta\omega_c$, one has to decrease the distance between the CT and the spin to values where the dipole field becomes sometimes greater than the external field\cite{rug1}. Second, it was assumed in \cite{bkt} that the spin is located in the plane of the cantilever vibrations. Thus, the quasiclassical theory should be extended in order to describe both an arbitrary relation between $\vec{B}_{ext}$ and $\vec{B}_d$ and an arbitrary location of the spin. This extension is presented in our paper. A single spin is a quantum object which must be described using quantum theory. The quantum theory of OSCAR MRFM has been developed in \cite{bbt} with the same limitations as the quasiclassical theory. 
It was found, as may be expected, that the frequency shift $\delta\omega_c$ in quantum theory may assume only two values $\pm |\delta\omega_c|$ corresponding to the two directions of the spin relative to the effective magnetic field. The value of $|\delta\omega_c|$ in quantum theory is the same as the maximum frequency shift calculated using quasiclassical theory (where it can take any value between $-|\delta\omega_c|$ and $|\delta\omega_c|$). Thus, to calculate the quantum frequency shift, it is reasonable to use quasiclassical instead of quantum theory. \section{Equations of motion} \begin{figure} \includegraphics[scale=0.4]{nf1} \caption{ MRFM setup. The equilibrium position of the spin and the cantilever with a spherical ferromagnetic particle. $\vec{m}$ is the magnetic moment of the ferromagnetic particle, $\vec{\mu}$ is the magnetic moment of the spin, $\vec{B}_{ext}$, $\vec{B}_d^{(0)}$ and $\vec{B}_0$ are respectively the external permanent magnetic field, the dipole field on the spin, and the net magnetic field. In general the vectors $\vec{B}_d^{(0)}$ and $\vec{B}_0$ do not lie in the $x-z$ plane. } \label{exp} \end{figure} We consider the MRFM setup shown in Fig.~\ref{exp}. The CT oscillates in the $x-z$ plane. The origin is placed at the equilibrium position of the center of the ferromagnetic particle. Note that here we ignore the static displacement of the CT caused by the magnetic force of the spin. The magnetic moment of the spin $\vec{\mu}$, shown in Fig.~\ref{exp}, points initially in the direction of the magnetic field $\vec{B}_0$, which corresponds to the equilibrium position of the CT. (See Eq.~(\ref{bfie})). We assume now that the {\it rf} magnetic field $2 \vec{B}_1$ is linearly polarized in the plane which is perpendicular to $\vec{B}_0$. (Later we will generalize this to an arbitrary direction of polarization).
The dipole magnetic field $\vec{B}_d$ is given by: \begin{equation} \vec{B}_d = \frac{\mu_0}{4\pi} \frac{3(\vec{m}\cdot\vec{n})\vec{n}-\vec{m}} {r_v^3}, \label{dipf} \end{equation} where $\vec{m}$ is the magnetic moment of the ferromagnetic particle pointing in the positive $z$-direction, $r_v$ is the (variable) distance between the moving CT and the stationary spin and $\vec{n}$ is the unit vector pointing from the CT to the spin. We define: \begin{equation} r_v = \sqrt{ (x-x_c)^2 +y^2+z^2}, \label{co1} \end{equation} \begin{equation} \vec{n} = \left( \frac{x-x_c}{r_v}, \ \frac{y}{r_v}, \ \frac{z}{r_v}\right), \label{coo} \end{equation} where $x,y,z$ are the spin coordinates, and $x_c$ is the CT-coordinate (i.e. the coordinate of the center of the ferromagnetic particle). At the equilibrium, the net magnetic field at the spin is, \begin{equation} \vec{B}_0 = \vec{B}_{ext} + \vec{B}_d^{(0)}, \label{bfie} \end{equation} \begin{equation} \vec{B}_d^{(0)} = \frac{ 3 m \mu_0 }{4\pi r^5} \left( zx, \ zy, \ z^2- \frac{r^2}{3} \right), \label{bfi1} \end{equation} \begin{equation} \vec{B}_{ext} = (0 , \ 0, \ B_{ext}), \label{bfi2} \end{equation} where $r=\sqrt{x^2+y^2+z^2}$. In the approximation which is linear in $x_c$, the magnetic field $\vec{B}_d$ changes by the value of $\vec{B}_{d}^{(1)}$: \begin{equation} \vec{B}_d^{(1)} = - (G_x, G_y, G_z) \ x_c, \label{bp1} \end{equation} \begin{equation} (G_x,G_y,G_z) = \frac{3 m \mu_0}{4\pi r^7} \left( z(r^2-5x^2), -5xyz, x(r^2- 5z^2) \right), \label{bp2} \end{equation} where $(G_x,G_y,G_z)$ describes the gradient of the magnetic field at the spin location at $x_c=0$: \begin{equation} (G_x, G_y, G_z) = \left( \frac{\partial B_{d}^x}{\partial x}, \frac{\partial B_{d}^y}{\partial x}, \frac{\partial B_{d}^z}{\partial x} \right). \label{gg} \end{equation} (Note that the magnetic field and its gradient depend on the CT coordinate $x_c$). 
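The field and gradient formulas are easy to cross-check numerically. In the sketch below (an illustration only; it works in units where $\mu_0 m/4\pi=1$), `dipole_field` implements Eq.~(\ref{dipf}) and `gradient_G` implements Eq.~(\ref{bp2}); a centered finite difference in $x_c$ at $x_c=0$ then reproduces $-(G_x,G_y,G_z)$, consistent with Eq.~(\ref{bp1}).

```python
import math

def dipole_field(xc, spin, C=1.0):
    # Eq. (dipf): B_d = C * (3 (m.n) n - m) / r_v^3, with m = (0, 0, 1)
    # and C standing for mu_0 m / (4 pi) (set to 1 for illustration).
    x, y, z = spin
    dx, dy, dz = x - xc, y, z                 # vector from the CT to the spin
    rv = math.sqrt(dx*dx + dy*dy + dz*dz)     # Eq. (co1)
    nx, ny, nz = dx/rv, dy/rv, dz/rv          # Eq. (coo)
    m_dot_n = nz
    pref = C / rv**3
    return (pref*3*m_dot_n*nx, pref*3*m_dot_n*ny, pref*(3*m_dot_n*nz - 1.0))

def gradient_G(spin, C=1.0):
    # Eq. (bp2): (G_x, G_y, G_z) = (3C/r^7) * (z(r^2-5x^2), -5xyz, x(r^2-5z^2))
    x, y, z = spin
    r2 = x*x + y*y + z*z
    c = 3.0 * C / r2**3.5
    return (c*z*(r2 - 5*x*x), -c*5*x*y*z, c*x*(r2 - 5*z*z))
```

A centered difference $-(\vec{B}_d(h)-\vec{B}_d(-h))/2h$ agrees with `gradient_G` to $O(h^2)$, confirming the sign convention of Eq.~(\ref{bp1}).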
Next we consider the equation of motion for the spin magnetic moment $\vec{\mu}$ in the system of coordinates rotating with the {\it rf} field at frequency $\omega$ about the magnetic field $\vec{B}_0$. (The $\tilde{z}$ axis of this new system points in the direction of $\vec{B}_0$). We have: \begin{equation} \begin{array}{lll} \vec{\dot{\mu}} &= - \gamma \vec{\mu} \times \vec{B}_{eff},\\ &\\ \vec{B}_{eff} &= \left( B_1,\ 0,\ B_0 -(\omega/\gamma) - x_c \sum_i G_i \cos\alpha_i\right),\\ &\\ \cos\alpha_i &= B_0^i/B_0.\\ \end{array} \label{beff} \end{equation} Here $\alpha_i$, ($i=x,y,z$) are the angles between the direction of the magnetic field $\vec{B}_0$ and the axes $x,y,z$ of the laboratory system of coordinates; and $\gamma$ is the gyromagnetic ratio of the electron spin. ($\gamma$ is the absolute value of the gyromagnetic ratio). We ignore the transverse components of the dipole field $\vec{B}_d$ because they represent the fast oscillating terms in the rotating system of coordinates. Also we consider only the rotating component of the {\it rf} magnetic field. The equation of motion for the CT can be written as \begin{equation} \ddot{x}_c + \omega^2_c x_c = F_x/m^*, \label{eom} \end{equation} where $\omega_c$ and $m^*$ are the frequency and the effective mass of the CT and $F_x$ is the magnetic force acting on the ferromagnetic particle on the CT. We consider the CT oscillations in the laboratory system of coordinates. Ignoring fast oscillating terms in the laboratory system, we obtain: \begin{equation} F_x = -\mu_{\tilde{z}} \sum_i G_i \cos\alpha_i. \label{fdx} \end{equation} Next, we will use the following units: for time $ 1/\omega_c$, for magnetic moment $\mu_B$, for magnetic field $\omega_c/\gamma$, for length the characteristic distance $L_0$ between the CT and the spin, for force $k_cL_0$, where $k_c= m^* \omega_c^2$ is the effective CT spring constant.
Using these units, we derive the following dimensionless equations of motion: \begin{equation} \displaystyle \begin{array}{lll} &\dot{\vec{\mu}} = - \vec{\mu} \times \vec{B}_{eff},\\ &\\ &\ddot{x}_c + x_c = F_x,\\ &\\ &\vec{B}_{eff} = \left( B_1, 0, \Delta-\beta {\cal G} x_c\right),\\ &\\ &F_x = -\alpha \beta {\cal G} \mu_{\tilde{z}}, \\ &\\ &\Delta = B_0 -\omega,\\ &\\ &\displaystyle {\cal G} = \frac{1}{r^7}[ z(r^2-5x^2)\cos\alpha_x -5xyz\cos\alpha_y \\ &\\ &+x(r^2- 5z^2) \cos\alpha_z].\\ \end{array} \label{adi} \end{equation} The parameters $\alpha$ and $\beta$ are given by: \begin{equation} \alpha =\frac{\mu_B \omega_c}{\gamma k_c L_0^2}, \qquad \beta = \frac{3\gamma\mu_0 m }{4\pi\omega_c L_0^3}. \label{ab} \end{equation} Note that all quantities in Eq.~(\ref{adi}) are dimensionless, i.e. $x$ means $x/L_0$, $\mu$ means $\mu/\mu_B$, $B_0$ means $\gamma B_0/\omega_c$ and so on. In terms of dimensional quantities the parameter $\beta$ is the ratio of the dipole frequency $\gamma B_d^{(0)}$ to the CT frequency $\omega_c$, and the product $\alpha\beta$ is the ratio of the static CT displacement $F_x/k_c$ to the CT-spin distance $L_0$. The derived equations are convenient for both numerical simulations and analytical estimates. \begin{figure} \includegraphics[scale=0.33]{nf2n} \caption{ The OSCAR MRFM frequency shift $\delta\omega_c(z)$ at the central resonant surface ($\Delta = 0$), for $x>0$. The symbols show the numerical data, the lines correspond to the estimate (\ref{dom}) for (a) $y=0$ (circles), (b) $y=x/2$ (squares) and (c) $y=x$ (diamonds). Solid squares and circles indicate frequency shifts at the spin locations indicated in Fig.~\ref{cro}. In all Figures the coordinates $x,y$ and $z$ are in units of $L_0$ and the frequencies are in units of $\omega_c$. } \label{osci} \end{figure} \section{The OSCAR MRFM frequency shift} In this section we present the analytical estimates and the numerical simulations for the OSCAR MRFM frequency shift. 
When the CT oscillates, the resonant condition $\omega = \gamma |\vec{B}_{ext} + \vec{B}_d|$ can be satisfied only if the spin is located inside the resonant slice which is defined by its boundaries: \begin{equation} |\vec{B}_{ext} + \vec{B}_d (x_c=\pm A)| = \omega/\gamma, \label{res} \end{equation} where $A$ is the amplitude of the CT vibrations. For an analytical estimate, we assume that the spin is located at the central surface of the resonant slice. In this case in Eq.~(\ref{adi}) $\Delta=0$. To obtain an analytical estimate for the OSCAR MRFM frequency shift we will assume an ideal adiabatic motion and put $\vec{\dot{\mu}}=0$ in Eq.~(\ref{adi}). Let the CT begin its motion (at $t=0$) from the right end position $x_c(0)=A$. Then the initial direction (i.e. at $t=0$) of the effective magnetic field $\vec{B}_{eff}$ relative to the magnetic field $\vec{B}_{ext}+\vec{B}_d$ and of the magnetic moment $\vec{\mu}$ depends on the sign of ${\cal G}$: $\vec{B}_{eff}$ and $\vec{\mu}$ have the same direction for ${\cal G} < 0$ and opposite directions for ${\cal G}>0$. Substituting the derived expression for $\mu_{\tilde{z}} \simeq - B^z_{eff}{\cal G}/|\vec{B_{eff}}||{\cal G}|$ into $F_x$ we obtain the following equation for $x_c$: \begin{equation} \ddot{x}_c +x_c \left\{ 1 + \frac{\alpha \beta^2 {\cal G}| {\cal G}| } {\sqrt{B_1^2 + (\beta {\cal G} x_c)^2 }} \right\}=0. \label{simp} \end{equation} We solve this equation as in \cite{bkt}, using the perturbation theory of Bogoliubov and Mitropolsky\cite{bogo}, and we find the dimensionless frequency shift (see Appendix): \begin{eqnarray} \nonumber &&\delta\omega_c \simeq \frac{2}{\pi} \frac{\alpha \beta^2 {\cal G} |{\cal G}|} {\sqrt{B_1^2 +(\beta {\cal G} A)^2 }} \left\{ 1+\right. \\ && \nonumber \\ && \left. \frac{1}{2} \frac{B_1^2}{B_1^2+(\beta{\cal G} A)^2} \Big[ \ln \Big( \frac{4\sqrt{B_1^2 + (\beta{\cal G} A)^2}}{B_1}\Big) + \frac{1}{2} \Big] \right\}. 
\label{dom} \end{eqnarray} In typical experimental conditions we have $$B_1 \ll \beta {\cal G} A, $$ and Eq.~(\ref{dom}) transforms to the simple expression \begin{equation} \delta\omega_c = \frac{2}{\pi} \frac{\alpha \beta {\cal G}}{ A}. \label{dom1} \end{equation} One can see that the frequency shift is determined by the ratio of the static CT displacement $F_x/k_c$ to the amplitude of the CT vibrations $A$. We will also present Eq.~(\ref{dom1}) in terms of dimensional quantities: \begin{equation} \frac{\delta\omega_c}{\omega_c} = \frac{2\mu_B G_0}{ \pi A k_c}, \label{domdim} \end{equation} where \begin{equation} G_0=\sum_i G_i \cos\alpha_i. \label{gg0} \end{equation} Eqs.~(\ref{dom}) and (\ref{domdim}) represent an extension of the estimate derived in \cite{bkt}. These equations are valid for any point on the central resonant surface and for any relation between $\vec{B}_{ext}$ and $\vec{B}_d$. It follows from Eq.~(\ref{dom}) that $\delta\omega_c$ is an even function of $y$ and an odd function of $x$. \begin{figure} \includegraphics[scale=0.33]{nf3n} \caption{Cross-sections of the resonant slice for $z=-0.1$ and $z=-1$. The dashed lines show the intersection between the cross sections and the central resonant surface. The solid squares and circles indicate spin locations which correspond to the frequency shifts given by the same symbols in Fig.~\ref{osci}. } \label{cro} \end{figure} In our computer simulations we have used the following parameters taken from experiments \cite{rug1}: $$ \omega_c/2\pi = 5.5 \ kHz, \quad k_c= 110 \ \mu N/m, \quad A=16 \ nm, $$ $$ B_{ext} = 30 \ mT, \quad \omega/2\pi = 2.96 \ GHz, \quad \omega/\gamma= 106 \ mT, $$ $$ |G_z| = 2\times 10^5 \ T/m, \quad B_1 = 300 \ \mu T, \quad L_0 \approx 350 \ nm.
$$ The corresponding dimensionless parameters are the following: $$ \alpha = 1.35\times 10^{-13}, \quad \beta = 1.07 \times 10^6, \quad A=4.6\times 10^{-2}, $$ $$ B_1 = 1.5 \times 10^3, \qquad B_{ext} = 1.53 \times 10^5, \qquad \omega= 5.4\times 10^5. $$ As initial conditions we take: $$ \vec{\mu}(0) = (0, 0, 1), \qquad x_c(0) = A, \qquad \dot{x}_c (0) = 0. $$ Below we describe the results of our computer simulations. Fig.~\ref{osci} shows the frequency shift $\delta\omega_c$ as a function of the spin $z$-coordinate at the central resonant surface ($\Delta = 0$). First, one can see an excellent agreement between the numerical data and the analytical estimate (\ref{dom}). Second, as expected, the maximum magnitude of the frequency shift $|\delta\omega_c|$ can be achieved when the spin is located in the plane of the CT vibrations $y=0$. However, for $y=x$, it has almost the same magnitude $|\delta\omega_c|$ (with the opposite sign of $\delta\omega_c$). Moreover, for $y=x$ the dependence $\delta\omega_c(z)$ has an extremum, which can be used for the measurement of the spin $z$-coordinate. If the distance between the CT and the surface of the sample can be controlled, then the ``depth'' of the spin location below the sample surface can be determined. (In all Figures, the coordinates $x,y$ and $z$ are given in units of $L_0$, and the frequency shift is in units of $\omega_c$.) Fig.~\ref{cro} shows the cross-sections of the resonant slice for $z=-0.1$ and $z=-1$. The greater the distance from the CT, the smaller the cross-sectional area. The solid squares and circles in Fig.~\ref{cro} show the spin locations which correspond to the frequency shifts indicated by the same symbols in Fig.~\ref{osci}. \begin{figure} \includegraphics[scale=0.33]{nf4n} \caption{a) The OSCAR MRFM frequency shift $\delta\omega_c(r_p)$ inside the cross-sectional area of the resonant slice for $x>0$. The solid lines correspond to $y=0$ and the dashed lines correspond to $y=x$. 
Line $1$: $z=-0.1$; line $2$: $z=-0.43$; line $3$: $z=-1$. $r_p = (x^2+y^2)^{1/2}$. b) the cross-section of the resonant slice $z=-0.1$. The bold segments show the spin locations which correspond to line $1$ in a). } \label{nf4} \end{figure} Fig.~\ref{nf4} demonstrates the ``radial'' dependence of the frequency shift $\delta\omega_c (r_p)$, where $r_p = (x^2+y^2)^{1/2}$. The value of $r_p$ can be changed by the lateral displacement of the cantilever. As one may expect, the maximum value of $|\delta\omega_c|$ corresponds to the central resonant surface. The maximum becomes sharper as $|z|$ decreases. Thus, a small distance between the CT and the sample surface is preferable for the measurement of the radial position of the spin. Fig.~\ref{dphi} shows the ``azimuthal dependence'' of the frequency shift $\delta\omega_c (\phi)$, where $\phi = \tan^{-1} (y/x)$ and the spin is located on the central resonant surface. Note that for the given values of $z$ and $\phi$, the coordinates $x$ and $y$ of the spin are fixed if the spin is located on the central resonant surface. The value of $\phi$ can be changed by rotating the cantilever about its axis. One can see the sharp extrema of the function $\delta\omega_c (\phi)$. Again, a small distance between the CT and the sample is preferable for the measurement of the ``azimuthal position'' of the spin. Finally, we consider the realistic case in which the direction of polarization of the {\it rf} field $2\vec{B}_1$ is fixed in the laboratory system of coordinates. Now the angle $\theta$ between the direction of polarization of $2\vec{B}_1$ and the field $\vec{B}_0$ depends on the spin coordinate because the magnitude and the direction of the dipole field $\vec{B}_d^{(0)}$ depend on the spin location. To describe this case we ignore the component of $2\vec{B}_1$ which is parallel to $\vec{B}_0$, and change $B_1$ to $B_1 \sin\theta$ in all our formulas.
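The unit conversions behind the dimensionless parameters listed above can be cross-checked with a short script (a sketch, not from the paper; standard values of $\gamma$ and $\mu_B$ are assumed), which also evaluates the dimensional estimate (\ref{domdim}):

```python
import math

GAMMA = 1.760859e11    # electron gyromagnetic ratio, rad s^-1 T^-1
MU_B  = 9.274010e-24   # Bohr magneton, J T^-1

omega_c = 2*math.pi*5.5e3            # cantilever angular frequency, rad/s
k_c, L0, A_dim = 110e-6, 350e-9, 16e-9

alpha = MU_B*omega_c/(GAMMA*k_c*L0**2)   # Eq. (ab), ~1.35e-13
A     = A_dim/L0                         # ~4.6e-2
B1    = GAMMA*300e-6/omega_c             # ~1.5e3
B_ext = GAMMA*30e-3/omega_c              # ~1.53e5
omega = 2*math.pi*2.96e9/omega_c         # ~5.4e5

# Dimensional frequency-shift estimate, Eq. (domdim), with |G_0| ~ 2e5 T/m:
shift_ratio = 2*MU_B*2e5/(math.pi*A_dim*k_c)   # |delta omega_c| / omega_c
```

With these numbers the relative shift is about $6.7\times 10^{-7}$, i.e. a millihertz-scale frequency shift for a kHz cantilever.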
As an example, Fig.~\ref{pola} demonstrates the dependence $\delta\omega_c(z)$ for the case in which the {\it rf} field is polarized along the $x$-axis. One can see that in a narrow region of $z$ the magnitude of the frequency shift sharply drops. This occurs because in this region the magnetic field $\vec{B}_0$ is almost parallel to the $x$-axis. Thus, the effective field $B_1 \sin\theta$ is small: the condition of the adiabatic motion $ \gamma [ B_1 \sin \theta ]^2 \gg | d\vec{B}_{eff}/dt |$ is not satisfied; and the spin does not follow the effective magnetic field. The dashed lines in Fig.~\ref{pola} correspond to the analytical estimate (\ref{dom}) with the substitution $B_1 \to B_1 \sin\theta$: the analytical estimate assumes adiabatic conditions, which are violated for small $\theta$. The sharp drop of $|\delta\omega_c |$ could be observed either by the change of the distance between the CT and the sample surface or by the change of the direction of polarization of the {\it rf} field. In any case this effect provides an independent measurement of the spin ``depth'' below the sample surface. \begin{figure} \includegraphics[scale=0.33]{nf5n} \caption{a) $\delta\omega_c (\phi)$, with $\phi= \tan^{-1} (y/x)$ for the central resonant surface and $z=-0.1$ (full line); $z=-0.43$ (dashed line), $z=-1$ (dotted line). b) solid line shows the cross-section of the resonant slice for $z=-0.1$. Dashed line shows the intersection between the plane $z=-0.1$ and the central resonant surface. The solid circle in b) shows the spin location $\phi/\pi=-0.1$ whose corresponding frequency shift is marked by a solid circle on a). } \label{dphi} \end{figure} \section{Conclusion} We have derived the quasiclassical equations of motion describing the OSCAR technique in MRFM for arbitrary relation between the external and dipole magnetic fields and arbitrary location of a single spin. 
We have obtained an analytical estimate of the OSCAR MRFM frequency shift $\delta\omega_c$ which is in excellent agreement with numerical simulations. We have shown that the dependence of $\delta\omega_c$ on the position of the spin relative to the cantilever contains characteristic maxima and minima which can be used to determine the position of the spin. We believe that by moving the cantilever in three dimensions, rotating it (or the sample) about the cantilever's axis, and changing the direction of polarization of the {\it rf} magnetic field, experimentalists will eventually be able to determine the position of a single spin. We hope that our work will help to achieve this goal. \begin{figure} \includegraphics[scale=0.33]{nf6n} \caption{ a) $\delta\omega_c (z)$ when the {\it rf} field $\vec{B}_1$ is parallel to the $x$-axis. The spin is located at the central resonant surface $y=0$, $x>0$. The solid line shows numerical data, the dashed line is the analytical estimate (\ref{dom}), which assumes adiabatic motion of the spin magnetic moment $\vec{\mu}$ parallel to $\vec{B}_{eff}$. For a few numerical points indicated as solid circles in a) the corresponding $\vec{B}_0$ field is shown in b). b) Solid line: intersection between the central resonant surface and the $x-z$ plane. Arrows show the magnetic field $\vec{B}_0$ on this intersection at the points indicated as solid circles in a). The absolute value of the frequency shift $|\delta\omega_c|$ drops at the spin locations where $\vec{B}_0$ is approximately parallel to $\vec{B}_1$ $(\theta \ll 1)$. } \label{pola} \end{figure} \section*{Acknowledgments} This work was supported by the Department of Energy (DOE) under Contract No. W-7405-ENG-36, by the Defense Advanced Research Projects Agency (DARPA), by the National Security Agency (NSA), and by the Advanced Research and Development Activity (ARDA).
\section{Appendix} Eq.~(\ref{simp}) can be written in the following form: \begin{equation} \frac{d^2 x_c}{d\tau^2} + x_c = \epsilon f(x_c), \label{aa1} \end{equation} where $\tau=\omega_c t$ is the dimensionless time, \begin{equation} f(x_c) = \frac{\beta {\cal G} x_c}{\sqrt{B_1^2+(\beta {\cal G})^2 x_c^2}}, \label{aa2} \end{equation} and $\epsilon= - \alpha\beta|{\cal G}| $. The approximate solution of (\ref{aa1}), can be written as \cite{bogo}: \begin{equation} x_c(\tau)=a(\tau) \cos\psi(\tau)+ O(\epsilon), \label{aa3} \end{equation} where in the first order in $\epsilon$, $a(\tau)$ and $\psi(\tau)$ satisfy the equations: \begin{equation} \begin{array}{lll} \displaystyle \frac{da}{d\tau} &= \epsilon P_1 (a) +O(\epsilon), \\ &\\ \displaystyle \frac{d\psi}{d\tau} &= 1 + \epsilon Q_1 (a) +O(\epsilon), \\ \end{array} \label{aa4} \end{equation} and the functions $P_1(a)$ and $Q_1(a)$ are given by: \begin{equation} P_1(a) = -\frac{1}{2\pi} \int_0^{2\pi} f(a\cos\psi) \sin\psi \ d\psi, \label{a5a} \end{equation} \begin{equation} Q_1(a) = -\frac{1}{2\pi a} \int_0^{2\pi} f(a\cos\psi) \cos\psi \ d\psi. \label{a5b} \end{equation} On inserting the explicit expression (\ref{aa2}) for $f(a\cos\psi)$ one gets: \begin{equation} \label{a6a} P_1(a) = 0, \end{equation} \begin{equation} \label{a6b} Q_1(a) = -\frac{2\beta {\cal G} }{\pi\sqrt{B_1^2 +(\beta {\cal G} a)^2}} \int_0^{\pi/2} \frac{(1-\sin^2 \psi)}{\sqrt{1-k^2\sin^2\psi}} \ d\psi, \end{equation} where \begin{equation} k^2 = \frac{(\beta {\cal G} a)^2}{ B_1^2 +(\beta {\cal G} a)^2}. \label{aa7} \end{equation} Eq.~(\ref{a6b}) can be written as: \begin{equation} Q_1(a) = -\frac{2\beta {\cal G} }{\pi k^2\sqrt{B_1^2+(\beta {\cal G} a)^2}} [ (k^2 -1) K(k) + E(k)], \label{aa8} \end{equation} where $K(k)$ and $E(k)$ are the complete elliptic integrals of the first and second kind. 
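The elliptic-integral identity used to pass from Eq.~(\ref{a6b}) to Eq.~(\ref{aa8}) is easy to check numerically. The following self-contained sketch (the helper names are ours) evaluates both sides with Simpson quadrature:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def K(k):  # complete elliptic integral of the first kind
    return simpson(lambda p: 1.0 / math.sqrt(1 - (k * math.sin(p))**2), 0, math.pi / 2)

def E(k):  # complete elliptic integral of the second kind
    return simpson(lambda p: math.sqrt(1 - (k * math.sin(p))**2), 0, math.pi / 2)

def lhs(k):  # the integral appearing in Eq. (a6b)
    return simpson(lambda p: (1 - math.sin(p)**2) / math.sqrt(1 - (k * math.sin(p))**2),
                   0, math.pi / 2)

def rhs(k):  # the closed form of Eq. (aa8): [(k^2 - 1) K + E] / k^2
    return ((k**2 - 1) * K(k) + E(k)) / k**2

for k in (0.3, 0.7, 0.95):
    assert abs(lhs(k) - rhs(k)) < 1e-8
```

Both sides agree to quadrature accuracy for any $0<k<1$.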
When $k \simeq 1$ the elliptic integrals can be approximated by: \begin{equation} K(k) \approx \ln \frac{4}{\sqrt{1-k^2}} +\frac{1}{4}\left( \ln \frac{4}{\sqrt{1-k^2}} -\frac{1}{2}\right) (1-k^2) , \label{a9a} \end{equation} \begin{equation} E(k) \approx 1 + \frac{1}{2}\left( \ln \frac{4} {\sqrt{1-k^2}} -\frac{1}{2} \right) (1-k^2). \label{a9b} \end{equation} In the first-order approximation the frequency shift is: \begin{eqnarray} \nonumber && \delta\omega_c \simeq \epsilon Q_1(a) = \frac{2}{\pi} \frac{\alpha \beta^2 {\cal G} |{\cal G}|} {\sqrt{B_1^2 +(\beta {\cal G} a)^2 }} \left\{ 1- \right. \\ \nonumber && \\ && \left. \frac{1}{2}\frac{B_1^2}{B_1^2+(\beta{\cal G} a)^2} \Big[ \ln \Big( \frac{4\sqrt{B_1^2 + (\beta{\cal G} a)^2}}{B_1}\Big) -\frac{3}{2} \Big] \right\} . \label{aa9} \end{eqnarray} In the approximation $a \approx A$, one obtains Eq.~(\ref{dom}).
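The near-$k=1$ expansions (\ref{a9a})--(\ref{a9b}) can likewise be checked against direct quadrature; a brief sketch (our own helper names, assuming nothing beyond the formulas above):

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def K_exact(k):
    return simpson(lambda p: 1.0 / math.sqrt(1 - (k * math.sin(p))**2), 0, math.pi / 2)

def E_exact(k):
    return simpson(lambda p: math.sqrt(1 - (k * math.sin(p))**2), 0, math.pi / 2)

def K_approx(k):  # Eq. (a9a)
    m1 = 1 - k**2
    l = math.log(4 / math.sqrt(m1))
    return l + 0.25 * (l - 0.5) * m1

def E_approx(k):  # Eq. (a9b)
    m1 = 1 - k**2
    l = math.log(4 / math.sqrt(m1))
    return 1 + 0.5 * (l - 0.5) * m1

for m1 in (0.02, 0.01, 0.005):  # m1 = 1 - k^2, i.e. k close to 1
    k = math.sqrt(1 - m1)
    assert abs(K_approx(k) - K_exact(k)) < 5e-3
    assert abs(E_approx(k) - E_exact(k)) < 5e-3
```

The error of the expansions shrinks as $(1-k^2)^2$ up to logarithms, consistent with the first-order truncation used above.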
\section{Introduction} Models of quantum magnetism are of great interest in the quest to understand quantum phase transitions and many-body states with strong quantum fluctuations. Studies in this field typically focus on identifying phases and phase transitions between them as a function of some coupling ratio. These coupling ratios are typically difficult or impossible to tune in experimental systems. In contrast, external magnetic fields are easy to adjust in experiments, making studies of field-driven quantum phase transitions particularly relevant. Despite this fact, such phase transitions have been largely neglected by the theoretical literature. Here, we present a study of the field-driven saturation transition in a two-dimensional (2D) quantum antiferromagnet known as the $J$-$Q$ model. In this model, a nearest-neighbor antiferromagnetic Heisenberg exchange of strength $J$ competes with a four-spin interaction of strength $Q$, which favors valence-bond solid order. The form of this term is $-Q P_{i,j} P_{k,l}$ (where $P_{i,j} \equiv \frac{1}{4} - \mathbf{S}_i \cdot \mathbf{S}_j$ and $i,j$ and $k,l$ denote parallel bonds of an elementary plaquette of the square lattice). While the $Q$ interaction competes with the Heisenberg exchange ($-J P_{i,j}$), it does not produce frustration in the conventional sense, allowing numerically exact quantum Monte Carlo studies of the physics. We find that the field-driven saturation transition from the antiferromagnet to the fully saturated state in the $J$-$Q$ model is composed of two regimes: a low-$Q$ continuous transition and a high-$Q$ discontinuous (first-order) transition with magnetization jumps, both of which will be addressed here.
For low $Q$, we find that the transition is continuous and is therefore expected to be governed by a zero-scale-factor universality, which was predicted by Sachdev \textit{et al.} in 1994 \cite{sachdev1994}, but until now had not been tested numerically or experimentally in spatial dimension $d=2$ (2D). Although the leading order behavior matches the Sachdev \textit{et al.} prediction, we find multiplicative logarithmic violations of scaling at low temperature. Such violations are to be expected based on the fact that 2D represents the upper critical dimension for this transition, but these scaling violations do \textit{not} match the form predicted by Sachdev \textit{et al.} for reasons that are currently unclear. At high $Q$, the saturation transition is first order and there are discontinuities (jumps) in the magnetization known as \textit{metamagnetism} \cite{iaizzi2017,jacobs1967,stryjewski1977}. These jumps are caused by the onset of attractive interactions between magnons (spin flips on a fully polarized background) mediated by the $Q$-term (a mechanism previously established in the 1D $J$-$Q$ model \cite{iaizzi2017}). We use a high-magnetization expansion to obtain an exact solution for the critical coupling ratio $(Q/J)_{\rm min}$ where the jump first appears. \section{Background} The $J$-$Q$ model is part of a family of Marshall-positive Hamiltonians constructed from products of singlet projection operators \cite{kaul2013}. The two-dimensional realization of the \mbox{$J$-$Q$} model is given by \begin{eqnarray} H_{JQ} & = & -J \sum \limits_{\braket{i,j}} P_{i,j} - Q \sum \limits_{\braket{i,j,k,l}} P_{i,j} P_{k,l} \end{eqnarray} where $\braket{i,j}$ sums over nearest neighbors and $\braket{i,j,k,l}$ sums over plaquettes on a square lattice as pairs acting on rows $\begin{smallmatrix} k&l\\i&j\end{smallmatrix} $ and columns $\begin{smallmatrix} j&l\\i&k\end{smallmatrix}$ \cite{sandvik2007}. 
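All terms of $H_{JQ}$ are built from the singlet projector $P_{i,j}=\frac{1}{4}-\mathbf{S}_i \cdot \mathbf{S}_j$. Its defining properties on a single bond are quickly verified in exact rational arithmetic; the following is an illustrative stand-alone sketch (helper names are ours):

```python
from fractions import Fraction as F

def M(*rows):
    """Build a 4x4 matrix of exact rationals; basis order |uu>, |ud>, |du>, |dd>."""
    return [[F(x) for x in row.split()] for row in rows]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

I     = M('1 0 0 0', '0 1 0 0', '0 0 1 0', '0 0 0 1')
SWAP  = M('1 0 0 0', '0 0 1 0', '0 1 0 0', '0 0 0 1')
# S_i.S_j = Sz(x)Sz + (S+(x)S- + S-(x)S+)/2 in the two-site basis:
SdotS = M('1/4 0 0 0', '0 -1/4 1/2 0', '0 1/2 -1/4 0', '0 0 0 1/4')

# P_ij = 1/4 - S_i.S_j
P = [[F(1, 4) * I[i][j] - SdotS[i][j] for j in range(4)] for i in range(4)]

# For spin-1/2 this is exactly the singlet projector (I - SWAP)/2:
assert P == [[(I[i][j] - SWAP[i][j]) / 2 for j in range(4)] for i in range(4)]
assert matmul(P, P) == P                      # idempotent: a projector
assert sum(P[i][i] for i in range(4)) == 1    # rank one: projects onto the singlet
assert all(x == 0 for x in P[0])              # annihilates |uu> (a triplet state)
```

Because $P_{i,j}$ annihilates every triplet, both the $J$ and $Q$ terms annihilate the fully polarized state, which is what makes the high-magnetization expansion used below exact.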
The zero-field $J$-$Q$ model has been extensively studied in both one \cite{iaizzi2017,sanyal2011,tang2011a,tang2014} and two \cite{sandvik2007,lou2009,sandvik2010,jin2013,tang2013} spatial dimensions, where it provides a numerically tractable way to study the deconfined quantum critical point marking the transition between the N\'{e}el antiferromagnetic state and the valence-bond solid (VBS). The VBS breaks $Z_4$ lattice symmetry to form an ordered arrangement of local singlet pairs. Here we will not focus on this aspect of the $J$-$Q$ model, but instead add an external magnetic field $h_z$, \begin{align} H_{JQh} = H_{JQ} - h_z \sum \limits_i S^z_i, \end{align} and study the magnetization near the field-driven transition to saturation. A separate paper \cite{iaizzi2018dqc} will discuss magnetic field effects in the vicinity of the N\'{e}el-VBS transition (see also Ref.~\onlinecite{mythesis}). Hereafter we will either fix the energy scale by (1) setting $J=1$ (and thereby referring to the dimensionless parameters $q\equiv Q/J$ and $h \equiv h_z/J$) or by (2) requiring $J+Q=1$ (and thereby referring to the dimensionless parameters $s\equiv Q/(J+Q)$ and $h\equiv h_z/(J+Q)$). The magnetization jumps correspond to a first-order phase transition (sometimes called metamagnetism) in which the magnetization changes suddenly in response to an infinitesimal change in the magnetic field \cite{jacobs1967,stryjewski1977}. This sort of transition usually occurs in spin systems with frustration or intrinsic anisotropy \cite{gerhardt1998,hirata1999,aligia2000,dmitriev2006,kecke2007,sudan2009,arlego2011,kolezhuk2012,huerga2014}, but recent work \cite{iaizzi2015,iaizzi2017,mythesis,mao2017} has shown that metamagnetism occurs in the 1D $J$-$Q$ model, which (in the absence of a field) is both isotropic and unfrustrated. 
The magnetization jumps in the 1D $J$-$Q$ model are caused by the onset of attractive interactions between magnons (flipped spins against a fully polarized background) mediated by the four-spin interaction \cite{iaizzi2017}. In the 1D case the critical coupling ratio $q_{\rm min}$ can be determined exactly using a high-magnetization expansion \cite{iaizzi2017}. Here we build on previous work \cite{iaizzi2017} to include the 2D case. Zero-scale-factor universality, first proposed by Sachdev \textit{et al.} in Ref.~\onlinecite{sachdev1994}, requires response functions to obey scaling forms that depend only on the \textit{bare} coupling constants, without any nonuniversal scale factors in the arguments of the scaling functions. It applies to continuous quantum phase transitions that feature the onset of a nonzero ground state expectation value of a conserved density \cite{sachdev1994,iaizzi2017}. The saturation transition in the $J$-$Q$ model for $q<q_{\rm min}$ is just such a situation \cite{iaizzi2017}, although the 2D case is at the upper critical dimension of the theory, so we expect to find (universal) multiplicative logarithmic corrections to the zero-factor scaling form. \textit{Outline:} The methods used in this work are summarized in \cref{s:meta_methods}. In \cref{s:pd}, we discuss a schematic phase diagram of the 2D $J$-$Q$ model. In \cref{s:jump}, we focus on the onset of a magnetization jump at $q_{\rm min}$, where the saturation transition becomes first order, and derive an exact result for the value of $q_{\rm min}$. In \cref{s:zsf2d} we discuss the universal scaling behavior near the continuous saturation transition, focusing on tests of the zero-scale-factor prediction as well as the presence of multiplicative logarithmic corrections expected at the upper critical dimension ($d=2$). Our conclusions are discussed further in \cref{s:discussion}. 
\section{Methods \label{s:meta_methods}} For the exact solution for $q_{\rm min}$ we have used Lanczos exact diagonalization \cite{sandvik2011computational} of the two-magnon (flipped spins on a fully polarized background) Hamiltonian, which we derive in an exact high-field expansion. The large-scale numerical results obtained here were generated using the stochastic series expansion quantum Monte Carlo (QMC) method with directed loop updates \cite{sandvik_dl} and quantum replica exchange. This QMC program is based on the method used in our previous work \cite{iaizzi2017}. The stochastic series expansion is a QMC method which maps a \mbox{$d$-dimensional} quantum problem onto a $(d+1)$-dimensional classical problem by means of a Taylor expansion of the density matrix $\rho = e^{-\beta H}$, where the extra dimension roughly corresponds to imaginary time in a path-integral formulation \cite{sandvik2011computational}. In the QMC sampling, the emphasis is on the operators that move the world-lines rather than the world-lines themselves. The method used here is based on the techniques first described in Ref.~\onlinecite{sandvik_dl} (as applied to the Heisenberg model). In addition to the standard updates, we incorporated quantum replica exchange \cite{hukushima1996,sengupta2002}, a multicanonical method in which the magnetic field (or some other parameter) is sampled stochastically by running many simulations in parallel with different magnetic fields and periodically allowing them to swap fields in a manner that obeys the detailed balance condition. To further enhance equilibration we used a technique known as $\beta$-doubling, a variation on simulated annealing. In $\beta$-doubling, simulations begin at high temperature, and the desired inverse temperature is approached by successive doublings of $\beta$; each time $\beta$ is doubled, a new operator string is formed by appending the existing operator string to itself \cite{sandvik2002}.
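As an illustration only (not the authors' code), the $\beta$-doubling schedule can be sketched as follows; here `equilibrate` stands in for a batch of SSE updates at fixed $\beta$, and all re-equilibration details are elided:

```python
def beta_doubling(beta_target, equilibrate, beta0=0.1):
    """Approach beta_target from high temperature by successive doublings.

    Each time beta is doubled, the SSE operator string is seeded by
    appending the current string to itself (cf. Sandvik 2002).
    """
    beta, op_string = beta0, []
    while beta < beta_target:
        op_string = equilibrate(beta, op_string)
        beta *= 2
        op_string = op_string + op_string  # duplicate the operator string
    return equilibrate(beta_target, op_string)

# Toy stand-in: an "equilibration" that pads the string to length ~ beta.
result = beta_doubling(8.0, lambda b, s: s + ['op'] * max(0, int(b) - len(s)))
```

The duplication step works because the SSE expansion order grows roughly linearly with $\beta$, so the doubled string is a good starting configuration at the doubled inverse temperature.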
A detailed description of all of these techniques can be found in Chapter 5 of Ref.~\onlinecite{mythesis}. \section{Phase Diagram \label{s:pd}} In Fig.~\ref{f:phase}, we present a schematic zero-temperature phase diagram of the 2D \mbox{$J$-$Q$} model. Here the $h$-axis corresponds to the well-understood 2D Heisenberg antiferromagnet in an external field, and the $q$-axis corresponds to the previously studied \cite{sandvik2007,lou2009,sandvik2010,jin2013,tang2013,shao2016} zero-field \mbox{$J$-$Q$} model, which for $q<q_c$ has long-range antiferromagnetic N\'eel order in the ground state. At finite temperature $O(3)$ spin-rotation symmetry (which is continuous) cannot be spontaneously broken (according to the Mermin-Wagner theorem \cite{mermin1966}), so there is no long-range spin order; instead there is a ``renormalized classical'' regime with the spin correlation length diverging exponentially as $T\rightarrow 0$ like $\xi \propto e^{2\pi \rho_s/T}$ \cite{chakravarty1988}. At $q_c$, the zero-field $J$-$Q$ model undergoes a quantum phase transition to the valence-bond solid (VBS) state. The off-axes area of \cref{f:phase} has not previously been studied; here we focus on the region around the field-driven saturation transition, $h_s(q)$. The region around the deconfined quantum critical point, $q_c$, will be addressed in a forthcoming publication \cite{iaizzi2018dqc}. Starting from the N\'{e}el state ($q<q_c$) on the $q$-axis, adding a magnetic field forces the antiferromagnetic correlations into the $XY$ plane, producing a partially polarized canted antiferromagnetic state. At finite temperature, there is no long-range N\'{e}el order, but the addition of a field permits a BKT-like transition to a phase with power-law spin correlations. For $q>q_c$, the ground state has VBS order. This state has a finite gap, so it survives at finite temperature and is destroyed by the magnetic field only after the field closes the spin gap.
The destruction of the VBS recovers the canted antiferromagnetic state (or the partially polarized spin-disordered phase for $T>0$). \begin{figure} \centering \includegraphics[width=80mm]{jqh-phase-diagram-2d.pdf} \caption{Schematic phase diagram of the 2D \mbox{$J$-$Q$} model in an external field at zero temperature. The different phases and critical points are explained in the text. \label{f:phase}} \end{figure} We will focus here on the saturation transition in the high-field region of the phase diagram. The system reaches saturation (where all spins are uniformly aligned in the $+z$ direction) at $h=h_s(q)$. For $q<q_{\rm min}$, this transition is continuous and the saturation field is given by $h_s(q\leq q_{\rm min})=4J$ (in this regime $h_s(q)$ is a dashed line). For $q>q_{\rm min}$ the saturation transition is first order (i.e., metamagnetic) and there are macroscopic discontinuities in the magnetization (in this regime $h_s(q)$ is a solid line). The point $q_{\rm min}$ denotes the onset of metamagnetism; here the magnetization is still continuous, but the magnetic susceptibility diverges at saturation (corresponding to an infinite-order phase transition). \section{Metamagnetism \label{s:jump}} Magnetization jumps (also known as metamagnetism) can appear due to a variety of mechanisms including broken lattice symmetries, magnetization plateaus \cite{honecker2004}, localization of magnetic excitations \cite{richter2004,schnack2001,schulenburg2002}, and bound states of magnons \cite{aligia2000,kecke2007,iaizzi2017}. It has previously been established that magnetization jumps, caused by the onset of a bound state of magnons, occur in the \mbox{$J$-$Q$} chain \cite{iaizzi2015,iaizzi2017,mao2017}; to our knowledge, this is the first known example of metamagnetism in the absence of frustration or intrinsic anisotropy. To understand the mechanism for metamagnetism, we consider bosonic spin flips (magnons) on a fully polarized background.
These magnons are hardcore bosons that interact with a short-range repulsive interaction in the Heisenberg limit. The introduction of the $Q$-term produces an effective short-range \textit{attractive} interaction between magnons. At $q_{\rm min}$, this attractive force dominates and causes pairs of magnons to form bound states. \subsection{Exact Solution for $q_{\rm min}$ \label{s:jumpED}} We will now find $q_{\rm min}$ for the 2D $J$-$Q$ model using the procedure developed for the $J$-$Q$ chain in Ref.~\onlinecite{iaizzi2017}. Let us define the bare energy of an $n$-magnon state, $\bar E_n$, through \begin{align} E_n (J,Q,h) = \bar E_n (J,Q) -nh/2. \end{align} We can then define the binding energy of two magnons as \begin{align} \Xi(q) \equiv 2 \bar E_1 - \bar E_2. \end{align} The $Q$ term is nonzero only when acting on states where there are exactly two magnons on a plaquette, so it does not contribute to the single-magnon dispersion, which has a tight-binding-like form \cite{iaizzi2017}. We can therefore solve analytically for the single-magnon energy, $\bar E_1 = -4J$. The two-magnon energy, $\bar E_2$, corresponds to the ground state in the two-magnon sector and must be determined numerically. Since this is only a two-body problem, relatively large systems can be studied using Lanczos exact diagonalization to obtain $\bar E_2$ to arbitrary numerical precision. \begin{figure} \centering \includegraphics[width=80mm]{2017-05-10-e2-fig.pdf} \caption{Binding energy $\Xi(q,L)$ plotted against $q$ for several system sizes calculated using exact diagonalization. The thin black line represents $\Xi=0$. Inset: zoomed-in view of the crossing point. \label{f:xing}} \end{figure} In Fig.~\ref{f:xing} we plot the binding energy of two magnons, $\Xi(q,L)$, for $0 \leq q \leq 1$ and $L=4, \, 8, \, 12, \, 16$. For all sizes the binding energy becomes positive around $q\approx 0.417$.
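In the Heisenberg limit ($q=0$) the two-magnon problem is small enough to solve with elementary tools. The following pure-Python sketch (our own construction; the matrix elements follow from $-J\sum_{\braket{i,j}} P_{i,j}$ acting on one and two flipped spins, with energies measured from the saturated state) reproduces $\bar E_1 = -4J$ and a negative binding energy, consistent with the repulsive $q=0$ end of Fig.~\ref{f:xing}:

```python
import math, random

L, J = 4, 1.0          # small periodic L x L lattice, Heisenberg limit q = 0
N = L * L

def nbrs(s):
    x, y = s % L, s // L
    return [((x + 1) % L) + y * L, ((x - 1) % L) + y * L,
            x + ((y + 1) % L) * L, x + ((y - 1) % L) * L]

# Single magnon: hopping +J/2 on each bond, diagonal -J/2 per touching bond;
# the tight-binding band minimum at k = (pi, pi) gives E1 = -4J.
E1 = min(-2 * J + J * (math.cos(2 * math.pi * nx / L)
                       + math.cos(2 * math.pi * ny / L))
         for nx in range(L) for ny in range(L))

# Two-magnon sector: basis of unordered site pairs {i, j}.
pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
index = {p: k for k, p in enumerate(pairs)}
rows = [[] for _ in pairs]
for k, (i, j) in enumerate(pairs):
    adj = 1 if j in nbrs(i) else 0
    # bonds with exactly one flipped endpoint: 8 - 2*adj, each giving -J/2
    rows[k].append((k, -(J / 2) * (8 - 2 * adj)))
    for a, o in ((i, j), (j, i)):          # hop the magnon at a to neighbor b
        for b in nbrs(a):
            if b != o:                     # hard-core constraint
                rows[k].append((index[min(o, b), max(o, b)], J / 2))

def apply_H(v):
    return [sum(val * v[c] for c, val in row) for row in rows]

# Power iteration on (c - H) converges to the two-magnon ground state.
c = 10.0
random.seed(1)
v = [random.random() - 0.5 for _ in pairs]
for _ in range(2000):
    w = apply_H(v)
    v = [c * x - y for x, y in zip(v, w)]
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]
E2 = sum(x * y for x, y in zip(v, apply_H(v)))   # Rayleigh quotient
Xi = 2 * E1 - E2

assert abs(E1 + 4 * J) < 1e-12
assert -3.0 < Xi < 0.0    # repulsive at q = 0: no bound state
```

At $q>0$ the $Q$ term adds plaquette matrix elements in this same two-magnon basis; scanning $q$ then locates the sign change of $\Xi$, which is what the Lanczos calculation does at larger $L$.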
We can also see that Fig.~\ref{f:xing} strongly resembles the analogous figure for the $J$-$Q$ chain (see Fig.~6 of Ref.~\onlinecite{iaizzi2017}). For $q<q_{\rm min}$ finite size effects result in an \textit{underestimate} of the binding energy and for $q>q_{\rm min}$ finite size effects cause an \textit{overestimate} of the binding energy. Around $q_{\rm min}$ these effects cancel out and the crossing is \textit{nearly} independent of system size (in the 1D case the crossing is exactly independent of $L$). Using a bracketing procedure, we can extract $q_{\rm min}(L)$ to arbitrary numerical precision. \cref{t:qmin} contains a list of $q_{\rm min}(L)$ for select $L \times L$ systems with $L\leq 24$. $q_{\rm min}$ converges exponentially fast in $L$, so based on extrapolation using these modest sizes we know $q_{\rm min}(L=\infty) = 0.41748329$ to eight digits of precision. Although we do not plot it here, the exponential convergence of $q_{\rm min}(L)$ can be seen from the underlines in \cref{t:qmin}, which indicate the digits which are converged to the thermodynamic limit; the number of underlined digits grows linearly with $L$. Note here that $q_{\rm min}$ is not the same as $q_c$ (the N\'{e}el-VBS transition point), and these two phase transitions are governed by completely different physics. \begin{table} \caption{$q_{\rm min}(L)$ calculated to machine precision for select $L \times L$ systems using Lanczos exact diagonalization. The underlined portions of the numbers represent the digits that are fully converged to the thermodynamic limit. 
\label{t:qmin}} \begin{center} \begin{tabular}{c|c} L & $q_{\rm min}$ \\ \hline 4 & \underline{0.41}3793103448 \\ 6 & \underline{0.417}287630402 \\ 8 & \underline{0.4174}67568061 \\ 10 & \underline{0.41748}1179858 \\ 12 & \underline{0.417482}857341 \\ 14 & \underline{0.417483}171909 \\ 16 & \underline{0.4174832}50752 \\ 18 & \underline{0.4174832}74856 \\ 20 & \underline{0.4174832}83375 \\ 22 & \underline{0.41748328}6742 \\ 24 & \underline{0.41748328}8198 \\ \end{tabular} \end{center} \end{table} \begin{figure} \centering \includegraphics[width=80mm]{2017-07-21-probDens-l20.pdf} \caption{Probability density of magnon separation in the $x$-direction for $r_y = 0$, $|\psi(r_x,r_y=0)|$ in the two-magnon sector of the $J$-$Q$ model; calculated using Lanczos exact diagonalization. \label{f:pdens}} \end{figure} In \cref{f:pdens} we plot the ground state probability density in the two magnon sector as a function of separation of the magnons in the $x$ direction, $r_x$ (with $r_y =0$). Here we consider a small ($18\times18$) system to make the features at the boundary easier to distinguish on the scale of the figure. For $q=0$, we can see that the probability density takes on the form of a free particle with periodic boundary conditions in $r_x,r_y$, with a single excluded site at $r_x=r_y=0$. In the continuum limit, this corresponds to a repulsive delta potential. For $q>q_{\rm min}$ the wave function takes on the exponentially decaying form of a bound state. At $q=q_{\rm min}$ (the crossover between repulsive and attractive interactions) the wave function becomes flat with an exponentially-decaying short-distance disturbance of the form $\psi \propto 1-a e^{-r_x /b}$ (this was confirmed by further data not depicted here). This exponential disturbance explains why the finite size effects vanish exponentially near $q_{\rm min}$. 
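The exponential convergence of the tabulated values can be checked directly; a short sketch using the entries of \cref{t:qmin}, taking the $L=24$ entry as a stand-in for the thermodynamic limit:

```python
# q_min(L) from Table (t:qmin), computed by Lanczos exact diagonalization.
qmin = {4: 0.413793103448, 6: 0.417287630402, 8: 0.417467568061,
        10: 0.417481179858, 12: 0.417482857341, 14: 0.417483171909,
        16: 0.417483250752, 18: 0.417483274856, 20: 0.417483283375,
        22: 0.417483286742, 24: 0.417483288198}

ref = qmin[24]                                 # proxy for q_min(L = infinity)
d = [ref - qmin[L] for L in range(4, 20, 2)]   # finite-size error, L = 4..18

assert all(x > 0 for x in d)                             # q_min(L) increases with L
assert all(d[i + 1] < d[i] for i in range(len(d) - 1))   # monotone decay
# roughly five orders of magnitude over these modest sizes,
# consistent with exponential rather than power-law convergence
assert d[0] / d[-1] > 1e5
```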
This form of the wave function in the 2D case stands in contrast to the 1D $J$-$Q$ model, where the bulk wave function at $q_{\rm min}$ is perfectly flat and $q_{\rm min}$ is exactly independent of $L$ for $L>6$ \cite{iaizzi2017}. The onset of attractive interactions between magnons has previously been found to cause metamagnetism \cite{aligia2000,kecke2007,iaizzi2017}, but bound pairs of magnons are not a sufficient condition to guarantee the existence of a macroscopic magnetization jump. The magnetization could, for example, change by steps of $\Delta m_z=2$, but never achieve a macroscopic jump \cite{honecker2006,kecke2007}. For a true jump to occur, the point $q_{\rm min}$ must be the beginning of an instability leading to ever larger bound states of magnons. In the next section we will confirm numerically that a macroscopic magnetization jump does in fact occur in the full magnetization curves obtained via quantum Monte Carlo simulations. It will not be possible to detect the onset of the magnetization jump (which is initially infinitesimal) by directly examining the magnetization curves due to finite-temperature rounding. Instead, in \cref{s:zsf2d} we will examine the scaling of the magnetization near saturation and find that a qualitative change in behavior, consistent with the onset of a different universality class, occurs at the predicted value of $q_{\rm min}$. \subsection{Quantum Monte Carlo Results} In Fig.~\ref{f:mmag}, we plot the magnetization density, \begin{align} m = \frac{2}{L^2}\sum S^z_i, \end{align} of the 2D \mbox{$J$-$Q$} model as a function of external field for several different values of \mbox{$0\leq s \leq 1$}, where $s$ is defined by \mbox{$J=1-s$} and $Q=s$, so that \mbox{$J+Q=1$}. Here we use a $16\times 16$ lattice with $\beta=4$.
Ordinarily, QMC can study much larger systems than this, but as was observed in our previous work \cite{iaizzi2015,iaizzi2017}, the $J$-$Q$ model with a field is exceptionally difficult to study, even when using enhancements such as $\beta$-doubling and quantum replica exchange (both used here). We have compared to smaller and larger sizes, and finite-size effects do not qualitatively affect the results on the scale of Fig.~\ref{f:mmag}. For $s=0$ (the Heisenberg limit), the magnetization is linear in $h$ for small fields, and smoothly approaches saturation at $h=4J$. When $s=0.2$, corresponding to a coupling ratio of $q=0.25$, the magnetization curve begins to take on a different shape: shallower at low field and steeper near saturation. This trend continues as $s$ increases: for $s\geq 0.8$, there is a clear discontinuity. Although the jump should appear for $q \geq q_{\rm min} = 0.417$, which corresponds to $s_{\rm min} = 0.294$, this is difficult to distinguish in the QMC data. At $q_{\rm min}$, the jump is infinitesimal, and even when the jump is larger, such as for $s=0.4$ and $s=0.6$, it is hard to distinguish clearly due to finite-temperature effects, which round off the discontinuity in the magnetization. These results are nonetheless consistent with the value of $q_{\rm min}$ predicted using the exact method, and demonstrate that a macroscopic magnetization jump does in fact occur. We will discuss more evidence for $q_{\rm min} \approx 0.417$ from the critical scaling of the magnetization near saturation in \cref{s:zsf2d}. \begin{figure} \centering \includegraphics[width=80mm]{2017-06-13-magcurve-l16.pdf} \caption{Magnetization density of the 2D \mbox{$J$-$Q$} model as a function of external field, $h$, for a range of different values of $s$ defined such that $J=1-s$ and $Q=s$. Here $s=0,0.2,0.4,0.6,0.8,1$ with $\beta=4$ correspond to $q=0, 0.25, 0.67, 1.5, 4, \infty$, respectively (with rescaled non-constant $\beta$).
Results from QMC with quantum replica exchange. \label{f:mmag}} \end{figure} \section{Zero-Scale-Factor Universality\label{s:zsf2d}} In the $J$-$Q$ model, magnetization near saturation should be governed by a remarkably simple zero-scale-factor universality for $q<q_{\rm min}$ (where the saturation transition is continuous) \cite{sachdev1994,iaizzi2017}. Here, ``zero-scale-factor'' means that the response functions are universal functions of the bare coupling constants and do not depend on any nonuniversal numbers \cite{sachdev1994}. Zero-scale-factor universality applies to low-dimensional systems where there is a quantum phase transition characterized by a smooth onset of a conserved density \cite{sachdev1994}. Typically this is applied to the transition from the gapped singlet state of integer spin chains to a field-induced Bose-Einstein condensate of magnons (excitations above the zero magnetization state). In the $J$-$Q$ model, we instead start from the saturated state with $h>h_s$, and consider flipped spins on this background---magnons---as $h$ is decreased below $h_s$. In the 1D case, the zero-factor scaling form applies for all $q<q_{\rm min}$ at sufficiently low temperature, and is violated by a logarithmic divergence at exactly $q_{\rm min}$ \cite{iaizzi2017}. The 2D $J$-$Q$ model is at the upper critical dimension of this universality class, so we expect multiplicative logarithmic violations of the zero-factor scaling form for all $q$. We will describe the universal scaling form and its application to the saturation transition in the 2D $J$-$Q$ model and then show that the low-temperature violations of the scaling form do not match the prediction in Ref. \onlinecite{sachdev1994}. In two spatial dimensions, the zero-factor scaling form for the deviations from saturation $(\delta \braket{m} \equiv 1 - \braket{m})$ is given by Eq. (1.23) of Ref. 
\onlinecite{sachdev1994}: \begin{align} \delta \braket{m} = g \mu_B \left( \frac{2M}{\hbar^2 \beta}\right) \mathcal{M}(\beta \mu), \label{sachdev} \end{align} where $M$ is the bare magnon mass (which is $M=1$ when $J=1$), and $\mu$ represents the field, $\mu \equiv h_s-h$. For $q\leq q_{\rm min}$, the saturation field is $h_s=4J$ (which can be determined analytically from the level crossing between the saturated state and the state with a single flipped spin \cite{iaizzi2017}). We set $\hbar=1$ and $\delta \braket{m} = g \mu_B \braket{n}$ to define the rescaled magnon density: \begin{align} n_s (q,\beta \mu) \equiv \frac{\beta \braket{n}}{2} = \mathcal{M}(\beta \mu). \label{ns} \end{align} We emphasize again that these magnons are spin flips on a fully polarized background, so $n\rightarrow 0$ corresponds to the saturated state. The field is also reversed from the usual case (of a gapped singlet state being driven to a polarized ground state by applying a uniform field). Thus, in the present case, $h>h_s$ produces a negative $\mu$, which means $n\rightarrow0$, and $h<h_s$ corresponds to a positive $\mu$ and a finite density of magnons. At the saturation field, $\mu=0$, the scaling form in \cref{ns} predicts that the density takes on a simple form: \begin{align} \braket{n} = 2 \mathcal{M}(0) T. \end{align} At this same point the rescaled density, $n_s$, becomes independent of temperature: \begin{align} n_s (q,0) \equiv \frac{\beta \braket{n}}{2} = \mathcal{M}(0). \end{align} In our case there are two spatial dimensions and $z=2$ imaginary-time dimensions, so the total effective dimensionality is $d+z=4$, which is the upper critical dimension of the zero-scale-factor universality \cite{sachdev1994}. At low temperatures, we therefore expect to see multiplicative logarithmic violations of this scaling form, which should be universal as well.
\begin{figure} \centering \includegraphics[width=80mm]{2018-07-27-zsf-varq.pdf} \includegraphics[width=80mm]{2018-07-27-zsf-varq-zoom.pdf} \caption{(Top) The zero-scale-factor-rescaled magnon density [Eq.~(\ref{ns})] at $h=h_s$, $\mu=0$ calculated using QMC with quantum replica exchange. The bright green line is a fit to the log-corrected scaling form Eq.~(\ref{logfit}). (Bottom) A zoomed-in view. \label{f:zsf}} \end{figure} In Fig. \ref{f:zsf}, we plot the rescaled magnon density at saturation, $n_s(q,\mu=0)$, as a function of temperature for two different sizes, $32\times32$ and $64\times64$. Here we use the exact value of the saturation field $h_s(q\leq q_{\rm min}) = 4J$. These sizes are large enough that finite size effects only become important at low temperature; the results for the two different sizes overlap completely for $T\geq 0.1$, but exhibit some separation at lower temperature depending on the value of $q$. From simulations of $96 \times 96$ and $128 \times 128$ systems (not depicted here) we know that the $64 \times 64$ curve for $q=q_{\rm min}$ is converged to the thermodynamic limit within error bars. If there were no corrections to \cref{ns} the lines in \cref{f:zsf} would exhibit no temperature dependence. Instead, we observe violations of the scaling form for all $q$. For $q=0$, there is some non-monotonic behavior, with a local minimum around $T=0.35$; at low temperatures, $n_s(T)$ appears to diverge like $\log(1/T)$, which on this semi-log scale manifests as a straight line. For $q=0.1$ and $0.2$, the behavior is similar, although the whole curve is shifted upwards. For $q=0.3$, the local minimum in $n_s(T)$ appears to be gone. The divergence for $q<q_{\rm min}$ looks log-linear, but it is difficult to distinguish between different powers of the log by fitting alone. At $q=0.4$ and $q=q_{\rm min}=0.4174833$, finite size effects become more important, and it appears that the log has a different power. 
\subsection{Behavior around $q_{\rm min}$} We can also use the low-temperature behavior of $n_s$ in \cref{f:zsf} to verify our prediction of $q_{\rm min}$ (from the high-magnetization expansion discussed in \cref{s:jumpED}). At $q_{\rm min}$, the transition is no longer the smooth onset of a conserved density; therefore, the zero-scale-factor universality does not apply (not even with logarithmic corrections). For all $q<q_{\rm min}$, the low-temperature divergence appears to obey a form $\log \left( \frac{1}{T} \right)$, or some power of it. At $q=q_{\rm min}$ the divergence of $n_s(q_{\rm min},T)$ takes on a \textit{qualitatively} different form that appears to diverge faster than $\log \left( \frac{1}{T} \right)$. This confirms the value of $q_{\rm min}$ predicted by the high-magnetization expansion, even though no sign of a discontinuity can be observed in the magnetization curves themselves due to finite-temperature rounding (see \cref{f:mmag}). \subsection{Low-temperature scaling violations} Sachdev \textit{et al.} \cite{sachdev1994} derived a form for the logarithmic violations of the zero-scale-factor universality that occur at the upper critical dimension. At $\mu =0$ (saturation, $h=h_s$) they predict that the magnon density will take on the form \begin{align} \braket{n} = \frac{2M k_B T}{4 \pi} \left[ \log \left( \frac{\Lambda^2}{2M k_B T} \right) \right]^{-4} \end{align} (see Eq.~(2.20) of Ref.~\onlinecite{sachdev1994}), where $\Lambda$ is an upper (UV) momentum cutoff. We can plug this into Eq.~(\ref{ns}) to find a prediction for the log-corrected form of the rescaled magnon density: \begin{align} n_s(\mu =0) = \frac{M}{4 \pi} \left[ \log \left( \frac{\Lambda^2}{2M k_B T} \right) \right]^{-4} .\label{eq:logcorr} \end{align} This form should also be universal, but the UV cutoff should depend on microscopic details. For simplicity we will restrict our analysis to the Heisenberg limit ($Q=0$).
Setting the magnon mass, $M=1$ (the bare value) and introducing a fitting parameter, $a$, we can attempt to fit our QMC results for $n_s(q=0,T\rightarrow0)$ to the form \begin{align} n_s = a \left[ \log \left( \frac{\Lambda^2}{T} \right) \right]^{-4} \label{logfit}. \end{align} Automatic fitting programs were unable to find suitable values of $a$ and $\Lambda$ (in the low temperature regime where the divergence appears), so we manually solved for $a$ and $\Lambda$ using two points: $n_s(T=0.04)=0.278$ and $n_s(T=0.10)=0.2604$, finding $a=2.65354 \times 10^6$ and $\Lambda = 1.7\times10^{-13}$. We plot the resulting curve as a bright green line in Fig.~\ref{f:zsf}. Although this \textit{appears} to produce a good fit to the rescaled numerical data at low $T$, the fitting parameters do not make physical sense. The prefactor is fixed by the theory to be $a=M/(4 \pi)\approx 0.08$, yet the fitted value is huge: $a\approx 10^6$ (7 orders of magnitude too large). Worse yet, the UV cutoff, $\Lambda$, is extremely small ($10^{-13}$), much smaller than any other scale in this problem. In zero-scale-factor universality, there should be no renormalization of bare parameters, but even allowing for renormalization of the mass, $M$ (perhaps due to being at the upper critical dimension), it is not possible for \cref{logfit} to match the data while maintaining a physically sensible (i.e., large) value of the UV cutoff $\Lambda$. On close inspection, the fit in Fig.~\ref{f:zsf} bears a remarkable resemblance to a linear $\log T$ divergence. 
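The two-point solution described above is easy to reproduce; a minimal sketch (assuming \texttt{numpy}; only the two quoted data points are used):

```python
import numpy as np

# Two quoted data points used for the manual fit to Eq. (logfit):
T1, n1 = 0.04, 0.278
T2, n2 = 0.10, 0.2604

# With L = log(Lambda^2), the form n_s = a * (L - log T)^(-4) gives
# n1 / n2 = [(L - log T2) / (L - log T1)]^4, which is linear in L
# after taking a fourth root:
r = (n1 / n2) ** 0.25
L = (np.log(T2) - r * np.log(T1)) / (1 - r)   # L = log(Lambda^2)
Lam = np.exp(L / 2)
a = n1 * (L - np.log(T1)) ** 4

print(a, Lam)   # roughly 2.65e6 and 1.7e-13, the values quoted in the text
```

By construction the resulting curve passes exactly through both data points, which is why the fit looks convincing despite the unphysical parameter values.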
Indeed, since $|\log T| \ll |\log \Lambda^2|$, we can expand Eq.~(\ref{logfit}) in powers of the small parameter $\log T/\log \Lambda^2$, finding \begin{small}% \begin{align}% n_s = \frac{a}{\left( \log \Lambda^2 \right)^4} \left[ 1 + 4 \frac{\log T}{\log \Lambda^2} + 10 \left( \frac{\log T}{\log \Lambda^2} \right)^2 + \cdots \right], \end{align}% \end{small}% which is linear in $\log T$ to first order and converges rapidly because $\log \Lambda^2 \approx -58$. Considering this fact and the unphysical parameters required to make the Sachdev form fit the data, it is clear that \cref{eq:logcorr} does not accurately describe the violations of the zero-scale-factor universality at its upper critical dimension. The apparent fit is instead a roundabout approximation of the true form, which is (approximately) proportional to $\log \left( \frac{1}{T} \right)$ to some positive power close to 1, although the exact power is difficult to determine from fitting. The reasons for the failure of the form predicted in Ref.~\onlinecite{sachdev1994} are unclear at this time. \section{Conclusions \label{s:discussion}} Here we have presented a numerical study of the two-dimensional $J$-$Q$ model in the presence of an external magnetic field, focusing on the field-induced transition to the saturated (fully polarized) state. Building on a previous version of this study, which focused on the 1D case \cite{iaizzi2015,iaizzi2017}, we have found that the saturation transition is metamagnetic (i.e., has magnetization jumps) above a critical coupling ratio $q_{\rm min}$. The existence of metamagnetism in the $J$-$Q$ model is surprising because all previously known examples of metamagnetic systems had either frustration or intrinsic anisotropy. This transition is caused by the onset of bound states of magnons (flipped spins against a fully polarized background) induced by the four-spin $Q$ term. The same mechanism can explain the presence of metamagnetism in a similar ring-exchange model \cite{huerga2014}.
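The near-linearity in $\log T$ is easy to confirm numerically; a sketch (assuming \texttt{numpy}, and using the fitted values $a \approx 2.65\times 10^6$ and $\log \Lambda^2 \approx -58.8$ quoted above):

```python
import numpy as np

a, L = 2.65e6, -58.8            # fitted amplitude and L = log(Lambda^2)
T = np.linspace(0.04, 0.5, 200)
u = np.log(T)

full = a * (L - u) ** -4                        # Eq. (logfit): a [log(Lambda^2/T)]^(-4)
linear = a / L**4 * (1 + 4 * u / L)             # first-order expansion in u/L
quadratic = linear + a / L**4 * 10 * (u / L)**2  # second-order expansion

# the expansion parameter u/L stays below ~0.06 on this temperature window,
# so the truncation error is small and the full form is nearly linear in log T
err1 = np.max(np.abs(linear / full - 1))
err2 = np.max(np.abs(quadratic / full - 1))
print(err1, err2)
```

The first-order (linear-in-$\log T$) truncation stays within a few percent of the full form over the whole fitted window, consistent with the fit being indistinguishable from a $\log T$ divergence.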
We have determined $q_{\rm min}$ using an exact high-magnetization expansion (see Ref.~\onlinecite{iaizzi2017}). Although it is not possible to directly observe the onset of the magnetization jump in the QMC data, we do see an apparent change in universal scaling behavior at $q_{\rm min}$ (\cref{f:zsf}), which most likely corresponds to the onset of an infinitesimal magnetization jump that grows into the macroscopic jump observed at high $q$, consistent with the results of our exact calculation. We cannot exclude the possibility that there is some intermediate behavior, like a spin nematic phase \cite{orlova2017}, between $q\approx 0.417$ and some higher-$q$ onset of metamagnetism, but we believe this is unlikely. For $q<q_{\rm min}$, the saturation transition is continuous and is governed by a zero-scale-factor universality at its upper critical dimension \cite{sachdev1994}. This universality has already been shown to apply to the 1D case \cite{iaizzi2017}. We have presented the first-ever numerical test of the zero-scale-factor universality in two dimensions. We found that the low-temperature scaling violations do \textit{not} obey the form proposed by Ref.~\onlinecite{sachdev1994}, which predicts a divergence as a \textit{negative} power of $\log T$ as $T\rightarrow0$; instead they appear to diverge as some \textit{positive} power of $\log T$. There are still some important unanswered questions here that need to be addressed in future studies. It is still unclear why the scaling violations do not match the form predicted by Ref.~\onlinecite{sachdev1994} or what the correct form of the violations should be. In a preliminary report (Ref.~\onlinecite{mythesis}, Ch. 3) we considered an alternative form of the violations based on an analogy to the scaling of the order parameter in the 4D Ising universality class (also at its upper critical dimension).
This universality class matches the leading-order scaling predictions of the zero-scale-factor universality and produced a better, but not fully convincing, agreement with the scaling violations observed in our QMC results. The theoretical basis for the analogy was weak, however, and further theoretical work is required to determine the correct form of the scaling violations. Once the proper form of the scaling violations is established, it should be checked over the full range of its validity, $0\leq q < q_{\rm min}$. At $q_{\rm min}$, the zero-scale-factor universality does not apply, but it is not currently clear what universal behavior should appear. Finally, we have not discussed the behavior of this system at low fields; this aspect of the $J$-$Q$ model, including the field effect near the deconfined quantum critical point $q_c$ \cite{shao2016}, will be addressed in a forthcoming publication \cite{iaizzi2018dqc}. \section*{Acknowledgements} The work of A.I. and A.W.S. was supported by the NSF under Grants No. DMR-1710170 and No. DMR-1410126, and by a Simons Investigator Award. A.I. acknowledges support from the APS-IUSSTF Physics PhD Student Visitation Program for a visit to K.D. at the Tata Institute of Fundamental Research in Mumbai. The computational work reported in this paper was performed on the Shared Computing Cluster administered by Boston University's Research Computing Services.
\section{Introduction} \vspace{5mm} The study of random matrix theory goes back to the principal component data analysis of J. Wishart in the 1920s--1930s and the revolutionary ideas of E. Wigner in quantum physics in the 1950s that linked statistical properties of the energy levels of heavy-nuclei atoms with spectral properties of Hermitian random matrices with independent components. In the 1960s, F. Dyson introduced three archetypal types of matrix ensembles: the Circular Orthogonal Ensemble (COE), the Circular Unitary Ensemble (CUE), and the Circular Symplectic Ensemble (CSE), see e.g. \cite{Dyson1}-\cite{Dyson4}. The probability density of the eigenvalues $ \{e^{i\*\theta_j}\}_{j=1}^N, \ \ 0\leq \theta_1, \ldots, \theta_N <2\*\pi,$ is given by \begin{align} \label{betaensemble} p_N(\overline{\theta})=\frac{1}{Z_{N}(\beta)}\prod_{1\leq j< k\leq N}\left|e^{i\theta_j}-e^{i\theta_k}\right|^{\beta}, \end{align} where $\beta=1,2,$ and $4$ correspond to COE, CUE, and CSE, respectively. For arbitrary $\beta>0,$ a (sparse) random matrix model with eigenvalue distribution following (\ref{betaensemble}) was introduced in \cite{KN}. The ensemble (\ref{betaensemble}) for arbitrary $\beta>0$ is known as the Circular Beta Ensemble (C$\beta$E). The Circular Unitary Ensemble ($\beta=2$) corresponds to the joint distribution of the eigenvalues of an $N\times N$ random unitary matrix $U$ distributed according to the Haar measure. In particular, the partition function is given by \begin{align} \label{partfun} Z_{N}(2)=(2\pi)^N \times N! \end{align} In \cite{paper}, pair counting statistics of the form \begin{align} \label{pairs} S_N(f)=\sum_{1\leq i\neq j\leq N} f(L_N\*(\theta_i-\theta_j)), \end{align} were studied for C$\beta$E (\ref{betaensemble}) for $1\leq L_N\leq N$ under certain smoothness assumptions on $f.$ The research in \cite{paper} was motivated by a classical result of Montgomery on pair correlation of zeros of the Riemann zeta function \cite{montgomery1}-\cite{montgomery2}.
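For $\beta=2$, the ensemble (\ref{betaensemble}) can be sampled as the eigenphases of a Haar-distributed unitary matrix, and the statistic (\ref{pairs}) with $L_N=1$ can then be checked against its Fourier-side expression. A minimal numerical sketch (assuming \texttt{numpy}; the test function $f(x)=\cos x$, whose only nonzero Fourier coefficients are $\hat f(\pm 1)=1/2$, is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    """Haar-distributed unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))   # phase fix makes the distribution exactly Haar

n = 40
theta = np.angle(np.linalg.eigvals(haar_unitary(n, rng)))

# pair-counting statistic S_N(f) for f(x) = cos(x), computed two ways:
# (1) the direct double sum over i != j,
# (2) via the trace t_{N,1} = sum_j exp(i theta_j): S_N(cos) = |t_{N,1}|^2 - N,
#     since the only nonzero Fourier coefficients of cos are f-hat(+-1) = 1/2.
direct = np.cos(theta[:, None] - theta[None, :]).sum() - n  # subtract the i == j terms
t1 = np.exp(1j * theta).sum()
via_trace = np.abs(t1) ** 2 - n
print(direct, via_trace)  # identical up to rounding
```

The phase correction after the QR step is what makes the sampled matrix exactly Haar-distributed rather than only approximately so.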
Assuming the Riemann Hypothesis, Montgomery studied the distribution of the ``non-trivial'' zeros on the critical line $1/2 +i\*\mathbb{R}.$ Rescaling the zeros $\{ 1/2 \pm i\*\gamma_n\},$ \[ \tilde{\gamma}_n= \frac{\gamma_n}{2\*\pi} \*\log(\gamma_n), \] Montgomery considered the statistic $\sum_{0<\tilde{\gamma}_j\neq \tilde{\gamma}_k <T} f(\tilde{\gamma}_j- \tilde{\gamma}_k )$ for large $T$ and sufficiently fast decaying $f$ with $\supp{\mathcal{F}(f)} \subset [-\pi, \pi],$ where $\mathcal{F}(f)$ denotes the Fourier transform of $f.$ The results of \cite{montgomery1}-\cite{montgomery2} imply that the two-point correlations of the (rescaled) critical zeros coincide in the limit with the local two-point correlations of the eigenvalues of a CUE random matrix. The results of \cite{paper} deal with the limiting behavior of (\ref{pairs}) in three different regimes, namely macroscopic ($L_N=1$), mesoscopic ($1\ll L_N \ll N$) and microscopic ($L_N=N$). In the unscaled $L_N=1$ case it was shown that \begin{align} \label{pairs1} S_N(f)=\sum_{1\leq i\neq j\leq N} f(\theta_i-\theta_j), \end{align} has non-Gaussian fluctuations in the limit $N\to \infty$, provided $f$ is a sufficiently smooth function on the unit circle. Namely, let $f$ be a real even integrable function on the unit circle. Denote the Fourier coefficients of $f$ as \begin{align} \label{FourierS} \hat{f}(k)=\frac{1}{2\*\pi}\*\int_0^{2\*\pi}f(x)\* e^{-i\*k\*x}\* dx. \end{align} Let us assume that $f'\in L^2(\mathbb{T})$ for $\beta=2$, $\sum_{k\in \mathbb{Z}}|\hat{f}(k)|\*|k|<\infty$ for $\beta<2, \ \sum_{k\in \mathbb{Z}}|\hat{f}(k)|\*|k|\*\log(|k|+1)<\infty$ for $\beta=4,$ and $\sum_{k\in \mathbb{Z}}|\hat{f}(k)||k|^2<\infty$ for $\beta \in (2,4)\cup (4, \infty).$ Then we have the following convergence in distribution as $N\rightarrow \infty$: $$S_N(f)-\mathbb{E} S_N(f)\xrightarrow{\hspace{2mm}\mathcal{D}\hspace{2mm} } \frac{4}{\beta}\sum_{k=1}^{\infty}\hat{f}(k)k(\varphi_k-1),$$ where $\varphi_m$ are i.i.d.
exponential random variables with $\mathbb{E}(\varphi_m)=1$. For $\beta=2$ the result was proven under the optimal condition $\sum_{k\in \mathbb{Z}}|\hat{f}(k)|^2\*|k|^2<\infty.$ The goal of this paper is to study the fluctuation of the pair counting statistic (\ref{pairs1}) when $\var(S_N(f))$ slowly grows with $N$ to infinity. \begin{definition} A positive sequence $\{V_N\}$ is said to be slowly varying in the sense of Karamata (\cite{BGT}) if \begin{align} \label{karamata} \lim_{N\rightarrow \infty}\frac{V_{\floor{\lambda N}}}{V_N}= 1, \quad \quad \forall\lambda>0, \end{align} where $\floor{m}$ denotes the integer part of $m$. \end{definition} The following notation will be used throughout the paper: \begin{align} \label{vvvv} V_N=\sum_{k=-N}^{N}|\hat{f}(k)|^2|k|^2. \end{align} \begin{thm} Let $f\in L^2(\mathbb{T})$ be a real even function such that $V_N=\sum_{k=-N}^{N}|\hat{f}(k)|^2|k|^2, \ N=1,2,\ldots, $ is a slowly varying sequence that diverges to infinity as $N\to \infty$. Then we have the following convergence in distribution: \[ \frac{ S_N(f)-\mathbb{E} S_N(f)}{\sqrt{2\*\sum_{-N}^{N} |\hat{f}(k)|^2\*|k|^2}}\xrightarrow{\hspace{2mm}\mathcal{D}\hspace{2mm} } \mathcal{N}(0,1). \] \end{thm} Linear statistics of the eigenvalues of random matrices $\sum_{j=1}^N f(\lambda_j) $ have been studied extensively in the literature. Johansson (\cite{johansson1}) proved for (\ref{betaensemble}) for arbitrary $\beta>0$ and sufficiently smooth real-valued $f$ that \[ \frac{\sum_{j=1}^N f(\theta_j) - N\*\hat{f}(0)}{\sqrt{\frac{2}{\beta}\*\sum_{-\infty}^{\infty} |\hat{f}(k)|^2\*|k|}} \] converges in distribution to a standard Gaussian random variable.
In particular, for $\beta=2$ he proved the result under the optimal condition on $f$, namely \[\sum_{-\infty}^{\infty} |\hat{f}(k)|^2\*|k|<\infty.\] If the variance of the linear statistic goes to infinity with $N,$ Diaconis and Evans \cite{DE} proved the CLT in the case $\beta=2$ provided the sequence $\{\sum_{-N}^{N} |\hat{f}(m)|^2\*|m| \}_{N \in \mathbb{N}}$ is slowly varying. For the results on the linear eigenvalue statistics in the mesoscopic regime $\sum_{j=1}^N f(L_N\*\theta_j), \ 1\ll L_N \ll N,$ we refer the reader to \cite{sasha}, \cite{BL}, \cite{lambert}, and references therein. For additional results on the spectral properties of C$\beta$E we refer the reader to \cite{DS}, \cite{johansson3}, \cite{BF}, \cite{Sasha}, \cite{HKOC}, \cite{meckes}, \cite{PZ}, \cite{webb}, \cite{WF1}, and references therein. The proof of the main result of the paper (Theorem 1.2) is given in the next section. Throughout the paper, the notation $a_N=O(b_N)$ means that the ratio $a_N/b_N$ is bounded from above in absolute value. The notation $a_N=o(b_N)$ means that $a_N/b_N\to 0$ as $N\to \infty.$ For non-negative quantities, we will occasionally also use the notation $a_N \ll b_N.$ Research has been partially supported by the Simons Foundation Collaboration Grant for Mathematicians \#312391. \vspace{5mm} \section{Proof of Theorem 1.2} \vspace{5mm} This section is devoted to the proof of Theorem 1.2. We start by recalling the formula for the variance of $S_N(f)$ from Proposition 4.1 of \cite{paper}: \begin{align} \label{v1} \var(S_N(f)) &=4\*\sum_{1\leq s\leq N-1}s^2|\hat{f}(s)|^2 + 4\* (N^2-N)\sum_{N\leq s} |\hat{f}(s)|^2 \\ &-4\sum_{\substack{1\leq s,t \\ 1\leq |s-t|\leq N-1\\ N\leq \max(s,t)}}(N-|s-t|)\hat{f}(s)\hat{f}(t)\hspace{2mm} -4\sum_{\substack{1\leq s,t\leq N-1\\ N+1\leq s+t}}((s+t)-N) \hat{f}(s)\hat{f}(t).
\nonumber \end{align} Our first goal is to show that the last two (off-diagonal) terms in the variance expression (\ref{v1}) are much smaller than $V_N =\sum_{s=-N}^{N} s^2|\hat{f}(s)|^2$ for large $N$ provided (\ref{karamata}) is satisfied. \begin{lemma} Let $V_N$ from (\ref{vvvv}) be a slowly varying sequence diverging to infinity as $N\to \infty$. Then, as $N\to\infty$, we have \begin{enumerate} \item[(i)] \begin{align*} \sum_{\substack{1\leq s,t\leq N\\ s+t\geq N+1}}s|\hat{f}(s)|\cdot|\hat{f}(t)| =o\of{V_N}, \end{align*} \item[(ii)] \begin{align*} (N+1)\sum_{\substack{s-t\leq N\\s\geq N+1\\1\leq t\leq N}}|\hat{f}(s)|\cdot|\hat{f}(t)|=o\of{V_N}, \end{align*} \item[(iii)] \begin{align*} N\sum_{\substack{|s-t|\leq N-1\\s,t\geq N}}|\hat{f}(s)|\cdot|\hat{f}(t)|\ =o\of{V_N}. \end{align*} \end{enumerate} \end{lemma} The proof of the lemma is somewhat similar to the proof of Lemma 4.4 in \cite{paper}. For the convenience of the reader, we give the full details of the proof below. \begin{pfo}{\textit{Lemma 2.1}} Proof of (i). Let $x_s=s|\hat{f}(s)|$ for $1\leq s \leq N$ and $X_N=\{x_s\}_{s=1}^N$.
Define a vector $Y_N:=X_N\*\mathds{1}_{(s> N/2)},$ so that the first $\floor{N/2}$ coordinates of $Y_N$ are zero and the rest coincide with the corresponding coordinates of $X_N.$ We note that \begin{align} \label{tom} 2\*||X_N||_2^2=V_N \ \text{and} \ ||Y_N||_2^2=o(V_N), \end{align} where $||X||_2$ denotes the Euclidean norm of a vector $X\in \mathbb{R}^N.$ The last bound follows from the condition (\ref{karamata}) on the slow growth of $V_N.$ We now write the off-diagonal variance term in (i) as a bilinear form: \begin{align} \label{4.15} \sum_{\substack{1\leq s,t\leq N\\ s+t\geq N+1}}s|\hat{f}(s)|\cdot|\hat{f}(t)| &=\sum_{t=1}^{N} x_t \cdot \left(\frac{1}{t}\sum_{s=N-t+1}^{N} x_s\right)\nonumber\\ &=\sum_{t=1}^N x_t \cdot \left(\frac{1}{t}\sum_{s=1}^t (U_N\*X_N)_s\right) =\langle X_N,A_N X_N\rangle, \end{align} with $A_N=B_N\*U_N$, where $U_N$ is a unitary permutation matrix given by $(U_N)_{s,t}= \mathds{1}_{(t=N-s+1)}$ and $B_N$ is a lower triangular matrix given by $(B_N)_{s,t}= (1/s)\mathds{1}_{(t\leq s)}$. The matrix $A_N$ is given by: \[ A_N = \renewcommand\arraystretch{1.25} \begin{pmatrix} 0 & 0 & 0 & 0 & \dots & 1 \\ 0 & 0 &\ddots &\ddots &\frac{1}{2} & \frac{1}{2} \\ 0 & \ddots & \ddots &\frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \frac{1}{N} & \frac{1}{N} &\frac{1}{N} & \frac{1}{N} &\dots & \frac{1}{N} \end{pmatrix} \] Taking into account that the upper-left $\floor{N/2}\times \floor{N/2}$ block of $A_N$ is zero, we can bound $\langle X_N , A_N X_N \rangle\leq \langle X_N, A_N Y_N\rangle + \langle Y_N , A_N X_N \rangle. $ It was shown in \cite{paper} that $||A_N||_{op}\leq 3,$ where $||A||_{op}$ denotes the operator norm. This implies that the expression in (\ref{4.15}) is bounded from above by $3\*||X_N||_2\*||Y_N||_2=o(V_N)$ by (\ref{tom}). This completes the proof of Lemma 2.1(i).\\ To prove part (ii), let $B_N$ be defined as in the proof of part $(i)$.
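As an aside, the structure of $A_N=B_N\*U_N$ and the operator-norm bound $||A_N||_{op}\leq 3$ quoted from \cite{paper} can be spot-checked numerically for moderate $N$; a sketch (assuming \texttt{numpy}):

```python
import numpy as np

def a_matrix(n):
    """A_N = B_N U_N: (B_N)_{s,t} = (1/s) 1_{t<=s}, (U_N)_{s,t} = 1_{t=N-s+1}."""
    s = np.arange(1, n + 1)
    b = np.tril(np.ones((n, n))) / s[:, None]   # lower-triangular Cesaro weights
    u = np.fliplr(np.eye(n))                    # anti-diagonal flip permutation
    return b @ u

# row s of A_N carries the value 1/s in its last s columns, as in the display
a200 = a_matrix(200)
op_norm = np.linalg.norm(a200, 2)   # spectral (operator) norm
print(op_norm)                      # stays below the bound 3 used in the proof
```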
Similarly, let $x_s=s|\hat{f}(s)|, \ 1\leq s\leq 2\*N,$ and $X_N=\{x_s\}_{s=1}^{2N}$. Now, $X_N$ is a $2N$-dimensional vector such that $||X_N||_2^2$ grows slowly in $N$. Define $Y_N:=X_N\*\mathds{1}_{(s> N)},$ so that the first $N$ coordinates of $Y_N$ are zero and the rest coincide with the corresponding coordinates of $X_N.$ Observe that \begin{align*} N\sum_{\substack{s-t\leq N\\s\geq N+1\\1\leq t\leq N}}|\hat{f}(s)|\cdot|\hat{f}(t)| &\leq \sum_{t=1}^N x_t\left(\frac{1}{t}\sum_{s=N+1}^{N+t}x_s\right)\\ &= \langle C_N X_N, D_N X_N\rangle, \end{align*} where $$ C_N=\renewcommand\arraystretch{1.25} \begin{pmatrix} I_N & 0\\ 0 & 0 \end{pmatrix} \hspace{5mm}\text{and}\hspace{5mm} D_{N} = \renewcommand\arraystretch{1.25} \begin{pmatrix} 0 & B_N\\ 0 & 0 \end{pmatrix}$$ We note that $||D_N||_{op}\leq 3$ and $||C_N||_{op}=1.$ Once again: $$\langle C_N X_N, D_N X_N\rangle= \langle C_N X_N, D_N Y_N\rangle \leq 3\*||X_N||_2\*||Y_N||_2=o(V_N).$$ The proof of (ii) is completed.\\ Proof of (iii). We start by estimating the l.h.s. of (iii) from above by \begin{align} \label{myt} 2\*N\*\sum_{\substack{t-N+1 \leq s\leq N+t-1\\t\geq s\geq N}}|\hat{f}(s)|\cdot|\hat{f}(t)|. \end{align} As before, let $x_s=s|\hat{f}(s)|$ for $s\geq 1.$ Define $X=\{x_s\}_{s=1}^{\infty}$ and \\ $X^{(j)}=X\*\mathds{1}_{(j\*N\leq s< (j+1)\*N)}, \ j=0,1,2,\ldots.$ We can bound (\ref{myt}) from above by \begin{align} \label{jj} 2\* \sum_{t=N}^\infty x_t\left(\frac{1}{t}\sum_{s=t-N+1}^{t}x_s\right) = 2\*\sum_{j=1}^{\infty} \sum_{t=j\*N}^{(j+1)\*N-1} x_t\left(\frac{1}{t}\sum_{s=t-N+1}^{t}x_s\right), \end{align} One can write the second sum at the r.h.s. 
of (\ref{jj}) as \begin{align} \label{jjj} \sum_{t=j\*N}^{(j+1)\*N-1} x_t\left(\frac{1}{t}\sum_{s=t-N+1}^{t}x_s\right)= \langle X^{(j)}, R_{N,j} (X^{(j-1)}+X^{(j)}) \rangle, \end{align} where $R_{N,j}$ is a bounded linear operator such that \begin{align} \label{jjjj} (R_{N,j})_{t,s}=\frac{1}{t}\*\mathds{1}_{(t-N+1\leq s \leq t)}\*\mathds{1}_{(jN\leq t < (j+1)N)}. \end{align} The operator norm of $R_{N,j}$ is bounded from above by its Hilbert-Schmidt norm \begin{align} \label{j5} ||R_{N,j}||_{op}\leq ||R_{N,j}||_2 = \sqrt{N\*\sum_{t=jN}^{(j+1)N-1} \frac{1}{t^2}}\leq \sqrt{\frac{N^2}{j^2\*N^2}}=\frac{1}{j}. \end{align} Now, by the Cauchy-Schwarz inequality, the r.h.s. of (\ref{jjj}) can be bounded from above by \begin{align} \label{j6} &\langle X^{(j)}, R_{N,j} (X^{(j-1)}+X^{(j)}) \rangle \leq ||X^{(j)}||_2\*||R_{N,j}||_{op} \* (||X^{(j-1)}||_2 + ||X^{(j)}||_2)\\ \label{j7} &\leq \frac{1}{j}\*||X^{(j)}||_2^2 +\frac{1}{j}\*||X^{(j)}||_2\* ||X^{(j-1)}||_2. \end{align} Summing up the r.h.s. of the last inequality with respect to $j\geq 1$ gives $o(V_N).$ Indeed, $2\*||X^{(0)}||_2^2=V_N$ and summation by parts gives \begin{align} \label{sumparts} \sum_{j=1}^{\infty}\frac{1}{j}\*||X^{(j)}||_2^2 \leq \sum_{j=1}^{\infty} \frac{1}{j^2}\*(V_{j\*N}-V_N). \end{align} The condition (\ref{karamata}) on the slow growth of $V_N$ implies that the r.h.s. of (\ref{sumparts}) is $o(V_N)$. To sum the second term in (\ref{j7}), we write \begin{align} \label{0330} \sum_{j=1}^{\infty} \frac{1}{j}\*||X^{(j)}||_2\* ||X^{(j-1)}||_2= ||X^{(1)}||_2\* ||X^{(0)}||_2 + \sum_{j=2}^{\infty} \frac{1}{j}\*||X^{(j)}||_2\* ||X^{(j-1)}||_2. \end{align} The first term on the r.h.s. of (\ref{0330}) is $\sqrt{(V_{2N}-V_{N})\*V_N}=o(V_N).$ To deal with the second term, we use the Cauchy-Schwarz inequality and proceed as in (\ref{sumparts}). This completes the proof of Lemma 2.1. \end{pfo} \begin{pfo}{\textit{Theorem 1.2}}\\ We now proceed to finish the proof of Theorem 1.2.
We recall that \begin{align} \label{formula} S_N(f)=\sum_{1\leq i\neq j\leq N}f(\theta_i-\theta_j)=2\sum_{k=1}^{\infty}\hat{f}(k)\left|\sum_{j=1}^N\exp\of{ik\theta_j}\right|^2+ \hat{f}(0)\*N^2-N\*f(0). \end{align} Denote by $t_{N,k}$ the trace of the $k$-th power of a CUE matrix, i.e. \begin{align} \label{kkk} t_{N,k}:=\sum_{j=1}^N e^{i\*k\*\theta_j}, \ \ k=0,\pm 1, \pm 2, \ldots. \end{align} Then \begin{align} &\frac{S_N(f)-\mathbb{E} S_N(f)}{\sqrt{V_N}}=\frac{2}{\sqrt{V_N}}\*\sum_{k=1}^{\infty}\hat{f}(k)\*(|t_{N,k}|^2-\mathbb{E} |t_{N,k}|^2) =\nonumber \\ \label{aaa} &\frac{2}{\sqrt{V_N}}\*\sum_{k=1}^{\floor{N/M_N}}\hat{f}(k)\*(|t_{N,k}|^2-\mathbb{E} |t_{N,k}|^2) +\frac{2}{\sqrt{V_N}}\sum_{\floor{N/M_N}+1}^{\infty}\hat{f}(k)\*(|t_{N,k}|^2-\mathbb{E} |t_{N,k}|^2). \end{align} Here $\{M_N\}_{N=1}^{\infty}$ is a positive integer-valued sequence sufficiently slowly growing to infinity as $N\to \infty$, in such a way that \begin{align} \label{Kara} \lim_{N\to \infty} \frac{V_{\floor{N\*M_N}}}{V_N}=\lim_{N\to \infty} \frac{V_{N}}{V_{\floor{N/M_N}}}=1. \end{align} The existence of such a sequence follows from (\ref{karamata}). It follows from Lemma 2.1 that the second moment of the second term on the r.h.s. of (\ref{aaa}) is going to zero as $N \to \infty.$ We formulate this result in the next lemma. \begin{lemma} \begin{align} \label{lemma22} \var \left(\sum_{\floor{N/M_N}+1}^{\infty}\hat{f}(k)\*|t_{N,k}|^2\right)=o(V_N). \end{align} \end{lemma} \begin{pfo}{\textit{Lemma 2.2}} It follows from (\ref{v1}) that \begin{align} \label{v22} &\var \left(\sum_{\floor{N/M_N}+1}^{\infty}\hat{f}(k)\*|t_{N,k}|^2\right) =4\*\sum_{N/M_N<s\leq N-1}s^2|\hat{f}(s)|^2 + 4\* (N^2-N)\sum_{N\leq s} |\hat{f}(s)|^2 \\ &-4\sum_{\substack{N/M_N\leq s,t \\ 1\leq |s-t|\leq N-1\\ N\leq \max(s,t)}}(N-|s-t|)\hat{f}(s)\hat{f}(t)\hspace{2mm} -4\sum_{\substack{N/M_N\leq s,t\leq N-1\\ N+1\leq s+t}}((s+t)-N) \hat{f}(s)\hat{f}(t). \nonumber \end{align} The first term on the r.h.s.
of (\ref{v22}) is equal to $2\*(V_N-V_{\floor{N/M_N}})$ and is $o(V_N)$ by (\ref{Kara}). The last two terms on the r.h.s. of (\ref{v22}) are $o(V_N)$ by Lemma 2.1. Finally, the second term on the r.h.s. of (\ref{v22}) is bounded from above by \begin{align} 4\*N^2\*\sum_{N\leq s} |\hat{f}(s)|^2\leq 2\* \sum_{j=1}^{\infty} \frac{1}{j^2}\*(V_{(j+1)\*N}-V_{j\*N})=o(V_N), \end{align} where the last estimate follows from (\ref{karamata}). This completes the proof of Lemma 2.2. \end{pfo} To finish the proof of the theorem, we have to show that the first term on the r.h.s. of (\ref{aaa}) converges in distribution to a standard Gaussian random variable. To do this, we first show that the first $\floor{M_N/2}$ moments of \begin{align} \label{part1} \frac{2}{\sqrt{V_N}}\*\sum_{k=1}^{\floor{N/M_N}}\hat{f}(k)\*(|t_{N,k}|^2-\mathbb{E} |t_{N,k}|^2) \end{align} coincide with the first $\floor{M_N/2}$ moments of \begin{align} \label{expsum} \frac{2}{\sqrt{V_N}}\*\sum_{k=1}^{\floor{N/M_N}}\hat{f}(k)k(\varphi_k-1), \end{align} where $\varphi_k, \ k\geq 1,$ are i.i.d. exponential random variables. \begin{lemma} Let $m$ be a positive integer such that $1\leq m<\frac{M_N}{2}.$ Then \begin{align} \label{momenty} \mathbb{E} \left( \frac{2}{\sqrt{V_N}}\*\sum_{k=1}^{\floor{N/M_N}}\hat{f}(k)\*(|t_{N,k}|^2-\mathbb{E} |t_{N,k}|^2)\right)^m=\mathbb{E} \left(\frac{2}{\sqrt{V_N}}\* \sum_{k=1}^{\floor{N/M_N}}\hat{f}(k)k(\varphi_k-1)\right)^m. \end{align} \end{lemma} \begin{pfo}{\textit{Lemma 2.3}} The formula (\ref{momenty}) follows from the identity \begin{align} \label{identity} \mathbb{E} \prod_{i=1}^l |t_{N,k_i}|^2 =\mathbb{E} \prod_{i=1}^l k_i\*\varphi_{k_i}, \end{align} provided \begin{align} \label{condition} 2\*\sum_{i=1}^l |k_i|\leq N, \ \ 0<k_1, \ldots, k_l. 
\end{align} The identity (\ref{identity}) under the condition (\ref{condition}) was established in \cite{DS} and \cite{sasha} (see also \cite{jm}), where it was shown that a large number of the joint moments/cumulants of \[ k^{-1/2}\*t_{N,k}=k^{-1/2}\*Tr (U^k), \ \ k\geq 1, \] coincide with the corresponding joint moments/cumulants of a sequence of i.i.d. standard complex Gaussian random variables. Namely, if we denote by $\kappa(t_{N,k_1}, \ldots, t_{N,k_n})$ the joint cumulant of $\{t_{N,k_j}, \ 1\leq j\leq n\}$ then \begin{align} \label{kumulyanty} \kappa(t_{N,k_1}, \ldots, t_{N,k_n})=0 \end{align} if at least one of the following two conditions is satisfied: \begin{align*} & (i) \ \ n\geq 1, \ \ \text{and} \ \sum_{j=1}^n k_j \neq 0 \\ & (ii) \ \ n>2, \ \ \sum_{j=1}^n k_j =0, \ \ \text{and} \ \sum_{j=1}^n |k_j|\leq N. \end{align*} In addition, $\kappa(t_{N,k}, t_{N,-k})=\min(|k|, N).$ We refer the reader to Lemma 5.2 of \cite{paper} for the details. Taking into account that the absolute value squared of a standard complex Gaussian random variable is distributed according to the exponential law we obtain (\ref{identity}-\ref{condition}). This finishes the proof of Lemma 2.3. \end{pfo} Now, the proof of Theorem 1.2 immediately follows from the following standard lemma. \begin{lemma} For any $t\in \mathbb{R}$ \begin{align} \mathbb{E} \exp\left( \frac{t}{\sqrt{\sum_{k=1}^N a_k^2}}\* \sum_{k=1}^{N} a_k\*(\varphi_k-1) \right) \to \exp(t^2/2) \end{align} as $N\to \infty,$ provided \begin{align} \label{koef} \sum_{k=1}^{\infty} a_k^2 =\infty \ \ \text{and} \ \ \max_{1\leq k\leq N} |a_k|=o\left(\sqrt{\sum_{k=1}^N a_k^2}\right). \end{align} \end{lemma} \begin{pfo}{\textit{Lemma 2.4}} The result of Lemma 2.4 follows from standard direct computations using independence of $\varphi_k$'s.
\end{pfo} Setting $a_k=\hat{f}(k) \*k, \ \ k\geq 1$ and applying Lemma 2.4, we obtain the convergence of the exponential moment (and, hence, all moments) of $\sqrt{\frac{2}{V_N}}\* \sum_{k=1}^{\floor{N/M_N}}\hat{f}(k)k(\varphi_k-1)$ to that of a standard real Gaussian random variable. Thus, both $$\sqrt{\frac{2}{V_N}}\*\sum_{k=1}^{\floor{N/M_N}}\hat{f}(k)k(\varphi_k-1)$$ and $$\sqrt{\frac{2}{V_N}}\*\sum_{k=1}^{\floor{N/M_N}}\hat{f}(k)\*(|t_{N,k}|^2-\mathbb{E} |t_{N,k}|^2)$$ converge in distribution to a standard real Gaussian random variable. Since the second term in (\ref{aaa}) goes to $0$ in $L^2$, we conclude that Theorem 1.2 is proven. \end{pfo}
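The limit law of Lemma 2.4, and with it the conclusion of Theorem 1.2, can be illustrated by simulation. A sketch (assuming \texttt{numpy}) with the illustrative choice $a_k=k^{-1/2}$, corresponding to $\hat{f}(k)=|k|^{-3/2}$; here $\sum_{k\leq N} a_k^2=H_N\sim \log N$ diverges slowly (so $V_N=2H_N$ is slowly varying) and $\max_k |a_k|=1=o(\sqrt{H_N})$, so both conditions in (\ref{koef}) hold:

```python
import numpy as np

rng = np.random.default_rng(1)

N, M = 1000, 5000
a = 1.0 / np.sqrt(np.arange(1.0, N + 1))   # a_k = k^{-1/2}: sum a_k^2 = H_N -> infinity
norm = np.sqrt((a ** 2).sum())             # sqrt(H_N); note max |a_k| = 1 = o(norm)

# M independent copies of Z = sum_k a_k (phi_k - 1) / norm, phi_k i.i.d. Exp(1)
phi = rng.exponential(1.0, size=(M, N))
Z = (phi - 1.0) @ a / norm

print(Z.mean(), Z.var())            # both close to the Gaussian values 0 and 1
print(np.mean(np.abs(Z) < 1.96))    # close to 0.95, the standard normal value
```

The exact mean and variance of each $Z$ are $0$ and $1$ by construction; the simulation shows that the normalized sum is also close to Gaussian in distribution, as the lemma asserts.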
\section{Introduction} \label{sec:intro} Supernovae (SNe) are stellar explosions divided into two main categories: core-collapse (CC) and Type Ia SNe. CC SNe arise from massive stars $\gtrsim$8 $M_{\odot}$ that undergo CC when the iron core cannot be supported by nuclear fusion \citep[e.g.,][]{Ib74,ET04,Sm09}. SNe Ia are the thermonuclear explosions of a carbon-oxygen white dwarf (WD) triggered by mass transfer from a companion, which could be a non-degenerate hydrogen- or helium-burning star or another WD (see \citealt{maoz14} for a review). SNe are usually typed using their optical spectra around maximum light, days after explosion (see reviews by e.g., \citealt{1997ARA&A..35..309F, 2017hsn..book..195G}), and hundreds of SNe are discovered per year through dedicated surveys (e.g. ASAS-SN: \citealt{2014ApJ...788...48S, Ho17a,Ho17b,Ho17c,Ho18}; ATLAS: \citealt{Ton18}; Pan-STARRs: \citealt{2016arXiv161205243F,2016arXiv161205560C, 2018ApJ...857...51J}, ZTF: \citealt{Smi14,Du18,Kul18} and YSE: \citealt{2019ATel13330....1J}). However, these objects are often too distant ($\sim$1--100~Mpc) to resolve the SN ejecta and the environment of the progenitor star. In this context, supernova remnants (SNRs), which are the leftover structures of SNe that happened hundreds or thousands of years ago, provide a complementary close view of the explosive endpoint of stellar evolution. In particular, SNRs give valuable information about SN progenitors. At X-ray wavelengths, strong emission lines from shocked SN ejecta can be used to probe the nucleosynthetic products \citep[e.g.][]{Ba08a,Park13,Ya15,MR17}. Additionally, their X-ray spectra and morphologies depend on the ejecta, explosion energetics, and the surrounding circumstellar material left by the progenitor \citep[e.g.][]{Ba03,Ba07,Lop09,Lop11,Pat12,Pat17,Woo17,Woo18}. 
The most reliable ways to connect SNRs to their progenitors are via detection of an associated pulsar \citep{He68,Ta99} or through light echoes \citep[e.g.,][]{Res05, Res08a,Res08b}. Another approach is to examine the stellar populations surrounding these sources (e.g., \citealt{Ba09,Au18}). SNRs can also be typed from their abundance ratios (e.g., \citealt{reynolds07,katsuda18}), morphologies \citep{Lop09,Lop11,Peters13,Lopez18}, Fe-K centroids \citep{Ya14a}, and absorption line studies (e.g., \citealt{hamilton00,fesen17}). However, even with these varied techniques, the explosive origin of some SNRs remains uncertain, such as the Milky Way SNR 3C~397 (G41.1$-$0.3). \citet{Sa00} suggested that 3C~397 arose from a CC SN based on its proximity to molecular clouds and on enhanced abundances of intermediate-mass elements from {\it ASCA} observations. By contrast, \cite{Ya14a} and \cite{Ya15} found that a Type~Ia origin was more likely, given 3C~397's Fe K-shell centroid and its exceptionally high abundances of neutron-rich, stable iron-group elements (Cr, Mn, Ni, Fe; though these elements have been detected in CC SNRs, e.g., \citealt{sato20}). Recent efforts to determine its progenitor metallicity \citep{Da17,MR17} have assumed a Type Ia nature. Although its exact age is uncertain, 3C 397 is likely ${\sim} \, 1350-5000$ years old \citep[][]{Sa05,Lea16}, implying it is dynamically more evolved than other well-known Type Ia SNRs (e.g. G1.9$+$0.3, 0509$-$67.5, Tycho, Kepler, SN 1006). The X-ray emission from 3C~397 is thermal in nature without a non-thermal component \citep{Ya15}, unlike some other Type Ia SNRs (e.g., G1.9$+$0.3: \citealt{zoglauer15}; Tycho: \citealt{lopez15}). Its X-ray morphology is quite irregular \citep{Sa05}, and \cite{Lee19} suggested that it likely results from interaction with a dense surrounding medium rather than an asymmetric explosion.
3C~397 is among the class of mixed-morphology SNRs \citep{Rho98}, which are center-filled in X-rays and have a shell-like morphology in the radio. 3C~397 is expanding into a high-density ambient medium (with $n_{\rm amb}\sim 2$--$5$~cm$^{-3}$; \citealt{Lea16}), with a strong westward gradient. Although this environment is more consistent with a CC SNR, other Type Ia SNRs have signs of interaction: e.g., Tycho may be interacting with a nearby molecular cloud \citep{LeeKT04,Zh16}, and N103B has CO clouds along its southeastern edge \citep{sano18}. In this paper, we aim to better constrain the explosive origin of 3C~397 using the observed emission in the full X-ray band (0.7--10~keV) and comparing it to synthetic SNR spectra generated from both Type Ia and CC explosion models. The paper is organized as follows. In Section~\ref{sec:analysis}, we present the X-ray observations of 3C 397 and the spectral-fitting process. In Section~\ref{sec:discussion}, we compare the observational and synthetic results. Finally, in Section~\ref{sec:conclusions}, we summarize the conclusions. \section{Observations and data analysis}\label{sec:analysis} Following the previous studies by \cite{Ya14a} and \cite{Ya15}, we take advantage of the high spectral resolution of the X-ray Imaging Spectrometer (XIS) on board \textit{Suzaku} to measure the centroids and fluxes of all Ly$\alpha$ and K$\alpha$ emission lines. We analyze \textit{Suzaku} observations \texttt{505008010} and \texttt{508001010}, which were taken on 2010 October 24 and 2013 October 30, with total exposure times of 69 and 103 ks, respectively. We use \texttt{HEASOFT} version 2.12 and reprocess the unfiltered public data using the \textit{aepipeline}, the most up-to-date calibration files, and the standard reduction procedures\footnote{\url{http://heasarc.nasa.gov/docs/suzaku/processing/criteria xis.html}}.
We extract spectra of the entire SNR from both the front-illuminated (XIS0, XIS3) and back-illuminated (XIS1) CCDs using an elliptical region and \texttt{XSELECT} version 2.4d. For the background spectrum, we extract spectra from the full field-of-view of the XIS observations, excluding the calibration regions and the SNR. To generate the redistribution matrix files (RMF) and ancillary response files (ARF), we use the standard \textit{Suzaku} analysis tools \textit{xisrmfgen} and \textit{xisarfgen}, respectively. Due to an error with the ARF analysis pipeline, we are unable to extract an XIS1 spectrum from ObsID \texttt{508001010} and thus do not use this spectrum in our analysis. We also remove (from both the source and background spectra) the contribution of the non-X-ray (i.e., instrumental) background (NXB) that arises from charged particles and hard X-rays \citep{Ta08} interacting with the detectors. To simulate the instrumental background, we use \textit{xisnxbgen} \citep{Ta08} to generate an NXB model and then subtract it from our source and background spectra, similar to what was done in, e.g., \citet{Au15}. Rather than merging the spectra together, we fit our background-subtracted spectra simultaneously using \texttt{XSPEC}\footnote{\url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/}} \citep[][version 12.10.1]{Ar96} using the standard atomic database. We analyze a broad energy range ($0.7-10.0$~keV) to measure the centroids of all prominent Ly$\alpha$ and K$\alpha$\ lines in the 3C~397 data. We model the X-ray continuum using two absorbed bremsstrahlung components (XSPEC model: \textsc{tbabs*(bremss+bremss)}), one soft (with temperature $kT_{\rm s}$) and one hard (with temperature $kT_{\rm h}$). For the emission lines, we include phenomenological Gaussian components with centroid energies, widths, and normalizations tied among the five source spectra. 
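The phenomenological approach above (a smooth continuum plus Gaussian lines, with the line centroid as a free parameter) can be sketched outside \texttt{XSPEC} with a simple least-squares fit. The sketch below is illustrative only: the continuum shape is a crude bremsstrahlung-like form and all parameter values are assumptions, not the best-fit values of this work.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy model: bremsstrahlung-like continuum plus one Gaussian emission line.
def model(E, norm_c, kT, norm_l, E0, sigma):
    continuum = norm_c * np.exp(-E / kT) / E                    # crude bremss shape
    line = norm_l * np.exp(-0.5 * ((E - E0) / sigma) ** 2)      # Gaussian line
    return continuum + line

rng = np.random.default_rng(0)
E = np.linspace(1.5, 2.2, 400)              # keV, a window around Si K-alpha
truth = (1.0, 1.6, 0.5, 1.853, 0.02)        # assumed "true" parameters (illustrative)
counts = model(E, *truth) + rng.normal(0.0, 0.01, E.size)

popt, pcov = curve_fit(model, E, counts, p0=(1.0, 1.5, 0.4, 1.85, 0.03))
centroid, centroid_err = popt[3], np.sqrt(pcov[3, 3])
print(f"fitted centroid: {centroid * 1e3:.0f} eV")
```

The fitted centroid recovers the assumed 1853 eV input; in the actual analysis the same idea is applied jointly to the five XIS spectra with tied line parameters.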
We freeze the column density to $N_{\rm H}=3.5 \times 10^{22}$~cm$^{-2}$ (as derived from our physical model listed in Table \ref{table:3C397_vnei}), similar to \citet{Ya14a}, who used $N_{\rm H} = 3 \times 10^{22}$~cm$^{-2}$ based on the results of \citet{Sa05}. We adopt solar abundance values from \cite{wilms00}. Tables~\ref{table:3C397_lines} and~\ref{table:3C397_cont} show the best-fit results, including the centroid energies, line fluxes, and the thermal plasma properties. The spectra and the mean best-fit model for 3C~397 are shown in Figure \ref{fig:Spectra_3C397}. \cite{Ya15}, who modeled the 5--9 keV spectrum from {\it Suzaku} to focus on the Fe-peak elements, derived fluxes for several emission lines common to our analysis: Cr K$\alpha$, Mn K$\alpha$, Fe K$\alpha$, and Ni K$\alpha$\ + Fe K$\beta$. Our centroid energies and fluxes are comparable to those measured by \cite{Ya15}, except our Ni K$\alpha$\ + Fe K$\beta$ centroid energy is 31~eV lower ($\sim$1.2 $\sigma$) and our flux is $\approx$60\% greater than in that previous work. This is likely a result of the different continuum temperatures used by \citet{Ya15} and in this analysis ($\sim$2.1 keV versus $\sim$1.6~keV, respectively), and of the fact that we used two components rather than one to model the X-ray emission of the SNR. However, these differences between \citet{Ya15} and our work do not affect our conclusions. \begin{figure} \includegraphics[width=\columnwidth]{f1.pdf} \caption{The five background-subtracted \textit{Suzaku} X-ray spectra of 3C~397, the best-fit model (as listed in Tables \ref{table:3C397_lines} and \ref{table:3C397_cont}), and the associated residuals.} \label{fig:Spectra_3C397} \end{figure} \begin{table} \begin{center} \caption{Best-fit model line parameters for the combined \textit{Suzaku} spectra of 3C~397. All uncertainties are 90\% confidence intervals. 
\label{table:3C397_lines}} \begin{tabular}{ccc} \hline \hline \noalign{\smallskip} Transition & Centroid energy & $\rm{\left< Flux \right>}$ \\ \noalign{\smallskip} & (eV) & $\rm{\left( ph \, cm^{-2} \, s^{-1} \right)}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\rm{Ne \,\, Ly\alpha}$ & $1027_{-4}^{+5}$ & $\left(1.22_{-0.12}^{+0.13}\right) \times 10^{-2}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\rm{Mg \, K\alpha}$ & $1345_{-1}^{+1}$ & $\left(4.03_{-0.12}^{+0.12}\right) \times 10^{-3}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\rm{Si \, K\alpha}$ & $1853_{-1}^{+1}$ & $\left(1.78_{-0.03}^{+0.03}\right) \times 10^{-3}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\rm{Si \, K\beta}$ & $2218_{-6}^{+4}$ & $\left(9.13_{-0.70}^{+0.70}\right) \times 10^{-5}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\rm{S \, K\alpha}$ & $2454_{-1}^{+1}$ & $\left(5.15_{-0.10}^{+0.10}\right) \times 10^{-4}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\rm{Ar \, K\alpha}$ & $3124_{-4}^{+4}$ & $\left(7.11_{-0.40}^{+0.40}\right) \times 10^{-5}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\rm{Ca \, K\alpha}$ & $3878_{-6}^{+9}$ & $\left(2.01_{-0.20}^{+0.20}\right) \times 10^{-5}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\rm{Cr \, K\alpha}$ & $5601_{-11}^{+12}$ & $\left(1.00_{-0.10}^{+0.10}\right) \times 10^{-5}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\rm{Mn \, K\alpha}$ & $6061_{-13}^{+21}$ & $\left(7.3_{-0.84}^{+0.96}\right) \times 10^{-6}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\rm{Fe \, K\alpha}$ & $6552_{-2}^{+3}$ & $\left(1.39_{-0.03}^{+0.03}\right) \times 10^{-4}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\rm{Ni \, K\alpha \, + \, Fe \, K\beta}$ & $7585_{-12}^{+13}$ & $\left(2.61_{-0.14}^{+0.14}\right) \times 10^{-5}$ \\ \hline \noalign{\smallskip} \end{tabular} \end{center} \vspace{-5mm} \end{table} \begin{table} \begin{center} \caption{Best-fit $N_{\rm H}$ and 
bremsstrahlung components in phenomenological spectral fit. All uncertainties are 90\% confidence intervals. \label{table:3C397_cont}} \vspace{-5mm} \begin{tabular}{lcccc} \hline \hline \noalign{\smallskip} $N_{\rm{H}}$ & $kT_{\rm{s}}$ & Norm$_{\rm{s}}^{\rm a}$ & $kT_{\rm{h}}$ & Norm$_{\rm{h}}^{\rm a}$ \\ \noalign{\smallskip} (10$^{22}$ cm$^{-2}$) & (keV) & & (keV) & \\ \noalign{\smallskip} \hline \noalign{\smallskip} 3.49 [frozen] & 0.24$\pm$0.01 & $14.0_{-0.6}^{+0.7}$ & 1.60$\pm$0.03 & 0.02$\pm$0.001 \\ \noalign{\smallskip} \hline \end{tabular} \end{center} $^{\rm a}$ The normalizations Norm$_{\rm s}$ and Norm$_{\rm h}$ are given in units of ($10^{-14}/4 \pi D^{2}$) $\int n_{\rm e} n_{\rm H} \, dV$ (cm$^{-5}$), where $D$ is the distance to the source (cm), $n_{\rm e}$ and $n_{\rm H}$ are the electron and hydrogen densities ($\mathrm{cm}^{-3}$), respectively. \end{table} \section{Results and Discussion}\label{sec:discussion} \subsection{Explosive Origin Constraints from Line Ratios and Centroid Energies} \begin{figure*} \centering \includegraphics[width=\textwidth]{f2.pdf} \vspace{-5mm} \caption{Emission line fluxes normalized to the Fe K$\alpha$\, flux versus Fe K$\alpha$\, centroid energy for the $M_{\rm Ch}$ models (green circles), sub-$M_{\rm Ch}$ models (blue squares), CC models (purple diamonds; from \citealt{Pat15}), and observed from 3C~397 (orange star). Lighter shades of blue and green correspond to greater ambient densities.} \label{fig:Flux_Ratios_3C397} \end{figure*} We compare the observational fluxes reported in Table \ref{table:3C397_lines} with theoretical models for the X-ray spectra of both Type Ia and CC SNRs. In contrast to \cite{Ya15}, who focused on the $M_{\rm{Mn}} / M_{\rm{Fe}}$ and $M_{\rm{Ni}} / M_{\rm{Fe}}$ mass ratios, we examine a broader range of metals that includes both intermediate-mass and Fe-peak elements. 
We use a grid of synthetic X-ray spectra \citep[see][for a detailed explanation]{MR18} for Type Ia SN models that assume a progenitor with a metallicity of $Z = 0.009$ ($\approx0.64~Z_{\sun}$), evolved into the SNR phase, similar to those used in previous studies of Type Ia SNRs \citep[e.g.,][]{Ba03,Ba05,Ba08b,MR18}. We analyze the synthetic spectra from the X-ray emitting ejecta in $M_{\rm Ch}$ and sub-$M_{\rm Ch}$ models \citep{Br19}, adopting three uniform ambient medium densities: $\rho_{\rm amb}$ = $10^{-24}$, $2\times10^{-24}$, and $5\times10^{-24}$~g cm$^{-3}$ (corresponding to ambient number densities of $n_{\rm amb} = 1$, 2, and 5~cm$^{-3}$). These values are consistent with the estimated densities around 3C~397 of $n_{\rm amb} \ {\sim} \ 2-5 \, \rm{cm}^{-3}$ \citep{Lea16}. In addition, we consider synthetic X-ray spectra from single-star CC explosion models (specifically, models s25D and s12D from \citealt{heger10}) adopted in previous SNR studies \citep{Lee14,Pat15}. In total, we produce eight SNR models using two sets of mass-loss rates and wind velocities ($10^{-5} \, M_{\odot}\rm{ \, yr^{-1}}$, 10 $\rm{km \, s^{-1}}$ and $2 \times 10^{-5} \, M_{\odot}\rm{ \, yr^{-1}}$, 20 $\rm{km \, s^{-1}}$) and four CC SN ejecta profiles. The four ejecta profiles are from two stars with initial masses of 12 and 25 $M_{\odot}$ (that lose $\sim$3 and 13 $M_{\odot}$, respectively, by the onset of CC), a 6 $M_{\odot}$ He star enclosed in a 10~$M_{\odot}$ H envelope (tailored to mimic SN~1987A), and an 18 $M_{\odot}$ main-sequence star with a mass loss of 15 $M_{\odot}$ by CC (matched to the Type IIb SN~1993J). We note that while this set of CC SNR models is not comprehensive, it is diverse enough to be representative and has been shown to provide a good match to the bulk dynamics of most CC SNRs \citep{Pat15,Pat17}. 
We calculate centroid energies and line fluxes from the unconvolved, differential photon fluxes of these SNR model spectra using Equations 2 and~3 of \citet{MR18}. For each transition, we set the energy integration range to the 3$\sigma$ limits of the corresponding Gaussian profile in the convolved {\it Suzaku} spectra. Figure \ref{fig:Flux_Ratios_3C397} shows the emission line ratios (relative to Fe K$\alpha$) versus the Fe K$\alpha$\ centroid energy derived from the synthetic spectra. We normalize the ratios relative to Fe K$\alpha$\ because that line is detected in many SNRs and is useful to characterize SN progenitors \citep{Ya14a,Pat15,PatB17,MR18}. In SNRs, the Fe K$\alpha$\, flux is sensitive to the electron temperature and ionization timescale, and the Fe K$\alpha$\, centroid energy is an excellent tracer of the mean charge state of Fe \citep{Vi12,Ya14b,Ya14a}. As a consequence, the latter can be used to distinguish whether SNRs arise from Type Ia or CC SNe \citep{Ya14a}, with the former having centroids $<$6550~eV and the latter having centroids $>$6550~eV.\footnote{We note that one possible exception is the SNR W49B, which has a Fe K$\alpha$\ centroid of 6663$\pm$1~eV reported by \cite{Ya14a}, and it is debated whether the originating explosion was a Type Ia \citep{zhou18} or CC SN \citep{lopez09a,lopez13}.} We find that 3C~397 has a Fe K$\alpha$\ centroid of 6552$^{+3}_{-2}$~eV, consistent with the value derived by \cite{Ya14a} of 6556$^{+4}_{-3}$~eV and at the boundary that distinguishes Type Ia from CC progenitors. We find that at the measured value of the Fe K$\alpha$\ centroid, the observed line flux ratios derived for 3C~397 are broadly consistent with the Type Ia $M_{\rm Ch}$ and sub-$M_{\rm Ch}$ models and are incompatible with the CC models. 
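The observed ratios plotted in Figure \ref{fig:Flux_Ratios_3C397} follow directly from the measured fluxes in Table \ref{table:3C397_lines}; a minimal sketch of that normalization step (fluxes copied from the table, in ph cm$^{-2}$ s$^{-1}$):

```python
# K-alpha line fluxes from Table 1 (ph cm^-2 s^-1); ratios are taken
# relative to Fe K-alpha, as in Figure 2.
fluxes = {
    "Mg Ka": 4.03e-3, "Si Ka": 1.78e-3, "S Ka": 5.15e-4,
    "Ar Ka": 7.11e-5, "Ca Ka": 2.01e-5, "Cr Ka": 1.00e-5,
    "Mn Ka": 7.3e-6, "Fe Ka": 1.39e-4, "Ni Ka + Fe Kb": 2.61e-5,
}
ratios = {line: f / fluxes["Fe Ka"] for line, f in fluxes.items()}
for line, r in sorted(ratios.items(), key=lambda kv: -kv[1]):
    print(f"{line:>14s} / Fe Ka = {r:8.3f}")
```

For example, Si K$\alpha$/Fe K$\alpha$ $\approx 12.8$ and Ca K$\alpha$/Fe K$\alpha$ $\approx 0.14$; these are the observational points compared against the model grids.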
The Mg/Fe flux ratio of 3C~397 is $\sim$50\% greater than our Type Ia model predictions, but the Si/Fe, S/Fe, and Ar/Fe flux ratios are consistent with both Type Ia scenarios as long as $\rho_{\rm amb} \, \gtrsim \, (2.0-5.0) \times 10^{-24} \, \rm{g \, cm^{-3}}$. The Ca/Fe flux ratio in 3C~397 is a factor of $\sim$2.5 below our model predictions. Previous studies comparing the derived emission properties of Ca K$\alpha$\ to hydrodynamical models have found similar inconsistencies. For example, \cite{MR17} showed that the Ca/S mass ratio measured from X-ray spectra of Type Ia SNRs cannot be reproduced with the standard reaction rates used in most SN Ia explosion models. Both \citet{Ya15} and \cite{MR17} pointed out that the (Ni K$\alpha$ + Fe K$\beta$)/Fe K$\alpha$\ flux ratio is exceptionally large for a Type Ia SNR. \citet{MR17} showed that this large ratio is suggestive of a high-metallicity progenitor, which may also explain the anomalous Ca/Fe ratios seen for 3C~397. We note that discrepancies between the observed values and the models may be due to well-documented challenges in comparing simple explosion models to an entire X-ray spectrum. For example, one-dimensional hydrodynamic models cannot account for variations in interstellar absorption, non-thermal contributions, and background across the SNR (see \citealt{Ba03} and \citealt{Ba06}). \begin{figure*} \centering \includegraphics[width=\textwidth]{f3.pdf} \vspace{-5mm} \caption{Emission-line centroid energies versus expansion age for the $M_{\rm Ch}$ models (circles) and sub-$M_{\rm Ch}$ models (diamonds) at different ambient densities: $\rho_{\rm amb}$=[1.0, 2.0, 5.0] $\times10^{-24}$ g cm$^{-3}$. 
The shaded purple region corresponds to the best-fit emission-line centroid energy of 3C~397 as derived in Table \ref{table:3C397_lines}.} \label{fig:Centroids_3C397} \end{figure*} Figure \ref{fig:Centroids_3C397} shows the theoretical and observational centroid energies for the transitions depicted in Figure \ref{fig:Flux_Ratios_3C397}. These centroids tend to have higher energies for greater expansion ages and ambient densities. For Mg and Si, the observed values in 3C~397 are consistent with both $M_{\rm Ch}$ and sub-$M_{\rm Ch}$ Type Ia models of medium ambient densities and a wide range of ages ($\gtrsim 200-5000$ years). The centroid energies of S, Ar, Ca, Fe and Ni are more consistent with the highest ambient medium densities ($\rho_{\rm amb} = 5.0\times10^{-24}$~g~cm$^{-3}$), suggesting that 3C~397 is in a dense environment, consistent with the density found by \cite{Lea16} and with its irregular morphology \citep{Lee19}. While the centroid energies of S, Ar, and Ni can occur over a wide range of ages ($\gtrsim 700-5000$ years), the Ca and Fe centroids set the most stringent constraints. Our results suggest that 3C~397 has an age between 2000--4000 years, consistent with (but more constraining than) estimates reported in the literature (1350--5300 years: \citealt{Sa00,Sa05,Lea16,2018ApJ...866....9L}). To further explore the ionization state of the plasma, we extract the centroid energy as a function of the parent ion charge for all of the observed K$\alpha$\ transitions listed in Table~\ref{table:3C397_lines} (Mg, Si, S, Ar, Ca, Cr, Mn, Fe and Ni) using the \textit{AtomDB} database \citep{Fo12, Fo14}. Figure \ref{fig:Centroids_ATOMDB} shows these centroid energies and the values measured for 3C~397, including the corresponding ionization state for each transition. The derived centroids suggest that the plasma of 3C~397 is highly (but not fully) ionized, and that the charge of the Fe-peak ions saturates at a charge number of 20. 
These values are at the extreme end of observations of other Type Ia SNRs, though they are still lower than those found for CC SNRs \citep[cf. Figure 1 of][]{Ya14b}, supporting the Type Ia progenitor origin of 3C~397. \begin{figure*} \centering \includegraphics[width=\textwidth]{f4.pdf} \vspace{-3mm} \caption{K$\alpha$\, emission line centroid energies versus parent ion charge (charge number) from the \textit{AtomDB} database. The best-fit values for 3C~397 (Table~\ref{table:3C397_lines}) are shown as orange shaded regions. The Fe-peak elements are more ionized than the intermediate-mass elements and are more consistent with a Type Ia progenitor.} \label{fig:Centroids_ATOMDB} \end{figure*} \subsection{Explosive Origin Constraints from Plasma Ionization State and Metal Abundances} To further probe the explosive origin of 3C~397, we search for evidence of overionization (recombination) by fitting the $0.7-10$~keV spectrum with multiple non-equilibrium ionization (NEI) model components ({\sc vvrnei}). Overionization is a signature of rapid cooling that causes the ions to be stripped of more electrons than expected for the observed electron temperature of the plasma. This rapid cooling could arise from thermal conduction \citep{2002ApJ...572..897K}, adiabatic expansion \citep{1989MNRAS.236..885I}, or interaction with dense material \citep{2005ApJ...630..892D}. To date, overionization has only been detected in mixed-morphology SNRs (e.g., W49B: \citealt{Ka05,Oz09,Mic10,Lo13}; IC443: \citealt{Ka02,Ka05,2009ApJ...705L...6Y,Oh14, Ma17}), many of which have been classified as CC SNRs based on their elemental abundances, their morphologies, and the dense material in their environments. We find that the ejecta emission of 3C~397 is best described by an underionized plasma, in which the temperature of the electrons is greater than the ionization temperature, as opposed to an overionized plasma. 
However, we note that the absence of overionization does not exclude a CC origin, since many CC SNRs (such as Cassiopeia~A: \citealt{Hug00}) are also underionized. Finally, we aim to constrain the explosive origin of 3C~397 based on the abundance ratios from the $0.7-10$~keV {\it Suzaku} spectra. \cite{Sa05} analyzed a 66~ks {\it Chandra} observation of 3C~397 and found that the emission was ejecta-dominated and best fit by two NEI plasma components. However, due to low signal-to-noise, the derived metal abundances (e.g., of Si and Fe) were not well constrained. Subsequent work using the {\it Suzaku} observations of 3C~397 analyzed specific energy bands (e.g., 2--5 keV: \citealt{MR17}; 5--9 keV: \citealt{Ya14a,Ya15}) rather than the full X-ray spectrum. We find that three absorbed NEI plasma components (\textsc{tbabs*(nei+vnei+vvnei)}) best describe the spectra of 3C~397 (see Table~\ref{table:3C397_vnei} for the best-fit parameters). Here, we let the column density $N_{\rm H}$, ionization timescale $\tau$, normalization, and temperatures of each NEI component be free parameters. Due to the strong emission lines from Si, S, Ar, Ca, Cr, Mn, Fe, and Ni, the abundances of these elements were also allowed to vary, while all other elements in each component were set to solar. We find that the two hottest components have super-solar abundances and are associated with ejecta, whereas the coolest component has ISM (solar) abundances. The ionization timescale of the ISM component was frozen to $\tau = 5\times10^{13}$~s~cm$^{-3}$ as this parameter was unconstrained. We also add three Gaussians, two with centroid energies of 1.01$\pm$0.03 and 1.23$\pm$0.02~keV to compensate for large residuals that correspond to Ne Ly$\alpha$ and Ne {\sc x} (or Fe {\sc xxi}), respectively \citep{Fo12, Fo14}. It is possible that the Ne could be at a different temperature or ionization state from the rest of the plasma, causing it to not be fully captured by our NEI models. 
The third Gaussian, with a centroid energy of 6.43$\pm$0.01~keV, accounts for the low centroid of the Fe K$\alpha$\ emission line in 3C~397, which is lower than the $\sim$6.7 keV Fe peak energy assumed in the vnei/vvnei components. With the addition of the three Gaussians, our best fit yields $\chi^{2}_{\rm reduced} = 1.89$. \begin{table} \caption{The best-fit parameters from the physical model of the spectra. All uncertainties are 90\% confidence intervals.} \label{table:3C397_vnei} \begin{center} \begin{tabular}{lcc} \hline \hline Component & Parameter & Value \\ \hline tbabs & N$_{\rm H}$ ($\times10^{22}$ cm$^{-2}$) & 3.49$_{-0.04}^{+0.02}$ \\ \hline nei & $kT_{\rm s}$ (keV) & 0.22$\, \pm \,0.02$\\ & $\tau$ ($\times 10^{13}$ s cm$^{-3}$) & 5.00 [frozen] \\ & normalization$^{\rm a}$ ($\times10^{-1}$) & 7.65$\pm$0.6\\ \hline vnei & $kT_{\rm s}$ (keV) & 0.58$\pm$0.01\\ & Si & 3.37$_{-0.09}^{+0.11}$ \\ & S & 4.28$_{-0.14}^{+0.10}$ \\ & Ar & 6.56$_{-1.00}^{+0.60}$\\ & Ca & 12.4$_{-3.0}^{+2.0}$\\ & $\tau$ ($\times 10^{11}$ s cm$^{-3}$)& 4.07$_{-0.49}^{+0.59}$ \\ & normalization$^{\rm a}$ ($\times10^{-2}$)& 5.8$\pm$0.1 \\ \hline vvnei & $kT_{\rm h}$ (keV) & 1.89$\, \pm \,0.03$ \\ & Cr & 25.3$_{-4.2}^{+4.1}$ \\ & Mn & 57.7$_{-12}^{+9.3}$ \\ & Fe & 13.2$_{-0.7}^{+0.4}$\\ & Ni & 62.7$_{-6.2}^{+3.3}$ \\ & $\tau$ ($\times 10^{11}$ s cm$^{-3}$)& 1.05$_{-0.1}^{+0.4}$ \\ & normalization$^{\rm a}$ ($\times10^{-2}$)& 1.7$\pm$0.1\\ \hline & $\chi^{2}_{\rm reduced}$ & 1.89 \\ \hline \end{tabular} \end{center} $^{\rm a}$ The normalizations are given in units of ($10^{-14}/4 \pi D^{2}$) $\int n_{\rm e} n_{\rm H} \, dV$ (cm$^{-5}$), where $D$ is the distance to the source (cm), and $n_{\rm e}$ and $n_{\rm H}$ are the electron and hydrogen densities ($\mathrm{cm}^{-3}$), respectively. \end{table} We find super-solar abundances of metals in the two hottest components, with some elements (e.g., Cr, Mn, Ni) extremely enhanced relative to others (e.g., Fe). 
Super-solar abundances and ejecta-dominated emission are common among mixed-morphology SNRs \citep[e.g.,][]{La06,Uc12,Au14,Au15}. However, the extreme abundances of the Fe-peak elements \citep{Ya15,MR17} suggested that 3C~397 arose from a Chandrasekhar-mass progenitor that produced significant neutron-rich material during the explosion. We calculate the X-ray-emitting mass $M_{\rm X}$ swept up by the forward shock of 3C~397 using \hbox{$M_{X} = 1.4 m_{\rm H} n_{\rm H} f V$}, where $m_{\rm H}$ and $n_{\rm H}$ are the mass and number density of hydrogen, $V$ is the volume, and $f$ is the filling factor. We adopt a distance to the SNR of $D = 8.5$~kpc \citep{rana18} and a radius of 2.5\arcmin\ $\approx$ 6.2~pc. Based on the best-fit normalization of the ISM plasma and assuming $n_{\rm e} = 1.2 n_{\rm H}$, we find $n_{\rm H} = 4.4$~cm$^{-3}$, consistent with previous measurements in the literature \citep{Lea16} and the results from Section~\ref{sec:discussion}. The corresponding $M_{\rm X}$ is 148 $d^{5/2}~f^{1/2}$~$M_{\sun}$ (where $d$ is the distance scaled to 8.5~kpc), suggesting the SNR is in the Sedov-Taylor phase. Assuming that the reverse shock has heated all of the ejecta, we estimate the mass of ejecta by summing the mass of each element $M_{\rm i}$ given the measured abundances: $M_{\rm i}=[(a_{\rm i}-1)/1.4](n_{\rm i}/n_{\rm H})(m_{\rm i}/m_{\rm H})M_{\rm tot}$. Here $a_{\rm i}$ is the abundance of element $i$ listed in Table \ref{table:3C397_vnei}, $m_{\rm i}$ is the atomic mass of element $i$, $n_{\rm i}/n_{\rm H}$ is its ISM abundance relative to hydrogen, and $M_{\rm tot}$ is the total mass of the ejecta thermal components. Based on the abundances in Table \ref{table:3C397_vnei}, we find an ejecta mass of $\sim1.22\,d^{5/2}f^{1/2} M_{\sun}$, consistent with a Type Ia explosive origin. 
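The density and swept-up mass estimate above can be reproduced directly from the XSPEC normalization of the ISM component (Table \ref{table:3C397_vnei}), assuming a spherical remnant with filling factor $f = 1$ and $n_{\rm e} = 1.2\,n_{\rm H}$; this sketch recovers $n_{\rm H} \approx 4.3$~cm$^{-3}$ and $M_{\rm X} \approx 150\,M_{\sun}$, close to the quoted 4.4~cm$^{-3}$ and 148~$M_{\sun}$:

```python
import numpy as np

# Physical constants and unit conversions (cgs)
PC, KPC = 3.0857e18, 3.0857e21          # cm
M_H, M_SUN = 1.6726e-24, 1.989e33       # g

D = 8.5 * KPC                            # adopted distance (cm)
R = 6.2 * PC                             # radius (cm), 2.5 arcmin at 8.5 kpc
V = 4.0 / 3.0 * np.pi * R**3             # spherical volume (cm^3)
norm = 7.65e-1                           # XSPEC norm of the ISM (nei) component

# norm = 1e-14 / (4 pi D^2) * int n_e n_H dV  ->  solve for n_H with n_e = 1.2 n_H
EM = norm * 4.0 * np.pi * D**2 / 1e-14   # emission measure, int n_e n_H dV (cm^-3)
n_H = np.sqrt(EM / (1.2 * V))            # hydrogen number density (cm^-3)
M_X = 1.4 * M_H * n_H * V / M_SUN        # swept-up mass (M_sun), f = 1
print(f"n_H ~ {n_H:.1f} cm^-3, M_X ~ {M_X:.0f} M_sun")
```

The small difference from the quoted values reflects rounding of the radius and normalization.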
From the abundances listed in Table~\ref{table:3C397_vnei}, we calculate the mass ratios $M_{\rm Fe}/M_{\rm S} = 11.7^{+0.3}_{-0.2}$ and $M_{\rm Si}/M_{\rm S} = 1.05\pm0.01$. Here we have assumed that all ejecta have been shocked. These values are consistent with those of our most energetic sub-$M_{\rm Ch}$ Type Ia SN models, whereas the $M_{\rm Fe}/M_{\rm S}$ from our fits is $\gtrsim5\times$ the predictions from CC models of \cite{Pat15} and \cite{Suk16a}. The $M_{\rm Si}/M_{\rm Fe}$ ratio can be used to constrain the white dwarf progenitor mass \citep{2018ApJ...857...97M}. We find $M_{\rm Si}/M_{\rm Fe} = 0.09\pm0.002$ for 3C~397, which corresponds to a $\sim$1.06-1.15~$M_{\sun}$ white dwarf from the Bravo models presented in \cite{2018ApJ...857...97M}. \cite{Ya15} ruled out sub-$M_{\rm Ch}$ models for 3C~397 based on the Ni/Fe and Mn/Fe mass ratios, and our result is consistent with that conclusion. \section{Conclusions}\label{sec:conclusions} We analyze the {\it Suzaku} X-ray observations of SNR 3C~397 to constrain its explosive origin. We measure the centroid energies and line fluxes using a phenomenological model, and we compare the values to those derived from synthetic spectra produced by Type Ia and CC explosion models. We find 3C~397 is most consistent with a Type Ia SN scenario that occurred in a high-density ambient medium ($\rho_{\rm amb} \gtrsim$2--5$\times10^{-24}$~g~cm$^{-3}$) $\approx$2000--4000 years ago. We model the $0.7-10$~keV X-ray spectra using multiple NEI components, and we find that the ejecta are underionized and have super-solar abundances consistent with a Type Ia origin. Finally, we calculate elemental mass ratios and compare them to Type Ia and CC models. We show that these ratios are consistent with the former, and the $M_{\rm Si}/M_{\rm Fe}$ ratio suggests a white dwarf progenitor near $M_{\rm Ch}$. \\ \section*{Acknowledgments} Support for this work has been provided by the Chandra Theory award TM8-19004X. H.M.-R. 
acknowledges funding as a CCAPP Price Visiting Scholar, supported by the Dr. Pliny A. and Margaret H. Price Endowment Fund. H.M.-R. also acknowledges support from the NASA ADAP grant NNX15AM03G S01, a PITT PACC, a Zaccheus Daniel and a Kenneth P. Dietrich School of Arts \& Sciences Predoctoral Fellowship. KAA is supported by the Danish National Research Foundation (DNRF132). This research has made use of NASA's Astrophysics Data System (ADS, \url{http://adswww.harvard.edu/}). Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. {\it Software}: \texttt{FTOOLS} \citep{Bl95}, \texttt{XSPEC} \citep{Ar96}, \texttt{SAOIMAGE DS9} \citep{Jo03}, \texttt{ChN}\ \citep{Ell07,Pat09,Ell10,Pat10,Cas12,Lee14,Lee15}, \texttt{Matplotlib} \citep{Hun07}, \texttt{IPython} \citep{PeG07}, \texttt{Numpy} \citep{vaW11}, \texttt{PyAtomDB} (\url{http://atomdb.readthedocs.io/en/master/}), \texttt{Astropy} \citep{Astro13,Astro18}, \texttt{Python} (\url{https://www.python.org/}), \texttt{SciPy} (\url{https://www.scipy.org/}). \bibliographystyle{mnras_bib}
\section{Introduction} Ultrasound (US) B-mode is a widespread medical imaging technique that maps the acoustic impedance of tissue. With this technique, an ultrasound wave is transmitted by an acoustic transducer and is backscattered by changes in acoustic impedance at tissue interfaces. Measuring the analytic signal obtained from the ultrasound waves received on the transducer surface leads to the reconstruction of these acoustic impedance interfaces. Moreover, when ultrasound waves are scattered by acoustic impedance inhomogeneities, their contributions add up constructively or destructively on the transducer surface. The total intensity thus varies randomly due to these interferences \cite{cobbold2007foundations}. The phenomenon appears on reconstructed images as a granular pattern of variable size and intensity that looks spatially random: it is called ``acoustic speckle'' \cite{abbott1979acoustic}. The speckle can be treated as noise and consequently be reduced, or on the contrary be considered as a feature \cite{noble2006ultrasound}. Under the first point of view, quantitative ultrasound methods were proposed based on the backscattering coefficient \cite{lizzi1983theoretical,lizzi1987relationship,insana1990describing}, where the speckle is removed to obtain a measure that depends only on tissue acoustic properties; see \cite{goshal2013state} for further references. Under the second point of view, quantitative ultrasound methods were developed by exploiting the statistics of the echo envelope \cite{burckhardt1978speckle,wagner1983statistics}, which depend on the speckle and tissue characteristics, to discriminate tissue types; see \cite{destrempes2010critical,destrempes2013review,yamaguchi2013quantitative} for further references. In either case, quantitative ultrasound can be used to diagnose pathologies with ultrasound images. 
Another field of application is speckle tracking, which aims at estimating local displacements and tissue deformations; see for instance \cite{ophir1991elastography, hein1993current, lubinski1999speckle}. On the other hand, the Lorentz force electrical impedance tomography (LFEIT) method \cite{montalibet2002scanning,grasland2013LFEIT}, also known as magneto-acousto electrical tomography \cite{haider2008magneto}, is a medical imaging technique producing electrical conductivity images of tissues \cite{wen1998hall, roth2011role, ammari2014mathematical}. With this technique, an ultrasound wave is transmitted by an acoustic transducer in a biological tissue placed in a magnetic field. The movement of the tissue in the magnetic field due to the ultrasound propagation induces an electric current through the Lorentz force. The measurement of this current with electrodes in contact with the sample allows the reconstruction of images of electrical impedance interfaces. The goal of this work was to assess the presence of an ``acousto-electrical'' speckle in the LFEIT technique, similar to the acoustic speckle in US imaging, as suggested in a previous work \cite{haider2008magneto}. The first part of this work presents the theoretical similarity of the measured signals in these two imaging techniques. This similarity is then observed experimentally on an acoustic and electrical interface. Finally, a bovine sample is imaged using both methods to observe the two types of speckle. \section{Theoretical background} The goal of this section is to compare the mathematical framework of the LFEIT method with that describing radio-frequency signals in US imaging. \subsection{Local current density in Lorentz force electrical impedance tomography} \begin{figure}[!ht] \begin{center} \includegraphics[width=.8\linewidth]{Figure1.pdf} \caption{An ultrasound wave is transmitted in a conductive medium placed in a magnetic field. This induces an electric current $I_{tot}$ due to the Lorentz force. 
This current separates into two components: $I_{ins}$, which stays inside the medium, and $I_{meas}$, which is measured by electrodes in contact. The measured signal allows the reconstruction of images of electrical impedance interfaces.} \label{Figure1} \end{center} \end{figure} The LFEIT principle is illustrated in Figure \ref{Figure1}. In this technique, an ultrasound wave is transmitted along a unit vector $\mathbf{e}_z$ in a conductive medium placed in a magnetic field \textcolor{black}{along $\mathbf{e}_x$ of intensity $B_x$}. This induces an electric current due to the Lorentz force in the direction $\mathbf{e}_y=\mathbf{e}_z \times \textcolor{black}{\mathbf{e}_x}$. We assume that the conductive medium is moved by the ultrasound wave along the $\mathbf{e}_z$ direction so that all particles within the medium are moved with a mean velocity of amplitude $v_{\textcolor{black}{z}}$ parallel to $\mathbf{e}_z$ \cite{mari2009bulk}. Since the medium is placed in a magnetic field $B_x$, a particle $k$ of charge $q_k$ is deviated by a Lorentz force $\mathbf{F}_k=q_k v_{\textcolor{black}{z}} B_x \mathbf{e}_y$. Using Newton's second law, the velocity $\mathbf{u}_k$ of the charged particle can be calculated as: \begin{equation} \mathbf{u}_k = v_{\textcolor{black}{z}} \mathbf{e}_z + \mu_k v_{\textcolor{black}{z}} B_x \mathbf{e}_y, \label{eq1} \end{equation} where $\mu_k$ is the mobility of the particle $k$. The current density $\mathbf{j}$, defined as $\sum_k q_k \mathbf{u}_k$ \textcolor{black}{per unit volume}, is then equal to the sum of $\sum_k q_k v_{\textcolor{black}{z}} \mathbf{e}_z$ and $\sum_k q_k \mu_k v_{\textcolor{black}{z}} B_x \mathbf{e}_y$ \textcolor{black}{per unit volume}. If the medium is assumed electrically neutral, i.e. $\sum_k q_k=0$, the first term is equal to zero. 
By introducing the electrical conductivity \textcolor{black}{$\sigma$}, defined as $\sum_k q_k \mu_k$ \textcolor{black}{per unit volume}, the second term is equal to $\sigma v_{\textcolor{black}{z}} B_x \mathbf{e}_y$. The current density can consequently be written as: \begin{equation} \mathbf{j}=\sigma v_{\textcolor{black}{z}} B_x \mathbf{e}_y. \label{eq2} \end{equation} The local current density $\mathbf{j}$ is thus proportional to the electrical conductivity of the sample $\sigma$, to the component $B_x$ of the applied magnetic field, and to the local sample speed $v_{\textcolor{black}{z}}$ \cite{montalibet2002scanning}. \subsection{Measured signal in Lorentz force electrical impedance tomography} Equation (\ref{eq2}) is however local. We consider, as a first approximation, the ultrasound beam as a plane wave along $\mathbf{e}_z$ \textcolor{black}{inside a disk of diameter $W$}, with the velocity $v_{\textcolor{black}{z}} = 0$ outside the beam. The total current $I_{tot}$ induced by the Lorentz force is then equal to the \textcolor{black}{average} of the current density $j$ \textcolor{black}{computed over all surfaces $S$} inside the ultrasound beam that are perpendicular to $\mathbf{e}_y$ \cite{montalibet2002these}: \begin{equation} I_{tot} (t) = \frac{1}{\textcolor{black}{W}} \textcolor{black}{\int}{\int{\int{\mathbf{j}.\mathbf{dS} \textcolor{black}{~dy}}}} =\frac{1}{\textcolor{black}{W}}\textcolor{black}{ \int}{\int{\int{\sigma v_{\textcolor{black}{z}} B_x ~dz ~dx \textcolor{black}{~dy}}}}. \label{eq3} \end{equation} As electrodes are placed outside the beam, we assume they measure only a fraction $\alpha$ of the total current $I_{tot}$ induced by the Lorentz force. The current $I_{tot}$ then induces a current outside the beam which separates into two components: a current $I_{ins}$ that stays inside the medium and operates through a resistance $R_{ins}$, and a current $I_{meas}$ that is measured by the electrodes and operates through a resistance $R_{meas}$. 
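To give a sense of the magnitudes involved in equation (\ref{eq2}), the following order-of-magnitude sketch evaluates $j = \sigma v_z B_x$ using the plane-wave relation $v_z = p/(\rho c)$; all numerical values are illustrative assumptions (typical soft-tissue and laboratory figures), not the parameters of the experiments reported in this work.

```python
# Order-of-magnitude estimate of the Lorentz-force current density
# j = sigma * v_z * B_x (equation 2). All values below are assumed,
# typical figures, not the experimental parameters of this paper.
sigma = 0.5                 # S/m, typical soft-tissue conductivity
B_x = 0.3                   # T, assumed permanent-magnet field
p = 1.0e6                   # Pa, assumed peak acoustic pressure
rho, c = 1000.0, 1540.0     # kg/m^3 and m/s, water-like medium

v_z = p / (rho * c)         # plane-wave particle velocity (m/s)
j = sigma * v_z * B_x       # induced current density (A/m^2)
print(f"v_z ~ {v_z:.3f} m/s, j ~ {j:.3f} A/m^2")
```

With these assumptions $v_z \sim 0.65$~m/s and $j \sim 0.1$~A/m$^2$, which, integrated over a centimetre-scale beam cross-section, corresponds to measured currents in the microampere range.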
By Ohm's law, which states that $I_{ins} R_{ins} = I_{meas} R_{meas}$, the coefficient $\alpha$, defined as $I_{meas}/I_{tot}$, is also equal to $R_{ins}/(R_{meas} + R_{ins})$. This coefficient, rather difficult to estimate, consequently depends on several parameters: the size and location of the electrodes, and the electrical impedance of the circuit and of the medium. We can nevertheless conclude that the lower the resistance $R_{meas}$ compared to $R_{ins}$, the higher $\alpha$ and thus the measured electrical current. For a linear and inviscid medium, the medium velocity $v_{\textcolor{black}{z}}$ due to ultrasound wave propagation is related to the ultrasound pressure $p$ and the medium density $\rho$ (assumed here constant over time but not necessarily over \textcolor{black}{space}) through the identity $\frac{\partial v_{\textcolor{black}{z}} (t,z)}{\partial t} = - \frac{1}{\rho}\frac{\partial p(t,z)}{\partial z}$ \textcolor{black}{(having assumed a plane wave)}. From these considerations on the coefficient $\alpha$ and the medium velocity $v_{\textcolor{black}{z}}$, the measured current $I_{meas}$ is equal to: \begin{equation} I_{meas}(t) = \frac{\alpha}{\textcolor{black}{W}} \textcolor{black}{\int}{\textcolor{black}{\int}{\int_{z_1}^{z_2}{\sigma B_x \Bigl (\int_{-\infty}^{t}{-\frac{1}{\rho}\frac{\partial p(\tau,z)}{\partial z}~d\tau} \Bigr ) ~dz\textcolor{black}{~dx~dy}}}}, \label{eq4} \end{equation} where $z_1$ and $z_2$ are the boundaries of the studied medium along $\mathbf{e}_z$. Considering a progressive acoustic wave that propagates only along the direction $\mathbf{e}_z$ and is not attenuated in the measurement volume, we may assume that $p(\tau,z)$ is of the form $P(\tau-z/c)$, where $c$ is the speed of sound in the medium.
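The role of the current-divider coefficient $\alpha$ can be sketched as follows. The resistance values are hypothetical; they only illustrate why a small amplifier input impedance $R_{meas}$ increases the measured fraction of the induced current.

```python
# Current divider from the Ohm's law argument above: alpha = R_ins/(R_meas + R_ins).
# Resistance values are hypothetical, for illustration only.
def alpha(r_meas, r_ins):
    return r_ins / (r_meas + r_ins)

r_ins = 200.0              # assumed internal current path resistance (Ohm)
print(alpha(5.0, r_ins))   # current amplifier, a few Ohms input impedance
print(alpha(50.0, r_ins))  # 50 Ohm input impedance -> smaller measured fraction
```

With these assumed values, the low-impedance current amplifier captures about 98% of the induced current versus 80% for a 50 Ohm input, consistent with the motivation given later for using a current rather than a voltage amplifier.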
Equation (\ref{eq4}) can consequently be written as: \begin{equation} I_{meas}(t) = \frac{\alpha}{\textcolor{black}{W}} \textcolor{black}{\int}{\textcolor{black}{\int}{\int_{z_1}^{z_2}{\sigma B_x \frac{1}{\rho c} \Bigl ( P(t-z/c) - P(-\infty) \Bigr ) ~dz\textcolor{black}{~dx~dy}}}}, \label{eq5} \end{equation} where $P(-\infty) = 0$ since the transmitted pulse is of finite length. Assuming $B_x$ and \textcolor{black}{$c$ constant over space} ($c$ typically varies from -5 to +5\% between soft biological tissues \cite{hill2004physical}), and replacing the integration over $z$ by an integration over $\tau = z/c$, equation (\ref{eq5}) becomes: \begin{equation} I_{meas}(t) = \frac{\alpha B_x}{\textcolor{black}{W}} \int_{-\infty}^{\infty}{H_{\textcolor{black}{z}}(\tau) P(t-\tau) ~d\tau}, \label{eq6} \end{equation} where \textcolor{black}{$H_z(\tau)=\int{\int{H(x^\prime,y^\prime,z^\prime=c\tau)dx^\prime dy^\prime}}$, and $H$ is equal to $\frac{\sigma}{\rho}$ within the studied medium and $0$ elsewhere.} Since the density of soft tissues typically varies by only a few percent between tissue types \cite{cobbold2007foundations}, while their electrical conductivity can vary by up to a factor of a few tens \cite{gabriel1996dielectric2}, $H$ can be seen mostly as a variable describing the electrical conductivity of the tissue. One can show that if the DC component of the transmitted pressure is null, then $I_{meas}$ is null whenever the electrical conductivity is constant \cite{wen1998hall}. Thus, equation (\ref{eq6}) shows that the measured electric current is proportional to the convolution product of a function $H_{\textcolor{black}{z}}$ of the electrical conductivity with the axial transmitted pressure wave $P$. \subsection{Measured signal in B-mode ultrasound imaging} Ultrasound imaging is based on the measurement of reflections of the transmitted acoustic wave (i.e., backscattering, diffraction or specular reflection).
We assume that acoustic inhomogeneities scatter only a small part of the transmitted pressure, so that scattered waves have a negligible amplitude compared to the main acoustic beam, and that diffraction and attenuation are small in the region of interest. The voltage $RF(t)$ measured at time $t$ on the transducer, known as the radiofrequency signal, is then equal to: \begin{equation} RF(t)=D\int_{-\infty}^{\infty}{T_{z}(\tau) P(t-\tau)~d\tau}, \label{eq8} \end{equation} where $D$ is a constant of the transducer related to the acousto-electric transfer function, $T_{z}(\tau)=\int{\int{ T(x^\prime,y^\prime,z^\prime=c\tau\textcolor{black}{/2})~dx^\prime~dy^\prime}}$ with $T$ the continuous spatial distribution of point scatterers, $P(t)$ is the axial transmitted pressure wave (the pulse shape), and $\tau=\textcolor{black}{2}z/c$ \textcolor{black}{(the factor $2$ accounts for the round-trip wave propagation in ultrasound imaging)}. In the latter expression of $T_z(\tau)$, the double integral is performed over a disk of diameter $W$ (the ultrasound beam is considered as a plane wave along $\mathbf{e}_z$ inside a limited width $W$, as in the previous section) \cite{bamber1980ultrasonic}. The envelope of $RF(t)$ represents the A-mode ultrasound signal. \textcolor{black}{Note that under a more realistic incident pressure wave model, one can consider pulse beam profiles in the definition of $H_z$ (at emission) and $T_z$ (at emission and at reception), which can be deduced by multiplying the pulse shape with the incident pulse beam profile}. Attenuation can also be taken into account by convolving Eq. (\ref{eq8}) (and similarly Eq. (\ref{eq6})) with an attenuation function. Hence, the received signal in US imaging is proportional to the convolution product of the axial spatial distribution of point scatterers $T_z$ with the transmitted pulse shape $P$. If the scatterers are small compared to the ultrasound wavelength, the reflected waves can interfere.
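Equations (\ref{eq6}) and (\ref{eq8}) share the same convolution structure, differing only in the tissue function ($H_z$ versus $T_z$) and the delay convention ($\tau = z/c$ one-way for the electrodes versus $\tau = 2z/c$ round-trip for the transducer). A minimal numerical sketch of this shared model, using an assumed toy profile containing a single interface and a 3-cycle 0.5 MHz burst as in the experiments:

```python
import numpy as np

# Shared convolution model of Eqs. (6) and (8): signal(t) ~ (profile * pulse)(t).
# The profile and pulse are assumed toy signals, not measured data.
fs = 10e6                      # sampling rate (Hz), assumed
t = np.arange(0, 20e-6, 1/fs)  # 20 us of signal
f0 = 0.5e6
pulse = np.sin(2*np.pi*f0*t) * (t < 3/f0)  # 3-cycle burst at 0.5 MHz

profile = np.zeros_like(t)     # stands for H_z (LFEIT) or T_z (US)
profile[len(t)//2:] = 1.0      # a single impedance interface

signal = np.convolve(profile, pulse)[:len(t)]  # pulse-shaped echo at the interface

# Delay convention: an interface at depth z = 0.30 m is seen at z/c by the
# electrodes but at 2z/c by the transducer (c = 1540 m/s assumed):
c, z = 1540.0, 0.30
print(f"{1e6*z/c:.0f} us (LFEIT) vs {1e6*2*z/c:.0f} us (US)")
```

With the assumed $c = 1540$ m/s this gives roughly 195 µs and 390 µs, of the same order as the arrival windows reported in the Results section.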
When the envelope of the signal is calculated and a B-mode image is formed line by line, this phenomenon appears on the image as a granular texture called acoustic speckle \cite{abbott1979acoustic}. Apart from a proportionality coefficient, equations (\ref{eq6}) and (\ref{eq8}) have a similar form, where $T$ and $H$ play analogous roles. A speckle phenomenon is thus expected to be observed in the LFEIT technique. Its nature would be electrical, since it is mainly related to electrical conductivity inhomogeneities, but also acoustic, since its spatial characteristics are related to the acoustic wavelength. \section{Materials and methods} Two experiments were performed in this study. The first experiment aimed at comparing an LFEIT signal and a US signal on a simple acoustic and electrical conductivity interface using the same acoustic transducer. This approach was used to perform a test with a large change in both electrical conductivity and acoustic impedance. Thus, the tissue functions $T_{\textcolor{black}{z}}$ and $H_{\textcolor{black}{z}}$ could be considered as square pulse functions with strong gradients at the interface locations. According to the \textcolor{black}{above hypotheses}, the two \textcolor{black}{scan line} signals are expected to be identical apart from a proportionality coefficient \textcolor{black}{in this special case}. The goal of the second experiment was to observe a complex biological tissue with the two imaging techniques, using the same acoustic transducer. A speckle pattern with similar spatial characteristics is expected in both types of images, since these characteristics are related to the acoustic wavelength and beam width, but with different patterns, because the imaged parameter is acoustic in one case and electrical in the other. \subsection{Measured signal on an acousto-electrical interface} The experimental setup is illustrated in Figure \ref{Figure2}.
A generator (HP33120A, Agilent, Santa Clara, CA, USA) was used to create 0.5 MHz, 3-cycle sinusoidal bursts at a pulse repetition frequency of 100 Hz. This excitation was amplified by a 200 W linear power amplifier (200W LA200H, Kalmus Engineering, Rock Hill, SC, USA) and sent to a 0.5 MHz transducer, 50 mm in diameter, focused at 210 mm and placed in a degassed water tank. The peak-to-peak pressure at the focal point was equal to 3 MPa. A 4x15x20 cm$^3$ mineral oil tank was located from 15 to 35 cm away from the transducer along the ultrasound beam axis. This oil tank was used to decrease the loss of current from the sample to the surrounding medium and consequently increase the signal-to-noise ratio, but was not mandatory. It was inserted in a 300$\pm$50 mT magnetic field created by a U-shaped permanent magnet, composed of two poles made of two 3x5x5 cm$^3$ NdFeB magnets (BLS Magnet, Villers la Montagne, France) separated by a distance of 4.5 cm. The tested medium was a 5x10x10 cm$^3$ sample of 10\% gelatin with 5\% salt placed inside the oil tank from 30 to 40 cm away from the transducer, and consequently presented a strong acoustic and electrical interface. A pair of 3x0.1x10 cm$^3$ copper electrodes was placed in contact with the sample, above and under it, respectively. The electrodes were linked through an electrical wire to a 1 MV/A current amplifier (HCA-2M-1M, Laser Components, Olching, Germany). A voltage amplifier could also be used, but the current amplifier presents a smaller input impedance (a few Ohms vs 50 Ohms) and consequently increases the amount of current measured by the electrodes, as described by the factor $\alpha$ in equation (\ref{eq4}). The signals were then measured by an oscilloscope with 50 $\Omega$ input impedance (WaveSurfer 422, LeCroy, Chestnut Ridge, NY, USA) and averaged over 1000 acquisitions. US signals were simultaneously recorded using the same oscilloscope with a 1/100 voltage probe.
\begin{figure}[!ht] \begin{center} \includegraphics[width=1\linewidth]{Figure2.pdf} \caption{A transducer transmits ultrasound pulses toward a sample placed in an oil tank located in a magnetic field. The induced electric current is received by two electrodes, each in contact with one side of the gelatin.} \label{Figure2} \end{center} \end{figure} To quantify the similarity between the two signals, we computed the correlation coefficient between them. This coefficient equals 1 when the two signals are directly proportional and 0 when they are uncorrelated. \subsection{Observation of speckle in a biological tissue} The same apparatus as in the first experiment was used, but the gelatin sample was replaced by a 2x6x6 cm$^3$ piece of bovine rib muscle purchased at a grocery store. It presented many fat inclusions, as pictured in Figure \ref{Figure3}. B-mode images were produced line by line by moving the transducer along the $\mathbf{e}_y$ direction in 96 steps of 0.5 mm. Acoustic and electrical signals were post-processed using Matlab (The MathWorks, Natick, MA, USA) by computing the magnitude of the Hilbert transform of the signal \cite{roden1991analog}, and were displayed as grayscale and jet-colormap images, respectively. \begin{figure}[!ht] \begin{center} \includegraphics[width=.5\linewidth]{Figure3.pdf} \caption{Picture of the 2x6x6 cm$^3$ imaged bovine rib muscle sample, with many fat inclusions.} \label{Figure3} \end{center} \end{figure} \section{Results} Figure \ref{Figure4}-(A) presents the electrical signal measured by the electrodes from the first phantom interface, 195 to 220 microseconds after acoustic transmission, corresponding to a distance of 30 cm. Figure \ref{Figure4}-(B) depicts the signal acquired by the US transducer from the phantom interface, 395 to 420 microseconds after acoustic transmission, corresponding to a distance of 30 cm (back and forth). Both signals consist of three to four cycles at a central frequency of 500 kHz.
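The two post-processing steps described above, the correlation coefficient used to compare scan lines and the Hilbert-transform envelope used to build image lines, can be sketched as follows. The RF trace is an assumed toy signal; the envelope is computed with an FFT-based analytic signal, equivalent to the magnitude of the Hilbert transform used in the Matlab processing.

```python
import numpy as np

def envelope(x):
    # Magnitude of the analytic signal (FFT implementation of the Hilbert envelope).
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1)//2] = 2.0
    if n % 2 == 0:
        h[n//2] = 1.0
    return np.abs(np.fft.ifft(spec * h))

# Assumed toy RF line: a 0.5 MHz burst under a Gaussian envelope.
t = np.linspace(0, 10e-6, 500)
rf = np.sin(2*np.pi*0.5e6*t) * np.exp(-((t - 5e-6)**2) / (2e-6)**2)

corr = np.corrcoef(rf, 2.5*rf)[0, 1]  # proportional signals -> coefficient of 1
env = envelope(rf)                    # A-mode line before grayscale mapping
```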
The correlation coefficient between the two signals is 0.9297, indicating a high similarity. This similarity was also observed at the second phantom interface, with a correlation coefficient equal to 0.9179. LFEIT and US images of the phantom can be seen in \cite{grasland2013LFEIT}. \begin{figure}[!ht] \begin{center} \includegraphics[width=1\linewidth]{Figure4.pdf} \caption{(A) Electrical signal acquired by the electrodes from the first phantom interface, 195 to 220 $\mathrm{\mu}$s after acoustic wave transmission. The signal is made of three to four cycles at 500 kHz. (B) Signal acquired by the acoustic transducer from the first phantom interface, 395 to 420 $\mathrm{\mu}$s after acoustic wave transmission. This signal is also made of three to four cycles at 500 kHz.} \label{Figure4} \end{center} \end{figure} Figures \ref{Figure5}-(A) and -(B) present the Lorentz force electrical impedance tomography image and the ultrasound image, respectively, of the bovine muscle sample. The amplitude varied from -2 to -2.8 dB with the first technique and from -1.5 to -2.5 dB with the other (0 dB being a measured amplitude of 1 V). The main interfaces of the medium can be retrieved in both images, as previously shown \cite{grasland2013LFEIT}, \textcolor{black}{even if signals at the boundaries and in the interior of the bovine sample were of similar amplitude}. A speckle pattern was present in both images. \textcolor{black}{The typical spots were of the same order of magnitude in size, i.e., 5-8 mm in the Z-direction and 10-18 mm in the Y-direction, but their spatial distributions were different, as expected from the difference between the electrical and acoustic inhomogeneities. For each image, the signal-to-noise ratio was estimated as the base 10 logarithm of the ratio of the mean square amplitude of the RF signals in the 22.5-26 cm zone to the mean square amplitude in the 19-20 cm noise-only zone.
The signal-to-noise ratio was 0.9 dB lower in the LFEIT image than in the US image.} \begin{figure}[!ht] \begin{center} \includegraphics[width=1\linewidth]{Figure5.pdf} \caption{(A) Lorentz force electrical impedance tomography (LFEIT) image of the bovine muscle sample. (B) Ultrasound (US) image of the same bovine rib muscle sample. A speckle pattern can be seen inside the medium.} \label{Figure5} \end{center} \end{figure} \section{Discussion} The gelatin phantom used in the first experiment presented an interface of both acoustic and electrical impedance. According to the correlation coefficient, the measured signals were very similar, which is a first indication of the validity of the approach presented in the theoretical section: the reflected wave is proportional to the convolution product of the acoustic impedance distribution with the transmitted ultrasound pulse shape, while the induced electric current is proportional to the convolution product of the electrical impedance distribution with the transmitted ultrasound pulse shape. The second experiment produced two images with a granular pattern. This pattern does not represent macroscopic variations of acoustic or electrical impedance, and we interpreted it as speckle. The granular pattern appeared visually with similar characteristics of size and shape in both images. Note that the size of the speckle spots along the ultrasound beam was different from that in the orthogonal direction, because the former is mainly related to the acoustic wavelength and the latter to the ultrasound beam width \cite{obrien1993single}. The spots were considerably larger than those usually seen in clinical ultrasound images, due to the characteristics of the instrument (three cycles at 500 kHz, 1.5 cm beam width). Although the bright spots were of similar size, their precise locations in the two images were different.
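The signal-to-noise estimate described in the Results, the ratio of mean square RF amplitudes between a signal zone and a noise-only zone expressed in dB, can be sketched as follows. The two arrays are assumed toy data standing in for the 22.5-26 cm and 19-20 cm zones, and the $10\log_{10}$ dB convention is an assumption consistent with the 0.9 dB difference quoted above.

```python
import numpy as np

# SNR estimate: ratio of mean square amplitudes, expressed in dB (assumed
# 10*log10 convention).
def snr_db(signal_zone, noise_zone):
    return 10.0 * np.log10(np.mean(signal_zone**2) / np.mean(noise_zone**2))

rng = np.random.default_rng(0)
sig = rng.normal(scale=1.0, size=10_000)    # toy RF samples in the signal zone
noise = rng.normal(scale=0.1, size=10_000)  # toy samples in the noise-only zone
print(f"{snr_db(sig, noise):.1f} dB")       # ~20 dB for a 10x amplitude ratio
```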
This shows that the observed speckle reveals information of a different nature in the two modalities: acoustic or electrical inhomogeneities. In this case, the bovine meat sample presented not only large layers of fat but also small inclusions of adipose tissue, whose electrical conductivity differs from that of muscle (the difference can reach a factor of ten at 500 kHz \cite{gabriel1996dielectric2}), whereas its acoustic impedance is close to that of muscle (differences smaller than 10\% \cite{cobbold2007foundations}). We henceforth introduce the term ``acousto-electrical speckle''. It is justified by the fact that the spatial characteristics of this speckle are related to acoustic parameters, especially the ultrasound wavelength, while its nature is related to the electrical impedance variation distribution $H$. The existence of this speckle could allow the use of speckle-based ultrasound techniques in Lorentz force electrical impedance tomography, for example compound imaging, speckle-tracking algorithms or quantitative ultrasound for tissue characterization purposes \cite{odonnell1994internal,jespersen1998multi,mamou2013quantitative}. These techniques have, however, not been applied in this study because of the low spatial resolution of the images, due to the low-frequency transducer used. This study shows that electrical impedance inhomogeneities can be studied using LFEIT at a scale controlled by the acoustic wavelength instead of the electromagnetic wavelength, which is 5 orders of magnitude larger at the same frequency and would be prohibitive in the context of biological tissue imaging. These inhomogeneities should also be observable in a ``reverse'' mode (terminology from Wen et al. \cite{wen1998hall}) called Magneto-Acoustic Tomography with Magnetic Induction, where an electrical current and a magnetic field are combined to produce ultrasound waves \cite{roth1994theoretical, xu2005magneto, ammari2009mathematical}.
However, the presence of speckle in this latter technique has not yet been demonstrated, and it is not yet clear which of the two methods will be most useful in biomedical imaging \cite{roth2011role}. \section{Conclusion} In this study, the similarity between two imaging modalities, Lorentz force electrical impedance tomography and ultrasound imaging, was assessed theoretically. This similarity was then observed experimentally on a basic acoustic and electrical interface with both methods. Then, the two techniques were used to image a biological tissue presenting many acoustic and electrical impedance inhomogeneities. The speckle pattern formed in both images exhibited similar spatial characteristics. This suggests the existence of an ``acousto-electrical speckle'' with spatial characteristics driven by acoustic parameters but caused by the electrical impedance variation distribution. This allows considering the use of ultrasound speckle-based image processing techniques on Lorentz force electrical impedance tomography data, and the study of electrical inhomogeneity structures at the ultrasound wavelength scale. \section{Acknowledgments} Part of the project was financed by a Discovery grant of the Natural Sciences and Engineering Research Council of Canada (grant \# 138570-11). The first author was the recipient of a post-doctoral fellowship award through the FRM SPE20140129460 grant. The authors declare no conflict of interest in the work presented here. \section{Bibliography} \bibliographystyle{apalike}
\section{Introduction} \label{Introduction} \begin{figure}[t] \begin{center} \includegraphics[width=0.4\textwidth]{graphics/teaser.png} \end{center} \caption{(Top) In principle, image restoration and enhancement techniques should improve visual recognition performance by creating higher quality inputs for recognition models. This is the case when a Super Resolution Convolutional Neural Network~\cite{Chao:2014:SRCNN} is applied to the image in this panel. (Bottom) In practice, we often see the opposite effect --- especially when new artifacts are unintentionally introduced, as in this application of Deep Deblurring~\cite{Su:2016:DBN}. Here we describe a new video dataset for the study of this problem containing thousands of annotated frames.} \label{fig:Teaser} \end{figure} To build a visual recognition system for any application these days, one's first inclination is to turn to the most recent machine learning breakthrough from the area of deep learning, which no doubt has been enabled by access to millions of training images from the Internet. But there are many circumstances where such an approach cannot be used as an off-the-shelf component to assemble the system we desire, because its training dataset does not take into account the myriad artifacts that can be experienced in the environment. As computer vision pushes farther into real-world applications, what should a software system that can interpret images from sensors placed in any unrestricted setting actually look like? First, it must incorporate a set of algorithms, drawn from the areas of computational photography and machine learning, into a processing pipeline that corrects and subsequently classifies images across time and space. Image restoration and enhancement algorithms that remove corruptions like blur, noise and mis-focus, or manipulate images to gain resolution, change perspective and compensate for lens distortion are now commonplace in photo editing tools. 
Such operations are necessary to improve the quality of raw images that are otherwise unacceptable for recognition purposes. But they must be compatible with the recognition process itself, and not adversely affect feature extraction or classification. Remarkably, little thought has been given to image restoration and enhancement algorithms for visual recognition --- the goal of computational photography thus far has simply been to make images look appealing after correction~\cite{zeyde2010single,bevilacqua2012low,timofte2014,huang2015single,Su:2016:DBN}. It remains unknown what impact many transformations have on visual recognition algorithms. To begin to answer that question, exploratory work is needed to find out which image pre-processing algorithms, in combination with the strongest features and supervised machine learning approaches, are promising candidates for different problem domains. One popular problem that contains imaging artifacts typically not found in computer vision datasets crawled from the web is the interpretation of images taken by aerial vehicles~\cite{tomic2012toward,reilly2013shadow}. This task is a key component of a number of applications including robotic navigation, scene reconstruction, scientific study, entertainment, and visual surveillance. Furthermore, aerial vehicles are capable of recording many hours of video, covering imagery across extensive terrains and through a wide variety of weather conditions, which leads to the need of automating image interpretation in order to ease the workload of human operators. Images captured from aerial vehicles tend to present a wide variety of artifacts and optical aberrations. These can be the product of weather and scene conditions (\eg, rain, light refraction, smoke, flare, glare, occlusion, etc.), movement (\eg, motion blur), or the recording equipment (\eg, video compression, sensor noise, lens distortion, mis-focus, etc.). 
So how do we begin to address the need for image restoration and enhancement algorithms that are compatible with visual recognition in a scenario like the one above? In this paper, we propose the use of a benchmark dataset captured in realistic settings where image artifacts are common, as opposed to more typical imagery crawled from social media and web-based photo albums. Given the current popularity of aerial vehicles within computer vision, we suggest the use of data captured by UAVs and manned gliders as a relevant and timely challenge problem. But this can't be the only data we consider, as we do not have knowledge of certain scene parameters that generated the artifacts of interest. Thus we also suggest that ground-based data, where scene parameters and target objects can be controlled, are essential. And finally, we need a set of protocols for evaluation that move away from a singular focus on perceived image quality to include classification performance. By combining all of these elements, we have created a new dataset dubbed UG$^2$ (UAV, Glider, and Ground data), which consists of hundreds of videos and over 160,000 annotated frames spanning hundreds of ImageNet classes~\cite{ImageNet:ILSVRC:2014}. It is being released in advance of a significant prize competition that will be open to participation from the computer vision community in 2018. In summary, the contributions of this paper are: \begin{itemize} \item A new video benchmark dataset representing both ideal conditions and common aerial image artifacts, which we make available to facilitate new research and to simplify the reproducibility of experimentation\footnote{See the Supplemental Material for example videos from the dataset, which will be released shortly after this paper is accepted for publication.}.
\item An extensive evaluation of the influence of image aberrations and other problematic conditions on current object recognition models, including VGG16 and VGG19~\cite{VGG:2014}, InceptionV3~\cite{Inception:2015}, and ResNet50~\cite{ResNet50:2015}. \item An analysis of the impact and suitability of basic and state-of-the-art image and video processing algorithms used in conjunction with current deep learning models. In this work, we look at image interpolation, super-resolution~\cite{Chao:2014:SRCNN, Kim:2016:VDSR} and deblurring~\cite{Su:2016:DBN, Nah:2016:DSDD}. \end{itemize} \section{Related work} \label{Related-work} To put the UG$^2$ dataset into context, we can consider two different facets of the problem: existing datasets and methodologies for image restoration and enhancement. \subsection{Datasets} \label{Related-work:Datasets} Access to a diverse, ground-truth-annotated dataset is crucial for recognition tasks, and while there have been efforts to acquire and annotate large-scale surveillance video datasets \cite{CAVIAR:Dataset:2004, CUHK:Dataset:2014, TISI:Dataset:2013}, which provide a ``fixed'' aerial view of urban scenes, datasets incorporating images captured by aerial vehicles are scarce. Moreover, these attempts to compile video/image datasets have primarily focused on specific research areas (event/action understanding, video summarization, face recognition) and are ill-suited for object recognition tasks, even if they share some of the imaging artifacts that impair recognition as a whole. For example, the VIRAT Video Dataset \cite{Virat:Dataset:2011} presents ``realistic, natural and challenging (in terms of its resolution, background clutter, diversity in scenes)'' aerial and ground videos for event recognition. The aerial video section of this dataset contains 4 hours of video exhibiting smooth camera motion and good weather conditions.
Among other action understanding video datasets, the UCF Aerial Action Data Set \cite{UCFAA}, featuring video sequences at different heights and aerial viewpoints collected by a remote-controlled blimp equipped with an HD camera, and the UCF-ARG dataset \cite{UCFARG}, consisting of videos recorded from a ground camera, a rooftop camera and aerial platforms, are worth mentioning. Among image datasets, Yao \etal \cite{LHI:Dataset:2007} proposed a general-purpose annotated dataset to facilitate automatic annotation by grouping visual knowledge at three levels: scene, object, and low-middle level detailing the object contours, surface norms and albedo changes. It is composed of 13 subsets, among which is an aerial image subset consisting of 1,625 images distributed over 10 different classes: airport, business, harbor, industry, intersection, mountain, orchard, parking, residential, and school. For face recognition from video surveillance in an uncontrolled setting, SCface \cite{grgic2011scface} has gained quite a reputation. SCface consists of 4,160 static human face images (in the visible and infrared spectrum) of 130 subjects, taken with 5 video surveillance cameras of different resolutions and 1 IR camera under various pose and lighting conditions. \subsection{Image restoration and enhancement} \label{Related-work:ImgRestAndEnhanc} Image enhancement and restoration techniques can remove imaging artifacts and, introduced as a pre-processing step, can potentially help object recognition and classification. For example, both historically and quite recently, techniques like super-resolution have been combined with face recognition \cite{lin2005face, lin2007super, hennings2008simultaneous, yu2011face, huang2011super, uiboupin2016facial, rasti2016convolutional} and person re-identification \cite{jing2015super} algorithms for video surveillance data.
Surveillance videos for face recognition are often captured by low-cost, wide-angle imaging devices, positioned so that the viewing area is maximized. Naturally, this results in very low resolution video frames containing a number of people in an unconstrained environment (uncontrolled pose and illumination, mis-focus). Introducing super-resolution as a pre-processing step has been shown to improve performance significantly. While the datasets \cite{patterson2002cuave, jaynes2005terrascope, messer1999xm2vtsdb} on which most of these works are based \cite{lin2005face,lin2007super} share some of the imaging artifacts inherently present in our dataset, they are multi-modal, having both synchronous speech and video data, and diverge mostly in purpose (identifying pedestrians and understanding human actions). As a result, their backgrounds remain largely the same and they lack large objects. Given the exhaustive list of methods, we primarily concentrated on three sub-classes: super-resolution, deblurring methods, and simple yet effective deconvolution. In principle, super-resolution can be used to increase the number of pixels on a target in an image, making it more suitable for feature extraction and subsequent classification. This is highly desirable for UAV imagery, where altitude and distance routinely affect target resolution. Most previous work on super-resolution was dictionary-based and concentrated on optimizing low-resolution/high-resolution dictionaries through methods like sparse representation \cite{yang2008image, yang2010image}, nearest-neighbor approaches \cite{freeman2002example}, image priors \cite{efrat2013accurate}, and local self-examples \cite{freedman2011image}, in contrast to recent approaches to super-resolution inspired by deep learning \cite{Chao:2014:SRCNN,Kim:2016:VDSR}.
In fact, the work of Dong \etal \cite{Chao:2014:SRCNN} was the first to draw parallels between the sparse-representation super-resolution technique and a CNN model. The method successfully applied a feed-forward CNN to super-resolution, was trained on ImageNet, and achieved state-of-the-art results. However, it suffered primarily from three constraints: high training time due to a low learning rate, an inability to infer contextual information from large image regions, and the network's inability to cope with different scale factors. VDSR \cite{Kim:2016:VDSR} to some extent eliminated these concerns and can be considered one of the pioneering works in this regard. Motion blur in videos can be considered a product of camera shake, movement of the object during image capture, scene depth, and atmospheric turbulence. In such situations, deblurring techniques can restore image quality by removing these blurring artifacts through deconvolution or through methods that use multi-image aggregation and fusion. The lost spatial frequencies in motion-blurred images can be recovered by image deconvolution, provided the motion is at least locally shift-invariant \cite{levin2007image, levin2009understanding, joshi2009image, heflin2010single, levin2011natural} (see \cite{wang2014recent} for a comprehensive list of deblurring deconvolution techniques). Multi-frame aggregation methods \cite{law2006lucky, matsushita2006full, cho2009fast} depend heavily on the fact that camera shake is highly random across the temporal axis, which leads to some patches with highly blurred content and others with sharp scenes; by combining adjacent frames, the central frame can be restored. However, in such methods, frame registration becomes a necessary and computationally expensive task. The recent Deep Video Deblurring method by Su \etal \cite{Su:2016:DBN} relaxes the need for extensive frame alignment. A major pitfall for all deblurring methods is finding pairs of real blurry images and their corresponding ground truth.
Most methods therefore generate synthetic blurry datasets for estimating the blur kernel, which is assumed to be locally linear and does not come close to motion blur in real images. The Dynamic Deep Deblurring method \cite{Nah:2016:DSDD} attempts to solve this by employing a multi-scale end-to-end CNN without explicitly assuming a blur kernel. De-noising, on the other hand, also relies on deconvolution to recover a sharp image, but with a different blur kernel than motion blur removal \cite{joshi2009image}. Atmospheric blur is more problematic, and methods to reverse its effects make use of blind deconvolution in either a non-iterative mode that relies on a pre-defined modulation transfer function, or an iterative mode that attempts to estimate it \cite{heflin2010single}. While image enhancement and restoration techniques provide a good starting point as pre-processing steps for object recognition, they suffer primarily from two limitations: \begin{itemize} \item Datasets used for training and validation are to a great extent different from our domain. \item The quality metrics used to evaluate/assess an image enhancement algorithm's performance might not be direct indicators of object recognition performance. \end{itemize} In order to test our assumptions and measure the suitability of image restoration/enhancement algorithms as a pre-processing step for recognition on our dataset, we tried some of the basic and state-of-the-art methods: basic interpolation methods \cite{Keys:1981:Interpolation}, convolutional neural network inspired super-resolution \cite{Chao:2014:SRCNN, Kim:2016:VDSR}, deep deblurring techniques \cite{Nah:2016:DSDD, Su:2016:DBN} and the traditional blind deconvolution technique \cite{kundur1996blind}, discussed in detail in Section \ref{Algorithms}. 
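To make the deconvolution model discussed above concrete, the following minimal NumPy sketch blurs an image with a known kernel and recovers it with a Wiener-style inverse filter. It assumes circular boundary conditions and a known, spatially invariant kernel; the blind methods cited above must additionally estimate the kernel and handle noise, so this is an illustration, not any of the cited implementations.

```python
import numpy as np

def blur(x, k):
    """Circularly convolve image x with kernel k via the FFT (y = k * x)."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k, x.shape)))

def wiener_deconv(y, k, snr=1e4):
    """Wiener-style inverse filter: divide out the kernel spectrum,
    damped where the kernel carries little energy to avoid noise blow-up."""
    K = np.fft.fft2(k, y.shape)
    H = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(y) * H))
```

With a known $3\times3$ box kernel and no noise, this recovers the sharp image almost exactly; with an unknown or mis-estimated kernel the same division amplifies errors, which is why blind deconvolution is substantially harder.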
\section{The UG$^2$ Dataset} \label{Dataset} UG$^2$ is composed of 253 videos with 1,193,422 frames, representing 228 ImageNet~\cite{ImageNet:ILSVRC:2014} classes extracted from annotated frames from three different video collections. These classes are further categorized into 37 super-classes encompassing visually similar ImageNet categories and two additional classes for pedestrians and resolution chart images (this distribution is explained in more detail in Sec.~\ref{Dataset:DataDistribution}). The three different video collections consist of: (1) 50 Creative Commons tagged videos taken by fixed-wing unmanned aerial vehicles (UAV) obtained from YouTube; (2) 61 videos recorded by pilots of fixed wing gliders; and (3) 142 controlled videos captured on the ground specifically for this dataset (Table~\ref{tab:dataset_summary} presents a summary of the dataset). \begin{table} \centering \subimport{./figures/}{DatasetSummary.tex} \caption{Summary of the UG$^2$ dataset. } \label{tab:dataset_summary} \end{table} Fig.~\ref{fig:datasets-samples} presents example frames from each of the three collections. Note the differences in image quality, resolution, and problematic conditions across the images. Additionally, images from the YouTube and Glider Collections present varying levels of occlusion caused by either the aerial vehicle, image artifacts or super-imposed text. It is also important to note that there are differing levels of video compression among the videos of all three collections, and therefore differences in presence and visibility of compression artifacts. The heterogeneous nature of the videos in this dataset highlights real-world problematic elements for aerial image processing and recognition. Furthermore, the dataset contains a subset of 162,555 object-level annotated images (out of which 3,297 are images of pedestrians). Bounding boxes establishing object regions were manually determined using the Vatic tool for video annotation \cite{Vatic:2013}. 
Each annotation in the dataset indicates the position, scale, visibility and super-class for an object in a video. For the baseline classification experiments described in Sec.~\ref{Results}, the objects were cropped out from the frames in a square region of at least $224\times224$ pixels (the common input size for many deep learning-based recognition models), using the annotations as a guide. For objects whose area was smaller than $224\times224$ pixels, the cropping area was extended, thus including background elements in the object region. \begin{figure}[!htb] \centering \begin{subfigure}{0.2\textwidth} \includegraphics[width=\textwidth]{graphics/YT_AV101_6852.jpg} \caption{} \end{subfigure} \begin{subfigure}{0.2\textwidth} \includegraphics[width=\textwidth]{graphics/YT_AV56_1104.jpg} \caption{} \end{subfigure}\vskip 2mm \begin{subfigure}{0.2\textwidth} \includegraphics[width=\textwidth]{graphics/K_KV35_9871.jpg} \caption{} \end{subfigure} \begin{subfigure}{0.2\textwidth} \includegraphics[width=\textwidth]{graphics/K_KV61_19797.jpg} \caption{} \end{subfigure}\vskip 2mm \begin{subfigure}{0.2\textwidth} \includegraphics[width=\textwidth]{graphics/G_car30140sungopro_398.jpg} \caption{} \end{subfigure} \begin{subfigure}{0.2\textwidth} \includegraphics[width=\textwidth]{graphics/G_lake50sunson_597.jpg} \caption{} \end{subfigure} \caption{Examples of images contained within the three different collections: (a,b) YouTube Collection; (c,d) Glider Collection; and (e,f) Controlled Ground Collection.} \label{fig:datasets-samples} \end{figure} \subsection{UAV Video Collection} \label{Dataset:YouTube} The UAV Collection found within UG$^2$ consists of video recorded from small UAVs in both rural and urban areas. The videos found in this collection are open source content tagged with a Creative Commons license, obtained from the YouTube video sharing site. 
Because of this, they have different video resolutions (ranging from $600\times400$ to $3840\times2026$) and frame rates (ranging from 12 FPS to 59 FPS). This collection contains approximately 4 hours of aerial video distributed across 50 different videos. For this collection we observed 8 different video artifacts and other problems: glare/lens flare, poor image quality, occlusion, over/under exposure, camera shaking and noise (present in some videos that use autopilot telemetry), motion blur, and fish eye lens distortion. Additionally this collection contains videos with problematic weather/scene conditions such as night/low light video, fog, cloudy conditions and occlusion due to snowfall. Overall this collection contains 434,264 frames. Across a subset of these frames we observed 31 different classes (including the non-ImageNet pedestrians class), from which we extracted 32,608 object images. The cropped object images have a diverse range of sizes, ranging from $224\times224$ to $800\times800$. \subsection{Glider Video Collection} \label{Dataset:Glider} The Glider Collection found within UG$^2$ consists of video recorded by licensed pilots of fixed wing gliders in both rural and urban areas. This collection contains approximately 7 hours of aerial video, distributed across 61 different videos. The videos have frame rates ranging from 25 FPS to 50 FPS and different types of compression, such as MTS, MP4 and MOV. Given the nature of this collection the videos mostly present imagery taken from thousands of feet above ground, further increasing the difficulty of object recognition tasks. Additionally scenes of take off and landing contain artifacts such as motion blur, camera shaking and occlusion (which in some cases is pervasive throughout the videos, showcasing parts of the glider that partially occlude the objects of interest). 
For the Glider Collection we observed 6 different video artifacts and problems: glare/lens flare, over/under exposure, camera shaking and noise, occlusion, motion blur, and fish eye lens distortion. Furthermore, this collection contains videos with problematic weather/scene conditions such as fog, clouds and occlusion due to rain. Overall this collection contains over 600,000 frames. Across the annotated frames we observed 20 different classes (including the non-ImageNet class of pedestrians), from which we extracted 31,760 object images. The cropped object images have a diverse range of sizes, ranging from $224\times224$ to $900\times900$. \subsection{Ground Video Collection} \label{Dataset:Ground} In order to provide some ground-truth with respect to problematic image conditions, we performed a video collection on the ground that intentionally induced several common artifacts. One of the main challenges for object recognition within aerial images is the difference in the scale of certain objects compared to those in the images used to train the recognition model. To address this, we recorded video of static objects (\eg, flower pots, buildings) at a wide range of distances (30ft, 40ft, 50ft, 60ft, 70ft, 100ft, 150ft, and 200ft). In conjunction with the differing recording distances, we induced motion blur in images using an orbital shaker to generate horizontal movement at different rotations per minute (100rpm, 120rpm, 140rpm, 160rpm, and 180rpm). Supplementary to this, we recorded video under different weather conditions (sunny, cloudy, rain, snow) that could affect object recognition, and employed a small Sony Bloggie hand-held camera (with a frame resolution size of $1280\times720$ and frame rate of 60 frames per second) and a GoPro Hero 4 (with a frame resolution size of $1920\times1080$ and frame rate of 30 frames per second), whose fisheye lens introduced additional image distortions. 
The ground collection contains approximately 33 mins 10 seconds of video, distributed across 136 videos. Overall this collection represents 97,806 frames, and we provide annotations for all of them. Across the annotated frames we have included 20 different ImageNet classes. Furthermore, we include an additional class in this collection: videos showcasing a $9\times11$ inch $9\times9$ checkerboard grid, recorded at all aforementioned distances and intervals of rotation. The motivation for including this artificial class is to provide a reference with well-defined straight lines to assess the visual impact of image restoration and enhancement algorithms. The cropped object images have a diverse range of sizes, ranging from $224\times224$ pixels to $1000\times1000$ pixels. \subsection{Object Categories and Distribution of Data} \label{Dataset:DataDistribution} A challenge presented by the objects annotated from the YouTube and Glider Collections is the high variability of both object scale and rotation. These two factors make it difficult to differentiate some of the more fine-grained ImageNet categories. For example, while it may be easy to recognize a car from an aerial picture taken from hundreds (if not thousands) of feet above the ground, it might be impossible to determine whether that car is a taxi, a jeep or a sports car. Thus UG$^2$ organizes the objects into high-level classes that encompass multiple ImageNet synsets (ImageNet provides images for ``synsets'' or ``synonym sets'' of words or phrases that describe a concept in WordNet \cite{fellbaum1998wordnet}). An exception to this rule is the Ground Video Collection: since we had more control over distances in this collection, fine-grained class distinctions were possible. For example, chainlink-fence and stone wall are separate classes in the ground collection and not combined together to form the fence super-class. 
Over 70\% of the UG$^2$ classes have more than 400 images and over 60\% of the classes are present in the imagery of at least 2 collections (see Fig.~\ref{fig:sharedClassDistrib} for a detailed overview of the distribution of shared classes). Around 20\% of the classes are present in all three collections. \begin{figure}[!htb] \centering \includegraphics[width=0.5\textwidth]{figures/ClassDistrib_PieChart2.png} \caption{Distribution of annotated images belonging to classes shared by at least two different UG$^2$ collections.} \label{fig:sharedClassDistrib} \end{figure} \section{The UG$^2$ Classification Protocols} \label{ClassifMethod} In order to establish good baselines for classification performance before and after the application of image enhancement and restoration algorithms, we used a selection of common deep learning approaches to recognize annotated objects and then used the correct classification rate as a measure of classification performance, both at baseline and after pre-processing the input images. Namely, we used the Keras~\cite{chollet2015keras} versions of the pre-trained networks VGG16 \& VGG19~\cite{VGG:2014}, Inception V3~\cite{Inception:2015}, and ResNet50~\cite{ResNet50:2015}. These experiments also serve as a demonstration of the UG$^2$ classification protocols. Each candidate restoration or enhancement algorithm should be treated as an image pre-processing step to prepare data to be submitted to all four networks. The entirety of the annotated data for all three collections is used for the baseline experiments, with the exception of the pedestrian and resolution chart classes, as they are not members of any synsets recognized by the pre-trained networks. \subsection{Classification Metrics} \label{ClassifMethod:Metrics} The networks used for the UG$^2$ classification task return a list of the ILSVRC synsets along with the probability of the object belonging to each of the synset classes. 
However, given the nature of our dataset, and as we discussed in Sec.~\ref{Dataset:DataDistribution}, in some cases it is impossible to provide a fine-grained labeling for the annotated objects; consequently, most of the super-classes we defined for UG$^2$ are composed of more than one ILSVRC synset. That is, each annotated image \(i\) has a single super-class label \(L_i\) which in turn is composed of a set of ILSVRC synsets \(L_i = \{s_1, s_2, ..., s_n\}\). To measure classification accuracy, we observe the number of correctly identified synsets in the top 1 and 5 predictions made by each pre-trained network. A synset is considered to be correctly classified if it belongs to the set of synsets in the ground-truth super-class label. We use two metrics to measure the accuracy of the networks. For the first one (1C), we measure the rate of detection of at least one correctly classified synset class. In other words, for a super-class label \(L_i = \{s_1, s_2, ..., s_n\}\), a network must detect at least one correctly classified synset among its top 1 and top 5 predictions. For the second metric (AC), we measure the rate of detecting all the possible correct synset classes in the super-class label synset set. In other words, for a super-class label \(L_i = \{s_1, s_2, s_3\}\), a network must detect 1 correct synset within its top 1 prediction and all 3 correct synsets within its top 5 predictions. \section{Baseline Enhancement and Restoration Techniques} \label{Algorithms} To shed light on the effects image restoration and enhancement algorithms have on classification, we tested classic and state-of-the-art algorithms designed to improve or recover image details by means of image interpolation~\cite{Keys:1981:Interpolation}, super-resolution~\cite{Chao:2014:SRCNN, Kim:2016:VDSR}, or deblurring~\cite{Nah:2016:DSDD, Su:2016:DBN, Pan:2016:DCPD} (see Fig.~\ref{fig:retouched-samples} for examples). Below we provide a brief explanation of each method. 
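To make the 1C and AC definitions of Sec.~\ref{ClassifMethod:Metrics} concrete, the following minimal Python sketch evaluates both metrics over ranked predictions. The function and variable names are ours for illustration and do not correspond to any released evaluation code.

```python
def one_correct(label_synsets, predictions, k=5):
    """1C: at least one of the top-k predicted synsets is in the label set."""
    return any(p in label_synsets for p in predictions[:k])

def all_correct(label_synsets, predictions, k=5):
    """AC: the top-k predictions contain every recoverable correct synset.
    At most k synsets fit in the top k, so a label of n synsets needs
    min(n, k) hits (e.g., 1 hit at top-1, all 3 at top-5 for a 3-synset label)."""
    hits = sum(1 for p in predictions[:k] if p in label_synsets)
    return hits >= min(len(label_synsets), k)

def rates(dataset, k=5):
    """Fraction of (label_set, ranked_predictions) pairs passing each metric."""
    n = len(dataset)
    c1 = sum(one_correct(l, p, k) for l, p in dataset) / n
    ac = sum(all_correct(l, p, k) for l, p in dataset) / n
    return c1, ac
```

For a super-class such as the hypothetical car label $\{s_1, s_2\}$, a top-5 list containing only $s_1$ passes 1C but fails AC, which is the intended distinction between the two rates.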
\subsection{Resolution Enhancement} \label{Algorithms:Resolution} \textbf{Interpolation methods.} Though simple, classic image interpolation techniques can be used as a pre-processing step to obtain high resolution information content from low resolution images. These methods attempt to obtain a high resolution image by up-sampling the source image (usually assuming the source image is a down-sampled version of the high resolution one) and by providing the best approximation of a pixel's color and intensity values depending on the nearby pixels. Since they do not need any prior training, they can be directly applied to any image. Nearest neighbor interpolation assigns each output pixel the value of the nearest source pixel. Bilinear interpolation instead computes a weighted average over the nearest $2\times2$ neighborhood of source pixels, and bicubic interpolation extends this to the nearest $4\times4$ neighborhood. \textbf{SRCNN.} The Super-Resolution Convolutional Neural Network (SRCNN)~\cite{Chao:2014:SRCNN} introduced deep learning techniques to super-resolution. The method employs a feed-forward deep CNN to learn an end-to-end mapping between low resolution and high resolution images. The network consists of three layers with specific functions and different filter sizes: patch extraction and aggregation with 64 filters of size $9\times9$; non-linear mapping with 32 filters of size $1\times1$; and a reconstruction layer with a filter size of $5\times5$. The filter weights are randomly initialized by sampling from a Gaussian distribution with zero mean and a standard deviation of 0.001. Dong \etal used Mean Squared Error as the loss function and minimized it by using stochastic gradient descent with standard backpropagation. The network was trained on 5 million ``sub-images'' generated from 395,909 images of the ILSVRC 2013 ImageNet detection training partition~\cite{ILSVRC15}. The network directly outputs a high-resolution image from the low resolution image. 
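The classic interpolation schemes described at the start of this subsection can be sketched directly in NumPy for grayscale images (a toy illustration under integer scale factors; production resizers add edge handling and anti-aliasing):

```python
import numpy as np

def upsample_nearest(img, scale):
    """Nearest neighbor: each output pixel copies the closest source pixel."""
    rows = np.arange(img.shape[0] * scale) // scale
    cols = np.arange(img.shape[1] * scale) // scale
    return img[rows][:, cols]

def upsample_bilinear(img, scale):
    """Bilinear: each output pixel is a distance-weighted average of the
    nearest 2x2 neighborhood of source pixels."""
    h, w = img.shape
    ys = (np.arange(h * scale) + 0.5) / scale - 0.5   # source coordinates
    xs = (np.arange(w * scale) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0, 1)[:, None]              # fractional offsets
    wx = np.clip(xs - x0, 0, 1)[None, :]
    top = (1 - wx) * img[y0][:, x0] + wx * img[y0][:, x1]
    bot = (1 - wx) * img[y1][:, x0] + wx * img[y1][:, x1]
    return (1 - wy) * top + wy * bot
```

Bicubic interpolation follows the same pattern with cubic weights over a $4\times4$ neighborhood; we omit it here for brevity.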
Typically, the results obtained from SRCNN can be distinguished from their low resolution counterparts by their sharper edges without visible artifacts. To preserve spatial integrity, the test images are zero-padded before processing. \textbf{VDSR.} The Very Deep Super Resolution (VDSR) algorithm~\cite{Kim:2016:VDSR} aims to outperform SRCNN by employing a deeper CNN inspired by the VGG architecture~\cite{VGG:2014}. It also decreases training iterations and time by employing residual learning with a very high learning rate for faster convergence. At the design level, the network has 20 weight layers, where each layer except the first and the last consists of 64 filters of size $3\times3\times64$. The first layer operates on the input and the last layer is used for reconstruction. Kim \etal solve the problems related to the high learning rate (\eg, exploding gradients) through image residual learning and adaptive gradient clipping. \section{\vspace{-1.8mm}Introduction} \label{Introduction} \begin{figure}[t] \begin{center} \includegraphics[width=0.4\textwidth]{graphics/teaser.png} \end{center}\vspace{-5mm} \caption{(Top) In principle, image restoration and enhancement techniques should improve visual recognition performance by creating higher quality inputs for recognition models. This is the case when a Super Resolution Convolutional Neural Network~\cite{Chao:2014:SRCNN} is applied to the image in this panel. (Bottom) In practice, we often see the opposite effect --- especially when new artifacts are unintentionally introduced, as in this application of Deep Deblurring~\cite{Su:2016:DBN}. 
We describe a new video dataset (Sec.~\ref{Dataset}) for the study of problems with algorithm and data interplay (Sec.~\ref{Results}) like this one.} \label{fig:Teaser} \vspace{-5mm} \end{figure} To build a visual recognition system for any application these days, one's first inclination is to turn to the most recent machine learning breakthrough from the area of deep learning, which no doubt has been enabled by access to millions of training images from the Internet. But there are many circumstances where such an approach cannot be used as an off-the-shelf component to assemble the system we desire, because even the largest training dataset does not take into account all of the artifacts that can be experienced in the environment. As computer vision pushes further into real-world applications, what should a software system that can interpret images from sensors placed in any unrestricted setting actually look like? First, it must incorporate a set of algorithms, drawn from the areas of computational photography and machine learning, into a processing pipeline that corrects and subsequently classifies images across time and space. Image restoration and enhancement algorithms that remove corruptions like blur, noise, and mis-focus, or manipulate images to gain resolution, change perspective, and compensate for lens distortion are now commonplace in photo editing tools. Such operations are necessary to improve the quality of raw images that are otherwise unacceptable for recognition purposes. But they must be compatible with the recognition process itself, and not adversely affect feature extraction or classification (Fig.~\ref{fig:Teaser}). Remarkably, little thought has been given to image restoration and enhancement algorithms for visual recognition --- the goal of computational photography thus far has simply been to make images look appealing after correction~\cite{zeyde2010single,bevilacqua2012low,timofte2014,huang2015single,Su:2016:DBN}. 
It remains unknown what impact many transformations have on visual recognition algorithms. To begin to answer that question, exploratory work is needed to find out which image pre-processing algorithms, in combination with the strongest features and supervised machine learning approaches, are promising candidates for different problem domains. One popular problem that contains imaging artifacts typically not found in computer vision datasets crawled from the web is the interpretation of images taken by aerial vehicles~\cite{tomic2012toward,reilly2013shadow}. This task is a key component of a number of applications including robotic navigation, scene reconstruction, scientific study, entertainment, and visual surveillance. Images captured from aerial vehicles tend to present a wide variety of artifacts and optical aberrations. These can be the product of weather and scene conditions (\eg, rain, light refraction, smoke, flare, glare, occlusion), movement (\eg, motion blur), or the recording equipment (\eg, video compression, sensor noise, lens distortion, mis-focus). How do we begin to address the need for image restoration and enhancement algorithms that are compatible for visual recognition in a scenario like this one? In this paper, we propose the use of a benchmark dataset captured in realistic settings where image artifacts are common, as opposed to more typical imagery crawled from social media and web-based photo albums. Given the current popularity of aerial vehicles within computer vision, we suggest the use of data captured by UAVs and manned gliders as a relevant and timely challenge problem. But this can't be the only data we consider, as we do not have knowledge of certain scene parameters that generated the artifacts of interest. Thus we suggest that ground-based video with controlled scene parameters and target objects is also essential. 
And finally, we need a set of protocols for evaluation that move away from a singular focus on perceived image quality to include classification performance. By combining all of these elements, we have created a new dataset dubbed UG$^2$ (UAV, Glider, and Ground), which consists of hundreds of videos and over 150,000 annotated frames spanning hundreds of ImageNet classes~\cite{ILSVRC15}. In summary, the contributions of this paper are: \vspace{-3mm} \begin{itemize}[noitemsep] \item A new video benchmark dataset representing both ideal conditions and common aerial image artifacts, which we make available to facilitate new research and to simplify the reproducibility of experimentation\footnote{See the Supplemental Material for example videos from the dataset. The dataset can be accessed at: \url{https://goo.gl/AjA6En}.}. \item An extensive evaluation of the influence of image aberrations and other problematic conditions on common object recognition models including VGG16 and VGG19~\cite{VGG:2014}, InceptionV3~\cite{Inception:2015}, and ResNet50~\cite{ResNet50:2015}. \item An analysis of the impact and suitability of basic and state-of-the-art image and video processing algorithms used in conjunction with common object recognition models. In this work, we look at deblurring~\cite{Su:2016:DBN, Nah:2016:DSDD}, image interpolation, and super-resolution~\cite{Chao:2014:SRCNN, Kim:2016:VDSR}. \end{itemize} \vspace{-4mm} \section{\vspace{-1.8mm}Related work} \label{Related-work} \textbf{Datasets.} The areas of image restoration and enhancement have a long history in computational photography, with associated benchmark datasets that are mainly used for the qualitative evaluation of image appearance. These include very small test image sets such as Set5~\cite{bevilacqua2012low} and Set14~\cite{zeyde2010single}, and the set of blurred images introduced by Levin \etal~\cite{levin2009understanding}. 
Datasets containing more diverse scene content have been proposed including Urban100~\cite{huang2015single} for enhancement comparisons and LIVE1~\cite{sheikh2006statistical} for image quality assessment. While not originally designed for computational photography, the Berkeley Segmentation Dataset has been used by itself~\cite{huang2015single} and in combination with LIVE1~\cite{yang2014single} for enhancement work. The popularity of deep learning methods has increased demand for training and testing data, which Su \etal provide as video content for deblurring work~\cite{Su:2016:DBN}. Importantly, none of these datasets were designed to combine image restoration and enhancement with recognition for a unified benchmark. Most similar to the dataset we introduce in this paper are various large-scale video surveillance datasets, especially those which provide a ``fixed" overhead view of urban scenes~\cite{CAVIAR:Dataset:2004, grgic2011scface, CUHK:Dataset:2014, TISI:Dataset:2013}. However, these datasets are primarily meant for other research areas (\eg, event/action understanding, video summarization, face recognition) and are ill-suited for object recognition tasks, even if they share some common imaging artifacts that impair recognition as a whole. With respect to data collected by aerial vehicles, the VIRAT Video Dataset~\cite{Virat:Dataset:2011} contains ``realistic, natural and challenging (in terms of its resolution, background clutter, diversity in scenes)" imagery for event recognition. Other datasets including aerial imagery are the UCF Aerial Action Data Set~\cite{UCFAA}, UCF-ARG~\cite{UCFARG}, UAV123~\cite{mueller2016benchmark}, and the multi-purpose dataset introduced by Yao \etal~\cite{LHI:Dataset:2007}. As with the computational photography datasets, none of these sets have specific protocols for image restoration and enhancement coupled with object recognition. 
\begin{table} \centering \subimport{./figures/}{DatasetSummary.tex} \vspace{-3mm} \caption{Summary of the UG$^2$ dataset (See Supp. Tables~3 and 4 for a detailed breakdown of these conditions).} \label{tab:dataset_summary} \vspace{-6mm} \end{table} \textbf{Restoration and Enhancement for Recognition.} In this paper we consider the image restoration technique of deblurring, where the objective is to recover a sharp version $x'$ of a blurry image $y$ without knowledge of the blur parameters. When considering motion, the original sharp image $x$ is convolved with a blur kernel $k$: $y = k \ast x$~\cite{levin2011efficient}. Accordingly, the sharp image can be recovered through deconvolution~\cite{levin2007image, levin2009understanding, joshi2009image, levin2011natural} (see \cite{wang2014recent} for a comprehensive list of deconvolution techniques for deblurring) or methods that use multi-image aggregation and fusion~\cite{law2006lucky, matsushita2006full, cho2009fast}. Intuitively, if an image has been corrupted by blur, then deblurring should improve performance of recognizing objects in the image. An early attempt at unifying a high-level task like object recognition with a low-level task like deblurring was the Deconvolutional Network~\cite{zeiler2010deconvolutional, zeiler2011adaptive}. Additional work has been undertaken in face recognition~\cite{yao2008improving,nishiyama2009facial,zhang2011close}. In this work, we look at deep learning-based deblurring techniques~\cite{Nah:2016:DSDD, Su:2016:DBN} and a basic blind deconvolution method~\cite{kundur1996blind}. With respect to enhancement, we focus on the specific technique of single image super-resolution, where an attempt is made at estimating a high-resolution image $x$ from a single low-resolution image $y$. 
The relationship between these images can be modeled as a linear transformation $y = Ax + n$, where $A$ is a matrix that encodes the processes of blurring and downsampling, and $n$ is a noise term~\cite{efrat2013accurate}. A number of super-resolution techniques exist including sparse representation~\cite{yang2008image, yang2010image}, Nearest Neighbor approaches~\cite{freeman2002example}, image priors~\cite{efrat2013accurate}, local self examples~\cite{freedman2011image}, neighborhood embedding~\cite{timofte2013anchored}, and deep learning \cite{Chao:2014:SRCNN,Kim:2016:VDSR}. Super-resolution can potentially help in object recognition by amplifying the signal of the target object to be recognized. Thus far, such a strategy has been limited to research in face recognition~\cite{lin2005face, lin2007super, hennings2008simultaneous, yu2011face, huang2011super, uiboupin2016facial, rasti2016convolutional} and person re-identification~\cite{jing2015super} algorithms for video surveillance data. Here we look at simple interpolation methods~\cite{Keys:1981:Interpolation} and deep learning-based super-resolution \cite{Chao:2014:SRCNN, Kim:2016:VDSR}. \vspace{-3mm} \section{\vspace{-1.8mm}The UG$^2$ Dataset} \label{Dataset} UG$^2$ is composed of $289$ videos with $1,217,496$ frames, representing $228$ ImageNet~\cite{ILSVRC15} classes extracted from annotated frames from three different video collections (see Supp. Table~2 for the complete list of classes). These classes are further categorized into 37 super-classes encompassing visually similar ImageNet categories and two additional classes for pedestrian and resolution chart images (this distribution is explained in detail below). 
The three different video collections consist of: (1) 50 Creative Commons tagged videos taken by fixed-wing unmanned aerial vehicles (UAV) obtained from YouTube; (2) $61$ videos recorded by pilots of fixed wing gliders; and (3) $178$ controlled videos captured on the ground specifically for this dataset. Table~\ref{tab:dataset_summary} presents a summary of the dataset and Fig.~\ref{fig:datasets-samples} presents example frames from each of the collections. \begin{figure}[t] \centering \begin{subfigure}{0.40\textwidth} \centering \includegraphics[width=0.40\textwidth]{graphics/YT_AV101_6852.jpg} \includegraphics[width=0.40\textwidth]{graphics/YT_AV56_1104.jpg} \caption{UAV Collection} \end{subfigure}\vskip 0mm \begin{subfigure}{0.40\textwidth} \centering \includegraphics[width=0.40\textwidth]{graphics/K_KV35_9871.jpg} \includegraphics[width=0.40\textwidth]{graphics/K_KV61_19797.jpg} \caption{Glider Collection} \end{subfigure}\vskip 0mm \begin{subfigure}{0.40\textwidth} \centering \includegraphics[width=0.40\textwidth]{graphics/G_car30140sungopro_398.jpg} \includegraphics[width=0.40\textwidth]{graphics/G_lake50sunson_597.jpg} \caption{Controlled Ground Collection} \end{subfigure} \vspace{-2mm} \caption{Examples of images in the three UG$^2$ collections.} \label{fig:datasets-samples} \vspace{-6mm} \end{figure} Furthermore, the dataset contains a subset of $159,464$ object-level annotated images. Bounding boxes establishing object regions were manually determined using the Vatic tool for video annotation \cite{Vatic:2013}. Each annotation in the dataset indicates the position, scale, visibility and super-class for an object in a video. This is useful for running classification experiments. For example, for the baseline experiments described in Sec.~\ref{Results}, the objects were cropped out from the frames in a square region of at least $224\times224$ pixels (a common input size for many deep learning-based recognition models), using the annotations as a guide. 
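A minimal sketch of the cropping rule just described, assuming annotations of the form (x, y, width, height) in frame coordinates; the centering choice and function names are our assumptions for illustration, not the released annotation tooling:

```python
def crop_region(x, y, w, h, frame_w, frame_h, min_side=224):
    """Return (left, top, side) of a square crop covering the annotated box.

    The crop is at least min_side x min_side (so small objects pick up
    surrounding background), centered on the object, and shifted as needed
    to stay inside the frame.
    """
    side = max(w, h, min_side)
    side = min(side, frame_w, frame_h)        # cannot exceed the frame
    cx, cy = x + w / 2.0, y + h / 2.0         # center the crop on the object
    left = int(min(max(cx - side / 2.0, 0), frame_w - side))
    top = int(min(max(cy - side / 2.0, 0), frame_h - side))
    return left, top, side
```

For example, a $50\times40$ box near the top-left corner of a $1280\times720$ frame yields a $224\times224$ crop clamped to the frame boundary, while a $400\times300$ box yields a $400\times400$ crop with no padding needed.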
Videos are also tagged to indicate problematic conditions. \textbf{UAV Video Collection.} This UG$^2$ collection consists of video recorded from small UAVs in both rural and urban areas. The videos in this collection are open source content tagged with a Creative Commons license, obtained from the YouTube video sharing site. Because of the source, they have different video resolutions (from $600\times400$ to $3840\times2026$) and frame rates (from $12$ FPS to $59$ FPS). This collection contains approximately $4$ hours of aerial video distributed across $50$ different videos. For this collection we observed $8$ different video artifacts and other problems: glare/lens flare, poor image quality, occlusion, over/under exposure, camera shaking and noise (present in some videos that use autopilot telemetry), motion blur, and fisheye lens distortion. Additionally, this collection contains videos with problematic weather/scene conditions such as night/low light video, fog, cloudy conditions, and occlusion due to snowfall. Overall it contains $434,264$ frames. Across a subset of these frames we observed $31$ different super-classes (including the non-ImageNet pedestrians class), from which we extracted $32,608$ object images. The cropped object images have a diverse range of sizes, from $224\times224$ to $800\times800$. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{figures/ClassDistrib_PieChart2.png} \caption{Distribution of annotated images belonging to classes shared by at least two different UG$^2$ collections.} \label{fig:sharedClassDistrib} \vspace{-6mm} \end{figure} \textbf{Glider Video Collection.} This UG$^2$ collection consists of video recorded by licensed pilots of fixed-wing gliders in both rural and urban areas. It contains approximately $7$ hours of aerial video, distributed across $61$ different videos.
The videos have frame rates ranging from $25$ FPS to $50$ FPS and different container formats such as MTS, MP4, and MOV. Given the nature of this collection, the videos mostly present imagery taken from thousands of feet above ground, further increasing the difficulty of object recognition tasks. Additionally, scenes of takeoff and landing contain artifacts such as motion blur, camera shaking, and occlusion (which in some cases is pervasive throughout the videos, showcasing parts of the glider that partially occlude the objects of interest). For the Glider Collection we observed $6$ different video artifacts and other problems: glare/lens flare, over/under exposure, camera shaking and noise, occlusion, motion blur, and fisheye lens distortion. Furthermore, this collection contains videos with problematic weather/scene conditions such as fog, clouds, and occlusion due to rain. Overall this collection contains $657,455$ frames. Across the annotated frames we observed $20$ different classes (including the non-ImageNet class of pedestrians), from which we extracted $31,760$ object images. The cropped object images have a diverse range of sizes, from $224\times224$ to $900\times900$. \textbf{Ground Video Collection.} In order to provide some ground-truth with respect to problematic image conditions, we performed a video collection on the ground that intentionally induced several common artifacts. One of the main challenges for object recognition within aerial images is the difference in the scale of certain objects compared to those in the images used to train the recognition model. To address this, we recorded video of static objects (\eg, flower pots, buildings) at a wide range of distances ($30$ft, $40$ft, $50$ft, $60$ft, $70$ft, $100$ft, $150$ft, and $200$ft).
In conjunction with the differing recording distances, we induced motion blur in images using an orbital shaker to generate horizontal movement at different rotations per minute ($120$rpm, $140$rpm, $160$rpm, and $180$rpm). Parallel to this, we recorded video under different weather conditions (sun, clouds, rain, snow) that could affect object recognition, and employed a Sony Bloggie hand-held camera (with $1280\times720$ resolution and a frame rate of 60 FPS) and a GoPro Hero 4 (with $1920\times1080$ resolution and a frame rate of 30 FPS), whose fisheye lens introduced further distortion. The Ground Collection contains approximately $40$ minutes of video, distributed across $178$ videos. Overall this collection represents $95,096$ annotated frames, and $28,009$ unannotated frames distributed across $42$ specific videos. The annotated frames contain $20$ different ImageNet classes. Furthermore, an additional class of videos showcases a $9\times11$ inch $9\times9$ checkerboard grid at all aforementioned distances and all intervals of rotation. The motivation for including this artificial class is to provide a reference with well-defined straight lines to assess the visual impact of image restoration and enhancement algorithms. The cropped object images have a diverse range of sizes, from $224\times224$ to $1000\times1000$. \textbf{Object Categories and Distribution of Data.} A challenge presented by the objects annotated in the UAV and Glider collections is the high variability of both object scale and rotation. These two factors make it difficult to differentiate some of the more fine-grained ImageNet categories. For example, while it may be easy to recognize a car from an aerial picture taken from hundreds (if not thousands) of feet above the ground, it might be impossible to determine whether that car is a taxi, a jeep or a sports car.
An exception to this rule is the Ground Collection, where greater control over the distance to the target made fine-grained class distinctions possible. For example, chainlink-fence and bannisters are separate classes in the Ground Collection and are not combined to form a fence super-class. Thus, UG$^2$ organizes the objects into high-level classes that encompass multiple ImageNet synsets (ImageNet provides images for ``synsets'' or ``synonym sets'' of words or phrases that describe a concept in WordNet~\cite{fellbaum1998wordnet}; for more detail on the relationship between UG$^2$ and ImageNet classes see Supp. Table~1). Over 70\% of the UG$^2$ classes have more than 400 images and 58\% of the classes are present in the imagery of at least two collections (Fig.~\ref{fig:sharedClassDistrib}). Around 20\% of the classes are present in all three collections. \vspace{-3mm} \section{\vspace{-1.8mm}The UG$^2$ Classification Protocols} \label{ClassifMethod} In order to establish good baselines for classification performance before and after the application of image enhancement and restoration algorithms, we used a selection of common deep learning approaches to recognize annotated objects and then considered the correct classification rate. Namely, we used the Keras~\cite{chollet2015keras} versions of the pre-trained networks VGG16 \& VGG19~\cite{VGG:2014}, Inception V3~\cite{Inception:2015}, and ResNet50~\cite{ResNet50:2015}. These experiments also serve as a demonstration of the UG$^2$ classification protocols. Each candidate restoration or enhancement algorithm should be treated as an image pre-processing step to prepare data to be submitted to all four networks, which serve as canonical classification references. The entirety of the annotated data for all three collections is used for evaluation, with the exceptions of the pedestrian and resolution chart classes, which do not belong to any synsets recognized by the networks.
For our current experiments, we restricted our analysis of the UG$^2$ dataset to pre-trained networks. Re-training the networks on our dataset will be considered in future work. With respect to restoration and enhancement approaches that must be trained, we suggest a cross-dataset protocol [48] where some annotated training data should come from outside UG$^2$. However, un-annotated videos are provided for additional validation purposes and parameter tuning. \textbf{Classification Metrics.} The networks used for the UG$^2$ classification task return a list of the ImageNet synsets along with the probability of the object belonging to each of the synset classes. However, given what we discussed in Sec.~\ref{Dataset}, in some cases it is impossible to provide a fine-grained labeling for the annotated objects. Consequently, most of the super-classes we defined for UG$^2$ are composed of more than one ImageNet synset. That is, each annotated image \(i\) has a single super-class label \(L_i\) which in turn is defined by a set of ImageNet synsets \(L_i = \{s_1, ..., s_n\}\). To measure accuracy, we observe the number of correctly identified synsets in the top 5 predictions made by each pre-trained network. A prediction is considered to be correct if its synset belongs to the set of synsets in the ground-truth super-class label. We use two metrics for this. The first measures the rate of detection of at least 1 correctly classified synset class. In other words, for a super-class label \(L_i = \{s_1, ..., s_n\}\), a network is able to detect 1 or more correctly classified synsets in the top 5 predictions. The second measures the rate of detecting all the possible correct synset classes in the super-class label synset set.
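These two metrics can be written down in a few lines. A minimal sketch (the synset IDs, function, and variable names here are ours for illustration, not from any released UG$^2$ code):

```python
def top5_metrics(top5_preds, super_class_synsets):
    """Evaluate one annotated image against its super-class label L_i.

    top5_preds: the 5 synset IDs predicted by a network, best first.
    super_class_synsets: the set of ImageNet synsets defining L_i.
    Returns (at_least_one, all_correct) as described in the text.
    """
    hits = set(top5_preds) & set(super_class_synsets)
    at_least_one = len(hits) >= 1
    # "All possible correct synsets": every synset of L_i that could fit
    # into the top 5 predictions must actually appear there.
    all_correct = len(hits) == min(len(super_class_synsets), 5)
    return at_least_one, all_correct
```

Averaging each flag over all annotated images yields the two rates reported in the experiments.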
For example, for a super-class label \(L_i = \{s_1, s_2, s_3\}\), a network is able to detect 3 correct synsets in the top 5 labels.\vspace{-4mm} \section{\vspace{-1.8mm}Baseline Enhancement and Restoration} \label{Algorithms} To shed light on the effects that image restoration and enhancement algorithms have on classification, we tested classic and state-of-the-art algorithms for image interpolation~\cite{Keys:1981:Interpolation}, super-resolution~\cite{Chao:2014:SRCNN, Kim:2016:VDSR}, and deblurring~\cite{Nah:2016:DSDD, Su:2016:DBN, Pan:2016:DCPD} (see Supp. Fig.~1 for examples). \textbf{Interpolation methods.} These classic methods attempt to obtain a high resolution image by up-sampling the source image (usually assuming the source image is a down-sampled version of the high resolution one) and by approximating each output pixel's color and intensity values from the nearby input pixels. Since they do not need any prior training, they can be directly applied to any image. Nearest neighbor interpolation assigns each output pixel the value of the single nearest input pixel. Bilinear interpolation instead computes a weighted average over the nearest $2\times2$ neighborhood of input pixels, and bicubic interpolation extends this to a $4\times4$ neighborhood. \textbf{SRCNN.} The Super-Resolution Convolutional Neural Network (SRCNN)~\cite{Chao:2014:SRCNN} introduced deep learning techniques to super-resolution. The method employs a feedforward deep CNN to learn an end-to-end mapping between low resolution and high resolution images. The network was trained on 5 million ``sub-images'' generated from 395,909 images of the ILSVRC 2013 ImageNet detection training partition~\cite{ILSVRC15}. Typically, the results obtained from SRCNN can be distinguished from their low resolution counterparts by their sharper edges without visible artifacts.
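The classic interpolation baselines above amount to a few lines of array code. A minimal sketch of $\times 2$ upsampling for a grayscale array (illustrative only, not the exact resizing routines used in the experiments; bicubic is omitted but follows the same pattern with a $4\times4$ neighborhood):

```python
import numpy as np

def upsample_nearest(img, s=2):
    """Nearest neighbor: each output pixel copies the closest input pixel
    (for an integer scale factor this is a simple repeat)."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def upsample_bilinear(img, s=2):
    """Bilinear: each output pixel is a weighted average of the 2x2
    neighborhood of input pixels around its back-projected position."""
    h, w = img.shape
    # Back-project output pixel centers into input coordinates.
    ys = (np.arange(h * s) + 0.5) / s - 0.5
    xs = (np.arange(w * s) + 0.5) / s - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)]
            + (1 - wy) * wx * img[np.ix_(y0, x1)]
            + wy * (1 - wx) * img[np.ix_(y1, x0)]
            + wy * wx * img[np.ix_(y1, x1)])
```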
\textbf{VDSR.} The Very Deep Super Resolution (VDSR) algorithm~\cite{Kim:2016:VDSR} aims to outperform SRCNN by employing a deeper CNN inspired by the VGG architecture~\cite{VGG:2014}. It also decreases training iterations and time by employing residual learning with a very high learning rate for faster convergence. The VDSR network was trained on 291 images, collectively taken from Yang \etal \cite{yang2010image} and the Berkeley Segmentation Dataset \cite{martin2001database}. Unlike SRCNN, the network is capable of handling different scale factors. A good image processed by VDSR is characterized by well-defined contours and a lack of edge effects at the borders. \textbf{Basic Blind Deconvolution.} The goal of any deblurring algorithm is to attempt to remove blur artifacts (\ie, the products of motion or depth variation, either from the object or the camera) that degrade image quality. This can be as simple as employing Matlab's blind deconvolution algorithm \cite{kundur1996blind}, which deconvolves the image using the maximum likelihood algorithm, with a $3\times3$ array of 1s as the initial point spread function. \textbf{Deep Video Deblurring.} The Deep Video Deblurring algorithm~\cite{Su:2016:DBN} was designed to address camera shake blur. However, in the results presented by Su \etal the algorithm also obtained good results for other types of blur, such as motion blur. This algorithm employs a CNN that was trained with video frames containing synthesized motion blur such that it receives a stack of neighboring frames and returns a deblurred frame. The algorithm allows for three types of frame-to-frame alignment: no alignment, optical flow alignment, and homography alignment. For our experiments we used optical flow alignment, which was reported to have the best performance with this algorithm. 
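Stepping back to the simplest of these baselines: Matlab's blind deconvolution routine is built around Richardson-Lucy-type maximum-likelihood updates. A minimal sketch of the non-blind core of such an update, assuming circular boundaries and a normalized, known PSF (the blind variant additionally alternates a similar update for the PSF itself; all names and parameter choices here are ours):

```python
import numpy as np

def fft_conv(a, kf):
    """Circular 2-D convolution of array `a` with a precomputed kernel FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * kf))

def richardson_lucy(y, psf, n_iter=50, eps=1e-12):
    """Non-blind Richardson-Lucy maximum-likelihood deconvolution.

    y: non-negative blurry image; psf: non-negative kernel summing to 1.
    """
    r, c = psf.shape
    kpad = np.zeros_like(y)
    kpad[:r, :c] = psf
    # Center the kernel on the origin so convolution introduces no shift.
    kpad = np.roll(kpad, (-(r // 2), -(c // 2)), axis=(0, 1))
    kf = np.fft.fft2(kpad)
    x = np.full_like(y, y.mean())             # flat, positive initial estimate
    for _ in range(n_iter):
        est = fft_conv(x, kf)                 # predicted blurry image
        ratio = y / np.maximum(est, eps)
        x = x * fft_conv(ratio, np.conj(kf))  # correlate, then reweight
    return x
```

The multiplicative form keeps the estimate non-negative and preserves total intensity, which is one reason this family of updates is widely used for restoration.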
\textbf{Deep Dynamic Scene Deblurring.} Similarly, the Deep Dynamic Scene Deblurring algorithm~\cite{Nah:2016:DSDD} utilizes deep learning in order to remove motion blur. Nah \etal implement a multi-scale CNN to restore blurred images in an end-to-end manner without assuming or estimating a blur kernel model. The network was trained using blurry images generated by averaging sequences of sharp frames captured with high speed cameras in dynamic scenes, taking gamma correction into account. Given that this algorithm was computationally expensive, we applied it directly to the cropped object regions, rather than to the full video frame. \vspace{-3mm} \section{\vspace{-1.8mm}UG$^2$ Baseline Results and Analysis} \label{Results} \textbf{Original Classification Results.} Fig.~\ref{fig:graph:Baseline_Comp} depicts the baseline classification results for the UG$^2$ collections, without any pre-processing, at rank 5 (results for top 1 predictions can be found in Supp. Figs.~2-4). Overall we observed distinct differences between the results for all three collections, particularly between the airborne collections (UAV and Glider collections) and the Ground Collection. These results establish that common deep learning networks alone cannot achieve good classification rates for this dataset.
\begin{figure}[t] \centering \scalebox{.75}{\subimport{./figures/}{BaselineComparison.tex}} \caption{Classification rates at rank 5 for the original, unprocessed, frames for each collection in the dataset.} \label{fig:graph:Baseline_Comp} \vspace{-4mm} \end{figure} \begin{figure*}[!ht] \begin{subfigure}{0.33\textwidth} \scalebox{.7}{\subimport{./figures/}{YoutubeInterpClassif.tex}} \caption{\vspace{-2.2mm}} \label{fig:classR5_interp_yt} \end{subfigure} \begin{subfigure}{0.33\textwidth} \scalebox{.7}{\subimport{./figures/}{KawaInterpClassif.tex}} \caption{\vspace{-2.2mm}} \label{fig:classR5_interp_glider} \end{subfigure} \begin{subfigure}{0.33\textwidth} \scalebox{.7}{\subimport{./figures/}{GroundInterpClassif.tex}} \caption{\vspace{-2.2mm}} \label{fig:classR5_interp_ground} \end{subfigure} \caption{\vspace{-2.2mm}Comparison of classification rates at rank 5 for each collection after applying resolution enhancement.} \label{fig:classR5_res} \end{figure*} Given the very poor quality of its videos, the UAV Collection turned out to be the most challenging in terms of object classification, obtaining the lowest classification performance out of the three collections. While the Glider Collection shared similar problematic conditions with the UAV Collection, we found that the images in this collection had a slightly higher classification rate than those in the UAV Collection in terms of identifying at least one correctly classified synset class. This improvement might be caused by the limited degree of movement of the gliders, since it ensured that the movement between frames was kept more stable over time, and by the camera's recording quality. The controlled Ground Collection yielded the highest classification rates, which, in an absolute sense, are still low. 
\begin{figure*}[ht] \begin{subfigure}{0.33\textwidth} \scalebox{.7}{\subimport{./figures/}{YoutubeDeblClassif.tex}} \caption{\vspace{-2.2mm}} \label{fig:classR5_debl_yt} \end{subfigure} \begin{subfigure}{0.33\textwidth} \scalebox{.7}{\subimport{./figures/}{KawaDeblClassif.tex}} \caption{\vspace{-2.2mm}} \label{fig:classR5_debl_glider} \end{subfigure} \begin{subfigure}{0.33\textwidth} \scalebox{.7}{\subimport{./figures/}{GroundDeblClassif.tex}} \caption{\vspace{-2.2mm}} \label{fig:classR5_debl_ground} \end{subfigure} \caption{Comparison of classification rates at rank 5 for each collection after applying deblurring.}\vspace{-3mm} \label{fig:classR5_debl} \end{figure*} \textbf{Effect of Restoration and Enhancement.} Ideally, image restoration and enhancement algorithms should help object recognition by improving poor quality images and should not impair it for good quality images. To test this assumption for the algorithms described in Sec.~\ref{Algorithms}, we used them to pre-process the annotated video frames of UG$^2$ and then proceeded to re-crop the objects of interest using the annotated bounding box information (as described in Sec.~\ref{Dataset}). Given that the scale of the images enhanced with the interpolation algorithms was doubled, the bounding boxes were scaled accordingly in those cases. Furthermore, the cropped object images were re-sized to $224\times224$ (input for VGG16, VGG19 and ResNet50) and $299\times299$ (input for Inception V3) during the classification experiments. See Supp. Tables~5-8, 9-13, and 13-16 for detailed breakdowns of the results for what follows.
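The re-cropping and re-sizing just described can be sketched in a few lines (a simplified stand-in for the actual pipeline, with all names ours: real experiments would handle color channels, objects near frame borders, and use a proper resampling filter instead of nearest neighbor):

```python
import numpy as np

def scale_box(box, factor):
    """Rescale an annotation box (x, y, w, h) when its frame is upsampled."""
    x, y, w, h = box
    return (x * factor, y * factor, w * factor, h * factor)

def crop_and_resize(frame, box, out=224):
    """Crop a square region of at least out x out pixels around the box,
    then resize it to out x out with nearest-neighbor sampling."""
    x, y, w, h = box
    side = max(w, h, out)                    # square side, at least out
    cx, cy = x + w // 2, y + h // 2          # box center
    x0 = int(np.clip(cx - side // 2, 0, frame.shape[1] - side))
    y0 = int(np.clip(cy - side // 2, 0, frame.shape[0] - side))
    crop = frame[y0:y0 + side, x0:x0 + side]
    idx = (np.arange(out) * side) // out     # nearest-neighbor index map
    return crop[np.ix_(idx, idx)]
```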
\begin{figure*}[t] \begin{subfigure}{0.3\textwidth} \scalebox{.6}{\subimport{./figures/}{GroundWeathClassif.tex}} \vspace{2mm} \caption{Impact of different weather conditions on the baseline Ground Collection.} \label{fig:classR5_weath_base} \end{subfigure} \hspace{4.5mm} \begin{subfigure}{0.3\textwidth} \scalebox{.6}{\subimport{./figures/}{GroundWeathClassif_Interp.tex}} \caption{Impact of resolution enhancement techniques.} \label{fig:classR5_weath_res} \end{subfigure} \hspace{4.5mm} \begin{subfigure}{0.3\textwidth} \scalebox{.6}{\subimport{./figures/}{GroundWeathClassif_Debl.tex}} \caption{Impact of deblurring techniques.} \label{fig:classR5_weath_deb} \end{subfigure} \caption{Comparison of classification rates for different weather conditions at rank 5 for the Ground Collection. To simplify the analysis, each point represents the performance when considering the output of all four networks simultaneously.} \label{fig:classR5_weath} \vspace{-6mm} \end{figure*} As can be observed in Figs.~\ref{fig:classR5_res} and~\ref{fig:classR5_debl}, the behaviour of the resolution enhancement and deblurring algorithms is different between the airborne and Ground collections. For the most part, both types of algorithms tended to improve the rate of identification for at least one correct class for all of the networks for the UAV and Glider collections (Figs.~\ref{fig:classR5_interp_yt},~\ref{fig:classR5_interp_glider},~\ref{fig:classR5_debl_yt},~and \ref{fig:classR5_debl_glider}). Over 60\% of the experiments reported an increase in the correct classification rate compared to that of the baseline. Conversely, for the Ground Collection, the restoration and enhancement algorithms seemed to impair the classification for all networks (Figs.~\ref{fig:classR5_interp_ground} and~\ref{fig:classR5_debl_ground}), going as far as reducing the at least one class identification performance by more than 16\% for some experiments. 
More than 60\% of the experiments reported a decrease in the classification rate for the Ground Collection. The property of hurting recognition performance on good quality imagery is certainly undesirable in these cases. Further along these lines, while the classification rate for at least one correct class was increased for the airborne collections after employing enhancement techniques, the classification rate for finding all possible sub-classes in the super-class was negatively impacted for all three collections. Between 53\% and 68\% of the experiments reported a decrease in this metric, and this behaviour was more prevalent for the deblurring algorithms. For the UAV and Glider collections 75\% and 92\% of the deblurring experiments respectively had a negative impact on the classification rate for finding all possible classes, while only 40\% and 45\% of the resolution enhancement experiments reported a negative impact for the same metric. We can also consider the performance with respect to individual networks. For the UAV (Fig.~\ref{fig:classR5_interp_yt}) and Glider (Fig.~\ref{fig:classR5_interp_glider}) Collections, SRCNN provided the best results for the VGG16 and VGG19 networks in both metrics, and was also the best in improving the rate of finding all the correct synsets of their respective super-class. Nevertheless, the best overall improvement for the rate of correctly classifying at least one class in both collections was achieved by employing the Dynamic Deep Deblurring algorithm, with an improvement of 8.96\% for the Inception network in the UAV case, and 6.5\% for the Glider case. Resolution enhancement algorithms dominated the classification rate improvement for the Ground Collection, where VDSR obtained the highest improvement in both metrics for the VGG16, VGG19 (the best with 3.56\% improvement), and Inception networks, while Bilinear interpolation achieved the highest improvement for the ResNet50 network.
In contrast, Blind Deconvolution drove down performance for almost all networks, more than any of the algorithms we tested. For the UAV Collection, Blind Deconvolution led to a decrease of up to 6.07\% in the rate of classifying at least 1 class correctly for the ResNet50 network. This behaviour was also observed for the Glider and Ground collections, where it led to the highest decreases in the classification rate of both metrics for all networks: 7\% for the ResNet50 network on the Glider Collection and 15.06\% for the VGG16 network on the Ground Collection. \textbf{Effect of Weather Conditions.} A significant contribution of our dataset is the availability of ground-truth for weather conditions in the Ground collection. Without any pre-processing applied to that set, the classification performance under different weather conditions varies widely (Fig.~\ref{fig:classR5_weath_base}). In particular, there was a stark contrast between the classification rates of video taken during rainy weather and video taken under any other condition, with rain causing the classification rate for both metrics to drop. Likewise, snowy weather presented a lower classification rate than cloudy or sunny weather, as it shares some of the problems of rainy video capture: adverse lighting conditions and partial object occlusion from the precipitation. Cloudy weather proved to be the most beneficial for image capture, as those videos lacked most of the problems of the other conditions. Sunny conditions fell short of that because of glare. This study confirms previously reported results for the impact of weather on recognition~\cite{Boult_2009_HRB}. We also analyzed the interplay between the different restoration and enhancement algorithms and different weather conditions (Figs.~\ref{fig:classR5_weath_res} and \ref{fig:classR5_weath_deb}; see Supp. Tables~17-20 for detailed results).
For this analysis we observed that resolution enhancement algorithms provided small benefits for both metrics. 50\% of the experiments improved the correct classification rate of at least one class, and 40.63\% improved the other metric. Again, resolution enhancement algorithms tended to provide the most improvement. The highest improvement (3.36\% for the correct classification rate of at least one class) was achieved for sunny weather by the VDSR algorithm. Note that while classification for the more problematic weather conditions (rain, snow and sun) was improved, this was not the case for cloudy weather, where the original images were already of high quality. \vspace{-2mm} \section{\vspace{-1.8mm}Discussion} \label{Analysis} The results of our experiments led to some surprises. While the restoration and enhancement algorithms tended to improve the classification results for the diverse imagery included in our dataset, no approach was able to improve the results by a significant margin. Moreover, in some cases, performance degraded after image pre-processing, particularly for higher quality frames, making these kinds of pre-processing techniques unviable for heterogeneous datasets. We also noticed that different algorithms for the same type of image processing can have very different effects, as can different combinations of pre-processing and recognition algorithms. Depending on the metric considered, performance could be better or worse for various techniques. A possible reason is that most of these networks were trained on images having a single type of distortion, and hence fail on images with multiple distortions from heterogeneous sources. Significant improvement might be achieved by re-training the networks with UG$^2$; however, this needs further investigation and is left for future work. Thus, the UG$^2$ dataset should prove useful for studying these phenomena for quite some time to come.
UG$^2$ forms the core of a large prize challenge that will be announced in Fall 2017 and run from Spring to early Summer 2018. In this paper, we described one protocol that is part of that challenge. Several alternate protocols that are useful for research in this direction will also be included. For instance, we did not look at networks that combine feature learning, image enhancement and restoration, and classification. A protocol supporting this will be available. UG$^2$ can also be used for more traditional computational photography assessment (\ie, making the images look better), and this too will be supported. Stay tuned for more. \textbf{Acknowledgement} Funding was provided under IARPA contract \#2016-16070500002. This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA). The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. We thank Dr. Adam Czajka, visiting assistant professor at the University of Notre Dame and Mr. Sebastian Kawa for assistance with data collection. 
{\small \bibliographystyle{latex/ieee} \section{Examples of enhancement algorithm results on UG$^2$} \begin{figure}[!ht] \centering \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/Baseline_Boa_AV00101_7_7198.png} \caption{Baseline} \end{subfigure} \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/NN_Boa_AV00101_7_7198.png} \caption{Nearest neighbor} \end{subfigure} \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/Bilinear_Boa_AV00101_7_7198.png} \caption{Bilinear} \end{subfigure}\hskip 5mm \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/Bas_Res_chart_30_na_sunny_sony_0_99.png} \caption{Baseline} \end{subfigure} \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/NN_Res_chart_30_na_sunny_sony_0_99.png} \caption{Nearest neighbor} \end{subfigure} \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/Bil_Res_chart_30_na_sunny_sony_0_99.png} \caption{Bilinear} \end{subfigure}\vskip 2mm \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/Bicubic_Boa_AV00101_7_7198.png} \caption{Bicubic} \end{subfigure} \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/SRCNN_Boa_AV00101_7_7198.png} \caption{SRCNN} \end{subfigure} \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/VDSR_Boa_AV00101_7_7198.png} \caption{VDSR} \end{subfigure}\hskip 5mm \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/Bic_Res_chart_30_na_sunny_sony_0_99.png} \caption{Bicubic} \end{subfigure} \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/SRCNN_Res_chart_30_na_sunny_sony_0_99.png} \caption{SRCNN} \end{subfigure} \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/VDSR_Res_chart_30_na_sunny_sony_0_99.png} \caption{VDSR} \end{subfigure}\vskip 2mm \begin{subfigure}{0.15\textwidth} 
\includegraphics[width=\textwidth]{graphics/BD_Boa_AV00101_7_7198.png} \caption{Blind deconv.} \end{subfigure} \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/DD_Boa_AV00101_7_7198.png} \caption{Deep deblurring} \end{subfigure} \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/DDD_Boa_AV00101_7_7198.png} \caption{DDD} \end{subfigure} \hskip 5mm \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/BD_Res_chart_30_na_sunny_sony_0_99.png} \caption{Blind deconv.} \end{subfigure} \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/DD_Res_chart_30_na_sunny_sony_0_99.png} \caption{Deep deblurring} \end{subfigure} \begin{subfigure}{0.15\textwidth} \includegraphics[width=\textwidth]{graphics/DDD_Res_chart_30_na_sunny_sony_0_99.png} \caption{DDD} \end{subfigure} \caption{Examples of images processed with the methods analyzed. DDD stands for Dynamic Deep Deblurring. Best viewed in color and zoomed in.} \label{Supp-fig-retouched-samples} \end{figure} \pagebreak \section{ImageNet and UG$^2$ equivalencies} \begin{longtable}[c]{|p{.2\linewidth}|p{.2\linewidth}|p{.55\linewidth}|} \caption{Equivalencies between the UG$^2$ super-classes and ImageNet synsets. \textbf{(*)} These ImageNet classes are considered as super-classes for the Ground Collection classification, since we do have the fine-grained annotations for them. 
\textbf{(**)} These ImageNet classes are considered part of a super-class ``Bicycle" exclusive for the Ground Collection classification.} \label{Supp-tab:UG_INet_eqs} \hline \multicolumn{3}{| c |}{UG$^2$ - ImageNet equivalencies}\\ \hline \textbf{UG2 class} & \textbf{Synset ID} & \textbf{ImageNet class} \\ \hline \endfirsthead \hline \multicolumn{3}{|c|}{UG$^2$ - ImageNet equivalencies (Continuation)}\\ \hline \textbf{UG2 class} & \textbf{Synset ID} & \textbf{ImageNet class} \\ \hline \endhead \endfoot \hline \endlastfoot \subimport{./figures/}{Classes.tex} \end{longtable} \pagebreak \section{UG$^2$ Collection's details} ~ \begin{table}[H] \caption{Number of images per class in each collection} \label{Supp-tab:Imgs_per_class} \subimport{./figures/}{Classes_iperclass.tex} \end{table} ~ \begin{table}[H] \caption{Number of videos with problematic conditions in the airborne collections} \label{Supp-tab:Airborne_artifacts} \subimport{./figures/}{ProblematicConds_Airborne.tex} \end{table} ~ \begin{table}[H] \caption{Number of images with controlled and uncontrolled problematic conditions in the Ground collection} \label{Supp-tab:Ground_artifacts} \subimport{./figures/}{ProblematicConds_Ground.tex} \end{table} \pagebreak \section{Rank 5 classification details} \textbf{1C} stands for the rate of correct classification of at least one class \textbf{AC} stands for the rate of correct classification of all possible classes \subsection{UAV Collection} \begin{table}[H] \vspace{-3mm} \caption{Details for the UAV Collection's classification results at rank 5} \label{Supp-tab:YT_t5_overall_dets} \vspace{-3mm} \subimport{./figures/}{Youtube_classif_details.tex} \end{table} \begin{table}[H] \caption{Overall summary for the best and worst pre-processing algorithms for the UAV collection classification at rank 5\label{Supp-tab:YT_t5_overall_sum}} \subimport{./figures/}{Youtube_C5_Summ_Overall.tex} \end{table} ~ \begin{table}[H] \caption{Summary for the best and worst resolution 
enhancement algorithms for the UAV collection classification at rank 5\label{Supp-tab:YT_t5_overall_res}} \subimport{./figures/}{Youtube_C5_Summ_Res.tex} \end{table} ~ \begin{table}[H] \caption{Overall summary for the best and worst deblurring algorithms for the UAV collection classification at rank 5\label{Supp-tab:YT_t5_overall_debl}} \subimport{./figures/}{Youtube_C5_Summ_Deb.tex} \end{table} \pagebreak \subsection{Glider Collection} ~ \begin{table}[H] \caption{Details for the Glider Collection's classification results at rank 5} \label{Supp-tab:Kawa_t5_overall_dets} \subimport{./figures/}{Kawa_classif_details.tex} \end{table} ~ \begin{table}[H] \caption{Overall summary for the best and worst pre-processing algorithms for the Glider collection classification at rank 5\label{Supp-tab:Kawa_t5_overall_sum}} \subimport{./figures/}{Kawa_C5_Summ_Overall.tex} \end{table} ~ \begin{table}[H] \caption{Summary of the best and worst resolution enhancement algorithms for the Glider collection classification at rank 5\label{Supp-tab:Kawa_t5_overall_res}} \subimport{./figures/}{Kawa_C5_Summ_Res.tex} \end{table} ~ \begin{table}[H] \caption{Overall summary for the best and worst deblurring algorithms for the Glider collection classification at rank 5\label{Supp-tab:Kawa_t5_overall_debl}} \subimport{./figures/}{Kawa_C5_Summ_Deb.tex} \end{table} \pagebreak \subsection{Ground Collection} ~ \begin{table}[H] \caption{Details for the Ground Collection's classification results at rank 5} \label{Supp-tab:Ground_t5_overall_dets} \subimport{./figures/}{Ground_classif_details.tex} \end{table} ~ \begin{table}[H] \caption{Overall summary of the best and worst pre-processing algorithms for the Ground collection classification at rank 5\label{Supp-tab:Ground_t5_overall_sum}} \subimport{./figures/}{Ground_C5_Summ_Overall.tex} \end{table} ~ \begin{table}[H] \caption{Summary of the best \& worst resolution enhancement algorithms for the Ground collection classification at rank 
5\label{Supp-tab:Ground_t5_overall_res}} \subimport{./figures/}{Ground_C5_Summ_Res.tex} \end{table} ~ \begin{table}[H] \caption{Overall summary for the best and worst deblurring algorithms for the Ground collection classification at rank 5\label{Supp-tab:Ground_t5_overall_debl}} \subimport{./figures/}{Ground_C5_Summ_Deb.tex} \end{table} \pagebreak \subsubsection{Impact of weather conditions} ~ \begin{table}[H] \caption{Details for the Ground Collection's classification results (under different weather conditions) at rank 5} \label{Supp-tab:Ground_t5_weather_overall_dets} \subimport{./figures/}{Ground_classif_weather_details.tex} \end{table} ~ \begin{table}[H] \caption{Overall summary for the best and worst pre-processing algorithms to deal with adverse weather conditions for the Ground collection classification at rank 5\label{Supp-tab:Ground_t5_weath_overall_sum}} \subimport{./figures/}{Ground_C5_WeathSumm_Overall.tex} \end{table} ~ \begin{table}[H] \caption{Summary for the best and worst resolution enhancement algorithms to deal with adverse weather conditions for the Ground collection classification at rank 5\label{Supp-tab:Ground_t5_weath_overall_res}} \subimport{./figures/}{Ground_C5_WeathSumm_Res.tex} \end{table} ~ \begin{table}[H] \caption{Overall summary for the best and worst deblurring algorithms to deal with adverse weather conditions for the Ground collection classification at rank 5\label{Supp-tab:Ground_t5_weath_overall_debl}} \subimport{./figures/}{Ground_C5_WeathSumm_Deb.tex} \end{table} \pagebreak \section{Rank 1 Classification results} \begin{figure}[ht] \begin{center} \subimport{./figures/}{YTClassifT1.tex} \end{center} \vspace{-5mm} \caption{Comparison of classification rates at rank 1 for the UAV Collection after applying several resolution enhancement and deblurring techniques.} \label{Supp-fig:graph:YT_T1} \end{figure} \begin{figure}[ht] \begin{center} \subimport{./figures/}{KClassifT1.tex} \end{center} \vspace{-5mm} \caption{Comparison of 
classification rates at rank 1 for the Glider Collection after applying several resolution enhancement and deblurring techniques.} \label{Supp-fig:graph:K_T1} \end{figure} \begin{figure}[ht] \begin{center} \subimport{./figures/}{GClassifT1.tex} \end{center} \vspace{-5mm} \caption{Comparison of classification rates at rank 1 for the Ground Collection after applying several resolution enhancement and deblurring techniques.} \label{Supp-fig:graph:G_T1} \end{figure} \section{Sample Videos} We include three sample videos from UG$^2$, one per collection. Given the size restrictions, the videos included are segments from the original videos, and the video quality and speed (in the case of the UAV Collection sample) were modified to reduce the file size. \end{document}
\section{INTRODUCTION} Analysing respiratory sounds has recently attracted attention as robust machine learning and deep learning methods are leveraged. In both approaches, the proposed systems generally comprise two main steps, referred to as front-end feature extraction and the back-end model. In machine learning based systems, handcrafted features such as Mel-frequency cepstral coefficients (MFCC)~\cite{lung_tree_18, lung_hmm_02}, or a combination of time domain features (variance, range, sum of simple moving average) and one frequency domain feature (spectrum mean)~\cite{lung_svm_01}, are extracted at the front-end feature extraction step. These features are then fed into conventional machine learning models such as Hidden Markov Models~\cite{lung_hmm_02}, Support Vector Machines~\cite{lung_svm_01}, or Decision Trees~\cite{lung_tree_18} for specific tasks of classification or regression. Meanwhile, deep learning based systems make use of spectrograms in which both temporal and spectral features are well represented. These spectrograms are then explored by a wide range of convolutional neural networks (CNNs)~\cite{lung_cnn_01, lung_cnn_02, pham2020_01, lam_11} or recurrent neural networks (RNNs)~\cite{lung_rnn_01, lung_rnn_02}. Comparing the two approaches, deep learning based systems are more effective and show promising results, as mentioned in~\cite{lung_cnn_01, pham2020_01, lam_11}. As the deep learning approach proves powerful for analysing respiratory sounds, we evaluate a wide range of deep learning frameworks applied to the specific task of audio respiratory cycle classification in this paper. By conducting extensive experiments on the dataset of the 2017 International Conference on Biomedical and Health Informatics (ICBHI)~\cite{lung_dataset}, which is one of the largest published benchmark respiratory sound datasets, we make two main contributions: (1) we evaluate whether benchmark and complex deep neural network architectures (e.g. ResNet50, Xception, InceptionV3, etc.)
are more effective than inception-based and low footprint models, and (2) we evaluate whether applying transfer learning techniques to the downstream task of respiratory cycle classification can achieve competitive performance compared with direct training approaches. \section{ICBHI dataset and tasks defined} The ICBHI dataset~\cite{lung_dataset}, comprising 920 audio recordings, was collected from 128 patients over 5.5 hours. Each audio recording contains one or more types of respiratory cycles (\textit{Crackle}, \textit{Wheeze}, \textit{Both Crackle \& Wheeze}, and \textit{Normal}), which are labelled with onset and offset times by experts. Given the ICBHI dataset, this paper aims to classify the four types of respiratory cycles mentioned above, which is also the main task of the ICBHI challenge~\cite{lung_dataset}. To evaluate, we follow the ICBHI challenge and split the 920 audio recordings into Train and Test subsets with a ratio of 60/40 such that no patient subject overlaps between the two subsets (note that some systems randomly separate the ICBHI dataset into training and test subsets regardless of patient subject~\cite{lung_cnn_01, lung_cnn_02, lung_rnn_01}). Using the available onset and offset times, we then extract respiratory cycles from the entire recordings, obtaining the four categories of respiratory cycles in each subset. Regarding the evaluation metrics, we use Sensitivity (Sen.), Specificity (Spec.), and the ICBHI score (ICB.), which is the average of Sen. and Spec. These scores are the ICBHI criteria as mentioned in the ICBHI challenge~\cite{icb_ratio} and~\cite{ic_cnn_19_iccas, pham2020cnnmoe}.
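As a concrete illustration, the ICBHI criteria can be computed from per-class correct counts as below. This is a minimal Python sketch under our own naming (the function and the toy counts are not part of the official ICBHI tooling):

```python
def icbhi_scores(correct_abnormal, total_abnormal, correct_normal, total_normal):
    """Compute the ICBHI criteria: Sensitivity over the abnormal cycles
    (Crackle, Wheeze, Both), Specificity over the Normal cycles, and the
    ICBHI score as the average of the two."""
    sen = correct_abnormal / total_abnormal
    spec = correct_normal / total_normal
    icb = (sen + spec) / 2
    return sen, spec, icb

# Toy counts: 300 of 1000 abnormal cycles and 800 of 1000 normal cycles correct
sen, spec, icb = icbhi_scores(300, 1000, 800, 1000)
```

With these made-up counts, Sen.\ is 0.30, Spec.\ is 0.80, and the ICBHI score is their average, 0.55.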
\section{Deep learning frameworks proposed} \label{framework} To classify the four types of respiratory cycles in the ICBHI dataset, we first propose a high-level architecture covering three main deep learning frameworks, as shown in Figure~\ref{fig:A1}: (I) the upper stream in Figure~\ref{fig:A1} shows how we directly train and evaluate small-footprint inception-based network architectures; (II) benchmark and large footprint deep learning network architectures of VGG16, VGG19, MobileNetV1, MobileNetV2, ResNet50, DenseNet201, InceptionV3, and Xception are directly trained and evaluated as shown in the middle stream of Figure~\ref{fig:A1}; and (III) the lower stream in Figure~\ref{fig:A1} shows how we reuse pre-trained models, which were trained on the large-scale AudioSet dataset, to extract embedding features for training a multilayer perceptron (MLP) network architecture. In general, these three proposed deep learning frameworks comprise two main steps: front-end spectrogram feature extraction and a back-end classification model.
\subsection{The front-end spectrogram feature extraction} \label{feature} \begin{figure}[t] \vspace{-0.2cm} \centering \includegraphics[width =1.0\linewidth]{A1.eps} \vspace{-0.5cm} \caption{The high-level architecture for three deep learning frameworks proposed.} \label{fig:A1} \end{figure} \begin{table}[t] \caption{The general inception based network architectures.} \vspace{-0.2cm} \centering \scalebox{0.95}{ \begin{tabular}{|c |c |} \hline \textbf{Single Inception Layer} & \textbf{Double Inception Layers} \\ \hline \multicolumn{2}{|c|}{BN} \\ \hline Inc(\textbf{ch1}) - ReLU & Inc(\textbf{ch1}) - ReLU - Inc(\textbf{ch1}) - ReLU \\ \hline \multicolumn{2}{|c|}{BN - MP - Dr(10\%) - BN} \\ \hline Inc(\textbf{ch2}) - ReLU & Inc(\textbf{ch2}) - ReLU - Inc(\textbf{ch2}) - ReLU\\ \hline \multicolumn{2}{|c|}{BN - MP - Dr(15\%) - BN} \\ \hline Inc(\textbf{ch3}) - ReLU & Inc(\textbf{ch3}) - ReLU - Inc(\textbf{ch3}) - ReLU \\ \hline \multicolumn{2}{|c|}{BN - MP - Dr(20\%) - BN} \\ \hline Inc(\textbf{ch4}) - ReLU & Inc(\textbf{ch4}) - ReLU - Inc(\textbf{ch4}) - ReLU\\ \hline \multicolumn{2}{|c|}{BN - GMP - Dr(25\%)} \\ \hline \multicolumn{2}{|c|}{FC(\textbf{fc1}) - ReLU - Dr(30\%)} \\ \hline \multicolumn{2}{|c|}{FC(\textbf{fc2}) - ReLU - Dr(30\%)} \\ \hline \multicolumn{2}{|c|}{FC(\textbf{4}) - Softmax} \\ \hline \end{tabular} } \vspace{-0.3cm} \label{table:inc01} \end{table} As shown in Figure~\ref{fig:A1}, we first duplicate short cycles or cut off long cycles so that all respiratory cycles have an equal length of 10 seconds. For the first two deep learning frameworks (I) and (II), we extract Wavelet-based spectrograms, which proved effective in our previous work~\cite{lam_11}. Reusing the settings from~\cite{lam_11}, we generate Wavelet spectrograms of $124{\times}154$ from the 10-second respiratory cycles.
Meanwhile, we extract log-Mel spectrograms in deep learning framework (III) as we reuse pre-trained models from~\cite{kong_pretrain} that receive log-Mel spectrogram input. Using the same settings as~\cite{kong_pretrain}, we generate log-Mel spectrograms of $128{\times}1000$ for the 10-second respiratory cycles. To strengthen the back-end classifiers, two data augmentation methods, spectrum augmentation~\cite{spec_aug} and mixup~\cite{mixup1, mixup2}, are applied to both the log-Mel and Wavelet-based spectrograms before they are fed into the back-end deep learning models for classification. \subsection{The back-end deep learning networks for classification} \textit{(I) The low-footprint inception based network architectures}: Given the promising results achieved in our previous work~\cite{lam_11}, we further evaluate different inception based network architectures in this paper. In particular, two high-level architectures with single or double inception layers, as shown in Table~\ref{table:inc01}, are used in this paper. These architectures comprise different layers: inception layers (Inc(output channel number)) as shown in Figure~\ref{fig:A2}, batch normalization (BN)~\cite{batchnorm}, rectified linear units (ReLU)~\cite{relu}, max pooling (MP), global max pooling (GMP), dropout~\cite{dropout} (Dr(percentage)), fully connected (FC(output node number)), and Softmax layers. By using the two architectures and setting different parameters for the channel numbers of the inception layers and the output node numbers of the fully connected layers, we create six inception based deep neural networks, as shown in Table~\ref{table:inc02}, referred to as Inc-01, Inc-02, Inc-03, Inc-04, Inc-05, and Inc-06, respectively.
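For illustration, the mixup augmentation applied to a batch of spectrograms can be sketched as follows. This is a minimal NumPy sketch under our own naming, not the exact training-pipeline implementation; it also shows why the resulting labels are soft rather than one-hot:

```python
import numpy as np

def mixup(batch_x, batch_y, alpha=0.4, rng=None):
    """Mixup augmentation: blend random pairs of spectrograms and their
    one-hot labels with a Beta(alpha, alpha) mixing coefficient."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    idx = rng.permutation(len(batch_x))
    mixed_x = lam * batch_x + (1 - lam) * batch_x[idx]
    mixed_y = lam * batch_y + (1 - lam) * batch_y[idx]  # soft labels
    return mixed_x, mixed_y

# Toy batch: 4 log-Mel "spectrograms" of 128x1000 and one-hot labels for 4 classes
x = np.random.rand(4, 128, 1000).astype(np.float32)
y = np.eye(4, dtype=np.float32)
mx, my = mixup(x, y)  # each row of my still sums to 1
```

Because the mixed labels are probability vectors rather than one-hot vectors, a divergence-based loss is the natural training objective, as discussed in the experimental setting below.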
\textit{(II) The benchmark and complex neural network architectures}: We next evaluate different benchmark neural network architectures of VGG16, VGG19, MobileNetV1, MobileNetV2, ResNet50, DenseNet201, InceptionV3, and Xception, which are available in the Keras library~\cite{keras_app} and popularly applied in different research domains. Compared with the inception based network architectures used in framework (I), these benchmark neural networks have a large footprint with complex architectures and trunks of convolutional layers. \begin{figure}[t] \vspace{-0.2cm} \centering \includegraphics[width =0.9\linewidth]{A2.eps} \caption{The single inception layer architecture.} \label{fig:A2} \end{figure} \begin{table}[t] \caption{Setting for inception based network architectures.} \vspace{-0.2cm} \centering \scalebox{0.9}{ \begin{tabular}{|c |c |c |c |c |c |c |} \hline \textbf{Networks} & \textbf{Inc-01} & \textbf{Inc-02} & \textbf{Inc-03} & \textbf{Inc-04} & \textbf{Inc-05} & \textbf{Inc-06}\\ \hline \textbf{Single/Double} & & & & & & \\ \textbf{Inception } & Single & Double & Single & Double & Single & Double \\ \textbf{Layers} & & & & & & \\ \hline \textbf{ch1} & 32 & 32 & 64 & 64 & 128 & 128 \\ \textbf{ch2} & 64 & 64 & 128 & 128 & 256 & 256 \\ \textbf{ch3} & 128 & 128 & 256 & 256 & 512 & 512 \\ \textbf{ch4} & 256 & 256 & 512 & 512 & 1024 & 1024 \\ \textbf{fc1} & 512 & 512 & 1024 & 1024 & 2048 & 2048 \\ \textbf{fc2} & 512 & 512 & 1024 & 1024 & 2048 & 2048 \\ \hline \end{tabular} } \vspace{-0.3cm} \label{table:inc02} \end{table} \begin{table*}[t] \caption{Performance comparison of proposed deep learning frameworks.} \vspace{-0.2cm} \centering \scalebox{1.0}{ \begin{tabular}{| l c | l c | l c |} \hline \textbf{Inception-based } &\textbf{Scores} &\textbf{Benchmark } &\textbf{Scores} &\textbf{Transfer learning} &\textbf{Scores} \\ \textbf{Frameworks} &\textbf{(Spec./Sen./ICB.)} &\textbf{Frameworks} &\textbf{(Spec./Sen./ICB.)} &\textbf{Frameworks}
&\textbf{(Spec./Sen./ICB.)} \\ \hline Inc-01 &56.3/\textbf{40.5}/48.4 &VGG16 &70.1/28.6/49.3 &VGG14 &\textbf{82.1}/28.1/\textbf{55.1}\\ Inc-02 &69.7/31.9/50.8 &VGG19 &69.7/28.4/49.1 &DaiNet19 &76.4/26.9/51.7\\ Inc-03 &81.7/28.4/\textbf{55.1} &MobileNetV1 &75.5/14.3/44.9 &MobileNetV1 &64.4/\textbf{40.3}/52.3\\ Inc-04 &\textbf{84.0}/24.8/54.4 &MobileNetV2 &74.7/16.1/45.4 &MobileNetV2 &76.0/32.7/54.4\\ Inc-05 &80.5/26.3/53.4 &ResNet50 &\textbf{88.0}/15.2/\textbf{51.6} &LeeNet24 &70.7/30.9/52.8\\ Inc-06 &74.8/30.0/52.4 &DenseNet201 &71.7/30.3/51.1 &Res1DNet30 &74.9/26.7/50.8\\ & &InceptionV3 &70.9/\textbf{32.2}/\textbf{51.6} &ResNet38 &71.6/32.2/51.9 \\ & &Xception &75.7/22.1/48.9 &Wavegram-CNN &69.0/38.1/53.5 \\ \hline \end{tabular} } \label{table:res_01} \end{table*} \begin{table}[t] \caption{The MLP architecture used for training embedding features.} \vspace{-0.2cm} \centering \scalebox{1.0}{ \begin{tabular}{l c} \hline \textbf{Setting layers} & \textbf{Output} \\ \hline FC(4096) - ReLU - Dr($10\%$) & $4096$ \\ FC(4096) - ReLU - Dr($10\%$) & $4096$ \\ FC(1024) - ReLU - Dr($10\%$) & $1024$ \\ FC(4) - Softmax & $4$ \\ \hline \end{tabular} } \vspace{-0.3cm} \label{table:mlp} \end{table} \textit{(III) The transfer learning based network architectures}: As transfer learning techniques have proven effective for downstream tasks with limited training data and fewer categories to classify~\cite{kong_pretrain}, we leverage pre-trained networks that were trained on the large-scale AudioSet dataset in~\cite{kong_pretrain}: LeeNet24, DaiNet19, VGG14, MobileNetV1, MobileNetV2, Res1DNet30, ResNet38, and Wavegram-CNN. We then modify these networks to match the downstream task of classifying the four respiratory cycles in the ICBHI dataset. In particular, we retain the trainable parameters from the first layer to the global pooling layer of the pre-trained networks. We then replace the layers after the global pooling layer by new fully connected layers to create a new network (i.e.
trainable parameters in the new fully connected layers are initialized with mean and variance set to 0 and 0.1, respectively). In other words, we use a multilayer perceptron (MLP), as shown in Table~\ref{table:mlp} (i.e. the MLP is configured by FC, ReLU, Dr, and Softmax layers), to train embedding features extracted from the pre-trained models (i.e. the embedding features are the feature map of the final global pooling layer in the pre-trained networks). \section{Experiments and results} \subsection{Experimental setting for back-end classifiers} As the spectrum~\cite{spec_aug} and mixup~\cite{mixup1, mixup2} data augmentation methods are used, labels are no longer in one-hot format. Therefore, we use the Kullback–Leibler divergence (KL) loss shown in Eq. (\ref{eq:kl_loss}) below. \begin{align} \label{eq:kl_loss} Loss_{KL}(\Theta) = \sum_{n=1}^{N}\mathbf{y}_{n}\log \left\{ \frac{\mathbf{y}_{n}}{\mathbf{\hat{y}}_{n}} \right\} + \frac{\lambda}{2}||\Theta||_{2}^{2} \end{align} where \(\Theta\) denotes the trainable parameters, the constant \(\lambda\) is initially set to $0.0001$, $N$ is the batch size, set to 100, and $\mathbf{y}_{n}$ and $\mathbf{\hat{y}}_{n}$ denote the expected and predicted results, respectively. While we construct the deep learning networks proposed in frameworks (I) and (II) with Tensorflow, we use Pytorch for extracting embedding features and training the MLP in framework (III) as the pre-trained networks were built in the Pytorch environment. We set the number of epochs to 100 and use the Adam method~\cite{Adam} for optimization. \subsection{Performance comparison among deep learning frameworks proposed} As the experimental results in Table~\ref{table:res_01} show, the low-footprint inception based frameworks and the transfer learning based frameworks are generally competitive with each other and outperform the benchmark frameworks.
Table~\ref{table:res_01} records the best ICBHI score of 55.1\%, obtained by both the Inc-03 framework and the transfer learning framework with the pre-trained VGG14 (i.e. the best performance being obtained with the pre-trained VGG14 makes sense as this network outperforms the other network architectures for classifying sound events in the AudioSet dataset). Notably, while the same network architectures of MobileNetV1 and MobileNetV2 are used, the transfer learning frameworks significantly outperform the benchmark frameworks. As a result, we can conclude that (1) applying the transfer learning technique to the downstream task of classifying respiratory cycles is effective; and (2) low-footprint inception based networks focusing on minimal variation of time and frequency are more effective for respiratory sounds than benchmark and large network architectures. \subsection{Early, middle, and late fusion of inception based and transfer learning based frameworks} \begin{table}[t] \caption{Performance comparison of fusion strategies of inception based and transfer learning based frameworks.} \vspace{-0.2cm} \centering \scalebox{1.0}{ \begin{tabular}{l c c c} \hline \textbf{Fusion Strategies} & \textbf{Spec.} & \textbf{Sen.} & \textbf{ICB.}\\ \hline Pre-trained VGG14 &82.1 &28.1 &55.1\\ Inc-03 &81.7 &28.4 &55.1 \\ The early fusion &79.9 &\textbf{30.9} &55.4 \\ The middle fusion &\textbf{87.3} &25.1 &56.2 \\ The late fusion &85.6 &30.0 &\textbf{57.3} \\ \hline \end{tabular} } \vspace{-0.5cm} \label{table:fusion} \end{table} As the deep learning frameworks based on Inc-03 and transfer learning with the pre-trained VGG14 achieve the best scores, we then evaluate whether a fusion of results from these frameworks can help to further improve the task performance. In particular, we propose three fusion strategies. In the first and second fusion strategies, referred to as the early and middle fusion, we concatenate the embedding feature extracted from the pre-trained VGG14 (e.g.
the feature map of the global pooling layer of the pre-trained VGG14) with an embedding feature extracted from Inc-03 to generate a new combined feature. We then train the new combined feature with the MLP network architecture shown in Table~\ref{table:mlp}. While the feature map of the global max pooling (GMP) layer of Inc-03 is considered as the embedding feature in the first fusion strategy, the feature map of the second fully connected layer of Inc-03 (e.g. FC(fc2)) is used for the second fusion strategy. In the third fusion strategy, referred to as the late fusion, we use PROD fusion of the predicted probabilities obtained from the inception-based and transfer learning based frameworks. The PROD fusion result \(\mathbf{p_{f-prod}} = (\bar{p}_{1}, \bar{p}_{2}, ..., \bar{p}_{C}) \) is obtained by: \begin{equation} \label{eq:mix_up_x1} \bar{p}_{c} = \frac{1}{S} \prod_{s=1}^{S} \bar{p}_{sc} \quad \text{for } 1 \leq c \leq C, \end{equation} where \(\mathbf{\bar{p}_{s}}= (\bar{p}_{s1}, \bar{p}_{s2}, ..., \bar{p}_{sC})\) is the predicted probability of the \(s^{th}\) out of \(S\) individual frameworks evaluated, and $C$ is the number of categories. The predicted label \(\hat{y}\) is determined by: \begin{equation} \label{eq:label_determine} \hat{y} = \arg\max (\bar{p}_{1}, \bar{p}_{2}, ...,\bar{p}_{C} ) \end{equation} As Table~\ref{table:fusion} shows, all three fusion strategies help to enhance the performance, with further ICBHI score improvements of 0.3, 1.1, and 2.2 from the early, middle, and late fusion strategies, respectively. This proves that the embedding features extracted from the inception based Inc-03 framework and the transfer learning framework with the pre-trained VGG14 contain distinct features of respiratory cycles.
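The late PROD fusion rule above amounts to only a few lines of code. The sketch below uses made-up probability vectors for the two frameworks (the values are illustrative, not results from the paper):

```python
import numpy as np

def prod_fusion(prob_list):
    """PROD late fusion: element-wise product of the per-framework
    predicted probabilities, scaled by 1/S, followed by argmax."""
    probs = np.stack(prob_list)                 # shape (S, C)
    fused = probs.prod(axis=0) / len(prob_list)
    return fused, int(np.argmax(fused))

# Made-up predictions from two frameworks over the 4 respiratory-cycle classes
p_inc03 = np.array([0.1, 0.6, 0.2, 0.1])       # Inc-03
p_vgg14 = np.array([0.2, 0.5, 0.2, 0.1])       # pre-trained VGG14
fused, label = prod_fusion([p_inc03, p_vgg14])
```

Because the product penalizes disagreement between frameworks, a class only keeps a high fused score when both frameworks assign it a high probability; here both favour class 1, so the fused label is class 1.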
\subsection{Performance comparison to the state of the art} To compare with the state of the art, we only select published systems that follow the recommended official setting of the ICBHI dataset~\cite{icb_ratio} with the train/test ratio of 60/40 and no overlap of patient subjects between the two subsets. As the experimental results in Table~\ref{table:res_02} show, our system with a late fusion of inception-based and transfer learning frameworks outperforms the state of the art, recording the best ICBHI score of 57.3\%. \begin{table}[t] \caption{Comparison against the state-of-the-art systems.} \vspace{-0.2cm} \centering \scalebox{1.0}{ \begin{tabular}{|l |c |c |c|} \hline \textbf{Method} &\textbf{Spec.} &\textbf{Sen.} &\textbf{ICBHI Score} \\ \hline HMM~\cite{ic_hmm_18_sp} &38.0 &41.0 &39.0 \\ DT~\cite{lung_dataset} &75.0 &12.0 &43.0 \\ 1D-CNN~\cite{t2021} &36.0 &51.0 &43.0 \\ SVM~\cite{ic_svm_18_sp} &78.0 &20.0 &47.0 \\ Autoencoder~\cite{dat_01} &69.0 &30.0 &49.0 \\ ResNet~\cite{ma2019} &69.2 &31.1 &50.2 \\ ResNet~\cite{Ma2020} &63.2 &41.3 &52.3 \\ Inception~\cite{lam_11} &73.2 &32.2 &53.2 \\ CNN-RNN~\cite{ic_cnn_19_iccas} &81.0 &28.0 &54.0 \\ ResNet50~\cite{microsoft2021} &72.3 &40.1 &56.2 \\ \hline \textbf{Our best system} &\textbf{85.6} &\textbf{30.0} &\textbf{57.3} \\ \hline \end{tabular} } \vspace{-0.3cm} \label{table:res_02} \end{table} \section{Conclusion} This paper has presented an exploration of various deep learning models for detecting respiratory anomalies from auditory recordings. By conducting intensive experiments on the ICBHI dataset, our best model, which uses an ensemble of inception-based and transfer-learning-based deep learning frameworks, outperforms the state-of-the-art systems, validating the application of deep learning for the early detection of respiratory anomalies. \bibliographystyle{IEEEbib}
\section{Introduction} Cryptocurrency has changed not only the financial world but also charitable fundraising. Annual crypto donations skyrocketed from \$4.2 million in 2020 to \$69.6 million in 2021; 42\% of those donations used Ether (ETH), and 36\% used Bitcoin (BTC) \citep{hrywna_2022}. ETH is the digital currency of the Ethereum blockchain, and BTC is the digital currency of the Bitcoin blockchain. The great potential of cryptocurrency in fundraising lies in the prosociality of crypto asset holders, who have a strong desire to support freedom, democracy, and people's ability to lead their personal and financial lives peacefully, according to Ethereum co-founder Vitalik Buterin \citep{ramaswamy_2022}. More importantly, we propose that the promise of crypto fundraising stems from ``crypto rewards'': fungible or non-fungible tokens (NFTs) that recognize contributors for their support. Crypto rewards are operationalized as ``airdrops.'' An airdrop is a marketing initiative to raise awareness or crypto funds for a cause \citep{sergeenkov_2022}. Marketers send out newly minted fungible tokens to their community members for free or in exchange for completing a task (e.g., following a Twitter account, joining a Telegram group, donating to a designated wallet). These reward tokens are mostly valueless initially, but their value increases when the corresponding cause is well received and when the tokens begin trading on an exchange. These causes are usually blockchain-based projects, and the reward tokens are a currency that can be used in these projects. This is referred to as an \textit{initial coin offering} (ICO) \citep{arnold2019blockchain}, in contrast to an \textit{initial public offering} (IPO). In this study, we go beyond the original definition of ICO and examine the impact of crypto rewards in crypto fundraising for social causes that are not blockchain-based.
This extension of ICO could potentially change the landscape of fundraising, warranting attention from the blockchain community, the non-profit sector, and policymakers. In essence, crypto rewards are extrinsic incentives to promote charitable giving, whose impact is mixed according to past studies. Extrinsic incentives have been widely adopted to promote giving, as self-interest is considered a primary motivation for human behavior \citep{kohn2008brighter}. Without an extrinsic incentive, people are hesitant to donate even if they have strong feelings of compassion for a cause \citep{miller1994collective}. Thus, beyond its direct impact, the extrinsic incentive serves as an ``excuse'' for people to conceal their prosocial motivation and act ``rationally,'' affecting their decisions indirectly \citep{holmes2002committing}. To this end, crypto rewards could cause an increase in giving. At the same time, this extrinsic motivation may undermine people's intrinsic motivation by activating a more self-interested mindset \citep{chao2021self}, which subsequently reduces giving, as predicted by the theories of motivation crowding-out \citep{frey2001motivation,frey1997cost}. The relative strength of these opposing predictions rests on the choice and framing of the rewards \citep{zlatev2016selfishly}. Different from the widely studied thank-you gifts of mugs and t-shirts, crypto rewards' value could increase over time. They are more likely to be considered investment products that could carry high value. Given these unique features, analyzing the impact of crypto rewards will not only generate insights about incentives for prosocial behavior but also enhance the understanding of the business value of blockchain technology \citep{benabou2006incentives}. We examine the impact of crypto rewards in fundraising by leveraging the crypto fundraising event initiated by the Ukrainian government during Russia's invasion of Ukraine. On Feb.
26, 2022, the Ukrainian government announced that they would accept crypto donations and publicized their Ethereum and Bitcoin wallet addresses. On March 2, 2022, the Ukrainian government further announced an airdrop, with the donor list snapshot to be taken the next day \citep{time_2022}. Although the Ukrainian government canceled this airdrop several hours before the snapshot, it mobilized a large number of donors and resulted in record donations totaling more than \$25 million from more than 84,000 cryptocurrency donations in only one week. We first perform an interrupted time series (ITS) analysis under the framework of the Autoregressive Integrated Moving Average (ARIMA) model to estimate the impact of the airdrop on the Ethereum and Bitcoin blockchains separately \citep{schaffer2021interrupted}. We further perform a comparative ITS analysis, or difference-in-differences (DiD), to exploit the differential impacts of this airdrop on Ethereum and Bitcoin by estimating an \textit{ordered treatment effect} \citep{callaway2021difference}. The DiD analysis leverages the differential programmability capabilities of Bitcoin and Ethereum, where an airdrop is much more likely to be implemented on the Ethereum blockchain than on the Bitcoin blockchain \citep{cointelegraph_2021}. Both analyses are widely used to analyze policy interventions, and the comparative ITS (DiD) allows stronger identification by better controlling for temporal effects of fundraising (e.g., awareness of the cause, dynamic situations in Ukraine). From the ITS analysis, we find that the crypto rewards effectively increased donation count and decreased average donation size for both the Ethereum and Bitcoin blockchains. From the comparative ITS analysis, we further find that the average donation count for Ethereum increased 812.48\% more than that for Bitcoin in response to the crypto rewards. However, the treatment effect on the average donation size is not significantly different for these two blockchains.
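To make the estimation strategy concrete, the core of an ITS analysis can be sketched as a segmented regression with a level-shift and a slope-change term at the intervention date. The sketch below is a NumPy-only illustration on simulated data; the actual analysis in the paper additionally models autocorrelated errors with ARIMA and uses real on-chain donation series, so every number here is made up:

```python
import numpy as np

# Simulated daily donation counts around the airdrop announcement
rng = np.random.default_rng(7)
n, t0 = 60, 30                          # 60 days, intervention at day 30
y = rng.normal(50, 5, n)
y[t0:] += 40                            # simulated level shift after the airdrop

t = np.arange(n, dtype=float)
step = (t >= t0).astype(float)          # level change at the intervention
slope = np.where(t >= t0, t - t0, 0.0)  # post-intervention trend change
X = np.column_stack([np.ones(n), t, step, slope])

# Segmented (interrupted time series) regression by ordinary least squares
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_shift = beta[2]                   # estimated jump in donations at the airdrop
```

In the comparative ITS (DiD) version, the same design is extended with a blockchain indicator and its interactions with the intervention terms, so that the Ethereum-specific response can be separated from the common temporal pattern.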
In our split-sample analysis, we further find that the crypto rewards more aggressively increased donation counts and decreased average donation size for the Ethereum blockchain as compared to the Bitcoin blockchain when the donations are small ($\leq\$250$). When the donations are large ($>\$250$), we only observe a more positive effect on donation count from Ethereum; we did not observe a more negative effect on average donation size from Ethereum compared to Bitcoin. Our study leveraged an unprecedented crypto fundraising event that offered crypto rewards in the form of an airdrop to its supporters. This event is the first of its kind and is of vital importance for the crypto community that seeks to expand the societal influence of blockchains, policymakers who need to understand the economic implications of crypto donations, and non-profit organizations who are turning to crypto to raise funds for emerging crises. Theoretically, we reconcile and extend past findings that either identified a positive or a negative impact of thank-you gifts by showing that the positive effect is more likely to manifest in donation count and the negative effect is more likely to manifest in donation size. Such insights deepen our understanding of charitable giving as sequential decisions of whether to give and how much to give. Moreover, we identify the economic value of crypto rewards and contribute to the understanding of extrinsic motivations in charitable giving. \section{Theory and Literature} \label{sec:theory} The effect of extrinsic incentives on prosocial behavior has been widely studied in the literature of economics, psychology, and information systems \citep{newman2012counterintuitive,gneezy2000fine,liu2021does}. In this study, we focus on the effect of crypto rewards on ``first-time'' donations (instead of the long-term consequences of monetary incentives). The nature of crypto rewards is a ``thank-you'' gift for contributing a crypto donation. 
Conditional thank-you gifts are a common form of extrinsic incentives, where a donor gets a gift (e.g., mug, water bottle) conditional on his or her charitable contribution. Prior works find mixed evidence regarding the effect of conditional gifts. On the one hand, extrinsic motivations predict an increase in giving because most people are primarily motivated by self-interest \citep{kohn2008brighter}. As a matter of fact, self-interest has become a social norm to the extent that people overestimate the salience of self-interest such that they would be hesitant to donate to a charitable cause even when they have strong feelings of compassion \citep{miller1994collective,miller1999norm}. In such cases, extrinsic incentives could work as an ``excuse'' to rationalize people's prosocial behavior by concealing their prosocial motivations \citep{holmes2002committing}. As evidence, \cite{falk2007gift} finds that the frequency of donations increased by 17\% when a small gift was given to donors and by 75\% when a large gift was given. Similarly, \cite{alpizar2008does} find that small gifts prior to the donation request increased the proportion of donations by 5\%. Past works usually consider thank-you gifts that are of relatively low and static value (e.g., CDs, mugs, and t-shirts) \citep{shang2006impact}. While crypto rewards are also of low value initially (e.g., newly minted tokens), their value could increase over time. When the value of the thank-you gifts is volatile and potentially high,\footnote{While most airdrop rewards are below \$20, the reward can be more than a thousand dollars \citep{boom_2022}.} the rewards could have a direct and positive impact on prospective donors' funding decisions. At the same time, the acts of giving become more likely to be considered an investing behavior, and the indirect impact of thank-you gifts as an excuse for prosocial behavior is more pronounced. 
Through the above mechanism, extrinsic incentives could motivate people who would not otherwise donate to make a contribution \citep{ratner2011norm}. Extrinsic incentives can hardly increase the contribution size through this mechanism because increasing contribution size at a fixed (and unknown) reward is not economically rational. Thus, we propose our first hypothesis for the outcome of donation count. \begin{hyp}[H\ref{hyp:first}] \label{hyp:first} Offering a crypto reward, as reflected in an airdrop, would lead to an increased count of donations. \end{hyp} On the other hand, the theories of motivation crowding-out predict that people would reduce giving when extrinsic incentives are provided because extrinsic motivations would crowd out intrinsic motivations \citep{frey1997cost,frey2001motivation,gneezy2000fine,benabou2006incentives}. Extrinsic motivations are activated by monetary rewards, praise, or fame, and intrinsic motivations are related to activities that people undertake because they derive satisfaction from them. As \citet[p.746]{frey1997cost} stated, \enquote{If a person derives intrinsic benefits simply by behaving in an altruistic manner or by living up to her civic duty, paying her for this service reduces her option of indulging in altruistic feelings.} Many studies discover evidence in support of the motivation crowding-out theory. \cite{newman2012counterintuitive} find that among the donors who are willing to contribute, those who were offered a thank-you gift donated a significantly lower amount than those who were not offered a thank-you gift. \cite{chao2017demotivating} further finds that the negative effect of extrinsic motivation from a thank-you gift is only present when the gift is visually salient to occupy the prospective donor's attention. The crypto rewards in our context are significant enough to occupy the donors' attention because the reward tokens are immutable proof of the donors' good deeds. 
They are particularly likely to elicit a self-interested mindset because, as we stated in the development of H1, the value of the rewards could increase as the cause becomes more salient, and the reward tokens are essentially financial investment products. As a result, the crypto rewards likely activate the crowding-out effect to reduce people's giving. We consider the decisions of whether to give and how much to give to be sequential. While the investment product feature could positively affect the initial decision of whether to give, it leads to a more pronounced self-interested mindset such that the negative crowding-out effect will manifest in the subsequent decision of how much to give. Thus, the average contribution size would be lower than in the case when no crypto rewards are available. \begin{hyp}[H\ref{hyp:second}] \label{hyp:second} Offering a crypto reward, as reflected in an airdrop, would lead to a decreased average donation size. \end{hyp} \section{Research Context} \subsection{Cryptocurrency Donations and an Airdrop} In this section, we introduce the relevant Ukrainian crypto fundraising activity and illustrate its timeline in Figure ~\ref{fig:timeline}. The Ukrainian government posted pleas for cryptocurrency donations on Feb. 26, 2022, at 10:29 AM (UTC). Since the banking system of Ukraine was at risk from a Russian attack, crypto offered an alternative financial structure because it uses cryptography to secure transactions \citep{time_2022}. In a \href{https://twitter.com/ukraine/status/1497594592438497282?lang=en}{tweet}, the Ukrainian government announced their Ethereum and Bitcoin wallet addresses (Figure \ref{fig:timeline}).\footnote{The Ministry of Digital Transformation later started accepting more currencies, including SOL, DOGE, and DOT.} Ether (ETH) is the native currency traded on the Ethereum blockchain, and Bitcoin (BTC) is the currency traded on the Bitcoin blockchain. 
Both ETH and BTC are digital currencies based on the distributed ledger technology of blockchain \citep{cointelegraph_2021}. \begin{figure} \begin{center} \includegraphics[height=3.3in]{graphs/timeline.png} \caption{Timeline} \label{fig:timeline} \end{center} \end{figure} On March 1 at 1:42 AM, Ukraine announced that an ``airdrop'' had not been confirmed, but on March 2 at 1:43 AM it formally announced that donors who supported Ukraine would be rewarded with an airdrop. The planned snapshot of the list of donor wallet addresses would be taken the next day. While the initial announcement that ``An airdrop has not been confirmed yet'' did not officially start the airdrop, it signaled a potential airdrop. Thus, people may have reacted to this potential airdrop even before the official announcement was posted. One day later, on March 3 at 6:37 AM, the vice prime minister of Ukraine and the Minister of Digital Transformation of Ukraine announced the cancellation of this airdrop several hours before the scheduled snapshot. It was believed that the cancellation was due to crypto holders' unethical behavior of donating a minimal amount to profit from the airdrop \citep{strachan_2022}. \subsection{Bitcoin and Ethereum} In this section, we present background knowledge about why the airdrop is much more likely to be implemented on Ethereum than Bitcoin. Bitcoin was created as an alternative to national currencies and designed to be a medium of value exchange. Ethereum was created to go beyond the function of a digital currency given that it is more programmable; it is also a software platform to facilitate and monetize smart contracts and decentralized applications. While the Ukrainian government did not explicitly mention that the airdrop would be performed on Ethereum, this is highly likely for the following reasons. 
First, while an airdrop can be issued on both Ethereum and Bitcoin in theory,\footnote{El Salvador has airdropped each of its citizens \$30 worth of Bitcoin to promote this new currency.} an airdrop is much more likely to be issued on the Ethereum blockchain due to its stronger programmability to support smart contracts. According to Airdrop King, 81\% of airdrops are issued on the Ethereum blockchain. Second, the Ukrainian government did not specify the reward to be airdropped, and the reward could be either fungible tokens or NFTs. Airdropping NFTs is only possible on Ethereum because it requires smart contracts. Third, there is a consensus in the crypto community that airdrops will be issued on the Ethereum blockchain. As anecdotal evidence, a crypto Youtuber with 166,000 subscribers uploaded a video tutorial for Ukraine's crypto airdrop and suggested that people not donate to the Bitcoin address if they would like to qualify for this airdrop.\footnote{The Youtuber is \href{https://www.youtube.com/c/CryptoHustleLive}{Crypto Hustle}, and the video can be found \href{https://www.youtube.com/watch?v=dSJzSOOhEs4}{here}.} Indeed, while the Ukrainian government canceled this airdrop, they later airdropped NFTs to 34,000 Ether donors as promised. This is another piece of evidence that the airdrop was most likely to be issued on the Ethereum blockchain. \section{Data} We collected donation transactions between February 26 and March 4, 2022 (UTC), from Ukraine's public donation wallets to focus on the airdrop. Bitcoin transactions were collected from a blockchain tracker,\footnote{The address is 357a3So9CbsNfBBgFYACGvxxS6tMaDoa1P.} and Ethereum transactions were collected from the Etherscan tracker.\footnote{The address is 0x165CD37b4C644C2921454429E7F9358d18A45e14.} We calculated the USD value of donation contributions using the historical prices of Bitcoin and Ethereum. 
Bitcoin prices were retrieved from Yahoo Finance at a daily frequency, and we used the opening price. Ethereum prices vary by minute, and we retrieved the historical price for every transaction based on the timestamps of the transactions. There were 14,903 donation transactions on the Bitcoin blockchain and 69,740 donation transactions on the Ethereum blockchain during this observation window. In total, \$9,874,757 was raised from the Bitcoin blockchain, and \$16,043,036 was raised from the Ethereum blockchain. We plot the histogram of the log-transformed donation transaction size (with no aggregation) for Bitcoin and Ethereum in Figure ~\ref{fig:histogram}. We notice some extremely small donations ($<\$1$) for Ethereum. We show in a robustness check that this small set of donations has no major impact on our findings. \begin{figure}[H] \begin{center} \includegraphics[height=2in]{graphs/histogram.png} \caption{Histogram of Donation Size for Individual Donations} \label{fig:histogram} \end{center} \end{figure} \begin{table}[H] \caption{Hourly Summary Statistics (N=268)} \begin{tabular}{llllllll} \hline & Min & Q1 & Median & Mean & S.D. & Q3 & Max \\ \hline $Ether_{c}$ & 0 & 0 & 0.50 & 0.50 & 0.50 & 1 & 1 \\ $Airdrop_{t}$ & 0 & 0 & 0 & 0.31 & 0.46 & 1 & 1 \\ $DonationCount_{c,t}$ & 0 & 43 & 78.5 & 302.06 & 606.72 & 155.25 & 3720 \\ $AvgDonationSize_{c,t}$ & 64.04 & 160.39 & 252.60 & 573.65 & 1233.90 & 495.54 & 12568.55 \\ \hline \end{tabular} \label{table:summary} \end{table} We aggregate data at an hourly level during this time window. For the first day of our observation (Feb. 26, 2022), we have 14 hourly observations for both Bitcoin and Ethereum because the announcement to accept crypto donations was posted on Feb 26 at 10:29 AM. The first donation for BTC was performed on Feb. 26 at 12:01 PM, and the first donation for Ether on Feb. 26 at 11:54 AM. We started the observation at 11:00 AM, and the first observation for BTC corresponds to zero donations. 
For the remaining days (Feb. 27 through March 4), we have 24 observations each day except for the period of a ``potential but not confirmed airdrop.'' Remember that an announcement was posted about the airdrop not being confirmed on March 1 at 1:42 AM, and the official confirmation was on March 2 at 1:43 AM (Figure \ref{fig:timeline}). After excluding the time window of the potential airdrop, we observe the outcomes of our interest at 134 discrete observation points (hours) for both Ethereum and Bitcoin. We use subscript $c$ to refer to currencies, where $c$ can take $b$ for Bitcoin and $e$ for Ether. We use subscript $t$ to refer to the index of time that ranges from the $1^{st}$ to the $134^{th}$ hour. The dependent variables of our interest include: \begin{itemize} \item $AvgDonationSize_{c,t}$: the average contribution size for currency $c$ during time $t$. \item $DonationCount_{c,t}$: the hourly number of donations for currency $c$ during time $t$. \end{itemize} Our key independent variables include the currency dummy and a binary variable indicating whether the airdrop is actively available (see below). The airdrop is considered available between March 2 at 2:00 AM and March 3 at 6:00 PM, as the airdrop was announced on March 2 at 1:43 AM and the snapshot was scheduled for March 3 at 6:00 PM. Although the airdrop was canceled on March 3 at 6:37 AM, we consider the end of the treatment to be the planned snapshot because many people were not aware of the cancellation. We note that our results stay unchanged even if we change the end time of the treatment to the time of the cancellation. This is reported in a robustness check. \begin{itemize} \item $Ether_{c}$: a binary variable that takes the value of one if the currency is Ether and zero if it is Bitcoin. \item $Airdrop_{t}$: a binary variable that takes the value of one if the airdrop is actively available and zero otherwise. \end{itemize} The summary statistics are reported in Table ~\ref{table:summary}. 
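To make the hourly aggregation concrete, the following sketch groups transactions by currency and hour and computes $DonationCount_{c,t}$ and $AvgDonationSize_{c,t}$. The transaction records and values below are invented for illustration; this is not our actual data pipeline.

```python
from collections import defaultdict

# Hypothetical donation records: (hour_index, currency, usd_value),
# where currency "e" = Ether and "b" = Bitcoin. Values are made up.
transactions = [
    (1, "e", 50.0), (1, "e", 150.0), (1, "b", 300.0),
    (2, "e", 20.0), (2, "b", 80.0), (2, "b", 120.0),
]

counts = defaultdict(int)    # DonationCount_{c,t}
totals = defaultdict(float)  # total USD raised per (currency, hour)

for t, c, usd in transactions:
    counts[(c, t)] += 1
    totals[(c, t)] += usd

# AvgDonationSize_{c,t} = USD raised / number of donations in that hour
avg_size = {key: totals[key] / counts[key] for key in counts}

assert counts[("e", 1)] == 2
assert avg_size[("e", 1)] == 100.0
```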
Other than these reported variables, we also coded hourly dummy variables to account for the temporal trends of giving, which are likely affected by the media coverage of this event and the changing situations in Ukraine. To illustrate the data patterns, we further report the breakdown of the summary statistics by groups in Table ~\ref{table:summarybygroup} and plot the variations of the variables over time in Figures ~\ref{fig:numofdonations} and ~\ref{fig:averagedonations}. As can be seen in Figure ~\ref{fig:numofdonations}, there was a spike in donation counts for Ethereum in comparison to Bitcoin while the airdrop was available. According to Table ~\ref{table:summarybygroup}, the average number of hourly donations was 107.32 on Bitcoin when the airdrop was not available and 87.98 when it was available. However, the difference in donation count for BTC is not statistically significant ($t$=1.54). The average number of hourly donations on Ethereum was 106.48 when the airdrop was not available and 1400.78 when the airdrop was available. This substantial increase in the hourly number of donations is statistically significant ($t$=-8.56). As can be seen in Figure ~\ref{fig:averagedonations}, there is a decrease in average donation size when the airdrop was available. The decrease is statistically significant both for Bitcoin and Ethereum. The average donation size dropped from an average of \$814.80 to \$390.51 for Bitcoin ($t$=1.92) and from an average of \$566.54 to \$233.04 for Ethereum ($t$=3.90). \begin{table} \caption{Summary Statistics by Groups (Hourly)} \begin{tabular}{llllllll} \hline & \multicolumn{3}{c}{$Airdrop_{t}$=0} & \multicolumn{3}{c}{$Airdrop_{t}$=1} & \multicolumn{1}{c}{Paired $t$-test} \\ \cline{2-8} & \multicolumn{1}{c}{Mean} & Median & \multicolumn{1}{c}{S.E.} & \multicolumn{1}{c}{Mean} & Median & \multicolumn{1}{c}{S.E.} & \multicolumn{1}{c}{$t$-stats} \\ \hline \multicolumn{8}{l}{Panel A. 
Contributions on Bitcoin ($Ether_{c}$=0)} \\ \hline $DonationCount_{c,t}$ & 107.32 & 70 & 105.48 & 87.98 & 88 & 39.64 & 1.54 \\ $AvgDonationSize_{c,t}$ & 814.8 & 241.35 & 1872.25 & 390.51 & 195.66 & 658.10 & 1.92* \\\hline \multicolumn{8}{l}{Panel B. Contributions on Ethereum ($Ether_{c}$=1)} \\ \hline $DonationCount_{c,t}$ & 106.48 & 59 & 123.48 & 1400.78 & 1600 & 965.11 & -8.56*** \\ $AvgDonationSize_{c,t}$ & 566.54 & 364.49 & 1872.25 & 233.04 & 185.89 & 180.96 & 3.90*** \\ \hline \textit{Note:} & \multicolumn{3}{r}{$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01} \\ \end{tabular} \label{table:summarybygroup} \end{table} \begin{figure} \centering \begin{minipage}[t]{0.9\textwidth} \includegraphics[width=\textwidth]{graphs/Number_of_Donation_label.png} \caption{Number of Donations.} \label{fig:numofdonations} \end{minipage} \hfill \begin{minipage}[t]{0.9\textwidth} \includegraphics[width=\textwidth]{graphs/Average_donation_label.png} \caption{Average Contribution Size.} \label{fig:averagedonations} \end{minipage} \end{figure} \section{Empirical Design and Analyses} The introduction of an airdrop as a reward to crypto donors is a quasi-experiment. This context is different from a policy intervention that happens to everyone because the airdrop has a much more pronounced impact on the Ethereum blockchain than the Bitcoin blockchain. This context is also different from a randomized controlled trial, which requires a control group unaffected by the intervention.\footnote{In a randomized controlled trial, participants have no knowledge of whether they are in the control group or treatment group. In our context, people have information about the different chances of getting rewards on the two different blockchains.} Although the Bitcoin blockchain is likely less affected by the airdrop than the Ethereum blockchain, it is still affected. This is especially the case because Ether and Bitcoin are currencies that are exchangeable, despite the exchange fees. 
Since the intervention can potentially affect both the Bitcoin and the Ethereum blockchains, we perform an interrupted time series analysis (ITS) under the framework of Autoregressive Integrated Moving Average (ARIMA) modeling in Section \ref{ITS} to estimate the impact of the intervention on Bitcoin and Ethereum separately \citep{schaffer2021interrupted}. This ITS design does not account for potential temporal confounding. We handle the temporal effects that could confound the effect of the intervention using a comparative interrupted time series design, or difference-in-difference (DiD) analysis, in Section \ref{DiD} to estimate the \textit{ordered treatment effect} by exploiting the airdrop's differential impacts on the Ethereum and Bitcoin blockchains \citep{callaway2021difference}. \subsection{Separate Effects of the Intervention} \label{ITS} In this analysis, we evaluate the impact of this airdrop on the Bitcoin and Ethereum blockchains separately by performing an interrupted time series analysis (ITS) under the framework of Autoregressive Integrated Moving Average (ARIMA) modeling. ITS is one of the best designs to estimate the causal impact of a policy intervention when randomized controlled trials are not available \citep{bernal2017interrupted,kontopantelis2015regression}. We use the pre-intervention period data to fit an ARIMA model, with which we forecast the fundraising outcomes in the absence of the intervention. We then use the forecast counterfactual as the dependent variable and allow the observed data to be the independent variables to estimate the effect of the intervention. ITS could be operationalized as a simple segmented regression, which typically requires the time series to have an easy-to-model trend. In our context, we apply the ITS on an ARIMA model to make better predictions by accounting for autocorrelation, seasonality, and underlying time trends \citep{schaffer2021interrupted}. 
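The logic of this ITS procedure can be sketched as follows. This is a deliberately simplified stand-in: an AR(1) fitted by ordinary least squares replaces the full ARIMA model search, and all numbers are invented for illustration; it is not our actual estimation.

```python
# Sketch of the ITS logic: fit a simple model on pre-intervention data,
# forecast the intervention window as the counterfactual, then run a
# no-intercept regression relating observed and forecast values.
# A slope above 1 indicates the observed series exceeds the forecast.

def fit_ar1(series):
    """OLS fit of y_t = a + b * y_{t-1} on a list of floats."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b  # (intercept, slope)

def forecast(a, b, last, steps):
    """Iterate the fitted AR(1) forward to build the counterfactual."""
    out = []
    for _ in range(steps):
        last = a + b * last
        out.append(last)
    return out

pre = [100, 104, 101, 106, 103, 108, 105, 110]  # pre-period counts (toy)
observed = [220, 230, 240]                      # counts during the airdrop

a, b = fit_ar1(pre)
cf = forecast(a, b, pre[-1], len(observed))     # counterfactual path

# No-intercept regression of observed on counterfactual values:
slope = sum(f * o for f, o in zip(cf, observed)) / sum(f * f for f in cf)
assert slope > 1  # observed counts exceed the counterfactual
```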
\subsubsection{Method} We divide the time window into a pre-intervention period, an intervention period, and an after-intervention period. The pre-intervention period lasts from Feb. 26 at 11:00 AM to March 1 at 2:00 AM, corresponding to the time interval between the Ukrainian government's announcement to accept crypto donations and the statement that an airdrop was not confirmed. As mentioned previously, we exclude the duration between the announcement that an airdrop was not confirmed and the airdrop confirmation because it is hard to know whether the intervention has an effect during that period. Thus, the intervention period starts from the official confirmation on March 2 at 2:00 AM and ends at the planned snapshot on March 3 at 6:00 PM. Although the cancellation was announced on March 3 at 6:37 AM, it did not reach the audience effectively; people continued to participate actively in the airdrop until the planned snapshot. The post-intervention period corresponds to the remaining hours on March 3, 2022 after the planned snapshot and the entirety of March 4, 2022. Given this data strategy, our estimation of the effect of the airdrop is conservative. Our results remain unchanged when we consider the cancellation to be the ending of the intervention. For each currency, we use an automated algorithm to select the best-fitting ARIMA model for the outcomes of $DonationCount_{c,t}$ and $AvgDonationSize_{c,t}$ using the pre-intervention period data. To reduce non-stationarity (stationarity is a key assumption for ARIMA), we log-transform all the outcome variables. 
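The period boundaries above can be encoded directly. The sketch below labels an hourly timestamp with its period; the boundaries follow the timestamps stated in the text, and the `label` helper is an illustrative name rather than part of our pipeline.

```python
from datetime import datetime

# Period boundaries (UTC) as described in the Method section:
# pre-intervention, excluded "potential airdrop" window, intervention
# (confirmation to planned snapshot), and post-intervention.
BOUNDS = [
    (datetime(2022, 2, 26, 11), datetime(2022, 3, 1, 2),  "pre"),
    (datetime(2022, 3, 1, 2),   datetime(2022, 3, 2, 2),  "excluded"),
    (datetime(2022, 3, 2, 2),   datetime(2022, 3, 3, 18), "intervention"),
    (datetime(2022, 3, 3, 18),  datetime(2022, 3, 5, 0),  "post"),
]

def label(ts):
    """Return the period name for a timestamp, or None if out of range."""
    for start, end, name in BOUNDS:
        if start <= ts < end:
            return name
    return None

assert label(datetime(2022, 2, 27, 9)) == "pre"
assert label(datetime(2022, 3, 1, 12)) == "excluded"
assert label(datetime(2022, 3, 2, 14)) == "intervention"
```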
An ARIMA model is specified as ARIMA$(p,d,q)$ if no seasonal adjustment is made and ARIMA$(p,d,q)\times (P,D,Q)_S$ if the seasonal adjustment is performed, where: \begin{itemize} \item $p$ = the order of the autoregressive (AR) component to include $p$ lags of the dependent variable \item $d$ = the degree of non-seasonal differencing \item $q$ = the order of the moving average (MA) component to include $q$ lags of the error \item $P$ = the order of the AR component for the seasonal part \item $D$ = the degree of seasonal differencing \item $Q$ = the order of the MA component for the seasonal part \end{itemize} We perform the model selection based on minimizing the information criteria of AIC and BIC. For Bitcoin, the best model for $\log(AvgDonationSize_{c,t})$ is ARIMA(0,0,0). That is to say, the errors are uncorrelated across time -- they are white noise. For Ether, the best model for $\log(AvgDonationSize_{c,t})$ is ARIMA(1,1,2). This suggests some autocorrelation and non-stationarity. For BTC, the best model for $\log(DonationCount_{c,t})$ is ARIMA$(2,1,1)\times (0,1,0)_{24}$; for Ether, the best model is ARIMA$(0,1,1)\times (0,1,0)_{24}$. Both models include an adjustment for seasonality based on the hour of the day. The degree of seasonal differencing is one, meaning that we adjust for seasonality by taking the difference between each observation and the previous value at lag 24. This is intuitive because people's time availability varies across the day. This hour-of-day adjustment was not needed for the average donation size because the average donation size does not fluctuate over the day. First differencing is also needed for both Bitcoin and Ethereum for the outcome of donation count. This indicates non-stationarity, which is expected because the fundraising performance is affected by the spread of the information (i.e., Ukraine is accepting crypto donations) and the dynamic situations in Ukraine. 
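The seasonal component $(0,1,0)_{24}$ amounts to differencing each observation against its value 24 hours earlier, after which the non-seasonal $d=1$ takes a first difference. A toy illustration with a synthetic hourly series (constructed, not our data):

```python
# Sketch of the differencing implied by ARIMA(p,1,q)x(0,1,0)_24:
# a seasonal difference at lag 24 (hour-of-day), then a first difference.

def difference(series, lag):
    """Return series[i] - series[i - lag] for all valid i."""
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

# Synthetic 3-day hourly series: an hour-of-day pattern plus a daily trend.
hourly = [10 + (i % 24) + i // 24 for i in range(72)]

seasonal = difference(hourly, 24)     # removes the hour-of-day pattern
stationary = difference(seasonal, 1)  # removes the remaining trend

# The synthetic series grows by exactly 1 per day at each hour, so the
# seasonal difference is constant and the doubly differenced series is 0.
assert all(v == 1 for v in seasonal)
assert all(v == 0 for v in stationary)
```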
We then forecast the outcomes $Outcome^{'}$ for the intervention period and use the forecasts as the dependent variable, with the independent variable being the observed outcomes, $Outcome$. We estimate a linear regression without an intercept, and the estimated coefficients reflect the effect of the intervention. Note that no other variables are needed in this regression because the time trends, seasonality, and autocorrelation of the outcome have already been accounted for in the forecasting process \citep{schaffer2021interrupted}. \subsubsection{Results} We plot the actual data in solid lines and the predicted outcome in dashed lines for each outcome and each currency in Figures ~\ref{fig:avg_bic} through ~\ref{fig:donation_eth}. As can be seen in Figures ~\ref{fig:avg_bic} and ~\ref{fig:avg_eth}, the observed average donation size (solid blue line) is lower than the predicted value (red dashed line). In Figures ~\ref{fig:donation_bic} and ~\ref{fig:donation_eth}, the observed count of donations (solid blue line) is much higher than the predicted value (red dashed line). Comparing Figure ~\ref{fig:donation_bic} with Figure ~\ref{fig:donation_eth}, we find that the effect of the intervention is much more substantial for ETH than BTC. 
\begin{figure} \centering \begin{minipage}[t]{0.45\textwidth} \includegraphics[height=1.7in]{graphs/Avg-BIC.png} \caption{$AvgDonationSize_{c,t}$ for BTC.} \label{fig:avg_bic} \end{minipage} \hfill \begin{minipage}[t]{0.45\textwidth} \includegraphics[height=1.7in]{graphs/Avg-ETH.png} \caption{$AvgDonationSize_{c,t}$ for ETH.} \label{fig:avg_eth} \end{minipage} \begin{minipage}[b]{0.45\textwidth} \includegraphics[height=1.7in]{graphs/Donation-BIC.png} \caption{$DonationCount_{c,t}$ for BTC.} \label{fig:donation_bic} \end{minipage} \hfill \begin{minipage}[b]{0.45\textwidth} \includegraphics[height=1.7in]{graphs/Donation-ETH.png} \caption{$DonationCount_{c,t}$ for ETH.} \label{fig:donation_eth} \end{minipage} \end{figure} According to the estimation results, the airdrop lowered the log-transformed average donation size by 7.22\% ($p<0.001$) for Bitcoin and by 18.66\% ($p<0.001$) for Ethereum. Further, the log-transformed count of hourly donations increased by 41.64\% for BTC and by 199.57\% for Ether in response to the intervention. \underline{Thus, both H1 and H2 are supported.} Finally, we report the comparisons between the forecasts and the post-intervention period observations. We find that the observed log-transformed average donation size is 1.16\% lower than the prediction for BTC and is 13.74\% lower than the prediction for ETH. We also find that the observed log-transformed donation count is 0.91\% lower for BTC and 18.82\% higher for ETH. This indicates that the treatment effect likely continued to manifest for ETH after the planned snapshot, but the magnitude of the effect was greatly reduced. The analyses above separately estimate the impact of the airdrop on BTC and ETH. By checking the ACF plots of the residuals for each ARIMA model, we observe no autocorrelation or non-stationarity (Appendix A), supporting the validity of our analysis. However, this analysis has several limitations. 
First, it is hard to gauge whether the decrease in the average donation size is due to motivation crowd-out or a selection issue (i.e., the donors who contribute after the availability of the crypto rewards may be systematically different from early donors). Second, while the ARIMA model accounted for seasonality, there could be other non-linear temporal trends that could confound the observed treatment effects. For example, there could be an abrupt change in Ukraine that coincides with the introduction of the airdrop. As such, we perform a comparative ITS, or a DiD analysis, to compare the differential impacts of the airdrop on the Bitcoin and Ethereum blockchains in the next section. \subsection{Comparative Effects of the Intervention} \label{DiD} To address the limitations stated above, we adopt a comparative interrupted time series design, or DiD research design, to enhance identification by estimating an ordered treatment effect that exploits the difference between the impacts of the airdrop on Bitcoin and Ethereum \citep{wing2018designing}. We adopt an extension of DiD with two treatment conditions of different intensities \citep{callaway2021difference,fricke2017identification}. This is a special case of DiD without a control group that compares a strong treatment with a weak treatment and has been widely used to understand interventions without control groups. For example, \cite{duflo2001schooling} used this framework to study the effect of school construction on schooling and labor market outcomes by comparing regions with low and high levels of newly constructed schools. \cite{felfe2015can} leveraged the regional variation in childcare expansion rates to understand the effect of formal childcare on maternal employment as well as child development. 
\subsubsection{Method} In the comparative ITS (DiD), we aim to identify the ordered treatment effect, or the average treatment effect on the treated (Ethereum), represented by $ATET(EB|E)=E[Outcome^{E}_{1}-Outcome^{B}_{1}|E]$, where $E[Outcome^{d}_{t}]$ represents the expected outcome, with $d\in\{B,E,0\}$ and $t\in\{0,1\}$. We use $d=B$ to denote the treatment to get an airdrop reward with a low probability as in the Bitcoin blockchain; we use $d=E$ to denote the treatment to get an airdrop reward with a high probability as in the Ethereum blockchain; we use $d=0$ to denote the condition when the treatment of an airdrop is not available. We use $t=1$ to denote the time when the airdrop was available and $t=0$ to denote the time when it was not available. This is equivalent to the local treatment effects discussed in \cite{angrist1995identification}. We can re-write this equation such that: \begin{equation} \begin{split} &ATET(EB|E)=E[Outcome^{E}_{1}-Outcome^{B}_{1}|E] =E[Outcome^{E}_{1}|E]-E[Outcome^{B}_{1}|E]. \end{split} \end{equation} We observe $E[Outcome^{E}_{1}|E]$ but not $E[Outcome^{B}_{1}|E]$, and draw inferences from the Bitcoin blockchain by leveraging the strong parallel assumption that $E[Outcome^B_{1}|E]-E[Outcome^0_{0}|E]=E[Outcome^B_{1}|B]-E[Outcome^0_{0}|B]$. This assumption is an equal effect size assumption that likely holds in our context if ETH holders and BTC holders are equivalently sensitive to external rewards. We believe that this assumption holds because people's choice between BTC and ETH is a financial decision unrelated to their prosocial motivations. With this assumption, we can re-write $ATET(EB|E)$ such that: \begin{equation} \begin{split} &ATET(EB|E)=E[Outcome^{E}_{1}-Outcome^{B}_{1}|E] \\ &=E[Outcome^{E}_{1}|E]-E[Outcome^0_{0}|E]-E[Outcome^B_{1}|B]+E[Outcome^0_{0}|B], \end{split} \end{equation} where every component of the right side of Equation (2) is observed. 
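The right-hand side of Equation (2) is a difference of two observed differences. A minimal numeric sketch, with hypothetical cell means (not our estimates), makes the arithmetic explicit:

```python
import math

# Hypothetical log(1 + hourly count) cell means for each chain,
# before ("pre") and during ("during") the airdrop. Illustrative only.
eth = {"pre": 4.67, "during": 7.24}
btc = {"pre": 4.68, "during": 4.48}

# ATET(EB|E) per Equation (2): (ETH change) minus (BTC change).
atet = (eth["during"] - eth["pre"]) - (btc["during"] - btc["pre"])
assert round(atet, 2) == 2.77

# With log outcomes, the implied differential percentage change:
pct = (math.exp(atet) - 1) * 100
assert pct > 0
```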
Even if we believe that this assumption does not hold (e.g., ETH holders may react more aggressively to the airdrop), we can partially identify the ordered treatment effects as long as $E[Outcome^B_{0}|E]-E[Outcome^0_{0}|E]=E[Outcome^B_{0}|B]-E[Outcome^0_{0}|B]$. This common trend assumption is widely used in classic DiD designs and highly likely to hold given the parallel pre-intervention trends in Figures ~\ref{fig:numofdonations} and ~\ref{fig:averagedonations} \citep{callaway2021difference}. \cite{fricke2017identification} prove that with partial identification, we can interpret the estimates as a lower bound on the magnitude of the treatment effect. The econometric model we estimate is specified as: \begin{equation} Outcome_{c,t}=\beta_{0}+\beta_{1} Ether_{c}\times Airdrop_{t} +\beta_{2}Ether_{c}+\beta_{3}Airdrop_{t}+\eta_{t}+\epsilon_{c,t}, \end{equation} where the dependent variable $Outcome_{c,t}$ can be operationalized as the logarithm of the number of hourly donations ($DonationCount_{c,t}$) or the logarithm of the average contribution size ($AvgDonationSize_{c,t}$). These outcomes are both log-transformed after adding one because they are highly skewed. To identify the ordered treatment effect, we use two-way fixed effects \citep{callaway2021difference}. $\eta_{t}$ represents the hourly time dummy variables that account for the time fixed effects. The time effects could stem from the dynamic situations in Ukraine, the awareness of the crypto fundraising event, or simply the varying availability of time people have at different times. Such temporal trends affect the Bitcoin and Ethereum blockchains in the same way. The systematic difference between BTC and ETH is the group-level fixed effect, and we control for it using $Ether_{c}$. The coefficient of interest is $\beta_{1}$ as it indicates the differential impacts of the airdrop on the blockchains of Bitcoin and Ethereum. 
\subsubsection{Results} We first run the regression following Equation (3) for the full sample and report the results in Table ~\ref{table:results}. We can see from Model (1) of Table ~\ref{table:results} that the coefficient for $Airdrop_{t} \times Ether_{c}$ is significantly positive for the outcome of donation counts ($\beta_{1}=2.211$, $p<0.01$). Specifically, the average hourly donation count of the Ethereum blockchain increased 812.48\% more than that of the Bitcoin blockchain during the intervention. However, from Model (2) of Table ~\ref{table:results}, the coefficient for $Airdrop_{t} \times Ether_{c}$ is insignificant for the outcome of average donation size ($\beta_{1}=-0.265$, $p>0.1$). That is to say, the average donation size for Ethereum did not change differently from that for Bitcoin during this intervention. We note that the observation number for Model (2) of Table ~\ref{table:results} is 267 but that for Model (1) is 268 because there were no donations in the first hour of fundraising for BTC and the outcome of average donation size was undefined in that case. 
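Because the outcome is log-transformed, the 812.48\% figure follows from the coefficient $\beta_{1}=2.211$ via $(e^{\beta_{1}}-1)\times 100$. A quick check of this conversion:

```python
import math

# Converting a DiD coefficient on a log outcome into a percentage change.
beta_1 = 2.211                      # coefficient from Model (1)
pct = (math.exp(beta_1) - 1) * 100  # implied differential % increase

assert abs(pct - 812.48) < 0.5      # matches the reported 812.48%
```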
\begin{table}[!htbp] \centering
\caption{Regression Results}
\begin{tabular}{c c c}
\\[-1.8ex]\hline \hline \\[-1.8ex]
& \multicolumn{2}{c}{\textit{Dependent variable:}} \\
\cline{2-3} \\[-1.8ex]
& \multicolumn{1}{c}{Log($DonCount$)} & \multicolumn{1}{c}{Log($AvgDonSize$)} \\
\\[-1.8ex] & \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)}\\
\hline \\[-1.8ex]
$Airdrop\times Ether$ & $2.211^{***}\,(0.159)$ & $-0.265\,(0.203)$ \\
$Airdrop$ & $3.778^{***}\,(0.604)$ & $0.275\,(0.945)$ \\
$Ether$ & $-0.044\,(0.088)$ & $0.110\,(0.113)$ \\
Intercept & $0.715^{*}\,(0.426)$ & $5.244^{***}\,(0.771)$ \\
\hline \\[-1.8ex]
Time Dummy & \multicolumn{1}{c}{Yes} & \multicolumn{1}{c}{Yes} \\
\hline
Observations & \multicolumn{1}{c}{268} & \multicolumn{1}{c}{267} \\
R$^{2}$ & \multicolumn{1}{c}{0.895} & \multicolumn{1}{c}{0.659} \\
Adjusted R$^{2}$ & \multicolumn{1}{c}{0.788} & \multicolumn{1}{c}{0.307} \\
\hline \hline \\[-1.8ex]
\textit{Note:} & \multicolumn{2}{r}{$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01} \\
\end{tabular}
\label{table:results}
\end{table}
\begin{table} \centering
\caption{Regression Results - Split Sample}
\begin{tabular}{c c c c c}
\\[-1.8ex]\hline \hline \\[-1.8ex]
& \multicolumn{4}{c}{\textit{Dependent variable:}} \\
\cline{2-5}
& \multicolumn{2}{c}{\textit{$AvgDonSize \leq250$}}& \multicolumn{2}{c}{\textit{$AvgDonSize>250$}} \\
\cline{2-5} \\[-1.8ex]
& \multicolumn{1}{c}{Log($DonCount$)} & \multicolumn{1}{c}{Log($AvgDonSize$)} & \multicolumn{1}{c}{Log($DonCount$)} & \multicolumn{1}{c}{Log($AvgDonSize$)} \\
\\[-1.8ex] & \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)}\\
\hline \\[-1.8ex]
$Airdrop\times Ether$ & $2.273^{***}$ & $-0.330^{***}$ & $1.855^{***}$ & $0.112$ \\
& (0.167) & (0.052) & (0.149) & (0.215) \\
$Airdrop$ & $3.757^{***}$ & $0.194$ & $2.219^{***}$ & $0.868$ \\
& (0.636) & (0.244) & (0.568) & (1.005) \\
$Ether$ & $-0.156^{*}$ & $0.018$ & $0.411^{***}$ & $-0.300^{**}$ \\
&
(0.092) & (0.029) & (0.083) & (0.120) \\ Intercept & $0.628$ & $3.699^{***}$ & $0.141$ & $6.613^{***}$ \\ & (0.448) & (0.199) & (0.400) & (0.820) \\ \hline \\[-1.8ex] Time Dummy & \multicolumn{1}{c}{Yes} & \multicolumn{1}{c}{Yes} & \multicolumn{1}{c}{Yes} & \multicolumn{1}{c}{Yes} \\ \hline Observations & \multicolumn{1}{c}{268} & \multicolumn{1}{c}{267} & \multicolumn{1}{c}{268} & \multicolumn{1}{c}{267} \\ R$^{2}$ & \multicolumn{1}{c}{0.888} & \multicolumn{1}{c}{0.729} & \multicolumn{1}{c}{0.886} & \multicolumn{1}{c}{0.622} \\ Adjusted R$^{2}$ & \multicolumn{1}{c}{0.773} & \multicolumn{1}{c}{0.450} & \multicolumn{1}{c}{0.768} & \multicolumn{1}{c}{0.233} \\ \hline \hline \\[-1.8ex] \textit{Note:} & \multicolumn{4}{r}{$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01} \\ \end{tabular} \label{table:split-results} \end{table} These findings add to our results in the separate analysis in Section \ref{ITS}. To derive more insights, we re-process our data by constructing the data for small donations ($AvgDonSize \leq250$) and large donations ($AvgDonSize>250$) separately. We chose the threshold of \$250 because the median of average donation size is \$252.60. We separately run the regressions for the two datasets and report our findings in Table ~\ref{table:split-results}. For small donations, we find from Model (1) and Model (2) that the airdrop increased the number of donations more aggressively for Ethereum ($\beta^{small}_{1}=2.273$, $p<0.01$) and decreased the average donation size more aggressively for Ethereum ($\beta^{small}_{1}=-0.330$, $p<0.01$). Specifically, the average donation size dropped by 30.1\% more for Ethereum as compared to Bitcoin during this intervention. 
However, for large donations, as in Model (3) and Model (4) of Table~\ref{table:split-results}, we only find a more aggressive increase in the donation count for Ethereum ($\beta^{large}_{1}=1.855$, $p<0.01$), while the decrease in the average donation size was similar between Ethereum and Bitcoin ($\beta^{large}_{1}=0.112$, $p>0.1$). This explains the insignificance of the coefficient for $Airdrop_{t} \times Ether_{c}$ in the main model in Table~\ref{table:results}, which could be due to the greater variance in donation size among large donations. While these findings do not speak directly to our hypotheses, they offer a better understanding of the mechanisms and a stronger identification. We discuss the details in later sections. \section{Robustness Checks} To validate our results, we perform three robustness checks. The first tests whether the reduction in donation size is driven by the existence of extremely small donations; for this check, we perform the split-sample analysis. The second rules out an alternative explanation of donation motivation and serves to confirm our main finding; for this check, we perform the full-sample analysis. The third addresses the sensitivity of our results to the choice of observation window, again using the full sample. \subsection{Exclude Extremely Small Donations} One of our major explanations for the more aggressive reduction in contribution sizes for small donations on Ethereum is motivation crowding-out. However, this finding could also be driven by extremely small donations made by self-interested donors who seek to profit from this fundraising activity. Indeed, we observe a few extremely small donations ($<\$1$) in Figure 2 for the Ethereum blockchain. In this robustness check, we keep only donations larger than \$30 for both Bitcoin and Ethereum. We aggregate our data again and re-run our models. As can be seen from Table~\ref{table:robustnes-extreme}, our findings remain unchanged.
\subsection{Exclude ENS Users} In this robustness check, we exclude Ethereum users who adopted the Ethereum Name Service (ENS). ENS is a naming system that maps human-readable names such as ``chiemsee.eth'' to a machine-readable wallet address like ``0xe817FfE0893Dee3e870c37F94b6e2Ada2e02BBBc.'' This naming service costs \$8.80 and is a way for donors to reveal their identity. Past literature indicates that identity plays an important role in charitable giving, as it is associated with additional extrinsic motivations such as reputation \citep{reinstein2012reputation}. The availability of ENS on Ethereum and its unavailability on Bitcoin could potentially make these two blockchains incomparable. Thus, we remove all ENS users and re-run our DiD analysis for the full sample. We report the results in Table~\ref{table:robustnes-ens}, and our findings stay robust.
\begin{table}[H] \centering
\caption{Robustness - Exclude Extreme Donations}
\begin{tabular}{c c c c c}
\\[-1.8ex]\hline \hline \\[-1.8ex]
& \multicolumn{4}{c}{\textit{Dependent variable:}} \\
\cline{2-5}
& \multicolumn{2}{c}{\textit{$AvgDonSize \leq250$}}& \multicolumn{2}{c}{\textit{$AvgDonSize>250$}} \\
\cline{2-5} \\[-1.8ex]
& \multicolumn{1}{c}{Log($DonCount$)} & \multicolumn{1}{c}{Log($AvgDonSize$)} & \multicolumn{1}{c}{Log($DonCount$)} & \multicolumn{1}{c}{Log($AvgDonSize$)} \\
\\[-1.8ex] & \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)}\\
\hline \\[-1.8ex]
$Airdrop\times Ether$ & $1.913^{***}$ & $-0.077^{***}$ & $1.855^{***}$ & $0.112$ \\
& $(0.142)$ & $(0.029)$ & $(0.149)$ & $(0.215)$ \\
$Airdrop$ & $3.036^{***}$ & $0.827^{***}$ & $2.219^{***}$ & $0.868$ \\
& $(0.542)$ & $(0.134)$ & $(0.568)$ & $(1.005)$ \\
$Ether$ & $-0.227^{***}$ & $0.081^{***}$ & $0.411^{***}$ & $-0.300^{**}$ \\
& $(0.079)$ & $(0.016)$ & $(0.083)$ & $(0.120)$ \\
Intercept & $0.663^{*}$ & $3.637^{***}$ & $0.141$ & $6.613^{***}$ \\
& (0.382) & (0.109) & (0.400) & (0.820)
\\ \hline \\[-1.8ex]
Time Dummy & \multicolumn{1}{c}{Yes} & \multicolumn{1}{c}{Yes} & \multicolumn{1}{c}{Yes} & \multicolumn{1}{c}{Yes} \\
\hline
Observations & \multicolumn{1}{c}{268} & \multicolumn{1}{c}{267} & \multicolumn{1}{c}{268} & \multicolumn{1}{c}{267} \\
R$^{2}$ & \multicolumn{1}{c}{0.889} & \multicolumn{1}{c}{0.676} & \multicolumn{1}{c}{0.886} & \multicolumn{1}{c}{0.622} \\
Adjusted R$^{2}$ & \multicolumn{1}{c}{0.776} & \multicolumn{1}{c}{0.343} & \multicolumn{1}{c}{0.768} & \multicolumn{1}{c}{0.233} \\
\hline \hline \\[-1.8ex]
\textit{Note:} & \multicolumn{4}{r}{$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01} \\
\end{tabular}
\label{table:robustnes-extreme}
\end{table}
\begin{table}[H] \centering
\caption{Robustness - Exclude ENS Users}
\begin{tabular}{c c c}
\\[-1.8ex]\hline \hline \\[-1.8ex]
& \multicolumn{2}{c}{\textit{Dependent variable:}} \\
\cline{2-3} \\[-1.8ex]
& \multicolumn{1}{c}{Log($DonCount$)} & \multicolumn{1}{c}{Log($AvgDonSize$)} \\
\\[-1.8ex] & \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)}\\
\hline \\[-1.8ex]
$Airdrop\times Ether$ & $2.173^{***}\,(0.162)$ & $-0.256\, (0.207)$ \\
$Airdrop$ & $3.888^{***}\,(0.615)$ & $1.834^{*}\,(0.965)$ \\
$Ether$ & $-0.179^{**}\,(0.089)$ & $0.034\,(0.115)$ \\
Intercept & $0.639\,(0.433)$ & $3.683^{***}\,(0.787)$ \\
\hline \\[-1.8ex]
Time Dummy & \multicolumn{1}{c}{Yes} & \multicolumn{1}{c}{Yes} \\
\hline
Observations & \multicolumn{1}{c}{268} & \multicolumn{1}{c}{267} \\
R$^{2}$ & \multicolumn{1}{c}{0.888} & \multicolumn{1}{c}{0.657} \\
Adjusted R$^{2}$ & \multicolumn{1}{c}{0.773} & \multicolumn{1}{c}{0.303} \\
\hline \hline \\[-1.8ex]
\textit{Note:} & \multicolumn{2}{r}{$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01} \\
\end{tabular}
\label{table:robustnes-ens}
\end{table}
\subsection{Alternative Observation Window} In our main analysis, we consider the observation window from Feb.
26, 2022 to March 4, 2022, with the treatment lasting from the official confirmation of the airdrop until the planned snapshot. In this robustness check, we restrict the observation window to start on Feb. 26, 2022 and end at the cancellation of the airdrop on March 3, 2022. This time window captures the treatment more strictly. The exclusion of the post-intervention period further allows a conservative estimation of the treatment effect. We re-run our analyses and report the results in Table~\ref{table:robustnes-window}; our results stay consistent.
\begin{table}[!htbp] \centering
\caption{Robustness - Alternative Observation Window}
\begin{tabular}{c c c}
\\[-1.8ex]\hline \hline \\[-1.8ex]
& \multicolumn{2}{c}{\textit{Dependent variable:}} \\
\cline{2-3} \\[-1.8ex]
& \multicolumn{1}{c}{Log($DonCount$)} & \multicolumn{1}{c}{Log($AvgDonSize$)} \\
\\[-1.8ex] & \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)}\\
\hline \\[-1.8ex]
$Airdrop\times Ether$ & $2.136^{***}\,(0.189)$ & $-0.363\,(0.248)$ \\
$Airdrop$ & $4.094^{***}\,(0.624)$ & $-0.335\,(0.999)$ \\
$Ether$ & $-0.192^{*}\,(0.113)$ & $0.234\,(0.149)$ \\
Intercept & $0.789^{*}\,(0.439)$ & $5.120^{***}\,(0.817)$ \\
\hline \\[-1.8ex]
Time Dummy & \multicolumn{1}{c}{Yes} & \multicolumn{1}{c}{Yes}\\
\hline
Observations & \multicolumn{1}{c}{184} & \multicolumn{1}{c}{183} \\
R$^{2}$ & \multicolumn{1}{c}{0.883} & \multicolumn{1}{c}{0.656} \\
Adjusted R$^{2}$ & \multicolumn{1}{c}{0.763} & \multicolumn{1}{c}{0.296} \\
\hline \hline \\[-1.8ex]
\textit{Note:} & \multicolumn{2}{r}{$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01} \\
\end{tabular}
\label{table:robustnes-window}
\end{table}
\section{Conclusions} \subsection{Mechanism Discussions} From our analysis of the separate effects of the intervention in Section \ref{ITS}, we find that crypto rewards effectively increase donation counts but decrease average donation size.
From our analysis of the comparative effects of the intervention in Section \ref{DiD}, we further find that the donation count increase is more aggressive for donations on Ethereum than on Bitcoin. This is because the crypto rewards are much more likely to be issued on the Ethereum blockchain. However, the rewards did not affect donation sizes on Ethereum and Bitcoin in statistically different ways. Further split-sample studies reveal that there is a more aggressive decrease in average donation sizes on Ethereum for small donations but not for large donations. These findings suggest that the extrinsic incentive in the form of crypto rewards has a positive impact on people's decision of whether to donate and a negative impact on their decision of how much to donate. Below we discuss the possible mechanisms that drive such findings. For the donation count increase, two mechanisms may be at play at the same time. First, the donation count may increase because people are self-interested and want to profit from the crypto rewards. If this is the case, a fundraiser would need to increase the rewards to attract more donations, with no guarantee that the ``net profit'' of the fundraising activities would benefit from the rewards. While we cannot rule out this potential mechanism, we stress that the cost of rewards is low for crypto fundraising because the reward tokens are usually valueless in the beginning. The value of the rewards may increase later when the fungible or non-fungible tokens are sold, but the promotional expenditure will be paid by future buyers instead of the fundraising organizations. Thus, crypto rewards are a novel way to reduce fundraising costs and improve the efficiency of fundraising. Meanwhile, we highlight the importance of the second mechanism: the donation count may increase because people use the rewards as an excuse to rationalize their prosocial behavior.
This mechanism is likely at play because prospective donors respond more aggressively on Ethereum, where the treatment is stronger. In addition, the sizes of crypto donations are generally large, which indicates altruism \citep{jha_2022}. As we reported in Table 2, the median of the daily average donation size was \$241.35 for Bitcoin and \$364.49 for Ethereum before the airdrop. It was \$195.66 for Bitcoin and \$185.89 for Ethereum when the airdrop became available. As a matter of fact, the minimum daily average donation size was \$87.46 for Bitcoin and \$85.58 for Ethereum before the rewards were available, and \$84.57 for Bitcoin and \$64.12 for Ethereum when the crypto rewards were available. If people donated purely for the purpose of profiting, they would not contribute so much, because the Ukrainian government did not specify the crypto rewards and donors had no way to accurately evaluate the cost and benefit of their contributions. Similarly, two mechanisms could drive the decrease in contribution size. First, the average donation size may have decreased because the extrinsic rewards shifted people's mindset from ``a socially-minded altruistic perspective to a more economically minded, monetary perspective'' \citep[p.~95]{newman2012counterintuitive}. We believe that this mindset-shifting mechanism is at play because a stronger incentive (the intervention is stronger on Ethereum than on Bitcoin) corresponds to a more significant donation size decrease for small donations ($\leq\$250$). At the same time, the smaller average contribution size could be due to a selection process: donors who contributed when the airdrop was available may have systematically donated less because of budget limitations or preferences.
While this mechanism is likely at play because we observe a reduction in donation size for both blockchains in our separate analysis, it does not affect the validity of the first mechanism because our results hold in the DiD analysis for small donations. Further, our results hold even if we remove opportunistic donors who made extremely small donations. \subsection{Managerial Implications} Our study generates rich implications for the crypto community. First, while ICO has been widely used as a promotional strategy for blockchain-based projects, its extension to support social causes that are not blockchain-based is a novel application that has never been studied. Our study suggests that ICO has a great potential to stimulate donations by either directly leveraging the self-interested motivation or making use of the excuses people need to behave prosocially. To increase the societal impact of blockchain technology and accelerate the adoption of blockchain, the founders and designers of blockchain should consider applying crypto rewards to various social movements and activities. Second, our finding also indicates the necessity for blockchains to support airdrops. While Bitcoin has a greater market cap than Ethereum, Ethereum has grown more rapidly than Bitcoin. Our study shows that Ethereum's compatibility with smart contracts could effectively improve fundraising performance. Blockchain designers should continue to improve the design of blockchains to support airdrops better. For example, currently, airdrops can only be issued if a donor makes a direct transfer of funds to the recipient's wallet. Donors who use intermediary platforms (e.g., Coinbase) will not receive the airdrop due to technology limitations. Blockchain designers can work with intermediary platforms to design and streamline the airdrop process better. Our study is also important for fundraising organizations. 
While many non-profit organizations have started to accept cryptocurrency donations, they have little experience in raising crypto funds. Our study suggests that crypto rewards are an effective way to acquire donors. Fundraisers can feature the potential value increase of the crypto rewards to encourage charitable giving. Such a strategy is likely effective for both self-interested and socially conscious people. In the meantime, we find a potential decrease in donation size. While this decrease only occurs in small donations, it suggests the possibility of motivation crowding-out that may be triggered in other crypto fundraising activities. Our finding that the more aggressive decrease in donation size happens only among small donations, not large ones, invites future work on strategies to alleviate such decreases with better crypto reward designs. Finally, our study is of great value for policymakers regarding blockchain legislation. Cryptocurrency is considered a capital asset for federal income tax purposes. To enjoy tax benefits, people could donate cryptocurrency to avoid capital gains tax.\footnote{Crypto donations are tax-deductible at the fair market value at the time of the donations.} We show that airdrops are effective promotional strategies for crypto fundraising. Policymakers should discuss whether and how airdrops should be taxed. For example, when reward tokens are re-sold, the new buyers actually pay for the fundraising cost of the initial cause. The associated tax policy would play an important role in the implementation of crypto rewards in fundraising. In addition, our findings imply that with the presence of crypto rewards, donors take into account the future salience of a cause when making contributions. Crypto rewards could possibly change the allocation of funds in society by making people more forward-looking.
\subsection{Limitations and Future Works} Our study captures a unique crypto fundraising event, which was the first of its kind, to understand the impact of crypto rewards in fundraising for social causes. Despite the unique research opportunity and our best efforts, our study has limitations that could be addressed by future studies. First, while we analyzed both the separate effects and the comparative effects of crypto rewards, we did not compare crypto rewards with traditional thank-you gifts of constant value. Future work could leverage experiments to understand how people perceive thank-you gifts of fixed versus varying value. Second, we did not look into the specifications of crypto rewards, as the fundraiser in our context did not specify the rewards. Future work could look into the design of crypto rewards (e.g., comparing fungible tokens with non-fungible tokens). Third, we analyze a quasi-experiment, and our analysis is subject to the corresponding limitations. The separate intervention analysis does not fully account for time confounding, and the comparative study does not have a control group. While we have used various ways to account for such issues, a randomized controlled trial could better address these challenges. Last but not least, our study focuses on the first crypto fundraising event for a social cause. As more and more fundraising activities use similar promotional strategies, the effect of crypto rewards could decay over time. Future work can analyze how the effectiveness of an airdrop changes over time. We believe that crypto fundraising is a fertile area for research and a promising tool for practice. We hope that our work paves the way for future studies to continue understanding how the invention of blockchain could translate into societal value. \newpage \section*{Appendix A - ARIMA Model} Below we present the ACF plots to show that the chosen ARIMA models are not affected by autocorrelation concerns.
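As a minimal sketch of the statistic behind these ACF plots (plain Python with hypothetical helper names; the paper's plots come from standard ARIMA tooling), the sample autocorrelation at each lag is compared against the approximate 95\% white-noise bounds $\pm 1.96/\sqrt{n}$:

```python
import math

def acf(x, lag):
    """Sample autocorrelation of a series at a given lag."""
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    denom = sum(d * d for d in dev)
    num = sum(dev[t] * dev[t + lag] for t in range(n - lag))
    return num / denom

def white_noise_bound(n):
    """Approximate 95% significance bound (the dashed lines in ACF plots)."""
    return 1.96 / math.sqrt(n)

residuals = [0.3, -0.1, 0.2, -0.4, 0.1, 0.0, -0.2, 0.3]  # illustrative residuals
# A lag-k correlation inside the bound is consistent with no autocorrelation.
print(acf(residuals, 1), white_noise_bound(len(residuals)))
```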
The ACF generates a numeric vector with the correlations of the model's residuals. The x-axis corresponds to different lags of the residuals, and the y-axis represents the correlation. We can see from the figures below that the correlations mostly fall within the significance bounds (the blue dashed lines). Thus, the residuals of our regressions are not autocorrelated. \begin{figure}[H] \centering \begin{minipage}[t]{0.45\textwidth} \includegraphics[height=1.4in]{graphs/BIC_AVG.png} \caption{$AvgDonationSize_{c,t}$ for BIC.} \end{minipage} \hfill \begin{minipage}[t]{0.45\textwidth} \includegraphics[height=1.4in]{graphs/ETH_AVG.png} \caption{$AvgDonationSize_{c,t}$ for ETH.} \end{minipage} \begin{minipage}[b]{0.45\textwidth} \includegraphics[height=1.4in]{graphs/BIC_Donation.png} \caption{$DonationCount_{c,t}$ for BIC.} \end{minipage} \hfill \begin{minipage}[b]{0.45\textwidth} \includegraphics[height=1.4in]{graphs/ETH_Donation.png} \caption{$DonationCount_{c,t}$ for ETH.} \end{minipage} \end{figure} \bibliographystyle{plainnat}
\section{Introduction} \label{sec: Intro} Through millennia of evolutionary processes, humans have evolved the sense of touch as a critical sensory method to perceive the world. Delicate tasks, including tactile perception, grasping objects of different shapes, and manipulation of tools, can be completed with fluency and insight given direct sensory feedback. As the ratio of intelligent robots to humans increases worldwide, robotic systems are pursuing more dexterity in contact-rich scenarios, where tactile sensors are being developed to detect the tactile information necessary for robot interaction with objects and environments. Conventional tactile sensors utilize transduction materials such as piezoresistive, capacitive, and piezoelectric components to convert physical contact into digital signals for processing at high speed. However, sensitivities to environmental temperature, vibration, and electrical interference remain issues to be solved. Difficulties also exist in data acquisition connections and interpretation software, which further inhibits the broader application of such sensors \cite{zou2017novel}. In recent years, research into vision-based tactile sensors has been growing due to their advantages in low cost, easy fabrication, high durability, and multi-axial measurement. Developments in digital cameras have made capturing contact conditions in high-quality images inexpensive and easy to interface with. Moreover, progress in computer vision and deep learning assists in transferring knowledge from visual perception to tactile perception, enabling faster analysis of high-dimensional tactile representations in larger images. \begin{figure} \centering \vspace{2mm} \includegraphics[width=0.47\textwidth]{figure/cover.pdf} \vspace{-0.4cm} \caption{a) Two DelTact sensors are mounted on a FE Gripper of a Panda Robotic Arm to grasp a yellow key. b) Visualized optical flow. c) Gaussian density plotted as a heat map.
d) Isometric view of the shape reconstruction. e) Estimated shear force distribution. f) Estimated normal force distribution.} \label{fig: cover} \vspace{-0.7cm} \end{figure} Representative studies including GelSight \cite{yuan2017gelsight}, GelSlim \cite{taylor2021gelslim3} and Digit \cite{lambeta2020digit} demonstrated the advances of vision-based tactile sensors in contact measurement. These sensors adopted photometric stereo technology to obtain a precise depth estimation. Similar to GelForce \cite{vlack2004gelforce}, they measured the surface deformation by tracking dot markers. However, to guarantee the accuracy of photometric stereo, the dot tracking methods cannot achieve full resolution. The deformation in undetected regions relied on interpolation from nearby measurements, which might cause information loss. To obtain full-resolution surface deformation tracking, Sferrazza et al. \cite{sferrazza2019design} and Kuppuswamy et al. \cite{kuppuswamy2020soft} captured the movement of patterns/particles with higher density on the contact substrate. But the quality of the tracking left room for improvement, and the results were utilized solely for force \cite{sferrazza2019design} or depth \cite{kuppuswamy2020soft}. Therefore, our motivation is to develop a sensor that can, first, achieve full-resolution measurement of the surface deformation and, second, extract richer contact information, including depth and force. In this paper, we present DelTact, a new version of the vision-based tactile sensor based on our previous framework \cite{du2021high}. The name abbreviates the sensor's main feature: using a dense color pattern to capture tactile information. The sensor is designed to be compact and convenient to integrate into modern robotic systems such as grippers and robot fingers, with online sensing of shape and force at high spatial and temporal resolution.
Our work thus makes three main contributions to this field: \begin{itemize} \item Presenting a new modular hardware design of a vision-based tactile sensor. The sensor has a simple structure and a large sensing area ($675$mm$^2$) while remaining compact in size (shown in Table~\ref{Tab:sensor comp}). \item Proposing a parametric optimization framework for the previous random color pattern \cite{du2021high}, using indentation experiments, to track the 2D displacement field at full resolution with submillimeter accuracy. \item Integrating online tactile measurement algorithms into software to extract contact shape and force distribution from the optical flow at high spatial ($798\times586$) and temporal (40Hz) resolution. \end{itemize} The paper proceeds as follows: Section~\ref{sec: related works} reviews related work on vision-based tactile sensor designs and information processing methods. In Section~\ref{sec: hardware}, we give a complete description of the design and fabrication of the proposed sensor. In Section~\ref{sec: software}, algorithms for raw image preprocessing, surface deformation measurement, and contact information extraction are presented. In Section~\ref{sec: experiment}, pattern selection and tactile measurement experiment results are shown and analyzed. Finally, in Section~\ref{sec: conclusion}, we conclude with a discussion and directions for future research. \section{Related Work} \label{sec: related works} \subsection{Vision-based Tactile Sensor} As early as 2001, Kamiyama et al. developed a vision-based tactile sensor \cite{kamiyama2001vision}, where colored markers were deployed in a transparent elastomer and tracked by a CCD camera to measure the gel deformation at different depths. The concept of this prototype was later improved into GelForce \cite{vlack2004gelforce}, which could obtain complete information on contact force (i.e., direction, magnitude and distribution).
Continued research based on the GelForce-type working principle focused on a more compact form factor hardware with broadened functionalities of the sensor to better integrate with robotic systems such as robot hands and grippers. Yamaguchi et al. \cite{yamaguchi2016combining} proposed a FingerVision sensor to combine visual and tactile sensing with only one monocular camera. Lepora et al. \cite{ward2018tactip} introduced the TacTip family with a bio-inspired data acquisition system to simulate mechanoreceptors under human skin and detect contact information. Sferrazza et al. \cite{sferrazza2019design} presented a high-resolution tactile sensor with randomly distributed fluorescent markers and used optical flow tracking to achieve high-accuracy force sensing. Kuppuswamy et al. \cite{kuppuswamy2020soft} showed a Soft-bubble gripper with a pseudorandom dot pattern to estimate shear deformation. These sensors are all characterized by simple structures and easy fabrication. Another series of GelSight-type sensors adopted a retrographic sensing technique to obtain high-resolution 3D deformation. Works regarding GelSight were demonstrated by Yuan et al. \cite{yuan2017gelsight} and Dong et al. \cite{dong2017improved}, who cast colored light onto a Lambertian reflectance skin to measure the surface normal of deformation directly and reconstructed the dense accurate 3D shape. To further reduce the size of the sensor for convenient installation onto grippers, GelSlim 3.0 \cite{taylor2021gelslim3} was developed with optimized optical and hardware design. Padmanabha et al. \cite{padmanabha2020omnitact} showed a finger-size touch sensor, OmniTact, to perform multi-directional tactile sensing. Lambeta et al. \cite{lambeta2020digit} released their fingertip Digit sensor with integrated circuit design at low cost for extensive application in robot manipulation. 
In our work, we aim to design a sensor that combines the advantages of these two types, that is, to achieve full-resolution, multi-modality contact perception while having a simpler structure and less restrictive optical requirements than the GelSight-type sensors. \subsection{Tactile Information Extraction} The raw signal received by a vision-based tactile sensor contains diverse, compound information that depends on the contact condition between the sensor and the environment. Furthermore, to achieve dexterity in challenging tactile-related tasks such as tactile exploration, grasping, manipulation, and locomotion, tactile information extraction algorithms generally need the capacity for multi-modality contact measurement and the versatility to recognize various levels of features with models \cite{li2020review}. Existing tactile sensors directly obtained low-level features such as deformation \cite{yamaguchi2016combining}, texture \cite{dong2017improved}, contact area localization \cite{donlon2018gelslim}, geometry reconstruction \cite{yuan2017gelsight} and force estimation \cite{yuan2017gelsight,sferrazza2019design} at the contact site. Algorithms with low complexity could solve the problem using linear regression \cite{dong2017improved}, principal component analysis (PCA) \cite{she2019cable}, and graphic features such as entropy \cite{yuan2015measurement}, Voronoi feature \cite{cramphorn2018voronoi} and Gaussian density \cite{du2021high}. Besides, complicated tasks that require high-level information have been performed, including object recognition \cite{yuan2017connecting}, localization of dynamic objects \cite{li2014localization}, simultaneous localization and mapping on objects \cite{bauza2019tactile}, and slip detection \cite{dong2017improved}. Learning-based methods may be preferred in such tasks to analyze high-dimensional tactile images with good generalization and accuracy.
In our work, we aim at using cost-effective algorithms to estimate low-level contact information as a proof of concept for our tactile sensing method. \section{Hardware Design} \label{sec: hardware} For the hardware part, three principles serve as guidelines for the sensor design. \begin{enumerate} \item \textbf{Robustness}: The sensor should provide accurate and stable performance. This requires higher mechanical strength for longer service life and less noise during image capture. \item \textbf{Compactness}: A compact size enables better integration with robot fingers to perform manipulation tasks in different scenarios, especially in narrow spaces. \item \textbf{Easy to Use}: The sensor is easy to install, operate and maintain. Electrical parts, including signal and power wires, are also convenient to connect. \end{enumerate} \begin{figure} [ht] \centering \includegraphics[width=0.48\textwidth]{figure/sensor_exp_cr.pdf} \vspace*{-3mm} \caption{Mechanical configuration of DelTact (exploded view). The parts are 1. tactile skin base; 2. acrylic plate; 3. sensor shell; 4. light holder; 5. camera; 6. camera holder; 7. sensor cover; 8. screws.} \label{fig:explod sensor} \vspace{-0.5cm} \end{figure} \subsection{System Configuration} \label{subsec: System Configuration} Based on the design principles, the system configuration of DelTact is elaborated below. It consists of three subsystems (i.e., tactile subsystem, imaging subsystem, and mechanical subsystem) underlying the primary function of the sensor. The subsystems can be further disassembled into eight individual parts, each designed to occupy the least space possible for maximum compactness. Details about the subsystems are presented as follows. \subsubsection{Tactile Subsystem} The tactile subsystem comprises a tactile skin base and an acrylic plate. The tactile skin base (part 1 in Fig.
\ref{fig:explod sensor}) is a black housing frame that fixes the tactile skin with a dense color pattern at the bottom and prevents the skin from detaching from the sensor. The generation of the dense color pattern is presented in Section \ref{subsubsec: pattern generate}. For the tactile skin material, we choose a transparent soft silicone rubber (Solaris™ from Smooth-On, Inc.). Solaris™ satisfies the requirements for both softness and toughness, with a shore hardness of 15A and a tensile strength of 180 psi. The tactile skin is 12 mm thick with a surface area of $36$mm$\times34$mm, with fillets on the edges to reduce wear. To guarantee enough support against excessive deformation under external load, a 2-mm thick rectangular acrylic plate (part 2 in Fig. \ref{fig:explod sensor}) is attached tightly to the back of the skin. \begin{figure} \centering \vspace*{2mm} \includegraphics[width=0.4\textwidth]{figure/sensor_comp_cr.pdf} \vspace*{-1mm} \caption{3D model of previous Gecko-enhanced tactile sensor (left, $52 \times 108 \times 69$ mm$^3$), FingerVision sensor (middle, $51 \times 65 \times 52$ mm$^3$) and DelTact (right, $39 \times 60 \times 30$ mm$^3$).} \label{fig:sensor comp} \vspace{-0.5cm} \end{figure} \subsubsection{Imaging Subsystem} The imaging subsystem consists of a light holder and a camera module (parts 4 and 5 in Fig. \ref{fig:explod sensor}). The light holder is made of semitransparent white resin, and a light strip with five 5050 SMD LEDs is inserted into the holder. The strip is connected with a $750\Omega$ resistor in series and powered by a $5$V DC power source to reach the desired illuminance. Owing to the scattering inside the light holder, light from the LEDs is diffused to reduce overexposure. For the camera, the Waveshare IMX219 Camera Module with a short fisheye lens is chosen to achieve a 200-degree FOV and a close minimum photographic distance. 
This camera is consistent with our compact design principle and acquires images at a high resolution of $1280\times720$ at 60 frames per second. Regarding signal transmission, the camera is connected to an Nvidia Jetson Nano B01 board, where the image can be directly processed with CUDA on board or sent to another PC via ethernet. To integrate the camera into the system, a camera holder (part 6 in Fig. \ref{fig:explod sensor}) is inserted to lock the camera in place with two M2 screws, which stabilizes the camera and protects its circuit. \subsubsection{Mechanical Subsystem} The mechanical subsystem includes the sensor shell and the sensor cover (parts 3 and 7 in Fig. \ref{fig:explod sensor}). The purpose of the shell and the cover is to shield the tactile sensing parts from outside interference and achieve a maximally compact assembly. Therefore, the shell and cover are opaque and fully enclose the sensor to block external light and dust. The wall thickness of these components is 1.5 mm to ensure sufficient strength. For flexible assembly and to reduce relative slip, the cover has four snap-fit cylinders connected to the camera holder. All the sensor components are assembled with four 19-mm long M1.6 screws, and each part can be maintained or replaced within minutes thanks to the modular design. A 20-mm long end-effector mount (shown in Fig. \ref{fig:explod sensor}) is set on the shell to work with external connections. The geometry and position of the mount can easily be redesigned to fit different grippers. The overall dimension of the sensor is $39\times60\times30$ mm$^3$ with the end-effector mount included. As shown in Fig. \ref{fig:sensor comp}, the size of DelTact is significantly reduced compared to the previous sensors \cite{du2021high}\cite{pang2021viko}, while the sensing area remains almost the same. 
\subsection{Fabrication Process} \label{subsec: Fabrication Process} Fabrication of the sensor involves several steps. To begin with, all the mechanical components and a mold for silicone casting are fabricated using a photocuring 3D printer with black or white tough epoxy resin from Formlabs. This guarantees higher accuracy, a stronger structure and a smoother surface. Then the two parts of the Solaris™ silicone are mixed in a $1:1$ ratio and degassed in a vacuum chamber to remove air bubbles. The mixture is poured gently into the mold to cure into the desired shape. This mold can cast four modules at a time. Meanwhile, the tactile skin base is placed into the mold to bond with the tactile skin. The acrylic plate is laser cut and coated uniformly with a layer of primer (DOWSIL™ PR-1200 RTV from Dow, Inc.) to enhance bonding with the silicone. It is also placed onto the tactile skin base while the silicone is curing. The mixed gel cures in 16 hours at room temperature ($23^\circ$C), and heating at $65^\circ$C in a constant temperature cabinet can effectively reduce this time to 8-10 hours. When formed, the elastomer layer adheres firmly under the acrylic plate and serves as the deformation interfacing substrate. The dense color pattern is applied onto the transparent silicone surface with a water transfer printing technique. The paint of the pattern is printed on a dry ductile soft film sticker. On contact with water, the film becomes adhesive and adheres to the silicone surface. After the moisture evaporates, the color pattern remains firmly on the silicone. Different types of pattern stickers can be made to switch between different modalities, e.g., dense flow, sparse dots and transparent gel (shown in Fig. \ref{fig:tactile base}). Once the sticker has dried, two thin layers of protective silicone (Dragon Skin™ 10 FAST from Smooth-On, Inc.) are coated over the pattern. 
White pigment is added to the inner layer to enhance imaging brightness and disperse light from the LEDs. Compared to the past design of spraying a frosted paint layer \cite{pang2021viko}, this method is simpler yet more durable, and takes less time to achieve the same effect. The outermost layer is sealed with black pigment to isolate potential external light disturbance and block background interference. \begin{figure} \centering \vspace*{2mm} \includegraphics[width=0.43\textwidth]{figure/tactile_base.jpg} \vspace*{-1mm} \caption{Tactile sensing skin with three types of pattern (from left to right: dense color pattern, dot matrix, transparent).} \label{fig:tactile base} \vspace{-7mm} \end{figure} Finally, all eight components are assembled. Thanks to the tight tolerances of the structural design, the four holes on each part are well aligned for screw threading. The DelTact sensor can be mounted onto the FE gripper of a Panda robotic arm directly to perform manipulation tasks (shown in Fig. \ref{fig: cover}.a). \section{Software Design} \label{sec: software} In this section, we show how to convert raw input images from DelTact into meaningful contact information. Rich contact information such as deformation, force distribution, and shape is extracted online with high resolution and accuracy using computationally efficient algorithms. \subsection{Image Preprocessing} As a fisheye lens is used to obtain a large field of view (FOV) and fully cover the sensing area, the camera requires calibration to compensate for radial and tangential distortion. In addition, distortion also occurs due to the thick silicone layer, which adds a lens effect to the original image. Therefore, we take this into account by calibrating the camera module in the presence of the gel. The \href{https://docs.opencv.org/3.4/dc/dbb/tutorial_py_calibration.html}{OpenCV} camera calibration functionality is utilized \cite{opencv_library}. 
To capture the image distorted through the silicone, we fabricate a tactile base with a transparent Solaris™ layer (the right one shown in Fig. \ref{fig:tactile base}) and mount it onto the sensor. A chessboard is printed to mark the 3D positions of points in the world frame. With the transparent tactile skin mounted, we took 14 images of the chessboard at different positions and orientations for the intrinsic and extrinsic calibration. \subsection{Deformation Measurement with Dense Color Pattern} \label{subsec: Random Color Pattern algo} Optical flow tracking of the dense random color pattern is the primary algorithm used to measure sensor surface deformation. The vector field obtained from the algorithm represents the 2D projection of the 3D surface deformation onto the camera frame, from which rich contact information can be extracted. Thus, the tracking accuracy of the pattern influences the sensor performance at a fundamental level. Here we adopt improved optical flow with adaptive referencing, which was proposed in \cite{du2021high} and is briefly reviewed in Section \ref{subsubsec: dis flow}. Then, we focus on generating color patterns with high randomness (Section \ref{subsubsec: pattern generate}). \subsubsection{Dense Optical Flow and Adaptive Reference} \label{subsubsec: dis flow} We compute dense optical flow using Gunnar Farneback's algorithm \cite{farneback2003two} on GPU for accuracy and low overhead \cite{opencv_library}. The algorithm estimates the 2D displacement vector field from the image sequence at high frequency. It solves the traditional optical flow problem in a dense (per pixel) manner, by finding a warping vector \textbf{u} = (\textit{u}, \textit{v}) for each template patch \textit{T} in the reference image which minimizes the squared error between patches in the reference image and the query image $I_{t}$. 
\begin{equation} \begin{aligned} \text{\textbf{u}} = \text{argmin}_{\text{\textbf{u}}'} \sum_{x}[I_{t}(\text{\textbf{x}} + \text{\textbf{u}}') - T(\text{\textbf{x}})]^{2} \end{aligned} \label{eq: warping vector} \vspace{-0.1cm} \end{equation} where \textbf{x} = $(x, y)^T$ represents the pixels in patch $T$ from the reference image. Since referring to a static initial frame causes errors under large deformation, when rigid template matching fails to track the distorted pattern, an adaptive referencing strategy is introduced to automatically select a new reference frame during operation. An inverse warped image is computed based on a coarser flow and compared with the current reference image. Once the photometric error between the two images exceeds a fixed threshold, the current image replaces the old reference image for tracking. During this process, the total optical flow is the superposition of all the flows calculated. This method guarantees that the matched correspondence between frames remains accurate and allows small non-linear transformations of the template image \cite{du2021high}. \subsubsection{Pattern Generation} \label{subsubsec: pattern generate} A dense color pattern is used because the dense optical flow algorithm estimates the motion of patches based on the image intensity variance, as shown in Eq. \ref{eq: warping vector}. Therefore, an initial frame where every pixel has distinct RGB (or grayscale) values is more random and contains more features to track, as opposed to a monochromatic image, which is untrackable. Three parameters, i.e., the pattern resolution $h \times w$ (the pattern has a length-width ratio of 1:1), the patch size $d$, and the randomness regulation factor $r$, are predetermined to form the pattern. Here the patch size determines the side length of the square color patch in mm. 
For instance, as the pattern is printed in an area of $35 \times 35$ mm$^2$, a patch size of 0.15 indicates that each color patch is $0.15 \times 0.15$ mm$^2$. Therefore, the pattern resolution will be $700 \times 700$, and each patch takes up $3 \times 3$ pixels. To generate the color pattern, we begin by filling the first patch at the top-left of the image. Three numbers are drawn randomly from a uniform distribution on $[0,1]$ and applied to the RGB channels of the patch. Then the rest of the patches are computed based on the existing patches. Two patches are neighbors if they are connected through an edge, i.e., they form a 4-neighbor system. Moreover, the randomness regulation factor $r \in [0,1)$ adjusts the variance of RGB values between neighboring patches. A new patch is filled so that the minimum squared difference in each RGB channel between the new patch and its neighbors is larger than $r$. A detailed description of this process is shown in Algorithm \ref{alg:pat_code}. \begin{algorithm} \caption{Patch Generation} \label{alg:pat_code} \begin{algorithmic}[1] \REQUIRE ~~\\ Patch size in pixels, $d$;\\ Randomness regulation factor between neighbors, $r$;\\ Neighbor patches, $P_1, P_2, \dots, P_n$;\\ \ENSURE ~~\\ Dense color patch, $P$; \STATE Initialize zero value matrix $P$ with shape $d \times d \times 3$; \FOR{i from 1 to n} \STATE Store RGB values $R_i, G_i, B_i$ of neighbor patch $P_i$; \ENDFOR \STATE Generate $R, G, B = rand(0,1)$; \WHILE{$ min((R-R_i)^2, (G-G_i)^2, (B-B_i)^2) < r$} \STATE Regenerate $R$, $G$, $B$; \ENDWHILE \STATE Apply $R, G, B$ to RGB channels of $P$; \RETURN $P$ \end{algorithmic} \end{algorithm} We continue by filling the first row and first column of patches, where the neighbors are the left and the upper patches, respectively. Finally, all unfilled patches are computed in sequence in row-major order with respect to the previously generated neighbor patches. 
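As an illustration, a minimal Python sketch of Algorithm \ref{alg:pat_code} combined with the row-major filling order is given below. This is not the authors' implementation; the \texttt{max\_tries} guard is an added assumption to avoid looping forever when the neighbor constraint becomes infeasible for large $r$.

```python
import numpy as np

def generate_pattern(n_rows, n_cols, d, r, seed=0, max_tries=10000):
    """Sketch of the dense color pattern generation (Algorithm 1).

    Patches are filled in row-major order; each new patch must differ
    from every already-filled 4-neighbor by at least r in the squared
    difference of every RGB channel.  d is the patch size in pixels.
    max_tries is a safety guard not present in the paper."""
    rng = np.random.default_rng(seed)
    patches = np.zeros((n_rows, n_cols, 3))
    for i in range(n_rows):
        for j in range(n_cols):
            # the upper and left patches are the already-filled neighbors
            neighbors = []
            if i > 0:
                neighbors.append(patches[i - 1, j])
            if j > 0:
                neighbors.append(patches[i, j - 1])
            for _ in range(max_tries):
                rgb = rng.uniform(0.0, 1.0, 3)
                if all(((rgb - nb) ** 2).min() >= r for nb in neighbors):
                    break
            else:
                raise RuntimeError("neighbor constraint infeasible for this r")
            patches[i, j] = rgb
    # expand each patch to a d x d pixel block
    return np.kron(patches, np.ones((d, d, 1)))
```

For example, \texttt{generate\_pattern(8, 8, 3, 0.05)} yields a $24\times24$ RGB image in which every pair of neighboring patches satisfies the constraint.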
To find the parameters $d$ and $r$ that minimize the optical flow tracking error, an indentation experiment was conducted, as described in Section \ref{sec: pat exp}. \subsection{Tactile Measurement Algorithm}\label{sec: tactile measurement algo} In this section, we present the algorithm pipeline for extracting tactile information, i.e., shape and contact force, from the image. \subsubsection{Shape Reconstruction} The method of 3D shape reconstruction was presented in our previous work \cite{du2021high}, based on the optical flow with adaptive referencing described in Section \ref{subsubsec: dis flow}. Because the 2D optical flow is essentially a projection of the 3D deformation onto the camera frame, an expansion of the field indicates a deformation in the normal direction. Thus, to extract a shift-invariant measurement of expansion, we apply a 2D Gaussian distribution kernel to the flow vectors and accumulate the distributions at each point to obtain the Gaussian density. The covariance matrix is given by: \begin{equation} \mathbf{Q} = \begin{bmatrix} \sigma^2 & 0 \\ 0 & \sigma^2 \\ \end{bmatrix}. \label{eq: covariance} \end{equation} The relative depth of the surface deformation can be directly estimated from the negative Gaussian density. The result of shape reconstruction is shown in Section \ref{sec: shape exp}. \subsubsection{Contact Force Estimation} The surface total force (normal force and shear forces along the x/y-directions) can be inferred from the vector field based on natural Helmholtz-Hodge decomposition (NHHD) \cite{zhang2019effective}. 
The optical flow $\vec{V}$ is decomposed as \begin{equation} \vec{V} = \vec{d} + \vec{r} +\vec{h}, \vspace{-1mm} \end{equation} where $\vec{d}$ denotes the curl-free component ($\nabla \times \vec{d} = \vec{0}$), $\vec{r}$ denotes the divergence-free component ($\nabla \cdot \vec{r} = \vec{0}$), and $\vec{h}$ is harmonic ($\nabla \times \vec{h} = \vec{0}$, $\nabla \cdot \vec{h} = \vec{0}$) \cite{bhatia2014natural}. The sum of the vector norms of $\vec{d}$ and the norm of the vector sum of $\vec{V}$ can then be used to estimate the total normal force and shear force. Sometimes a densely distributed force field is preferred, as it provides richer information for control purposes. Therefore, we now present a method to break down the total force into a force distribution. Given the optical flow with NHHD components, $\vec{V} = \vec{d} + \vec{r} +\vec{h}$, we approximate the normal force and the shear forces in the x and y directions, $f = \begin{bmatrix} f_{normal} & f_{shearX} & f_{shearY} \end{bmatrix}^T$, at the displacement point $p = (i, j)$ with a linear model: \begin{equation} f= \text{diag}\left ( Ax \right ). \vspace{-1mm} \end{equation} $A$ is a $3 \times 6$ linear coefficient matrix \begin{equation} A = \begin{bmatrix} a_{11} & a_{12} & ... & a_{16} \\ a_{21} & a_{22} & ... & a_{26} \\ a_{31} & a_{32} & ... & a_{36} \end{bmatrix}, \vspace{-1mm} \end{equation} where $a_{14} = a_{15} = a_{16} = 0$. 
Here $x$ is a $6 \times 3$ linear term matrix \begin{equation} x = \begin{bmatrix} D_p & {h_{px}} + {r_{px}} & {h_{px}} + {r_{px}} \\[0.2em] D_p^2 & \left( {h_{px}} + {r_{px}} \right)^2 & \left( {h_{px}} + {r_{px}} \right)^2 \\[0.2em] D_p^3 & \left( {h_{px}} + {r_{px}} \right)^3 & \left( {h_{px}} + {r_{px}} \right)^3 \\[0.2em] 0 & {h_{py}} + {r_{py}} & {h_{py}} + {r_{py}} \\[0.2em] 0 & \left( {h_{py}} + {r_{py}} \right)^2 & \left( {h_{py}} + {r_{py}} \right)^2 \\[0.2em] 0 & \left( {h_{py}} + {r_{py}} \right)^3 & \left( {h_{py}} + {r_{py}} \right)^3 \end{bmatrix}, \end{equation} where $D_p$ is the processed non-negative Gaussian density, and ${r_{px}}$, ${r_{py}}$, ${h_{px}}$ and ${h_{py}}$ are the x and y components of $\vec{r}$ and $\vec{h}$ at point $p$. Then, $A$ is calibrated using the total force, which is assumed to be the superposition of $f$ across the surface. The total forces along the normal and shear directions, $F= \begin{bmatrix} F_{normal} & F_{shearX} & F_{shearY} \end{bmatrix}^T$, are also given by the linear model as: \begin{equation} F = \text{diag}\left ( AX \right ), \end{equation} with \begin{equation} X = \begin{bmatrix} \sum D_p & \sum \left( {h_{px}} + {r_{px}} \right) & \sum \left( {h_{px}} + {r_{px}} \right) \\[0.2em] \sum D_p^2 & \sum \left( {h_{px}} + {r_{px}} \right)^2 & \sum \left( {h_{px}} + {r_{px}} \right)^2 \\[0.2em] \sum D_p^3 & \sum \left( {h_{px}} + {r_{px}} \right)^3 & \sum {\left( {h_{px}} + {r_{px}} \right)^3} \\[0.2em] 0 & \sum \left( {h_{py}} + {r_{py}} \right) & \sum \left( {h_{py}} + {r_{py}} \right) \\[0.2em] 0 & \sum \left( {h_{py}} + {r_{py}} \right)^2 & \sum \left( {h_{py}} + {r_{py}} \right)^2 \\[0.2em] 0 & \sum \left( {h_{py}} + {r_{py}} \right)^3 & \sum \left( {h_{py}} + {r_{py}} \right)^3 \end{bmatrix}. \end{equation} This model is calibrated with another force indentation experiment, as described in Section \ref{sec: force cali}. 
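To make the per-point model concrete, the evaluation of $f = \text{diag}(Ax)$ can be sketched as follows. This is a hypothetical illustration: the coefficient matrix \texttt{A} would come from the calibration in Section \ref{sec: force cali}, and the placeholder values below are not calibrated ones.

```python
import numpy as np

def point_force(A, D_p, hr_x, hr_y):
    """Evaluate the per-point linear model f = diag(A x).

    D_p  : processed non-negative Gaussian density at point p
    hr_x : x component of (h + r) at p, i.e. h_px + r_px
    hr_y : y component of (h + r) at p, i.e. h_py + r_py
    A    : 3x6 coefficient matrix (with a_14 = a_15 = a_16 = 0)
    Returns [f_normal, f_shearX, f_shearY]."""
    x = np.array([
        [D_p,    hr_x,    hr_x],
        [D_p**2, hr_x**2, hr_x**2],
        [D_p**3, hr_x**3, hr_x**3],
        [0.0,    hr_y,    hr_y],
        [0.0,    hr_y**2, hr_y**2],
        [0.0,    hr_y**3, hr_y**3],
    ])
    return np.diag(A @ x)

# Hypothetical coefficients, for illustration only (not calibrated values):
A = np.zeros((3, 6))
A[0, 0] = 2.0   # normal force from the first-order density term
A[1, 0] = 1.0   # shear x from the first-order terms of the second column
A[2, 3] = 1.0   # shear y from the first-order (h + r)_y term
f = point_force(A, D_p=0.5, hr_x=0.2, hr_y=-0.1)
```

With these placeholder coefficients the model reduces to $f = (2D_p,\; h_{px}+r_{px},\; h_{py}+r_{py})$, which illustrates how each output channel draws on a different column of $x$.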
\section{Experiments and Results}\label{sec: experiment} In this section, we present the procedures and results of random pattern selection and force regression model calibration with two similar indentation experiments. Then the experimental results of tactile measurements, i.e., extracting shape and contact force from the optical flow, are presented to demonstrate sensor performance. \subsection{Pattern Selection} \label{sec: pat exp} Because the ink printing quality and the camera resolution are limited, there is a minimum patch size and a maximum randomness for the pattern to be captured by the camera. To choose the dense color pattern that fits best with the optical flow method, we conducted a series of indentation experiments to measure the accuracy of flow tracking using different patterns. We first fabricated nine tactile skin bases with patch size $d \in \{ 0.1, 0.15, 0.2 \}$ and randomness regulation factor $r \in \{ 0.1, 0.3, 0.5 \} $. The skin bases were mounted on a testing sensor that had the same configuration as DelTact but a different shell for fixing it on a table (shown in Fig. \ref{fig:pat exp config}.b). Five 3D printed indenters (shown in Fig. \ref{fig:pat exp config}.c) pressed the sensor surface and moved along the x/y/z-axes, driven by an electric linear stage, to generate surface deformation in all directions. \begin{figure} \centering \vspace*{3mm} \includegraphics[width=0.475\textwidth]{figure/pat_config.pdf} \vspace*{-3mm} \caption{Pattern selection experiment configuration. a) Demountable tactile skin base using a dense color pattern combined with white dots ($d=0.2$, $r=0.3$). b) Data collection with an electric 3-axis linear stage. c) SolidWorks models of the 5 indenters. From left to right: 4 dots, edges, ellipsoid, hexagonal prism, star.} \label{fig:pat exp config} \vspace{-7mm} \end{figure} The experiment was carried out in the following steps: \begin{enumerate} \item The sensor with the tactile base mounted was fixed on a table. 
\item The 2-dot indenter was installed on the linear stage. \item The stage was driven to press the sensor surface at four positions; the contact depths at each position were 5 mm and then 10 mm. \item At each depth, the indenter moved in the x/y-directions by $\pm 10$ mm. During this process, the camera captured images continuously. \item The stage retracted to its initial position so the indenter could be changed. Then steps 3 to 5 were repeated. \item After all five indenters had been used, the tactile base was replaced, and data collection was repeated from step 2. \end{enumerate} For each combination of indenter and tactile base, 3600 data points were collected. Each data point was an image of the deformed dense color pattern with dot markers embedded inside (a sample is shown in Fig. \ref{fig:pat exp config}.a). A color filter separated the white dots in the image, and a blob detection measured the sub-pixel displacement vectors, $\textbf{u}_i = (dx_i, dy_i)$, from the current position to the initial position at $p_i = (x_i,y_i)$ ($i = 1,2,\dots,169$). As the tracking from blob detection reaches sub-pixel accuracy, the white dot displacements were regarded as the ground truth to compare with the optical flow at the corresponding positions. We then ran the dense optical flow algorithm and calculated the average error $\Bar{\delta}$ between the flow displacement vectors $\textbf{u}'_i = (dx'_i, dy'_i)$ and the ground-truth vectors $\textbf{u}_i$ at $p_i$. For $n = 169$, the error $\Bar{\delta}$ was given by: \begin{equation} \begin{aligned} \Bar{\delta}=\frac{1}{n}\sum_{i=1}^{n}\sqrt{\left(dx_i-dx'_i\right)^{2}+\left(dy_i-dy'_i\right)^{2}} \end{aligned} \label{eq: tracking error} \end{equation} The experimental results are shown in the upper of the two stacked column charts in Fig. \ref{fig:pat exp result}. The average tracking errors for each pattern under the five indenters are accumulated to evaluate the performance under different contact shapes. 
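The error metric in Eq. \ref{eq: tracking error} is simply the mean Euclidean distance between the two displacement fields; a minimal sketch:

```python
import numpy as np

def mean_tracking_error(u_true, u_flow):
    """Average Euclidean distance between ground-truth displacements
    (from blob detection) and optical-flow displacements, both given
    as (n, 2) arrays of (dx, dy) vectors."""
    u_true = np.asarray(u_true, dtype=float)
    u_flow = np.asarray(u_flow, dtype=float)
    return float(np.mean(np.linalg.norm(u_true - u_flow, axis=1)))
```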
The pattern with $d = 0.1$ and $r = 0.5$ gives the lowest error of 0.11 mm. We then conducted another indentation experiment to search near this optimum, with $d \in \{ 0.05, 0.075, 0.1 \}$ and $r \in \{ 0.4, 0.5, 0.6 \}$. As shown in the lower chart in Fig. \ref{fig:pat exp result}, when $d = 0.075$ and $r = 0.6$, the error reaches its lowest value of 0.08 mm. Therefore, this pattern is used to fabricate our sensor. We can see that the patch size $d$ dominates the error. When $d$ is small enough, increasing $r$ does not have a significant influence on the result. It is also noticed that as $d$ becomes extremely small ($d = 0.05$), the error increases (to around 0.5 mm) instead of dropping. The reason is that the color patch is so small that it is beyond the maximum resolution the printer can fabricate. This results in ambiguous gray areas in the pattern, causing mis-tracking in the optical flow algorithm. In the rest of the experimental range, however, the tracking error decreases as the patch size decreases and the randomness regulation factor increases, which agrees with our initial purpose of using a more random and denser color pattern to obtain more accurate displacement tracking. \begin{figure} \centering \includegraphics[width=0.478\textwidth]{figure/ind_exp.pdf} \vspace*{-0.4cm} \caption{Comparison of the average optical flow tracking error between patterns with different patch sizes $d$ and randomness regulation factors $r$.} \label{fig:pat exp result} \vspace{-0.2cm} \end{figure} \subsection{Shape Reconstruction}\label{sec: shape exp} In practice, we set $\sigma=3.0$, and a guided filter \cite{he2015fast} was used to reduce high-frequency noise and smooth the surface while maintaining shape features such as edges. We conducted shape reconstruction for the four objects in Fig. \ref{fig:3d rec}. The 3D shapes of the contact objects are shown in the results, including features such as faces, edges, curves, and corners. 
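The Gaussian-density step can be sketched as follows (our own illustration, not the released code): every flow-displaced grid point deposits an isotropic Gaussian with covariance $\sigma^2 I$ as in Eq. \ref{eq: covariance}, and the negative accumulated density serves as the relative depth. The \texttt{grid\_step} subsampling is an assumption for speed.

```python
import numpy as np

def gaussian_density_depth(flow, sigma=3.0, grid_step=4):
    """flow: (H, W, 2) array of (dx, dy) optical flow vectors.
    Each subsampled grid point, displaced by its flow vector, deposits
    a 2D Gaussian with covariance sigma^2 * I; an expanding flow field
    lowers the local density, so -density is taken as relative depth."""
    H, W, _ = flow.shape
    ys, xs = np.mgrid[0:H:grid_step, 0:W:grid_step]
    cx = (xs + flow[ys, xs, 0]).ravel()   # displaced x positions
    cy = (ys + flow[ys, xs, 1]).ravel()   # displaced y positions
    qy, qx = np.mgrid[0:H, 0:W].astype(float)
    density = np.zeros((H, W))
    inv = 1.0 / (2.0 * sigma ** 2)
    for px, py in zip(cx, cy):
        density += np.exp(-((qx - px) ** 2 + (qy - py) ** 2) * inv)
    return -density
```

For a radially expanding flow field, the density at the expansion center drops relative to the zero-flow case, so the returned relative depth is larger there, matching the intuition that surface expansion signals indentation in the normal direction.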
\begin{figure} \centering \includegraphics[width=0.48\textwidth]{figure/dep_est_cr.pdf} \vspace*{-4mm} \caption{3D reconstruction of four shapes (from left to right): sphere, cylinder, ring, and toy hat. The first row shows the objects. The second row shows the estimated Gaussian densities plotted as heat maps; a redder region indicates a greater depth. The third row shows isometric views of the deformation.} \label{fig:3d rec} \vspace{-0.6cm} \end{figure} \subsection{Contact Force Distribution}\label{sec: force cali} To calibrate the contact force model, we collected force and flow data through an indentation experiment similar to that in Section \ref{sec: pat exp}. An ATI Nano17 F/T sensor was installed on the sensor to measure high-accuracy surface normal and shear forces. As indenters, we used five 3D printed spheres with diameters of 10, 12, 15, 18, and 22 mm. Each indenter pressed the sensor at 9 positions and moved through 5 normal steps and 9 shear steps to apply different normal and shear forces. To avoid the influence of slip, only steady-state data were recorded. $A$ was solved using linear regression with $ 9 \times \left( 5 \times 9 + 1 \right) \times 5 = 2070$ data points, of which 1656 were used for training and 414 for testing. The measured shear force during the experiment ranged from -2.49 N to 2.94 N in the x-direction and from -2.82 N to 2.86 N in the y-direction. The normal force ranged from 0 N to 9.67 N. We adopted a polynomial order of 3, and the resulting adjusted coefficient of determination $R^2$ together with the root mean square error (RMSE) are shown in Table \ref{Tab:linearR}. 
\begin{table} \vspace*{2mm} \caption{Linear Regression Analysis} \vspace*{-3mm} \centering \begin{tabular}{l|cc} \textbf{Force} & Adjusted $R^2$ & RMSE (N) \\ \hline $F_{normal}$ & 0.99 & 0.30 \\ $F_{shearX}$ & 0.98 & 0.14 \\ $F_{shearY}$ & 0.98 & 0.17 \end{tabular} \label{Tab:linearR} \vspace*{-6mm} \end{table} The RMSE is 0.30 N for the normal force and 0.14 N and 0.17 N for the shear forces in the x/y-directions, respectively. Considering that these are errors of the surface total force, the per-point force estimation error is lower still. Finally, the calibrated model was applied to the sensor, and the force distribution results are shown in Fig. \ref{fig: force dis}. The algorithm achieved an online computation frequency of 40 Hz with an Intel Core i7-7700 CPU and an NVIDIA GTX 1060 GPU. \begin{figure} [ht] \centering \includegraphics[width=0.48\textwidth]{figure/force_cr.pdf} \vspace*{-7mm} \caption{Force distribution estimation result of a sphere indenter. For visualization, the dense vector fields of optical flow and shear force are sparsely displayed. A white contour representing the contact area is shown in the shear force distribution.} \label{fig: force dis} \end{figure} \section{Discussion and Conclusion}\label{sec: conclusion} In this work, we present the design of a new vision-based tactile sensor with an optimized dense random color pattern. The proposed sensor, named DelTact, adopts reinforced hardware that features greater compactness and robustness. It can be mounted onto various types of end-effectors through a redesigned connection part. The size of DelTact is reduced by one-third compared to the FingerVision sensor \cite{du2021high}, whilst the sensing area is kept sufficient for contact measurement. Random color patterns generated from different parameter sets were tested with an indentation experiment to minimize the tracking error. 
Regarding software, image preprocessing, shape reconstruction, and contact force estimation algorithms are presented, with experimental results showing that our sensor has multi-modality sensing abilities with high resolution and frequency. A comparison between DelTact and other vision-based tactile sensors is presented in Table \ref{Tab:sensor comp}. From the table, we can see that our sensor provides a large sensing area at a higher resolution with a compact size. \begin{table}[ht] \scriptsize \vspace*{3mm} \caption{Sensor Comparison} \vspace*{-2mm} \centering \begin{tabular}{l|c|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{1cm}|c|>{\centering\arraybackslash}m{1cm}} \hline Sensor & Resolution & Sensing Area (mm$^2$)& Pixel Size (mm) & Size (mm$^3$) & Calibration\\ \hline \cite{dong2017improved} & \multirow{6}{*}{640x480} & 252 & 0.028 & 40x80x40 & $\checkmark$ \\ \cite{taylor2021gelslim3} & & 675 & 0.047 & 37x80x20 & $\checkmark$ \\ \cite{lambeta2020digit} & & 305 & 0.031 & 20x27x18 & $\times$\\ \cite{yamaguchi2016combining} & & 750 &0.049& 40x47x30 & $\checkmark$ \\ \cite{sferrazza2019design} & & 900 &0.054& 50x50x43.8 & $\checkmark$ \\ \cite{du2021high} & & 756 & 0.049& 20x20x26 & $\times$ \\ \hline \cite{ward2018tactip} & 162 & 628&1.96 & 20x20x26 & $\times$ \\ \hline \textbf{Ours} & 798x586 & 675 &0.037 & 39x60x30 & $\checkmark$ \\ \hline \end{tabular} \label{Tab:sensor comp} \vspace*{-6mm} \end{table} We acknowledge the compromise between information quality and sensor performance: in some cases, coarse contact feedback can satisfy a system's perception demands with lower requirements in hardware and software. The proposed shape reconstruction method based on Gaussian density falls short of prior work on texture measurement based on photometric stereo, such as GelSight \cite{dong2017improved} and GelSlim \cite{taylor2021gelslim3}, which can recognize surface features at sub-millimeter scale. 
However, photometric stereo requires strict conditions on surface reflection and illumination properties. Learning-based force estimation manages to measure contact force within an error of 0.1 N \cite{sferrazza2019design}, but the confidence of the model prediction relies on a large amount of training data (over 10000 samples) from long collection procedures. Therefore, the motivation of our work is to devise an easily fabricated and calibrated sensor that is sufficient and cost-effective in tactile information extraction for a broader range of tasks. Possible future work includes testing the versatility of the sensor in robot perception and manipulation. Beyond the low-level features, using optical flow, we may try to extract higher-level features such as vibration and slip, which are critical for maintaining stability in grasping tasks. Besides, we aim to obtain a higher-accuracy 3D point cloud of the surface deformation to remove noise in shape reconstruction.
\section{\label{sec:level1}Introduction} Turbulence is connected to a flow mechanism which converts kinetic energy into heat, known as the turbulence cascade. Turbulent flows exhibit universal statistical properties; it can thus be expected that the turbulence cascade is a universal process as well. Understanding the cascade dynamics is a major challenge of out-of-equilibrium physics. According to the classical Richardson-Kolmogorov phenomenology \cite{kolmogorov1941dissipation}, the turbulence cascade is separated into collective modes: (i) large scales which carry the bulk of the kinetic energy, (ii) small scales which dissipate the energy, and (iii) intermediate self-similar scales which mediate between the two. On average, energy moves from large to small scales, with energy dissipation being only a passive consequence of the energy injection into the cascade by the large scales. Thus, a one-way interaction between large and small scales is implied. The above description yields important predictions, such as a scaling law for the kinetic energy dissipation rate and the celebrated -5/3 law, both validated by experiment in a wide variety of flows \cite{sreenivasan1984scaling,vassilicos2015dissipation,batchelor1953theory}. Recent experimental and numerical results have revealed a new universal dissipation scaling, different from the classical one, appearing in extensive regions of decaying homogeneous turbulence \cite{valente2012universal,isaza2014grid,goto2016unsteady}, boundary-free shear flows \cite{nedic2013axisymmetric,cafiero2020non}, as well as in forced periodic turbulence \cite{goto2015energy,goto2016local}. These results suggest the existence of a new type of cascade whose physics does not abide by the Richardson-Kolmogorov phenomenology. Here, the above observations are explained theoretically by revising the Richardson-Kolmogorov phenomenology to include a feedback mechanism enabling active interaction between large and small scales. 
The resulting framework yields the new dissipation scaling, as well as an equation for the evolution of the integral length scale of the flow. Contrary to previous theories, turbulence invariants are not assumed. The decay of the turbulent kinetic energy is found to be governed by a generalized logistic equation, reflecting the self-regulation of the cascade. \section{Self-regulation} \label{sec:feedback} The idea that a cascade feedback mechanism lies behind the new dissipation scaling has been anticipated by two existing non-Kolmogorov theories of turbulence, which have had some success in predicting the novel non-Kolmogorov dynamics, although both contain inconsistencies (see appendix \ref{app:1}). George's theory \cite{george1992decay} (similar to the theory of Ref. \cite{barenblatt1974theory}) leads to the new dissipation scaling. The cascade in that case is assumed fully self-similar, i.e. evolving as a ``coherent whole''. In contrast, the Richardson-Kolmogorov phenomenology presupposes independently evolving compartments (i.e. large/small scales), with a self-similar range of scales only in between. It might be thought that George's viewpoint implies a ``balanced'' cascade of quasi-steady evolution. However, the novel dissipation scaling has been observed in cascades where intense fluctuations disturb the establishment of a balance, while the non-negligible cascade time-lag does not permit immediate relaxation \cite{goto2015energy,goto2016local}. As a result, evolution ``as a whole'' suggests a regulatory mechanism of information exchange between large and small scales, which is thus implicit in George's theory. Goto and Vassilicos \cite{goto2016unsteady} observed that large scales are not self-similar, and excluded them from George's analysis. Given the above discussion, this treatment removes George's implicit assumption of active communication between large and small scales, which, as proposed here, is the main cause of the new dissipation physics. 
Indeed, in order to predict the new scaling, Goto and Vassilicos \cite{goto2016unsteady} had to explicitly assume an \textit{ad-hoc} link between large and small scales (i.e. that their dissipation rates are proportional). We reiterate that such an explicit link was not necessary in George's theory, as full self-similarity already implied it, but became necessary as soon as full self-similarity was broken. In appendix \ref{app:1} it is argued that, similar to the large scales, the small scales must also be removed from the self-similar analysis. We therefore return to our starting point, the Richardson-Kolmogorov picture of large and small scales, with a self-similar range only in between. However, our previous discussion suggests an important difference. The physics connected to the new dissipation scaling imply a feedback mechanism linking large and small scales, which must therefore be included in the analysis. \section{Phenomenology and assumptions} \label{sec:phenom} \begin{figure}[b] \includegraphics[width=.78\columnwidth]{Sketch.pdf} \caption{\label{fig:Sketch} Proposed cascade picture. An intermediate range of self-similar scales is bounded by the non-dimensional wave numbers $\kappa_a$ and $\kappa_b$ from the large and small scales, respectively. A direct energy cascade ``feeds'' the small scales, while an inverse helicity cascade regulates transport.} \end{figure} The ensuing analysis concerns homogeneous turbulence in cases where the new dissipation scaling has been observed. In decaying turbulence (grid and periodic box turbulence) this concerns the interval/region soon after turbulence starts to decay (i.e. close to the grid \cite{isaza2014grid}, or soon after the forcing stops \cite{goto2016unsteady}). At later times/distances the system transitions to the classical Kolmogorov dissipation scaling (note however that there are indications that even then Kolmogorov's assumptions are not fully valid \cite{goto2016unsteady}). 
In forced periodic box turbulence the flow quantities undergo intense fluctuations \cite{goto2015energy,goto2016local} (even though the forcing is constant), during which the system always obeys the new dissipation scaling (see appendix \ref{app:2}). We note that the classical (Kolmogorov's) scaling is $\epsilon \sim K^{3/2}/L$, while the new dissipation scaling has been found from experiment to be $\epsilon \sim \nu Re_{L0} K/L^2$ \cite{vassilicos2015dissipation}, where $K$ is the turbulent kinetic energy, $L$ the integral length scale, $Re_{L0}$ is the integral scale Reynolds number at the onset of decay and $\nu$ the kinematic viscosity. In any case, homogeneous turbulence can be described by the scale-by-scale energy budget \begin{equation} \frac{\partial K^>(k,t)}{\partial t} = \Pi(k,t) - \epsilon^>(k,t) \,. \label{eq:budget} \end{equation} \noindent Here, $E(k,t)$ is the energy spectrum, while $K^>(k,t) = \int_k ^ \infty E(k,t) dk$ and $\epsilon^>(k,t) = 2 \nu \int_k ^ \infty k^2 E(k,t) dk$ are the turbulent kinetic energy and dissipation rate, respectively, for wavenumbers larger than $k$. $\Pi(k,t)$ is the interscale flux of turbulent kinetic energy from wavenumbers smaller to wavenumbers larger than $k$. We omit ``per unit mass'' throughout the text for brevity. Note that equation \ref{eq:budget} lacks a kinetic energy production term, i.e. production is assumed to act only at very small wavenumbers (large scales) not included in equation \ref{eq:budget}, or not to be present at all, as in the case of purely decaying turbulence. We now perform a series of assumptions, which are validated using high-Reynolds-number periodic box Direct Numerical Simulations (DNS) data of forced and decaying turbulence (see \cite{goto2016unsteady} and appendix \ref{app:2} for more information on the numerical method and test cases). 
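As a purely illustrative sketch (not the paper's DNS post-processing), the budget terms $K^>(k)$ and $\epsilon^>(k)$ can be evaluated numerically for a model -5/3 spectrum with a viscous cutoff; the spectrum shape, viscosity and cutoff wavenumber below are invented for the example.

```python
import numpy as np

# Hypothetical sketch: evaluate the cumulative budget terms K^>(k) and
# eps^>(k) for a model -5/3 spectrum. All numbers are invented.
nu = 1e-4                                        # kinematic viscosity
k = np.logspace(0, 4, 4000)                      # wavenumber grid
E = k ** (-5.0 / 3.0) * np.exp(-(k / 2e3) ** 2)  # model energy spectrum

def K_gt(k0):
    """Kinetic energy held by wavenumbers larger than k0."""
    m = k >= k0
    return np.trapz(E[m], k[m])

def eps_gt(k0):
    """Dissipation occurring at wavenumbers larger than k0."""
    m = k >= k0
    return 2.0 * nu * np.trapz(k[m] ** 2 * E[m], k[m])

# Energy is carried by the small wavenumbers, dissipation by the large:
assert K_gt(100.0) < 0.1 * K_gt(1.0)
assert eps_gt(100.0) > 0.5 * eps_gt(1.0)
```

The two assertions reflect the separation of roles invoked in the text: most of $K$ sits below $k=100$, while most of $\epsilon$ sits above it.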
\paragraph*{Assumption 1:} Similar to the Richardson-Kolmogorov phenomenology, the cascade is separated into large scales, small scales and intermediate scales (see figure \ref{fig:Sketch}). The latter are assumed self-similar during decay, and bounded by the non-dimensional wavenumbers $\kappa_a$ and $\kappa_b$, i.e. $$ E(k,t) = A(t)f(kL,^*), \hspace{0.4cm} \text{for $\kappa_a<kL<\kappa_b$} \,, $$ \noindent where $\kappa = kL$ and $L(t) = \frac{3\pi}{4}\int_0 ^\infty k^{-1}E(k,t)dk/K(t)$. Following George \cite{george1992decay}, the argument $^*$ is included to indicate a dependency on initial conditions. Given the above assumption, we expect that the dissipation in the self-similar range $\epsilon^{ab}(t)$ will scale as the total dissipation of the cascade $\epsilon(t)$. This is because the majority of $\epsilon^{ab}(t)$ is expected to occur at the largest wavenumbers of the self-similar range, i.e. close to $\kappa_b$. The eddy turnover time at $\kappa \approx \kappa_b$ will thus regulate both $\epsilon^{ab}$ and $\Pi_b$, the latter being the interscale energy flux at $\kappa_b$ (see figure \ref{fig:Sketch}). We may thus expect $\epsilon^{ab} \sim \Pi_b$ (i.e. that their ratio is time-independent). Neglecting the dissipation of the large scales (i.e. for $\kappa<\kappa_a$) we have $\Pi_b \approx \epsilon - \epsilon^{ab}$ (Kolmogorov's small scale stationarity hypothesis \cite{kolmogorov1941degeneration}). Combination of the above yields $\epsilon^{ab} = \Phi \epsilon$, where $\Phi$ is a constant of proportionality. This result is validated in figure \ref{fig:A1}a, where the appropriately normalized dissipation of periodic-box decaying turbulence is plotted as a function of the number of eddy turnover times $\hat{t} = \int _{0} ^t \frac{u}{L}dt$, where $\frac{3}{2}u^2 = K$. Two simulation sizes $N^3$ are plotted, i.e. $N=2048$ and $N = 1024$ (the larger size corresponds to a larger Reynolds number). 
The cutoff non-dimensional wavenumber $\kappa_b$ naturally increases with the initial Reynolds number, and is taken to be equal to 41 and 22, for the high and low Re cases, respectively (i.e. approximately at the wavenumber where the -5/3 spectral scaling starts to break down, see figure \ref{fig:A1}b). For both cases the normalized dissipation is relatively constant while the new dissipation scaling holds, giving some support to assumption 1. However, for larger times $\epsilon^{ab} \sim \epsilon$ ceases to be valid, and this coincides with the shift of the system to the classical (Kolmogorov) dissipation scaling. The reason for this is that assumption 1 treats $\kappa_a$ and $\kappa_b$ as time-independent. When the new dissipation scaling is valid, this is indeed true. In that case, the beginning of the self-similar range (and thus $\kappa_a$) occurs shortly after the spectral peak (see figure \ref{fig:A1}b). Our DNS results show that while the latter diminishes with time, it always stays centred around the same normalized wavenumber $kL$. At the same time we expect $\kappa_b$ to be roughly proportional to $L/\lambda$, where $\lambda$ is the Taylor microscale. In section \ref{sec:diss} it is indeed shown that $L/\lambda$ stays constant with time when the new dissipation scaling is valid. The transition of the system to the classical scaling coincides with the disappearance of the spectral peak, and the start of a decreasing trend of $L/\lambda$ with time: $\kappa_a$ and $\kappa_b$ are thus no longer time-independent and assumption 1 is invalid. Given the above analysis, we may obtain a scaling law for the energy spectrum $E(k,t)$. Using assumption 1, the dissipation of the self-similar part of the cascade is given as $$ \epsilon^{ab} = 2 \nu L^3 A \int_{\kappa_a} ^{\kappa_b} \kappa^2 f(\kappa,^*)d\kappa \,, $$ \noindent which yields an expression for the time-evolution parameter of the spectrum $A(t)$. 
However, we have just shown that when the new dissipation scaling is valid we have $\epsilon^{ab} = \Phi \epsilon$, and thus we obtain \begin{equation} E(k,t) = \frac{\Phi \epsilon L^3}{2 \nu I_2}f(\kappa,^*), \hspace{0.4cm} \text{for $\kappa_a<kL<\kappa_b$} \,, \label{eq:A1sb} \end{equation} \noindent where $I_2=\int_{\kappa_a} ^{\kappa_b} \kappa^2 f(\kappa,^*)d\kappa$. This scaling was first introduced in Ref. \cite{george1992decay}, using qualitatively similar arguments. Goto and Vassilicos \cite{goto2016unsteady} provided evidence for equation \ref{eq:A1sb}, for high enough wavenumbers, and when the new dissipation scaling is valid. In figure \ref{fig:A1} we reproduce their data (decaying periodic box turbulence at $N=2048$) for completeness. The DNS data offer acceptable support for this scaling. Note that these spectra are the hardest to collapse, as in this time-interval the Reynolds number varies the most during decay. \begin{figure*} \centerline{ \begin{tabular}{ll} $\qquad$ (a) & $\qquad$ (b) \\ \includegraphics[width=.96\columnwidth]{A1a.pdf} & \includegraphics[width=.96\columnwidth]{A1b.pdf} \end{tabular} } \caption{(a) Normalized dissipation against number of turnover times, for periodic box decaying turbulence, with simulation sizes $N=2048$ (red line) and $N=1024$ (blue line). The grey stripe marks the transition region from the new dissipation scaling to the classical one. (b) Normalized energy spectra for many instances while the new dissipation scaling is valid. The novel scaling ceases to be valid approximately at the same time when the spectral peak at $kL\approx2$ disappears.} \label{fig:A1} \end{figure*} \paragraph*{Assumption 2:} Much as in the Richardson-Kolmogorov phenomenology, it is assumed that a wavenumber $\kappa_a$ exists in the upper part of the self-similar range, such that $$ \Pi_a = C_x u^3/L \,, $$ \noindent where $\Pi_a$ is evaluated at $\kappa_a$, and $C_x$ is a coefficient of proportionality. 
While this expression is generally accepted for Kolmogorov turbulence \cite{vassilicos2015dissipation,pope2001turbulent}, it is not straightforward that it holds when the new dissipation scaling is valid. For instance, Goto and Vassilicos \cite{goto2016unsteady} have shown that in decaying turbulence the above does not hold for a wide wavenumber range in the self-similar part of the cascade. However, figure \ref{fig:A2}a shows that the above relationship holds for decaying periodic-box turbulence if $\kappa_a$ is taken shortly after the spectral peak. Specifically, here $\Pi_a$ is calculated for $\kappa_a\approx 3.3$, with the spectral peak being centred around $kL = 2$. A similar result can also be obtained for forced turbulence. There, Goto and Vassilicos \cite{goto2016local} have shown that assumption 2 is always valid when calculated at an appropriate wavenumber. The coefficient of proportionality in forced turbulence was found to be very close to the one calculated here ($C_x \approx 0.38$). \begin{figure*} \centerline{ \begin{tabular}{ll} $\qquad$ (a) & $\qquad$ (b) \\ \includegraphics[width=.96\columnwidth]{A2a.pdf} & \includegraphics[width=.96\columnwidth]{A3a.pdf} \end{tabular} } \caption{ (a) Normalized interscale energy flux of the large scales $\Pi_a$ and (b) normalized parameter $G(t)$ for decaying periodic turbulence of domain size $N=2048$ (red) and $N=1024$ (blue) (the forcing stops at $\hat{t}=0$). The dashed line in (a) corresponds to an ordinate value of 0.38.} \label{fig:A2} \end{figure*} \paragraph*{Assumption 3:} When the flow exhibits the new dissipation scaling, the large-scale interscale flux, $\Pi_a$, and the dissipation rate, $\epsilon$, are connected via the expression $$ \Pi_a \sim \epsilon Re_L \,. $$ \noindent This is the essential point of departure from the Kolmogorov phenomenology, which simply assumes $\Pi_a \sim \epsilon$. 
Assumption 3 is admittedly \textit{ad-hoc}; it is necessary for the analysis to yield the new dissipation scaling (see section \ref{sec:diss}). Conversely, assumption 2 transforms the new dissipation scaling into a simpler statement (assumption 3) which is much easier to interpret physically. In figure \ref{fig:A2}b we validate the above assumption by plotting the normalized flux $G(t) = \Pi_a/(\epsilon Re_L)$ for decaying periodic box turbulence (as above, $\kappa_a=3$ and $\kappa_a=3.5$ for the two domain sizes). The normalized flux drops slightly and then remains relatively constant, as long as the system is characterized by the new dissipation scaling, providing some backing to assumption 3 (we note that for slightly larger $\kappa_a$ the constancy of $G(t)$ improves). We now argue that assumption 3 is the expression of a negative feedback in the cascade. This is more evident in forced turbulence conditions, where the turbulence parameters exhibit quasi-periodic oscillations, even if the forcing remains invariant in time (see \cite{goto2016local} and appendix \ref{app:2}). This behaviour is reminiscent of predator-prey systems \cite{brauer2012mathematical}, where a negative feedback works to establish ``balance'' in the system and oscillations are observed. Assumption 3 expresses a negative feedback according to the following causal chain. In forced turbulence, if the interscale flux $\Pi_a$ were to increase, then this would cause an increase in $\epsilon$ (after a time-lag). Turbulence would thus start to decay, causing a drop in $Re_L$ (in appendix \ref{app:2} we show that $\epsilon$ and $Re_L$ are indeed somewhat anticorrelated in forced turbulence). Assumption 3 would then halt the increase of $\Pi_a$, moving the system towards its previous state (negative feedback). The opposite would occur if $\Pi_a$ were to decrease. The above causal chain requires a physical mechanism which would permit an information exchange between large and small scales. 
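The predator-prey analogy can be made concrete with a minimal Lotka-Volterra-style toy model. This is purely illustrative: the variables $P$ (standing in for large-scale energy) and $D$ (small-scale dissipation), the unit coefficients and the initial values are all invented and are not derived from the cascade equations above.

```python
# Toy negative-feedback loop: the "prey" P feeds the "predator" D,
# which in turn depletes P. All constants are invented for illustration.
dt, n = 1e-3, 60000
P, D = 1.2, 0.8
P_hist = []
for _ in range(n):
    dP = P * (1.0 - D) * dt   # prey grows, depleted by the predator
    dD = D * (P - 1.0) * dt   # predator feeds on the prey, else decays
    P, D = P + dP, D + dD
    P_hist.append(P)

# The feedback produces oscillations around the fixed point P = 1,
# qualitatively like the quasi-periodic oscillations discussed above.
crossings = sum(1 for a, b in zip(P_hist, P_hist[1:])
                if (a - 1.0) * (b - 1.0) < 0)
assert crossings >= 4
```

The point of the sketch is only that a lagged negative feedback generically yields oscillations, not that the cascade obeys these particular equations.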
We now postulate such a mechanism based on helicity, the latter being the inner product of velocity and vorticity, $H = \boldsymbol{u}\cdot\boldsymbol{\omega}$. High values of $H$ deplete the nonlinearity of the Navier-Stokes equations, suppressing the interscale transfer of the cascade \cite{moffatt2014helicity}. Small-scale helicity thus offers a pathway for active communication between large and small scales. We first discuss the results of two recent works which, when combined, indicate this role of small-scale helicity in the cascade. First, the DNS of \cite{portela2018turbulence} imply that, when the new dissipation scaling holds, small-scale structures of high helicity exist in the flow, whose appearance is correlated with that of large coherent vortices in the flow. It is interesting that the current DNS results actually show that the new dissipation scaling holds for as long as the vortex peak of figure \ref{fig:A1}b (the footprint of large coherent vortices) appears in the spectrum. As soon as the peak disappears, the system transitions to the classical dissipation scaling. Second, the analysis of \cite{bos2017dissipation} (see also \cite{yoshizawa1994nonequilibrium}) links the new dissipation scaling to a -7/3 slope in the energy spectrum, coexisting with the -5/3 slope, and therefore masked by it. The earlier work of \cite{brissaud1973helicity} actually suggests that a -7/3 slope is the footprint of an inverse helicity cascade, i.e. helicity transport from small to large scales. Combining the above points, we may postulate the following feedback mechanism, also depicted in figure \ref{fig:Sketch}. An instability mechanism causes the large scales to create small structures of high helicity. Helicity then cascades up towards the large scales, finally intercepting the interscale flux $\Pi_a$. 
Assumption 3 (and thus the new dissipation scaling) could be thought of as the expression of these dynamics, in the sense that $\Pi_a$ is larger when dissipation is high (so that the small helical structures are destroyed) and when $Re_L$ is large (so that the scale separation, and thus the inverse cascade lag, is large). Validation of this physical mechanism is left as a task for future research. \section{Results} \subsection{Dissipation rate} \label{sec:diss} First, we consider forced turbulence. Assumption 3 is $$ \epsilon = C \frac{\Pi_a}{Re_L}\,. $$ \noindent Considering a time-averaged cascade where large-scale dissipation is negligible, we have $\overline{ \epsilon } = \overline{\Pi}_a $, where the bar denotes the time-averaging operation. We expect that the cascade time-lag breaks any correlation between $\Pi_a$ and $Re_L$ in forced turbulence (see appendix \ref{app:2} for validation of this assumption). Thus, time averaging of the above expression yields $C = 1/\overline{Re^{-1}_L}$. This is approximately $C \approx \overline{Re}_L$ (the forced turbulence data of \cite{goto2016local} confirm this simplification). Consequently, combination of assumptions 2 and 3 yields \begin{equation} \epsilon \sim \overline{uL} \frac{u^2}{L^2} \,, \label{eq:diss1} \end{equation} \noindent which is the new dissipation scaling. For decaying turbulence, we obtain a similar result if, instead of time averaging, we perform ensemble averaging at time $t=0$, where the turbulence is still forced. Thus, we have $\langle \epsilon_0 \rangle = \langle \Pi_{a0} \rangle $, where the subscript 0 signifies the time $t=0$, and we obtain \begin{equation} \epsilon \sim u_0 L_0 \frac{u^2}{L^2} \,. \label{eq:diss2} \end{equation} \noindent In the turbulence literature, dissipation scalings are commonly expressed using the dissipation coefficient $C_\epsilon = \epsilon L/u^3$, which is constant in Kolmogorov turbulence. 
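The simplification $C = 1/\overline{Re^{-1}_L} \approx \overline{Re}_L$ invoked above holds whenever the fluctuations of $Re_L$ are small relative to its mean. A quick numerical check (with an invented mean and fluctuation level, chosen only for illustration):

```python
import numpy as np

# Compare the harmonic-type mean 1/mean(1/Re_L) with the arithmetic
# mean(Re_L) for mildly fluctuating Re_L. The statistics are invented.
rng = np.random.default_rng(0)
Re_L = 1000.0 + 100.0 * rng.standard_normal(10000)  # ~10% fluctuations
harmonic = 1.0 / np.mean(1.0 / Re_L)
arithmetic = np.mean(Re_L)

# For 10% fluctuations the two agree to about 1%:
assert abs(harmonic - arithmetic) / arithmetic < 0.05
```

To second order, $\overline{Re_L^{-1}} \approx \overline{Re}_L^{-1}(1+\sigma^2/\overline{Re}_L^2)$, so the relative error scales with the squared fluctuation intensity.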
Using the definition of the Taylor length scale $\lambda^2 \equiv 15 \nu u^2/\epsilon$, we obtain \begin{equation} L/\lambda \sim C_\epsilon Re_\lambda \,, \label{eq:diss3} \end{equation} \noindent which shows that $L/\lambda$ increases linearly with $Re_\lambda$ in Kolmogorov turbulence. On the other hand, when the new dissipation scaling holds (i.e. equations \ref{eq:diss1} and \ref{eq:diss2}), we have \begin{equation} C_\epsilon \sim \sqrt{Re_{L0} } Re_\lambda ^{-1}\,, \label{eq:diss4} \end{equation} \noindent where $Re_{L0}$ may denote either the time-averaged Reynolds number, for forced turbulence, or the initial condition Reynolds number, for decaying turbulence. Substitution into expression \ref{eq:diss3} shows that $L/\lambda$ is constant during decay when the new dissipation scaling holds. In figure \ref{fig:V1}, we validate the above predictions using data from the literature for forced periodic, decaying periodic, and grid turbulence (see appendix \ref{app:2} and \cite{goto2016unsteady} for more information on the data sets used). For forced turbulence (figure \ref{fig:V1}a) the different simulation runs are always characterized by the new dissipation scaling (equation \ref{eq:diss4}). In decaying turbulence (figure \ref{fig:V1}b) all five simulations begin with the new dissipation scaling, and later transition to the Kolmogorov scaling ($C_\epsilon \approx const$). As mentioned in the previous section, this state change coincides with the disappearance of the coherent vortices from the flow. In grid turbulence (figure \ref{fig:V1}c), for all tested grids the flow begins with the new dissipation scaling ($L/\lambda = const$) and at larger distances from the grid it transitions to the Kolmogorov scaling ($L/\lambda \sim Re_\lambda$). 
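The proportionality in equation \ref{eq:diss3} is in fact an identity up to the factor 15. A one-line check, using only the definitions of $C_\epsilon$, $\lambda$ and the standard Taylor-scale Reynolds number $Re_\lambda = u\lambda/\nu$ (the latter is not spelled out in the text; we assume the usual definition):

```latex
C_\epsilon \, Re_\lambda
  = \frac{\epsilon L}{u^3}\cdot\frac{u\lambda}{\nu}
  = \frac{L}{\lambda}\cdot\frac{\epsilon \lambda^2}{\nu u^2}
  = 15\,\frac{L}{\lambda},
\qquad \text{since } \lambda^2 \equiv \frac{15\,\nu u^2}{\epsilon}\,.
```

Hence $L/\lambda$ is constant exactly when $C_\epsilon Re_\lambda$ is, which is what equation \ref{eq:diss4} delivers.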
\begin{figure*} \centerline{ \begin{tabular}{lll} $\qquad$ (a) & $\qquad$ (b) & $\qquad$ (c) \\ \includegraphics[width=.64\columnwidth]{V1.pdf} & \includegraphics[width=.64\columnwidth]{V2.pdf}& \includegraphics[width=.64\columnwidth]{V3.pdf} \end{tabular} } \caption{Time evolution of the normalized $C_\epsilon$ for (a) forced periodic and (b) decaying periodic turbulence simulations of various initial Reynolds numbers (from \cite{goto2016local} and \cite{goto2016unsteady}). (c) Spatial evolution of $L/\lambda$ for various grids in grid-generated turbulence experiments (from \cite{valente2012universal}).} \label{fig:V1} \end{figure*} \subsection{Integral length scale} \label{sec:integral} The two dissipation scalings (classical, new) discussed in the previous sections provide a starting point for the prediction of the kinetic energy evolution of homogeneous decaying turbulence, in the sense that $dK/dt = - \epsilon$. However, this equation cannot be integrated, given that $\epsilon$ is a function of $L$, which is itself an unknown function of time. This closure problem has been conventionally resolved via the \textit{ad hoc} assumption of ``turbulence invariants'' \cite{sinhuber2015decay,saffman1967large}. This assumption is often arbitrary, given that an infinite number of invariants exist in turbulent flows \cite{vassilicos2011infinity}. In contrast to previous theories, the current framework yields a prediction of $L$ implicitly and does not rely on the assumption of invariants. Neglecting the kinetic energy of the small scales ($kL>\kappa_b$), we obtain an estimate of the turbulence kinetic energy for scales larger than $k$, by integrating equation \ref{eq:A1sb} from $k$ to $\kappa_b/L$, i.e. \begin{equation} K^>(k,t) \approx \frac{\Phi \epsilon L^2}{2 \nu I_2} I_0(kL) \,, \label{eq:L1} \end{equation} \noindent where $I_0(kL) = \int_{\kappa} ^{\kappa_b} f(\kappa,^*) d\kappa$. 
Substituting $\partial K^>/\partial t$ evaluated at $\kappa_a$ (recall that assumption 1 states that both $\kappa_a$ and $\kappa_b$ are time-independent), along with the new dissipation scaling ($\epsilon = u_0 L_0 C_x \frac{u^2}{L^2}$) and assumption 3 ($\Pi_a = \frac{uL}{u_0 L_0} \epsilon$), into the scale-by-scale energy budget (equation \ref{eq:budget}) yields \begin{equation} \frac{1}{\nu} \frac{dL^2}{dt} = A - B Re_\lambda \,, \label{eq:L2} \end{equation} \noindent where $A = 4 \frac{ I_2 \frac{1}{\Phi} - \frac{1}{3}C_x Re_{L0} I_0 }{\kappa_a f(\kappa_a,^*)}$ and $B = \frac{4 I_2 }{\Phi \kappa_a f(\kappa_a,^*)}\sqrt{\frac{C_x}{15Re_{L0}}}$ are positive constants dependent on initial conditions. In the above, $I_0 = \int_{\kappa_a} ^{\kappa_b} f(\kappa,^*) d\kappa$. The above analysis can also yield a prediction for the point of transition from the new to the classical dissipation scaling. Equation \ref{eq:L1} relies on the assumption $\epsilon^{ab} \sim \epsilon $ (see section \ref{sec:phenom}), which does not hold in the classical dissipation scaling (see figure \ref{fig:A1}). However, we may consider $\epsilon^{ab} \sim \epsilon $ to be approximately valid for a small time interval after the state change. We may thus repeat the analysis of this section, but using the ``classical'' expressions for the dissipation and interscale transfer, i.e. $\epsilon \sim u^3/L$ and $\Pi_a \sim \epsilon$. The result (see appendix \ref{app:3}) is \begin{equation} \frac{1}{\nu} \frac{dL^2}{dt} = -A' + B' Re^2 _\lambda \,, \label{eq:L3} \end{equation} \noindent with $B'$ a positive constant for sufficiently high Reynolds numbers. We thus conclude that the transition from the new to the classical dissipation scaling occurs when the slope of $\frac{dL^2}{dt}$ changes sign. This is in agreement with the observation of \cite{goto2016unsteady} that the state change coincides with the location where $\frac{dL^2}{dt}$ assumes its maximum value. 
We emphasize that equation \ref{eq:L3} is not valid, in general, during the classical decay, but only for a very small interval after the state change of the system. The above predictions are validated in figure \ref{fig:L1}a using the two decaying periodic box data sets. In accordance with equation \ref{eq:L2}, $\frac{dL^2}{dt}$ is a linearly decreasing function of $Re_\lambda$, for as long as the new dissipation scaling holds (see figure \ref{fig:L1}b). When the system transitions to the classical scaling (i.e. $C_\epsilon = const$), $\frac{dL^2}{dt}$ becomes an increasing function of $Re_\lambda$, in agreement with equation \ref{eq:L3}. The maximum value of $\frac{dL^2}{dt}$ marks the state change. \begin{figure*} \centerline{ \begin{tabular}{ll} $\qquad$ (a) & $\qquad$ (b) \\ \includegraphics[width=.96\columnwidth]{L1.pdf} & \includegraphics[width=.96\columnwidth]{L2.pdf} \end{tabular} } \caption{(a) $\frac{1}{\nu}\frac{dL^2}{dt}$ and (b) $C_\epsilon$ for domain sizes $N=2048$ (red) and $N=1024$ (blue) (the forcing stops at $t=t_0$). The thick part of the lines marks the range where $\frac{dL^2}{dt}$ grows.} \label{fig:L1} \end{figure*} \subsection{Turbulent kinetic energy} \label{sec:Equation} Having validated the scalings \ref{eq:diss2} and \ref{eq:L2}, we may combine them to obtain an expression for the evolution of the turbulent kinetic energy during decay. Elimination of time yields (equations \ref{eq:diss3} and \ref{eq:diss4} are also used) \begin{equation} \frac{du^2}{dL^2} = \frac{-u^2}{C_1L^2 - C_2 u L^3}\,, \label{eq:K1} \end{equation} \noindent where $C_1 = \frac{6I_2/(\Phi Re_{L0}C_x) - 2I_0}{\kappa_a f(\kappa_a,^*)}$ and $C_2 = \frac{6I_2}{\Phi \kappa_a f(\kappa_a,^*) Re_{L0} C_x u_0 L_0}$. In the above we have considered $\frac{3}{2} \frac{du^2}{dt} = - \epsilon$, i.e. decaying turbulence without turbulence production. 
It can be checked by substitution that a solution to the above equation is \begin{equation} \frac{C_1-1}{uL} = C_2 - \left(\frac{u}{C}\right)^{C_1-1} \,, \label{eq:K2} \end{equation} \noindent with $C$ a positive constant of integration. Evidently, the current framework correctly predicts a continuously decreasing Reynolds number during decay, in contrast to previous theories for the new dissipation scaling (see appendix \ref{app:1}). Combination of equations \ref{eq:K1} and \ref{eq:K2} yields \begin{equation} \frac{du^2}{dt} \sim - u^4 \left[1- \left(\frac{u}{c}\right)^{C_1-1} \right]^2 \,, \label{eq:K3} \end{equation} \noindent with $c$ a positive constant. Expression \ref{eq:K3} is a generalized logistic equation \cite{tsoularis2002analysis} (if $C_1=1$ it reduces to a generalized Gompertz equation), and it expresses the regulation introduced by assumption 3 via the term $\left[1- \left(\frac{u}{c}\right)^{C_1-1} \right]^2$. For this term (and thus for regulation) to be negligible, the second term on the right hand side of equation \ref{eq:K2} would also need to be negligible. Thus, $Re_L$ would have to remain approximately constant during decay. Then, assumption 3 would reduce to $\Pi_a \sim \epsilon$ (i.e. Kolmogorov turbulence), and thus the regulation that it otherwise expresses (see discussion in section \ref{sec:phenom}) would be lost. We might inquire how self-regulation affects the distribution of kinetic energy across the scales. Combination of equations \ref{eq:diss2} and \ref{eq:L1} yields \begin{equation} \frac{K^{ab}}{K} = \frac{\Phi Re_{L0} C_x I_0}{3 I_2} = const \,, \label{eq:K4} \end{equation} \noindent where $K^{ab}$ is the energy of the self-similar scales. Thus, we obtain the result that, despite a qualitative change in the spectrum (i.e. the disappearance of the spectral peak, see figure \ref{fig:A1}b), self-regulation in the cascade guarantees that the ratio of the kinetic energy of the self-similar scales to the total remains constant. 
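Returning to equation \ref{eq:K3}, its qualitative behaviour can be checked by direct numerical integration. The constants $\alpha$, $c$ and $C_1$ below are invented illustration values, not fitted to any data set:

```python
import numpy as np

# Forward-Euler integration of the generalized logistic decay,
# d(u^2)/dt = -alpha * u^4 * [1 - (u/c)^(C1-1)]^2.
# alpha, c, C1 and the initial energy are invented for illustration.
alpha, c, C1 = 1.0, 2.0, 3.0
dt, n = 1e-3, 200000
u2 = 1.0                       # initial kinetic energy, below c^2
u2_hist = [u2]
for _ in range(n):
    u = np.sqrt(u2)
    u2 -= alpha * u**4 * (1.0 - (u / c)**(C1 - 1.0))**2 * dt
    u2_hist.append(u2)

# The kinetic energy decays monotonically and stays positive:
assert all(later <= earlier for earlier, later in zip(u2_hist, u2_hist[1:]))
assert 0.0 < u2_hist[-1] < u2_hist[0]
```

The bracketed regulation term slows the decay relative to the bare $-u^4$ law, consistent with its interpretation as a self-regulating feedback.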
In figure \ref{fig:K}a we show that the ratio $K^{ab}/K$ indeed stays relatively constant when the separation wavenumber is taken immediately after the spectral peak (see figure \ref{fig:A1}b), i.e. at $\kappa_a = 2.3$, for both of our decaying-turbulence data sets. Note that while the new dissipation scaling holds, the cascade undergoes the most change during decay, losing roughly 80\% of its initial kinetic energy (see figure \ref{fig:K}b). \begin{figure*} \centerline{ \begin{tabular}{ll} $\qquad$ (a) & $\qquad$ (b) \\ \includegraphics[width=.96\columnwidth]{Ka.pdf} & \includegraphics[width=.96\columnwidth]{Kb.pdf} \end{tabular} } \caption{(a) Ratio of the kinetic energy of the self-similar range over the total cascade kinetic energy and (b) total cascade kinetic energy, versus number of turnover times for domain sizes $N=2048$ (red) and $N=1024$ (blue). } \label{fig:K} \end{figure*} \section{Concluding discussion} By superimposing a feedback mechanism on the classical Richardson-Kolmogorov phenomenology, we derived expressions for various flow quantities (dissipation rate, integral length scale, kinetic energy) of the non-Kolmogorov universal cascade that has been recently discovered \cite{seoud2007dissipation}. We reiterate that the new type of cascade governs forced turbulence \cite{goto2015energy}, the region of decaying turbulence where the bulk of kinetic energy is lost \cite{isaza2014grid}, and almost the whole extent of turbulent wakes \cite{redford2012universality,dairay2015non}. Therefore, it might be considered more relevant for engineering applications (and thus turbulence modelling) than classical Kolmogorov turbulence. In the special case of forced turbulence, the current cascade picture resembles low-order predator-prey dynamics; the prey (large scales) feeds the predator (small scales) in a self-regulating manner. 
These dynamics would explain the quasi-periodic oscillations of the turbulence quantities observed (see \cite{goto2015energy,goto2016local} and appendix \ref{app:2}), which indeed resemble, qualitatively, the response of predator-prey systems \cite{brauer2012mathematical}. The present analysis describes how small scales remove energy from the system during this regime. The question of how large scales replenish energy when forcing is present remains open and will be the topic of future research. Another topic which remains open is the exact instability mechanism which generates the feedback, here attributed to an inverse helicity cascade.
\section{Discussion and Future Work}\label{sec:conFuture} In this ongoing work, we proposed a method based on black-box optimization and RL to deal with the problem of adapting speech-enhancement algorithms to varying input signal quality. Our work is related to hyperparameter optimization in deep learning and machine learning\footnote{Here, hyperparameters refer to training parameters such as the learning rate or decay function, or to the network structure, such as the number of layers, number of hidden units, type of layers \textit{etc.}}, which has been extensively studied in the literature. Methods like random search~\citep{Bergstra2012}, Bayesian optimization with probabilistic surrogates (\textit{e.g.}, Gaussian processes~\citep{Snoek2012, HenrandezLobato2014}) or deterministic surrogates (\textit{e.g.}, radial basis functions~\citep{ilievski2017efficient}) have been used to find the best setup for the model hyperparameters. However, once these methods find a set of parameters for a given model offline, the set typically remains fixed throughout the inference process. In contrast, our RL approach adaptively changes the parameters of the underlying (data-driven or analytical) algorithm at inference time, achieving the best performance under all input signal conditions. Furthermore, in this particular paper, we demonstrated how to apply our dynamic parameter-adaptation technique to the problem of speech enhancement~\citep{Tashev2009}. To the best of our knowledge, black-box optimization using reinforcement learning for a real-time application such as speech enhancement has not been conducted before; the previous work~\citep{chen17eLTLWGD} studies only a simple synthetic task. Based on experiments with real user data, we showed that our RL agent is very effective in changing algorithmic parameters at a frame level, enabling existing speech-enhancement algorithms to adapt to changing input signal quality and improve denoising performance. 
However, there are still hurdles to overcome in the design of a reliable reward function that helps us achieve the best algorithmic performance across a diverse range of metrics, including WER, SER and PESQ. We intend to address this challenge in future work, in addition to reducing the overhead of RL computation during training. \section{Experimental Results}\label{sec:con} We evaluated the performance of our methodology with single-channel recordings based on real user queries to the Microsoft Cortana Voice Assistant. We split studio-level clean recordings into training, validation and test sets comprising $7500$, $1500$ and $1500$ queries, respectively. Further, we mixed these clean recordings with noise data (collected from 25 different real-world environments), while accounting for distortions due to room characteristics and distances from the microphone. Thus, we convolved the resulting noisy recordings with specific room-impulse responses, and scaled them to achieve a wide input SNR range of 0-30 dB. Each (clean and noisy) query has on average more than $4500$ audio frames of spectral amplitudes, each lasting $16$ ms. We applied a Hann weighting window to the frames, allowing accurate reconstruction with a $50\%$ overlap. These audio frames in the spectral domain formed the features for our algorithm. Since we utilized a $512$-point short-time Fourier Transform (STFT), each feature vector was a $256$-dimensional vector of positive real numbers. To train our network model, we employed a first-order stochastic gradient-based optimization method, Adam~\citep{KingmaB14}, with a learning rate that was adjusted on the validation set. Furthermore, we used a single-layer LSTM with $196$ hidden units and a number of steps equal to the number of frames. We trained it with a batch size of $1$, since each input file has a different number of frames.
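As a rough illustration of the front end described above, the framing and spectral-amplitude features can be sketched in a few lines of numpy. Treating the frame length as the full $512$-sample FFT size and keeping only the first $256$ amplitude bins are assumptions made here to match the stated feature dimensionality; the paper does not spell these details out.

```python
import numpy as np

def spectral_frames(signal, n_fft=512, n_bins=256):
    """Hann-windowed frames with 50% overlap -> per-frame spectral amplitudes.

    Follows the setup in the text (512-point STFT, Hann window, 50% overlap,
    256-dimensional amplitude features); frame length = n_fft samples is an
    assumption.
    """
    hop = n_fft // 2                       # 50% overlap
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # keep the first n_bins positive-frequency amplitude bins
    return np.abs(np.fft.rfft(frames, axis=1))[:, :n_bins]

feats = spectral_frames(np.random.default_rng(0).standard_normal(16000))
```

Each row of `feats` is one non-negative spectral-amplitude feature vector of the kind fed to the LSTM.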
To compute the reward function, we employed the negated mean-squared error (MSE) between the ground-truth clean signals $g_t$ and denoised input signals $\hat{g}_t$ as follows: \begin{equation} \label{eq:reward} r_t = - \Vert g_t - \hat{g}_t \Vert^2_2. \end{equation} Further, to avoid instability during training, we normalized this reward function to lie in $[-1,1]$. Our underlying speech-enhancement algorithm~\citep{Tashev2009}, i.e., the black box, was based on a generalization of the decision-directed approach, first defined in \citep{Ephraim:84}. To quantify the performance of speech enhancement, we employed the following metrics: \begin{itemize} \setlength{\itemsep}{-1pt} \item Signal-to-noise ratio (SNR) dB \item Log spectral distance (LSD) \item Mean squared error in time domain (MSE) \item Word error rate (WER) \item Sentence error rate (SER) \item Perceptual evaluation of speech quality (PESQ) \end{itemize} \begin{table}[t] \centering \begin{tabular}{cllllll} \toprule Method & {\sc SNR}(dB) & {\sc LSD} &{\sc MSE}& {\sc WER} & {\sc SER} & {\sc PESQ}\\ \hline \\ [-3pt] \textbf{Noisy Data} & $15.18$ & $23.07$ & $0.04399$ & $15.4$ & $25.07$ & $2.26$ \\ [3pt] \textbf{Baseline~\citep{Tashev2009}} & $18.82$ & $22.24$ & $0.03985$ & $14.77$ & $25.93$ & $2.40$ \\ [2pt] \textbf{Proposed (Unbiased Estimation)} & $\mathbf{26.16}$ & $\mathbf{21.48}$ & $\mathbf{0.03749}$ & $17.38$ & $31.87$ & $2.40$ \\ [2pt] \textbf{Proposed (Baselined Estimation)} & \multirow{1}{*}{$\mathbf{26.68}$} & \multirow{1}{*}{$\mathbf{21.12}$} & \multirow{1}{*}{$\mathbf{0.03756}$} & \multirow{1}{*}{$18.97$} & \multirow{1}{*}{$32.73$} & \multirow{1}{*}{$2.38$} \\ [3pt] \textbf{Clean Data} & $57.31$ & $1.01$ & $0.0$ & $2.19$ & $7.4$ & $4.48$ \\ \bottomrule \end{tabular} \vspace*{10pt} \caption{The proposed RL approach improves MSE, LSD and SNR with no algorithmic changes to the baseline speech-enhancement process, except frame-level adjustment of the control parameters.} \label{tb:results} 
\vspace*{-5pt} \end{table} A larger value is desirable for the first and last metrics, while a lower value is better for the rest. To compute the WER and SER, we employed a production-level automatic speech recognition (ASR) algorithm, whose acoustical model was trained separately on a different dataset with statistics similar to our training examples. Thus, the ASR algorithm was not re-trained during our speech-enhancement experiments. Results of evaluating our model on the test data are shown in Table~\ref{tb:results}. In the baseline approach~\citep{Tashev2009}, we utilized a non-linear solver to find the set of algorithmic parameters that achieved the best score for a multi-variable function that equally weighted all of the above metrics. This unconstrained optimization was performed \textit{offline} once across the training data, which resulted in a parameter set that achieved the best trade-off for all metrics and feature vectors across the input SNR range. This parameter set was held constant when the speech-enhancement algorithm~\citep{Tashev2009} was applied to the test audio frames. However, in the proposed approaches (third and fourth rows in the table), the parameter set was adjusted depending on the action proposed by our RL meta-network. The third row corresponds to utilizing LSTM-based RL alone on top of the baseline speech-enhancement algorithm [also known as the unbiased estimator, which utilizes the reward function of Eq.~\eqref{eq:reward}]. In the fourth row, we add a further step of reducing the variance of the gradient estimate by baselining Eq.~\eqref{eq:reward}. From Table~\ref{tb:results}, we see that the proposed models show better performance on MSE, LSD and SNR, improving them by up to $16\%$, $4\%$, and $42\%$, respectively. However, they do not show improvement on the other metrics (WER, SER and PESQ). 
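The distortion-based reward of Eq.~\eqref{eq:reward} can be sketched as follows. The exact normalization scheme that maps the negated MSE into $[-1,1]$ is not given in the text; dividing by a scale factor and clipping is an assumption used here purely for illustration.

```python
import numpy as np

def frame_reward(clean, denoised, scale=1.0):
    """Negated MSE reward r_t = -||g_t - g_hat_t||^2, squashed into [-1, 1].

    The negated squared error follows Eq. (reward) in the text; the
    scale-and-clip normalization is an assumption (the paper only states
    that the reward is normalized to [-1, 1]).
    """
    r = -np.sum((np.asarray(clean) - np.asarray(denoised)) ** 2)
    return float(np.clip(r / scale, -1.0, 1.0))
```

A perfectly denoised frame yields reward $0$, and increasingly distorted outputs are pushed toward $-1$.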
In fact, these results are expected because our RL network only employs a measure of signal distortion as the reward function [see Eq.~\eqref{eq:reward}]. Thus, it is only able to optimize metrics that are related to this measure (\textit{i.e.}, MSE, SNR and LSD). The difficulty of including WER, SER and PESQ in the RL optimization process lies in the fact that these metrics do not provide a direct way of quantifying representation error. There is also no good signal-level proxy for them that can be computed at a low processing cost, which is necessary to train the RL algorithm in practical amounts of time. Thus, although our network already deals with a hard and complex optimization problem due to the black-box optimization and policy gradients, as part of future work, we are continuing to investigate different methods of incorporating WER, SER and PESQ into the RL reward signal. \section{Introduction}\label{sec:intro} Noise-suppression algorithms for a single channel of audio data employ machine-learning or statistical methods based on the amplitude of the short-term Fourier Transform of the input signal \citep{Ephraim:84, ephraim1995signal, boll1979suppression, xu2014experimental}. Although the approach we propose can be applied to the entire gamut of noise-suppression techniques, in this paper we only illustrate its benefits with the classical algorithms for speech enhancement that are based on spectral restoration~\citep{Tashev2009}. Such algorithms typically comprise four components~\citep{Tashev2009}: (1) voice-activity detection, (2) noise-variance estimation, (3) suppression rule, and (4) signal amplification. The first two components help gather statistics on the target speech signal in the input audio, while the third and fourth components allow us to utilize these statistics to distill out the estimated speech signal. 
Despite these components being based on sound mathematical principles~\citep{Tashev2009}, their performance is directly influenced by a sizeable set of parameters, such as those that control the gain, geometry weighting, estimator bias, and voice- and noise-energy thresholds. Consequently, the combined set of these parameters plays a critical role in achieving the best performance of the end-to-end speech-enhancement process. Needless to say, the numerical values of these parameters are heavily influenced by the input signal and noise characteristics. Furthermore, owing to the complex interdependency between the statistical models~\citep{Tashev2009}, there is no known best value for these parameters that works well across all levels of input signal quality. Therefore, any offline optimization process, such as the simplex method~\citep{Nash20020}, is only a \textit{sub-optimal solution} that tends to achieve good performance across only a small range of instantaneous input signal-to-noise ratios (SNRs). This is the status quo that we intend to break. \section{Proposed Approach}\label{sec:model} We develop data-driven techniques that allow us to adjust the control parameters of a classical speech-enhancement algorithm~\citep{Tashev2009} dynamically at a frame level, depending only on simple feedback from the underlying algorithm. Specifically, we rely on reinforcement learning (RL) based on a network of long short-term memory (LSTM) cells \citep{Hochreiter}. We show that our method can achieve the best performance of speech enhancement across a broad range of input SNRs. \begin{figure}[tb] \centering \centerline{\includegraphics[width=0.7\textwidth]{figures/ourModel2}} \caption{Proposed architecture. $g_t$ and $f_t$ are clean and noisy input frames, while $s_t$ and $r_t$ are action and reward values at time step $t$, respectively. 
At each time step, our model utilizes $s_t$ and $f_t$ to find the best set of control parameters that maximize $r_t$ of the speech-enhancement black-box algorithm.} \label{fig:ourmodel} \end{figure} The neural-network model that we propose is shown in Fig.~\ref{fig:ourmodel}. As mentioned before, it treats the classical speech-enhancement algorithm~\citep{Tashev2009} as a black box. Suppose $s_t$ and $r_t$ represent the action proposed by our network model and the reward signal that is available to it from the algorithm at an instance of time $t$. We set up an objective function as follows: \begin{equation} \label{eq:objective_function} J^\pi(\theta) = \mathbb{E}_{\pi_\theta} \bigg[\sum_{t=0}^T r_t(s_t)\bigg] \end{equation} where $\pi_{\theta}$ is the policy that our model tries to learn. The goal of this function is to maximize the expected reward over time. Thus, in order to solve Eq.~\eqref{eq:objective_function}, we employ the REINFORCE algorithm \citep{Williams, ZarembaS15, RanzatoCAZ15, NIPS2014_5542,baVald2015,s2sIlya}. At each time step, our network picks an action at a given state of the model (i.e., given its policy $\pi_{\theta}$) that causes some change to the set of control parameters applied to the black box. In other words, each parameter of the speech-enhancement algorithm~\citep{Tashev2009} is mapped to an action. Thus, an action can increase or decrease the parameter value by a specific step size, or leave it unchanged. Note that within the black-box algorithm, we also utilize the clean signal $g_t$ in addition to the noisy frame at every time step. We do this because, once the underlying speech-enhancement algorithm~\citep{Tashev2009} has finished denoising, it relies on the ground-truth clean signal $g_t$ to compute a score that represents the reward function for the RL agent. $g_t$ is not used within the black box in any other way.
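The REINFORCE update used above can be illustrated with a toy linear-softmax policy over the per-parameter actions just described (increase, decrease, or keep a control parameter). The linear policy, the feature dimension, and the seeded sampler below are stand-ins for the paper's LSTM network, chosen only to make the score-function gradient concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()            # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def reinforce_step(theta, features, reward, lr=0.01):
    """One REINFORCE step for a linear-softmax policy pi(a|s).

    theta    : (n_actions, n_features) policy weights (mutated in place).
    features : state features (a stand-in for the LSTM state in the text).
    Update rule: theta <- theta + lr * r * grad log pi(a|s).
    """
    probs = softmax(theta @ features)
    action = int(rng.choice(len(probs), p=probs))
    # grad of log pi(a|s) for a linear-softmax policy
    grad_log_pi = -np.outer(probs, features)
    grad_log_pi[action] += features
    theta += lr * reward * grad_log_pi
    return action
```

With a positive reward, the sampled action becomes more likely the next time the same state features are seen; baselining the reward (as in the fourth table row of the experiments) only changes the scalar multiplying `grad_log_pi`.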
\section{Introduction} \label{section:Introduction} The increasing trend towards more power generation utilizing renewable energy resources is born from the fact that conventional energy sources like coal, petroleum, and natural gas are on the verge of exhaustion \cite{segmentation_wiley}, \cite{GNN}. Also, the global energy crisis triggered by Russia's invasion of Ukraine has given renewables unrivalled momentum. Supply disruptions and high fossil-fuel prices during the crisis have led countries to strengthen their policies supporting more power generation from renewables. Experts predict that throughout 2022-2027, renewables will account for over 90\% of global electricity expansion and surpass coal \cite{renewablestat}. The cumulative power generation using various renewable sources is shown in Figure \ref{fig:cumulative}, considering past, present and future scenarios \cite{resiliency_cost}. For instance, India is set to double its new installations of solar photovoltaic panels to achieve the target of 500 GW by 2030, and its renewable additions are shown in Figure \ref{fig:India_stat}. \begin{figure} \centering \includegraphics[width=3in]{Figures/Cumulative.jpg}\\ \caption{Cumulative power generating capacity of various sources from 2011-2027} \label{fig:cumulative} \end{figure} \begin{figure} \centering \includegraphics[width=2.8in]{Figures/India_renewable_2022.jpg}\\ \caption{India's renewable capacity additions from 2016-2027} \label{fig:India_stat} \end{figure} From Figure \ref{fig:India_stat}, we can see that renewable additions are majorly contributed by utility-scale solar PV generation. Utility-scale solar photovoltaics refers to a large number of solar modules installed together to establish a power plant \cite{ViT}. Such solar power plants occupy acres of land. 
For example, Bhadla Solar Park in Rajasthan, India, the world's largest solar power plant, was established in 2020; it encompasses nearly 14,000 acres of land and generates 2.25 GW of power. However, after the commissioning of such huge power plants, their monitoring and maintenance become challenging. Monitoring is an important measure taken to increase the power output of solar PV panels, which can be affected by several factors such as shading and wear and tear due to environmental conditions \cite{resiliency}. To enhance the performance of solar PV panels and generate more power, it is desirable to implement smart monitoring that automatically detects faulty cells in the panels without much human intervention, since manual inspection of PV modules at this scale is simply not possible. Usually, these faults in solar PV panels are referred to as hot spots and potential-induced degradation (PID). Hot spots occur when a solar panel is shaded: current cannot flow across the weak cells, so the current concentration increases in other cells, causing them to overheat and suffer mechanical damage \cite{segmentation_IEEE}. PID occurs because of humidity, heat or voltage variations in the cell. Solar panels are designed with a life span of 25 years, but these faults can sharply decrease their performance and efficiency. Thus, it is advisable to have a monitoring feature in large solar power plants to make their performance more reliable and durable. The monitoring of solar PV panels is implemented manually using visual inspection or analysis of current-voltage characteristics \cite{IV_insepection}; another effective approach is to capture infrared (IR) thermographic images of solar panels and identify the faults \cite{segmentation_springer}. The use of IR thermographic images is a reliable technique for the identification of faults in solar PV panels. 
Capturing thermal images manually at ground level is a time-consuming process, as observed in \cite{8435954}: for a 3 MW solar power plant, analysis using ground images took 34 days of inspection time, while capturing aerial thermal images reduced the analysis time to approximately 3 hours. Thus, it is advantageous to prefer aerial thermal images for the analysis \cite{hotspot_define}. Various methods have been proposed to identify faults in solar PV panels using aerial thermal images. For instance, the Canny edge image segmentation technique is used to identify the region of interest (ROI), i.e., the affected region in solar panels \cite{canny}. Robust Principal Component Analysis (RPCA) is used to separate the sparse corrupted anomalous components from a low-rank background \cite{pca}. Recent review works detail the proposed methods and their inadequacy to address the problem \cite{review}, \cite{review2}, \cite{review3}, \cite{review4}; they clearly state the issues in the current state of the art, namely the complexity and multi-layered nature of the processing. Recently, many researchers have proposed frameworks using deep-learning algorithms in which the infrared thermal imagery dataset is first labelled into faulty and healthy PV modules for autonomous PV module monitoring. They then train a model and identify faults using classification techniques. For instance, a binary classifier combined with a multiclass classifier is used to detect a fault and its type \cite{binary}. Convolutional neural networks and decision tree algorithms are used for the detection of external faults such as delamination, burn marks, glass breakage, discolouration, and snail trails on solar panels \cite{decision}. There has been a lot of work done in the same context \cite{ensemble}, \cite{HAIDARI2022102110}, \cite{cnn1}, \cite{NAVEENVENKATESH2022110786}, \cite{automatic}. 
All the works mentioned above require training labels or ground-truth labels for training the proposed models; in a practical scenario, it is not always possible to have ground-truth images of solar PV panels. Thus, these classification techniques are not adequate for real-time automatic monitoring of large-scale solar power plants. Another technique, image segmentation, is used to detect hot spots in solar panels. The Otsu thresholding algorithm and its modified versions have been used to segment the faulty regions in solar panels, but their segmentation accuracy is low; thus, they are also unreliable \cite{otsu}, \cite{otsu1}. We propose an unsupervised method that does not require prior training labels or ground truth. The unmanned aerial infrared thermographic images of solar power modules in power plants are captured and directly fed as input. Then, with the help of convolutional neural network layers, features of the images are extracted and clustered to segment the objects based on these features \cite{main}. This unsupervised learning optimizes the clusters by backpropagation using iterative stochastic gradient descent. With the help of the loss function, we also reduce the noise arising during the segmentation process. Finally, we obtain segmented RGB images from which we can identify the hot spots, PID and snail trails in the solar PV modules. Further, these RGB images are converted to greyscale to enhance the features of the image so that faulty cells, normal cells and background are easily differentiable. The unsupervised feature-clustering segmentation algorithm was previously found effective for bone age assessment to diagnose growth disorders using x-ray images \cite{xray}, for the extraction process of cage aquaculture \cite{aqua}, and for segmentation of rock and coal images in the mining industry \cite{coal}. 
The key contributions of this work are as follows: \begin{itemize} \item A novel method is proposed for the identification of internal faults such as hot spots, snail trails and potential-induced degradation in solar PV panels. \item The input infrared thermal images are captured using unmanned aerial vehicles and fed to the unsupervised deep learning algorithm, which performs segmentation based on feature clustering. \item It does not require any prior training labels or ground-truth labels. Thus, the proposed algorithm is suitable for integration into any large-scale solar power plant. \end{itemize} The paper is organized as follows: a description of the dataset is provided in Section \ref{section:Datasets}, and a detailed explanation of the methodology of the unsupervised learning algorithm for segmentation is presented in Section \ref{section:Feature}. Section \ref{section:Result} provides the experimental results on a real dataset of solar PV panels. Finally, Section \ref{section:Conclusion} concludes the paper. \section{Description of Dataset} \label{section:Datasets} Here, we use the publicly accessible dataset from \cite{Dataset}, which comprises infrared thermal images of solar photovoltaic panels. The infrared thermographic surveillance technique is found to be effective for defect identification and analysis. Generally, objects above absolute zero temperature radiate thermal energy in the infrared range; as the temperature increases, the radiated thermal energy becomes more intense. Thus, these thermographic images allow the analysis of temperature variations. Thermal cameras, which capture radiation in the infrared range of the electromagnetic spectrum, are used for capturing infrared thermographic images \cite{hotspot1}. Zones with higher temperatures can be captured easily, but the human eye is not capable of identifying these zones directly. Thus, image segmentation techniques are found suitable. 
For solar photovoltaic panels, infrared thermographic images can be captured using unmanned aerial vehicles (UAVs), which travel over the photovoltaic panels with a proper camera angle setting. \begin{figure} \centering \includegraphics[width=2in]{Figures/Thermal_Image.jpg}\\ \caption{Grey-scale image pre-processed from a thermal image of solar PV panel} \label{fig:thermal} \end{figure} The considered dataset was captured by a Zenmuse-XT camera with a spectral range of 7.5 - 13 $\mu m$ and thermal sensitivity $< 50\ mK$, providing images of size $336 \times\ 256$ in JPG format. This camera glides over the monocrystalline solar panels and captures thermal images. Further, to enhance the features of the thermal images, they are pre-processed and converted into grey-scale images as shown in Figure \ref{fig:thermal}. Figure \ref{fig:histogram} shows the histogram distribution of the grey-scale image, i.e., how many times each intensity value occurs in the image. The human eye is not capable of identifying faults arising in solar PV panels, such as hot spots or snail trails, in the images of a photovoltaic panel. To solve this issue, we propose a deep learning-based segmentation algorithm to identify the hot spots in solar photovoltaic panels. These image datasets do not have any prior training labels available; thus, an unsupervised learning algorithm is best suited to identify the faults. The proposed model is also feasible in the real world, as it does not require any prior ground-truth labels. \begin{figure} \centering \includegraphics[width=2.2in]{Figures/histogram_1.jpg}\\ \caption{Histogram distribution of the grey-scale image of solar PV panel} \label{fig:histogram} \end{figure} \section{Methods and Materials} \label{section:Feature} Since the IR thermal images of solar PV panels are of diversified resolutions, it is required to identify the hot-spot and snail-trail regions while suppressing the noise. 
We exploit an unsupervised deep learning algorithm, inspired by the novel and effective unsupervised image-segmentation algorithm proposed in \cite{main}. Implementation of supervised learning algorithms involves pixel-level labelling, ground-truth images and original images. On the other hand, unsupervised algorithms require no prior labelling, training images or ground-truth images; they extract the features of an image and implement a feature-learning process to allocate the labels. Thus, we can easily identify the defects in the solar PV panels by segmenting images based on differentiable feature clustering. Here, we use a Convolutional Neural Network (CNN) to extract the pixel-level features and identify the hot spots and snail trails in the input IR image of solar PV panels. \begin{figure*}[] \centering \includegraphics[width=6in]{Figures/Method.jpg} \caption{Architecture of the unsupervised image segmentation model for PV panels} \label{fig:Method} \end{figure*} The flow of the proposed algorithm, shown in Figure \ref{fig:Method}, includes CNN layers to extract the high-level features, a batch normalization layer to rescale and make the algorithm faster and more stable, an argmax layer for pseudo-labelling the extracted features, and backpropagation of the loss function to evaluate the performance of the algorithm. \subsection{Feature Extraction using CNN} An image \{$I_{PV}\in \mathbb{R}^X$\} of dimension $L \times W$, where $k_n \in \mathbb{R}^X $ is the pixel value normalized to [0,1], is fed as input to the initial layer of the algorithm. A $p$-dimensional feature vector \{$m_n \in \mathbb{R}^p$\} is computed from $k_{n} $ by passing it through $M$ channels of two-dimensional convolutional layers of kernel size $3 \times\ 3$ with the ReLU activation function, followed by a batch normalization layer, to attain the $N$ pixels of an input image. 
Subsequently, the feature vector \{$m_n$\} is fed as input to $q$ channels of a 1D convolutional kernel of size $1 \times\ 1$ and then passed through a batch normalization layer. Finally, with a linear classifier, a response vector \{$rv_n = W_l m_n $\} is obtained, where $W_l\in \mathbb{R}^{q \times\ p}$; this vector is further normalized to $rv^\prime_{n}$ such that it has zero mean and unit variance \cite{main}. Then, the argmax layer is applied to label the clusters $c_n$ for each pixel by selecting the dimension in which $rv^\prime_{n}$ attains its maximum value. Simply put, the cluster label given to the pixel corresponds to the maximum component of $rv^\prime_{n}$, which amounts to clustering the feature vectors into $q$ clusters. The $t$-th cluster of the final response $rv^\prime_{n}$ is given as: \begin{equation} C_t = \left\{ rv^\prime_{n} \in \mathbb{R}^q \;\middle|\; rv^\prime_{n,s} \leq rv^\prime_{n,t}, \; \forall s \right\} \end{equation} where ${rv_{n,t}^{'}}$ represents the $t$-th element of ${rv_{n}^{'}}$. This process is equivalent to assigning each pixel to the closest of $q$ representative points placed at infinite distance along the corresponding axes in the $q$-dimensional space. \subsection{Number of Clusters} In unsupervised image segmentation, the number of distinct cluster labels depends on the input image: an image with a multitude of objects will have a high number of clusters, and vice versa. Initially, the number of clusters $q$ for training the model is kept high. Then, through the effective use of feature-similarity and spatial-continuity constraints, similar and neighbouring pixels are merged, eventually ending up with a smaller number of clusters. 
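The normalization and argmax assignment just described can be sketched in numpy as follows; the small `1e-8` stabilizer added to the standard deviation is an assumption for numerical safety, not something stated in the text.

```python
import numpy as np

def pseudo_labels(rv):
    """Assign cluster labels by the argmax rule described in the text.

    rv : (N, q) array of per-pixel responses from the 1x1 conv classifier.
    The responses are first normalized to zero mean and unit variance per
    channel; each pixel then receives the label of its largest component.
    """
    rv_norm = (rv - rv.mean(axis=0)) / (rv.std(axis=0) + 1e-8)
    return rv_norm.argmax(axis=1)
```

Pixels whose responses peak in the same channel end up in the same cluster, which is what lets similar regions of the panel image merge into one label.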
Here, we consider 18 and 4 as the maximum and minimum numbers of clusters, respectively, for segmenting the image into hot spots, snail trails, background, and normal regions of solar PV panels. \subsection{Loss Function} The loss function is used as a constraint for improving the feature similarity and spatial continuity between the pseudo labels assigned to image pixels, and is given as: \begin{equation} L=L_{fs}+\alpha L_{sc}= L_{fs}\{{rv^\prime_{n}},c_n\}+ \alpha L_{sc}\{{rv^\prime_{n}}\} \end{equation} where $L_{fs}$ is the feature-similarity loss, $L_{sc}$ is the spatial-continuity loss, and $\alpha$ is the weight balancing the two loss terms. As discussed in the previous section, the argmax function assigns pseudo labels to the image's pixels according to their features. After the assignment of pseudo labels, the result is passed through this loss function. \subsubsection{Feature Similarity Constraint} Implementing this constraint helps stabilize the clusters in the image by enhancing the similarity of similar features. Image pixels that have similar features should be within one cluster, and different clusters should have distinct features. The network weights are adjusted to minimize the similarity loss function so as to extract the features important for clustering. The feature similarity is computed using the cross-entropy loss between $rv^\prime_{n}$ and $c_n$ as: \begin{equation} L_{fs} (rv^\prime_{n},c_n)= \sum_{n=1}^{N} \sum_{z=1}^{q} -\delta (z-c_n) \; \ln(rv^\prime_{n,z}) \end{equation} where $\delta(z)=\left\{ \begin{aligned} & 1 \qquad \textup{if} \quad z=0 \\ & 0 \qquad \textup{if} \quad z\neq 0 \\ \end{aligned}\right.$ \subsubsection{Spatial Continuity Constraints} It is preferred to have spatial continuity among the clusters of the image's pixels, as it helps suppress the excess number of labels created by complicated structures and patterns in the image. 
The spatial continuity constraint is computed by taking the L1-norm of the vertical and horizontal variations of the response map $rv^\prime_{n} $, implemented by a differential operator. Mathematically, it is defined as: \begin{equation} L_{sc} ({rv^\prime_{n}})= \sum_{\beta=1}^{W-1} \sum_{\gamma=1}^{L-1} ||{rv_{\beta+1,\gamma}^{'}} - {rv_{\beta,\gamma}^{'}} ||_1 + ||{rv_{\beta,\gamma+1}^{'}} - {rv_{\beta,\gamma}^{'}} ||_1 \end{equation} where $L$ and $W$ are the length and width of an input image, and ${rv_{(\beta,\gamma)}^{'}}$ is the pixel value at $(\beta,\gamma)$ of the response map $rv^\prime_{n}$. \subsubsection{Training Mechanism by Backpropagation} This section details the approach for training the network in unsupervised image segmentation. After feeding in the input image, cluster labels are first predicted with the model parameters held fixed; the model is then trained on the predicted labels by updating those parameters. The prediction of cluster labels is a forward process, while the training of the model is a backward process based on stochastic gradient descent with momentum. The momentum term helps accelerate the gradient vectors in the right direction, leading to faster convergence. We compute the loss and backpropagate it to update the parameters. This forward and backward process is repeated for $E$ iterations to achieve the final segmentation of the image into clusters.\\ \begin{figure} \includegraphics[width=3in]{Figures/Flow.jpg}\\ \caption{Process flow diagram for the proposed algorithm} \label{fig:Flow} \end{figure} The identification of similarity between the features of different pixels is the first criterion that must be addressed for the allocation of labels to pixels. As discussed above, we first feed the infrared thermal image of solar PV panels, of size 336×256, to the CNN modules for feature extraction. 
These CNN modules comprise a convolutional layer, a ReLU layer, and a BN layer, connected end to end. We consider three CNN modules: the first two are equipped with two-dimensional convolutional layers of kernel size 3×3, and the last has a one-dimensional convolutional layer of kernel size 1×1. The output is then passed through the argmax layer to obtain pseudo labels. Further, the network is trained by computing the loss function and applying backpropagation to enhance the cluster segmentation of the image. All the parameters used for the segmentation of PV panel images are tabulated in Table \ref{tab:hyperparameters}. The value of the weight-balancing constant in the loss function was chosen based on the loss variation with iterations; the minimum loss was found when it was set to 5, as discussed in detail in Section \ref{section:Result}. The other parameters were chosen in line with the standard setting, for computational ease, as in \cite{aqua}. \begin{table} \centering \caption{Hyperparameters of unsupervised segmentation algorithm} \begin{tabular}{cc} \toprule Hyperparameters & Values \\ \midrule Size of IR thermal image of solar panel & 336×256 \\ Stochastic gradient descent momentum & 0.9 \\ Learning rate & 0.1 \\ Number of iterations & 200 \\ Weight balancing constant in loss function & 5\\ \bottomrule \end{tabular} \label{tab:hyperparameters} \end{table} \begin{figure}[h] \centering \includegraphics[width=1.5in]{Figures/Forward.jpg}\\ \caption{Initial pseudo cluster labels allotted to input thermal image of solar PV panel} \label{fig:forward} \end{figure} The image obtained from the proposed algorithm is in RGB colour. Thus, to identify and enhance its features, the RGB images are converted to greyscale images for detecting the faults in the solar PV panels. The process flow of images is illustrated in Figure \ref{fig:Flow}. 
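For concreteness, the combined loss $L = L_{fs} + \alpha L_{sc}$ described above can be sketched in numpy as follows. Applying a softmax to the response map before the cross-entropy (so that its entries are probabilities) and the `1e-8` stabilizer are assumptions; the text does not state these implementation details.

```python
import numpy as np

def segmentation_loss(rv, labels, alpha=5.0):
    """Combined loss L = L_fs + alpha * L_sc from the text.

    rv     : (H, W, q) response map, assumed already softmaxed per pixel.
    labels : (H, W) integer pseudo labels from the argmax step.
    alpha  : weight-balancing constant (5 in the reported experiments).
    """
    H, Wd, q = rv.shape
    # feature-similarity term: cross-entropy between responses and labels,
    # i.e. -sum over pixels of log rv[pixel, assigned label]
    l_fs = -np.log(rv[np.arange(H)[:, None], np.arange(Wd)[None, :], labels]
                   + 1e-8).sum()
    # spatial-continuity term: L1 norm of vertical and horizontal
    # neighbouring-pixel differences of the response map
    l_sc = (np.abs(rv[1:, :, :] - rv[:-1, :, :]).sum()
            + np.abs(rv[:, 1:, :] - rv[:, :-1, :]).sum())
    return l_fs + alpha * l_sc
```

Minimizing `l_fs` sharpens each pixel's response toward its assigned cluster, while `l_sc` penalizes label changes between neighbouring pixels, which is what suppresses the segmentation noise discussed in the results.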
\begin{algorithm} \caption{Unsupervised segmentation of solar PV panels}\label{alg:alg1} \begin{algorithmic} \STATE \STATE {\textsc{Input:}} ${I_{PV}\in \mathbb{R}^X}$ with dimension $L\times W$ \STATE {\textsc{Output:}} Hot spots in solar PV panels \STATE {\textsc{Initialize:}} $E \leftarrow $ The number of iterations \STATE \hspace{0.5cm} Feature vector $\{m_n\} \leftarrow$ 2D convolutional $\{k_n\}$ \STATE \hspace{0.5cm} Response vector $\{rv_n\} \leftarrow$ 1D convolutional $\{m_n\}$ \STATE \hspace{0.5cm} $\{rv^\prime_{n}\} \leftarrow Norm \{rv_n\}$ \STATE \hspace{0.5cm} $\{c_n\} \leftarrow Argmax \{{rv_{n,t}^{'}}\}$ \STATE \hspace{0.5cm} Compute $L$ using equation (2) \STATE \hspace{0.5cm} 2D convolutional layers,\\ \hspace{0.5cm} 1D convolutional layer $\leftarrow$ Update ${L}$ \STATE \hspace{0.5cm} Segmented RGB image \STATE \textsc{Return:} Segmented greyscale image \end{algorithmic} \label{alg1} \end{algorithm} \section{Results and Discussion} \label{section:Result} In this section, we discuss how the proposed model performs on the datasets detailed in section \ref{section:Datasets}. The available images of solar PV panels in the dataset are infrared thermal images converted into greyscale images. These greyscale images serve as the input to the proposed model. For the analysis, we used six images from the dataset and fed them independently to the proposed unsupervised image segmentation algorithm. The methodology of the proposed algorithm was discussed in detail in section \ref{section:Feature}.
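The parameter update inside the $E$-iteration loop of Algorithm \ref{alg:alg1} uses stochastic gradient descent with momentum, with the learning rate (0.1) and momentum (0.9) of Table \ref{tab:hyperparameters}. A minimal sketch of one such update, under the assumption that parameters and gradients are given as flat arrays:

```python
import numpy as np

def sgd_momentum_step(params, grads, velocity, lr=0.1, momentum=0.9):
    """One SGD-with-momentum update: the velocity accumulates past
    gradients, accelerating movement along consistent directions."""
    new_params, new_velocity = [], []
    for p, g, v in zip(params, grads, velocity):
        v_next = momentum * v - lr * g   # momentum term plus current gradient step
        new_params.append(p + v_next)
        new_velocity.append(v_next)
    return new_params, new_velocity
```

When successive gradients point the same way, the velocity grows and convergence speeds up, which is the behaviour the training loop relies on.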
\begin{figure}[h] \centering \includegraphics[width=3.4in]{Figures/Epochs.jpg} \caption{Implementation of backward propagation using computational loss for enhancing the cluster formation (up to 200 iterations)} \label{fig:epochs} \end{figure} \begin{figure}[h] \centering \includegraphics[width=3in]{Figures/Curve.jpg}\\ \caption{Loss v/s iteration curve for various values of $\alpha$} \label{fig:alpha} \end{figure} \begin{figure*} [h] \centering \includegraphics[width=6in]{Figures/Output_1.jpg}\\ \caption{Segmentation of input greyscale thermal images of solar PV panel} \label{fig:Main_result} \end{figure*} Initially, when the greyscale thermal image of the solar PV panel is fed as input, the feature vector is computed by the CNN layers and pseudo labels are allotted to the image by the argmax function, as shown in Figure \ref{fig:forward}. These initial labels are obtained by forward propagation and require further optimisation to become the actual cluster labels. Through backward propagation, the pixel labels and their features are then optimised by iterative stochastic gradient descent. The similarity loss and spatial continuity loss are computed so that pixels with similar features receive the same label and spatial continuity reduces the noise in the segmentation process, as shown in Figure \ref{fig:epochs}. We ran the model for 200 iterations to settle the noise in the clustered pixels and obtain a properly segmented image as output. In addition, the value of the weight balancing constant $\alpha$ in the loss function of equation (2) needs to be fixed. We therefore ran the model for $\alpha=1$, $\alpha=5$ and $\alpha=10$ and plotted the variation of the loss function with the number of iterations, as shown in Figure \ref{fig:alpha}. The loss starts at much higher values for $\alpha=1$ and $\alpha=10$ than for $\alpha=5$.
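The role of $\alpha$ in the sweep above can be made concrete with a sketch of the weighted objective. This is a hypothetical reconstruction, not the exact equation (2): it takes the feature-similarity term as the cross-entropy of the response map against its own argmax pseudo labels, plus the spatial continuity term weighted by $\alpha$:

```python
import numpy as np

def total_loss(rv: np.ndarray, alpha: float = 5.0) -> float:
    """Sketch of L = L_sim + alpha * L_sc for a response map rv
    of shape (C, H, W); alpha is the weight balancing constant."""
    C, H, W = rv.shape
    labels = rv.argmax(axis=0)                     # pseudo labels, shape (H, W)
    # Feature similarity: softmax over channels, then negative
    # log-likelihood of each pixel's own pseudo label.
    e = np.exp(rv - rv.max(axis=0, keepdims=True))
    p = e / e.sum(axis=0, keepdims=True)
    l_sim = -np.log(p[labels, np.arange(H)[:, None], np.arange(W)] + 1e-12).mean()
    # Spatial continuity: L1 norm of vertical + horizontal differences.
    l_sc = (np.abs(rv[:, 1:, :] - rv[:, :-1, :]).sum()
            + np.abs(rv[:, :, 1:] - rv[:, :, :-1]).sum())
    return float(l_sim + alpha * l_sc)
```

A larger $\alpha$ penalises spatial variation more strongly, which is why the initial loss values in Figure \ref{fig:alpha} differ across the three settings.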
At iteration $200$, the value of the loss function is 0.508 for $\alpha=1$, 0.202 for $\alpha=5$, and 0.417 for $\alpha=10$. Since the loss is lowest for $\alpha=5$, we fixed $\alpha=5$ for the analysis. After implementing the model, we obtain the final segmented solar PV panels through feature clustering. These segmented images carry cluster labels differentiating the background, the solar PV panel and the defects in them. From Figure \ref{fig:Main_result}, we can see that six greyscale thermal images of solar PV panels are fed as input to the model and segmented RGB-coloured images are obtained as output. Different colours (cluster labels) are allotted to the objects in the image so that they are distinguishable. Further, to make the analysis more reliable and implementable in the real world, we converted the segmented RGB colour image to a segmented greyscale image. This makes it simpler to detect the faults in solar PV panels. In the segmented greyscale images of Figure \ref{fig:Main_result}, the dark spots are the faulty cells of the solar panels, which are easily identifiable to the human eye. We can therefore say that the proposed framework is suitable for monitoring solar power plants: it requires no prior training labels or ground-truth labels, and the segmentation of the captured thermal images of solar PV panels and the identification of defects are achieved in very little computational time. \section{Conclusion} \label{section:Conclusion} In this paper, we proposed a novel unsupervised image segmentation algorithm based on a convolutional neural network for segmenting faulty cells in solar photovoltaic panels. Infrared thermographic images of the solar panels are captured at solar power plants using a thermal camera. These images are pre-processed into greyscale images to extract the important features.
Then the greyscale images are passed to the CNN layers, which extract the most important features from the input greyscale image of the solar PV panels, and an argmax layer then performs the differentiable assignment for feature clustering. The CNN layers effectively assign cluster labels to the pixels of the input image. Further, to achieve better clustering of similar features, backpropagation of the proposed loss function (feature similarity loss and spatial continuity loss) was applied to the normalized response of the convolutional layers. As a result, the method became effective at distinguishing faulty cells from normal cells in solar PV panels. To make fault identification even easier, we converted the segmented RGB image to a greyscale image, in which the dark spots represent the faulty cells of the panel. Altogether, the proposed algorithm eases the segmentation of thermal images of solar PV panels, so that defects such as hot spots and snail trails can be identified. The presented process and the analysed results clearly show the effectiveness of the proposed algorithm. It can readily be deployed in the real world for the monitoring and maintenance of large solar power plants, with less manpower required and at low cost. \bibliographystyle{IEEEtran}